Composition stays, motion appears
The reference image anchors the look. Your prompt controls how the camera moves, how the subject animates, and how the atmosphere evolves over the clip.
Upload a still image, describe the motion you want, and let the model bring the frame to life. Camera moves, character animation, and atmospheric motion all become prompt-driven.
These are standalone tools. You can use them directly, or send a prompt here from one of the prompt-building tools above.
If you want a video that matches the style of an existing image, run Image-to-Prompt first, then use the structured prompt as the motion brief here.
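For example (the wording here is illustrative, not actual tool output): if Image-to-Prompt describes the still as "moody portrait, warm rim light, shallow depth of field", you might keep that description for the look and append motion lines such as "slow push-in, subject turns toward camera, dust motes drifting through the light" to form the brief.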
Switch between Runway, Kling, and other image-to-video models depending on the kind of motion you need — slow camera glide, character animation, or fast cinematic action.
Every video is saved with the source image, prompt, model, and credit cost — making it easy to compare attempts or hand the best one off to a client.
Drag and drop an image, click to browse, or paste a URL; JPEG, PNG, and WebP are all supported.
Be specific about camera moves and what should animate. "Slow dolly-in, hair drifts in the wind" beats "make it move."
Pick a model, run it, and iterate on the motion brief if the first take misses.
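As an illustration of the kind of brief that tends to work well, a single prompt covering camera, subject, and atmosphere might read: "Slow dolly-in from a low angle; the subject's hair and coat ripple in a light breeze; fog drifts across the background as the light warms." Giving each element its own clause usually produces clearer, more controllable motion than a single vague instruction.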
Image to Video is an AI tool that turns a still image into a short cinematic clip. The reference image controls the look; your prompt controls the motion, camera, and atmosphere.
Text to Video generates a clip from a prompt only. Image to Video starts from your uploaded reference and animates it — so the look stays consistent with the original frame.
The model picker lists Runway, Kling, and any other image-to-video models an admin has approved for the platform.
Clip length depends on the model — usually a few seconds. The exact length is shown in the workbench before you confirm the run.