First & Last Frame Control
Set your opening and closing frames, then let Wan 2.7 generate the motion and transitions in between. No more guesswork. You get predictable results with full command over composition and story endpoints.
Direct your vision from start to finish. Generate 1080p content with first-and-last frame control, 9-grid multi-image composition, video-to-video editing, and native audio. Wan 2.7 gives you the power to direct, not just generate.
Wan 2.7 combines generation and editing in one platform. First-and-last frame control, multi-image composition, instruction-based editing, and native audio give you studio-grade creative freedom without the learning curve.
Upload or describe your opening and closing frames, and Wan 2.7 fills in the motion and transitions between them. You get predictable results with full control over composition and story endpoints.
Upload up to 9 reference images in a 3×3 grid. Wan 2.7 analyzes multiple angles and perspectives at once, producing consistent results with less visual drift between frames.
Transform existing footage using text instructions. Change backgrounds, adjust lighting, or recolor objects while keeping the original motion intact. Edit specific details without starting over.
Tell Wan 2.7 what to change in plain English. "Change the jacket from red to navy" or "replace the background with a rain-soaked street." Iterate on specific elements using natural language.
Keep characters looking and sounding the same across clips. Upload reference images and audio samples. Wan 2.7 binds them across up to 5 inputs without requiring LoRA training or manual alignment.
Generate visuals and audio at the same time. Built-in voice, sound effects, and ambient audio come together automatically. Characters speak with synchronized lip movements. No separate audio production needed.
Export broadcast-ready 1080p clips up to 15 seconds at 24fps. Better skin textures, fabric movement, and lighting. Supports landscape (16:9), portrait (9:16), square (1:1), and custom aspect ratios.
Create without boundaries. Wan 2.7 has no face filters, no content censorship, and no creative limits. Use it for storytelling, animation, marketing, or experimental art.
Create AI videos in three simple steps
Watch demonstrations of first-and-last frame control, 9-grid multi-image composition, video-to-video editing, and native audio generation with Wan 2.7.
Whether you're a storyteller, marketer, animator, or editor, Wan 2.7 gives you studio-grade creative power without the steep learning curve.
Direct narrative arcs using first-and-last frame control. Create smooth transitions, character transformations, and multi-scene sequences with predictable endpoints. Perfect for storyboard visualization and concept animation.
Transform existing footage with instruction-based editing. Change backgrounds, adjust lighting, or recolor elements while keeping the original motion. Iterate quickly without starting over.
Generate vertical TikToks, Shorts, and Reels with talking characters and native audio. Upload reference images for consistent character branding across your content library. Export ready-to-publish clips in seconds.
Produce talking avatar ads, product demos, and brand content with exact messaging control. Use 9-grid multi-image composition to maintain brand consistency. Scale output without scaling costs.
Turn product photos into dynamic lifestyle clips. Use first-and-last frame control for product transformations, unboxing sequences, or before-and-after reveals with sound effects that drive conversion.
Prototype cutscenes, trailers, and character-driven narratives. Test dialogue sequences with lip sync before investing in full production. Visualize game scenarios and app user journeys with consistent character design.
Push creative boundaries with zero content restrictions. Create surreal transitions, morphing sequences, and abstract motion art. First-and-last frame control turns conceptual ideas into visual reality.
Generate instructional content with talking presenters, animated diagrams, and multi-shot explanations. Keep instructor appearance and voice consistent across course libraries using subject and voice referencing.
Wan AI 2.7 is Alibaba's latest AI video generation model developed by the Tongyi Wanxiang team. Released in March 2026, it combines generation and editing in one platform. Features include first-and-last frame control, 9-grid multi-image composition, video-to-video editing, instruction-based editing, and native audio generation. It produces 1080p output up to 15 seconds.
First-and-last frame control (FLF2V) lets you set where your clip starts and ends by uploading two frames or describing them. Wan 2.7 generates the motion, transitions, and scene progression in between, giving you predictable results with full command over composition, narrative arc, and endpoints. Perfect for transition sequences, character transformations, and story-driven content.
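For developers scripting this workflow, an FLF2V job boils down to two endpoint frames plus a motion prompt. The sketch below assembles such a request; every field name (`mode`, `first_frame`, `duration_s`, and so on) is hypothetical and illustrative, not Wan 2.7's documented API, and only the 15-second/24fps limits come from this page.

```python
# Sketch of a first-and-last-frame (FLF2V) request payload.
# All field names are hypothetical, not Wan 2.7's documented API.

def build_flf2v_request(first_frame: str, last_frame: str, prompt: str,
                        duration_s: int = 10, resolution: str = "1080p") -> dict:
    """Assemble a generation request from two endpoint frames."""
    if not 1 <= duration_s <= 15:          # clips cap at 15 seconds
        raise ValueError("duration_s must be between 1 and 15 seconds")
    return {
        "mode": "flf2v",
        "first_frame": first_frame,        # path or URL of the opening frame
        "last_frame": last_frame,          # path or URL of the closing frame
        "prompt": prompt,                  # describes the motion in between
        "duration_s": duration_s,
        "resolution": resolution,
        "fps": 24,
    }

req = build_flf2v_request("storyboard_open.png", "storyboard_close.png",
                          "slow dolly-in as dusk settles over the street")
```

The model interpolates everything between the two frames, so the prompt only needs to describe the motion, not the composition of either endpoint.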
You can upload up to 9 reference images in a 3×3 grid layout. Wan 2.7 analyzes them simultaneously to understand your subject from multiple angles. This produces better character consistency, less visual drift, and more accurate composition than single-image reference. Ideal for maintaining brand identity or character design across clips.
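Conceptually, the 9-grid is a row-major 3×3 layout. This minimal sketch shows one way to map a list of reference images onto grid cells; the slot-numbering convention is an assumption for illustration, not Wan 2.7's actual layout spec.

```python
# Sketch: map up to 9 reference images onto 3x3 grid slots, row-major.
# The (row, col) addressing scheme is illustrative, not Wan 2.7's spec.

def grid_layout(images: list[str]) -> dict[tuple[int, int], str]:
    """Return {(row, col): image} for up to 9 references in a 3x3 grid."""
    if not 1 <= len(images) <= 9:
        raise ValueError("the grid holds 1 to 9 reference images")
    return {(i // 3, i % 3): img for i, img in enumerate(images)}

refs = ["front.png", "left.png", "right.png", "back.png", "detail.png"]
layout = grid_layout(refs)
# front.png lands at (0, 0); detail.png at (1, 1), the grid center.
```

Filling the grid with varied angles of the same subject is what gives the model multiple perspectives to reconcile, which is where the reduced visual drift comes from.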
Wan 2.7 goes beyond basic text-to-video generation. Unlike competitors, it offers first-and-last frame control for predictable endpoints, instruction-based editing to modify existing clips without starting over, 9-grid multi-image composition for consistent characters, and zero content restrictions. It combines generation and editing in one platform.
Yes. Wan 2.7 includes video-to-video editing with instruction-based controls. Upload existing footage and describe changes in natural language: "change the background to a rain-soaked street," "recolor the jacket from red to navy," "adjust lighting to golden hour." The model modifies specific elements while keeping the original motion intact.
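An instruction-based edit is just source footage plus a list of plain-English changes. The sketch below bundles such a request using the example instructions above; the field names (`mode`, `source`, `instructions`) are hypothetical, not Wan 2.7's documented API.

```python
# Sketch of an instruction-based video-to-video edit request.
# Field names are hypothetical, not Wan 2.7's documented API.

def build_edit_request(source_video: str, instructions: list[str]) -> dict:
    """Bundle natural-language edit instructions for an existing clip."""
    cleaned = [s.strip() for s in instructions if s.strip()]
    if not cleaned:
        raise ValueError("provide at least one edit instruction")
    return {
        "mode": "v2v-edit",
        "source": source_video,       # footage whose motion is preserved
        "instructions": cleaned,      # element-level changes to apply
    }

req = build_edit_request("take_03.mp4", [
    "change the background to a rain-soaked street",
    "recolor the jacket from red to navy",
])
```

Because the original motion is kept, each iteration only re-specifies what changed, which is what makes rapid back-and-forth refinement practical.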
Yes. With paid plans, you own full commercial rights to the videos you generate. Use them for social media monetization, advertising campaigns, client projects, marketing content, educational materials, and commercial productions.
Wan 2.7 exports universally compatible MP4 files in 1080p resolution at 24fps. Supported aspect ratios include 16:9 (landscape), 9:16 (vertical for TikTok/Reels), 1:1 (square), 4:3, and 3:4. All videos include native audio synchronized with visual content.
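In practice, each aspect ratio implies a concrete frame size. The mapping below uses common 1080p conventions; the exact pixel dimensions are an assumption for illustration, not Wan 2.7's published export spec.

```python
# Illustrative 1080p frame sizes for the listed aspect ratios. These pixel
# dimensions follow common 1080p conventions and are assumptions, not
# Wan 2.7's published export spec.

RATIO_TO_SIZE = {
    "16:9": (1920, 1080),   # landscape
    "9:16": (1080, 1920),   # vertical (TikTok/Reels)
    "1:1":  (1080, 1080),   # square
    "4:3":  (1440, 1080),
    "3:4":  (1080, 1440),
}

def frame_size(ratio: str) -> tuple[int, int]:
    """Return (width, height) in pixels for a supported aspect ratio."""
    try:
        return RATIO_TO_SIZE[ratio]
    except KeyError:
        raise ValueError(f"unsupported aspect ratio: {ratio}") from None

w, h = frame_size("9:16")   # vertical clip: 1080 wide, 1920 tall
```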
Use subject and voice referencing. Upload reference images and audio samples. Wan 2.7 binds them across up to 5 inputs and keeps visual and vocal identity consistent throughout. No LoRA training or manual character rigging needed. Just provide references and the model handles consistency automatically.
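The only hard constraint stated above is the cap of 5 reference inputs in total. This small sketch validates such a bundle before submission; the field names are hypothetical, not Wan 2.7's documented API.

```python
# Sketch: validate a subject/voice reference bundle (up to 5 inputs total).
# Field names are hypothetical, not Wan 2.7's documented API.

def build_reference_bundle(images: list[str], audio: list[str]) -> dict:
    """Combine image and audio references for a consistent identity."""
    total = len(images) + len(audio)
    if not 1 <= total <= 5:                # the model binds up to 5 inputs
        raise ValueError("supply 1 to 5 reference inputs in total")
    return {"subject_refs": images, "voice_refs": audio}

bundle = build_reference_bundle(
    images=["hero_front.png", "hero_profile.png"],
    audio=["hero_voice.wav"],
)
```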
Generation speed depends on complexity, duration, and quality settings. Standard 1080p output with audio typically takes a few minutes. Wan 2.7 is optimized for production workflows, delivering results much faster than traditional pipelines.
No. Wan 2.7 has zero content restrictions, no face filters, and no creative censorship. Create what you want for storytelling, animation, marketing, experimental art, or any other production needs.
Join creators, marketers, animators, and storytellers using Wan 2.7 to produce high-quality content with first-and-last frame control, multi-image composition, and native audio. No steep learning curve.
Try Wan 2.7 Now