
Tongyi Lab's New Video Tool Makes Editing as Easy as Writing a Document

Tongyi Lab Democratizes Video Production with AI-Powered Wan2.7-Video

Imagine being able to edit videos as effortlessly as you compose an email. That's the promise behind Aliyun Tongyi Lab's latest innovation, Wan2.7-Video, which launched this week to address two persistent pain points in digital content creation: technical complexity and creative limitations.

Breaking Down Creative Barriers

The platform represents a quantum leap from conventional video editors. "We've essentially built word processing software for visual storytelling," explains a Tongyi Lab spokesperson. Users can now manipulate every element of their videos, from scene composition to character dialogue, using simple text prompts or by dragging existing media assets into the system.

What sets Wan2.7-Video apart is its multimodal understanding. Feed it text descriptions, reference images, existing video clips, or even audio cues, and the AI generates coherent visual sequences. Need your protagonist to deliver different lines? Just type the new dialogue. Want to transport your beach scene to a snowy mountain? A single command makes it happen.

Hollywood-Grade Editing at Your Fingertips

Professional filmmakers might raise an eyebrow at claims of "one-click" environment changes, but early testers confirm the tool delivers remarkable results. The background replacement feature doesn't just slap on new scenery; it intelligently adjusts lighting, shadows, and perspective to maintain visual consistency.

The editing capabilities border on magical:

  • Object manipulation: Delete unwanted elements (goodbye photobombers!) or swap props without leaving artifacts
  • Temporal control: Adjust pacing and transitions down to individual frames
  • Style transfer: Apply cinematic filters or mimic specific directors' visual signatures
  • Performance tweaking: Modify actors' expressions and movements post-production
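
The capabilities above map naturally onto a non-destructive edit stack, where each operation is recorded rather than baked into the footage. This is a speculative sketch of that pattern with invented operation names; Wan2.7-Video's real internals are not public:

```python
# Non-destructive editing as a stack of recorded operations: any step
# can be undone or reordered without touching the original footage.
# Operation names ("remove_object", "style_transfer", "retime") are
# illustrative only, not documented Wan2.7-Video features.
class EditStack:
    def __init__(self, source: str):
        self.source = source          # original clip is never modified
        self.ops: list[dict] = []     # ordered list of pending edits

    def add(self, op: str, **params) -> "EditStack":
        self.ops.append({"op": op, **params})
        return self                   # return self to allow chaining

    def undo(self) -> None:
        if self.ops:
            self.ops.pop()

timeline = EditStack("family_picnic.mp4")
timeline.add("remove_object", target="photobomber") \
        .add("style_transfer", look="film-noir") \
        .add("retime", start=12.0, speed=0.5)
timeline.undo()  # drop the retime; the earlier edits are untouched
```

Because edits are data rather than rendered pixels, the "infinite variations" the article mentions fall out for free: fork the operation list and regenerate.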

Creative Superpowers for Storytellers

Where Wan2.7-Video truly shines is in narrative flexibility. Writers can experiment with alternate plotlines by regenerating scenes instead of reshooting them. Content creators can repurpose successful sequences across multiple projects while maintaining brand consistency.

The "story continuation" feature particularly excites educators and marketers. Start with an existing video clip, then have the AI generate logical extensions - perfect for serialized content or branching narrative experiments.

Key Points:

  • Intuitive interface replaces complex editing software with simple text commands
  • Multimodal input accepts text, images, video clips and audio as creative starting points
  • Non-destructive editing preserves original footage while enabling infinite variations
  • Style replication maintains visual coherence when modifying existing content
  • Real-time collaboration allows teams to work simultaneously on different scene elements

