Lightricks' LTXV Update: 60-Second AI Video Generation Breakthrough

Lightricks Announces Major LTXV Model Update

Lightricks has released a groundbreaking update to its LTX-Video (LTXV) model, now capable of generating 60-second videos from a single image. This advancement shatters the industry's previous 8-second limit, establishing LTXV as the first open-source model supporting real-time streaming of long-form AI videos.

From Short Clips to Full Narratives

The updated LTXV introduces an autoregressive streaming architecture, producing high-quality videos up to one minute in length. Unlike conventional models that output short segments, LTXV streams video in real time during generation: the first second appears almost instantly, followed by continuous scene construction. This approach ensures smooth motion and coherent narratives, offering creators unprecedented storytelling capabilities.
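The chunk-by-chunk flow described above can be sketched in plain Python. The `Chunk` type, the 24-frame chunk size, and the generator structure below are illustrative assumptions, not LTXV's actual internals; the point is only that each chunk is produced conditioned on what came before, so frames can be streamed as soon as they exist.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """A short span of decoded frames (hypothetical streaming unit)."""
    start_frame: int
    num_frames: int

def stream_video(total_seconds: float, fps: int = 24, chunk_frames: int = 24):
    """Yield chunks one at a time, mimicking autoregressive streaming.

    In the real model each chunk would be denoised conditioned on
    previously generated latents; here we emit only chunk metadata.
    """
    total_frames = int(total_seconds * fps)
    generated = 0
    while generated < total_frames:
        n = min(chunk_frames, total_frames - generated)
        yield Chunk(start_frame=generated, num_frames=n)
        generated += n

# A 60-second clip at 24 FPS is 1440 frames, streamed as 60 chunks.
chunks = list(stream_video(60))
```

Because the output is a generator, a player can begin displaying the first chunk while later chunks are still being produced, which is what makes the "first second appears almost instantly" behavior possible.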

Users can generate videos at 768x512 resolution or higher (up to 1216x704) at 24FPS by combining detailed text prompts with an initial image. The model's multi-scale rendering first captures rough movements at low resolution before refining details, optimizing both speed and quality.
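The coarse-to-fine idea behind multi-scale rendering can be illustrated with a simple resolution schedule. The downscale factors below are an assumption for illustration; the article does not specify LTXV's actual scales.

```python
def multiscale_passes(target_w: int, target_h: int, scales=(4, 2, 1)):
    """Coarse-to-fine schedule: rough motion is laid out at low
    resolution, then refined at progressively higher resolutions.
    The scale factors here are hypothetical."""
    return [(target_w // s, target_h // s) for s in scales]

# For the base 768x512 output, a 4x/2x/1x schedule would render at:
# [(192, 128), (384, 256), (768, 512)]
passes = multiscale_passes(768, 512)
```

Doing the expensive denoising work mostly at the small early resolutions is what lets a scheme like this trade little quality for a large speedup.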

Real-Time Control and Creative Options

The update introduces dynamic scene control, allowing users to adjust elements like posture, depth, or style during generation. Lightricks' proprietary IC-LoRA technology enables continuous application of control signals (such as posture or edge detection) for precise video detailing. This "generate-as-you-go" capability transforms LTXV from a simple prompt-based tool into a comprehensive narrative platform.
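How "generate-as-you-go" control might behave can be shown with a toy model: a control signal set at some point in the stream applies to that chunk and every later chunk until it is changed. This is a hypothetical illustration of the user-facing behavior, not IC-LoRA's mechanism.

```python
def apply_controls(chunks, controls):
    """Attach the most recent control signal (e.g. a pose or depth
    map) to each streamed chunk. `controls` maps a chunk index to the
    signal activated at that point; signals persist until replaced."""
    active = None
    tagged = []
    for i, chunk in enumerate(chunks):
        if i in controls:
            active = controls[i]
        tagged.append((chunk, active))
    return tagged

# Switch from pose control to depth control midway through generation.
result = apply_controls(["c0", "c1", "c2", "c3"], {0: "pose", 2: "depth"})
```

The key property is that edits made during generation only affect the not-yet-generated portion of the video, which is what distinguishes this from regenerating a clip with a new prompt.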

LTXV supports multiple generation modes:

  • Text-to-video
  • Image-to-video
  • Keyframe animation
  • Video expansion

Users can even combine multiple images or short clips to generate complex video content, making the model suitable for applications ranging from social media to educational content and AR visual effects.

Efficiency and Accessibility

The 13-billion parameter model features bfloat16 precision optimization, enabling operation on consumer GPUs (like NVIDIA RTX 4090/5090) with as little as 8GB VRAM. On an H100 GPU, LTXV generates a 5-second video in just 4 seconds, faster than real-time playback. This efficiency contrasts sharply with competitors like OpenAI's Sora, which typically requires multi-GPU clusters.
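The article's numbers can be checked with quick arithmetic. At 24 FPS, "5 seconds of video in 4 seconds" means 30 frames generated per second of compute, comfortably above playback rate. Note also that 13B parameters in bfloat16 occupy roughly 26 GB as weights alone, so the 8 GB VRAM figure presumably relies on offloading or further-quantized variants (an inference on our part, not stated in the article).

```python
def generation_fps(video_seconds: float, wall_seconds: float,
                   playback_fps: int = 24) -> float:
    """Frames produced per second of wall-clock compute time."""
    return (video_seconds * playback_fps) / wall_seconds

def bf16_weight_gb(params_billion: float) -> float:
    """Approximate weight footprint at 2 bytes per parameter."""
    return params_billion * 1e9 * 2 / 1e9

# H100 figure from the article: 5 s of video in 4 s of compute.
rate = generation_fps(5, 4)      # 30.0 frames/s, above 24 FPS playback
weights = bf16_weight_gb(13)     # ~26.0 GB for the raw bf16 weights
```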

As an open-source model, LTXV's code and weights are freely available on GitHub and Hugging Face under the LTXV Open Weights License. Lightricks has also released supporting tools including LTX-Video-Trainer and ComfyUI integration workflows to lower the entry barrier for creators.

Industry Impact and Future Developments

The update positions LTXV as a game-changer for independent creators, marketers, and game developers needing rapid long-form content generation. Early adopters report impressive results: one user generated a 768x512 video on an RTX 4080 in just 45 seconds, though some note that workflow setup could be improved.

The current license's non-commercial restrictions may limit adoption in business settings. Lightricks plans to address this with a commercial version supporting 60-second videos on its LTX Studio platform.

Key Points:

  • First open-source model supporting 60-second AI video generation
  • Real-time streaming with dynamic scene control capabilities
  • Operates efficiently on consumer-grade GPUs (minimum 8GB VRAM)
  • Multiple generation modes including text/image-to-video and keyframe animation
  • Open-source availability with supporting tools for community development