Runway Unveils Gen-4 AI Model, Advancing Video Generation with 'World Consistency'

The new video generation model is now available to all paid and enterprise customers

AI video startup Runway recently announced its new Gen-4 series of AI models on X, showcasing their ability to generate media using just an image as a reference.

According to Runway, Gen-4 sets a new benchmark in video generation, surpassing its predecessor, Gen-3 Alpha. “It excels in generating highly dynamic videos with realistic motion, maintaining subject, object, and style consistency, while offering superior prompt adherence and best-in-class world understanding,” the company shared.

Gen-4 is Runway’s first AI model that claims to achieve "world consistency." Co-founder and CEO Cristóbal Valenzuela emphasized that users can now create cohesive environments, objects, locations, and characters across their projects.

Jamie Umpherson, head of Runway Studios, added, “You can start telling longer-form narratives with actual continuity—generating the same characters, objects, and locations across different scenes, allowing for structured storytelling.”

Providing a behind-the-scenes look at Gen-4’s capabilities, Runway explained that users can direct subjects within a scene while maintaining visual coherence.

The official research page highlights that users can define a specific look and feel, which the model maintains throughout every frame. Additionally, Gen-4 enables regenerating elements from multiple perspectives, making it particularly useful for product photography and narrative storytelling.

The new video generation model is now available to all paid and enterprise customers. A collection of short films and music videos created with Gen-4 can be found on Runway’s behind-the-scenes page.