AI startup Runway has announced its latest video model, Gen-4, designed to generate consistent scenes and characters across multiple shots, addressing a common weakness of AI-generated video.
Runway states on X that Gen-4 offers users more “continuity and control” for storytelling.
The Gen-4 video synthesis model, currently rolling out to paid and enterprise users, lets creators keep characters and objects consistent across shots: from a single reference image and a description of the desired composition, it can generate matching outputs from multiple angles.
To demonstrate these capabilities, Runway released a video showing a woman whose appearance remains consistent across varied shots, settings, and lighting conditions. Gen-4 follows Runway’s Gen-3 Alpha video generator, which extended video lengths but also drew controversy over reports that it was trained on scraped YouTube videos and pirated films.