Sora Adds "Extensions" for Longer Scene Continuity
OpenAI introduced Extensions, and early creator tests focus on whether longer clips preserve narrative consistency.
Source-linked reporting on AI video models, workflows, and policy.
Desk coverage
11 stories in this desk · last updated Feb 24, 2026.
Google’s updates to Veo 3.1 push mobile-first output and stronger image-to-video consistency for social formats.
Luma frames Ray3 as a reasoning model for production-grade cinematic output and agency workflows.
AI and Video topic boards show tool demand clustering around practical production functions.
ByteDance introduced Seedance 2.0 with claims around longer generation, multimodal control, and synchronized audio-video output.
Kuaishou announced Kling 3.0 on January 31, featuring native 4K 60fps output, up to six camera cuts per generation with visual consistency, and synchronized audio-visual output in a single pass.
Luma AI released Ray 3.14 on January 26, delivering native 1080p output without post-upscaling, alongside major speed and cost improvements.
Google DeepMind released Veo 3.1 on January 13, introducing professional 4K upscaling, native 9:16 vertical output, and Scene Extension technology for narratives exceeding 60 seconds.
ByteDance officially released Seedance 1.5 Pro on December 16, introducing joint audio-visual generation that creates video and audio simultaneously from text and image prompts.
Runway released GWM-1, an autoregressive model family built on Gen-4.5, spanning explorable environments, interactive avatars, and robotic simulation.
Kuaishou unveiled Kling O1, positioning it as the industry's first unified multimodal creation tool that consolidates text, video, image, and subject inputs into a single generation and editing engine.