Chinese regulators summoned Manus CEO Xiao Hong and Chief Scientist Ji Yichao to Beijing, then barred both from leaving the country while the NDRC investigates whether the startup's $2 billion sale to Meta violated export control and investment laws.
Baltimore filed suit against xAI, X Corp, and SpaceX, alleging Grok's image generation enabled mass creation of non-consensual sexualized images, including an estimated 23,000 images of minors over 11 days.
Senator Marsha Blackburn released a discussion draft of the TRUMP AMERICA AI Act, a sweeping framework that would deny fair-use protection for AI training on copyrighted works and incorporate the No Fakes Act.
The EU AI Act's requirement that all AI-generated content be marked in machine-readable format takes effect August 2, 2026, forcing video generation platforms to implement C2PA or equivalent provenance metadata.
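To make the compliance requirement concrete, here is a minimal illustrative sketch of what machine-readable provenance marking looks like in practice: an unsigned, C2PA-style manifest asserting that an asset was AI-generated and binding that claim to the asset's bytes via a hash. Field names loosely follow the C2PA specification (the `c2pa.actions` assertion and the IPTC `trainedAlgorithmicMedia` digital source type are real spec vocabulary), but this is not a conformant implementation; a production system would use a C2PA SDK and cryptographically sign the manifest.

```python
import hashlib
import json

# IPTC digital source type identifying media produced by a trained model.
IPTC_TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)


def build_manifest(video_bytes: bytes, generator: str) -> dict:
    """Return an unsigned, C2PA-style manifest asserting AI generation.

    Illustrative only: real C2PA manifests are CBOR-encoded, signed, and
    embedded in the asset container, not free-standing JSON.
    """
    return {
        "claim_generator": generator,
        "assertions": [
            {
                # Declares how the asset came to exist.
                "label": "c2pa.actions",
                "data": {
                    "actions": [
                        {
                            "action": "c2pa.created",
                            "digitalSourceType": IPTC_TRAINED_ALGORITHMIC_MEDIA,
                        }
                    ]
                },
            },
            {
                # Hash binding ties the claim to these exact asset bytes,
                # so tampering with the video invalidates the manifest.
                "label": "c2pa.hash.data",
                "data": {
                    "alg": "sha256",
                    "hash": hashlib.sha256(video_bytes).hexdigest(),
                },
            },
        ],
    }


manifest = build_manifest(b"\x00fake-video-bytes", "example-video-gen/1.0")
print(json.dumps(manifest, indent=2))
```

The key design point for regulators is the hash binding: a label that travels with the file but is not tied to its content can be copied onto arbitrary media, whereas a content hash (plus a signature, omitted here) makes the provenance claim verifiable.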
The US Senate passed the DEFIANCE Act unanimously in January 2026, creating federal civil remedies of up to $250,000 for victims of non-consensual deepfake images. The bill now awaits House action.
As of early 2026, 46 US states have enacted legislation targeting AI-generated media, creating a fragmented compliance landscape for video generation platforms operating nationally.
Anthropic accused three Chinese AI labs of large-scale distillation attacks on Claude, but the internet flipped the narrative into a debate about the industry's own data practices.
Canada's AI Minister summoned OpenAI's senior safety team after the February 10 Tumbler Ridge mass shooting revealed that the company had flagged the shooter's violent ChatGPT interactions months before eight people were killed but did not report them to police.
Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to lift safety restrictions on Claude for military use or face being labeled a supply chain risk and compelled to cooperate under the Defense Production Act.
A U.S. court order halted OpenAI's use of the "Cameo" label for Sora's character-consistency feature, adding naming risk to an already crowded launch cycle.
ByteDance released Seedance 2.0 on February 7, delivering cinema-grade video with synchronized audio and multi-shot storytelling, but viral deepfakes of Tom Cruise, Brad Pitt, and Disney characters triggered cease-and-desist letters from Disney and condemnation from the MPA within days.
h3h3 Productions and other creators sued Snap in the Central District of California, alleging the company used academic video datasets to train commercial AI features without permission.
Caixin and Reuters reported that a record-setting settlement and surge of 2025 lawsuits have set the stage for court decisions that could redefine how copyright law applies to generative AI.
A wave of California AI laws including AB 621 (deepfake damages up to $250K), AB 2013 (training data transparency), and SB 942 (AI disclosure) went into effect alongside similar legislation in 37 other states.
IPWatchdog analysis identified three landmark rulings from 2025 on AI training and copyrighted content that will shape how courts treat video-generation cases next year.
John Carreyrou and fellow writers filed suit against six AI companies on December 22, alleging unauthorized use of copyrighted works for model training.
Disney announced a $1 billion equity investment in OpenAI and a three-year licensing deal bringing Marvel, Pixar, and Star Wars characters to the Sora video generator.
The MPA called on OpenAI to take immediate action after copyrighted characters from member studios proliferated in Sora 2 outputs despite stated guardrails.
TikTok's December 4 community guidelines update expanded mandatory labeling for AI-generated content, introduced invisible watermarks via C2PA, and gave users controls to limit AI content in feeds.