The IP Gambit: Lionsgate Unlocks Its 20,000-Title Vault to Shield Runway AI from Copyright Lawsuits
The deal aims to save millions in production costs and offers a legal shield against the copyright suits that follow from scraping public data. But the move, executed a year after the strikes, sharpens the unresolved IP conflict over creator consent for AI training.
The technology isn't here yet, but Lionsgate's deal with Runway AI is still notable: it's the first time a major studio has opened its entire vault, all 20,000 titles, to train a proprietary model. The capability isn't the point; a training set this small can't get today's technology to production quality. The point is what the deal structure tells you about how studios are thinking about the training data problem.
This is a partnership, not a licensing deal. Runway gets high-resolution training data from a major studio's library. Lionsgate gets a custom model that only their designated filmmakers can use. Both sides get some legal cover—Runway avoids the copyright issues that come with scraping public content (they're already facing a class-action lawsuit over that), and Lionsgate can tell stakeholders they're only training on content they own.
The stated use case is cost reduction on storyboarding, previsualization, and special effects. Michael Burns, Lionsgate's vice chair, says it'll save "millions and millions of dollars." For a studio that's long positioned itself as cost-conscious compared to bigger rivals, that's the whole point. You can test greenlight decisions by feeding a script into a model trained on your existing franchises and seeing what comes out.
So what does this mean for product teams thinking about their own AI implementations? First, "train on what you own" is one approach when licensing third-party content is complicated or expensive. But it only works if you actually own enough high-quality data to make a useful model. Lionsgate has 20,000 titles; most companies don't have that kind of depth in any domain. And even that hasn't been enough to produce content that makes it to the screen.
Second, ownership gets messy. Lionsgate owns the copyright to finished films, but those films were created by artists, directors, VFX teams, and actors who may not have consented to their work being used to train an AI model. That's why the Writers Guild and SAG-AFTRA spent months on strike last year over exactly this question. The deal doesn't resolve those concerns—it just means Lionsgate decided they could move forward anyway.
Third, proprietary models trained on owned content don't eliminate legal risk—they shift it. You're still making a bet about how courts will eventually rule on AI training. The advantage is that "we only used our own content" sounds better than "we scraped the entire internet," but it's not clear that distinction holds up legally, especially when your content was created by people who never agreed to this use.
The broader industry context matters too. This deal came a year after the Hollywood strikes, which were fought in part over AI protections. Runway has a lawsuit pending from visual artists over alleged scraping. And most AI video models still can't produce anything close to production-quality output. Lionsgate is betting the technology will get there before the legal questions get resolved, and that having a proprietary model trained on its library will be worth more than waiting for clarity.