The AI Act and copyright

I've seen enough technological disruptions to recognize when public debate focuses on the wrong questions while the real structural changes happen elsewhere.

The case of the European AI Act and copyright is exactly one of those moments.

The mainstream narrative is a distraction

They tell you this story: Europe passed the first comprehensive AI law, protecting authors' rights by forcing AI companies to respect opt-outs and document training data. It's a battle between technological innovation and the protection of human creativity. You need to pick a side.

This narrative is technically accurate but strategically useless. It's like describing the pieces on a chessboard without seeing the actual game being played.

The real game is about something much deeper: a fundamental reallocation of economic power in the creativity value chain. And this reallocation is happening through mechanisms that public debate isn't even considering.

The value void everyone ignores

Let's start with a settled legal fact: in the European Union, a work generated autonomously by artificial intelligence is not protected by copyright. It falls immediately into the public domain. This isn't a system bug or a legislative gap waiting to be filled. It's the direct application of a centuries-old principle: copyright protects human creativity, period.

The Court of Justice of the European Union has built rock-solid case law on this over the years: a work is protected only if it is the "author's own intellectual creation," the result of "free and creative choices" that imprint a "personal touch." A machine, by definition, has no personality to imprint.

The United States says the same thing. The UK, which for over three decades has had an odd exception protecting "computer-generated works," has proposed scrapping it. There's global convergence: no creative human, no copyright.

This legal choice creates a gigantic economic void. If you invest hundreds of millions to develop an AI model that generates content, and then anyone can freely copy those outputs, how do you recover the investment? How do you compete if a competitor can take for free what cost you a fortune to produce?

The mainstream stops here and says: "See? It's a problem. We need to protect AI investments." But this is exactly the wrong question.

The inversion that actually matters

The right question is: if the output has no protected value, where does the value shift?

The answer is brilliant in its simplicity: it shifts to the input.

If the AI-generated image is in the public domain, the real asset becomes the prompt that generated it. If the algorithm-produced text is freely copyable, what has commercial value is the human instruction that guided that production.

This isn't a technical detail. It's a fundamental systemic inversion.

For decades, the economic model of creativity has been: I create a work → I protect it with copyright → I license it → I earn. The asset was the output.

The new model is: I design an instruction → I protect it as trade secret → I use it to generate volume → I capture value at the service/platform level. The asset is the input and the infrastructure.

This changes everything. We're not talking about adapting the old system to new technology. We're talking about a paradigm shift in where economic power accumulates.

Trade secrets as the new copyright

This is where it gets interesting. The report I analyzed states it plainly: the best protection for a strategically valuable prompt isn't copyright, it's trade secrecy.

Copyright protects only the specific expressive form. A competitor can take your idea, reformulate it, and get the same result without infringing anything. A trade secret, by contrast, protects the information itself, for as long as it stays secret.

But this takes us back to a pre-modern protection model. Before the industrial age, medieval guilds protected their know-how through secrecy, not through patents or copyright. The recipe for Chinese porcelain, Venetian glassmaking techniques, alchemists' formulas: all protected through information control, not through public disclosure in exchange for temporary protection.

We're returning to that model. And this has profound consequences for how you structure an organization. No longer a portfolio of registered and licensable works, but vaults of trade secrets protected by NDAs, access controls, information compartmentalization.

The opt-out that doesn't work

Now we come to the part everyone believes is the heart of the matter: the obligation for AI companies to respect copyright holders' opt-outs when collecting data to train models.

Article 53 of the AI Act, in force since August 2, 2025, says that model providers must "identify and respect" the rights reservations expressed by holders. It sounds good on paper. But anyone who has ever implemented complex systems knows there's a chasm between "what the regulation says" and "how it works in practice."

The problems are systemic:

There's no universal technical standard for signaling an opt-out. One creator might put a tag in page metadata, another a directive in robots.txt, a third a clause in the terms of use. Are AI crawlers expected to recognize all of them? And who verifies? (The sketch after these points shows how fragmented even the automated checks get.)

The technical and financial burden falls entirely on creators. If you're a freelance illustrator or an independent musician, you must implement machine-readable mechanisms on your site, continuously monitor whether they're respected, and potentially sue companies with enormous legal teams if you suspect violations. That isn't an equitable system: it's a cost shift that leaves the right theoretically existent but practically unenforceable for most creators.

Retroactivity is a huge problem. Are models already trained on billions of works before August 2025 grandfathered in? Must they be retrained from scratch with every opt-out respected? Who checks?

And then there's the question nobody asks: who enforces? Is the European AI Office supposed to verify that OpenAI, Google, Anthropic, and Meta have actually respected every single opt-out in their training datasets? With what resources? With what technical capability to audit the inside of a model that's a black box with hundreds of billions of parameters?
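
To make the fragmentation concrete, here is a minimal sketch of what a single "respect the opt-out" check already involves, in Python. Everything in it is an assumption for illustration: the crawler name, the "noai" metadata token, and the "tdm-reservation" signal (inspired by the W3C TDM Reservation Protocol draft) are conventions and stand-ins, not mechanisms the AI Act defines.

```python
# A minimal sketch of an opt-out checker. It probes three of the channels a
# conscientious crawler might have to consult; none of them is mandated by
# the AI Act, and the signal names below are illustrative conventions.
import urllib.request
import urllib.robotparser
from html.parser import HTMLParser

USER_AGENT = "ExampleAIBot"  # hypothetical crawler name


def robots_allows(site: str, page_url: str) -> bool:
    """Channel 1: a Disallow rule in robots.txt aimed at this crawler."""
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(site.rstrip("/") + "/robots.txt")
    rp.read()
    return rp.can_fetch(USER_AGENT, page_url)


class MetaOptOut(HTMLParser):
    """Channel 2: an opt-out hint in page metadata, e.g. a 'noai' token
    or a 'tdm-reservation' meta tag (both conventions, not law)."""

    def __init__(self) -> None:
        super().__init__()
        self.reserved = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        name = (a.get("name") or "").lower()
        content = (a.get("content") or "").lower()
        if name == "robots" and "noai" in content:
            self.reserved = True
        if name == "tdm-reservation" and content.strip() == "1":
            self.reserved = True


def opted_out(site: str, page_url: str) -> bool:
    """True if any checked channel signals a rights reservation."""
    if not robots_allows(site, page_url):
        return True
    req = urllib.request.Request(page_url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        # Channel 3: an HTTP response header, per the TDMRep draft.
        if resp.headers.get("tdm-reservation", "").strip() == "1":
            return True
        parser = MetaOptOut()
        parser.feed(resp.read().decode("utf-8", errors="replace"))
        return parser.reserved
```

Three channels, three failure modes, and this still says nothing about opt-outs buried in terms of use that no parser can read. Multiply by billions of pages, and the enforcement question answers itself.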

The advantage of those already inside

This creates a gigantic structural advantage for those who had already collected enormous datasets in earlier years, when these rules didn't exist or weren't enforced.

Google has been indexing the web for over two decades. OpenAI collected data for years in legal gray zones. These actors already have their trained models. They've already captured the value of that knowledge.

The new rules, however well-intentioned, raise entry barriers for anyone wanting to enter later. A new European competitor wanting to develop a competitive model today would need to:

  1. Acquire or license data respecting all opt-outs
  2. Implement costly verification and documentation systems
  3. Compete against those who've already amortized those costs on global economies of scale

This is textbook regulatory capture, even if unintended. The rules protect incumbents more than they limit their power.

What's really happening to human creative work

The report presents the issue as a balanced debate: innovation on one side, author protection on the other. But to a pragmatic observer who has watched disruption play out across sectors, the direction is clear.

The value of standard creative content is collapsing. Not because AI is "better" than humans, but for basic economics: when marginal production cost tends toward zero, price collapses.

A stock illustration, a generic blog article, an ambient music track, a basic design: anything that doesn't require a unique and demonstrable human contribution will see its market value compressed toward zero.

The only creative works that will maintain high value will be those where the human element is:

  • Demonstrable (you have process documentation)
  • Verifiable (third parties can confirm your contribution)
  • Differentiating (brings something the machine can't replicate)

This means the surviving creative work will necessarily be more sophisticated, more personalized, more "signature." Paradoxically, AI eliminating generic creative work might elevate the minimum level required to be a professional creator.

The invisible systemic pattern

Step back and look at all this through a systems-architect lens, and the broader pattern becomes clear.

Before: Creator → Protected work → Distribution → Monetization (value captured by author)

Now: Data (often uncompensated) → Model (protected as company IP) → Output (public domain) → Value captured at platform level

Economic power has shifted from content creators to those who control the infrastructure that generates and distributes it. This isn't new: it's the same pattern as the music industry (from musicians to labels and streaming), video (from filmmakers to platforms), text (from journalists to social media).

AI is just the latest chapter of a longer story: the disintermediation of creators and the concentration of value in platforms.

How to operate in this scenario

If you need to navigate this world, some operational principles:

Document the process, not just the output. If you use AI in your creative work, keep detailed logs of every phase: prompts, iterations, curation choices, modifications. This documentation is your evidence of human authorship; a minimal sketch of such a log follows.
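
As an illustration only, here is one way such a log could work: a hash-chained JSONL file where each entry commits to everything before it, so rewriting history breaks the chain. The file name, step labels, and schema are my assumptions, not any established standard.

```python
# A sketch of a tamper-evident process log. Each entry stores a hash of the
# previous entry, so the chain breaks if history is rewritten after the fact.
import hashlib
import json
import time
from pathlib import Path

LOG_PATH = Path("creative_process.jsonl")  # hypothetical log location


def log_step(step_type: str, content: str, note: str = "") -> None:
    """Append one process step ("prompt", "ai_output", "human_edit", ...)."""
    prev_hash = "0" * 64  # genesis value for the first entry
    if LOG_PATH.exists():
        last_line = LOG_PATH.read_text(encoding="utf-8").strip().splitlines()[-1]
        prev_hash = json.loads(last_line)["entry_hash"]
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "step_type": step_type,
        # Store a fingerprint, not the content itself: the prompt may be a
        # trade secret that doesn't belong in a shareable evidence file.
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "note": note,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Example usage:
# log_step("prompt", "watercolor street scene, late afternoon light",
#          "initial creative direction")
# log_step("human_edit", "<revised draft text>", "rewrote the second stanza")
```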

Protect inputs as strategic assets. Your prompt libraries, your workflows, your processes: they're trade secrets. NDAs with collaborators, access controls, compartmentalization; one basic building block is sketched below.
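
And as a deliberately small illustration of "access control" in practice, here is a prompt library encrypted at rest. It assumes the third-party cryptography package (pip install cryptography); real compartmentalization would add per-team keys and a secrets manager, so treat this as the idea, not the deployment.

```python
# A minimal sketch: keep the prompt library encrypted at rest so that file
# access alone doesn't leak the trade secret.
import json
from pathlib import Path

from cryptography.fernet import Fernet

VAULT = Path("prompt_vault.bin")  # hypothetical encrypted store


def save_library(prompts: dict[str, str], key: bytes) -> None:
    """Serialize and encrypt the prompt library with a symmetric key."""
    token = Fernet(key).encrypt(json.dumps(prompts).encode("utf-8"))
    VAULT.write_bytes(token)


def load_library(key: bytes) -> dict[str, str]:
    """Decrypt and deserialize; fails loudly if the key is wrong."""
    return json.loads(Fernet(key).decrypt(VAULT.read_bytes()))


# Example usage (the key itself belongs in a secrets manager, not in code):
# key = Fernet.generate_key()
# save_library({"brand_voice_v3": "You are a copywriter who ..."}, key)
# prompts = load_library(key)
```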

Exploit the public domain. The growing volume of AI-generated content is a free resource. Use it as raw material, add your human creative layer on top, and you end up with something protectable and sellable.

Invest in your human signature. What differentiates you as a creator is no longer technical execution ability (AI does that), but the vision, taste, cultural context, narrative that only you can bring.

The question that matters

The real debate shouldn't be "do we protect AI outputs or not." That's the debate they want you to have because it distracts from structural issues.

The right question is: who will control the cognitive infrastructure of the future? And in a world where content tends to become commoditized, what economic model sustains the human creativity that feeds that very infrastructure?

These are systemic questions, not legal ones. And they require a deep understanding of how economic power restructures when underlying technological layers change.

The rest is theater.