EU AI Act: What Creative Companies Need to Do Now—In Practical Terms

The EU AI Act is in force—gradually, but binding. What creative agencies and production companies need to review and document now, in concrete terms.

By Thomas Fenkart · 4 min read


There are laws you can ignore until they catch up with you. And there are laws where that's a bad idea. The EU AI Act belongs in the second category, even if it still hasn't really landed in many creative offices and production companies.

The clock has been ticking for a while now. The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and has been applying in phases ever since. The first bans, such as on social scoring or emotion recognition in the workplace, already took effect in February 2025. More obligations will follow in the coming months. If you're still thinking "we'll need to read up on that," you genuinely don't have much time left.

I'm not going to play lawyer here; I'm not one. But as someone who spent years in the creative and production industry and now implements GenAI in software, I've had to grapple with it. And what I learned: it's less complicated than it sounds, as long as you know where to look.

What already applies, and what's still coming?

The EU AI Act is built around a risk model with four levels: prohibited practices, high-risk AI, transparency obligations, and minimal risk. For most creative companies, the extremes, meaning the prohibited practices and the high-risk segment, are not very relevant. Video editing AI, music generators, image editing: that's not what "high-risk" is about.

What definitely does apply to the creative sector: the transparency obligations. Under the EU AI Act, these become binding as of 2 August 2026. In practical terms, that means: anyone using AI systems that interact with people, say AI chatbots on your website or AI-supported customer communication, must ensure users know they're speaking to a machine. That sounds trivial, but it's a legal standard that needs to be documented.

Even more relevant for creative companies: AI-generated content must be identifiable as such.
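To make the chatbot disclosure concrete: the Act requires that users know they're talking to a machine, but it prescribes no particular wording or mechanism. The following is a purely illustrative sketch; the function name and notice text are my own invention, not anything mandated by the regulation.

```python
# Hypothetical sketch: make sure every chatbot conversation carries an
# AI disclosure. The wording and mechanism are illustrative only; the
# AI Act mandates the disclosure itself, not a specific implementation.

AI_NOTICE = "Note: you are chatting with an AI assistant, not a human."

def with_disclosure(reply: str, first_turn: bool) -> str:
    """Prepend the AI notice on the first turn of a conversation."""
    return f"{AI_NOTICE}\n\n{reply}" if first_turn else reply

print(with_disclosure("How can I help you today?", first_turn=True))
```

The point is less the code than the habit: the disclosure should be a fixed part of the system, not something an individual project remembers to add.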
Deepfakes in particular, and AI-generated texts intended to inform the public, must be clearly and visibly labeled. This affects advertising agencies just as much as film productions using synthetic elements.

The GPAI rules, for providers of so-called General Purpose AI models, apply under the AI Act from August 2025. So if you develop an AI model yourself or build products on top of such models, you're directly on the hook here: transparency about training data, copyright compliance, and, for models with systemic risks, risk assessments as well. The high-risk rules roll out in stages from August 2026 and August 2027.

What this means in practice

Let me make it concrete. A film production company using GenAI in post (color grading, sound design, VFX assists) probably has less to worry about than you might think. These applications largely fall into the "minimal risk" category, for which the AI Act doesn't prescribe specific rules. What you should review, though:

1. Transparency for AI-generated content
If your output goes public (ad materials, social posts, campaigns) and AI played a significant role, you should clarify internally: Do we have a labeling obligation? From what point is content considered "AI-generated" under the law? That hasn't been fully settled by case law yet, but an internal policy is a good starting point.

2. Copyright compliance for AI tools
What training data did the tools you use rely on? It's an uncomfortable question, because many vendors aren't very transparent here. For companies producing AI-generated content professionally, it's advisable to address this question in writing with tool providers. Not out of paranoia, but because it's a legitimate risk.

3. Documenting AI use
Sounds bureaucratic, but it's sensible: record which AI systems the company uses and for what purposes. That makes future compliance checks much easier, and it also helps internally to use AI in a structured way.

4. A high-risk check for specific applications
If a creative company uses AI for HR decisions (AI-powered recruiting software), access checks, or performance evaluations, then the high-risk track becomes relevant. These are precisely the kinds of use cases where the AI Act sets very specific requirements, kicking in from August 2026 and August 2027 respectively.

What I find a bit irritating in the current debate: many companies are waiting for absolute legal certainty before doing anything. They'll be waiting a long time. Laws are always clarified through practice and court rulings. What does make sense instead: a simple internal AI policy that captures the current state of knowledge. Not as a legal masterpiece, but as a practical document that shows the company takes the issue seriously. And then: update it every six months. That's enough for now. And it's better than nothing.

Note: This is not legal advice.
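The AI-use documentation recommended in point 3 above doesn't need to be elaborate. As a purely illustrative sketch (the field names are my own choice, not anything the Act prescribes, and the example tool and vendor are invented), such a register could start as a small structured record:

```python
# Illustrative sketch of an internal AI-use register (point 3 above).
# The fields are one reasonable choice, not a legal requirement.
from dataclasses import dataclass, asdict

@dataclass
class AIUseRecord:
    tool: str            # product name
    vendor: str
    purpose: str         # what the tool is used for in the company
    output_public: bool  # does the output reach the public? (labeling question)
    risk_notes: str      # e.g. open training-data / copyright questions

register = [
    AIUseRecord(
        tool="ExampleImageGen",          # hypothetical tool name
        vendor="Example Vendor GmbH",    # hypothetical vendor
        purpose="Moodboards and pitch visuals",
        output_public=False,
        risk_notes="Training-data transparency requested in writing",
    ),
]

# Quick dump for a compliance review:
for entry in register:
    print(asdict(entry))
```

Whether this lives in code, a spreadsheet, or a wiki page matters far less than that it exists and gets updated.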