5 Reasons AI Projects Don’t Deliver ROI — and How to Do It Right
Why do so many AI initiatives fail despite major investment? Thomson Reuters identifies five core mistakes — and how creative companies can do it better.
By Thomas Fenkart · 5 min read
A colleague in advertising told me something recently that’s stuck with me. His company had spent over a year investing in AI tools, running workshops, launching one pilot after another. And in the end: nothing measurable. People were still using the tools, but the productivity boost everyone hoped for never showed up.

This isn’t an isolated case. An analysis by Thomson Reuters—published in February 2026 and picked up by several outlets—shows that despite widespread AI usage, many organizations aren’t seeing any meaningful return on investment. In it, Jonathan Richard Schwarz, Head of AI Research at Thomson Reuters, identifies multiple factors—technological, conceptual, and organizational—that block successful AI implementations.

I compared that with my own observations from the creative industry. And I have to say: the patterns line up.

The “More Data” Fallacy—and What Actually Matters

The first, and maybe most common, mistake: companies assume that more data automatically leads to better AI results. More training data, more input, more everything. The logic sounds plausible—but it isn’t always true.

What actually matters is data quality and relevance. An AI model fed mountains of irrelevant or flawed material learns the wrong things. Or—more often—it produces outputs that look good on the surface but don’t hold up in substance.

For creative agencies, that means in practical terms: don’t just dump every asset you have into an AI system and hope it “learns.” Curate deliberately. Ask: what truly represents the work we want to produce?

The second mistake is closely related: believing that more compute, or a pricier model, will solve the problem. The analysis refers to “jagged intelligence”—a phenomenon where models excel at certain tasks and completely fail at others, in ways that aren’t predictable from the outside. More hardware doesn’t make that issue go away.
The Wrong Tool for the Wrong Job

Mistake number three is a classic: using the wrong tool for the use case. General-purpose models like ChatGPT or Gemini are impressively versatile—but that versatility is also their weakness when it comes to specialized applications.

If you’re a GenAI creative agency building an AI workflow for video production, you need different tools than a marketing team generating social media posts. That sounds obvious, but in practice it gets ignored all the time—because ChatGPT is simply the best-known option, the one everyone is familiar with.

The reverse is also true: specialized models have their own blind spots. They’re strong in their lane and fragile outside it. AI consulting for the creative industry also means spotting that mismatch early.

A Google representative summed it up recently: AI startups that are merely a thin UI layer on top of an existing language model won’t survive in the mid-term. What will remain are companies solving real, specific problems—with deep domain expertise.

When No One Knows What to Do with the Results

Mistake number four is less technical, but at least as consequential: a lack of AI literacy in the teams that have to work with the outputs. This isn’t about coding skills. It’s about whether copywriters, creative directors, or producers understand how to evaluate AI results, put them in context, and improve them. If someone just copy-pastes an AI-generated text without critically checking it, even the best model won’t help.

I’ve seen this in my own projects. The moment AI truly becomes productive isn’t when the tool gets better. It’s when the people on the team learn how to work with it. Prompt engineering is part of that—but really it’s about a much more fundamental understanding: Where is AI helpful? Where do I need human judgment?

The analogy that comes to mind: a great cinematographer can’t turn a bad script into a good film. The quality of AI outputs depends enormously on who’s working with them.
The Organizational Problem: Silos That Block AI Value

The fifth reason is structural. AI projects often fail not because of the technology, but because of how companies are organized.

If the IT department rolls out an AI tool without involving the teams who are supposed to use it every day, you get resistance. If decisions about AI investments are made without feedback loops to operational units, you end up with projects detached from reality. And if success metrics aren’t clearly defined before the project begins, then in the end no one knows whether it worked.

That sounds trivial. It isn’t. Especially in more traditional industries—and historically, the creative industry is one of them—innovation projects are often poorly integrated into day-to-day operations.

What helps: small, concrete use cases instead of big transformation programs. Run a pilot with a team that is actually motivated, not just the one that offers the least resistance. And above all: define clear, measurable goals. GenAI ROI for companies doesn’t come from enthusiasm—it comes from disciplined implementation.

I’m not saying this to be pessimistic—quite the opposite. The good news is: these five mistakes are avoidable. They don’t require miracle technology or massive budgets. They require clear thinking about what you actually want—and the willingness to start slower than the hype suggests.

That’s an unsatisfying piece of advice for anyone who wants impressive results immediately. But it’s the right one.