What Can We Still Believe When the Real and the Artificial Merge?

When we can no longer distinguish between real footage and artificially generated images, there’s more at stake than just media literacy. What’s on the line is nothing less than a new social contract around truth – and the question of whether we help shape it consciously.

By Thomas Fenkart · 6 min read

We’ve reached a point where the camera’s promise – “This really happened” – has started to crack. By the end of 2025, the line between live-action footage and artificially generated video will blur so much that even professionals will struggle. What began a year or two ago with still images now extends to moving images, sound, voices, entire personalities. This is more than a technological milestone. It’s an assault on a quiet foundational assumption of our society: that there are things we can collectively agree on because they are visible, verifiable, documentable.

“We’re not losing reality – we’re losing the convenience of trusting it blindly.”

In this article, we’ll ask what we can still believe once that distinction no longer works – and which new mechanisms we’ll need to preserve at least a basic level of shared trust.

When Video Evidence Stops Being Evidence

For more than a hundred years, one rule held in both film and journalism: the camera doesn’t lie – only the person showing the image might. Of course, everyone knew you could cut, frame, stage. But there was always a bedrock of factuality: there really was a camera, at a real place, at a real time.

With generative models, that basic certainty collapses. What we see may never have “taken place” at all. People who never existed give speeches that were never delivered, in rooms that were never built. The media program ZAPP has taken up this issue in a video segment and shown how quickly we reach a point where we can hardly tell what’s artificial – and how that puts our information culture under pressure.

The problem isn’t just the deception itself. It’s the consequence that everything can potentially be dismissed as fake. Anyone who wants to deny inconvenient truths will no longer need counterevidence – the phrase “That’s just a deepfake” will be enough to sow doubt. That flips the burden of proof: it’s no longer about exposing the fake, but about actively proving authenticity.

A New Social Contract Around Truth

If images and videos no longer work as “self-evidence”, we need new forms of trust. Three levels are emerging:

1. Technical verifiability
We’ll need mechanisms that make the origin and alteration of media transparent – a kind of “digital provenance trail”. Initiatives like C2PA are already working on standards where metadata is cryptographically signed: Who recorded or generated something, when, with what, and what was edited? For media organizations, platforms, and SaaS providers in the GenAI space, this means that content authenticity becomes a core responsibility – and will likely become a legal standard down the line. (A minimal code sketch of this signing idea follows at the end of this section.)

2. Institutional trust
When an individual video is no longer reliably trustworthy, a new question becomes crucial: Who stands behind it? Credible institutions that commit to clear review processes and disclosure obligations will gain in importance. Truth will then include not only the content itself but also its context: editorial oversight, methodology, correction mechanisms.

3. Individual judgment
In the end, there’s no way around it: we as individuals must become more discerning. Not in the sense of blanket distrust (“Everything is a lie”), but in the sense of a sharper awareness of plausibility, sources, and contradictions. Critical thinking is no longer a nice-to-have – it’s a core skill on par with reading and writing.

These three levels – technology, institutions, individuals – together form a new contract around truth. It only works if all three actively play their part.
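To make the idea of a cryptographically signed provenance trail concrete, here is a minimal sketch in Python. It does not implement the actual C2PA manifest format, which is far richer; it only illustrates the underlying principle of binding who/when/with-what metadata to a hash of the media and signing it. The creator and tool names are invented for illustration, and the example assumes the third-party `cryptography` package is installed.

```python
# Toy illustration of provenance-style signing. NOT the real C2PA
# manifest format, only the underlying signing principle.
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The creating device or service (camera app, editor, GenAI tool)
# would hold a signing key; here we just generate one.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def make_provenance_record(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Bind who/with-what metadata to a hash of the media itself."""
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "tool": tool,
        "edits": [],  # e.g. ["cropped", "color-graded"]
    }

def sign_record(record: dict) -> bytes:
    # Canonical serialization so signer and verifier sign/verify
    # exactly the same bytes.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return private_key.sign(payload)

def verify_record(record: dict, signature: bytes, media_bytes: bytes) -> bool:
    """Check the signature AND that the media still matches its hash."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    try:
        public_key.verify(signature, payload)
    except InvalidSignature:
        return False
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]

# Hypothetical creator/tool names, purely for illustration.
video = b"...raw video bytes..."
record = make_provenance_record(video, creator="Newsroom X", tool="CamApp 2.1")
sig = sign_record(record)
assert verify_record(record, sig, video)
assert not verify_record(record, sig, video + b"tampered")
```

Any later change to the file or to the metadata breaks verification, which is exactly the property a “digital provenance trail” needs. In practice this additionally requires a trustworthy way to distribute and attest the public keys, a problem that standards such as C2PA address through certificate infrastructure.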
Opportunity or Loss of Control? What This Crisis Does to Us

The idea that we can no longer trust our own perception is unsettling. At the same time, there’s an opportunity here that we as a society have rarely used: the chance to make our own belief-forming mechanisms visible in the first place.

We have long been accustomed to granting images and videos a kind of default trust. “I saw it with my own eyes” was a powerful argument – often stronger than source criticism or long-term evidence. With generative media, that comfort evaporates. This can trigger two very different reactions:

- Cynical withdrawal: “You can’t believe anything anyway.” Here, skepticism slips into arbitrariness. Those who doubt everything paradoxically become especially easy to manipulate, because in the end the most appealing narrative wins out, not the most plausible one.
- Reflective alertness: “I examine how I arrive at what I believe.” This is more demanding, but productive. It leads us to look not only at the content, but also at the conditions of its production: Who benefits? What gaps are there? Which alternative explanations are conceivable?

Perhaps this is the real cultural shift we’re currently stumbling into: away from a naïve realism about images and towards a more conscious engagement with narratives, evidence, and context.

Mechanisms We Should Be Building Now

The key question is not whether generative media will change how we deal with truth – that has already happened. The question is how we can actively counterbalance it instead of merely reacting. Here are a few concrete principles we, as individuals and organizations, should adopt:

1. Provide context by default
Anyone publishing content – whether a media outlet, a company, or a creator – should disclose origin, editing, and intent. In a deepfake era, “bare content” without context is a risk.

2. Secure authenticity technically, not just claim it
Digital signatures, provenance standards, watermarks for synthetic content: none of this is an optional flourish; it’s the new baseline if we want trust to scale. Especially in the GenAI SaaS space, this is part of product responsibility.

3. Establish critical routines
For individuals, this means internalizing a handful of questions before believing or sharing something. For example: Who is saying this? What independent sources exist? What would count as evidence to the contrary? For teams, it means setting up editorial or review processes that explicitly treat synthetic content as a possibility.

4. Accept that doubt will be normal
We will never again live in a time when a video, on its own, counts as proof. That doesn’t have to be the end of trust – but it is the end of blind trust. A certain degree of constructive doubt will be part of everyday life.

When the real and the artificial can no longer be told apart, we lose a set of supposed certainties. But we gain the chance to decide more consciously what we base our trust on: verifiable origin, transparent processes, and a culture that treats critical questions not as a nuisance but as the foundation of how we live together.

And here is the ZAPP media magazine segment (in German): https://www.youtube.com/watch?v=tXsORfMcVyw