By: Adam Rose
February 2, 2026

A lie travels the world on social media before a community note can lace up its shoes.

Last week, the White House shared an image of a protester arrested in Minneapolis. It had been digitally altered to add tears and darken her skin. Several news outlets debunked the image, some citing the use of artificial intelligence detection tools.

But the damage was done. Millions had already seen it on X.

AI has unleashed a torrent of photorealistic images and videos in our news feeds. As generative tools proliferate, cheap and convincing fakes will appear in more critical contexts — fast, frequent and often untraceable.

But in the longer term, the greatest danger posed by AI isn’t fake images themselves, it’s the collapse of trust in real evidence. Seeing is no longer believing — and our institutions are unprepared for it.

Days after federal agents killed Alex Pretti in Minneapolis, video surfaced from an earlier encounter he had with DHS. Journalists scrambled to address the “it’s AI” comments on social media and lament, “It’s wild seeing how many people in the replies are insisting that this video is AI.”

Restoring trust in the court of public opinion will require rethinking how digital “evidence” is created and handled in the first place. Our success likely depends on a proven legal concept: chain of custody.

Tech and media companies must move immediately to offer digital authentication tools that preserve this chain. It will require coordination from camera and phone manufacturers, social media platforms, browsers and even operating systems. Instead of trying to label AI, we must label what’s real.

So far, tech companies have kept touting labels on generative AI outputs even as bad actors strip that context away. So-called invisible watermarks can be removed. Government attempts to regulate AI have stumbled domestically and proven controversial overseas. Proposed laws to ban AI images in elections face confounding questions about the definition of parody and whether routine edits (like color correction) are permissible.

We’re not going to win an arms race with deepfakes. The human eye can’t spot six-finger glitches when there are none to be found. For every startup promising to identify bogus content, there’s another advance by fraudsters, and detection tools end up offering little more than a false sense of security.

When real evidence is dismissed

Because previous posts had been manipulated with AI, it was hard to believe this photo of Nicolás Maduro on Truth Social.

Failure to solve these systemic problems has led to creeping doubt in real evidence. Scholars call it the liar’s dividend. It causes real harm.

Politicians and even police have learned they can dismiss photos and videos as fake. Whether or not the specific claim holds up, the pattern is now familiar: deny, invoke AI, move on.

When Donald Trump repeatedly mixed up Greenland and Iceland in remarks caught on video, the White House press secretary gaslit a reporter: “No he didn’t … You’re the only one mixing anything up here.”

Google searches for George Orwell spiked this month.

“The Party told you to reject the evidence of your eyes and ears,” he wrote in his dystopian classic Nineteen Eighty-Four. “It was their final, most essential command.”

Orwell might have appreciated how the same command can backfire.

When Trump posted a photo online of Nicolás Maduro in handcuffs, journalists hesitated. It was plausible, but was it real? The New York Times found that AI detection tools were of little help. Meanwhile, actual deepfakes of the Venezuelan president ricocheted around the internet.

When enhancement becomes invention

For decades, television sold Americans a reassuring fantasy about forensic evidence. On shows like CSI, NCIS and Law & Order, a grainy image flickered on a computer screen until an investigator barked the magic word: Enhance! Pixels snapped into place. Reflections appeared where none seemed visible before.

AI puts similar capabilities within anyone’s reach. Outputs have quickly become convincing. The stakes feel low when you restore a family photo or touch up a selfie.

Unfortunately, armchair detectives have pushed AI beyond its limits. They tried to “unmask” the ICE agent who killed Renee Good in Minneapolis and “clean up” security footage of Charlie Kirk’s alleged shooter. When the results are shared on social media, they fuel online conspiracy theories and drain law enforcement resources with false leads.

When NPR covered the AI “unmasking” of the ICE agent who shot Renee Nicole Good, they were careful to label both images and the fabricated portion.

Following Kirk’s assassination, a Utah sheriff’s office went on Facebook to promote a “much clearer image of the suspect compared to others we have seen in the media.” They later admitted it seemed altered by AI, which “may distort glasses, shirt decals and make skin appear waxey and ultra smooth.”

This screenshot from the Washington Post shows how it used clear labels to bring attention to AI manipulation of an image of Charlie Kirk’s alleged shooter.

Reframe the problem

We can’t resign ourselves to cleaning up these messes after the fact. Instead, the solution demands a foundational shift to secure evidence from end to end.

Information flows downstream, where it gets polluted with deepfakes or simply false context (sometimes called a cheapfake). Platforms that prioritize clicks over verification often strip away the very metadata audiences and professionals need to judge when and where an image was captured.

The fixes must begin upstream. Trusting a piece of evidence requires a way to recognize it as the same object over time, no matter where it travels.

Fortunately, everything is digital these days. Authenticity metadata can be recorded at the moment of content creation. Cryptography can add a tamper-evident seal.

Let’s get technical

One technique is to register a photo, much like a person in a police fingerprint database. Computers can use math formulas to generate a unique identifying code from any file. This digital fingerprint, often just 64 characters long, is called a hash. It’s short enough to write on a napkin, post on social media, or even chisel in stone. Early researchers published them in classified ads, a precursor to the modern approach of blockchain registration.

A hash doesn’t reveal anything else about the original file, which is good news for privacy. A photo can’t be reconstructed from its hash, much like you can’t be reconstructed from your fingerprint. Yet both are reliable identifiers.
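For the technically curious, here is a minimal sketch in Python of how such a fingerprint is computed. The filename is a hypothetical placeholder; SHA-256 is one widely used hash function, and its output is the 64-character code described above.

```python
import hashlib

# Compute a SHA-256 digest of a photo's raw bytes.
# "protest_photo.jpg" is a hypothetical filename used for illustration.
with open("protest_photo.jpg", "rb") as f:
    fingerprint = hashlib.sha256(f.read()).hexdigest()

print(fingerprint)  # 64 hexadecimal characters identifying this exact file
```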

Hashes have been published in The New York Times classified ads since at least 1995. Example from 2009.

Years in the future, you could receive a photograph from a stranger and run it through the same math formula. If the new hash matches one previously registered, it proves an identical and authentic copy. If someone erases a shadow, swaps a license plate or merely brightens a face, the hash no longer matches.
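Verification is simply the same computation run again and compared against the registered value, as in this sketch; the filename and the registered hash below are placeholders, not real records.

```python
import hashlib

# A previously registered fingerprint (placeholder value for illustration).
REGISTERED_HASH = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def matches_registration(path: str, registered_hash: str) -> bool:
    """Re-hash the received file and compare it to the registered fingerprint."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == registered_hash

# Any alteration, an erased shadow or a swapped license plate,
# produces a completely different digest, so the comparison fails.
print(matches_registration("photo_from_a_stranger.jpg", REGISTERED_HASH))
```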

Hashing doesn’t make an image clearer. It makes it certain. It can’t tell you what happened – but it can tell you whether what you’re looking at is the same thing others saw before the arguments began.

Of course, the approach is unforgiving: changing a single pixel breaks the match. Real-world implementations require care to record legitimate edits, which are incorporated into a verifiable chain of custody.
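A simplified sketch of what such a record might look like is below: each entry links the fingerprint of the file before and after a named edit, so the full history can be re-verified later. The filenames and field names are hypothetical, and a production system (such as Content Credentials) would also cryptographically sign each entry.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def record_edit(chain: list, before: bytes, after: bytes, action: str) -> None:
    """Append one link to a simplified chain-of-custody log."""
    chain.append({
        "action": action,                    # e.g. "color correction"
        "before_hash": sha256_hex(before),   # fingerprint going in
        "after_hash": sha256_hex(after),     # fingerprint coming out
    })

chain = []
with open("field_photo.jpg", "rb") as f:            # hypothetical original
    original = f.read()
with open("field_photo_corrected.jpg", "rb") as f:  # hypothetical edited copy
    corrected = f.read()

record_edit(chain, original, corrected, "color correction")
print(json.dumps(chain, indent=2))
```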

Early prototypes

News organizations and human rights investigators have started testing these techniques to authenticate evidence of war crimes. Historians have secured testimonies from 60,000 genocide survivors. Courts, journalists and voters will increasingly need these approaches to reach agreement on what evidence is real; otherwise, accountability will break down.

Content Credentials can be applied to real and AI-generated images. Their existence doesn’t prove “truth” but instead helps audiences understand its components — sort of like a nutrition label. Screenshot from contentcredentials.org.

An approach called Content Credentials has gained some industry interest. It works by writing a manifest of provenance metadata, embedded with the original file or saved separately, that can be audited back to a physical camera. Manufacturers like Sony and Leica make it available on a limited number of models, while a free mobile app called ProofMode allows any journalist to try it in the field. Photoshop users can enable it to share a version history of their edits.
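As a rough illustration only, the sketch below shows the kind of information such a manifest records. The field names are invented for readability and do not reflect the actual Content Credentials (C2PA) schema, which is a signed, structured format with attestations from the capture device and each editing tool.

```python
# Illustrative stand-in for a provenance manifest; not the real C2PA format.
manifest = {
    "asset_hash": "9f86d081884c7d65...",    # fingerprint of the image file
    "captured_with": "Camera model attested by the manufacturer",
    "captured_at": "2026-01-28T14:02:11Z",  # hypothetical timestamp
    "edit_history": [
        {"tool": "Photoshop", "action": "crop"},
        {"tool": "Photoshop", "action": "color correction"},
    ],
    "signature": "...",  # cryptographic seal over everything above
}
```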

Despite that support, widespread adoption remains a hurdle. Experts are hesitant about the technical challenges inherent to digital provenance. Few devices or social media platforms have fully integrated any such standard, and developing one is no small task. Even when an image has Content Credentials today, your email, text message app, or news feed may have no way to display them.

Now what?

We all need the tools to generate reliable provenance information – and to read it. No system can eliminate deception entirely, but when provenance data becomes the norm, its absence will warn audiences to be skeptical.

Tech companies must also give users the privacy controls to opt out of sharing too much data. Society doesn’t need a certificate of authenticity with GPS coordinates for every selfie you take at brunch. Revealing a phone’s serial number could endanger a whistleblower. But it’s important to have these options when you record evidence that could sway a court of law or public opinion.

Newsrooms remain in a bind: expected to verify evidence they did not capture, on platforms they do not control, using tools they did not design. Even without the resources to build their own systems, journalists can begin educating audiences about how provenance works – and why it matters. That awareness, in turn, can push consumers and civic leaders to demand real authentication from tech companies.

Until major platforms display provenance as prominently as likes or views, doubt will remain the default – and trust will continue to crumble.

Adam Rose is a fellow at the Starling Lab For Data Integrity, an academic research program based jointly at the Stanford Department of Electrical Engineering…
