May 3, 2023

I recently interviewed someone special. Here’s a clip:

So … what just happened here? This was a conversation between me and a version of me generated by artificial intelligence. The audio for “AI Ian” was created by ElevenLabs, while the video for “AI Ian” was created by the Creative Reality Studio by D-ID.

We are being bombarded with AI-generated videos like this one, as well as with synthetic images, AI-generated text and now voice-cloning. It’s time to learn how not to be fooled by this technology.

Voice-cloning and misinformation

First, a little bit about voice-cloning. ElevenLabs' voice synthesis platform lets users generate realistic audio of any person's voice by uploading a few minutes of audio samples and typing in text for it to say.

This is how viral clips you may have seen — such as a fake Emma Watson reading "Mein Kampf" or Bill Gates saying the COVID-19 vaccine causes AIDS — were created. Synthetic voices have come a long way since their inception, evolving from basic robotic tones to personalized voices that are nearly indistinguishable from natural speech.

To create a cloned voice, the computer needs two things: the words we want it to read and a sample of a voice we want it to sound like. This Medium article gives a clear example: If we wanted Batman to read the phrase "I love pizza," then we'd give the system text that says "I love pizza" and a short sample of Batman's voice so it knows what Batman should sound like. The output would then be audio of Batman's voice saying the words "I love pizza."

It’s obvious that this new technology has some pretty serious misinformation capabilities. A bad actor could use AI voice cloning to create a recording of a politician, journalist or other public figure saying something they never actually said. For example, an AI-generated clip supposedly of Joe Biden mocking transgender people went viral in February but was quickly debunked as a deepfake.

The viral image of the pope in a Balenciaga puffer jacket is an example of an AI-generated image. Like cloned voices, AI-generated images have the potential to spread misinformation. The fake mugshots of former President Donald Trump you may have seen are another example.

Tips to identify AI-generated content

Media literacy is all about understanding context. So, what can you do to avoid being duped by AI-generated content? The first thing you want to do when you see suspicious images or hear any puzzling audio is investigate the user who posted it.

  1. Do a keyword search for the username, display name or any other tidbits from the user's bio. In the case of the pope image, it's clear that the poster isn't a Vatican photographer, and neither their Twitter nor Instagram provides any evidence that they are a trustworthy source. You should always go deeper.
  2. Next, try to find out where the image or audio came from. For images, try a reverse image search. This is when you use a search engine to find similar or identical images — it might get you to the original source. A search for the pope image took us to a Reddit post, in a community for … MidJourney AI-generated images. For audio, do a keyword search for an exact quote, and if nothing pops up despite it seeming newsworthy, that’s a red flag.
  3. Pay attention to any quirks or inconsistencies in the voice or the image. The audio might have unusual pauses or changes in pitch or tone. It might sound slightly robotic or unnatural. For images, look closely at hands, eyeglasses and teeth. Are there too many fingers? Then zoom in on the background. Are the faces just blurry blobs? For audio, AI-detection tools can also help you gauge whether a recording was artificially generated.

The best tactic may simply be the trusty keyword search. If a recording or image looks suspicious, try to confirm it with other sources before accepting it as true. Cross-reference the information with other news sources, official statements or trusted experts in the field.

Bottom line: Don’t share or forward any recordings or images without first verifying their accuracy and authenticity. 

NOTE TO TEACHERS: Here is the full Is This Legit video created by Ian Fox. In addition, this article is featured in a free, one-hour lesson plan that offers students tips on how to identify AI-generated content. The lesson is available through PBS LearningMedia and includes a lesson summary and a handout, among other resources.
