February 15, 2023

Watch how fast this artificial intelligence tool can write a fake article, from a fake newspaper, with a fake reporter. 

Amazing, but kind of scary, too. This new technology is called ChatGPT. Developed by OpenAI, it can generate thousands of words in minutes on just about anything — even nonsense that sounds extremely legit. This type of AI is very good at spitting out what we’ll call “plausible BS.” And that’s not good if — like us — you’re in the business of truth. 

So, let’s take a look at it.

A viral sensation

ChatGPT came out in late November 2022 and quickly became a viral sensation. It already has more than a million users. ChatGPT was fed millions of pieces of writing from the internet, including from Wikipedia and Reddit. As a result, it can give you some pretty convincing responses to any questions or prompts you throw its way.

It uses complex math to figure out what to say: specifically, a calculation that picks the most plausible next word based on the words that came before it, learned from enormous amounts of text posted online through 2021. This helps its responses sound human. And because it can answer any question or prompt in seconds, it raises real concerns about how it might be used to spread misinformation in the coming years.
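
To see the idea in miniature, here is a toy next-word picker written in Python. It is only a sketch: the word pairs and probabilities below are invented for illustration, and a real system like ChatGPT scores tens of thousands of possible words with a neural network instead of a small lookup table.

```python
import random

# A toy next-word predictor. The probabilities are made up; a real model
# learns them from enormous amounts of training text.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def pick_next_word(prev_two):
    """Choose the next word based on the words that came before it."""
    probs = next_word_probs.get(prev_two)
    if probs is None:
        return None  # the toy model has nothing more to say
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights)[0]

words = ["the", "cat"]
while True:
    nxt = pick_next_word((words[-2], words[-1]))
    if nxt is None:
        break
    words.append(nxt)

print(" ".join(words))  # e.g. "the cat sat on the"
```

Run it a few times and the sentence changes, because each next word is sampled from probabilities rather than fixed. That randomness is part of why the output can sound fluent and human.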

The answer is slightly frightening. ChatGPT is currently free to use, which makes it cheap and easy for bad actors to create misinformation at scale. Contrast that with Russian troll farms, which spent more than $1 million a month trying to disrupt the 2016 U.S. presidential election.

ChatGPT doesn’t provide any sources for the information it puts out, a clear red flag. I decided to ask it why it doesn’t cite sources, and it said: “I am a machine learning model that was last trained on new data in 2021. I don’t have the ability to provide sources as I don’t have access to the internet. My knowledge is based on the data that was used to train me.”

Persuasive misinformation

The potential for misinformation is huge. In one case, Georgetown researchers used ChatGPT to create misinformation in the form of tweets about climate change and foreign affairs. And, as the article says, “People found the posts persuasive.” 

Another group of researchers asked ChatGPT to write about vaccines “in the style of disinformation.” The system fabricated a claim that a study published in the Journal of the American Medical Association “found” the COVID-19 vaccine to be effective in only 2% of people. No such study exists; both the journal reference and the statistic were invented.

ChatGPT has a counterpart, also created by OpenAI, called DALL·E 2, which generates images from text descriptions. Used together, ChatGPT and DALL·E can produce misinformation that looks scarily convincing.

Some advice

Who better to ask how to detect AI-generated content than ChatGPT itself? I gave it this prompt: Produce several media literacy tips designed to help people figure out if social media content was generated by Artificial Intelligence.

It spit out this advice: 

  • Check for patterns and inconsistencies: AI-generated content may have patterns or inconsistencies that indicate it was generated by a machine, such as repeating the same phrase multiple times or using the same structure repeatedly.
  • Look for signs of human error: AI-generated content may contain errors or inconsistencies that would not be present in human-generated content, such as awkward phrasing or lack of context.
  • Check the context: AI-generated content may be out of context or not make sense in the context of the conversation.

And then it gave this warning: “It’s important to note that AI-generated content is becoming increasingly sophisticated and may not always be easy to detect.”

Spotting AI content

Some recent developments show promise in detecting AI-generated text. First, OpenAI, ChatGPT’s creator, is working on a way to watermark the tool’s output. This would work by hiding a special sequence of letters spread throughout the text, so that even if a few of them are edited, the text could still be flagged as AI-generated.
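
OpenAI hasn’t published how its watermark will work, but one research idea circulating at the time gives the flavor: while the model writes, nudge it toward a secret, pseudorandom “green list” of words, then detect the watermark later by counting how often green words appear. The Python sketch below shows only the detection side of that idea; the secret key and the scoring rule are made up for illustration, not OpenAI’s actual scheme.

```python
import hashlib

SECRET_KEY = "demo-key"  # hypothetical shared secret known to the detector

def is_green(prev_word, word):
    """Deterministically assign about half of all word pairs to a 'green
    list' that depends on the previous word and the secret key."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text):
    """Watermarked text should use green words far more often than the
    roughly 50% a human writer would hit by chance."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

# A fraction well above 0.5 flags likely watermarked output. Editing a few
# words only changes a few pairs, so the signal tends to survive light edits.
print(green_fraction("some sample text to score for the hidden watermark"))
```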

OpenAI has also designed a tool called the AI Text Classifier, similar to the plagiarism checkers that teachers often use, to help educators estimate how likely it is that a piece of text was created by AI.

At the same time, a Princeton student developed an algorithm that measures how predictable a text is in order to determine whether it was created by AI. Human-written text tends to be more unpredictable than AI-produced work, and that statistical gap can help distinguish bot from human.
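
Here is a rough sketch of that predictability test in Python, using the small open GPT-2 model from Hugging Face’s transformers library. The choice of model is mine for illustration; the student’s actual code isn’t described in this article.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity = more predictable text = more likely AI-written."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels set, the model returns its average next-token
        # "surprise" (cross-entropy loss) over the text.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

print(perplexity("The quick brown fox jumps over the lazy dog."))
```

The score is a tendency, not a proof: detectors built on this idea also look at how much predictability varies from sentence to sentence, since human writers tend to mix surprising sentences with plain ones.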

Some limitations

ChatGPT is a powerful AI technology that has the potential to spread misinformation and create confusion in society. But it’s important to recognize its limitations.

If you’re trying to figure out whether something you’re reading could be AI-generated (that fake article about Beyoncé and the camel, for example), ask yourself these three questions from the Stanford History Education Group:

  1. Who is behind the information?
  2. What is the evidence?
  3. What are other sources saying?

Finally, do some lateral reading. Open up tabs on your computer and find out what other credible sources are writing about the topic.

Let’s fight back with our brains. Now more than ever, it’s crucial to sharpen our critical thinking skills online so we can recognize the signs of AI-generated content.

NOTE TO TEACHERS: This article is featured in a free, one-hour lesson plan that teaches students what ChatGPT is and offers tips on how to identify AI-generated content. The lesson is available through PBS LearningMedia, and includes a lesson summary and a handout, among other resources.
