The answer is no. Photos that appear to depict those events aren’t real; they are the product of generative artificial intelligence.
Artificial intelligence, also known as AI, has existed for years. But generative AI, a subset of the field, recently has propelled the technology into public view.
Generative AI has had the fastest acceptance and usage growth of any new technology, even surpassing the iPhone, said Tom Davenport, a Babson College information technology and management professor.
Recent investment and advances in the technology have led to better-quality tools that can create realistic images from text prompts, or write a poem or essay on par with what humans can produce.
Generative AI exploded onto the scene in late 2022, when OpenAI, a San Francisco-based tech company, opened Dall-E, an image generator, and ChatGPT, an AI chatbot, to the public, letting anyone use them to create art or text. Competitors responded in kind, flooding the market with similar products.
The technology is likely to change the way we live and work, and it’s expected to transform a number of industries as companies incorporate it.
Here’s what to know about generative AI and how it’s used.
What is generative AI?
Artificial intelligence is a field of computer science that focuses on training computer systems to use processes or sets of rules, called algorithms, to perform tasks normally done by humans. We interact with the technology when using personal assistants like Apple’s Siri or Amazon’s Alexa, or when we see predictive text offered as we type a search query on Google.
Generative AI is a broad term that describes when computers create new content — such as text, photos, videos, music, code, audio and art — by identifying patterns in existing data.
“It’s AI that creates new content based on past content. It predicts what is most likely to be next in a sequence of words or images or music or anything sequential using machine learning models,” Davenport said.
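Davenport’s description — predicting what is most likely to come next in a sequence — can be illustrated with a deliberately tiny sketch. The example below is a hypothetical bigram model that simply counts which word follows which in a toy corpus; real generative AI uses deep neural networks trained on vastly more data, but the underlying idea of next-item prediction is the same.

```python
from collections import Counter, defaultdict

# Toy "past content" the model learns from (a stand-in for real training data)
corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows which
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Return the word that most often followed `word` in the corpus
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

Swapping the word counts for a neural network, and the toy sentence for billions of documents, is roughly the leap from this sketch to a modern text generator.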
How it works
Several types of generative AI tools are in use today, including text-to-text generators such as ChatGPT, text-to-image generators such as Dall-E, and others used to generate code or audio.
Valerie Wirtschafter, a senior data analyst in the Brookings Institution’s Artificial Intelligence and Emerging Technologies Initiative, said most generative AI tasks rely on deep learning.
That’s a method of artificial intelligence where computers are trained to use neural networks — a set of algorithms designed to mimic neurons in the human brain — to generate complex associations between patterns to create text, images or other content.
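The neural networks described above are built from many copies of one small unit: an artificial neuron that takes a weighted sum of its inputs and passes it through a nonlinear function. The sketch below shows a single such neuron with hand-picked, illustrative numbers — not a trained model — just to make the "mimic neurons" idea concrete.

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed by a nonlinearity (tanh here).
    # Deep learning stacks many layers of units like this one.
    return np.tanh(inputs @ weights + bias)

x = np.array([0.5, -1.0, 2.0])   # toy input features
w = np.array([0.1, 0.4, -0.3])   # weights a real network would learn
print(neuron(x, w, 0.2))         # prints a value between -1 and 1
```

Training adjusts the weights and biases across millions or billions of such units until the network's outputs match the patterns in its data.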
There are different types of deep learning models used to train generative AI tools, but the most widely used are transformers and generative adversarial networks, known as GANs.
Transformers, first described in a 2017 paper by Google researchers, are networks designed to more naturally process language. ChatGPT was built using a transformer-based large language model, a deep learning model trained on massive amounts of data. GPT stands for Generative Pre-trained Transformer. Transformers are also used in other text creation software, including Google’s Bard.
Davenport said transformers help AI better predict text in context, because they help identify the relationships between all the words in a sentence. For example, transformer-based models made it possible to distinguish between words that have more than one meaning, such as “bank,” based on the context in which they were used, he said.
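The "relationships between all the words in a sentence" that Davenport describes are computed by an operation called self-attention, the core of the transformer. The sketch below uses three hand-picked toy vectors (not real embeddings) and identity projections to keep things minimal: each word's score against every other word is turned into weights, and each word's new representation is a weighted mix of all the words — which is how context reaches an ambiguous word like "bank."

```python
import numpy as np

# Toy 3-dimensional word vectors, hand-picked for illustration only
words = ["deposit", "money", "bank"]
X = np.array([
    [1.0, 0.0, 0.2],   # "deposit"
    [0.9, 0.1, 0.0],   # "money"
    [0.2, 0.1, 1.0],   # "bank"
])

# Scaled dot-product self-attention (with identity query/key/value
# projections, which a real transformer would learn)
scores = X @ X.T / np.sqrt(X.shape[1])                  # word-pair similarity
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax
output = weights @ X                                    # context-mixed vectors

# "bank" attends to "deposit" and "money", pulling in financial context
print(weights[2])
```

In a full model, many attention layers run in parallel over learned projections, but the principle is the same: every word's representation is updated using every other word in the sentence.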
GANs, introduced in 2014, are mostly used for image and multimedia generation. They have two neural networks: a generator that creates an image based on data, and a discriminator that uses machine learning to predict whether the generated image is real or fake, said V.S. Subrahmanian, a Northwestern University computer science professor.
The first one or two generated images may not be good, but the discriminator can easily determine they are fake. Subrahmanian said that with each failure, the generator learns from its mistakes and produces better, more realistic images.
“Generative adversarial networks turned the scales,” Subrahmanian said, because they generate new realistic-looking images and videos.
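The generator-versus-discriminator loop Subrahmanian describes can be boiled down to a toy one-dimensional example. In the sketch below — an illustrative simplification, not a real GAN architecture — the "real" data is a cloud of numbers around 5.0, the generator is a single number it proposes, and the discriminator is a tiny logistic classifier. Each round, the discriminator learns to tell real from fake, and the generator uses the discriminator's feedback to make its output look more real.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: numbers clustered around 5.0
real = rng.normal(5.0, 0.5, size=200)

g = 0.0            # generator's single parameter: the value it proposes
a, b = 0.0, 0.0    # discriminator: d(x) = sigmoid(a*x + b)

lr_d, lr_g = 0.05, 0.25
for _ in range(1000):
    # Discriminator step: label real samples 1, generated samples 0,
    # then take one gradient step on binary cross-entropy loss
    xs = np.concatenate([real, np.full(200, g)])
    ys = np.concatenate([np.ones(200), np.zeros(200)])
    p = sigmoid(a * xs + b)
    a -= lr_d * np.mean((p - ys) * xs)
    b -= lr_d * np.mean(p - ys)

    # Generator step: nudge g to raise the discriminator's "real" score
    # for its output (gradient of log d(g) with respect to g)
    g += lr_g * (1.0 - sigmoid(a * g + b)) * a

print(g)  # the generator's output drifts toward the real data near 5.0
```

Real GANs play this same game with deep networks over millions of pixels instead of a single number, which is why each round of "failure and feedback" yields progressively more realistic images.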
You probably saw on the news A.I. generations of Pope Francis wearing a white cozy jacket. I’d love to see your generations inspired by it.
Here’s a prompt by the original creator Guerrero Art (Pablo Xavier):
Catholic Pope Francis wearing Balenciaga puffy jacket in drill rap… pic.twitter.com/5WA2UTYG7b
— Kris Kashtanova (@icreatelife) March 28, 2023
What’s behind the recent surge in interest?
Although generative AI had been around for a while before its recent ascendance, the technology was in its infancy and its use was limited, Subrahmanian said.
“The rise of generative AI is due to a trifecta of factors,” he said, including advances in deep learning such as generative adversarial networks; far more data available to train models; and more powerful graphics processing unit computers that help accelerate the training.
As a result, models can “generate high-quality images much faster than would have been otherwise possible,” he said.
When companies such as OpenAI made products available to the general public, it was transformative, experts said.
In September 2022, OpenAI made Dall-E 2 available to anyone, after initially offering it only to users on a waiting list that had grown to more than 1 million people. Competitors launched similar products, including Stable Diffusion and Midjourney.
Interest spiked again in November 2022, when OpenAI launched ChatGPT, allowing anyone to sign up for free to test it and provide feedback during a research preview. In April, the site had more than 206 million unique visitors, according to data analytics company Similarweb, which noted that growth had flattened since its initial launch.
“It was really only when ChatGPT was announced in November that the rest of the world really woke up to it,” said Davenport.
Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, said ChatGPT was a more accessible and better-behaved chatbot than most users had experienced, explaining the massive surge in public use.
Along with 2022 improvements in image generation capabilities, the release of OpenAI’s latest language model “sparked the current wave of public interest,” Toner said.
ChatGPT, Midjourney and Dall-E are among the most popular generative AI platforms in use, Subrahmanian said. ChatGPT and Dall-E were both created by OpenAI, while Midjourney comes from a research lab bearing the same name. Their rapid adoption has spurred an arms race, with several new companies and products seeking to enter the space.
Who is using it, and what is its potential?
“Generative AI has the power to help ordinary people overcome their weaknesses, and excel in ways they had not previously imagined,” Subrahmanian said. He cited examples such as:
- A non-native English speaker who needs to write a report in English could write it in her own words and then ask ChatGPT to rephrase it;
- An injured cartoonist could use an image generator to produce work in his style;
- A widow could use the technology to hear her deceased husband’s voice say, “I love you,” again.
Toner said large language models can be used for tasks including summarization, translation and chat. Image generators can be used for video game design, graphic design and animation.
“We are still in the early stages of exploring where these systems can and cannot be useful,” she said.
Wirtschafter, of the Brookings Institution, a Washington, D.C., think tank, said the scope of use for text generators such as ChatGPT is vast across education and the workplace. Students and teachers are embracing it, and professionals can use it “as a means of generating new ideas and speeding up work,” she said. She described text generators as acting like “Google on steroids.”
“People use it for speeches, talking points, generating code output, identifying citations, summarizing documents, generating event and article titles, and so on,” she said.
Businesses are also very interested in the technology, Davenport said. Kraft Heinz Co. in August released an AI-generated ketchup ad. Coca-Cola Co. in May released an ad that used generative AI, along with live action and other digital effects, to show a Coca-Cola bottle traveling through an art museum.
Other companies are using text generators to manage their internal knowledge, he said. He cited Morgan Stanley, which has been training GPT using 100,000 company documents to help address questions its financial advisors may have.
Concerns about the technology
Academic and industry leaders have expressed concern about AI’s potential downsides, including large-scale job loss, the rise of misinformation, the ensuing threat to democracy and the potential for AI to outsmart humans.
Nearly three-quarters of companies plan to integrate current and future AI systems into their operations, leading to valid concerns about the impact of AI on job security across sectors.
“Existing models are already capable of replacing or augmenting a decent portion of modern-day intellectual labor,” a spokesperson for the Center for AI Safety, a San Francisco-based research nonprofit, said. Future models “will likely be completely capable of doing various white-collar intellectual tasks.”
Though AI threatens certain sectors, the World Economic Forum estimates that it will have a net positive impact on job growth, predicting several million new jobs in education, agriculture and digital commerce and trade, and will increase demand for AI specialists.
“Generative AI is a double-edged sword,” Subrahmanian said. “If ChatGPT can perform a task currently performed by humans faster, better and cheaper, then those individuals’ jobs are at risk. But ChatGPT has already created new jobs such as ones based on prompt engineering. And it can enable people to overcome deficits and qualify for jobs that they did not qualify for before.”
Generative AI could be detrimental because of its lack of accuracy, as PolitiFact found when it put ChatGPT to a fact-checking test. OpenAI recognizes that its technology “still is not fully reliable,” can be “confidently wrong in its predictions,” and may “hallucinate facts and make reasoning errors.” Not only can generative AI contribute to the proliferation of misinformation, but internal reports indicate that ChatGPT can also create it convincingly.
ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.
it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.
— Sam Altman (@sama) December 11, 2022
Some AI experts are also concerned about the dangers posed by future iterations of the technology — a superintelligent “rogue AI” that supersedes human control. Major tech executives and industry leaders such as Elon Musk, Steve Wozniak, Andrew Yang and Rachel Bronson were among thousands of signatories on a March letter asking AI labs to pause development for six months on AI systems to improve safety and oversight of the technology.
The Association for the Advancement of Artificial Intelligence (AAAI) also published an open letter in April, highlighting AI technology’s social value while acknowledging key concerns that it said should be addressed through transparency, safety measures and engagement in ethics discussions.
But not all experts share these concerns. One of the “grandfathers” of AI, Yann LeCun, recently told BBC News that although AI will change certain aspects of the world, statements about AI threatening humanity are “preposterous.”
At a Senate hearing in May, Sam Altman, OpenAI’s CEO, urged legislators to regulate the industry.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”