April 28, 2023

Trusting News, a nonprofit that helps newsrooms build trust with the public, recently published a post on Medium that didn’t seem at all out of the ordinary. But there was something different about it: At the end of a story detailing the many ways newsrooms are talking about their approach to covering diverse communities, Trusting News disclosed that artificial intelligence technology was used to write the post.

The text would have been indistinguishable from human-written copy if it weren’t for the editor’s note explaining which specific tool had been used (ChatGPT), why it had been used (because the staff was curious about how journalists might make use of the chatbot), and how it had been used (to summarize published news articles). The disclosure also made clear that a human still edited the final text.

We are in a transformational moment. The rapid adoption of generative AI tools has suddenly created a whole new set of challenges for news consumers trying to decide whether to trust the information they’re bombarded with daily. It also requires news outlets to navigate the ethics of reporting and writing stories with the help of a technology that has been the subject of both breathless hype and terrifying doomsaying.

This new era requires that newsrooms develop new, clear standards for how journalists will — and won’t — use AI for reporting, writing and disseminating the news. Newsrooms need to act quickly but deliberately to create these standards and to make them easily accessible to their audiences. These moves are important for maintaining trust with news consumers and ensuring accountability of the press.

One of the most effective ways for news consumers to judge whether they should trust a news story is by checking the byline, in addition to asking questions like, “What is the source?” and “How was this information obtained and verified?” Audiences deserve to know when the answers to those questions include artificial intelligence tools.

This is especially true given the growing number of ethical questions raised about the technology: battles over copyright and intellectual property rights; instances of plagiarism; the potential for algorithmically driven bias; and high-profile cases of the technology “hallucinating” — making up facts and citing nonexistent sources. News consumers should be assured that safeguards are in place to make sure the content they’re reading, watching and listening to is verified, credible and as fair as possible.

To help news consumers understand what makes reporting trustworthy, journalists should explain all the steps they’re taking to continue to ensure that facts remain at the center of what they do. As journalism practices inevitably evolve, both aided and disrupted by this new technology, transparency is key.

Once newsrooms have decided on guiding standards for using AI technologies, they should communicate with their audiences about their decision-making processes and help them understand why those choices were made. Otherwise, news consumers could feel like the reasons for journalists’ decisions are arbitrary, or even ill-intentioned.

News organizations have increasingly engaged in conversations with their audiences to explain a whole range of issues, like their election coverage priorities, how crime gets covered, and why changes have been made to language and style guidelines. The same approach should absolutely be taken for explaining the role of AI in the reporting process.

One newsroom leading the way on this front is Wired. The magazine has a page dedicated to explaining how its journalists use AI tools (to suggest headlines or potential cuts to shorten a story, the policy states) and how they don’t (no AI-generated images instead of stock photos, according to the policy). Wired makes it clear to readers that these policies may change as the technology does.

Ultimately, such policies ensure transparency, which in turn fuels accountability and helps newsrooms avoid ethical missteps in the first place.

Building trust with audiences, committing to transparency, and ensuring accountability are more important for newsrooms than ever, given the potential for generative AI to be weaponized to spread disinformation at a pace we’ve never experienced before. As the volume and sophistication of false content grows, so does the importance of standards-based news organizations.

In a time when everything feels new, and maybe a bit scary, journalists should follow the examples of organizations like Trusting News and Wired. They should tell audiences about the decisions they’re making and why they’re making them. They should give audiences ways to hold them accountable. No technology, regardless of how sophisticated it is, can replace the trust journalists build with their audiences.

Christina Veiga is the senior director of media relations at the News Literacy Project, a nonpartisan nonprofit that teaches people how to identify credible news sources.
