By: Mark Caro
September 22, 2023

This article was originally published on Northwestern University’s Medill Local News Initiative website and is republished here with permission.

Overseeing the Knight Lab at Northwestern University, Jeremy Gilbert has been exploring the intersection of technology and news media, a topic that has become urgent amid the rocket-like rise of artificial intelligence.

Gilbert has worked with AI since 2009, when he and fellow Northwestern professors and their students created StatsMonkey, a tool that automatically generated baseball stories. During a subsequent stint at The Washington Post, Gilbert devised Heliograf to cover sports, politics and more. Now back at Northwestern as the Knight Chair in Digital Media Strategy at the Medill School of Journalism, Media, Integrated Marketing Communications, Gilbert is working with the next generation of AI technologies to tell stories in novel ways based on user needs.

In this conversation, which has been edited for length and clarity (and transcribed using AI but corrected via human ears), Gilbert makes the case for AI’s transformative potential while cautioning against lurking dangers.

Mark Caro: How did AI become the topic of discussion in journalism so quickly when it was barely on people’s radars a year ago?

Jeremy Gilbert: The huge difference between this year and last year in artificial intelligence, in media and in every aspect of our lives, is absolutely the fact that OpenAI put a chat interface on top of its existing tool. It’s hard to remember, but a year and a half ago, the same underlying technology that powers ChatGPT was available in a tool called GPT Playground. As far as I know, only academics and technologists were playing around with it, and everyone was pretty impressed, but it was fairly inaccessible because you had to really seek it out, and then you had to learn almost another language to communicate with it. In somewhat unexpected ways, when that chat interface launched in November, it absolutely transformed how people could work with large language models (LLMs).

Caro: Do you think the discussion has been driven by the possibility of doing things easier and better or more by the sense of “Hey, we can save a lot of money doing this”?

Gilbert: It’s a third thing: I think the drive to talk about journalism and artificial intelligence is mostly an issue of fearmongering. That doesn’t mean that all the other things aren’t true, that there are huge productivity gains to be realized, that there are potential cost savings to be realized. But the reality is new technologies make us fearful, and, in particular, artificial intelligence has always been bandied about as “Will it replace us?”

I have yet to see a technology that replaces us. I do not believe that generative AI or any other form of artificial intelligence is going to replace humans in most tasks. I do think it will change the way humans do many tasks. So I think it’s fear that drives the conversation.

I think that there are some bad actors who see huge economic gains because they think, wrongly, that they can put out a product that is good enough that people will pay for … wholly automated news. I don’t believe that that’s a viable product. I’m not even saying “Is it a moral or ethical product?” I just don’t think that you can put out a good enough product that people will be satisfied with it.

I think that there is a short-term moment that we’re in right now where Google and other search engines might be tricked into recommending algorithmically written LLM-style stories. But news consumers are going to pretty quickly realize that those are no good, and either Google will deliberately stop recommending those kinds of stories, or consumers will stop looking to Google to discover news. I don’t think in the long term (and the long term is six to nine months) that publishers who say, “Oh, we’re gonna do LLM-only content,” will be able to get away with it.

Caro: When I interviewed Marty Baron, retired executive editor of The Washington Post, he was matter-of-fact in saying, look, there are jobs that humans aren’t going to do anymore, but he also emphasized that technology is a tool to help us. Do you agree?

Gilbert: Absolutely. There’s a part of me that’s incredibly heartened because I worked for Marty indirectly for seven years, and we did a lot of machine-generated content at The Washington Post. We built a system that we called Heliograf. Heliograf helped us write stories that we didn’t want humans to write, and it helped us discover insights by doing tabulations that we wouldn’t have wanted humans to have to calculate.

When we can use artificial intelligence in a way that helps reduce or eliminate that rote and mundane work — transcribing stories, maybe even doing some kinds of translations, doing tabulations, alerting people to changes in data — those are hugely valuable things. When we spend all of our time saying, “Can we get a system to do what a human already does pretty well?” that’s a waste of computational power. To me, the question is what can the technology help us do that we don’t want to do? And what can the technology do that right now isn’t possible?

Caro: There’s been the idea of having AI cover and summarize meetings so you don’t have to send reporters to them. Can AI summarize the meeting well enough, and is that a good use of it?

Gilbert: We have to acknowledge that there’s a difference between good meeting stories and most meeting stories. I think, honestly, what you’re talking about is how we cover most meeting stories today. My former boss, Emilio Garcia-Ruiz, who worked for Marty at the Post (and is now editor-in-chief of the San Francisco Chronicle), was fond of saying, “Worst case scenario: Metro editor sends a reporter to a city council meeting. Reporter calls in. Metro editor says, ‘What happened?’ Reporter says, ‘Not much.’ Metro editor says, ‘Great, give me 12 inches.’” Like, wait, we’re writing a story even though not much happened.

The Knight Lab that I oversee right now is doing this already: If you have a transcript, or even video, of a city/county council-type meeting, we can use generative AI to do a much better job of transcription from the video. We can use different forms of AI, and we can understand what’s happening in that council meeting. We can even look at really important cues: Is there one segment that was much longer than any other segment? Is there a time that the volume in the video got much louder? Did they talk about a really large budget number, much bigger than any other? Was the vote much closer on one particular issue?
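
To make those cues concrete, here is a minimal, hypothetical Python sketch of the kind of heuristics Gilbert describes: scanning a segmented meeting transcript for unusually long agenda items, unusually large dollar figures and unusually close votes. The data structures and names are invented for illustration rather than taken from the Knight Lab’s actual tools, and audio cues such as volume spikes are left out because they require the audio track.

```python
import re
from dataclasses import dataclass

@dataclass
class Segment:
    """One agenda item from a council-meeting transcript (hypothetical shape)."""
    title: str
    start_sec: float
    end_sec: float
    text: str
    votes_for: int = 0
    votes_against: int = 0

    @property
    def duration(self) -> float:
        return self.end_sec - self.start_sec

def dollar_amounts(text: str) -> list[float]:
    """Pull dollar figures like $1,250,000 out of a segment's text."""
    return [float(m.replace(",", "")) for m in re.findall(r"\$(\d[\d,]*)", text)]

def flag_newsworthy(segments: list[Segment]) -> list[str]:
    """Apply the kinds of cues Gilbert describes: an unusually long segment,
    the largest budget number, an unusually close vote."""
    if not segments:
        return []
    flags = []
    avg_duration = sum(s.duration for s in segments) / len(segments)
    biggest = max((a for s in segments for a in dollar_amounts(s.text)), default=0.0)
    for s in segments:
        if s.duration > 2 * avg_duration:
            flags.append(f"'{s.title}' ran much longer than other items")
        if biggest > 0 and biggest in dollar_amounts(s.text):
            flags.append(f"'{s.title}' involves the largest budget figure (${biggest:,.0f})")
        total_votes = s.votes_for + s.votes_against
        if total_votes and abs(s.votes_for - s.votes_against) <= 1:
            flags.append(f"'{s.title}' passed or failed by a single vote")
    return flags
```

The point of such a filter is not to write the story; it is to tell an editor which of dozens of unattended meetings might deserve a reporter’s follow-up.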

To me there’s a difference between transcribing the meeting and summarizing it and really doing journalism, really understanding what’s important. What does the audience need to know? Why does the audience need to know that? What do they need that didn’t happen in the meeting to understand what went on in the meeting? The AI can prompt a human to do those things, but it probably doesn’t successfully replace a human. It can certainly summarize what the meeting was about, but that’s not really an act of journalism.

Caro: There are two sides of this coin. The tails perspective would be we’re going to send the computer to cover all the stuff that humans covered, and then companies will eliminate even more jobs. The heads side is we never had enough bodies to cover everything, so you have to prioritize, and AI can broaden our umbrella.

Gilbert: You can’t go to every meeting, so it’s hard to know which meetings you should pay attention to. If AI can help us pay attention after the fact and do original reporting about what happened in a meeting, that’s probably more valuable than the coverage of the meeting itself.

Caro: In a way it points out the importance of an editor. If you have a school board meeting where all these newly elected school board members are going to discuss removing books from the library, as an editor you’ll say, “I’m going to send one of my better reporters to that.” If there’s another meeting where they’re going to discuss whether the stop sign should be an inch larger, maybe then you send the robot.

Gilbert: I don’t think generative AI requires that we reimagine journalism, but it gives us a really good excuse. What we really need to do and do better is say: What does our audience want? What does society need? And then as we try to do that, how can we use generative AI to make the jobs of journalists easier? I think we trick ourselves into a false question when we just say, “Oh, can generative AI do what human journalists have been doing?” Really, the question is were the human journalists doing the right thing in the first place?

Caro: Some of the fear maybe comes from the combination of writer insecurity and who owns everything. The writer insecurity would be, “I’m this really good writer, but readers can’t tell what a good job I did at shaping this, and my editor didn’t fully appreciate the craftsmanship I put into it, and if they don’t think people can tell the difference, they’ll just go with a robot.” Then there’s “My paper is owned by a hedge fund, and it’s all about traffic and clicks and not about the experience of reading it or the quality of the work, so I’m replaceable.”

Gilbert: The underlying economic model of local news, of journalism, is really a tough one. We have to acknowledge that no one was really paying for accountability journalism, the journalism that I think most of us as journalists hold closest to our hearts. What people were paying for, in reality, was our monopoly on information distribution.

I want to believe that many people did see value in getting accountability coverage, but as we’ve been unbundled, it’s been pretty dramatic that lots of people believe that they should pay for access to streaming music or streaming television or streaming movies. Far fewer people say they want to pay for access to journalism and especially accountability journalism. So that’s a problem.

A second problem is that hedge funds have seen a moment where they can wring a little bit more profit out of so many of these institutions. And I do think sometimes hedge funds or venture capital look at LLMs and say, “Oh, that’s a much cheaper way to do business.” That’s not a healthy development for media, but I think it should force us to go back and say, “What is it, whether we’re nonprofit or for profit, that our audience really wants and might be willing to pay for with their attention or directly with their money?” And then figure out how we can do what we think is important within those bounds. I at least believe that AI can be a helpful tool to those people.

Caro: Over the past seven years or so, a lot of us have been stunned by the amount of stuff that isn’t true that large portions of the population still believe. This feeds the concern that AI may further fog the idea of what is true and what is not.

Gilbert: Generative AI is potentially a fantastic tool for people who are trying to create mis- and disinformation, and that’s a real problem. It can make, pretty easily, totally made-up claims look like the product of people with deep expertise. I think we’ll have to figure out as a society how we can get around that.

I mean, we have been seeing that for a long time, as journalists ceased to be the filter through which politicians and celebrities and other entertainers could reach the public. There’s no reason now that that sort of high-status individual can’t go direct to their audience via social media or their own newsletter, their own website. They have all the tools that publishers long had a monopoly on.

What’s going to be the way we know that the information is legitimate? Is it coming through a source you trust? Is it having some other kind of signaling? One of my biggest fears, other than the economic harm, is just the sense that a well-written story will no longer be enough to say, “This is legitimate.” We have to worry that experts and journalists will have a hard time telling real from fake information, and we’ll all have to be more skeptical. But the irony there is that the journalistic training that we all had or should have had or the skills we learned on the job to be skeptical, to verify—if everybody had those ways of approaching information, we’d probably be better off.

Caro: Given that so much of this discussion is being driven by fear instead of a more positive approach, what is being missed out on?

Gilbert: Two things. One, there are reasons to be fearful. I mean, some of these economically inspired actors who are saying, “Oh, I’m going to replace human journalists with LLMs, or generative AI tools,” that’s going to cause a lot of problems. I am not trying to be overly optimistic about generative AI in that way.

I just don’t think we should say “no generative AI” because bad actors are going to use it in bad ways. One, we can’t stop its adoption. Two, we should be spending our time figuring out how to use generative AI in constructive ways instead of worrying about the people who are going to use it in destructive ways. But what’s being missed when we have these fear-driven conversations about generative AI is that we spend our time asking, “Can it do the things that we already do?” and we don’t spend enough time asking, “What is it that we can do differently?”

For example, if we pull apart the threads of what it means to be a reporter, some of it is about story creation, writing. A lot of it is about gathering information, having an editorial perspective, structuring a narrative. There are lots of things that generative AI can potentially allow us to do in those reporting pieces that we’re not doing today. So how can we use generative AI better to help us sort and sift through data? How can we use generative AI to help us figure out what questions to ask? How can we use generative AI to help people who don’t know how to write a structured query write a structured query and interrogate a large data set? All those things are very valuable.
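
As one illustration of that last point, here is a hypothetical sketch of a newsroom helper that asks a model to turn a plain-English question into a read-only SQL query against a known table. The call_llm function, the guardrail and the salary schema are invented stand-ins rather than any particular vendor’s API; a reporter would still review the generated SQL before running it.

```python
# Hypothetical sketch: turning a reporter's plain-English question into SQL.
# call_llm() is a stand-in for whatever model API a newsroom uses, and the
# city_salaries schema is invented for illustration.

SCHEMA = """CREATE TABLE city_salaries (
    name TEXT, department TEXT, title TEXT,
    base_pay REAL, overtime_pay REAL, year INTEGER
);"""

PROMPT_TEMPLATE = """You are a careful data assistant for a newsroom.
Given this SQLite schema:

{schema}

Write a single read-only SQL query that answers the question below.
Return only the SQL, with no commentary.

Question: {question}"""

def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-model call; wire up your provider here."""
    raise NotImplementedError

def question_to_sql(question: str) -> str:
    sql = call_llm(PROMPT_TEMPLATE.format(schema=SCHEMA, question=question))
    # Guardrail: a reporter should only ever run read-only queries this way.
    if any(word in sql.lower() for word in ("insert", "update", "delete", "drop", "alter")):
        raise ValueError("Model returned a non-read-only query; review it by hand.")
    return sql

# A reporter might call:
#   question_to_sql("Which department paid the most overtime in 2022?")
# then read the returned SQL before running it against the database.
```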

The other thing that I really, truly believe is that, right now, the model for journalism is a mass media model. It’s a mass media model because the economics require us to show one story to as many people as possible so that we can serve ads alongside it, so that we can beg for subscribers or donations, and that one-size-fits-all approach is generally not very efficient.

Potentially there’s a world that we should be exploring where generative AI is very different. Where a human does the reporting, structures the story, thinks about the prompts, and then maybe creates a one-to-one story instead of a one-to-many story, but for many people, because we can give each person exactly the version of the story they want or need based on the device they’re on, the time of day, their level of fluency with English, their level of familiarity with the topic. To me, if we can reinvent a deeper, more personalized, more customized version of journalism that isn’t taking you down a rabbit hole but is really enabling you to interact more comfortably with news, that’s fantastic.

Caro: The new Scorsese movie comes out, and you could have the movie review for the person who doesn’t know anything about Scorsese or the book that his movie is based on, and then there could be the one where you’ve read the book, but you still don’t know anything about Scorsese. And then there could be the one where it’s going to make references to every movie Martin Scorsese has made since “Who’s That Knocking at My Door.” Maybe those are the different versions that people get? Part of that terrifies me, and part of me thinks, well, that’s interesting.

Gilbert: Reporters writing a story for the general public will show it to as many people as possible via the website or the newsletter or however the story gets out in the world. And when they’re sitting at dinner or they’re sitting over coffee with a friend, they don’t tell the story the way they write the story. They tell the version of the story that works for that person based on what they know about them, the setting they’re in, how casual the conversation is. I think potentially LLMs can play that role.

It can play the role of: I’ve done all the reporting, I’ve thought about what order the story elements should be told in, I’ve thought about what the most complicated and the simplest versions of the story are, and maybe the LLM helps us tell the simple version of the story. Or maybe it takes the handwritten story as a sort of atomic unit, breaks it up and makes it available to people in different forms, like here’s my TikTok version. But I think all journalists know that we could tell more personal versions of our reporting than we do. We don’t think about it as the end state of our creation process, but maybe we should.

Caro: Do you think generative AI can help solve the economic problems for local news?

Gilbert: I wish that I believed there was a really obvious, really easy-to-implement generative AI solution to the economic woes, especially around local news. I don’t think there is. I think there are some ways that generative AI can help. If you have very limited resources, and you’re writing a story for your website, generative AI can be very useful in saying: Help me get my social promotion written. Help me generate from three or four stories what my email newsletter should look like. Help me figure out how to write a transcript for my podcast. Or if you’re working with local advertisers and you’re running native advertising, generative AI can help you with those things. It is, I think, much more likely to have a large impact on story creation and reporting than it is to solve the economic questions by itself. But I do think there are some ways it can help.

Caro: What do you think of the Associated Press guidelines for dealing with AI?

Gilbert: I think their guidelines are pretty reasonable. The Associated Press’s guidelines are not radically different from Medill’s guidelines that we’re working on for faculty and students, which is “be transparent about what you’re doing.” Make sure that you are not using generative AI in manipulative ways. A photo illustration that looks like it could be real, even if it’s clearly labeled when you release it into the world, could easily be repurposed by others who don’t have the same guidelines. So think about what the worst-case scenario is and how you can mitigate those potential harms.

But the Associated Press also acknowledges this is a world in which AI, especially generative AI, is going to be much more present, and people may not even know when they’re engaging with it. You can opt in right now to a beta that puts generative AI in Google’s suite of tools: Docs, Sheets, Slides. When I’m writing, it asks me if I’m stuck and want help, or, if I’m setting up a new spreadsheet, whether I want to tell it a little bit about what’s going on, and then it makes the spreadsheet for me.

I think we’re going to see more and more of that, in the same way that today almost every digital device includes spell-check and grammar-check by default. I think in the near future, saying, “Oh, this was created with generative AI” is going to be like saying, “Oh, we used spell-check on the story.” Of course you did. You should, and if you didn’t, it’s more of a fault than it is a value.

Caro: What else is important to consider with AI and journalism?

Gilbert: I think the big question that is often asked of me is what do local journalists need to do with generative AI? One, you have to be experimenting. It’s very hard to say “I should or should not use generative AI” if you haven’t spent some time trying. And generative AI comes in lots of different forms. It can generate text via chat clients. It can generate imagery. It can generate video. It can generate audio. So experiment with some different forms. Try out some things. See if using generative AI to transcribe your interviews is helpful. See if doing some brainstorming with generative AI helps the output. See if using generative AI to help you interrogate some data works.

The second thing is, whether you’re a team of two or a team of 10, have some policies about when you can and can’t use generative AI and how you disclose what you’re doing. Be aware that your audience probably assumes you’re using these tools even if you’re not. So if you’re not, and you don’t plan to, tell people. If you are, and you’re experimenting with it, tell people. If you’re using it regularly, tell people. I don’t think we can be transparent enough.

And then the third thing is really interrogate the way you work and who you’re trying to serve and try to figure out: Are you working in the smartest, most efficient way? Are you giving your audience what they really need?

Caro: If you could fast forward five years and come up with the best-case scenario for how generative AI has been incorporated into the journalism world, what would that look like?

Gilbert: For me, the best-case scenario for generative AI in media is one where journalists are able to tell stories with important information to large audiences in ways that those audiences want and can absorb the information. So journalism doesn’t have to be the domain of the hypereducated. It doesn’t have to be the domain of the highly literate, or at least the news consumer doesn’t have to be. Instead, we have a way of saying we’ve reported the right facts, we’ve thought about the story in the right way, and we’re going to give you the version of the story that you want, for the amount of time that you have, that enables you to be a better citizen, more engaged in our democracy, more of a participant. And it saves reporters a lot of drudge work and enables them to do more of what we love: gather really good and compelling stories and tell those stories. I would hope that it has really shifted what journalism looks like. And I would hope that it has helped us convince people that there is enough value in what we do as journalists that they want to pay for it directly and indirectly.

I think we’ll start to see the more meaningful, more valuable uses of some of these tools in the next three to nine months. Two or three years from now is hard to imagine, but I don’t think generative AI will feel so magical then, because it’ll be so common. I think that’s where we’re going.

Mark Caro is an author (“The Foie Gras Wars,” “The Special Counsel: The Mueller Report Retold”) and a former longtime Chicago Tribune culture reporter, columnist and critic.
