November 11, 2022

Now that our Agora report assessing Oregon’s local news and information ecosystem is out, I finally had a few hours to kick the tires on the generative artificial intelligence art technology we’ve been hearing about, which can produce nearly any digital image imaginable from the words you type into a prompt. The results are spectacular. If you want to see what the growing community is creating, OpenArt has a collection of artwork generated by the top three AI systems: DALL·E 2, Midjourney and Stable Diffusion.

We’ve heard about generative AI for some time, but I always assumed it would affect us only in the distant future. With web-based tools that generate high-quality output now readily available, that future is here. There’s no doubt that generative AI will change how we create and consume media, disrupting the creative marketplace and the communication industries. And while text-to-text tools are already helping writers produce stories, I’ll focus here on text-to-image generative AI.

In the last few months, many of these AI systems have launched online tools, accelerating their availability to the public. For example, Midjourney’s beta requires a Discord account to log in, while Stability.ai’s DreamStudio Lite has a convenient web interface. Each offers an addicting number of free credits, enough to create several dozen images. If you have a powerful machine sitting around, you can also download and install Stability.ai’s open-source public release and generate art on your local device without paying for the online service. And if you’re tech-savvy, you can train your local model with custom data. That means you can feed your computer hundreds of specific images not in the default AI model, which is helpful if you want to stay “on brand” across iterations of a visual campaign. Or you can create a series of portraits of non-famous but REAL people. As you can imagine, this raises serious ethical questions.
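
To give a sense of how compact the local workflow has become, here is a minimal sketch of running Stable Diffusion on your own machine. It assumes the open-source Hugging Face diffusers library, the publicly released runwayml/stable-diffusion-v1-5 checkpoint and a capable GPU; those are my illustrative choices, not the only toolchain that works.

    # Minimal local text-to-image sketch using the Hugging Face diffusers
    # library and a public Stable Diffusion checkpoint (my assumptions;
    # other checkpoints and front ends work, too).
    import torch
    from diffusers import StableDiffusionPipeline

    # Download the model weights (several gigabytes) and move them to the GPU.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    # One sentence in, one image out.
    prompt = "a watercolor painting of the Oregon coast at sunset"
    image = pipe(prompt).images[0]
    image.save("oregon_coast.png")

That’s the whole thing: no design software, no online account, just a sentence and a few seconds of GPU time.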

Controversies abound. I completely understand and appreciate the debate over the ethics of mimicking artists’ styles and copyrighted likenesses. People contributing to this ecosystem should be compensated and credited for their talent and work. For many artists, generative AI art is a threat. But for those who embrace this new digital brush, it can be a powerful tool in their design process.

But what worries me more is the lack of foresight in establishing regulations for deepfakes and for AI-generated photos and realistic art that could lead to harm and disinformation. As for the foundational technology itself, I’m anxious to see how Stable Diffusion’s Creative ML OpenRAIL-M license and decentralized mitigation strategies will hold users ethically, legally and morally responsible if they harm others or distribute disinformation. As usual, we’re incapable of establishing policy ahead of innovation and must constantly play catch-up.

Now imagine when the technology is just a Photoshop plug-in away. Well, imagine no more. Watch this video (also embedded below) that plug-in developer Christian Cantrell, former director of experience development and head of Adobe Design Prototyping, tweeted last month.

Cantrell’s Twitter bio states he’s VP of product at StabilityAI. The inpainting feature in the Stable Diffusion plug-in for Photoshop is a news photo editor’s nightmare, and it can also be an art director’s dream. As Cantrell also tweeted: “This is how all advertising and marketing collateral will be made sooner than most of the world realizes.” And yes, there’s a Stable Diffusion plug-in for After Effects to apply the generative AI models to video. Sure, a skilled touch-up expert could do the same, but now you’ve compressed the touch-up time from hours to seconds.

As impressive as the technology is, producing the exact image you imagine still requires knowing how to talk to the algorithms behind the tool. What you type into these prompts tells the AI model what to render, and the more you understand the “language,” the closer it can get to producing the image in your mind. It’s called prompt engineering, and OpenArt has published a Prompt Book to help you learn effective techniques. Of course, there’s already a marketplace for prompts. And apparently it’s a future career. This leads to my next concern.
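
To make that concrete, here is a small, hypothetical illustration of prompt engineering, reusing the pipe object loaded in the earlier sketch. The subject stays the same; the extra descriptors and the generation settings do much of the steering.

    # Hypothetical prompt-engineering example: same subject, two prompts.
    # The added descriptors (medium, lighting, lens) push the model toward a
    # specific look; guidance_scale controls how literally it follows the
    # prompt, and negative_prompt lists things to avoid.
    plain = "a lighthouse on a cliff"
    engineered = (
        "a lighthouse on a rocky cliff at golden hour, dramatic storm clouds, "
        "35mm photograph, shallow depth of field, highly detailed"
    )

    image = pipe(
        engineered,
        negative_prompt="blurry, low quality, distorted",
        num_inference_steps=50,
        guidance_scale=7.5,
    ).images[0]
    image.save("lighthouse.png")

The plain prompt will give you a lighthouse; the engineered one tells the model what kind of picture of a lighthouse you had in mind.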

On top of the ethical and legal questions generative AI tech raises, I fear we’re not ready for this massive change in the media and communication industry. And as educators, how are we preparing our students for such a world? These additional questions come to mind:

  • How do we label and define generative AI content in our publications?
  • When is it appropriate to publish generative AI images, and when is it not?
  • What are the most inspiring and most concerning examples of others using generative AI to render images?
  • What’s the balance between using generative AI to “assist” vs. “make” in the creative process?
  • What ethical and legal boundaries do we need to consider when using and publishing generative AI content?
  • What safeguards, if any, should be in place to prevent the abuse of news photography?

What questions come up for you?

A mentor early in my career asked a question that sticks with me today: “Would you rather surf the web or make the waves?” Generative AI technology is awe-inspiring, and it’s here to stay. But I hope we can answer these questions together before we drown in the inevitable tidal wave of generative AI images.

In the meantime, let me share a few of my early experiments with AI-generated art.

Andrew DeVigal holds the endowed chair in journalism innovation and civic engagement and is the director of the Agora Journalism Center.
