I’m more optimistic about generative artificial intelligence in journalism than most of my peers. So it pains me to see AI rollouts that are short-sighted — or even offensive — threaten the fragile truce between reporters and the technology.

Newsroom leaders can’t seem to be normal when it comes to AI experiments.

Take the latest dustups at the Cleveland Plain Dealer. The ideas: use generative AI to write up reporters’ notes so the paper can cover more areas, and, in a weird escalation, create vertical videos featuring a talking building and avatars of reporters and editors to reach new audiences.

Or take McClatchy, where a new “content scaling agent” repackages articles for different audiences (think summaries for newsletters). When the tool was announced, reporters realized they might not be able to control whether their bylines appear on AI-generated content. And as The Wrap reports, executives were dismissive:

“If they don’t have the ability in their contract to remove their byline, we’re going to use their name. Now, I’m not asking y’all to get in fist fights with all of them, but in the cases where we have to, they get to decide. If they decide not to, again, they don’t get credit. They don’t. We’re going to do it anyway, but they’re not going to get credit for it.”

In the age of liquid content, leaders at these companies were on the right track in thinking about amplifying existing work in new ways. But the execution was atrocious, and has led to more distrust of the technology — and its promoters (like me) — at a time when we will truly fall behind if we’re not using it.

Then there’s Nota News. The company launched 11 hyperlocal news sites in September 2025, each covering a county identified as a news desert. The mission — bilingual civic reporting for underserved communities — was exactly the kind of thing this industry needs. But Poynter found more than 70 stories lifted from at least 29 local outlets and 53 journalists, run through AI tools and published under Nota editors’ bylines.

All 11 sites shut down by early April. A company that said it wanted to save local news was leeching off the local journalists still doing the work.

Again and again: Decent idea, horrible execution.

These failures are the result of leaders who skipped the boring, hard, necessary work of bringing their organizations along on AI initiatives. Here’s what that work looks like.

Solve a real problem, and explain why

Creating more content for the hell of it is like putting a new steering wheel on a Geo Metro: a cosmetic upgrade that fixes nothing anyone actually cares about. If your AI experiment doesn’t start with a clear problem that your audience or your newsroom actually has, no amount of technology will save it.

Actually talk to your audience

A composite showing AI-generated images from Cleveland.com, with reader comments in the middle. (Composite by Poynter staff)

“WELCOME BACK, JOSH.” “JOSH IS BACK.” “Hey Josh, welcome back. Tell (Cleveland.com) to stop using AI.”

These are comments on the latest video from Cleveland.com social media producer Josh Duke. People appear to be rejoicing at the return of a human to the newsroom’s social feeds. The AI videos were roundly rejected across platforms, a sign that the audience wasn’t considered when the experiment was planned.

Don’t dismiss your skeptics            

Ryan Struyk, director of AI innovation at CNN, said this at a recent conference: “Skeptics who come around often become your most productive advocates, because they’ve genuinely thought through the concerns.”

Be overly transparent

I know you’re damned if you do and damned if you don’t when it comes to talking about generative artificial intelligence. Audiences want to know if you’re using AI, but research shows they trust you less when you tell them, a dynamic that has likely gotten worse as the backlash to the technology grows.

But if McClatchy leaders had publicly discussed the “content scaling agent” and explained to readers exactly what it was and why they were using it, they wouldn’t be facing external backlash on top of the internal pushback from reporters. Disclosure is essential.

Be ready to answer tough questions 

“We are journalists. That’s how we think. If you’re not ready, we will eat you and your cool new thing alive,” said Kristen Hare, Poynter faculty and director of craft and local news.

This is an industry that’s justifiably skeptical of big promises from tech companies.

Get internal buy-in before rolling out AI initiatives

From Sitara Nieves, Poynter vice president of teaching and organizational strategy: “If you don’t invest the time to talk to all parts of the organization before announcing a tool that changes how people work, you’ll spend even more time later cleaning up the fallout. That’s much less fun (also not very productive).

“Launching new AI policies requires listening first. Host skip-level meetings to hear concerns from every corner of the organization and treat that feedback as data. Be transparent about which concerns you can address and which are non-negotiable. You’ll also learn who on staff might help you advocate for change and champion it. Ultimately, your teams need to feel that change is being implemented with them, rather than to them.

“Great, ethical journalism includes showing your work. The same is true for launching new AI tools.”

Be normal

McClatchy leaders created a scrolling, Star Wars-esque video highlighting staff who worked on their AI tool. It gave each of them joke titles that inexplicably referenced The Matrix (a different movie).

“I once had a CEO play Coldplay’s ‘Something Just Like This’ when rolling out a new tool,” said Mel Grau, Poynter director of program management. “It was so cringe.”