By Anabelle Nicoud
Two years ago, OpenAI, a relatively unknown start-up from San Francisco, released what was then called a research preview: ChatGPT. The chatbot's success took the world by surprise, and surprised insiders at OpenAI too.
Artificial Intelligence (AI) had existed for decades, but suddenly, it became possible to interact in natural language with a model without being a coder.
A million users joined ChatGPT within days of its launch, and within two months, 100 million users had signed up.
By October 2024, 200 million people were using ChatGPT every week, OpenAI’s technology was integrated into major platforms such as Microsoft’s, and the company was valued at $157 billion.
For the media, machine learning and automation had already been part of conversations throughout the past decade, and many publishers had already experimented with innovative tools.
Alongside GPT, other large language models (LLMs) like Anthropic’s Claude have emerged, and OpenAI has signed partnerships with several major news organisations. But ChatGPT’s release ignited a new discussion around AI in both internal processes and user-facing tools.
“What did it change? Everything!” laughs Patrick Swanson, a journalist and technologist based in San Francisco. Swanson, who started his career leading social media teams for Austria’s Zeit im Bild, has since moved into consulting and AI literacy with Verso, a consulting lab he co-founded after a fellowship at Stanford with journalist Kaveh Waddell.
AI is now a hot topic in publishing, and it has been a central theme in conversations within major industry associations, including WAN-IFRA’s upcoming AI Forum and AI Study Tour.
New opportunities: brainstorming, personalisation, investigations
Many publishers quickly adopted AI tools for tasks ranging from handling information access requests to developing chatbots trained on trustworthy data for elections, and even providing personalised recommendations. But ChatGPT can also serve as a brainstorming tool or assistant in investigations.
“Large language models (LLMs) are particularly useful for helping journalists process vast amounts of data they don’t have time to manually analyse. For instance, if they receive thousands of public records, LLMs make it easier to ask complex, targeted questions of the data and uncover insights more quickly than traditional methods,” notes Subbu Vincent, Director, Journalism and Media Ethics at Santa Clara University, in Silicon Valley.
Many European publishers were already forward-thinking. Der Spiegel, for instance, explores AI for both editorial and business operations.
The publisher formed cross-functional groups to explore AI’s potential in every dimension of the company, and in October launched a full week dedicated to AI upskilling.
There are some promising avenues, according to its CPO, Christoph Zimmer. “We experiment with personalisation, and we think about using AI for fact-checking, but also for personalising content delivery to engage our readers better by tailoring it to their preferences.”
Even small and local newsrooms have seen significant AI benefits
While large-scale operations are impressive, small and local newsrooms have also benefited from AI, often with significant impact.
“You’re seeing a lot of innovation come from small local newsrooms, who are just able to, through natural language, create some sort of custom GPT or AI automation to help them work more efficiently and productively. These are the small wins—these are all the low-hanging fruits,” says Nikita Roy, a leading voice in AI and media, who notably documents AI innovations on her podcast Newsroom Robots and speaks at many conferences in the US and worldwide.
One tool that particularly impressed her was Djinn, a news-gathering tool built at iTromsø, a local newsroom in Norway. Djinn connects to government portals’ APIs, downloads PDFs of new documents each day, summarises them, and shares them with journalists.
“If you’re a housing beat reporter, it’s like pulling out things that might be relevant to you based on what the system has summarised,” she explains.
The significant time savings—reducing a daily two-hour task to just 20 minutes—allow journalists to focus on creating better stories, ultimately driving subscriptions.
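Djinn's internals are not public, but the workflow Roy describes, fetching new documents daily, summarising them, and routing each to the relevant beat reporter, can be sketched roughly as follows. Everything here is an assumption for illustration: the function names, the beat keywords, and the injected `fetch_new_documents` and `summarise` stubs, which in a real system would call a portal API and a language model.

```python
# Hypothetical sketch of a Djinn-style document monitor. The portal
# fetcher and the summariser are injected; a real system would call
# a government portal API and an LLM in their place.

BEAT_KEYWORDS = {
    "housing": ["zoning", "housing", "building permit"],
    "health": ["hospital", "clinic", "public health"],
}

def route_to_beats(summary: str) -> list[str]:
    """Match a document summary to reporter beats by keyword."""
    text = summary.lower()
    return [
        beat for beat, words in BEAT_KEYWORDS.items()
        if any(w in text for w in words)
    ]

def daily_digest(fetch_new_documents, summarise) -> dict[str, list[str]]:
    """Fetch today's documents, summarise each, and group by beat."""
    digest: dict[str, list[str]] = {}
    for doc in fetch_new_documents():
        summary = summarise(doc)
        for beat in route_to_beats(summary):
            digest.setdefault(beat, []).append(summary)
    return digest
```

A housing reporter would then receive only the `"housing"` entries of the digest, which is the kind of pre-filtering that turns a two-hour daily trawl into a 20-minute review.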
Liam Andrew, who works as Technology Lead, Product and AI Studio at the American Journalism Project, echoes this point: “We’ve seen some great examples of AI tools, like those helping newsrooms gather election data or streamline public records requests. By applying AI to automate these tedious tasks, journalists can focus on the deeper investigations that truly matter.”
Andrew also emphasises the importance of experimentation in newsrooms: “The key is to keep trying. AI is most valuable when newsrooms allow themselves to play with different tools and discover what works best for them. It’s not about one-size-fits-all but figuring out the right blend of automation and editorial creativity.”
Limits: A powerful tool, but maybe not always the best option
Can AI do everything? Of course not: some media outlets learned the hard way that LLMs can hallucinate. And it’s good to remember that chatbots are language models: they can’t reason, and their answers can’t be fully trusted.
Beyond those limits, the impact of ChatGPT has been both positive and negative, says Florent Daudens, who worked in Canadian newsrooms before joining the tech company Hugging Face as a press lead.
“The positive side is that it made everyone realise the significance of generative AI’s technological advancements—the so-called ‘iPhone moment.’ The negative side is that it creates a distortion in what AI is and what it can do.”
Daudens emphasises that AI isn’t one-size-fits-all. While ChatGPT is a powerful tool, it may not always be the best solution. “ChatGPT is a powerful tool, but applying it to every task is like using a bazooka to plant a nail. You can have smaller, more efficient models that are less expensive and less energy-intensive.”
‘This opens up a universe of possibilities’
In 2024, smaller yet high-performing models have started gaining traction.
“There may be a gap in imagination regarding what AI can bring to newsrooms. While LLMs can provide value, the biggest realisation about AI is its ability to use language as a programming tool. This opens up a universe of possibilities—personalisation, investigations, and more. But it’s not ChatGPT that will do this; it’s industrial AI that can bring about a more fundamental change than just chatbots,” he explains.
Overall, AI adoption is still in its early stages in media, believes Nikita Roy.
“I would equate this to being an adolescent stage of AI, where we are still playing around with it, figuring out what it makes us feel, and how it works,” Roy says. “We haven’t gotten to that mature stage of AI adoption in newsrooms yet. When we reach that stage, we will be more focused on pushing the boundaries of what’s possible, thinking more creatively, and using AI for innovation and creativity, not just for productivity.”
Potential impact on journalistic formats
Indeed, the future of AI in news could extend to spatial computing (consider Meta’s smart glasses or Apple’s Vision Pro) and voice generation models. Should news stories still follow the traditional 600-word format?
This is a consideration Jane Barrett, Reuters’ Head of AI, keeps top of mind. “I’m still thinking about how news experiences will evolve with the way we live in 2024. If driverless cars become widespread, not just in small parts of LA and San Francisco, it will completely change how we interact with news. Imagine sitting in a Waymo that knows it will take you 22 minutes to reach your destination, so it gives you a perfectly timed 22-minute newscast tailored to your interests. The world is changing so much because of AI, and we need to be ready for that.”
For the moment, AI’s full potential can only be realised if journalists are actively involved in these discussions, something that isn’t always happening, notes Nikita Roy.
“You cannot deal with AI through a top-down approach. It has to be bottom-up. You need to be able to solve problems that journalists are having within their newsroom, and that only comes when you’re having conversations with them. AI helps the editorial side and can help produce better journalism. Those outcomes only happen when journalists are involved in the conversation,” Roy says.
Ethics: industry views evolving
Of course, ethical questions also arise when considering AI’s adoption in newsrooms. Many debates centre on the ethics of the models themselves.
“Some people think we cannot get into the ethical burdens that have already been incorporated into the building of the technology—the electricity costs being unsustainable, data center costs, scraping the web without permission, or using copyrighted content,” says Subbu Vincent.
“There’s a stream of thinking that says we can’t get into all that, whether this technology is inherently ethical or not, whether it’s completely hyped. All we can do is intervene from the standpoint of production.”
Others remain deeply skeptical of the technology’s capabilities and its portrayal in the media.
Subbu adds, “Journalists, having seen the criticism they got in 2023 after quoting Sam Altman, Reid Hoffman, and especially Elon Musk about the human species potentially being exterminated, are now more cautious. I’m seeing less of the blind quoting of AI CEOs like we saw before. There is now an acknowledgment that you have to question the claims made by CEOs and not just repeat them without critique.”
Florent Daudens believes the next step in AI literacy in newsrooms will be “trustworthy AI,” and for him, open-source AI is key.
“I think there’s a natural fit between open-source AI and journalism – it aligns with the transparency and ethical principles of the field. In journalism, it’s important to understand the data, the processes, and how conclusions are reached, and open-source AI offers that same transparency. We focus on protecting sources, safeguarding data, and explaining how AI results are derived, which is crucial for journalists to trust the technology they use.”
About the author, Anabelle Nicoud
A freelance journalist and consultant based in San Francisco, Nicoud currently collaborates with The Audiencers newsletter and the Canadian monthly L’actualité.
She has worked with Apple News+ (2022-2024) and helped the editorial teams at La Presse (2015-2019) and Le Devoir (2019-2022) with their digital transformation, while leading ambitious editorial projects that have won prestigious journalism awards in Canada and Quebec.
A former journalist for La Presse and correspondent for Libération in Canada, Nicoud is passionate about the impact of technology on the media and closely follows issues related to the use of artificial intelligence.
The post Two years later: Taking stock of ChatGPT’s impact on the news media appeared first on WAN-IFRA.