AI in Comms: Newsroom Lessons for Business and Government
As artificial intelligence continues to reshape industries, from newsrooms to boardrooms, a fundamental question arises: how can organisations leverage AI to enhance trust, reputation, and productivity, without losing the human element that makes communication credible and meaningful?
That was the central theme of our recent webinar, AI, News & Trust: The Future of Communications, featuring Laura Oliver, senior newsroom consultant and former Reuters Institute contributor, and Hans-Petter Dalen (HP), IBM’s Business Executive for AI across EMEA. Their insights, drawn from deep experience in journalism and enterprise technology, offer strategic takeaways for communicators, business leaders, and policymakers navigating AI’s next wave.
🧠 Beyond Hype: Why We Must Focus on Use Cases, Not Just Tools
HP Dalen opened the conversation with a striking point: we’re talking too much about technology, and not enough about what it’s actually for.
‘If we don't understand how we use it [AI], if we don't focus on the use cases, it will become a technology-driven evolution. And that's not going to benefit lines of business.’ – Hans-Petter Dalen
HP reminded us that IBM has been in the AI space since the 1960s — long before terms like “generative AI” became mainstream. What’s changed today, he argued, is the accessibility of these tools. But adoption doesn’t equal impact.
‘You’ll see surveys saying that 90% of companies say they use generative AI on a daily basis, and it gives them personal productivity. But personal productivity doesn't benefit your company if you use it for drinking more coffee or for leaving early.’ – HP Dalen
📰 Lessons from the Newsroom: Adapting to AI While Defending Trust
Laura Oliver provided a nuanced view from the frontlines of journalism, an industry that has been repeatedly disrupted — first by the internet, then by social media, and now by AI.
‘For far too long at a business level as well, within journalism, the focus has been on where are those marginal kinds of efficiency savings we can make, where there are marginal cost savings we can make … But I think where it's disrupting in a positive way, where it's being used most effectively, is where the newsrooms have identified problems that AI can help them solve.’ – Laura Oliver
She highlighted a growing tension: AI is helping journalists save time on rote tasks, but there’s little evidence that those saved hours are being reinvested in deeper reporting or investigations. That mirrors concerns in corporate comms: time saved with AI doesn’t automatically mean value gained.
📉 The Misinformation Threat: More Content, Less Clarity?
Both speakers raised concerns about AI’s potential to exacerbate misinformation, particularly when it automates content creation without sufficient editorial oversight.
‘We’re already overwhelmed by volume. AI just makes it easier to flood the zone with misleading content. This isn’t just a media problem — it’s a trust crisis.’ – Laura Oliver
The solution isn’t to resist AI, but to embed strong governance.
HP cited the “biggest AI scandal in Norway”, where a city council used ChatGPT to justify school closures without checking the accuracy of the generated content. The result? Fabricated references, public backlash, and a crisis in credibility.
‘That wasn’t an AI scandal. That was a human scandal. Nobody verified the output. No governance, no oversight — and that’s the real risk.’ – HP Dalen
⚙️ SLMs and the Rise of the AI Stack
A particularly insightful part of the discussion focused on Small Language Models (SLMs) and the growing need for organisations to develop their own AI stacks.
While most businesses rely on off-the-shelf Large Language Models (LLMs) such as ChatGPT or Gemini, these tools are general-purpose and trained mostly on public data. As HP explained, less than 1% of the data used to train LLMs is enterprise data. That’s a massive blind spot.
‘We’re seeing real success with small models — trained in-house, enriched with enterprise data, and governed for accuracy. That’s where the strategic advantage lies.’ – HP Dalen
A case in point: a group of small newspapers in Norway used a custom AI model to scan local planning applications and flag potential stories. The model didn’t generate articles; it simply summarised documents to save journalists time, freeing them to focus on investigations. It’s a perfect example of AI augmenting human work rather than replacing it.
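To make that pattern concrete, here is a minimal sketch of what such a triage tool could look like, assuming a small open-weight summarisation model run locally via Hugging Face’s transformers library. The model name, keyword list and function are illustrative assumptions, not the Norwegian newspapers’ actual system:

```python
# Minimal sketch: flag planning applications worth a journalist's attention.
# Model name and keyword list are illustrative, not a production system.
from transformers import pipeline

# A small distilled model that runs locally: no enterprise data leaves the building.
summariser = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

NEWSWORTHY_TERMS = {"demolition", "school", "objection", "listed building"}

def triage(document_text: str) -> dict:
    """Summarise one planning application and flag it for human review."""
    # Truncate long filings to stay within the model's input limit.
    summary = summariser(
        document_text[:3000], max_length=80, min_length=20
    )[0]["summary_text"]
    flagged = any(term in document_text.lower() for term in NEWSWORTHY_TERMS)
    return {"summary": summary, "flag_for_journalist": flagged}
```

The design choice is the point: the model produces leads, not copy, so editorial judgment stays with the journalist.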
💡 Newsroom Innovation = Comms Strategy Inspiration
Laura’s experiences working across UK, European and US newsrooms offered valuable lessons for PR and strategic communications professionals.
‘Journalists are using AI tools like summarisation, transcription, and translation. But the smartest use is where it supports better audience understanding.’ – Laura Oliver
One of the most promising applications? Using AI to analyse user behaviour and improve products — something newsrooms are beginning to do with greater precision. Strategic comms teams can borrow this approach to better understand stakeholder engagement, sentiment shifts, and messaging effectiveness.
‘AI can help us ask: What makes someone subscribe? What makes someone stay engaged? That kind of insight is gold — for journalists and comms pros alike.’ – Laura Oliver
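A comms team could borrow that approach with very little code. The sketch below scores stakeholder comments by period using an off-the-shelf sentiment model; the sample data and the signed-average metric are illustrative assumptions rather than anything discussed in the webinar:

```python
# Minimal sketch: track sentiment shifts across stakeholder feedback over time.
# The sample comments and the per-quarter metric are illustrative assumptions.
from collections import defaultdict
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # defaults to a small DistilBERT model

comments = [
    ("2024-Q1", "The new policy briefing was clear and genuinely useful."),
    ("2024-Q2", "Responses to our questions have become slow and evasive."),
]

scores = defaultdict(list)
for period, text in comments:
    result = classifier(text)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    scores[period].append(signed)

# Average signed sentiment per period: a crude but trackable engagement signal.
for period, vals in sorted(scores.items()):
    print(period, round(sum(vals) / len(vals), 2))
```

Tracked over time, even a crude signal like this can surface the sentiment shifts Laura describes before they show up in subscription or engagement numbers.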
🧠 Teaching Over Tools: The Critical Thinking Imperative
One of the most important themes in the webinar was the need to invest in human capability, not just tools.
As the conversation turned toward journalism education, Laura explained how AI can be overwhelming for students and teachers alike. The key, she said, isn’t just teaching tools, but embedding critical thinking as a core skill.
‘It shouldn’t be about training the tech. It should be about building the thinking skills to assess any new tool or technology that comes in.’ – Laura Oliver
This point was echoed at the enterprise level by HP, who shared a frustration heard often at C-suite level:
‘We talk to frustrated C-levels almost weekly who have invested millions in AI technologies and maybe seen 50k in return, because we make this about technology and not where we apply the technology … Why? Because the focus was on tech, not on training their people to use it effectively.’ – HP Dalen
This mirrors what many of us in strategic communications have long known: trust, reputation, and performance are built by people — and enhanced by technology, not the other way around.
🏛️ What Business Leaders and Government Should Do Now
This discussion wasn’t just about the practicalities. It was a call to action. Here are some of the key takeaways for decision-makers in business and government:
1. Shift the focus from tools to use cases
It’s not about which AI platform you use; it’s about where and how you apply it to solve real problems. Start with the outcomes you want to achieve, not with your vendors, who will sell you anything.
2. Build your AI stack — not just plug into someone else’s
If you want trust, privacy, and performance, you’ll need models that reflect your organisation’s data and values. That means investing in SLMs and enterprise-ready infrastructure. Your data and your people are the foundation of any successful AI deployment.
3. Invest in people, not just platforms
From HR to comms to policy teams, the ROI on AI only materialises when your people understand how to use it, what to question, and where to add human value. People shouldn’t see AI as a shortcut to problem-solving: they need to know how to prompt effectively and to apply the critical thinking required to verify the insights and suggestions that LLMs and SLMs produce.
4. Take trust governance seriously
Don’t make headlines for the wrong reasons. Embed fact-checking, source attribution, and ethical review into every AI-supported process, especially those that touch the public.
Whether you are writing government policy or a business plan, delegating your thinking and decision-making to LLMs creates risk. AI will only help you, the human, if you apply the critical thinking needed to verify the work it produces for you, your organisation or your government department. A minimal sketch of one such verification gate follows this list.
5. Learn from journalism
Journalists know how to work under scrutiny, assess sources, and strike a balance between speed and accuracy. AI gives them and their media outlets a better ability to assess data and to reach and engage their audiences. PRs and strategic communicators should mirror this mindset when deploying AI in public-facing campaigns.
And in an era of misinformation, communications teams need to be prepared to react at greater speed, especially as audiences increasingly form judgments based on the output of an LLM query.
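On takeaway 4, here is a deliberately simple sketch of what embedding fact-checking into an AI-supported process can mean in practice: a publication gate that rejects AI-assisted drafts whose cited URLs do not resolve, echoing the fabricated references in the Norwegian case. The function names and checks are hypothetical, and a real review process would add human fact-checking and ethical review on top:

```python
# Minimal sketch of one governance gate: block publication of AI-assisted copy
# whose cited URLs do not resolve. Hypothetical and deliberately simple; a real
# process needs human fact-checking and ethics review on top of this.
import re
import requests

def citations_resolve(draft: str, timeout: float = 5.0) -> bool:
    """Return True only if every URL cited in the draft answers an HTTP request."""
    urls = re.findall(r"https?://\S+", draft)
    if not urls:
        return False  # AI-generated claims with no sources fail review outright
    for url in urls:
        try:
            response = requests.head(url, timeout=timeout, allow_redirects=True)
            if response.status_code >= 400:
                return False
        except requests.RequestException:
            return False  # unreachable reference: treat it like a fabricated one
    return True

# Usage (hypothetical): gate the publish step, never the drafting step.
# if not citations_resolve(ai_draft): route_to_human_reviewer(ai_draft)
```

A URL that resolves can still be misquoted, so this gate catches only the crudest failure, which is exactly the failure that made headlines in Norway.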
🚀 Final Word: Reputations Are Built by Humans, Not Machines
As I closed the session, I returned to a point that resonates across sectors:
‘It’s not just about the AI stack — it’s about the human stack. If we want better productivity, better outcomes, and better reputations, we must invest in the people who use the tools, not just the tools themselves.’
The future of communications, like the future of journalism, will be shaped by how well we combine technology with trust, and strategy with scrutiny.
AI can amplify our impact. But only if we anchor it in purpose, governance, and the kind of human judgment that no algorithm can replicate.
This webinar was run in partnership with Folgate Advisors, a community of international senior communicators.
I've worked with governments and leaders in technology and investment to unlock complexity and integrate strategic communications and international stakeholder engagement into their decision-making processes.
Let’s discuss the importance of managing your reputation in an era where AI can make you more transparent than ever before, or empower bad actors to manipulate your audience and how they perceive you.
Please comment, share or subscribe to my LinkedIn Reputation Matters newsletter. Or connect with me on LinkedIn.