Media Literacy in the AI Era: Protecting Trust and Reputation
An Urgent Call for Media Literacy
Media literacy is no longer optional but a crucial means of safeguarding public trust, institutional reputation, and social cohesion in a fast-evolving information landscape.
This call stood out powerfully during yesterday’s House of Lords Communications and Digital Committee session, which convened to hear evidence on the challenges and threats posed by online misinformation and disinformation.
The committee called on two expert witnesses to share their insight and experience. They were:
Dr Mhairi Aitken, Senior Ethics Fellow at The Alan Turing Institute
Professor Sander van der Linden, Professor of Social Psychology in Society at the University of Cambridge
Drawing on their distinct but complementary areas of expertise, they painted a picture of how artificial intelligence (AI), social media platforms, and deeply ingrained psychological biases intersect to undermine trust in digital content. Taken together, their testimonies suggest that UK citizens of all demographics face a complex and growing set of online risks.
According to Professor van der Linden, Ofcom figures indicate that “40% of people say that in the preceding month they’ve seen misinformation in the UK, 90% say that they’re very concerned about the impacts of misinformation, and about 20% say that they’ve seen deepfakes”.
While online falsehoods are not new, the recent explosion of generative AI has made fabricated images, videos, and text more difficult to detect. Dr Aitken explained that a ‘particularly pressing threat’ is the cumulative erosion of trust, warning that “people might increasingly see or hear something fake and believe that it’s real” while also beginning to “lose trust in all content online”. This dual threat—the difficulty of identifying fake content and a growing reflex to doubt everything—sits at the heart of an urgent policy conversation.
Having watched the session, I have set out below the issues raised at the select committee hearing, their ramifications for society, perception, and reputation, and the experts’ proposals for governments, businesses, and the broader public.
Key Threats: Generative AI and Misinformation
The greatest challenge underscored by both witnesses is the combination of misinformation with generative AI, a technology category that can create new audio, video, imagery, and text with minimal human oversight.
Just a few years ago, misleading social media posts might be produced by so-called ‘troll farms’ or individual bad actors. Now, AI-driven systems can produce and distribute fabricated narratives at incredible speed and scale.
The Proliferation of Deepfakes
Deepfakes—manipulated videos in which a person’s face or voice is digitally forged—present a tangible example of how AI erodes traditional authenticity indicators.
Professor van der Linden noted that roughly 20% of people surveyed in the UK had encountered deepfake material. Moreover, large-scale foreign or domestic actors can easily automate their production. Instead of relying on teams of people to craft convincing fake videos, AI can churn out hundreds of variants with minimal effort.
AI-Driven Micro-Targeting
Other examples of AI-aided manipulation include micro-targeting and ‘nano-targeting.’ By analysing vast quantities of user data—web browsing history, social media interactions, demographic information—AI systems can pinpoint individuals most susceptible to particular narratives.
As the professor observed, while micro-targeting is already a ‘significant concern,’ it may pale compared to what AI-driven nano-targeting can achieve, zeroing in on single individuals with hyper-personalised messages.
Burdens on the Public
As Dr Aitken highlighted, a further complication is the expectation that individuals should be able to spot every AI-generated or manipulated piece of content.
People often view low-resolution images on mobile devices, scrolling at speed through a feed of rapidly updating posts.
Even the best ‘tips and tricks’ for identifying AI content, such as looking for distortions in background objects, become moot as the technology evolves or when images are compressed, cropped, or quickly shared on ephemeral channels. Asking average users to maintain constant, high-level vigilance leads to what she termed ‘over-scepticism’: a corrosive distrust of all media, genuine or otherwise.
Erosion of Trust: Societal and Reputational Implications
The consequences of rampant misinformation and advanced AI tools go beyond a few embarrassing mix-ups on social media. Both witnesses stressed how digital manipulation poses serious, long-term threats to trust, social harmony, and reputation at multiple levels.
Public Health: Misinformation about medical treatments or vaccine safety can undermine public compliance with health guidance, especially when disguised as authoritative.
Democratic Processes: Elections can be swayed if certain voter groups are deliberately targeted with misleading claims. Repeated exposure to conflicting information sows confusion, making it easy to discredit genuine journalism and verified facts.
Incitement of Violence: Professor van der Linden invoked the concept of ‘stochastic terrorism,’ wherein misinformation repeatedly circulates, amplifies societal tensions, and eventually sparks public disorder or violence.
Reputational Harm: At the personal level, deepfake technology can ruin individual reputations by forging compromising images or videos. At the institutional level, businesses and government agencies can lose public goodwill if they are linked—accurately or not—to a scandal or false claim.
Widening Inequality: Evidence shows that minority groups are specifically targeted with false narratives, intensifying distrust of mainstream platforms and public agencies and further polarising society.
Much of this erodes trust in news outlets, democratic institutions, and official communications.
Dr Aitken warned that, as public scepticism grows, audiences might respond to legitimate media stories with the reflex: “How do I know that’s not fake?” The constant drip of dubious content can make all news seem unreliable, with serious repercussions for policy-making, governance, and business credibility.
Policy Gaps: Current Regulatory Shortcomings
Many nations struggle to regulate digital platforms effectively. In the United Kingdom, there is an ongoing debate about balancing freedom of speech with the urgent need to protect users—especially children and vulnerable populations—from harm.
Online Safety Act Limitations
Witnesses and committee members mentioned the Online Safety Act, which addresses various forms of online harm. However, both Dr Aitken and Professor van der Linden emphasised that, in its present form, the Online Safety Act does not comprehensively tackle misinformation or disinformation. It focuses on issues such as child safety, terrorism, and illegal content but does not give Ofcom or other regulators explicit powers to rein in widespread false narratives unless they meet a stringent legal threshold—for instance, deliberate falsehoods shared to cause harm.
Moreover, the Act appears ill-equipped to keep pace with AI-driven developments, leaving significant scope for malicious actors to exploit the technology in ways that fall outside its enforcement reach. As one committee member observed, crafting legal definitions for broad terms like ‘misinformation’ or ‘fake’ without risking overreach or conflating legitimate debate with manipulative content is extremely difficult.
Regulatory Coordination and Accountability
Neither Dr Aitken nor Professor van der Linden suggested that government agencies should become arbiters of truth. Instead, they see an ‘accountability gap’ between platforms and the public.
Social media companies often set community standards that ostensibly prohibit hate speech or deliberate misinformation, yet they are rolling back enforcement of those standards. Regulators and researchers frequently lack access to the data needed to understand how content is being promoted, and government coordination across departments (for instance, the Home Office, DCMS, education, and foreign affairs) is inadequate.
Professor van der Linden cited approaches in other jurisdictions, such as the EU’s Digital Services Act, as models for improving transparency, mandating risk assessments, and imposing fines when companies fail to address harmful misinformation systematically.
Action Agenda: Recommendations for Government
Reflecting on the hearing’s evidence, it is clear that tackling misinformation demands concerted action by government and public bodies, with an emphasis on regulation, education, and coordination.
Expand the Regulator’s Remit
Witnesses proposed strengthening Ofcom’s powers to investigate misinformation. This involves not policing everyday opinions but ensuring accountability when platforms allow the systematic spread of demonstrably false content that can incite harm.
Government could also require large tech platforms to label AI-generated content more reliably, for example through digital watermarking. Although Dr Aitken noted that “malicious actors can fairly easily evade” watermarks, systematic labelling would still be a necessary step.
Invest in National Media Literacy Programs
Both experts recommended embedding ‘prebunking’ or ‘inoculation’ approaches into the national curriculum, an idea borrowed from the success of Nordic countries like Finland. Teaching children—at an early age—how to identify common tactics of propaganda and conspiracies can pay dividends in adulthood.
This instruction should be repeated yearly (so-called ‘booster shots’) to reinforce critical thinking and adapt to the evolving media landscape.
In fact, the Finnish model, developed in response to Russian disinformation campaigns, is one I remember teaching during communications training in South East Asian markets such as Malaysia, Singapore and Indonesia.
Establish Cross-Government Coordination
Various government branches face overlapping challenges: foreign disinformation campaigns, domestic extremist content, health conspiracies, and election integrity. A more structured approach could unify intelligence-sharing and policy interventions.
A central point of contact or cross-department council could help standardise definitions, guidelines, and escalation procedures when misinformation spikes around national events.
Support Trusted Community Organisations
Dr Aitken stressed that local institutions and community groups, already trusted within specific demographics, are prime vehicles for meaningful engagement around misinformation. Government funds or grants could expand their capacity to hold workshops and discussions, addressing the distinct concerns of each community, from public health guidance to political processes.
By pursuing these strategies, government authorities can restore control and resilience to the information environment without impeding fundamental freedoms.
Business Imperatives: Corporate Responsibility
It is not only government agencies that have a responsibility to act. Companies—particularly those that operate online platforms or depend on user-generated content—must shoulder a share of the burden.
What is needed is a collaborative approach in which stakeholders work together towards a common aim that benefits everyone: establishing and rebuilding trust.
Platform Accountability and Transparency
Social media giants can and should do more to highlight suspicious content, verify legitimate sources, and demote material flagged as misleading.
Platforms must share data and cooperate with independent researchers to evaluate the efficacy of algorithms, especially recommendation systems that can amplify polarising material.
Consistency in policy enforcement is crucial. One hearing participant observed that some companies currently have rules, “but they’re not enforcing their own rules.” This rollback undermines trust in the platforms themselves.
Corporate Risk Management
Beyond social media, any business can face reputational threats if it becomes the subject of AI-fuelled smear campaigns or manipulated leaks. Implementing robust fact-checking, crisis communication plans, and staff training can guard against these risks.
Larger firms might coordinate with regulators and law enforcement to address repeated attempts to smear their brands or defraud customers through imposter AI chatbots.
Ethical Innovation
AI startups and established tech firms alike should consider it a design principle to embed watermarking or labelling features in generative AI systems by default.
Taking the lead in developing reliable detection tools or in refining watermarking standards can help companies demonstrate leadership in corporate social responsibility.
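To make the labelling idea concrete, here is a minimal sketch, in Python, of how a generator might attach a provenance label to an image’s metadata at the point of creation. It is illustrative only and not any platform’s actual implementation: the function and model names are hypothetical, and plain metadata of this kind is easy to strip, which is precisely Dr Aitken’s caveat. Production systems would more likely adopt an open provenance standard such as C2PA alongside more robust watermarks.

```python
# Minimal illustration (not a production watermarking scheme): tag a freshly
# generated PNG with provenance metadata before it leaves the generator.
# Requires Pillow; the function and model names are hypothetical.
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_provenance_label(image: Image.Image, path: str, model_id: str) -> None:
    """Embed simple 'AI-generated' provenance fields as PNG text chunks.

    Note: metadata like this is trivial to strip or forge, which is why the
    witnesses pair systematic labelling with calls for sturdier watermarking.
    """
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator_model", model_id)
    meta.add_text("generated_at", datetime.now(timezone.utc).isoformat())
    image.save(path, pnginfo=meta)


if __name__ == "__main__":
    # Stand-in for real model output: a plain grey image.
    img = Image.new("RGB", (256, 256), color="grey")
    save_with_provenance_label(img, "output.png", model_id="example-image-model")
    # Downstream tools could inspect Image.open("output.png").text to read the label.
```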
Public Engagement: Building a Culture of Inquiry
A better-informed and critically engaged public is the best bulwark against manipulative narratives. Dr Aitken and Professor van der Linden recognised the importance of giving individuals the skills to interpret the onslaught of online content while avoiding the trap of ‘over-scepticism.’
Critical Literacy from a Young Age
School-based programmes can enhance pupils’ capacity to question sources, use fact-checking tools, and discuss manipulative tactics. Age-appropriate lessons can demystify how AI can forge realistic text or images.
Encouraging healthy scepticism rather than pervasive cynicism is the goal. Young people should learn to differentiate credible data from speculation, and factual reports from memes designed to provoke strong emotional responses.
Adult and Lifelong Learning
Outside formal education, libraries, community centres, and adult learning institutes could integrate short workshops or modules on digital verification.
Employers could also offer in-house seminars, particularly in businesses prone to reputational risks. In doing so, adults who missed out on formal digital literacy education can catch up and adapt.
Grassroots Awareness Campaigns
Sustained and well-funded public information campaigns can publicise known ‘red flag’ signals of misinformation. They can also direct citizens to reliable fact-checking services or official clarifications on viral claims.
Dr Aitken noted that promoting dialogue within communities encourages a nuanced understanding of AI’s capabilities and dangers. This approach fosters trust, as the information comes from local figures already known to residents.
Securing Our Information Future
The House of Lords Communications and Digital Committee hearing was an urgent reminder that the UK—and every modern democracy—faces a rapidly evolving fight against misinformation.
Generative AI is accelerating the creation of false or distorted content, undermining trust in genuine sources and posing unique challenges for policymakers, businesses, and the public.
Yet, despite the severity of the threats described by Dr Aitken and Professor van der Linden, their testimonies also sketched out a constructive path forward:
Regulation: Expand Ofcom’s remit, or develop new frameworks, so that the wilful spread of false content can be scrutinised and platforms compelled to act.
Education: Implement a national media literacy strategy, teaching children from an early age how to detect propaganda and manipulative tactics. Support adult-focused programmes to ensure no segment of the population is left behind.
Coordination: Improve cross-department government collaboration and data-sharing. Recognise that misinformation is not just a digital communications problem; it cuts across security, health, education, and social welfare.
Platform Responsibility: Urge companies to enforce their community standards consistently, label AI-generated content, and partner with external researchers so that harmful content can be identified and demoted swiftly.
Community and Trust: Fund and partner with local organisations to promote engagement on AI, content verification, and resilience-building. Leverage already-trusted voices and institutions to reach different demographics effectively.
Given the complexity of the modern information environment, no single initiative—whether a piece of legislation, a fact-checking partnership, or an educational policy—will suffice. However, by distributing responsibility across governments, businesses, and the public, society can begin to reassert standards of authenticity.
Dr Aitken encapsulated the challenge of resisting the lure of cynicism: “The deeper threat here is that increasingly, as there is exposure and awareness of AI-generated content, people begin to lose trust in all content online.” The goal is to prevent that sweeping crisis of faith in legitimate information. A thoughtful balance of regulation, community engagement, corporate accountability, and personal awareness can achieve just that. By investing in robust media literacy for all, the UK can empower citizens to question manipulative claims while still recognising, and trusting, fact-based reporting and expert opinion.
As organisations and individuals adapt to a world where falsehoods may look as convincing as truth, the stakes have never been higher.
Society stands at a crossroads: either we accept a downward spiral of suspicion, or we collectively commit to equipping each new generation with the knowledge, tools, critical thinking and regulations necessary to maintain a healthy, informed democracy.
The vision that emerged from the select committee hearing points toward the latter. By acting decisively, government bodies, corporate leaders, and citizens can protect credibility and reputations in the era of AI, ensuring that open, evidence-based discourse continues to flourish in the UK’s public sphere.