By ending content fact-checking, Zuckerberg has increased the reputational risk for companies advertising on Facebook and Instagram
Mark Zuckerberg announced yesterday that Meta would end its fact-checking programme and replace it with a Community Notes system, similar to the one Elon Musk has in place on X. Zuckerberg also said that the company’s moderation policies around political topics would change, including the removal of a policy that reduced the amount of political content in user feeds.
These moves were announced in anticipation of the incoming Donald J. Trump administration taking office in the US later this month.
What was most striking, however, was Zuckerberg's statement: "We're going to work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more."
The statement confirmed that anything standing in the way of further monetising the platforms he has built will be removed, regardless of any safeguarding concerns. From now on, in the US at least, human fact-checkers and moderators will go, a change likely to bring an increase in misinformation and hateful content on his platforms.
Yet this decision also highlights a broader issue: how the internet has slowly been splitting over the last 15 to 20 years along regional differences in culture and values.
Former Google CEO and Alphabet chairman Eric Schmidt said in 2018 that within 10 to 15 years the internet would most likely split in two: one led by China and one led by the United States. A third sphere is also taking shape, designed around EU regulation that prioritises data privacy.
An opinion piece by the New York Times Editorial Board highlighted how ‘all three spheres — Europe, America and China — are generating sets of rules, regulations and norms that are beginning to rub up against one another. What’s more, the physical location of data has increasingly become separated by region, with data confined to data centres inside the borders of countries with data localization laws.’
But as we move into an assertive ‘America First’ world, Zuckerberg, like Musk and other tech leaders, hopes to push his view of what the internet should look like onto the rest of the world, regardless of the damage that lies and misinformation have done to people.
Exporting Misinformation
We cannot ignore the damage that misinformation does to people and society. Misinformation shared on social media platforms such as Facebook, Instagram, and X has had far-reaching consequences, eroding trust in science, deepening social divides, and inciting hostility toward vulnerable communities.
During the COVID-19 pandemic, platforms became a breeding ground for false claims about vaccines and treatments, which fueled vaccine hesitancy and jeopardized public health. Research from the Center for Countering Digital Hate revealed that nearly 65% of anti-vaccine content on Facebook and Twitter originated from just 12 accounts, dubbed the “Disinformation Dozen.”
But the impact goes beyond public health. Social media has amplified hate speech targeting racial, ethnic, and LGBTQ+ communities, fostering environments of hostility and discrimination. We know that algorithms often prioritise polarising content, magnifying exposure to harmful narratives. Notably, in 2021 a Facebook whistleblower exposed internal documents showing that the platform knowingly allowed hate speech to flourish, particularly in non-English-speaking regions where moderation resources were inadequate.
In parallel, the proliferation of conspiracy theories questioning climate science, election integrity, and other crucial topics has further eroded trust in institutions and experts. Despite introducing measures like fact-checking and content warnings, social media platforms have struggled to curb the spread of these narratives. One could argue that this issue has been kicked into the long grass in order to maintain the high levels of engagement and usage that deliver positive financial returns.
The Wall Street Journal reported on internal Meta research revealing that Instagram can have detrimental effects on teenagers’ mental health, particularly among adolescent girls. The findings indicated that Instagram exacerbates body image issues for one in three teenage girls and contributes to increased rates of anxiety and depression.
What we need are platforms where safety and safeguarding are built in, and which could actually generate greater financial returns for the tech companies. But of course, US companies think in quarters, not the long term.
The Evolution of Social Media and Its Advertising-Driven Model
When social media platforms like Facebook, Instagram and Twitter (now X) began in the early 2000s, it was all about joining for free and delivering experiences that helped people around the world connect, communicate and share information. Companies like Facebook grew on the back of advertising revenue, with advertisers investing heavily to reach their expansive user bases.
Facebook was founded just over 20 years ago, on 4 February 2004, with a first private investment of $500,000 from PayPal co-founder Peter Thiel in exchange for 10.2% of the company. It made its first profit five years later. In 2012, it filed for its initial public offering (IPO) on 1 February and went public on 18 May, raising $16 billion at a market valuation of $104 billion, one of the largest IPOs in tech history.
Today, Meta has over 3 billion monthly active users on Facebook and more than 2 billion on Instagram, and it also owns WhatsApp.
Meta has the audiences that brands and companies want to reach and engage with, which is why, in 2023, it reported total revenue of $134.9 billion, a 15.69% increase on the previous year. Advertising remains the cornerstone of Meta’s income, contributing approximately 97% of total revenue.
Meta also reported a net profit margin of 28.98% for 2023 and an operating margin of 34.66%, up from 24.82% in 2022.
The US and Canada are Meta’s largest markets, accounting for 39% of its 2023 revenue at $52bn, with Europe accounting for 23% and Asia Pacific 20%. Don’t look at these numbers without considering the regulatory pushback and oversight that Zuckerberg’s announcement might now invite in those markets.
How did Facebook and Meta’s platforms grow and create the environment that we are now experiencing?
In simple terms, money and advertising.
Facebook has cast an extraordinarily effective dragnet across the globe, and advertisers are willing to pay to reach people on its family of apps.
Yet marketing and advertising agencies have been critical in helping Facebook and Meta grow. They have guided brands to optimise their presence on these platforms, further entrenching the reliance on digital advertising.
Advertisers and their agencies need Facebook and Instagram as much as the platforms need advertisers and their spending. Both need audiences that stick around and stay on Facebook and Instagram for as long as possible. The relationship is symbiotic.
Keeping users on the platform requires the platform and its content to be designed to create regular daily usage, in other words, addiction. This is an issue that has been identified by numerous medical researchers, including this 2023 paper published in the American Journal of Law and Medicine, and one that Scott Galloway highlights in his latest Pivot podcast with Kara Swisher.
The Proliferation of Misinformation and Disinformation
The expansive reach of social media has directly and indirectly facilitated the spread of misinformation and disinformation. Algorithms designed to maximise user engagement now amplify sensational or misleading content, posing significant risks to societal well-being and to the brand integrity of companies whose advertising appears alongside malicious content.
As the dominant social media platform, Facebook has become central to the sharing and amplification of information and misinformation, especially within Groups or directly through its Messenger app.
The platform's algorithms, designed to prioritise user engagement, have favoured sensational and misleading content, inadvertently or not, allowing misinformation to flourish. This engagement-centric model has led to the proliferation of emotionally charged narratives that often outpace the efforts of the fact-checkers and moderators, soon to be gone, who work to mitigate harmful content.
Research confirms this, showing that misinformation circulating on Facebook has swayed public opinion, eroded trust in democratic institutions, and amplified political polarisation, something Zuckerberg has long disputed.
Yet, to counter that perception, Zuckerberg in 2020 created an independent Oversight Board to review Meta’s decisions on content moderation, with the aim of ensuring accountability and transparency in managing complex cases involving free expression and harmful content.
Membership of the board included experts in law, human rights, and journalism. It acted quasi-judicially, making binding decisions on content disputes and offering policy recommendations to guide Meta’s moderation practices.
Yet, like other such initiatives, the Oversight Board was a tactic designed to deflect the attention of regulators and stakeholders.
Meta’s Transition to Community-Driven Moderation: Potential Risks
Meta’s announcement, a significant policy shift made weeks before Donald Trump’s administration takes office, aligns the company with the values of the incoming government.
Adopting a Community Notes system like X’s, though not like Wikipedia’s, shifts the responsibility for flagging and contextualising potentially misleading posts from the platform to its users. This begins in the US, where freedom of speech is enshrined in the Constitution.
Equally, it is worth remembering the protection that companies like Meta and X enjoy under Section 230 of the US Communications Decency Act, which shields them from being treated as publishers of people’s posts on their platforms.
Removing content moderation in the US, and then working with the incoming government to lobby international markets to follow suit or not penalise Meta, puts local and global businesses at risk of being associated with misinformation and hateful content. Is this what advertisers want?
There is an increased risk of a brand's reputation being damaged by advertising on Facebook or Instagram, because its ads may appear alongside or within content containing misinformation or hate speech.
The absence of robust moderation may result in a more volatile content and digital advertising landscape, making it challenging for brands to ensure their advertisements are placed in appropriate contexts.
If brands get caught out, any association can lead to public backlash and negative publicity.
Recommendations for Businesses Navigating the Evolving Digital Landscape
Businesses, brands and governments that have a presence on Meta platforms are likely to see a change in content on Facebook and Instagram.
The reduction in human moderation increases the risk of advertisers' content appearing alongside hateful material that damages their brands. As a result, companies should do the following:
Conduct Comprehensive Risk Assessments:
Regularly audit digital advertising campaigns to identify potential brand safety issues.
Diversify Advertising Channels:
Explore alternative platforms beyond Meta and X to mitigate risks associated with policy changes.
Advocate for Transparency:
Support initiatives calling for greater transparency in content moderation practices on social media platforms.
Remember, while Zuckerberg might be embarking on a global campaign against ‘censorship’, your advertising spending gives you the influence to ensure that the platform delivers reach without risk. Be more vocal, and lobby Meta as you would lobby your government.
Engage in Industry Collaboration:
Participate in industry groups to advocate for responsible digital advertising practices and effective content moderation.
Work and engage with other advertisers who share your concerns. To a company like Meta, money is the influence that can shape its products around your wants and needs.
Adapting to a Fragmented Digital Ecosystem
The internet is fragmenting, driven by regional regulations and platform policy shifts, and this presents complex challenges for businesses.
By adopting proactive strategies and remaining vigilant, companies can navigate this evolving landscape, safeguarding their brand integrity while being more assertive in getting Meta and others to ensure that Facebook, in their region, fits that market.
Marketing and advertising budgets are the leverage brands will need to use if the platforms on which they have a presence are to remain safe for advertisers and audiences.