
How to manage the reputational risks of algorithms

This summer’s A-level exam results chaos in the UK has heightened concerns about the use of algorithms in decision-making. Whether in the public or private sector, algorithms are part of everyday life. But what do you know about them, how they have been designed, and whether there is bias in the decisions they contribute to?

Unless organisations are more transparent about the influence algorithms have on decision-making, we risk limiting the confidence that society has both in the algorithms and in the organisations that use them.

Artificial intelligence (AI), machine learning (ML) and algorithms have carried an air of mystique for some time. They are pitched to us and to decision-makers as essential solutions that can solve complex problems simply by crunching high volumes of data.

However, take a step back from the pitch and remember that the Achilles heel of algorithms is the humans who create them. If variables are missed in the design, discovery and alpha stages, there is a chance that the organisations using them will discriminate against you. Unless all relevant factors are included, there is a high chance the outcomes will not be fair, and the reputation of the organisation using them can be damaged as a result.

In the case of the A-level exam results, the crisis started in late March 2020 when, after exams were cancelled because of the Covid-19 pandemic, Ofqual was instructed to "ensure, as far as is possible, that qualification standards are maintained and the distribution of grades follows a similar profile to that in previous years".

With limited time available, Ofqual got to work. But the algorithm they created included a variable that marked students up or down based on the historical performance of each centre (school or college) in the subject being assessed, regardless of teacher assessment and prediction. This led to almost 40% of students being downgraded. In simple terms, a student who did well at a school that had historically performed poorly had their grade held back, regardless of how they had excelled. They became hostage to history.
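To make the flaw concrete, here is a deliberately simplified sketch in Python. The function, grade mapping and example data are illustrative assumptions of mine; Ofqual's actual model was more sophisticated, but the core behaviour is the same: rank students, then overwrite their predicted grades with the centre's historical distribution.

```python
GRADE_ORDER = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def standardise(teacher_predictions, centre_history):
    """Rank students by teacher prediction, then overwrite each grade
    with the grade the centre historically achieved at that rank."""
    ranked = sorted(teacher_predictions,
                    key=lambda s: GRADE_ORDER.index(teacher_predictions[s]))
    # The student's own prediction is discarded: their rank simply maps
    # them onto the distribution the centre produced in previous years.
    return {student: historic
            for student, historic in zip(ranked, centre_history)}

# An excellent student at a historically weak centre is held back by
# its past results, regardless of her own predicted grade.
predictions = {"Amira": "A*", "Ben": "B", "Cara": "C"}
history = ["B", "C", "D"]  # best-to-worst grades this centre achieved before
print(standardise(predictions, history))
# -> {'Amira': 'B', 'Ben': 'C', 'Cara': 'D'}
```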

In their rush to design a solution that tried to avoid grade inflation, the Ofqual team failed to remember that exams are taken by individuals. The public outcry that followed led to heavy political damage.

The government blamed the exam downgrades on a ‘mutant algorithm’, but even this excuse highlighted a lack of understanding of the technology.

While the government has best-practice guidelines on the creation and testing of AI and algorithms, there appears to have been a failure to communicate and educate upwards about what an algorithm is and the risk of rolling one out without proper testing.

In August 2020 the UK’s Chartered Institute of Public Relations released an ethics guide to AI in public relations. Some of the points it raises are worth considering, given the impact that algorithms can have on trust and reputation.

So how do you manage risk in the design and deployment of AI and other advanced technologies? Here are my three essential points to consider:

Educate stakeholders about the benefits, workings and risks of algorithms

Have a risk-management mentality and use it to assess the viability of the algorithm. Consider the impact, and the possible bias and discrimination, that might not have been resolved during the alpha stage of design and development; a minimal example of such a check follows these points.

Speak truth to power: to those who are ultimately responsible for the decisions and outcomes of AI and algorithms. Make sure they understand the technology and the risks as well as the benefits.

Question your design and data team. You are there to support and protect them, and to help them deliver effective solutions that are adopted and seen as fair.
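Picking up the bias check promised above, here is a minimal sketch in Python. The group labels, field names and the ten-percentage-point threshold are illustrative assumptions, not a standard test:

```python
from collections import defaultdict

def downgrade_rates(results):
    """results: list of dicts with 'group', 'predicted' and 'final'
    keys (higher number = better grade). Returns the share of each
    group whose final grade fell below the teacher prediction."""
    downgraded = defaultdict(int)
    total = defaultdict(int)
    for r in results:
        total[r["group"]] += 1
        if r["final"] < r["predicted"]:
            downgraded[r["group"]] += 1
    return {g: downgraded[g] / total[g] for g in total}

results = [
    {"group": "large_centre", "predicted": 7, "final": 6},
    {"group": "large_centre", "predicted": 5, "final": 4},
    {"group": "small_centre", "predicted": 7, "final": 7},
    {"group": "small_centre", "predicted": 5, "final": 5},
]
rates = downgrade_rates(results)
print(rates)  # {'large_centre': 1.0, 'small_centre': 0.0}

# A wide gap between groups is a red flag that comparable students are
# being treated differently -- the pattern seen in the A-level results.
if max(rates.values()) - min(rates.values()) > 0.1:
    print("Warning: downgrade rates diverge across groups; investigate.")
```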

Carry out a full impact and risk assessment of your algorithms

To secure adoption of algorithms you need to establish trust in them and in the associated processes. Engage with all stakeholders where appropriate to draw on their knowledge, which also helps to manage and minimise risk.

Ensure that reputational risk is part of your team’s impact assessment during the discovery and alpha stages of design and development.
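One way to give reputational risk equal standing in that assessment is to record it in the same structure as any other risk. The sketch below is a hypothetical shape, not a prescribed government template; the fields and scoring are my own illustration:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (minor) .. 5 (severe)
    category: str          # e.g. "technical", "legal", "reputational"
    mitigation: str

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring, common in risk registers.
        return self.likelihood * self.impact

register = [
    RiskEntry(
        description="Historic centre data encodes past inequality",
        likelihood=4, impact=5, category="reputational",
        mitigation="Compare outcomes with teacher assessment; publish the methodology",
    ),
]
# Reputational entries are ranked alongside technical ones, so they
# cannot quietly drop off the agenda during discovery and alpha.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"[{entry.category}] {entry.description}: score {entry.score}")
```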

Be open and transparent about the design of your algorithm

Algorithms used in the private sector have to be risk-tested to ensure that those who use them in decision-making are not exposed to possible legal action. Algorithms are also intellectual property with an attached value.

Public sector algorithms must be owned by the Government and be open and transparent to ensure public confidence in how decisions are made.

Publish the ethics guidance used in the design of your public sector algorithm.
