
How to Build Trust with Ethical AI in Life Sciences: Essential Strategies for Healthcare and Technology Leaders

Today, the morning after the UK Government’s International Investment Summit, I attended an event at the London Guildhall titled ‘AI & Data in Life Sciences: An Ethical Conundrum?’

Organised by BiteLabs Healthtech and with a keynote by the Lord Mayor, Michael Mainelli, the event brought together Hatim Abdulhussein, CEO at Health Innovation Kent Surrey Sussex; Sage Revell, Partner at Brown Rudnick LLP; Rav Seeruthun, CEO at Health-Equity.AI; and Haris Shuaib, CEO at Newton's Tree.

The event focused on the rapidly evolving world of healthcare and life sciences, and on how groundbreaking advances in artificial intelligence (AI) can be leveraged to improve diagnostics and predict patient outcomes.

One of the critical challenges facing the use of AI by life sciences and healthcare providers is perception and trust, with concerns around bias, accountability, and data privacy being particularly acute. These concerns create reputational risks for healthcare providers and developers of AI solutions, and they need to be addressed.

Below are five key insights and recommendations on how healthcare and life sciences companies can ethically integrate AI while building trust and safeguarding their reputation.

1. Addressing Bias in AI to Build Ethical Foundations

Key Insight: AI models in life sciences risk perpetuating biases, particularly when built on non-representative data. Historical underrepresentation in clinical trials and healthcare data sets continues to influence AI-driven outcomes, disproportionately affecting underserved populations. As an example, clinical trials in the US and Europe now require representation across gender and ethnic lines, a shift that AI developers must align with to ensure the AI solutions they build, and that medical practitioners use, deliver advice that is relevant to the individual patient.

Recommendation: Companies should actively work to collect and use more diverse, representative data sets to ensure their AI tools do not reinforce existing health disparities. This commitment to inclusivity should be clearly communicated to stakeholders, demonstrating that the organisation values equity in healthcare outcomes. Collaborating with regulatory bodies, such as the FDA or EMA, to align AI models with new standards for diverse trial representation will further bolster trust.
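To make this concrete, below is a minimal sketch of what a training-data representation audit could look like in practice. The column names, reference proportions, and tolerance are hypothetical placeholders; a real audit would use the provider's own records and population benchmarks.

```python
# A minimal sketch of a training-data representation audit.
# Column names, reference proportions, and the tolerance are
# hypothetical; substitute your own records and population benchmarks.
import pandas as pd

# Hypothetical clinical training data (in practice, load your own records).
records = pd.DataFrame({
    "sex": ["F", "M", "M", "M", "F", "M", "M", "M"],
    "ethnicity": ["A", "A", "B", "A", "A", "A", "A", "C"],
})

# Hypothetical reference proportions for the patient population served.
reference = {
    "sex": {"F": 0.51, "M": 0.49},
    "ethnicity": {"A": 0.60, "B": 0.25, "C": 0.15},
}

TOLERANCE = 0.10  # flag groups more than 10 percentage points below reference

for column, expected in reference.items():
    observed = records[column].value_counts(normalize=True)
    for group, target in expected.items():
        share = float(observed.get(group, 0.0))
        if share < target - TOLERANCE:
            print(f"Underrepresented in '{column}': {group} "
                  f"({share:.0%} observed vs {target:.0%} expected)")
```

Even a simple check like this, run routinely and reported to stakeholders, makes the commitment to representative data visible rather than aspirational.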

2. Building Accountability: AI as an Assistive Tool, Not a Replacement

Key Insight: One critical concern was accountability - what happens when an AI system makes an error in diagnosis or treatment? AI-driven decision-making cannot be left unchecked, and there must be clear lines of responsibility.

Recommendation: Healthcare companies should ensure that AI tools serve as support systems for human decision-makers rather than as replacements. Clear protocols must be in place that define when and how AI recommendations are reviewed by qualified professionals. Furthermore, establishing a transparent error-management process that outlines liability will reassure patients and regulatory bodies that patient safety remains the top priority.
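As an illustration of such a protocol, here is a minimal sketch of confidence-based gating, where low-confidence AI outputs are routed to mandatory clinician review. The finding, confidence score, and threshold are assumptions for illustration, not a reference implementation.

```python
# A minimal sketch of human-in-the-loop gating for AI recommendations.
# The finding, confidence score, and threshold are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional

REVIEW_THRESHOLD = 0.90  # below this, clinician review is mandatory

@dataclass
class Recommendation:
    finding: str
    confidence: float
    requires_human_review: bool
    reviewer_decision: Optional[str] = None  # filled in by the clinician

def triage(finding: str, confidence: float) -> Recommendation:
    """Route every AI output through an explicit accountability gate.

    High-confidence outputs are still surfaced as assistive suggestions;
    low-confidence outputs are flagged for mandatory clinician review.
    """
    return Recommendation(
        finding=finding,
        confidence=confidence,
        requires_human_review=confidence < REVIEW_THRESHOLD,
    )

rec = triage("possible anomaly in scan region 3", confidence=0.72)
if rec.requires_human_review:
    print(f"Queued for clinician review: {rec.finding} ({rec.confidence:.0%})")
```

The design point is that the gate is explicit and auditable: every output carries a record of whether, and by whom, it was reviewed, which supports the transparent error-management and liability processes described above.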

Trust-Building Strategy: Publicly commit to maintaining human oversight in all AI-related healthcare processes. Actively engaging healthcare professionals in AI workflows will enhance their trust in AI systems, which is critical for patient adoption as well.

3. Transparency in Data Handling: The Cornerstone of Patient Trust

Key Insight: Data privacy remains a fundamental concern for patients, particularly when AI platforms use their health data. Patients are wary of how their personal information is handled, especially when private-sector companies are involved. Willingness to share data varies significantly, with older patients often being more trusting and younger, more tech-savvy individuals more cautious.

Recommendation: Healthcare companies must be transparent about how they collect, store, and use patient data. This includes offering patients clear and easy-to-understand information about data anonymisation processes, storage security, and the specific purposes for which their data will be used. Additionally, adopting privacy-enhancing technologies and making them a core part of patient communication can further alleviate concerns.
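By way of illustration, here is a minimal sketch of pseudonymisation, one common privacy-enhancing technique: replacing direct identifiers with keyed, irreversible tokens. The identifier format and key handling shown are hypothetical; a real deployment needs proper key management and should treat pseudonymisation as one layer in a wider anonymisation and governance process.

```python
# A minimal sketch of pseudonymisation, one common privacy-enhancing step.
# The identifier format and key handling are illustrative only; a real
# deployment needs proper key management and wider anonymisation controls.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-managed-key"  # hypothetical

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "NHS-1234567", "observation": "HbA1c 48 mmol/mol"}
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
print(safe_record)
```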

Trust-Building Strategy: Implement patient-centred consent frameworks and ensure that these are visible across all patient touchpoints. Additionally, when building an AI-powered service, the product and experience should be designed around the user, mitigating fears they might have. It is also critical that communications and engagement are designed to build trust and confidence. Language, reputation, and the experience of the service provider all matter.

4. Ethical AI Development: Aligning with Societal Goals to Build Reputation

Key Insight: Trust is built when AI solutions are seen as contributing to societal good, not just corporate profit. The healthcare sector is uniquely positioned to use AI to address public health disparities and improve patient outcomes.

Recommendation: Companies should align their AI strategies with societal health objectives, such as reducing disparities in access to care or improving outcomes for vulnerable populations. By developing AI solutions that address these broader public health challenges, companies can enhance their reputation as contributors to the public good, rather than purely profit-driven entities.

Trust-Building Strategy: Companies should publicly commit to ethical AI use, highlighting the positive impact their technologies will have on society. Regularly publishing transparency reports and engaging in ongoing, open public dialogue on the ethical use of AI will further reinforce this position.

5. Proactive Engagement with Ethical and Regulatory Frameworks

Key Insight: Regulatory frameworks are struggling to keep pace with AI innovation, leaving companies to navigate uncertain legal and ethical terrains. However, companies that voluntarily adhere to ethical standards, even in the absence of strict regulations, gain a competitive edge in building trust.

Recommendation: Healthcare and life sciences companies should develop internal ethical guidelines for AI use, even when regulations are lacking. These guidelines should be embedded as part of the culture within each company. Being proactive in adopting ethical AI practices will not only mitigate risks but also enhance the company’s reputation for corporate responsibility. Collaborating with cross-industry ethical boards, engaging patient advocacy groups, and contributing to the development of AI ethics frameworks will further position the company as a responsible innovator.

Trust-Building Strategy: Think strategically and engage in continuous public and private dialogue with regulators, patients, and industry bodies to shape future AI regulations. Companies that take a leadership role in ethical discussions are more likely to be seen as trustworthy partners by patients, healthcare professionals, and government stakeholders.

Conclusion: Building Trust Through Ethical AI Integration

As AI becomes more deeply integrated into healthcare and life sciences, companies must proactively address the ethical challenges it presents. By committing to transparency, accountability, and societal good, they can build trust with patients, healthcare providers, and regulators alike. AI, when used ethically and responsibly, can deliver immense value, but only if companies actively invest in building trust and safeguarding their reputation.

In the UK’s Industrial Strategy Green Paper, which was published yesterday, the Government identified the country’s life sciences sector as one that ‘holds enormous potential to drive economic growth and productivity.’ Additionally, the Green Paper states that ‘the UK’s life sciences sector is built on a strong foundation, with over 6,800 businesses in 2021/22 that generated over £100 billion in turnover. The UK is also home to four of the top 10 global universities for life sciences and medicine, and with the expertise of the NHS, the UK is a global hub for innovation.’ Realising this potential requires the building of trust by technology companies looking to leverage AI to deliver improved healthcare services.

For companies in the life sciences and healthcare sectors, the message is clear: ethical AI use is not just a regulatory requirement - it is a reputational asset. Investors need confidence that start-ups are investing not just in due diligence but also in building their reputation. Those who prioritise transparency, accountability, and social responsibility will be best positioned to lead the industry and gain the trust of their stakeholders.
