How to address AI bias in healthcare

by Antonina Burlachenko

The following article examines the challenges posed by AI bias in healthcare and how to overcome them. You can also jump straight to our masterclass on managing AI bias in healthcare here.

In recent years, AI has transformed the healthcare industry. Now, powerful tools can assist with diagnosing diseases, predicting patient outcomes and streamlining administrative processes. 

But as AI becomes increasingly integrated into healthcare systems, concerns regarding AI bias are growing. Bias in AI, especially within the healthcare world, can have significant implications, potentially leading to disparities in care, misdiagnoses and unfair treatment of certain demographics.

Here we’ll look at the issue of AI bias in healthcare, its impact, and what can be done to mitigate these biases.

What is AI bias?

AI bias is the presence of systematic and unfair discrimination in AI models and systems. It occurs when algorithms make predictions, decisions or classifications that are skewed due to imbalanced data or flawed design.

In healthcare specifically, AI is used in a variety of applications, including diagnostics, predictive models, personalized treatment plans and clinical decision support.

When AI models are biased, they produce inaccurate results that disproportionately affect certain patient groups based on factors like race, gender and socioeconomic status. For example, researchers found that an algorithm used on over 200 million people in US hospitals heavily favored white patients over black patients.

Sources of AI bias in healthcare

There are several ways bias can manifest in the AI models used in healthcare. Discussions of bias typically focus on the data, but it can enter at several stages. Some of the primary sources include:

  1. Historical inequities: AI systems can only learn from data that already exists, which means they inherit biases from historical healthcare records in which certain demographics have been underserved or misdiagnosed. For example, if healthcare providers have historically underdiagnosed or misdiagnosed certain conditions in minority populations, an AI model trained on this data is likely to perpetuate those biases, leading to inaccurate treatment recommendations.
  2. Data imbalances and preprocessing: AI models are typically trained on large datasets. If those datasets aren’t representative of the diverse population they serve, the model can develop biases, so it’s imperative that the data used to train and test ML models is independent and representative of the intended real-world population (a simple representativeness check is sketched after this list). That said, fairness and the absence of bias are not always the same thing: sometimes we have to deliberately introduce bias into the data or model to ensure the fairness of the end product, such as when compensating for historical inequities. Conversely, if the model relies on data that reflects biased decision-making, the AI will replicate and amplify those biases.
  3. Feature selection and algorithm design: The task here is to select the most informative features for model inference while watching out for bias-sensitive attributes such as race, age and gender. For example, if a model uses a proxy feature such as ZIP (postal) code or socioeconomic status, it may indirectly reinforce existing inequalities in access to healthcare. Analyzing how much each feature contributes to the model’s predictions is a useful technique for uncovering hidden biases (see the second sketch after this list). Poor algorithm design and modelling choices can also cause overfitting or underfitting, producing unusable models that fail to generalize to production data.
  4. Interpretability and transparency: Many AI models are considered "black boxes," meaning they make decisions without providing a clear rationale. This lack of transparency makes it difficult to identify and correct biases within the system. If healthcare professionals cannot understand how an AI system arrives at a particular decision, it is challenging to ensure the model is fair and accurate. On top of that, if the deployer or developer of a marketed AI product cannot monitor its performance post-market, issues may go unidentified and harm users.
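
To make point 2 concrete, here is a minimal sketch of a representativeness check in Python. It assumes pandas, a hypothetical "ethnicity" column, and reference population shares that you would source from census or patient-registry data:

```python
# Minimal sketch: compare subgroup shares in training data against reference
# population shares. Column and group names here are hypothetical.
import pandas as pd

# Hypothetical reference shares for the population the model will serve
POPULATION_SHARES = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def representation_gaps(df: pd.DataFrame, column: str,
                        reference: dict) -> pd.DataFrame:
    """Report how far each subgroup's share in the data is from its reference share."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        actual = float(observed.get(group, 0.0))
        rows.append({"group": group,
                     "expected_share": expected,
                     "observed_share": actual,
                     "gap": actual - expected})
    return pd.DataFrame(rows).sort_values("gap")

# Toy data: group_c is underrepresented relative to the reference population
df = pd.DataFrame({"ethnicity": ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5})
print(representation_gaps(df, "ethnicity", POPULATION_SHARES))
```

Large negative gaps flag subgroups for which more data should be collected before the model can be trusted for that population.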
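
The feature-contribution analysis from point 3 can be sketched with scikit-learn’s permutation importance. The toy dataset and feature names below, including the proxy feature "zip_code_income", are hypothetical:

```python
# Minimal sketch: rank features by permutation importance to spot proxies
# for protected attributes. Data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(size=n),   # blood_pressure
    rng.normal(size=n),   # cholesterol
    rng.normal(size=n),   # zip_code_income (potential proxy for race/income)
])
feature_names = ["blood_pressure", "cholesterol", "zip_code_income"]
# Outcome deliberately correlated with the proxy feature
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# If a proxy feature ranks highly, investigate whether it encodes a
# protected attribute and whether it should be removed or adjusted
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```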

The impact of AI bias in healthcare

The impact of AI bias in healthcare is already far-reaching. Inaccurate predictions, misdiagnoses and unequal access to care can exacerbate existing health disparities and lead to poorer outcomes for vulnerable populations. Some of the consequences of AI bias in healthcare include:

  • Health disparities: Bias in AI systems can contribute to the widening of health disparities between demographic groups. If AI models are more accurate for some groups than for others, the result is unequal treatment and care, particularly for historically marginalized groups.
  • Patient safety: Biased AI models compromise patient safety. For example, an algorithm that fails to accurately identify heart disease in women because it was trained primarily on data from men can lead to missed diagnoses and delayed treatment. Inaccurate AI predictions put patients at risk of unnecessary suffering, complications or even death.
  • Loss of trust: If patients or healthcare providers perceive AI systems to be biased, trust in these technologies will erode, along with the willingness to use them. Already, 60% of Americans say they would be uncomfortable with providers relying on AI in their own health care. This is especially concerning given AI’s potential to improve healthcare outcomes through better diagnostics and treatment recommendations.
  • Ethical (and legal) concerns: AI bias raises ethical issues related to fairness, discrimination and accountability, which have legal ramifications too. Healthcare providers, developers and regulators need to navigate these concerns to ensure that AI systems comply with laws such as the Health Insurance Portability and Accountability Act (HIPAA) and the Civil Rights Act, which protect patients from discrimination.

AI bias in healthcare examples

Real-world examples of AI bias in healthcare illustrate the potential harm these biases can cause.

One of the most notable cases involved an AI system used to predict which patients would benefit most from extra care in hospitals. Researchers discovered that the algorithm was biased against black patients. It used healthcare costs as a proxy for healthcare needs, but because black patients typically receive less medical care and therefore incur lower healthcare costs on average, they were underrepresented in the system's high-risk predictions.

This led to fewer resources being allocated to black patients, in turn widening healthcare disparities.

Another example is AI-driven products used for diagnosing conditions like skin cancer. Studies have shown that AI algorithms used in dermatology tend to perform less accurately on people with darker skin tones compared to those with lighter skin. This is because the datasets used to train the algorithms are often predominantly composed of lighter-skinned individuals, leading to less accurate diagnoses for people of color.

Mitigating AI bias in healthcare

Addressing AI bias in healthcare is critical to ensuring that these technologies fulfill their promise of improving patient outcomes and reducing health disparities. Several strategies can help mitigate bias in AI models:

  • Representative data: One of the most effective ways to reduce bias in healthcare AI is to ensure that training and testing data is diverse and representative of the entire population. Healthcare providers, researchers and developers should actively seek out data that includes diverse demographic groups, such as racial minorities, women, elderly patients and individuals with various medical conditions. Domain expertise and analysis of the specific use case are crucial for identifying all relevant data requirements.
  • Bias auditing/testing: Regular audits and testing of AI systems are key to identifying and correcting biases. This evaluation should span the entire AI system lifecycle, from data quality control through bias assessment during implementation (such as feature impact review) to verification and validation on subgroups of the population. By evaluating the performance of AI models across different demographic groups (a simple per-group audit is sketched after this list), developers can ensure that the systems provide equitable results. This process should be ongoing, with continuous updates to the models as new data becomes available.
  • Transparency: AI systems in healthcare should be designed with transparency and interpretability in mind. If healthcare professionals can understand how an AI system makes decisions, they can better identify potential biases and intervene when necessary. Explainable AI models can also help build trust with patients and healthcare providers. Any remaining unmitigated risks or performance limitations need to be clearly communicated to the end users. 
  • Collaboration and oversight: Collaboration between AI developers, healthcare professionals, regulators, and policymakers is crucial for addressing AI bias in healthcare. Oversight bodies can provide guidance and ensure that AI systems are deployed in a way that minimizes harm and maximizes benefits.
  • Ethical AI design: AI systems should be designed with ethical considerations at the forefront. This includes incorporating fairness as a core principle, ensuring that the models are regularly updated to reflect the evolving understanding of health disparities, and prioritizing patient welfare.
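
For the subgroup evaluation mentioned in the bias auditing point above, here is a minimal sketch in Python, assuming scikit-learn and a hypothetical demographic group label for each patient; in practice the metrics and subgroups would come from your verification and validation protocol:

```python
# Minimal sketch: report recall and precision per demographic group so that
# performance gaps between groups become visible. All data here is toy data.
import numpy as np
from sklearn.metrics import precision_score, recall_score

def audit_by_group(y_true, y_pred, groups):
    """Compute per-group sample count, recall and precision."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[g] = {
            "n": int(mask.sum()),
            "recall": recall_score(y_true[mask], y_pred[mask], zero_division=0),
            "precision": precision_score(y_true[mask], y_pred[mask], zero_division=0),
        }
    return report

# Toy example: the model misses far more true positives in group "B"
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "A", "B", "A"])

for group, metrics in audit_by_group(y_true, y_pred, groups).items():
    print(group, metrics)
```

A large recall gap between groups, like the one between "A" and "B" here, is exactly the kind of signal that should trigger investigation before deployment.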

How to manage AI bias in healthcare

AI has the potential to revolutionize healthcare by improving diagnosis, treatment and patient care. But for these technologies to be truly beneficial, it’s essential to address the issue of AI bias in healthcare.

Bias in AI systems can perpetuate existing healthcare disparities and harm vulnerable populations. To learn how to address and solve this, watch our on-demand webinar on managing bias in AI-enabled medical devices.

Antonina Burlachenko
Head of Quality and Regulatory Consulting, Healthcare

Antonina is the Head of Quality and Regulatory Consulting at Star, with expertise in medical device regulations, software development lifecycle, quality assurance, project management, and product management. She is a certified lead auditor for ISO 13485 and ISO 27001 and supports our clients in regulatory strategy definition, QMS and ISMS implementation and certification.
