Holy grail or Achilles heel?
Digital healthcare executives see both opportunity and risk in AI and, despite concerns about trust and bias, are proceeding with caution.
The combination of biased human decisions and institutional datasets heightens the risk of algorithmic bias. For instance, race, ethnicity, and socioeconomic status already influence health outcomes because of institutional biases[5]. If an algorithm results in poorer outcomes for these groups, it's challenging to determine whether the bias was already present in the data, was introduced by the algorithm, or both.
Deep learning algorithms offer benefits but pose substantial risks. Their "black box" nature makes it hard to understand how AI arrives at its conclusions, complicating bias identification and correction.
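Even when a model's internals are opaque, its outputs can still be audited for disparity. The sketch below is a minimal illustration, not a prescribed method, assuming hypothetical patient records with illustrative column names (group, y_true, y_pred). It compares the disparity already present in the recorded outcomes with the disparity in the model's decisions, the kind of comparison that helps separate pre-existing bias from bias the algorithm adds.

```python
# Minimal sketch of a post-hoc bias audit on a "black box" model's outputs.
# Column names (group, y_true, y_pred) are illustrative, not from any specific system.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g. 'flagged for follow-up care') per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group rate (1.0 = parity)."""
    return rates.min() / rates.max()

# Hypothetical audit data: one row per patient.
audit = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B", "B", "B"],
    "y_true": [1, 0, 1, 1, 0, 1, 1, 0],   # outcome observed in the data
    "y_pred": [1, 0, 1, 0, 0, 1, 0, 0],   # black-box model's decision
})

# Compare disparity in the model's decisions with disparity already in the data:
# a lower ratio for the model's decisions suggests the algorithm is amplifying bias.
data_rates  = selection_rates(audit, "group", "y_true")
model_rates = selection_rates(audit, "group", "y_pred")
print(f"Baseline disparity (data):   {disparate_impact_ratio(data_rates):.2f}")
print(f"Model disparity (decisions): {disparate_impact_ratio(model_rates):.2f}")
```

In US employment law, a disparate impact ratio below roughly 0.8 (the "four-fifths rule") is often treated as a warning sign; a similar threshold could serve as one trigger for deeper review of a clinical model, though no single metric settles whether a system is fair.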
As legislators increasingly recognise AI bias, companies must exercise caution. To what extent can AI be trusted?
In the US, support is growing for the Algorithmic Accountability Act[3], which would require companies to assess their AI for unfair, biased, or discriminatory outputs. Similar regulations are being proposed in Europe and China[4].