[ This blog is dedicated to tracking my most recent publications. Subscribe to the feed to keep up with all the science stories I write! ]
For SIAM News:
Medical care routinely involves life-or-death decisions, the allocation of expensive or rare resources, and ongoing management of real people’s health. Mistakes can be costly or even deadly, and healthcare professionals—as human beings themselves—are prone to the same biases and bigotries as the general population.
For this reason, medical centers in many countries are beginning to incorporate artificial intelligence (AI) into their practices. After all, computers in the abstract are not subject to the same foibles as humanity. In practice, however, medical AI perpetuates many of the biases already present in the healthcare system, particularly disparities in diagnosis and treatment (see Figure 1).
“Everyone knows that biased data can lead to biased output,” Ravi Parikh, an oncologist at the University of Pennsylvania, said. “The issue in healthcare is that the decision points are such high stakes. When you talk about AI, you’re talking about how to deploy resources that could reduce morbidity, keep patients out of the hospital, and save someone’s life. That’s why bias in healthcare AI is arguably one of the most important and consequential aspects of AI.”