Teaching AI to “Do No Harm”

[ This blog is dedicated to tracking my most recent publications. Subscribe to the feed to keep up with all the science stories I write! ]

Is There an Artificial Intelligence in the House?

For SIAM News:

Medical care routinely involves life-or-death decisions, the allocation of expensive or rare resources, and ongoing management of real people’s health. Mistakes can be costly or even deadly, and healthcare professionals—as human beings themselves—are prone to the same biases and bigotries as the general population.

For this reason, medical centers in many countries are beginning to incorporate artificial intelligence (AI) into their practices. After all, computers in the abstract are not subject to the same foibles as humanity. In practice, however, medical AI perpetuates many of the biases already present in the healthcare system, particularly disparities in diagnosis and treatment.

“Everyone knows that biased data can lead to biased output,” Ravi Parikh, an oncologist at the University of Pennsylvania, said. “The issue in healthcare is that the decision points are such high stakes. When you talk about AI, you’re talking about how to deploy resources that could reduce morbidity, keep patients out of the hospital, and save someone’s life. That’s why bias in healthcare AI is arguably one of the most important and consequential aspects of AI.”

[ Read the rest at SIAM News ]
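
To make Parikh's "biased data can lead to biased output" point concrete, here is a minimal toy sketch. It is entirely my own illustration, not anything from the article: it assumes a hypothetical referral dataset in which two patient groups have identical underlying need, but one group was historically referred to care less often. A model trained on those labels inherits the disparity.

```python
# Toy illustration (mine, not from the article): a model trained on biased
# labels reproduces the bias, even when both groups have identical need.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)   # two hypothetical patient groups, 0 and 1
need = rng.normal(size=n)       # true severity: same distribution for both

# Historical referrals under-served group 1 at equal need (the assumed bias).
logit = need - 1.0 * group
referred = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([need, group])
model = LogisticRegression().fit(X, referred)

# Probe two patients with identical need but different group membership.
probe = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(probe)[:, 1])  # group 1 scores lower: bias in, bias out
```

Nothing in the model is malicious; it simply learns the negative weight on group membership that the historical labels encode, which is exactly the mechanism the article describes.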

The Threat of AI Comes from Inside the House

My other SIAM News contributions are necessarily math-focused. This one is a bit different: a review of a very good and funny popular-science book about machine learning and its failures.


The Threat of AI Comes from Inside the House

For SIAM News:

Artificial intelligence (AI) will either destroy us or save us, depending on whom you ask. Self-driving cars might soon be everywhere, if we can prevent them from running over pedestrians. Public cameras with automated face recognition technology will either avert crime or create inescapable police states. Some tech billionaires are even investing in projects that aim to determine whether we are enslaved by computers in some type of Matrix-style simulation.

In reality, the truest dangers of AI arise from the people creating it. In her new book, You Look Like a Thing and I Love You, Janelle Shane describes how machine learning is often good at narrowly defined tasks but usually fails at open-ended problems.

Shane, who holds degrees in physics and electrical engineering, observes that we expect computers to outperform humans precisely in the areas where humans often fail. This seems unreasonable, considering that we are the ones teaching the machines how to do their jobs. Problems in AI often stem from these very human failings.

[ Read the rest at SIAM News ]
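
As a toy version of Shane's narrow-versus-open-ended point, and again my own sketch rather than an example from the book: a flexible model can track its narrow training regime almost perfectly and still produce nonsense the moment the problem extends beyond it.

```python
# Toy illustration (mine, not from the book): a flexible model aces its narrow
# training regime but produces garbage once the problem opens up.
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.uniform(0, 1, 200)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 200)

coeffs = np.polyfit(x_train, y_train, deg=9)  # fits the narrow task very well

inside = np.polyval(coeffs, np.linspace(0.1, 0.9, 5))   # the training regime
outside = np.polyval(coeffs, np.linspace(1.5, 2.0, 5))  # beyond the task it was taught
print(inside)    # close to sin(2*pi*x): small, sensible values
print(outside)   # explodes into large, meaningless numbers
```

The model was never told anything about the world outside [0, 1], so it has no basis for behaving sensibly there, which is a miniature version of the failure mode Shane catalogs.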