AI art is popular, but is it ethical?

The linked article is for SIAM News, the magazine for members of the Society for Industrial and Applied Mathematics (SIAM). Although the magazine's main audience is professional mathematicians, this article contains no mathematics whatsoever. It does, however, contain possibly the worst pun I have ever contributed to a published article.

[ This blog is dedicated to tracking my most recent publications. Subscribe to the feed to keep up with all the science stories I write! ]

The Ethics of Artificial Intelligence-Generated Art

For SIAM News:

In recent months, many people have begun to explore a new pastime: generating their own images using several widely distributed programs such as DALL-E, Midjourney, and Stable Diffusion. These programs offer a straightforward interface wherein nontechnical users can input a descriptive phrase and receive corresponding pictures, or at least amusingly bad approximations of the results they intended. For most users, such artificial intelligence (AI)-generated art is harmless fun that requires no computer graphics skills to produce and is suitable for social media posts (see Figure 1).

However, AI algorithms combine aspects of existing data to generate their outputs. DALL-E, Stable Diffusion, and other popular programs pull images directly from the internet to train their algorithms. Though these images might be easily obtainable—from the huge Google Images database, for example—the creators have not always licensed their art for reuse or use in the production of derivative works. In other words, while publications like SIAM News obtain permission before disseminating restricted-license images, popular AI algorithms do not distinguish between pictures that are freely usable and those that are not.

Read the rest at SIAM News

Teaching AI to “Do No Harm”

[ This blog is dedicated to tracking my most recent publications. Subscribe to the feed to keep up with all the science stories I write! ]

Is There an Artificial Intelligence in the House?

For SIAM News:

Medical care routinely involves life-or-death decisions, the allocation of expensive or rare resources, and ongoing management of real people’s health. Mistakes can be costly or even deadly, and healthcare professionals—as human beings themselves—are prone to the same biases and bigotries as the general population.

For this reason, medical centers in many countries are beginning to incorporate artificial intelligence (AI) into their practices. After all, computers in the abstract are not subject to the same foibles as humanity. In practice, however, medical AI perpetuates many of the same biases that are present in the system, particularly in terms of disparities in diagnosis and treatment (see Figure 1).

“Everyone knows that biased data can lead to biased output,” Ravi Parikh, an oncologist at the University of Pennsylvania, said. “The issue in healthcare is that the decision points are such high stakes. When you talk about AI, you’re talking about how to deploy resources that could reduce morbidity, keep patients out of the hospital, and save someone’s life. That’s why bias in healthcare AI is arguably one of the most important and consequential aspects of AI.”

[ Read the rest at SIAM News ]

The threat of AI comes from inside the house

My other SIAM News contributions are necessarily math-focused. This one is a bit different: a review of a very good and funny popular-science book about machine learning and its failures.

[ This blog is dedicated to tracking my most recent publications. Subscribe to the feed to keep up with all the science stories I write! ]

The Threat of AI Comes from Inside the House

For SIAM News:

Artificial intelligence (AI) will either destroy us or save us, depending on who you ask. Self-driving cars might soon be everywhere, if we can prevent them from running over pedestrians. Public cameras with automated face recognition technology will either avert crime or create inescapable police states. Some tech billionaires are even investing in projects that aim to determine if we are enslaved by computers in some type of Matrix-style simulation.

In reality, the truest dangers of AI arise from the people creating it. In her new book, You Look Like a Thing and I Love You, Janelle Shane describes how machine learning is often good at narrowly defined tasks but usually fails for open-ended problems.

Shane—who holds degrees in physics and electrical engineering—observes that we expect computers to be better than humans in areas where the latter often fail. This seems unreasonable, considering that we are the ones teaching the machines how to do their jobs. Problems in AI often stem from these very human failings.

[Read the rest at SIAM News…]