The algorithms carry out an array of crucial tasks: helping emergency rooms nationwide triage patients, predicting who will develop diabetes, and flagging patients who need more help to manage their medical conditions.

But instead of making health care delivery more objective and precise, a new report finds, these algorithms — some of which have been in use for many years — are often making it more biased along racial and economic lines.

Researchers at the University of Chicago found that pervasive algorithmic bias is infecting countless daily decisions about how patients are treated by hospitals, insurers, and other businesses. Their report points to a gaping hole in oversight that is allowing deeply flawed products to seep into care with little or no vetting, in some cases perpetuating inequitable treatment for more than a decade before being discovered.
