
Mount Sinai AI tool detects bias in health datasets: Study
Researchers at the Icahn School of Medicine at Mount Sinai, part of New York City-based Mount Sinai Health System, have developed AEquity, a tool designed to detect and mitigate bias in the datasets used to train machine-learning algorithms.
The team tested AEquity on multiple health data sources, including medical images, patient records and the National Health and Nutrition Examination Survey. The tool identified both well-known and previously overlooked biases; the findings were published Sept. 4 in the Journal of Medical Internet Research.
AI tools are increasingly used in healthcare for diagnostics and risk prediction, but flawed or imbalanced data can perpetuate disparities in patient care, according to a Sept. 4 news release from the health system. AEquity can assess inputs such as lab results and imaging, as well as outputs such as diagnoses and risk scores. The research team said the tool is compatible with a wide range of machine-learning models, including the architectures that underpin large language models.
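To give a concrete sense of what a dataset-level bias check can look like, here is a minimal, hypothetical sketch in Python. It is not the AEquity algorithm; the function name, column names, and synthetic data are assumptions made purely for illustration. It simply compares how often a positive label appears in each demographic group, the kind of imbalance that can lead a trained model to perform unevenly across groups.

```python
# Illustrative sketch only: a generic dataset-bias audit, NOT the AEquity method.
# All names (audit_label_prevalence, "group", "diagnosis") are hypothetical.
import pandas as pd

def audit_label_prevalence(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Compare how often the positive label appears in each demographic group.

    Large gaps in prevalence or sample size can signal that a model trained on
    this data may learn different error rates for different groups.
    """
    summary = (
        df.groupby(group_col)[label_col]
        .agg(n="size", positive_rate="mean")
        .reset_index()
    )
    # Difference between each group's positive-label rate and the overall rate.
    summary["gap_vs_overall"] = summary["positive_rate"] - df[label_col].mean()
    return summary

if __name__ == "__main__":
    # Synthetic example: the positive label is recorded far less often for
    # group B, the kind of imbalance a dataset audit is meant to surface.
    data = pd.DataFrame({
        "group": ["A"] * 800 + ["B"] * 200,
        "diagnosis": [1] * 240 + [0] * 560 + [1] * 20 + [0] * 180,
    })
    print(audit_label_prevalence(data, group_col="group", label_col="diagnosis"))
```

In this toy dataset the positive-label rate is 30% for group A but only 10% for group B, a gap a reviewer would want to investigate before training a diagnostic model on the data.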