Uncovering ethical biases in publicly available fetal ultrasound datasets

  • ITGH
  • Jul 1
  • 1 min read

By Maria Chiara Fiorentino, Sara Moccia, Mariachiara Di Cosmo, Emanuele Frontoni, Benedetta Giovanola & Simona Tiribelli

Abstract
We explore biases present in publicly available fetal ultrasound (US) imaging datasets, currently at the disposal of researchers to train deep learning (DL) algorithms for prenatal diagnostics. As DL increasingly permeates the field of medical imaging, the urgency to critically evaluate the fairness of benchmark public datasets used to train them grows. Our thorough investigation reveals a multifaceted bias problem, encompassing issues such as lack of demographic representativeness, limited diversity in clinical conditions depicted, and variability in US technology used across datasets. We argue that these biases may significantly influence DL model performance, which may lead to inequities in healthcare outcomes. To address these challenges, we recommend a multilayered approach. This includes promoting practices that ensure data inclusivity, such as diversifying data sources and populations, and refining model strategies to better account for population variances. These steps will enhance the trustworthiness of DL algorithms in fetal US analysis.
Continue reading in Nature

