
Multi-disciplinary fairness considerations in machine learning for clinical trials / Chien, Isabel; Deliu, Nina; Turner, Richard; Weller, Adrian; Villar, Sofia; Kilbertus, Niki. - (2022), pp. 906-924. (Paper presented at FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, held in Seoul, South Korea) [10.1145/3531146.3533154].

Multi-disciplinary fairness considerations in machine learning for clinical trials

Nina Deliu (Methodology)
2022

Abstract

While interest in the application of machine learning to improve healthcare has grown tremendously in recent years, a number of barriers prevent deployment in medical practice. A notable concern is the potential to exacerbate entrenched biases and existing health disparities in society. The area of fairness in machine learning seeks to address these issues of equity; however, appropriate approaches are context-dependent, necessitating domain-specific consideration. We focus on clinical trials, i.e., research studies conducted on humans to evaluate medical treatments. Clinical trials are a relatively under-explored application in machine learning for healthcare, in part due to complex ethical, legal, and regulatory requirements and high costs. Our aim is to provide a multi-disciplinary assessment of how fairness for machine learning fits into the context of clinical trials research and practice. We start by reviewing the current ethical considerations and guidelines for clinical trials and examine their relationship with common definitions of fairness in machine learning. We examine potential sources of unfairness in clinical trials, providing concrete examples, and discuss the role machine learning might play in either mitigating potential biases or exacerbating them when applied without care. Particular focus is given to adaptive clinical trials, which may employ machine learning. Finally, we highlight concepts that require further investigation and development, and emphasize new approaches to fairness that may be relevant to the design of clinical trials.
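As a point of reference for the sentence above on adaptive clinical trials, the following is a minimal sketch, not taken from the paper, of one way a machine-learning-style allocation rule can enter an adaptive design: Thompson sampling for a two-arm trial with binary outcomes. The priors, response rates, and enrolment size below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-arm trial with binary outcomes; Beta(1, 1) priors
# (pseudo-counts of one success and one failure per arm).
successes = np.ones(2)
failures = np.ones(2)

# Assumed true response rates, used only to simulate patient outcomes.
true_rates = np.array([0.30, 0.45])

for _ in range(200):  # hypothetical enrolment of 200 patients
    # Thompson sampling: draw a candidate success rate for each arm from
    # its Beta posterior and allocate the next patient to the arm with
    # the larger draw, so allocation drifts toward the better-performing arm.
    draws = rng.beta(successes, failures)
    arm = int(np.argmax(draws))

    # Observe the simulated outcome and update that arm's posterior.
    outcome = rng.random() < true_rates[arm]
    successes[arm] += outcome
    failures[arm] += 1 - outcome

print("posterior mean response rates:", successes / (successes + failures))
print("patients allocated per arm:", (successes + failures - 2).astype(int))
```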
2022
FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency
clinical trials; adaptive clinical trials; health informatics; machine learning for healthcare; fairness in machine learning
04 Publication in conference proceedings::04b Conference paper in a volume
Files attached to this item

File: Deliu_Fairness_2022.pdf.pdf
Access: open access
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 771.8 kB
Format: Adobe PDF

File: Deliu_Frontespizio_2022.pdf.pdf
Access: open access
Note: Title page and index of contributions
Type: Other attached material
License: Creative Commons
Size: 648.53 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1653956
Citations
  • PubMed Central: not available
  • Scopus: 12
  • Web of Science: not available