Evaluating Fairness Criteria That Address Disparities in Diabetes

Chunlei Tang, PhD
Department of Medicine
Division of General Internal Medicine and Primary Care
Poster Overview

Objective: We aim to compare the performance of different fairness criteria on their ability to identify disparities in temporal modeling of minority populations with diabetes.

Methods: Our corpus includes a population of 459,280 diabetes patients who visited Mass General Brigham between July 1st, 2017, and June 30th, 2018. We employed a delayed model to evaluate how static fairness criteria interact with temporal factors of diabetes, including biological and clinical indicators. As a baseline for comparison, we used an unconstrained one-step feedback model that quantifies the long-term impact of classification on different groups in the population. We characterized the delayed impact of these fairness criteria and identified where they exhibit qualitatively different behavior.
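The one-step feedback idea described above can be sketched as a toy model. This is purely illustrative, not the study's implementation: the score values, the `gain`/`penalty` parameters, and the assumption that success probability equals the risk score are all hypothetical.

```python
import numpy as np

def delayed_impact(scores, selection_rate, gain=1.0, penalty=-2.0):
    """Expected one-step change in a group's mean score.

    scores: individual scores in [0, 1] (e.g., a health indicator proxy)
    selection_rate: fraction of the group selected (treated) by the classifier
    gain / penalty: score change on successful vs. unsuccessful treatment

    Assumption (hypothetical): the probability that treatment succeeds
    equals the individual's score.
    """
    scores = np.asarray(scores, dtype=float)
    # Select the top-scoring fraction of the group.
    k = int(round(selection_rate * len(scores)))
    p = np.sort(scores)[::-1][:k]  # success probabilities of the selected
    # Expected change, averaged over the whole group.
    return (p * gain + (1.0 - p) * penalty).sum() / len(scores)

group = np.array([0.9, 0.8, 0.6, 0.4, 0.2])
delayed_impact(group, selection_rate=0.4)  # positive: improvement
delayed_impact(group, selection_rate=1.0)  # negative: decline
```

A moderate selection rate treats only likely successes and raises the group mean, while selecting everyone includes individuals likely to fail treatment and lowers it, which is how the same feedback mechanism can yield improvement, stagnation, or decline in different parameter regimes.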

Results: Both fairness criteria can lead to all possible outcomes (i.e., improvement, stagnation, and decline) in natural parameter regimes. Our results demonstrate that the fairness-constrained comparator does not promote improvement over time and may cause harm in cases where an unconstrained objective would not.

Conclusions: The delayed model can help foresee the impact a fairness criterion would have if enforced as a constraint in a classification system.

Scientific Abstract

Background: Population disparities in diabetes and its complications exist worldwide. Fairness in machine learning has predominantly been studied in static classification settings on synthetic corpora, correcting for negative impacts on certain groups. However, there is growing concern about how classification decisions change the underlying population over time.

Objective: We aim to compare the performance of different fairness criteria on their ability to identify disparities in temporal modeling of minority populations with diabetes.

Methods: Our corpus includes a population of 459,280 diabetes patients who visited Mass General Brigham between July 1st, 2017, and June 30th, 2018. We employed a delayed model to evaluate how static fairness criteria interact with temporal factors of diabetes, including biological and clinical indicators. This was compared to an unconstrained one-step feedback model that served as our baseline.

Results: Both fairness criteria can lead to all possible outcomes (i.e., improvement, stagnation, and decline) in natural parameter regimes. Our results demonstrate that the fairness-constrained comparator does not promote improvement over time and may cause harm in cases where an unconstrained objective would not.
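One way to see how a fairness constraint can cause harm that an unconstrained objective would not is a two-group toy comparison. All numbers below are assumed for illustration and are not drawn from the study data; the constraint shown is an equal-selection-rate (demographic-parity-style) rule, named here only as an example of a static criterion.

```python
import numpy as np

def group_change(scores, k, gain=1.0, penalty=-2.0):
    """Expected mean-score change when the top-k scorers are selected.

    Success probability is assumed (hypothetically) to equal the score;
    success adds `gain` to the score, failure adds `penalty`.
    """
    p = np.sort(np.asarray(scores, dtype=float))[::-1][:k]
    return (p * gain + (1.0 - p) * penalty).sum() / len(scores)

# Hypothetical score distributions for two groups.
majority = np.array([0.9, 0.85, 0.8, 0.75, 0.7])
minority = np.array([0.5, 0.4, 0.3, 0.2, 0.1])

# Unconstrained policy: treat only individuals scoring at least 0.6.
threshold = 0.6
unconstrained = group_change(minority, k=int((minority >= threshold).sum()))

# Equal-selection-rate policy: match the majority's selection rate.
majority_rate = (majority >= threshold).mean()  # 1.0 in this toy example
parity = group_change(minority, k=int(round(majority_rate * len(minority))))

# The unconstrained policy leaves the minority group unchanged (stagnation),
# while forcing equal selection rates lowers its mean score (harm).
```

In this regime the unconstrained objective selects no minority members (no improvement, but no harm), whereas the constrained policy over-selects individuals unlikely to benefit and drives the group mean down.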

Conclusions: The delayed model can help foresee the impact a fairness criterion would have if enforced as a constraint in a classification system.

Clinical Implications
Fairness in machine learning has predominantly been studied in static classification settings on synthetic corpora, correcting for negative impacts on certain groups. Addressing how classification decisions change the underlying population over time is therefore clinically meaningful.
Authors
Chunlei Tang, Joseph M. Plasek, Li Zhou, David W. Bates
Principal Investigator
David W. Bates
