Background: Machine learning (ML)-based clinical decision support tools (CDSTs) make personalized predictions for different treatments; by comparing predictions across multiple treatments, these tools can be used to optimize decision making for a particular patient. However, CDST prediction accuracy varies across patients and across treatment options. If these differences are sufficiently large and consistent for a particular subcohort of patients, that bias may result in those patients not receiving a particular treatment. Such a level of bias would render the CDST “unfair.” The purpose of this study was to evaluate the fairness of ML CDST predictions of clinical outcomes after anatomic total shoulder arthroplasty (aTSA) and reverse total shoulder arthroplasty (rTSA) for patients of different demographic attributes.
Methods: Clinical data from 8280 shoulder arthroplasty patients with 19,249 postoperative visits were used to evaluate the prediction fairness and accuracy associated with the following patient demographic attributes: ethnicity, sex, and age at the time of surgery. Performance of clinical outcome and range of motion regression predictions was quantified by the mean absolute error (MAE), and performance of minimal clinically important difference (MCID) and substantial clinical benefit (SCB) classification predictions was quantified by accuracy, sensitivity, and the F1 score. Fairness of classification predictions was assessed using the “four-fifths” legal guideline from the US Equal Employment Opportunity Commission, and fairness of regression predictions was assessed using established MCID thresholds associated with each outcome measure.
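To make the two fairness criteria concrete, the following Python sketch illustrates how each check could be implemented. The function names, the compared metrics (favorable-prediction rate for classification, MAE for regression), the symmetric 0.8–1.25 ratio band, and the example MCID value are illustrative assumptions, not the study's actual implementation.

```python
# Minimal sketch of the two fairness checks described above; all names and
# example values are illustrative assumptions, not the study's implementation.

def classification_is_fair(subgroup_rate: float, reference_rate: float) -> bool:
    """Four-fifths rule: a subgroup's favorable-prediction rate should be
    within 20% of the reference group's rate, i.e., a ratio in [0.8, 1.25]
    (assuming the 20% boundary is applied symmetrically)."""
    if reference_rate == 0.0:
        return subgroup_rate == 0.0
    ratio = subgroup_rate / reference_rate
    return 0.8 <= ratio <= 1.25


def regression_is_fair(subgroup_mae: float, reference_mae: float,
                       mcid: float) -> bool:
    """MCID-based check: the gap in mean absolute error between a subgroup
    and the reference group should not exceed the outcome measure's MCID."""
    return abs(subgroup_mae - reference_mae) <= mcid


# Hypothetical example for an ASES-score prediction (an ASES MCID near
# 13.6 points is commonly cited; the value is used here only for illustration).
print(classification_is_fair(subgroup_rate=0.78, reference_rate=0.85))      # True
print(regression_is_fair(subgroup_mae=11.2, reference_mae=9.8, mcid=13.6))  # True
```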
Results: For both aTSA and rTSA clinical outcome predictions, only minor differences in MAE were observed between patients of different ethnicity, sex, and age. Evaluation of prediction fairness demonstrated that 0 of 486 MCID (0%) and only 3 of 486 SCB (0.6%) classification predictions were outside the 20% fairness boundary, and only 14 of 972 (1.4%) regression predictions were outside the MCID fairness boundary. Hispanic and Black patients were more likely to have ML predictions out of fairness tolerance for aTSA and rTSA. Additionally, patients <60 years old were more likely to have ML predictions out of fairness tolerance for rTSA. No disparate predictions were identified for sex, and no disparate regression predictions were observed for forward elevation, internal rotation score, American Shoulder and Elbow Surgeons Standardized Shoulder Assessment Form score, or global shoulder function.
Conclusion: The ML algorithms analyzed in this study accurately predict clinical outcomes after aTSA and rTSA for patients of different ethnicity, sex, and age: using the proposed fairness evaluation method and acceptance criteria, only 0.3% of classification predictions and 1.4% of regression predictions were out of fairness tolerance. Future work is required to externally validate these ML algorithms to ensure they are equally accurate for all legally protected patient groups.