
An innovative artificial intelligence-based method to compress complex models into explainable, model-agnostic and reduced decision support systems with application to healthcare (NEAR)

Sperti, Michela (co-first); Fassino, Davide; Gallone, Guglielmo; De Filippo, Ovidio; Iannaccone, Mario; D'Ascenzo, Fabrizio; De Ferrari, Gaetano Maria
2024-01-01

Abstract

Background and objective: In everyday clinical practice, medical decisions are currently based on clinical guidelines, which are often static and rigid and do not account for population variability; individualized, patient-oriented decisions and treatments are the paradigm shift needed to enter the era of precision medicine. Most of the limitations of a guideline-based system could be overcome through the adoption of Clinical Decision Support Systems (CDSSs) based on Artificial Intelligence (AI) algorithms. However, the black-box nature of AI algorithms has hampered the broad adoption of AI-based CDSSs in clinical practice. In this study, an innovative AI-based method to compress AI-based prediction models into explainable, model-agnostic, and reduced decision support systems (NEAR), with application to healthcare, is presented and validated. Methods: NEAR is based on the Shapley Additive Explanations framework and can be applied to complex input models to obtain the contribution of each input feature to the output. Technically, the simplified NEAR models approximate the contributions of the input features using a custom library and merge them to determine the final output. Finally, NEAR estimates the confidence error associated with each input feature's contribution to the final score, making the result more interpretable. Here, NEAR is evaluated on a real-world clinical use case, mortality prediction in patients who experienced Acute Coronary Syndrome (ACS), applying three different Machine Learning/Deep Learning models as implementation examples. Results: When applied to the ACS use case, NEAR exhibits performance comparable to that of the AI-based model from which it is derived, as in the case of the Adaptive Boosting classifier, whose Area Under the Curve is not statistically different from NEAR's, despite the model's simplification.
Moreover, NEAR comes with intrinsic explainability and modularity and can be tested on the developed web application platform (https://near-dashboard.pythonanywhere.com/). Conclusions: An explainable and reliable CDSS tailored to single-patient analysis has been developed. The proposed AI-based system has the potential to be used alongside the clinical guidelines currently employed in medical settings, making them more personalized and dynamic, and to assist doctors in making their everyday clinical decisions.
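The abstract states that NEAR builds on the Shapley Additive Explanations framework, approximating per-feature contributions and merging them into the final output. The following is a minimal illustrative sketch (not the NEAR implementation, whose custom library is not described here) of the additive property this relies on, using a toy linear risk model for which exact Shapley values have a closed form:

```python
# Sketch of SHAP additivity: a model's output decomposes into a base
# value (expected output over a background dataset) plus one additive
# contribution per input feature. All data and weights are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))      # background data: 100 samples, 3 features
w = np.array([0.5, -1.2, 2.0])     # toy linear "risk model" weights

def model(x):
    return x @ w                   # f(x) = w . x

x = np.array([1.0, 0.3, -0.7])     # single patient to explain
mean = X.mean(axis=0)
base_value = model(mean)           # E[f(X)] for a linear model
contributions = w * (x - mean)     # exact Shapley values in the linear case

# Additivity: base value + per-feature contributions recover the output.
assert np.isclose(base_value + contributions.sum(), model(x))
```

For non-linear models (e.g. the boosting or deep-learning classifiers mentioned above), the contributions would instead be estimated numerically, but the same additive reconstruction of the output holds.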
2024
151
1
10
Artificial intelligence; Clinical decision support systems; Explainability; Precision medicine; Risk prediction
Kassem, Karim; Sperti, Michela; Cavallo, Andrea; Vergani, Andrea Mario; Fassino, Davide; Moz, Monica; Liscio, Alessandro; Banali, Riccardo; Dahlweid, ...
File in this record:
1-s2.0-S0933365724000836-main.pdf — Publisher's PDF (Adobe PDF, 4.57 MB) — restricted access

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/2007455
Citations
  • PubMed Central: 0
  • Scopus: 0
  • Web of Science: 0