Explainable Time-to-Event Predictions in Multiple Sclerosis

Date
2025
Publisher
Elsevier Ireland Ltd
Open Access Color
Green Open Access
No
Publicly Funded
No
Abstract
Background: Prognostic machine learning research in multiple sclerosis has mainly focused on black-box models that predict whether a patient's disability will progress within a fixed number of years. Because this is a binary yes/no question, it cannot account for individual disease severity. In this work we therefore propose to model the time to disease progression instead. Additionally, we use explainable machine learning techniques to make the model outputs more interpretable.
Methods: A preprocessed subset of 29,201 patients from the international data registry MSBase was used. Disability was assessed with the Expanded Disability Status Scale (EDSS). We predict the time to significant and confirmed disability progression using random survival forests, a machine learning model for survival analysis. Performance is evaluated with the time-dependent area under the receiver operating characteristic curve (AUROC) and the precision-recall curve. Predictions are then explained using SHAP and Bellatrex, two explainability toolboxes, yielding both global (population-wide) and local (patient visit-specific) insights.
Results: On the task of predicting progression within 2 years, the random survival forest achieves state-of-the-art performance, comparable to previous work employing a random forest. The random survival forest has the added advantage of predicting progression over a longer time horizon, with AUROC > 60% for the first 10 years after baseline. Explainability techniques further validated the model by extracting clinically plausible insights from its predictions. For example, a clear decline in the per-visit probability of progression is observed from 2012 onwards, likely reflecting the globally increasing use of more effective MS therapies.
Conclusion: The binary classification models found in the literature can be extended to a time-to-event setting without loss of performance, allowing a more comprehensive prediction of patient prognosis. Furthermore, explainability techniques proved key to better understanding the model and validating its behaviour.
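The abstract describes a random survival forest evaluated with a time-dependent AUROC and explained with SHAP. As a rough illustration of that pipeline (not the authors' code), the sketch below uses scikit-survival's RandomSurvivalForest and cumulative_dynamic_auc together with SHAP's model-agnostic explainer on synthetic data; the feature names, the data, and the library choices are assumptions, since the paper's implementation is not reproduced on this page. Bellatrex, the second explainability toolbox mentioned in the abstract, is not sketched here.

```python
# A minimal sketch of a time-to-progression pipeline, assuming scikit-survival
# and SHAP; feature names and data below are illustrative only.
import numpy as np
import pandas as pd
import shap
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import cumulative_dynamic_auc
from sksurv.util import Surv

rng = np.random.default_rng(0)

# Illustrative per-visit features; the study uses EDSS-based covariates from MSBase.
n = 500
X = pd.DataFrame({
    "edss_baseline": rng.uniform(0, 7, n),
    "age": rng.uniform(18, 70, n),
    "disease_duration": rng.uniform(0, 30, n),
})
time_to_progression = rng.exponential(5, n)   # years until confirmed progression
progressed = rng.random(n) < 0.6              # False = censored visit
y = Surv.from_arrays(event=progressed, time=time_to_progression)

X_train, X_test = X.iloc[:400], X.iloc[400:]
y_train, y_test = y[:400], y[400:]

# Random survival forest: one risk score / survival curve per visit.
rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10, random_state=0)
rsf.fit(X_train, y_train)

# Time-dependent AUROC over a 1-10 year horizon (cf. AUROC > 60% up to 10 years).
eval_times = np.arange(1, 11)
risk_scores = rsf.predict(X_test)
auc_t, mean_auc = cumulative_dynamic_auc(y_train, y_test, risk_scores, eval_times)
print(dict(zip(eval_times, auc_t.round(3))))

# Model-agnostic SHAP explanations of the risk score (local, per-visit insights).
explainer = shap.Explainer(rsf.predict, X_train.sample(100, random_state=0))
shap_values = explainer(X_test.iloc[:20])
shap.plots.bar(shap_values)  # global view; shap.plots.waterfall(shap_values[0]) for one visit
```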
Description
Roos, Izanne/0000-0003-0371-3666; Van Wijmeersch, Bart/0000-0003-0528-1545; Kalincik, Tomas/0000-0003-3778-1376; Kermode, Allan/0000-0002-4476-4016; D'Hondt, Robbe/0000-0001-7843-2178; Reddel, Stephen/0000-0002-0169-3350; Mrabet, Saloua/0000-0003-2718-1828; Lugaresi, Alessandra/0000-0003-2902-5589
Keywords
Explainable Artificial Intelligence, Survival Analysis, Multiple Sclerosis, Disability Progression, Longitudinal Data, Machine Learning, Prognosis, Disease Progression, Time Factors, ROC Curve, Humans, Male, Female
WoS Q
Q1
Scopus Q
Q1

Source
Computer Methods and Programs in Biomedicine
Volume
263
Start Page
108624
PlumX Metrics
Citations
CrossRef : 1
Scopus : 2
Captures
Mendeley Readers : 6
SCOPUS™ Citations
2
Web of Science™ Citations
1