Title: Towards Interpretable Machine Learning for Healthcare

Committee: 

Dr. Mitchell, Advisor

Dr. Romberg, Co-Advisor

Dr. Heck, Chair

Dr. Chau

Abstract: The objective of the proposed research is to develop novel interpretable algorithms that provide the explainability and accuracy required for healthcare applications. Methods that advance the state of the art in interpretable algorithms are developed for biomedical text, time-series, and tabular data. The first aim develops a new algorithm that improves the interpretability of aggregate effect sizes from clinical cohort studies for clinical question answering and meta-analysis. Published clinical cohort journal articles are ranked and summarized, and the key elements (patient population, intervention, and outcome) are extracted. Results are displayed in a user-friendly interface that expedites interpretation by a factor of 660 compared to traditional processes. The second aim develops novel algorithms that accurately interpret complex time-series data for sleep staging, epileptic seizure detection, and multifactorial amyotrophic lateral sclerosis disease dynamics. Two generalizable methods are developed that overlay clinical domain features on the latent space of deep neural networks to provide interpretation. Results match the high accuracy of black-box deep learning while providing clinically interpretable diagnostic classifications. The third aim evaluates interpretable machine learning algorithms for small or sparse tabular data sets associated with a rare disease. Specifically, generalizable algorithms are evaluated to predict pediatric acute leukemia relapse, decipher chemotherapy-induced infection risk, and assess alignment with clinical expectations. In summary, this research delivers an innovative set of interpretable algorithms that increases the credibility and efficacy of artificial intelligence for identifying disease causes, elucidating cures, and optimizing patient care.