Name: Sweta Parmar
Dissertation Defense Meeting
Date: Friday, December 2, 2022

Time: 3:00 PM

Location: JS Coon Room 148 or Virtual (https://gatech.zoom.us/j/91877961783)

Zoom Meeting ID: 918 7796 1783


Advisor: Rick Thomas, Ph.D. (Georgia Tech)

Dissertation Committee Members:
Sashank Varma, Ph.D. (Georgia Tech)
Karen Feigh, Ph.D. (Georgia Tech)
Elizabeth Whitaker, Ph.D. (Georgia Tech Research Institute)
Jamie Gorman, Ph.D. (Arizona State)


Title: Model Blindness: Investigating a model-based route-recommender system’s impact on decision-making under model misspecification

Abstract: Model-Based Decision Support Systems (MDSS) are prominent in many professional domains of high consequence, such as aeronautics, emergency management, military command and control, healthcare, nuclear operations, intelligence analysis, and maritime operations. An MDSS generally uses a simplified model of the task and the operator to impose structure on the decision-making situation and to provide information cues that are useful to the operator for the decision-making task. Models are simplifications; they can be misspecified and contain errors. Adoption and use of these errorful models can lead to impoverished decision-making by users. I term this impoverished state of the decision-maker model blindness. Two experiments were conducted to investigate the consequences of model blindness for human decision-making and performance, and how those consequences can be mitigated via an explainable AI (XAI) intervention. The experiments implemented a simulated route-recommender system as an MDSS with a true data-generating model (an unobservable world model). In Experiment 1, the true model generating the recommended routes was misspecified to varying degrees to impose model blindness on users. In Experiment 2, the same route-recommender system was employed with a mitigation technique to overcome the impact of model misspecification on decision-making. Overall, the results of both experiments provide little support for performance degradation due to model blindness imposed by misspecified systems. The XAI intervention provided valuable insights into how participants adjusted their decision-making to account for bias in the system and deviated from choosing the model-recommended alternatives. Participants' decision strategies revealed that they could understand model limitations from feedback and explanations and could adapt their strategies to account for those misspecifications. The results provide strong support for evaluating the role of decision strategies in the model blindness confluence model. These results help establish a need for carefully evaluating model blindness during the development, implementation, and usage stages of an MDSS.