Title: Human-centered Explainable AI 


Date: September 21, 2023 

Time: 4:00pm – 6:00pm EDT 

Location (fully virtual): Zoom Link | Meeting ID: 890 1760 7101 Passcode: 860583 



Upol Ehsan 

PhD Student in Computer Science 

School of Interactive Computing 

Georgia Institute of Technology 


Committee: 

Dr. Mark O. Riedl (advisor) – School of Interactive Computing, Georgia Institute of Technology 

Dr. Munmun De Choudhury – School of Interactive Computing, Georgia Institute of Technology 

Dr. Sashank Varma – School of Interactive Computing and School of Psychology, Georgia Institute of Technology 

Dr. Q. Vera Liao – Microsoft Research 

Dr. Michael Muller – IBM Research 


Summary: 

If AI systems are going to inform consequential decisions such as deciding whether you should get a loan or receive an organ transplant, they must be explainable to everyone, not just software engineers. Despite commendable technical progress in “opening” the black-box of AI, the prevailing algorithm-centered Explainable AI (XAI) view overlooks a vital insight: who opens the black-box matters just as much as opening it. As a result of this blind spot, many popular XAI interventions have been ineffective and even harmful in real-world settings.  

To address the blind spot, my dissertation introduces and operationalizes Human-centered XAI (HCXAI), a holistic sociotechnical and human-centered paradigm of AI explainability. 


Thesis statement: With a focus on non-AI experts, this dissertation demonstrates how Human-centered XAI: 

  1. expands the design space of XAI by broadening the domain of non-algorithmic factors that augment AI explainability 
  2. enriches our knowledge of the importance of “who” the humans are in XAI design 
  3. enables resourceful ways to do Responsible AI by providing proactive mitigation strategies through participatory methods 

It contributes 1) conceptually: new concepts such as Social Transparency that showcase how encoding socio-organizational context can augment explainability without changing the internal model; 2) methodologically: human-centered evaluation of XAI, actionable frameworks, and participatory methods to co-design XAI systems; 3) technically: computational techniques and design artifacts; 4) empirically: findings such as how one’s AI background shapes one’s interpretation of AI explanations, how real end users perceive AI explanations, and how AI explanations can negatively impact users despite our best intentions. 


The proposed work takes a participatory approach to extending Social Transparency into Radiation Oncology, a high-stakes and complex domain. The goal is to develop Social Transparency both conceptually and practically while gaining a deeper understanding of Radiation Oncologists’ XAI needs, informing the HCXAI design of future systems. 


The dissertation expands the XAI discourse from an algorithm-centered perspective to a human-centered one. It takes a foundational step towards creating a future where anyone, regardless of their background, can interact with AI systems in an explainable, accountable, and dignified manner.