Title: Augmenting Visualizations with Contextual Knowledge for Analysis and Communication in Human-centered Machine Learning
Date: Thursday, December 8, 2022
Time: 1:00 PM - 3:00 PM EST
Location: TSRB 334 (VIS Lab)
Grace Guo
Human-centered Computing Ph.D. Student
School of Interactive Computing
Georgia Institute of Technology
Committee
Dr. Alex Endert (Advisor; School of Interactive Computing, Georgia Institute of Technology)
Dr. John Stasko (School of Interactive Computing, Georgia Institute of Technology)
Dr. Clio Andris (School of City and Regional Planning, Georgia Institute of Technology)
Dr. Jessica Roberts (School of Interactive Computing, Georgia Institute of Technology)
Dr. Bum Chul Kwon (IBM Research, Cambridge)
Abstract
Visually augmenting charts and graphs has been widely explored in visualization as a means to convey richer and more nuanced information to audiences. These augmentations have been used to support analytic tasks, guide readers through narrative stories, enhance comprehension and recall, and provide playful commentary. In particular, prior studies have demonstrated how augmentations can be an effective means for enhancing charts and graphs with contextual information, based on an analyst's domain expertise, prior knowledge, and human context. The process of visualizing contextual knowledge can thus be broken down into four steps: 1) identifying human-centered data facts, 2) specifying generation criteria, 3) visualizing relevant data, and 4) augmenting the visualization to convey the data facts.
In this proposed line of research, I aim to study how visualizations developed for human-centered machine learning (HCML) might be augmented to better incorporate contextual knowledge into the HCML process. HCML has been defined as a field of research that considers humans and machines as equally important actors in the design, training, and evaluation of co-adaptive machine learning scenarios, emphasizing the need to consider human context, user feedback, and domain knowledge when developing and evaluating machine learning systems. To this end, a large body of prior work in HCML has focused on how ML algorithms, and post-training results in particular, should be explained and visualized to users. However, many of these studies have focused on details of the ML model, emphasizing tasks such as exploring and understanding results, diagnosing errors, and refining the ML process. In contrast, it is not well understood how elements of human context, user feedback, and domain knowledge can help users make sense of ML results in relation to their existing contextual knowledge about the data domain. Visual augmentations present a potential approach for addressing this gap.
In this work, I first conducted a user-centered design elicitation study to define a design space of visual augmentations, focusing on the considerations users weigh when creating these augmentations. I next propose to explore how augmentations might be used to analyze and communicate ML model outcomes through two design studies in the domains of 1) online education and 2) causal inference.