Our research to this point has approached the problem with two main efforts. In the first, we perform user studies with multiple data series and tasks specific to their comparison. The manner of comparison (e.g., placing charts side-by-side or overlaying them) is varied within each experiment, while other factors, such as encoding, are fixed. The results of these experiments suggest a complex interaction of factors, with different comparative arrangements providing benefits for different combinations of tasks and encodings. For example, in line with prior intuition, we find that superimposing two charts makes judgment of differences easier. However, somewhat surprisingly, we find that animation is even more effective for this task. Further, while “stacking” one chart above another vertically performed poorly for judgment of differences, it was the most effective layout when the task was determining which series had the largest mean or the widest range. It is thus difficult to provide broad, practical guidance for comparisons, as has been done previously for encodings. While, in theory, all combinations of comparative layouts, encodings, tasks, etc., could be empirically tested, this would not be practical.
Our second effort thus works toward such guidance by seeking latent factors that may cut across explicit features such as arrangement and mark type. To this end, we introduce perceptual proxies: simplified visual operations, or shortcuts, that we theorize the visual system uses to perform more complex tasks. These could be, for example, choosing the series with the longest bar or with the largest convex hull area instead of computing the true mean value. We propose an initial list of candidate proxies and compare their predictions to actual user responses from our prior studies. However, because these data were not generated to discriminate proxies from true answers, they are not ideal for this task; the differences between proxy and true answers are often subtle.
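To make the idea concrete, the following is a minimal sketch (not from the studies themselves) of how two candidate proxies — the longest bar and the convex hull area of a series' points — can disagree with the true task metric (the mean). The series values and proxy definitions here are illustrative assumptions, not data from our experiments.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(values):
    """Convex hull area proxy: area enclosed by a series' (index, value) points."""
    hull = convex_hull(list(enumerate(values)))
    n = len(hull)
    # Shoelace formula over the hull vertices
    s = sum(hull[i][0] * hull[(i + 1) % n][1] - hull[(i + 1) % n][0] * hull[i][1]
            for i in range(n))
    return abs(s) / 2

# Hypothetical series: A is flat; B oscillates with a lower mean but taller peaks.
a = [5, 5, 5, 5, 5]
b = [1, 9, 1, 9, 1]

true_answer  = sum(a) / len(a) > sum(b) / len(b)   # mean picks A
longest_bar  = max(a) > max(b)                     # proxy picks B
largest_hull = hull_area(a) > hull_area(b)         # proxy picks B
```

On this pair, both proxies select series B even though series A has the larger mean — exactly the kind of divergence the proposed adversarial datasets are designed to amplify.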
In the work we propose here, we will further investigate proxies by conducting additional user studies with more adversarial datasets, which will intentionally have large differences between the task metric (e.g., mean value) and various proxy metrics (e.g., convex hull area). The hope is that these datasets can catch the visual system in the act of using these proxies. Additionally, we theorize that different individuals may apply different proxies, whether based on predisposition or on prior experience with visual analytics. We will investigate this by applying latent, generative models to participant responses, allowing multiple strategies to emerge if they exist. Ultimately, we hope to generalize findings from focused, empirical studies such that, given an intended task, we can provide designers, practitioners, and, potentially, automated systems with evidence-based recommendations for creating effective visual comparisons.
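One simple way to construct such adversarial stimuli is random search: sample candidate series pairs and keep those where the proxy ranking contradicts the task-metric ranking, preferring pairs where the contradiction is largest. The sketch below assumes a single proxy (the longest bar, i.e., max value) against the mean-value task; the function name, series length, and scoring are all illustrative choices, not the studies' actual generation procedure.

```python
import random

def adversarial_pair(n=8, trials=5000, seed=0):
    """Search for a pair of series where the 'longest bar' proxy (max value)
    and the true task metric (mean value) rank the series differently,
    returning the pair with the largest combined disagreement found."""
    rng = random.Random(seed)
    best, best_gap = None, -1.0
    for _ in range(trials):
        a = [rng.uniform(0, 10) for _ in range(n)]
        b = [rng.uniform(0, 10) for _ in range(n)]
        mean_picks_a = sum(a) / n > sum(b) / n
        proxy_picks_a = max(a) > max(b)
        if mean_picks_a != proxy_picks_a:
            # Score by how far apart the two metrics pull the series
            gap = abs(sum(a) / n - sum(b) / n) + abs(max(a) - max(b))
            if gap > best_gap:
                best, best_gap = (a, b), gap
    return best
```

A participant shown such a pair who reports the series with the taller peak, rather than the larger mean, provides evidence for the proxy; more targeted optimization (e.g., against several proxies at once) would follow the same pattern.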
Dept rep: Dr. Marine Carpuat
Members: Dr. Hector Corrada Bravo
Dr. Adam Phillippy
Brian Ondov is a third-year PhD student in Computer Science at the University of Maryland. After receiving his BS in Computer Science from Rensselaer Polytechnic Institute, Brian began his career as a game developer, working on franchises such as Guitar Hero and Spiderman. Despite the perks of the industry, he was quickly drawn into the world of research, studying Bioinformatics at the Georgia Institute of Technology and working for a government contractor. He is now pursuing a doctorate to branch out further into Computer Science, with interests including Human-Computer Interaction and Natural Language Processing. He is supported by the National Institutes of Health, where he has opportunities to collaborate with leading biological and medical researchers.