Radu Jianu (with Mershack Okoe), Florida International University
A crowd-sourcing framework for automated visualization evaluation
http://vizlab.cs.fiu.edu/graphunit/
A proof of concept: GraphUnit
GraphUnit is a web service that supports semi-automatic evaluation of graph visualizations; it leverages crowdsourcing, a library of graph tasks linked to benchmark graph data sets, automatic user-study deployment, and result collection and analysis.
If a web visualization is available, GraphUnit lets you configure and deploy an online user study of it in about 30 minutes.
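As a rough illustration of what "configure a study" could involve, the sketch below shows a hypothetical study configuration and how the service might compute the number of crowdsourced assignments it needs. All field and function names here are assumptions for illustration; GraphUnit's actual configuration format is not shown in these slides.

```javascript
// Hypothetical sketch of a GraphUnit-style study configuration.
// Every name below is an assumption, not the service's documented API.
const studyConfig = {
  visualizationUrl: "https://example.org/my-node-link-vis", // the web visualization under test
  tasks: ["node-connectivity", "common-neighbors"],         // prototypical tasks from the taxonomy
  benchmarkDataset: "benchmark-graph-100",                  // linked benchmark graph
  participantsPerCondition: 20,
  conditions: ["node-link", "matrix"],                      // alternated per the study protocol
};

// Total crowdsourced assignments the service would need to deploy:
function totalAssignments(cfg) {
  return cfg.conditions.length * cfg.participantsPerCondition * cfg.tasks.length;
}
```

With two conditions, twenty participants per condition, and two tasks, this configuration would require 80 assignments.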
Running a study with GraphUnit
Running a study with GraphUnit
Return to see intermediate and final results (charts, R statistical analyses, raw results); GraphUnit will create a unique URL where you can access them.
Fine print
GraphUnit’s ingredients
- Task taxonomy
  - Prototypical tasks: “Are two nodes connected?” (input: two nodes; answer: yes/no)
- Specific benchmark data set
  - Task instances: “Are nodes N1 and N2 connected?” (input: N1, N2; answer: yes)
- Study design choices
  - Study protocol (e.g., alternate conditions)
  - Data analysis
- Interface methods
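The relationship between a prototypical task and its concrete task instances can be sketched in code. The names below are illustrative assumptions, not GraphUnit's actual API; "connected" here means directly joined by an edge, matching the yes/no example above.

```javascript
// Hypothetical sketch: instantiating a prototypical task on a benchmark graph.
const benchmarkGraph = {
  nodes: ["N1", "N2", "N3"],
  edges: [["N1", "N2"]],
};

// Prototypical task: "Are two nodes connected?" (input: two nodes; answer: yes/no)
const connectivityTask = {
  question: (a, b) => `Are nodes ${a} and ${b} connected?`,
  // Ground truth here checks for a direct edge in either orientation.
  groundTruth: (graph, a, b) =>
    graph.edges.some(([u, v]) => (u === a && v === b) || (u === b && v === a)),
};

// A task instance binds the prototypical task to specific benchmark nodes.
function instantiate(task, graph, a, b) {
  return {
    prompt: task.question(a, b),
    input: [a, b],
    answer: task.groundTruth(graph, a, b) ? "Yes" : "No",
  };
}
```

Instantiating `connectivityTask` on `N1` and `N2` yields the slide's example instance, with ground-truth answer "Yes".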
The next step: VisUnit
- Graph module: prototypical tasks, benchmark data and task instances, interface methods
- Set and group module
- Multidimensional module
- Vector field module
- Glyph module
The next step: VisUnit
Problem: real data sets are not ‘pure’; they are often combinations of graph data, multidimensional data, and spatio-temporal data.
Solution: prototypical tasks are still valuable; streamline the process of registering one’s own data set and task instances.
How do we design VisUnit to be sufficiently flexible to replicate existing user study designs?
E.g., answers might not come in widget form -> use the interface methods to accept any answer from a user (e.g., a selection of a data object, or a click on the screen).
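One way such flexible "interface methods" could look is an answer collector that the visualization calls back into, rather than a fixed answer widget. This is a sketch under stated assumptions; the function names are hypothetical, not GraphUnit's or VisUnit's documented API.

```javascript
// Hypothetical sketch of interface methods for collecting non-widget answers.
function createAnswerCollector() {
  const answers = [];
  return {
    // Called by the visualization when the participant selects a data object...
    reportSelection(taskId, objectId) {
      answers.push({ taskId, type: "selection", value: objectId });
    },
    // ...or clicks anywhere on the screen.
    reportClick(taskId, x, y) {
      answers.push({ taskId, type: "click", value: { x, y } });
    },
    // Everything recorded so far, for upload to the study service.
    collected() {
      return answers;
    },
  };
}
```

The design choice is that the study framework only defines how answers are recorded and typed; the visualization decides which interaction counts as an answer, so arbitrary study designs can be replicated.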
Why now?
Controlled user study designs and data analyses are becoming more standardized:
Lam et al. (2012)
Motivation
- Controlled user study designs and data analyses are standardized: Lam et al. (2012)
- Evaluated tasks are becoming increasingly standardized into task taxonomies: graphs (Lee 2006), multidimensional (Valiati 2006), group+graph (Saket 2014)
- Online crowdsourcing has been validated as a mechanism to run user studies; crowdsourcing implements ‘human macros’: Heer and Bostock (2010), Kittur et al. (2008), Bernstein et al. (2010)
- Visualizations are migrating to the web: D3, WebGL
Benefits
- Evaluating visualizations is important (and lacking?): Lam et al. (2012) showed that 42% of 850 major vis papers between 2002 and 2012 reported an evaluation.
- Conducting user studies is challenging, time-consuming, and expensive.
- Standardized benchmark evaluation can lead to comparable results and, in turn, to user study results that can aggregate over time.
- Can we move evaluations from after the design process to within the design process?
Questions?
http://vizlab.cs.fiu.edu/graphunit/