Click the Rate My Visualization button, then upload your visualization (JPG, GIF, or PNG) and walk through each checkpoint to score it, identify its strengths, and pinpoint where you could improve. We’ll give you resources to keep growing your skills.
Want All Checkpoints In One Place?
Add your details here and I’ll send you a PDF of the Data Visualization Checklist.
How Do I Rate Effectively?
To get oriented to the checklist and its guidance, watch the training below.
In this training, I rate a NASA graph; you can download it here and rate it along with me.
Sound is a little muffled in the first five minutes, but hang in there.
Can I Access My Score Again At A Later Date?
Yes. When you see your scores, you’ll have the option to download a PDF of your results, and you’ll get a unique URL to your scores that no one else will have.
After you rate your graph, you’ll also see an option to make your results public, which lets others see your graph and its scores. It’s totally optional. Sharing publicly lets others get inspired by cool examples and lets us conduct further research to validate the checklist.
If you check the box to make your results public, your graph will show up in the Recent Submissions section. If you don’t check the box, no one but you will see your results.
What Makes This Checklist Credible?
In 2014, Stephanie Evergreen and Ann Emery developed the checklist based on Stephanie’s extensive review of relevant research and the practical experience of both Stephanie and Ann as data designers.
They used the checklist with many clients and in hundreds of workshops over the years. In 2016, they launched a slightly modified checklist based on feedback and implementation tests.
Sena (Pierce) Sanjines checked the validity of the modified checklist using cognitive interviews (i.e., what do people think as they use it to rate graphs?) and found that raters’ understanding and use of the checklist aligned with the tool’s intended purpose.
The interviews also highlighted parts of the checklist that were confusing or interpreted differently by different raters. With input from Stephanie, Sena developed guidelines for raters to address common areas of confusion or ambiguity in the checklist. The training above is drawn from those guidelines.
Sena also tested the checklist as part of her dissertation to see whether it holds up as a reliable instrument (that is, whether different raters using it produce consistent scores). In academic terms, she looked at inter-rater reliability (IRR) through the intraclass correlation (ICC), using a two-way, average-measures, mixed-effects model for consistency (k=14, n=5).
In non-science language: fourteen raters scored the same five graphs, giving each graph a total score (calculated as a percent of total points possible, minus any items that were not applicable). To assess reliability, Sena checked whether the different raters scored the graphs consistently.
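For the curious, here is a minimal sketch of that scoring rule in Python, using hypothetical item scores. It assumes each checklist item is worth up to 2 points and that n/a items are dropped from both the points earned and the points possible:

```python
# Minimal sketch of the "percent of total points possible" scoring rule.
# Assumption: each checklist item is scored 0, 1, or 2; items marked n/a
# (None here) are excluded from numerator and denominator alike.

MAX_POINTS_PER_ITEM = 2

def total_score(item_scores):
    """item_scores: list of 0/1/2 values, or None for items marked n/a."""
    applicable = [s for s in item_scores if s is not None]
    if not applicable:
        raise ValueError("No applicable items to score.")
    return 100 * sum(applicable) / (MAX_POINTS_PER_ITEM * len(applicable))

# Example: ten items, two marked n/a, 12 of 16 possible points earned.
scores = [2, 2, 1, 2, None, 0, 2, 1, None, 2]
print(f"{total_score(scores):.0f}%")  # 75%
```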
Sena’s reliability results, in academic terms: ICC(2,14) = 0.874 (average measures), meaning about 87% of the variance in the averaged ratings reflects real differences among the graphs rather than disagreement among raters. Note that the ICC estimate for single measures (i.e., an individual rater) was lower, at 0.580, meaning that if we generalize the results to an individual rater we can expect only moderate reliability.
In slightly-less-academic terms: if you look at how a group of people rates a graph on average (fourteen people, in this case), the study estimates around 87% reliability; if you look at just one person’s rating of a graph, the estimate is around 58%.
The more commonly reported reliability score is the 87% average-measures estimate, and anything falling between 0.75 and 0.90 is conventionally considered “good” reliability (Koo & Li, 2016).
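If you would like to see how estimates like these are computed, here is a minimal sketch using the pingouin Python package on made-up ratings. The study’s actual data and software are not reproduced here, and the graph/rater/score column names are illustrative:

```python
# Minimal sketch of an inter-rater reliability check like the one above,
# on made-up ratings (not the study's data). Requires pandas and pingouin.
import pandas as pd
import pingouin as pg

# Long format: one row per rater-graph pair, holding that rater's total score.
ratings = pd.DataFrame({
    "graph": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "rater": ["r1", "r2", "r3"] * 3,
    "score": [80, 85, 78, 55, 60, 52, 90, 88, 93],
})

icc = pg.intraclass_corr(data=ratings, targets="graph",
                         raters="rater", ratings="score")
print(icc[["Type", "Description", "ICC"]])
```

In pingouin’s output, the single-measures rows (ICC2, ICC3) correspond to the individual-rater estimate, and the average-measures rows (ICC2k, ICC3k) correspond to the rater-panel average discussed above.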
Who Is Behind This Site?
Dr. Stephanie Evergreen has written three books on data visualization and design. She leads workshops and designs high-impact data visualizations for a wide range of clients. She is a frequent keynote speaker with a popular blog. Dr. Evergreen also conducts online training through the Evergreen Data Visualization Academy.
Dr. Sena (Pierce) Sanjines is a practicing evaluator in the state of Hawai’i. Her dissertation was on the relationship between the use and quality of data visualizations and the use of written reports.
Ann K. Emery is a data visualization speaker who co-authored an early version of the Data Visualization Checklist with Stephanie Evergreen.
Jennifer R. Marsack built some example Show Me graphs.