
Peer Review for Visual Aids?


How frustrating is this: you sit down to take in some piece of scholarly work (be it a book, an article, or a talk) and find yourself increasingly confused by a bombardment of graphs, figures, and maps that don't make sense, either because they contain too much or too little information or because the information is poorly labeled (if at all). Or, even worse, you are the person writing the book or article or giving the talk, and instead of fielding questions about your scholarly process, you are repeatedly explaining to the audience what your visual aids actually represent.

A picture may be worth a thousand words, but if it is not a language your audience speaks, where have your efforts gotten you? Typically, when I read a scholarly article, my first read-through goes as follows: I read the abstract, I look at each of the figures, maps, tables, and graphs and their annotations, and I read the conclusion. It's not until the second read-through that I examine the bulk of the text. I think that words sometimes have the unfortunate tendency to obfuscate the true findings of research, and, truth be told, I like to find out whether I draw the same conclusions from the provided data as the author(s) do. My process stumbles when I encounter articles whose figures, graphs, and maps have either a glut or a dearth of information, making them non-intuitive to the uninitiated reader.

Some highlights: a map of a state containing rivers, waterbodies, and watershed boundaries (the focus of that particular article) AND all of the major roads and highways (NOT the focus of the article), all in grayscale. Add in the point locations and names of the state's twelve most populous cities and cram it into a box three inches tall by five inches wide. The focus of the article was on modeling and delineating the major and minor watersheds of the area in order to develop a best management practice for cooperating water districts; needless to say, that point was lost in the shuffle. Another example, all too common: a graph depicting change over time for ten or more constituents using various dotted, dashed, and solid lines of varying thickness. With that much information crammed into a single visual aid, the results are simply buried.

We have writing clinics and public speaking critique sessions, so why don't we have a peer evaluation system for visual aids? I think that many people (myself included) fall into the habit of having our material critiqued solely by our close working group. While this is certainly a necessary step in the writing process (the people most familiar with our work are the ones most likely to pick up on its esoteric flaws), many scholars neglect to seek peer review from individuals tangential to, or completely outside of, their small fields. I would say that one of our main objectives as scholars is to use our work to excite interest among members of the scholarly community both inside and outside our focused areas. In my opinion, an important step toward this goal is to make our visual aids more accessible to the curious non-expert.

I would like to see our scholarly community develop this type of peer-review network, one where we can draw on the human resources around us to improve our intellectual contributions to all of our respective fields. We could have minds from a variety of disciplines working collaboratively to improve the accessibility (and therefore the use) of our collective body of knowledge. I think the concept has amazing potential.

Cite this post: Wendy Robertson. "Peer Review for Visual Aids?". Published February 04, 2009. https://scholarslab.lib.virginia.edu/blog/peer-review-for-visual-aids/.