I am working on reviews for two DH platforms—the international DH2020 conference, and the new Reviews in DH journal. Below, I’m sharing my notes on how to work through writing a review—questions to ask myself, things I want to make sure to include, reminders on how to pay attention to personal bias. These aren’t intended to be guidelines for other folks, so I’ve left out things that I know I will do without having it written down. These personal reminders are followed by some guidelines from the two platforms that I wanted to keep directly visible while reviewing. If you have similar questions, reminders, or notes you use when reviewing that you’d be willing to share, please let me know!
Amanda’s notes on reviewing
- What is the abstract’s argument, in one sentence? (Can I tell? I should communicate my take on the abstract’s argument in the review, in case it’s not what the author intended.)
- Everyone is smart and has ideas worth sharing; we’re trying to determine how to distribute limited resources (conference slots, audience attention) to make the best possible conference. “Best” should include diversity and representativeness of the speakers, amplification of those new to the field, those who are under-resourced and under-listened-to elsewhere, and those with lived experience the field needs to hear.
- Write as if the author will read it, even if they won’t; write as you would talk to the author directly, in person.
- My goal is to support the author in getting to share their work with the conference/journal community, and to support them in doing their best work.
- The conference’s goal for reviewers is to assist in assessing quality of proposals, not to be the arbiter of whether something is accepted or not (the Program Committee aka PC does that). They want me to “provide expert information” to the PC, and “helpful, constructive feedback to authors, which can strengthen the quality and intellectual rigor of the conference” (DH2020 reviewer guidelines)
- Note that “support the author doing their best work” is thinking that can as easily underlie a bad review: be careful about doing this, e.g. don’t get stuck on how this could be a different project of more personal interest to me—evaluate this project. I should be careful about my assumptions of what is best or what I think the conference should accept vs. what the author and conference organizers think. My interests and biases can act as gatekeepers, and I am a reviewer, not a gatekeeper.
- List the good, innovative, interesting pieces (at least 3 things)
- Indicate where non-overlap of fields makes me worse at knowing how to evaluate!
- Evaluate how the approach fits the scholar’s stated goal; suggest different or additional approaches only if those seem like a potentially better fit or likely to interest the scholar (but see cautions above about how this can easily go bad)
- If you have issues with the abstract’s stated goal, be conscious and thoughtful about that; if you think they are real concerns, indicate them along with the reasons you could be wrong (starting with: this scholar has been working on and thinking about this longer than I have, and knows their subfields far better), and discuss alternatives. These are more likely opportunities for the author to clarify than actual problems with the methodology.
- Some useful prompts:
- “This submission makes me wonder…”
- “This submission taught me…”
- “I look forward to seeing how the author will…”
Reviews in DH reviewing
- an assessment of the humanistic claims and evidence
- an assessment of the technology used/developed
- an analysis of the project and its place within existing scholarship and technological practices
- an evaluation of the project in relation to professional guidelines for evaluating digital scholarship, like those provided by the MLA and AHA, including, where relevant: engagement of new audiences; funding, awards, or other recognition; adoption and use of the output by other scholars; and citations of the project in scholarship or press
- identification of interesting, outstanding, or problematic issues
The DH2020 reviewer guidelines remind me to pay attention to tacit assumptions about format, word limits, and citation style and use, especially how these may be shaped by different norms in different geographic regions: “Please note that the reviewer guidelines do not penalize submissions based on length, but do require all submissions to show ‘explicit engagement with relevant scholarship, with references’ (references can be in any format at this stage, including naming other projects or scholars in the text of the submission)”.
Quoting from DH2020’s reviewer guidelines:
- suggest concrete ways in which the proposal may be strengthened; show that you are interested in the author’s work
- no purely negative comments (raise serious concerns directly with the PC, not in the review)
DH2020 asks reviewers to evaluate abstracts following these guidelines:
- Overall organization and clarity of proposed submission (20%)
- Explicit engagement with relevant scholarship, with references and justifications displaying knowledge of the current state of appropriate fields (30%)
- Thematic relevance to “carrefours/intersection”; Native American, Indigenous, and First Nations Studies; public digital humanities; or the open data movement (10%)
- Clear theoretical, methodological, or pedagogical framework and explicit statement of purpose (20%)
- Applicability, significance, and value of the theoretical, methodological, and/or practical contribution to the digital humanities generally (20%)