Prove It – Peer Review

This Saturday I mailed in the final, edited copy of my manuscript, The Trickster in Ginsberg: A Critical Reading, which is going to be sent to the printers on May 4th! (WAHOO!) This is an incredibly exciting time for me, not only because the thing is finally, finally out of my hands, but also because it’s so near to fruition. This July I’ll actually hold in my own two hands a hard copy of a full-length, academic, peer-reviewed book that I spent four years creating.

And, I think, perhaps out of all of these things, one of the aspects I’m proudest of is the fact that the book is peer reviewed. I know there are many out there who find peer review elitist or otherwise unnecessary, simply a means of gate-keeping or some such thing, but I really do believe that — while we should definitely reconsider who we authorize to be these reviewers and who we do not — peer review can be an incredibly valuable tool, one that can not only improve a work of scholarship but also keep scholars working hard to create dynamic, involved, and well-researched scholarship (rather than simply working hard to get published).

Consider, after all, the recent and mind-blowing case of the Carmen Reinhart and Ken Rogoff paper — an economics paper that essentially defined the case for our current austerity policies — which was not peer reviewed and was found to have been based upon faulty numbers and assumptions. Yet even in the face of this example, some writers, like Jared Bernstein of Salon, continue to argue that peer review would be too “time-consuming” and even “limiting.” Bernstein explains that “a lot of what’s important in economics is fairly simple analysis of trends — descriptive data — without the behind-the-scenes number crunching” and that it would “raise the bar unnecessarily high” to demand that “the presentation of descriptive data published by reliable sources” be peer reviewed — even though assuming the validity of information from presumably reliable sources (such as two Harvard economists), sans peer review, is exactly what got us into this mess to begin with! And this type of mistake, this refusal to question our basic assumptions, can, as we’ve now seen, have tremendous and potentially detrimental real-world consequences.

The simple truth is that peer reviewers exist to challenge innovative ideas and scholarship in order to test their merits, validity, basic assumptions, and data — not to block publication regardless of the quality of those factors. Yes, peer review can be a hard process (that’s sort of the whole point), and one currently fraught with inconsistencies and issues of its own, but it’s definitely not a process we should simply give up on.

What’s more, Bernstein even ends his blog post with:

Like most in my field of think tank work, anything beyond a blog, especially with serious number crunching, is reviewed by as many outsiders as I can get to read it, but the difference is that they don’t have the final say on publication.

In other words, he must clearly see the value and importance of being peer reviewed; otherwise, why reassure us, his readers, that his work typically is subjected to such scrutiny? Moreover, he inaccurately asserts that peer reviewers tend to have “the final say” in what gets published and what doesn’t. This is not usually the case — at least it hasn’t been in my experience. In fact, it is the editors and publishers who decide what gets published, not the peer reviewers. I know from my experience as both the author of a peer-reviewed book and as a reviewer for an academic journal that, while peer reviewers are able to give substantive feedback, suggest changes (big and small), and even offer their own opinions on whether the paper should be accepted or rejected, the final decision typically rests on multiple reviews and is ultimately left up to the editor or publisher, who decides whether or not to move forward with the project based upon those reviews. Also, given our current age of self-publication mania, why shouldn’t a publisher get to make such decisions regarding the quality of scholarship when working off the recommendations and opinions of other scholars in the field?

Now, all of this being said, there are definite issues with the inconsistency and general lack of standardization when it comes to what constitutes peer review and what does not. As Richard Smith, Chief Executive of UnitedHealth Europe, explains in his “Peer review: a flawed process at the heart of science and journals,” when he and his colleagues conducted a study of peer reviewers by inserting “major errors into papers that [they] then sent to many reviewers,” they quickly found that “nobody ever spotted all of the errors” and concluded that peer review “is not a reliable method for detecting fraud because it works on trust” (179). Moreover, as both Bernstein and Smith recognize, there is also the issue of potentially biased or untrustworthy peer reviewers — reviewers who may seek simply to slow a “competitor” on the way to publication by giving them terrible reviews, or who may even attempt to steal ideas from the authors they’re reviewing. Of course, as Smith goes on to suggest, the issue at hand should not be whether or not to “abandon” peer review as a practice and standard “but how to improve it” (180, emphasis added).

Of course, these issues become even thornier when we begin to consider the new possibilities opened up by the digital humanities and the Internet.

In her article “Humanities 2.0: Promise, Perils, Predictions,” Cathy N. Davidson raises the question: “How does one put value on a source when the refereeing is performed by someone who has not been authorized and credentialed as a judge?” (711). In other words, who is authorized? How did they become authorized, and how do we know? Who’s judging the judges, and how? And what do we do as readers, writers, and researchers to ensure that our projects are reviewed by people who are actually knowledgeable and engaged within the field and topics in question? It’s a sad reality that the MLA actually has to specify within their Guidelines for Evaluating Work in Digital Humanities and Digital Media that…

Engage Qualified Reviewers. Faculty members who work in digital media or digital humanities should be evaluated by persons practiced in the interpretation and development of new forms and who are knowledgeable about the use and creation of digital media in a given faculty member’s field.

I thus agree with Davidson — “The very concept of peer review needs to be defined and interrogated. We use the term as if it were self-explanatory and unitary, and yet who does and does not count as a peer is complex and part of a subtle and often self-constituting (and circular) system of accrediting and credentialing (i.e., ‘good schools’ decide what constitutes a ‘good school’)” (711).

In other words, while peer review can be an exceptionally useful tool for helping authors to catch mistakes, improve arguments, and question basic assumptions, we must now question our own basic assumptions regarding the peer review process. How can we address its weaknesses? How can we standardize the process and make it more open to new disciplines such as the Digital Humanities? How can we update it so that it keeps pace with modern and developing fields of study? And how can we speed it up so that it doesn’t place undue burdens upon scholars?

Works Cited:

Bernstein, Jared. “How to prevent future Reinhart-Rogoff meltdowns.” Salon. 22 April 2013. Accessed on 29 April 2013. http://www.salon.com/2013/04/22/how_to_prevent_future_reinhart_rogoff_meltdowns_partner/.

Davidson, Cathy N. “Humanities 2.0: Promise, Perils, Predictions.” PMLA 123.3 (2008): 707-717. http://fredgibbs.net/courses/digital-history/readings/Davidson-Humanities2.pdf.

“Guidelines for Evaluating Work in Digital Humanities and Digital Media.” Modern Language Association. Last modified 25 April 2013. Accessed on 29 April 2013. http://www.mla.org/guidelines_evaluation_digital.

Smith, Richard. “Peer review: a flawed process at the heart of science and journals.” Journal of the Royal Society of Medicine 99.4 (2006): 178-182. http://jrs.sagepub.com/content/99/4/178.full.

One thought on “Prove It – Peer Review”

  1. This was posted by a great fellow writer:

    It’s truly amazing to me that defenders of the status quo insist that any form of review or regulation must produce unqualified success in every instance for the policy changes to be valuable. They begin their defense of the status quo by recognizing that the current system is imperfect, then demand perfection of any potential change. Yet none of them can point to a single perfect system, even if they analyze every policy ever invented by mankind. Perfect systems are impossible. That’s the only immutable fact there is where human behavior is concerned. Yet we are expected to tolerate enormous flaws in current policies and systems until we can arrive at a perfect solution? It’s past time for us to confront such poseurs by asking them to identify a single perfect system or policy as proof that their notions of perfection are even possible. The first question commentators such as Bernstein should be asked is “Name a perfect policy”. While they’re hemming and hawing, the intelligent and realistic people can go about their business of incremental improvement.

    Mike Brewer
