Aaron Barlow writes in his ePMedia Journal article Peer Review and a Proposal for Revision that peer review (how academic articles and books are deemed worthy of publication)
“is a cumbersome, time-consuming process that can act as a brake on new and exciting thought. In addition, it may no longer even be necessary, but replaceable by more efficient and useful Internet vetting procedures.”
Imagine this as a parallel: If Kos applied the academic version of peer review, he would display on DailyKos only those diaries that are approved by a small group of his friends. There would be no other diaries. It wouldn’t be the democratic process that we all associate with “Big Orange” today — where diaries can be posted by any registered user. Why, given today’s technological possibilities, should academic publication be any different?
More below the fold.
Barlow goes on to make several other important points about the weaknesses of the academic system of peer review and why it needs to be chucked or, at least, revised. For example, he says that:
…peer review promotes inbreeding. Writing in the blog if:book: A Project of the Institute for the Future of the Book, Ben Vershbow states:
It’s unfortunate that the accepted avenues of academic publishing — peer-reviewed journals and monographs — purchase prestige and job security usually at the expense of readership.
Barlow summarizes possible solutions, including something we bloggers know a great deal about: institutionally sponsored collaborative journalism software, such as Scoop, Soapblox, and Civic Space.
Who knew that we bloggers could be offering the wave of the future to academia?
What do you think, should peer review be given a hand, the finger, or the boot?
Xposted at ePluribusMedia Community Academic Censorship: Peer Review. This is the second in Barlow’s series Responding to Criticism: The State of Education in America.
Peer review is useful. Very useful. It means that work gets reviewed by experts in the field, and appropriately high burdens of proof are placed on contentious or revolutionary results. The mechanism may need some tweaking, but the basic concept is sound. It’s slow, but it’s slow on purpose. A wrong direction doesn’t just waste time, but money and opportunities.
Wikipedia, Slashdot, and Daily Kos are excellent arguments against discarding peer review. Wikipedia is expressly anti-expert, and experts in a field who do try to contribute there are often shouted down by masses of the uninformed. Slashdot's moderation system likewise rewards ignorant opinions couched in expert language, which results in comment threads being filled with pro-Microsoft or Objectivist garbage.
Daily Kos is the most relevant instance of a problem. The volume of material is so large that worthwhile but controversial diaries simply get lost in the noise or, worse, are deleted with prejudice by the site’s owner. Dissenting opinions are shouted down, and when debate is permitted, moving goalposts are employed to ensure that the dissenters can never “sufficiently” prove their case.
Certainly, I don't want to get rid of review by people in the field. What I want, and what I think we can establish, is something far beyond what a Wiki or dKos can offer–and that is real review and evaluation after publication. Rather than restricting publication, I would like to see development of a system of careful evaluation (followed by acceptance or rejection–or by revision) that is open and public.
That would be a lot of fun and would expand the universe of academic discussion.
I think this would cause more trouble than it's worth. You have to restrict publication in the academic realm, or the credibility of your journal as an information source quickly becomes shot. Issuing retractions doesn't really help. I'm sure you know how much trouble is caused now by conservatives, fundamentalists, and creationists justifying their stances with bogus papers published in bogus journals, even though these papers have since been discredited. The fact that they were published (even in a discredited journal) is used as evidence of their validity.
Imagine how much harder these would be to discredit if they were published in established journals.
Some of the research that I do in my field goes under the name of meta-analysis: quantitative reviews of the literature for a particular research question. Usually my job (and/or my undergrad research assistants’ jobs) is difficult enough without having the prospect of wading through god knows how many retractions.
that is real review and evaluation after publication. Rather than restricting publication, I would like to see development of a system of careful evaluation (followed by acceptance or rejection–or by revision) that is open and public.
Perhaps I am missing something.
I am an expert in a narrow but well-defined scientific area in which perhaps two hundred scientists worldwide have sufficient expertise to make a meaningful judgment.
Under the current peer review process, the journal editor selects two, perhaps three members from this peer group to review a submission.
I have the option (in advance) of suggesting that potential reviewer A or B may be biased (perhaps a competitor or someone I’ve recently offended).
If I strongly object to the conclusions of the 2-3 reviews that are returned, the process begins again with new reviewers selected by the editor. Generally speaking, there is a lengthy appeals process by which a manuscript may be re-evaluated by multiple reviewers, including a meta-review by the editorial board if things get really heated.
What would be missing in an open, public system is the subjective judgment of the editor, which is crucial to the review process. Even if "public" were restricted to only the 200-member group of peers, not every vote is equal. There are all kinds of nuances involved in selecting appropriate reviewers and weighting their opinions for the final decision, especially in contentious situations.
The current system is designed precisely to avoid having to sift through hundreds of comments to separate the wheat from the chaff. In the sciences, timely publication is often crucial, so a noisy public forum would be unacceptable.
I can't even imagine the horror of absorbing the additional workload of getting equations or graphs online just to make a comment in a review.
Hmm. The notion of a “public” revision process is definitely intriguing. In my experience, this already happens “behind the scenes” long before an article is ever submitted for publication. I send my paper to X who provides feedback, or I present it at conference Y where I also receive feedback, and by the time it even gets submitted, it’s been subjected to quite a bit of “peer review.”
You could argue, yeah, I send the paper to those who are likely to be sympathetic with the basic argument (but who are nevertheless “experts” in the field, otherwise why bother?), so there’s perhaps a certain bias there.
But my experience of academia is that presentations at CONFERENCES have the function of the kind of “public” review you’re talking about–and there you certainly have no control over your “audience.” A conference is generally where you take the paper out for a test run, no?
I realize that there are a lot of academics posting on dKos, but I still have a hard time seeing it as a "legitimate" <duck!> academic publication venue. It's certainly not something I'd list on my CV.
I haven’t studied in detail the kind of peer review that’s going on over at Political Cortex, but from what I’ve seen, the quality of the writing is substantially improved (say in comparison to what’s appearing on dKos).
That’s been about my experience as well. I have a new data set that I’m excited about. I present it at some relevant academic conference, and pretty much hash out whatever needs to be hashed out there. Usually from a conference presentation I can get a good feel for how reviewers might react, and I can fine-tune my research prior to submitting a manuscript.
On my own blog, and sometimes here and at Big Orange, I've worked out a couple of ideas or briefly summarized some new findings from my research, checked out the comments, and used that as one means of feedback.
What I like about peer review as far as professional publication goes is that I have some confidence that the people reviewing my work know what is involved in the sort of research I do. Generally my experiences with the peer review process, both as a reviewer and as an author, have been positive – the process is thorough, thoughtful, and professional. By the time I have a manuscript in press, I know it is the best that it can possibly be at that point. As a social scientist I am only as good as my data, and I'd sure as heck prefer that what I publish in the academic journals is accurate. More broadly, the sciences themselves are only as good as the validity of their published findings. I'd much prefer to deal with the relatively minor hassle of time lags between completing my research and going to press than deal with having to wade through too much useless info.
Well, James, you've seen enough of what I'm about (and as I've stated, my writing here does NOT differ all that much from my published academic writing, except for the footnotes and the fact that I generally restrict myself to only one appearance of the "f-word" in my academic writing. lol), and the advantage I see in the peer review process is the legitimation through "expert review"–that is, people who are familiar enough with the discourse and the basic issues have said: OK, this is a valuable contribution to the discourse, however "polemical" or whatever it may be.
Because believe me, the controversy comes no matter what. But it’s nice to be able to say, as the defense of last resort: hey, you may not like it, but there’s an editor and there are 2-3 reviewers who obviously DO, so take it up with them. 😉
When there is no editor and no "quality control" standard, then a writer of particularly controversial topics and viewpoints is basically left out in the cold to fend for him/herself–and in front of an audience that may not be as "well-versed" in the issues as those "expert" reviewers and editors.
I have to agree with Egarwaen. I spent four years as editor of a peer-reviewed academic journal and don’t know how I’d have been able to “judge” the merit of the contributions without peer review.
Most peer review is anonymous, and while that anonymity doesn’t always work (so for example, if an author has previously presented the paper at a conference, the chances are good that the reviewer who receives it may in fact know whose work s/he is reviewing), anonymity is one measure that hinders the kind of “cronyism” and “buddy-system” so clearly, clearly operational on sites like dKos.
Otoh, anonymity of the blogs–in the form of posting under fictitious screen names–is, I think, one thing that contributes to some of the “shouting down” of dissenting opinions not only on dKos, but on places like DU as well. I sometimes wonder how much of this would go on if people were forced to put their real names (and their TENURE!) on the line when engaging in some of these shouting matches.
I don't see how peer review would actually work on a site like dKos, tho. "Peer" review means that you are being reviewed by your "peers"–that is, by people with a level of experience, knowledge, and expertise that is at relative parity with your own. How would you even begin to establish and seek to match "peer" with "peer" as long as the anonymity, the fictitious screen-name thing, is in place?
Matching “reviewers'” expertise with writers was one of my main jobs as editor of the journal: I certainly couldn’t have expected a medievalist to be in a position to adequately assess the value of a work on contemporary popular culture, or vice versa.
You might be able to establish that based on previous postings, but I dunno. My thinking: the blogs are the blogs; academic publishing is something else.
That’s the other key. Peer reviewers have typically worked very hard and made contributions of some kind to get where they are. And while there have been instances of reviewers dropping papers with merit because of personal bias towards the topic, you’re going to get that with any classification system. And such papers will usually eventually hammer their way through if they really do have merit.
Another problem with widespread, high-speed "democratic" review: science by media. The same wonderful process that's given us cold fusion, Nemesis, the moon landings hoax BS, and who knows what else…
In my four years as editor (150-200 submissions/yr), I saw it happen twice. In both instances, it was pretty transparent (based not least of all on the disparity between the assessments of two anonymous readers). I took it to the senior editor, who agreed that personal politics was likely involved, and we sent the papers to new readers. Simple as that.
I have a hard time putting dKos on a par with established academic or even “established” internet publications. I have seen first hand how easily a controversial piece is simply deleted if the debate surrounding it gets out of hand.
That cannot happen with a controversial essay in an established journal. Love it or hate it, the article’s “on record”.
I had an article reviewed by someone who clearly knew who I was, and I could tell who he was (based on diction as much as on content). So, yeah, the anonymity can be pretty limited.
That said, I’ve also been successful figuring out who some blog posters really are based on various hints they’ve put into their posts. So blog anonymity can also be limited.
Sure, I’ve also seen the anonymity of the blogs “reversed” by seeing early versions of an essay posted say on DU and then seeing the same thing appear under that person’s name elsewhere on the net.
But I don't think it should have to be a guessing game–it should at the very least be a "level playing field" (that is, if you know who I am, I want the favor of knowing who you are).
Calling people fucking assholes and shouting them down with racial slurs, slander, etc. is not acceptable behavior in any profession I know of aside from perhaps all-star wrestling.
If people knew that their professional reps were on the line, a lot of that wouldn't go on.
I’ve always been critical of that “anonymity” game and even more critical of it now. Sure I’ll “chat” with anonymous screennames, but any serious discussion of ideas or policy (esp controversial ones), no go: I want to know who I’m talking to before I get into it with them. I mean, jeez, how you gonna know it’s not Karl Rove you’re talking to, really?
I think some level of anonymity is a good thing, and should be preserved. It's very useful for those who have unpopular things to say.
On the other hand, a good method of defining an online identity and associating it with a real life identity would be very handy.
If you manage to find out how to support both, there are a lot of people who want to know.
I’ve always had unpopular things to say–even in my academic writing and in fact in my classroom–and I’ve never felt the need to hide behind the veil of anonymity. Anything else would be, in my view, extremely cowardly.
A big advantage of peer review is that readers are spared from reading a lot of crap. Thanks to peer review, published articles contain far fewer errors and inaccuracies. Reviewers make articles more comprehensible and readable. Repeated "discoveries", wrong theories, insulting expressions, and outlandish nonsense are sorted out. Reading a science article is not the same as reading an opinion piece: you wish to understand exactly what the author means, and to learn objective facts or conclusions.
So peer review is very convenient and saves a lot of time for readers. Just imagine a journal on relativity theory that would publish anything submitted. 99% of it would not be worth reading: all the same space-warp fantasies, formulas "discovered" yet again, plenty of errors that you wouldn't notice at once…
Peer review is not an ideal process, but in practice it gives an informative and ethical environment. You can’t say that every scientist always behaves honestly, but compared with the “real world”, and given high stakes, scientific communication is remarkably ethical.
Even if it happens that your ingenious idea is ‘censored’ by “Nature” or “Science”, you have plenty of other options. Good ideas win sooner or later.
By the way, with the internet age, several automated e-print servers quickly appeared, like arXiv.org, where scientists (mainly physicists and mathematicians) can put their articles (or, more practically, preliminary versions of their articles) on the web without peer review. This is a very popular way to make your articles and results available to the public, and it's becoming ever more popular. The ultimate standard is still the peer-reviewed article, but putting up an "electronic preprint" on such a server is widely accepted as reference material (until the paper is published) or as proof of priority.
On DailyKos, there was a peer review suggestion, quite an interesting one. The idea is very simple: before submitting a diary, you must review a few other diaries, with a recommendation of course. That might indeed help to reduce the number of dumb, short, or repetitive diaries there, and give more prominence to original, not-so-catchy, but still interesting pieces. Besides, reviewers might suggest corrections or even additional URL links, so as to make certain short comments obsolete. I have the impression that with a few simple rules (reviewers may choose the diaries they wish to review; a diary should be rejected only if it is clearly inappropriate or uninformative), it could work. Of course, that would change the dKos experience, but perhaps mainly for the better.
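(As a rough illustration only: here is a minimal sketch of how such a review-before-posting rule might look in code. Everything in it is hypothetical; the class names, the quota of two reviews, and the acceptance rule are illustrative assumptions, not anything DailyKos actually implements.)

```python
from dataclasses import dataclass, field

REVIEWS_REQUIRED = 2  # hypothetical quota: review two other diaries before posting your own


@dataclass
class Diary:
    author: str
    title: str
    body: str
    reviews: list = field(default_factory=list)  # list of (reviewer, accepted, comments) tuples


class DiaryQueue:
    """Toy model of the proposed review-before-posting rule."""

    def __init__(self):
        self.pending = []       # diaries awaiting review
        self.published = []     # diaries that cleared review
        self.reviews_done = {}  # username -> number of reviews completed

    def review(self, reviewer, diary, accepted, comments=""):
        # Reviewers choose which pending diaries to look at; per the suggested
        # rule, a diary is rejected only if clearly inappropriate or uninformative.
        diary.reviews.append((reviewer, accepted, comments))
        self.reviews_done[reviewer] = self.reviews_done.get(reviewer, 0) + 1
        if accepted and diary in self.pending:
            self.pending.remove(diary)
            self.published.append(diary)

    def submit(self, diary):
        # The gate: an author must have reviewed a few other diaries first.
        if self.reviews_done.get(diary.author, 0) < REVIEWS_REQUIRED:
            raise PermissionError(
                f"{diary.author} must review {REVIEWS_REQUIRED} diaries before posting"
            )
        self.pending.append(diary)
```

In this toy version, an author earns the right to post by first reviewing a couple of pending diaries; the actual quota, and what counts as grounds for rejection, would of course be up for debate.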
Imagine this as a parallel: If Kos applied the academic version of peer review, he would display on DailyKos only those diaries that are approved by a small group of his friends.
Not really. What he’d do is take each proposed diary, and send it to a couple of people who are experts in the subject of the diary without telling them who wrote it, and get them to evaluate it. And then he’d decide whether to post it, reject it, or ask for revisions based on their evaluations.
Not really what you want from a blog. But still a pretty good system for academic work.
But not broadening.
I am a supporter of the peer review process. I know it is not perfect, but part of the strength of science is that it is slow to respond to changes. Comparing it to blogs obscures the very different purpose that science serves. Take Intelligent Design, for instance. In an unmoderated, Wiki-style forum, a few voices can redirect, confuse, or deface the discussion. Science needs the peer review process to maintain its strong centering effect through political and educational rough spots.
Collaborative software has its place in Science, but it would be for hashing out experimental approaches and preliminary data sharing between research groups. The journals need peer review. If you have problems with a reviewer, you can work with the editor to change reviewers or send your article to another journal.
I have no connection at all with the peer review process so perhaps my perspective here may be seen by some as extraneous, but I’ll proffer it anyway.
It strikes me that, absent a mechanism that addresses the value and legitimacy of "outside the box" thinking, peer review could, in many instances, be a process that, unconsciously or not, gives untoward weight either to traditionally accepted ideas or even to particular positions held by "favored peers".
While I don't envisage the sort of "free-for-all, Tower of Babel-type cacophony" that would arise from a participatory, blog-style vetting process as being practically feasible, I do believe there's merit in broadening the sphere within which such "review" does take place, if only to avoid the possibility of creeping academic "tunnel vision".
Wouldn’t you know it. I comment on an academic subject and my post is full of typos.
I have to remember: "First preview, then post, not the other way around".
Yes, peer review is slanted towards tradition. Most professors acknowledge this. Most that I know also agree that it’s a good thing. It raises the standard of proof for revolutionary ideas, ensuring that they really do make sense before they’re accepted by the community at large.
This is important because any idea that you’d dub “non-traditional” is almost certainly going to have a massive effect on other research. And you don’t want to do that by accident.
I understand and agree with your point about bias toward tradition and the positive effect such bias can apply towards strengthening the diligence and thoroughness of the research presented in new papers.
My central point, though, has to do with whether relying on tradition as a guide might benefit from being augmented by a process designed to consciously ask whether there is an "outside the box" component to one's paper, since acknowledging such a component would help to overcome whatever over-reliance on tradition might inadvertently prevent us from apprehending the intrinsic value of the paper's conclusions. (A very awkward sentence, that.)
Simplified: tradition once said the world was the center around which the sun and the stars revolved, and it took an inordinately long time for the scientific perspectives of Copernicus and Galileo to take root precisely because tradition (and, of course, religious authoritarianism, itself also a representation of tradition) refused to consider even the possibility of such new "outside the box" perspectives.
Yes. And leaving aside the religious persecution angle, that was a good thing. Their theories caused a major uproar and considerable change throughout the scientific community. If they'd been wrong, that would have been a disaster. But even worse… What if they'd been just almost right? If their theories had been accepted too fast, but had a subtle but significant flaw that could be detected with instruments of the time? The backlash against those theories could've discredited heliocentrism for decades or centuries.
Scientists are, ideally, very cautious and deliberate. New ideas should take some time to prove themselves and be accepted.
Also remember that a piece of research that hasn’t been published yet – especially a revolutionary one – isn’t necessarily unknown. One purpose of conferences and such is so scientists can find out what their peers are working on and thinking about. A promising, revolutionary theory will generate a lot of attention and discussion.
Of course, what exactly qualifies as promising is another question entirely. That tends to generate a lot of debate.
I’m not saying that the process is perfect. But it works, and is hard to improve upon.
Again, I understand your point, though here I'm not sure I agree completely that the length of time it takes to embrace a new discovery is automatically a good thing. And of course, it's hard for me to see any intrinsic legitimacy in rejecting the validity of new ideas out of hand in the process of defending tradition, as was certainly the case with Copernicus and Galileo.
I'm not arguing against the idea of peer review. I think it makes perfect sense for others in one's own bailiwick to review and analyze one's work. I also agree with you that, even so, peer review may not be a perfect template for maximizing the exposure of new and important discoveries.
What I am saying is that consciously recognizing thinking that goes beyond the bounds of precedent and established methodologies or protocols might be a way to augment peer review and, in doing so, improve an already good system.
One subtle element has been lost in this discussion of peer review, and it's one of the elements that argues most strongly for its retention, at least in academic environments. I think that it is helpful to look at peer review as a system to which a scholar poses a question ("What do you think of my work?") and which in turn provides a response ("Here's what I [that is, the system] think[s]…."). The important nuance here is that the response that the system offers is not purely binary. Instead, it usually comes in the form of qualitative comments and critiques, acknowledgment of what works in a particular piece of research and suggestions for what might need to be improved upon. The system is in place in order to impose a certain threshold of quality upon scholarship that is considered for publication in a particular journal, but the key is that it acts not simply as a gatekeeper, but as a part of the scholarly investigative process. Research that is rejected is not over, nor is it necessarily discredited. It may simply be that it is not yet developed to the point where it merits being granted the authority and credibility that accompany publication.
So, what does the blog system have to offer? Isn't it just peer review in a more democratic form? Threads like this one do accomplish some of the same goals as peer review. But there remain some important differences. First, the piece of intellectual production that precipitated this discussion and review is already public and, as one of our colleagues noted above, has already been granted a certain degree of credibility by virtue of its being available to a wide audience. Second, those of us who are doing the reviewing are not necessarily experts. Many of the people who have chosen to participate in this discussion are indeed academics. But how many of us are students of the history of the university, or are experts in the practices of contemporary publishing, or are well-versed in the theory of the dissemination of knowledge? Perhaps most importantly, what structural devices are in place to ensure that those of us with some relevant expertise are able to make that authority count for something? Anti-authoritarian attitudes may be useful when standing in opposition to oppressive governmental apparatuses. But they are not productive when it comes to rational critiques of ideas, because they allow those who are able to sell ideas to gain credence instead of ensuring that those ideas that merit discussion can earn their place in the intellectual discourse. Peer review is suspicious of ideas that challenge established theories. But this is not merely because academics seek to preserve some old-fashioned way of thinking. Rather, it is because established theories became established because they made sense to a lot of people who knew a lot about the theories' topics at a particular moment, and the peer review system helps to guarantee that new theories advance due to their merits and not just because they are splashy or sexy.
The peer review system is slow and it does not always work perfectly (someone mentioned the Sokal affair, a brilliant example of what happens when people allow themselves to be so swept up in who’s saying something that they neglect to think about what’s actually been said), but it does have an important regulatory role to play in maintaining a minimum level of quality in intellectual discussions and it has an essential role in fostering the development of new ideas that challenge the existing theories.
Let’s go to a real world case where peer review is performing an essential (and progressive) political function: the battle over “Intelligent Design.”
One of the reasons that those defending our schools’ science curricula from this nonsense have had the success they have had is the fact that Intelligent Design has never managed to pass the test of peer review.
In fact, ID provides us with an even more interesting case study for what this diary proposes. Because a single ID-oriented article has appeared in a peer reviewed journal. It turned out, however, that the author had managed to get his article published only by evading peer review (he had a friend on the editorial board). The journal ended up with egg on its face, and publicly retracted the article. Actual scientists reviewing the article post-publication found it full of errors. (You'll find details of this story here.)
The problem is that, despite the fact that this article was quickly discredited and disavowed by its erstwhile publisher, the ID crowd is, to this day, referring to it proudly as proof that the scientific community endorses Intelligent Design.
Without peer review, all issues that are amenable to scholarly discussion would be reduced to "he said, she said," which is actually often code for "whoever has the most money and media access gets to determine the truth."
At any rate, nothing today prevents plenty of pseudo-scholarly drivel from being published outside of peer-reviewed journals and reaching wide audiences, as the successes of both ID and The Da Vinci Code nicely indicate.
Peer review is essential in fields that are so specialized that there are only a few people who are qualified and understand the research well enough to provide a review (I'm thinking about theoretical math, that sort of thing.)
The ID case is a perfect example of why peer review is useful across the spectrum of academic disciplines.
If people want to publish cutting edge research that more traditional peer reviewed journals would not accept, there are avenues for this as well. It may not get you tenure, but it will get your ideas out.
There is NO comparison between what goes on at Kos and the process of peer review. Not everything is a democratic process, either. Why should someone who has no knowledge of a subject be considered equal to someone who has spent a lifetime maintaining current knowledge in a field?
Other areas of expertise aren’t a democratic process either. For instance, why should I listen to someone who has no experience with electrical work when I want to do wiring in my house, instead of someone who has been working as an electrician for years?
The Social Text furor a few years back also indicates the importance of peer review. You can read the original paper and much of the brouhaha right here.
Briefly, Alan Sokal, a physics professor, wrote a completely bogus paper on how physicists do research. He submitted it to Social Text and they published it without bothering to send it to any actual physicist for peer review. The short of it is: Social Text ended up with much egg on its face.
In this case the peer review process was not properly done and a paper that should never have been published was.
In my experience as a self-funded independent researcher, the myth of the lonely genius crushed by the weight of the scientific establishment is so much hogwash. If one has a case and can make the case, then the paper will be published. If one is a wild-eyed loony unable to make a case, then getting papers published is well nigh impossible — as it should be.