Peer review and the life sciences
In my previous column (“Red tape rising,” June 2014 issue), I expressed concern about the amount of time scientists devote to searching for funding and complying with regulations as the percent yield of funding applications drops precipitously. The result has been a downward spiral in research efficiency and diminishing enthusiasm among young people for pursuing careers in academic research. This is especially troublesome in life-sciences research. Some of us see a bubble, one linked to underutilized research facilities and an accelerating number of second- and third-tier journals.
The concept of “publish or perish” remains very much alive, but in the sciences it should be expanded to “publish and get funded or perish.” The perishing part includes difficulties with promotion and the long latency in building a sustainable research program attractive to other scholars. The National Institutes of Health has been well aware of this and has experimented with schemes to ease the startup of worthy young scientists. Alas, its budget has been flatlined for so long that there is not enough to go around. Many senior investigators have lost funding for long-established programs. A realignment is occurring that over time will likely deprive medical research of a generation of talent.
Given that this publication focuses on commercial life-sciences efforts, let me remind readers of the concept of peer review and where it plays a role. With few exceptions, industrial R&D people are not “put up for promotion” on the strength of a half dozen or more supporting letters from competitors. It is hard to imagine GSK asking for letters from Merck, Pfizer, Novartis and Amgen to support the promotion of one of its own. In academia this is standard operating procedure, given that faculty members in a given department operate very independently. Many have little overlap with the scholarship of a colleague who is a candidate for elevation in rank. By going outside for supporting references we also, to a varying degree, avoid local biases.
Likewise, the budget to support a project team in biopharma is not granted by scoring reviews from competing firms, yet this is exactly how government grants are judged. The process is highly biased against real innovation, especially when funds are tight. It also takes a huge amount of time: a year, in round numbers. There can be more waiting than doing. Of course, there also are grants for small businesses, and most readers will be familiar with the “valley of death” between the basic science and the blocking and tackling needed to translate it to proof of concept on a larger scale. Funding clinical validation is far more costly than developing ideas. The process of academic research, given its integration with education, does not allow for the routine steps required to build confidence through the many repetitive experiments that validation demands.
This brings us to a third element of peer review: the acceptance of papers for publication in journals. Today, virtually anything in science can be published somewhere in the hierarchy from Science and Nature on down to the pretenders. Typically, an editor with topical expertise selects between two and four other experts (or at least practitioners) as reviewers and sends the submitted manuscript out to them.
These days, the process is sped up substantially by online systems that both send out the manuscript for review and return the comments. The bubble of researchers and journals has degraded the system. The work is more complex than it was 30 years ago, far more is submitted, details are left out, reviewers are unhappy that their comments are frequently ignored, and editors need more raw material to sustain their product. More publications are derivative; reviewers are unpaid and busy, the best of them being the busiest. They can be snookered. Opportunities for bias have engendered a longstanding debate: reviewers are not revealed to authors and, in some cases, the authors are not revealed to reviewers. While English is the accepted language of science, more and more scientists have not mastered its mysteries.
Given the pressure to publish anything, the limited time of skilled reviewers and the proliferation of journals, we have a widely acknowledged quality problem. Published work is often not reproducible and, rarely but still too often, a product of scientific fraud. Both are taking down our collective reputation as scientists. Too much is published too fast with too little care and too much money at stake. Companies have been started and funded on the basis of irreproducible academic science. We also have many academic institutions wanting to be competitive at research, well beyond what is either needed or possible. The acknowledged higher-education bubble is linked to the quality problem as all strive to be in the top 10 percent, or at least above average. I suppose these problems will correct themselves over time as some bubbles burst.
To paraphrase Churchill’s notion of democracy, “peer review is the worst form of review except for all the others that have been tried.” To make it work requires people of character who can give it time and see beyond their petty biases. Many arenas in which humans evaluate humans are quarreled over; good examples are annual performance reviews and the notion that there can be an objective means to evaluate teachers. We would like to substitute metrics for judgment. While we make mistakes, judgment generally works well, though it is never fair in the minds of those who get the short end of the stick.
Peter T. Kissinger is professor of chemistry at Purdue University, chairman emeritus of BASi and a director of Chembio Diagnostics, Phlebotics and Prosolia.