Many of us know the expressions "If you can't measure it, you can't improve it" or "You get what you measure." As a graduate student five decades ago, I first heard "Publish or perish." That truth goes back much further; if you look it up, you will find references back to the 1930s. This makes sense. If research is funded, we owe the funders a description of what happened—what was done and what the conclusions were. Many complain about the bias of publishing only what we see as a good result, particularly with respect to clinical trials. All results are owed to both sponsors and participants. There we publish too little.
A recent announcement from mainland China disparaged the excessive use of publications to judge academic promotions and the allocation of research funds. The Ministry of Education and the Ministry of Science and Technology made it clear that reliance on Science Citation Index papers was excessive.
The publish-or-perish notion was adopted from Western habits as a strong academic infrastructure developed in China. The concept was taken further by compensating authors with cash bonuses for papers accepted in top-tier global journals. In a way, these were sales commissions to enhance the prestige of China as a science power. Mission accomplished!
Counting publications is appealing in its simplicity and we all do that. The capacity for words and numbers no longer depends on paper and ink. Thus, the expectations are dramatically inflated, not unlike scores in sports events. In China, what a ministry suggests is likely to be done. A similar announcement in the United States would likely result in yawning.
In academia, we don't respond well to authority. We feel competent to evaluate colleagues on quality as well as quantity. We do this with the support of outside peer reviews in the form of letters from recognized scholars. Because of the breadth and depth in science and engineering departments, very few local deciders dig deeply, and so the simple counting continues.
A consequence of this system may well relate to irreproducible scientific papers going viral, often from the most prestigious journals. The peer-review system rewards (funds) what is comfortable and makes groundbreaking innovation harder to validate. Being first beats being right. It does not take long before topics like point-of-care, microfluidics, biosensors and 'omics become entrenched bubbles ready to break through or pop. Each of these has had a run of at least three decades. Some buzzwords from the past, such as combichem, capillary electrophoresis and high-throughput screening, have settled down. We know what they can do.
The concept of biomarkers has been floating about for a very long time. Some good ones include temperature, blood pressure, glucose, troponin and more. We have the tools to find others. A topical example is the speed with which viral RNA has been put to work to mark COVID-19 infections with high precision, and the ease granted by an emergency use authorization.
In contrast, our community has been publishing very premature work on biomarkers. We hedge our bets with adjectives such as potential, putative and possible. An interesting study noted "Inadequate Reporting of Analytical Characteristics of Biomarkers Used in Clinical Research: A Threat to Interpretation and Replication of Study Findings" [Clin. Chem. 65:12, 1554-1562 (2019)]. Here, five prestigious medical journals were searched for a decade of papers that used the term biomarker in selecting participants for clinical trials, defining subpopulations, and/or tracking safety or efficacy.
The authors found inadequate method documentation and weak analytical and clinical validity in many of the publications. In a majority of the 544 studies, no assay precision was reported and, shockingly, no information was given on the manufacturer of the reagents or instruments used. The authors considered nine analytical method performance characteristics commonly expected by the FDA. These were largely missing in action.
Grants can be turned down because they focus on validation, not discovery. If peer-review funding decisions are biased toward new ideas for markers without validating (or invalidating) older ideas, we will get more papers published with less value for patients. Sharing of well-documented samples could help, spreading the cost across multiple 'omics studies at once. Good samples are a costly overhead expense, rarely collected properly or in sufficient numbers. It's harder still to fund translational work for proteomics and metabolomics, given their dependence on drugs, diet, time of day and lifestyles.
Until we do validate, the markers won’t mark. Commercial firms have little incentive to do the work. Monetizing success is much less feasible than for a drug, yet the searching is no less difficult. If the system will not fund clinical validation, there is little point in funding more biomarker discovery. If it is not translated, it’s not discovered. Perhaps the Chinese ministries are thinking fewer and better papers, no matter where they are published. We all could benefit.