Out of order: Not to be negative

The very fact that we define results as positive or negative highlights our reluctance to accept change, to accept that the story we have built in our heads may not fully reflect our reality

I was in a comedy club a few weeks ago when the stand-up performer mentioned the scientific method in the middle of a story. She had a point to make about repeating patterns and hypotheses, but it was quickly obvious she had little knowledge of the practice of science.
 
I had forgotten about the moment until a week ago, when a friend from my graduate school days published a research article and I wrote to congratulate him.
 
He was grateful and then gave me a bit of the back story, suggesting it had been quite the chore to publish the article because its findings negated earlier studies of the same system. Thus, to convince people his results weren’t spurious, he and his colleagues had to “prove” their findings six ways from Sunday.
 
For my friend, this was a harsh reminder of his Ph.D. studies when he faced much the same challenge. His degree was made all the more tortuous because no matter how he dissected and regenerated his findings, they always ran contrary to accepted dogma.
 
He was not alone, either, as yet another graduate student in my department faced very similar challenges, being told again and again by her own supervisor to repeat the experiments until she got the right result. She never did, despite her excellence as a scientist, and I seriously believe the department graduated her mostly to get her off the books.
 
I recall us, as students, sitting at the end of the hallway or in the cafeteria and joking that there really needed to be a Journal of Negative Results, lest none of us ever get published.
 
The dogma had to be true, however insignificant the specific data points might be in the grand scheme of the biological universe. My friends had to be wrong because the important people knew better.
 
Turns out—oh, the irony—that my friends weren't wrong then, and my friend is not wrong now. Other labs followed up to show that the dogma was a tad overzealously defended.
 
This was already going to be my subject for this commentary, but then I saw a post by the University of Alberta's Devang Mehta [1] in Nature's Career Column on October 4, 2019, that sealed the deal.
 
Plant geneticist Mehta related his recent experience using CRISPR gene editing to make cassava more resistant to viral disease. Despite previous publications showing this to be an effective approach, he found that it largely had the opposite effect, making the viruses CRISPR-resistant.
 
An interesting—if unanticipated—finding, no doubt. And then he tried to publish the results.
 
“Every peer reviewer agreed that our study was methodologically sound, but it soon became apparent that the finding was a message no one wanted to share,” he reported. “Why was it so hard for reviewers and editors to publish a single report showing a limited failure of CRISPR technology?”
 
Mehta and colleagues eventually did find a journal willing to accept their paper, Genome Biology [2], but as with my friends, the challenge left a mark.
 
“When negative results aren’t published in high-impact journals, other scientists can’t learn from them and end up repeating failed experiments, leading to a waste of public funds and a delay in genuine progress,” he commented. “At the same time, young scientists like me are bombarded with stories only of scientific success, at conferences and in journals, leading to an exacerbation of ‘imposter syndrome’ when our own work doesn’t match these expectations.”
 
The fear is real, and likely contributes to those damning stories we have read about scientists fudging or fabricating their data, whatever their rationale.
 
At a deeper level, though, it forces us to ask how we ever allowed findings, no matter how legitimately derived, to be seen as unassailable, inviolable, sacrosanct.
 
Is not our understanding of—or faith in—any scientific tenet only as strong as the data that supports it?
 
When data arises that does not support the tenet, is not the purpose of science to question the tenet rather than the data?
 
The very fact that we define results as positive or negative highlights our reluctance to accept change, to accept that the story we have built in our heads may not fully reflect our reality. At the very least, if the science seems to have been performed soundly, we should try to better understand the results, viewing them as just that: data points that are neither negative nor positive.
 
At the risk of hyperbole, to see it otherwise is to accept that we only need to conduct clinical trials on Caucasian men because all men are biologically equivalent, women are men with different hormone ratios, and children are men of diminished body mass.
 
Fortunately, we no longer believe that to be true, even if it did take a lot of “negative” Phase-4 data to convince people.
 
I said nothing to the stand-up comedian—I didn’t want to be that guy—but it was clear that we either have to start conveying the truth about how science really works, or maybe get closer to the ideal of how it is supposed to work.

Randall C Willis can be reached at willis@ddn-news.com
 

References
 
1. Mehta D. Highlight negative results to improve science. Nature. 2019 Oct 4. https://doi.org/10.1038/d41586-019-02960-3
 
2. Mehta D, et al. Linking CRISPR-Cas9 interference in cassava to the evolution of editing-resistant geminiviruses. Genome Biology. 2019;20:80. https://doi.org/10.1186/s13059-019-1678-3

