When artificial intelligence (AI) isn’t out to destroy us or subjugate us, as in movies like the “Terminator” franchise or “The Matrix” trilogy, we tend to look at it as our savior, especially in fields where the data is overflowing and the issues incredibly complex, as in life sciences, pharma/biotech and healthcare.
 
AI won’t save us.
 
I mean, don’t get me wrong: It will help save us (if it doesn’t become sentient and resentful, as it so often does in fiction and film), but in the end it’s just another tool. Next-gen sequencing, CRISPR, and cryo-electron microscopy are wonderful too, but they don’t solve the problems on their own. In some cases, they can even compound some challenges as they solve others.
 
I say this mostly because we have a guest commentary this issue talking about AI, and we’ve had an increasing number of AI and machine learning news stories, not to mention feature sections that we’ve run on the topic.
 
But it’s also as good an excuse as any to roll out something interesting that has been sitting in my queue of commentary topics since last year: a post on GNS Healthcare’s blog titled “All AI Is Not Equal: Why Cause and Effect is Crucial for Healthcare.” In it, the author shares some insights from AI pioneer Judea Pearl, who said in an interview that, as GNS paraphrases, he “thinks too many people are deploying AI to overcome uncertainty—predicting what will happen next by association rather than leveraging the power of the technology to deal with cause and effect. He goes on to say that AI and machine learning need to move more aggressively to evaluate interventions and causal models to gain true value.”
 
The blog post was focused more on healthcare than on pharma/biotech or life sciences, but the sentiments still apply in the industries DDNews serves.
 
And I think that if we don’t figure out how to make AI work better with us, instead of expecting it to do the work—as many of us do, often without realizing it—we are going to fail. We may not end up with AI that tries to kill us or take over the world, but we may end up with too many questions and half-answers to juggle on top of the ones we already have.

Published In

Volume 15 - Issue 9 | September 2019
