Editor's Focus: Artificial and essential, but still just a tool
When artificial intelligence (AI) isn’t out to destroy us or subjugate us, as in movies like the “Terminator” franchise or “The Matrix” trilogy, we tend to look at it as our savior, especially in fields where the data is overflowing and the issues incredibly complex, as in life sciences, pharma/biotech and healthcare.
AI won’t save us.
I mean, don't get me wrong: It will help save us (if it doesn't become sentient and resentful, as it so often does in fiction and films), but in the end it's just another tool. Next-gen sequencing, CRISPR and cryo-electron microscopy are wonderful too, but they don't solve the problems by themselves. In some cases, they can even create new challenges as they solve others.
I say this mostly because we have a guest commentary this issue talking about AI, and we’ve had an increasing number of AI and machine learning news stories, not to mention feature sections that we’ve run on the topic.
But also, it's as good an excuse as any to roll out something interesting I've had in my queue of commentary topics since last year: a post on the GNS Healthcare blog titled "All AI Is Not Equal: Why Cause and Effect is Crucial for Healthcare." In it, the author shares some insights from AI pioneer Judea Pearl, who, as GNS paraphrases an interview with him, "thinks too many people are deploying AI to overcome uncertainty—predicting what will happen next by association rather than leveraging the power of the technology to deal with cause and effect. He goes on to say that AI and machine learning need to move more aggressively to evaluate interventions and causal models to gain true value."
The blog post was focused more on healthcare than on pharma/biotech or life sciences, but the sentiments still apply in the industries DDNews serves.
And I think that if we don't figure out how to make AI work better with us, instead of expecting it to do the work for us—as many of us do, often without realizing it—we are going to fail. We may not end up with AI that tries to kill us or take over the world, but we may end up with too many questions and half-answers to juggle on top of the ones we already have.