
Redesigning clinical trials using artificial intelligence 

Artificial intelligence is prompting a fundamental rethink of clinical trial design and execution, with the potential to reduce patient burden, improve data quality, and accelerate development timelines. 
Bree Foster | 7 min read



Rob DiCicco, Vice President of Portfolio Management at TransCelerate BioPharma, held numerous leadership roles in clinical research at GSK and served as Deputy Chief Health Officer at IBM Watson Health before joining TransCelerate. He has spent his career working at the intersection of clinical research, technology, and operations.


While tech evangelists often tout artificial intelligence (AI) as healthcare’s silver bullet, the reality in clinical trials is more complex. Adoption has been cautious, shaped as much by concerns around trust and governance as by technical hurdles. Still, beneath the surface, methodical innovation is gaining momentum. AI is beginning to reshape how trials are designed, executed, and scaled, offering new ways to reduce patient burden, improve data quality, and accelerate development.

In conversation with Rob DiCicco, Vice President of Portfolio Management at TransCelerate BioPharma, Drug Discovery News explores how the clinical research landscape is evolving and what it will take to move from isolated wins to broad, sustainable change.

Can you give us a brief overview of how you've seen AI evolve in the life sciences space over the past decade, particularly as it relates to clinical development?

In discovery and preclinical research and development, we’ve seen AI deliver significant value, particularly in target identification, in silico modeling, and process automation. It’s helped accelerate timelines and validate novel approaches. But clinical development is a different story.

Adoption there has been slower and more uneven. The work is more complex, organizations tend to be highly matrixed, and the opportunities for quick wins are fewer. That said, we’re starting to see meaningful progress, particularly in areas like data quality and document authoring. Companies that have spent time building structured, reusable content libraries are now using AI to automate parts of the regulatory submission process. It may not be the transformative leap some are hoping for just yet, but it’s practical, it’s working, and it’s starting to stick.

In your view, what are the biggest technical and regulatory challenges that need to be addressed before AI can be widely and responsibly adopted in clinical trials?

One of the primary technical challenges is data quality and representativeness. The datasets used to train AI models often don’t reflect the full diversity of the populations where researchers want to apply insights or automate processes. This creates a real risk of bias, especially when clinical trial data comes from narrow populations or when data privacy laws limit secondary use in certain regions.
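To make the representativeness concern concrete, the short Python sketch below compares a trial cohort's age distribution against the intended treatment population and flags under-represented strata. All data, strata, and the flagging threshold here are hypothetical; a real check would span many more covariates.

```python
from collections import Counter

# Hypothetical enrollment data: age band per enrolled participant, plus the
# age distribution of the intended treatment population.
cohort = ["18-39", "40-64", "40-64", "40-64", "18-39", "40-64", "40-64", "40-64"]
reference = {"18-39": 0.30, "40-64": 0.45, "65+": 0.25}

counts = Counter(cohort)
n = len(cohort)
for stratum, expected in reference.items():
    observed = counts.get(stratum, 0) / n
    # Flag strata enrolled at less than half their population share.
    flag = " -- under-represented" if observed < 0.5 * expected else ""
    print(f"{stratum}: observed {observed:.0%} vs. expected {expected:.0%}{flag}")
```

Running this flags the 65+ stratum, which appears in the reference population but not in the enrolled cohort, exactly the kind of gap that produces biased models downstream.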

Health authorities, including the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), are beginning to engage on these issues, but global infrastructure and policy differences create challenges for organizations that want to conduct global trials. In the United States, for example, a fragmented healthcare system means that the lack of system interoperability extends beyond clinical research and electronic health record (EHR) platforms to include hospitals, labs, and other care settings. In Europe, healthcare systems are somewhat less fragmented and implementation and adoption of standards are further along, but stricter privacy laws make it difficult to use data at scale. Bridging these gaps will require not just technical solutions but also policy updates.

Health authorities are approaching AI as they do other innovations — developing perspectives and inviting industry and public input, but it's still early days. There's a massive opportunity for industry and regulators to collaborate on risk-based frameworks that enable appropriate, meaningful, and sustainable progress.

Can you share an example of a promising approach that you think is currently underutilized?

Two come to mind. First is using AI and data science techniques to simplify and accelerate data acquisition, cleaning, and quality assurance. The tools already exist, and the risk-based monitoring frameworks are in place, but organizations often revert to traditional methods due to perceived risk. Embedding AI-driven data quality tools at scale could yield major efficiencies across the sponsor, contract research organization (CRO), and investigator domains while maintaining trial rigor.
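As an illustration of the kind of centralized, risk-based monitoring described here, the sketch below flags sites whose data-quality metric drifts far from the cross-site norm. The metric, values, and z-score threshold are invented for illustration; an actual monitoring plan would define its own risk indicators and limits.

```python
import statistics

# Hypothetical site-level risk indicator: data queries per 100 collected
# data points, aggregated by a centralized monitoring process.
site_query_rates = {
    "Site A": 2.1, "Site B": 1.8, "Site C": 2.4,
    "Site D": 9.7,  # unusually high -- likely a data-quality problem
    "Site E": 2.0, "Site F": 1.6,
}

rates = list(site_query_rates.values())
mean, stdev = statistics.mean(rates), statistics.stdev(rates)

for site, rate in site_query_rates.items():
    z = (rate - mean) / stdev
    if abs(z) > 2:  # screening threshold; a real plan would tune this
        print(f"{site}: query rate {rate} (z = {z:.1f}) -- prioritize for review")
```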

The second area is real-world data (RWD). Sponsors have been investing in RWD for years to support regulatory submissions, inform clinical practice, and identify eligible patients for trials. But it hasn’t delivered the boost to recruitment that many hoped for. Why? Because the data often lacks a complete longitudinal view of a patient’s health or is missing critical context. A patient might appear to be a good candidate in one dataset but be ruled out based on information in another system, something that often only comes to light during screening.

That same data, however, could be used upstream to improve protocol design, making studies more efficient, more inclusive, and less prone to high screen failure rates. That’s a missed opportunity.
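A minimal sketch of that upstream use: linking two hypothetical RWD sources before flagging trial candidates, so that an exclusion visible only in the second source (here, a made-up insulin-use flag) is caught before screening rather than during it. The patient records and eligibility criteria are illustrative only.

```python
# Two hypothetical, linked real-world data sources for the same patients.
claims = {
    "pt-001": {"diagnosis": "T2DM", "egfr": 72},
    "pt-002": {"diagnosis": "T2DM", "egfr": 65},
}
ehr = {
    "pt-001": {"on_insulin": False},
    "pt-002": {"on_insulin": True},  # exclusion invisible in claims alone
}

def eligible(patient_id):
    # Made-up criteria: type 2 diabetes, adequate kidney function, no insulin.
    c = claims[patient_id]
    e = ehr.get(patient_id, {})
    return c["diagnosis"] == "T2DM" and c["egfr"] >= 60 and not e.get("on_insulin", False)

for pid in claims:
    print(pid, "-> candidate" if eligible(pid) else "-> excluded by linked record")
```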

Trust is often cited as a major hurdle for AI in regulated environments. What does it take to build trust in AI-driven systems among regulators, researchers, and patients?

End-to-end transparency. Stakeholders need clarity on where the data came from, how it was used to build the model, how the model performs, and how its outputs are interpreted. Some technology companies do a good job of publishing evidence to demonstrate reliability. Others do not provide as much insight.

Right now, at least in the United States, data stewardship typically sits with the healthcare provider. But in the future, it may be more appropriate and more effective for patients to control how their data is used. That means having the ability to opt in or out of secondary data use, change their preferences at any time, and easily understand how their information might be used. Just as importantly, patients should be informed when their data has contributed to research and what the outcome of that work means for their own care. If the healthcare and technology sectors made that level of communication and control a priority, it would go a long way in building public confidence and encouraging participation.

What frameworks or principles do you believe are essential for the responsible use of AI in clinical research settings?

In addition to transparency, frameworks that prioritize scalability and shared governance are needed. When solutions are built on narrow or bespoke datasets and use cases, it becomes difficult to scale them across programs or organizations. Mechanisms that promote data sharing, while protecting privacy, are critical to accelerating both model development and the deployment of AI-driven tools.

The path forward requires sponsors, technology companies, the patient community, and healthcare providers to come together and articulate the minimum thresholds for data robustness, representativeness, and algorithm reliability. This collaborative approach will establish the foundation for responsible AI use in clinical research.

Looking ahead, where do you see the most promising opportunities for AI to meaningfully transform clinical development in the next 5–10 years?

One major opportunity is using AI to enable more seamless data exchange to facilitate the use of historical controls and model-based drug development. This could help address one of our biggest challenges: the slow pace of patient recruitment in clinical trials.

Rather than focusing exclusively on finding more sites and patients, what if we needed fewer patients? AI could help build comparative datasets from historical clinical trial data or RWD, potentially reducing the number of patients needed in control arms. This creates a win-win-win situation: cost reduction, time savings, and reduced burden on patients who don't want to risk randomization to placebo. Closely related is the use of AI to model and optimize trial design, making studies easier to participate in for both investigators and patients, which can also accelerate recruitment.
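One common way to build such a comparator is to match enrolled patients to historical controls on baseline characteristics. The sketch below does greedy nearest-neighbor matching on a single hypothetical baseline score; real external-control analyses typically match on a propensity score estimated from many covariates and must satisfy regulatory expectations on comparability.

```python
# Hypothetical baseline severity scores for enrolled (treated) patients and
# a pool of historical controls from completed trials.
treated = {"t1": 42.0, "t2": 55.5, "t3": 61.0}
historical = {"h1": 40.9, "h2": 57.0, "h3": 70.2, "h4": 54.8}

available = dict(historical)
matches = {}
for patient, score in sorted(treated.items()):
    # Greedy nearest-neighbor match without replacement.
    best = min(available, key=lambda h: abs(available[h] - score))
    matches[patient] = best
    del available[best]

print(matches)  # {'t1': 'h1', 't2': 'h4', 't3': 'h2'}
```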

Another opportunity is to use AI tools to support patient encounters. If AI can automate administrative tasks for clinicians, they could spend more time with patients, asking better questions, and capturing higher quality data in EHRs. This creates a cycle where better clinical data leads to better AI models, which in turn supports better clinical care and research data. The key technical challenge here is improving our ability to extract insights from unstructured data reliably, an area where we’re starting to see real progress.
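As a toy example of turning unstructured clinical text into structured data, the sketch below pulls drug-and-dose pairs out of free-text notes with a regular expression. Production systems rely on clinical NLP models rather than hand-written patterns; the notes and pattern here are illustrative only.

```python
import re

# Invented free-text notes; real clinical notes are far messier.
notes = [
    "Patient started on metformin 500 mg twice daily.",
    "Continues lisinopril 10mg; BP well controlled.",
]

# Naive drug-plus-dose pattern, for illustration only.
pattern = re.compile(r"(?P<drug>[A-Za-z]+)\s+(?P<dose>\d+)\s*mg", re.IGNORECASE)

for note in notes:
    match = pattern.search(note)
    if match:
        print({"drug": match.group("drug").lower(), "dose_mg": int(match.group("dose"))})
```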

What key bottlenecks do you see organizations facing when trying to implement AI in clinical development?

Two critical bottlenecks stand out. First, the industry's failure to implement data standards at the pace seen in other sectors like finance, energy, and telecommunications. These standards underpin the ability to move large volumes of data at scale and make systems interoperable. Without them, there's a high transactional cost for moving, cleaning, and modeling data, and sustainability suffers as a result.
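That transactional cost is easy to see in code: without a shared standard, every pairing of systems needs its own mapping layer. The sketch below normalizes lab results from two invented source formats into one common, SDTM-like row shape; the field names and records are hypothetical.

```python
# Two invented source formats for the same kind of lab result, each mapped
# into one common row shape so downstream tooling sees a single schema.
def from_system_a(rec):
    return {"subject": rec["subj_id"], "test": rec["lab_name"].upper(),
            "value": float(rec["result"]), "unit": rec["units"]}

def from_system_b(rec):
    return {"subject": rec["patient"], "test": rec["analyte"].upper(),
            "value": float(rec["val"]), "unit": rec["uom"]}

rows = [
    from_system_a({"subj_id": "001", "lab_name": "glucose", "result": "5.4", "units": "mmol/L"}),
    from_system_b({"patient": "002", "analyte": "Glucose", "val": "98", "uom": "mg/dL"}),
]

for row in rows:
    print(row)  # one schema downstream, regardless of source system
```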

But the real issue goes deeper: the lack of standardization makes it hard to scale. When an organization considers investing in an AI solution, they don’t just want a one-off result — they want a solution that can support multiple use cases over time. Technology companies can build just about anything to specification, but if the approach isn’t scalable, continued investment is challenging. The acquisition cost of the technology is only part of the challenge; the cost of change management is just as significant. That’s what’s getting in the way of true industrialization. Solving for scalability will require alignment across regulators, tech vendors, pharma sponsors, and government agencies, because no single sector can solve this alone.

The second bottleneck is data sharing. There is a real opportunity in continuing to grow this as a standard practice. Some of the most promising AI applications, like developing digital biomarkers or measures of disease progression, demand more data than any one organization can generate. Advancing these use cases will require secure, shared data frameworks that allow all players in the ecosystem to contribute meaningfully.

What’s the most exciting innovation you’ve seen recently in the use of advanced analytics or AI in clinical development?

One of the most exciting developments is the use of machine learning in model-informed drug development and precision medicine. At a recent session during the Drug Information Association (DIA) Europe meeting, there were compelling examples of how researchers are starting to better understand variability in treatment response, integrate multimodal data to predict outcomes, and identify disease progression signatures in both selected tumor types and multiple sclerosis.

These examples illustrate the potential to reduce the number of patients needed in clinical trials, improve the probability of late-phase success, and develop novel endpoints. But, of course, realizing this potential depends on having access to high-quality, robust datasets and mature analytical capabilities. Those are the foundational elements that will make this kind of innovation scalable and impactful.

Where do you see the greatest untapped opportunity for AI to add value in clinical trials today?

The greatest untapped opportunity is the potential to reduce patient burden in clinical research, either by decreasing the number of patients needed in a given trial or by reducing the total number of trials required. While process automation and faster timelines are valuable, the real impact will come from fundamentally changing how clinical studies are designed.

By using AI to develop smarter, more informed protocols that reflect a deeper understanding of patient populations, we can reduce screen failure rates, accelerate recruitment, and ensure studies better represent the diverse populations they aim to serve. This approach not only improves operational efficiency but also enhances the ethical foundation of clinical research by minimizing unnecessary patient exposure.

About the Author

Bree Foster is a science writer at Drug Discovery News with over two years of experience at Technology Networks, Drug Discovery News, and other scientific marketing agencies.
