Translating between preclinical data and clinical outcomes: The bottleneck is always on the other side
Drug development is an information-rich endeavor, and it is the role of informatics researchers to ensure we are leveraging important findings from internal and external sources. Our work enables better-informed decisions about compounds in the pipeline, and it's especially beneficial in translational medicine, helping to improve the likelihood of success as candidate drugs move into clinical development.
Translational medicine provides a better understanding of drug mechanisms and interactions, which aligns well with today's regulatory emphasis on safe medicines with fewer side effects. By translating preclinical data and observations into possible clinical outcomes, we can make the drug development process more efficient and cost-effective. Our work contributes to the goal of reducing attrition rates by improving our ability to pick winners and drop losers in earlier, less-costly phases of clinical trials.
Translation needs to happen in both directions: forward translation, from preclinical studies to patient studies, and back-translation, a "feedback loop" in which patient data from clinical studies are used to "humanize" preclinical drug discovery. The biggest challenge today is finding or accessing all the relevant and necessary data and information, and we often spend more time getting to the data than using them.
I started out in preclinical discovery many years ago, and I was surprised at the difficulty in finding out what happens to compounds once they enter the clinic. Even though high-level results from studies were available, we couldn't really see all the critical values. That's when I started looking into ways to improve information flow. I moved into clinical development to understand the problem from the "other side."
What I found was that depending on where you sit, the bottleneck is always on the other side.
After spending almost six years in clinical development, I recently moved back to the preclinical side, with a better knowledge and appreciation of where the clinical data reside, and of the ethical and legal aspects of patient privacy. My role now includes responsibility for improving the information flow between these two sides so both can use the information in the best way possible. Ultimately, I want to eliminate any bottleneck so that whichever side you're sitting on, you have access to information that can answer your questions, progress projects and deliver new medicines to patients.
The questions raised in translational medicine are key to developing the targeted therapies of personalized medicine. Working with selected populations, you might need to identify improved responses and survival rates, see what effect a compound has on a particular type of tumor, and gauge performance against currently marketed products.
The ability to find answers depends on handling diverse and complex data. Identifying preclinical models—in silico, in vitro, in vivo—or assays that can best predict clinical observations is not trivial. It requires understanding preclinical-to-clinical correlations at the project level, at the model/assay level and at the subject level. The challenge in making these correlations lies in what we call the "translational chasm"—the information gap between preclinical and clinical information.
Maintaining an ongoing dialog between preclinical and clinical scientists, so they can improve the efficacy or safety profiles of candidate drugs, is challenging and involves three interrelated aspects.
The solution has to address the technology difference, cultural difference and skills difference within each organization. We often talk about technology and culture, but even the right culture—in terms of willingness and understanding—with all the right tools can fail if we don't have the right skills. Few scientists have the necessary computational, quantitative and information science skills.
The complexity of the data requires more than "point and click"—even with the most sophisticated tools—to access, retrieve, integrate, and analyze the necessary information and knowledge. It requires trained scientists who fully comprehend the scientific or medical problem and also have strong information and computational skills and expertise to apply sophisticated multidimensional analysis and visualization approaches to the data.
While technology is part of the solution, it has created its share of problems. A lack of standards among different systems and programs has led to technology bottlenecks in the information flow. The good news is we're seeing some convergence and progression of standards, allowing disparate systems to communicate. While progress is being made across the board, it has been in pockets, so the overall effect is one of small steps.
Because all of us within the industry are dealing with similar issues, several consortia have come together to join efforts and create larger datasets. When researchers look for correlations, it's like searching for a needle in a haystack. With a big enough haystack, you might find several needles to compare for similarities. With large enough datasets, researchers can identify more of the factors that contribute to the success or failure of a drug, or a particular class of drugs, thus gaining better insights.
This pooling of data across the industry might seem like competitors are now collaborators, but the sharing is "fit for purpose," without revealing competitive advantage or proprietary secrets. The collaboration is toward a common goal of developing a better understanding of the science and disease, not products. Working alone, no company can achieve the richness of data and knowledge that these consortia can achieve together.
As information flow improves between preclinical data and clinical outcomes, the need for even better communications becomes apparent. Today, we have a good handle on where the data reside and how they can flow to any particular project. The next and more important leap is working at a higher level, so project teams can make decisions in the context of everything else that may be related. We need to ensure access to data across projects and across functions, even including inactive projects, because there is valuable knowledge in our legacy projects.
The outline of the task is relatively simple, if difficult to achieve. We need to extract and structure our project knowledge in a way that enables translation. We need to connect project information across functional and scientific boundaries. We need the skills and ability to mine information across boundaries.
Today's knowledge base is a complex system of data sources, abstraction pipelines, document management, taxonomies/ontologies, text-mining engines, curation/quality control tools, and query capabilities. The knowledge base of the future needs to also integrate the clinical data and knowledge, much of which sits securely in regulatory-compliant document repositories.
To be successful in translational science, we can't wait until all of the necessary infrastructure is in place. We can build on the hard work and hand curation of data currently done in translational medicine. We also need to recognize that finding correlations in complex data requires large datasets, not just diverse data, and can be aided by exploiting existing knowledge and legacy data.
The issue of bottlenecks (existing and potential ones) is ever-present. The solution comes from having the right technology, culture and skills to address the problem. Most people focus on the technology and the culture. But it is the skill to understand the questions, process the data, and apply computational and quantitative approaches to exploit the data in an integrated fashion, that allows us to resolve bottlenecks wherever they exist and to provide context that makes a difference in the drug development process.
You can have the best tools possible, but if you don't have the skills to use them—or don't fully understand the question at hand—there will always be a bottleneck on one side or another.
Dr. Anastasia Christianson is a senior principal scientist in informatics and senior director of discovery information at AstraZeneca Pharmaceuticals based in Wilmington, Del. Christianson began her professional career at a small biotech company, DNX Biotherapeutics Inc., in 1992 before moving to Zeneca's Pulmonary Pharmacology department in 1994. Since then, she has helped to set up AZ's Genomics and Bioinformatics division in Wilmington, helped to establish informatics expertise in experimental medicine and clinical development, and has been the biomedical informatics global discipline leader for the last five years. Outside AstraZeneca, Anastasia has held adjunct professor appointments at universities including Johns Hopkins University, University of Pennsylvania and Drexel University. Christianson obtained her Ph.D. in Biological Chemistry from the University of Pennsylvania in 1989, followed by postdoctoral training at Harvard University in Cellular and Developmental Biology.