For healthcare providers, it must seem at times that prescribing a drug to a patient is a little like playing the classic computer game Minesweeper.
You know that any drug you prescribe has gone through rigorous preclinical and clinical testing that has been vetted by regulatory agencies. You know that the drug has likely already been administered successfully to—and is providing benefit to—thousands and possibly tens of thousands of patients.
But in the back of your mind, you also know that as you hit “enter” on that patient’s prescription, there is a very real chance that this patient is one of those few for whom the drug was shown to be toxic, regardless of whether we know why. When you make that move, you may strike the mine that significantly complicates the patient’s game.
Locating landmines
Despite our best efforts to understand the potential impacts of a drug candidate as it moves from the discovery phase to human trials, metabolic challenges and toxicity issues continue to plague many projects. Try as we might, what appears to be a viable candidate may not show its darker side until it has moved into human patients, whether still under watchful eyes during clinical trials or in the largely invisible world of post-marketing use.
In a 2014 review on the design and selection of drug candidates in early development, Optibrium’s Matthew Segall and Lhasa’s Chris Barber suggested that about 30 percent of late-stage drug failures were the result of issues related to toxicity, and that these failures contribute heavily to the roughly $1.8 billion it costs to bring a drug to market. And even within the preclinical space, more than half of the failures were attributable to toxicity and safety issues.
But as troublesome as these numbers are, failure during preclinical and clinical development limits the exposure of the general population. This is not the case when a drug is identified as toxic after it has been approved.
Earlier this year, Igho Onakpoya and colleagues at the University of Oxford performed a systematic review of global withdrawals of medicinal products related to adverse drug reactions (ADRs). Of the 353 products they identified as being withdrawn after regulatory approval between 1950 and 2015, 40 had been withdrawn worldwide. Of these, analgesics accounted for 25 percent.
Death occurred in 68 percent of these cases, and the most common mechanisms of ADRs were hepatotoxicity (25 percent), cardiotoxicity (20 percent) and nervous system toxicity (12.5 percent).
This study highlights a significant challenge in drug development: despite having moved through a variety of in-vitro and animal testing stages during preclinical development, and even having been tested in clinical trials, these compounds can still demonstrate unexpected toxicity issues.
An ever-growing number of in-vitro assay systems are being developed to characterize and monitor drug candidate metabolism, each attempting an intricate balance between throughput and real-world accuracy. Given that drug processing is predominantly the domain of the liver, most of these assays involve liver tissues or extracts. That said, assays based on other tissues are seeing increasing development.
As an example of this latter situation, Roger Olsson and colleagues at Lund University and University of Copenhagen recently described their efforts to study the impact of the blood-brain barrier (BBB) on drug permeability, efflux and metabolism. The researchers exposed brains dissected from desert locusts to a variety of drug compounds and then used LC-MS/MS to test brain homogenates for metabolites.
“The absence of a vascular system in insects makes the ex-vivo model independent of blood flow through the brain,” the authors explained. “Thus, the locust ex-vivo model uses controlled in-vitro-like exposure conditions that provide direct comparison of chemical compounds.”
The researchers found that not only could they detect metabolites of the various drugs tested, but also when they co-administered metabolism inhibitors, they could inhibit metabolite formation. A compound known to be BBB-impermeable served as a control.
Recently, Rohit Jindal and colleagues at Massachusetts General Hospital and Rutgers University reviewed the current state of in-vitro technologies for drug metabolism, focusing on the liver.
“Several hepatic cell choices are available for these models, including primary human hepatocytes (PHH), cell lines (HepaRG, HepG2) and recently hepatocyte-like cells (HLCs) derived from human pluripotent stem cell (PSC) sources,” the authors wrote.
But as they explained, each of these systems comes with its own challenges.
PHH, for example, lose their ability to proliferate in vitro, so analysis relies on a steady and consistent supply of donor cells. Advances in cryopreservation have significantly improved the state of PHH work, and there is evidence to suggest that cryopreserved cells are in many ways preferable to fresh PHH, which may take a day or more to arrive following harvest, a delay that can significantly alter their metabolic capabilities.
Growth conditions can also facilitate longer-term experimentation, whether through the incorporation of extracellular matrix (ECM) such as Matrigel in a sandwich configuration or 3D culturing methods using scaffolds such as PuraMatrix.
But even these models tend to be limited, as the liver is not composed simply of hepatocytes but involves other cells that both support hepatocyte survival and have direct impacts on hepatocyte activity. Thus, there is growing interest in co-culturing methods.
“This is particularly useful for examining metabolic function and drug toxicity under adverse conditions such as exposure to lipopolysaccharide, where nonparenchymal cells, especially Kupffers (resident macrophages of liver), may play a pivotal role in modulating CYP 450 function and drug toxicity,” Jindal and colleagues explained.
But as Deb Nguyen, senior director of research and development at Organovo, explains, the need in early-phase discovery to screen as many compounds as cheaply as possible, with simple yes/no answers, runs up against the inherent messiness of human biology.
“We really want to be able to boil down what is really very complex—a human system—into as few simple tests as possible,” she explains. “But the problem is, we are human beings, and human beings are not simple organisms. Even our tissues are not simple.”
Part of the hazard in trying to simplify these assays is that it requires a certain amount of foreknowledge of how compounds interact with the enzymes involved in their metabolism and elimination, which Nguyen suggests introduces a certain amount of experimental bias.
“It assumes that we actually know what drives the phenotypes we’re looking for, and the problem is, we often don’t,” she worries. “No matter how smart we are, no matter how much research we’ve done, there is still a lot that we don’t know.”
Rather than try to simplify matters, she suggests, Organovo is embracing the complexity of human biology with 3D printing.
Reinventing the liver
“We’re trying to create tissue models that have multiple cell types and that are built into much more tissue-like 3D structures using our bioprinting platform,” she enthuses.
The Organovo system is a step beyond the more common co-culturing systems, she explains, acknowledging that even co-culturing is a step in the right direction.
“The challenge, however, with some of the co-culture systems is you have multiple cell types, but you’re still growing them on plastic dishes, which is still not how they grow in the body,” Nguyen explains. “That kind of culture system can really activate cells or change the types of signalling pathways that would take place in vivo.”
She suggests that bioprinting not only allows tissues with multiple cell types, but also ensures that those tissues are held together by cell-cell interactions and ECM proteins produced naturally by those cells.
This, she continues, sets bioprinting apart even from some of the other 3D culture methodologies, such as seeding cells onto preformed scaffolding materials. In those approaches, you have extra material that wouldn’t necessarily be present in the native tissue and could interfere with the native cellular structure.
“The [bioprinted] tissue piece can be analyzed the same way you would analyze a patient biopsy,” she says. “You can do everything in that kind of a system from very simple biochemical assays to much more complicated whole-genome analyses or looking visually at the structure through histology.”
This more native structure also means that tissue samples can preserve their functions for much longer periods (four or more weeks) than the one to two weeks typical of monocultures.
The importance of this challenge was highlighted last year by Matthew Hutzler and colleagues at Q2 Solutions.
“A main shortcoming of current in-vitro systems, which use human liver microsomes and hepatocytes to assess metabolism, is that incubation times are limited due to loss of enzymatic activity of the drug-metabolizing enzymes over time, which precludes the ability to obtain an estimate of intrinsic clearance for low-turnover (slowly metabolized) compounds,” they suggested.
And they argued that the challenge may only get bigger, as the prevalence of low-turnover drugs is increasing thanks to improvements in medicinal chemistry strategies for synthesizing drug molecules less susceptible to metabolism. They pointed, in particular, to 2012 reports from Pfizer scientists that upward of 30 percent of the company’s drug candidates demonstrated low intrinsic clearance rates.
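To see why short-lived incubations are a problem, consider the standard substrate-depletion calculation used to estimate intrinsic clearance: the parent compound’s disappearance is fit to a log-linear decay, and the rate constant is scaled by incubation volume and cell number. The minimal sketch below uses illustrative volumes, cell counts and time courses (not figures from any of the cited studies) to show how a low-turnover compound leaves too little depletion within a viable incubation window for a reliable fit.

```python
import numpy as np

def clint_from_depletion(times_h, pct_remaining, volume_ul=500, cells_millions=0.5):
    """Estimate half-life (h) and intrinsic clearance (uL/min/10^6 cells)
    from a substrate-depletion time course via log-linear regression."""
    k_per_h = -np.polyfit(times_h, np.log(np.asarray(pct_remaining) / 100.0), 1)[0]
    t_half_h = np.log(2) / k_per_h
    clint = (k_per_h / 60.0) * volume_ul / cells_millions
    return t_half_h, clint

# A fast-turnover compound: readily resolved within a 4-hour incubation.
print(clint_from_depletion([0, 1, 2, 4], [100, 71, 50, 25]))

# A low-turnover compound: only ~7% depletion in 4 hours, which sits inside
# typical analytical noise, so the fitted slope (and the ~39-hour half-life
# it implies) cannot be trusted unless the system stays metabolically
# active far longer than a conventional microsomal or hepatocyte assay.
print(clint_from_depletion([0, 1, 2, 4], [100, 98, 97, 93]))
```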
“With an assay system that lasts longer,” Nguyen comments, “you get a much more holistic, true sense of all of the impacts of a compound or a drug on the tissue, not just the most robust or the few that we thought to look for.”
For their part, Hutzler and colleagues noted that methods like relay incubation, in which a single sample is incubated with a series of cell cultures to provide a cumulative result, as well as microfluidic and co-culture platforms, are helping to address this challenge.
“Overall, it appears that most simplistic models for studying drug metabolism, while serving a very important purpose in drug discovery, have effectively peaked in terms of their predictive utility,” the authors concluded.
Longer viable cultures also afford the opportunity to look at metabolic and toxicological impacts that take longer to manifest than the lifespan of a typical culture, or to see whether tissues can recover from an initial assault.
In September, Organovo described its work with Leah Norona and colleagues at University of North Carolina at Chapel Hill and the Institute for Drug Safety Sciences using the ExVive Human Liver platform to characterize drug-induced liver injury leading to fibrosis. With repeated, low-concentration exposure of liver tissue to methotrexate or thioacetamide, the researchers were able to monitor not just biochemical and histological signs of fibrogenesis, but also modulations in the expression of specific cytokines and genes.
Aside from the obvious importance of the study to identifying potential pathways of drug-induced fibrogenesis, the authors wrote, there are also implications in fibrosis treatment.
“While there exist a number of challenges towards developing effective treatment strategies (i.e., causation, stage of fibrotic injury, co-morbidities), this bridges a critical gap that could inform effective therapeutic approaches (i.e., novel biomarkers and interventional strategies) for treatment at early and late stages of fibrogenesis during which different hepatic cell types may be involved and targeted to prevent or reverse liver fibrosis,” they concluded.
Also in September, Organovo announced that its kidney platform was available for commercial contracting.
“The kidney is a key point of toxicity for compounds when they reach humans and a place that is often missed during preclinical testing,” Nguyen says. “And the proximal tubule specifically in the kidney is a key place of damage.”
This sensitivity, she explains, arises because the tubules are the first location to experience high concentrations of a compound as the organ filters the blood. As well, these tissues carry significant complements of metabolic enzymes and transporters that can generate reactive compounds, which can lead to toxicity.
“We’re able to see some key toxicities with compounds like cisplatin, which is well known to create big problems in the kidney,” she continues. “We’re able to show some protection from that toxicity from some known inhibitors of transporter function.”
But even with the additional physiologically relevant complexities found in bioprinted tissues, there are still other factors that distinguish isolated tissues from true organ status. Fluid dynamics, for instance, can impact cellular exposure to chemical assaults and impart shear forces on tissues that can alter their physiological behavior and pharmacological responses. Likewise, tissues such as heart and lung are torsionally stressed by factors such as rhythmic beating and breathing.
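The shear component, at least, is straightforward to estimate: for a wide, shallow microchannel, wall shear stress follows the parallel-plate relation τ = 6μQ/(wh²), and chip designers tune flow rate and geometry to land in a tissue-appropriate range. A minimal sketch, with purely illustrative channel dimensions and flow rates rather than figures from any particular organ-on-chip platform:

```python
# Wall shear stress in a wide, shallow microchannel (parallel-plate
# approximation): tau = 6 * mu * Q / (w * h^2). All values here are
# illustrative, not taken from any specific organ-on-chip platform.

def wall_shear_stress(flow_ul_min, width_mm, height_um, viscosity_pa_s=7e-4):
    """Return wall shear stress in dyn/cm^2 (1 Pa = 10 dyn/cm^2)."""
    q = flow_ul_min * 1e-9 / 60.0   # uL/min -> m^3/s
    w = width_mm * 1e-3             # mm -> m
    h = height_um * 1e-6            # um -> m
    return 6.0 * viscosity_pa_s * q / (w * h * h) * 10.0

# A 1-mm-wide, 100-um-tall channel perfused with culture medium:
for flow in (1.0, 5.0, 10.0):
    print(f"{flow:5.1f} uL/min -> {wall_shear_stress(flow, 1.0, 100.0):.2f} dyn/cm^2")
```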
To better mimic these complexities, organizations like Emulate, Kirkstall, Mimetas, Hμrel and HemoShear Therapeutics are developing microfluidic tissue platforms that can best be described as organ-on-chip (OoC) systems (see also “Special Report on Disease Modeling: So life-like” in the November 2014 issue of DDNews).
Last November, at the annual meeting of the American Association for the Study of Liver Diseases, researchers from HemoShear demonstrated the application of its Reveal-Tx platform to profile distinct mechanisms of hepatotoxicity produced by a variety of drugs, as well as alterations in lipoprotein biology by a cholesterol-lowering drug.
Continuing from work first developed at Harvard University’s Wyss Institute, Emulate has developed a number of OoCs to model human physiology and disease. Effectively reproducing the tissue models described above, the company then layers on other biological and biomechanical influences to better mimic the organ microenvironment.
In 2013, for example, Wyss Institute’s Donald Ingber and colleagues demonstrated the use of their human kidney proximal tubule-on-a-chip to assess drug transport and nephrotoxicity, and identified stark contrasts in the responses of static and fluidic tissue models to insult.
“Kidney cell injury (both LDH release and apoptosis) produced by cisplatin could be almost completely prevented in cells in the proximal tubule-on-a-chip by co-administering the OCT-2 inhibitor, cimetidine, whereas the cells did not exhibit this physiological response when cultured under static conditions,” the authors wrote. “Cells also recovered more effectively after removal of cisplatin under fluid shear.”
Looking more broadly at OoC platforms, Vanderbilt University’s Shane Hutson and colleagues recently identified three main factors that would influence the success of these systems in toxicology: throughput, analytical integration and serial connection.
“Researchers will need to engineer and develop OoC platforms and control systems that will be amenable and affordable for high-content and/or medium- to high-throughput screening of chemical toxicity,” the authors wrote. “Concerted efforts are needed to engineer reliable perfusion and control systems for long-term OoC culture…and to integrate these systems with advanced analytic techniques for assessing OoC health.”
“It will also be critical to move beyond single OoCs and develop strategies for at least pair-wise coupling—for example, using an upstream liver OoC to introduce the potential for metabolic detoxification or activation,” they continued.
These latter two factors are clearly possible with many current OoC platforms, and the clear chip construction of several platforms makes them amenable to high-content screening. It remains to be seen, however, how effectively the OoC format can be modified to suit throughput concerns and move it upstream in preclinical drug development.
Fully linked OoCs, however, may not be able to completely mimic the complex biology of a full organism, and so researchers will continue to rely on animal models to identify other possible stumbling blocks to drug development and safety. Unfortunately, with the possible exception of Mickey over at Disney, man and mouse are not the same thing.
Man or a mouse
“Animal model selection is increasing in its complexity,” says Michael Seiler, portfolio director for genetically engineered and humanized immune system models at Taconic Biosciences. “Probing the appropriate model with the right experimental design becomes the next challenge.”
“The prediction of human responses from traditional preclinical in-vivo studies in this field is often limited by the significant species differences in the proteins involved in drug ADME,” explained Imperial College London’s Ian Wilson and consultant Nico Scheer in a recent review. “Whereas the overall pathway of drug metabolism and disposition is highly conserved, the substrate specificity, multiplicity and expression level of individual proteins mediating these processes can vary significantly between species.”
To address this challenge, researchers humanize models, introducing human versions of key metabolic enzymes or, more recently, replacing endogenous cells with human cells.
In the genetic engineering strategy, one or several mouse genes are knocked out and ultimately replaced by a specific human homologue suspected or known to be involved in the metabolism of the candidate drug, typically based on information gleaned from earlier in-vitro screening.
Although technologies to knock genes in and out have been used for decades, recent advances in gene-editing platforms such as CRISPR-Cas9 have increased the specificity of the process, making it more likely that the downstream impacts of genetic manipulation and drug treatment are the result of changes in the metabolic pathway rather than a side effect of the mutational process, such as insertional mutagenesis.
Such genetic approaches offer the opportunity to not only test specific genes, but also to test combinations of genes by cross-breeding genetically altered mice.
This can be particularly important given both the degree of evolutionary conservation of these pathways and the host organism’s response to the genetic manipulation.
“You have gain-of-function of specific transporters and pathways that are human in origin, but introducing one or a few of those creates compensatory challenges within the host system, and those need to be recognized in the interpretation of the result,” offers Seiler.
Using this approach, Scheer (then at Taconic) and colleagues generated a mouse line that was humanized for four cytochrome P450 genes as well as two transcription factors (CAR and PXR).
As they reported last November, Western blotting and LC-MS/MS suggested the human enzymes were expressed in the mice at levels comparable to those in human tissues. They then exposed microsomal fractions to various drugs and metabolism inhibitors to assess the activity of the enzymes, and found them to function as expected.
The authors concluded that their multiply humanized mouse line “provides a powerful adjunct to existing experimental approaches for preclinical drug development and optimizing drug use in patients. It offers the potential to study complex in-vivo DDIs [drug-drug interactions] involving the enzyme system responsible for ~75 percent of the phase 1 metabolism of all marketed drugs.”
Seiler echoes that the genetic control afforded by knock-out/knock-in humanized models enables specific questions to be asked in an in-vivo setting, one with a fully competent immune system and contributions by other organ systems, and holds particular appeal for understanding the very discrete mechanisms suspected to play a part in a given compound evaluation.
Again, however, genetic engineering presupposes a certain degree of knowledge about which enzymes and genetic haplotypes are critical and may in fact keep you from seeing other opportunities or risks.
“The fundamental nature of genetic humanization of an inbred strain of mouse is you are introducing one particular haplotype of a particular drug-metabolizing enzyme, and that may be reflective of some segment of the populations, but not broadly applicable,” Seiler warns. “And that’s where you wind up in the argument for the tissue-humanized animals.”
In tissue-humanized or chimeric animals, researchers effectively replace entire organs, fully humanizing the metabolic processes of that particular organ system. Thus, when a drug is tested, it is exposed to the full complement of human enzymes rather than just one or a few of the suspected participants. As well, any drug-induced damage to that organ is likely more reflective of the human pathology.
This was exemplified in early 2015 by Stanford University’s Gary Peltz and colleagues at Eli Lilly, Bruker and Japan’s Central Institute for Experimental Animals, who engrafted human hepatocytes into TK-NOG mice after ablating the mouse liver with ganciclovir (via the thymidine kinase transgene).
They then probed the mice for their ability to model cholestatic liver injury caused by the pulmonary arterial hypertension drug bosentan. In its development, the drug failed to show any such toxicity in animal models, but caused reversible, dose-dependent injury in a significant portion of human subjects.
Both serological and histological analyses clearly demonstrated dose-dependent liver damage in the humanized but not the control mice, and showed that this damage could be detected early, even in chimeric mice receiving as little as double the typical human dose.
“Bosentan-induced cholestatic toxicity could easily be detected in humanized TK-NOG mice after only one week of dosing, even when treatment of conventional mice for one month provided no evidence of its hepatotoxic potential in humans,” the authors wrote.
This study also highlights the advantage of chimeric humanization described earlier, as the mechanism behind bosentan-induced toxicity is unknown, and testing with genetically humanized mice would therefore have been complicated.
“Bosentan-induced liver toxicity in humans is reversible, and we do not have liver tissue from bosentan-treated humans for comparison. Hence, we do not know what type of histopathology to expect in the livers of the humanized mice,” they lamented. “Nevertheless, if humanized TK-NOG mice had been used during the preclinical evaluation of bosentan, important information about its potential to cause cholestatic liver toxicity in humans would have been available to pharmaceutical companies and to government regulators at an early stage in its development.”
One of the criticisms of chimeric models, however, is that they occur within an immune-compromised setting, and the immune system, in many ways, is one of the most immediate responders to adverse events.
Seiler sees an opportunity in this, however, and with a nod to recent work in immuno-oncology, suggests that you can actually study the impact of drug metabolism on the immune system by simply adding an immune system back to the TK-NOG mouse.
“If I introduce a drug in a liver-humanized model that’s co-engrafted with an immune system, does it have an impact on a specific immune cell subtype that we know is recapitulated in that model?”
Understanding the best way to triangulate information from standard models to genetically engineered ones and on to the more complex tissue-humanized models is really the major push in the field today, he continues.
But that push isn’t completely without a bit of push back.
“The use and application of genetically engineered or tissue humanized animals for this discipline—for toxicology-based work—is still targeting a group of drug development professionals that are in some ways quite conservative about interpreting the impact of those drugs,” Seiler notes.
But given the challenge of finding suitable methods among current systems, the turn to humanized models is likely inevitable.
“The better we characterize and understand them, the more effective these models can be in asking precise questions and yielding reflections of how human metabolism will take place with new compounds,” Seiler suggests.
Even if the humanized models better replicate the typical human experience, however, will this translate to improved safety for individual patients?
Back to the bedside
A challenge that remains for almost all of these platforms is that they largely test only single variants of drug-metabolizing genes, which may be selected on the basis of their frequency in the general population. With in-vitro platforms, this challenge is typically mitigated by using pooled cells from a number of donors to represent a statistical average. But again, individual patients do not experience their health as statistics, and the variability in enzymatic metabolism can be quite extreme.
In 2014, for example, Hutzler and colleagues at Boehringer-Ingelheim and BioreclamationIVT examined variances in the activity of the drug-metabolizing enzyme aldehyde oxidase (AO) in hepatocytes from 75 human donors. They noted that AO activity of cryopreserved hepatocytes varied by up to 17-fold across the 75 donors.
But perhaps more importantly, when individual donor activities were compared to AO activity in a pooled lot from 19 donors, a full 63 percent of donors demonstrated higher activity than the pool.
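A toy simulation illustrates how a pooled lot can drift away from individual donors. The distribution and numbers below are hypothetical, not data from the Hutzler study: the point is simply that pooling collapses a wide, skewed spread of activities into a single average, and depending on the skew, a majority of individual donors can sit above or below that average.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical donor enzyme activities (arbitrary units), log-normally
# distributed; sigma chosen so 75 donors span roughly a 17-fold range.
donors = rng.lognormal(mean=2.0, sigma=0.6, size=75)

# A pooled lot approximates the average of its contributing donors.
pool = rng.choice(donors, size=19, replace=False).mean()

print(f"fold range across donors: {donors.max() / donors.min():.1f}")
print(f"pooled-lot activity:      {pool:.1f}")
print(f"donors above the pool:    {(donors > pool).mean():.0%}")
```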
Earlier this year, University of Kentucky’s Timothy Tracy and colleagues discussed similar efforts to understand the variability of several cytochrome P450 enzymes, which had previously been shown to vary 30- to 100-fold depending on the specific enzyme. The authors suggested that not only did genotype impact drug metabolism, but that activity could also vary throughout the day, from tissue to tissue (e.g., liver vs. intestine), in the presence of disease, or from birth to old age.
Given this span of confounding factors, there has been growing interest in the possibility of bringing some of these analytical resources to the patient’s bedside, giving attending healthcare providers a better understanding of how an individual patient might respond to a given treatment.
Organovo’s Nguyen sees two ways in which direct patient testing might work.
“The first goes a little bit back to the stem cell question,” she says. “The promise of being able to use an iPSC-derived source for your cells is that you can generate them from a patient who is getting ready to take a new treatment, or a patient who has a genetic malformation for which you would like to be able to find a new drug against and you don’t have a good model in a dish.”
The company has already started down the stem cell road and has shown some proof-of-concept work with iPSC-derived hepatocytes.
The challenge, Nguyen explains, is that cells derived from stem cells tend to be more immature and don’t always demonstrate the adult-stage functions needed for some downstream applications in which Organovo’s customers are interested. By the same token, she continues, there is increasing evidence that differentiation in 3D cultures might lead to better maturity.
The other opportunity is to take the lead from current preclinical practices and work in immuno-oncology by using patient-sourced cells directly.
In particular, Nguyen points to Organovo’s collaboration with Rosalie Sears at Oregon Health and Science University.
“Her group is very oncology-focused and they have a lot of interest in using this technology to try to get at the personalized medicine space,” she explains. “They’ve taken cells from pancreatic tumors from patients, and then bioprinted them into a 3D tumor model, and then looked to see whether some of the same histology that you would see from the biopsy can be reproduced in a dish.”
Thus, we may not be too far from the day when the press of an “enter” button for a drug (whether literal or figurative) is preceded by the punch of a skin sample, one that may reveal a pharmacological landmine before it is set off.
Teach a lab to fish
Perhaps with an unconscious nod to the upcoming Aquaman film, researchers are not only humanizing mice but have also started re-engineering a workhorse of developmental biology: the zebrafish.
Because of its small size and transparency during larval development, the zebrafish offers researchers the opportunity to do whole-organism screening at much higher throughput and over a greater proportion of the life cycle.
Recently, Tom Carney and colleagues at Roche Pharmaceutical Research and Early Development and Singapore’s Agency for Science, Technology and Research described their efforts to develop and characterize a genetically modified zebrafish expressing the human gene for CYP3A4, an enzyme involved in metabolism of about half of all drugs. They then exposed humanized and wild-type zebrafish to a variety of drugs, using LC-MS to monitor metabolism and toxicity.
The researchers could find no metabolites of midazolam, a common reference substrate for CYP3A4, in wild-type fish, but identified marked and significant metabolism in humanized fish, although the extent of metabolism varied among individual fish. Using mCherry-tagged CYP3A4 as a fluorescence readout, however, they showed that this enzymatic variability correlated positively with fluorescence intensity.
Continuing with the most fluorescently active fish lines, the researchers then examined metabolism of two other CYP3A4 substrate drugs—amiodarone and nefazodone—and in both cases, noted elevated metabolism in the humanized zebrafish. Furthermore, treatment with the CYP3A4 inhibitor ketoconazole significantly reduced the metabolism of all three compounds.
They also noted that for most of the compounds they tested, metabolites were released into the medium, suggesting that it may be possible to avoid killing the fish, further reducing the strain on resources.
“Advances in genomic engineering in the zebrafish will permit targeted ablation of host detoxification enzymes and pathways and their replacement, through directed integration, by a suite of human detoxification enzymes under endogenous promoters,” they concluded. “Such approaches offer a technical advancement of zebrafish toxicity studies and a new phase of drug testing in zebrafish.”