The leather chair squeaks as the psychiatrist adjusts her position, her legs slowly going numb as she waits for her patient to open up. An hour into their session, the patient has yet to say a word.
“I understand that you have been isolating yourself from your family,” the psychiatrist notes quietly. “Are you afraid of something or are you perhaps feeling depressed?”
She stares at her patient, willing him to speak.
Instead, he slowly turns his attention to her gaze and twitches his nose, his small furry body practically swamped by the chaise on which he is strapped.
Unlike the human patients they stand in for, animal models of neurological conditions like bipolar disorder, schizophrenia and anxiety disorders simply cannot tell you how they are feeling or whether they have suicidal tendencies.
Back in 2010, Mount Sinai School of Medicine’s Eric Nestler and Harvard University’s Steven Hyman spelled out the challenges.
“Many of the symptoms used to establish psychiatric diagnoses in humans (e.g., hallucinations, delusions, sadness, guilt) cannot be convincingly ascertained in animals,” they wrote in Nature Neuroscience. “When there are reasonable correlates in animals, (e.g., abnormal social behavior, motivation, working memory, emotion, and executive function), the correspondence may only be approximate.”
In part, they averred, this is because little is known about the pathophysiology of most conditions contained within the Diagnostic and Statistical Manual of Mental Disorders, 4th edition, text revision (DSM-IV-TR), and there are few, if any, objective diagnostic tests.
“Consequently, diagnoses are based solely on phenomenology; i.e., on symptoms, signs and course of illness,” the authors explained. “As a result, the boundaries between DSM-IV-TR disorders, and the boundaries between disorder and normal variation, are often arbitrary or hazy. This state of affairs creates enormous hurdles for the development and validation of animal models.”
As the number and variety of organisms modeling human neuropathology increase (see the sidebar “Mouse pads and modeling” below), the number and variety of techniques used to validate these models also continue to expand. And as with behavioral and histological assays, researchers continue to take their cues from common practices with human patients.
A good example is the growing practice of in-vivo imaging, including MRI, PET, SPECT, ultrasound and optical imaging.
“The way that we apply imaging is to look [at] how clinical doctors or neurologists are monitoring pathological events in patients’ brains by using, for example, MRI or PET scanning,” explains Antti Nurmi, director of science for Charles River Discovery Services at Charles River Laboratories. They then apply the same processes to their rodent models.
“This is actually a very strong benefit from a translational point of view, where we want to think about how well the results from these rodent models correlate with what is happening in the human brain,” he continues.
That said, the use of neuroimaging in animal models is still in its nascent stages, according to Drew Heinmiller, photoacoustics product manager for VisualSonics, who notes that the field lags significantly behind other therapeutic categories such as cancer or cardiovascular disease.
“There’s not a lot of in-vivo imaging going on in neuroscience,” he says. “It’s all histological, neurohistochemistry, microscopy. It’s really trying to understand the cellular mechanisms at the cellular scale.”
The imaging systems developed at VisualSonics cannot resolve single cells, he says, reaching down to about 30 microns, but not all analysis has to occur at the cellular level.
“We’re imaging function of the brain in a live animal, which I think is actually an important step, especially in something like neuroscience where I think it is actually a little bit further behind in terms of the field in general because the brain is so complex,” Heinmiller suggests.
“The challenge from an imaging standpoint, I think, is going to be sensitivity,” he continues. “Being sensitive enough to any kind of molecular probe you use is going to be a challenge. There are plenty of groups working on that.”
One company that is working closely with neuroscientists is PerkinElmer, which extended and expanded its relationship with PET system developer Sofie Biosciences last summer.
“We are particularly interested in the use of our G8 PET/CT system to advance the investigation of Alzheimer’s and Parkinson’s disease research,” says Olivia Kelada, PET imaging applications scientist for PerkinElmer.
“One advantage of our system is the ability to image very low activities of radiotracer due [to] the high sensitivity of the PET detectors,” she says. “This can be immensely helpful to detect drugs or probes that struggle to cross the blood-brain barrier.”
Researchers have also used the system to validate novel tracer biodistribution and specific uptake, moving CNS imaging away from challenges associated with the use of [18F]FDG and [18F]DOPA, she suggests.
Also working in the PET sector, Canada’s Cubresa recently launched its NuPET platform, an MR-compatible PET scanner that works with an existing preclinical MRI system to perform simultaneous PET and MR imaging in small animal subjects. The goal with this unit is to give researchers the best of both imaging worlds.
“You have MRI, which is a phenomenal modality for anatomical and functional neuroimaging,” explains Michael Simpson, director of marketing at Cubresa, whereas the PET modality allows you to “directly image a neurotransmitter or directly image the binding potential of a particular drug.”
Thus, he says, within a single short imaging study, you can monitor the effect that drug might be having on a functional endpoint if you are, for example, using functional MRI (fMRI) to look at brain activation or blood flow.
“You start to be able to look at or tease away cause-and-effect, to look at a mechanism of action, a little bit,” he adds.
Cubresa is not alone in its interest.
At the World Molecular Imaging Congress last September, Aspect Imaging and Seoul National University announced a partnership to offer a complete PET/MRI platform for simultaneous preclinical imaging.
At the same meeting, Bruker similarly introduced its MR-compatible PET scanner insert that, according to University of Leuven researcher Uwe Himmelreich, “enables us to produce improved PET resolution through MRI-based motion correction, and to guide external interventions in real time.”
Months later, Bruker expanded its portfolio further by completing its acquisition of the preclinical PET imaging business of Oncovision.
For Ross Nakatsuji, Cubresa manager of marketing communications, this understanding of mechanism isn’t just important for new, experimental therapies, but also for existing, marketed drugs.
“A lot of times, historically, we’ve had these treatments and we don’t fully understand the mechanism by which they work,” he suggests. “This is a great opportunity to be able to see cause and effect.”
“If you’re a drug company, you want to know how it works and why,” he presses. “And then, you could take it a step further and develop treatments for particular phenotypes, particular groups of patients that have characteristics that you can test for. Then the therapy becomes all the more effective at least for that subset of patients.”
Beyond the traditional workhorses of the preclinical and clinical labs, however, technological advances are allowing researchers to literally shine a light on the neurological spaces long kept dark by the bones of the skull.
“An increasingly wide range of techniques make use of the light absorption and/or scattering properties of brain tissue, and specifically the hemoglobin present in the vascular system, in order to obtain high spatial and temporal resolution readouts of hemodynamic changes,” University of Sheffield’s Chris Martin noted in 2014. “These techniques usually require visualization of the brain either through a craniotomy or a thin cranial window (the skull is thinned to translucency over the imaged brain tissue).”
Even with this invasive preparation, however, which can itself complicate experimental results, Martin noted a significant disadvantage of many standard optical imaging approaches: light scattering and absorption by tissue limit depth penetration to just the first few hundred microns of cortical tissue.
Layering optical imaging atop its long-held expertise in ultrasound, VisualSonics may have a response to Martin’s challenge in its newly launched photoacoustics platform.
“We essentially use pulsed laser light to generate an ultrasound signal,” Heinmiller explains. “You pulse the laser in and anything that absorbs the light will heat up a little bit and give this thermoelastic expansion. That creates a pressure or sound wave that comes through the tissue and then we listen to that.”
The system uses light in the near-infrared (NIR; 680-970 nm) and NIR II (1200-2000 nm) ranges.
“In that range, hemoglobin in blood is the main absorber,” he continues.
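Because oxy- and deoxyhemoglobin absorb differently across these wavelengths, imaging at two wavelengths allows the two hemoglobin species to be spectrally unmixed, which is how photoacoustic systems typically estimate oxygen saturation. The following is a minimal, purely illustrative sketch of that unmixing step; the extinction coefficients are made-up placeholders, not tabulated values:

```python
import numpy as np

# Illustrative (not tabulated) extinction coefficients for
# deoxyhemoglobin (Hb) and oxyhemoglobin (HbO2).
# Columns: [Hb, HbO2]; rows: a shorter and a longer NIR wavelength.
E = np.array([[7.0, 2.8],    # shorter wavelength: Hb absorbs more
              [3.7, 5.3]])   # longer wavelength: HbO2 absorbs more

def oxygen_saturation(pa_short, pa_long):
    """Estimate sO2 from photoacoustic amplitudes at two wavelengths.

    Solves E @ [cHb, cHbO2] = [pa_short, pa_long] for the two
    hemoglobin concentrations, then returns HbO2 / (Hb + HbO2).
    """
    c_hb, c_hbo2 = np.linalg.solve(E, np.array([pa_short, pa_long]))
    return c_hbo2 / (c_hb + c_hbo2)
```

In practice, commercial systems sweep many wavelengths and fit the full spectra, but the two-wavelength linear solve captures the core idea.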
He explains that although researchers could already visualize blood in the form of a Doppler signal in ultrasound or through the use of contrast agents such as microbubbles, those techniques could only image moving blood.
“If the blood’s not moving or is stagnant, which is often the case in some tumors, you have hemorrhaging or stagnant blood or nonfunctional vasculature, we wouldn’t be able to see it,” Heinmiller explains.
This is not an issue for photoacoustics.
“Combining it with ultrasound for the anatomy, Doppler for the blood flow, microbubbles for the perfusion, and now you’ve got oxygen saturation, you combine all of these things and now you’ve got a pretty powerful tool for looking at a variety of different parameters,” he enthuses.
“And because it doesn’t use ionizing radiation, is relatively easy to use, fairly high-throughput, now you can do these longitudinal studies where you watch disease progression and measure all of these different parameters as you go.”
And it is in the ability to do longitudinal studies that these new platforms may find their greatest utility, as Heinmiller describes in a collaboration with Emory University’s Alex Kuan, who studies stroke.
“He had this model of stroke where he’d ligate the common carotid artery and then have the animal breathing lower oxygen for about half an hour,” Heinmiller explains. “You [then] release the ligation and you put the animal back on air or breathing 100-percent oxygen.”
“On the ligated hemisphere, you actually see a stroke forming, but not on the non-ligated hemisphere.”
Looking for signs of stroke, however, required that some animals be sacrificed, the assumption being that all animals in the study experienced the same degree of impairment.
With Doppler and photoacoustic imaging, however, the researchers could perform live imaging during the course of surgical intervention.
“We were able to actually ligate the common carotid, watch the drop in perfusion on the one side, and simultaneously measure things like oxygen saturation, watching the drop in oxygen saturation,” Heinmiller says.
Charles River’s Nurmi likewise sees the benefit.
“One of the big justifications for us to invest in small-animal MRI units, PET/CT and SPECT/CT units for this preclinical imaging was to minimize the use of animals in the experiments,” he explains. “It is a massive opportunity for any study if you can monitor longitudinal progression of the disease or potential alleviation of the disease or therapeutic reversal when you’re developing new drugs.”
Rather than needing 250 animals for a study, he suggests, the longitudinal approach might reduce that need to 40.
Heinmiller adds that “you’re essentially using the same animal as its own control.”
That said, there are still many reasons to move forward cautiously, as neuroimaging in humans and model animals is not identical.
“Technical differences in performing the neuroimaging in animals and humans can influence our interpretation of in-vivo neuroimaging data,” says PerkinElmer’s Kelada. “For example, there are both temporal and spatial constraints that need to be considered during the interpretation of both neuroimaging signal and neurovascular coupling.”
“In addition, the use of anesthesia in animal research studies can restrict the translational implications of findings especially for fMRI data,” she continues. “Also, many of the major neuropsychological questions that are investigated in humans cannot be explored in current experimental animal models, such as the sleep-wake cycle.”
Move to true multimodal
Because each of the imaging modalities offers both strengths and weaknesses in the types of data it produces, researchers have long taken a multimodal approach to studying both human patients and animal models.
“In general, CT, MRI, and [ultrasound] are anatomic imaging methods but they have low sensitivity,” highlighted Third Affiliated Hospital of Guangzhou Medical University’s Zhi-Yi Chen and colleagues in 2014. “Radionuclide imaging and optical imaging are functional imaging techniques, while they suffer from low resolution, which often lack structural parameter.
“The combination of different molecular imaging techniques—namely multimodality imaging—can provide synergistic advantages over any modality alone and compensate for the disadvantages of each imaging system while taking advantage of their individual strengths.”
According to Heinmiller, part of the interest is simply improving experimental efficiency.
“We go to shows and you see more and more of these multimodal things because people want to get more data out of the same experiment,” he recounts. “Getting just one form of dataset, it’s almost not enough anymore.”
But the serial application of these technologies offers its own constraints on data acquisition and interpretation.
“The theme of multimodal imaging is not new,” Cubresa’s Simpson acknowledges. “And in the preclinical setting, microPET or microPET/CT scanner and a preclinical MRI system would be two tools that have really been around for a while and are mature technologies that a lot of core imaging facilities have today.”
But as he further explains, the “multimodal” image in many such experiments is produced by coregistering the data only after the separate imaging sessions have been completed.
Coregistration, however, is no easy task, as many changes may occur in the shift from one modality to another, including physical movement of the test subject. Hence the growing interest in platforms that can perform multiple imaging approaches simultaneously—true multimodality—such as those offered by Cubresa and VisualSonics.
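To make the coregistration problem concrete, here is a toy sketch: a brute-force search for the integer translation that best aligns a “moving” image (say, a PET frame) to a “fixed” one (say, an MRI slice) by maximizing normalized cross-correlation. Real registration pipelines solve for full rigid-body or deformable transforms, so this is purely illustrative:

```python
import numpy as np

def coregister_shift(fixed, moving, max_shift=5):
    """Find the integer (dy, dx) translation that aligns `moving` to
    `fixed` by exhaustively maximizing normalized cross-correlation.

    A toy stand-in for rigid-body registration; assumes both inputs
    are same-sized 2D arrays with non-constant intensities.
    """
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            f = fixed - fixed.mean()
            s = shifted - shifted.mean()
            score = (f * s).sum() / (np.linalg.norm(f) * np.linalg.norm(s))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

Even this trivial version hints at why post-hoc coregistration is fragile: it assumes the subject only translated between scans, while in reality the animal may flex, breathe and shift in ways no global transform captures—the very problem simultaneous acquisition sidesteps.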
Says Simpson, this offers researchers “the ability to simultaneously acquire that data by essentially having this single combined instrument with a single overlapping PET and MRI field of view that can acquire the data very accurately in that small animal, and that registration is not happening after the fact, but is kind of happening implicitly in the combined scanner.”
And this can be especially important when neuroimaging very small animals, Simpson presses.
“Their brains are very small,” he explains. “The emphasis on higher resolution is very important in the preclinical setting, so that you can actually have sufficient resolution to look at components of the rodent brain in sufficient detail.”
Simpson further goes on to explain that animal physiology—typically much faster than that of a human—is a further complicating factor, adding to the need for simultaneous image acquisition.
“[With] sequential registration of two separate systems, you’re losing a lot of confidence in being able to correlate what you see on MRI with what you see on PET,” he suggests, adding that researchers have a “much better ability to elucidate the relationships between some of these processes by virtue of looking at them together.”
Shortening the window of the imaging time, he continues, offers many benefits for the precision of the preclinical imaging study.
“It lowers the amount of imaging time,” he says. “It lowers the window for physiological changes to happen, so you have maybe a little more control over the measurement system, which is animal physiology.”
And it lessens cumulative exposure to anesthesia, which is known to affect neuroimaging, as Kelada noted earlier.
But it’s not just about the instrument, presses Cubresa’s Nakatsuji.
“It’s about developing radiotracers that are designed to target certain things, certain neurotransmitters, and then having the ability to see it in great detail with the MRI component to study the response,” he argues.
Chen and colleagues said much the same thing.
“It is still difficult to carry out multimodality imaging due to existing problems regarding the accuracy of coregistered image, extra ionizing radiation, the extra dosage of contrast agent and the toxicity of fused contrast agents,” the authors wrote. “Therefore, it is in urgent need of developing multimodality molecular imaging agent.”
Agents of change
Traditionally, contrast agents have been used to monitor gross anatomical and physiological changes within an organism, highlighting, for example, disruptions of blood flow and the blood-brain barrier or identifying accumulations of neurofibrillary tangles and plaques in the brain.
In recent years, however, there has been a steady growth in the use of ligands, whether small molecules or peptides, to monitor change at the molecular level, such as the presence and activity of neurotransmitters or signaling itself.
As though hearing Chen’s call, Stanford University’s Sanjiv Gambhir and colleagues recently reviewed the growth of nanoparticle-based imaging agents, focusing their discussion on those already approved for clinical use.
“Nanoparticles are a new and exciting class of imaging agents that can be used for both anatomic and molecular imaging,” the authors wrote, suggesting that the small size and unique properties of nanoparticles offer:
- Intense, longitudinally stable signals;
- Different targeting strategies, whether passive via the mononuclear phagocyte system or active via ligand targeting;
- High avidity as multiple ligands can be added per particle;
- Theranostic capabilities, as a single nanoparticle can be used for both diagnostic and therapeutic purposes;
- Multimodal signal capabilities as, for example, one nanoparticle can be detected by MRI for deep tissue imaging and screening as well as optical imaging for intraoperative guidance; and
- Multiplexing, as a nanoparticle can be functionalized to detect various molecular targets simultaneously.
Speaking directly to the optical space, Heinmiller describes agents that now can be used with photoacoustic imaging because they absorb light: NIR fluorophores, for example, and gold or oligomeric nanoparticles that absorb NIR light very strongly.
“So now we can have a nanoparticle, attach some kind of targeting moiety to it, inject it systemically and see it accumulate in various disease models,” he says.
From his perspective, how these molecular contrast agents evolve will in many ways dictate how photoacoustics evolves.
“It’s sort of like the situation with fluorescent imaging,” he says. “You have to follow the optical probes, how good they are and how sensitive you are to them. I do see the two fields developing in parallel and with help from each other.”
He recounts his recent experiences at the SPIE Photonics West meeting.
“They had a whole section on PA imaging, and there was some work presented on these calcium-channel signaling molecules, where they would basically change their optical properties based on the calcium concentration,” he recalls excitedly.
“There are significant limitations there,” he recognizes. “For example, depth of penetration and the sensitivity. So right now, they’re looking at things like fruit flies and zebrafish. But they are using photoacoustic imaging to do it.”
“You can see the firing of neurons noninvasively,” he enthuses. “That is absolutely huge.”
“Is it here yet in the mouse model?” he says with a shrug. “I would say not yet. But that’s where people want to go with it.”
Although the new and advancing in-vivo imaging modalities are unlikely to replace existing methods used to characterize animal models of human disease, their increased use is likely to provide ever-expanding insights with ever-clearer sights in.
Mouse pads and modeling
Unlike so many other therapeutic categories such as cancer or cardiovascular disease—and despite extensive advancements in molecular characterization—many neurological disorders continue to sit within something of a black box, metaphorically and literally represented by the human brain. While conditions such as Alzheimer’s disease (AD) and multiple sclerosis have seemingly obvious etiologies—e.g., amyloid deposition and demyelination, respectively—the knowledge and characterization of these events have not yet led to significant improvements in disease control.
And in many areas of neuropathology, animal models have struggled to achieve the utility they enjoy in other clinical settings.
“One major advantage of using small-animal models for the monitoring of neuronal activity (instead of humans) is the possibility to combine invasive methods that measure hemodynamic changes with noninvasive imaging tools such [as] preclinical PET,” explains Olivia Kelada of PerkinElmer. “In addition, the synergies between optical and PET/CT can be used for the study of neurological disease in animals.”
“For example,” she continues, “bioluminescence can be used to track the progression of AD in transgenic mice while [18F]FDG PET could be used to quantify the reduction in glucose consumption in pathological brain of AD compared to normal brain.”
But understanding how well those animal models mimic human conditions remains challenging.
“We think it highly unlikely that animal models, especially in organisms as neurobiologically different from humans as rodents, can be expected to recapitulate all salient features of a human mental illness or even to have perfect correspondence with respect to individual behavioral symptoms,” wrote Mount Sinai School of Medicine’s Eric Nestler and Harvard University’s Steven Hyman back in 2010. “Above all, models are meant to serve as investigative tools. Thus, most important in developing, examining and reporting on animal models of disease is to be clear about the goals of the model and, in that context, to judge construct, face and predictive validity.”
Over the last seven years, the application of tools such as CRISPR/Cas9 to shrink the gap between rodent and human genetics, along with analytical tools such as in-vivo imaging systems to more fully characterize the models and better correlate findings with the human experience, has helped validate neurological models. But the work is ongoing.
Working with Lundbeck Pharmaceuticals, Taconic Biosciences recently launched a series of animal models that investigators can use to study schizophrenia, autism, attention-deficit/hyperactivity disorder (ADHD) and epilepsy.
“In those models, what they’ve actually done is deleted large regions in the mouse genome that are comparable to large deletions that we saw in humans that have strong associations to all of those neurological disorders,” explains Taconic’s Paul Volden.
But unlike monogenic diseases like cystic fibrosis, simply having the same deletion or mutation does not mean the pathophysiology will be the same.
“In humans, if you have one of those large deletions, your likelihood of being diagnosed with one of those neurological disorders increases by magnitudes, depending on which specific deletion you have and which specific disorder we’re referring to,” Volden continues. “Your likelihood of getting those disorders increases; it is not 100-percent penetrance here.”
“That’s where the idea comes into play that these aren’t models for schizophrenia, autism, epilepsy and ADHD specifically,” he continues.
A critical step for the researchers who developed the models, Volden presses, was to understand exactly what they had by first characterizing the models with a battery of behavioral phenotypic tests and electrophysiological assays.
The goal, he says, was to understand what changes occur in these models that have also been seen in humans with the same genetic deletions.
“I think that this is a fantastic approach,” he enthuses. “It’s an approach that enables other investigators to jump on board.”
“They get an understanding of how the model works, how similar it is to specific pathological conditions in humans,” Volden explains. “And now, if they have an idea, they can come in and intervene, test drugs or other kinds of therapies.”
Antti Nurmi of Charles River Laboratories concurs on the importance of characterization, describing his company’s multipronged approach.
“We have a model, for example, in AD, where certain pathological processes are developing or progressing over time as the animals age,” he explains. “With AD, it is typically the amyloid burden that we may want to monitor. That is the driving thought in the model that leads to the Alzheimer’s-like condition in these animals.”
Not only are the mouse models scrutinized thoroughly with behavioral tests, such as those monitoring changes in memory and cognition, but the company also relies heavily on imaging to identify morphological and physiological changes.
“And beyond the imaging and behavior in these models, we of course apply biomarkers,” Nurmi continues. “So, we can measure, for example, disease-specific proteins or we can look at gene expression changes as the disease progresses, or we can look at them at the same time we are providing treatment to diseased animals.”
As an interesting sidebar, Nurmi described one very unusual, but expanding method of performing cognitive tests: touch-screen operant systems.
“Basically, we are applying different types of attention, cognition and related assays that are almost the same as what is being applied in human patients,” he explains, adding that the system is like an iPad that rodents poke to perform certain tasks.
“You can basically train the rodents to do something and then you can make these tasks more complex,” he says. “And in disease, you can actually see that the diseased ones cannot learn the task or they cannot relearn to do the task any longer.”
Charles River researchers recently demonstrated the power of this platform with a mouse model of AD.
Both mutant and control mice were trained to nose-poke images projected on the iPad screen in exchange for a reward. They found that both mutant and control mice acquired visual discrimination at comparable rates.
The researchers then reversed which image triggered the reward and found that, whereas both groups made a lot of mistakes initially, the control mice were able to relearn the patterns much faster than the mutant population.
Driving the need for these and other advances is the fact that any disconnect between models and patients can be expensive for drug companies looking to score the next blockbuster.
“Numerous CNS compounds that passed [preclinical] testing failed in human trials perhaps because animals serve as models of disease mechanisms and not the disease itself,” offers Kelada. “In diseases where there is a greater mechanistic understanding, some disparities still remain between animal models used in drug discovery validation and the human diseases being targeted for treatment.”
“[For example,] transgenic models of AD can show relatively little neurodegeneration, neuroinflammation, cognitive or behavioral impairment,” she adds.
Volden presses the point.
“We’ve seen a bunch of failures,” he explains. “A few weeks ago, Merck pulled out with their BACE1 inhibitor verubecestat. Lilly pulled out. Lundbeck pulled out. I could list a few others. And quite a few of those that I listed pursued this amyloid hypothesis.”
The one company he left off his list, however, is Biogen.
“Biogen also has a monoclonal antibody that is tied to this amyloid plaque hypothesis; however, when they published their clinical results in Nature at the end of 2016, they were fantastic,” he recalls. “In fact, in the publication of their Phase 1 data, they had evidence for clinical efficacy in patients, with reduced cognitive decline [that was] dose-responsive.”
When these results were confirmed a few months later, he adds, the company’s stock prices shot up, something he attributes in part to how they used their AD models.
“In their manuscript, right alongside their clinical data in patients, they published their preclinical findings in the mouse model that they used,” Volden explains.
First, they described preclinical efficacy, in which the antibody cleared plaque from the mouse brains, he recounts, which is great because it justifies going into the clinic.
“But they also described important mechanistic insights,” he notes. “They showed that their antibody could penetrate into the brain, so cross the blood-brain barrier, which is very important.”
According to Volden, the mouse models were also critical to the development of Biogen’s clinical dosing strategy.
“If you look at the paper, the observed dose-response in the mouse model was consistent with the clinical doses that they used,” he waxes. “So this is an example where it seems as though if you’re using the models the right way, then potentially they can very much enable and enhance your clinical trials.”
As well, because many of these neurological conditions don’t tend to manifest until late in life—whether a mouse or a human—drug trials can be delayed significantly while waiting to see if your test subject is going to manifest disease.
To that end, Taconic recently started an initiative focused on aging the animals to make them ready for investigators to use off the shelf.
“There is attrition in the animals, especially the AD animals, so we can’t necessarily make those large aged cohorts available off the shelf,” cautions Volden, “but what we can do is have smaller cohorts that are ready so they can get to those pilot studies.”
Ultimately, much as with imaging technologies, a multimodal approach may be required for model systems of neurological disease.
“The only way to go forward from that is that you combine different animals to your experiments, and you use multiple models that basically address certain parts of that disease,” concludes Nurmi. “And then from those pieces, you make the full picture as good as possible.”
“In the end, of course, there is a certain amount of uncertainty in how that data actually applies to humans, but at the moment, we don’t actually have better tools that could replace the rodent models,” he adds sanguinely.