Before most people ever get to experience the relief offered by prescription medicines, they first spend time with a physician who runs a battery of tests to identify the patient's medical condition. Historically, these two processes—diagnosis and treatment—have evolved from different directions. One set of clinicians examined thousands of patients to correlate specific medical manifestations (symptoms) with specific medical conditions (diseases), while other clinicians and pharmacologists followed the response of diseased populations to different experimental drugs in the hope of finding a definitive therapeutic.
With the advent of molecular medicine, however, the distinctions between the two processes are starting to blur. Discoveries in the fields of pharmacogenomics (see FDA piece on page 1) and biomarker identification are leading healthcare workers and pharmaceutical companies to a new middle ground in drug and diagnostics development: a place where scientists are taking the opportunity to kill two birds with one stone. This is the realm of a new form of healthcare development called theranostics.
Theranostics can be defined as using diagnostic testing to identify the disease, select a treatment regimen, and monitor the response of the patient to therapy. Essentially, the same tests that researchers developed to understand disease pathology and the ameliorating effect of a new drug are being used to monitor how the drug works in patients to bring them back to health.
As with any new business model, theranostics had its early adopters: big pharmaceutical firms and biotechnology companies with drug and diagnostic kit development arms. For the most part, though, the industry seemed to lag behind. More recently, however, the concept of co-developing drugs and diagnostics has picked up steam.
At the May 2004 Bio-IT World Conference and Expo in Boston, Dr. George Poste, director of the Arizona BioDesign Institute, suggested that researchers no longer had the luxury of focusing on their own little corners of the biological universe, but rather must stand back from their experiments and look at human health from a broader perspective. As he explained it: "The elucidation of the human genome was the last step of obligate reductionist biology."
According to Poste, medicine is in a period of transition as it becomes more information-rich and data-driven. In his view, the principal technologies of molecular medicine, genomics and informatics, offer the possibility of better disease diagnosis and treatment. Likewise, they offer the potential for medicine to move from a science based on prediction to one based on prevention.
The FDA also seems to have taken notice recently of the potential impact of theranostics. In April, the agency released a preliminary discussion document entitled "Drug-Diagnostics Co-Development Concept Paper" that offers the organization's initial thoughts on the subject. The document is designed to elicit comment from the public, which the FDA will eventually use to draft guidelines on development and testing. Throughout the document, the authors make it clear that diagnostics development should begin as early as possible in the drug development process.
"Ideally, a new diagnostic intended to inform the use of a new drug will be studied in parallel with early drug development (phase 1 or 2 trials) and diagnostic development will then have led to prespecification of all key analytical and clinical validation aspects for subsequent (phase 2 and phase 3) clinical studies," the report reads. "These include the intended population and selection of diagnostic cut-off points for the biomarker intended to delineate test positives, test negatives, and, when appropriate, equivocal zones of decision making."
That the theranostics business model will be lucrative is beyond question; endless market reports have lauded the possibilities. What these reports do not discuss, however, are the possible negative consequences of diagnostics and drug co-development. While it is unthinkable that companies would consciously develop tests in the hope of boosting sales of the associated drugs, or that the tests would be predisposed to selecting candidates who might benefit from a drug rather than those who need it, there must be some acknowledgement that people who look hard enough typically find what they're looking for, whether or not it's there.
One also has to question the impact that drug-diagnostic co-development will have on healthcare systems that are already strapped for cash. When MRI and CAT scans first entered the medical market, they were niche tests, performed only on patients who stood to benefit most from high-resolution scans to identify an otherwise untraceable condition. But over the last decade or so, these high-resolution, high-cost tests have become routine and, some have argued, overprescribed, to the point where insurance companies are now balking at the added expense.
Will the same thing happen with the increasing prevalence and scope of diagnostic tests? Even as the cost of producing these kits decreases through improvements in microfabrication, analytical sensitivity, and electronic miniaturization, the sheer range of possible tests has expanded. At present, the average metropolitan hospital has at its disposal a few dozen tests it can perform on a variety of clinical samples, but waiting in the wings, in "for research purposes only" form, are several hundred new tests that will begin to flood the clinical market as they are validated internally and by the FDA.