Traditionally, disease treatment options have been largely dependent on the outcome of epidemiological studies that use large cohorts of patients to provide evidence of the effectiveness and safety of drugs. However, it is widely recognized that not all patients with similar disease characteristics benefit to a similar extent from a particular drug. In fact, about 90 percent of drugs work effectively for only 30 to 50 percent of individuals.
According to a report by Spear, et al.1, a particular cancer drug class is ineffective for about 75 percent of individuals in a patient population. Breakthroughs in the field of genomics, including the completion of the sequence of the human genome in 2001, have led to the development of many molecular techniques that allow the elucidation of differences in the genomes or transcriptomes of patients. Observed differences in drug effectiveness can increasingly be attributed to these molecular differences.
Today, these genomic tools are the basis of the field of personalized healthcare and companion diagnostics. Companion diagnostics describes the process of stratifying patients according to their likely response to a particular drug treatment. By facilitating personalized healthcare, molecular techniques have the potential to significantly improve patient care, rescue effective drugs from development failure and provide cost benefits to both healthcare systems and pharma development programs.
The mutational testing of the KRAS gene in patients with metastatic colorectal cancer is one important example of successful patient stratification. Certain mutations in the gene result in non-responsiveness of a tumor to treatment with EGFR-inhibiting drugs. Implementation of mutation testing removes the risk of treating patients with an ineffective drug and the resulting side effects, while at the same time eliminating the costs of an ineffective treatment.
The Oncology Times reported an economic analysis that estimated potential cost savings from personalized healthcare of more than $600 million on drugs alone2. In light of this potential, public funding with a focus on personalized healthcare has significantly increased. Recently released budgets for the U.S. Food and Drug Administration (FDA) and National Institutes of Health (NIH) for the fiscal year 2012 underline this trend: the NIH budget will increase to $31.38 billion, funding "Enhancing the Evidence Base for Health Care Decisions" as one of three major themes.
New technologies, particularly second-generation sequencing technologies, that enable sequencing of whole genomes at costs significantly below $10,000 per genome will further fuel the pipeline of potential molecular biomarkers that may be employed for patient stratification. Many diseases are multifactorial, and multiple markers may be needed to explain more complex disease phenotypes or to serve as marker sets to predict a certain disease outcome or guide toward a specific therapy. In the near future, these new sequencing technologies may enable the screening of hundreds of cancer genomes and thus the elucidation of underlying disease mechanisms.
However, new discoveries also substantially increase the complexity of biomarker validation. In addition to being encoded in the genotype of a patient, biomarkers are also likely to be determined by epigenetic mechanisms, for which the complexity of data mining and interpretation is currently not well understood.
While molecular screening methods have supported the discovery of potential new biomarkers, these technologies have often failed to benefit drug development programs. This failure is due either to biological variation of the marker itself or to variability in the methods and technologies employed in the validation of the biomarkers.
Standardization of experiment workflows is of key importance to eliminate as many variables as possible from such complex, data-driven workflows. A biomarker that demonstrates clinical utility must rely on sample and assay technologies that are robust and ultimately compatible with diagnostic laboratory operations. Standardization starts with the collection of the biological sample and stabilization of the contained biomolecules.
In one example, Mueller, et al., demonstrated the importance of immediate RNA stabilization of blood samples in a research study monitoring leukemia therapy and evaluating the bcr-abl to abl transcript ratio3. Standardized collection, transport and storage of biological specimens until later processing, along with optimized, adapted nucleic acid isolation methods for further downstream assays, can contribute significantly to reducing data variability. Full automation of such processes avoids user interaction errors and variation introduced through manual handling steps in a workflow that can vary significantly between different operators. The need for highly standardized experimental workflows is also leading to an increasing number of collaborative projects among public and private researchers.
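To make the transcript-ratio measurement concrete: in relative quantification by real-time PCR, the abundance of a target transcript (here bcr-abl) relative to a control transcript (abl) is commonly derived from the difference in threshold cycles (Ct) under an assumed amplification efficiency. The sketch below is illustrative only; the function name, Ct values and the simple 2^-ΔCt model are assumptions for demonstration, not the specific method used by Mueller, et al.

```python
def transcript_ratio(ct_target: float, ct_control: float,
                     efficiency: float = 2.0) -> float:
    """Relative abundance of a target vs. a control transcript from qPCR
    threshold cycles, using the standard delta-Ct model.

    Assumes both amplicons amplify with the same efficiency (2.0 means a
    perfect doubling per cycle). Each extra cycle needed by the target
    corresponds to a factor-of-`efficiency` lower starting abundance.
    """
    return efficiency ** (ct_control - ct_target)

# Hypothetical example: bcr-abl crosses threshold 6 cycles after abl,
# so its transcript level is 2^-6 = 1/64 of the control (~1.56 percent).
ratio = transcript_ratio(ct_target=28.0, ct_control=22.0)
print(f"bcr-abl/abl ratio: {ratio:.6f} ({ratio * 100:.2f}%)")
```

In practice, clinical assays calibrate such ratios against standard curves and report them on an internationally standardized scale, which is exactly why the pre-analytical stabilization discussed above matters: degraded RNA shifts Ct values and distorts the ratio.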
Results from high-throughput biomolecular analysis and related applications in second-generation sequencing need to be further validated by independent assay technologies. State-of-the-art technologies comprise real-time PCR and sequencing-based approaches such as Sanger sequencing or pyrosequencing. Potapova, et al.4, have recently described how pyrosequencing can be efficiently utilized both in high-throughput sequencing and for further validation in a lower-throughput format. In contrast to Sanger sequencing, real-time sequencing based on the release of pyrophosphate during the course of the sequencing reaction provides the means for quantitative measurements, combining the benefit of sensitive mutation detection with accurate, high-resolution sequence information. Pyrosequencing also allows analysis of the epigenetic methylation of a potential biomarker.
Real-time PCR, used in clinics on a daily basis, is another validation technology that has undergone an evolutionary development leading to further standardization of assays and elimination of variability from data interpretation. Multiplex PCR also contributes to the elimination of variability, as multiple markers can be analyzed with high performance in a single reaction, which has been demonstrated by Ishii, et al., among many others5.
Other tools, such as microarrays or gene pathway PCR arrays, aid in the selection and validation of potential biomarker candidates by allowing scientists to investigate gene expression or somatic mutations of tens to hundreds of disease-related genes in one experiment. As an example, RT2 Profiler PCR Arrays provide important means of quality control to monitor the efficiency and performance of the underlying PCR, essential information for the validation of biomarkers.
Second-generation sequencing has already proved successful in explaining previously uncharacterized genetic disorders6, and many new technologies are in development. Whether these new technologies for high-content biomolecular screening can deliver on their promise to elucidate complex disease mechanisms remains to be proven.
Undoubtedly, the final answer will strongly rely on the use of standardized pre-analytical sample processes and robust assay technologies for validation of newly discovered biomarker candidates. It is critical to identify biomarkers that possess clinical value early in drug development, and to move from the analysis of genetic differences to the implementation of personalized healthcare in clinical practice.
Dr. Dirk Löffert is vice president of sample and assay platform technologies at QIAGEN, where his responsibilities include technology and product development for sample preparation and assay solutions for the life sciences. Löffert received his Ph.D. in molecular biology and immunology from the Institute for Genetics at the University of Cologne in Germany.
1. Spear, Brian B., et al.: Trends in Molecular Medicine, May 2001.
2. Tuma, Rabiya: Oncology Times, March 2009.