The life-sciences community was recently shaken when a new study, “Reanalyses of Randomized Clinical Trial Data,” was published in the Journal of the American Medical Association, calling the findings of six decades of clinical trials into question. The study reported that when trial data were reanalyzed, they did not always support the original researchers’ conclusions, with as many as 13 trials returning different results.1
The study raises several questions: What happened? Are the issues confined to these 13 trials, or is this a wider problem? And perhaps most importantly, how can we reduce these errors going forward?
Data accuracy landscape
Data inaccuracies unfortunately extend far beyond the studies examined in the JAMA report; the results of multiple studies have been called into question in recent years. The clinical trials examined in this particular JAMA study appeared to have conflicting results for a number of reasons: in one case, a hospital changed its protocols midway through the trial, and in others the original raw data may have been flawed.2 In other drug trials, flawed data reporting has been named the culprit.3 It may be tempting to simplify the issue and say that researchers simply lack the tools and infrastructure needed to share, analyze and report data, and that what they need is access to more sophisticated analytical tools and a more advanced “big data” approach to clinical trials. The challenge, however, goes much deeper than that. The root of the issue lies in the data collection itself, and in the lack of standardization therein.
The lack of industry standardization is compounded by two main factors:
- Volume of data: Data-collection errors are preventable in principle, but the amount of data passing between hands in a clinical trial is astronomical and growing every year; without proper standardization, errors become all but inevitable.
- Multiple site testing: Researchers often spend years developing a hypothesis that is ultimately tested in a clinical trial, and during that process, as things now stand, it is difficult to ensure that methods and protocols remain unchanged or that data reporting is 100-percent accurate. With multiple labs gathering and analyzing data, and often five or six CROs participating, protocols may be inadvertently altered at different sites.
Standardization is a widely recognized issue, and the industry has attempted to make improvements in this area. For example, in order to combat the lack of standardized, high-quality biospecimens—which is seen as a significant roadblock to cancer research—the National Cancer Institute (NCI) developed the NCI Best Practices for Biospecimen Resources,4 which outlines operational, technical, ethical, legal and policy best practices. These are certainly helpful for the cancer research community, but there is not yet a monitoring mechanism in place to ensure researchers, lab technicians or others are following them. Moreover, these guidelines on their own are not enough to completely eliminate standardization issues.
Challenges with data collection
In the Hospital Setting
Data collection usually starts in the hospital, where tissue samples are collected and then sent to the lab. Challenges with standardization start as early as this first stage—procedures for how tissue samples are collected and fixed may differ from hospital to hospital, and patient notes are often still handwritten. With pharmaceutical companies relying on data collected amidst this array of variables, it’s easy to see how mistakes can be made. Moreover, by the time an error is caught, the material may have already been sent out to multiple organizations, perpetuating the mistake. Pharmaceutical companies spend significant resources attempting to correct these errors, but it is not always possible to trace them back to the source.
In the Lab Setting
Today’s labs carry an immense amount of responsibility within the clinical trial process, but unfortunately, lab employees are not always given adequate training, education and an appreciation of why every single step they take matters. For example, a lab technician conducting tumor scraping for molecular testing might assume that accidentally scraping non-tumor tissue into the tube won’t make a difference, but it will. That technician is likely expected to have a broad skill set and execute multiple tasks at once, yet has not been given in-depth training or details about the projects being worked on. If this happens with one person at each lab participating in a clinical trial, the labs’ results will ultimately differ, and it may be impossible to determine exactly where the problems originated.
Impacts on the industry
The lack of standardization in the collection process, and the data inaccuracies that follow from it, place a great deal of stress on the industry: because it is often difficult to determine which data can be trusted, inaccuracies affect the entire pipeline from beginning to end. Ultimately, data inaccuracies raise costs, hinder drug development and shake consumer confidence in clinical trials and even in approved drugs.
In addition, data accuracy issues hinder the development of companion diagnostics, which are a significant part of today’s drug discovery and development process and of the move toward personalized medicine in cancer treatment.
The FDA requires a companion diagnostic for a drug when the drug works on a specific genetic or biological target that is present in some patients with a certain cancer or disease.5 The lack of standardization and data-quality issues make developing companion diagnostics extremely challenging, since the tests must survive a long list of variables on their way from the hospital to the lab.
The future of companion diagnostics relies on quantitative results; for those results to be statistically meaningful, research labs must handle every sample and every measurement accurately and consistently. If we as an industry cannot achieve data accuracy, we will not achieve true personalized medicine: with too many variables to account for, we cannot stratify patients properly and ultimately cannot provide them with the most appropriate treatments.
Developing a solution
With personalized medicine within reach and the demand for effective companion diagnostics increasing, the need for data accuracy has never been greater.
While the NCI Best Practices mentioned earlier are a good start, the industry as a whole needs to focus on standardizing processes from beginning to end, particularly in the following areas:
- Tissue procurement: Standardizing how tissue samples are collected and handled is crucial to data accuracy.
- Lab investment: Investing more in lab education, and standardizing how lab employees are trained, is vital. By better educating clinical sites on lab testing protocols, and by ensuring they understand how following those protocols affects the outcome of testing, we can achieve more consistent results.
- Data management: Developing a standard technological platform to ensure the accuracy of data points is also an important piece of the process. Many CROs still rely on paper, and those that do have automated systems are often bound by strictly defined rules on what can be entered, leaving them unable to adapt when protocols change; both scenarios can result in data errors. It is also important to develop a better means of aggregating data in one location. This is being done to some extent in clinical trials, but the associated lab work is often not integrated, and there is not yet a robust place where all of this data can be stored and efficiently mined for results. (A brief illustrative sketch of the kind of shared validation step such a platform might include follows this list.)
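To make the data-management point concrete, the following is a minimal, purely illustrative sketch in Python of what a shared validation step might look like: each site’s sample records are checked against one agreed-upon schema before they are pooled, so entry errors are flagged at the source rather than discovered downstream. The field names, units and range rules here are hypothetical examples, not a description of any existing platform or vendor system.

```python
# Illustrative sketch only: a hypothetical shared schema that every site's
# records must pass before being pooled into one dataset. Field names,
# units and rules are invented for this example.
from dataclasses import dataclass

REQUIRED_FIELDS = {"sample_id", "site_id", "collection_date", "fixation_time_hr"}

@dataclass
class ValidationResult:
    record: dict
    errors: list

def validate_record(record: dict) -> ValidationResult:
    """Check one sample record against the shared schema."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    # Example range check: flag implausible fixation times instead of
    # silently rejecting the record, so the site can correct and resubmit.
    fixation = record.get("fixation_time_hr")
    if fixation is not None and not (6 <= fixation <= 72):
        errors.append(f"fixation_time_hr out of expected range: {fixation}")
    return ValidationResult(record=record, errors=errors)

def aggregate(records: list) -> tuple:
    """Pool records from all sites, separating clean rows from flagged ones."""
    clean, flagged = [], []
    for rec in records:
        result = validate_record(rec)
        (clean if not result.errors else flagged).append(result)
    return clean, flagged

if __name__ == "__main__":
    site_a = {"sample_id": "A-001", "site_id": "A",
              "collection_date": "2014-09-01", "fixation_time_hr": 24}
    site_b = {"sample_id": "B-017", "site_id": "B",
              "collection_date": "2014-09-02"}  # missing fixation time
    clean, flagged = aggregate([site_a, site_b])
    print(len(clean), "clean records;", len(flagged), "flagged for follow-up")
```

The point of the sketch is the division of labor: the schema is defined once and shared across sites, while individual labs keep some flexibility in how they capture data, as long as what they submit can be checked automatically before it enters the pooled dataset.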
Solutions in all of these areas are in progress, and better data accuracy is on the horizon. In the meantime, there are steps that can be taken throughout the entire pipeline to improve data accuracy more quickly. Looking across the spectrum: medical schools can dedicate more time to teaching about companion diagnostics, molecular testing and the importance of standardization, which will help bring these concepts into the hospital; hospitals can create standardized guidelines for themselves and address these challenges in employee training; and once trained effectively, lab technicians can follow protocols and standards more closely.
An industry-wide commitment to standardization is key to ensuring data quality and, ultimately, the success of clinical trials and drug development.
Sharon Moulis, Ph.D., is the director of tissue diagnostics alliances at Definiens. Her background includes development of quantitative immunohistochemistry methods, assay design, companion diagnostic strategy and implementation, and clinical trial sample management for companion diagnostics.
References
1 Ebrahim S, et al. “Reanalyses of Randomized Clinical Trial Data.” Journal of the American Medical Association, Vol. 312, No. 10 (September 10, 2014).
2 Ebrahim S, et al. “Reanalyses of Randomized Clinical Trial Data.” Journal of the American Medical Association, Vol. 312, No. 10 (September 10, 2014).
3 Jefferson T, et al. Neuraminidase Inhibitors for Preventing and Treating Influenza in Healthy Adults and Children (Review). Wiley, 2014.
4 National Cancer Institute. NCI Best Practices for Biospecimen Resources (2011). Retrieved from http://biospecimens.cancer.gov/bestpractices/2011-NCIBestPractices.pdf
5 U.S. Food and Drug Administration. “Personalized Medicine and Companion Diagnostics Go Hand-in-Hand.” Retrieved September 26, 2014, from http://www.fda.gov/ForConsumers/ConsumerUpdates/ucm407328.htm