IO Informatics forms group to refine methods for translational research
EMERYVILLE, Calif.—IO Informatics Inc. recently put together a working group of senior life science researchers from leading industry and research institutes to focus on semantic applications in translational research. What that means in practical terms is that the group has a mandate to extend IO Informatics' technology and user workflow in ways that will refine existing methods and deliver new ones for improved biomarker qualification and hypothesis generation for the translational research community.
"As applications based on emerging semantic technologies move from the theoretical to the practical, there is a great opportunity to improve methods for data association across research and knowledge sources. This will facilitate better understanding of experimental findings and their underlying mechanisms, to improve hypothesis generation and qualification," says Robert Stanley, president and CEO of IO Informatics.
In moving toward this goal, the group will focus on refining IO's Sentient software technology and use cases for biomarker identification and hypothesis generation in translational research.
The focus on a semantic data model is important to the pharmaceutical industry, Stanley says, because it allows researchers to describe how data from proteomics, genomics, metabolomics and other areas fit together to define or describe how a biological system may be experiencing certain conditions, such as toxicity from a particular mode of treatment.
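To make the idea concrete, here is a minimal sketch of how a semantic data model links results across omics platforms. All identifiers and relationships below are hypothetical illustrations, not IO Informatics' Sentient software or its API: facts are stored as subject-predicate-object triples, so a genomics result, a proteomics result and a pathology finding connect through shared entities.

```python
# Toy semantic data model (hypothetical identifiers, for illustration only):
# every fact is a (subject, predicate, object) triple, so results from
# different omics platforms link up through shared entities.
triples = [
    ("GeneX",           "encodes",     "ProteinY"),        # genomics
    ("ProteinY",        "elevated_in", "liver_sample_12"), # proteomics
    ("liver_sample_12", "shows",       "hepatotoxicity"),  # pathology finding
]

def objects(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Walk the graph from a gene to a toxicity observation:
for protein in objects("GeneX", "encodes"):
    for sample in objects(protein, "elevated_in"):
        for finding in objects(sample, "shows"):
            print(protein, "in", sample, "->", finding)
# prints: ProteinY in liver_sample_12 -> hepatotoxicity
```

In practice such graphs are expressed in standards like RDF and queried with SPARQL, but the principle is the same: new facts extend the graph without changing its schema.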
"We have interest in Sentient's ability to allow biological researchers—not just IT experts and bioinformatics folks—to connect their own experimental data and experimental methods with either their own or with globally referenced canonical resources," he explains.
Other software on the market that uses the same data model for this kind of work is usually hard-coded to specific data sources, Stanley notes, "but where we excel is in allowing people to network different data sources, and that is an important aspect of the working group's efforts."
Current working group members include Pat Hurban, director of technology development, and Alan Higgins, senior director of translational research, at Cogenics Inc., a division of Clinical Data; Jonas Almeida, professor of bioinformatics in the Division of Quantitative Sciences at MD Anderson Cancer Center; and Ted Slater, senior manager of the Indications and Pathways Center of Emphasis at Pfizer.
As Almeida notes, "This application we are working to create is the conclusion of a bridge. We have the beginning and end of the bridge well constructed. At one end, we have data from many nonclinical sources and then at the other end, we have data from many clinical sources and bringing them together is hard. The domain experts—and not just bioinformatics experts—need to be able to bring together heterogeneous sources of data and view the results in a meaningful way. We are trying to make an application that can finally give us the middle portion of the bridge."
Biomarkers are important, Almeida notes, because they give researchers a way to identify populations and subpopulations and find out who can be treated best with any given treatment.
"So many drugs are very promising or work well for specific pathways and such, but they don't work for everyone in most cases and they don't work the same for everyone, so identifying subgroups is important," he says.
The ultimate application or applications that come out of the working group's efforts could be used as broad tools for data mining and visualization, Stanley notes, but the group will remain focused on biomarker discovery and validation for now, with an eye toward enhancing translational research.
"Having a powerful translational medicine research application is important because it hits both ends of the chain," Stanley says. "You get upstream results by finding biomarkers for things like toxicity, and then you get downstream results that allow you to identify and compare different patient data so that you can personalize treatments."