Beyond Genomics: Translational Medicine Goes Data Mining

We are fortunate to live in a time of growing life expectancy across most world populations, but this has also resulted in an increased prevalence of chronic diseases. Chronic diseases account for more than 70 percent of healthcare spending in the developed world, and their treatments are typically costly, prolonged, and, in many cases, largely ineffective.

This is further compounded by the extremely low probability - less than 10 percent - that a drug entering Phase 1 clinical trials will ultimately reach approval. This statistic is disheartening not only for the scientists, physicians, and patients involved, but also for the drug development industry at large. In addition, the average cost of the clinical trials required before approval has reached $30-40 million across all therapeutic areas in the U.S.

A Drug Development Strategy Focused on Efficacy

This unsustainable situation has driven the conviction that this money could be better spent pursuing more effective treatments. Over the last couple of decades, the drug development industry has been exploring strategies to select more efficacious drugs while controlling the ever-spiraling costs of their development.

This has led to the evolution of the multifaceted discipline of Translational Medicine (TM), which has been called “bidirectional” since it “seeks to coordinate the use of new knowledge in clinical practice and to incorporate clinical observations and questions into scientific hypotheses in the laboratory.” The beauty of the translational approach is that it applies research findings from genes, proteins, cells, tissues, organs, and animals to clinical research in patient populations, with an explicit aim of predicting outcomes in specific patients. Essentially, it promotes a “bench-to-bedside” approach, where basic research is used to develop new therapeutic strategies that are tested clinically.

From Bench-to-Bedside…and Back Again

However, translational medicine also works “bedside-to-bench,” since learning from clinical observations provides valuable feedback on how new treatments are applied and where they can be improved.

Recent technological advances have endowed us with the ability to test this “Bench-to-Bedside-to-Bench” approach. Today we can investigate the molecular signature of patients to identify biomarkers (or surrogate clinical endpoints) that then allow us to stratify the patient population and administer a drug only to those patients who have a realistic chance of responding to it.
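In code, that stratification step can be sketched as follows. This is a minimal, hypothetical illustration: the patient records and the `HER2_status` biomarker field are invented for the example, not drawn from any real trial data model.

```python
# Hypothetical patient records; the HER2_status biomarker field is invented
# for illustration only.
patients = [
    {"id": "P001", "HER2_status": "positive"},
    {"id": "P002", "HER2_status": "negative"},
    {"id": "P003", "HER2_status": "positive"},
]

def stratify(records, biomarker, value):
    """Return only the patients whose biomarker matches the target value."""
    return [p for p in records if p.get(biomarker) == value]

# Only biomarker-positive patients would be considered for the targeted therapy.
eligible = stratify(patients, "HER2_status", "positive")
print([p["id"] for p in eligible])  # → ['P001', 'P003']
```

In practice this filtering would run against a clinical data system rather than an in-memory list, but the principle is the same: the biomarker partitions the population into likely responders and non-responders before the drug is administered.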

Herceptin, arguably ‘the first personalized treatment for cancer,’ is a good example of the benefits of a translational approach. A 30-year success story in the making, Herceptin was discovered by scientists who used genomics technologies to identify the over-expression of HER2, which leads to a particularly aggressive form of breast cancer. Adding Herceptin to chemotherapy has been shown to slow the progression of HER2-positive metastatic breast cancer.

Human Genome Project – Translational’s Driving Force

The world's largest collaborative biological project, the ‘Human Genome Project,’ successfully mapped 95% of the human genome. The sequencing of the human genome holds benefits for a wide range of fields and is widely seen as the driving force behind Translational Medicine applications. We can now look forward to a time when the focus will shift to a more ‘individualized approach to medicine,’ perhaps even toward preventing disease rather than merely treating its symptoms.

The viewpoint of the Personalized Medicine Coalition, that “physicians [will] combine their knowledge and judgment with a network of linked databases that help them interpret and act upon a patient’s genomic information,” further shows faith in this unification of art and science in medicine.

The Human Genome Project may have spearheaded technological advances in the genomics and bioinformatics fields, but many challenges remain for TM to cross over into clinical utility and become mainstream. 

Beyond Genomics Knowledge

For one, Translational Medicine can no longer rely solely on the ever-present bounty of genomics knowledge (though it will keep us busy for quite some time). All biologists know that genetics doesn’t work in isolation. Yes, it helps to identify biomarkers and particular molecular signatures, but the integration of knowledge from different biological silos is the next big challenge – and opportunity. That challenge (and opportunity) is data.

The goal is to effectively mine data brought together from live experiments, external ‘open access’ sources, legacy and real-world data portals, clinical and preclinical data systems, and more.
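As a rough illustration of that integration challenge, the sketch below merges per-patient records from two hypothetical silos (clinical outcomes and genomic assays) on a shared patient identifier. All source names, field names, and values are invented for the example; real integration would also have to reconcile identifiers, formats, and vocabularies across systems.

```python
# Hypothetical records from two separate data silos, keyed by patient ID.
clinical = {
    "P001": {"trial_arm": "treatment", "response": "partial"},
    "P002": {"trial_arm": "control", "response": "none"},
}
genomic = {
    "P001": {"HER2_status": "positive"},
    "P002": {"HER2_status": "negative"},
}

def integrate(*sources):
    """Merge per-patient records from multiple sources into one unified view."""
    merged = {}
    for source in sources:
        for patient_id, record in source.items():
            # Create an entry for the patient if needed, then fold in the
            # fields contributed by this source.
            merged.setdefault(patient_id, {}).update(record)
    return merged

unified = integrate(clinical, genomic)
print(unified["P001"])
```

The unified record for each patient combines treatment, outcome, and biomarker fields, which is the shape of data a translational researcher would want to mine for response patterns.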

PerkinElmer Informatics will examine the challenges associated with the data integration needs of translational researchers in order to deliver on the promise of Translational Medicine.

Download this article, which features insights into how translational medicine is simultaneously reducing expenses and improving patient health.