Tackling Data Challenges in Translational Medicine

When scientists gather, it doesn’t take long for the conversation to turn to the difficulties of corralling and analyzing data. In fact, a recent PerkinElmer survey found that more than half of life science researchers cite a lack of data transparency and collaborative methods as the key obstacle to precision medicine. New data-generating laboratory technologies are driving an urgent need to better manage and share an unwieldy influx of data.

Researchers across life science disciplines, including translational medicine, are calling for more practical ways to access data and collaborate around it.

Data Democratization

The goal is to democratize data: give scientists scalable, IT-provisioned tools to access and analyze relevant data themselves, rather than requiring a separate request for each dataset or analysis. Researchers are no longer satisfied waiting for IT or bioinformaticians to run reports; they want applications they can actually use, at the bench, to speed their work.

Right now, the National Center for Advancing Translational Sciences says it takes, on average, 14 years and $2 billion to bring a drug to market – and perhaps another decade before it’s available to all patients who need it.

Tackling the data challenge could go a long way toward shortening that timeline and reducing that cost.

Tough Questions On Data

Some of the most pressing questions around data management stem from the most basic need: bringing useful, appropriate data together, and making it searchable and sharable, to solve problems. 

Translational researchers have a wide variety of medically relevant data sources available to them, from omics to adverse events to electronic health records and more. Tapping into the right data at the right time can help these researchers:

Determine new uses for approved or partially developed drugs

Analyze trends and predict potential direction for further research

Translate discoveries from basic research into useful clinical implementations

Analyze clinical data and outcomes to guide new discoveries and treatments

Here are some of the lingering questions that need answers in order to truly democratize data:

Question 1: How to Bring Data Together?

Most organizations still struggle to find all the data that might be helpful to them, or that data is captured in silos that are difficult to penetrate, let alone aggregate. Oftentimes, the people who need the data aren’t even aware it exists. Figuring out the best way to usefully aggregate data remains a challenge, and further complexity comes from governing who is permitted to access specific datasets and how that access is controlled.
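As a minimal sketch of what this aggregation step looks like in practice, the snippet below stacks records from a few hypothetical silos into one searchable table while keeping track of where each record came from. The file names, column layouts, and the simple source-based access filter are all illustrative assumptions, not any specific product's behavior.

```python
import pandas as pd

# Hypothetical silos: an omics export, an adverse-event log, and an EHR extract.
# File paths and column names are invented for illustration.
sources = {
    "omics": pd.read_csv("omics_results.csv"),            # e.g., patient_id, gene, expression
    "adverse_events": pd.read_csv("adverse_events.csv"),  # e.g., patient_id, event, severity
    "ehr": pd.read_csv("ehr_extract.csv"),                 # e.g., patient_id, diagnosis, visit_date
}

# Tag each record with its origin so provenance survives aggregation,
# then stack everything into one long, searchable table keyed on patient_id.
frames = [df.assign(source=name) for name, df in sources.items()]
aggregated = pd.concat(frames, ignore_index=True, sort=False)

# A placeholder for real entitlement checks: filter by source before
# handing data to a given user group.
allowed_sources = {"omics", "ehr"}
visible = aggregated[aggregated["source"].isin(allowed_sources)]
print(visible.head())
```

Even this toy version surfaces the two hard parts the paragraph describes: knowing which sources exist in the first place, and deciding who may see which slices once they are combined.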

Question 2: How to Compare Data?

Once data is aggregated, researchers must be able to determine whether they are actually comparing appropriate or related data sets. Difficulties often stem from non-standard ontologies that make it hard to map different data sources to one another. And if two items look similar but are in fact very different, how can the scientist tell?
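A small sketch of term mapping illustrates the ontology problem. The dictionary and terms below are invented for illustration; a real pipeline would rely on curated vocabularies such as MedDRA or SNOMED CT.

```python
# Map site-specific terms onto a shared vocabulary before comparing datasets.
ONTOLOGY_MAP = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "high blood pressure": "hypertension",
    "htn": "hypertension",
}

def normalize_term(term: str) -> str:
    """Return the canonical ontology label for a raw term, or the term itself."""
    cleaned = term.strip().lower()
    return ONTOLOGY_MAP.get(cleaned, cleaned)

site_a = ["Heart attack", "HTN"]
site_b = ["myocardial infarction", "High blood pressure"]

# Only after mapping do the two sites become directly comparable.
print({normalize_term(t) for t in site_a} == {normalize_term(t) for t in site_b})  # True
```

The reverse pitfall is just as real: two fields with the same label can mean different things at different sites, which is why mappings need curation rather than string matching alone.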

Question 3: When to Normalize Data?

Aggregating and integrating data inherently changes it; it is easy to alter the data without meaning to. So some favor normalizing data as early as possible in the integration process, arguing it’s best to align the data well before analysis. Others say normalizing all data – some of which you may never use – is too time-consuming and expensive. They prefer to normalize later, so the data stays closer to raw when analysis begins. That can make analysis more effective because more context is preserved, but the data is harder to share.
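The trade-off can be made concrete with a toy example. Below, the same z-score standardization is applied either at integration time ("normalize early") or only within a specific analysis ("normalize late"). The batch labels and assay values are made up for illustration.

```python
import pandas as pd

# Illustrative assay measurements from two hypothetical batches on different scales.
raw = pd.DataFrame({
    "batch": ["A", "A", "A", "B", "B", "B"],
    "value": [10.0, 12.0, 14.0, 100.0, 120.0, 140.0],
})

def zscore(series: pd.Series) -> pd.Series:
    """Standard z-score: (x - mean) / std."""
    return (series - series.mean()) / series.std()

# Normalize early: align batches at integration time; downstream users get
# comparable numbers but the original scale is gone.
early = raw.assign(value=raw.groupby("batch")["value"].transform(zscore))

# Normalize late: keep raw values through aggregation and standardize only
# inside a particular analysis, preserving context (batch scale) until then.
late_view = raw.copy()
late_view["value_for_analysis"] = zscore(late_view["value"])

print(early)
print(late_view)
```

Neither choice is free: the early version is easier to share and reuse, while the late version keeps the raw context an analyst may need to interpret results.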

Question 4: Who Analyzes Data?

In most organizations today, a small subset of the research staff – data scientists and bioinformaticians – performs data analyses. This creates a bottleneck. Until most bench researchers have the tools and skills to analyze the volumes of data they encounter, it will be difficult to scale analysis capabilities. So far, the stopgap has been to hire more data scientists.

Delivering Answers

To help researchers and scientists analyze data themselves, more quickly and efficiently, we’re building scientifically relevant applications in an intuitive, simple, and repeatable framework: the PerkinElmer Signals™ platform. We’re delivering workflow-based applications on Signals for uses from Translational to Medical Review to Screening and more.

Powered by TIBCO Spotfire®, the Signals platform makes everyday data mining easier and more intuitive for researchers. It gives scientists a single platform that combines best-in-class technology for big data storage, search, semantic knowledge, and analytics in a solution they can understand and use, leading to faster insights and greater collaboration.

PerkinElmer Signals is our answer to the four basic, yet pressing questions above. With it, we’re providing an out-of-the-box cloud solution that can handle the wide array of experimental and clinical data available to translational scientists. Without IT intervention, they can integrate, search, retrieve, and analyze the trove of relevant data from across internal and external sources.

If you’ve got questions about Precision Medicine data management tools and ROI, download our white paper.