Shooting for the Moon and Beyond with Translational Research



Integrating experimental and clinical data from proprietary, private and public databases to accelerate research discoveries should be commonplace. During his 2016 State of the Union Address, President Obama took a crucial first step in making this a reality in the battle against cancer, calling on Vice President Biden to lead a national Moonshot initiative to accelerate research in this field. In June, the vice president took action, announcing a first-of-its-kind, open-access cancer database. Dubbed the Genomic Data Commons (GDC), the new database is intended to provide the cancer research community with a unified repository to share information across cancer genomic studies in support of precision medicine.

 

What is the GDC?

The GDC contains the raw genomic and clinical data for 12,000 patients. In the future, Vice President Biden envisions that the open database will include detailed analyses from the research community of the molecular makeup of cancers, along with information on which treatments were used for specific types of cancer and how patients responded to those treatments. In addition, the GDC will consolidate the National Cancer Institute’s diverse datasets of genomic sequences and analyses of tumors. The main objective is to consolidate the data for open consumption, speeding research and eliminating the time researchers spend repeating projects that others have already conducted.
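To make "open consumption" concrete, here is a minimal sketch of pulling a few project summaries from the GDC programmatically. The endpoint path, field names and response layout shown are assumptions to verify against the GDC API documentation.

```python
# Minimal sketch: list a few GDC projects via the public REST API.
# Endpoint, field names and the data -> hits response layout are assumptions
# to check against the GDC API documentation.
import requests

GDC_PROJECTS = "https://api.gdc.cancer.gov/projects"    # assumed public endpoint

params = {
    "fields": "project_id,name,primary_site",           # assumed field names
    "format": "json",
    "size": "5",                                         # first five projects only
}

response = requests.get(GDC_PROJECTS, params=params, timeout=30)
response.raise_for_status()

for hit in response.json()["data"]["hits"]:
    print(hit.get("project_id"), "-", hit.get("name"))
```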

 

Historically, public data sets like The Cancer Genome Atlas (TCGA) and the 1000 Genomes Project have helped the scientific community advance its understanding of complex diseases like cancer. In our view, the GDC is the latest step in advancing the scientific community’s knowledge in this area, and a positive step in the right direction.

 

The Challenges

That said, there can be challenges with systems like GDC. These include:

•   Availability: Researchers must be willing and able to share their data. Some companies may not want to share proprietary information about their genomic trials.

•   Consent and legal issues: In many cases, patient consent may not allow for the publication of this data.

•   Scope: While genomics is an important piece of translational medicine, there are many other profiling technologies not supported by the GDC.

•   Access controls: The GDC is designed to be an open platform and has little focus on restrictions. While that makes sense for sharing public data, access controls are an important and difficult part of a commercial solution dealing with clinical data.

 

Solutions

We also see a future in which collaboration with complementary systems offered by private companies helps overcome the challenges listed above. For example, our cloud-based data management, aggregation and analysis platform for pharmaceutical researchers, Signals for Translational, helps integrate experimental and clinical research data from many sources and assay platforms. This includes not only genomic data, but also proteomics, metabolomics and imaging. A platform like Signals can integrate proprietary in-house data that researchers may not be willing to share with the public data available from the GDC.

 

Beyond Moonshot

Seamlessly integrating data visualization, exploration and analytics capabilities with data management enables a highly interactive, hypothesis-driven analysis workflow. With these capabilities, researchers can work orders of magnitude more efficiently than with the traditional workflow. We must be able to integrate experimental and clinical data from existing proprietary, private and public databases in order to support greater collaboration and help scientists increase the speed and efficiency of developing targeted drugs - not only for cancer but across all therapeutic areas of translational research.

How to Conquer Data Overload


Today’s typical pharma or biotech company is drowning in data. Your company most likely has multiple systems producing volumes and volumes of information about your research, your customers, your markets, operations and opportunities.

Collating and analyzing the data is fast becoming a distant dream for some companies - with basic data collection & storage an increasingly time-consuming challenge. 

It’s a remarkable time we live in - one in which data is easier to generate and capture than ever before. But, like the wider world, it can also be overwhelming.

With data warehouses, data lakes, operational data systems, research data systems, clinical data systems, as well as ERPs, CRMs, inbound and outbound email, social media & blog posts and archived data, you need to harness this data or watch your competition leap ahead. 

Unifying Data to Deliver Business Intelligence (BI)

Analysts and data scientists are spending vast amounts of time to find and assemble information. They are manually performing detective work and falling behind. Business leaders need to quickly access all the relevant data in one spot to make better, more informed decisions, faster.

PerkinElmer Signals™ Perspectives: Like Online Shopping for Data

This was the challenge that drove us to develop PerkinElmer Signals™ Perspectives. We wanted to provide pharma and biotech companies with a platform that unifies disparate data sources, without the limitations of typical BI and analytics tools. PerkinElmer surfaces all available data for the BI tools, so business insights are grounded in a complete view of the enterprise.

Our objective is to provide you with a 360° view of your data. We believe that accelerating data discovery & integration - giving you instant visibility into all of your information - allows you to confidently seize opportunities and mitigate risks.

Conquer Data Overload with Signals™ Perspectives 

Watch our latest video on Signals™ Perspectives to learn how it analyzes and tags data to create a universal index which everyone in the organization can understand and use. 
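To picture what a "universal index" means, here is a toy sketch - not how Signals™ Perspectives is built - of tagging records from different silos and building one searchable index over all of them.

```python
# Toy "universal index": collect records from different systems, then build one
# inverted index so a single search term finds matches across every silo.
# Generic illustration only; not the Signals Perspectives implementation.
from collections import defaultdict

records = [
    {"id": "ELN-001", "source": "ELN",  "text": "kinase assay results for compound X17"},
    {"id": "CRM-042", "source": "CRM",  "text": "customer inquiry about assay kits"},
    {"id": "DOC-007", "source": "docs", "text": "stability report for compound X17 batch 3"},
]

index = defaultdict(set)                  # term -> ids of records containing it
for rec in records:
    for term in rec["text"].lower().split():
        index[term].add(rec["id"])

def search(term):
    """Return every record id, regardless of source system, mentioning the term."""
    return sorted(index.get(term.lower(), set()))

print(search("x17"))     # ['DOC-007', 'ELN-001'] - hits from two different silos
print(search("assay"))   # ['CRM-042', 'ELN-001']
```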

Related Webinar: Unify Your Data Sources With The Universal Adaptor For TIBCO Spotfire® 

Resources: Visit the PerkinElmer Signals™ Perspectives page to learn more about our TIBCO Spotfire-powered platform, or arrange a demo/get a quote. 

Overcome Collaboration Challenges with Outsourcing Partners


The pharmaceutical industry’s bias for doing everything in-house has been replaced by a growing preference for outsourcing non-core functions. Global drug discovery outsourcing alone - an $8.2 billion market in 2010 – is projected to reach $21.2 billion this year [source: Kalorama Information].

Contract Research Organizations (CROs) support the pharmaceutical, biotechnology and medical device industries - performing R&D and clinical trial activities. The rationale for this externalization of research is lower expense, greater efficiency, and greater agility and resourcefulness for the sponsor.

An Impact Report from The Tufts Center for the Study of Drug Development found that - on the clinical side - not only had global spending on CROs increased 15 percent annually, but projects with high CRO use were submitted more than 30 days closer to the projected due date than low-CRO-use projects.

Distributed research is not without challenges, however. Outsourcing and partnering present some hurdles that both sponsors and their CRO partners must overcome:

•   Discovery pipelines live in many hands and minds outside the sponsor company

•   Entire teams must benchmark performance and quality

•   Data and services must be integrated across vendors

•   Data silos and fragmented intelligence still exist

•   Organization and management are difficult

•   Stakeholders experience communication lags

•   Concerns remain over data security and IP protection

Outsourcing & Partnering Can Challenge the Control & Analysis of Data

Pharmaceutical sponsors can find themselves with too many collaboration partners, and no easy, secure means to connect and communicate with them in near-real time. They may work with more than one CRO, or CROs with multiple or remote locations (a growing number are in China), making it difficult to maintain control over data and to share results.

CROs also work with multiple sponsors, which introduces competitive and IP concerns. Outsourcing partnerships can be long-term relationships, but individual projects are often limited-term engagements. This leaves sponsors looking for a way to connect everyone involved in the project – internally and externally – but only for the duration of the project.

Much of the communication between sponsor and CRO today takes place through email. Not only can that introduce delays, but valuable data assets are floating in cyberspace with little to no controls.

Cloud-Enabled Collaboration

For outsourcing to succeed in driving innovation, reducing cost and expediting development, the mechanisms for communication and collaboration across distributed labs must be effective, efficient, secure and easily deployed. Users must be able to capture and analyze data from a variety of disparate sources, manage the metadata around it, and collaborate around shared results & insights.

Scientific Computing World recently reported that software vendors are using the cloud to build secure, flexible platforms as collaborative workspaces for their pharmaceutical and CRO customers.

PerkinElmer told the magazine that organizations five years ago worried about perceived safety issues of cloud technology. Knowing the cloud has played a major part in facilitating collaborative models, we believe today the industry is far more willing “to leverage cloud-based informatics platforms for sharing research, workflows, results and analyses.

Setting up collaborations in the cloud means that each party has access to the same informatics infrastructure. At the same time, data exchanged as part of the joint work or service activities is kept corralled in the cloud, helping to ensure that in-house systems are at much less risk of breach.”

The Collaboration Platform

PerkinElmer’s ongoing efforts to leverage the cloud are focused on providing a flexible informatics infrastructure for data sharing between pharmaceutical researchers and CRO partners. E-Notebook serves as the foundation for the collaboration platform, helping scientists take advantage of PerkinElmer’s traditional technologies (think of the rich experience delivered via the Windows client on a laptop). But now opportunities to analyze and review information are offered through a web browser. This way, users can monitor actions and submit data from mobile devices, staying better connected and improving efficiency.

Data entry into analytics applications can take place not just from a single device, but from the smartphones and tablets of scientists on the go - at conferences, in the field, even at lunch. The goal is to use these technologies so scientists get analysis results and access to data quickly, wherever they are.

With the Collaboration Platform, a pharmaceutical company can initiate a workflow request from E-Notebook by sending requests to a partner CRO using Elements. The sponsor and the partner collaborate and steer the work in the right direction within Elements; on completion, the work transitions back into E-Notebook, where it becomes part of the pharma company’s knowledge base. Learn more.
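The round trip can be pictured as a simple set of request states. The sketch below is purely illustrative - none of these field names or functions come from E-Notebook or Elements - it only shows the shape of a request that is submitted, worked on by the CRO, and returned with results to the sponsor’s knowledge base.

```python
# Hypothetical sketch of the sponsor -> CRO -> sponsor round trip described
# above. Field names and states are invented; they do not reflect the
# E-Notebook or Elements products.
def create_workflow_request(compound_id, assay, cro_partner):
    """Sponsor side: package a request to send to the CRO's workspace."""
    return {
        "compound_id": compound_id,
        "requested_assay": assay,
        "partner": cro_partner,
        "status": "submitted",            # submitted -> in_progress -> completed
        "results": None,
    }

def complete_request(request, results):
    """CRO side: attach results so the work transitions back to the sponsor."""
    request["status"] = "completed"
    request["results"] = results
    return request                        # ready to archive in the sponsor's knowledge base

req = create_workflow_request("CMPD-0042", "solubility", "CRO-Partner-A")
req = complete_request(req, {"solubility_mg_per_ml": 0.8})
print(req["status"], req["results"])      # completed {'solubility_mg_per_ml': 0.8}
```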

These efforts will:

1.   Remove the need for the sponsor to deploy a second E-Notebook or to grant CROs access to its own E-Notebook system. Elements - SaaS accessible through a web browser - removes the IT overhead of deploying and maintaining an additional E-Notebook system

2.   Enhance collaboration

•   The Collaboration Platform provides better capabilities to share experimental data in distributed research environments.

•   All experimental detail and results are accessible to the CRO and the sponsor company (no need to wait for results)

•   The data is accessible in real time

With no indication that outsourcing will slow down, pharmaceutical companies and their CRO partners should look to the cloud – and their solution providers – for the secure, flexible informatics platforms that will enable them to collaborate, communicate, and share data in real time.

Are you living up to the promise of outsourcing? Could you benefit from a cloud-based Collaboration Platform?


Clinical Development – Changing for the Better


With new regulations taking effect this fall and more advanced technologies now available, the state of clinical development of new drugs and devices is definitely changing for the better.

On November 1, 2016, revisions to the Good Clinical Practice (GCP) addendum E6 (R2), from the International Conference on Harmonization (ICH) Working Group, will take effect. This will increase the adoption of Quality by Design (QbD) and Quality Risk Management (QRM) principles in clinical development.

There are three big changes in store for pharmaceutical companies, but first, let’s revisit QbD and QRM and give some historical perspective.

QbD builds quality into the pharmaceutical manufacturing process, using a systematic approach to product design, process design and control, process performance, and continuous improvement. In embracing QbD, the FDA recognized that increased testing doesn’t necessarily improve product quality; it must be built into the product.

QRM, meanwhile, is a systematic process to assess, control, communicate and review possible risks to the quality of drug products across the product lifecycle. Quality should be based on scientific knowledge and the protection of patients. The effort, formality, and documentation involved in QRM should be commensurate with the level of risk.

A Bit of Clinical Development History

To appreciate the improvements these practices have made to date, it’s important to remember what the clinical development world was like before QbD and QRM first emerged. Back when ICH GCP E6 (R1) was released, in the mid-1990s, clinical trials were nowhere near as costly and complex as they are today - nor were the regulatory, ethical and quality standards as high.

From a technology perspective, two-plus decades ago the data from clinical trials was still mostly captured on paper. The most innovative companies were using Clinical Data Management Systems (CDMS) to perform heads-down double-data entry to produce paper Case Report Forms (CRF), which were then mailed from the clinical sites to the sponsor. Data clarification reports were mailed back and forth to resolve discrepancies. The process was amazingly inefficient.

ICH GCP E6 (R2) was an effort to modernize the process, and improvements continue to this day.

Changing Clinical Trial Data for the Better: E6 (R2)

The latest addendum, GCP E6 (R2), which applies to clinical trial data being submitted to regulatory authorities, provides “a unified standard for the E.U., Japan and U.S. to facilitate the mutual acceptance of clinical data by the regulatory authorities in these jurisdictions.”

Importantly, it encourages the “implementation of improved and more efficient approaches to clinical trial design, conduct, oversight, recording and reporting while continuing to ensure human subject protection and data integrity.”

Today’s life science industry has much better technological tools at its disposal, allowing for implementation of more efficient approaches in clinical development to meet the goals of protecting human subjects and data integrity. Unlike the ‘90s, when electronic data capture wasn’t available and the Internet was still a mystery, we now have ePRO, CTMS, IWRS, IRT, eTMF, and slick visual analytics for the analysis of clinical, operational, and safety data.

Realizing the technological advances, regulators are encouraging pharmaceutical sponsors and their CRO partners to use technology to achieve numerous goals:

• Identify and mitigate areas of high risk prior to conducting a trial

• Monitor risks during the trial

• Improve trial oversight

• Ensure data integrity and data quality

Three Big Changes for Clinical Development Pharma Sponsors

The revised E6 guidance places three main areas of responsibility with the pharmaceutical sponsors: risk-based quality management, risk-based monitoring, and CRO oversight.

1. Risk-Based Quality Management

New ICH guidance states that quality management is expected to be risk-based, and should be applied to monitoring and auditing tasks. Quality management involves efficient protocol design, data collection tools and procedures, and information collection that’s essential to decision making.

A risk-based quality management system should be implemented to manage quality throughout the clinical trial, including: design, conduct, recording, evaluation, reporting, and archiving. Activities should focus on human subject protection and the integrity of trial results. Methods to assure trial quality should be proportionate to trial risks and the importance of the information collected. The trial must be operationally feasible, and operational documents such as protocols and CRFs should be clear, concise, and consistent. Anything unnecessary – think of certain procedures or data collected – should be avoided.

2. Risk-Based Monitoring (RBM)

The monitoring portion of the guideline had major additions, including an addendum calling for a risk-based approach to monitoring clinical trials by leveraging a combination of onsite and centralized monitoring. Centralized monitoring was called out as a method to “complement and reduce the extent and/or frequency of onsite monitoring.”

Monitoring activity results should be “documented in sufficient detail to allow verification of compliance with the monitoring plan.” The monitoring plan should be “tailored to the specific human subject protection and data integrity risks of the trial.”

Such a centralized monitoring strategy requires, from a technology standpoint, three capabilities for RBM:

1. Data integration

2. Visualization-based data discovery

3. Predictive analytics

These lead to fast, actionable insight; visibility into the unknown; self-service discovery; and universal adaptability.
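As an illustration of what centralized monitoring can look like in code, the sketch below computes site-level risk indicators from invented metrics and flags sites that deviate sharply from the study average, so onsite visits can be targeted. The metrics, z-score threshold and site count are all assumptions; a real monitoring plan defines its own indicators and limits.

```python
# Illustrative centralized-monitoring check: flag sites whose risk indicators
# deviate sharply from the study mean. Metrics, threshold and data are invented;
# with only a handful of sites a z-score screen like this is deliberately crude.
import statistics

# site -> (queries per 100 data points, adverse events per subject)
site_metrics = {
    "Site-01": (4.0, 0.30),
    "Site-02": (3.5, 0.28),
    "Site-03": (12.0, 0.05),   # many queries, suspiciously few reported AEs
    "Site-04": (4.5, 0.32),
    "Site-05": (3.8, 0.29),
}

def flag_outlier_sites(metrics, threshold=1.5):
    """Return (site, indicator) pairs whose z-score exceeds the threshold."""
    flagged = set()
    for idx, indicator in enumerate(["query_rate", "ae_rate"]):
        values = [m[idx] for m in metrics.values()]
        mean, sd = statistics.mean(values), statistics.stdev(values)
        for site, m in metrics.items():
            if sd and abs(m[idx] - mean) / sd > threshold:
                flagged.add((site, indicator))
    return flagged

print(sorted(flag_outlier_sites(site_metrics)))   # Site-03 flagged on both indicators
```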

3. CRO Oversight

For QbD and QRM to be more successful in clinical development, regulators are asking sponsors to take on a more active role; they can’t completely hand over trial responsibility to the CROs. “The sponsor should ensure oversight of any trial-related duties and functions carried out on its behalf,” ICH guidance states.

Achieving Clinical Trial Goals

With better technologies and updated regulations – like the addendum to ICH CGP E6 – working in tandem, life sciences companies are able to improve clinical development and achieve many important goals:

• Better trial oversight

• Reliable data quality

• Lower monitoring costs

• Faster study cycle times

• Patient safety

What would a Phase III project manager from 1996 (talking on a Motorola StarTAC!) think about the technological advancements - and better guidance - available for clinical development in 2016?

Are you working with the best solutions for QbD and QRM in your clinical trials?

ISO Certification 9001:2008 for Informatics R&D and Global Support

               

Do Standards Have a Place in Software Development?

Standards would seem to be anathema to software developers, who might protest that their use would stifle the creativity and flexibility required for agile or iterative development.

By their very definition, standards are models or examples established by authority, custom, or general consent - rules set by authority for the measure of, among many things, quality or value.

Is it possible to create standard requirements to improve the quality or value of software, without affecting the very creativity needed to achieve what the software or application is being designed to do? The simple answer is yes. ISO 9001:2008 certification can improve quality management systems for software. The International Organization for Standardization (ISO) establishes requirements for quality management systems in organizations that seek: 

• to demonstrate their “ability to consistently provide product that meets customer and applicable statutory and regulatory requirements” and

• to “enhance customer satisfaction through the effective application of the (quality management) system.” This includes establishing processes for continual improvement.

Because ISO 9001:2008 requirements are “intended to be applicable to all organizations, regardless of type, size, and product provided,” they apply well to quality management in software development and service.

This year, PerkinElmer received ISO 9001:2008 certification for its Informatics R&D and Global Support functions - both of which are important to customer satisfaction. Guido Lerch, the company’s executive director of quality control and assurance for informatics, and Lillian Metcalfe, quality system manager, say they saw in ISO 9001:2008 certification an opportunity to proactively invest in implementing standards and thus improve the overall quality of software delivered.

Measure Inputs and Outputs

Certification provides a framework under which individual companies create the processes and procedures that lead to quality products and services. ISO does not mandate what the certified organization does; rather, the organization seeks certification of the standards it devises and applies to its processes.

For software development, PerkinElmer has not restricted what its development teams can do. Instead, the company structured the “inputs and outputs” – the precursors for starting development and the processes to evaluate what they release.

“In the middle, we want our developers to go off and experiment and try lots of things,” Lerch says. 

Adherence to ISO 9001:2008 requirements assures that R&D and global support processes are clear, documented, and monitored, and that people are trained. It creates checks and balances to monitor the effectiveness of the quality management system, which leads to improved product quality and more satisfied customers.

It can also reduce the duration or scope of customer audits, as customers gain confidence from the knowledge that standards and processes are in place to develop and build products in a consistent manner. “There are hundreds of questions they won’t need to ask us,” according to Lerch, since PerkinElmer can explain its processes.

ISO 9001:2008 gives customers confidence they know exactly what they are deploying because the software has gone through a thorough testing regimen that follows certified procedures. In addition to knowing there are quality standards in place, there are also two people – Lerch and Metcalfe – whose roles are dedicated to the quality management of all PerkinElmer software released.

Two Important Points

While PerkinElmer Informatics has received ISO 9001:2008 certification for its R&D and Global Support functions, the company has been following such guidelines in principle for many years. Certification formalizes the company’s efforts.

ISO 9001:2008 has now been updated to ISO 9001:2015. PerkinElmer has three years to recertify under the new standards, and is committed to not only achieving certification, but maintaining it.

How confident are you in the quality management of the software and applications you’re deploying? Does ISO 9001:2008 certification increase that confidence?

The Convergence of Search and Analytics: A 360-Degree Data View


The convergence of search and analytics provides greater visibility.

Business intelligence (BI) tools were born to optimize data-driven decision making, but BI tools need structured data on which to operate. All too often, relevant data isn’t nicely formatted, stored and curated but instead resides hidden in some other less-structured format - such as text fields, technical reports or other longer descriptive prose. Extracting this hidden data has historically been too difficult, meaning that current data-driven decisions are actually based on only the most accessible - and therefore incomplete - information.

Enterprise search tools were introduced to help business and data analysts find more relevant content across disparate repositories. They provide greater classification and retrievability from document stores, file shares and other content silos within an organization. But the end user still has to open the content and read it to extract value, understanding and the relevant data. Any subsequent analysis requires the end user to transcribe information into a form suitable for incorporation into a BI tool - often correlating data across the multiple relevant pieces of content uncovered by enterprise search.

The Birth of Search Based Analytics

Recently Gartner highlighted a trend that has been apparent to many industry observers - that these two solutions are converging into a space they describe as ‘search-based analytics.’ Search-based analytics provide the capability to deliver a 360-degree view of all available and relevant information for both content access and analysis. As part of this process, Content Analytics can be applied to tease out the underlying data structures from otherwise unstructured content, facilitating the path from content access to data analytics. Not only does this unify data, but the layering of data visualization and statistical analysis over search helps identify and explain connections and provide insight.

Think of it this way: a Google search provides a long list of varied information assets related to a search. This data is presented as hyperlinks, providing the end user with a prioritized list of results they can read and explore further. However, Google still relies on the user to read and determine value from the sea of offerings in order to extract the underlying meaning. 

In contrast, Yahoo! Finance presents data plots, charts, and tables so that the end user can query and filter these structured values - enabling analysis of trends and patterns. The data is supplemented with links to analyst reports, press releases and pertinent content. This is a basic example of search-based analytics, where content links are provided to enhance the previously structured data.

Content Analytics ingests the content sources for you and extracts their deeper meaning. Content Analytics creates structured data from unstructured sources and supplements it with the previously structured data as well as the ability to quickly access the relevant content for further context.
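A stripped-down example of that idea appears below: pulling a structured field out of free-text reports so it can sit alongside data that was already structured. Real content analytics engines use natural language processing and entity recognition; the pattern and record layout here are invented purely for illustration.

```python
# Naive illustration of turning unstructured text into structured, analyzable
# rows. Production content analytics uses NLP/entity extraction; this regex and
# record layout are invented for the example.
import re

reports = [
    "Compound AB-123 showed an IC50 of 4.2 nM in the kinase panel.",
    "Stability study: compound AB-987 degraded 12 percent at 40 C.",
]

rows = []
for text in reports:
    match = re.search(r"\b[A-Z]{2}-\d+\b", text)       # e.g. AB-123 (invented ID format)
    rows.append({
        "compound": match.group() if match else None,  # now a filterable, chartable field
        "source_text": text,                           # keep a link back to the content
    })

for row in rows:
    print(row["compound"])    # AB-123, AB-987
```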

To be truly successful, such a convergence must leverage natural language capabilities - both to interpret the underlying content and to make it easier to find the information users seek. People interpret concepts and findings in many ways, and search-based analytics solutions must accommodate the variance of language integral to human nature. After all, data is only useful if you can find it and draw insights from it.

This enables workers at all levels – not just IT or super users – to gain a complete picture based on a search of the complete data set. You don’t have to know how to ‘speak database’ to get at the data you need.

Years ago, Gartner called the notion of such comprehensive user-friendly search Biggle, from a combination of BI and Google. But this vision is only just now starting to be realized. Evidence suggests that businesses have begun to explore and invest in this convergence of search and analytics.

Challenges with Current Tools

Organizations continue to struggle with integrating disparate data – structured, unstructured, internal, and external. Search tools can help find documents, but they still rely on the user to generate insights from the data within. And valuable data is still locked away in unstructured or poorly structured forms.

Analytical tools still rely on data that is structured, rendering them useless for an organization’s unstructured data. Unless something changes, Gartner estimates that up to 90% of information assets in a typical enterprise will be inaccessible by 2017.

Holistic Data Initiatives

There are opportunities to unify the search and analytics experience to allow immediate access to both relevant content and analysis of structured and unstructured, public and private data.

For instance:

• Major hospitals are analyzing their Electronic Medical Records, including unstructured text from doctors’ notes, to better understand hospital operations and clinical results.

• Pharmaceutical companies are combining sample, compound, or material data from technical documents and reports with assay and test data from data warehouses, experimental information from ELNs and safety assessments from clinical study reports to provide a 360º view of the sample, material or compound.

• Clinical development organizations are reducing the risk of clinical trial failures through improved study design and more targeted study populations by extracting and analyzing a correlated view of all patient information – from biomarkers to phenotypes to electronic health records - to drive patient stratification.

• Outcomes researchers are analyzing medical outcomes through access to public sources, health records and internal databases to provide a consolidated view of the effect of therapeutics in the broader population.

• Industry leaders are combining public and proprietary knowledge to see a consolidated and unified view of the current competitive landscape, thereby identifying business risks and opportunities through the aggregation and correlation of disparate data sources into a single holistic view. For instance, pharmaceutical companies are assessing drug patent lifespans and their impact on market dynamics.

PerkinElmer Signals™ Perspectives, powered by Attivio®, delivers that 360-degree data view, addressing the challenge that all research organizations face. It gathers the information stored in many different places and formats, ranging from databases to reports to social media, and provides the analysis that enables decision making based on complete, relevant information.

Do you have a complete picture of your data? To learn how to find and correlate disparate data using natural language queries – without having to understand the underlying data structures – view our webinar here featuring PerkinElmer Signals™ Perspectives: Universal Adaptor for TIBCO Spotfire.

Clinical Informatics: So Much Data, So Little Time

               

Companies in the pharmaceutical and healthcare industries, overwhelmed by data, are turning to clinical informatics to solve their data challenges.

And there are plenty of challenges:

•  The industry is facing an ever-increasing number of clinical trials worldwide.

•  Every day, clinical and drug development professionals are generating more complex and disparate kinds of data.

•  Most companies are using a wide variety of methods to collect, sort and analyze the data.

•  Many of these data collection and sorting methods are unconnected and isolated from each other, leading to a need for manual data aggregation.

This new video on PerkinElmer Informatics for Clinical Development explains the importance of rapid, straightforward analysis to allow fast-paced decisions that are critical to the success of a clinical trial.

To be valuable, clinical informatics must deliver data structured in a way that allows it to be effectively retrieved and used.

Whether improving patient enrollment & retention, reviewing clinical data and operations, or handling critical risks in monitoring, the objective of clinical informatics is to provide data integration across the development lifecycle and deliver a complete picture of your pipeline - in real time.

This empowers you to make meaningful, data-driven decisions, increases your company’s clinical development speed & agility while dramatically reducing costs, and helps you meet the challenges of the future of medicine head-on.

Learn more about PerkinElmer Informatics for Clinical Development here.


Greater Visibility in Clinical Data Review

               

In the era of big data, it’s a dominant theme – the need to break down silos and better integrate and present data so more time is spent analyzing it and less time preparing it for analysis. Why should it be any different in clinical development?

With more than 208,000 trials going on worldwide, and each one involving broader and more diverse data sets and formats, organizations from big pharma to virtual biotechs to CROs are looking for tools that empower optimal analytics-driven decision-making.

Clinical development has evolved beyond the analysis of traditional data obtained from randomized clinical trials alone to determine the safety and efficacy of drugs. Today, translational and health outcomes data are being integrated with traditional clinical measures, making the analysis of this broader data set much more challenging, but also much more insightful.

Visual Analytics

Getting to those insights quickly is essential in an industry driven by time, safety and cost pressures. Medical monitors, safety review teams, biostatisticians, data managers, pharmacologists and others need tools that let them quickly analyze the breadth of available data. They need a complete picture of what’s occurring in clinical development – as it’s happening.

Optimally, data should be available for analysis as soon as it’s collected – from first patient in. Waiting weeks or months until data is collected, scrubbed, cleaned, and locked down is no longer an option.

An analytics platform that leverages data visualization can speed the clinical data review process by empowering clinical development teams to more easily see trends, outliers, and patterns buried in a sea of clinical data. This ability to quickly visualize and analyze data lets team members optimize the trial process and gain the insights to reduce risk, manage resources and investments for greatest patient benefit, and, ultimately, move drugs and devices faster to market.

What are some of the things people can easily identify or evaluate with a visual analytics platform?

     • Unknown relationships in data

     • Early safety signal detection

     • Data quality issues

     • Protocol violations and dropouts

At its heart, effective clinical trial management means finding – as early as possible – the things that are going wrong, or have the potential to go wrong. To plumb the depths of clinical development data, spending less time preparing it and more time acting on insights from it, requires the right visual analytics platform.
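To ground that in something concrete, here is a deliberately simple sketch - invented lab values and a basic interquartile-range rule - of the kind of calculation a visual analytics platform layers interactive plots on top of when surfacing outliers and potential safety signals.

```python
# Simple outlier flag over invented lab data: results outside 1.5x the
# interquartile range are surfaced for medical review. Visual analytics
# platforms layer interactive charts over calculations like this.
import statistics

alt_values = {            # subject -> ALT result (U/L); invented numbers
    "S-001": 28, "S-002": 35, "S-003": 31, "S-004": 210,
    "S-005": 26, "S-006": 40, "S-007": 33, "S-008": 29,
}

q1, _, q3 = statistics.quantiles(alt_values.values(), n=4)
upper_fence = q3 + 1.5 * (q3 - q1)

flagged = {subj: val for subj, val in alt_values.items() if val > upper_fence}
print(flagged)            # {'S-004': 210} - a potential early safety signal to review
```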

Clinical Data Source Integration

Whether virtual biotechs or big pharma, users need real-time access to aggregated data from multiple sources. There are a wide variety of clinical data sources: SAS®, Medidata Rave®, Oracle Inform®, and Oracle LSH® are just a few. Connecting to these without scripting or IT involvement is key to accelerating data analyses. People who need the information no longer have to wait for customized reports enabled by IT or biostatisticians. Nontechnical people on the clinical development team must be able to easily leverage the solution too.

With a direct connector to SAS or Medidata Rave, for example, users can quickly gather information from source systems and, using intuitive visualizations that dig into the clinical data, spot outliers, patterns, and trends.

PerkinElmer’s Clinical Data Review solution, which is powered by TIBCO Spotfire®, gives clinicians the visual analytics power needed to speed medical review, assess data quality, and identify causalities & relationships in the data. The insights gained can result in fundamental process improvements and cost savings:

     • Save days per week, per trial in medical review

     • Reduce database lockdown prep time

     • Quickly establish a safety profile of the data

     • Better understand key attributes from data exploration

     • Take questions to their next logical step

     • Visualize data in seconds, vs. waiting days for static reports

Cempra Pharmaceuticals called it “a best-in-class approach to analyzing our clinical trial data” while IDC Health called TIBCO Spotfire “the de facto end-user interface of choice for the strategic assessment of clinical data.”

Are you making the most of your clinical data review? Gaining insights at the earliest possible moment? Investing in the right visual analytics platform can make all the difference in the rapid, robust review of your clinical data.

For specifics on timely safety surveillance and optimizing data flow with our Clinical Data Review solution, check out our webinars.

Cloud-based High-Performance Computing Solves Image Analysis Bottleneck in Phenotypic Screening

               

There is no denying phenotypic screening offers many advantages for drug discovery. As we’ve previously noted in this blog, many pharmaceutical companies have increased their use of phenotypic screens after it was reported that the majority (28) of first-in-class small-molecule drugs receiving FDA approval from 1999-2008 traveled the phenotypic path, vs. a target-based approach (17).

The Phenotypic Bottleneck

However, the trend towards empirical models involving organisms (versus mechanistic models at the molecular level) increases the complexity of the analysis – and therefore processing time. Researchers conducting phenotypic screening campaigns at pharmaceutical companies that process approximately 500,000 compounds calculate an image analysis time of at least three months - even if they could keep image analysis time to one hour per 384-well plate.
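The arithmetic behind that estimate is easy to lay out. The sketch below uses the figures quoted above; the number of parallel cloud workers is an assumption added to show why the cloud approach discussed later shrinks the timeline so dramatically (real campaigns also add re-runs, QC and queue time on top of the raw compute).

```python
# Back-of-the-envelope timing for the screening campaign described above.
# Compound count and per-plate analysis time come from the text; the worker
# count is an assumed figure for illustration.
import math

compounds = 500_000
wells_per_plate = 384
hours_per_plate = 1.0

plates = math.ceil(compounds / wells_per_plate)      # ~1,303 plates
serial_hours = plates * hours_per_plate
print(f"{plates} plates -> {serial_hours / 24:.0f} days of nonstop serial analysis")
# ~54 days of uninterrupted compute; with downtime and re-analysis the
# three-month figure is easy to reach.

workers = 50                                          # assumed pool of cloud nodes
print(f"with {workers} parallel workers: ~{serial_hours / workers:.0f} hours of compute")
```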

That bottleneck stems not only from the volume of compounds to be processed, but from the number of parameters. There can be thousands of parameters, many of them defined in multiple ways, which must be analyzed to discover and understand a relevant molecular mechanism of action (MOA). It’s a far more complex analysis than a single-target one. Roche calls phenotypic screening “the most refined and delicate of all assays.”

Research Challenges Facing High-Content Image Screening

Cell biologists in pharma, biotech, and academic and institutional research, working with microscopy and high content imaging technology, face three IT-related challenges:

• Data storage & management

• System architecture

• Speed and performance 

Data storage & management – Researchers need a suitable database to transform high content screening analyses into robust biological conclusions. They are looking to store everything from raw image data and image analysis results to secondary analysis, hit stratification, metadata, and phenotypes, while being able to perform searches, conduct advanced analytics, share data, and collaborate around results.

Unique to phenotypic screening is the vast amount of raw data generated. A single screen can produce a few terabytes of high resolution images. Another critical need is the ability to integrate screening data with other data types, from genomics and high-throughput screening to tissue, in vivo, and more.

System architecture – While on-premise solutions give a pharmaceutical company close control over its data, they can be expensive to keep current with server and computing requirements. It doesn’t take long for a server to outgrow its capacity and for computing power to struggle to keep up, as newer technologies churn out more data even faster. Leveraging the web and sharing data with external partners - such as contract research organizations - also strains IT resources.

Speed and performance – Phenotypic screening needs image analysis to catch up with image acquisition speeds. Batch analysis of large high-content screening campaigns, in particular when analyzing complex cellular assays with texture and morphology changes, can take weeks or months – an unacceptable delay. And users want to parallelize more functions – import, analysis, delete, etc.

A Cloud Solution

Rather than investing in more on-premise hardware and systems, pharmaceutical and biotech companies can find the computing power they need in the cloud. And, just as the sharing economy means people can get rides without buying a car, for example, the cloud means researchers can get computing power without buying hundreds of PCs and numerous servers. Providers like Amazon and Microsoft let researchers “rent” high-performance computing power for just the amount of time required. Compared to the pharmaceutical case calling for three months of analysis time, using the cloud and high performance computing can bring it down to two or three weeks.

Cloud-Based Phenotypic Screening

Moving phenotypic screening to the cloud, while a sound strategy for saving time and resources, requires the right software solutions. At SLAS in January and the High Content Analysis and Phenotypic Screening conference in February, PerkinElmer introduced its platform to offer cloud-based computing performance, machine learning, and storage for phenotypic screens (as well as non-high-content based screens). Signals for Screening combines the functionalities of the Columbus high-volume image data storage and analysis system with High Content Profiler automated hit-selection and profiling software, powered by TIBCO Spotfire® data visualization and analysis software – and enables it for the cloud. This brings the entire workflow for data analysis and phenotypic screening under one umbrella.

When built on top of Signals, PerkinElmer’s cloud-based data management and aggregation platform, Columbus and High Content Profiler not only perform image analysis, but also machine-based characterization of parameters in individual cells. In a single, seamless workflow, data from the instrument can be analyzed to answer a broad array of questions regarding each individual cell.

Speed Analysis Time and Save on IT Investment

Leveraging the cloud with high-performance computing software significantly speeds analysis time while reducing IT investment. Rather than spend more on additional PCs or servers, why not look into cloud-based solutions that offer functionality specific to drug discovery screening needs – both phenotypic and target-based? 

Have you used or considered cloud-based drug discovery solutions?

Precision Medicine Starts Here: Signals™ for Translational

               

Translational scientists face mountains of data – different types, in different formats, and from different sources – making it nearly impossible to find signals in the noise.

Scientists are working with more data – and asking more complex questions – than ever before. And they are increasingly facing a firewall between data and discovery.

PerkinElmer’s Signals™ for Translational brings together laboratory research and cutting-edge clinical data to match the right treatments to the right patients. Data is aggregated and organized on an easy-to-use cloud-based platform, allowing scientists to pull it from any source, at any time. Signals™ empowers scientists to move from hypothesis to validation in minutes, not months. 

Learn more about Signals™ and Translational research. 

