Accelerate Insights into Your Rave Data

Analytics and Visualization Enhance Clinical Data Management

Medidata Rave and other EDC platforms hold large volumes of clinical research and clinical trial data. As electronic data capture (EDC) and clinical data management (CDM) systems, they help clinical trial teams capture, manage, and report patient data.

To fully realize the value of the investment in an enterprise-scale EDC solution, clinical leaders often seek to leverage EDC data for a wide range of analytics challenges. They aim to use EDC systems like Rave for Medical Monitoring, Risk-Based Monitoring, Clinical Trial Operations optimization, and more.

Electronic Data Capture for Pharma & CROs

As the gold standard for EDC in global, complex, randomized clinical trials, Rave is a comprehensive solution to a major industry challenge for large and mid-sized pharmaceutical companies and contract research organizations. 

But are users getting all the insights they can from Rave’s tool set? Does it support full clinical data review, risk-based monitoring, and more? Could users benefit from purpose-built applications that enhance data-driven insights?

After implementing an EDC system and ensuring baseline requirements are fully met, it can be disappointing to learn there may be gaps that demand additional solutions. Yet, as the number of clinical trials grows, having the right informatics to accelerate information-to-insight in clinical operations becomes critical. Estimates are that the global e-clinical solution software market will reach $6.51 billion by 2020, driven by the escalating volume of clinical trial data.

“When speaking with our pharma and CRO clients, I’ve found that the majority require more powerful solutions for RBM and CDR. It really comes down to the need for deeper levels of understanding with operational risks and how to quickly adapt and act on them,”
said Masha Hoffey, Director of Clinical Analytics at PerkinElmer.

Expanding the use of a deployed solution like Rave to cover additional medical review and RBM requirements can be risky. It is not always clear:

  • What the user requirements for the expanded use are.
  • What requirements Medidata supports out of the box.
  • Which requirements may need additional services.
  • Which requirements are supported if Rave is implemented using predefined standards.

While there is nothing wrong with using predefined standards, there is the risk that they compromise flexibility. What’s more, the decision to use predefined standards often has to be made at such an early phase of the Rave implementation that there is a good possibility those standards will not meet your immediate or future needs.

Five-Step Gap Analysis

A proven way to assess whether to use Rave’s standard analytical and visualization capabilities versus an alternative solution is to perform a five-step gap analysis:

  1. Identify the use cases for analytics and visualization in your medical review and/or monitoring workflow
  2. Spell out the user requirements
  3. Map Medidata Rave capabilities to those requirements
  4. Identify requirements that are not supported 
  5. Consider alternative solutions that offer connectors to Rave to mitigate any gaps

Complementary solutions like PerkinElmer’s Clinical Data Review (CDR) and Risk-Based Monitoring (RBM), with seamless connectivity to Rave, are purpose-built for clinical operations and safety teams.

With analytics and visualization designed for these workflows, they offer capabilities such as:

CDR

  • 360° view into patient profile, adverse events, and other safety domains
  • Optimized data surveillance across functional teams
  • Ensured high-quality data for faster database lock

RBM

Seamless connectivity means you don’t have to manually export data from Rave: the Connector for Medidata Rave can extract, transform, and load Rave data into TIBCO® Spotfire for clinical analytics and visualization. Using the standard CDISC ODM (Clinical Data Interchange Standards Consortium – Operational Data Model) format facilitates the regulatory-compliant acquisition, archiving, and interchange of metadata and data for clinical research studies.
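
For readers curious about what an ODM-based extraction involves, here is a minimal Python sketch that flattens a hypothetical CDISC ODM 1.3 export into tabular rows ready for an analytics tool. It is illustrative only; the file names are made up and this is not the PerkinElmer connector, which automates the full extract-transform-load step for you.

import csv
import xml.etree.ElementTree as ET

ODM_NS = {"odm": "http://www.cdisc.org/ns/odm/v1.3"}

def flatten_odm(path):
    """Yield one row per ItemData value, keyed by subject, form, and item OIDs."""
    root = ET.parse(path).getroot()
    for subject in root.findall(".//odm:ClinicalData/odm:SubjectData", ODM_NS):
        for form in subject.findall(".//odm:FormData", ODM_NS):
            for item in form.findall(".//odm:ItemGroupData/odm:ItemData", ODM_NS):
                yield {
                    "SubjectKey": subject.get("SubjectKey"),
                    "FormOID": form.get("FormOID"),
                    "ItemOID": item.get("ItemOID"),
                    "Value": item.get("Value"),
                }

if __name__ == "__main__":
    rows = list(flatten_odm("rave_export.odm.xml"))  # hypothetical ODM export file
    with open("rave_flat.csv", "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["SubjectKey", "FormOID", "ItemOID", "Value"])
        writer.writeheader()
        writer.writerows(rows)

Once flattened, the same rows can be loaded into TIBCO Spotfire or any other visualization tool as an ordinary table.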

Working together, Medidata Rave and PerkinElmer provide a more complete, accelerated view of clinical data, increasing the overall quality and speed to insights while leading to better decisions from the Rave data. 

Want to supercharge your Rave clinical data? Compare your gap analysis to our Clinical Solutions or contact us to better understand your requirements.

Wishes Can Come True: The ChemDraw Innovation Challenge Starts Now!


The PerkinElmer ChemDraw Innovation Challenge 

If you’ve ever said, “If only it could…” while using ChemDraw, here is your chance! Dust off your old ideas or dream up fresh ones - because the first-ever ChemDraw Innovation Challenge starts now. 

We’re inviting ChemDraw’s more than 1 million users to think about how the chemical drawing platform might help you do your science just that much better.

No Idea is Too Small … or Too Big

No idea is too small or too big, too incremental, radical, or disruptive. As a community of users and chemists doing a wide range of work, you have no doubt reached a limit, or thought of a way we could better support your use of ChemDraw and the family of ChemDraw solutions, including Chem3D, ChemDraw Cloud, ChemDraw for Excel, ChemFinder, and our new PerkinElmer Signals for ChemDraw.

Do you wish ChemDraw:

  • Would become the Google Docs of collaborative chemistry?
  • Could support biomolecular HELM notation?
  • Offered support for a specific plug-in?

Whatever your notion or idea, we want to hear about it!  

Join the ChemDraw Innovation Challenge now!

How Do I Submit My Idea?

We’re using the Spigit innovation management platform to crowdsource ideas and foster collaboration among participants. 

How the ChemDraw Innovation Challenge Works

The process follows a flow of IDEAS:

I = identify the opportunity and the target audience (i.e., ChemDraw users)

D = develop a plan, design the experience, and power the platform

E = engage the crowd, collaborate and discuss, and build upon ideas that are graduated to the next level

A = analyze those ideas, score and rank them, prioritize them

S = select and review winning ideas, communicate the results, recognize and reward the winners, and implement the innovative ideas


During the submission phase, we’ll welcome new ideas and open up those ideas to other Challenge participants to comment and provide feedback. Spigit creates a dialogue by informing those on the thread of new comments. Challenge participants will also be invited to vote for their favorite ideas.

As ideas are generated, they will automatically be awarded points based on the number of votes, views, and comments. Those receiving the most votes will proceed to the next round.

The ChemDraw Community Drives Improvements

This “emerging ideas” round relies on more input and feedback from our ChemDraw Innovation Challenge community – what thoughts do you have to improve the innovations that have progressed this far? Based on your experiences and your needs, what can you contribute that adds value to these ideas? 

This round is also automatically scored based on the number of votes, views, and comments – and the finalists here are then evaluated by our team of PerkinElmer ChemDraw experts. 

These experts are assigned to review the proposed innovations based on their experience with the type of idea – is it a feature/function, or related to service, workflow, integration, etc.? Each idea will be given a score of 1 to 5 by the experts, and those with an average rating of 3.5 or higher will proceed to the Pairwise Round, where once again the community will vote on the best of the best.

ChemDraw Innovation Winner Selection

The winners of this exhaustive, crowdsourced effort will be selected by the Expert Review Panel, and will be acknowledged by PerkinElmer and the ChemDraw community. All ChemDraw users will be winners as ChemDraw – now in its 32nd year – benefits from an influx of innovation that helps us all achieve greater heights with our chemistry, and stronger bonds within our ChemDraw community.

Join the ChemDraw Innovation Challenge Today!

Revisit your “if only” wish list for ChemDraw, dust off those glimmers of ideas, and join the ChemDraw Innovation Challenge. Your idea, whether incremental or radical, just might emerge as a winner.




Precision Medicine: Can Informatics Help Meet Expectations?


A little over two years ago, the Precision Medicine Initiative was announced to much fanfare at the prospect of one day being able to provide individuals with the right treatment at the right time, based largely on their unique genetic makeup, as well as environment and lifestyle factors.

Since the sequencing of the human genome, efforts have been underway to leverage genetic testing to find more tailored treatments and therapies for individuals’ conditions. The White House says “precision medicine is already transforming the way diseases like cancer and mental health conditions are treated,” and points to molecular testing and genetics to determine the best possible treatment. It put $215 million behind the initiative, which is collecting genetic data from volunteers, to be shared broadly with researchers and others involved in finding precision - or personalized - medicine solutions.

From a consumer perspective, precision medicine “is not yet delivering customized care.” In fact, there have been some troubling misdiagnoses and - more generally - medical professionals have had difficulty making individual treatment decisions based on the genetic data currently made available to them.

Potential for Precision Medicine

Excitement at the potential for precision medicine, however, has not dimmed. Instead, government agencies, research institutes, medical professionals, and technology vendors are working hard to deliver on its promise.

In 2016, the U.S. FDA issued two sets of draft guidance to streamline regulatory oversight for Next-Generation Sequencing (NGS) tests that are used to sequence a person’s genome. The guidance will help developers of NGS-based tests as the FDA works to ensure the tests are safe and accurate.

The first draft focuses on standards for designing, developing, and validating NGS-based in vitro diagnostics used for diagnosing germline (hereditary) diseases. The second draft guidance focuses on helping NGS-based test developers use data from FDA-recognized public genome databases to support clinical validity.

Data-Driven Precision Medicine

Bioinformatics joins NGS and drug discovery as technologies that will - in part - drive the global precision medicine market to nearly $173 billion by the end of 2024. The global market study said “proper storage of genome data plays a crucial part in this segment,” and reported acute data storage and data privacy issues remain to be solved.

Since precision medicine is a data-driven initiative, it makes sense that standards apply to how data – whether clinical or research – is collected, stored, analyzed, and used to support disease research, translational medicine, and drug discovery. PerkinElmer welcomes efforts to standardize big data analytics for precision medicine.

Informatics platforms will be required that can:

  • Support translational research with designated workflows
  • Securely consolidate public databases and patient information in a single solution
  • Provide analytical and visualization capabilities for data from a host of sources – electronic health records, clinical lab records, genetic testing and more
  • Integrate and aggregate data for cohort analysis
  • Leverage the cloud to increase access to the broadest range of data at a low cost
  • Enable self-service and effective collaboration within and across organizations

Want to leverage informatics to make the excitement for precision medicine a reality? Deploying the right informatics solutions can set you on the right path.

Download our white paper, The Need for an Informatics Solutions in Translational Medicine, to learn how our platform - designed to address the complexities of translational research - enables researchers to more quickly and easily identify and manage biomarkers essential to precision medicine.


Beyond Genomics: Translational Medicine Goes Data Mining


We are fortunate to live in a time of growing life expectancy across most world populations, but this has also resulted in an increased prevalence of chronic diseases. Accounting for more than 70 percent of healthcare spending in the developed world, treatments for chronic diseases are typically costly, prolonged, and - in many cases - largely ineffective. 

This is further compounded by the extremely low probability - less than 10 percent - that a drug will make it from Phase 1 to approval during the course of a clinical trial. This statistic is disheartening not only for the scientists, physicians, and patients involved, but also for the drug development industry at large. In addition, the average cost of clinical trials - before approval - has reached $30-40 million across all therapeutic areas in the U.S.

A Drug Development Strategy Focused on Efficacy

This unsustainable situation has driven the notion that money could be better spent to pursue more effective treatments. Over the last couple of decades, the drug development industry has been looking at strategies to select more efficacious drugs while controlling the ever-spiraling costs related to their development.

This has led to the evolution of the multifaceted discipline, Translational Medicine (TM), which has been called “bidirectional” since it “seeks to coordinate the use of new knowledge in clinical practice and to incorporate clinical observations and questions into scientific hypotheses in the laboratory.” The beauty of the translational approach is that it applies research findings from genes, proteins, cells, tissues, organs, and animals to clinical research in patient populations, with an explicit aim of predicting outcomes in specific patients. Essentially, it promotes a “bench-to-bedside” approach, where basic research is used to develop new therapeutic strategies that are tested clinically. 

From Bench-to-Bedside…and Back Again

However, translational medicine also works “bedside-to-bench,” since learning from clinical observations can provide valuable feedback on the application of new treatments and potential improvements.

Recent technological advances have endowed us with the ability to test this “Bench-to-Bedside-to-Bench” approach. Today we can investigate the molecular signature of patients to identify biomarkers (or surrogate clinical endpoints) that then allow us to stratify the patient population and administer the drug only to those likely to respond to it.

Herceptin, arguably ‘the first personalized treatment for cancer,’ is a good example of the benefits of a translational approach. A 30-year success story in the making, Herceptin was discovered by scientists who used genomics technologies to identify ‘over-expression of HER2,’ which leads to a particularly aggressive form of breast cancer. Adding Herceptin to chemotherapy has been shown to slow the progression of HER2-positive metastatic breast cancer. Read the story here.

Human Genome Project – Translational’s Driving Force

The world's largest collaborative biological project, the ‘Human Genome Project,’ successfully mapped 95% of the human genome. The sequencing of the human genome holds benefits for a wide range of fields and is perceived as the driving force behind Translational Medicine applications. We can now look forward to a time where the focus will start shifting to a more ‘individualized approach to medicine,’ perhaps even a focus on disease prevention as opposed to treating symptoms of disease. 

The viewpoint of the Personalised Medicine Coalition, that “physicians [will] combine their knowledge and judgment with a network of linked databases that help them interpret and act upon a patient’s genomic information,” further shows faith in this unification of art and science in medicine.

The Human Genome Project may have spearheaded technological advances in the genomics and bioinformatics fields, but many challenges remain for TM to cross over into clinical utility and become mainstream. 

Beyond Genomics Knowledge

For one, Translational Medicine can no longer solely rely on the ever-present bounty of genomics knowledge (though it will keep us busy for quite some time). All biologists know that genetics doesn’t work in isolation. Yes, it helps to identify biomarkers and particular molecular signatures, but the integration of knowledge from different biological silos is the next big challenge – and opportunity. That challenge (and opportunity) is data.

The goal is to effectively mine data brought together from live experiments, external ‘open access’ sources, legacy and real-world data portals, clinical and preclinical data systems, and more.

PerkinElmer Informatics will examine the challenges associated with the data integration needs of translational researchers, to deliver on the promise of Translational Medicine.

Download this article, which features insights into how translational medicine is simultaneously reducing expenses and improving patient health.



PerkinElmer Signals™ for Screening: Uniting High-Content Screens & Target-Based Assays


One Comprehensive Platform for High-Throughput and Phenotypic Screening Data.

High Content Screening Data Analysis Challenges

For pharma drug discovery, combining HCS with HTS is fast becoming the norm. But with the tremendous volume of data these methodologies produce, combining the two into a single platform has faced significant challenges, ranging from technical resources (a single phenotypic screen, for example, can produce a few terabytes of high-resolution images) and siloed data (one of the greatest challenges facing the big data/data analytics industry today) to performance issues (pharma phenotypic screening campaigns require an image analysis time of 3+ months to process 500,000 compounds). Availability of IT resources is also a challenge for scientists, as on-site/custom data analytics solutions need support and maintenance. Smaller IT teams mean fewer resources to support the growing demands of scientists.

Researchers need tools that enable discovery of highly characterized, higher-potential drug candidates from phenotypic screening campaigns. They want to improve the quality of downstream decision making and empower collaboration and distributed research within and across organizations. Equally important, they want (and need) a unified software interface for all screening instrumentation, and a lower total spend.

Signals™ for Screening: Breaking Old Ground?

We recently released a video on Signals™ for Screening, PerkinElmer’s new single platform which unites high-content screens with target-based assays. 

From an industry leader in this space, Signals™ for Screening represents another step in PerkinElmer’s efforts to support pharma and biotech R&D with the latest tools and technologies.

The video demonstrates the platform’s ability to analyze the results of high content screens quickly and easily, allowing High Throughput Screening (HTS) & High-Content Screening (HCS) analysis groups and core labs to identify promising candidates faster.   

Signals™ for Screening may be a new product, but it has been built by scientists and domain experts with decades of experience in imaging, screening, informatics and cloud technologies. It is the only platform to unite HTS and phenotypic data, and enables researchers to integrate, search, retrieve, and manage data from anywhere, inside or outside the firewall. This fosters collaboration, increases efficiencies, and enables faster processing of ever-larger volumes of compounds. 

Watch the 2-minute PerkinElmer Signals™ for Screening video here.

 

Real World Evidence: Making RWE Real


Can real-world evidence and advanced analytics accelerate the evaluation of drug safety and efficacy?

When the 21st Century Cures Act became law in December 2016, it ushered in a new era for real-world evidence (RWE) to help break bottlenecks in drug development and product approvals. The door has been opened, as drugs and medical devices are developed, to make use of vast amounts and divergent sources of health-related data:

  • Claims data – medical and pharmaceutical
  • Clinical trials data
  • Clinical setting data – from electronic health records and lab results to genomic or pathology reports
  • Pharmacy data – point-of-sale and Rx-fill rates
  • Patient-powered data – self-reported outcomes, social media

The Cures Act excludes randomized clinical trials (RCTs) from its definition of RWE – defining it as “other than” RCT data with regard to evaluating “the usage, or the potential benefits or risks, of a drug.” The New England Journal of Medicine says RWE is “health care information from atypical sources,” which includes billing databases and product and disease registries.

The U.S. FDA needs to establish some guidelines for what it considers real-world data (RWD) and how it will allow RWE to be used. In July 2016 the agency introduced draft guidance relative to RWE and medical devices. The 21st Century Cures Act has given the FDA some timelines for drafting guidance relative to using RWE in these instances:

  1. to help support the approval of a new indication for a drug approved under section 505(c)
  2. to help support or satisfy post-approval study requirements

An openness to RWE indicates recognition of the usefulness of data generated from actual use in a clinical setting. The goal is to ensure that relevant RWE data is applied methodically to the evaluation of drugs for safety and efficacy. 

The Promise of RWE

RWE provides a more comprehensive view of a drug product’s real-life therapeutic and economic value to patients, payers, providers, and sponsors.  It adds real-life clinical practice and actual health outcomes information to our understanding of drug therapies, and is being eyed for its potential in expanded labeling and repurposing of existing drugs. It can help us study physician utilization patterns, the patient treatment journey, and drug comparative effectiveness.

AstraZeneca, for example, used RWE studies to supplement RCT data in a 2013 study of COPD treatment. Consider this data pool:

  • medical records of 21,361 patients over an 11-year period
  • linked national, mandatory Swedish healthcare registries – including hospital, drug, and cause-of-death data
  • total anonymized data representing 19,000 patient years

The benefit to AZ and COPD patients? “By combining such large quantities of data with appropriate statistical techniques, the study gives healthcare providers a fuller picture of how COPD care has evolved and the impact of different COPD management strategies on outcomes for patients in actual clinical practice,” wrote AZ’s Georgios Stratelis, MD, PhD, in a blog post. 

From aiding regulatory approval decisions for new products, generating ideas for next products, providing longitudinal assessments, to proving post-market value, RWE offers considerable promise for pharmaceuticals.  It is said to provide as much as $450 billion in top-down opportunity for U.S. healthcare alone. 

Overcoming RWE Challenges: Advanced Analytics

Until the FDA establishes its guidelines and the global industry settles upon agreed standards for RWE use, various participants and stakeholders are working to clear hurdles and eliminate obstacles. Among those is the data challenge itself – data quality.

The Network for Excellence in Health Innovation (NEHI) hosted a roundtable in December 2014, “Real World Evidence: Ready for Prime Time?” that cited data quality as the top barrier to the use of RWE. “Most sources of RWD [real-world data] are not collected for research purposes. Many researchers become ‘data janitors,’ forced to ‘clean’ gaps and inconsistencies in data through methods that may not yet have wide acceptance for statistical validity,” the report states.

In addition to data sources like insurance claims, electronic health and medical records (Health Economics and Outcomes Research), and pharmacy bills, RWE can also include:

  • radiographic images
  • biobank data
  • molecular genomic data
  • vital statistics
  • patient wearable-generated data

While exciting, this adds significant new volume (on top of already crushing data loads), data integration challenges, and a real need for long-running analytical and visualization capabilities. Unified access to all relevant data sources empowers scientists and others to make decisions based on the most comprehensive dataset, including RWE data. Technology platforms must be able to integrate and make sense of RWE data, in near-real time, for decision makers to make productive use of it. 

Accelerating time-to-insight is a top goal, and achievable with out-of-the-box RWE solutions offering pre-built analysis modules for:

  • cohort-building with propensity score matching (a minimal sketch follows this list)
  • comparative effectiveness
  • safety signal detection methods
  • machine learning
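
To make the first of these modules concrete, here is a minimal Python sketch of 1:1 propensity score matching on a toy dataset. It is illustrative only: the column names and synthetic data are assumptions, and this is not PerkinElmer’s implementation.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_cohorts(df, treatment_col, covariates):
    """Return a matched cohort: each treated subject paired with its nearest control."""
    # 1. Estimate propensity scores: P(treated | covariates).
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treatment_col] == 1]
    control = df[df[treatment_col] == 0]

    # 2. Greedy 1:1 nearest-neighbor matching on the propensity score (with replacement).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    return pd.concat([treated, control.iloc[idx.ravel()]])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({
        "age": rng.normal(60, 10, 500),        # synthetic covariate
        "comorbidities": rng.poisson(2, 500),  # synthetic covariate
        "treated": rng.integers(0, 2, 500),    # synthetic treatment flag
    })
    cohort = match_cohorts(demo, "treated", ["age", "comorbidities"])
    print(cohort.groupby("treated")[["age", "comorbidities"]].mean())

In a real RWE study the covariates would come from claims or EHR extracts, and the balance of the matched cohorts would be checked before any comparative effectiveness analysis.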

Learn more about PerkinElmer’s solutions for integrating RWE and leveraging all data to accelerate drug development and get therapies to market faster.

Can real-world evidence and advanced analytics accelerate the evaluation of drug safety and efficacy? We’re convinced it can. Download our white paper, Real-World Evidence Through Advanced Analytics, for a more complete analysis of the challenges – and opportunities – at hand.



A Nobel for Autophagy: High Content Screening is Essential to Understanding this Cell Recycling Mechanism


Although the concept of autophagy — the process for degrading and recycling cellular components — has been around since the 1960s, a deeper understanding of this cellular mechanism wasn’t realized until Yoshinori Ohsumi began experimenting with yeast to identify the genes involved in autophagy, beginning in the 1990s. His discoveries won him the 2016 Nobel Prize in Physiology or Medicine, announced this past October.

Ohsumi’s work “led to a new paradigm in our understanding of how the cell recycles its content,” the Nobel Assembly at Karolinska Institutet, which awards the prize, announced. “His discoveries open the path to understanding the fundamental importance of autophagy in many physiological processes.” 

Disrupted or mutated autophagy has been linked to Parkinson’s disease, Type 2 diabetes, cancer, and more. This has major implications for studying human health and developing new strategies for treating disease.

Breaking the Bottleneck: The Role of HCS in Drug Discovery Research 

Autophagy is best understood by applying high content screening (HCS) - also referred to as high content analysis (HCA) - analytical approaches. HCS/HCA are sophisticated image-analysis and computational tools that have become useful in breaking the industrial biomolecular screening bottleneck in drug discovery research.  The bottleneck stems from image processing, statistical analysis of multiparametric data, and phenotypic profiling – at both the individual cell and aggregated well level. 

HCS tools, described as “essentially high-speed automated microscopes with associated automated image analysis and storage capabilities”, are necessary — as high throughput screens — to identify cells, recognize features of interest, and tabulate those features. 

 It is no small task. To validate and automate a phenotypic HCS analysis requires:

  • data management
  • image processing
  • multivariate statistical analysis
  • machine learning (based on hit selection)
  • profiling at the individual and aggregated well-level
  • decision support for hit selection

HCS in Phenotypic Profiling of Autophagy

We validated a phenotypic image and data analysis workflow using an autophagy assay. Secondary analysis provided an automated workflow for extensive data visualization and cell classifications that furthered understanding of the multiparametric phenotypic screening data sets from three different cell lines.

 An end-to-end solution that includes reagents, instruments, image-analysis tools, and informatics facilitates screening breakthroughs and successful screening experiments.

The solution should support both small-scale experiments, with analysis of a small number of plates, and automated methods for larger data sets.

Inspection of the data can be done using an unsupervised machine learning technique called the Self-Organizing Map (SOM) algorithm – a type of artificial neural network that clusters similar profiled data points together. In our autophagy assay of the HeLa, HCT 116, and PANC-1 cells, this analysis showed that the phenotypic response to chloroquine is very different in the three cell lines – something almost indistinguishable by eye.
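
For readers who want to see the idea in code, here is a minimal Python sketch of clustering well-level phenotypic profiles with a Self-Organizing Map. It is illustrative only: it uses the open-source MiniSom package on random stand-in data, not High Content Profiler or the actual autophagy features.

import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(42)
# Stand-in for a plate of well-level profiles: 384 wells x 20 phenotypic features.
profiles = rng.normal(size=(384, 20))
# Standardize features so no single readout dominates the distance metric.
profiles = (profiles - profiles.mean(axis=0)) / profiles.std(axis=0)

# A 6x6 map: each node becomes a prototype phenotype; similar wells land on nearby nodes.
som = MiniSom(6, 6, input_len=profiles.shape[1], sigma=1.0, learning_rate=0.5, random_seed=42)
som.train_random(profiles, num_iteration=5000)

# Assign each well to its best-matching node; wells sharing a node share a phenotype cluster.
assignments = [som.winner(p) for p in profiles]
print("wells mapped to node (0, 0):", sum(1 for a in assignments if a == (0, 0)))

Run on real profiles, wells treated with the same compound at similar doses should land on the same or neighboring nodes, which is what makes between-cell-line differences visible.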

Semi-supervised machine learning methods were used to perform Feature Selection and Hit Classification. By reducing the number of parameters to just the most relevant ones, hit stratification and classification enabled identification of which wells are ‘autophagosome positive’ or ‘autophagosome negative.’ 

Both supervised and semi-supervised machine learning led to similar EC50 curves in the autophagy assay. The supervised linear classifier is helpful whenever the phenotypes are predicted by the user, while unsupervised classification might be better suited for applications with an unknown number of phenotypes. Read the full application note here.
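
The EC50 estimation itself is standard curve fitting once the per-well hit classification is in hand. Here is a minimal Python sketch using a four-parameter logistic (Hill) model; the synthetic dose-response values are assumptions, not the assay results described in the application note.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Synthetic dose-response: fraction of 'autophagosome positive' wells per concentration (uM).
conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100])
response = four_pl(conc, 0.05, 0.95, ec50=2.0, hill=1.2)
response += np.random.default_rng(1).normal(0, 0.02, conc.size)  # add assay noise

params, _ = curve_fit(four_pl, conc, response, p0=[0.0, 1.0, 1.0, 1.0],
                      bounds=([0.0, 0.0, 1e-6, 0.1], [1.0, 2.0, 1e3, 5.0]))
print(f"estimated EC50: {params[2]:.2f} uM")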

While the underlying pathways still need further research for complete understanding, we are able today to merge phenotype and genotype to help measure autophagy quantitatively in different cell lines, using end-to-end HCS solutions.

Serving industry with HCS solutions for more than a decade, PerkinElmer offers a number of integrated solutions to fully enable autophagy analyses at the speeds required for today’s drug discovery.

Interested in incorporating autophagy into your research, and leveraging the available HCS technologies that can analyze, classify, and interpret phenotypic screening data to deliver fast insight? 

Check out this webinar to learn about an autophagy assay across three cancer cell lines. This assay was done to validate and automate a phenotypic HCS analysis workflow using PerkinElmer’s Columbus and High Content Profiler, powered by TIBCO Spotfire. 



Technology Tackles Tough Biologics Challenges


If you watch TV and know anything about pharmaceutical drugs, you know biologics have arrived. Sufferers of rheumatoid arthritis (among other conditions) see advertisements for Humira, Remicade, and Enbrel – all top-selling biologics. Rituxan, Avastin, and Herceptin are best sellers in cancer treatment. Diabetes, Alzheimer’s, HIV/AIDS, and other major diseases have biologic treatments now. In fact, eight of the Top 10 best-selling drugs in 2015 were biologics.

Estimates for the global biologics market size vary, but it can be expected to grow nearly 11 percent annually, reaching $386.7 billion by 2019. By 2020, biologics could account for more than half of Top 100 pharmaceutical sales. 

Another indicator of the lasting power of biologics is investment in biosimilars – which is estimated to reach $35 billion by 2020, up from just $1.3 billion in 2013. Biosimilars are follow-on drugs to biologics whose patents are expiring. Unlike generics of chemical drugs, biosimilars are not exactly interchangeable but may be substituted if they have “no clinically meaningful differences in terms of safety and effectiveness from the reference product,” according to the FDA.

Biologics: Increasingly Popular, but Challenging

Pharmaceutical companies are embracing biologics, but they are costly and challenging to make. Biologics are made up of living matter (such as human cells, bacteria, and yeast) and can be composed of up to a million atoms, compared to chemical medicines that can have fewer than 100 atoms (http://phrma.org/what-are-biologic-medicines). Unlike chemical drugs - which offer one molecule per one target - biologics are often multi-targeted to treat a collection of conditions (for example, Humira treats moderate to severe RA, but also ulcerative colitis, Crohn’s disease, and plaque psoriasis).

For these reasons, biologics can take up to 18 months just to produce (not counting discovery and development). They have been compared to manufacturing an airplane of 6 million components, while traditional small-molecule drugs are more like assembling a bicycle of 150 pieces.  Another comparative data point: there may be 40 to 50 critical steps in the manufacturing process for a chemical drug, but for biologics that could run to 250 or more.

Biologic Drugs Deliver Results: Medical and Commercial

Given that biologics may offer advantages over small molecule drugs, pharmaceutical and biotech companies are looking to find ways to optimize the discovery, development, and manufacturing processes involved. Often this means multi-department and external collaboration and work of a distributed nature.

Science-based informatics solutions can have an impact here - bringing together all of the data to enable collaboration and improve efficiency, quality, yields, savings, and more.  Biologics workflows can be improved with standardized methods around collecting, analyzing, managing, aggregating, and displaying data. PerkinElmer has developed biologics capabilities for our E-Notebook and TIBCO® Spotfire visualization and analysis solution. These help researchers capture and organize biologics data that can easily be presented to downstream peers.

Meeting the Needs of Biologics: Data Producers and Data Consumers 

In developing biologics capabilities specific to E-Notebook for Merck, PerkinElmer helped scientists capture data with content so they can perform sequential and comparative analysis throughout the lifecycle of a product. It also lets lab managers and others assess productivity and cycle time, understand capacity, and perform full product lifecycle analysis.

While E-Notebook is primarily regarded as the domain of data producers, facilitating data entry, TIBCO Spotfire® is deployed for data consumers to help them more easily view and interpret biologics data and results in a visual platform. With the ability to query results and see visualizations directly from TIBCO Spotfire®, scientists, business leaders, and others throughout an organization can make effective, strategic decisions from E-Notebook data.

If you’re involved in the challenging but rewarding work of biologics, are you working with the best possible solutions to ease complex workflows? Check out information relating to biologics workflows: 

Case Study: Supporting Biologics R&D with PerkinElmer Informatics

Webinar: Enhanced Electronic Laboratory Notebook Integrated with TIBCO Spotfire® to Support Biologics R&D

Interactive Demo: Assessment of Bioprocessing Parameters

Themes from the 2016 Spotfire User Group Meetings


Data Visualization and Analytics Spanning the Life Sciences Continuum


“If you can establish a common visual language for data, you can radically upgrade the use of that data to drive decision-making and action.”

That quote from Tom Davenport captures the major themes espoused at the 2016 Data Analytics and R&D Informatics User Conferences, hosted this past fall at GSK, Gilead and Roche facilities. 

Customers from discovery research, clinical and even post-marketing and emerging fields shared examples of how visualization and analytics technologies eased the interpretation of results and improved decision-making across the full life sciences spectrum.

Representing a wide-ranging group of scientists and researchers, the presenters described similar challenges:

  • Complex and varied datasets and sources
  • Distributed teams
  • Dependence on IT
  • Lack of standardization
  • Urgent need to gain insights from all the data available to them

They reported that visualization and analytics solutions capable of assimilating data from multiple sources have empowered them to perform complex analyses and create easy-to-use visual dashboards to gain insights from data faster. 

A bioinformatics research team at Gilead Sciences found itself asking complex and variable questions of large and complex datasets – from whole-genome, whole-exome, and RNA sequencing to in-house and public databases. “Scientists need tools to parse the data and then transform it into something useful,” Jeremiah Degenhardt, a research scientist at Gilead, said in his presentation.

The research team discovered a combination of tools, featuring a rich visualization and analytics solution, best met its needs for initial and intermediate data processing, data visualization and data delivery.

Gilead also found that standardizing its approach to visualization with TIBCO® Spotfire in active clinical trial data review helped meet different functional needs, provide more comprehensive visualizations, and ease the ability to drill down from summaries to the detailed patient level. Replacing a homegrown visualization solution with TIBCO® Spotfire improved insights for a wide variety of potential users – from clinical researchers and clinical data managers to medical monitors, safety scientists, and statisticians.

For emerging fields such as biologics, visualization brings speed and resource savings to data analytics and workflows. At Bristol-Myers Squibb, more than 60 TIBCO® Spotfire projects have been implemented since 2015. In his presentation on leveraging TIBCO® Spotfire for biologics discovery, Dave Nakamura, business capacity manager for Biologics IT at BMS, shared use cases demonstrating the benefits of deploying visualization software.

In one instance, TIBCO® Spotfire helped reduce an effort from 2-3 FTEs working for approximately two weeks to 1 FTE spending an hour to produce similar results. In another, researchers spent only three hours, versus more than 43 hours, to develop a UI prototype for the bioinformatics team.

Applying visualization to outcomes data is helping achieve new insights from real-world evidence (RWE). The use of RWE is intensifying across the drug product lifecycle, with increasing reliance on more data points post-launch. This includes:

  • Post-marketing commitments
  • Utilization and prescribing patterns
  • Adherence
  • Long-term clinical outcomes
  • Comparative effectiveness
  • Differentiation in sub-populations
  • Usage differences
  • Effects of switching

In his presentation, Dr. Jamie Powers, director of RWE and Data Science at PerkinElmer, described opportunities to maximize investment in visualization for RWE, health economics and outcomes research (HEOR) and epidemiology. 

Across the broad life sciences spectrum, visualization and analytics tools like TIBCO® Spotfire are providing real value, more quickly, for researchers working in discovery, on clinical trials, or outcomes research. We are seeing that visual data exploration and analysis can be simplified, even for complex R&D questions. It helps to produce reliable scientific conclusions and provide a holistic view of research for clinical trials.

Check out the presentations from our 2016 Data Analytics and R&D Informatics User Conferences to see how you can leverage a TIBCO® Spotfire deployment - no matter the stage or phase of research. Better yet, be sure to attend a conference next year to network with - and learn directly from - other users, or share your use case and experiences.


Machine Learning


Are You Ready for Machine Learning?

Some users are so enthusiastic about Machine Learning (ML) that Gartner added it for the first time to its Hype Cycle for Emerging Technologies in 2016 based on its ability to revolutionize manufacturing and related industries. Forrester Research says enterprises are “enthralled” by the potential of ML “to uncover actionable new knowledge, predict customers’ wants and behaviors, and make smarter business decisions.” 

It’s not just for manufacturing or retail, however. Computers that can “learn” have applications in the life sciences as well. Predictive modeling can, for example, assist with diagnostics - whether from clinical or molecular data, or from image recognition.  A study at Stanford University showed that ML more accurately predicted outcomes for lung cancer patients than did pathologists.

What is Machine Learning? 

Gartner defines it as “a technical discipline that provides computers with the ability to learn from data (observations) without being explicitly programmed. It facilitates the extraction of knowledge from data. Machine learning excels in solving complex, data-rich business problems where traditional approaches, such as human judgment and software engineering, increasingly fail.”

According to Tom Mitchell, author of Machine Learning (which in 1997 was one of the first textbooks on the subject), ML can be defined as any computer program that improves its performance (P) at some task (T) through experience (E). 

It works, in part, by finding patterns in data. It has typically been the domain of proficient data scientists, software engineers, IT professionals, and others who understand deep learning, neural networks and similar specialized subjects. In short, ML has felt off-limits to many subject matter experts – think of those pathologists in the Stanford study – who could benefit from ML’s algorithms.

“The analytics revolution becomes real when the average person becomes more comfortable with advanced analytics,” says Dr. Jamie Powers, DrPH, Director of Real World Evidence and Data Science at PerkinElmer Informatics. “Not that it’s simple or easy, but the barrier to entry isn’t as high as one might think.”

And getting on board is important, says Dr. Powers, if organizations don’t want to be left behind.

“The insights that can be gained from statistical analysis and machine learning are far greater than what a pair of human eyes can possibly offer,” he says. “What machine learning can do is direct where the humans should focus.”

“Starting Small” with Machine Learning

The beauty of machine learning, compared to routine statistical prediction, for example, is in its ability to let users experiment. Even if “you’re not there yet” because you still struggle with data and basic analytics, you can start small and explore with ML.

For example, consider an analysis we ran using a publicly available dataset (used in previous data mining competitions) from a Wisconsin breast cancer study. It’s a small dataset from 699 patients, and we ran several algorithms to test diagnostic accuracy (benign or malignant) based on nine tumor characteristics. The goal was to find an ML algorithm with greater than 95% predictive accuracy.

What tool did we use? The TIBCO Enterprise Runtime for R (TERR) engine embedded within TIBCO Spotfire. Beyond its visualization and dashboarding capabilities, TIBCO Spotfire offers advanced analytics functionality for machine learning applications. TERR lets you work in RStudio to develop, test, break, and iterate with algorithms – but it makes R run 10- to 100-times faster than open source R.

We tested seven different algorithms. Our results showed six of the seven machine learning algorithms yielded 95+% accuracy. While a small sample set, it’s enough to demonstrate that sometimes a simple model with high predictive accuracy is good enough. 
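
For readers who prefer to prototype outside of TERR, here is a minimal Python/scikit-learn analogue of that exercise: comparing several classifiers on the UCI Wisconsin breast cancer dataset (699 records, nine tumor features) with k-fold cross-validation. The dataset URL and the particular algorithms chosen are illustrative assumptions, not the exact models used in the original analysis.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Legacy UCI repository location; assumed still reachable at the time of reading.
URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "breast-cancer-wisconsin/breast-cancer-wisconsin.data")
COLS = ["id", "clump_thickness", "cell_size", "cell_shape", "adhesion", "epithelial_size",
        "bare_nuclei", "bland_chromatin", "normal_nucleoli", "mitoses", "class"]

df = pd.read_csv(URL, names=COLS, na_values="?").dropna()
X = df.drop(columns=["id", "class"])
y = (df["class"] == 4).astype(int)  # 4 = malignant, 2 = benign

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbors": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "support vector machine": SVC(),
}

for name, model in models.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), model), X, y,
                             cv=10, scoring="accuracy")
    print(f"{name:>24s}: {scores.mean():.3f} mean accuracy")

Most of these off-the-shelf models clear the 95% bar on this dataset, which mirrors the point above: sometimes a simple model with high predictive accuracy is good enough.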

If your next steps include trying ML on your TIBCO Spotfire and TERR installation, here are three tips and one caveat to get you started:

  1. Follow a process. Machine learning is about following a process to determine which algorithm to use, saving you from the countless decisions that a more manual evaluation of the available data would require.
  2. K-fold cross-validation is a best practice for training algorithms. Maintain an 80/20 data split for final predictions, so you have a hold-out sample to test the final algorithm.
  3. Try ensembles – combinations of algorithms – to see if they deliver the accuracy and insights sought (see the sketch after this list).
  4. The caveat: Machine learning (or statistical prediction) is NOT a substitute for proper experimental design and/or sample selection.
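
Here is a minimal Python sketch that puts tips 1-3 together: hold out 20% of the data, evaluate a candidate model with k-fold cross-validation on the remaining 80%, then test a simple voting ensemble. It uses scikit-learn’s bundled breast cancer dataset purely for convenience (a related but different set from the 699-record UCI data above), and the algorithm choices are illustrative assumptions.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Tip 2: keep a 20% hold-out sample for a final, one-time check of the chosen model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Tip 3: an ensemble that combines three different algorithms by (soft) majority vote.
ensemble = VotingClassifier(estimators=[
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("rf", RandomForestClassifier(random_state=0)),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
], voting="soft")

# Tip 2 again: k-fold cross-validation on the training portion only.
cv_scores = cross_val_score(ensemble, X_train, y_train, cv=5, scoring="accuracy")
print(f"5-fold CV accuracy: {cv_scores.mean():.3f}")

# Final evaluation on the untouched hold-out set.
ensemble.fit(X_train, y_train)
print(f"hold-out accuracy:  {ensemble.score(X_test, y_test):.3f}")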

From science fiction to real science, Machine Learning is finding new applications and new devotees who want to use its power to solve problems and answer seemingly unanswerable questions. 

“If you have Spotfire and R, you can start with small projects,” Dr. Powers says. “The technology exists, the information is there. Machine learning is more accessible than it’s ever been. So, are you ready for machine learning? I would say anyone who’s willing to learn about it is ready.”

Watch Dr. Powers’ talk at Advanced Pharma Analytics 2016 to see how you can begin incorporating machine learning to benefit your organization.