Accelerating Scientific Innovation: A Q&A with Nic Encina, Vice President, PerkinElmer Innovation Lab


Overcoming the Innovator’s Dilemma

Best – if not first – explained in Clayton Christensen’s 1997 best-seller, “The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail,” the dilemma companies face is how to meet customers’ current needs while also preparing to meet future or unstated needs. Why do so many successful companies miss developing the disruptive technologies and innovations that could have catapulted their growth – misses that may instead trigger their demise?

The Innovation Lab

A popular means of addressing the dilemma is the creation of innovation labs. AT&T was one of the first, with Bell Labs – the inspiration for Jon Gertner’s book, “The Idea Factory: Bell Labs and the Great Age of American Innovation.” Lockheed Martin developed Skunk Works® back in 1943 and continues innovating from it today. Universities, governments, and companies from Lowe’s Home Improvement to organizations such as UNICEF have launched innovation labs that stimulate new thinking in order to solve new and emerging challenges.

PerkinElmer has one too. We recently caught up with Nicolas Encina, vice president of PerkinElmer, Inc.’s Innovation Lab in Kendall Square, Cambridge, Mass., to find out what’s behind this initiative – and why it should matter to you.

Q: What, exactly, is an innovation lab?

A: Typically, it’s an environment where a diversified team can focus on strategic – or even speculative – initiatives in a way other product development and management teams cannot. We can look three, five, or even ten years out to pursue and explore a variety of opportunities with a greater tolerance for risk and failure. Innovation labs typically need to be loosely tied to corporate goals, both physically and culturally, so that they can focus on things beyond quarterly or yearly objectives. At the end of the day, all R&D departments innovate in one way or another, but it really boils down to scope and horizons.

Q: What led PerkinElmer to create its Innovation Lab?

A: Our President and CEO, Robert Friel, wanted to address the innovator’s dilemma. It’s a concern that successful companies have – you have great products that your customers love and expect you to maintain, support, and, of course, continue updating, so most of your resources get channeled toward maintaining your existing competitive advantage. Our R&D departments are feverishly working on and producing those types of innovative solutions.

But identifying entirely new opportunities shouldn’t be an afterthought or left to chance, because unless you’re mentally and physically prepared to seize new opportunities, you’ll most likely miss them entirely. Many of the most innovative companies have some form of innovation lab. They recognize that you need to create startup environments to focus on forward-looking and disruptive innovations. Robert Friel encourages our organization to innovate and collaborate. We’ve recruited some of the finest minds from world-leading companies and universities to our Innovation Lab to tackle difficult problems across healthcare – our initial area of focus.

Q: How do innovation labs stay grounded in a company’s mission and goals, rather than innovating for innovation’s sake?

A: It’s interesting to balance the need to create value while you’re trying to give your team permission to follow their passions, risk failure, take chances, and experiment. Ultimately, I do believe that it’s about having the right people involved and creating an environment that encourages them to do what they do best. But in order to avoid a random walk that becomes too academic, we guide ourselves with an innovation strategy and an evolving portfolio of ideas that we’re actively working on. This helps direct our thinking.

Q: What is the direction?

A: We know that the future of science and exploration is based on data. PerkinElmer provides instruments that generate data, but our customers also value the information and insights that the instruments produce. They also see the importance of how these insights align contextually with the broader challenge of innovating to improve human and environmental health. That gives us at the Innovation Lab our purpose, with a common thread of exploring opportunities from big data, the cloud, and the Internet of Things.

Every year, with all the advances in technology, our customers are generating more data, richer data – gigabytes, terabytes, petabytes of data. And the challenge for our customers has become, ‘What do I do with all of this?’ It’s become too much to manage. 

This is not a hypothesis - customers are struggling with data. We know that for a fact because they tell us. So we need to think about what those solutions, those next steps are, to help our customers become more efficient and effective. It’s a natural outcome once you start thinking of your customers more as collaborators.

Q: Describe how the Innovation Lab determines next steps and solutions for customers.

A: We test theories and vet and validate our ideas with actual customers when appropriate. Once we identify the general direction we want to go in, we break things down further into Minimum Viable Products (MVPs), each defining a developmental stage that can prove or disprove a hypothesis. Then we gather information and make hundreds of micro-pivots as we refine the theory further. It’s very much an evolutionary process that favors smaller leaps over larger bounds.

Our ultimate goal is to create something that we can quickly put in front of early adopters so that we can accelerate the learning process by observing usage in a semi-controlled environment. We have co-development relationships with a few customers who will use and test what we’re building, understanding that the product is early and subject to change. With two or three early adopters for beta testing, we get enough inbound data to determine when we’re on to something. Then we can expand. Once we conclude that we have a repeatable experience that delivers true value then we can move to a production stage.

Q: Have you had any successes?

A: In October 2015 we introduced PerkinElmer Signals™ for Translational, a cloud-based data management, aggregation, and analysis platform we developed in the Innovation Lab in collaboration with top pharmaceutical companies and with PerkinElmer’s in-house expertise in translational medicine. It’s out in the market and resonating with customers. We’re extending the PerkinElmer Signals™ platform to other modalities, such as high content screening. It’s both the basis for and just the beginning of what we’re doing.

Q: How do innovation labs deal with failure?

A: We recognize that our CEO has entrusted us to do things that are exceptional, new, and interesting, and part of achieving that is taking chances. But you mitigate failure by approaching new challenges systematically. Ultimately, failure isn’t failure if you learn something valuable from it – even if all you learn is that your previous attempt wasn’t the right one. Sometimes really interesting things come from unexpected places. Failure can open up new channels and new ways of thinking that lead to unanticipated discoveries.

You also have to hire the right people who can learn from failure and be fearless without being reckless. 

Q: What kinds of people are attracted to the work in innovation labs like yours? 

A: For starters, we’re all engineers in one form or another. But we look for people who are experimental, aren’t intimidated by new and complex sciences, and have the right skills to implement their vision to solve the most pressing scientific problems of our time.

We’re a combination of experimenters and true professional software engineers who come from production environments on low-tolerance, mission-critical projects in big data and the cloud. Our team members need to have autonomy; they have to be willing to teach and learn from each other and have a drive for the mission. At PerkinElmer, we definitely need to be curious and comfortable tackling the wide variety of sciences, industries and technologies we’re involved in. But it’s great because we work on projects in rapid prototyping and iterative ways that expose us to lots of people at the height of their fields. 

The pace, change, and commitment to creating technical solutions for the next generation of health challenges keep things interesting. Compared to many other innovation labs, we know that when we discover something new it could have significant impact and ultimately improve or even help save lives. There are few things more motivating.

Q: Any final thoughts on how the Innovation Lab benefits customers?

A: The world is changing so rapidly. Our customers need to know we’re looking ahead and not just around the next corner, to deliver solutions for their most pressing challenges. Having an Innovation Lab, being committed to innovating on an entirely new level, lets our customers leverage the best of new ideas, like the cloud and Internet of Things, so they can make breakthroughs and have success in their endeavors.

Innovation Labs, like those of PerkinElmer or other major brands, serve as a lightning rod for discovery and innovation. The future of science and exploration is based on data, and PerkinElmer is increasingly helping customers aggregate, manage, and analyze the wide range of data generated from its instruments. The Innovation Lab is proving itself to be an indispensable tool for exploring opportunities with big data, the cloud, and the Internet of Things.

To learn more about PerkinElmer’s Innovation Lab, contact Nic and his team at: InnovationLab@PERKINELMER.COM

Shining Light on Dark Data


For the vast majority of organizations, gaining visibility into their data is akin to shining a spotlight into a forest. You see the trunks and branches of trees and perhaps some groundcover, but have no idea what else is out there - or even how big the forest is.

That’s because - according to IT market research and advisory firm Gartner Inc. - as much as 90 percent of big data assets will be inaccessible to their organizations by next year. And perhaps as much as 80-90 percent of an organization’s data is unstructured.

Forrester, in a custom study for Attivio, found that 75 percent of unstructured data goes “unused,” and a full 59 percent of structured data goes unused as well.

What’s Causing This?

Current business intelligence (BI) and analytics delve into known and mostly structured data. But not all relevant data lives in structured databases. There are also difficulties with the quality, condition, and context of data. The Forrester survey found that 64 percent of data management effort and time is spent finding and profiling data sources, leaving precious little time for actually analyzing the data and drawing the insights and conclusions that drive intelligent business decisions.

Much of the time sink is because data is not always well organized or intuitively named. The people who want to use the data often understand the domain, while the people who own the data usually understand databases. Those two groups speak different languages, making it difficult to collaborate on answering either group’s questions. Complicating things further, relevant data is often spread across many disparate sources.

Consider: Let’s say a customer knows which databases contain relevant information, but because the data sits in different databases, it is hard to combine. Conversely, IT knows there is value in combining them, but has not had the resources to build a warehouse uniting the databases. The challenge is to overcome these silos within real resource constraints.

Unexplored Unstructured Data

Meanwhile, vast amounts of semi-structured and unstructured data remain largely unexplored. There are several reasons for this. First, the proverbial “forest of data” is simply too dark and confusing. Who wants to go in there? For those who do venture in, the data is most often captured in its original form, without any upfront structure assigned. Most organizations have not invested in the upfront overhead of structuring more free-form content. Those who did often find the structure has become obsolete, and maintaining it during daily use is irregular, as the selected structure can prove cumbersome or irrelevant. Users often regret not putting effort into structuring content earlier, but by the time they come to that conclusion, they feel it’s too late to do so.

Unstructured data offers value. Leading organizations find benefits from mining their unstructured content for new insights, unlocking the value that has, until now, been inaccessible. Some examples:

• Unraveling safety signals from preclinical safety documents

• Understanding the competitive landscape across multiple public sources

• Uncovering hidden, associative relationships between pathways, genes, proteins, and drugs to explore drug repurposing opportunities

Get a 360 Degree View

Unstructured and semi-structured content is rich with patterns, trends, and data points that – when identified and unified – can yield significant, perhaps business-changing, insights. Without them, many decisions are made based on incomplete data.

Structured, semi-structured, and unstructured data, from both internal and external/public databases & sources, are needed for a complete, 360-degree view to bring daylight to the full forest of available data.

The Good News for Business Analysts & Life Science Researchers

Efforts have been underway to unburden IT from responding to requests for reports and data sets for others to analyze. Instead, to speed data discovery, business analysts are using self-service tools that let them more quickly respond to their own lines of inquiry. The shift from IT-led reporting to business-led self-service has reached a tipping point, according to Gartner.

Data democratization has been a good thing. More people can get their hands on data, faster. Now they need to get their hands on more data – tapping into the full complement of their data to access the right information, with the right context for their inquiries.

Can we unlock the value of data trapped in unstructured sources?

Can we easily provision data to analytical tools without needing to know the underlying data structure?

Can we unify data from disparate sources?

Yes, yes and yes.

PerkinElmer Signals™ Perspectives: Universal Adaptor for TIBCO Spotfire

Just this month, PerkinElmer announced a new partnership with Attivio. PerkinElmer Signals Perspectives, powered by Attivio, will bring this data discovery and content analytics platform to scientists and life science organizations worldwide. Now researchers can profile, identify, and unify semi-structured and unstructured data from disparate sources, leading to faster insight. The Universal Adaptor for data source discovery leverages Attivio’s syntactical and contextual understanding to uncover relationships between data structures, helping analysts and scientists find the data relevant to them in minutes rather than weeks or months. It automatically searches the data landscape to deliver data in a format similar to an e-commerce shopping cart and – like retail websites – can “recommend” the most relevant data given the context of the search. It correlates all structured and unstructured data, and enriches the data with scientifically relevant dictionaries and ontologies. This unifies all data sources for easy analysis in PerkinElmer’s TIBCO® Spotfire visualization and analytics platform.

PerkinElmer Signals™ Perspectives: Content Analytics

For the next level of data analytics, content analytics makes sense of the rich materials and insights locked away in emails, publications, SharePoint sites, scientific posters, patents and research papers – all forms of unstructured or text-based content.

Imagine what discoveries await when the depths of this data are probed. Attivio asserts that “the benefits are game-changing” when companies look beyond structured data for their business intelligence initiatives.

It’s time to bring full daylight to the vast forest of unstructured and semi-structured data. With PerkinElmer Signals™ Perspectives, powered by Attivio, we’ll help you clearly see all of your data.

Using Data Analytics to Meet Agribusiness Challenges


Agribusiness faces some formidable food security challenges:

•   A global population of 9.7 billion by 2050

•   Alternative energy demands fueling competition with food

•   Climate change

The United Nations Food and Agriculture Organization (FAO) projected that food production would need to increase by 70 percent to feed 9 billion people in 2050. McKinsey this year said “feeding the global population has reemerged as a critical issue,” citing increases in caloric demand and crop demand, resource constraints such as water and arable land, and the competition for corn and sugar between food and energy production. The management consulting firm states that meeting global demand “will require disruption of the current trend.”

Is Data a Solution?

Part of virtually every solution for meeting world food needs is to leverage Big Data. When it comes to agronomics, better data leads to better policies. It enables world food organizations to more easily measure the effect of those policies and “to hold governments accountable for the pledges they make,” according to the FAO’s 2015 Statistical Pocketbook, World Food and Agriculture. 

Agribusiness firms and farmers in developed nations are increasing their use of technology and data to better understand crop yields, soil conditions, weather patterns, pest infestations, and more. The goal is to spread the technology and best practices to developing countries. Monsanto – traditionally a chemicals, seeds, and genetic traits company – has announced it is transitioning to a greater emphasis on data science and services. Chief Technology Officer Robert T. Fraley told Reuters the company would offer services and solutions that use data to help farmers boost crop yields.

The Right Data Tools

A major, multinational agricultural company has been applying data solutions internally as well to improve its productivity, innovation, collaboration, transparency, and intellectual property protection. For more than a decade, the company has been expanding its use of TIBCO Spotfire® analysis and visualization software across R&D, QA/QC, and commercial operations. 

As an enterprise solution, TIBCO Spotfire® helps the company improve farming practices, protect crops from pests and diseases, and make more efficient use of farming resources, including water. The TIBCO Spotfire solution was selected for four primary reasons:

1. It enabled company leaders to build and visualize analytical pipelines.

2. It easily handled the large volumes and variety of data.

3. It integrated with the enterprise’s data ecosystem.

4. It demonstrated a commitment to the R&D market.

Dow AgroSciences, an agribusiness company, uses a different PerkinElmer solution, the E-Notebook electronic laboratory notebook, which provides templates for the physical sciences sector. For more than eight years, R&D scientists working on insecticides, herbicides, fungicides, and more have tracked all individual and group experiments using E-Notebook. The data is date- and time-stamped, easily searchable, and secure. E-Notebook serves as a long-term, central data archive where R&D scientists can search for and retrieve the data for analyses that drive innovation.

Dow AgroSciences has found that data quality improved with E-Notebook, first by eliminating the risk of manual transcription errors, and also because users are far more likely to upload Excel spreadsheets and other supporting files directly to their experiments. This level of detail enriches the record compared to summary notes in a paper notebook. Scientist productivity has increased 10-15 percent as well – no small accomplishment when the challenge to improve the food supply is so great.

Learning from Others

From finance to pharmaceuticals, other industries leverage big data to drive innovations from deeper insights - and agribusiness is no different. PerkinElmer’s informatics solutions have been helping agribusiness clients for years to achieve better insights from data in order to do the important work of improving crop yields, reducing disease and infestation, and finding solutions for constrained resources. 

The planet is counting on farmers and agribusiness to feed the growing population. To meet this challenge, agribusiness must leverage data – and have the best tools available to collect, manage, store, retrieve, analyze, and visualize that data for maximum effect. 

What agribusiness data challenges are you facing? Do you have the right partner?

SMEs in Pharma and Biotech - Size Matters


As big pharma struggles to discover new drugs under the dual pressures of faster speed and lower cost, small and medium-sized entities (SMEs) are stepping up with entrepreneurial flexibility. A report last year from the European Medicines Agency (EMA) found that 27 percent of new drugs introduced to European markets from 2010 to 2012 originated with SMEs. That compares with the 49 percent attributable to big pharma’s internal efforts.

SMEs – classified in Europe as micro-entities of up to 10 employees, small companies of fewer than 50, midsized enterprises of up to 250, or organizations with annual turnover under EUR 50 million – are contributing to pharmaceutical innovation at a time when discovery is getting harder and harder to come by.

Drug Development Technology, in its article “Small players, big drugs,” credited SMEs with being more flexible, creative, lean, adroit, and efficient, as well as willing to take on risk and explore unproven opportunities, all while being less encumbered by bureaucracy.

They are the virtual David to big pharma’s Goliath. The virtual pharma and SME business model is lean and nimble, with the goal of achieving clinical proof of concept for drug candidates as efficiently, cost-effectively, and quickly as possible. According to Bernard Munos, founder of a pharmaceutical innovation consultancy, “Big pharma companies spend $4 billion to $10 billion in R&D for each drug they bring to market. Small pharma, on the other hand, does the same thing for a few hundred million to a couple of billion – that is 50% to 90% cheaper.”

But don’t count on David slaying Goliath. Despite support from EMA’s office for SMEs and the FDA’s Center for Drug Evaluation and Research’s Small Business and Industry Assistance program, virtual pharmaceutical companies and SMEs face their own challenges.

Science|Business reported on the difficulty of “small biotechs to raise the funding necessary to take drugs through the later stages of clinical development and onto the market,” as well as “how hard it is to grow a large, independent company.” In addition, other sources say SMEs can benefit from more guidance in running effective clinical trials. 

Therefore, it’s more likely virtual pharma and SMEs and big pharma will strike up partnerships, leveraging the advantages of each to increase drug discovery and development productivity and speed safe and effective products to the global marketplace.

Munos described “networked innovation” and “precompetitive collaboration” in his Forbes Q&A, citing JLABS - part of Johnson & Johnson’s external R&D engine - which provides a “capital efficient, resource-rich environment where emerging companies can transform the scientific discoveries of today into the breakthrough healthcare products of tomorrow.” And Deloitte is advising corporate life sciences R&D IT executives to build collaborative associations with smaller startups.

Data: A Focus of Collaboration

An obvious area on which to focus collaboration is data – how to collect, store, manage, and share data to draw faster insights and gain knowledge. McKinsey, in the article “How big data can revolutionize pharmaceutical R&D,” suggests expanding the “circle of trust” to select partners in order to break silos and extend knowledge and data networks.

IT-enabled portfolio management allows for quick data-driven decision making. Solutions for clinical development enable SMEs to perform the analyses they need to get better insights faster, as well as collaborate with external partners. McKinsey finds that smart visual dashboards can help with analysis of current projects, business development opportunities, forecasting, and competitive information.

Are CROs the Right SME for the Job?

Contract Research Organizations (CROs) know the power of using the right tools for data analysis. In fact, an important driver for increased outsourcing is the pharmaceutical sponsor’s preference for service providers who offer advanced informatics technologies that can reduce development time and costs. These systems must provide real-time data access and predictive analytics, as well as gather and visualize data more quickly.

Quintiles – the world’s largest provider of biopharmaceutical development and commercial outsourcing services – created an integrated suite of software modules named Infosario, which handles all phases of drug research, development, and trials. Its visualization component is TIBCO Spotfire.

“The Quintiles Infosario system allows researchers to revisit decisions made earlier within the same drug trial. That’s a big benefit,” Quintiles CIO Richard Thomas told Computerworld, because “clinical trials take six, seven years, so feedback loops collect a lot of data along the way.”

Could this mean that progressive CROs might become the next clinical informatics software vendors? They do have unique capabilities for designing and building such systems. They possess a deep understanding of the software needs of clinical trials, and can leverage their experience testing technologies to better design and customize clinical informatics systems. They also have domain expertise that generalist software companies without scientific backgrounds lack. These CROs even have databases built from years of experience to help determine who can run the most effective trials.

For our answer to that question – whether CROs could become the new clinical informatics software vendors – read our new white paper by our market strategist, Ben McGraw.

Betting on the Cloud: Life Science Data Management


Whether cloud computing emerged from J.C.R. Licklider’s vision of an “intergalactic computer network” or computer scientist John McCarthy’s notion of computation delivered as a public service, Computer Weekly’s “History of Cloud Computing” puts its genesis in the 1960s.

Although it didn’t gain traction until the 1990s, cloud computing has now passed the hype phase and is fully established, but in some industries more than others. IHS Technology projected that enterprise spending on the cloud will reach $235.1 billion by 2017 – triple the 2011 investment. 

Forbes reported that cloud computing is the strongest technology investment sector for the third year in a row, according to Deloitte's 2015 Global Venture Capital Confidence Survey. 

Life Science and the Cloud

Cloud computing is generally credited with three main benefits when it comes to business agility: increased collaboration, cost savings, and interoperability/connected operations. Given these advantages, what opportunities does the cloud present for life science? 

Nature reports that “biologists are joining the big-data club” and looking to the cloud to solve data access and volume challenges. “Much of the construction in big-data biology is virtual,” writes Vivian Marx.

Accenture sees opportunities for life science companies to “accelerate their adoption of the cloud across broader parts of their organization and value chain.” In its Outlook report, “Three Ways the Cloud is Revolutionizing Life Sciences,” Accenture projects:

1. Personalized medicine will benefit from cloud-based social ecosystems; 

2. New-product development improvements will draw from more external innovation and multi-sourced data; 

3. A more global operating model will drive new partnerships and expanded markets.

Cloud Security Concerns

Whenever cloud computing for data management comes up, so too does the topic of security. Companies question whether the cloud is safe for intellectual property and regulatory compliance. A 2012 survey by IDG Enterprise found that 70 percent of respondents - “a significant margin” - saw security as the primary barrier to cloud adoption.

By 2014, survey responses evolved to calling for public and private cloud service providers to “create and communicate security policies to be considered a valued partner.” 

Interestingly, the 2014 survey (the most recent by IDG) also found that the cloud increases IT flexibility, innovation, and responsiveness, and that cloud investments continue to rise, with enterprise organizations investing in cloud solutions significantly more than SMBs.

CIO magazine responded to the cloud security issue with an article uncovering the “20 greatest myths.” Suffice it to say, “server huggers” don’t get much sympathy.

Jens Hoefkens, director of strategic marketing and research at PerkinElmer, adds that there are solid technology solutions for security concerns – from encryption of data both in flight and at rest to the use of local servers.

Given that companies are dealing with sensitive research data (and often patient data), PerkinElmer designed its cloud-based system with a strong emphasis on maintaining data safety and privacy. All data – regardless of whether it is at rest or in transit – is subject to strong encryption, and the system uses a Virtual Private Cloud (VPC) architecture for secure access. The Signals system deploys in multiple geographic regions to help customers meet various regulatory and legal requirements regarding the physical location of the data.

Life Science & Pharma Trending Toward the Cloud

For life sciences customers, and the pharmaceutical industry in particular, cloud-enabled solutions meet the needs of two big trends:

1. Outsourcing: in which pharmaceutical organizations are doing more work outside their firewalls

2. High content data: a growing list of technologies that require extremely large and costly computing and storage capabilities

While many life science organizations have moved certain parts of their operations to the cloud – Human Resource activities, e-mail, and even clinical trial data – for some reason R&D is the last holdout.

“Ninety percent of clinical trials are running on electronic data capture systems – all cloud-based,” says Daniel Weaver, senior product manager for translational medicine and clinical informatics at PerkinElmer. “So the most critical, valuable, and expensive data they generate is already on the cloud. The concept that research data can’t be on the cloud doesn’t hold up to scrutiny.”

In fact, researchers in need of the data – and of collaborating around it – seem to agree. A collective of genomics researchers recently called on funding agencies to help establish a “global genomic data commons in the cloud,” to be accessed by authorized researchers worldwide.

Cloud: Control and Flexibility

PerkinElmer’s leadership embraces a cloud-computing future, recognizing that data analytics is a business driver and that the cloud is necessary to best meet the needs for data management, integration, and analysis. As a result, the company is creating informatics solutions – PerkinElmer Signals and Elements – that leverage the cloud’s benefits of lower costs, zero installation, Internet of Things connectivity, and collaboration.

Nicolas Encina, vice president of the Innovation Lab at PerkinElmer, says the company is making a purposeful shift to “connect our instruments and make them smarter by virtue of linking them up to a cloud infrastructure.” Structuring, storing, and mining data in the cloud, and linking to a visualization platform enables informed decision-making, from managing instruments to downstream data processing. 

Perhaps we have not achieved an intergalactic computer network or computing as a public service. But the cloud does provide life science and other organizations with a secure, seamless, flexible, and low-cost means of effective data management.

What benefits are you deriving from the cloud? Are you leveraging industry best practices in cloud computing?

New Tools for the Translational Researcher


Developing treatments that take individual variability into account (“personalized medicine”) has given rise to a new discipline in science: translational research, or translational medicine. Scientists in this field work to translate biological phenomena into targeted, evidence-based medicines that improve health and treat disease by better matching drugs to individuals.

The field of translational medicine research is accelerating, with 90 percent of pharmaceutical companies reported to be engaged in translational projects as a means of reducing cost while improving outcomes.

This translational revolution affects academic research as well. For instance, the National Institutes of Health created the National Center for Advancing Translational Science (NCATS) in 2012 to speed the translation of basic research into new treatments and cures for patients – moving more quickly from “bench to bedside.” The goal is to merge basic, preclinical, and clinical research with clinical implementation and public health data to develop new approaches and demonstrate usefulness. 

Moreover, translational medicine is the focus of many governmental initiatives around the world including:

• Genomics England, a company wholly owned and funded by the UK Department of Health, set up to deliver a flagship project to sequence 100,000 whole genomes from NHS patients by 2017.

• The Obama Precision Medicine Initiative, announced at the beginning of the year. 

• A collaboration between public and private institutions, like the human genetics initiative Regeneron launched last year with Geisinger Health System of Pennsylvania and the National Human Genome Research Institute.

Top Challenges for Translational Research

But research cost and complexity are among the top challenges for clinical research and translational projects, according to NCATS. Contributing to cost and complexity are the growing sources, types, and volumes of data stemming from newer high-content techniques in translational research, including: 

• Digital pathology

• Multiplexed flow cytometry

• Next-generation sequencing

• Proteomics

• Metabolomics

• Genomics

• Cellular assays

Translating Data Management & Analytics into Knowledge

Moore’s Law has propelled technical innovation, with faster and more precise systems generating vast amounts of data, and the challenge has become effective data management and data analytics. How do we make sense of the data and convert it into knowledge?

Current software solutions are ill-equipped to help translational researchers search, access, integrate, and analyze all the data that could help them make that next breakthrough. Therefore, as the field of translational medicine continues to grow, researchers need best-in-class solutions that lend speed and ease to their work. Self-serve access to a wide variety of data, using an informatics solution designed specifically for translational medicine workflows, will enable these researchers to more quickly and easily identify and manage the biomarkers that are essential to realizing the promise of personalized medicine.

“Unless you can start harnessing data and making sense of it, in an automated way, with systems that are engineered to solve big data problems, you’ll be overwhelmed by the data very quickly,” says Nicolas Encina, vice president of the Innovation Lab at PerkinElmer. “You can no longer effectively manage this data manually, and you certainly can’t analyze it or process it manually either.”

Introducing Signals™ for Translational

As a company dedicated to providing products and services that help researchers answer questions that improve life, PerkinElmer has built, from the ground up, a cloud-based data management and aggregation platform designed specifically to address the translational workflow.

PerkinElmer Signals™ for Translational offers out-of-the-box support for the complete precision medicine workflow – from data acquisition to biomarker discovery to validation. The purpose-built, Software-as-a-Service (SaaS) platform easily integrates experimental and clinical data, enabling translational scientists to search for, retrieve, and analyze relevant aggregated data from across internal and external sources. PerkinElmer Signals™ is designed with flexible data models that provide the scalability, agility, and collaborative environment required to support modern life science research.

“Too often, people think about data oriented from the informaticist’s or technologist’s point of view,” says Daniel Weaver, senior product manager for translational medicine and clinical informatics at PerkinElmer. “Signals for Translational presents the data in a way a regular scientist will be able to understand. It’s organized around concepts a scientist gets, around the subjects of clinical trials, patient visits, samples collected, etc. We view it as the next generation of how users will interact with data – by connecting instruments to a global cloud environment and serving as a bridge from the laboratory to the Internet of Things.”

By connecting instruments and systems involved in translational research to the cloud, PerkinElmer offers researchers and project managers more insight into how the translational project is performing, when data is available, and what the data is telling them – in a sense, becoming a central nervous system for the connected research environment.

Get to Know PerkinElmer Signals™ for Translational

If you are interested in learning more about PerkinElmer Signals™ for Translational, we’re offering a webinar looking at examples of use in areas as varied as Translational Medicine and High Content Screening, led by Jens Hoefkens, director of strategic marketing and research at PerkinElmer.

You will learn how PerkinElmer Signals™ will enable you to: 

• Access all your research and clinical data the way you want to, when you need to

• Develop and validate your hypothesis with integrated analysis solutions

• Enable effective collaboration within and across organizational boundaries

“Pharmaceutical companies are poised to generate very large volumes of complex datasets,” Hoefkens says. “The webinar will cover how Signals supports modern life science research with flexible and scalable data models.” 

How are cost and complexity affecting your translational research?

Big Data Analytics: Finding & Developing New Drugs Faster


Big Data’s reach is stretching farther and farther. Finance, marketing, healthcare, life sciences – virtually every industry is looking to gain a competitive advantage from using data analytics and business intelligence platforms to uncover more insights from data - faster. According to advisory firm EY, research by the Economist Intelligence Unit indicates that 77 percent of companies that maximize how they use data are also ahead in financial performance.

EY’s “Order from Chaos” report looks at Big Data from a good news/bad news perspective: powerful data and visual analytics and the maturing of open architecture, cloud computing, and predictive analytics are helping organizations get better with data, but many organizations aren’t moving fast enough to keep up. One reason: the complexity of data and its myriad sources. PerkinElmer’s partner, TIBCO, recently published a post on the promises and pitfalls of Big Data, noting that the sheer volume of data, the speed with which it is generated, and the ‘siloing’ of multiple data sources were overwhelming companies.

Data Analytics Driving Research & Development

According to McKinsey, the pharmaceutical industry is seeing a growth in data from multiple sources, including the R&D process, retailers, patients and caregivers. Breaking it out of data silos and putting it to immediate good use is the trick. “Effectively utilizing these data will help pharmaceutical companies better identify new potential drug candidates and develop them into effective, approved and reimbursed medicines more quickly,” state the McKinsey analysts.

When it comes to effective use of data, however, the benefits of a business intelligence platform are enhanced by domain expertise. Life science and pharmaceutical R&D data generated from multiple disparate sources requires effective integration as well as powerful analytical and visualization technologies to draw conclusions that give businesses leverage. Purpose-built data analytics solutions – built from the ground up by people who understand the workflows, the studies and experiments, and the data needs of researchers and scientists – provide a more robust, relevant experience than one-size-fits-all solutions.

The Future of Life Science Data Analytics

In its report on Big Data in pharmaceutical R&D, McKinsey invites us to “imagine a future” where these things are possible:

• Predictive modeling – leveraging available molecular and clinical data – helping to identify new potential candidate molecules with a high probability of success as drugs that safely and effectively act on biological targets

• Clinical trial patient profiles improving with significantly more information, such as genetic factors, to enable trials that are smaller, shorter, less expensive, and more powerful

• Real-time monitoring of clinical trials rapidly identifying safety or operational signals, leading to action that avoids delays or potentially costly issues

• Electronically captured data flowing easily between functions, instead of being trapped in silos – powering real-time and predictive analytics to generate business value

These challenges – and opportunities – can be addressed with advanced analytics and appropriate visualizations.

EY advises that in order to harness the power of big data and advanced analytics, companies should manage data and analytics projects as a portfolio of assets – similar to a financial investment portfolio – using an agile analytics approach to balance value across the portfolio. Here are some life sciences analytics from an “ideal portfolio”:

Functional Area: Research & Development

• Chemistry and Lead Discovery
• Genomics Data Analytics
• High Content Screening
• High Throughput Screening
• Quantitative Pathology
• Flow Cytometry
• Translational Research

Functional Area: Clinical Development

• Translational Medicine
• Project and Portfolio Management
• Clinical Trial Operations
• Clinical Trial Data Management
• Risk-Based Monitoring
• Clinical Trial Data Review
• Health Outcomes

Real-Time Data Analytics for Biopharmaceuticals

There is evidence that pharmaceutical companies are incorporating real-time analytic solutions to effectively analyze key data without the time lags associated with previous analysis methods.

• A top 10 global pharmaceutical company sought to reduce the time, cost, and risk of running its clinical trials while accelerating time-to-market. Deploying a data visualization and analysis platform contributed to a 20-40 percent productivity improvement in its clinical data review, saving three to four days per month in a 10-week study.

• Typical challenges in preclinical and clinical safety assessment are significantly minimized by interactive graphical data analysis. At Novartis, this approach has proved to be “efficient, powerful, and flexible” in improving both detection and systematic assessment of safety signals.

• As part of Roche’s “Fail Fast” strategy, the right analytics platform helps safety scientists and data scientists work collaboratively to solve queries from the safety science group. Data Provision Specialist Joel Allen says the ability to analyze and visualize data - to correctly answer queries in the most expeditious way - leads to better data-driven decisions. 

EY’s report predicted great benefit “if life sciences organizations are able to apply their acumen with big data and analytics to drive decisions and engage smart collaboration.” Yet Gartner cautioned that, by this year, some 85 percent of Fortune 500 organizations would fail to effectively exploit big data for competitive advantage.

At PerkinElmer, we’re applying our expertise in the life sciences, pharmaceutical R&D, and clinical development to provide informatics solutions and services that help you take advantage of everything Big Data has to offer. You can learn more about PerkinElmer's informatics solutions powered by TIBCO Spotfire®, the leading data analytics and visualization platform at the heart of the three customer success stories above.

How is your company using all of its data to drive its competitive advantage?

High Content Screening Improves Pipelines, Advances Drug Discovery


From Screening Drug Compounds to Better Understanding Disease, Image-based HCS Maximizes Value of Data

Extracting Maximum Value from Phenotypic HCS Data

It’s well known that if you’re going to fail in drug discovery, it’s best to do it early. No one wants to waste energy, effort, and – more importantly – time and money taking a compound to clinical trials, only to see it fail...late in the process and at great cost. For a more productive discovery pipeline, image-based high content screening (HCS) is one phenotypic drug discovery strategy that maximizes data value.

According to Nature, phenotypic screening is a strategy for identifying molecules with particular biological effects in cell-based assays or animal models. The technique gives researchers data on hundreds of parameters – over hundreds of wells and hundreds of plates. Unlike target-based approaches, phenotypic image-based screens without a known target offer a “boundless number of opportunities when it comes to what the numbers will tell you,” says Dr. Philip Gribbon of the Fraunhofer Institute.

The goal in phenotypic screening is the detailed multiparametric characterization of the cellular phenotype. It’s important to capture all the information the cell is providing, rather than single parameters. With comprehensive phenotypic information, the proper tools can help you derive actionable biological conclusions.

Using physiologically relevant model systems and leveraging rich information from image-based, high content screens – at the beginning of compound testing – helps discern those phenotypic changes that occur without undesirable effects on the system of interest.

We know that phenotypic screening strategies were more productive than target-based strategies at discovering first-in-class small-molecule drugs from 1999 to 2008. So while it’s not a new technology, phenotypic screening is enjoying renewed attention.

Reaching HCS’s Full Potential

In a 2014 Journal of Biomolecular Screening article (“Increasing the Content of High-Content Screening: An Overview”), authors Shantu Singh, Anne E. Carpenter, and Auguste Genovesio “assess whether HCS is as high content as it can be.” They found that the vast majority (60-80 percent) of HCS studies used univariate analysis tools and only one or two image-based features in the analysis. Better analytical approaches were still needed, they said, but they concluded: “As data analysis approaches for these more complex types of HCS experiments become well developed and incorporated into easy-to-use software, we anticipate more powerful applications of HCS to blossom, the value of a typical experiment to increase, and ultimately the technology to be more widely adopted.”

That was music to our ears, coming at a time when we introduced High Content Profiler™ specifically to address the need for a more robust, interactive, easy-to-use, fast, and powerful solution for analyzing HCS images. Singh et al. argued that “Advanced data analysis methods that enable full multiparametric data to be harvested for entire cell populations will enable HCS to finally reach its potential.”
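To make the univariate-versus-multiparametric distinction concrete, here is a minimal sketch of multiparametric hit scoring: well-level profiles are z-scored against control wells feature by feature, then collapsed into a single phenotypic distance. All well names, feature values, and the cutoff are hypothetical, and real HCS analysis uses far richer feature sets and statistics.

```python
import math
from statistics import mean, stdev

# Hypothetical well-level profiles over three image-derived features
# (nucleus area, cell roundness, actin intensity) – values are invented.
control_wells = [
    [100.0, 0.90, 50.0],
    [102.0, 0.88, 52.0],
    [98.0, 0.91, 49.0],
    [101.0, 0.89, 51.0],
]
test_wells = {
    "A01": [101.0, 0.90, 50.5],  # resembles the controls
    "B07": [130.0, 0.60, 80.0],  # shifts on several features at once
}

# Per-feature mean and standard deviation from the control wells.
mu = [mean(col) for col in zip(*control_wells)]
sd = [stdev(col) for col in zip(*control_wells)]

def phenotypic_distance(profile):
    """Multiparametric activity score: Euclidean norm of the z-scored profile."""
    z = [(x - m) / s for x, m, s in zip(profile, mu, sd)]
    return math.sqrt(sum(v * v for v in z))

scores = {well: phenotypic_distance(p) for well, p in test_wells.items()}
hits = [well for well, s in scores.items() if s > 5.0]  # illustrative cutoff
print(hits)
```

A well that shifts on several features at once can score as a hit even when no single feature would stand out in a univariate analysis – which is the case multiparametric methods are meant to catch.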

So How Can You Reach Your Full Potential in Phenotypic HCS?

If you’re looking to complement your target-based drug discovery with phenotypic screening and image analysis, make sure you’ve got the best solutions – namely, no-compromise speed and sensitivity instruments, and the ability to manage, analyze, and visualize the mass of data from such screening.

For immediate phenotypic validation of your hits, you want:

• higher-throughput, extremely sensitive imaging

• automatic image transfer and powerful image segmentation and analysis

• advanced statistics, machine learning, and data analysis methods

From image acquisition to hit selection, PerkinElmer’s uniquely comprehensive phenotypic screening solutions help you generate rich, multi-parametric data with integrated downstream analysis, leading to better hypothesis generation and validation. In this brief video, customers discuss some of the advantages they’ve found using our HCS systems and informatics platforms to accelerate phenotypic screening workflows and better understand disease.

What you may not be aware of is that a new version of High Content Profiler, still powered by TIBCO Spotfire®, addresses more than just HCS data; it covers a wide variety of analytical needs for high content analysis, including:

• Drug profiling and clustering
• Cytometry analysis
• High throughput screening
• Curve fitting
• High dimensional data analysis
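As one example of the kind of analysis listed above, curve fitting in screening often means estimating a potency value such as an IC50 from dose-response data. The sketch below uses simple log-linear interpolation between the two bracketing measurements rather than a full four-parameter logistic fit; the doses, responses, and function name are illustrative only.

```python
import math

# Hypothetical dose-response data: concentrations (µM) and normalized
# responses (1.0 = untreated control, 0.0 = complete inhibition).
doses = [0.01, 0.1, 1.0, 10.0, 100.0]
responses = [0.98, 0.95, 0.70, 0.20, 0.05]

def estimate_ic50(doses, responses, level=0.5):
    """Estimate the dose giving `level` response by interpolating on
    log10(dose) between the two measurements that bracket it."""
    pairs = list(zip(doses, responses))
    for (d0, r0), (d1, r1) in zip(pairs, pairs[1:]):
        if r0 >= level >= r1:
            frac = (r0 - level) / (r0 - r1)
            return 10 ** (math.log10(d0) + frac * (math.log10(d1) - math.log10(d0)))
    return None  # the curve never crosses the requested level

ic50 = estimate_ic50(doses, responses)
```

With these invented numbers the response crosses 50 percent between 1 and 10 µM, so the estimate lands between those doses; production tools fit a sigmoidal model to all points instead of interpolating.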

If your target-based approach isn’t delivering the results you need, you may want to consider phenotypic HCS. The technologies available today to capture stunning images and multiparametric data from each cell give you far more options to pursue – and as Dr. Gribbon says, boundless opportunities. Today, we are far better equipped to gain significant insights from all the data produced by HCS – maximizing the value of that data.

The right tools make all the difference. As leaders in the field of HCS, PerkinElmer provides the most comprehensive selection of integrated solutions for successful phenotypic-based drug discovery.

Are you using target-based or phenotypic imaging-based drug discovery strategies? If you do, what tools are you using to maximize the impact of your drug discovery data?

Clinical Trials Management: Spot Problems Earlier, Easier with Visual Analysis


The number of clinical trials has grown more than 35-fold since the National Institutes of Health first made its registry site public in 2000. And the data incorporated in the 195,770 trials registered so far in 2015 has expanded to include not only traditional clinical measures, but also translational and outcomes data.

With that much data, and the proliferation of technologies to support its creation, how can Clinical Project Managers (CPMs) and Clinical Research Associates (CRAs) improve visibility into their trials? How can they spot problems and address issues earlier in the clinical process? And lastly – how can they do these things without attending coding bootcamp or becoming a computer scientist?

Go for Visual Analytics

What’s needed for effective clinical trial management are unifying solutions that enable analytically driven decision-making. Faster, dynamic solutions enable the user to quickly analyze disparate data from multiple sources to create a complete picture of what’s occurring in clinical development – as it happens. 

It’s no longer a clinical best practice to wait six months for data to be locked, cleaned, and analyzed; traditional reporting methods are too slow and cumbersome. Trial data is needed from ‘First Patient In’ – not at discrete time points, but in an ongoing manner. Visual data analytics based on electronic data capture make this possible.

Incorporating visual analysis into clinical operations provides timely information and actionable insight to keep trials on track. Program, country, and study managers and monitors can make decisions based on live, interactive scorecards that track everything from planned vs. actual budgets to study milestones, including Institutional Review Board approvals and patient visits.

Choosing the right visual analytics platform can help companies achieve clinical operational excellence. But analytics platforms are only as effective as the underlying expertise available to platform users. At PerkinElmer, for example, our clinical analytics platform - powered by TIBCO Spotfire® - is backed by our years of experience:

     • Building advanced analytics solutions to cover drug development workflow needs

     • Breaking rigid data silos to power real-time and predictive analytics

     • Offering value-added analytics consulting services to adapt the solutions to specific client needs.

Empowering Risk-Based Monitoring

FDA, EMA, and PMDA all recommend risk-based monitoring (RBM) of clinical investigations to enhance patient safety, improve data quality, and drive efficiencies. Visual analytics provides valuable insights for RBM, accelerating data aggregation through continuous collection and automated consolidation.

To confidently identify and assess issues early enough to improve study safety and efficiency, technology platforms must enable continuous monitoring with near real-time intuitive visualizations, analytic dashboards, and applications.
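As a rough illustration of how such continuous monitoring can be automated, the sketch below flags sites whose operational metrics breach preset thresholds. The site names, metrics, thresholds, and weights are all hypothetical and are not drawn from any regulatory guidance or PerkinElmer product.

```python
# Hypothetical per-site monitoring metrics, refreshed continuously from
# electronic data capture feeds (all values invented for illustration).
sites = {
    "Site 101": {"query_rate": 0.02, "late_entry_days": 1.5, "ae_rate": 0.04},
    "Site 205": {"query_rate": 0.15, "late_entry_days": 9.0, "ae_rate": 0.01},
    "Site 309": {"query_rate": 0.03, "late_entry_days": 2.0, "ae_rate": 0.05},
}

# Illustrative thresholds and weights for each monitored metric.
thresholds = {"query_rate": 0.10, "late_entry_days": 5.0, "ae_rate": 0.10}
weights = {"query_rate": 2.0, "late_entry_days": 1.0, "ae_rate": 3.0}

def risk_score(metrics):
    """Sum the weights of all metrics that breach their threshold."""
    return sum(w for name, w in weights.items()
               if metrics[name] > thresholds[name])

# Sites with any breach surface on the monitoring dashboard for follow-up.
flagged = sorted(site for site, m in sites.items() if risk_score(m) > 0)
print(flagged)
```

In a real RBM platform these scores would feed near real-time dashboards, letting monitors prioritize site visits instead of auditing every site equally.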

Visual Leads to Virtual

Virtual biotechs – small companies with a few executives overseeing the outsourcing of biopharmaceutical R&D – have emerged in the last ten years in response to tightening capital markets. They rely on the leanest development teams and outsourcing to achieve clinical proof of concept for a drug candidate.

This new drug development model coincides with the emergence of faster, more flexible visual analytics and business intelligence tools that assist small, nimble companies in their drug development efforts. Virtual biotechs use flexible platforms with real-time access to aggregated data and programs for trial management, monitoring, data analyses, and business operations. The right platform can take a virtual firm and its partners from initial study startup – with applications for trial timelines and progression – through Phase III project management.

Spotting Outliers, Trends, and Problems

At its heart, effective clinical trial management means finding the things that are going wrong, or have the potential to go wrong, early. The right platforms for visual analytics – for both business and science intelligence – help the user more easily and rapidly spot the outliers, find the trends, and unearth the problems that are buried in the sea of clinical operations data. 

Making the change from traditional reporting and query tools to a visual analytics platform means less time preparing data, and far more time acting on the insights from it. 

Are you using the right visual analytics platform for clinical trial management and data analysis?

Don’t Buy That Paper Lab Notebook!

Why ELNs, scientific apps and visual analytics belong in college classrooms

Future scientists are heading back to the classroom now, but will their institutions fully prepare them for careers in science? Not if they’re relying on outdated paper lab notebooks and ad hoc software for reporting! Fortunately, with the start of this academic year, there is ample proof that educators are using technology to their students’ advantage.

Colleges and universities can cater to their tech-savvy student body with technology solutions that better prepare them for the lab environments where they’ll actually work as professional scientists, researchers, and principal investigators. Think electronic lab notebooks (ELNs) or business intelligence and visual analytics platforms.

Most undergraduate and graduate students arrive on campus armed with smartphones, tablets, and laptops – tools they have been using at least since middle school. They enroll in classes, check grades, and communicate with professors online. 

But in many science labs, students are still using general campus online portals like Moodle or Blackboard to submit lab results for review and grading.

Engage Students with Technology

Don’t relegate those mobile devices to mere note-taking or downloading the professor’s PowerPoint slides. Academia can actively incorporate technology and software into the science curriculum.

Consider what Dr. Layne Morsch, a professor in the Department of Chemistry at the University of Illinois at Springfield, is able to do. Professor Morsch has used ChemDraw® for iPad® and the Elements cloud collaboration platform with his chemistry students since 2013 and 2014, respectively.

“Nobody sleeps in my class,” says Professor Morsch. Technology has enabled him to “flip” his organic chemistry course – delivering lectures online while using class time for active engagement using Elements for collaboration and ChemDraw for iPad for chemical drawing and sharing. 

With ChemDraw for iPad, Professor Morsch can instantly send a chemical problem to students and have them send answers back, with a simple swipe of the screen. With Elements, his students can create and share experiments, fostering a greater sense of collaboration in learning as students gain access to all manner of data, from spectra and spreadsheets to structured drawings. 

Professor Morsch noted “an obvious increase in classroom participation and engagement with the material” as he leveraged the latest in technology.

A Worthwhile Investment in Scientific Software

According to University of Illinois chancellor Susan Koch, technology has financial benefits too: a college-level chemistry textbook can cost $400, whereas its electronic version costs $125. Software vendors often offer academic or student pricing, and PerkinElmer has been a long-time industry leader in offering site licenses for academia.

And while an electronic solution will obviously cost more than a $20 paper lab notebook, the benefits of technology far outweigh the investment.

Paper can be lost, stolen…or eaten by the dog. It sits isolated and can’t be broadly shared. And when students graduate, their knowledge and data go with them. 

Electronic documents, on the other hand, can be shared in real time with professors. Students using technology are more prepared for their class assignments, since materials and documentation are captured and archived electronically.

Unless they forget their tablet or laptop, students are ready to go. The results of their experiments are archived and available to influence future students. The student/professor collaboration and communication also speeds feedback and grading.

Career Ready Scientists

As industry moves away from paper reporting, students benefit from learning on solutions that will carry them into professional careers. Professor Morsch says most of his students are interested in industrial employment or health sciences careers – positions that use ELN, business intelligence, and visual analytics solutions.

“If we want to prepare our students for these jobs,” he says, “we need to help them become more used to archiving and communicating their work electronically.”

Learn more about PerkinElmer ELN, business intelligence and visual analytics platforms and solutions.