ChemDraw 17 - Chemistry at the Pace of Now

Over the last 30 years, science has evolved rapidly…and so has ChemDraw®. 

With ChemDraw® 17’s updated features and benefits, the latest version of the world’s top chemistry drawing program keeps you right where you need to be – at the cutting edge of science. ChemDraw® 17 features all of the functionality of ChemDraw® 16 plus a number of innovative features to further accelerate your research.

Trusted by more than one million users, ChemDraw® is the preferred platform of chemists and biochemists to draw, store, analyze and share chemical structures and reactions. 

Introducing Hotkeys – Complex Chemistry at a Keystroke

Among our new features is Hotkeys, placing the most complex chemical drawings just a few keystrokes away. Whether it’s a huge biomolecule or a complex reaction scheme, Hotkeys make it easier and faster than ever to draw complex chemical structures. 

If biochemistry is your focus, we’ve got great news for you in ChemDraw® 17! We’ve added HELM notations to keep pace with the progression of biomolecular science. HELM (the Hierarchical Editing Language for Macromolecules) is the Pistoia Alliance’s emerging global standard for representing and sharing large biomolecules.


Integrated HELM Notation & Metadata Tagging

HELM notation is now fully integrated, so you can easily and quickly share your research with the world.
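As an illustration of the notation (this example is ours, not taken from ChemDraw’s documentation), a short peptide in simple HELM looks like:

```
PEPTIDE1{A.G.C.F}$$$$
```

Here `PEPTIDE1` identifies the polymer, the braces list the amino acid monomers in sequence, and the trailing `$`-delimited sections (empty in this simple case) carry inter-polymer connections and annotations.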

With metadata tagging, it’s simple to add defined or ad hoc metadata to your document. ChemDraw® provides an easy, intuitive way to mark up your documents to enforce corporate standards or add criteria that enable search and retrieval.

ChemDraw® 17 also supports the latest in standards compliance. Data elements and structures are uniquely identified in accordance with the most current standards to keep pace with evolving regulatory and quality requirements.

Chemistry at the Pace of Now. 

These are just a few of the new additions to ChemDraw®’s suite of features that over a million users trust every day to quickly & accurately draw, store, analyze and share their chemistry – whether they are in the lab or working in the cloud. Learn how to enjoy chemistry at the cutting edge. 

Download a Free Trial and learn about the power of PerkinElmer’s ChemDraw® 17. 

Focusing on Innovation to Solve Customer Problems

Innovation is the lifeblood of any organization. As such, how organizations define and foster a culture of innovation is critically important to their success. True innovation means improving processes and products, and finding new ways to serve customers.   

From Doblin, Deloitte Consulting’s innovation practice, we know that there are 10 types of innovation, reaching far beyond obvious product development, which Doblin says delivers the lowest return on investment and least competitive advantage.

So, beyond product innovation, what can organizations do to be fully innovative? Doblin breaks its 10 innovation types into three broad categories: configuration, offering, and experience. From how businesses are organized, to how products and services are made available to customers, to how customers interact with brands, all are potential opportunities for innovation.

The Ultimate Innovation: A Problem Solved

While customers will always seek product improvements, what they are ultimately looking for are solutions to their problems. Solving our customers’ problems, whether in the lab or in how they engage with us, is the main driver of our innovation efforts. 

To help customers make informed decisions based on earlier and more accurate insights, PerkinElmer makes our numerous subject matter experts available. They possess the knowledge and expertise, not only in detection, imaging, informatics and services, but also in the scientific application of our solutions across a broad range of industries. Furthermore, our partnerships with customers strengthen our understanding of their needs and our ability to help them do their work better.

Innovation Tools 

Collaboration within PerkinElmer, with our customers, and with academic and industry leaders is essential to innovation. It’s one of the reasons we’ve deployed Spigit, the cloud-based ideation and innovation management platform. We’ve used it to drive internal innovation challenges and most recently, to include our one million ChemDraw users in the ChemDraw Innovation Challenge.

Also internally, we drive collaboration with Innovation Summits that bring together over 120 R&D scientists and product managers from around the company. These events feature outstanding speakers and a technology and innovation fair; in 2015, 25 teams of employees submitted projects for a chance to win funding to develop their ideas.

As part of the 2016 Innovation Summit, we recognized two outstanding researchers who have demonstrated a long history of invention, commercial success, and thought leadership at PerkinElmer: Andy Tipler, a leader and innovator in the area of gas chromatography for 30 years, and Dr. Veli-Matti Mukkala, for three decades of work and leadership in the area of lanthanide chelate chemistry that underlies many of our reagent technologies. 

Precompetitive Research Initiatives

Another method of driving innovation, and of lowering the cost of R&D, is participation in alliances and organizations that lead precompetitive research initiatives. PerkinElmer has joined a number of such organizations.

In doing this, we ensure that we’re part of the industrywide dialogues driving standards and best practices that will move science and research forward.  

Organizational Focus

In the fall of 2016, PerkinElmer named Karen Madden, formerly president of our Informatics business, to a new position as Vice President of Technology and Innovation for the entire organization. Madden leads a team that identifies new markets, technologies, and customer-driven product offerings. She is also focused on infusing the customer voice into our innovation efforts and challenging conventional thinking and approaches. 

In addition to Innovation Summits that recognize and motivate employees, Madden says meaningful culture changes can stimulate an innovative environment. She points to a story that NASA engineer Adam Steltzner told PerkinElmer employees about his work at the Jet Propulsion Lab to land the rover Curiosity on the surface of Mars.

“The culture of innovation arises from the brutal combat to which ideas are subjected,” Steltzner said. Madden agrees that when ideas go to battle with other ideas and suggestions, the idea that emerges is the strongest and best.

“Very importantly, the people with the ideas can walk away feeling good and not that they themselves have been through a battle,” she says. “This was key in Adam’s experience of finding the most innovative solutions and made the mission a success.”

What Does All of This Mean For Our Customers?

Our promise is more than better products; it’s better solutions to customer problems. Driving innovation throughout our organization, in all our customer interactions, is becoming more intentional. 

That kind of drive was behind our innovative PerkinElmer Signals™ for Translational informatics platform. Developed in response to the siloed data that was hampering discovery, this solution enables translational scientists to search and integrate clinical and research data and to seamlessly explore and make new biomarker discoveries using the analytics and visualization power of TIBCO Spotfire®.

It’s also why we partner with customers like Johnson & Johnson, where we provide onsite expertise at its JLABS locations. Our services – from validation to scientific and multivendor asset management – are helping to reduce lab complexities and increase efficiencies.

When we are focused on finding new and better ways to help customers overcome problems and achieve their goals, that’s being innovative. 


How Analytics Centers of Excellence Improve Service & Save Costs

Centers of Excellence: Centralizing Expertise
The “Center of Excellence” as a business model has an assortment of definitions and uses. In general, such “centers” are established to reduce time to value, often by spreading multidisciplinary knowledge, expertise, best business practices and solution delivery methods more broadly across organizations.

They have been identified as “an organizing mechanism to align People, Process, Technology, and Culture” or, for business intelligence applications, as “execution models to enable the corporate or strategic vision to create an enterprise that uses data and analytics for business value.” Still others define these centers as “a premier organization providing an exceptional product or service in an assigned sphere of expertise and within a specified field of technology, business or government…”

Using a CoE to Improve Business Intelligence
In approaching how the Center of Excellence (CoE) concept might improve business intelligence (BI), analytics, and the use of data in science-based organizations, PerkinElmer Informatics has developed an Analytics Center of Excellence to deliver service for our customers.

As a framework, the CoE offers ongoing service coverage by experts from a variety of domains, including IT & architecture, statistics and advanced analytics, data integration & ETL, visualization engineering and scientific workflows. In many cases an expert is located at your facility and then leverages a wider range of remote staff, to provide support, reduce costs, and eliminate red tape and paperwork.

There are four pillars to our Analytics CoE for your organization: 

Architecture Services
Mainly for IT, this covers architecture strategy, sizing and capacity planning, security and authentication, connectivity and integration planning, and library management

Governance Services
Centralizing planning, execution and monitoring of projects, Program Management approach to managing multiple work streams, Steering Committee participation, SOPs and best practices, and change management

Value Sustainment Services
Expertise for subject matter consulting, support, hypercare, roadmap and future planning, and analytics core competency

Training & Enablement Services
Training needs assessment, training plans, courseware development, training delivery and mentoring

Cost Savings with Standardized BI Solutions
PerkinElmer’s Analytics CoE leverages TIBCO Spotfire® to help our customers get the most out of this technology as quickly as possible, learning directly from the experts. Very often, especially at mid- to large-size enterprises, the question is asked: “Why aren’t we standardized on a single BI solution?”

It’s a good question.

Rather than investing time, effort, and money in evaluating, implementing, maintaining, and updating several BI solutions, not to mention training staff to use them, organizations can gain considerable cost savings by deploying a standard business intelligence solution across the enterprise. And the savings can be further compounded because the Analytics CoE covers both foreseen and unforeseen needs.

Under an Analytics CoE implementation, cost savings are derived from:

  • Economy of scale from a suite of informatics services
  • Reduced administration efforts for both customer and vendor
  • “Just-in-time” project delivery that engages the right resources at the right time

Reducing the Pharma Services Budget
After converting to the Analytics CoE model, a top 25 pharmaceutical company saved 50% on its TIBCO Spotfire®-related services budget. This was possible because the services were bid out once, not for every service engagement. Purchasing service engagements was significantly less fragmented, and the high costs of supporting multiple tools and platforms and responding to RFPs were greatly reduced.

Standardizing on an Ongoing Service Model
Centralizing around a formal service model focuses management of the vendor relationship on a single partner – who truly becomes a partner as they manage projects across multiple domains and departments. 

The Analytics CoE model, also called a competency center or capability center, oversees deployments, consolidation of services, dashboard setup and platform upgrades, all without the additional burden of new RFPs, vetting of new vendors, and establishing new relationships.

The benefits of standardizing on an ongoing service model, centered on a standard BI platform, include:

  • Holistic approach to deploying analytics solutions across the organization
  • Cost savings from reducing the number of tools used 
  • IT organization isn’t spread too thin as it no longer has to support multiple systems
  • Greater departmental sharing
  • Improvements beyond the distributed model

In addition, there are numerous reasons for analytical organizations to adopt an Analytics CoE:
  • Program Management that manages multiple project workstreams and chairs Steering Committee meetings to provide management insight into solution delivery.
  • High-quality subject matter experts (SMEs) available for your projects; SMEs are pulled in as needed and billed against the CoE.
  • Significant savings over typical daily rates – up to 50%.
  • Flexible engagement periods.
  • Hourly-rate fees that move from the FTE model to “pay for what you use,” further reducing costs.
  • Multiple projects billed against the Analytics CoE.

Are you ready for true service excellence in your data-driven organization? Find out if PerkinElmer’s Analytics Center of Excellence is a good fit.

Contact us at informatics.insights@PERKINELMER.COM

A Nobel for Autophagy: High Content Screening is Essential to Understanding this Cell Recycling Mechanism

Although the concept of autophagy — the process for degrading and recycling cellular components — has been around since the 1960s, a deeper understanding of this cellular mechanism wasn’t realized until Yoshinori Ohsumi began experimenting with yeast in the 1990s to identify the genes involved in autophagy. His discoveries won him the 2016 Nobel Prize in Physiology or Medicine, announced this past October.

Ohsumi’s work “led to a new paradigm in our understanding of how the cell recycles its content,” the Nobel Assembly at Karolinska Institutet, which awards the prize, announced. “His discoveries open the path to understanding the fundamental importance of autophagy in many physiological processes.” 

Disrupted or mutated autophagy has been linked to Parkinson’s disease, Type 2 diabetes, cancer, and more. This has major implications for studying human health and developing new strategies for treating disease.

Breaking the Bottleneck: The Role of HCS in Drug Discovery Research 

Autophagy is best understood by applying high content screening (HCS), also referred to as high content analysis (HCA). These sophisticated image-analysis and computational tools have become useful in breaking the industrial biomolecular screening bottleneck in drug discovery research. The bottleneck stems from image processing, statistical analysis of multiparametric data, and phenotypic profiling, at both the individual cell and aggregated well level.

HCS tools, described as “essentially high-speed automated microscopes with associated automated image analysis and storage capabilities”, are necessary — as high throughput screens — to identify cells, recognize features of interest, and tabulate those features. 

It is no small task. To validate and automate a phenotypic HCS analysis requires:

  • data management
  • image processing
  • multivariate statistical analysis
  • machine learning (based on hit selection)
  • profiling at the individual and aggregated well-level
  • decision support for hit selection

HCS in Phenotypic Profiling of Autophagy

We validated a phenotypic image and data analysis workflow using an autophagy assay. Secondary analysis provided an automated workflow for extensive data visualization and cell classifications that furthered understanding of the multiparametric phenotypic screening data sets from three different cell lines.

An end-to-end solution that includes reagents, instruments, image-analysis tools, and informatics facilitates screening breakthroughs and successful screening experiments.

The solution should serve both small-scale experiments, involving analysis of a small number of plates, and automated methods for larger data sets.

Inspection of the data can be done using an unsupervised machine learning technique called the Self-Organizing Map algorithm – a type of artificial neural network that clusters similar profiled data points together. In our autophagy assay of the HeLa, HCT 116, and PANC-1 cells, this analysis showed that the phenotypic response to chloroquine is very different in the three cell lines – a difference almost indistinguishable by eye.
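To make the Self-Organizing Map idea concrete, here is a minimal, illustrative stdlib-only Python sketch. It is not the High Content Profiler implementation; the grid size, learning-rate schedule, and toy four-feature “profiles” are all assumptions made for the example.

```python
import math
import random

def train_som(data, grid_w=3, grid_h=3, epochs=300, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny Self-Organizing Map: a grid of weight vectors that arranges
    itself so that similar input profiles land on nearby grid cells."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[i, j, [rng.random() for _ in range(dim)]]
             for i in range(grid_w) for j in range(grid_h)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)              # learning rate decays toward zero
        sigma = sigma0 * (1.0 - frac) + 0.3  # neighborhood radius shrinks
        x = rng.choice(data)
        # Best-matching unit: the node whose weights are closest to the sample.
        bmu = min(nodes, key=lambda n: sum((w - v) ** 2 for w, v in zip(n[2], x)))
        for n in nodes:
            d2 = (n[0] - bmu[0]) ** 2 + (n[1] - bmu[1]) ** 2
            h = math.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian neighborhood
            n[2] = [w + lr * h * (v - w) for w, v in zip(n[2], x)]
    return nodes

def map_cell(nodes, x):
    """Grid coordinates of the best-matching unit for a profile x."""
    bmu = min(nodes, key=lambda n: sum((w - v) ** 2 for w, v in zip(n[2], x)))
    return (bmu[0], bmu[1])

# Toy profiles standing in for per-well phenotypic vectors from two cell lines.
profiles = [[0.1, 0.1, 0.1, 0.1]] * 10 + [[0.9, 0.9, 0.9, 0.9]] * 10
som = train_som(profiles)
cell_a = map_cell(som, [0.1, 0.1, 0.1, 0.1])
cell_b = map_cell(som, [0.9, 0.9, 0.9, 0.9])
print(cell_a, cell_b)
```

Feeding each cell line’s per-well profiles through `map_cell` and coloring the grid by cell line is what reveals, at a glance, whether the lines occupy distinct regions of the map.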

Semi-supervised machine learning methods were used to perform Feature Selection and Hit Classification. By reducing the number of parameters to just the most relevant ones, hit stratification and classification enabled identification of which wells are ‘autophagosome positive’ or ‘autophagosome negative.’ 

Both supervised and semi-supervised machine learning led to similar EC50 curves in the autophagy assay. The supervised linear classifier is helpful whenever the phenotypes are predicted by the user, while unsupervised classification might be better suited for applications with an unknown number of phenotypes. Read the full application note here.
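An EC50 such as those compared above is typically obtained by fitting a sigmoidal dose-response (Hill) curve to the measured per-well responses. The following is a minimal sketch, not the actual fitting routine used in the application note; the two-parameter Hill model, grid-search ranges, and concentrations are illustrative assumptions.

```python
def hill(conc, ec50, slope):
    """Fractional response at a given concentration under the Hill model."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

def fit_ec50(concs, responses):
    """Brute-force grid search for the EC50 and Hill slope that minimize
    squared error. Responses are assumed normalized to the [0, 1] range."""
    best = None
    for i in range(-120, 41):        # log10(EC50) from -6.0 to 2.0, step 0.05
        for s in range(2, 17):       # Hill slope from 0.5 to 4.0, step 0.25
            ec50, slope = 10.0 ** (i / 20.0), s / 4.0
            err = sum((hill(c, ec50, slope) - r) ** 2
                      for c, r in zip(concs, responses))
            if best is None or err < best[0]:
                best = (err, ec50, slope)
    return best[1], best[2]

# Noise-free responses generated from a known curve should be recovered.
concs = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]
responses = [hill(c, 1e-2, 1.0) for c in concs]
ec50, slope = fit_ec50(concs, responses)
print(ec50, slope)
```

Real screening data is noisy, so a production fit would use a proper nonlinear optimizer with top/bottom asymptotes as free parameters, but the grid search shows the shape of the problem.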

While the underlying pathways still need further research for complete understanding, we are able today to merge phenotype and genotype to help measure autophagy quantitatively in different cell lines, using end-to-end HCS solutions.

Serving industry with HCS solutions for more than a decade, PerkinElmer offers a number of integrated solutions to fully enable autophagy analyses at the speeds required for today’s drug discovery.

Interested in incorporating autophagy into your research, and leveraging the available HCS technologies that can analyze, classify, and interpret phenotypic screening data to deliver fast insight? 

Check out this webinar to learn about an autophagy assay across three cancer cell lines. This assay was done to validate and automate a phenotypic HCS analysis workflow using PerkinElmer’s Columbus and High Content Profiler, powered by TIBCO Spotfire. 

Machine Learning

Are You Ready for Machine Learning?

Enthusiasm for Machine Learning (ML) is running so high that Gartner added it for the first time to its Hype Cycle for Emerging Technologies in 2016, based on its ability to revolutionize manufacturing and related industries. Forrester Research says enterprises are “enthralled” by the potential of ML “to uncover actionable new knowledge, predict customers’ wants and behaviors, and make smarter business decisions.” 

It’s not just for manufacturing or retail, however. Computers that can “learn” have applications in the life sciences as well. Predictive modeling can, for example, assist with diagnostics - whether from clinical or molecular data, or from image recognition.  A study at Stanford University showed that ML more accurately predicted outcomes for lung cancer patients than did pathologists.

What is Machine Learning? 

Gartner defines it as “a technical discipline that provides computers with the ability to learn from data (observations) without being explicitly programmed. It facilitates the extraction of knowledge from data. Machine learning excels in solving complex, data-rich business problems where traditional approaches, such as human judgment and software engineering, increasingly fail.”

According to Tom Mitchell, author of Machine Learning (which in 1997 was one of the first textbooks on the subject), ML can be defined as any computer program that improves its performance (P) at some task (T) through experience (E). 

It works, in part, by finding patterns in data. It has typically been the domain of proficient data scientists, software engineers, IT professionals, and others who understand deep learning, neural networks and similar specialized subjects. In short, ML has felt off-limits to many subject matter experts – think of those pathologists in the Stanford study – who could benefit from ML’s algorithms.

“The analytics revolution becomes real when the average person becomes more comfortable with advanced analytics,” says Dr. Jamie Powers, DrPH, Director of Real World Evidence and Data Science at PerkinElmer Informatics. “Not that it’s simple or easy, but the barrier to entry isn’t as high as one might think.”

And getting on board is important, says Dr. Powers, if organizations don’t want to be left behind.

“The insights that can be gained from statistical analysis and machine learning are far greater than what a pair of human eyes can possibly offer,” he says. “What machine learning can do is direct where the humans should focus.”

“Starting Small” with Machine Learning

The beauty of machine learning, compared to routine statistical prediction, for example, is in its ability to let users experiment. Even if “you’re not there yet” because you still struggle with data and basic analytics, you can start small and explore with ML.

For example, consider an analysis we ran using a publicly available dataset (used in previous data-mining competitions) from a Wisconsin breast cancer study. It’s a small dataset from 699 patients, and we ran several algorithms to test diagnostic accuracy (benign or malignant) based on nine tumor characteristics. The goal was to find an ML algorithm with greater than 95% predictive accuracy. 

What tool did we use? The TIBCO® Enterprise Runtime for R (TERR) engine embedded within TIBCO Spotfire®. Beyond its visualization and dashboarding capabilities, TIBCO Spotfire offers advanced analytics functionality for machine learning applications. TERR lets you work in RStudio to develop, test, break, and iterate on algorithms, and it makes R code run 10 to 100 times faster than open-source R.

We tested seven different algorithms. Our results showed six of the seven machine learning algorithms yielded 95+% accuracy. While a small sample set, it’s enough to demonstrate that sometimes a simple model with high predictive accuracy is good enough. 

If your next steps include trying ML on your TIBCO Spotfire and TERR installation, here are three tips and one caveat to get you started:

  1. Follow a process. Machine learning is about following a process to determine which algorithm to use, saving you from the countless decisions required by a more manual evaluation of the available data.
  2. K-fold cross-validation is a best practice for training algorithms. Maintain an 80/20 data split so you have a hold-out sample for testing the final algorithm’s predictions.
  3. Try ensembles, or combinations of algorithms, to see if they deliver the accuracy and insights you seek.
  4. The caveat: machine learning (or statistical prediction) is NOT a substitute for proper experimental design and/or sample selection.
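Tips 1 and 2 can be sketched concretely. The following is an illustrative, stdlib-only Python example, not TERR or Spotfire code; the nearest-centroid classifier and the synthetic two-class data are stand-ins for a real model and dataset. The data is split 80/20 up front, k-fold cross-validation scores the model on the 80% training portion, and the untouched 20% hold-out gives the final accuracy check.

```python
import random

def nearest_centroid_fit(X, y):
    """Toy classifier: represent each class by the mean of its feature vectors."""
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(rows) for col in zip(*rows)]
            for c, rows in groups.items()}

def nearest_centroid_predict(centroids, x):
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(centroids[c], x)))

def accuracy(centroids, X, y):
    hits = sum(nearest_centroid_predict(centroids, xi) == yi
               for xi, yi in zip(X, y))
    return hits / len(y)

def kfold_cv(X, y, k=5, seed=0):
    """Tip 2: mean accuracy over k folds, each fold held out once for scoring."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        held = set(fold)
        train = [i for i in idx if i not in held]
        cents = nearest_centroid_fit([X[i] for i in train], [y[i] for i in train])
        scores.append(accuracy(cents, [X[i] for i in fold], [y[i] for i in fold]))
    return sum(scores) / k

# Synthetic two-class data standing in for a real diagnostic dataset.
rng = random.Random(1)
X = ([[rng.gauss(0, 0.5), rng.gauss(0, 0.5)] for _ in range(40)]
     + [[rng.gauss(3, 0.5), rng.gauss(3, 0.5)] for _ in range(40)])
y = [0] * 40 + [1] * 40

# Tip 2: hold out 20% up front; never touch it until the final check.
order = list(range(len(X)))
random.Random(2).shuffle(order)
cut = int(0.8 * len(X))
train_idx, holdout_idx = order[:cut], order[cut:]

cv_score = kfold_cv([X[i] for i in train_idx], [y[i] for i in train_idx], k=5)
final = nearest_centroid_fit([X[i] for i in train_idx], [y[i] for i in train_idx])
holdout_score = accuracy(final, [X[i] for i in holdout_idx],
                         [y[i] for i in holdout_idx])
print(cv_score, holdout_score)
```

The cross-validation score guides model selection; the hold-out score, computed exactly once at the end, is the honest estimate of how the chosen model will perform on unseen data.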

From science fiction to real science, Machine Learning is finding new applications and new devotees who want to use its power to solve problems and answer seemingly unanswerable questions. 

“If you have Spotfire and R, you can start with small projects,” Dr. Powers says. “The technology exists, the information is there. Machine learning is more accessible than it’s ever been. So, are you ready for machine learning? I would say anyone who’s willing to learn about it is ready.”

Watch Dr. Powers’ talk at Advanced Pharma Analytics 2016 to see how you can begin incorporating machine learning to benefit your organization.

Risk-Based Monitoring: Separating the Risk from the Noise

Clinical development professionals are tasked with making sure every trial site runs efficiently, follows protocol and generates the highest-quality data. With clinical trials growing longer and more costly, Risk-Based Monitoring (RBM) is rapidly gaining ground among sponsors and CROs.

In fact, regulatory agencies are now strongly recommending a risk-based approach to monitoring – encouraging sponsors & CROs to focus resources on sites that need the most monitoring. The FDA has issued guidance (Guidance for Industry Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring) with the stated goal: 

“…to enhance human subject protection and the quality of clinical trial data by focusing sponsor oversight on the most important aspects of study conduct and reporting… The guidance describes strategies for monitoring activities that reflect a modern, risk-based approach that focuses on critical study parameters and relies on a combination of monitoring activities to oversee a study effectively. For example, the guidance specifically encourages greater use of centralized monitoring methods where appropriate.”

In this video, we share how PerkinElmer Informatics helps companies implement a Risk-Based Monitoring approach to clinical trial development. 

Historically, clinical trial monitoring has depended on 100% source document verification (SDV) and other on-site monitoring functions to ensure patient safety and data integrity, consuming up to 30% of the $2.6 billion it takes to bring a new drug to market.

This costly practice has been found to have almost no impact on data quality or patient safety – spurring regulatory agencies to encourage solutions such as RBM platforms that integrate data from different trials, at different locations and in different formats. In fact, RBM proponents believe the approach will deliver an overall reduction in clinical trial costs of up to 20% – making it equally attractive to both sponsors and clinical site managers. 

Watch the video to learn more about clinical trial management and implementing an RBM approach. 

Why the Cloud is the Clear Choice

There’s nothing cloudy about it – the Cloud offers a number of advantages over on-premises computing solutions.

As life sciences researchers work with higher volumes and more variety & complexity of data - and as the organizations they work for become more sensitive to the costs of dedicated IT resources - cloud computing emerges as a solution for a multitude of challenges.

Not surprisingly, Amazon Web Services, a provider of cloud computing services (which, full disclosure, PerkinElmer uses), identifies six main benefits of cloud over traditional computing:

  • Trade capital expenses for variable ones: rather than invest in on-premise data centers and servers, cloud computing lets you “pay as you go,” more like a utility. You only pay for what you use, avoiding steep upfront capital costs.
  • Gain massive economies of scale: sharing services with hundreds of thousands of cloud users lowers your variable cost.
  • Eliminate guessing at capacity: on-premise requires precise forecasting, but too often you end up with either idle servers or overloaded ones. Cloud enables perfectly scalable, just-what’s-needed service - without the guessing.
  • Increase speed and agility: the cloud lets you release new IT resources and updates in a click, for a “dramatic increase in agility.”
  • Don’t pay to run and maintain data centers: spend instead on growing your business (or expanding your research), not “racking, stacking, and powering servers.”
  • Go global in minutes: deploy applications in regional clouds worldwide to lower latency and enhance customer experience.

These are key advantages for any life sciences organization, and Amazon is not alone in its analysis. The Yankee Group found that, on average, the total cost of cloud-based Software-as-a-Service (SaaS) offerings is 77 percent lower than that of on-premises systems. 

Comparing the Cloud With Physical Infrastructure 

Sure, you’ll pay more in subscription fees for cloud services than in licensing fees for software, but the costs of customizing and implementing licensed software, the hardware to run it, the IT personnel to manage and maintain it, and the time and expense of training add up to a total cost that far exceeds cloud computing’s.

Our own analysis, comparing PerkinElmer Signals for Translational to a popular open-source system based on a traditional software model, shows the total cost of the cloud model to be nearly half that of the on-premises one.
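The arithmetic behind such a comparison is easy to sketch. All of the figures below are hypothetical placeholders, not PerkinElmer’s actual cost data; the point is simply that on-premises line items (hardware, licenses, IT staff, maintenance, training) accumulate against a flat subscription over the ownership period.

```python
def five_year_tco(annual_items, upfront_items, years=5):
    """Total cost of ownership: one-time costs plus recurring costs over the period."""
    return sum(upfront_items.values()) + years * sum(annual_items.values())

# Hypothetical on-premises deployment (illustrative numbers only).
on_prem = five_year_tco(
    annual_items={"it_staff": 60_000, "maintenance": 15_000, "training": 10_000},
    upfront_items={"hardware": 80_000, "licenses": 120_000, "implementation": 50_000},
)

# Hypothetical cloud subscription (illustrative numbers only).
cloud = five_year_tco(
    annual_items={"subscription": 70_000, "training": 5_000},
    upfront_items={"onboarding": 20_000},
)

print(on_prem, cloud)
```

With these placeholder inputs the cloud total comes out well under the on-premises total, which is the general pattern the analysis above describes; any real comparison would of course substitute an organization’s own line items.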

Advantages of the Cloud: More than Cost-Savings 

On-Premises Model                 | Cloud-Based Model
Requires new HW, SW, & IT support | No new HW, SW or IT involvement
Implement in months/years         | Implement in days/weeks
Time consuming, costly upgrades   | Automatic, disruption-free upgrades
No innovation velocity            | Constant improvement, enhancement
Designed for techie users         | Designed for business users
Time-consuming to rollout         | Automatically accessible worldwide
High upfront cost and TCO         | Low annual subscriptions and TCO
High risk: big upfront purchase   | Low risk: try first, cancel at any time

This chart shows several additional advantages of the cloud over the on-premises model. With the cloud, plan on scaling up more quickly and deploying complex algorithms more frequently. Accelerated and automated updates are less disruptive to your users, keeping them focused on answering the questions their research continually poses.

The cloud has also enabled a new way of working – distributed research and development. Whereas the old centralized mainframes both consolidated and isolated data, the cloud fosters collaboration with external partners and opens access to emerging markets and public data.  

Cloud-Based Computing is Working

For proof the cloud is delivering on its promises, look no further than its use. Cisco, in its Global Cloud Index: Forecast and Methodology, 2014-2019 report, predicts that by 2019, 86 percent of workloads will be processed in cloud data centers, versus only 14 percent in traditional data centers. Cloud data centers handled a workload density of 5.1 in 2014; that figure will increase to 8.4 by 2019, compared with just 3.2 for traditional data centers.

There is one caveat which we’ve addressed in this blog before – the issue of security in the cloud. While it is our opinion that there are solid technology solutions for security concerns, some in the industry are addressing this for now by using a hybrid of both private cloud and on-premise computing for mission-critical workloads. 

Due to the “strengthening of public cloud security,” however, Cisco predicts that public cloud will grow faster (44% CAGR, 2014-2019) than private clouds (16% CAGR over the same period). They estimate more workloads will run in the public cloud (56 percent) than in private clouds (44 percent) by 2018.

Leveraging Big Data in Academic Research

This blog and others have celebrated Big Data for the big insights it can provide – and has already brought – in applications as diverse as finance, marketing, medicine and urban planning. Perhaps the slowest — or most cautious — in big data application has been academia - specifically at academic research institutions whose work contributes to pharma, biotech, drug discovery, genomics, and more.

Some of this is attributed to the cost for academic institutions to acquire software and systems to analyze big data, as well as the complexity of integrating and keeping up with multiple, evolving solutions. One survey found that access to data sets was the No. 1 problem for academic institutions interested in applying big data to research. 

Open Access to Big Data?

Late last year, four leading science organizations called for global accord on a set of guiding principles on open access to big data, warning that limitations would risk progress in a number of areas - from advanced health research to environmental protection and the development of smart cities. The “Open Data in a Big Data World” accord seeks to leverage the data revolution and its scientific potential, advocating that open data would maximize creativity, maintain rigor, and ensure that “knowledge is a global public good rather than just a private good.”

Collaboration around Precision Medicine

Big data is making a bigger impact as genomic centers, hospitals, and other organizations – particularly cancer centers – collaborate around precision medicine. These efforts use genomic data to make treatment decisions even more personalized. Precision medicine both generates and depends on Big Data – it is a source for much of the volume, variety, and velocity of data, and requires analysis to turn it into useful insights for drug discovery, translational research, and therapeutic treatments. 

The Need for Searchable Databases

While the federal investment of $215 million in the Precision Medicine Initiative is certainly welcome, scientists argue that additional funds and collaborations are needed. One such collaboration, the New York City Clinical Data Research Network (NYC-CDRN), has created “a robust infrastructure where data from more than six million patients is aggregated.” Dr. Mark Rubin of Weill Cornell Medical College argues that searchable databases like the NYC-CDRN are needed so that researchers and clinicians can “better design studies and clinical trials, create personalized treatment plans and inform medical decisions” for optimal, data-informed care.

It appears the marketplace is listening.

Yahoo! announced it was releasing “a massive machine learning dataset” comprising the search habits of 20 million anonymous users. The dataset is available to academic institutions and researchers only, for context-aware learning, large-scale learning algorithms, user-behavior modeling, and content enrichment. Facebook and Google have similarly opened access to artificial intelligence server designs and machine learning libraries, respectively, for academic as well as industry use.

In response to another federal initiative, the Cancer Moonshot, the Oncology Precision Network has decided to share aggregated cancer genomics data as it aims to “find breakthroughs in cancer care by leveraging previously untapped real-world cancer genomics data while preserving privacy, security, and data rights.” The Cancer Moonshot itself integrates academia, research, pharmaceutical, insurance, physician, technology, and government organizations to find new immunotherapy-based solutions for cancer care by 2020.

These and other efforts like them let academic researchers cover new ground now that they have access to previously unavailable datasets. In addition, nonprofit and academic institutions are analyzing data from previously conducted research, linking databases, and leveraging machine learning to further accelerate precision medicine development.

A Big Data Guinness World Record

A Guinness World Record is proof enough of that impact. The title for “fastest genetic diagnosis” went to Dr. Stephen Kingsmore of Rady Children’s Hospital-San Diego, who used big data and new technologies to successfully diagnose critically ill newborns in 26 hours, nearly halving the previous record of 50 hours.

This is the kind of breakthrough that researchers – both in industry and academia – are hoping big data can deliver more of. 

Academic Tech Lags Behind Industry, but Big Data Solutions Could Help Close the Gap

Doing so means overcoming the challenges that keep big data technologies out of the hands of academics. Postdocs and grad students want to find at their universities the technologies they will also use in their careers. Academic institutions want to keep costs low while protecting intellectual property. They want access to that IP, even as students graduate, so that future students can benefit from work that has already been done.

Researchers want to analyze data in context, supported by big data findings. Systems and solutions that let them combine efforts with industry will go a long way toward accelerating the discovery and development of highly accurate clinical diagnostic tests and treatments.

What is holding your academic research institution back from deploying big data technologies? Do you need a plan for implementing big data in your organization? Download our Free Trials.

100-days of #Spotfire®

Since 1986, PerkinElmer Informatics has been supporting researchers across industry and academia with market-leading, intelligently designed software solutions. Today we are thrilled to announce the start of the 100-days of Spotfire®, celebrating one of the most powerful tools in the PerkinElmer Informatics catalog: TIBCO™ Spotfire®.

Big Data is only Getting Bigger…

As a scientist, you know what it’s like to struggle with vast amounts of complex data from a wide array of sources. As data outputs increase, so does the pressure to find the needle in the “data” stack that leads to your next big discovery. The pursuit of data-driven insights and agile decision-making has never been more urgent.

Spotfire® provides dynamic analysis and visualization tools that tackle even the most cumbersome data sets with just a few clicks of the mouse. What’s more, Spotfire® not only finds answers to the questions you have, it also uncovers answers to questions you didn’t realize needed asking.

Start the 100-day revolution

Over the next 100 days, we challenge you to join the revolution: take back your data and unlock insights that will dramatically change the way you look at your research outputs. Each day we will showcase posts covering topics such as:

  • How to use Spotfire® to easily analyze data from multiple sources, including biological assay data, chemical structures and properties, cellular images, and genomics and proteomics data
  • Simple steps to creating and customizing visualizations and dashboards
  • Stories of how labs like yours apply Spotfire® to their research
  • Time-saving tips, shortcuts, and feature spotlights
  • Application stories around Lead Discovery® and OmicsOffice® to help you get more insights out of your research
  • Ways to easily share your Spotfire® analytics and dashboards with your colleagues

Plus, we’ll be hosting a variety of in-person and online events and offering flash promotions and discounts throughout the next 100 days. Stay tuned by following us here on LinkedIn, across our LinkedIn Groups, and on Facebook and Twitter.

ISO 9001:2008 Certification for Informatics R&D and Global Support


Do Standards Have a Place in Software Development?

Standards would seem to be anathema to software developers, who might protest that their use would stifle the creativity and flexibility required for agile or iterative development.

By definition, a standard is a model or example established by authority, custom, or general consent: something set as a rule for measuring, among many things, quality or value.

Is it possible to create standard requirements to improve the quality or value of software, without affecting the very creativity needed to achieve what the software or application is being designed to do? The simple answer is yes. ISO 9001:2008 certification can improve quality management systems for software. The International Organization for Standardization (ISO) establishes requirements for quality management systems in organizations that seek: 

• to demonstrate their “ability to consistently provide product that meets customer and applicable statutory and regulatory requirements” and

• to “enhance customer satisfaction through the effective application of the (quality management) system.” This includes establishing processes for continual improvement.

Because ISO 9001:2008 requirements are “intended to be applicable to all organizations, regardless of type, size, and product provided,” they apply well to quality management in software development and service.

This year, PerkinElmer received ISO 9001:2008 certification for its Informatics R&D and Global Support functions, both of which are important to customer satisfaction. Guido Lerch, the company’s executive director of quality control and assurance for informatics, and Lillian Metcalfe, quality system manager, say they saw in ISO 9001:2008 certification an opportunity to proactively invest in implementing standards and thus improve the overall quality of the software delivered.

Measure Inputs and Outputs

Certification provides a framework under which individual companies create the processes and procedures that lead to quality products and services. ISO does not mandate what a certified organization does; rather, the organization seeks certification of the standards it devises and applies to its own processes.

For software development, PerkinElmer has not restricted what its development teams can do. Instead, the company structured the “inputs and outputs”: the precursors for starting development and the processes for evaluating what teams release.

“In the middle, we want our developers to go off and experiment and try lots of things,” Lerch says. 

Adherence to ISO 9001:2008 requirements ensures that R&D and global support processes are clear, documented, and monitored, and that people are trained. It creates checks and balances to monitor the effectiveness of the quality management system, which leads to improved product quality and more satisfied customers.

It can also reduce the duration or scope of customer audits, as customers gain confidence from the knowledge that standards and processes are in place to develop and build products in a consistent manner. “There are hundreds of questions they won’t need to ask us,” according to Lerch, since PerkinElmer can explain its processes.

ISO 9001:2008 gives customers confidence they know exactly what they are deploying because the software has gone through a thorough testing regimen that follows certified procedures. In addition to knowing there are quality standards in place, there are also two people – Lerch and Metcalfe – whose roles are dedicated to the quality management of all PerkinElmer software released.

Two Important Points

While PerkinElmer Informatics has received ISO 9001:2008 certification for its R&D and Global Support functions, the company has been following such guidelines in principle for many years. Certification formalizes the company’s efforts.

ISO 9001:2008 has since been updated to ISO 9001:2015. PerkinElmer has three years to recertify under the new standard and is committed not only to achieving certification but to maintaining it.

How confident are you in the quality management of the software and applications you’re deploying? Does ISO 9001:2008 certification increase that confidence?