Revolutionizing global health: Changing healthcare from reactive to preventative

The blue line represents the lifetime cost of sick care; the red line, the lifetime cost of healthy care.

Even though the number of people living past 100 is currently increasing exponentially, for the first time in the last 200 years a child born in the United States has a lower life expectancy than his or her parents.

Two of the biggest health challenges facing modern society are cancer and cardiac disease. With ever-increasing knowledge and medical capability regarding disease prevention and treatment, the tools to reverse the decline of life expectancy lie within our reach.

Yet increasing life expectancy also means increasing medical expenses. The most medically expensive years of an individual’s life are the last five; extending how long a person lives also increases the amount of money that person must spend to sustain life.

When it comes to health and longevity, today’s lifestyles produce some striking relationships. Countries with the highest energy consumption rates are also topping the charts with the lowest life expectancies. The average number of calories consumed daily has increased more than 50 percent in the last 40 years.

These figures strongly suggest that addictions to foods such as sugar and corn, which make up a large proportion of the products found on today’s grocery store shelves, are playing a devastating role in the decline of health and life expectancy.

To combat these circumstances, we must revolutionize the way we approach healthcare today. Instead of practicing medicine reactively by treating symptoms and diagnoses, we need a cultural change so that, as a society, we embrace preventative, personalized healthcare. Spending more money and energy upfront to understand an individual’s personalized environmental health risks yields a long-term payoff of longer life and decreased medical expenses through effective disease prevention.

Join experts and peers on May 8 and 9, 2013, in Newton, Massachusetts, for the Revolutionaries for Global Health Summit. The summit, hosted by PerkinElmer, will feature numerous presentation tracks focusing on subjects relating to global health, such as next generation sequencing, in vivo imaging, targeted small molecules, tissue and cellular imaging, proteins and biologics, informatics, epigenetics, and biotherapeutics.

Click here to watch a video of PerkinElmer Life Sciences and Technology President Kevin Hrusovsky giving a presentation on modern health trends at a previous RGH Summit.

Complete your free registration to attend RGHS in Newton here.

Use hashtag #RGH13 to follow related discussions on Twitter.

Obama's BRAIN initiative: Impact on neuroinformatics and personalized medicine

Photo Credit: neurollero via Compfight cc

The announcement made earlier this month by the Obama administration, that the government plans to make a founding investment of $100 million next year to kick off a multi-year initiative to map the human brain, will have substantial ramifications for the field of neuroinformatics and the future of personalized medicine.

The proposed effort to map the brain’s cells and neural connections in their entirety, called the BRAIN initiative, will make strides toward understanding the causes and treatment of diseases such as Parkinson’s, Alzheimer’s, stroke and brain injury.

Yet some of the biggest effects of the project will be seen in the advancement of computational technology and data analysis. The acronym BRAIN stands for Brain Research through Advancing Innovative Neurotechnologies, which makes clear that the project will push the technological envelope to develop equipment and computational capability to allow scientists to track electrical activity at the micro level of individual cells and connections.

New methods for sensing and recording electrical activity in the brain will need to be developed in order for researchers to track brain processing patterns in greater number and speed, ideally “at the speed of thought”, said the White House.

The field of neuroinformatics will be greatly affected as new IT solutions for analyzing massive sets of brain-related data will need to be developed to facilitate the collection of data from trillions of points in the brain. The implications of the increased data-crunching ability will no doubt extend far beyond neuroscience, affecting the manner in which big data and computational processing are handled across various industries.

Down the line, just as advancements in whole genome sequencing have opened the door for widespread access to genetic testing as a method to manage personal health, mapping the human brain could introduce individualized brain maps as another accessible tool to understand, prevent and treat disease at the level of an individual patient.

Creative data capture facilitates faster drug discovery

Photo Credit: Auntie P via Compfight cc

Drug discovery researchers can work more efficiently to bring new therapies to market using innovative user-defined data capture solutions to document experiments and analyze data.

Within large research and development organizations, the need for standardized data reporting and analysis conflicts with the need for flexibility in experimental data capture. Scientists often resort to defining data capture parameters themselves on a case-by-case basis, which makes it difficult to compare data across experiments, perform consistent data analyses and share data within a research organization.

The novel approach of providing user-defined data capture mechanisms within a structured organizational network allows researchers to be flexible while still ensuring that the data they are recording will be analytically relevant within the larger context of a laboratory’s data collection.

Scientifically-intelligent software platforms not only allow for creative data capture but also reduce or eliminate the time scientists traditionally have had to spend transcribing and combining data from different sources. Automated merging uses metadata to combine information captured from different sources and removes the possibility of manual transcription errors. This merging is achieved without the need to write code, which reduces the substantial IT support traditionally necessary to maintain an organized, searchable collection of experimental information.
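
To make this concrete, here is a minimal sketch, using generic Python tooling rather than any PerkinElmer product’s API, of how records captured from different sources might be combined automatically on shared metadata instead of being re-typed by hand (all field, sample and compound names are hypothetical):

```python
import pandas as pd

# Hypothetical records captured from two different sources, each tagged
# with the same experiment ID as metadata at the time of capture.
plate_reader = pd.DataFrame({
    "experiment_id": ["EXP-001", "EXP-002"],
    "signal":        [0.82, 0.47],
})
eln_entry = pd.DataFrame({
    "experiment_id":    ["EXP-001", "EXP-002"],
    "compound":         ["CMPD-1", "CMPD-2"],
    "concentration_uM": [10.0, 25.0],
})

# Merging on the shared experiment_id metadata combines the two sources
# automatically -- no manual re-typing, so no transcription errors.
merged = plate_reader.merge(eln_entry, on="experiment_id")
print(merged)
```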

Read our white paper titled “User-Defined Data Management Solutions Free the Drug Discovery Researcher” to learn more specifically how pharmaceutical scientists can leverage informatics to improve their research and development processes.

Built-in laboratory safety: ELN system learns from past failures to warn scientists

Photo Credit: Thompson Rivers via Compfight cc

Laboratories operate under extensive rules and regulations in order to prevent hazardous accidents from occurring through chemical or biological experimental processes. Even with preventative measures and protective guidelines in place, the experimental nature of laboratory research means accidents are inevitable.

The challenge for scientists is how to organize their data in a manner that allows them to learn incrementally from each accident that occurs. Documenting the parameters that caused accidents and then distributing that information effectively to peers is a daunting task. A solution developed by Bristol-Myers Squibb chemistry safety officers demonstrates how an electronic laboratory notebook enterprise network can be used to deliver automated warnings based on historical data.

While traditional laboratory safety methods require researchers to pull information from various authoritative sources and guidebooks on safety recommendations, an automated safety system built into an ELN eliminates risks of human error and oversight by “pushing” safety parameters onto researchers.

The key to creating a safety system is realizing the important role the electronic laboratory notebook plays in experimentation: recording the experimental plan and parameters in the ELN is essentially the last step a scientist takes before carrying out an experiment. This allows the ELN to take on a gate-keeping role through the implementation of a safety net, drawing on historic accidents to warn scientists when experiment plans mimic past failures.

Not only does the safety system alert the primary researcher to a potentially hazardous experiment, but it can also refer researchers to appropriate procedural guidelines and notify members of a laboratory’s safety committee that a potentially hazardous experiment may be imminent.
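
The gate-keeping logic can be sketched in a few lines. The example below is a hypothetical illustration of the general approach, not the Bristol-Myers Squibb implementation; the incident records and SOP names are invented:

```python
# Hypothetical sketch of an ELN-style safety gate: planned experiment
# parameters are compared against records of past incidents, and any
# match triggers a warning before the experiment plan is finalized.
PAST_INCIDENTS = [
    {"reagents": {"tert-butyllithium"}, "scale_g_min": 5.0,
     "guideline": "SOP-112: pyrophoric reagents above 5 g"},
    {"reagents": {"sodium azide", "dichloromethane"}, "scale_g_min": 0.0,
     "guideline": "SOP-087: azide / halogenated-solvent combinations"},
]

def safety_warnings(planned_reagents, scale_g):
    """Return guidelines for past incidents the planned experiment mimics."""
    warnings = []
    for incident in PAST_INCIDENTS:
        if incident["reagents"] <= set(planned_reagents) and scale_g >= incident["scale_g_min"]:
            warnings.append(incident["guideline"])
    return warnings

# The researcher is warned the moment the plan is recorded; in a full
# system the same hits would also notify the safety committee.
for guideline in safety_warnings({"sodium azide", "dichloromethane", "water"}, scale_g=2.0):
    print("WARNING:", guideline)
```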

The ELN safety customization created at Bristol-Myers Squibb was recognized by the 2012 Bio-IT World Best Practices Awards competition. Other scientific organizations are now turning to Bristol-Myers Squibb asking for advice on how to construct their own ELN-based safety enforcements.

Read a detailed account of the Bristol-Myers Squibb solution here, which was developed on top of our E-Notebook® ELN: http://bit.ly/XMuzpF

Researchers turn to cloud-based analysis to accommodate influx of genetic data

Some genetic sequencing providers are opting to upload client genome information to cloud-based analysis networks.

Photo Credit: atzu via Compfight cc

Questions about privacy are arising once again in genetic sequencing as medical researchers turn to cloud-based bioinformatics software to analyze processed sequences.

With genetic sequencing rapidly becoming a more affordable service for public consumption, genetic sequencing providers are growing more concerned with how to crunch the sequencing data than with the sequencing technology itself. As the field of personalized medicine continues to expand and individuals opt to have their genomes sequenced for as little as $1,500, the costly part for sequencing providers will be maintaining enough servers and personnel to analyze and identify information in the completed sequences.

Some genetic sequencing providers have already opted to outsource those analytic processes to cloud-computing software platforms, which allows genetic information to be analyzed without the financial investment in the physical infrastructure needed to house on-site analytical processes. Clients’ genetic information is uploaded into the cloud and can be analyzed for as little as $100.
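
As a rough sketch of that workflow, and not any particular provider’s pipeline, processed sequence data might be pushed to cloud object storage and queued for analysis; the bucket, queue URL and file names below are placeholders:

```python
import boto3

# Hypothetical example: upload a processed variant file to cloud object
# storage, where a downstream analysis service picks it up. The bucket
# and key names are placeholders, not a real provider's endpoints.
s3 = boto3.client("s3")
s3.upload_file(
    Filename="sample_042.vcf.gz",
    Bucket="example-genomics-uploads",
    Key="clients/sample_042/variants.vcf.gz",
)

# A message on a queue can then trigger annotation of the uploaded genome.
sqs = boto3.client("sqs")
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/analysis-jobs",
    MessageBody="s3://example-genomics-uploads/clients/sample_042/variants.vcf.gz",
)
```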

Many individuals may find it daunting to consider that their personal genetic information is being released to a cloud-based computing network. While cloud-based genetic analysis is no more likely to jeopardize personal privacy than on-site computing, it is a sign of how much more readily genomic information is being collected and analyzed. Innovative solutions such as cloud-based computing will continue to expand to accommodate increasing bandwidth needs.

The emergence of cloud-based genetic sequencing analysis demonstrates that the field of genomics may be advancing faster than regulatory agencies can keep pace. The problem of privacy when it comes to genetic information is not specific to cloud-based computing: there currently are no federal regulations in place to protect individual privacy with respect to genetic information, and state regulations are varied and riddled with inconsistencies. Just this past October, the Presidential Commission for the Study of Bioethical Issues made twelve concrete recommendations about the regulation of genetic information, but none of those recommendations has been put in place yet.

To read more about cloud-based genetic sequencing, check out this March 19 article published by Nature.com: “Gene-analysis firms reach for the cloud”.

Learn more about genetics and privacy by reading these recent blog stories:

DNA Anonymity Part I: Current state and concern

DNA Anonymity Part II: What privacy can we expect in the future?

Ethical recommendations: Whole genome sequencing and privacy

Data discovery and visualization replace reporting in BI


Photo Credit: me and the sysop via Compfight cc

Traditional data reporting just isn’t up to snuff when it comes to giving business leaders meaningful information on which to base decisions, a fact solidified by Gartner Inc.’s decision to rename its annual BI quadrant rankings report from “Business Intelligence Platforms” to “Business Intelligence and Analytics Platforms”.

“The dominant theme of the market in 2012 was that data discovery became a mainstream BI and analytic architecture,” Gartner’s report stated.

So what does that mean for scientific companies looking to harness the most powerful data analysis software in 2013? It means looking beyond data reporting to find tools that are capable not only of extracting relevant data but also of presenting that data in a scientifically meaningful way.

As the era of “big data” continues to generate more and more information, the ability to capture and understand meaningful data goes a long way. A recent story published by NPR called big data the equivalent of the steam engine in terms of technological impact. Just as the collection of big data from digital activity can offer insight into human behaviors, the collection of data points from scientific research can offer valuable insights into experimental patterns and materials behaviors.

And the future of harnessing all the information contained within that data? That’s where data visualization comes in: Visualization provides an intuitive method for researchers to sift through and expose relationships between data sets. By replacing rows and columns of data with pictures and charts to graphically represent information, users can absorb information in real-time and also locate information much faster using visual discovery tools.
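
As a simple illustration of the shift, using generic Python tooling rather than the TIBCO Spotfire software platform itself, the same result set that would normally arrive as rows and columns can be rendered as a chart that exposes the underlying relationship at a glance (the assay numbers are hypothetical):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical assay results that would normally be delivered as a
# report of rows and columns.
results = pd.DataFrame({
    "concentration_uM": [0.1, 0.5, 1, 5, 10, 50, 100],
    "inhibition_pct":   [2, 8, 15, 42, 61, 88, 95],
})

# A log-scale line plot makes the dose-response relationship visible at
# a glance, where the raw table requires careful row-by-row reading.
ax = results.plot(x="concentration_uM", y="inhibition_pct",
                  kind="line", marker="o", logx=True, legend=False)
ax.set_xlabel("Concentration (uM)")
ax.set_ylabel("Inhibition (%)")
ax.set_title("Dose-response curve drawn from the same data as the tabular report")
plt.show()
```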

Critically, data visualization allows for optimization of personnel resources. Whereas traditional enterprise-level data reports often necessitate the efforts of an IT support group to write code allowing end-users to use query language to find data, data visualization allows end-users to both enter and retrieve data from enterprise systems.

Not only does data visualization allow more personnel to be dedicated to discovering patterns within data, but it also allows those people to make better and more informed discoveries. A 2013 scholarly article found that when doctors and patients processed and discussed diagnosis findings, visual aids increased comprehension of probabilities and numerical data.

This finding isn’t just meaningful for data relevancy in the medical field – it demonstrates that humans innately are better at understanding data when it is presented graphically. Not surprisingly, this natural proclivity for humans to draw meaningful information from visual representations is what makes data visualization tools so powerful when used in data analysis.

Organized creativity: end-to-end research data management

Photo Credit: Alfred Hermida via Compfight cc

Between all the procedural methods, standards for documentation and experimental parameters, it can be difficult to remember that research is intended to be an inherently creative, exploratory and innovative process.

Yet with so many regulatory practices in place to validate and document research data, that creative element can become stifled in the experimental process. Traditional data documentation and analysis software limits scientists to working within data sets instead of across data sets.

The challenge that emerges for scientists is to find software solutions that allow them to work creatively within scientifically-intelligent organizational platforms. Relying on data reports as the only source of analysis limits the comprehensiveness of findings, whereas dynamic and fluid data visualization platforms engage researchers with data in an intuitive and interactive manner.

Especially for scientists working within large research organizations, the need for organized flexibility is critical to R&D success. End-to-end integration of advanced informatics solutions not only allows for organized data capture, but also makes this data available and maneuverable through tailored data visualization programs.

Removing barriers to creativity through effective data capture and analysis systems enables scientists to do what they do best: create and discover. Instead of spending hours transcribing, organizing and sorting through traditional paper records, electronic laboratory notebook (ELN) technology can give back several hours of weekly productivity to each scientist. That time can be spent in experimentation or in advanced, out-of-the-box analysis using data visualization software that allows for data filtering, manipulation and cross-referencing.

In the end, the goal of facilitating end-to-end comprehensive data solutions is to generate more research and more results. The more efficient it is to record and sift through data, the sooner research can benefit us all as new discoveries lead to new products and therapies entering the marketplace.

Now, read a more in-depth take on this subject, titled “Busting Drug Discovery Bottlenecks”, published in Drug Discovery and Development.

ChemBioDraw 13: Best internet tutorial videos Part II

It’s been a few months since November, when we shared on our blog a compilation of the best internet tutorial videos for ChemBioDraw. With last week’s launch of the second ChemDraw Magic video created by “ChemDraw Wizard” Pierre Morieux, Ph.D., we decided it was time to pull together some of the great how-to videos that have hit the internet since then.

First and foremost, we of course have to list “ChemDraw Magic 2”. This second video follows the original “ChemDraw Magic” video, which was posted in November to rave reviews from ChemBioDraw users. The video circulated so widely that Morieux is now employed by PerkinElmer Informatics as an applications specialist.

In his second video, “ChemDraw Wizard” Morieux spends time showing off tips and tricks for several new features introduced in the latest version of ChemDraw, ChemBioDraw 13.0. Viewers learn how to draw peptides and nucleic acids using the new biopolymer tool, as well as time-saving shortcuts such as saving “style sheets” to pre-set document settings and engaging the hotkey for the Marquee tool.

ChemBioDraw users can also reference the latest how-to videos posted in our own tutorial series, “ChemBioDraw 13.0: Step by step”:

Thin Layer Chromatography tool in ChemBioDraw

Struct=Name in ChemBioDraw

Improve workflow and file management with ChemBioDraw and ChemBio3D

Predicting proton and carbon NMR shifts with ChemBioDraw 13

To stay up to date on the latest ChemBioDraw tutorial videos, follow PerkinElmer Informatics on Twitter and Facebook to receive weekly links to the latest “ChemBioDraw 13: Step By Step” videos.

Are there any other ChemBioDraw how-to videos you would like to see? Contact us via social media and give us your suggestions; we’d love to help you get the most out of ChemBioDraw.

PKI Informatics: Two Gartner reports, two products named market leaders

Photo Credit: woodleywonderworks via Compfight cc

Within the span of one month, two products from PerkinElmer Informatics’ suite of scientific technology and software offerings have received winning reviews in reports by Gartner, Inc.

Hot on the heels of a January report in which Gartner documented the ways in which PKI Informatics’ E-Notebook® continues to lead the industry in ELN enterprise technology, Gartner has released a February report entitled “Magic Quadrant for Business Intelligence and Analytics Platforms,” in which the TIBCO Spotfire® software platform was named a market leader in business intelligence and analytics.

The report names the TIBCO Spotfire® software platform’s strong product vision as one of the most influential reasons why the platform continues to remain ahead of competitors in the advancement of BI and data analysis capabilities. Real-time analyses coupled with mobile-ready dashboards allow users of the TIBCO Spotfire® software platform to access code-free data modeling and analysis from a variety of entry points into an enterprise network.

TIBCO Spotfire software platform users report “success in terms of expanded usage over the past year” and “above average ease of use” while the platform has demonstrated the “highest complexity of user analysis scores of any vendor”, according to Gartner’s report.

Through our strategic alliance with TIBCO Software Inc., the PKI Informatics team is excited to continue the development of vertical applications to tailor-fit the TIBCO Spotfire software platform for superior data visualization within science industries. By combining the TIBCO Spotfire software platform with our E-Notebook® ELN technology, scientists can eliminate the need for IT support while achieving best-in-class data visualization, organization and analysis.

Gartner’s recent reports recognizing our E-Notebook ELN enterprise solution and the TIBCO Spotfire software platform as leaders in the electronic laboratory notebook and data analysis markets acknowledge our efforts to give customers in scientific research access to the most cutting-edge information systems. We thank our customers for continuing to choose us as their scientific informatics provider.

AstraZeneca case study: Quantifiable and qualitative impacts of ELN enterprise implementation

Photo Credit: Philippe Boivin via Compfight cc

Following up on our announcement that Gartner, Inc., concluded in a January 2013 report that PerkinElmer Informatics demonstrates “superior” capability to implement analytical instrumentation integration with E-Notebook® enterprise ELN technology, we’d like to revisit an AstraZeneca case study that highlights the quantitative and qualitative advantages of ELN enterprise software.

Most commonly, quantitative benefits of ELN enterprise implementation are readily recognized and documented through statistical figures and calculations. In this AstraZeneca case study, these advantages are described as follows:

    - Average productivity improvement in medicinal chemistry: 10%

    - ELN technology increased experimental productivity by decreasing the amount of time spent on documentation

    - Patent preparation made more efficient through time-saving techniques that allow chemists to easily convert electronic documentation into patent submissions

    - Materials weight, quantity and yield calculated automatically in real time (see the sketch after this list)

    - Ability to electronically clone replicate reactions reduced documentation time

    - Report and presentation preparation time reduced by the availability of precise, orderly electronic experiment records

    - Creation of a searchable, comprehensive database of documentation
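
To illustrate the kind of real-time calculation noted in the list above, here is a generic stoichiometry sketch, not AstraZeneca’s or E-Notebook’s actual code, showing how percent yield follows directly from recorded masses and molecular weights:

```python
# Hypothetical sketch of the yield arithmetic an ELN can perform as soon
# as weights are entered: moles = mass / molecular weight, and percent
# yield = actual moles of product / theoretical moles from the limiting reagent.
def percent_yield(limiting_mass_g, limiting_mw, product_mass_g, product_mw,
                  product_per_limiting=1.0):
    theoretical_mol = (limiting_mass_g / limiting_mw) * product_per_limiting
    actual_mol = product_mass_g / product_mw
    return 100.0 * actual_mol / theoretical_mol

# Example: 2.00 g of a 150 g/mol starting material gives 1.80 g of a
# 180 g/mol product -> (1.80/180) / (2.00/150) = 75% yield.
print(round(percent_yield(2.00, 150.0, 1.80, 180.0), 1))  # 75.0
```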

Less commonly discussed are the intangible advantages achieved through ELN implementation, but the AstraZeneca case study does an excellent job of highlighting qualitative benefits experienced after integration with E-Notebook enterprise ELN.

One of the most important benefits achieved through ELN implementation in a global organization is the ability to cross cultural and geographic boundaries to stimulate and promote teamwork, collaboration and discovery through shared access to a team’s data.

The highest-quality documentation and record keeping are also enforced through systematic ELN data input practices and regulations, allowing for meticulous electronic audit trails and digital certification of experiment authenticity.

Furthermore, ELN enterprise software acts as a launching platform to engage additional integrated electronic programs and services. From the PerkinElmer Informatics suite, products like ChemBioOffice® and the TIBCO Spotfire® software platform are easily implemented to complement and increase ELN capabilities.

Read the AstraZeneca case study in its entirety by clicking here, or read our news release announcing our success in Gartner’s recent IT report by clicking here.

To learn about E-Notebook for Chemistry enterprise ELN, click here; for Biology, click here.