There is no denying that phenotypic screening offers many advantages for drug discovery. As we’ve previously noted in this blog, many pharmaceutical companies have increased their use of phenotypic screens after it was reported that the majority of first-in-class small-molecule drugs receiving FDA approval from 1999 to 2008 were discovered via the phenotypic path (28 drugs), versus a target-based approach (17 drugs).
The Phenotypic Bottleneck
However, the trend toward empirical models involving organisms (versus mechanistic models at the molecular level) increases the complexity of the analysis – and therefore the processing time. Researchers at pharmaceutical companies running phenotypic screening campaigns of approximately 500,000 compounds estimate at least three months of image analysis time – even if they could keep analysis to one hour per 384-well plate.
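The arithmetic behind that three-month figure can be sketched with a few illustrative assumptions (the daily analysis window is ours, not a figure from the campaigns cited):

```python
# Back-of-the-envelope estimate of serial image-analysis time for a
# 500,000-compound campaign; the daily analysis window is an assumption.
compounds = 500_000
wells_per_plate = 384
hours_per_plate = 1            # optimistic per-plate analysis time
analysis_hours_per_day = 16    # assumed effective daily throughput

plates = -(-compounds // wells_per_plate)      # ceiling division: 1303 plates
total_hours = plates * hours_per_plate
days = total_hours / analysis_hours_per_day    # ~81 days
months = days / 30                             # ~2.7, i.e. about three months

print(f"{plates} plates -> ~{days:.0f} days (~{months:.1f} months)")
```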
That bottleneck stems not only from the volume of compounds to be processed but also from the number of parameters. Thousands of parameters, many of them defined in multiple ways, must be analyzed to discover and understand a relevant molecular mechanism of action (MOA). It’s a far more complex analysis than a single-target assay. Roche calls phenotypic screening “the most refined and delicate of all assays.”
Research Challenges Facing High-Content Image Screening
Cell biologists in pharma, biotech, and academic and institutional research who work with microscopy and high-content imaging technology face three IT-related challenges:
• Data storage & management
• System architecture
• Speed and performance
Data storage & management – Researchers need a suitable database to transform high-content screening analyses into robust biological conclusions. They need to store everything from raw image data and image analysis results to secondary analyses, hit stratification, metadata, and phenotypes, while being able to perform searches, conduct advanced analytics, share data, and collaborate around results.
Phenotypic screening is unique in the sheer volume of raw data it generates: a single screen can produce a few terabytes of high-resolution images. Another critical need is the ability to integrate screening data with other data types, from genomics and high-throughput screening to tissue and in vivo data, and more.
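To put that terabyte figure in concrete terms, here is a rough sizing sketch; the plate count, fields, channels, and per-image size are all illustrative assumptions, since these vary widely by assay:

```python
# Rough raw-data volume for a single high-content screen.
# All counts below are illustrative assumptions, not vendor figures.
plates = 300             # assumed plates in one screen
wells_per_plate = 384
fields_per_well = 2      # assumed imaged fields per well
channels = 3             # assumed fluorescence channels
mb_per_image = 8         # assumed 16-bit ~2-megapixel image, uncompressed

images = plates * wells_per_plate * fields_per_well * channels
terabytes = images * mb_per_image / 1_000_000
print(f"{images:,} images, ~{terabytes:.1f} TB of raw images")
```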
System architecture – While on-premise solutions give a pharmaceutical company close control over its data, they can be expensive to keep current with server and computing requirements. It doesn’t take long for a server to outgrow its capacity and for computing power to struggle to keep up as newer technologies churn out more data even faster. Leveraging the web and sharing data with external partners – such as contract research organizations – also strain IT resources.
Speed and performance – Phenotypic screening needs image analysis to catch up with image acquisition speeds. Batch analysis of large high-content screening campaigns, particularly those involving complex cellular assays with texture and morphology changes, can take weeks or months – an unacceptable delay. Users also want to parallelize more functions: import, analysis, deletion, and so on.
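Because each plate is an independent unit of work, batch analysis parallelizes naturally. A minimal sketch, where `analyze_plate` is a hypothetical stand-in rather than a real product API (threads keep the sketch simple; real CPU-bound image analysis would use process pools or distributed cloud workers):

```python
# Minimal sketch of parallel batch analysis over independent plates.
# `analyze_plate` is a hypothetical placeholder, not an actual product API.
from concurrent.futures import ThreadPoolExecutor

def analyze_plate(plate_id: int) -> dict:
    """Placeholder: analyze all images on one plate, return summary stats."""
    return {"plate": plate_id, "hits": plate_id % 7}   # dummy result

def run_batch(plate_ids, workers: int = 8) -> list:
    # Threads are used here for simplicity; CPU-bound analysis would use
    # ProcessPoolExecutor or a fleet of cloud workers instead.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze_plate, plate_ids))

results = run_batch(range(16), workers=4)
print(len(results), "plates analyzed")
```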
A Cloud Solution
Rather than investing in more on-premise hardware and systems, pharmaceutical and biotech companies can find the computing power they need in the cloud. Just as the sharing economy means people can get rides without buying a car, the cloud means researchers can get computing power without buying hundreds of PCs and numerous servers. Providers like Amazon and Microsoft let researchers “rent” high-performance computing power for just the amount of time required. In the pharmaceutical case above, which called for three months of analysis time, cloud-based high-performance computing can bring that down to two or three weeks.
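The scaling at work can be sketched as follows; the node count and parallel efficiency are assumptions for illustration, and real jobs also pay data-transfer and queuing overhead:

```python
# Illustrative estimate of wall-clock time when per-plate analysis is
# spread across rented cloud nodes; node count and efficiency are assumed.
serial_hours = 1303     # ~500,000 compounds at 1 h per 384-well plate
nodes = 4               # assumed number of rented compute nodes
efficiency = 0.8        # assumed parallel efficiency (overhead keeps it < 1)

wall_hours = serial_hours / (nodes * efficiency)
weeks = wall_hours / (24 * 7)   # cloud nodes can run around the clock
print(f"~{wall_hours:.0f} h wall-clock, ~{weeks:.1f} weeks")
```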
Cloud-Based Phenotypic Screening
Moving phenotypic screening to the cloud, while a sound strategy for saving time and resources, requires the right software solutions. At SLAS in January and the High Content Analysis and Phenotypic Screening conference in February, PerkinElmer introduced its platform to offer cloud-based computing performance, machine learning, and storage for phenotypic screens (as well as non-high-content screens). Signals for Screening combines the functionalities of the Columbus high-volume image data storage and analysis system with High Content Profiler automated hit-selection and profiling software, powered by TIBCO Spotfire® data visualization and analysis software – and enables it for the cloud. This brings the entire workflow for data analysis and phenotypic screening under one umbrella.
When built on top of Signals, PerkinElmer’s cloud-based data management and aggregation platform, Columbus and High Content Profiler not only perform image analysis, but also machine-based characterization of parameters in individual cells. In a single, seamless workflow, data from the instrument can be analyzed to answer a broad array of questions regarding each individual cell.
Speed Analysis Time and Save on IT Investment
Leveraging the cloud with high-performance computing software significantly speeds analysis time while reducing IT investment. Rather than spend more on additional PCs or servers, why not look into cloud-based solutions that offer functionality specific to drug discovery screening needs – both phenotypic and target-based?
Have you used or considered cloud-based drug discovery solutions?