1.
Nat Biotechnol ; 41(8): 1060-1061, 2023 08.
Article in English | MEDLINE | ID: mdl-37568019
2.
SLAS Discov ; 27(8): 460-470, 2022 12.
Article in English | MEDLINE | ID: mdl-36156314

ABSTRACT

Recent efforts to increase success in drug discovery focus on early, massive, and routine mechanistic and/or kinetic characterization of drug-target engagement as part of a design-make-test-analyze strategy. From an experimental perspective, many mechanistic assays can be translated into a scalable format on automation platforms, enabling routine characterization of hundreds or thousands of compounds. However, the limiting factor for such in-depth characterization at high throughput now becomes the quality-driven data analysis, the sheer scale of which outweighs the time available to the scientific staff of most labs. Automated analytical workflows are therefore needed to enable such experimental scale-up. We have implemented such a fully automated workflow in Genedata Screener for time-dependent ligand-target binding analysis to characterize non-equilibrium inhibitors. The workflow automates the quality control (QC), data modelling, and decision-making process in a staged analysis: (1) quality control of the raw input data (fluorescence signal-based progress curves), featuring automated rejection of unsuitable measurements; (2) automated model selection (one-step versus two-step binding model) using statistical methods and biological validity rules; (3) result visualization in specific plots and annotated result tables, enabling the scientist to review large result sets efficiently and, at the same time, to rapidly identify and focus on interesting or unusual results; (4) an interactive user interface for immediate adjustment of automated decisions, where necessary. Applying this workflow to first-pass, high-throughput kinetic studies on kinase projects has allowed us to surmount previously rate-limiting manual analysis steps and boost productivity; it is now routinely embedded in a biopharma discovery research process.
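The staged analysis described in this abstract can be illustrated with a minimal sketch. The paper does not disclose Genedata Screener's actual fitting routines, so the model forms and the AIC-based selection below are standard textbook choices for slow-binding kinetics, not the product's implementation; all parameter values are simulated and hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def aic(n, rss, k):
    # Akaike information criterion for a least-squares fit with k parameters
    return n * np.log(rss / n) + 2 * k

# kobs as a function of inhibitor concentration [I]:
# linear dependence suggests one-step binding, hyperbolic suggests two-step
def one_step(I, k_on, k_off):
    return k_off + k_on * I

def two_step(I, k_iso, k_rev, Ki):
    return k_rev + k_iso * I / (Ki + I)

rng = np.random.default_rng(0)
I = np.array([0.5, 1, 2, 4, 8, 16, 32])          # inhibitor conc. (uM), simulated
kobs_true = 0.02 + 0.5 * I / (5.0 + I)           # simulate two-step behaviour
kobs = kobs_true + rng.normal(0, 0.005, I.size)  # kobs from progress-curve fits

p1, _ = curve_fit(one_step, I, kobs)
p2, _ = curve_fit(two_step, I, kobs, p0=[0.5, 0.02, 5.0])
rss1 = np.sum((kobs - one_step(I, *p1)) ** 2)
rss2 = np.sum((kobs - two_step(I, *p2)) ** 2)
best = "two-step" if aic(I.size, rss2, 3) < aic(I.size, rss1, 2) else "one-step"
print(best)
```

In a real workflow, stage (1) would reject progress curves failing signal or fit-quality thresholds before any kobs value reaches this model-selection stage, and stage (2) would combine the statistical criterion with biological validity rules rather than relying on AIC alone.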


Subject(s)
Data Analysis , Drug Discovery , Humans , Kinetics
3.
SLAS Discov ; 27(4): 278-285, 2022 06.
Article in English | MEDLINE | ID: mdl-35058183

ABSTRACT

Ion channels are drug targets for neurologic, cardiac, and immunologic diseases. Many disease-associated mutations and drugs modulate voltage-gated ion channel activation and inactivation, suggesting that characterizing state-dependent effects of test compounds at an early stage of drug development can be of great benefit. Historically, the effects of compounds on ion channel biophysical properties and voltage-dependent activation/inactivation could only be assessed by using low-throughput, manual patch clamp recording techniques. In recent years, automated patch clamp (APC) platforms have drastically increased in throughput. In contrast to their broad utilization in compound screening, APC platforms have rarely been used for mechanism of action studies, in large part due to the lack of sophisticated, scalable analysis methods for processing the large amount of data generated by APC platforms. In the current study, we developed a highly efficient and scalable software workflow to overcome this challenge. This method, to our knowledge the first of its kind, enables automated curve fitting and complex analysis of compound effects. Using voltage-gated sodium channels as an example, we were able to immediately assess the effects of test compounds on a spectrum of biophysical properties, including peak current, voltage-dependent steady state activation/inactivation, and time constants of activation and fast inactivation. Overall, this automated data analysis method provides a novel solution for in-depth analysis of large-scale APC data, and thus will significantly impact ion channel research and drug discovery.
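As an illustration of the kind of curve fitting such a workflow automates, the sketch below fits a standard Boltzmann function to simulated steady-state activation data. This is a generic example of the analysis the abstract names (voltage-dependent steady-state activation), not the software described in the paper; the voltages and gating parameters are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, k):
    # steady-state activation: fraction of channels open at test potential v (mV)
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

voltages = np.arange(-80, 41, 10.0)   # test potentials (mV)
rng = np.random.default_rng(1)
# simulated normalized conductance with V1/2 = -25 mV, slope factor k = 8 mV
g_norm = boltzmann(voltages, -25.0, 8.0) + rng.normal(0, 0.02, voltages.size)

popt, _ = curve_fit(boltzmann, voltages, g_norm, p0=[-20.0, 10.0])
v_half_fit, slope_fit = popt
print(v_half_fit, slope_fit)
```

A compound-induced shift in activation would be read out by fitting traces recorded before and after compound addition and comparing the fitted V1/2 values; at APC scale, the point of the paper is doing exactly this fit automatically for every well.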


Subject(s)
Data Analysis , Electrophysiological Phenomena , Electrophysiology , Ion Channels , Patch-Clamp Techniques
4.
SLAS Technol ; 27(1): 85-93, 2022 02.
Article in English | MEDLINE | ID: mdl-35058213

ABSTRACT

Biopharmaceutical drug discovery is today a highly automated, high-throughput endeavor in which many screening technologies produce a high-dimensional measurement per sample. A striking example is High Content Screening (HCS), which uses automated microscopy to systematically access the wealth of information contained in biological assays. Exploiting HCS to its full potential traditionally requires extracting a large number of features from the images to capture as much information as possible, then performing algorithmic analysis and complex data visualization to render this high-dimensional data into interpretable and instructive information for guiding drug development. In this process, automated feature selection methods condense the feature set to reduce non-useful or redundant information and render it more meaningful. We compare 12 state-of-the-art feature selection methods (both supervised and unsupervised) by systematically testing them on two HCS datasets from drug screening imaging assays of high practical relevance. Using standard plate, assay, and compound statistics on the final results as evaluation metrics, we assess the generalizability and importance of the selected features by means of automated machine learning (AutoML) to achieve an unbiased evaluation across methods. The results provide practical guidance on experiment design, optimal sizing of a reduced feature set, and choice of feature selection method, both in situations where useful experimental control states are available (enabling the use of supervised algorithms) and where such controls are unavailable, requiring unsupervised techniques.
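A toy version of such a comparison can be sketched with scikit-learn, using synthetic data in place of the HCS datasets and a single supervised selector in place of the 12 methods benchmarked in the paper; the dataset shape and classifier choice here are illustrative assumptions only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# synthetic stand-in for a well-plate feature matrix: 500 wells x 50 image
# features, of which only a handful carry phenotype information
X, y = make_classification(n_samples=500, n_features=50, n_informative=5,
                           n_redundant=10, random_state=0)

def score(features, labels):
    # downstream-model accuracy as a proxy for how useful the feature set is
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, labels, cv=5).mean()

baseline = score(X, y)                                    # all 50 features
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)  # supervised selection
selected = score(X_sel, y)
print(baseline, selected)
```

The paper's evaluation is considerably more involved (AutoML across 12 selectors and plate/assay/compound statistics), but the core loop is the same: select a reduced feature set, then score it by the performance of a model trained downstream.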


Subject(s)
Algorithms , Benchmarking , Machine Learning , Microscopy
5.
SLAS Discov ; 25(7): 812-821, 2020 08.
Article in English | MEDLINE | ID: mdl-32432952

ABSTRACT

Drug discovery programs are moving increasingly toward phenotypic imaging assays to model disease-relevant pathways and phenotypes in vitro. These assays offer richer information than target-optimized assays by investigating multiple cellular pathways simultaneously and producing multiplexed readouts. However, extracting the desired information from complex image data poses significant challenges, preventing broad adoption of more sophisticated phenotypic assays. Deep learning-based image analysis can address these challenges by reducing the effort required to analyze large volumes of complex image data at a quality and speed adequate for routine phenotypic screening in pharmaceutical research. However, while general purpose deep learning frameworks are readily available, they are not readily applicable to images from automated microscopy. During the past 3 years, we have optimized deep learning networks for this type of data and validated the approach across diverse assays with several industry partners. From this work, we have extracted five essential design principles that we believe should guide deep learning-based analysis of high-content images and multiparameter data: (1) insightful data representation, (2) automation of training, (3) multilevel quality control, (4) knowledge embedding and transfer to new assays, and (5) enterprise integration. We report a new deep learning-based software that embodies these principles, Genedata Imagence, which allows screening scientists to reliably detect stable endpoints for primary drug response, assess toxicity and safety-relevant effects, and discover new phenotypes and compound classes. Furthermore, we show how the software retains expert knowledge from its training on a particular assay and successfully reapplies it to different, novel assays in an automated fashion.


Subject(s)
Drug Discovery/trends , High-Throughput Screening Assays , Molecular Imaging , Signal Transduction/genetics , Automation , Deep Learning , Humans , Image Processing, Computer-Assisted , Microscopy , Software
6.
Drug Res (Stuttg) ; 68(6): 305-310, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29341027

ABSTRACT

Deep learning has boosted artificial intelligence over the past 5 years and is now seen as one of the major areas of technological innovation, predicted to replace many repetitive but complex human tasks within the next decade. It is also expected to be game-changing for research activities in pharma and the life sciences, where large sets of similar yet complex data samples are systematically analyzed. Deep learning is currently conquering former expert domains, especially in areas requiring perception that were previously not amenable to standard machine learning. A typical example is the automated analysis of images, which are produced en masse in many domains, e.g., in high-content screening or digital pathology. Deep learning makes it possible to create competitive applications in domains so far considered the core of 'human intelligence'. Applications of artificial intelligence have been enabled in recent years by (i) the massive availability of data samples collected in pharma-driven drug programs ('big data'), (ii) algorithmic advancements in deep learning, and (iii) increases in compute power. Such applications are built on software frameworks with specific strengths and weaknesses. Here, we introduce typical applications and underlying frameworks for deep learning, together with a set of practical criteria for developing production-ready solutions in life science and pharma research. Based on our own experience in successfully developing deep learning applications, we provide suggestions and a baseline for selecting the most suitable frameworks for future-proof and cost-effective development.


Subject(s)
Biological Science Disciplines/methods , Drug Industry/methods , Machine Learning , Software
7.
SLAS Discov ; 22(2): 203-209, 2017 02.
Article in English | MEDLINE | ID: mdl-27789754

ABSTRACT

Surface plasmon resonance (SPR) is a powerful method for obtaining detailed molecular interaction parameters. Modern instrumentation with its increased throughput has enabled routine screening by SPR in hit-to-lead and lead optimization programs, and SPR has become a mainstream drug discovery technology. However, the processing and reporting of SPR data in drug discovery are typically performed manually, which is both time-consuming and tedious. Here, we present the workflow concept, design and experiences with a software module relying on a single, browser-based software platform for the processing, analysis, and reporting of SPR data. The efficiency of this concept lies in the immediate availability of end results: data are processed and analyzed upon loading the raw data file, allowing the user to immediately quality control the results. Once completed, the user can automatically report those results to data repositories for corporate access and quickly generate printed reports or documents. The software module has resulted in a very efficient and effective workflow through saved time and improved quality control. We discuss these benefits and show how this process defines a new benchmark in the drug discovery industry for the handling, interpretation, visualization, and sharing of SPR data.
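The processing this abstract describes typically centers on fitting a 1:1 Langmuir interaction model to sensorgrams to extract kinetic constants. The sketch below simulates and refits one association/dissociation cycle; it is a generic illustration of that standard model, not the software module's actual algorithm, and all concentrations and rate constants are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def sensorgram(t, ka, kd, Rmax, conc, t_off):
    # 1:1 Langmuir binding: association while analyte flows (t <= t_off),
    # exponential dissociation after the injection ends
    kobs = ka * conc + kd
    Req = Rmax * ka * conc / kobs           # steady-state response at this conc.
    assoc = Req * (1.0 - np.exp(-kobs * t))
    R_off = Req * (1.0 - np.exp(-kobs * t_off))
    dissoc = R_off * np.exp(-kd * (t - t_off))
    return np.where(t <= t_off, assoc, dissoc)

t = np.linspace(0, 300, 601)     # seconds; injection ends at t = 150 s
conc, t_off = 100e-9, 150.0      # 100 nM analyte
rng = np.random.default_rng(2)
# simulate with ka = 1e5 /M/s, kd = 1e-2 /s  =>  true KD = 100 nM
R = sensorgram(t, 1e5, 1e-2, 80.0, conc, t_off) + rng.normal(0, 0.5, t.size)

model = lambda t, ka, kd, Rmax: sensorgram(t, ka, kd, Rmax, conc, t_off)
popt, _ = curve_fit(model, t, R, p0=[1e5, 1e-2, 100.0])
ka_fit, kd_fit, _ = popt
print(kd_fit / ka_fit)           # fitted KD in M
```

In a production workflow of the kind described, this fit runs automatically on file load across all concentrations and cycles, so that QC and end results (ka, kd, KD) are available for review immediately.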


Subject(s)
Biosensing Techniques/methods , Data Analysis , Drug Discovery , Drug Evaluation, Preclinical/trends , Drug Design , Humans , Pharmaceutical Research , Software , Surface Plasmon Resonance , Workflow
9.
J Biomol Screen ; 12(8): 1042-9, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18087069

ABSTRACT

Recent technological advances in high-content screening instrumentation have increased its ease of use and throughput, expanding the application of high-content screening to the early stages of drug discovery. However, high-content screens produce complex data sets, presenting a challenge for both extraction and interpretation of meaningful information. This shifts the high-content screening process bottleneck from the experimental to the analytical stage. In this article, the authors discuss different approaches of data analysis, using a phenotypic neurite outgrowth screen as an example. Distance measurements and hierarchical clustering methods lead to a profound understanding of different high-content screening readouts. In addition, the authors introduce a hit selection procedure based on machine learning methods and demonstrate that this method increases the hit verification rate significantly (up to a factor of 5), compared to conventional hit selection based on single readouts only.
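The gain from multivariate hit selection over single-readout thresholds can be illustrated with a toy example: a classifier trained on several readouts versus a cutoff on one readout. The data, model, and effect size below are synthetic and chosen for illustration only; the paper's method and verification-rate figures come from a real neurite outgrowth screen.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
# four simulated HCS readouts per well (e.g. neurite length, branch count,
# cell count, marker intensity)
X = rng.normal(size=(n, 4))
# the true "hit" label depends on a combination of readouts, not any single one
y = ((0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.5 * X[:, 2]) > 0.5).astype(int)

# conventional selection: threshold on a single readout
single = ((X[:, 0] > 0.5) == y).mean()

# multivariate selection: classifier trained on annotated wells
multi = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
print(single, multi)
```

Because the phenotype is defined jointly by several readouts, the classifier recovers structure that no single-readout cutoff can, which is the intuition behind the improved hit verification rate reported in the article.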


Subject(s)
Neurites/metabolism , Tissue Array Analysis/standards , Cluster Analysis , Multivariate Analysis , Quality Control , Reproducibility of Results
10.
Curr Opin Drug Discov Devel ; 8(3): 334-46, 2005 May.
Article in English | MEDLINE | ID: mdl-15892249

ABSTRACT

Lead discovery is a complex process that is intimately linked to chemistry, but which is also increasingly driven by biological sciences. In an industrial pharmaceutical research environment the process is defined by highly automated technologies for target identification and validation, compound library screening, and compound efficacy assessment. The huge volumes and complex dependencies of data produced by such large-scale experiments have led to a reassessment of data analysis processes, resulting in the development of novel data analysis strategies tailored to drug discovery. In this review, recent progress in data-driven research applications is reported, focusing on the use and processing of transcriptomics, proteomics and high-throughput screening data. The successful application of specialized data analysis procedures in many companies is discussed, which has resulted in significant improvements in decision-making processes for progressing therapeutic targets to promising leads.


Subject(s)
Computational Biology , Drug Design , Statistics as Topic , Technology, Pharmaceutical/methods , Animals , Decision Support Techniques , Drug Evaluation, Preclinical , Humans