Results 1 - 17 of 17
1.
SLAS Technol ; 25(5): 427-435, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32726559

ABSTRACT

Covance Drug Development produces more than 55 million test results via its central laboratory services, requiring the delivery of more than 10 million reports annually to investigators at 35,000 sites in 89 countries. Historically, most of these data were delivered via fax or electronic data transfers in delimited text or SAS transport file format. Here, we present a new web portal that allows secure online delivery of laboratory results, reports, manuals, and training materials, and enables collaboration with investigational sites through alerts, announcements, and communications. By leveraging a three-tier architecture composed of preexisting data warehouses augmented with an application-specific relational database to store configuration data and materialized views for performance optimizations, a RESTful web application programming interface (API), and a browser-based single-page application for user access, the system offers greatly improved capabilities and user experience without requiring any changes to the underlying acquisition systems and data stores. Following a 3-month controlled rollout with 6,500 users at early-adopter sites, the Xcellerate Investigator Portal was deployed to all 240,000 of Covance's Central Laboratory Services' existing users, gaining widespread acceptance and pointing to significant benefits in productivity, convenience, and user experience.
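
To make the three-tier design concrete, the sketch below shows what one endpoint of such a RESTful API layer could look like. It is a minimal illustration using the Flask framework; the URL scheme, field names, and the in-memory stub standing in for the warehouse-backed materialized views are assumptions, not details of the Xcellerate implementation.

# Hypothetical endpoint: list laboratory reports visible to one investigational site.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the materialized views in the application-specific database.
REPORTS = [
    {"report_id": "R-001", "site_id": "S-12345", "status": "final"},
    {"report_id": "R-002", "site_id": "S-12345", "status": "preliminary"},
]

@app.route("/api/v1/sites/<site_id>/reports", methods=["GET"])
def list_reports(site_id):
    """Return the reports for a site, optionally filtered by status."""
    status = request.args.get("status")
    reports = [r for r in REPORTS if r["site_id"] == site_id]
    if status:
        reports = [r for r in reports if r["status"] == status]
    return jsonify({"site_id": site_id, "reports": reports})

if __name__ == "__main__":
    app.run()

The browser-based single-page application would call such endpoints over HTTPS and render the results, leaving the underlying acquisition systems and data stores untouched.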


Subject(s)
Communication , Internet , Laboratories , Software , Humans , User-Computer Interface
2.
Database (Oxford) ; 2019, 2019 Jan 01.
Article in English | MEDLINE | ID: mdl-30942863

ABSTRACT

Timely, consistent and integrated access to clinical trial data remains one of the pharmaceutical industry's most pressing needs. As part of a comprehensive clinical data repository, we have developed a data warehouse that can integrate operational data from any source, conform it to a canonical data model and make it accessible to study teams in a timely, secure and contextualized manner to support operational oversight, proactive risk management and other analytic and reporting needs. Our solution consists of a dimensional relational data warehouse; a set of extraction, transformation and loading processes to coordinate data ingestion and mapping; a generalizable metrics engine to enable the computation of operational metrics and key performance, quality and risk indicators; and a set of graphical user interfaces to facilitate configuration, management and administration. When combined with the appropriate data visualization tools, the warehouse enables convenient access to raw operational data and derived metrics to help track study conduct and performance, identify and mitigate risks, monitor and improve operational processes, manage resource allocation and strengthen investigator and sponsor relationships, among other purposes.
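
As a toy illustration of the kind of operational metric such a metrics engine computes, the snippet below derives a per-site data-entry-lag indicator with pandas. The flat input table and column names are invented for the example; the actual warehouse uses a dimensional schema and a configurable metrics engine.

import pandas as pd

# Toy operational data: one row per subject visit.
visits = pd.DataFrame({
    "site_id":    ["S-01", "S-01", "S-02", "S-02"],
    "visit_date": pd.to_datetime(["2019-01-02", "2019-01-10", "2019-01-05", "2019-01-20"]),
    "entry_date": pd.to_datetime(["2019-01-06", "2019-01-11", "2019-01-15", "2019-01-22"]),
})

# Key performance indicator: median days from subject visit to data entry, per site.
visits["entry_lag_days"] = (visits["entry_date"] - visits["visit_date"]).dt.days
kpi = visits.groupby("site_id")["entry_lag_days"].median().rename("median_entry_lag_days")
print(kpi)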


Subject(s)
Clinical Trials as Topic , Data Warehousing , Database Management Systems , Humans , Research Report
3.
Database (Oxford) ; 2019, 2019 Jan 01.
Article in English | MEDLINE | ID: mdl-30854563

ABSTRACT

Clinical trial data are typically collected through multiple systems developed by different vendors using different technologies and data standards. Those data need to be integrated, standardized and transformed for a variety of monitoring and reporting purposes. The need to process large volumes of often inconsistent data in the presence of ever-changing requirements poses a significant technical challenge. As part of a comprehensive clinical data repository, we have developed a data warehouse that integrates patient data from any source, standardizes it and makes it accessible to study teams in a timely manner to support a wide range of analytic tasks for both in-flight and completed studies. Our solution combines Apache HBase (a NoSQL column store), Apache Phoenix (a massively parallel relational query engine) and a user-friendly interface to facilitate efficient loading of large volumes of data under incomplete or ambiguous specifications, utilizing an extract-load-transform design pattern that defers data mapping until query time. This approach allows us to maintain a single copy of the data and transform it dynamically into any desirable format without requiring additional storage. Changes to the mapping specifications can be easily introduced and multiple representations of the data can be made available concurrently. Further, by versioning the data and the transformations separately, we can apply historical maps to current data or current maps to historical data, which simplifies the maintenance of data cuts and facilitates interim analyses for adaptive trials. The result is a highly scalable, secure and redundant solution that combines the flexibility of a NoSQL store with the robustness of a relational query engine to support a broad range of applications, including clinical data management, medical review, risk-based monitoring, safety signal detection, post hoc analysis of completed studies and many others.
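
The extract-load-transform idea of deferring mapping until query time can be illustrated with a small, self-contained sketch: raw records are stored untouched and versioned mapping functions are applied on read. The field names and mapping rules are hypothetical; the production system does this at scale on Apache HBase and Apache Phoenix.

# Raw records are loaded exactly as received; no up-front standardization.
RAW_STORE = [
    {"subj": "001", "sbp": "120", "visit": "V1"},
    {"SUBJECT": "002", "SYSBP": 135, "VISIT": "V1"},
]

# Versioned mapping specifications, applied only at query time.
MAPS = {
    "v1": lambda r: {"usubjid": r.get("subj") or r.get("SUBJECT"),
                     "sysbp": float(r.get("sbp") or r.get("SYSBP"))},
    "v2": lambda r: {"usubjid": r.get("subj") or r.get("SUBJECT"),
                     "sysbp_mmhg": float(r.get("sbp") or r.get("SYSBP")),
                     "visit": r.get("visit") or r.get("VISIT")},
}

def query(map_version):
    """Transform the single stored copy of the data on the fly."""
    mapper = MAPS[map_version]
    return [mapper(rec) for rec in RAW_STORE]

print(query("v1"))
print(query("v2"))   # same raw data, a different concurrent representation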


Subject(s)
Clinical Trials as Topic , Data Warehousing , Database Management Systems , Humans , Machine Learning , User-Computer Interface
4.
Database (Oxford) ; 2019, 2019 Jan 01.
Article in English | MEDLINE | ID: mdl-30773591

ABSTRACT

Assembly of complete and error-free clinical trial data sets for statistical analysis and regulatory submission requires extensive effort and communication among investigational sites, central laboratories, pharmaceutical sponsors, contract research organizations and other entities. Traditionally, this data is captured, cleaned and reconciled through multiple disjointed systems and processes, which is resource intensive and error prone. Here, we introduce a new system for clinical data review that helps data managers identify missing, erroneous and inconsistent data and manage queries in a unified, system-agnostic and efficient way. Our solution enables timely and integrated access to all study data regardless of source, facilitates the review of validation and discrepancy checks and the management of the resulting queries, tracks the status of page review, verification and locking activities, monitors subject data cleanliness and readiness for database lock and provides extensive configuration options to meet any study's needs, automation for regular updates and fit-for-purpose user interfaces for global oversight and problem detection.
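
A hedged sketch of the kind of validation and discrepancy check such a review tool runs is shown below; the record layout, the plausibility limits and the idea of turning each finding into a site query are illustrative assumptions, not the system's actual rules.

records = [
    {"usubjid": "001", "visit": "V2", "weight_kg": None},
    {"usubjid": "002", "visit": "V2", "weight_kg": 540.0},
    {"usubjid": "003", "visit": "V2", "weight_kg": 71.5},
]

def run_checks(rec):
    """Return a list of findings for one record."""
    issues = []
    if rec["weight_kg"] is None:
        issues.append("missing weight")
    elif not 20.0 <= rec["weight_kg"] <= 400.0:
        issues.append(f"implausible weight: {rec['weight_kg']} kg")
    return issues

# Each flagged record would become a query routed to the investigational site.
queries = []
for rec in records:
    issues = run_checks(rec)
    if issues:
        queries.append({"usubjid": rec["usubjid"], "visit": rec["visit"], "issues": issues})

print(queries)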


Subject(s)
Clinical Trials as Topic , Databases as Topic , Data Warehousing
5.
JAMIA Open ; 2(2): 216-221, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31984356

ABSTRACT

OBJECTIVE: We present a new system to track, manage, and report on all risks and issues encountered during a clinical trial. MATERIALS AND METHODS: Our solution utilizes JIRA, a popular issue and project tracking tool for software development, augmented by third-party and custom-built plugins to provide the additional functionality missing from the core product. RESULTS: The new system integrates all issue types under a single tracking tool and offers a range of capabilities, including configurable issue management workflows, seamless integration with other clinical systems, extensive history, reporting, and trending, and an intuitive web interface. DISCUSSION AND CONCLUSION: By preserving the linkage between risks, issues, actions, decisions, and outcomes, the system allows study teams to assess the impact and effectiveness of their risk management strategies and present a coherent account of how the trial was conducted. Since the tool was put in production, we have observed an increase in the number of reported issues and a decrease in the median issue resolution time which, along with the positive user feedback, point to marked improvements in quality, transparency, productivity, and teamwork.
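
Because the system is built on JIRA, other clinical applications can raise issues programmatically through JIRA's standard REST interface. The sketch below shows one way that could look; the server URL, credentials, project key and the custom "Risk" issue type are assumptions about a particular configuration rather than details from the paper.

import requests

JIRA_URL = "https://jira.example.com"      # hypothetical instance
AUTH = ("svc-account", "app-password")     # placeholder credentials

payload = {
    "fields": {
        "project":   {"key": "STUDY1"},    # hypothetical project key
        "issuetype": {"name": "Risk"},     # assumes a configured custom issue type
        "summary":   "Elevated query aging at site S-12345",
        "description": "Median query resolution time exceeded the agreed threshold.",
    }
}

resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
resp.raise_for_status()
print("Created issue", resp.json()["key"])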

6.
Clin Ther ; 40(7): 1204-1212, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30100201

ABSTRACT

PURPOSE: Clinical trial monitoring is an essential component of drug development aimed at safeguarding subject safety, data quality, and protocol compliance by focusing sponsor oversight on the most important aspects of study conduct. In recent years, regulatory agencies, industry consortia, and nonprofit collaborations between industry and regulators, such as TransCelerate and the International Council for Harmonisation, have been advocating a new, risk-based approach to monitoring clinical trials that places increased emphasis on critical data and processes and encourages greater use of centralized monitoring. However, how best to implement risk-based monitoring (RBM) remains unclear and subject to wide variations in tools and methodologies. The nonprescriptive nature of the regulatory guidelines, coupled with limitations in software technology, challenges in operationalization, and lack of robust evidence of superior outcomes, has hindered its widespread adoption. METHODS: We describe a holistic solution that combines convenient access to data, advanced analytics, and seamless integration with established technology infrastructure to enable comprehensive assessment and mitigation of risk at the study, site, and subject level. FINDINGS: Using data from completed RBM studies carried out in the last 4 years, we demonstrate that our implementation of RBM improves the efficiency and effectiveness of the clinical oversight process as measured across various quality, timeline, and cost dimensions. IMPLICATIONS: These results provide strong evidence that our RBM methodology can significantly improve the clinical oversight process and do so at a lower cost through more intelligent deployment of monitoring resources to the sites that need the most attention.
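
To illustrate the kind of centralized, site-level risk assessment described above, the sketch below standardizes a few key risk indicators across sites and combines them into a weighted composite score. The indicators, weights and scoring logic are invented for the example and are not the methodology reported in the paper.

import pandas as pd

kri = pd.DataFrame({
    "site_id":        ["S-01", "S-02", "S-03", "S-04"],
    "query_rate":     [0.10, 0.45, 0.12, 0.09],   # queries per data point
    "ae_rate":        [0.20, 0.05, 0.22, 0.18],   # adverse events per subject
    "entry_lag_days": [3.0, 14.0, 4.0, 2.5],
}).set_index("site_id")

weights = {"query_rate": 0.4, "ae_rate": 0.3, "entry_lag_days": 0.3}

# Standardize each indicator across sites; deviations in either direction
# (e.g., unusually low adverse-event reporting) add to the risk score.
zscores = ((kri - kri.mean()) / kri.std(ddof=0)).abs()
risk_score = sum(w * zscores[col] for col, w in weights.items())
print(risk_score.sort_values(ascending=False))   # most at-risk sites first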


Subject(s)
Clinical Trials as Topic , Data Accuracy , Guideline Adherence , Humans , Patient Safety , Risk
7.
Curr Top Med Chem ; 12(11): 1237-42, 2012.
Article in English | MEDLINE | ID: mdl-22571793

ABSTRACT

Drug discovery is a highly complex process requiring scientists from wide-ranging disciplines to work together in a well-coordinated and streamlined fashion. While the process can be compartmentalized into well-defined functional domains, the success of the entire enterprise rests on the ability to exchange data conveniently between these domains, and integrate it in meaningful ways to support the design, execution and interpretation of experiments aimed at optimizing the efficacy and safety of new drugs. This, in turn, requires information management systems that can support many different types of scientific technologies generating data of imposing complexity, diversity and volume. Here, we describe the key components of our Advanced Biological and Chemical Discovery (ABCD), a software platform designed at Johnson & Johnson to bring coherence to the way discovery data is collected, annotated, organized, integrated, mined and visualized. Rather than adding to the Gordian knot of one-off solutions built to serve a single purpose for a single set of users that one typically encounters in the pharmaceutical industry, we sought to develop a framework that could be extended and leveraged across different application domains, and offer a consistent user experience marked by superior performance and usability. In this work, several major components of ABCD are highlighted, ranging from operational subsystems for managing reagents, reactions, compounds, and assays, to advanced data mining and visualization tools for SAR analysis and interpretation. All these capabilities are delivered through a common application front-end called Third Dimension Explorer (3DX), a modular, multifunctional and extensible platform designed to be the "Swiss-army knife" of the discovery scientist.


Subject(s)
Drug Discovery , Software , Databases, Factual , Drug Industry
8.
J Chem Inf Model ; 51(12): 3113-30, 2011 Dec 27.
Article in English | MEDLINE | ID: mdl-22035187

ABSTRACT

Efficient substructure searching is a key requirement for any chemical information management system. In this paper, we describe the substructure search capabilities of ABCD, an integrated drug discovery informatics platform developed at Johnson & Johnson Pharmaceutical Research & Development, L.L.C. The solution consists of several algorithmic components: 1) a pattern mapping algorithm for solving the subgraph isomorphism problem, 2) an indexing scheme that enables very fast substructure searches on large structure files, 3) the incorporation of that indexing scheme into an Oracle cartridge to enable querying large relational databases through SQL, and 4) a cost estimation scheme that allows the Oracle cost-based optimizer to generate a good execution plan when a substructure search is combined with additional constraints in a single SQL query. The algorithm was tested on a public database comprising nearly 1 million molecules using 4,629 substructure queries, the vast majority of which were submitted by discovery scientists over the last 2.5 years of user acceptance testing of ABCD. 80.7% of these queries were completed in less than a second and 96.8% in less than ten seconds on a single CPU, while on eight processing cores these numbers increased to 93.2% and 99.7%, respectively. The slower queries involved extremely generic patterns that returned the entire database as screening hits and required extensive atom-by-atom verification.
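
The screening-then-verification strategy behind these components can be sketched with the open-source RDKit toolkit, used here purely as a stand-in for the ABCD cartridge: a pattern-fingerprint screen discards molecules that cannot contain the query, and only the survivors undergo the expensive atom-by-atom (subgraph isomorphism) check. The three-molecule "database" and the indole query are toy examples.

from rdkit import Chem

database_smiles = ["c1ccccc1O", "CCN(CC)CC", "c1ccc2[nH]ccc2c1"]   # toy "database"
query = Chem.MolFromSmiles("c1ccc2[nH]ccc2c1")                     # indole query

query_bits = set(Chem.PatternFingerprint(query).GetOnBits())

hits = []
for smi in database_smiles:
    mol = Chem.MolFromSmiles(smi)
    # Screening step: every bit set by the query must also be set by the molecule.
    if not query_bits.issubset(Chem.PatternFingerprint(mol).GetOnBits()):
        continue
    # Verification step: full subgraph-isomorphism check on the surviving candidates.
    if mol.HasSubstructMatch(query):
        hits.append(smi)

print(hits)   # expected: ['c1ccc2[nH]ccc2c1']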


Subject(s)
Algorithms , Drug Discovery , Informatics/methods , Small Molecule Libraries/chemistry , Databases, Factual , Drug Discovery/economics , Informatics/economics , Time Factors
10.
J Chem Inf Model ; 49(10): 2221-30, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19791782

ABSTRACT

We recently introduced SAR maps, a new interactive method for visualizing structure-activity relationships targeted specifically at medicinal chemists. A SAR map renders an R-group decomposition of a congeneric series as a rectangular matrix of cells, each representing a unique combination of R-groups color-coded by a user-selected property of the corresponding compound. In this paper, we describe an enhanced version that greatly expands the types of visualizations that can be displayed inside the cells. Examples include multidimensional histograms and pie charts that visualize the biological profiles of compounds across an entire panel of assays, forms that display specific fields on user-defined layouts, aligned 3D structure drawings that show the relative orientation of different substituents, dose-response curves, images of crystals or diffraction patterns, and many others. These enhancements, which capitalize on the modular architecture of their host application Third Dimension Explorer (3DX), allow the medicinal chemist to interactively analyze complex scaffolds with multiple substitution sites, correlate substituent structure and biological activity across multiple dimensions simultaneously, identify missing analogs or screening data, and produce information-dense visualizations for presentations and publications. The new tool has an intuitive user interface that makes it appealing to experts and nonexperts alike.
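
The R-group decomposition that feeds a SAR map can be reproduced in miniature with the open-source RDKit toolkit, used here only as a stand-in for the 3DX implementation; the biphenyl scaffold and the two analogs are toy examples, and the R-labels RDKit assigns may differ from those in the paper's figures.

from rdkit import Chem
from rdkit.Chem import rdRGroupDecomposition

core = Chem.MolFromSmiles("c1ccc(-c2ccccc2)cc1")        # toy biphenyl scaffold
analogs = [Chem.MolFromSmiles(smi) for smi in
           ["Cc1ccc(-c2ccccc2)cc1",                     # 4-methylbiphenyl
            "Oc1ccc(-c2ccc(F)cc2)cc1"]]                 # 4-hydroxy-4'-fluorobiphenyl

groups, unmatched = rdRGroupDecomposition.RGroupDecompose([core], analogs, asSmiles=True)

# Each row maps a compound to its core and R-group SMILES; unique combinations
# of R-groups are what index the cells of a SAR map.
for row in groups:
    print(row)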


Subject(s)
Chemistry, Pharmaceutical/methods , Computer Graphics , Dose-Response Relationship, Drug , Molecular Conformation , Structure-Activity Relationship , User-Computer Interface
11.
J Chem Inf Model ; 47(6): 1999-2014, 2007.
Article in English | MEDLINE | ID: mdl-17973472

ABSTRACT

We present ABCD, an integrated drug discovery informatics platform developed at Johnson & Johnson Pharmaceutical Research & Development, L.L.C. ABCD is an attempt to bridge multiple continents, data systems, and cultures using modern information technology and to provide scientists with tools that allow them to analyze multifactorial SAR and make informed, data-driven decisions. The system consists of three major components: (1) a data warehouse, which combines data from multiple chemical and pharmacological transactional databases, designed for supreme query performance; (2) a state-of-the-art application suite, which facilitates data upload, retrieval, mining, and reporting, and (3) a workspace, which facilitates collaboration and data sharing by allowing users to share queries, templates, results, and reports across project teams, campuses, and other organizational units. Chemical intelligence, performance, and analytical sophistication lie at the heart of the new system, which was developed entirely in-house. ABCD is used routinely by more than 1000 scientists around the world and is rapidly expanding into other functional areas within the J&J organization.


Subject(s)
Biology , Computational Biology , Computers , Imaging, Three-Dimensional
12.
J Med Chem ; 50(24): 5926-37, 2007 Nov 29.
Article in English | MEDLINE | ID: mdl-17958407

ABSTRACT

We present structure-activity relationship (SAR) maps, a new, intuitive method for visualizing SARs targeted specifically at medicinal chemists. The method renders an R-group decomposition of a chemical series as a rectangular matrix of cells, each representing a unique combination of R-groups and thus a unique compound. Color-coding the cells by chemical property or biological activity allows patterns to be easily identified and exploited. SAR maps allow the medicinal chemist to interactively analyze complicated datasets with multiple R-group dimensions, rapidly correlate substituent structure and biological activity, assess additivity of substituent effects, identify missing analogs and screening data, and create compelling graphical representations for presentation and publication. We believe that this method fills a long-standing gap in the medicinal chemist's toolset for understanding and rationalizing SAR.
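
A bare-bones version of the SAR-map layout can be sketched with a pandas pivot: rows and columns are the R1 and R2 substituents and each cell holds (and is color-coded by) a property such as pIC50. The substituents, values and color cutoffs below are invented, and the real tool is an interactive, far richer rendering.

import pandas as pd

series = pd.DataFrame({
    "R1":    ["H",  "H",   "Me", "Me",  "Cl"],
    "R2":    ["Ph", "OMe", "Ph", "OMe", "Ph"],
    "pIC50": [6.2,  5.1,   7.4,  6.8,   7.9],
})

sar_map = series.pivot(index="R1", columns="R2", values="pIC50")
print(sar_map)                      # empty (NaN) cells expose missing analogs

def cell_color(p):                  # crude stand-in for the color-coding
    if pd.isna(p):
        return ""                   # missing analog
    return "green" if p >= 7.0 else "yellow" if p >= 6.0 else "red"

print(sar_map.applymap(cell_color))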


Subject(s)
Drug Design , Structure-Activity Relationship , CDC2 Protein Kinase/antagonists & inhibitors , Chemistry, Pharmaceutical , Models, Molecular , Molecular Conformation , Piperazines/chemistry , Piperidines/chemistry , Stereoisomerism , Triazoles/chemistry , Vascular Endothelial Growth Factor Receptor-2/antagonists & inhibitors
13.
J Chem Inf Model ; 46(6): 2651-60, 2006.
Article in English | MEDLINE | ID: mdl-17125205

ABSTRACT

We report on the structural comparison of the corporate collections of Johnson & Johnson Pharmaceutical Research & Development (JNJPRD) and 3-Dimensional Pharmaceuticals (3DP), performed in the context of the recent acquisition of 3DP by JNJPRD. The main objective of the study was to assess the druglikeness of the 3DP library and the extent to which it enriched the chemical diversity of the JNJPRD corporate collection. The two databases, at the time of acquisition, collectively contained more than 1.1 million compounds with a clearly defined structural description. The analysis was based on a clustering approach and aimed at providing an intuitive quantitative estimate and visual representation of this enrichment. A novel hierarchical clustering algorithm called divisive k-means was employed in combination with Kelley's cluster-level selection method to partition the combined data set into clusters, and the diversity contribution of each library was evaluated as a function of the relative occupancy of these clusters. Typical 3DP chemotypes enriching the diversity of the JNJPRD collection were catalogued and visualized using a modified maximum common substructure algorithm. The joint collection of JNJPRD and 3DP compounds was also compared to other databases of known medicinally active or druglike compounds. The potential of the methodology for the analysis of very large chemical databases is discussed.
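
A stripped-down version of divisive k-means can be sketched as recursive 2-means splitting with scikit-learn; here it runs on random 2-D points rather than chemical descriptors, and the simple size-based stopping rule stands in for Kelley's cluster-level selection, so it conveys only the flavor of the algorithm used in the study.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))          # stand-in for compound descriptors

def divisive_kmeans(X, max_size=25):
    """Recursively split clusters in two until none exceeds max_size."""
    clusters, queue = [], [np.arange(len(X))]
    while queue:
        idx = queue.pop()
        if len(idx) <= max_size:
            clusters.append(idx)
            continue
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[idx])
        queue.extend([idx[labels == 0], idx[labels == 1]])
    return clusters

print([len(c) for c in divisive_kmeans(X)])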


Subject(s)
Chemistry, Pharmaceutical/methods , Cluster Analysis , Combinatorial Chemistry Techniques , Drug Design , Technology, Pharmaceutical/methods , Algorithms , Chemistry/methods , Databases, Factual , Information Systems , Ligands , Models, Statistical , Molecular Weight , Software
14.
J Chem Inf Comput Sci ; 42(4): 903-11, 2002.
Article in English | MEDLINE | ID: mdl-12132892

ABSTRACT

Despite their growing popularity among neural network practitioners, ensemble methods have not been widely adopted in structure-activity and structure-property correlation. Neural networks are inherently unstable, in that small changes in the training set and/or training parameters can lead to large changes in their generalization performance. Recent research has shown that by capitalizing on the diversity of the individual models, ensemble techniques can minimize uncertainty and produce more stable and accurate predictors. In this work, we present a critical assessment of the most common ensemble technique known as bootstrap aggregation, or bagging, as applied to QSAR and QSPR. Although aggregation does offer definitive advantages, we demonstrate that bagging may not be the best possible choice and that simpler techniques such as retraining with the full sample can often produce superior results. These findings are rationalized using Krogh and Vedelsby's decomposition of the generalization error into a term that measures the average generalization performance of the individual networks and a term that measures the diversity among them. For networks that are designed to resist over-fitting, the benefits of aggregation are clear but not overwhelming.
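
For reference, the Krogh and Vedelsby decomposition invoked above can be written as follows, in generic notation for a weighted ensemble under squared-error loss (the symbols are ours, not the paper's):

E = \bar{E} - \bar{A},
\qquad
\bar{E} = \sum_{\alpha} w_{\alpha} E_{\alpha},
\qquad
\bar{A} = \sum_{\alpha} w_{\alpha} \int p(x)\, \bigl( f_{\alpha}(x) - \bar{f}(x) \bigr)^{2}\, dx,

where f_\alpha are the individual networks, \bar{f} = \sum_\alpha w_\alpha f_\alpha is the ensemble predictor, E_\alpha is the generalization error of member \alpha and \bar{A} is the ensemble ambiguity. Since \bar{A} \ge 0, the ensemble can never do worse than the average of its members, but the gain scales with their diversity, which is why aggregation brings only modest benefits for networks already regularized against over-fitting.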


Subject(s)
Neural Networks, Computer , Quantitative Structure-Activity Relationship , Computer Simulation , Databases, Factual , Software
15.
Nat Rev Drug Discov ; 1(5): 337-46, 2002 May.
Article in English | MEDLINE | ID: mdl-12120409

ABSTRACT

The multitude of potential drug targets emerging from genome sequencing demands new approaches to drug discovery. A chemogenomics strategy, which involves the generation of small-molecule compounds that can be used both as tools to probe biological mechanisms and as leads for drug-property optimization, provides a highly parallel, industrialized solution. Key to the success of this strategy is an integrated suite of chemi-informatics applications that can allow the rapid and directed optimization of chemical compounds with drug-like properties using 'just-in-time' combinatorial chemical synthesis. An effective embodiment of this process requires new computational and data-mining tools that cover all aspects of library generation, compound selection and experimental design, and work effectively on a massive scale.


Subject(s)
Combinatorial Chemistry Techniques , Genomics , Research Design
16.
Comb Chem High Throughput Screen ; 5(2): 167-78, 2002 Mar.
Article in English | MEDLINE | ID: mdl-11966425

ABSTRACT

One can distinguish between two kinds of virtual combinatorial libraries: viable and accessible. Viable libraries are relatively small in size, are assembled from readily available reagents that have been filtered by the medicinal chemist, and often have a physical counterpart. Conversely, accessible libraries can encompass millions or billions of structures, typically include all possible reagents that are in principle compatible with a particular reaction scheme, and can never be physically synthesized in their entirety. Although the analysis of viable virtual libraries is relatively straightforward, the handling of large accessible libraries requires methods that scale well with respect to library size. In this work, we present novel, efficient and scalable techniques for the construction, analysis, and in silico screening of massive virtual combinatorial libraries.
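
The scale argument can be made concrete with a toy enumeration: products are generated lazily from reagent lists and screened on the fly, so the full accessible library never has to be materialized. The reagent identifiers, the "reaction" (plain string assembly) and the molecular-weight-style filter below are all invented for illustration.

from itertools import product

amines = ["amine-001", "amine-002", "amine-003"]     # placeholder reagent IDs
acids = ["acid-001", "acid-002"]
mw = {"amine-001": 90, "amine-002": 150, "amine-003": 210, "acid-001": 120, "acid-002": 260}

def enumerate_library(max_mw=400):
    """Lazily enumerate products so huge libraries never need to be materialized."""
    for amine, acid in product(amines, acids):
        total = mw[amine] + mw[acid]
        if total <= max_mw:              # in silico screen applied on the fly
            yield f"{amine}+{acid}", total

for product_id, total in enumerate_library():
    print(product_id, total)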


Subject(s)
Combinatorial Chemistry Techniques , Quantitative Structure-Activity Relationship
17.
Chem Rev ; 96(3): 1027-1044, 1996 May 09.
Article in English | MEDLINE | ID: mdl-11848779