1.
Front Microbiol ; 15: 1293928, 2024.
Article in English | MEDLINE | ID: mdl-38414766

ABSTRACT

High hydrostatic pressure (HHP) is a key driver of life's evolution and diversification on Earth. Icy moons such as Titan, Europa, and Enceladus harbor potentially habitable high-pressure environments within their subsurface oceans. Titan, in particular, is modeled to have subsurface ocean pressures ≥ 150 MPa, which are above the highest pressures known to support life on Earth in natural ecosystems. Piezophiles are organisms that grow optimally at pressures higher than atmospheric pressure (0.1 MPa) and have specialized adaptations to the physical constraints of high-pressure environments (up to ~110 MPa at Challenger Deep, the highest-pressure deep-sea habitat explored). While non-piezophilic microorganisms have been shown to survive short exposures at Titan-relevant pressures, the mechanisms of their survival under such conditions remain largely unknown. To better understand these mechanisms, we conducted a study of gene expression in Shewanella oneidensis MR-1 using a high-pressure experimental culturing system. MR-1 was subjected to short-term (15 min) and long-term (2 h) HHP of 158 MPa, a value consistent with pressures expected near the top of Titan's subsurface ocean. We show that MR-1 is metabolically active in situ at HHP and is capable of viable growth following 2 h of exposure to 158 MPa, with minimal pressure training beforehand. We further find that MR-1 regulates 264 genes in response to short-term HHP, the majority of which are upregulated. Adaptations include upregulation of the genes argA, argB, argC, and argF, involved in arginine biosynthesis, and regulation of genes involved in membrane reconfiguration. MR-1 also utilizes stress-response adaptations common to other environmental extremes, such as genes encoding the cold-shock protein CspG and antioxidant-defense-related genes. This study suggests that Titan's ocean pressures may not limit life, as microorganisms could employ adaptations akin to those demonstrated by terrestrial organisms.

2.
Environ Sci Technol ; 57(33): 12291-12301, 2023 08 22.
Article in English | MEDLINE | ID: mdl-37566783

ABSTRACT

Failure of animal models to predict hepatotoxicity in humans has created a push to develop biological pathway-based alternatives, such as those that use in vitro assays. Public screening programs (e.g., the ToxCast/Tox21 programs) have tested thousands of chemicals using in vitro high-throughput screening (HTS) assays. Developing pathway-based models for simple biological pathways, such as endocrine disruption, has proven successful, but development remains a challenge for complex toxicities like hepatotoxicity because of the many biological events involved. To this end, we developed a computational strategy for building pathway-based models of complex toxicities. Using a database of 2171 chemicals with human hepatotoxicity classifications, we identified 157 out of 1600+ ToxCast/Tox21 HTS assays to be associated with human hepatotoxicity. A computational framework was then used to group these assays by biological target or mechanism into 52 key event (KE) models of hepatotoxicity. Each KE model outputs a KE score summarizing a chemical's potency against a hepatotoxicity-relevant biological target or mechanism. Grouping hepatotoxic chemicals by chemical structure revealed chemical classes with high KE scores, plausibly informing their hepatotoxicity mechanisms. Using KE scores in supervised learning to predict in vivo hepatotoxicity, together with toxicokinetic information, improved predictive performance. This new approach can serve as a universal computational toxicology strategy for a variety of chemical toxicity evaluations.
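The KE-score idea described above can be sketched in a few lines: pool the assay results that probe one biological target and summarize them into a single potency score. All target names, AC50 values, and the median-based aggregation rule below are illustrative assumptions, not the study's actual assays or scoring formula.

```python
from math import log10
from statistics import median

# Hypothetical AC50 values (in micromolar) for one chemical, grouped by the
# biological target each assay probes. Names and numbers are illustrative only.
assay_results = {
    "mitochondrial_toxicity": [1.2, 0.8, 3.5],   # three assays hit this target
    "oxidative_stress": [15.0, 22.0],
    "nuclear_receptor_PXR": [0.05],
}

def ke_score(ac50_values_um):
    """Summarize potency against one key event as the median -log10(AC50 in M).

    Higher scores mean higher potency against that target or mechanism."""
    return median(-log10(v * 1e-6) for v in ac50_values_um)

scores = {target: ke_score(values) for target, values in assay_results.items()}
```

A real pipeline would compute such a score per chemical per KE model and feed the resulting score matrix, together with toxicokinetic data, into a supervised learner.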


Subject(s)
Chemical and Drug Induced Liver Injury , High-Throughput Screening Assays , Animals , Humans , Toxicokinetics , Databases, Factual , Biological Assay
3.
Environ Sci Technol ; 57(16): 6573-6588, 2023 04 25.
Article in English | MEDLINE | ID: mdl-37040559

ABSTRACT

Traditional methodologies for assessing chemical toxicity are expensive and time-consuming. Computational modeling approaches have emerged as low-cost alternatives, especially those used to develop quantitative structure-activity relationship (QSAR) models. However, conventional QSAR models have limited training data, leading to low predictivity for new compounds. We developed a data-driven modeling approach for constructing carcinogenicity-related models and used these models to identify potential new human carcinogens. To this end, we used a probe carcinogen dataset from the US Environmental Protection Agency's Integrated Risk Information System (IRIS) to identify relevant PubChem bioassays. The responses of 25 PubChem assays were significantly associated with carcinogenicity. Eight of these assays were predictive of carcinogenicity and were selected for QSAR model training. Using 5 machine learning algorithms and 3 types of chemical fingerprints, 15 QSAR models were developed for each PubChem assay dataset. These models showed acceptable predictivity during 5-fold cross-validation (average CCR = 0.71). Using our QSAR models, we correctly predicted and ranked the carcinogenic potential of 342 IRIS compounds (PPV = 0.72). The models also predicted potential new carcinogens, which were validated by a literature search. This study demonstrates an automated technique that can be applied to prioritize potential toxicants using validated QSAR models based on extensive training sets from public data resources.
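As a rough illustration of the fingerprint-based QSAR workflow described above, the sketch below implements the Tanimoto similarity commonly used to compare binary chemical fingerprints, the CCR (correct classification rate) metric quoted in the abstract, and a toy one-nearest-neighbor classifier. The fingerprints and labels are invented for illustration; the study's actual algorithms and descriptors are not reproduced here.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two same-length binary fingerprints."""
    on_in_both = sum(a & b for a, b in zip(fp_a, fp_b))
    on_in_either = sum(a | b for a, b in zip(fp_a, fp_b))
    return on_in_both / on_in_either if on_in_either else 0.0

def ccr(y_true, y_pred):
    """Correct classification rate: the mean of sensitivity and specificity."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    positives = sum(y_true)
    negatives = len(y_true) - positives
    return 0.5 * (tp / positives + tn / negatives)

def predict_1nn(fp, training_set):
    """Assign the label of the most Tanimoto-similar training fingerprint."""
    return max(training_set, key=lambda item: tanimoto(fp, item[0]))[1]
```

CCR is preferred over plain accuracy for toxicity datasets because active and inactive compounds are usually imbalanced.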


Subject(s)
Algorithms , Quantitative Structure-Activity Relationship , Humans , Computer Simulation , Carcinogens/toxicity , Biological Assay
4.
Empir Softw Eng ; 28(2): 53, 2023.
Article in English | MEDLINE | ID: mdl-36915711

ABSTRACT

Following the onset of the COVID-19 pandemic and subsequent lockdowns, the daily lives of software engineers were heavily disrupted as they were abruptly forced to work remotely from home. To better understand and contrast typical working days in this new reality with work in pre-pandemic times, we conducted one exploratory (N = 192) and one confirmatory study (N = 290) with software engineers recruited remotely. Specifically, we built on self-determination theory to evaluate whether and how specific activities are associated with software engineers' satisfaction and productivity. To explore the subject domain, we first ran a two-wave longitudinal study. We found that the time software engineers spent on specific activities (e.g., coding, bugfixing, helping others) while working from home was similar to pre-pandemic times. Also, the amount of time developers spent on each activity was unrelated to their general well-being, perceived productivity, and other variables such as basic needs. Our confirmatory study found that activity satisfaction and productivity were predicted by activity-specific variables (e.g., how much autonomy software engineers had while coding) but not by activity-independent variables such as general resilience or a good work-life balance. Interestingly, we found that satisfaction and autonomy were significantly higher when software engineers were helping others and lower when they were bugfixing. Finally, we discuss implications for software engineers, management, and researchers. In particular, active company policies to support developers' needs for autonomy, relatedness, and competence appear particularly effective in a WFH context.

5.
Carbon N Y ; 204: 484-494, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36845527

ABSTRACT

Modern nanotechnology provides efficient and cost-effective nanomaterials (NMs). The increasing usage of NMs raises serious concerns regarding nanotoxicity in humans. Traditional animal testing of nanotoxicity is expensive and time-consuming. Modeling studies using machine learning (ML) approaches are promising alternatives that evaluate nanotoxicity directly from nanostructure features. However, NMs, including two-dimensional nanomaterials (2DNMs) such as graphenes, have complex structures that are difficult to annotate and quantify for modeling purposes. To address this issue, we constructed a virtual graphene library using nanostructure annotation techniques. Irregular graphene structures were generated by modifying virtual nanosheets, and the nanostructures were digitized from the annotated graphenes. Based on the annotated nanostructures, geometrical nanodescriptors were computed using a Delaunay tessellation approach for ML modeling. Partial least squares regression (PLSR) models for the graphenes were built and validated using a leave-one-out cross-validation (LOOCV) procedure. The resulting models showed good predictivity for four toxicity-related endpoints, with coefficients of determination (R2) ranging from 0.558 to 0.822. This study provides a novel nanostructure annotation strategy that can be applied to generate high-quality nanodescriptors for ML model development and can be widely applied in nanoinformatics studies of graphenes and other NMs.
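A minimal sketch of the LOOCV scheme mentioned above, using a univariate ordinary-least-squares fit as a stand-in for the study's PLSR models (the descriptor and endpoint values are invented): each sample is held out in turn, the model is refit on the rest, and the cross-validated R2 (often called Q2) is computed from the accumulated prediction errors.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one descriptor."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def loocv_q2(xs, ys):
    """Leave-one-out cross-validated R2: refit with each sample held out."""
    preds = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        slope, intercept = fit_line(train_x, train_y)
        preds.append(slope * xs[i] + intercept)
    mean_y = sum(ys) / len(ys)
    press = sum((y - p) ** 2 for y, p in zip(ys, preds))
    total = sum((y - mean_y) ** 2 for y in ys)
    return 1.0 - press / total

# Hypothetical geometrical descriptor values and a toxicity-related endpoint.
descriptor = [1.0, 2.0, 3.0, 4.0, 5.0]
endpoint = [2.1, 3.9, 6.2, 8.0, 9.9]
q2 = loocv_q2(descriptor, endpoint)
```

A real PLSR would project many correlated nanodescriptors onto a few latent components before regression; the hold-one-out loop is identical.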

6.
J Syst Softw ; 197: 111562, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36447955

ABSTRACT

With the COVID-19 pandemic, Scrum teams had to switch abruptly from a traditional working setting to enforced working from home. This abrupt switch had an impact on software projects, so it is necessary to understand how potential future disruptive events will affect Agile software teams' ability to deliver successful projects while working from home. To investigate this problem, we used a two-phased multi-method study. In the first phase, we uncovered how working from home impacted Scrum practitioners through semi-structured interviews. In the second phase, we proposed a theoretical model that we tested and generalized using Partial Least Squares-Structural Equation Modeling (PLS-SEM) on a survey of 138 software engineers who worked from home within Scrum projects. We concluded that all the latent variables identified in our model are reliable and all the hypotheses are significant. This paper emphasizes the importance of supporting the three innate psychological needs of autonomy, competence, and relatedness in the home working environment. We conclude that the ability to work from home and the use of Scrum both contribute to project success, with Scrum acting as a mediator.

7.
Sci Data ; 9(1): 664, 2022 Oct 31.
Article in English | MEDLINE | ID: mdl-36316331

ABSTRACT

With solar and wind power generation reaching unprecedented growth rates globally, much research effort has recently gone into a comprehensive mapping of the worldwide potential of these variable renewable electricity (VRE) sources. From a perspective of energy systems analysis, the locations with the strongest resources may not necessarily be the best candidates for investment in new power plants, since the distance from existing grid and road infrastructures and the temporal variability of power generation also matter. To inform energy planning and policymaking, cost-optimisation models for energy systems must be fed with adequate data on potential sites for VRE plants, including costs reflective of resource strength, grid expansion needs and full hourly generation profiles. Such data, tailored to energy system models, has been lacking up to now. In this study, we present a new open-source and open-access all-Africa dataset of "supply regions" for solar photovoltaic and onshore wind power to feed energy models and inform capacity expansion planning.

8.
J Hazard Mater ; 436: 129193, 2022 08 15.
Article in English | MEDLINE | ID: mdl-35739723

ABSTRACT

Traditional experimental approaches to evaluate hepatotoxicity are expensive and time-consuming. As an advanced framework for risk assessment, adverse outcome pathways (AOPs) describe the sequence of molecular and cellular events underlying chemical toxicities. We aimed to develop an AOP that can be used to predict hepatotoxicity by leveraging computational modeling and in vitro assays. We curated 869 compounds with known hepatotoxicity classifications as a modeling set and extracted assay data from PubChem. The antioxidant response element (ARE) assay, which quantifies transcriptional responses to oxidative stress, showed a high correlation with hepatotoxicity (PPV = 0.82). Next, we developed quantitative structure-activity relationship (QSAR) models to predict ARE activation for compounds lacking testing results. Potential toxicity alerts were identified and used to construct a mechanistic hepatotoxicity model. For experimental validation, 16 compounds in the modeling set and 12 new compounds were selected and tested using an in-house ARE-luciferase assay in HepG2-C8 cells. The mechanistic model showed good hepatotoxicity predictivity (accuracy = 0.82) for these compounds. Potential false-positive hepatotoxicity predictions made using ARE results alone can be corrected by incorporating structural alerts, and vice versa. This mechanistic model illustrates a potential toxicity pathway for hepatotoxicity, and this strategy can be expanded to develop predictive models for other complex toxicities.


Subject(s)
Adverse Outcome Pathways , Chemical and Drug Induced Liver Injury , Biological Assay , Computer Simulation , Hep G2 Cells , Humans , Quantitative Structure-Activity Relationship
9.
Environ Sci Technol ; 56(9): 5984-5998, 2022 05 03.
Article in English | MEDLINE | ID: mdl-35451820

ABSTRACT

For hazard identification, classification, and labeling purposes, guideline animal studies are required by law to evaluate the developmental toxicity potential of new and existing chemical products. However, guideline developmental toxicity studies are costly, time-consuming, and require many laboratory animals. Computational modeling has emerged as a promising, animal-sparing, and cost-effective method for evaluating the developmental toxicity potential of chemicals, such as endocrine disruptors. We aimed to develop a predictive and explainable computational model for developmental toxicants. To this end, a comprehensive dataset of 1244 chemicals with developmental toxicity classifications was curated from public repositories and literature sources. Data from 2140 toxicological high-throughput screening assays were extracted from PubChem and the ToxCast program for this dataset and combined with information about 834 chemical fragments to group assays based on their chemical-mechanistic relationships. This effort revealed two assay clusters, containing 83 and 76 assays respectively, with high positive predictive rates for developmental toxicants identified by animal testing guidelines (PPV = 72.4% and 77.3% during cross-validation). These two assay clusters can be used as developmental toxicity models and were applied to predict new chemicals for external validation. This study provides a new strategy for constructing alternative chemical developmental toxicity evaluations that can be replicated in other toxicity modeling studies.


Subject(s)
High-Throughput Screening Assays , Toxicity Tests , Animals , Biological Assay , Female , Hazardous Substances , High-Throughput Screening Assays/methods , Pregnancy , Risk Assessment , Toxicity Tests/methods
10.
Empir Softw Eng ; 27(3): 71, 2022.
Article in English | MEDLINE | ID: mdl-35313539

ABSTRACT

There is considerable anecdotal evidence suggesting that software engineers enjoy engaging in solving puzzles and other cognitive efforts. A tendency to engage in and enjoy effortful thinking is referred to as a person's 'need for cognition.' In this article we study the relationship between software engineers' personality traits and their need for cognition. Through a large-scale sample study of 483 respondents we collected data to capture the six 'bright' personality traits of the HEXACO model of personality, and three 'dark' personality traits. Data were analyzed using several methods including a multiple Bayesian linear regression analysis. The results indicate that ca. 33% of variation in developers' need for cognition can be explained by personality traits. The Bayesian analysis suggests four traits to be of particular interest in predicting need for cognition: openness to experience, conscientiousness, honesty-humility, and emotionality. Further, we also find that need for cognition of software engineers is, on average, higher than in the general population, based on a comparison with prior studies. Given the importance of human factors for software engineers' performance in general, and problem solving skills in particular, our findings suggest several implications for recruitment, working behavior, and teaming.

11.
Methods Mol Biol ; 2474: 125-132, 2022.
Article in English | MEDLINE | ID: mdl-35294761

ABSTRACT

High-throughput screening (HTS) techniques are increasingly being adopted across a variety of fields of toxicology. Notably, large-scale research efforts from government, industrial, and academic laboratories are screening millions of chemicals against a variety of biomolecular targets, producing an enormous amount of publicly available HTS assay data. These HTS assay data provide toxicologists with important information on how chemicals interact with different biomolecular targets and illustrate potential toxicity mechanisms. Open public data repositories, such as the National Institutes of Health's PubChem ( http://pubchem.ncbi.nlm.nih.gov ), were established to accept, store, and share HTS data. Through the PubChem website, users can rapidly obtain PubChem assay results for compounds by using different chemical identifiers (including SMILES, InChIKey, IUPAC names, etc.). However, obtaining these data in a user-friendly format suitable for modeling and other informatics analyses (e.g., gathering PubChem data for hundreds or thousands of chemicals in a modeling-friendly format) directly through the PubChem web portal is not feasible. This chapter introduces two approaches to obtain HTS assay results for large datasets of compounds from the PubChem portal. First, programmatic access via PubChem's PUG-REST web service using the Python programming language is described. Second, users without programming skills can directly obtain PubChem data for a large set of compounds by using the freely available Chemical In vitro-In vivo Profiling (CIIPro) portal ( http://www.ciipro.rutgers.edu ).
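As a taste of the PUG-REST approach described in the chapter, the sketch below only constructs request URLs (no network calls); actual retrieval would pair these URLs with an HTTP client and a short delay between requests to respect PubChem's rate limits. The helper names are our own; the URL patterns follow PUG-REST's documented /compound/... endpoints.

```python
from urllib.parse import quote

PUG_REST = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"

def cid_lookup_url(smiles):
    """URL that resolves a SMILES string to PubChem CIDs (plain-text output)."""
    return f"{PUG_REST}/compound/smiles/{quote(smiles, safe='')}/cids/TXT"

def assay_summary_url(cid):
    """URL that returns the bioassay summary for one compound as CSV."""
    return f"{PUG_REST}/compound/cid/{cid}/assaysummary/CSV"

# Fetching would pair these URLs with urllib.request.urlopen(), pausing
# briefly between calls to stay within PubChem's request-rate limits.
aspirin_url = cid_lookup_url("CC(=O)Oc1ccccc1C(=O)O")
```

Percent-encoding the SMILES string (the `safe=''` argument) matters because characters such as `(`, `)`, and `=` are common in SMILES but reserved in URLs.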


Subject(s)
Databases, Chemical , High-Throughput Screening Assays , Programming Languages
12.
Methods Mol Biol ; 2474: 169-187, 2022.
Article in English | MEDLINE | ID: mdl-35294765

ABSTRACT

Advances in high-throughput screening (HTS) have revolutionized the environmental and health sciences data landscape. However, new compounds still need to be experimentally synthesized and tested to obtain HTS data, which remains costly and time-consuming when a large set of new compounds needs to be studied against many assays. Quantitative structure-activity relationship (QSAR) modeling is a standard method for filling data gaps for new compounds. The major challenge for many toxicologists, especially those with limited computational backgrounds, is efficiently developing optimized QSAR models for each assay with missing data for certain test compounds. This chapter introduces a freely available and user-friendly QSAR modeling workflow, which trains and optimizes models using five algorithms without the need for a programming background.


Subject(s)
High-Throughput Screening Assays , Quantitative Structure-Activity Relationship , Algorithms , Biological Assay
13.
J Pharmacol Toxicol Methods ; 111: 107098, 2021.
Article in English | MEDLINE | ID: mdl-34229067

ABSTRACT

Secondary pharmacology studies are utilized by the pharmaceutical industry as a cost-efficient tool to identify potential safety liabilities of drugs before they enter Phase 1 clinical trials. These studies are recommended by the Food and Drug Administration (FDA) as part of the Investigational New Drug (IND) application. However, despite the utility of these assays, there is little guidance on which targets should be screened and which format should be used. Here, we evaluated 226 secondary pharmacology profiles obtained from close to 90 unique sponsors. The results indicated that the most frequently tested target in our set was the GABA benzodiazepine receptor (tested 168 times), the most frequently hit target was adenosine 3 (hit 24 times), and the target with the highest hit percentage was quinone reductase 2 (NQO2) (hit 29% of the time). The overall results were largely consistent with those observed in previous publications. However, this study also identified the need to improve the submission process for secondary pharmacology studies by industry, which could enhance their utility for regulatory purposes. FDA-industry collaborative working groups will use these data to determine the best methods for regulatory submission of these studies and to evaluate the need for a standard target panel.


Subject(s)
Drugs, Investigational , Pharmaceutical Preparations , Drug Industry , Drugs, Investigational/adverse effects , Investigational New Drug Application , United States , United States Food and Drug Administration
14.
ACS Sustain Chem Eng ; 9(10): 3909-3919, 2021 Mar 15.
Article in English | MEDLINE | ID: mdl-34239782

ABSTRACT

Compared to traditional experimental approaches, computational modeling is a promising strategy to efficiently prioritize new candidates at low cost. In this study, we developed a novel data-mining and computational-modeling workflow, demonstrated by screening new analgesic opioids. To this end, a large opioid dataset was used as the probe to automatically obtain bioassay data from the PubChem portal. We selected 114 PubChem bioassays to build quantitative structure-activity relationship (QSAR) models based on the testing results across the probe compounds. For each bioassay, the tested compounds were used to develop 12 models from the combination of three machine learning approaches and four types of chemical descriptors. Model performance was evaluated by the coefficient of determination (R2) obtained from 5-fold cross-validation. In total, 49 models covering 14 bioassays met the selection criteria; these bioassays were mainly associated with binding affinities to different opioid receptors. The models for these 14 bioassays were further used to fill data gaps in the probe opioid dataset and to predict general drug compounds in the DrugBank dataset. This study provides a universal modeling strategy that can take advantage of large public datasets for computer-aided drug design (CADD).

15.
Environ Sci Technol ; 55(15): 10875-10887, 2021 08 03.
Article in English | MEDLINE | ID: mdl-34304572

ABSTRACT

Traditional experimental testing to identify endocrine disruptors that enhance estrogenic signaling relies on expensive and labor-intensive experiments. We sought to design a knowledge-based deep neural network (k-DNN) approach to reveal and organize public high-throughput screening data for compounds with nuclear estrogen receptor α and ß (ERα and ERß) binding potentials. The target activity was rodent uterotrophic bioactivity driven by ERα/ERß activations. After training, the resultant network successfully inferred critical relationships among ERα/ERß target bioassays, shown as weights of 6521 edges between 1071 neurons. The resultant network uses an adverse outcome pathway (AOP) framework to mimic the signaling pathway initiated by ERα and identify compounds that mimic endogenous estrogens (i.e., estrogen mimetics). The k-DNN can predict estrogen mimetics by activating neurons representing several events in the ERα/ERß signaling pathway. Therefore, this virtual pathway model, starting from a compound's chemistry initiating ERα activation and ending with rodent uterotrophic bioactivity, can efficiently and accurately prioritize new estrogen mimetics (AUC = 0.864-0.927). This k-DNN method is a potential universal computational toxicology strategy to utilize public high-throughput screening data to characterize hazards and prioritize potentially toxic compounds.
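Two ingredients of the approach above can be sketched compactly: a feedforward pass through a tiny network whose hidden neurons stand in for ERα/ERß pathway key events, and the AUC metric used to report performance. The weights and scores below are invented for illustration and bear no relation to the study's actual 1071-neuron network.

```python
import math

def sigmoid(x):
    """Logistic activation mapping any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(assay_hits, event_weights, output_weights):
    """One pass through a tiny network: assay-level inputs feed hidden neurons
    (stand-ins for pathway key events), which feed one activity output."""
    events = [sigmoid(sum(w * x for w, x in zip(ws, assay_hits)))
              for ws in event_weights]
    return sigmoid(sum(w * e for w, e in zip(output_weights, events)))

def auc(scores_pos, scores_neg):
    """Probability a random positive outscores a random negative (ties = 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(scores_pos) * len(scores_neg))
```

This pairwise formulation of AUC is equivalent to the area under the ROC curve and is convenient for small validation sets.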


Subject(s)
Adverse Outcome Pathways , Estrogen Receptor beta , Estrogen Receptor alpha , Estrogens , High-Throughput Screening Assays , Neural Networks, Computer
17.
Acta Psychiatr Scand ; 144(3): 259-276, 2021 09.
Article in English | MEDLINE | ID: mdl-33960396

ABSTRACT

OBJECTIVES: Polypharmacy is common in maintenance treatment of bipolar illness, but its greater efficacy compared to monotherapy is assumed rather than well established. We systematically reviewed the evidence from the literature to provide recommendations for clinical management and future research. METHOD: A systematic review was conducted on the use of polypharmacy in bipolar prophylaxis. Relevant papers published in English through 31 December 2019 were identified by searching the electronic databases MEDLINE, Embase, PsycINFO, and the Cochrane Library. RESULTS: Twelve studies matched the inclusion criteria, including 10 randomized controlled trials (RCTs). The best drug combination in prevention is lithium + valproic acid, which showed a significant effect on time to mood relapse (HR = 0.57) compared to valproic acid monotherapy, especially for manic episodes (HR = 0.51). The effect was significant in terms of time to new drug treatment (HR = 0.51) and time to hospitalization (HR = 0.57). A significant reduction in the frequency of mood relapses was also reported for lithium + valproic acid vs. lithium monotherapy (RR = 0.12); however, the trial had a small sample size. Lamotrigine + valproic acid showed significant efficacy in the prevention of depressive episodes compared to lamotrigine alone. CONCLUSIONS: The literature supporting a generally greater efficacy of polypharmacy in bipolar illness is scant and heterogeneous. Within that limited evidence base, the best drug combination in bipolar prevention is lithium + valproic acid for manic, but not depressive, episodes. Clinical practice should focus more on adequate monotherapy before considering polypharmacy.


Subject(s)
Bipolar Disorder , Antimanic Agents/therapeutic use , Bipolar Disorder/drug therapy , Humans , Lithium Compounds/therapeutic use , Polypharmacy , Valproic Acid/therapeutic use
18.
Empir Softw Eng ; 26(4): 62, 2021.
Article in English | MEDLINE | ID: mdl-33942010

ABSTRACT

The COVID-19 pandemic has forced governments worldwide to impose movement restrictions on their citizens. Although critical to reducing the virus' reproduction rate, these restrictions come with far-reaching social and economic consequences. In this paper, we investigate the impact of these restrictions at an individual level among software engineers who were working from home. Although software professionals are accustomed to working with digital tools in their day-to-day work, not all of them are used to working remotely, and the abrupt, enforced work-from-home context resulted in an unprecedented scenario for the software engineering community. In a two-wave longitudinal study (N = 192), we covered over 50 psychological, social, situational, and physiological factors that have previously been associated with well-being or productivity. Examples include anxiety, distractions, coping strategies, psychological and physical needs, office set-up, stress, and work motivation. This design allowed us to identify the variables that explained unique variance in well-being and productivity. Results include: (1) the quality of social contacts predicted an individual's well-being positively, and stress predicted it negatively, when controlling for other variables, consistently across both waves; (2) boredom and distractions predicted productivity negatively; (3) productivity was less strongly associated with all predictor variables at time two compared to time one, suggesting that software engineers adapted to the lockdown situation over time; and (4) longitudinal analyses did not provide evidence that any predictor variable causally explained variance in well-being and productivity. Overall, we conclude that working from home was not, per se, a significant challenge for software engineers. Finally, our study can be used to assess the effectiveness of current work-from-home and general well-being and productivity support guidelines, and it provides tailored insights for software professionals.

19.
Urology ; 149: e1-e4, 2021 03.
Article in English | MEDLINE | ID: mdl-33421441

ABSTRACT

We describe our experience at 2 institutions handling bladder prolapse through a patent urachus (PU), together with a brief review of the published literature. Case 1: a term male neonate with congenital bladder prolapse via a PU. Ultrasound at 21 weeks' gestation revealed a male fetus with a large midline pelvic cyst communicating with the bladder, which had disappeared on a subsequent ultrasound at 27 weeks. Case 2: a term female neonate with congenital bladder prolapse via a PU, with no prenatal diagnosis. In both cases, bladder closure was undertaken during the newborn's first days of life.


Subject(s)
Pelvic Organ Prolapse/congenital , Urachus/abnormalities , Urinary Bladder Diseases/congenital , Female , Humans , Infant, Newborn , Male
20.
Chem Res Toxicol ; 34(2): 483-494, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33325690

ABSTRACT

Implementation of the Clinical Data Interchange Standards Consortium (CDISC)'s Standard for Exchange of Nonclinical Data (SEND) by the United States Food and Drug Administration Center for Drug Evaluation and Research (US FDA CDER) has created large quantities of SEND data sets and a tremendous opportunity to apply large-scale data analytic approaches. To fully realize this opportunity, differences in SEND implementation that impair the ability to conduct cross-study analysis must be addressed. In this manuscript, a prototypical question regarding historical control data (see Table of Contents graphic) was used to identify areas for SEND harmonization and to develop algorithmic strategies for nonclinical cross-study analysis within a variety of databases. FDA CDER's repository of >1800 sponsor-submitted studies in SEND format was queried using the statistical programming language R to gain insight into how the CDISC SEND Implementation Guides are being applied across the industry. For each component needed to answer the question (defined as a "query block"), the frequency of data population was determined; it ranged from 6% to 99%. For fields populated <90% of the time and/or lacking Controlled Terminology, data extraction methods such as data transformation and script development were evaluated. Data extraction was successful for fields such as phase of study, negative controls, and histopathology using scripts. Calculations to assess the accuracy of data extraction indicated high confidence in most query block searches. Some fields, such as vehicle name, animal supplier name, and test facility name, are not amenable to accurate data extraction through script development alone and require additional harmonization to confidently extract data. Harmonization proposals are discussed in this manuscript. Implementation of these proposals will allow stakeholders to capitalize on the opportunity presented by SEND data sets to increase the efficiency and productivity of nonclinical drug development, allowing the most promising drug candidates to proceed through development.
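The field-population survey described above (the study used R against FDA CDER's repository) can be illustrated with a small Python sketch over hypothetical study records; the field names and the simple non-empty test are assumptions for illustration only.

```python
from collections import Counter

# Hypothetical, highly simplified records: one dict of harmonized SEND-derived
# fields per submitted study. Field names and values are illustrative only.
studies = [
    {"STUDYID": "S1", "SPECIES": "RAT", "VEHICLE": "corn oil"},
    {"STUDYID": "S2", "SPECIES": "DOG"},
    {"STUDYID": "S3", "SPECIES": "RAT", "VEHICLE": ""},
]

def population_rate(records):
    """Fraction of studies in which each field is present and non-empty."""
    counts = Counter()
    for record in records:
        for field, value in record.items():
            if str(value).strip():
                counts[field] += 1
    return {field: count / len(records) for field, count in counts.items()}

rates = population_rate(studies)
```

Fields with rates below a chosen threshold (the manuscript used 90%) would then be flagged as candidates for harmonization or for bespoke extraction scripts.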


Subject(s)
Algorithms , Pharmaceutical Preparations/analysis , Animals , Databases, Factual/standards , Microscopy , Pharmaceutical Preparations/administration & dosage , Pharmaceutical Preparations/standards , United States , United States Food and Drug Administration/standards