Results 1 - 8 of 8
1.
JACS Au ; 4(6): 2099-2107, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38938806

ABSTRACT

Reported herein are the bench-stable (2E,4E)-diazohexa-2,4-dienals (diazodienals) and their unprecedented polycyclization with aldimines and arylamines enabled by Rh(II)/Brønsted acid relay catalysis. This scalable and atom-economical reaction provides direct access to biologically important azatricyclo[6.2.1.04,11]undecane-fused polycycles having six contiguous stereocenters. Mechanistic studies revealed that the polycyclization proceeds through an unusual triple-nucleophilic cascade initiated by aldimine attack on the remote Rh-carbenoid, followed by 6π-electrocyclization of the aza-trienyl azomethine ylide, stereoselective aza-Michael addition via iminium activation, and an inverse electron-demand intramolecular aza-Diels-Alder reaction. π-π secondary interactions play a crucial role in the preorganization of the reactive intermediates for the pericyclic reactions and, hence, in the overall efficiency of the polycyclization.

2.
Sci Rep ; 13(1): 9480, 2023 06 10.
Article in English | MEDLINE | ID: mdl-37301891

ABSTRACT

Machine learning (ML) could have advantages over traditional statistical models in identifying risk factors. Using ML algorithms, our objective was to identify the most important variables associated with mortality after dementia diagnosis in the Swedish Registry for Cognitive/Dementia Disorders (SveDem). From SveDem, a longitudinal cohort of 28,023 patients diagnosed with dementia was selected for this study. Sixty variables were considered as potential predictors of mortality risk, such as age at dementia diagnosis, dementia type, sex, body mass index (BMI), mini-mental state examination (MMSE) score, time from referral to initiation of work-up, time from initiation of work-up to diagnosis, dementia medications, comorbidities, and some specific medications for chronic comorbidities (e.g., cardiovascular disease). We applied sparsity-inducing penalties to three ML algorithms and identified twenty important variables for the binary classification task in mortality risk prediction and fifteen variables to predict time to death. The area under the ROC curve (AUC) was used to evaluate the classification algorithms. An unsupervised clustering algorithm was then applied to the set of twenty selected variables and found two main clusters that accurately matched the surviving and deceased patient groups. A support vector machine with an appropriate sparsity penalty classified mortality risk with accuracy = 0.7077, AUROC = 0.7375, sensitivity = 0.6436, and specificity = 0.740. Across the three ML algorithms, the majority of the twenty identified variables were compatible with the literature and with our previous studies on SveDem. We also found new variables that had not previously been reported in the literature as associated with mortality in dementia. Performance of the basic dementia diagnostic work-up, time from referral to initiation of work-up, and time from initiation of work-up to diagnosis were elements of the diagnostic process identified by the ML algorithms. The median follow-up time was 1053 (IQR = 516-1771) days in surviving patients and 1125 (IQR = 605-1770) days in deceased patients. For prediction of time to death, the CoxBoost model identified 15 variables and ranked them in order of importance. The most important of these were age at diagnosis, MMSE score, sex, BMI, and Charlson Comorbidity Index, with selection scores of 23%, 15%, 14%, 12%, and 10%, respectively. This study demonstrates the potential of sparsity-inducing ML algorithms for improving our understanding of mortality risk factors in dementia patients and their application in clinical settings. Moreover, ML methods can be used as a complement to traditional statistical methods.
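
The sparsity-penalized classification described in this abstract can be sketched roughly as follows. This is not the SveDem analysis code: the abstract does not name the exact penalty or software, so an L1-penalized linear SVM on synthetic data stands in for "an SVM with an appropriate sparsity penalty", showing how the penalty performs variable selection while the classifier is evaluated by AUC.

```python
# A minimal sketch, not the authors' code: L1-penalized linear SVM as a
# stand-in for a sparsity-penalized SVM. All data below are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 60))        # 60 candidate predictors, as in the cohort
y = rng.integers(0, 2, size=1000)      # 1 = died during follow-up (synthetic label)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The L1 penalty shrinks many coefficients to exactly zero, so fitting the
# classifier doubles as variable selection.
clf = make_pipeline(
    StandardScaler(),
    LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=10000),
)
clf.fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, clf.decision_function(X_te)))
print("Accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("Variables with nonzero weight:", int((clf[-1].coef_ != 0).sum()))
```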


Subject(s)
Dementia , Machine Learning , Humans , Longitudinal Studies , Cohort Studies , Algorithms , Dementia/diagnosis
3.
Acta Paediatr ; 112(4): 686-696, 2023 04.
Article in English | MEDLINE | ID: mdl-36607251

ABSTRACT

AIM: Sepsis is a leading cause of morbidity and mortality in neonates. Early diagnosis is key but difficult due to non-specific signs. We investigated the predictive value of machine learning-assisted analysis of non-invasive, high-frequency monitoring data and demographic factors for detecting neonatal sepsis. METHODS: Single-centre study including a representative cohort of 325 infants (2866 hospitalisation days). Personalised event timelines including interventions and clinical findings were generated. Time-domain features from heart rate, respiratory rate and oxygen saturation values were calculated, and demographic factors were included. Sepsis prediction was performed using a Naïve Bayes algorithm in a maximum a posteriori framework up to 24 h before clinical sepsis suspicion. RESULTS: Twenty sepsis cases were identified. Combining multiple vital signs improved algorithm performance compared to heart rate characteristics alone, enabling prediction of sepsis with an area under the receiver operating characteristic curve of 0.82 up to 24 h before clinical sepsis suspicion. Moreover, 10 h prior to clinical suspicion, the risk of sepsis increased 150-fold. CONCLUSION: The present algorithm using non-invasive patient data provides useful predictive value for neonatal sepsis detection. Machine learning-assisted algorithms are promising novel methods that could help individualise patient care and reduce morbidity and mortality.
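
As a hedged illustration of the prediction framework described above (a Naïve Bayes classifier applied to time-domain vital-sign features), the sketch below fits a Gaussian Naïve Bayes model on synthetic monitor data; the window length, feature set and class priors are assumptions, not the study's actual pipeline.

```python
# Illustrative sketch only: Gaussian Naive Bayes over hypothetical time-domain
# features of heart rate, respiratory rate and oxygen saturation.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def time_domain_features(hr, rr, spo2):
    """Per-window summary statistics of the three vital signs."""
    return np.array([hr.mean(), hr.std(), rr.mean(), rr.std(),
                     spo2.mean(), spo2.std()])

# Synthetic monitoring windows (in practice: sliding windows of monitor data).
X = np.stack([
    time_domain_features(rng.normal(150, 10, 3600),   # heart rate
                         rng.normal(45, 8, 3600),     # respiratory rate
                         rng.normal(95, 2, 3600))     # oxygen saturation
    for _ in range(500)
])
y = rng.integers(0, 2, size=500)   # 1 = window preceding sepsis suspicion (synthetic)

# GaussianNB predicts the class with the highest posterior probability
# (a maximum a posteriori decision rule); the prior encodes class imbalance.
model = GaussianNB(priors=[0.9, 0.1]).fit(X, y)
print("In-sample AUC (illustrative):",
      roc_auc_score(y, model.predict_proba(X)[:, 1]))
```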


Subject(s)
Neonatal Sepsis , Sepsis , Infant, Newborn , Humans , Bayes Theorem , Machine Learning , Vital Signs
4.
J Digit Imaging ; 35(6): 1708-1718, 2022 12.
Article in English | MEDLINE | ID: mdl-35995896

ABSTRACT

The main aim of the present study was to predict myocardial function improvement on late gadolinium enhancement cardiac MR (LGE-CMR) images in patients after coronary artery bypass grafting (CABG) using radiomics and machine learning algorithms. Altogether, 43 patients who had visible scars on short-axis LGE-CMR images and were candidates for CABG surgery were enrolled in this study. MR imaging was performed preoperatively using a 1.5-T MRI scanner. All images were segmented by two expert radiologists (in consensus). Prior to extraction of radiomics features, all MR images were resampled to an isotropic voxel size of 1.8 × 1.8 × 1.8 mm³. Subsequently, intensities were quantized to 64 discretized gray levels and a total of 93 features were extracted. The applied algorithms included a smoothly clipped absolute deviation (SCAD)-penalized support vector machine (SVM) and the recursive partitioning (RP) algorithm as robust classifiers for binary classification in this high-dimensional, non-sparse data. All models were validated with repeated fivefold cross-validation and 10,000 bootstrap resamples. Ten and seven features were selected with the SCAD-penalized SVM and the RP algorithm, respectively, for CABG responder/non-responder classification. In univariate analysis, the GLSZM gray-level non-uniformity normalized feature achieved the best performance (AUC: 0.62, 95% CI: 0.53-0.76) with the SCAD-penalized SVM. In multivariable modeling, the SCAD-penalized SVM obtained an AUC of 0.784 (95% CI: 0.64-0.92), whereas the RP algorithm achieved an AUC of 0.654 (95% CI: 0.50-0.82). In conclusion, radiomics texture features, alone or combined in multivariate analysis using machine learning algorithms, provide prognostic information regarding myocardial function in patients after CABG.
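
A minimal sketch of the preprocessing and feature-extraction step described above, assuming the open-source PyRadiomics package (the abstract does not name the software actually used); the file paths and the choice to enable all feature classes are placeholders, not the study's settings.

```python
# Hedged sketch, assuming PyRadiomics: isotropic resampling to 1.8 mm and
# 64-level gray-value discretization before texture feature extraction.
from radiomics import featureextractor

settings = {
    "resampledPixelSpacing": [1.8, 1.8, 1.8],   # isotropic resampling in mm
    "interpolator": "sitkBSpline",
    "binCount": 64,                             # 64 discretized gray levels
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.enableAllFeatures()                   # first-order and texture classes (incl. GLSZM)

# Placeholder paths to the LGE-CMR image and the scar segmentation mask.
features = extractor.execute("patient01_lge.nii.gz", "patient01_scar_mask.nii.gz")

# Keep the numeric radiomics values and drop the diagnostic metadata entries.
values = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(values), "features extracted, e.g.", list(values)[:3])
```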


Subject(s)
Algorithms , Machine Learning , Humans , Magnetic Resonance Imaging/methods , Support Vector Machine , Coronary Artery Bypass , Retrospective Studies
5.
J Conserv Dent ; 24(6): 568-575, 2021.
Article in English | MEDLINE | ID: mdl-35558662

ABSTRACT

Aim: The aim of the study was to evaluate debris and smear layer formation after use of the rotary ProTaper Universal, Twisted File, and XP Endo file systems under a scanning electron microscope. Materials and Methods: Forty freshly extracted mandibular second premolar teeth were decoronated at the cementoenamel junction to leave a remaining root length of 15 mm. Specimens were divided into four groups of 10 teeth each: Group I (control), no instrumentation; Group II, ProTaper Universal rotary file (F2); Group III, Twisted File (ISO size 0.25, 6% taper); Group IV, XP Endo file (ISO size 0.25). During instrumentation, 5 ml of normal saline was used as the irrigating agent. Grooves parallel to the longitudinal axis of the root were made on the mesial and distal surfaces of each specimen, which was then split into two halves and examined under a scanning electron microscope at ×1500 and ×5000 magnification. Photomicrographs were taken to evaluate debris and smear layer and were evaluated using a score index. Results: One-way analysis of variance (ANOVA) was used to compare multiple means simultaneously, and Tukey's critical difference test following ANOVA was used for pairwise comparison of mean values. P < 0.05 was considered statistically significant. Among all the file systems, Group II showed the maximum amount of debris (3.50 ± 1.109), followed by Group III (2.83 ± 1.238), and Group IV showed the least (2.65 ± 1.122) at all levels (cervical, middle, and apical thirds). Among the experimental groups, Group II showed the maximum amount of smear layer (2.75 ± 1.149), followed by Group III (2.40 ± 0.982), and Group IV showed the least (2.10 ± 0.841) at all levels; the result was statistically significant (P < 0.05). Conclusions: At all levels (cervical, middle, and apical thirds), among the experimental groups, the highest amount of debris and smear layer was formed by the ProTaper Universal rotary file, followed by the Twisted File, and the least by the XP Endo file system. At all levels, the control group showed the highest amount of debris but the least smear layer.
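
The statistical comparison reported above (one-way ANOVA followed by Tukey's pairwise test on per-specimen scores) follows a standard pattern; the sketch below shows that pattern on made-up score values, so the group means and sample data are placeholders rather than the study's measurements.

```python
# Illustrative only: one-way ANOVA followed by Tukey's pairwise comparison,
# on synthetic per-specimen debris scores (not the study's data).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
scores = {
    "I_control": rng.normal(4.0, 1.0, 10),
    "II_ProTaper": rng.normal(3.5, 1.1, 10),
    "III_TwistedFile": rng.normal(2.8, 1.2, 10),
    "IV_XPEndo": rng.normal(2.6, 1.1, 10),
}

# One-way ANOVA: do the four group means differ?
f_stat, p_value = stats.f_oneway(*scores.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's test: which pairs of groups differ, at alpha = 0.05?
values = np.concatenate(list(scores.values()))
labels = np.repeat(list(scores.keys()), 10)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```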

6.
J Conserv Dent ; 22(2): 191-195, 2019.
Article in English | MEDLINE | ID: mdl-31142992

ABSTRACT

AIM: The aim of this study was to evaluate enamel surface abrasion using four different dentifrices and a customized automated brushing machine under a profilometer. MATERIALS AND METHODS: A total of 30 enamel blocks (9 mm × 9 mm × 2 mm) were prepared from freshly extracted maxillary central incisors and randomly divided into five equal groups (Group 1: specimens brushed with Colgate Total; Group 2: specimens brushed with Colgate Lemon and Salt; Group 3: specimens brushed with Colgate Visible White; Group 4: specimens brushed with Colgate Sensitive; and Group 5: intact enamel surface). Samples were brushed using a customized automated toothbrushing machine for 60 min. A profilometric readout (Ra value) was taken for each brushed group and for the control group. STATISTICAL ANALYSIS: One-way analysis of variance followed by a post hoc Tukey's test was used. RESULTS: Statistically significant differences (P < 0.05) in enamel abrasion (Ra) were observed among Groups 1-4, whereas Group 5 (control group) showed no significant difference in enamel abrasion (P > 0.05). CONCLUSION: The highest enamel abrasion was observed in the group brushed with Colgate Visible White toothpaste, and the least enamel abrasion was seen in the group brushed with Colgate Sensitive Plus.

7.
PLoS One ; 10(10): e0140644, 2015.
Article in English | MEDLINE | ID: mdl-26496191

ABSTRACT

MOTIVATION: Estimation of bacterial community composition from high-throughput sequenced 16S rRNA gene amplicons is a key task in microbial ecology. Since the sequence data from each sample typically consist of a large number of reads and are adversely impacted by different levels of biological and technical noise, accurate analysis of such large datasets is challenging. RESULTS: There has been a recent surge of interest in using compressed-sensing-inspired and convex-optimization-based methods to solve the estimation problem for bacterial community composition. These methods typically rely on summarizing the sequence data by frequencies of low-order k-mers and matching this information statistically with a taxonomically structured database. Here we show that the accuracy of the resulting community composition estimates can be substantially improved by aggregating the reads from a sample with an unsupervised machine learning approach prior to the estimation phase. Read aggregation is a pre-processing step in which a standard K-means clustering algorithm partitions a large set of reads into subsets at reasonable computational cost, providing several vectors of first-order statistics instead of only a single statistical summary in terms of k-mer frequencies. The output of the clustering is then processed further to obtain the final estimate for each sample. The resulting method, called Aggregation of Reads by K-means (ARK), is based on a statistical argument via a mixture density formulation. ARK is found to improve the fidelity and robustness of several recently introduced methods, with only a modest increase in computational complexity. AVAILABILITY: An open source, platform-independent implementation of the method in the Julia programming language is freely available at https://github.com/dkoslicki/ARK. A Matlab implementation is available at http://www.ee.kth.se/ctsoftware.
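
The read-aggregation idea can be sketched as follows. This is not the ARK implementation (which is distributed in Julia and Matlab at the links above) but a minimal illustration under stated assumptions: synthetic reads, 4-mer frequency vectors, and scikit-learn's K-means are stand-ins.

```python
# Minimal sketch of the pre-processing idea only, not the ARK code: reads are
# mapped to k-mer frequency vectors and partitioned with standard K-means,
# yielding several first-order summary vectors per sample instead of one.
from itertools import product
import numpy as np
from sklearn.cluster import MiniBatchKMeans

K = 4
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
INDEX = {kmer: i for i, kmer in enumerate(KMERS)}

def kmer_frequencies(read):
    """Normalized k-mer counts of a single read."""
    vec = np.zeros(len(KMERS))
    for i in range(len(read) - K + 1):
        vec[INDEX[read[i:i + K]]] += 1.0
    return vec / max(vec.sum(), 1.0)

rng = np.random.default_rng(3)
reads = ["".join(rng.choice(list("ACGT"), size=150)) for _ in range(2000)]
X = np.stack([kmer_frequencies(r) for r in reads])

# Partition the reads; each cluster centroid is one k-mer frequency vector
# that is passed on to the downstream composition-estimation step.
km = MiniBatchKMeans(n_clusters=10, random_state=0).fit(X)
print(km.cluster_centers_.shape)   # (10, 256): ten summaries instead of one
```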


Subject(s)
Algorithms , Bacteria/genetics , Metagenomics/methods , Microbiota/genetics , Bacteria/classification , Cluster Analysis , DNA, Bacterial/chemistry , DNA, Bacterial/genetics , Feces/microbiology , Humans , Internet , Polymerase Chain Reaction , RNA, Ribosomal, 16S/genetics , Reproducibility of Results , Sequence Analysis, DNA
8.
Bioinformatics ; 30(17): 2423-31, 2014 Sep 01.
Article in English | MEDLINE | ID: mdl-24812337

ABSTRACT

MOTIVATION: Estimation of bacterial community composition from a high-throughput sequenced sample is an important task in metagenomics applications. As the sample sequence data typically harbor reads of variable lengths and different levels of biological and technical noise, accurate statistical analysis of such data is challenging. Currently popular estimation methods are typically time-consuming in a desktop computing environment. RESULTS: Using sparsity-enforcing methods from the general sparse signal processing field (such as compressed sensing), we derive a solution to the community composition estimation problem by simultaneously assigning all sample reads to a pre-processed reference database. A general statistical model based on kernel density estimation techniques is introduced for the assignment task, and the model solution is obtained using convex optimization tools. Further, we design a greedy algorithm to obtain a fast solution. Our approach offers a reasonably fast community composition estimation method, which is shown to be more robust to input data variation than a recently introduced related method. AVAILABILITY AND IMPLEMENTATION: A platform-independent Matlab implementation of the method is freely available at http://www.ee.kth.se/ctsoftware; source code that does not require access to Matlab is currently being tested and will be made available later through the above Web site.
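
The general sparse-estimation idea underlying such methods can be illustrated as follows. This is not the authors' kernel-density model or Matlab code: the sketch solves a generic non-negative, L1-regularized least-squares assignment of a sample's k-mer profile to a synthetic reference matrix, an assumption-laden stand-in for the convex-optimization step described above.

```python
# Hedged illustration of a generic sparse composition estimate, not the
# authors' kernel-density model: non-negative, L1-regularized least squares
# assigning an observed k-mer profile to synthetic reference taxa.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n_kmers, n_taxa = 256, 50

A = rng.dirichlet(np.ones(n_kmers), size=n_taxa).T    # reference k-mer profiles (columns = taxa)
true_x = np.zeros(n_taxa)
true_x[[3, 17, 42]] = [0.5, 0.3, 0.2]                  # sparse true composition
y = A @ true_x + rng.normal(0, 1e-4, n_kmers)          # observed sample profile

# Non-negativity plus an L1 penalty drive most taxon weights to exactly zero.
model = Lasso(alpha=1e-5, positive=True, fit_intercept=False, max_iter=50000)
model.fit(A, y)
x_hat = model.coef_ / model.coef_.sum()                # renormalize to proportions
print("Estimated nonzero taxa:", np.flatnonzero(x_hat > 1e-3))
```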


Subject(s)
Bacteria/classification , Metagenomics/methods , Algorithms , Bacteria/genetics , High-Throughput Nucleotide Sequencing , Models, Statistical , RNA, Ribosomal, 16S/genetics , Sequence Analysis, DNA