1.
J Dent; 135: 104565, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37308053

ABSTRACT

OBJECTIVES: To evaluate the accuracy of fully automatic segmentation of pharyngeal volumes of interest (VOIs) before and after orthognathic surgery in skeletal Class III patients using a convolutional neural network (CNN) model, and to investigate the clinical applicability of artificial intelligence for quantitative evaluation of treatment changes in pharyngeal VOIs.

METHODS: 310 cone-beam computed tomography (CBCT) images were divided into a training set (n = 150), a validation set (n = 40), and a test set (n = 120). The test set comprised matched pairs of pre- and post-treatment images of 60 skeletal Class III patients (mean age 23.1 ± 5.0 years; ANB < -2°) who underwent bimaxillary orthognathic surgery with orthodontic treatment. A 3D U-Net CNN model was applied for fully automatic segmentation and measurement of subregional pharyngeal volumes on pre-treatment (T0) and post-treatment (T1) scans. The model's accuracy was compared with human semi-automatic segmentation using the Dice similarity coefficient (DSC) and volume similarity (VS), and the correlation between surgical skeletal changes and model accuracy was assessed.

RESULTS: The proposed model achieved high subregional pharyngeal segmentation performance on both T0 and T1 images, with a significant T1-T0 difference in DSC only in the nasopharynx. Region-specific differences among pharyngeal VOIs observed at T0 disappeared on the T1 images. The decreased DSC of nasopharyngeal segmentation after treatment was weakly correlated with the amount of maxillary advancement; there was no correlation between the amount of mandibular setback and model accuracy.

CONCLUSIONS: The proposed model offers fast and accurate subregional pharyngeal segmentation on both pre-treatment and post-treatment CBCT images in skeletal Class III patients.

CLINICAL SIGNIFICANCE: We demonstrated the clinical applicability of the CNN model for quantitative evaluation of subregional pharyngeal changes after surgical-orthodontic treatment, which offers a basis for developing a fully integrated multiclass CNN model to predict pharyngeal responses after dentoskeletal treatments.
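The evaluation above rests on two overlap metrics, the Dice similarity coefficient and volume similarity. The sketch below shows how these are conventionally computed for a pair of binary 3D masks; it is illustrative only, and the array names, toy data, and voxel handling are assumptions rather than the authors' code.

```python
# Minimal sketch: DSC and VS for a predicted vs. reference binary segmentation mask.
import numpy as np

def dice_similarity(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|P ∩ R| / (|P| + |R|) for binary 3D masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return (2.0 * intersection / denom) if denom else 1.0

def volume_similarity(pred: np.ndarray, ref: np.ndarray) -> float:
    """VS = 1 - |V_pred - V_ref| / (V_pred + V_ref), using voxel counts."""
    v_pred, v_ref = pred.astype(bool).sum(), ref.astype(bool).sum()
    denom = v_pred + v_ref
    return (1.0 - abs(int(v_pred) - int(v_ref)) / denom) if denom else 1.0

# Toy usage with random masks; a real evaluation would load CBCT-derived label
# volumes (e.g. from NIfTI files), one per pharyngeal subregion.
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5
ref = rng.random((64, 64, 64)) > 0.5
print(f"DSC = {dice_similarity(pred, ref):.3f}, VS = {volume_similarity(pred, ref):.3f}")
```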


Subject(s)
Malocclusion, Angle Class III; Orthognathic Surgery; Humans; Adolescent; Young Adult; Adult; Artificial Intelligence; Malocclusion, Angle Class III/diagnostic imaging; Malocclusion, Angle Class III/surgery; Pharynx/diagnostic imaging; Cone-Beam Computed Tomography/methods; Neural Networks, Computer
2.
J Digit Imaging; 36(3): 902-910, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36702988

ABSTRACT

Training deep learning models on medical images depends heavily on experts' expensive and laborious manual labels. In addition, these images, labels, and even the models themselves are not widely publicly accessible and suffer from various kinds of bias and imbalance. In this paper, a chest X-ray pre-trained model via self-supervised contrastive learning (CheSS) is proposed to learn transferable representations of chest radiographs (CXRs). Our contribution is a publicly accessible model pretrained on a dataset of 4.8 million CXRs with self-supervised contrastive learning, together with its validation on a range of downstream tasks: 6-class disease classification on an internal dataset, disease classification on CheXpert, bone suppression, and nodule generation. Compared with a model trained from scratch, we achieved a 28.5% increase in accuracy on the 6-class classification test set. On the CheXpert dataset, we achieved a 1.3% increase in mean area under the receiver operating characteristic curve on the full dataset and an 11.4% increase when using only 1% of the data as a stress test. On bone suppression with perceptual loss, compared with an ImageNet-pretrained model, we improved the peak signal-to-noise ratio from 34.99 to 37.77, the structural similarity index measure from 0.976 to 0.977, and the root-mean-square error from 4.410 to 3.301. Finally, on nodule generation, we improved the Fréchet inception distance from 24.06 to 17.07. Our study demonstrates the transferability of CheSS weights, which can help researchers overcome data imbalance, data shortage, and the inaccessibility of medical image datasets. The CheSS weights are available at https://github.com/mi2rl/CheSS.
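The downstream tasks above all follow the same transfer-learning pattern: initialize a backbone from the self-supervised pretrained weights and attach a task-specific head. Below is a minimal PyTorch sketch of that pattern; the ResNet-50 backbone, checkpoint filename, and state-dict layout are assumptions made for illustration, and the actual weight format and loading code are in the linked repository.

```python
# Minimal sketch: fine-tuning a classifier on top of self-supervised pretrained weights.
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_CLASSES = 6  # e.g. the 6-class internal classification task in the abstract

backbone = resnet50(weights=None)   # start from scratch, not ImageNet
backbone.fc = nn.Identity()         # drop the ImageNet head; backbone now outputs 2048-d features

# Hypothetical checkpoint: assumed to hold a plain state_dict for the encoder.
state = torch.load("chess_pretrained.pth", map_location="cpu")
missing, unexpected = backbone.load_state_dict(state, strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")

# Attach a new classification head and fine-tune; only the head is randomly initialized.
model = nn.Sequential(backbone, nn.Linear(2048, NUM_CLASSES))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```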


Subject(s)
X-Rays; Humans; ROC Curve; Radiography; Signal-To-Noise Ratio
3.
Eur J Med Chem; 120: 338-352, 2016 Sep 14.
Article in English | MEDLINE | ID: mdl-27236015

ABSTRACT

Estrogen-related receptor gamma (ERRγ) has recently been recognized as an attractive target for treating inflammation, cancer, and metabolic disorders. Herein, we describe the discovery of chemical entities that act as highly selective inverse agonists of ERRγ and characterize their in vitro pharmacology as well as their absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties. The results were comparable to those for GSK5182 (4), a leading ERRγ inverse agonist ligand. Briefly, the half-maximal inhibitory concentrations (IC50) of the synthesized compounds against ERRγ ranged from 0.1 to 10 µM. Notably, compound 24e exhibited potency comparable to that of 4 but was more selective for ERRγ over the three other related subtypes ERRα, ERRβ, and estrogen receptor α. Furthermore, compound 24e exhibited a superior in vitro ADMET profile compared with the other compounds. Thus, this newly synthesized class of ERRγ inverse agonists could provide lead candidates for developing clinical therapies for ERRγ-related disorders.
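The potencies above are reported as half-maximal inhibitory concentrations. As a point of reference for how such values are typically derived, the sketch below fits a four-parameter logistic (Hill) curve to hypothetical dose-response data and reads off the IC50; the data points and parameter names are made up for illustration and are not from the paper.

```python
# Minimal sketch: estimating an IC50 by fitting a four-parameter logistic curve.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Response as a function of inhibitor concentration (same units as ic50)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical dose-response data: concentrations in µM, normalized activity (%).
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
activity = np.array([98, 95, 88, 70, 45, 22, 10, 5], dtype=float)

params, _ = curve_fit(
    four_param_logistic, conc, activity,
    p0=[0.0, 100.0, 1.0, 1.0],   # initial guesses: bottom, top, IC50, Hill slope
    maxfev=10000,
)
print(f"Fitted IC50 ≈ {params[2]:.2f} µM (Hill slope {params[3]:.2f})")
```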


Subject(s)
Drug Inverse Agonism; Receptors, Estrogen/antagonists & inhibitors; Tamoxifen/analogs & derivatives; Humans; Inhibitory Concentration 50; Ligands; Small Molecule Libraries/chemical synthesis; Structure-Activity Relationship; Tamoxifen/chemical synthesis; Tamoxifen/pharmacokinetics; Tamoxifen/pharmacology