1.
Sci Rep ; 14(1): 11750, 2024 05 23.
Article in English | MEDLINE | ID: mdl-38782964

ABSTRACT

Sex determination is essential for identifying unidentified individuals, particularly in forensic contexts. Traditional methods for sex determination involve manual measurements of skeletal features on CBCT scans. However, these manual measurements are labor-intensive, time-consuming, and error-prone. The purpose of this study was to automatically and accurately determine sex on a CBCT scan using a two-stage anatomy-guided attention network (SDetNet). SDetNet consisted of a 2D frontal sinus segmentation network (FSNet) and a 3D anatomy-guided attention network (SDNet). FSNet segmented frontal sinus regions in the CBCT images and extracted regions of interest (ROIs) near them. Then, the ROIs were fed into SDNet to predict sex accurately. To improve sex determination performance, we proposed multi-channel inputs (MSIs) and an anatomy-guided attention module (AGAM), which encouraged SDetNet to learn differences in the anatomical context of the frontal sinus between males and females. SDetNet showed superior sex determination performance in the area under the receiver operating characteristic curve, accuracy, Brier score, and specificity compared with the other 3D CNNs. Moreover, the results of ablation studies showed a notable improvement in sex determination with the embedding of both MSI and AGAM. Consequently, SDetNet demonstrated automatic and accurate sex determination by learning the anatomical context information of the frontal sinus on CBCT scans.
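The FSNet-to-SDNet hand-off described above (segment the frontal sinus, then crop an ROI around it for classification) can be sketched with numpy; `extract_roi` and the margin value are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

def extract_roi(mask: np.ndarray, margin: int = 8) -> tuple:
    """Return slices bounding the foreground of a binary segmentation
    mask, expanded by `margin` voxels per side and clipped to the
    image bounds (a sketch of the segmentation -> ROI hand-off)."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

# Toy 2D example: a segmented blob inside a 64x64 slice.
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 25:40] = True
roi = extract_roi(mask, margin=4)
patch = mask[roi]          # the cropped region fed to the classifier
print(patch.shape)         # (18, 23)
```

The same slicing applies unchanged to a 3D CBCT volume, since the bounding box is computed per axis.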


Subject(s)
Cone-Beam Computed Tomography; Frontal Sinus; Humans; Cone-Beam Computed Tomography/methods; Male; Female; Frontal Sinus/diagnostic imaging; Frontal Sinus/anatomy & histology; Imaging, Three-Dimensional/methods; Adult; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Sex Determination by Skeleton/methods
2.
Imaging Sci Dent ; 54(1): 81-91, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38571772

ABSTRACT

Purpose: The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs. Materials and Methods: A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset. Results: Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion: This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.
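The Dice similarity coefficient (DSC) used to evaluate the canal segmentations above is the overlap ratio 2|A ∩ B| / (|A| + |B|) between predicted and ground-truth masks; a minimal numpy sketch, illustrative only:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * np.logical_and(pred, gt).sum() / denom)

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 0, 0],
                 [0, 1, 1]])
print(dice_coefficient(pred, gt))  # 2*2 / (3+3) = 0.666...
```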

3.
Sci Rep ; 12(1): 4075, 2022 03 08.
Article in English | MEDLINE | ID: mdl-35260710

ABSTRACT

Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Therefore, various studies have designed convolutional neural network-based segmentation models for the pancreas. However, the deep learning approach is limited by a lack of data, and studies conducted on large computed tomography datasets are scarce. This study therefore aimed to perform deep-learning-based semantic segmentation of the pancreas in 1,006 participants and to evaluate automatic segmentation performance with four individual three-dimensional segmentation networks. We performed internal validation with the 1,006 patients and external validation using the Cancer Imaging Archive pancreas dataset. The best-performing of the four deep learning networks obtained mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively, in internal validation. On the external dataset, the deep learning network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreatic segmentation and quantitative information about the pancreas on abdominal computed tomography.
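The precision and recall values reported above are voxel-wise: precision is the fraction of predicted pancreas voxels that are correct, and recall the fraction of ground-truth voxels recovered. A minimal numpy sketch (illustrative, not the study's evaluation code):

```python
import numpy as np

def precision_recall(pred, gt):
    """Voxel-wise precision and recall for binary segmentation masks."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()          # true-positive voxels
    precision = float(tp / pred.sum()) if pred.sum() else 0.0
    recall = float(tp / gt.sum()) if gt.sum() else 0.0
    return precision, recall

# Toy example: 4 predicted voxels, 2 of them correct, all gt recovered.
pred = np.array([1, 1, 1, 1, 0, 0])
gt   = np.array([1, 1, 0, 0, 0, 0])
p, r = precision_recall(pred, gt)
print(p, r)  # 0.5 1.0
```

Per-case values are typically averaged over the test set, which is how the mean figures quoted above would be obtained.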


Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Pancreas/diagnostic imaging; Tomography, X-Ray Computed
4.
Diagnostics (Basel) ; 12(3)2022 Mar 18.
Article in English | MEDLINE | ID: mdl-35328288

ABSTRACT

Computed tomography (CT) is the most reliable method for the accurate diagnosis of sinusitis, while X-ray has long been used as the first imaging technique for the early detection of sinusitis symptoms. More importantly, radiography plays a key role in determining whether a CT examination should be performed for further evaluation. To simplify the diagnostic process for paranasal sinus views, and to avoid CT scans with their disadvantages of high radiation dose, high cost, and high time consumption, this study proposes a multi-view convolutional neural network (CNN) that accurately estimates the severity of sinusitis by analyzing only radiographs consisting of Waters' view and Caldwell's view, without the aid of CT scans. The proposed network is designed as a cascaded architecture and simultaneously provides decisions for maxillary sinus localization and sinusitis classification. Using the proposed network, we obtained an average area under the curve (AUC) of 0.722 for maxillary sinusitis classification, with AUCs of 0.750 and 0.700 for left and right maxillary sinusitis, respectively.
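The AUC figures above are equivalent to the probability that a randomly chosen positive (sinusitis) case receives a higher score than a randomly chosen negative one (the Mann-Whitney U formulation). A dependency-free sketch, illustrative only:

```python
def auc(scores, labels):
    """AUC as the probability that a random positive outranks a
    random negative; ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one positive case is ranked below a negative case,
# so 3 of the 4 positive/negative pairs are ordered correctly.
print(auc([0.9, 0.6, 0.4, 0.2], [1, 0, 1, 0]))  # 0.75
```

An AUC of 0.722, as reported above, therefore means the model orders a random sinusitis/non-sinusitis pair correctly about 72% of the time.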

5.
Sci Rep ; 11(1): 13445, 2021 06 29.
Article in English | MEDLINE | ID: mdl-34188141

ABSTRACT

The habenula is one of the most important brain regions for investigating the etiology of psychiatric diseases such as major depressive disorder (MDD). However, the habenula is challenging to delineate with the naked human eye in brain imaging due to its low contrast and tiny size, and the manual segmentation results vary greatly depending on the observer. Therefore, there is a great need for automatic quantitative analytic methods of the habenula for psychiatric research purposes. Here we propose an automated segmentation and volume estimation method for the habenula in 7 Tesla magnetic resonance imaging based on a deep learning-based semantic segmentation network. The proposed method, using the data of 69 participants (33 patients with MDD and 36 normal controls), achieved an average precision, recall, and dice similarity coefficient of 0.869, 0.865, and 0.852, respectively, in the automated segmentation task. Moreover, the intra-class correlation coefficient reached 0.870 in the volume estimation task. This study demonstrates that this deep learning-based method can provide accurate and quantitative analytic results of the habenula. By providing rapid and quantitative information on the habenula, we expect our proposed method will aid future psychiatric disease studies.
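The intra-class correlation coefficient (ICC) quoted above measures agreement between two sets of volume measurements (e.g. automated vs. manual). The exact ICC variant used in the study is not stated here; a common one-way random-effects, single-rater form ICC(1,1) can be sketched with numpy:

```python
import numpy as np

def icc_1_1(x: np.ndarray) -> float:
    """One-way random-effects, single-rater ICC(1,1) for an
    (n_targets, k_raters) score matrix: (MSB - MSW) / (MSB + (k-1)*MSW),
    where MSB/MSW are the between- and within-target mean squares."""
    n, k = x.shape
    row_means = x.mean(axis=1)
    grand = x.mean()
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((x - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return float((msb - msw) / (msb + (k - 1) * msw))

# Identical measurements per subject -> perfect agreement (ICC = 1.0).
perfect = np.array([[10.0, 10.0], [20.0, 20.0], [30.0, 30.0]])
print(icc_1_1(perfect))  # 1.0
```

Small per-subject disagreements pull the ICC below 1, which is the sense in which 0.870 reflects good but imperfect volume agreement.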


Subject(s)
Deep Learning; Depressive Disorder, Major/diagnostic imaging; Habenula/diagnostic imaging; Magnetic Resonance Imaging; Adult; Female; Humans; Male; Reproducibility of Results
6.
Healthc Inform Res ; 26(4): 321-327, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33190466

ABSTRACT

OBJECTIVES: Changes in the pancreatic volume (PV) are useful as potential clinical markers for some pancreas-related diseases. The objective of this study was to measure the volume of the pancreas using computed tomography (CT) volumetry and to evaluate its relationships with sex, age, body mass index (BMI), and sarcopenia. METHODS: We retrospectively analyzed the abdominal CT scans of 1,003 subjects aged between 10 and 90 years. The pancreas was segmented manually to define the region of interest (ROI) on the CT images, and the PV was then measured by counting the voxels in all ROIs within the pancreas boundary. Sarcopenia was identified from CT images by measuring the cross-sectional area of the skeletal muscle around the third lumbar vertebra. RESULTS: The mean volume of the pancreas was 62.648 ± 19.094 cm3. The results indicated a negative correlation between the PV and age. There was a positive correlation between the PV and BMI for all subjects, females, and males (r = 0.343, p < 0.001; r = 0.461, p < 0.001; and r = 0.244, p < 0.001, respectively). Additionally, there was a positive correlation between the PV and sarcopenia for females (r = 0.253, p < 0.001) and males (r = 0.200, p < 0.001). CONCLUSIONS: CT pancreas volumetry results may help physicians follow up or predict the condition of the pancreas after interventions for pancreas-related disease in the future.
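The voxel-counting volumetry described in the METHODS reduces to multiplying the number of ROI voxels by the per-voxel volume (the product of the CT voxel spacings) and converting mm3 to cm3; a minimal numpy sketch, with an assumed isotropic spacing for illustration:

```python
import numpy as np

def volume_cm3(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary ROI mask: voxel count times per-voxel volume
    (product of the voxel spacings in mm), converted mm^3 -> cm^3."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0

# 1,000 foreground voxels at 1 mm isotropic spacing -> 1 cm^3.
mask = np.zeros((10, 10, 20), dtype=bool)
mask[:, :, :10] = True  # 10 * 10 * 10 = 1,000 voxels
print(volume_cm3(mask))  # 1.0
```

With real CT data the spacing comes from the scan metadata and is usually anisotropic (finer in-plane than between slices), which the spacing product accounts for.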
