Results 1 - 8 of 8
1.
J Xray Sci Technol ; 31(6): 1315-1332, 2023.
Article in English | MEDLINE | ID: mdl-37840464

ABSTRACT

BACKGROUND: Dental panoramic imaging plays a pivotal role in dentistry for diagnosis and treatment planning. However, positioning patients correctly can be challenging for technicians because of the complexity of the imaging equipment and variations in patient anatomy, and positioning errors can compromise image quality and potentially result in misdiagnosis. OBJECTIVE: This research aims to develop and validate a deep learning model capable of accurately and efficiently identifying multiple positioning errors in dental panoramic imaging. METHODS AND MATERIALS: This retrospective study used 552 panoramic images selected from a hospital Picture Archiving and Communication System (PACS). We defined six types of errors (E1-E6), namely (1) slumped position, (2) chin tipped low, (3) open lip, (4) head turned to one side, (5) head tilted to one side, and (6) tongue against the palate. First, six convolutional neural network (CNN) models were employed to extract image features, which were then fused using transfer learning. Next, support vector machine (SVM) classifiers were built on the fused image features to detect the multiple positioning errors. Finally, classifier performance was evaluated using three indices: precision, recall, and accuracy. RESULTS: Experimental results show that fusing the image features and training six binary SVM classifiers yielded high accuracy, recall, and precision; the classifier achieved an accuracy of 0.832 for identifying multiple positioning errors. CONCLUSIONS: This study demonstrates that six SVM classifiers can effectively identify multiple positioning errors in dental panoramic imaging. Fusing the extracted image features and employing SVM classifiers improves diagnostic precision, suggesting potential enhancements in dental imaging efficiency and diagnostic accuracy. Future research should consider larger datasets and explore real-time clinical application.
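The fusion-plus-SVM pipeline described above can be sketched with scikit-learn. The features below are random stand-ins for the CNN outputs and the six binary labels are synthetic, so only the structure — feature-level fusion feeding six per-error binary SVMs — mirrors the abstract:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Random stand-ins for features extracted by pretrained CNN backbones;
# the study used six CNNs, two suffice here to show feature-level fusion.
n_images = 200
feats = [rng.normal(size=(n_images, d)) for d in (64, 32)]
fused = np.concatenate(feats, axis=1)            # feature-level fusion

# Six synthetic binary labels, one per positioning error E1-E6.
labels = (fused[:, :6] > 0).astype(int)

# One binary SVM per error type, as the abstract describes.
classifiers = [SVC(kernel="linear").fit(fused, labels[:, k]) for k in range(6)]
preds = np.column_stack([clf.predict(fused) for clf in classifiers])
accuracy = float((preds == labels).mean())
```

Each image thus receives six independent yes/no decisions, which is how a single radiograph can be flagged with multiple positioning errors at once.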


Subject(s)
Deep Learning , Radiology Information Systems , Humans , Retrospective Studies , Diagnostic Imaging , Neural Networks, Computer
2.
Healthcare (Basel) ; 11(15)2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37570467

ABSTRACT

This study focuses on overcoming challenges in classifying eye diseases from color fundus photographs by leveraging deep learning techniques, aiming to enhance early detection and diagnostic accuracy. We utilized a dataset of 6392 color fundus photographs across eight disease categories, which was later augmented to 17,766 images. Five well-known convolutional neural networks (CNNs), EfficientNetB0, MobileNetV2, ShuffleNet, ResNet50, and ResNet101, and a custom-built CNN were integrated and trained on this dataset. Image sizes were standardized, and model performance was evaluated via accuracy, Kappa coefficient, and precision metrics. ShuffleNet and EfficientNetB0 demonstrated strong performance, while our custom 17-layer CNN outperformed all with an accuracy of 0.930 and a Kappa coefficient of 0.920. Furthermore, we found that fusing image features with classical machine learning classifiers increased performance, with logistic regression showing the best results. Our study highlights the potential of AI and deep learning models in accurately classifying eye diseases and demonstrates the efficacy of custom-built models and of fusing deep learning with classical methods. Future work should focus on validating these methods across larger datasets and assessing their real-world applicability.
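The final step — feeding fused deep features to classical classifiers and comparing them — can be sketched as follows. The features and eight-class labels are synthetic stand-ins (real features would come from the CNNs), so only the comparison structure is illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic "fused deep features" for an eight-class problem, standing in
# for features the CNNs would extract from the fundus photographs.
X = rng.normal(size=(800, 48))
y = rng.integers(0, 8, size=800)
X[np.arange(800), y] += 3.0   # inject class signal so the toy task is learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scores = {
    name: clf.fit(X_tr, y_tr).score(X_te, y_te)
    for name, clf in [("logistic_regression", LogisticRegression(max_iter=500)),
                      ("naive_bayes", GaussianNB())]
}
```

Swapping the classifier while holding the features fixed, as above, is what lets a study report that one classical method (here, logistic regression in the paper) performs best on the fused representation.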

3.
Healthcare (Basel) ; 10(12)2022 Nov 27.
Article in English | MEDLINE | ID: mdl-36553906

ABSTRACT

According to statistics from the Health Promotion Administration of Taiwan's Ministry of Health and Welfare, more than ten thousand women are diagnosed with breast cancer every year. Mammography is widely used to detect breast cancer. However, it is limited by the operator's technique, the cooperation of the subjects, and the subjective interpretation by the physician, which results in inconsistent identification. Therefore, this study explores the use of a deep neural network algorithm for the classification of mammography images. In the experimental design, a retrospective study was used to collect imaging data from actual clinical cases. The mammography images were collected and classified according to the Breast Imaging Reporting and Data System (BI-RADS). In terms of model building, a fully convolutional dense connection network (FC-DCN) is used as the network backbone. All images underwent preprocessing and data augmentation, and transfer learning was applied to build the mammography image classification model. The results show that the model's accuracy, sensitivity, and specificity were 86.37%, 100%, and 72.73%, respectively. The FC-DCN framework effectively reduces the number of training parameters while yielding a reasonable image classification model for mammography.
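A minimal sketch of the augmentation step, assuming simple flips and 90° rotations (the abstract does not specify the exact augmentation recipe used):

```python
import numpy as np

def augment(image):
    """Return simple augmented variants of a 2-D image: identity, flips,
    and 90-degree rotations. Note: flips change breast laterality, so a
    real mammography pipeline would choose its transforms more carefully."""
    return [
        image,
        np.fliplr(image),
        np.flipud(image),
        np.rot90(image, 1),
        np.rot90(image, 3),
    ]

img = np.arange(16.0).reshape(4, 4)
variants = augment(img)
```

Applied across a dataset, this kind of transform list multiplies the number of training images without collecting new cases, which is how augmentation helps a parameter-efficient backbone like FC-DCN train on limited clinical data.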

4.
Biosensors (Basel) ; 12(5)2022 May 03.
Article in English | MEDLINE | ID: mdl-35624595

ABSTRACT

Many neurological and musculoskeletal disorders are associated with problems related to postural movement. Noninvasive tracking devices are used to record, analyze, measure, and detect the postural control of the body, which may indicate health problems in real time. A total of 35 young adults without any health problems were recruited for this study to participate in a walking experiment. An iso-block postural identity method was used to quantitatively analyze posture control and walking behavior. Participants who exhibited straightforward walking and skewed walking were assigned to the control and experimental groups, respectively. Dynamic joint-node plots were generated from OpenPose-based keypoints, and walking skewness was classified using fusion deep learning with convolutional neural networks. The maximum specificity and sensitivity, achieved using a combination of ResNet101 and the naïve Bayes classifier, were 0.84 and 0.87, respectively. The proposed approach successfully combines cell phone camera recordings, cloud storage, and fusion deep learning for posture estimation and classification.
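One toy way to quantify "skewed" versus "straightforward" walking — a hypothetical helper, not the paper's OpenPose/CNN pipeline — is the angle between the principal direction of the walking path and the straight-ahead axis:

```python
import numpy as np

def walking_skew_deg(path):
    """Toy skewness measure: angle between the least-squares (principal)
    direction of a 2-D walking path and the straight-ahead y axis."""
    centered = path - path.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    d = vt[0]                                   # principal direction of travel
    cos = abs(d[1]) / np.linalg.norm(d)
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))

t = np.linspace(0.0, 5.0, 100)
straight = np.column_stack([np.zeros_like(t), t])
skewed = np.column_stack([0.3 * t, t])          # drifts sideways while advancing
straight_skew = walking_skew_deg(straight)
skew = walking_skew_deg(skewed)
```

A path that drifts sideways at a 0.3 slope yields roughly a 16.7° skew, while a straight path yields 0°, giving a simple scalar that could label recordings into the two groups.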


Subject(s)
Artificial Intelligence , Posture , Bayes Theorem , Humans , Neural Networks, Computer , Walking , Young Adult
5.
Sensors (Basel) ; 21(21)2021 Oct 30.
Article in English | MEDLINE | ID: mdl-34770534

ABSTRACT

Positron emission tomography (PET) provides functional images and identifies abnormal metabolic regions of the whole body, effectively detecting tumor presence and distribution. The filtered back-projection (FBP) algorithm is one of the most common image reconstruction methods. However, it generates streak artifacts on the reconstructed image, which can affect the clinical diagnosis of lesions. Past studies have shown that two-dimensional morphological structure operators (2D-MSO) reduce streak artifacts and improve image quality, but this approach processes only the noise distribution in 2D space and never considers the noise distribution in 3D space. This study was designed to develop three-dimensional morphological structure operators (3D-MSO) for nuclear medicine imaging that effectively eliminate streak artifacts without reducing image quality. A parallel operation was also used to find the operator minimizing the background standard deviation of the images, yielding three-dimensional morphological structure operators with an optimal response curve (3D-MSO/ORC). In verification with a Jaszczak phantom and rat images, 3D-MSO/ORC showed better denoising performance and image quality than the 2D-MSO method. Thus, 3D-MSO/ORC with a 3 × 3 × 3 mask can reduce noise efficiently and provide stability in FBP images.
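The 3D morphological idea can be sketched in pure NumPy as a grey-scale opening with a flat 3 × 3 × 3 box — a minimal stand-in under that assumption, not the paper's 3D-MSO/ORC implementation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def grey_open_3d(vol):
    """Grey-scale morphological opening (erosion, then dilation) with a
    flat 3x3x3 box; volume edges are handled by reflective padding."""
    def local(op, v):
        padded = np.pad(v, 1, mode="reflect")
        windows = sliding_window_view(padded, (3, 3, 3))
        return op(windows, axis=(-3, -2, -1))
    return local(np.max, local(np.min, vol))

rng = np.random.default_rng(2)
clean = np.zeros((16, 16, 16))
clean[4:12, 4:12, 4:12] = 1.0                  # a bright "hot region"
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
opened = grey_open_3d(noisy)                   # opening is anti-extensive
```

Because the window spans all three axes, bright noise spikes are suppressed using neighbors from adjacent slices as well — the 3D information that a slice-by-slice 2D operator ignores.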


Subject(s)
Algorithms , Artifacts , Animals , Image Processing, Computer-Assisted , Phantoms, Imaging , Positron-Emission Tomography , Rats
6.
Biosensors (Basel) ; 11(6)2021 Jun 08.
Article in English | MEDLINE | ID: mdl-34201215

ABSTRACT

Anesthesia assessment is critically important during surgery. Anesthesiologists use electrocardiogram (ECG) signals to assess the patient's condition and give appropriate medications. However, ECG signals are not easy to interpret; even physicians with more than 10 years of clinical experience may misjudge them. Therefore, this study uses convolutional neural networks to classify ECG image types to assist in anesthesia assessment. The research uses Internet of Things (IoT) technology to develop an ECG signal measurement prototype, and classifies signals through deep neural networks into four types: QRS widening, sinus rhythm, ST depression, and ST elevation. Three models (ResNet, AlexNet, and SqueezeNet) were developed with a 50/50 split between training and test sets. The accuracy and kappa statistics of ResNet, AlexNet, and SqueezeNet in ECG waveform classification were (0.97, 0.96), (0.96, 0.95), and (0.75, 0.67), respectively. This research shows that it is feasible to measure ECG in real time through IoT and then distinguish the four types with deep neural network models. In the future, more types of ECG images could be added to improve the real-time classification practicality of the deep models.
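As a toy illustration of one of the four signal types — QRS widening — a hypothetical width estimate from a single 1-D beat (not the paper's CNN-based image classification) might look like:

```python
import numpy as np

def qrs_width_ms(beat, fs, frac=0.5):
    """Toy QRS-width estimate: time the beat spends above `frac` of its
    peak amplitude. A hypothetical helper for illustration only."""
    above = beat > frac * beat.max()
    return 1000.0 * above.sum() / fs

fs = 500.0                                    # assumed 500 Hz sampling rate
t = np.arange(0.0, 1.0, 1.0 / fs)
beat = np.exp(-((t - 0.5) ** 2) / (2 * 0.01 ** 2))  # Gaussian stand-in "R wave"
width = qrs_width_ms(beat, fs)                # a widened QRS exceeds ~120 ms
```

A rule-based measurement like this is brittle on noisy real signals, which motivates learning the four waveform classes directly from ECG images as the study does.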


Subject(s)
Electrocardiography , Neural Networks, Computer , Algorithms , Arrhythmias, Cardiac , Humans , Internet of Things
7.
Sensors (Basel) ; 21(9)2021 May 05.
Article in English | MEDLINE | ID: mdl-34063144

ABSTRACT

Postural control decreases with aging; thus, an efficient and accurate method of detecting postural control is needed. We enrolled 35 elderly adults (aged 82.06 ± 8.74 years) and 20 healthy young adults (aged 21.60 ± 0.60 years), each of whom performed a 40-s standing task six times. The coordinates of 15 joint nodes were captured using a Kinect device (30 Hz). We plotted the joint positions into a single 2D figure (named a joint-node plot, JNP) once per second for up to 40 s. A total of 15 methods combining deep learning and machine learning for postural control classification were investigated. The accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and kappa values of the selected methods were assessed. In validation testing, the highest PPV, NPV, accuracy, sensitivity, specificity, and kappa values all exceeded 0.9. The presented method using JNPs demonstrated strong performance in detecting the postural control ability of young and elderly adults.
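The JNP construction — rasterizing joint coordinates into a single 2D image — can be sketched as follows; the coordinates here are random stand-ins for Kinect data:

```python
import numpy as np

def joint_node_plot(frames, size=64):
    """Rasterize (x, y) joint coordinates in [0, 1) onto a size-by-size
    grid, accumulating one count per joint per sampled frame."""
    img = np.zeros((size, size))
    idx = np.clip((frames * size).astype(int), 0, size - 1)
    for x, y in idx.reshape(-1, 2):
        img[y, x] += 1.0
    return img

rng = np.random.default_rng(3)
# 40 once-per-second samples x 15 joint nodes x (x, y), as in the 40-s tasks.
frames = rng.random((40, 15, 2))
jnp = joint_node_plot(frames)
```

Collapsing a whole trial into one image this way lets standard image classifiers (the deep and machine learning combinations the study compares) operate on postural sway directly.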


Subject(s)
Machine Learning , Postural Balance , Aged , Aging , Humans , Young Adult
8.
Comput Methods Programs Biomed ; 154: 79-88, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29249349

ABSTRACT

BACKGROUND AND OBJECTIVE: Flatfeet can be evaluated by measuring the calcaneal-fifth metatarsal angle on a weight-bearing lateral foot radiograph. This study aimed to develop an automated method for determining the calcaneal-fifth metatarsal angle on weight-bearing lateral foot radiographs. METHOD: The proposed method comprises four processing steps: (1) identification of the regions including the calcaneus and fifth metatarsal bones in a foot image; (2) delineation of the contours of the calcaneus and the fifth metatarsal; (3) determination of the tangential lines of the two bones from the contours; and (4) determination of the calcaneal-fifth metatarsal angle between the two tangential lines as the arch angle. RESULTS: The proposed method was evaluated using 300 weight-bearing lateral foot radiographs. The arch angles determined by the proposed method were compared with those measured by a radiologist, and the errors between the automatically and manually determined angles were used to evaluate the precision of the method. The average error of the proposed method was 1.12° ± 1.57°. In 73.33% of the cases, the arch angles could be determined automatically without redrawing any tangential lines; in 23.00% of the cases, the angles could be correctly determined by redrawing one of the tangential lines; in only 3.67% of the cases did both the calcaneal and fifth metatarsal tangential lines need to be redrawn. CONCLUSION: The results revealed that the proposed method has potential for assisting doctors in measuring arch angles on weight-bearing lateral foot radiographs more efficiently.
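Step (4) reduces to the angle between the two tangential lines' direction vectors; the directions below are hypothetical examples, not measured values:

```python
import numpy as np

def angle_deg(u, v):
    """Acute angle, in degrees, between two lines given by direction vectors."""
    cos = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical tangential-line directions fitted to the two bone contours.
calcaneal = np.array([1.0, 0.35])
fifth_metatarsal = np.array([1.0, -0.20])
arch_angle = angle_deg(calcaneal, fifth_metatarsal)   # ~30.6 degrees
```

Taking the absolute value of the dot product makes the result independent of which way along each tangential line the direction vector points.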


Subject(s)
Automation , Calcaneus/diagnostic imaging , Flatfoot/diagnostic imaging , Foot/diagnostic imaging , Image Processing, Computer-Assisted , Metatarsus/diagnostic imaging , Weight-Bearing , Calcaneus/pathology , Flatfoot/pathology , Foot/pathology , Humans , Metatarsus/pathology , Radiography , Reproducibility of Results