Results 1 - 20 of 77
1.
J Imaging Inform Med ; 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39354294

ABSTRACT

The increasing prevalence of skin diseases necessitates accurate and efficient diagnostic tools. This research introduces a novel skin disease classification model leveraging advanced deep learning techniques. The proposed architecture combines the MobileNet-V2 backbone, Squeeze-and-Excitation (SE) blocks, Atrous Spatial Pyramid Pooling (ASPP), and a Channel Attention Mechanism. The model was trained on four diverse datasets: the PH2 dataset, the Skin Cancer MNIST: HAM10000 dataset, the DermNet dataset, and the Skin Cancer ISIC dataset. Data preprocessing techniques, including image resizing and normalization, played a crucial role in optimizing model performance. In this paper, the MobileNet-V2 backbone is implemented to extract hierarchical features from the preprocessed dermoscopic images. The ASPP module fuses multi-scale contextual information to generate a feature map. The attention mechanisms contributed significantly, capturing inter-channel relationships and multi-scale contextual information to enhance the discriminative power of the features. Finally, the output feature map is converted into a probability distribution through the softmax function. The proposed model outperformed several baseline models, including traditional machine learning approaches, achieving 98.6% overall accuracy in skin disease classification. Its competitive performance with state-of-the-art methods positions it as a valuable tool for assisting dermatologists in early classification. The study also identified limitations and suggested avenues for future research, emphasizing the model's potential for practical implementation in dermatology.
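The SE channel-attention stage described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: the weights `w1`/`w2` are random stand-ins for learned parameters, and the reduction ratio of 4 is an assumption.

```python
import numpy as np

def se_block(feature_map, reduction=4):
    """Squeeze-and-Excitation channel attention (illustrative sketch).

    feature_map: (C, H, W) array. In a trained network w1/w2 are learned;
    here they are random stand-ins.
    """
    c = feature_map.shape[0]
    rng = np.random.default_rng(0)
    w1 = rng.normal(scale=0.1, size=(c // reduction, c))
    w2 = rng.normal(scale=0.1, size=(c, c // reduction))
    # Squeeze: global average pooling over spatial dims -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then a sigmoid gate per channel
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))
    # Scale: reweight each channel by its gate value
    return feature_map * s[:, None, None]
```

The output has the same shape as the input; only the per-channel scaling changes.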

2.
Heliyon ; 10(17): e37293, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39296185

ABSTRACT

Diabetic retinopathy is a serious eye disease that may lead to loss of vision if it is not treated. Early detection is crucial in preventing further vision impairment and enabling timely interventions. Despite notable advancements in AI-based methods for detecting diabetic retinopathy, researchers are still striving to enhance the efficiency of these techniques. In this research, a new strategy is proposed to improve the detection of diabetic retinopathy by increasing diagnostic accuracy and identifying cases in the initial stages. To achieve this, the MobileNet-V2 deep learning network is integrated with an Improved Fire Hawk Optimizer (IFHO). The MobileNet-V2 network is renowned for its efficiency and accuracy in image classification tasks, making it a suitable candidate for diabetic retinopathy detection. By combining it with the IFHO, the feature selection process is optimized, which is essential for identifying relevant patterns and abnormalities related to diabetic retinopathy. The Diabetic Retinopathy 2015 dataset was used to evaluate the effectiveness of the MobileNet-V2/IFHO model. The study results indicate that the DRMNV2/IFHO model consistently outperforms other methods in terms of precision, accuracy, and recall. Specifically, the model achieves an average precision of 97.521%, accuracy of 96.986%, and recall of 98.543%. Moreover, when compared to advanced techniques, the DRMNV2/IFHO model demonstrates superior performance in specificity, F1-score, and AUC, with average values of 97.233%, 93.8%, and 0.927, respectively. These results underscore the potential of the DRMNV2/IFHO model as a valuable tool for improving the accuracy and efficiency of DR diagnosis.
Nevertheless, additional validation and testing on larger datasets are required to verify the model's effectiveness and robustness in real-world clinical scenarios.
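The wrapper-style feature selection that the IFHO performs can be illustrated with a much simpler stand-in. The sketch below replaces the Fire Hawk Optimizer with plain random search over binary feature masks and scores each subset with a trivial nearest-centroid fitness; the optimizer, fitness function, and iteration budget are all assumptions, not the paper's method.

```python
import numpy as np

def random_search_feature_selection(X, y, n_iters=200, seed=0):
    """Wrapper feature selection sketch: random search over binary masks
    (a stand-in for the metaheuristic IFHO), scored by the training
    accuracy of a nearest-centroid classifier on the selected columns."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]

    def fitness(mask):
        if not mask.any():
            return 0.0
        Xs = X[:, mask]
        c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
        pred = (np.linalg.norm(Xs - c1, axis=1)
                < np.linalg.norm(Xs - c0, axis=1))
        return (pred == (y == 1)).mean()

    best_mask, best_fit = np.ones(n_features, bool), -1.0
    for _ in range(n_iters):
        mask = rng.random(n_features) < 0.5      # random candidate subset
        f = fitness(mask)
        if f > best_fit:
            best_mask, best_fit = mask, f
    return best_mask, best_fit
```

On synthetic data with one informative feature, the search reliably keeps that feature in the selected subset.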

3.
Sensors (Basel) ; 24(17)2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39275523

ABSTRACT

To enable the timely adjustment of the control strategy of automobile active safety systems, enhance their capacity to adapt to complex working conditions, and improve driving safety, this paper introduces a new method for predicting road surface state information and recognizing road adhesion coefficients using an enhanced version of the MobileNet V3 model. On one hand, the Squeeze-and-Excitation (SE) module is replaced by the Convolutional Block Attention Module (CBAM), which enhances feature extraction by attending to both spatial and channel dimensions. On the other hand, the cross-entropy loss function is replaced by the Bias Loss function, which reduces the random prediction problem occurring during optimization and improves identification accuracy. Finally, the proposed method is evaluated in an experiment with a four-wheel-drive ROS robot platform. Results indicate that a classification precision of 95.53% is achieved, which is higher than existing road adhesion coefficient identification methods.

4.
Sensors (Basel) ; 24(16)2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39204852

ABSTRACT

With the rapid development of the Industrial Internet of Things in rotating machinery, the amount of data sampled by mechanical vibration wireless sensor networks (MvWSNs) has increased significantly, straining bandwidth capacity. Concurrently, the safety requirements for rotating machinery have escalated, necessitating enhanced real-time data processing capabilities. Conventional methods, reliant on experiential approaches, have proven inefficient in meeting these evolving challenges. To this end, a fault detection method for rotating machinery based on MobileNet in MvWSNs is proposed to address these intractable issues. The small, lightweight deep learning model enables near-real-time sensing and fault detection while easing the communication load on MvWSNs. The trained model is deployed on the MvWSN sensor node, an edge computing platform developed via embedded STM32 microcontrollers (STMicroelectronics International NV, Geneva, Switzerland). Data acquisition, data processing, and data classification are all executed on the computing- and energy-constrained sensor node. The experimental results demonstrate that the proposed fault detection method achieves an accuracy of about 0.99 on the DDS dataset and 0.98 on the MvWSNs sensor node. Furthermore, the final transmitted data size is only 0.1% of the original data size. The method is also time-saving: detection is accomplished within 135 ms, whereas transmitting the raw data to the monitoring center takes about 1000 ms when there are four sensor nodes in the network. Thus, the proposed edge computing method shows good application prospects in fault detection and control of rotating machinery with high time sensitivity.
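The ~0.1% transmitted-data figure follows from sending only a classification result instead of the raw vibration window. The arithmetic below uses assumed illustrative numbers (a 4096-sample int16 window and an 8-byte label message), not values from the paper:

```python
# Back-of-envelope check of a ~0.1% transmitted-data ratio when edge
# inference replaces raw-data transmission (all sizes are assumptions).
raw_window_bytes = 4096 * 2   # 4096 vibration samples, 2 bytes each
label_bytes = 8               # class id plus a small header (assumed)
ratio = label_bytes / raw_window_bytes   # about 0.001, i.e. ~0.1%
```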

5.
Am J Otolaryngol ; 45(6): 104474, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39137696

ABSTRACT

OBJECTIVE: Early diagnosis of laryngeal cancer (LC) is crucial, particularly in rural areas. Despite existing studies on deep learning models for LC identification, challenges remain in selecting suitable models for rural areas with shortages of laryngologists and limited computer resources. We present the intelligent laryngeal cancer detection system (ILCDS), a deep learning-based solution tailored for effective LC screening in resource-constrained rural areas. METHODS: We compiled a dataset comprising 2023 laryngoscopic images and applied data augmentation techniques for dataset expansion. Subsequently, we utilized eight deep learning models-AlexNet, VGG, ResNet, DenseNet, MobileNet, ShuffleNet, Vision Transformer, and Swin Transformer-for LC identification. A comprehensive evaluation of their performance and efficiency was conducted, and the most suitable model was selected to assemble the ILCDS. RESULTS: Regarding performance, all models attained an average accuracy exceeding 90% on the test set. Particularly noteworthy are VGG, DenseNet, and MobileNet, which exceeded an accuracy of 95%, with scores of 95.32%, 95.75%, and 95.99%, respectively. Regarding efficiency, MobileNet excels owing to its compact size and fast inference speed, making it an ideal model for integration into the ILCDS. CONCLUSION: The ILCDS demonstrated promising accuracy in LC detection while maintaining modest computational resource requirements, indicating its potential to enhance LC screening accuracy and alleviate the workload on otolaryngologists in rural areas.

6.
Front Med (Lausanne) ; 11: 1436646, 2024.
Article in English | MEDLINE | ID: mdl-39099594

ABSTRACT

Timely and unbiased evaluation of Autism Spectrum Disorder (ASD) is essential for providing lasting benefits to affected individuals. However, conventional ASD assessment heavily relies on subjective criteria, lacking objectivity. Recent advancements propose the integration of modern processes, including artificial intelligence-based eye-tracking technology, for early ASD assessment. Nonetheless, the current diagnostic procedures for ASD often involve specialized investigations that are both time-consuming and costly, heavily reliant on the proficiency of specialists and the employed techniques. To address the pressing need for prompt, efficient, and precise ASD diagnosis, this study explored sophisticated intelligent techniques capable of automating disease categorization. It utilized a freely accessible dataset comprising 547 eye-tracking scan paths obtained from 328 typically developing children and 219 children with autism. To counter overfitting, state-of-the-art image resampling approaches were employed to expand the training dataset. Leveraging deep learning algorithms, specifically MobileNet, VGG19, DenseNet169, and a hybrid of MobileNet-VGG19, automated classifiers were developed that hold promise for enhancing diagnostic precision and effectiveness. The MobileNet model demonstrated superior performance compared to existing systems, achieving an impressive accuracy of 100%, while the VGG19 model achieved 92% accuracy. These findings demonstrate the potential of eye-tracking data to aid physicians in efficiently and accurately screening for autism. Moreover, the reported results suggest that deep learning approaches outperform existing event detection algorithms, achieving a similar level of accuracy as manual coding. Users and healthcare professionals can utilize these classifiers to enhance the accuracy of ASD diagnosis.

7.
BMC Med Inform Decis Mak ; 24(1): 222, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39112991

ABSTRACT

Lung and colon cancers are leading contributors to cancer-related fatalities globally, distinguished by unique histopathological traits discernible through medical imaging. Effective classification of these cancers is critical for accurate diagnosis and treatment. This study addresses critical challenges in the diagnostic imaging of lung and colon cancers, which are among the leading causes of cancer-related deaths worldwide. Recognizing the limitations of existing diagnostic methods, which often suffer from overfitting and poor generalizability, our research introduces a novel deep learning framework that synergistically combines the Xception and MobileNet architectures. This ensemble model aims to enhance feature extraction, improve model robustness, and reduce overfitting. Our methodology involves training the hybrid model on a comprehensive dataset of histopathological images, followed by validation against a balanced test set. The results demonstrate an impressive classification accuracy of 99.44%, with perfect precision and recall in identifying certain cancerous and non-cancerous tissues, marking a significant improvement over traditional approaches. The practical implications of these findings are profound. By integrating Gradient-weighted Class Activation Mapping (Grad-CAM), the model offers enhanced interpretability, allowing clinicians to visualize the diagnostic reasoning process. This transparency is vital for clinical acceptance and enables more personalized, accurate treatment planning. Our study not only pushes the boundaries of medical imaging technology but also sets the stage for future research aimed at expanding these techniques to other types of cancer diagnostics.
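The Grad-CAM step cited above reduces to a short computation on the last convolutional layer's activations and gradients. A minimal sketch of that core step (the upsampling of the heatmap to image size is omitted, and the arrays here would come from a real network's forward/backward pass):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM computation.

    activations, gradients: (C, H, W) arrays from the target conv layer.
    Channel weights are the global-average-pooled gradients; the heatmap
    is the ReLU of the weighted sum of activation maps, normalized to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k per channel
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A^k
    cam = np.maximum(cam, 0.0)                        # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize for display
    return cam
```

With a single hot channel and positive gradients, the heatmap peaks exactly where that channel activates.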


Subject(s)
Colonic Neoplasms , Deep Learning , Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/classification , Colonic Neoplasms/diagnostic imaging , Colonic Neoplasms/classification , Artificial Intelligence
8.
J Environ Manage ; 362: 121274, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38838537

ABSTRACT

Cyanobacteria are the dominating microorganisms in aquatic environments, posing significant risks to public health due to toxin production in drinking water reservoirs. Traditional water quality assessments for abundance of the toxigenic genera in water samples are both time-consuming and error-prone, highlighting the urgent need for a fast and accurate automated approach. This study addresses this gap by introducing a novel public dataset, TCB-DS (Toxigenic Cyanobacteria Dataset), comprising 2593 microscopic images of 10 toxigenic cyanobacterial genera, and subsequently an automated system to identify these genera, which can be divided into two parts. Initially, a feature extractor Convolutional Neural Network (CNN) model was employed, with MobileNet emerging as the optimal choice after comparing it with various other popular architectures such as MobileNetV2, VGG, etc. Secondly, to classify the extracted features from the first part, multiple approaches were tested, and the experimental results indicate that a Fully Connected Neural Network (FCNN) had the optimal performance, with weighted accuracy and F1-score of 94.79% and 94.91%, respectively. The highest macro accuracy and F1-score were 90.17% and 87.64%, acquired using MobileNetV2 as the feature extractor and an FCNN as the classifier. These results demonstrate that the proposed approach can be employed as an automated screening tool for identifying toxigenic cyanobacteria, with practical implications for water quality control, replacing the traditional estimates made by lab operators from microscopic observations. The dataset and code of this paper are publicly available at https://github.com/iman2693/CTCB.


Subject(s)
Cyanobacteria , Neural Networks, Computer , Water Quality , Algorithms , Quality Control , Automation
9.
Mar Pollut Bull ; 205: 116616, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38936001

ABSTRACT

Accurately classifying microalgae species is vital for monitoring marine ecosystems and managing the emergence of marine mucilage. Traditional methods have been inadequate due to time-consuming processes and the need for expert knowledge. The purpose of this article is to employ convolutional neural networks (CNNs) and support vector machines (SVMs) to improve classification accuracy and efficiency. By employing advanced computational techniques, including MobileNet and GoogleNet models alongside SVM classification, the study demonstrates significant advancements over conventional identification methods. In the classification of a dataset consisting of 7820 images using four different SVM kernel functions, the linear kernel achieved the highest success rate at 98.79%, followed by the RBF kernel at 98.73%, the polynomial kernel at 97.84%, and the sigmoid kernel at 97.20%. This research not only provides a methodological framework for future studies in marine biodiversity monitoring but also highlights the potential for real-time applications in ecological conservation and understanding mucilage dynamics amidst climate change and environmental pollution.
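The four kernel functions compared in this abstract have standard closed forms. For reference, here they are for a pair of feature vectors; the hyperparameter defaults (`gamma`, `degree`, `coef0`, `alpha`) are illustrative, not the values tuned in the study:

```python
import numpy as np

def linear_kernel(x, y):
    return x @ y                                   # plain dot product

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))   # Gaussian similarity

def polynomial_kernel(x, y, degree=3, coef0=1.0):
    return (x @ y + coef0) ** degree

def sigmoid_kernel(x, y, alpha=0.01, coef0=0.0):
    return np.tanh(alpha * (x @ y) + coef0)        # bounded in (-1, 1)
```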


Subject(s)
Ecosystem , Environmental Monitoring , Microalgae , Neural Networks, Computer , Support Vector Machine , Environmental Monitoring/methods , Biodiversity
10.
Sensors (Basel) ; 24(10)2024 May 14.
Article in English | MEDLINE | ID: mdl-38793964

ABSTRACT

Deaf and hard-of-hearing people mainly communicate using sign language, which is a set of signs made using hand gestures combined with facial expressions to make meaningful and complete sentences. The problem that faces deaf and hard-of-hearing people is the lack of automatic tools that translate sign languages into written or spoken text, which has led to a communication gap between them and their communities. Most state-of-the-art vision-based sign language recognition approaches focus on translating non-Arabic sign languages, with few targeting the Arabic Sign Language (ArSL) and even fewer targeting the Saudi Sign Language (SSL). This paper proposes a mobile application that helps deaf and hard-of-hearing people in Saudi Arabia to communicate efficiently with their communities. The prototype is an Android-based mobile application that applies deep learning techniques to translate isolated SSL to text and audio and includes unique features that are not available in other related applications targeting ArSL. The proposed approach, when evaluated on a comprehensive dataset, demonstrated its effectiveness by outperforming several state-of-the-art approaches and producing results comparable to others. Moreover, testing the prototype on several deaf and hard-of-hearing users, in addition to hearing users, proved its usefulness. In the future, we aim to improve the accuracy of the model and enrich the application with more features.


Subject(s)
Deep Learning , Sign Language , Humans , Saudi Arabia , Mobile Applications , Deafness/physiopathology , Persons With Hearing Impairments
11.
Sci Rep ; 14(1): 9574, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38671005

ABSTRACT

In order to protect intangible cultural heritage and promote outstanding cultural works, this article introduces innovative research on Shen Embroidery using convolutional neural networks. The Shen Embroidery dataset was preprocessed to augment the data required for experimentation. Moreover, transfer learning was introduced to fine-tune the recognition network. Specifically, Spatial Pyramid Pooling (SPP) replaces the average pooling layer in the MobileNet V1 network, achieving the fusion of local and global features. The experimental results showed that the improved MobileNet V1 achieved a recognition accuracy of 98.45%, which was 2.3% higher than the baseline MobileNet V1 network. The experiments demonstrated that the improved convolutional neural network can efficiently recognize Shen Embroidery and provide technical support for the intelligent development of intangible cultural heritage.
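The SPP substitution described above pools each channel over grids at several scales and concatenates the results, so the output length is fixed regardless of input spatial size. A minimal sketch (the 1/2/4 pyramid levels and max-pooling choice are common SPP defaults, assumed here rather than taken from the paper):

```python
import numpy as np

def spatial_pyramid_pooling(feature_map, levels=(1, 2, 4)):
    """SPP sketch: max-pool each channel of a (C, H, W) map over a 1x1,
    2x2, and 4x4 grid and concatenate, yielding a fixed-length vector
    (local fine-grid bins plus a global 1x1 bin)."""
    c, h, w = feature_map.shape
    out = []
    for n in levels:
        # integer bin edges for an n x n grid (bins may be uneven)
        hs = np.linspace(0, h, n + 1).astype(int)
        ws = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                patch = feature_map[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                out.append(patch.max(axis=(1, 2)))
    return np.concatenate(out)        # length C * (1 + 4 + 16)
```

Two inputs with different spatial sizes produce vectors of identical length, which is what lets SPP replace a fixed average pooling layer.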

12.
Heliyon ; 10(5): e27509, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38468955

ABSTRACT

Several deep-learning-assisted disease assessment schemes (DAS) have been proposed to enhance accurate detection of COVID-19, a critical medical emergency, through the analysis of clinical data. Lung imaging, particularly from CT scans, plays a pivotal role in identifying and assessing the severity of COVID-19 infections. Existing automated methods leveraging deep learning contribute significantly to reducing the diagnostic burden associated with this process. This research aims to develop a simple DAS for COVID-19 detection using pre-trained lightweight deep learning methods (LDMs) applied to lung CT slices. The use of LDMs contributes to a less complex yet highly accurate detection system. The key stages of the developed DAS include image collection and initial processing using Shannon's thresholding, deep-feature mining supported by LDMs, feature optimization utilizing the Brownian Butterfly Algorithm (BBA), and binary classification through three-fold cross-validation. The performance evaluation of the proposed scheme involves assessing individual, fused, and ensemble features. The investigation reveals that the developed DAS achieves a detection accuracy of 93.80% with individual features, 96% with fused features, and an impressive 99.10% with ensemble features. These outcomes affirm the effectiveness of the proposed scheme in significantly enhancing COVID-19 detection accuracy in the chosen lung CT database.
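The "Shannon's thresholding" preprocessing step is commonly realized as entropy-based threshold selection. The sketch below is one plausible reading (a Kapur-style criterion that maximizes the summed entropies of foreground and background), offered as an assumption rather than the paper's exact procedure:

```python
import numpy as np

def shannon_threshold(image, bins=256):
    """Entropy-based threshold selection sketch: pick the gray level
    that maximizes the sum of the Shannon entropies of the background
    and foreground histogram halves (Kapur-style criterion).

    image: array of intensities in [0, 1]; returns a threshold in [0, 1].
    """
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue                      # skip degenerate splits
        q0, q1 = p[:t] / p0, p[t:] / p1   # normalized class histograms
        h = (-np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
             - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0])))
        if h > best_h:
            best_t, best_h = t, h
    return best_t / bins
```

On a clearly bimodal intensity distribution the selected threshold lands between the two modes.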

13.
J Sci Food Agric ; 104(3): 1630-1637, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37842747

ABSTRACT

BACKGROUND: In the contemporary food industry, accurate and rapid differentiation of oolong tea varieties holds paramount importance for traceability and quality control. However, achieving this remains a formidable challenge. This study addresses this lacuna by employing machine learning algorithms - namely support vector machines (SVMs) and convolutional neural networks (CNNs) - alongside computer vision techniques for the automated classification of oolong tea leaves based on visual attributes. RESULTS: An array of 13 distinct characteristics, encompassing color and texture, were identified from five unique oolong tea varieties. To fortify the robustness of the predictive models, data augmentation and image cropping methods were employed. A comparative analysis of SVM- and CNN-based models revealed that the ResNet50 model achieved a high Top-1 accuracy rate exceeding 93%. This robust performance substantiates the efficacy of the implemented methodology for rapid and precise oolong tea classification. CONCLUSION: The study elucidates that the integration of computer vision with machine learning algorithms constitutes a promising, non-invasive approach for the quick and accurate categorization of oolong tea varieties. The findings have significant ramifications for process monitoring, quality assurance, authenticity validation and adulteration detection within the tea industry. © 2023 Society of Chemical Industry.


Subject(s)
Algorithms , Neural Networks, Computer , Machine Learning , Support Vector Machine , Tea
14.
Article in English | MEDLINE | ID: mdl-38087975

ABSTRACT

The prevalence of breast cancer as a major global cancer has underscored the importance of postoperative recovery for breast cancer patients. Postoperative patients are prone to spinal deformities, including scoliosis, which has drawn significant attention from healthcare professionals. The primary aim of this study is to design a postoperative recovery platform for breast cancer patients that can effectively detect posture changes, provide feedback and support to medical staff, assist doctors in formulating recovery plans, and prevent spinal deformities. The recovery platform encompasses instrument design, patient data collection, model training and fine-tuning, and postoperative body posture evaluation through comparison of preoperative and postoperative conditions. The evaluation results are provided to doctors to facilitate the formulation of personalized postoperative recovery plans. This paper comprehensively designs and implements the recovery platform and verifies its feasibility through simulation experiments, employing statistical methods for validation with a significance level of p < 0.05. In comparison to static assessments such as CT scans, this paper introduces a dynamic detection method that provides a more insightful analysis of body posture. The experiments also demonstrate the preventive capability of this method against postoperative spinal deformities, ultimately enhancing patients' self-image, restoring their confidence, and enabling them to lead more fulfilling lives.

15.
Diagnostics (Basel) ; 13(21)2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37958257

ABSTRACT

Oral lesions are a prevalent manifestation of oral disease, and the timely identification of oral lesions is imperative for effective intervention. Fortunately, deep learning algorithms have shown great potential for automated lesion detection. The primary aim of this study was to employ deep learning-based image classification algorithms to identify oral lesions. We used three deep learning models, namely VGG19, DeIT, and MobileNet, to assess the efficacy of various categorization methods. To evaluate the accuracy and reliability of the models, we employed a dataset consisting of oral images encompassing two distinct categories: benign and malignant lesions. The experimental findings indicate that VGG19 and MobileNet attained an almost perfect accuracy rate of 100%, while DeIT achieved a slightly lower rate of 98.73%. These results demonstrate that deep learning algorithms for image classification are highly effective in detecting oral lesions, and that the VGG19 and MobileNet models are notably well suited to this task.

16.
Sensors (Basel) ; 23(22)2023 Nov 09.
Article in English | MEDLINE | ID: mdl-38005464

ABSTRACT

In this paper, anomaly detection of wheel flats was investigated. In the railway sector, conducting tests with actual railway vehicles is challenging due to passenger safety concerns and maintenance issues, as railways are a public industry; therefore, dynamics software was utilized. Next, a short-time Fourier transform (STFT) was performed to create spectrogram images. In railway vehicles, control, monitoring, and communication are handled through the TCMS, but complex analysis and data processing are difficult because there are no devices such as GPUs, and memory is limited. Therefore, the relatively lightweight models LeNet-5, ResNet-20, and MobileNet-V3 were selected for the deep learning experiments, with the LeNet-5 and MobileNet-V3 models modified from their basic architectures. Since railway vehicles undergo preventive maintenance, it is difficult to obtain fault data, so semi-supervised learning was also performed, following the Deep One-Class Classification approach. The evaluation results indicated that the modified LeNet-5 and MobileNet-V3 models achieved approximately 97% and 96% accuracy, respectively, with the LeNet-5 model training 12 min faster than the MobileNet-V3 model. In addition, the semi-supervised learning experiments achieved a notable accuracy of approximately 94%, a significant outcome given the railway maintenance environment. In conclusion, considering the railway vehicle maintenance environment and device specifications, the relatively simple and lightweight LeNet-5 model can be effectively utilized with small images.
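The STFT preprocessing that turns a vibration signal into a spectrogram image can be written compactly in NumPy. A minimal sketch, with assumed parameters (Hann window, 256-point FFT, 50% hop), not the values used in the paper:

```python
import numpy as np

def stft_spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier
    transform: slice the signal into overlapping frames, window each,
    and take the magnitude of the real FFT per frame.

    Returns an array of shape (n_fft // 2 + 1, n_frames).
    """
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T
```

For a pure tone sampled at `fs`, the energy concentrates in bin `f * n_fft / fs` of every frame, which is an easy sanity check.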

17.
Brain Sci ; 13(11)2023 Nov 10.
Article in English | MEDLINE | ID: mdl-38002538

ABSTRACT

Researchers have explored various potential indicators of ASD, including changes in brain structure and activity, genetics, and immune system abnormalities, but no definitive indicator has been found yet. Therefore, this study aims to investigate ASD indicators using two types of magnetic resonance images (MRI), structural (sMRI) and functional (fMRI), and to address the issue of limited data availability. Transfer learning is a valuable technique when working with limited data, as it utilizes knowledge gained from a pre-trained model in a domain with abundant data. This study proposed the use of four pre-trained models, namely ConvNeXt, MobileNet, Swin Transformer, and ViT, using sMRI modalities, and also investigated the use of a 3D-CNN model with sMRI and fMRI modalities. Our experiments involved different methods of generating data and extracting slices from raw 3D sMRI and 4D fMRI scans along the axial, coronal, and sagittal brain planes. To evaluate our methods, we utilized a standard neuroimaging dataset called NYU from the ABIDE repository to classify ASD subjects from typical control subjects. The performance of our models was evaluated against several baselines, including studies that implemented VGG and ResNet transfer learning models. Our experimental results validate the effectiveness of the proposed multi-slice generation with the 3D-CNN and transfer learning methods, as they achieved state-of-the-art results. In particular, the 50 middle fMRI slices with the 3D-CNN showed strong promise for ASD classification, obtaining a maximum accuracy of 0.8710 and an F1-score of 0.8261 when using the mean of the 4D images across the axial, coronal, and sagittal planes. Additionally, using all fMRI slices except those at the beginning and end of the brain views helped to reduce irrelevant information and yielded a good performance of 0.8387 accuracy and a 0.7727 F1-score. Lastly, transfer learning with the ConvNeXt model achieved results higher than the other pre-trained models when using the 50 middle sMRI slices along the axial, coronal, and sagittal planes.
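The "50 middle slices along a brain plane" operation described above is a simple windowed selection on a 3D array. A sketch, with an assumed axis convention (the actual plane-to-axis mapping depends on how the scans are stored):

```python
import numpy as np

def middle_slices(volume, n=50, plane="axial"):
    """Select the n middle slices of a 3D volume along a chosen plane.

    Axis convention here is an assumption: 0 = axial, 1 = coronal,
    2 = sagittal. Clamps to the volume size if fewer slices exist.
    """
    axis = {"axial": 0, "coronal": 1, "sagittal": 2}[plane]
    size = volume.shape[axis]
    start = max((size - n) // 2, 0)
    idx = range(start, min(start + n, size))
    return np.take(volume, idx, axis=axis)
```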

18.
Diagnostics (Basel) ; 13(19)2023 Oct 03.
Article in English | MEDLINE | ID: mdl-37835861

ABSTRACT

Diabetic retinopathy (DR) is a severe complication of diabetes. It affects a large portion of the population of the Kingdom of Saudi Arabia. Existing systems assist clinicians in treating DR patients. However, these systems entail significantly high computational costs. In addition, dataset imbalances may lead existing DR detection systems to produce false positive outcomes. Therefore, the author intended to develop a lightweight deep-learning (DL)-based DR-severity grading system that could be used with limited computational resources. The proposed model followed an image pre-processing approach to overcome the noise and artifacts found in fundus images. A feature extraction process using the You Only Look Once (Yolo) V7 technique was suggested. It was used to provide feature sets. The author employed a tailored quantum marine predator algorithm (QMPA) for selecting appropriate features. A hyperparameter-optimized MobileNet V3 model was utilized for predicting severity levels using images. The author generalized the proposed model using the APTOS and EyePacs datasets. The APTOS dataset contained 5590 fundus images, whereas the EyePacs dataset included 35,100 images. The outcome of the comparative analysis revealed that the proposed model achieved an accuracy of 98.0% and 98.4% and an F1-score of 93.7 and 93.1 in the APTOS and EyePacs datasets, respectively. In terms of computational complexity, the proposed DR model required fewer parameters, fewer floating-point operations (FLOPs), a lower learning rate, and less training time to learn the key patterns of the fundus images. The lightweight nature of the proposed model can allow healthcare centers to serve patients in remote locations. The proposed model can be implemented as a mobile application to support clinicians in treating DR patients. In the future, the author will focus on improving the proposed model's efficiency to detect DR from low-quality fundus images.

19.
Curr Med Imaging ; 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37724666

ABSTRACT

Brain hemorrhage is one of the leading causes of death, resulting from the sudden rupture of a blood vessel in the brain and bleeding into the brain parenchyma. Early detection and segmentation of brain damage are extremely important for prompt treatment. Some previous studies focused on localizing cerebral hemorrhage with bounding boxes, without delineating the specific damage regions. In practice, however, doctors need to detect and segment the hemorrhage area more accurately. In this paper, we propose a method for automatic brain hemorrhage detection and segmentation using network models improved from the U-Net by replacing its backbone with typical feature extraction networks, i.e., DenseNet-121, ResNet-50, and MobileNet-V2. The U-Net architecture has many outstanding advantages: it requires little preprocessing of the original images, and it can be trained on a small dataset while providing low-error segmentation of medical images. We use a transfer learning approach with the head CT dataset gathered on Kaggle, which includes two classes, bleeding and non-bleeding. Besides, we present comparisons between the proposed models and previous works to provide an overview of the suitable model for cerebral CT images. On the head CT dataset, our proposed models achieve a segmentation accuracy of up to 99%.
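When evaluating segmentation like this, pixel accuracy alone can be misleading because the non-bleeding background dominates the image; the Dice coefficient is the standard overlap measure used alongside it. A minimal implementation (a general-purpose metric, not code from the paper):

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice overlap between two binary segmentation masks:
    2*|P∩T| / (|P|+|T|). eps avoids 0/0 when both masks are empty."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    return (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)
```

A mask compared with itself scores ~1, and partial overlap scores the expected ratio, which makes the metric easy to sanity-check.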

20.
Sensors (Basel) ; 23(15)2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37571532

ABSTRACT

Infrared thermography (IRT) is a technique used to diagnose photovoltaic (PV) installations by detecting sub-optimal conditions. The growth of PV installations in smart cities has driven the search for technology that improves the use of IRT, which normally requires irradiance greater than 700 W/m2 and is therefore impossible to use when irradiance falls below that value. This project presents an IoT platform based on artificial intelligence (AI) which automatically detects hot spots in PV modules by analyzing the temperature differentials between modules exposed to irradiances greater than 300 W/m2. For this purpose, two AI models (deep learning and machine learning) were trained and tested in a real PV installation where hot spots were induced. The system was able to detect hot spots with a sensitivity of 0.995 and an accuracy of 0.923 under dirty, short-circuited, and partially shaded conditions. This project differs from others because it proposes an alternative that facilitates the implementation of IRT diagnostics and evaluates the real temperatures of PV modules, representing a potential economic saving for PV installation managers and inspectors.
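The temperature-differential idea behind the hot-spot detection can be reduced to a one-line rule: flag any module that is much hotter than its peers. The sketch below uses the array median as the reference and an assumed 10-degree differential; both choices are illustrative, not the thresholds or model used in the project.

```python
import numpy as np

def flag_hot_spots(module_temps, delta=10.0):
    """Flag PV modules whose temperature exceeds the array median by
    more than `delta` degrees C (the 10-degree differential is an
    assumed illustrative criterion, not the project's trained model)."""
    temps = np.asarray(module_temps, dtype=float)
    return temps - np.median(temps) > delta
```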
