Results 1 - 20 of 6,732
1.
Phytopathology ; : PHYTO09230326R, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38968142

ABSTRACT

Early detection of rice blast disease is pivotal to ensure rice yield. We collected in situ images of rice blast and constructed a rice blast dataset based on variations in lesion shape, size, and color. Given that rice blast lesions are small and typically exhibit round, oval, and fusiform shapes, we proposed a small object detection model named GCPDFFNet (global context-based parallel differentiation feature fusion network) for rice blast recognition. The GCPDFFNet model has three global context feature extraction modules and two parallel differentiation feature fusion modules. The global context modules focus on the lesion areas; the parallel differentiation feature fusion modules enhance the recognition of small lesions. In addition, we proposed the SCYLLA normalized Wasserstein distance loss function, specifically designed to accelerate model convergence and improve the detection accuracy of rice blast disease. Comparative experiments were conducted on the rice blast dataset to evaluate the performance of the model. The proposed GCPDFFNet model outperformed the baseline network CenterNet, increasing mean average precision from 83.6% to 95.4% on the rice blast test set with only a modest drop in frames per second, from 147.9 to 122.1. Our results suggest that the GCPDFFNet model can accurately detect in situ rice blast disease while keeping inference speed fast enough for real-time requirements.
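The SCYLLA normalized Wasserstein distance loss is not specified in the abstract, but losses of this family are typically built on the normalized Gaussian Wasserstein distance for small boxes: each box is modeled as a 2-D Gaussian, and the closed-form 2-Wasserstein distance between the Gaussians is mapped through an exponential into a similarity score. A minimal sketch, assuming that standard formulation (the function name and the scale constant `c` are illustrative, not the paper's values):

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Gaussian Wasserstein distance between two boxes (cx, cy, w, h).

    Each box is modeled as N([cx, cy], diag(w^2/4, h^2/4)); the squared
    2-Wasserstein distance between two such Gaussians has the closed form
    below, and exp(-sqrt(.)/c) maps it into a (0, 1] similarity score that,
    unlike IoU, stays informative for tiny, non-overlapping boxes.
    """
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w2_sq = ((ax - bx) ** 2 + (ay - by) ** 2
             + ((aw - bw) / 2) ** 2 + ((ah - bh) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)
```

A loss built this way would be `1 - nwd(pred, target)`, which keeps a useful gradient even when small predicted and ground-truth lesions do not overlap.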

2.
Front Neurosci ; 18: 1431033, 2024.
Article in English | MEDLINE | ID: mdl-38962176

ABSTRACT

As an important part of unmanned driving systems, traffic sign detection and recognition must combine excellent recognition accuracy, fast execution speed, and easy deployment. Researchers have successfully applied machine learning, deep learning, and image processing techniques to traffic sign recognition. Considering the hardware constraints of terminal equipment in unmanned driving systems, the goal of this work was a convolutional neural network (CNN) architecture that is lightweight, easily implemented for embedded applications, and delivers excellent recognition accuracy and execution speed. The classical LeNet-5 network model was chosen as the basis for improvement, including image preprocessing, an improved spatial pooling convolutional neural network, optimized neurons, and an optimized activation function. The improved network architecture was tested on the German Traffic Sign Recognition Benchmark (GTSRB) database. The experimental results show that the improved architecture obtains higher recognition accuracy in a short inference time, and the training loss decreases significantly as training progresses. At the same time, compared with other lightweight network models, this architecture gives a good recognition result, with a recognition accuracy of 97.53%. The network structure is simple, the algorithm complexity is low, and it is suitable for all kinds of terminal equipment, allowing wider application in unmanned driving systems.
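The abstract lists image preprocessing among the improvements but does not say which steps were used; a common preprocessing step for GTSRB images is histogram equalization, which normalizes contrast across signs photographed under very different lighting. A minimal sketch of that one step, on a flat list of 8-bit grayscale pixels (the function name and plain-list representation are illustrative assumptions):

```python
def equalize_histogram(pixels, levels=256):
    """Contrast-stretch an 8-bit grayscale image (flat list of ints) by
    remapping each intensity through the normalized cumulative histogram."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to equalize
        return list(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]
```

On a low-contrast image whose pixels occupy only levels 0-3, the mapping spreads them across the full 0-255 range.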

3.
Data Brief ; 54: 110261, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38962186

ABSTRACT

Hyperspectral imaging, combined with deep learning techniques, has been employed to classify maize. However, these automated methods often require substantial processing and computing resources, presenting a significant challenge for deployment on embedded devices due to high GPU power consumption. Access to Ghanaian local maize data for such classification tasks is also extremely difficult in Ghana. To address these challenges, this research aims to create a simple dataset comprising three distinct types of local maize seeds in Ghana. The goal is to facilitate the development of an efficient maize classification tool that minimizes computational costs and reduces human involvement in grading seeds for marketing and production. The dataset is presented in two parts. The raw set consists of 4,846 images categorized as bad or good: 2,211 images belong to the bad class and 2,635 to the good class. The augmented set consists of 28,910 images, with 13,250 representing bad data and 15,660 representing good data. All images have been validated by experts from Heritage Seeds Ghana and are freely available for use within the research community.

4.
Magn Reson Med Sci ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38960679

ABSTRACT

PURPOSE: We developed a new deep learning-based hierarchical brain segmentation (DLHBS) method that can segment T1-weighted MR images (T1WI) into 107 brain subregions and calculate the volume of each subregion. This study aimed to evaluate the repeatability and reproducibility of volume estimation using DLHBS and compare them with those of representative brain segmentation tools such as statistical parametric mapping (SPM) and FreeSurfer (FS). METHODS: Hierarchical segmentation using multiple deep learning models was employed to segment brain subregions within a clinically feasible processing time. T1WI and brain mask pairs from 486 subjects were used to train the deep learning segmentation models. Training data were generated using a multi-atlas registration-based method, and their high quality was confirmed through visual evaluation and manual correction by neuroradiologists. Scan-rescan 3D-T1WI data from 11 healthy subjects were obtained using three MRI scanners to evaluate repeatability and reproducibility. The volumes of eight ROIs (gray matter, white matter, cerebrospinal fluid, hippocampus, orbital gyrus, cerebellum posterior lobe, putamen, and thalamus) were obtained using DLHBS, SPM 12 with default settings, and FS with the "recon-all" pipeline; these volumes were then used to evaluate repeatability and reproducibility. RESULTS: In the volume measurements, the bilateral thalamus showed higher repeatability with DLHBS than with SPM. Furthermore, DLHBS demonstrated higher repeatability than FS across all eight ROIs. Additionally, DLHBS showed higher reproducibility than SPM in both hemispheres of six ROIs and higher reproducibility than FS in five ROIs. DLHBS did not show lower repeatability or reproducibility in any comparison.
CONCLUSION: Our results showed that DLHBS achieved the best repeatability and reproducibility when compared with SPM and FS.

5.
Front Artif Intell ; 7: 1321884, 2024.
Article in English | MEDLINE | ID: mdl-38952409

ABSTRACT

Background: Carotid plaques are major risk factors for stroke. Carotid ultrasound can help to assess the risk and incidence rate of stroke. However, large-scale carotid artery screening is time-consuming and laborious, and the diagnostic results inevitably involve a degree of subjectivity on the part of the diagnostician. Deep learning demonstrates the ability to solve these challenges. Thus, we attempted to develop an automated algorithm to provide a more consistent and objective diagnostic method and to identify the presence and stability of carotid plaques using deep learning. Methods: A total of 3,860 ultrasound images from 1,339 participants who underwent carotid plaque assessment between January 2021 and March 2023 at the Shanghai Eighth People's Hospital were divided in a 4:1 ratio for training and internal testing. The external test included 1,564 ultrasound images from 674 participants who underwent carotid plaque assessment between January 2022 and May 2023 at Xinhua Hospital affiliated with Dalian University. Deep learning algorithms based on the fusion of a bilinear convolutional neural network with a residual neural network (BCNN-ResNet) were used to detect carotid plaques and assess plaque stability. We chose AUC as the main evaluation index, with accuracy, sensitivity, and specificity as auxiliary evaluation indices. Results: Modeling for detecting carotid plaques involved training and internal testing on 1,291 ultrasound images, with 617 images showing plaques and 674 without plaques. The external test comprised 470 ultrasound images, including 321 images with plaques and 149 without. Modeling for assessing plaque stability involved training and internal testing on 764 ultrasound images, consisting of 494 images with unstable plaques and 270 with stable plaques. The external test was composed of 279 ultrasound images, including 197 images with unstable plaques and 82 with stable plaques.
For the task of identifying the presence of carotid plaques, our model achieved an AUC of 0.989 (95% CI: 0.840, 0.998) with a sensitivity of 93.2% and a specificity of 99.21% on the internal test. On the external test, the AUC was 0.951 (95% CI: 0.939, 0.962) with a sensitivity of 95.3% and a specificity of 82.24%. For the task of identifying the stability of carotid plaques, our model achieved an AUC of 0.896 (95% CI: 0.865, 0.922) on the internal test with a sensitivity of 81.63% and a specificity of 87.27%. On the external test, the AUC was 0.854 (95% CI: 0.830, 0.889) with a sensitivity of 68.52% and a specificity of 89.49%. Conclusion: Deep learning using BCNN-ResNet algorithms based on routine ultrasound images could be useful for detecting carotid plaques and assessing plaque stability.
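The bilinear part of the BCNN-ResNet fusion above usually refers to bilinear pooling: the outer product of two feature vectors (one per branch), flattened and normalized, so that pairwise feature interactions become the classifier input. A minimal sketch of that pooling step alone, with the standard signed-square-root and L2 normalization (the function name and plain-list vectors are illustrative assumptions):

```python
import math

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling of two feature vectors: outer product flattened,
    then signed square-root and L2 normalization (standard B-CNN practice).
    The output captures every pairwise interaction feat_a[i] * feat_b[j]."""
    pooled = [a * b for a in feat_a for b in feat_b]
    pooled = [math.copysign(math.sqrt(abs(v)), v) for v in pooled]
    norm = math.sqrt(sum(v * v for v in pooled)) or 1.0
    return [v / norm for v in pooled]

fused = bilinear_pool([1.0, 2.0], [3.0, 4.0])  # 2 x 2 -> 4 interaction features
```

In a full model the two inputs would be the pooled feature maps of the two CNN branches, and `fused` would feed the final classifier.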

6.
Gastric Cancer ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38954175

ABSTRACT

BACKGROUND: Accurate prediction of pathologic results for early gastric cancer (EGC) based on endoscopic findings is essential in deciding between endoscopic and surgical resection. This study aimed to develop an artificial intelligence (AI) model to assess comprehensive pathologic characteristics of EGC using white-light endoscopic images and videos. METHODS: To train the model, we retrospectively collected 4,336 images and prospectively included 153 videos from patients with EGC who underwent endoscopic or surgical resection. The performance of the model was tested and compared to that of 16 endoscopists (nine experts and seven novices) using a mutually exclusive set of 260 images and 10 videos. Finally, we conducted external validation using 436 images and 89 videos from another institution. RESULTS: After training, the model achieved predictive accuracies of 89.7% for undifferentiated histology, 88.0% for submucosal invasion, 87.9% for lymphovascular invasion (LVI), and 92.7% for lymph node metastasis (LNM), using endoscopic videos. The area under the curve values of the model were 0.992 for undifferentiated histology, 0.902 for submucosal invasion, 0.706 for LVI, and 0.680 for LNM in the test. In addition, the model showed significantly higher accuracy than the experts in predicting undifferentiated histology (92.7% vs. 71.6%), submucosal invasion (87.3% vs. 72.6%), and LNM (87.7% vs. 72.3%). The external validation showed accuracies of 75.6% and 71.9% for undifferentiated histology and submucosal invasion, respectively. CONCLUSIONS: AI may assist endoscopists with high predictive performance for differentiation status and invasion depth of EGC. Further research is needed to improve the detection of LVI and LNM.

7.
Comput Biol Chem ; 112: 108130, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38954849

ABSTRACT

Retrosynthesis is vital in synthesizing target products, guiding reaction pathway design crucial for drug and material discovery. Current models often neglect multi-scale feature extraction, limiting efficacy in leveraging molecular descriptors. Our proposed SB-Net model, a deep-learning architecture tailored for retrosynthesis prediction, addresses this gap. SB-Net combines CNN and Bi-LSTM architectures, excelling in capturing multi-scale molecular features. It integrates parallel branches for processing one-hot encoded descriptors and ECFP, merging through dense layers. Experimental results demonstrate SB-Net's superiority, achieving 73.6 % top-1 and 94.6 % top-10 accuracy on USPTO-50k data. Versatility is validated on MetaNetX, with rates of 52.8 % top-1, 74.3 % top-3, 79.8 % top-5, and 83.5 % top-10. SB-Net's success in bioretrosynthesis prediction tasks indicates its efficacy. This research advances computational chemistry, offering a robust deep-learning model for retrosynthesis prediction. With implications for drug discovery and synthesis planning, SB-Net promises innovative and efficient pathways.
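The abstract above describes parallel branches that process one-hot encoded descriptors; purely as an illustration of that input representation, here is a minimal sketch of one-hot encoding a SMILES string into a fixed-size matrix (the character vocabulary, padding length, and function name are assumptions for illustration, not the paper's actual encoding):

```python
def one_hot_smiles(smiles, vocab, max_len):
    """One-hot encode a SMILES string into a (max_len x len(vocab)) matrix;
    strings shorter than max_len are padded with all-zero rows."""
    index = {ch: i for i, ch in enumerate(vocab)}
    matrix = [[0] * len(vocab) for _ in range(max_len)]
    for pos, ch in enumerate(smiles[:max_len]):
        matrix[pos][index[ch]] = 1
    return matrix

vocab = ['C', 'O', 'N', '=', '(', ')', '1', '2']
encoded = one_hot_smiles('CC(=O)O', vocab, max_len=10)  # acetic acid
```

A CNN branch would then slide filters along the `max_len` axis of such matrices, while a parallel branch consumes the circular fingerprint (ECFP) bits.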

8.
Sci Rep ; 14(1): 15057, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38956224

ABSTRACT

Image segmentation is a critical and challenging endeavor in the field of medicine. Magnetic resonance imaging (MRI) is a helpful method for locating abnormal brain tissue, but diagnosing and classifying tumors from multiple images is a difficult undertaking for radiologists. This work develops an intelligent method for accurately identifying brain tumors. The research investigates the identification of brain tumor types from MRI data using convolutional neural networks and optimization strategies. Two novel approaches are presented: the first is a segmentation technique based on firefly optimization (FFO) that assesses segmentation quality on many parameters, and the second is a combination of two types of convolutional neural networks to categorize tumor traits and identify the kind of tumor. These upgrades are intended to raise the general efficacy of the MRI scan technique and increase identification accuracy. Testing is carried out using MRI scans from BraTS 2018, and the suggested approach shows improved performance with an average accuracy of 98.6%.
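The firefly optimization named above is a population-based metaheuristic in which each candidate solution moves toward brighter (better-scoring) candidates with distance-damped attractiveness plus a small random walk. The paper's segmentation objective is not given in the abstract, so this sketch minimizes a simple 1-D test function instead (all parameter values and names are illustrative assumptions):

```python
import math
import random

def firefly_minimize(objective, bounds, n_fireflies=15, n_iter=60,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Minimal 1-D firefly algorithm: every firefly moves toward each
    brighter (lower-objective) firefly with attractiveness beta0*exp(-gamma*r^2),
    plus a random walk whose amplitude alpha decays over iterations."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_fireflies)]
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if objective(xs[j]) < objective(xs[i]):
                    beta = beta0 * math.exp(-gamma * (xs[i] - xs[j]) ** 2)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (rng.random() - 0.5)
                    xs[i] = min(max(xs[i], lo), hi)
        alpha *= 0.97  # cool the random walk
    return min(xs, key=objective)

best = firefly_minimize(lambda x: (x - 3.0) ** 2, bounds=(-10.0, 10.0))
```

In a segmentation setting, `objective` would instead score a candidate set of thresholds or cluster centers against the multi-parameter quality measure the paper describes.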


Subject(s)
Brain Neoplasms , Magnetic Resonance Imaging , Neural Networks, Computer , Magnetic Resonance Imaging/methods , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Brain Neoplasms/classification , Humans , Image Processing, Computer-Assisted/methods , Algorithms , Brain/diagnostic imaging , Brain/pathology
9.
Sci Rep ; 14(1): 15270, 2024 07 03.
Article in English | MEDLINE | ID: mdl-38961114

ABSTRACT

Alzheimer's disease (AD), the predominant form of dementia, is a growing global challenge, emphasizing the urgent need for accurate and early diagnosis. Current clinical diagnosis relies on expert radiologist interpretation, which is prone to human error. Deep learning has thus far shown promise for early AD diagnosis. However, existing methods often overlook the focal structural atrophy that is critical for an enhanced understanding of cerebral cortex neurodegeneration. This paper proposes a deep learning framework that includes a novel structure-focused neurodegeneration CNN architecture named SNeurodCNN and an image brightness enhancement preprocessor using gamma correction. The SNeurodCNN architecture takes as input the focal structural atrophy features resulting from segmentation of brain structures captured through magnetic resonance imaging (MRI). As a result, the architecture considers only the necessary CNN components, comprising two downsampling convolutional blocks and two fully connected layers, for the desired classification task, and utilises regularisation techniques on its learnable parameters. Leveraging mid-sagittal and para-sagittal brain image viewpoints from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, our framework demonstrated exceptional performance. The para-sagittal viewpoint achieved 97.8% accuracy, 97.0% specificity, and 98.5% sensitivity, while the mid-sagittal viewpoint offered deeper insights with 98.1% accuracy, 97.2% specificity, and 99.0% sensitivity. Model analysis revealed the ability of SNeurodCNN to capture the structural dynamics of mild cognitive impairment (MCI) and AD in the frontal lobe, occipital lobe, cerebellum, temporal lobe, and parietal lobe, suggesting its potential as a digi-biomarker of brain structural change for early AD diagnosis. This work can be reproduced using the code we made available on GitHub.
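The brightness-enhancement preprocessor mentioned above uses gamma correction, which is the standard power-law transform on pixel intensities; a minimal sketch on a flat list of 8-bit grayscale values (the function name and the example gamma value are illustrative, since the paper's chosen gamma is not stated in the abstract):

```python
def gamma_correct(pixels, gamma, max_val=255):
    """Power-law intensity transform out = max * (in / max) ** gamma.
    gamma < 1 brightens midtones (useful for dark MRI slices);
    gamma > 1 darkens them. Endpoints 0 and max_val are fixed points."""
    return [round(max_val * (p / max_val) ** gamma) for p in pixels]

brightened = gamma_correct([0, 64, 128, 255], gamma=0.5)
```

With gamma = 0.5 the midtone 64 roughly doubles to 128 while black and white stay fixed, which is the brightening behavior the preprocessor relies on.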


Subject(s)
Alzheimer Disease , Deep Learning , Magnetic Resonance Imaging , Neural Networks, Computer , Alzheimer Disease/pathology , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/diagnosis , Alzheimer Disease/classification , Humans , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Brain/pathology , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods
10.
Heliyon ; 10(12): e32733, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975150

ABSTRACT

Current noninvasive methods of clinical practice often fail to identify the causes of conductive hearing loss due to pathologic changes in the middle ear with sufficient certainty. Wideband acoustic immittance (WAI) measurement is noninvasive, inexpensive, and objective. It is very sensitive to pathologic changes in the middle ear and therefore promising for diagnosis. However, evaluation of the data is difficult because of large interindividual variations. Machine learning methods such as convolutional neural networks (CNNs), which might be able to deal with these overlapping patterns, require a large amount of labeled measurement data for training and validation. This is difficult to provide given the low prevalence of many middle-ear pathologies. Therefore, this study proposes an approach in which the WAI training data for the CNN are simulated with a finite-element ear model and the Monte Carlo method. With this approach, virtual populations of normal, otosclerotic, and disarticulated ears were generated, consistent with the averaged data of measured populations and well representing the qualitative characteristics of individuals. The CNN trained with the virtual data achieved, for otosclerosis, an AUC of 91.1%, a sensitivity of 85.7%, and a specificity of 85.2%. For disarticulation, an AUC of 99.5%, a sensitivity of 100%, and a specificity of 93.1% were achieved. Furthermore, it was estimated that specificity could potentially be increased to about 99% in both pathological cases if stapes reflex threshold measurements were used to confirm the diagnosis. Thus, the procedure's performance is comparable to classifiers from other studies trained with real measurement data, and it offers great potential for the diagnosis of rare or early-stage pathologies. The clinical potential of these preliminary results remains to be evaluated on more measurement data and additional pathologies.

11.
Heliyon ; 10(12): e32400, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975160

ABSTRACT

Pests are a significant challenge in paddy cultivation, causing a global loss of approximately 20% of rice yield. Early detection of paddy insects can help to avoid these potential losses. Several approaches have been suggested for identifying and categorizing insects in paddy fields, employing a range of advanced, noninvasive, and portable technologies. However, none of these systems has successfully combined feature optimization techniques with deep learning and machine learning. Hence, the current research provides a framework utilizing these techniques to promptly detect and categorize images of paddy insects. First, the image dataset was gathered and categorized into two groups: one without paddy insects and the other with paddy insects. Pre-processing techniques such as augmentation and image filtering were applied to enhance the quality of the dataset and eliminate unwanted noise. To extract and analyze deep image characteristics, the architecture incorporates five pre-trained convolutional neural network models. Feature selection techniques, including Principal Component Analysis (PCA), Recursive Feature Elimination (RFE), and Linear Discriminant Analysis (LDA), together with an optimization algorithm called Lion Optimization, were then utilized to reduce the redundant features collected for the study. Subsequently, paddy insects were identified by employing seven ML algorithms. Finally, experimental data analysis was conducted, and the proposed approach demonstrates that the extracted feature vectors of ResNet50 with Logistic Regression and PCA achieved the highest accuracy, 99.28%. This approach could significantly impact how paddy insects are diagnosed in the field.
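Of the feature selection techniques named above, PCA is the one with the simplest core: project features onto the leading eigenvector of their covariance matrix. A minimal 2-D sketch using power iteration (a stand-in for full PCA on the high-dimensional CNN features; the function name and toy data are illustrative assumptions):

```python
def leading_component(data, n_iter=200):
    """First principal component of 2-D points via power iteration on the
    2x2 covariance matrix; returns a unit vector."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    centered = [(x - mx, y - my) for x, y in data]
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    v = (1.0, 0.0)
    for _ in range(n_iter):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# Points spread along the y = x diagonal: leading component ~ (1, 1)/sqrt(2)
pc = leading_component([(0, 0), (1, 1.1), (2, 1.9), (3, 3.2), (4, 3.8)])
```

In the pipeline above, projecting ResNet50 feature vectors onto the top few such components is what shrinks the redundant feature set before the logistic regression classifier.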

12.
Sci Rep ; 14(1): 14996, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951158

ABSTRACT

In this work, we combine the advantages of virtual Small Angle Neutron Scattering (SANS) experiments carried out by Monte Carlo simulations with recent advances in computer vision to generate a tool that can assist SANS users in small angle scattering model selection. We generate a dataset of almost 260,000 SANS virtual experiments of the SANS beamline KWS-1 at FRM-II, Germany, intended for machine learning purposes. We then train a recommendation system based on an ensemble of Convolutional Neural Networks to predict the SANS model from the two-dimensional scattering pattern measured at the position-sensitive detector of the beamline. The results show that the CNNs can learn the model prediction task and that this recommendation system achieves high accuracy in the classification task over 46 different SANS models. We also test the network with real data and explore the outcome. Finally, we discuss the value of the set of virtual experimental data presented here, and of such a recommendation system, in the SANS user data analysis procedure.
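The abstract does not say how the CNN ensemble's votes are combined; a common choice is soft voting, where the class-probability vectors of the members are averaged and the argmax is taken. A minimal sketch under that assumption (function name and toy probabilities are illustrative):

```python
def ensemble_predict(prob_vectors_per_model):
    """Soft voting: average the class-probability vectors of all ensemble
    members and return (winning class index, averaged probabilities)."""
    n_models = len(prob_vectors_per_model)
    n_classes = len(prob_vectors_per_model[0])
    avg = [sum(m[c] for m in prob_vectors_per_model) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

label, probs = ensemble_predict([
    [0.6, 0.3, 0.1],   # member 1's class probabilities
    [0.2, 0.5, 0.3],   # member 2
    [0.5, 0.4, 0.1],   # member 3
])
```

Here the 46-way SANS-model classification would use 46-element probability vectors; soft voting also yields a ranked list of candidate models, which suits a recommendation system better than a single hard label.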

13.
Sci Rep ; 14(1): 15051, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951605

ABSTRACT

Electrical conductivity (EC) is widely recognized as one of the most essential water quality metrics for predicting salinity and mineralization. In the current research, the EC of two Australian rivers (Albert River and Barratta Creek) was forecasted for up to 10 days using a novel deep learning algorithm (a Convolutional Neural Network combined with a Long Short-Term Memory model, CNN-LSTM). The Boruta-XGBoost feature selection method was used to determine the significant inputs (time series lagged data) to the model. To benchmark the Boruta-XGB-CNN-LSTM models, three machine learning approaches were used: a multi-layer perceptron neural network (MLP), K-nearest neighbours (KNN), and extreme gradient boosting (XGBoost). Statistical metrics such as the correlation coefficient (R), root mean square error (RMSE), and mean absolute percentage error (MAPE) were used to assess the models' performance. From 10 years of data in both rivers, 7 years (2012-2018) were used as a training set and 3 years (2019-2021) for testing the models. In one-day-ahead EC forecasting, the Boruta-XGB-CNN-LSTM model outperformed the other machine learning models at both stations on the test dataset (R = 0.9429, RMSE = 45.6896, MAPE = 5.9749 for Albert River, and R = 0.9215, RMSE = 43.8315, MAPE = 7.6029 for Barratta Creek). Given this better performance in both rivers, the model was used to forecast EC 3-10 days ahead. The results showed that the Boruta-XGB-CNN-LSTM model is very capable of forecasting EC for the next 10 days, with performance decreasing only slightly as the forecasting horizon increases from 3 to 10 days. These findings indicate that the Boruta-XGB-CNN-LSTM model can serve as a good soft computing method for accurately predicting how EC changes in rivers.
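The "time series lagged data" inputs mentioned above are built by sliding a window over the EC series: each training row is the most recent lags and the target sits a fixed horizon ahead. A minimal sketch of that windowing step (the function name and the toy series are illustrative; Boruta-XGBoost would then prune which of these lags to keep):

```python
def make_lagged_features(series, n_lags, horizon=1):
    """Turn a univariate series into (features, target) pairs: each row is
    the n_lags most recent values; the target is `horizon` steps ahead."""
    rows, targets = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        rows.append(series[t - n_lags:t])
        targets.append(series[t + horizon - 1])
    return rows, targets

X, y = make_lagged_features([1, 2, 3, 4, 5, 6], n_lags=3, horizon=1)
```

Raising `horizon` to 3-10 reproduces the multi-day-ahead forecasting setups evaluated in the study.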

14.
Biomed Eng Lett ; 14(4): 649-661, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38946810

ABSTRACT

The accurate prediction of heart disease is crucial in the field of medicine. While convolutional neural networks have shown remarkable precision in heart disease prediction, they are often perceived as opaque models due to their complex internal workings. This paper introduces a novel method, named Extraction of Classification Rules from Convolutional Neural Network (ECRCNN), aimed at extracting rules from convolutional neural networks to enhance interpretability in heart disease prediction. The ECRCNN algorithm analyses updated kernels to derive understandable rules from convolutional neural networks, providing valuable insights into the contributing factors of heart disease. The algorithm's performance is assessed using the Statlog (Heart) dataset from the University of California, Irvine's repository. Experimental results underscore the effectiveness of the ECRCNN algorithm in predicting heart disease and extracting meaningful rules. The extracted rules can assist healthcare professionals in making precise diagnoses and formulating targeted treatment plans. In summary, the proposed method bridges the gap between the high accuracy of convolutional neural networks and the interpretability necessary for informed decision-making in heart disease prediction.

15.
Front Comput Neurosci ; 18: 1423051, 2024.
Article in English | MEDLINE | ID: mdl-38978524

ABSTRACT

The classification of medical images is crucial in the biomedical field, and despite many attempts to address the issue, significant challenges persist. To effectively categorize medical images, collecting and integrating statistical information that accurately describes the image is essential. This study proposes a unique method for feature extraction that combines deep spatial characteristics with handcrafted statistical features. The approach involves extracting statistical radiomics features using advanced techniques, followed by a novel handcrafted feature fusion method inspired by the ResNet deep learning model. A new feature fusion framework (FusionNet) is then used to reduce image dimensionality and simplify computation. The proposed approach is tested on MRI images of brain tumors from the BraTS dataset, and the results show that it outperforms existing methods in classification accuracy. The study presents three models that completed the binary classification task: one handcrafted-feature-based model and two CNN models. The recommended hybrid approach achieved a high F1 score of 96.12 ± 0.41, precision of 97.77 ± 0.32, and accuracy of 97.53 ± 0.24, indicating its potential to serve as a valuable tool for pathologists.

16.
Sci Rep ; 14(1): 15531, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969717

ABSTRACT

To improve the current oil painting teaching mode in Chinese universities, this study combines deep learning and artificial intelligence technology to explore oil painting teaching. First, the research status of individualized education and of image classification based on brush features is analyzed. Second, an oil painting classification model is constructed based on a convolutional neural network, mathematical morphology, and a support vector machine, with extracted features that include color and brush features. Moreover, based on artificial intelligence technology and individualized education theory, a personalized intelligent oil painting teaching framework is built. Finally, the performance of the intelligent oil painting classification model is evaluated, and the content of the personalized intelligent oil painting teaching framework is explained. The results show that the average classification accuracy is 90.25% when only brush features are extracted and over 89% when only color features are extracted; when both feature types are extracted, the average accuracy of the oil painting classification model reaches 94.03%. Iterative Dichotomiser 3 (ID3), the C4.5 decision tree, and support vector machines achieve average classification accuracies of 82.24%, 83.57%, and 94.03%, respectively. Training for 50 epochs is faster than training for the original 100 epochs, with a slight decrease in accuracy. The personalized oil painting teaching system helps students adjust their learning plans according to their own conditions, avoid learning repetitive content, and ultimately improve learning efficiency. Compared with other studies, this study obtains a good oil painting classification model and a personalized oil painting education system that plays a positive role in oil painting teaching, laying a foundation for the development of higher art education.

17.
Sci Rep ; 14(1): 15537, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969738

ABSTRACT

Crop yield could be enhanced if plant nutrition deficiencies and diseases were identified and detected at early stages; continuous health monitoring of plants is therefore crucial for handling plant stress. Deep learning methods have proven superior in the automated detection of plant diseases and nutrition deficiencies from visual symptoms in leaves. This article proposes a new deep learning method for plant nutrition deficiency and disease classification using a graph convolutional network (GCN) added upon a base convolutional neural network (CNN). A global feature descriptor can fail to capture the vital region of a diseased leaf, causing inaccurate classification; to address this, regional feature learning is crucial for holistic feature aggregation. In this work, region-based feature summarization at multiple scales is explored using spatial pyramid pooling for discriminative feature representation. Furthermore, a GCN is developed to enable learning of the finer details needed for classifying plant diseases and nutrient insufficiency. The proposed method, called Plant Nutrition Deficiency and Disease Network (PND-Net), has been evaluated on two public datasets for nutrition deficiency and two for disease classification using four backbone CNNs. The best classification performances of PND-Net are: (a) 90.00% on Banana and 90.54% on Coffee nutrition deficiency; and (b) 96.18% on Potato diseases and 84.30% on PlantDoc using the Xception backbone. Furthermore, in additional generalization experiments, the proposed method achieved state-of-the-art performances on two public datasets: Breast Cancer Histopathology Image Classification (BreakHis 40×: 95.50% and BreakHis 100×: 96.79% accuracy) and single cells in Pap smear images for cervical cancer classification (SIPaKMeD: 99.18% accuracy).
The proposed method has also been evaluated using five-fold cross-validation, achieving improved performances on these datasets. Clearly, PND-Net effectively boosts the performance of automated health analysis of various plants in real and intricate field environments, implying its aptness for agricultural growth as well as human cancer classification.
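The spatial pyramid pooling used above summarizes a feature map at several scales: the map is max-pooled into 1x1, 2x2, ... bins and the results are concatenated, giving a fixed-length regional summary regardless of input size. A minimal sketch on a plain 2-D grid (the function name and the pyramid levels are illustrative assumptions):

```python
def spatial_pyramid_pool(grid, levels=(1, 2)):
    """Max-pool a 2-D feature map into n x n bins for each pyramid level
    and concatenate: output length depends only on `levels`, not grid size."""
    h, w = len(grid), len(grid[0])
    out = []
    for n in levels:
        for bi in range(n):
            for bj in range(n):
                r0, r1 = bi * h // n, (bi + 1) * h // n
                c0, c1 = bj * w // n, (bj + 1) * w // n
                out.append(max(grid[r][c]
                               for r in range(r0, r1)
                               for c in range(c0, c1)))
    return out

pooled = spatial_pyramid_pool([[1, 2, 3, 4],
                               [5, 6, 7, 8],
                               [9, 10, 11, 12],
                               [13, 14, 15, 16]])
```

The 1x1 level keeps the global maximum while the 2x2 level preserves per-quadrant responses, which is how the regional summarization retains a small diseased region that a single global pool would drown out.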


Subject(s)
Deep Learning , Neural Networks, Computer , Plant Diseases , Plant Leaves , Humans
18.
Ultrasound Med Biol ; 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38972792

ABSTRACT

OBJECTIVE: Bone diseases deteriorate the microstructure of bone tissue. Optical-resolution photoacoustic microscopy (OR-PAM) enables high spatial resolution in imaging bone tissues. However, a spatiotemporal trade-off limits the application of OR-PAM. The purpose of this study was to improve the quality of OR-PAM images without sacrificing temporal resolution. METHODS: We proposed the Photoacoustic Dense Attention U-Net (PADA U-Net) model for reconstructing full-scanning images from under-sampled images, thereby breaking the trade-off between imaging speed and spatial resolution. RESULTS: The proposed method was validated on resolution test targets and bovine cancellous bone samples to demonstrate the capability of PADA U-Net to recover full-scanning images from under-sampled OR-PAM images. With a down-sampling ratio of [4, 1], compared to bilinear interpolation, the Peak Signal-to-Noise Ratio and Structural Similarity Index Measure values (averaged over the test set of bovine cancellous bone) of PADA U-Net were improved by 2.325 dB and 0.117, respectively. CONCLUSION: The results demonstrate that the PADA U-Net model reconstructs OR-PAM images well at different levels of sparsity. Our proposed method can further facilitate early diagnosis and treatment of bone diseases using OR-PAM.
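The PSNR gain reported above is computed from the mean squared error between the reference and reconstructed images; a minimal sketch of the metric itself on flat 8-bit pixel lists (the function name is an illustrative assumption; the formula is the standard definition):

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length images
    given as flat pixel lists; higher means a closer reconstruction."""
    mse = sum((a - b) ** 2
              for a, b in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float('inf')  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Every pixel off by 5 -> MSE = 25 -> 10*log10(255^2 / 25) ~ 34.15 dB
score = psnr([100, 120, 140, 160], [105, 125, 145, 165])
```

A 2.325 dB improvement over bilinear interpolation, as reported, corresponds to a roughly 41% reduction in MSE, since PSNR is logarithmic in the error.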

19.
Front Psychiatry ; 15: 1395563, 2024.
Article in English | MEDLINE | ID: mdl-38979503

ABSTRACT

This study addresses the pervasive and debilitating impact of Alzheimer's disease (AD) on individuals and society, emphasizing the crucial need for timely diagnosis. We present a multistage convolutional neural network (CNN)-based framework for AD detection and sub-classification using brain magnetic resonance imaging (MRI). After preprocessing, a 26-layer CNN model was designed to differentiate between healthy individuals and patients with dementia. After detecting dementia, the 26-layer CNN model was reutilized using the concept of transfer learning to further subclassify dementia into mild, moderate, and severe dementia. Leveraging the frozen weights of the developed CNN on correlated medical images facilitated the transfer learning process for sub-classifying dementia classes. An online AD dataset is used to verify the performance of the proposed multistage CNN-based framework. The proposed approach yielded a noteworthy accuracy of 98.24% in identifying dementia classes, whereas it achieved 99.70% accuracy in dementia subclassification. Another dataset was used to further validate the proposed framework, resulting in 100% performance. Comparative evaluations against pre-trained models and the current literature were also conducted, highlighting the usefulness and superiority of the proposed framework and presenting it as a robust and effective AD detection and subclassification method.

20.
J Biophotonics ; : e202400138, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38952169

ABSTRACT

Neurological disorders such as Parkinson's disease (PD) often adversely affect the vascular system, leading to alterations in blood flow patterns. Functional near-infrared spectroscopy (fNIRS) is used to monitor hemodynamic changes via signal measurement. This study investigated the potential of using resting-state fNIRS data with a convolutional neural network (CNN) to evaluate PD with orthostatic hypotension. The CNN demonstrated significant efficacy in analyzing fNIRS data and outperformed the other machine learning methods. The results indicate that judicious input data selection can raise accuracy above 85%, while including the correlation matrix as an input further improves accuracy to more than 90%. This study underscores the promising role of CNN-based fNIRS data analysis in the diagnosis and management of PD. This approach enhances diagnostic accuracy, particularly in resting-state conditions, and can reduce the discomfort and risks associated with current diagnostic methods, such as the head-up tilt test.
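The correlation-matrix input mentioned above is, in resting-state analyses, typically the matrix of pairwise Pearson correlations between channel time courses. A minimal sketch of building that matrix from channel signals (the function name and toy channels are illustrative assumptions, not the study's montage):

```python
def correlation_matrix(channels):
    """Pairwise Pearson correlations between channels (each a list of
    samples), returned as a square matrix suitable as a 2-D CNN input."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = sum((x - ma) ** 2 for x in a) ** 0.5
        sb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (sa * sb)
    return [[pearson(a, b) for b in channels] for a in channels]

corr = correlation_matrix([
    [1.0, 2.0, 3.0, 4.0],
    [2.0, 4.0, 6.0, 8.0],   # perfectly correlated with channel 0
    [4.0, 3.0, 2.0, 1.0],   # perfectly anti-correlated with channel 0
])
```

Feeding this fixed-size matrix to the CNN summarizes inter-channel connectivity, which is plausibly why it improved accuracy over raw time courses alone.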
