Results 1 - 4 of 4
1.
Comput Intell Neurosci ; 2022: 1339469, 2022.
Article in English | MEDLINE | ID: mdl-36465951

ABSTRACT

Image processing is an important domain for identifying various crop varieties. Because rice is produced in large quantities and in many varieties, manually assessing its quality is a tedious and time-consuming task. In this work, we propose a two-stage deep learning framework for detecting and classifying multiclass rice grain varieties. The proposed framework consists of a series of steps. The first step is to perform preprocessing on the selected dataset. The second step involves selecting and fine-tuning pretrained deep models, Darknet19 and SqueezeNet. Transfer learning is used to train the fine-tuned models on the selected dataset, with 50% of the sample images employed for training and the remaining 50% for testing. Features are extracted and fused using a maximum correlation-based approach. This approach improves classification performance; however, it also introduces redundant information. In the next step, an improved butterfly optimization algorithm (BOA) is proposed for selecting the best features, which are finally classified using several machine learning classifiers. The experimental process was conducted on selected rice datasets that include five rice varieties, and the framework achieves a maximum accuracy of 100%, an improvement over recent methods. The average accuracy of the proposed method is 99.2%, and a confidence interval-based analysis shows the significance of this work.
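As a rough illustration of the fine-tuning step described above, the sketch below adapts a pretrained SqueezeNet (one of the two backbones named in the abstract) to a five-class rice problem. torchvision is an assumed dependency, and the frozen backbone, learning rate, and replaced head are illustrative choices rather than the paper's exact settings.

```python
# Minimal sketch: fine-tuning a pretrained SqueezeNet for five rice varieties.
# Hyperparameters and the frozen backbone are assumptions, not the reported setup.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # five rice varieties, as stated in the abstract

model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.IMAGENET1K_V1)
# SqueezeNet's classifier head is a 1x1 convolution; replace it for 5 classes.
model.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
model.num_classes = NUM_CLASSES

# Freeze the convolutional features and train only the new head (transfer learning).
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Sanity check: a forward pass on a dummy batch yields one logit per class.
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```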


Subject(s)
Oryza, Edible Grain, Intelligence, Algorithms, Data Accuracy
2.
Diagnostics (Basel) ; 12(11)2022 Nov 07.
Article in English | MEDLINE | ID: mdl-36359566

ABSTRACT

In the last few years, artificial intelligence has shown considerable promise in the medical domain for the diagnosis and classification of human infections. Several computerized techniques based on artificial intelligence (AI) have been introduced in the literature for gastrointestinal (GIT) diseases such as ulcers, bleeding, polyps, and a few others. Manual diagnosis of these infections is time-consuming, expensive, and always requires an expert. As a result, computerized methods that can assist doctors as a second opinion in clinics are widely required. The key challenge for a computerized technique is accurate segmentation of the infected region, because infected regions vary in shape and location. Moreover, inaccurate segmentation degrades feature extraction, which in turn impacts classification accuracy. In this paper, we propose an automated framework for GIT disease segmentation and classification based on deep saliency maps and Bayesian optimal deep learning feature selection. The proposed framework consists of a few key steps, from preprocessing to classification. In the preprocessing step, the original images are improved using a proposed contrast enhancement technique. In the following step, we propose a deep saliency map for segmenting infected regions. The segmented regions are then used to fine-tune a pretrained MobileNet-V2 model using transfer learning. The fine-tuned model's hyperparameters are initialized using Bayesian optimization (BO). The average pooling layer is then used to extract features. However, several redundant features are discovered during the analysis phase and must be removed; we therefore propose a hybrid whale optimization algorithm for selecting the best features. Finally, the selected features are classified using an extreme learning machine classifier. The experiments were carried out on three datasets: Kvasir 1, Kvasir 2, and CUI Wah. The proposed framework achieved accuracies of 98.20%, 98.02%, and 99.61% on these three datasets, respectively. Compared with other methods, the proposed framework shows an improvement in accuracy.
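The feature-extraction step (pooled features from a fine-tuned MobileNet-V2) can be pictured roughly as below. This sketch uses torchvision's stock MobileNet-V2 with ImageNet weights as a stand-in for the paper's fine-tuned, Bayesian-optimized model; the input size is an assumption.

```python
# Minimal sketch: 1280-d global-average-pooled features from MobileNetV2.
# ImageNet weights stand in for the fine-tuned model described in the abstract.
import torch
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.eval()

def extract_features(batch: torch.Tensor) -> torch.Tensor:
    """Return pooled features for a batch of (N, 3, 224, 224) images."""
    with torch.no_grad():
        fmap = model.features(batch)                                # (N, 1280, 7, 7)
        pooled = torch.nn.functional.adaptive_avg_pool2d(fmap, 1)   # (N, 1280, 1, 1)
        return pooled.flatten(1)                                    # (N, 1280)

features = extract_features(torch.randn(4, 3, 224, 224))
print(features.shape)  # torch.Size([4, 1280])
```

These pooled feature vectors are the kind of representation a subsequent feature-selection step and a downstream classifier would operate on.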

3.
Comput Intell Neurosci ; 2022: 4254631, 2022.
Article in English | MEDLINE | ID: mdl-35845911

ABSTRACT

COVID-19 detection and classification using chest X-ray images is currently a hot research topic in medical image analysis. To halt the spread of COVID-19, it is critical to identify the infection as soon as possible. Due to time constraints and the need for radiologist expertise, manually diagnosing this infection from chest X-ray images is a difficult and time-consuming process. Artificial intelligence techniques have had a significant impact on medical image analysis and have introduced several techniques for COVID-19 diagnosis. Among AI techniques, deep learning and explainable AI have become particularly popular for COVID-19 detection and classification. In this work, we propose a deep learning and explainable AI technique for the diagnosis and classification of COVID-19 using chest X-ray images. Initially, a hybrid contrast enhancement technique is proposed and applied to the original images, which are later utilized for the training of two modified deep learning models. Deep transfer learning is used to train the modified pretrained models, which are later employed for feature extraction. Features of both deep models are fused using improved canonical correlation analysis, which is further optimized using a hybrid algorithm named Whale-Elephant Herding. Through this algorithm, the best features are selected and classified using an extreme learning machine (ELM). Moreover, the modified deep models are utilized for Grad-CAM visualization. The experimental process was conducted on three publicly available datasets and achieved accuracies of 99.1%, 98.2%, and 96.7%, respectively. Moreover, an ablation study was performed and showed that the proposed method achieves better accuracy than the other methods.
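The classification stage above names an extreme learning machine. The following is a bare-bones NumPy sketch of a standard ELM (fixed random hidden layer, closed-form least-squares output weights), not the exact classifier or hyperparameters used in the paper; the hidden-layer size and activation are assumptions.

```python
# Minimal sketch of an extreme learning machine (ELM) classifier.
import numpy as np

class ELM:
    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        # Random input weights and biases stay fixed; only output weights are learned.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # hidden-layer activations
        T = np.eye(n_classes)[y]                # one-hot targets
        self.beta = np.linalg.pinv(H) @ T       # closed-form least-squares solution
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Usage on synthetic feature vectors (e.g., selected deep features):
X = np.random.randn(100, 512)
y = np.random.randint(0, 3, size=100)
print(ELM().fit(X, y).predict(X[:5]))
```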


Subject(s)
COVID-19, Deep Learning, Artificial Intelligence, COVID-19/diagnostic imaging, COVID-19 Testing, Humans, X-Rays
4.
Comput Intell Neurosci ; 2022: 1575303, 2022.
Article in English | MEDLINE | ID: mdl-35733564

ABSTRACT

In this paper, a novel multistep-ahead predictor based on a fusion of kernel recursive least squares (KRLS) and Gaussian process regression (GPR) is proposed for the accurate prediction of the state of health (SoH) and remaining useful life (RUL) of lithium-ion batteries. Empirical mode decomposition is utilized to divide the battery capacity into local regeneration components (intrinsic mode functions) and a global degradation component (residual). The KRLS and GPR submodels are employed to track the residual and the intrinsic mode functions, respectively. For RUL, the KRLS-predicted residual signal is utilized. Experimental battery aging data available online are used for the evaluation of the proposed model. A comparison with other methodologies (i.e., GPR, KRLS, empirical mode decomposition with GPR, and empirical mode decomposition with KRLS) reveals the distinctiveness and superiority of the proposed approach. For 1-step-ahead prediction, the proposed method tracks the trajectory with a root mean square error (RMSE) of 0.2299, and an increase of only 0.2243 in RMSE is noted for 30-step-ahead prediction. The RUL prediction using the residual signal shows an increase of 3 to 5% in accuracy. The proposed methodology is a promising approach for efficient battery health prognostics.
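The decomposition step can be sketched as follows, assuming the PyEMD package and scikit-learn. The capacity series is synthetic, and GPR is shown only on the regeneration (IMF) part, consistent with the pairing in the abstract; the KRLS submodel for the residual is not reproduced here.

```python
# Minimal sketch: EMD split of a capacity series into regeneration (IMFs) and
# degradation (residue), with scikit-learn's GPR tracking the regeneration part.
# Synthetic data and kernel choice are assumptions, not the paper's setup.
import numpy as np
from PyEMD import EMD                      # pip install EMD-signal
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.arange(500, dtype=float)
capacity = 1.1 - 0.0005 * t + 0.01 * np.sin(t / 7.0)   # fading trend + regeneration
cycles = t.reshape(-1, 1)

emd = EMD()
emd.emd(capacity)
imfs, residue = emd.get_imfs_and_residue()  # IMFs: local regeneration; residue: global trend
regeneration = imfs.sum(axis=0)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(cycles, regeneration)
pred_mean, pred_std = gpr.predict(cycles, return_std=True)
print(pred_mean.shape, residue.shape)
```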


Subject(s)
Algorithms, Lithium, Electric Power Supplies, Normal Distribution