1.
iScience ; 27(1): 108709, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38269095

ABSTRACT

The increasing demand for food production driven by a growing population raises the need for more productive growing environments for plants. The genetic behavior of plant traits differs across growing environments, yet monitoring individual plant component traits manually is tedious and, at scale, impractical. Plant breeders therefore need computer vision-based plant monitoring systems to analyze the productivity and environmental suitability of different plants, enabling feasible quantitative, geometric, and yield-rate analyses. Plant breeders have used many data collection methods according to their needs; in the presented review, most of these are discussed along with their corresponding challenges and limitations. Traditional approaches to segmentation and classification in plant phenotyping are also discussed. Data limitation problems and the solutions currently adopted in computer vision are highlighted; these mitigate the problem but do not fully solve it. Available datasets and open issues are also reviewed. The presented study covers plant phenotyping problems, suggested solutions, and current challenges from data collection through classification.
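A classic, dependency-free baseline for the plant segmentation step discussed in this review is the Excess-Green (ExG) vegetation index; the sketch below is a generic illustration of that index, not a method from the reviewed paper, and the threshold value is an assumption.

```python
import numpy as np

def excess_green_mask(rgb, threshold=0.1):
    """Excess-Green index (ExG = 2g - r - b) on chromaticity-normalized
    channels: pixels above the threshold are treated as plant."""
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    exg = 2 * g - r - b
    return exg > threshold

# Toy image: left half green-ish (plant), right half grey (soil).
img = np.zeros((4, 4, 3))
img[:, :2] = [0.2, 0.8, 0.2]
img[:, 2:] = [0.5, 0.5, 0.5]
mask = excess_green_mask(img)
```

Index-based thresholding like this is often the first stage before the learned segmentation and classification methods the review surveys.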

2.
Comput Intell Neurosci ; 2023: 6357252, 2023.
Article in English | MEDLINE | ID: mdl-37538561

ABSTRACT

Lung cancer is one of the deadliest cancers worldwide, with a high mortality rate compared to other cancers. A lung cancer patient's survival probability in late stages is very low, but if the disease is detected early, the survival rate can be improved. Diagnosing lung cancer early is complicated because lung nodules are visually similar to the trachea, vessels, and other surrounding tissues, which leads to misclassification of nodules. Correct identification and classification of nodules is therefore required. Previous studies have used noisy features, which compromises results. To address this problem, a predictive model is proposed to accurately detect and classify lung nodules. In the proposed framework, semantic segmentation is first performed to identify nodules in images from the Lung Image Database Consortium (LIDC) dataset. After segmentation of the nodules, optimal features for classification, including Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and geometric features, are extracted. The results show that Support Vector Machines identified nodules better than the other classifiers, achieving the highest accuracy of 97.8% with a sensitivity of 100%, specificity of 93%, and a false positive rate of 6.7%.
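The feature-fusion-plus-SVM pipeline described here can be sketched generically with scikit-learn. The arrays below are random stand-ins for the HOG, LBP, and geometric descriptors (the feature dimensions and label rule are assumptions, not values from the paper); the point is the concatenate-then-classify structure.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for descriptors computed from segmented nodules.
n = 200
hog_feats = rng.normal(size=(n, 36))   # shape/gradient features
lbp_feats = rng.normal(size=(n, 59))   # texture features
geom_feats = rng.normal(size=(n, 5))   # e.g. area, eccentricity
X = np.hstack([hog_feats, lbp_feats, geom_feats])  # fused vector
y = (X[:, 0] + X[:, 40] > 0).astype(int)           # toy label rule

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
acc = (clf.predict(X_te) == y_te).mean()
```

In practice the descriptors would come from the segmented nodule patches, and the kernel choice would be tuned on a validation split.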


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Diagnosis, Computer-Assisted/methods , Lung/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Databases, Factual
3.
Biomedicines ; 11(3)2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36979795

ABSTRACT

Human Activity Recognition (HAR) is a very active area of clinical research. HAR plays a vital role in learning about a patient's abnormal activities; from this information, the patient's psychological state can be estimated. An epileptic seizure is a neurological disorder of the human brain that affects millions of people worldwide. If epilepsy is diagnosed correctly and at an early stage, up to 70% of patients can become seizure-free. Intelligent automatic HAR systems are therefore needed to help clinicians diagnose neurological disorders accurately. In this research, we propose a Deep Learning (DL) model that detects epileptic seizures in an automated way. EEG is a raw but rich source of information for recognizing epileptic seizures from brain activity. Previous studies have used raw EEG data to recognize epileptic patient activity, but their feature extraction methods demanded intensive clinical expertise, for example in radiology and clinical procedure. Image data have also been used to diagnose epileptic seizures, but applying Machine Learning (ML) methods to them raises overfitting problems. In this research, we focus on classifying epilepsy from physical epileptic activity rather than on feature engineering, and we detect epileptic seizures in three steps. First, we used the open-source numerical epilepsy dataset of Bonn University from the UCI Machine Learning Repository. Second, the data were fed to the proposed ELM model for training under different training/testing ratios, with only slight rescaling because the dataset was already pre-processed, normalized, and restructured.
Third, epileptic and non-epileptic activity was recognized: EEG signal features were extracted automatically by the DL model, an Extreme Learning Machine (ELM); features were selected by an ELM-based Feature Selection (FS) algorithm; and the final classification was performed by the ELM classifier. In total, nine algorithms were applied for the binary classification of epileptic activity: six ML algorithms, namely K-Nearest Neighbor (KNN), Naïve Bayes (NB), Logistic Regression (LR), Stochastic Gradient Descent Classifier (SGDC), Gradient Boosting Classifier (GB), and Decision Trees (DT), and three deep learning models, namely the Extreme Learning Machine (ELM), Long Short-Term Memory (LSTM), and Artificial Neural Network (ANN). The best results were obtained by the proposed DL model, the ELM, with an accuracy of 100% and an AUC of 0.99; such performance had not been attained in previous research. The proposed model's performance was compared with the other models in terms of the confusion matrix, accuracy, precision, recall, F1-score, specificity, sensitivity, and the ROC curve.
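The core of a standard Extreme Learning Machine is small enough to sketch in plain NumPy: the input-to-hidden weights are random and fixed, and only the hidden-to-output weights are solved in closed form by least squares. The toy data below stand in for the EEG feature vectors; the hidden-layer size and problem dimensions are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, y, n_hidden=64):
    """ELM training: random input weights, sigmoid hidden layer,
    output weights solved by linear least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta > 0.5).astype(int)           # binary decision

# Toy binary problem standing in for epileptic vs. non-epileptic EEG.
X = rng.normal(size=(300, 10))
y = (X[:, 0] - X[:, 1] > 0).astype(float)
W, b, beta = elm_train(X[:200], y[:200])
acc = (elm_predict(X[200:], W, b, beta) == y[200:]).mean()
```

Because training is a single linear solve rather than iterative backpropagation, ELMs are fast to fit, which is part of their appeal for pipelines like the one described here.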

4.
Life (Basel) ; 13(1)2023 Jan 03.
Article in English | MEDLINE | ID: mdl-36676082

ABSTRACT

Hospital emergency departments receive a massive number of patients with wrist fractures, and X-ray imaging is the major screening tool for the clinical diagnosis of a suspected fracture. Wrist fractures are a significant global health concern for children, adolescents, and the elderly, and a missed diagnosis on medical imaging can have significant consequences for patients, resulting in delayed treatment and poor functional recovery. An intelligent automated diagnostic tool is therefore needed to precisely diagnose wrist fractures as a second opinion for doctors. In this research, a fused deep learning model combining a convolutional neural network (CNN) and long short-term memory (LSTM) is proposed to detect wrist fractures from X-ray images, giving doctors a computer-vision second opinion that lessens the number of missed fractures. The dataset, acquired from Mendeley, comprises 192 wrist X-ray images. In this framework, image pre-processing is applied first; data augmentation then addresses the class imbalance problem by generating rotated oversamples of minority-class images during training; and the pre-processed and augmented, normalized images are fed into a 28-layer dilated CNN (DCNN) to extract deep, valuable features. These deep features are then fed to the proposed LSTM network to distinguish fractured wrists from normal ones. The experimental results of the DCNN-LSTM, with and without augmentation, are compared with those of other deep learning models, and the proposed work is compared to existing algorithms in terms of accuracy, sensitivity, specificity, precision, F1-score, and kappa. The results show that the DCNN-LSTM fusion achieves higher accuracy and has high potential for use in medical applications as a second opinion.
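The rotation-based oversampling step for the minority class can be sketched as below. This is a generic, dependency-free illustration using right-angle rotations and horizontal flips; the paper may well use arbitrary rotation angles, and the class sizes shown are assumptions, not the dataset's actual split.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_minority(images, target_count):
    """Oversample a minority class by appending rotated/flipped copies
    of randomly chosen originals until `target_count` is reached."""
    out = list(images)
    while len(out) < target_count:
        img = images[rng.integers(len(images))]
        img = np.rot90(img, k=rng.integers(1, 4))  # 90/180/270 degrees
        if rng.random() < 0.5:
            img = np.fliplr(img)                   # random mirror
        out.append(img)
    return out

# Hypothetical split: balance 30 fracture images up to 162.
minority = [rng.random((64, 64)) for _ in range(30)]
balanced = augment_minority(minority, 162)
```

In a real training loop this would typically be done on the fly per batch rather than materializing the whole oversampled set.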

5.
Multimed Tools Appl ; 81(26): 37569-37589, 2022.
Article in English | MEDLINE | ID: mdl-35968412

ABSTRACT

In identifying the various pneumonia types, a gap of around 15% is created every five years. To fill this gap, accurate detection of chest disease is required in healthcare to avoid serious consequences later. Testing affected lungs for Coronavirus 2019 (COVID-19) with the same imaging modalities may reveal other chest diseases; this risk of wrong diagnosis strongly calls for a multidisciplinary approach to the correct diagnosis of chest-related diseases. Until now, only a few works have targeted pathological X-ray images, and many studies target only a single chest disease, which is not enough to automate chest disease detection. Only a few studies address COVID-19, and it is often misclassified because existing detection techniques provide no generic solution covering all types of chest diseases; existing studies can only detect whether a person has COVID-19 or not. The proposed work contributes significantly by detecting COVID-19 as well as other chest diseases, providing a useful analysis of chest-related diseases. One of our testing approaches achieves 90.22% accuracy across 15 types of chest disease, with 100% correct classification of COVID-19. Although the accuracy level is high, the proposed study should be adopted only in settings where doctors can visually inspect the input images that led to each model's detection.

6.
PeerJ Comput Sci ; 8: e879, 2022.
Article in English | MEDLINE | ID: mdl-35494833

ABSTRACT

A Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is used in web systems for authentication security; it may be broken using Optical Character Recognition (OCR)-style methods. CAPTCHA breakers make web systems highly insecure; conversely, techniques for breaking CAPTCHAs show designers where their schemes need improvement to resist computer vision-based malicious attacks. Prior research has mainly used deep learning to break state-of-the-art CAPTCHA codes, but the validation schemes and conventional Convolutional Neural Network (CNN) designs still need more rigorous validation and feature schemes covering multiple aspects. Several public datasets of text-based CAPTCHAs are available, including on Kaggle and other repositories, and CAPTCHA datasets can also be self-generated. Previous studies are dataset-specific and do not perform well on other CAPTCHAs. The proposed study therefore uses two publicly available datasets of 4- and 5-character text-based CAPTCHA images to build a CAPTCHA solver, employing a skip-connection-based CNN model. Five-fold cross-validation on the data yields 10 different CNN models across the two datasets, with promising results compared to other studies.
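The 5-fold scheme that yields five models per dataset (ten across two datasets) can be sketched as a plain index-splitting routine; this is a generic illustration of k-fold cross-validation, with the sample count below assumed for the example.

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation:
    shuffle once, split into k folds, hold each fold out in turn."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# One model is trained per fold; with 2 datasets this gives 10 models.
splits = list(kfold_indices(1000, k=5))
```

Each of the five train/validation pairs would feed one training run of the CNN, and the per-fold scores are then averaged for the reported result.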

7.
Sensors (Basel) ; 21(18)2021 Sep 15.
Article in English | MEDLINE | ID: mdl-34577402

ABSTRACT

In the recent era, various diseases have severely affected people's lifestyles, especially among adults. Among these, bone diseases, including Knee Osteoarthritis (KOA), have a great impact on quality of life. KOA is a knee-joint disorder caused mainly by loss of articular cartilage between the femur and tibia, producing severe joint pain, effusion, constrained joint movement, and gait anomalies. To address these issues, this study presents a novel method for early-stage KOA detection using deep learning-based feature extraction and classification. First, the input X-ray images are preprocessed and the Region of Interest (ROI) is extracted through segmentation. Second, features are extracted from the preprocessed X-ray images containing the knee joint space width using hybrid feature descriptors: a Convolutional Neural Network (CNN) combined with Local Binary Patterns (LBP) and a CNN combined with Histogram of Oriented Gradients (HOG). Low-level features are computed by HOG, while texture features are computed with the LBP descriptor. Lastly, multi-class classifiers, namely Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbour (KNN), classify KOA according to the Kellgren-Lawrence (KL) system, which comprises Grades I through IV. Experimental evaluation is performed on various combinations of the proposed framework. The results show that the HOG feature descriptor provides approximately 97% accuracy for the early detection and classification of KOA across all four KL grades.
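The LBP texture descriptor used here is simple enough to sketch directly in NumPy. This is the basic 3x3 formulation for illustration only; production pipelines (and likely this paper) use uniform/rotation-invariant variants from a library such as scikit-image.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel gets an
    8-bit code from thresholding its 8 neighbours against the centre."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes -- the texture feature vector."""
    hist, _ = np.histogram(lbp_3x3(img), bins=bins, range=(0, 256))
    return hist / hist.sum()
```

The resulting histogram is the texture feature vector that would be fed, alongside the HOG and CNN features, to the SVM/RF/KNN classifiers.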


Subject(s)
Osteoarthritis, Knee , Humans , Knee Joint/diagnostic imaging , Neural Networks, Computer , Osteoarthritis, Knee/diagnostic imaging , Quality of Life , Support Vector Machine
8.
Sensors (Basel) ; 21(11)2021 Jun 07.
Article in English | MEDLINE | ID: mdl-34200216

ABSTRACT

Due to the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted examples cause DL models to misclassify inputs that humans would consider benign, and practical deployments in real physical scenarios expose these adversarial threats. Adversarial attacks and defenses, and the reliability of machine learning more broadly, have thus drawn growing interest and have been a hot research topic in recent years. We introduce a framework that defends against an adversarial speckle-noise attack by combining adversarial training with a feature fusion strategy, preserving correct classification labels. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem, a state-of-the-art endeavor. Results on these retinal fundus images, which are prone to adversarial attacks, reach 99% accuracy and show that the proposed defensive model is robust.
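The two ingredients named here, a multiplicative speckle-noise perturbation and adversarial training on perturbed copies, can be sketched generically as follows. This is an illustration of the general idea only; the paper's actual attack construction, noise severity, and training procedure are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_attack(image, severity=0.1):
    """Multiplicative speckle noise: scale each pixel by (1 + noise),
    then clip back to the valid [0, 1] intensity range."""
    noise = rng.normal(0.0, severity, size=image.shape)
    return np.clip(image * (1.0 + noise), 0.0, 1.0)

def adversarial_training_batch(images, severity=0.1):
    """Pair each clean image with its perturbed copy so the model
    trains on both -- the core idea of adversarial training."""
    noisy = [speckle_attack(im, severity) for im in images]
    return list(images) + noisy

clean = [rng.random((32, 32)) for _ in range(8)]
batch = adversarial_training_batch(clean)
```

A classifier fitted on such mixed batches tends to keep the correct label under the same noise model at test time, which is the robustness property the paper evaluates on fundus images.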


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Algorithms , Artificial Intelligence , Diabetic Retinopathy/diagnosis , Humans , Neural Networks, Computer , Reproducibility of Results
9.
PeerJ Comput Sci ; 7: e805, 2021.
Article in English | MEDLINE | ID: mdl-35036531

ABSTRACT

Breast cancer is one of the leading causes of death in women worldwide; its rapid increase has brought about more accessible diagnostic resources. The ultrasonic modality for breast cancer diagnosis is relatively cost-effective and valuable. Lesion isolation in ultrasonic images is a challenging task because of noise and the intensity similarity between lesions and surrounding tissue, and accurate detection of breast lesions in ultrasonic images can reduce death rates. In this research, a quantization-assisted U-Net approach for the segmentation of breast lesions is proposed. It contains two steps for segmentation: (1) U-Net and (2) quantization. Quantization assists the U-Net-based segmentation in isolating exact lesion areas from sonography images. The Independent Component Analysis (ICA) method then extracts features from the isolated lesions, which are fused with deep automatic features. Public ultrasonic datasets, the Breast Ultrasound Images Dataset (BUSI) and the Open Access Database of Raw Ultrasonic Signals (OASBUD), are used for evaluation and comparison. The same features were extracted from the OASBUD data, but classification was performed after feature regularization using the lasso method. The obtained results allow us to propose a computer-aided diagnosis (CAD) system for breast cancer identification using ultrasonic modalities.
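The quantization step that assists the segmentation can be illustrated as simple uniform intensity quantization; the number of levels below is an assumption for the sketch, and the paper's quantization scheme may differ.

```python
import numpy as np

def quantize(image, levels=4):
    """Uniformly quantize a [0, 1] image into `levels` intensity bins;
    coarse bins flatten speckle texture, making bright lesion regions
    easier to isolate from the background."""
    q = np.floor(image * levels).astype(int)
    return np.clip(q, 0, levels - 1) / (levels - 1)

# A smooth gradient collapses to 4 discrete intensity plateaus.
img = np.linspace(0, 1, 9).reshape(3, 3)
q = quantize(img, levels=4)
```

Applied to a sonogram, the quantized map gives the U-Net output a cleaner intensity structure to refine when delineating the lesion boundary.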
