Results 1 - 19 of 19
1.
J Neurosci Methods ; : 110215, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38968976

ABSTRACT

Brain-computer interface (BCI) technology holds promise for individuals with profound motor impairments, offering the potential for communication and control. Motor imagery (MI)-based BCI systems are particularly relevant in this context. Despite their potential, achieving accurate and robust classification of MI tasks using electroencephalography (EEG) data remains a significant challenge. In this paper, we employed the Minimum Redundancy Maximum Relevance (MRMR) algorithm to optimize channel selection. Furthermore, we introduced a hybrid optimization approach that combines the War Strategy Optimization (WSO) and Chimp Optimization Algorithm (ChOA). This hybridization significantly enhances the classification model's overall performance and adaptability. A two-tier deep learning architecture is proposed for classification, consisting of a Convolutional Neural Network (CNN) and a modified Deep Neural Network (M-DNN). The CNN focuses on capturing temporal correlations within EEG data, while the M-DNN is designed to extract high-level spatial characteristics from selected EEG channels. Integrating optimal channel selection, hybrid optimization, and the two-tier deep learning methodology in our BCI framework presents an enhanced approach for precise and effective BCI control. Our model achieved 95.06% accuracy with high precision. This advancement has the potential to significantly impact neurorehabilitation and assistive technology applications, facilitating improved communication and control for individuals with motor impairments.
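The greedy MRMR selection described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's code: absolute Pearson correlation stands in for the mutual-information scores MRMR normally uses, and the function name `mrmr_select` is hypothetical.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy MRMR sketch: repeatedly pick the channel (column of X) that
    maximises relevance to the label y minus mean redundancy with the
    channels already selected. |Pearson correlation| is used as a cheap
    stand-in for mutual information."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]          # most relevant channel first
    while len(selected) < k:
        best, best_score = -1, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            # mean redundancy with the already-selected channels
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

A redundant duplicate of an already-selected channel scores poorly even if it is highly relevant, which is the point of the "minimum redundancy" term.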

2.
Sensors (Basel) ; 23(19)2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37836904

ABSTRACT

Battery replacement or recharging is essential for sensor nodes because they are typically powered by batteries in wireless sensor network (WSN) applications. Therefore, an energy-efficient data transfer technique is required. The base station (BS) receives data from one sensor node and routes the data to another sensor node. To this end, a novel energy-efficient routing algorithm using fuzzy logic (EERF) is proposed in this study. Fuzzy logic is a reasoning technique well suited to scenarios with a high degree of ambiguity. The remaining energy, the distance between the sensor node and the base station, and the total number of connected sensor nodes are the inputs given to the fuzzy system of the proposed EERF algorithm. The proposed EERF is compared with existing systems, such as the energy-aware unequal clustering using fuzzy logic (EAUCF) and distributed unequal clustering using fuzzy logic (DUCF) algorithms, in terms of evaluation criteria including energy consumption, the number of active sensor nodes per round in the network, and network stability. EERF outperformed both EAUCF and DUCF.
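A fuzzy routing score of the kind the abstract describes can be sketched as a tiny Mamdani-style system. This is a hypothetical illustration, not EERF itself: it uses only two of the paper's three inputs (normalised residual energy and distance to the BS), assumed triangular membership functions, and singleton outputs for defuzzification.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def low(x):  return tri(x, -0.5, 0.0, 0.5)
def med(x):  return tri(x, 0.0, 0.5, 1.0)
def high(x): return tri(x, 0.5, 1.0, 1.5)

def route_preference(energy, distance):
    """Inputs normalised to [0, 1]. AND = min, OR = max; weighted-average
    defuzzification over singleton outputs (0.9 high, 0.5 medium, 0.1 low)."""
    r_high = min(high(energy), low(distance))   # high energy AND near BS -> prefer
    r_med  = max(med(energy), med(distance))    # middling either way -> neutral
    r_low  = max(low(energy), high(distance))   # depleted OR far -> avoid
    num = 0.9 * r_high + 0.5 * r_med + 0.1 * r_low
    den = r_high + r_med + r_low
    return num / den if den else 0.0
```

A fully charged node next to the BS then scores far higher than a depleted, distant one, which is the routing behaviour EERF's fuzzy system is built to produce.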

3.
J Ind Inf Integr ; : 100485, 2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37359315

ABSTRACT

In the present pandemic era, vaccination is necessary to prevent severe infectious diseases such as COVID-19. Specifically, vaccine safety is strongly linked to global health and security. However, vaccine record forgery and counterfeiting of vaccines remain common in traditional vaccine supply chains. Conventional vaccine supply chains do not have proper authentication among all supply chain entities. Blockchain technology is an excellent contender to resolve the issues mentioned above. Blockchain-based vaccine supply chains can potentially satisfy the objectives and functions of the next-generation supply chain model; however, their integration with the supply chain model is still constrained by substantial scalability and security issues. Consequently, current blockchain technology with the traditional Proof-of-Work (PoW) consensus is incompatible with the next-generation vaccine supply chain framework. This paper introduces a model named "VaccineChain" - a novel checkpoint-assisted, scalable, blockchain-based secure vaccine supply chain. VaccineChain guarantees the complete integrity and immutability of vaccine supply records to combat counterfeit vaccines across the supply chain. A dynamic consensus algorithm with varying validation difficulty levels supports the efficient scalability of VaccineChain. Moreover, VaccineChain includes anonymous authentication among entities to provide selective revocation. This work also presents a use case example of a secure vaccine supply chain using checkpoint-assisted scalable blockchain with customized transaction-generation rules and smart contracts to demonstrate the application of VaccineChain. A comprehensive security analysis with standard theoretical proofs establishes the computational infeasibility of attacks on VaccineChain, and a detailed performance analysis with test simulations shows its practicability.

4.
SN Comput Sci ; 4(3): 214, 2023.
Article in English | MEDLINE | ID: mdl-36811126

ABSTRACT

The coronavirus disease (COVID-19) is a highly contagious and dangerous disease that affects the human respiratory system. Early detection of this disease is crucial to contain the further spread of the virus. In this paper, we propose a methodology using the DenseNet-169 architecture for diagnosing the disease from chest X-ray images of patients. We used a pretrained neural network and then applied transfer learning to train it on our dataset. We also used the nearest-neighbour interpolation technique for data preprocessing and the Adam optimizer for optimization. Our methodology achieved 96.37% accuracy, which was better than that obtained using other deep learning models such as AlexNet, ResNet-50, VGG-16, and VGG-19.
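The transfer-learning recipe in the abstract (frozen pretrained backbone, newly trained classifier head) can be illustrated without any deep-learning framework. In this sketch a fixed random ReLU projection is a hypothetical stand-in for the frozen DenseNet-169 features, and plain gradient descent replaces the Adam optimizer the paper uses, to keep the code short.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained backbone": a fixed random ReLU projection standing in for
# frozen DenseNet-169 features (hypothetical stand-in, never updated).
W_frozen = rng.normal(size=(20, 8)) / np.sqrt(20)

def backbone(x):
    return np.maximum(x @ W_frozen, 0.0)

def train_head(X, y, lr=1.0, epochs=500):
    """Train only the classification head (logistic regression) on top of
    the frozen features; the backbone weights are never touched."""
    F = backbone(X)
    w = np.zeros(F.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid output
        g = p - y                               # gradient of the log-loss
        w -= lr * F.T @ g / len(y)
        b -= lr * g.mean()
    return w, b
```

The key design point is that only the small head is fitted, so very little labelled data is needed; the heavy feature extractor is reused as-is.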

5.
Med Biol Eng Comput ; 60(5): 1511-1525, 2022 May.
Article in English | MEDLINE | ID: mdl-35320457

ABSTRACT

Acutance is a subjective parameter which indicates the quality of edges in an image. Objective metrics for measuring image acutance are helpful for designing new imaging protocols and sequences in magnetic resonance imaging (MRI) studies. In addition to this, image acutance metrics have a significant role in the design and optimisation of post-processing algorithms used for restoration and sharpening of MR imagery. Most of the existing blur/sharpness metrics are specifically designed for natural-scene (panoramic) images. A blur/sharpness metric suitable for MR imaging applications is absent in the literature. To fill this gap, a computationally fast metric, 'largest local gradient-based sharpness metric (LLGSM)', for measuring sharpness and blur in MR imagery, is proposed in this paper. The LLGSM is the root mean square (RMS) of exponentially weighted elements in an array of lexicographically ordered largest local gradient (LLG) values in the image, sorted in descending order. In terms of overall agreement with subjective scores, and computational speed, the LLGSM is observed to be more efficient than its alternatives available in the literature.
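The LLGSM construction described above (largest local gradients, sorted descending, exponentially weighted, then RMS) is simple enough to sketch directly. This is an illustration of the idea only: the exponential weight `exp(-alpha * rank / N)` is an assumption, as the paper defines its own weighting.

```python
import numpy as np

def llgsm(img, alpha=4.0):
    """Sketch of the LLGSM idea: largest local gradient (LLG) per pixel,
    sorted in descending order, exponentially weighted, then RMS."""
    f = img.astype(float)
    g = np.zeros_like(f)
    # largest absolute difference to the bottom/right neighbours
    g[:-1, :] = np.maximum(g[:-1, :], np.abs(np.diff(f, axis=0)))
    g[:, :-1] = np.maximum(g[:, :-1], np.abs(np.diff(f, axis=1)))
    llg = np.sort(g.ravel())[::-1]                       # descending order
    w = np.exp(-alpha * np.arange(llg.size) / llg.size)  # assumed weighting
    return float(np.sqrt(np.mean((w * llg) ** 2)))
```

A sharp step edge concentrates large gradients at the top of the sorted array, where the weights are largest, so it scores higher than the same edge spread over a ramp.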


Subject(s)
Algorithms , Magnetic Resonance Imaging , Quality Control
6.
J Digit Imaging ; 35(4): 1041-1060, 2022 08.
Article in English | MEDLINE | ID: mdl-35296942

ABSTRACT

Poor acutance (unsharpness) of images is one of the major concerns in magnetic resonance imaging (MRI). MRI-based diagnosis and clinical interventions become difficult due to vague textural information and weak morphological margins on images. A novel image sharpening algorithm named maximum local variation-based unsharp masking (MLVUM) is proposed in this paper to address the issue of unsharpness in MRI. In the MLVUM, the sharpened image is the algebraic sum of the input image and the product of a user-defined scale and the difference between the output of a newly designed nonlinear spatial filter, named the maximum local variation-controlled edge smoothing Gaussian filter (MLVESGF), and the input image, weighted by the normalised MLV. The MLVESGF is a locally adaptive 2D Gaussian edge-smoothing kernel whose standard deviation is directly proportional to the local value of the normalised MLV. The values of the acutance-to-noise ratio (ANR) and absolute mean brightness error (AMBE) shown by the MLVUM on 100 MRI slices are 0.6463 ± 0.1852 and 0.3323 ± 0.2200, respectively. Compared to 17 state-of-the-art image sharpening algorithms, the MLVUM exhibited a higher ANR and lower AMBE. The MLVUM selectively enhances the sharpness of edges in MR images without amplifying the background noise or altering the mean brightness level.
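The gating idea behind MLVUM (sharpening driven by a normalised maximum-local-variation map, so flat regions stay untouched) can be sketched with classic unsharp masking. This is a simplified illustration under stated assumptions: a fixed 1-2-1 smoothing kernel stands in for the paper's adaptive MLVESGF filter, and the unsharp-mask sign convention is the conventional one.

```python
import numpy as np

def mlv(img):
    """Maximum local variation: largest absolute difference to the
    4-connected neighbours, normalised to [0, 1]."""
    f = img.astype(float)
    m = np.zeros_like(f)
    dv = np.abs(np.diff(f, axis=0))
    dh = np.abs(np.diff(f, axis=1))
    m[:-1, :] = np.maximum(m[:-1, :], dv)
    m[1:, :] = np.maximum(m[1:, :], dv)
    m[:, :-1] = np.maximum(m[:, :-1], dh)
    m[:, 1:] = np.maximum(m[:, 1:], dh)
    return m / (m.max() + 1e-12)

def mlvum_sketch(img, scale=1.5):
    """Unsharp masking gated by the normalised MLV, so flat regions
    (MLV ~ 0) are left untouched. Fixed separable 1-2-1 kernel stands
    in for the adaptive MLVESGF."""
    f = img.astype(float)
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    smooth = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, f)
    smooth = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, smooth)
    return f + scale * mlv(f) * (f - smooth)
```

On a step image the edge contrast grows while interior pixels are returned exactly unchanged, which is the selective behaviour the abstract claims for MLVUM.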


Subject(s)
Algorithms , Magnetic Resonance Imaging , Humans , Image Enhancement/methods , Image Processing, Computer-Assisted
7.
ACS Omega ; 7(51): 47796-47805, 2022 Dec 27.
Article in English | MEDLINE | ID: mdl-36591164

ABSTRACT

This paper focuses on the preparation of pure and Cr-doped tungsten trioxide (WO3) thin films using the spray pyrolysis method. Different techniques were adopted to analyze the films' structural and morphological properties. X-ray diffraction analysis showed that the average crystallite size of the WO3-nanostructured thin films increased as the Cr doping concentration increased. The atomic force microscopy results showed that the root-mean-square roughness of the films increased with Cr doping concentration up to 3 wt % and then decreased. The increased roughness is favorable for gas-sensing applications. Surface morphology and elemental analysis of the films were studied by field emission scanning electron microscopy with energy-dispersive X-ray spectroscopy measurements. The 3 wt % Cr-WO3 film has a large nanoflake-like structure with high surface roughness and porous morphology. The gas-sensing characteristics of undoped and Cr-doped WO3 thin films were investigated with various gases at room temperature. The results showed that the 3 wt % Cr-doped WO3 film exhibited the maximum response toward 50 ppm of xylene, with excellent selectivity, at room temperature. We believe that increased lattice defects, surface morphology, and roughness due to Cr doping in the WO3 crystal matrix might be responsible for the increased xylene sensitivity.

8.
Big Data ; 9(6): 480-498, 2021 12.
Article in English | MEDLINE | ID: mdl-34191590

ABSTRACT

Accurate detection of malignant tumors on lung computed tomography scans is crucial for early diagnosis of lung cancer and hence faster recovery of patients. Several deep learning methodologies have been proposed for lung tumor detection, especially the convolutional neural network (CNN). However, as a CNN may lose some of the spatial relationships between features, we combine texture features such as fractal features and gray-level co-occurrence matrix (GLCM) features with the CNN features to improve the accuracy of tumor detection. Our framework has two advantages. First, it fuses CNN features with hand-crafted features such as fractal and GLCM features to gather spatial information. Second, we reduce the overfitting effect by replacing the softmax layer with a support vector machine classifier. Experiments have shown that texture features such as fractal and GLCM, when concatenated with deep features extracted from the DenseNet architecture, achieve an accuracy of 95.42%, sensitivity of 97.49%, specificity of 93.97%, and a positive predictive value of 95.96%, with an area under the curve of 0.95.
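The GLCM texture features the abstract concatenates with the CNN features can be computed in a few lines. This is a minimal sketch, assuming a single horizontal pixel offset and 8 grey levels; production GLCM code (e.g. in scikit-image) averages over several offsets and angles.

```python
import numpy as np

def glcm(img, levels=8):
    """Normalised grey-level co-occurrence matrix for horizontally
    adjacent pixel pairs; expects img values in [0, 1]."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantise grey levels
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()              # adjacent pairs
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)                                 # count co-occurrences
    return m / m.sum()

def glcm_features(img):
    """Three classic Haralick-style texture features from the GLCM."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),
        "energy": float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
    }
```

A flat image gives zero contrast and maximal energy, while a checkerboard gives high contrast — the kind of texture signal a plain CNN pooling stack can wash out.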


Subject(s)
Fractals , Neoplasms , Humans , Lung , Neural Networks, Computer , Tomography, X-Ray Computed
9.
Med Biol Eng Comput ; 57(12): 2673-2682, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31713709

ABSTRACT

Cancer classification is one of the crucial tasks in the medical field. The gene expression of cells helps in identifying cancer. However, the high dimensionality of gene expression data hinders the classification performance of any machine learning model. Therefore, we propose in this paper a methodology to classify cancer using gene expression data. We employ a bio-inspired algorithm called the binary bat algorithm for feature selection and an extreme learning machine for classification. We also propose a novel fitness function for optimizing the feature selection process performed by the binary bat algorithm. Our proposed methodology has been compared with the original fitness function found in the literature, and the experiments conducted show that the former outperforms the latter. Graphical Abstract: Classification using binary bat optimization and extreme learning machine.
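Fitness functions for wrapper-style feature selection of this kind typically trade classification accuracy against subset size. The form below is a common baseline shown for illustration; the paper's novel fitness function differs in its details, and `alpha` here is an assumed weight.

```python
def fitness(accuracy, n_selected, n_total, alpha=0.9):
    """Typical wrapper-selection fitness: reward accuracy, penalise
    large feature subsets. alpha balances the two terms (assumed value)."""
    return alpha * accuracy + (1 - alpha) * (1 - n_selected / n_total)
```

At equal accuracy, a candidate bat position selecting fewer genes scores strictly higher, which is what drives the search toward compact gene subsets.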


Subject(s)
Neoplasms/genetics , Algorithms , Gene Expression/genetics , Humans , Machine Learning
10.
Eur J Radiol ; 114: 14-24, 2019 May.
Article in English | MEDLINE | ID: mdl-31005165

ABSTRACT

The advent of Deep Learning (DL) is poised to dramatically change the delivery of healthcare in the near future. Not only has DL profoundly affected the healthcare industry, it has also influenced global businesses. Within a span of very few years, advances such as self-driving cars, robots performing jobs that are hazardous to humans, and chatbots talking with human operators have proved that DL has already made a large impact on our lives. The open-source nature of DL and decreasing prices of computer hardware will further propel such changes. In healthcare, the potential is immense due to the need to automate processes and evolve error-free paradigms. The sheer quantum of DL publications in healthcare has surpassed other domains, growing at a very fast pace, particularly in radiology. It is therefore imperative for radiologists to learn about DL and how it differs from other approaches to Artificial Intelligence (AI). The next generation of radiology will see a significant role for DL, which will likely serve as the base for augmented radiology (AR). Better clinical judgement by AR will help in improving the quality of life and in life-saving decisions, while lowering healthcare costs. A comprehensive review of DL as well as its implications for healthcare is presented here. We analysed 150 articles on DL in the healthcare domain from PubMed, Google Scholar, and IEEE Xplore, focused on medical imaging only. We have further examined the ethical, moral, and legal issues surrounding the use of DL in medical imaging.


Subject(s)
Deep Learning/trends , Radiology/trends , Artificial Intelligence/trends , Delivery of Health Care/trends , Forecasting , Humans , Quality of Life , Radiologists/standards , Radiologists/statistics & numerical data , Radiologists/trends
11.
J Neurosci Methods ; 314: 31-40, 2019 02 15.
Article in English | MEDLINE | ID: mdl-30660481

ABSTRACT

BACKGROUND: A brain-computer interface (BCI) is a combination of hardware and software that provides a non-muscular channel to send various messages and commands to the outside world and control external devices such as computers. BCI helps severely disabled patients, such as those with neuromuscular injuries or locked-in syndrome (LiS), lead as normal a life as possible. BCI has various applications not only in the field of medicine but also in entertainment, lie detection, gaming, etc. METHODOLOGY: In this work, a Deceit Identification Test (DIT) is performed using BCI based on the P300, which has a positive peak from 300 ms to 1000 ms after stimulus onset. The goal is to recognize and classify P300 signals with excellent results. Pre-processing has been performed using a band-pass filter to eliminate artifacts. COMPARISON WITH EXISTING METHODS: Wavelet packet transform (WPT) is applied for feature extraction, whereas linear discriminant analysis (LDA) is used as the classifier. A comparison with other existing methods, namely BCD, BAD, BPNN, etc., has been performed. RESULTS: A novel experiment was conducted using an EEG acquisition device to collect a dataset from 20 subjects, where 10 subjects acted as guilty and 10 subjects acted as innocent. Training and testing data are in the ratio of 90:10, and the accuracy obtained is up to 91.67%. The proposed approach using WPT and LDA results in high accuracy, sensitivity, and specificity. CONCLUSION: The method provided better results in comparison with the other existing methods. It is an efficient approach to deceit identification for EEG-based BCI.


Subject(s)
Brain-Computer Interfaces , Electroencephalography/methods , Lie Detection , Pattern Recognition, Automated/methods , Wavelet Analysis , Adult , Brain/physiology , Deception , Discriminant Analysis , Event-Related Potentials, P300 , Female , Humans , Linear Models , Male , Visual Perception/physiology , Young Adult
12.
Med Biol Eng Comput ; 57(2): 543-564, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30255236

ABSTRACT

Manual ultrasound (US)-based methods are adopted for lumen diameter (LD) measurement to estimate the risk of stroke, but they are tedious, error-prone, and subjective, causing variability. We propose an automated deep learning (DL)-based system for lumen detection. The system consists of a combination of two DL stages: an encoder and a decoder for lumen segmentation. The encoder employs a 13-layer convolutional neural network (CNN) model for rich feature extraction. The decoder employs three up-sampling layers of a fully convolutional network (FCN) for lumen segmentation. Three sets of manual tracings were used during the training paradigm, leading to the design of three DL systems. A cross-validation protocol was implemented for all three DL systems. Using the polyline distance metric, the precision of merit for the three DL systems over 407 US scans was 99.61%, 97.75%, and 99.89%, respectively. The Jaccard index and Dice similarity of the DL lumen-segmented region against the three ground truth (GT) regions were 0.94, 0.94, and 0.93 and 0.97, 0.97, and 0.97, respectively. The corresponding AUC for the three DL systems was 0.95, 0.91, and 0.93. The experimental results demonstrated the superior performance of the proposed deep learning system over conventional methods in the literature.
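The Jaccard index and Dice similarity used to score the segmentations above are standard overlap measures between a predicted mask and a ground-truth mask, and are worth stating precisely:

```python
import numpy as np

def jaccard(a, b):
    """Intersection over union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def dice(a, b):
    """2 * intersection / (|a| + |b|) of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    s = a.sum() + b.sum()
    return 2 * inter / s if s else 1.0
```

Dice is always at least as large as Jaccard for the same pair of masks (D = 2J / (1 + J)), which is why the Dice scores quoted in the abstract (0.97) sit above the Jaccard scores (0.93-0.94).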


Subject(s)
Carotid Arteries/physiopathology , Diabetes Mellitus/physiopathology , Stroke/physiopathology , Aged , Deep Learning , Female , Humans , Machine Learning , Male , Neural Networks, Computer , Retrospective Studies , Risk Assessment/methods , Ultrasonography/methods
13.
Front Biosci (Landmark Ed) ; 24(3): 392-426, 2019 01 01.
Article in English | MEDLINE | ID: mdl-30468663

ABSTRACT

Deep learning (DL) is affecting each and every sphere of public and private life and becoming a tool for daily use. The power of DL lies in the fact that it tries to imitate the activities of neurons in the neocortex of the human brain, where the thought process takes place. Therefore, like the brain, it tries to learn and recognize patterns in the form of digital images. This power is built on the depth of many layers of computing neurons, backed by high-power processors and graphics processing units (GPUs) easily available today. We provide a detailed survey of the various types of DL systems available today and concentrate specifically on current applications of DL in medical imaging. We also explain to readers the rapid transition of technology from machine learning to DL and reason about this paradigm shift. Further, we present a detailed analysis of the complexities involved in this shift and the possible benefits accrued by users and developers.


Subject(s)
Algorithms , Diagnostic Imaging/methods , Image Processing, Computer-Assisted/methods , Machine Learning , Neural Networks, Computer , Brain/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods , Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods
14.
Comput Biol Med ; 98: 100-117, 2018 07 01.
Article in English | MEDLINE | ID: mdl-29778925

ABSTRACT

MOTIVATION: The carotid intima-media thickness (cIMT) is an important biomarker for cardiovascular diseases and stroke monitoring. This study presents an intelligence-based, novel, robust, and clinically-strong strategy that uses a combination of deep-learning (DL) and machine-learning (ML) paradigms. METHODOLOGY: A two-stage DL-based system (a class of AtheroEdge™ systems) was proposed for cIMT measurements. Stage I consisted of a convolution layer-based encoder for feature extraction and a fully convolutional network-based decoder for image segmentation. This stage generated the raw inner lumen borders and raw outer interadventitial borders. To smooth these borders, the DL system used a cascaded stage II that consisted of ML-based regression. The final outputs were the far-wall lumen-intima (LI) and media-adventitia (MA) borders, which were used for cIMT measurements. There were two sets of gold standards during the DL design; therefore, two sets of DL systems (DL1 and DL2) were derived. RESULTS: A total of 396 B-mode ultrasound images of the right and left common carotid artery were used from 203 patients (Institutional Review Board approved, Toho University, Japan). For the test set, the cIMT error for the DL1 and DL2 systems with respect to the gold standard was 0.126 ± 0.134 and 0.124 ± 0.100 mm, respectively. The corresponding LI error for the DL1 and DL2 systems was 0.077 ± 0.057 and 0.077 ± 0.049 mm, respectively, while the corresponding MA error for DL1 and DL2 was 0.113 ± 0.105 and 0.109 ± 0.088 mm, respectively. The results showed up to a 20% improvement in cIMT readings for the DL system compared to the sonographer's readings. Four statistical tests were conducted to evaluate reliability, stability, and statistical significance. CONCLUSION: The results showed that the performance of the DL-based approach was superior to the nonintelligence-based conventional methods that use spatial intensities alone. The DL system can be used for stroke risk assessment during routine or clinical trial modes.


Subject(s)
Carotid Arteries/diagnostic imaging , Carotid Intima-Media Thickness , Deep Learning , Image Interpretation, Computer-Assisted/methods , Ultrasonography/methods , Aged , Aged, 80 and over , Carotid Artery Diseases/diagnostic imaging , Cohort Studies , Databases, Factual , Diabetes Complications , Female , Humans , Japan , Male , ROC Curve
15.
Comput Methods Programs Biomed ; 155: 165-177, 2018 03.
Article in English | MEDLINE | ID: mdl-29512496

ABSTRACT

BACKGROUND AND OBJECTIVE: Fatty Liver Disease (FLD), a disease caused by the deposition of fat in liver cells, is a precursor to terminal diseases such as liver cancer. The machine learning (ML) techniques applied for FLD detection and risk stratification using ultrasound (US) have limitations in computing tissue characterization features, thereby limiting accuracy. METHODS: Under the class of Symtosis for FLD detection and risk stratification, this study presents a Deep Learning (DL)-based paradigm that computes nearly seven million weights per image when passed through a 22-layer neural network during the cross-validation (training and testing) paradigm. The DL architecture consists of cascaded layers of operations such as convolution, pooling, rectified linear units, dropout, and a special block called the inception module that provides speed and efficiency. All data analysis is performed on an optimized tissue region, obtained by removing background information. We benchmark the DL system against the conventional ML protocols: support vector machine (SVM) and extreme learning machine (ELM). RESULTS: The liver US data consist of 63 patients (27 normal/36 abnormal). Using the K10 cross-validation protocol (90% training and 10% testing), the detection and risk stratification accuracies are 82%, 92%, and 100% for the SVM, ELM, and DL systems, respectively. The corresponding areas under the curve are 0.79, 0.92, and 1.0, respectively. We further validate our DL system using two-class biometric facial data, which yields an accuracy of 99%. CONCLUSION: The DL system shows superior performance for liver detection and risk stratification compared to conventional machine learning systems: SVM and ELM.


Subject(s)
Diagnosis, Computer-Assisted , Fatty Liver/diagnostic imaging , Machine Learning , Benchmarking , Computational Biology , Fatty Liver/diagnosis , Humans , Image Interpretation, Computer-Assisted , Neural Networks, Computer , ROC Curve , Reproducibility of Results , Risk Factors , Support Vector Machine , Ultrasonography
16.
J Med Syst ; 42(1): 18, 2017 12 07.
Article in English | MEDLINE | ID: mdl-29218604

ABSTRACT

The original version of this article unfortunately contained a mistake. The family name of Rui Tato Marinho was incorrectly spelled as Marinhoe.

17.
Healthc Technol Lett ; 4(4): 122-128, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28868148

ABSTRACT

Low-power wearable devices for disease diagnosis can be used anytime and anywhere. They are non-invasive and pain-free, supporting a better quality of life. However, these devices are resource-constrained in terms of memory and processing capability. The memory constraint allows these devices to store only a limited number of patterns, and the processing constraint leads to delayed responses. It is a challenging task to design a robust classification system with high accuracy under the above constraints. In this Letter, to resolve this problem, a novel architecture for weightless neural networks (WNNs) is proposed. It uses variable-sized random access memories to optimise memory usage and a modified binary trie data structure to reduce test time. In addition, a bio-inspired genetic algorithm has been employed to improve accuracy. The proposed architecture is evaluated on various disease datasets using its software and hardware realisations. The experimental results show that the proposed architecture achieves better performance in terms of accuracy, memory saving, and test time compared to standard WNNs. It also outperforms conventional neural network-based classifiers in terms of accuracy. The proposed architecture is thus well suited to low-power wearable devices, addressing their memory, accuracy, and response time issues.

18.
J Med Syst ; 41(10): 152, 2017 08 23.
Article in English | MEDLINE | ID: mdl-28836045

ABSTRACT

Fatty Liver Disease (FLD) is caused by the deposition of fat in liver cells and leads to deadly diseases such as liver cancer. Several FLD detection and characterization systems using machine learning (ML) based on Support Vector Machines (SVM) have been applied. These ML systems utilize a large number of ultrasonic grayscale features, a pooling strategy for selecting the best features, and several combinations of training/testing. As a result, they are computationally intensive and slow, and do not guarantee high performance due to the mismatch between grayscale features and classifier type. This study proposes a reliable and fast Extreme Learning Machine (ELM)-based tissue characterization system (a class of Symtosis) for risk stratification of ultrasound liver images. ELM is used to train a single-layer feed-forward neural network (SLFFNN). The input-to-hidden layer weights are randomly generated, reducing computational cost. The only weights to be trained are the hidden-to-output layer weights, which are learned in a single pass (without any iteration), making ELM faster than conventional ML methods. Adopting four types of K-fold cross-validation protocols (K = 2, 3, 5 and 10) on three kinds of data sizes - S0 (original), S4 (four splits), and S8 (sixty-four splits), a total of 12 cases - and 46 types of grayscale features, we stratify the FLD US images using ELM and benchmark against SVM. Using the US liver database of 63 patients (27 normal/36 abnormal), our results demonstrate the superior performance of ELM compared to SVM for all cross-validation protocols (K2, K3, K5 and K10) and all types of US data sets (S0, S4, and S8) in terms of sensitivity, specificity, accuracy, and area under the curve (AUC). Using the K10 cross-validation protocol on the S8 data set, ELM showed an accuracy of 96.75% compared to 89.01% for SVM and, correspondingly, AUCs of 0.97 and 0.91, respectively. Further experiments also showed a mean reliability of 99% for the ELM classifier, along with a mean speed improvement of 40% using ELM against SVM. We validated the Symtosis system using two-class public biometric facial data, demonstrating an accuracy of 100%.
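The single-pass ELM training the abstract describes is compact enough to sketch in full: random, fixed input-to-hidden weights, and hidden-to-output weights solved in closed form. This is a generic ELM sketch, not the paper's system; the hidden-layer size and tanh activation are illustrative choices.

```python
import numpy as np

def elm_train(X, y, hidden=100, seed=0):
    """Minimal ELM: input-to-hidden weights W, b are random and fixed;
    only the hidden-to-output weights beta are fitted, in closed form
    via a pseudo-inverse (one pass, no iterative training)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The speed advantage over iteratively trained networks comes entirely from the closed-form solve: training cost is one matrix factorization rather than many gradient epochs.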


Subject(s)
Liver Diseases , Algorithms , Humans , Neural Networks, Computer , Reproducibility of Results , Support Vector Machine
19.
Comput Biol Med ; 81: 79-92, 2017 02 01.
Article in English | MEDLINE | ID: mdl-28027460

ABSTRACT

Diabetes is a major health challenge around the world. Existing rule-based classification systems have been widely used for diabetes diagnosis, even though they must overcome the challenge of producing a comprehensive optimal ruleset while balancing accuracy, sensitivity and specificity values. To resolve this drawback, in this paper, a Spider Monkey Optimization-based rule miner (SM-RuleMiner) has been proposed for diabetes classification. A novel fitness function has also been incorporated into SM-RuleMiner to generate a comprehensive optimal ruleset while balancing accuracy, sensitivity and specificity. The proposed rule-miner is compared against three rule-based algorithms, namely ID3, C4.5 and CART, along with several meta-heuristic-based rule mining algorithms, on the Pima Indians Diabetes dataset using 10-fold cross validation. It has been observed that the proposed rule miner outperforms several well-known algorithms in terms of average classification accuracy and average sensitivity. Moreover, the proposed rule miner outperformed the other algorithms in terms of mean rule length and mean ruleset size.
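The core objects of a rule miner like the one above are interval rules over features and a fitness that balances accuracy, sensitivity, and specificity. The sketch below is hypothetical: the rule encoding and the weighted-sum fitness (with assumed weights `w`) only illustrate the kind of function SM-RuleMiner optimises, not its actual novel fitness.

```python
def rule_matches(x, rule):
    """A rule maps feature name -> (low, high); x matches if every
    conditioned feature falls inside its interval."""
    return all(lo <= x[f] <= hi for f, (lo, hi) in rule.items())

def rule_fitness(rule, X, y, w=(0.5, 0.25, 0.25)):
    """Weighted blend of accuracy, sensitivity, and specificity
    (weights are illustrative assumptions)."""
    preds = [rule_matches(x, rule) for x in X]
    tp = sum(p and t for p, t in zip(preds, y))
    tn = sum(not p and not t for p, t in zip(preds, y))
    fp = sum(p and not t for p, t in zip(preds, y))
    fn = sum(not p and t for p, t in zip(preds, y))
    acc = (tp + tn) / len(y)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    wa, ws, wp = w
    return wa * acc + ws * sens + wp * spec
```

The spider monkey search then perturbs rule intervals and keeps candidates with higher fitness, so a rule that cleanly separates diabetic from non-diabetic records dominates one that does not.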


Subject(s)
Algorithms , Data Mining/methods , Decision Support Systems, Clinical/organization & administration , Diabetes Mellitus/diagnosis , Diagnosis, Computer-Assisted/methods , Electronic Health Records/organization & administration , Animals , Atelinae , Biomimetics/methods , Diabetes Mellitus/classification , Humans , Reproducibility of Results , Sensitivity and Specificity