Results 1 - 20 of 569
1.
Comput Math Methods Med ; 2022: 4596552, 2022.
Article in English | MEDLINE | ID: mdl-35309845

ABSTRACT

The objective of this study was to explore the predictive value of electrocardiogram (ECG) analysis based on an intelligent algorithm for atrial fibrillation (AF) in elderly patients undergoing coronary artery bypass grafting (CABG). Specifically, 106 elderly patients with coronary heart disease who underwent CABG in the hospital were selected, including 52 patients with postoperative AF (AF group) and 54 patients without arrhythmia (control group). Within 1-3 weeks after the operation, the dynamic ECG monitoring system constructed in this study, based on the Gentle AdaBoost algorithm, was used. After measurement of the 12-lead P wave duration, the maximum P wave duration (Pmax) and minimum P wave duration (Pmin) were recorded. In the simulation experiments, the same data were used for the back-propagation algorithm. The results showed that, for detection accuracy on the test samples, the Gentle AdaBoost algorithm reached 93.7% after the first iteration, 16.1% higher than the back-propagation algorithm. Compared with the control group, the detection rate of arrhythmia in patients after CABG was significantly lower (P < 0.05). Bivariate logistic regression analysis of Pmax and Pmin showed the following: Pmax: 95% confidence interval (CI): 1.024-1.081, P < 0.05; Pmin: 95% CI: 1.036-1.117, P < 0.05. The sensitivity of Pmax and Pmin in predicting paroxysmal AF was 78.2% and 73.4%, respectively; their specificity was 80.1% and 85.6%; the positive predictive value was 81.2% and 83.4%; and the negative predictive value was 79.5% and 75.3%. In conclusion, the generalization ability of the Gentle AdaBoost algorithm was better than that of the back-propagation algorithm, and it identified arrhythmia better. Pmax and Pmin were important indicators of AF after CABG.
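The reported odds-ratio confidence intervals come from a bivariate logistic regression of Pmax and Pmin. A minimal sketch of that analysis step with statsmodels, on simulated data (the variable values here are illustrative, not the study's measurements):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 106  # same cohort size as the study; the data themselves are simulated
df = pd.DataFrame({
    "Pmax": rng.normal(115, 12, n),   # P wave durations in ms (synthetic)
    "Pmin": rng.normal(62, 8, n),
})
# Simulated AF outcome loosely tied to P wave duration
logit_true = 0.05 * (df["Pmax"] - 115) + 0.07 * (df["Pmin"] - 62)
df["AF"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

X = sm.add_constant(df[["Pmax", "Pmin"]])
model = sm.Logit(df["AF"], X).fit(disp=False)

# Odds ratios with 95% confidence intervals, analogous to the reported CIs
or_ci = np.exp(model.conf_int())
or_ci["OR"] = np.exp(model.params)
print(or_ci.rename(columns={0: "2.5%", 1: "97.5%"}))
```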


Subject(s)
Algorithms , Atrial Fibrillation/diagnosis , Atrial Fibrillation/etiology , Coronary Artery Bypass/adverse effects , Electrocardiography/statistics & numerical data , Postoperative Complications/diagnosis , Postoperative Complications/etiology , Aged , Case-Control Studies , Computational Biology , Confidence Intervals , Coronary Disease/surgery , Diagnosis, Computer-Assisted/statistics & numerical data , Female , Humans , Logistic Models , Male , Middle Aged , Predictive Value of Tests
2.
Comput Math Methods Med ; 2022: 7531371, 2022.
Article in English | MEDLINE | ID: mdl-35211186

ABSTRACT

OBJECTIVE: To explore the establishment and verification of a logistic regression model for the qualitative diagnosis of ovarian cancer based on MRI and ultrasound signs. METHODS: 207 patients with ovarian tumors in our hospital from April 2018 to April 2021 were selected, of whom 138 formed the training group for model creation and 69 the validation group for model evaluation. The differences in MRI and ultrasound signs between patients with ovarian cancer and those with benign ovarian tumors in the training group were analyzed. Risk factors were screened by multifactor unconditional logistic regression analysis, and the regression equation was established. Internal validation was carried out with receiver operating characteristic (ROC) curve analysis, and external validation with K-fold cross-validation. RESULTS: There was no significant difference in age, body mass index, menstruation, dysmenorrhea, number of pregnancies, cumulative menstrual years, or marital status between the two groups (P > 0.05). After logistic regression analysis, the diagnostic model for ovarian cancer was established as logit(P) = -1.153 + [MRI signs: morphology × 1.459 + boundary × 1.549 + enhancement × 1.492 + tumor components × 1.553] + [ultrasound signs: morphology × 1.594 + mainly solid × 1.417 + septation form × 1.294 + large papillary projection × 1.271 + blood supply × 1.364]. In internal validation, the AUC of the model was 0.883, diagnostic sensitivity was 93.94%, and specificity was 80.95%; in K-fold cross-validation, the training accuracy was 0.904 ± 0.009 and the prediction accuracy was 0.881 ± 0.049. CONCLUSION: Irregular shape, unclear boundary, obvious enhancement, and cystic or solid tumor components among the MRI signs, and irregular shape, solid-dominated composition, thick septation, large papillary projections, and abundant blood supply among the ultrasound signs are independent risk factors for ovarian cancer. After verification, the diagnostic model has good accuracy and stability, which provides a basis for clinical decision-making.
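The model-building and validation steps described here (multivariable logistic regression, ROC-based internal validation, K-fold cross-validation) can be sketched with scikit-learn as follows; the binary feature columns are placeholders for the MRI/ultrasound signs, not the study's coded variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(1)
n, p = 207, 9                     # 9 binary imaging signs (illustrative)
X = rng.integers(0, 2, size=(n, p)).astype(float)
y = rng.binomial(1, 1 / (1 + np.exp(-(X.sum(axis=1) - 4.5))))

# 138 training / 69 validation split, as in the study design
X_tr, X_va, y_tr, y_va = train_test_split(X, y, train_size=138, random_state=1)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("coefficients:", np.round(clf.coef_[0], 3), "intercept:", round(clf.intercept_[0], 3))

# Internal validation: AUC on the training group
auc = roc_auc_score(y_tr, clf.predict_proba(X_tr)[:, 1])
print("training AUC:", round(auc, 3))

# External-style check: K-fold cross-validation accuracy
cv_acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("5-fold accuracy: %.3f ± %.3f" % (cv_acc.mean(), cv_acc.std()))
```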


Subject(s)
Diagnosis, Computer-Assisted/methods , Logistic Models , Magnetic Resonance Imaging/statistics & numerical data , Ovarian Neoplasms/diagnostic imaging , Ultrasonography/statistics & numerical data , Computational Biology , Diagnosis, Computer-Assisted/statistics & numerical data , Female , Humans , Middle Aged , Multivariate Analysis , Retrospective Studies , Risk Factors
3.
Comput Math Methods Med ; 2022: 8724536, 2022.
Article in English | MEDLINE | ID: mdl-35211188

ABSTRACT

The precise detection of epileptic seizures helps to prevent their serious consequences. Because the electroencephalogram (EEG) effectively reflects patients' brain activity, it has been widely used for epileptic seizure detection over the past decades. Recently, deep learning-based detection methods, which automatically learn features from EEG signals, have attracted much attention. However, with such methods, different input formats of the EEG signals lead to different detection performances. In this paper, we propose a deep learning-based epileptic seizure detection method with hybrid input formats of EEG signals, i.e., the original EEG, the Fourier transform of the EEG, the short-time Fourier transform of the EEG, and the wavelet transform of the EEG. Convolutional neural networks (CNNs) are designed to extract latent features from these inputs, and a feature fusion mechanism integrates the learned features into a more stable syncretic feature for seizure detection. The experimental results show that the proposed hybrid method effectively improves seizure detection performance in few-shot scenarios.
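A minimal PyTorch sketch of the kind of architecture described: one small CNN branch per input representation (raw EEG, FFT, STFT, wavelet), with the learned features concatenated into a fused vector for seizure classification. The shapes and layer sizes are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class Branch1D(nn.Module):
    """Small 1-D CNN that maps one EEG representation to a feature vector."""
    def __init__(self, in_ch=1, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # length-independent pooling
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):                     # x: (batch, 1, time)
        return self.fc(self.net(x).squeeze(-1))

class HybridSeizureNet(nn.Module):
    """Fuse features from several input formats by concatenation."""
    def __init__(self, n_branches=4, feat_dim=32, n_classes=2):
        super().__init__()
        self.branches = nn.ModuleList([Branch1D(feat_dim=feat_dim) for _ in range(n_branches)])
        self.classifier = nn.Linear(n_branches * feat_dim, n_classes)

    def forward(self, inputs):                # list of tensors, one per format
        fused = torch.cat([b(x) for b, x in zip(self.branches, inputs)], dim=1)
        return self.classifier(fused)

# Example: raw EEG, FFT magnitude, STFT (flattened), wavelet coefficients
batch = [torch.randn(8, 1, 1024), torch.randn(8, 1, 512),
         torch.randn(8, 1, 2048), torch.randn(8, 1, 1024)]
logits = HybridSeizureNet()(batch)
print(logits.shape)                            # torch.Size([8, 2])
```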


Subject(s)
Deep Learning , Diagnosis, Computer-Assisted/methods , Electroencephalography/statistics & numerical data , Seizures/diagnosis , Algorithms , Computational Biology , Databases, Factual/statistics & numerical data , Diagnosis, Computer-Assisted/statistics & numerical data , Epilepsy/classification , Epilepsy/diagnosis , Fourier Analysis , Humans , Neural Networks, Computer , Signal Processing, Computer-Assisted , Wavelet Analysis
4.
Comput Math Methods Med ; 2022: 9797844, 2022.
Article in English | MEDLINE | ID: mdl-35211190

ABSTRACT

Accurate prediction of cardiovascular disease is necessary, yet difficult, if a patient is to be treated effectively before a heart attack occurs. According to recent studies, heart disease is one of the leading causes of death worldwide, and early identification of coronary heart disease (CHD) can help reduce death rates. With traditional prediction methodologies, the difficulty lies in the intricacy of the data and its relationships. This research applies recent machine learning technology to identify heart disease from past medical data and to uncover correlations in the data that can greatly improve prediction accuracy across various machine learning models. Models were implemented using naive Bayes, random forest, and a combination of the two. These methods draw on numerous attributes associated with heart disease. The proposed system estimates the chance of developing heart disease from 14 parameters such as age, sex, fasting blood sugar, chest discomfort, and other medical parameters. It reports the probability of developing heart disease as a percentage and reaches an accuracy of 93%. Finally, the proposed method can support doctors in analyzing cardiac patients competently.
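The paper combines naive Bayes and random forest; one common way to build such a combination is a soft-voting ensemble, sketched below on synthetic data with 14 features standing in for the clinical attributes (age, sex, fasting blood sugar, chest pain type, and so on):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for a 14-attribute heart disease dataset
X, y = make_classification(n_samples=303, n_features=14, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

nb = GaussianNB()
rf = RandomForestClassifier(n_estimators=200, random_state=0)
combo = VotingClassifier([("nb", nb), ("rf", rf)], voting="soft")

for name, model in [("naive Bayes", nb), ("random forest", rf), ("combined", combo)]:
    model.fit(X_tr, y_tr)
    print(f"{name:13s} accuracy: {model.score(X_te, y_te):.3f}")

# Probability of heart disease, reported as a percentage for one patient
proba = combo.predict_proba(X_te[:1])[0, 1]
print(f"estimated risk: {100 * proba:.1f}%")
```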


Subject(s)
Heart Diseases/diagnosis , Heart Diseases/prevention & control , Machine Learning , Models, Cardiovascular , Algorithms , Bayes Theorem , Computational Biology , Databases, Factual/statistics & numerical data , Diagnosis, Computer-Assisted/methods , Diagnosis, Computer-Assisted/statistics & numerical data , Female , Heart Disease Risk Factors , Heart Diseases/etiology , Humans , Male , Probability
5.
Comput Math Methods Med ; 2022: 8000781, 2022.
Article in English | MEDLINE | ID: mdl-35140806

ABSTRACT

Due to their black-box nature, computer-aided diagnosis methods based on deep convolutional neural networks are usually poorly interpretable. The diagnostic results obtained by such unexplainable methods therefore struggle to gain the trust of patients and doctors, which limits their application in the medical field. To solve this problem, an interpretable deep learning image segmentation framework is proposed in this paper for processing brain tumor magnetic resonance images. A gradient-based class activation mapping method is introduced into a segmentation model based on a pyramid structure (PSPNet) to explain it visually. The pyramid structure builds global context information from features after multiple pooling layers to improve segmentation performance; class activation mapping is then used to visualize the features attended to by each layer of the pyramid structure and thus interpret PSPNet. After training and testing the model on the public BraTS2018 dataset, several sets of visualization results were obtained. Analysis of these visualizations demonstrates the effectiveness of the pyramid structure in the brain tumor segmentation task, and some improvements are made to the pyramid model based on the shortcomings the visualizations reveal. In summary, the interpretable brain tumor image segmentation method proposed in this paper explains well the role of the pyramid structure in brain tumor image segmentation, offers an approach for applying interpretable methods to brain tumor segmentation, and has practical value for evaluating and optimizing brain tumor segmentation models.
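A minimal sketch of gradient-based class activation mapping (Grad-CAM) applied to a segmentation network using forward/backward hooks. Here the "model" is a toy fully convolutional network rather than PSPNet, and the target layer is simply its last intermediate convolution; only the mechanism is illustrated:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a pyramid-style segmentation model (not PSPNet itself)
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 1),            # 2-class logits: background / tumor
)
target_layer = model[2]             # layer whose activations we explain

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 128, 128)
logits = model(x)                   # (1, 2, 128, 128)

# Score: summed tumor-class logit over the image
score = logits[:, 1].sum()
model.zero_grad()
score.backward()

# Grad-CAM: weight each channel by its average gradient, then ReLU
weights = grads["v"].mean(dim=(2, 3), keepdim=True)       # (1, C, 1, 1)
cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
print(cam.shape)                     # torch.Size([1, 1, 128, 128])
```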


Subject(s)
Brain Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted/statistics & numerical data , Magnetic Resonance Imaging/statistics & numerical data , Neural Networks, Computer , Neuroimaging/statistics & numerical data , Algorithms , Computational Biology , Databases, Factual/statistics & numerical data , Humans
6.
Comput Math Methods Med ; 2022: 8158634, 2022.
Article in English | MEDLINE | ID: mdl-35140807

ABSTRACT

This study aimed to analyze the diagnostic value of deep learning-based convolutional neural network models for severe sepsis complicated with acute kidney injury and to provide an effective theoretical reference for the clinical use of ultrasound image diagnosis. 50 patients with severe sepsis complicated with acute kidney injury and 50 healthy volunteers were selected, all of whom underwent ultrasound scans. Three deep convolutional neural network models, the dense convolutional network (DenseNet121), Google's inception network (GoogLeNet), and Microsoft's residual network (ResNet), were used for training and diagnosis, and the results were compared with the diagnoses of professional imaging physicians. The accuracy and sensitivity of the three deep learning algorithms were significantly higher, and their error rates for severe sepsis complicated with acute kidney injury significantly lower, than the physicians' diagnoses. The areas under the curve (AUCs) of the three algorithms were also significantly higher than the AUCs of the doctors' diagnostic results. The loss function values of DenseNet121 and GoogLeNet were significantly lower than that of ResNet (P < 0.05). There was no significant difference in the training time of the ResNet, GoogLeNet, and DenseNet121 algorithms, which converged after 700, 700, and 650 iterations, respectively (P > 0.05). In conclusion, the value of the three deep learning algorithms in diagnosing severe sepsis complicated with acute kidney injury was higher than professional physicians' judgments, showing great clinical value for the diagnosis and treatment of the disease.
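A sketch of instantiating the three torchvision architectures mentioned (DenseNet121, GoogLeNet, ResNet) with two-class heads for an ultrasound task and running one dummy forward pass; training details (loss, optimizer, data pipeline) are omitted, and the 2-class head is an assumption:

```python
import torch
from torchvision import models

def build(name, num_classes=2):
    """Instantiate an untrained backbone with a 2-class head."""
    if name == "densenet121":
        return models.densenet121(weights=None, num_classes=num_classes)
    if name == "googlenet":
        return models.googlenet(weights=None, num_classes=num_classes,
                                aux_logits=False, init_weights=True)
    if name == "resnet50":
        return models.resnet50(weights=None, num_classes=num_classes)
    raise ValueError(name)

x = torch.randn(4, 3, 224, 224)        # batch of (resized) ultrasound images
for name in ["densenet121", "googlenet", "resnet50"]:
    model = build(name).eval()
    with torch.no_grad():
        logits = model(x)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name:12s} params: {n_params/1e6:5.1f}M  output: {tuple(logits.shape)}")
```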


Subject(s)
Acute Kidney Injury/complications , Acute Kidney Injury/diagnostic imaging , Deep Learning , Sepsis/complications , Sepsis/diagnostic imaging , Ultrasonography/statistics & numerical data , Algorithms , Area Under Curve , Case-Control Studies , Computational Biology , Diagnosis, Computer-Assisted/statistics & numerical data , Humans , Image Interpretation, Computer-Assisted/statistics & numerical data , Neural Networks, Computer , ROC Curve
7.
Comput Math Methods Med ; 2022: 9251225, 2022.
Article in English | MEDLINE | ID: mdl-35140808

ABSTRACT

Heart disease is a common disease affecting human health. Electrocardiogram (ECG) classification is the most effective and direct method of detecting heart disease and is helpful for diagnosing most heart disease symptoms. At present, most ECG diagnosis depends on the personal judgment of medical staff, which creates a heavy burden and low efficiency for them; automatic ECG analysis technology can help with this work. In this paper, we use the MIT-BIH ECG database and extract QRS features from the ECG signals with the Pan-Tompkins algorithm. After extraction, K-means clustering is used to screen the samples, and an RBF neural network is then used to analyze the ECG information. The classifier is trained on the signal features, and the final classification model reaches an accuracy of 98.9%. Our experiments show that this method can effectively detect ECG signal abnormalities and can be applied to the diagnosis of heart disease.
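The pipeline described (QRS feature extraction, K-means screening, RBF neural network classification) can be sketched as follows. The Pan-Tompkins step is replaced by pre-extracted synthetic feature vectors, and the RBF network is a simple KMeans-centres + Gaussian-hidden-layer + ridge-output construction, which is one standard way to build such a network, not necessarily the authors':

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

class RBFNetwork:
    """RBF net: KMeans centres -> Gaussian activations -> linear read-out."""
    def __init__(self, n_centers=20, gamma=1.0):
        self.n_centers, self.gamma = n_centers, gamma

    def _hidden(self, X):
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.centers_ = KMeans(self.n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
        self.out_ = RidgeClassifier().fit(self._hidden(X), y)
        return self

    def predict(self, X):
        return self.out_.predict(self._hidden(X))

# Synthetic stand-in for QRS feature vectors (e.g., amplitudes, RR intervals)
X, y = make_classification(n_samples=600, n_features=8, n_informative=6, random_state=0)

# K-means screening: keep samples close to their cluster centre
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
keep = dist < np.percentile(dist, 90)          # drop the 10% most atypical samples
X_tr, X_te, y_tr, y_te = train_test_split(X[keep], y[keep], test_size=0.3, random_state=0)

clf = RBFNetwork(n_centers=30, gamma=0.5).fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```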


Subject(s)
Diagnosis, Computer-Assisted/methods , Electrocardiography/classification , Electrocardiography/statistics & numerical data , Heart Diseases/classification , Heart Diseases/diagnosis , Neural Networks, Computer , Algorithms , Computational Biology , Diagnosis, Computer-Assisted/statistics & numerical data , Humans , Signal Processing, Computer-Assisted , Supervised Machine Learning , Wavelet Analysis
8.
Comput Math Methods Med ; 2022: 9398551, 2022.
Article in English | MEDLINE | ID: mdl-35132334

ABSTRACT

To analyze the application value of an artificial intelligence model based on Visual Geometry Group- (VGG-) 16 combined with quantitative electroencephalography (QEEG) in cerebral small vessel disease (CSVD) with cognitive impairment, 72 patients with CSVD complicated by cognitive impairment were selected as the research subjects. According to the Diagnostic and Statistical Manual of Mental Disorders (5th Edition), they were divided into a vascular dementia (VD) group of 34 cases and a vascular cognitive impairment with no dementia (VCIND) group of 38 cases. The two groups' clinical information, neuropsychological test results, and results of more than 2 hours of QEEG monitoring based on intelligent algorithms were analyzed. The accuracy of VGG was 84.27% with a Kappa value of 0.7, while the modified VGG (nVGG) reached 88.76% with a Kappa value of 0.78; the improved VGG algorithm was clearly more accurate. The QEEG results identified 8 normal, 19 mild, 10 moderate, and 0 severe cases in the VCIND group, versus 4, 13, 11, and 7, respectively, in the VD group. In the VCIND group, 7 cases had normal QEEG, 11 had background changes, 9 had abnormal waves, and 11 had both background changes and abnormal waves; in the VD group, the corresponding numbers were 5, 2, 5, and 22. In the VCIND group, the QEEG of 18 patients showed no abnormal waves, 11 showed a few abnormal waves, 9 showed many abnormal waves, and none showed a large number of abnormal waves; in the VD group, the corresponding numbers were 7, 6, 12, and 9. These differences between the two groups were statistically significant (P < 0.05). Hence, QEEG based on intelligent algorithms can assess CSVD with cognitive impairment well and has good clinical application value.
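A sketch of adapting VGG-16 to the two-class task (VCIND vs. VD) from QEEG-derived images such as spectrogram maps. The abstract does not specify the authors' "nVGG" modification, so only the plain VGG-16 baseline with a replaced classifier head is shown, on dummy data:

```python
import torch
import torch.nn as nn
from torchvision import models

# Plain VGG-16 with the final fully connected layer replaced for 2 classes
model = models.vgg16(weights=None)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch standing in for QEEG maps rendered as 224x224 RGB images
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))    # 0 = VCIND, 1 = VD (illustrative coding)

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("one training step done, loss =", round(loss.item(), 4))
```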


Subject(s)
Cerebral Small Vessel Diseases/complications , Cerebral Small Vessel Diseases/diagnosis , Cognitive Dysfunction/complications , Cognitive Dysfunction/diagnosis , Diagnosis, Computer-Assisted/methods , Electroencephalography/methods , Aged , Algorithms , Artificial Intelligence , Cerebral Small Vessel Diseases/psychology , Cognitive Dysfunction/psychology , Computational Biology , Dementia, Vascular/complications , Dementia, Vascular/diagnosis , Dementia, Vascular/psychology , Diagnosis, Computer-Assisted/statistics & numerical data , Electroencephalography/statistics & numerical data , Female , Humans , Male , Middle Aged , Neuropsychological Tests
9.
Comput Math Methods Med ; 2022: 1527292, 2022.
Article in English | MEDLINE | ID: mdl-35178112

ABSTRACT

BACKGROUND: Atrial fibrillation (AF) is associated with the worsening of cognitive function. Strategies that are both convenient and reliable for cognitive screening of AF patients remain underdeveloped. We aimed to analyze the sensitivity and specificity of computerized cognitive screening strategies using subtests from Cambridge Neuropsychological Test Automated Battery (CANTAB) in AF patients. METHODS: The Multitasking Test (MTT), Rapid Visual Information Processing (RVP), and Paired Associates Learning (PAL) subtests from CANTAB were performed in 105 AF patients. Traditional standard neuropsychological tests were used as a reference standard. Cognitive screening models using different CANTAB subtests were established using multivariable logistic regression. Further stepwise regression using the Akaike Information Criterion (AIC) was applied to optimize the models. Receiver operating characteristic curve analyses were used to study the sensitivity and specificity of these models. RESULTS: Fifty-eight (55%) patients were diagnosed with mild cognitive impairment (MCI). MTT alone had reasonable sensitivity (82.8%) and specificity (74.5%) for MCI screening, while RVP (sensitivity 72.4%, specificity 70.2%) and PAL (sensitivity 70.7%, specificity 57.4%) were less effective. Stepwise regression of all available variables revealed that a combination of MTT and RVP brought about higher specificity (sensitivity 82.8%, specificity 85.8%), while PAL was not included in the optimal model. Moreover, adding education to the models did not result in improved validity for MCI screening. CONCLUSION: The CANTAB subtests are feasible and effective strategies for MCI screening among AF patients independent of patients' education levels. Hence, they are practical for cardiologists or general practitioners.
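The modelling steps described here (logistic screening models on CANTAB subtest scores, model comparison during selection, sensitivity/specificity from the ROC curve) can be sketched as below. Comparing candidate models by AIC is a simple stand-in for the full stepwise procedure, and MTT/RVP/PAL are just synthetic columns:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)
n = 105
df = pd.DataFrame({
    "MTT": rng.normal(0, 1, n),
    "RVP": rng.normal(0, 1, n),
    "PAL": rng.normal(0, 1, n),
})
df["MCI"] = rng.binomial(1, 1 / (1 + np.exp(-(1.2 * df.MTT + 0.8 * df.RVP))))

def fit(cols):
    X = sm.add_constant(df[cols])
    return sm.Logit(df["MCI"], X).fit(disp=False)

# Compare candidate models by AIC (smaller is better)
candidates = {"MTT": ["MTT"], "MTT+RVP": ["MTT", "RVP"], "MTT+RVP+PAL": ["MTT", "RVP", "PAL"]}
models = {name: fit(cols) for name, cols in candidates.items()}
for name, m in models.items():
    print(f"{name:12s} AIC = {m.aic:.1f}")

# Sensitivity/specificity at the Youden-optimal threshold for the best model
best = min(models.values(), key=lambda m: m.aic)
fpr, tpr, _ = roc_curve(df["MCI"], best.predict())
j = np.argmax(tpr - fpr)
print(f"sensitivity = {tpr[j]:.2f}, specificity = {1 - fpr[j]:.2f}")
```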


Subject(s)
Atrial Fibrillation/complications , Atrial Fibrillation/psychology , Cognitive Dysfunction/diagnosis , Cognitive Dysfunction/etiology , Diagnosis, Computer-Assisted/methods , Neuropsychological Tests , Aged , Cognitive Dysfunction/psychology , Computational Biology , Diagnosis, Computer-Assisted/statistics & numerical data , Feasibility Studies , Female , Humans , Male , Middle Aged , Neuropsychological Tests/statistics & numerical data , Reproducibility of Results , Sensitivity and Specificity
10.
Comput Math Methods Med ; 2022: 9508004, 2022.
Article in English | MEDLINE | ID: mdl-35103073

ABSTRACT

Although white-light endoscopy is an effective tool for colorectal lesion detection, missed and false detections are still difficult to avoid. To improve the lesion detection rate in colorectal cancer patients, this paper proposes a real-time lesion diagnosis model (YOLOv5x-CG) based on an improved YOLOv5, in which colorectal lesions are subdivided into three categories: micropolyps, adenomas, and cancer. During convolutional network training, a Mosaic data augmentation strategy was used to improve the detection rate of small polyps. A coordinate attention (CA) mechanism was introduced so that the network takes both channel and location information into account, enabling effective extraction of the three kinds of pathological features. A Ghost module was also used to generate more feature maps through cheap linear operations, which reduces the number of model parameters to learn and speeds up detection. The experimental results show that the proposed model detects lesions more rapidly and accurately, with AP values of 0.923, 0.955, and 0.87 for polyps, adenomas, and cancer, respectively, and an mAP@50 of 0.916.
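A sketch of a coordinate attention block of the kind added to YOLOv5 here, following the published coordinate attention design (pooling along height and width separately, a shared 1×1 convolution, then per-direction sigmoid gates); the reduction ratio and the placement inside YOLOv5x are assumptions:

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention: encode position along H and W into channel gates."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (N, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = self.pool_h(x)                            # (N, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)        # (N, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))           # gate along height
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # gate along width
        return x * a_h * a_w

feat = torch.randn(2, 64, 40, 40)   # a feature map inside the detector (illustrative)
print(CoordinateAttention(64)(feat).shape)   # torch.Size([2, 64, 40, 40])
```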


Subject(s)
Colorectal Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Endoscopy, Gastrointestinal/methods , Adenoma/diagnostic imaging , Algorithms , Computational Biology , Deep Learning , Diagnosis, Computer-Assisted/statistics & numerical data , Diagnostic Errors , Endoscopy, Gastrointestinal/statistics & numerical data , Humans , Intestinal Polyps/diagnostic imaging , Light , Neural Networks, Computer
11.
Comput Math Methods Med ; 2022: 9288452, 2022.
Article in English | MEDLINE | ID: mdl-35154361

ABSTRACT

Heart disease is one of the leading causes of death around the globe. The heart is the organ responsible for supplying blood to every part of the body, and coronary artery disease (CAD) and chronic heart failure (CHF) often lead to heart attack. Traditional medical procedures for diagnosing heart disease, such as angiography, have high cost as well as serious health risks. Researchers have therefore developed various automated diagnostic systems based on machine learning (ML) and data mining techniques, which provide affordable, efficient, and reliable solutions for heart disease detection. Various ML and data mining methods and data modalities have been utilized in the past, but many previous review papers have presented systematic reviews based on only one type of data modality. This study therefore presents a systematic review of automated diagnosis for heart disease prediction across different modalities, i.e., clinical feature-based data, images, and ECG. Moreover, the paper critically evaluates previous methods and presents their limitations. Finally, the article provides some future research directions for automated heart disease detection based on machine learning and multiple data modalities.


Subject(s)
Diagnosis, Computer-Assisted/methods , Heart Failure/diagnosis , Machine Learning , Algorithms , Arrhythmias, Cardiac/diagnosis , Arrhythmias, Cardiac/diagnostic imaging , Computational Biology , Coronary Artery Disease/diagnosis , Coronary Artery Disease/diagnostic imaging , Data Mining/statistics & numerical data , Databases, Factual/statistics & numerical data , Diagnosis, Computer-Assisted/statistics & numerical data , Diagnosis, Computer-Assisted/trends , Electrocardiography/statistics & numerical data , Heart Failure/diagnostic imaging , Humans , Image Interpretation, Computer-Assisted/statistics & numerical data , Machine Learning/trends , Neural Networks, Computer
12.
Comput Math Methods Med ; 2022: 7751263, 2022.
Article in English | MEDLINE | ID: mdl-35096136

ABSTRACT

Epileptic seizures occur due to brain abnormalities that can indirectly affect a patient's health. They occur abruptly and without warning, and thus increase mortality; almost 1% of the world's population suffers from epileptic seizures. Predicting seizures before onset makes it possible to prevent them with medication. Nowadays, modern computational tools and machine learning and deep learning methods are used to predict seizures from the EEG. However, EEG signals may be corrupted by background noise, and artifacts such as eye blinks and muscle movements may produce "pops" and electrical interference in the signal, which are cumbersome to detect by visual inspection in long recordings. Because of these limitations, automatic detection of interictal spikes and epileptic seizures is preferred, as it is an essential tool for examining and scrutinizing EEG recordings more precisely. This motivates the present review of automated schemes that can help neurologists categorize epileptic and nonepileptic signals. In preparing this review, we observed that feature selection and classification are the main challenges in epilepsy prediction algorithms. This paper therefore surveys techniques from the last few years, organized by the features and classifiers they use, and provides a detailed understanding of seizure prediction along with future research directions.


Subject(s)
Deep Learning , Diagnosis, Computer-Assisted/methods , Electroencephalography/methods , Machine Learning , Seizures/diagnosis , Algorithms , Bayes Theorem , Computational Biology , Databases, Factual/statistics & numerical data , Diagnosis, Computer-Assisted/statistics & numerical data , Electroencephalography/statistics & numerical data , Epilepsy/diagnosis , Humans , Logistic Models , Neural Networks, Computer , Seizures/classification , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio , Support Vector Machine
13.
Comput Math Methods Med ; 2022: 7729524, 2022.
Article in English | MEDLINE | ID: mdl-35047057

ABSTRACT

The diagnosis and treatment of lung cancer remain one of the research hotspots in the medical field, and early diagnosis and treatment are necessary to improve the survival rate of lung cancer patients and reduce their mortality. Computer-aided diagnosis technology can identify lung nodules, an imaging feature of early lung cancer, easily, quickly, and accurately for clinical diagnosis; it supports quantitative analysis of nodule characteristics, helps distinguish benign from malignant nodules, and provides an objective diagnostic reference standard. This paper studies the ITK and VTK toolkits and builds a system platform with MFC. Following the process by which doctors diagnose lung nodules, the whole system is divided into seven modules, including suspected lung shadow detection, image display, image annotation, and interaction. The system covers the entire computer-aided lung nodule diagnosis workflow and reports the number of nodules, the number of malignant nodules, and the number of false positives in each set of lung CT images, from which the performance of the system is analyzed. A lung region segmentation method is proposed that exploits the obvious intensity differences between the lung parenchyma and the adjoining tissues, as well as the positional relationships and shape characteristics of each tissue in the image. Experiments address problems such as inaccurate segmentation of the lung boundary and lung wall and depressions caused by noise and pleural nodule adhesion. The test data comprised 2316 CT images in 8 image sets from different patients, containing 56 nodules. The system detected 49 nodules and missed 7, a detection rate of 87.5%; 64 false-positive nodules were detected, an average of 8 per image set. This shows that the system handles CT images from different devices with different pixel and slice spacings, has high sensitivity, and can provide doctors with useful support.
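The lung segmentation idea (parenchyma is much darker than the surrounding tissue, and position/shape cues separate it from the background air) can be sketched with a simple threshold-and-morphology pipeline. This uses scikit-image on a synthetic slice and is not the ITK/VTK/MFC system described:

```python
import numpy as np
from scipy import ndimage
from skimage import measure, morphology

# Synthetic CT-like slice: bright body, two dark "lung" regions, dark background
slice_hu = np.full((256, 256), 40.0)                 # soft tissue ~40 HU
yy, xx = np.mgrid[:256, :256]
slice_hu[((yy - 128) ** 2 / 90 ** 2 + (xx - 128) ** 2 / 120 ** 2) > 1] = -1000  # air outside body
for cx in (80, 176):                                 # two elliptical "lungs"
    slice_hu[((yy - 128) ** 2 / 70 ** 2 + (xx - cx) ** 2 / 40 ** 2) < 1] = -800

# 1) Threshold: lung parenchyma and air are far below soft tissue
air_mask = slice_hu < -400

# 2) Remove the background air connected to the image border
labels = measure.label(air_mask)
border_labels = np.unique(np.concatenate([labels[0], labels[-1], labels[:, 0], labels[:, -1]]))
lungs = air_mask & ~np.isin(labels, border_labels)

# 3) Morphological closing + hole filling to recover wall depressions
lungs = morphology.binary_closing(lungs, morphology.disk(5))
lungs = ndimage.binary_fill_holes(lungs)

print("lung area (pixels):", int(lungs.sum()), "regions:", measure.label(lungs).max())
```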


Subject(s)
Lung Neoplasms/diagnostic imaging , Multiple Pulmonary Nodules/diagnostic imaging , Solitary Pulmonary Nodule/diagnostic imaging , Algorithms , Computational Biology , Diagnosis, Computer-Assisted/statistics & numerical data , False Positive Reactions , Humans , Imaging, Three-Dimensional/statistics & numerical data , Lung/diagnostic imaging , Normal Distribution , ROC Curve , Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data , Tomography, X-Ray Computed/statistics & numerical data
14.
Comput Math Methods Med ; 2022: 8754693, 2022.
Article in English | MEDLINE | ID: mdl-35035525

ABSTRACT

The area of medical diagnosis has been transformed by computer-aided diagnosis (CAD). With the advancement of technology and the widespread availability of medical data, CAD has received a great deal of attention, and numerous methods have been created for predicting different pathological conditions. Ultrasound (US) is the safest clinical imaging method and is therefore widely utilized in medical and healthcare settings with computer-aided systems. However, owing to patient movement and equipment constraints, certain artefacts make interpretation of US images challenging, so preprocessing techniques are required to enhance image quality for classification and segmentation. Hence, we propose a three-stage image segmentation method using U-Net and an Iterative Random Forest Classifier (IRFC) to detect orthopedic diseases in ultrasound images efficiently. Initially, the input dataset is preprocessed with an Enhanced Wiener Filter for image denoising and enhancement. The proposed segmentation method is then applied, features are extracted by transform-based analysis, and the obtained features are reduced to an optimal subset using Principal Component Analysis (PCA). Classification is performed with the proposed Iterative Random Forest Classifier. The proposed method is compared with conventional methods on performance measures such as accuracy, specificity, sensitivity, and Dice score and proves more efficient for detecting orthopedic diseases in ultrasound images.
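A compressed sketch of the described pipeline on a synthetic ultrasound-like image set: Wiener-filter denoising, a simple transform-based feature vector, PCA reduction, and a random forest classifier. The U-Net segmentation stage and the exact iterative refinement of the authors' IRFC are omitted; the "iterative" part is approximated by refitting on the most important features:

```python
import numpy as np
from scipy.signal import wiener
from scipy.fft import fft2
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_image(label):
    """Synthetic 64x64 'ultrasound' patch: label 1 adds a bright band."""
    img = rng.normal(0, 1, (64, 64))
    if label:
        img[28:36, :] += 2.0
    return img

def features(img):
    """Denoise with a Wiener filter, then take low-frequency FFT magnitudes."""
    den = wiener(img, mysize=5)
    spec = np.abs(fft2(den))[:8, :8]          # transform-based features
    return spec.ravel()

y = rng.integers(0, 2, 300)
X = np.array([features(make_image(lbl)) for lbl in y])

X = PCA(n_components=10).fit_transform(X)      # reduce to a compact subset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# "Iterative" random forest approximation: refit on the top-ranked features
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[-5:]
rf2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, top], y_tr)
print("accuracy:", round(rf2.score(X_te[:, top], y_te), 3))
```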


Subject(s)
Diagnosis, Computer-Assisted/statistics & numerical data , Musculoskeletal Diseases/diagnostic imaging , Ultrasonography/statistics & numerical data , Algorithms , Artifacts , Computational Biology , Databases, Factual/statistics & numerical data , Deep Learning , Humans , Image Enhancement/methods , Osteoporosis/diagnostic imaging , Principal Component Analysis
15.
Comput Math Methods Med ; 2022: 7631271, 2022.
Article in English | MEDLINE | ID: mdl-35069792

ABSTRACT

The diagnosis of new diseases is a challenging problem. In the early stage of a new disease, few case samples are available, which can lead to low accuracy in intelligent diagnosis. Because of the advantages of the support vector machine (SVM) in dealing with small-sample problems, it is selected here as the intelligent diagnosis method. Updating a standard SVM diagnosis model, however, requires retraining on all samples, which incurs huge storage and computation costs and adapts poorly to changing conditions. To solve this problem, this paper proposes a new disease diagnosis method based on fuzzy SVM (FSVM) incremental learning. Following SVM theory, the support vector set and boundary sample set related to the SVM diagnosis model are extracted, and only these sample sets are considered in incremental learning, which preserves accuracy while reducing calculation and storage costs. To reduce the impact of noise points caused by the reduced training set, FSVM is used to update the diagnosis model, improving generalization. Simulation results on the banana dataset show that the proposed method improves classification accuracy from 86.4% to 90.4%. Finally, the method is applied to COVID-19 diagnosis, reaching an accuracy of 98.2%, whereas the traditional SVM reaches only 84%. As the number of case samples increases, the model is updated; when the training samples grow to 400, only 77 samples participate in training, so the computation needed to update the model is small.
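A sketch of the incremental idea: when new cases arrive, retrain only on the previous model's support vectors plus samples near the decision boundary together with the new batch, weighting samples by a fuzzy membership (here, distance to the class centre) via SVC's sample_weight. This illustrates the mechanism, not the paper's exact FSVM formulation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def fuzzy_membership(X, y):
    """Simple membership: closer to the class centre -> weight nearer 1."""
    w = np.empty(len(y))
    for c in np.unique(y):
        d = np.linalg.norm(X[y == c] - X[y == c].mean(axis=0), axis=1)
        w[y == c] = 1.0 - 0.9 * d / (d.max() + 1e-8)
    return w

X, y = make_classification(n_samples=500, n_features=10, n_informative=6, random_state=0)
X_old, y_old, X_new, y_new = X[:300], y[:300], X[300:], y[300:]

svm = SVC(kernel="rbf", gamma="scale")
svm.fit(X_old, y_old, sample_weight=fuzzy_membership(X_old, y_old))

# Keep only the support vectors and boundary samples (small |decision value|)
margin = np.abs(svm.decision_function(X_old))
keep = np.union1d(svm.support_, np.where(margin < 1.5)[0])
X_keep, y_keep = X_old[keep], y_old[keep]
print(f"retained {len(keep)} of {len(X_old)} old samples for the update")

# Incremental update: retrain on the retained samples plus the new batch only
X_upd = np.vstack([X_keep, X_new])
y_upd = np.concatenate([y_keep, y_new])
svm_updated = SVC(kernel="rbf", gamma="scale")
svm_updated.fit(X_upd, y_upd, sample_weight=fuzzy_membership(X_upd, y_upd))
print("updated model accuracy on new batch:", round(svm_updated.score(X_new, y_new), 3))
```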


Subject(s)
Diagnosis, Computer-Assisted/methods , Fuzzy Logic , Support Vector Machine , Algorithms , Artificial Intelligence/statistics & numerical data , COVID-19/diagnosis , Computational Biology , Diagnosis, Computer-Assisted/statistics & numerical data , Humans , SARS-CoV-2
16.
Comput Math Methods Med ; 2022: 7020209, 2022.
Article in English | MEDLINE | ID: mdl-35082914

ABSTRACT

This study analyzed the diagnostic value of coronary computed tomography angiography (CCTA) and fractional flow reserve (FFR) based on a computer-aided diagnosis (CAD) system for coronary lesions and the possible impact of calcification. 80 patients who underwent CCTA and FFR examination in the hospital were selected as subjects. An FFR value of 0.8 was used as the cutoff to divide them into an ischemic group (FFR ≤ 0.8) and a nonischemic group (FFR > 0.8), and the patients' basic data and imaging characteristics were analyzed. The maximum diameter stenosis rate (MDS %), maximum area stenosis rate (MAS %), and napkin ring sign (NRS) in the ischemic group were significantly lower than those in the nonischemic group (P < 0.05), while the remodeling index (RI) and eccentricity index (EI) showed no significant difference between the groups (P > 0.05). The total plaque volume (TPV), total plaque burden (TPB), calcified plaque volume (CPV), lipid plaque volume (LPV), and lipid plaque burden (LPB) in the ischemic group differed significantly from those in the nonischemic group (P < 0.05). MAS % had the largest area under the curve (AUC) for the diagnosis of myocardial ischemia (0.74), followed by MDS % (0.69) and LPV (0.68). The diagnostic sensitivity, specificity, accuracy, cutoff value, and AUC of CT-FFR were high in both the ischemic group (89.93%, 92.07%, 95.84%, 60.51%, 0.932) and the nonischemic group (93.75%, 90.88%, 96.24%, 58.22%, 0.944), with no significant differences between the two groups (P > 0.05). In summary, CT-FFR based on a CAD system has high accuracy in evaluating myocardial ischemia caused by coronary artery stenosis, and within a certain range of calcification scores, calcification does not affect the diagnostic accuracy of CT-FFR.


Subject(s)
Calcinosis/diagnostic imaging , Computed Tomography Angiography/statistics & numerical data , Coronary Angiography/statistics & numerical data , Coronary Artery Disease/diagnostic imaging , Fractional Flow Reserve, Myocardial/physiology , Adult , Aged , Aged, 80 and over , Algorithms , Computational Biology , Coronary Artery Disease/physiopathology , Coronary Stenosis/diagnostic imaging , Coronary Stenosis/physiopathology , Coronary Vessels/diagnostic imaging , Coronary Vessels/physiopathology , Diagnosis, Computer-Assisted/statistics & numerical data , Female , Humans , Male , Middle Aged , Myocardial Ischemia/diagnostic imaging , Myocardial Ischemia/physiopathology , Plaque, Atherosclerotic/diagnostic imaging , Plaque, Atherosclerotic/physiopathology
17.
Comput Math Methods Med ; 2021: 8500314, 2021.
Article in English | MEDLINE | ID: mdl-34966445

ABSTRACT

Cardiovascular disease (CVD) is one of the most common causes of death, killing approximately 17 million people annually. The main mechanisms behind CVD are myocardial infarction and the failure of the heart to pump blood normally. Doctors can diagnose heart failure (HF) from electronic medical records on the basis of a patient's symptoms and clinical laboratory investigations. However, accurate diagnosis of HF requires medical resources and expert practitioners that are not always available, making diagnosis challenging; predicting the patient's condition with machine learning algorithms can therefore save time and effort. This paper proposes a machine learning-based approach that identifies the most important correlated features in patients' electronic clinical records. The SelectKBest function was applied with the chi-squared statistical test to determine the most important features, and feature engineering was then used to create new, strongly correlated features for training the machine learning models. Classification algorithms with optimized hyperparameters (SVM, KNN, decision tree, random forest, and logistic regression) were trained on two different datasets: the Cleveland dataset of 303 records and a second dataset of 299 records used for predicting HF. Experimental results showed that the random forest algorithm achieved accuracy, precision, recall, and F1 scores of 95%, 97.62%, 95.35%, and 96.47%, respectively, during the test phase on the second dataset. The same algorithm achieved accuracy scores of 100% on the first dataset and 97.68% on the second, with 100% precision, recall, and F1 scores on both datasets.
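A sketch of the feature-selection and classification steps described: scaling the clinical features to be non-negative, ranking them with SelectKBest and the chi-squared test, and training a tuned random forest. The dataset here is synthetic; the column count merely mimics Cleveland-style records:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for 303 Cleveland-style records with 13 attributes
X, y = make_classification(n_samples=303, n_features=13, n_informative=8, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=7)

pipe = Pipeline([
    ("scale", MinMaxScaler()),                 # chi2 needs non-negative inputs
    ("select", SelectKBest(chi2, k=8)),        # keep the k most informative features
    ("rf", RandomForestClassifier(random_state=7)),
])

# Light hyperparameter optimisation, as the paper tunes its classifiers
grid = GridSearchCV(pipe, {"rf__n_estimators": [100, 300], "rf__max_depth": [None, 6]}, cv=5)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print(classification_report(y_te, grid.predict(X_te), digits=3))
```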


Subject(s)
Algorithms , Heart Failure/diagnosis , Machine Learning , Adult , Aged , Aged, 80 and over , Chi-Square Distribution , Computational Biology , Databases, Factual , Decision Trees , Diagnosis, Computer-Assisted/statistics & numerical data , Electronic Health Records/statistics & numerical data , Female , Heart Disease Risk Factors , Humans , Logistic Models , Male , Middle Aged , Neural Networks, Computer , Support Vector Machine
18.
Comput Math Methods Med ; 2021: 2370496, 2021.
Article in English | MEDLINE | ID: mdl-34950223

ABSTRACT

A combination of various risk factors drives the development of coronary heart disease, and the earlier reversible risk factors are identified and addressed, the greater the chance of recovery. The main goal of this research was to determine whether risk variables are associated with a greater extent of coronary artery disease in people with coronary heart disease. This retrospective study selected 290 patients who had undergone coronary angiography in our hospital from September 2018 to March 2019. Coronary angiography divided the patients into two groups: those with coronary heart disease and those without. To determine the correlation between risk factors and the severity of heart disease, computer-aided statistical analysis of the differences in those risk factors was performed. The results were analyzed using Spearman and partial correlation, the relationship between risk factors and the Gensini score was analyzed by multiple linear regression, and binary logistic regression was used to estimate the association between the risk factors and the probability of developing coronary heart disease. The findings showed that increased age, smoking, elevated hs-CRP, elevated HbA1c, hypertension, diabetes, and hyperuricemia all contribute to coronary heart disease, each acting as an independent risk factor. Many of the factors involved in the long-term progression of coronary artery disease severity, such as hypertension, diabetes, smoking, elevated hs-CRP, decreased HDL-C, and raised LDL-C and TG, are commonly found in men. hs-CRP is the primary risk factor for the degree of coronary artery stenosis and may contribute to progression of the condition by playing a major role in creating further stenosis.
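The statistical steps named here (Spearman correlation with the Gensini score, multiple linear regression, and binary logistic regression for disease probability) map directly onto scipy and statsmodels calls, sketched on simulated risk-factor data with illustrative column names:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 290
df = pd.DataFrame({
    "age": rng.normal(62, 10, n),
    "hsCRP": rng.gamma(2.0, 2.0, n),
    "HbA1c": rng.normal(6.2, 1.0, n),
    "smoking": rng.integers(0, 2, n),
})
df["Gensini"] = 10 + 0.4 * df.age + 3.0 * df.hsCRP + rng.normal(0, 10, n)
df["CHD"] = rng.binomial(1, 1 / (1 + np.exp(-(0.03 * (df.age - 62) + 0.3 * df.hsCRP - 1))))

# Spearman correlation between each risk factor and the Gensini score
for col in ["age", "hsCRP", "HbA1c", "smoking"]:
    rho, p = spearmanr(df[col], df["Gensini"])
    print(f"{col:8s} rho = {rho:5.2f}  p = {p:.3g}")

# Multiple linear regression on the Gensini score
ols = sm.OLS(df["Gensini"], sm.add_constant(df[["age", "hsCRP", "HbA1c", "smoking"]])).fit()
print(ols.params.rename("linear coefficient"))

# Binary logistic regression for the probability of coronary heart disease
logit = sm.Logit(df["CHD"], sm.add_constant(df[["age", "hsCRP", "HbA1c", "smoking"]])).fit(disp=False)
print(np.exp(logit.params).rename("odds ratio"))
```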


Subject(s)
Coronary Angiography/statistics & numerical data , Coronary Disease/diagnostic imaging , Adult , Aged , Aged, 80 and over , C-Reactive Protein/metabolism , Case-Control Studies , Computational Biology , Coronary Artery Disease/diagnostic imaging , Coronary Disease/blood , Coronary Disease/etiology , Coronary Stenosis/diagnostic imaging , Coronary Vessels/diagnostic imaging , Diagnosis, Computer-Assisted/statistics & numerical data , Female , Heart Disease Risk Factors , Humans , Linear Models , Lipids/blood , Male , Middle Aged , Retrospective Studies
19.
Comput Math Methods Med ; 2021: 1972662, 2021.
Article in English | MEDLINE | ID: mdl-34721654

ABSTRACT

In recent years, research on electroencephalography (EEG) has focused on feature extraction from EEG signals. The development of convenient, simple EEG acquisition devices has produced a variety of EEG signal sources and increased the diversity of EEG data, so the adaptability of EEG classification methods has become important. This study proposes a deep network model for autonomous learning and classification of EEG signals that can self-adaptively classify EEG signals with different sampling frequencies and lengths. Hand-designed feature extraction methods could not obtain stable classification results when analyzing EEG data with different sampling frequencies, whereas the proposed deep network model showed considerably better universality and classification accuracy, particularly for short EEG signals, as validated on two datasets.
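One simple way to make a deep EEG classifier insensitive to recording length (and, to a degree, sampling frequency) is to follow the convolutional layers with global adaptive pooling, so the classifier head always sees a fixed-size vector. The sketch below illustrates that mechanism only and is not the paper's architecture:

```python
import torch
import torch.nn as nn

class AdaptiveEEGNet(nn.Module):
    """1-D CNN whose global pooling makes it length-agnostic."""
    def __init__(self, n_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # fixed-size output for any length
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

net = AdaptiveEEGNet()
for length in (1024, 4096, 178):               # e.g. different sampling rates/durations
    out = net(torch.randn(4, 1, length))
    print(length, "->", tuple(out.shape))       # always (4, 2)
```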


Subject(s)
Deep Learning , Electroencephalography/statistics & numerical data , Epilepsy/diagnosis , Algorithms , Brain-Computer Interfaces , Computational Biology , Databases, Factual , Diagnosis, Computer-Assisted/statistics & numerical data , Electroencephalography/classification , Epilepsy/classification , Humans , Neural Networks, Computer , Signal Processing, Computer-Assisted
20.
Comput Math Methods Med ; 2021: 6919483, 2021.
Article in English | MEDLINE | ID: mdl-34721659

ABSTRACT

In March 2020, the World Health Organization declared COVID-19 a pandemic, warning of its dangers and rapid spread throughout the world. In March 2021, a second wave began with a new strain of the virus that was more dangerous for some countries, including India, which recorded 400,000 new cases and more than 4,000 deaths per day. The pandemic has overloaded the medical sector, especially radiology, and deep-learning techniques have been used to reduce the burden on hospitals and assist physicians in making accurate diagnoses. In our study, two deep learning models, ResNet-50 and AlexNet, were used to diagnose X-ray datasets collected from many sources; each network classified a multiclass (four-class) dataset and a two-class dataset. The images were processed to remove noise, and a data augmentation technique was applied to the minority classes to balance the classes. The features extracted by the convolutional neural network (CNN) models were combined with features from the traditional Gray-Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP) algorithms into a 1-D vector for each image, which produced more representative features for each disease, and network parameters were tuned for optimum performance. The ResNet-50 network reached accuracy, sensitivity, specificity, and Area Under the Curve (AUC) of 95%, 94.5%, 98%, and 97.10%, respectively, on the multiclass task (COVID-19, viral pneumonia, lung opacity, and normal), and 99%, 98%, 98%, and 97.51%, respectively, on the binary task (COVID-19 and normal).
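A sketch of the handcrafted-plus-CNN feature fusion described: Gray-Level Co-occurrence Matrix statistics and a Local Binary Pattern histogram are computed with scikit-image and concatenated with a CNN feature vector (here taken from an untrained ResNet-50 backbone as a stand-in) into one 1-D descriptor per image:

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from torchvision import models

def handcrafted_features(gray_u8):
    """GLCM statistics + uniform LBP histogram for one 8-bit grayscale image."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean() for p in
                  ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(gray_u8, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_feats, hist])

# CNN feature extractor: ResNet-50 backbone without its classification head
backbone = nn.Sequential(*list(models.resnet50(weights=None).children())[:-1]).eval()

def fused_descriptor(gray_u8):
    rgb = np.repeat(gray_u8[None, None], 3, axis=1).astype(np.float32) / 255.0
    with torch.no_grad():
        cnn_feats = backbone(torch.from_numpy(rgb)).flatten().numpy()   # 2048-D
    return np.concatenate([cnn_feats, handcrafted_features(gray_u8)])   # fused 1-D vector

image = (np.random.rand(224, 224) * 255).astype(np.uint8)   # stand-in chest X-ray
print("fused feature length:", fused_descriptor(image).shape[0])        # 2048 + 14
```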


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , SARS-CoV-2 , Tomography, X-Ray Computed/methods , Algorithms , Computational Biology , Databases, Factual/statistics & numerical data , Diagnosis, Computer-Assisted/methods , Diagnosis, Computer-Assisted/statistics & numerical data , Early Diagnosis , Humans , Lung/diagnostic imaging , Neural Networks, Computer , Pandemics , Pneumonia, Viral/diagnostic imaging , Tomography, X-Ray Computed/statistics & numerical data