Results 1 - 20 of 37
1.
Journal of Biomedical Engineering ; (6): 249-256, 2023.
Article in Chinese | WPRIM | ID: wpr-981536

ABSTRACT

Hypertension is a primary disease endangering human health, and a convenient and accurate blood pressure measurement method can help to prevent it. This paper proposed a continuous blood pressure measurement method based on the facial video signal. First, color distortion filtering and independent component analysis were used to extract the video pulse wave from the region of interest in the facial video, and multi-dimensional features of the pulse wave were extracted based on time-frequency analysis and physiological principles. Second, an integrated feature selection method was designed to extract a universally optimal feature subset. Then, single-person blood pressure measurement models built with a particle swarm optimization-based Elman neural network, a support vector machine (SVM), and a deep belief network were compared. Finally, the SVM algorithm was used to build a general blood pressure prediction model, which was evaluated against reference blood pressure values. The experimental results showed that the blood pressure estimates based on facial video were in good agreement with the standard blood pressure values: the mean absolute error (MAE) of systolic blood pressure was 4.9 mm Hg with a standard deviation (STD) of 5.9 mm Hg, and the MAE of diastolic blood pressure was 4.6 mm Hg with an STD of 5.0 mm Hg, which meets the AAMI standard. The proposed non-contact, video-based method is therefore feasible for blood pressure measurement.
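As a rough illustration of the pipeline this abstract describes (not the authors' code), the pulse-wave extraction and feature step could be sketched with scikit-learn and SciPy as below; the function names, band edges, ROI averaging, and the commented training call are assumptions, and the paper's color distortion filtering and richer feature set are omitted.

```python
# Illustrative sketch: recover a pulse wave from per-frame RGB means of a facial
# ROI with ICA, then build simple time-domain features for an SVR blood-pressure model.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks
from sklearn.decomposition import FastICA
from sklearn.svm import SVR

def extract_pulse_wave(rgb_means, fs=30.0):
    """rgb_means: (n_frames, 3) mean R, G, B of the facial ROI per frame (assumed input)."""
    ica = FastICA(n_components=3, random_state=0)
    sources = ica.fit_transform(rgb_means)                 # independent components
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fs)      # heart-rate band 0.7-4 Hz
    band_power = [np.var(filtfilt(b, a, s)) for s in sources.T]
    return filtfilt(b, a, sources[:, int(np.argmax(band_power))])

def pulse_features(pulse, fs=30.0):
    """A few simple time-domain features; the paper uses a richer multi-domain set."""
    peaks, _ = find_peaks(pulse, distance=fs * 0.4)
    ibi = np.diff(peaks) / fs                              # inter-beat intervals in seconds
    return np.array([ibi.mean(), ibi.std(), pulse.std(), np.ptp(pulse)])

# sbp_model = SVR(kernel="rbf").fit(feature_matrix, sbp_labels)   # hypothetical training data
```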


Subject(s)
Humans , Blood Pressure/physiology , Blood Pressure Determination/methods , Algorithms , Hypertension/diagnosis , Sexually Transmitted Diseases
2.
Journal of Biomedical Engineering ; (6): 1160-1167, 2023.
Article in Chinese | WPRIM | ID: wpr-1008946

ABSTRACT

Heart valve disease (HVD) is one of the common cardiovascular diseases, and heart sound is an important physiological signal for diagnosing it. This paper proposed a model combining basic component features and envelope autocorrelation features to detect early HVDs. First, heart sound signals lasting 5 minutes were denoised by the empirical mode decomposition (EMD) algorithm and segmented. The basic component features and envelope autocorrelation features of the heart sound segments were then extracted to construct a heart sound feature set, and the max-relevance and min-redundancy (MRMR) algorithm was used to select the optimal mixed feature subset. Finally, decision tree, support vector machine (SVM), and k-nearest neighbor (KNN) classifiers were trained to separate early HVDs from normal heart sounds, achieving a best accuracy of 99.9% on a clinical database. Normal, abnormal semilunar valve, and abnormal atrioventricular valve heart sounds were classified with a best accuracy of 99.8%, and normal, single-valve abnormal, and multi-valve abnormal heart sounds with a best accuracy of 98.2%. On a public database the method also achieved good overall accuracy. The results demonstrate that the proposed method has important value for the clinical diagnosis of early HVDs.
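A minimal sketch of the envelope-autocorrelation feature idea, assuming a segmented heart-sound signal is already available (EMD denoising and the basic component features are omitted); the lag count and classifier settings are illustrative, not the paper's.

```python
# Illustrative sketch: envelope-autocorrelation features for a heart-sound segment,
# classified with SVM or KNN as in the abstract.
import numpy as np
from scipy.signal import hilbert
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def envelope_autocorr_features(segment, n_lags=20):
    env = np.abs(hilbert(segment))                       # Hilbert envelope of the segment
    env = (env - env.mean()) / (env.std() + 1e-12)
    ac = np.correlate(env, env, mode="full")[env.size - 1:]
    ac /= ac[0]                                          # normalised autocorrelation
    return ac[1:n_lags + 1]                              # first n_lags lags as features

# X: (n_segments, n_lags) feature matrix, y: labels (hypothetical data)
# print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
# print(cross_val_score(KNeighborsClassifier(5), X, y, cv=5).mean())
```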


Subject(s)
Humans , Heart Sounds , Heart Valve Diseases/diagnosis , Algorithms , Support Vector Machine , Signal Processing, Computer-Assisted
3.
Journal of Biomedical Engineering ; (6): 1126-1134, 2023.
Article in Chinese | WPRIM | ID: wpr-1008942

ABSTRACT

Due to the high complexity and subject variability of motor imagery electroencephalogram (EEG) signals, their decoding is limited by the inadequate accuracy of traditional recognition models. To resolve this problem, a recognition model for motor imagery EEG based on the flicker noise spectrum (FNS) and weighted filter bank common spatial patterns (wFBCSP) was proposed. First, the FNS method was used to analyze the motor imagery EEG: with the second derivative moment as the structure function, precursor time series were generated by a sliding-window strategy, so that the hidden dynamic information of the transition phase could be captured. Then, based on the frequency-band characteristics of the signal, features of the transition-phase precursor time series and of the reaction-phase series were extracted by wFBCSP, yielding features representing the transition and reaction phases. To make the selected features adapt to subject variability and generalize better, the minimum redundancy maximum relevance algorithm was further used for feature selection. Finally, a support vector machine was used as the classifier. In motor imagery EEG recognition, the proposed method yielded an average accuracy of 86.34%, higher than the comparison methods, and thus provides a new idea for decoding motor imagery EEG.
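A rough sketch of a plain filter-bank CSP pipeline in the spirit of this abstract (the FNS precursor series and the weighting in wFBCSP are not reproduced, and mutual-information ranking stands in for mRMR); band edges, component counts, and data shapes are assumptions.

```python
# Illustrative filter-bank CSP + feature selection + SVM pipeline.
import numpy as np
from scipy.signal import butter, filtfilt
from mne.decoding import CSP
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

def fbcsp_features(epochs, y, bands=((8, 12), (12, 16), (16, 20), (20, 24), (24, 30)), fs=250):
    """epochs: (n_trials, n_channels, n_samples); returns concatenated CSP features per band."""
    feats = []
    for lo, hi in bands:
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        filtered = filtfilt(b, a, epochs, axis=-1)       # band-pass each trial
        csp = CSP(n_components=4, log=True)
        feats.append(csp.fit_transform(filtered, y))     # log-variance CSP features
    return np.hstack(feats)

# X = fbcsp_features(epochs, y)                          # hypothetical epochs/labels
# selector = SelectKBest(mutual_info_classif, k=12)      # stand-in for mRMR
# clf = SVC(kernel="rbf").fit(selector.fit_transform(X, y), y)
```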


Subject(s)
Brain-Computer Interfaces , Imagination , Signal Processing, Computer-Assisted , Electroencephalography/methods , Algorithms , Spectrum Analysis
4.
Journal of Biomedical Engineering ; (6): 820-828, 2023.
Article in Chinese | WPRIM | ID: wpr-1008905

ABSTRACT

Attention level evaluation refers to assessing a person's level of attention through observation or experimental testing, and its research results have great application value in education and teaching, intelligent driving, medical health, and other fields. With their objective reliability and safety, electroencephalogram (EEG) signals have become one of the most important technical means to analyze and express attention level. At present, little review literature comprehensively summarizes the application of EEG signals in the field of attention evaluation. To this end, this paper first summarizes the research progress of attention evaluation; then the important methods for EEG-based attention evaluation are analyzed, including data preprocessing, feature extraction and selection, and attention evaluation methods; finally, the shortcomings of current development in this field are discussed and future trends are outlined, to provide a reference for researchers in related fields.


Subject(s)
Humans , Reproducibility of Results , Electroencephalography
5.
Journal of Environmental and Occupational Medicine ; (12): 1115-1120, 2023.
Article in Chinese | WPRIM | ID: wpr-998764

ABSTRACT

Background: Identification and analysis of the influencing factors of occupational injury is an important research topic for feature selection. In recent years, with the rise of machine learning algorithms, feature selection combined with Boosting algorithms has provided a new way to construct occupational injury prediction models. Objective: To evaluate the applicability of Boosting-based models in predicting the severity of miners' non-fatal occupational injuries, and to provide a basis for rationally predicting the severity level of such injuries. Methods: Publicly available data of the US Mine Safety and Health Administration (MSHA) from 2001 to 2021 on metal miners' non-fatal occupational injuries were used; the outcome variable was injury severity defined by lost working days, < 105 d (minor injury) or ≥ 105 d (serious injury). Four different feature sets were screened out by four feature selection methods: least absolute shrinkage and selection operator (Lasso) regression, stepwise regression, single factor + Lasso regression, and single factor + stepwise regression. Logistic regression, gradient boosting decision tree (GBDT), and extreme gradient boosting (XGBoost) models were trained on the four feature sets, yielding 12 prediction models of the severity of miners' non-fatal occupational injuries, which were evaluated by area under the curve (AUC), sensitivity, specificity, and Youden index. Results: According to the four feature selection methods, age, time of accident occurrence, total length of service, cause of injury, activity that triggered the injury, body part injured, nature of injury, and outcome of injury were identified as influencing factors of non-fatal occupational injury severity in miners. Feature set 4, screened out by single factor + stepwise regression, was optimal, and the GBDT model trained on it showed the best predictive performance, with specificity, sensitivity, and Youden index of 0.7530, 0.9490, and 0.7020, respectively. The AUC values of the logistic regression, GBDT, and XGBoost models trained on feature set 4 were 0.8526 (95%CI: 0.8387, 0.8750), 0.8640 (95%CI: 0.8474, 0.8806), and 0.8603 (95%CI: 0.8439, 0.8773), respectively, higher than those trained on feature set 2 [0.8487 (95%CI: 0.8203, 0.8669), 0.8110 (95%CI: 0.8012, 0.8344), and 0.8439 (95%CI: 0.8245, 0.8561), respectively]. The AUC values of the GBDT and XGBoost models trained on feature set 4 were higher than that of the logistic regression model. Conclusion: Prediction models built from predictors screened by combined (two-step) feature selection methods outperform those built with a single feature selection method; moreover, with the optimal feature set, Boosting-based models outperform the traditional logistic regression model.
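A minimal sketch of this kind of comparison in scikit-learn, assuming a prepared feature matrix X and binary severity label y; L1-penalised logistic screening stands in for the Lasso/stepwise stage described in the abstract, and the data split and hyperparameters are illustrative.

```python
# Illustrative sketch: feature screening followed by logistic regression vs. GBDT, scored by AUC.
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def compare_models(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    # L1 (Lasso-style) screening of candidate predictors
    screen = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
    screen.fit(X_tr, y_tr)
    X_tr_s, X_te_s = screen.transform(X_tr), screen.transform(X_te)
    for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                      ("gbdt", GradientBoostingClassifier())]:
        clf.fit(X_tr_s, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te_s)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")
```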

6.
Philippine Journal of Health Research and Development ; (4): 83-92, 2022.
Article in English | WPRIM | ID: wpr-987199

ABSTRACT

Background@#Cardiovascular diseases belong to the top three leading causes of mortality in the Philippines with 17.8 % of the total deaths. Lifestyle-related habits such as alcohol consumption, smoking, poor diet and nutrition, high sedentary behavior, overweight, and obesity have been increasingly implicated in the high rates of heart disease among Filipinos leading to a significant burden to the country's healthcare system. The objective of this study was to predict the presence of heart disease using various machine learning algorithms (support vector machine, naïve Bayes, random forest, logistic regression, decision tree, and adaptive boosting) evaluated on an anonymized publicly available cardiovascular disease dataset. @*Methodology@#Various machine learning algorithms were applied on an anonymized publicly available cardiovascular dataset from a machine learning data repository (IEEE Dataport). A web-based application system named Heart Alert was developed based on the best machine learning model that would predict the risk of developing heart disease. An assessment of the effects of different optimization techniques as to the imputation methods (mean, median, mode, and multiple imputation by chained equations) and as to the feature selection method (recursive feature elimination) on the classification performance of the machine learning algorithms was made. All simulation experiments were implemented via Python 3.8 and its machine learning libraries (Scikit-learn, Keras, Tensorflow, Pandas, Matplotlib, Seaborn, NumPy). @*Results@#The support vector machine without imputation and feature selection obtained the highest performance metrics (90.2% accuracy, 87.7% sensitivity, 93.6% specificity, 94.9% precision, 91.2% F1-score and an area under the receiver operating characteristic curve of 0.902 ) and was used to implement the heart disease prediction system (Heart Alert). Following very closely were random forest with mean or median imputation and logistic regression with mode imputation, all having no feature selection which also performed well. @*Conclusion@#The performance of the best four machine learning models suggests that for this dataset, imputation technique for missing values may or may not be done. Likewise, recursive feature elimination for feature selection may not apply as all variables seem to be important in heart disease prediction. An early accurate diagnosis leading to prompt intervention efforts is very crucial as it improves the patient's quality of life and diminishes the risk of developing cardiac events.
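Since this study explicitly used Python 3.8 with scikit-learn, a compact sketch of the kind of comparison it reports is given below; the CSV path, column names, imputation choice, and RFE feature count are assumptions, not the study's actual configuration.

```python
# Illustrative sketch: SVM with and without imputation + recursive feature elimination.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# df = pd.read_csv("cardiovascular.csv")            # hypothetical path to the IEEE Dataport CSV
# X, y = df.drop(columns="target"), df["target"]    # hypothetical target column name

pipelines = {
    "svm_plain": Pipeline([("scale", StandardScaler()), ("clf", SVC())]),
    "svm_impute_rfe": Pipeline([
        ("impute", SimpleImputer(strategy="mean")),
        ("scale", StandardScaler()),
        ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=8)),
        ("clf", SVC()),
    ]),
}
# for name, pipe in pipelines.items():
#     print(name, cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean())
```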


Subject(s)
Machine Learning , Support Vector Machine
7.
Chinese Journal of Biochemistry and Molecular Biology ; (12): 1106-1116, 2022.
Article in Chinese | WPRIM | ID: wpr-1015784

ABSTRACT

Early diagnosis of cancer can significantly improve the survival rate of cancer patients, especially patients with hepatocellular carcinoma (HCC). Machine learning is an effective tool for cancer classification, but selecting low-dimensional feature subsets with high classification accuracy from complex, high-dimensional cancer datasets is a difficult problem. In this paper, we propose a novel feature selection method, SC-BPSO: a two-stage method combining a filter based on the Spearman correlation coefficient and the chi-square independence test with a wrapper based on binary particle swarm optimization (BPSO). It was applied to the classification of high-dimensional cancer data to distinguish normal samples from HCC samples. The dataset comprises 130 liver tissue microRNA sequence samples (64 hepatocellular carcinoma, 66 normal liver tissue) obtained from the National Center for Biotechnology Information (NCBI) and the European Bioinformatics Institute (EBI). First, the liver tissue microRNA sequence data were preprocessed to extract three types of features: microRNA expression, editing level, and post-editing expression. Then, the parameters of the SC-BPSO algorithm were tuned for the liver cancer classification task to select a subset of key features. Finally, classifiers were used to build classification models and predict the results, and the classification results were compared with those of the feature subsets selected by the information gain filter, the information gain ratio filter, and the BPSO wrapper feature selection algorithm using the same classifier. With the feature subset selected by the SC-BPSO algorithm, the classification accuracy reached 98.4%. The experimental results showed that, compared with the other three feature selection algorithms, SC-BPSO can effectively find feature subsets of relatively small size with higher accuracy, which may be important for cancer classification with a small number of samples and high-dimensional features.
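A sketch of the first (filter) stage only of such a two-stage scheme, assuming a non-negative feature matrix X and binary labels y; the top-k cutoff is illustrative and the BPSO wrapper stage is omitted entirely.

```python
# Illustrative filter stage: rank features by Spearman correlation with the label and
# by a chi-square test, keep the union of the top-k of each, then hand the reduced set
# to a wrapper search (not shown).
import numpy as np
from scipy.stats import spearmanr
from sklearn.feature_selection import chi2
from sklearn.preprocessing import MinMaxScaler

def filter_stage(X, y, k=50):
    """X: (n_samples, n_features) expression/editing features, y: 0/1 labels."""
    rho = np.array([abs(spearmanr(X[:, j], y)[0]) for j in range(X.shape[1])])
    chi_scores, _ = chi2(MinMaxScaler().fit_transform(X), y)   # chi2 needs non-negative input
    top_rho = np.argsort(rho)[::-1][:k]
    top_chi = np.argsort(chi_scores)[::-1][:k]
    return np.union1d(top_rho, top_chi)                        # indices of retained features

# kept = filter_stage(X, y)          # hypothetical microRNA feature matrix
# X_reduced = X[:, kept]             # input to the BPSO wrapper stage
```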

8.
Chinese Journal of Medical Instrumentation ; (6): 57-62, 2022.
Article in Chinese | WPRIM | ID: wpr-928858

ABSTRACT

This paper reviews recent studies on the recognition and evaluation of facial paralysis based on artificial intelligence. The research methods can be divided into two categories: facial paralysis evaluation based on manually selected eigenvalues of patients' facial images, and facial paralysis evaluation based on neural networks applied to patients' facial images. The analysis shows that manual selection of eigenvalues is suitable for small sample sizes, but its ability to distinguish adjacent grades of facial paralysis needs further optimization. The neural network method can distinguish neighboring grades of facial paralysis relatively well, but it requires a larger sample size. Both methods have good prospects. A common development direction may be to manually select the features most closely related to the evaluation scale and to extract time-domain features, so as to improve the accuracy of facial paralysis evaluation.


Subject(s)
Humans , Artificial Intelligence , Face , Facial Paralysis/diagnosis , Neural Networks, Computer
9.
International Eye Science ; (12): 1644-1648, 2021.
Article in Chinese | WPRIM | ID: wpr-886453

ABSTRACT

AIM: To build a prediction model for dry eye with data mining techniques. METHODS: From March 2020 to January 2021, 218 patients (436 eyes) with dry eye were selected as the dry eye group, and 212 patients (424 eyes) without dry eye were selected as the control group. The Schirmer I test (SIt), fluorescein tear film break-up time (FBUT), non-contact tear film break-up time (NI-BUT), tear meniscus height (TMH), corneal fluorescein staining (FL), and meibomian gland function score (MG-SCORE) were performed in both groups. A total of 100 samples (200 eyes) were randomly selected from each of the dry eye group and the control group to form a test set of 200 samples (400 eyes); the remaining 118 samples (236 eyes) in the dry eye group and 112 samples (224 eyes) in the control group were used as the training set. The correlation feature searching (CFS) feature selection algorithm was used to search for factors related to the detection of dry eye. C4.5, Random Forest, Random Tree, Naïve Bayes, KNN, SVM, Decision Stump, and Bagging methods were used to construct prediction models. RESULTS: Using the CFS feature selection algorithm, an optimal feature subset including SIt, NI-BUT, TMH, and FL was obtained. Based on these four features, eight machine learning algorithms were employed to build prediction models, and the prediction accuracies were all higher than 75%. Among the eight models, the Random Forest model had the highest prediction accuracy, reaching 91.8% and 88.3%, respectively, with a total prediction accuracy of 90.1%. In addition, single-factor modeling showed that FL and NI-BUT had the highest prediction accuracy, exceeding 74%. CONCLUSION: Random Forest can be considered a stable algorithm with good generalization for building a dry eye prediction model. NI-BUT and FL have a strong correlation with dry eye and can be considered as standards for the clinical examination of dry eye.
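A quick sketch of how several of the listed classifiers could be compared on the four selected clinical features; the DataFrame column names (SIt, NI_BUT, TMH, FL, dry_eye) are assumptions about how such a table might be encoded, not the study's actual files.

```python
# Illustrative comparison of classifiers on the CFS-selected features.
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
    "NaiveBayes": GaussianNB(),
    "KNN": KNeighborsClassifier(5),
    "SVM": SVC(),
    "Bagging": BaggingClassifier(random_state=0),
}
# X = df[["SIt", "NI_BUT", "TMH", "FL"]]; y = df["dry_eye"]   # hypothetical columns
# for name, m in models.items():
#     print(name, cross_val_score(m, X, y, cv=10).mean())
```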

10.
Chinese Journal of Medical Instrumentation ; (6): 361-365, 2021.
Article in Chinese | WPRIM | ID: wpr-888624

ABSTRACT

OBJECTIVE: According to the digital image features of corneal opacity, a multi-class support vector machine (SVM) model was established to explore an objective quantification method for corneal opacity. METHODS: Corneal digital images of dead pigs were collected, some color features and texture features were extracted according to previous experience, and the SVM multi-class model was established. The test results of the model were evaluated by precision, sensitivity and ... RESULTS: In the classification of corneal opacity, the highest ... CONCLUSIONS: The SVM multi-class model can classify the degree of corneal opacity.


Subject(s)
Animals , Corneal Opacity , Support Vector Machine , Swine
11.
Braz. arch. biol. technol ; 64: e21210240, 2021. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1355817

ABSTRACT

Medical data classification is an ambitious task in the domain of medical informatics: classification models learned from medical datasets aim to ease the human burden of diagnosis. Medical data classification is the major focus of this paper, in which a Decision Tree based Salp Swarm Optimization (DT-SWO) algorithm is proposed. After pre-processing, a hybrid feature selection method selects the medical data features, the high-dimensional features are reduced by Discriminant Independent Component Analysis (DICA), and DT-SWO classifies the most relevant class of medical data. Four disease datasets, namely Leukemia, Diffuse Large B-cell Lymphoma (DLBCL), Lung cancer, and Colon, were collected from the UCI machine learning repository. Ultimately, the experimental outcomes demonstrated that the proposed DT-SWO algorithm is better suited for medical data classification than the other algorithms.

12.
Neuroscience Bulletin ; (6): 985-996, 2020.
Article in English | WPRIM | ID: wpr-828333

ABSTRACT

Hydrocephalus is often treated with a cerebrospinal fluid shunt (CFS) for excessive amounts of cerebrospinal fluid in the brain. However, it is very difficult to distinguish whether the ventricular enlargement is due to hydrocephalus or other causes, such as brain atrophy after brain damage and surgery. The non-trivial evaluation of the consciousness level, along with a continuous drainage test of the lumbar cistern is thus clinically important before the decision for CFS is made. We studied 32 secondary mild hydrocephalus patients with different consciousness levels, who received T1 and diffusion tensor imaging magnetic resonance scans before and after lumbar cerebrospinal fluid drainage. We applied a novel machine-learning method to find the most discriminative features from the multi-modal neuroimages. Then, we built a regression model to regress the JFK Coma Recovery Scale-Revised (CRS-R) scores to quantify the level of consciousness. The experimental results showed that our method not only approximated the CRS-R scores but also tracked the temporal changes in individual patients. The regression model has high potential for the evaluation of consciousness in clinical practice.

13.
Journal of Biomedical Engineering ; (6): 661-669, 2020.
Article in Chinese | WPRIM | ID: wpr-828121

ABSTRACT

How to extract highly discriminative features that aid classification from complex resting-state fMRI (rs-fMRI) data is the key to improving the accuracy of recognizing brain diseases such as schizophrenia. In this work, we use a weighted sparse model for brain network construction and utilize the Kendall correlation coefficient (KCC) to extract discriminative connectivity features for schizophrenia classification, which is conducted with a linear support vector machine. Experimental results based on the rs-fMRI of 57 schizophrenia patients and 64 healthy controls show that our proposed method is more effective (i.e., achieving a significantly higher classification accuracy, 81.82%) than other competing methods. Specifically, compared with traditional network construction methods (Pearson's correlation and sparse representation) and commonly used feature selection methods (the two-sample t-test and the least absolute shrinkage and selection operator (Lasso)), the proposed algorithm more effectively extracts the discriminative connectivity features between schizophrenia patients and healthy controls and further improves the classification accuracy. At the same time, the discriminative connectivity features extracted in this work could serve as potential clinical biomarkers to assist the identification of schizophrenia.
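A small sketch of the KCC feature-ranking step followed by a linear SVM, assuming a connectivity feature matrix X and diagnosis labels y; the number of retained features is illustrative, and in practice the selection would be nested inside cross-validation rather than run on the full data as here.

```python
# Illustrative sketch: rank connectivity features by Kendall correlation with the label,
# keep the top-ranked ones, and classify with a linear SVM.
import numpy as np
from scipy.stats import kendalltau
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def kcc_select(X, y, n_keep=100):
    """X: (n_subjects, n_connections) connectivity strengths, y: 0/1 diagnosis."""
    tau = np.array([abs(kendalltau(X[:, j], y)[0]) for j in range(X.shape[1])])
    return np.argsort(tau)[::-1][:n_keep]

# idx = kcc_select(X, y)                                               # hypothetical data
# acc = cross_val_score(LinearSVC(max_iter=5000), X[:, idx], y, cv=5).mean()
```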


Subject(s)
Humans , Algorithms , Brain , Brain Mapping , Magnetic Resonance Imaging , Schizophrenia , Diagnostic Imaging
14.
Neuroscience Bulletin ; (6): 985-996, 2020.
Article in English | WPRIM | ID: wpr-826744

ABSTRACT

Hydrocephalus is often treated with a cerebrospinal fluid shunt (CFS) for excessive amounts of cerebrospinal fluid in the brain. However, it is very difficult to distinguish whether the ventricular enlargement is due to hydrocephalus or other causes, such as brain atrophy after brain damage and surgery. The non-trivial evaluation of the consciousness level, along with a continuous drainage test of the lumbar cistern is thus clinically important before the decision for CFS is made. We studied 32 secondary mild hydrocephalus patients with different consciousness levels, who received T1 and diffusion tensor imaging magnetic resonance scans before and after lumbar cerebrospinal fluid drainage. We applied a novel machine-learning method to find the most discriminative features from the multi-modal neuroimages. Then, we built a regression model to regress the JFK Coma Recovery Scale-Revised (CRS-R) scores to quantify the level of consciousness. The experimental results showed that our method not only approximated the CRS-R scores but also tracked the temporal changes in individual patients. The regression model has high potential for the evaluation of consciousness in clinical practice.

15.
Article | IMSEAR | ID: sea-209972

ABSTRACT

Breast cancer is currently one of the most common cancers among women, and its gravity is evidenced by mortality rates that are the second highest after lung cancer. For breast cancer detection, mammography has emerged as the most effective modality despite the challenges posed by dense breast parenchyma. In this regard, computer-aided detection (CADe) leverages the output of mammography systems to support the radiologist's decision: it produces a diagnosis similar to that of a radiologist who relies, for his or her interpretation, on suggestions generated by a computer after analyzing a set of the patient's radiological images. Against this backdrop, the current paper examines different ways of utilizing known image processing and machine learning techniques for the detection of breast cancer using CADe, specifically on mammogram images, which in turn helps pathologists in their decision-making process. For effective implementation of this methodology, a CADe system was developed and tested on the public, freely available MIAS mammographic database. The CADe system is designed to differentiate between normal and abnormal tissues, and it assists radiologists in avoiding missed breast abnormalities. The performance of all classifiers was best when using the sequential forward selection (SFS) method. We can also conclude that the grey-level quantization of the gray-level co-occurrence matrices (GLCM) is a very significant factor for obtaining robust high-order features, with the best results obtained when L equals the size of the ROI. Using a large number of diverse features helps the CADe system become robust enough to distinguish between the different tissues.
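A sketch of GLCM texture features for a mammogram ROI combined with forward sequential feature selection; the quantization level, distances, angles, and classifier are illustrative, and newer scikit-image spells the functions graycomatrix/graycoprops (older releases use the "grey" spelling).

```python
# Illustrative sketch: GLCM texture features + forward sequential feature selection + SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

def glcm_features(roi, levels=64):
    """roi: 2-D uint8 array; quantize to `levels` grey levels before computing the GLCM."""
    q = (roi.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 4, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X = np.vstack([glcm_features(roi) for roi in rois])       # hypothetical list of ROIs
# sfs = SequentialFeatureSelector(SVC(), direction="forward", n_features_to_select=6)
# X_sel = sfs.fit_transform(X, y)                           # y: normal vs. abnormal labels
```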

16.
Chinese Journal of Medical Imaging Technology ; (12): 1569-1573, 2019.
Article in Chinese | WPRIM | ID: wpr-861218

ABSTRACT

With the development of medical imaging technology, radiomics-based research on Alzheimer's disease (AD) has become a current hot spot. The applications of existing radiomics analysis methods in AD were reviewed in this article. Firstly, the workflows of radiomics with machine learning and with deep learning were described. Secondly, the key machine-learning radiomics methods for feature extraction, selection, dimensionality reduction, and statistical classification modeling were summarized. Then the applications of recent deep learning-based radiomics methods in AD were discussed. Finally, the limitations and challenges of machine learning and deep learning methods in practical applications were compared and analyzed.

17.
Journal of Biomedical Engineering ; (6): 957-963, 2019.
Article in Chinese | WPRIM | ID: wpr-781841

ABSTRACT

The purpose of our study was to evaluate the diagnostic performance of radiomics for multi-class discrimination of lymphadenopathy based on dual-modal elastography and B-mode ultrasound images. We retrospectively analyzed a total of 251 lymph nodes (89 benign lymph nodes, 70 lymphomas and 92 metastatic lymph nodes) from 248 patients, examined by both elastography and B-mode sonography. Firstly, radiomic features were extracted from the multimodal ultrasound images, including shape features, intensity statistics features and gray-level co-occurrence matrix texture features. Secondly, three feature selection methods based on information theory were applied to the radiomic features to select different feature subsets: conditional infomax feature extraction, conditional mutual information maximization, and double input symmetric relevance. Thirdly, a support vector machine classifier was applied to each radiomic subset for the diagnosis of lymphadenopathy. Finally, we fused the results from the different modalities and different radiomic feature subsets with AdaBoost to improve the performance of lymph node classification. The results showed that the accuracy and overall F1 score with five-fold cross-validation were 76.09%±1.41% and 75.88%±4.32%, respectively. Moreover, considering benign lymph nodes, lymphoma, or metastatic lymph nodes respectively, the areas under the receiver operating characteristic curve for the multi-class classification were 0.77, 0.93 and 0.84, respectively. This study indicates that radiomic features derived from multimodal ultrasound images are beneficial for the diagnosis of lymphadenopathy and are expected to be useful in the clinical differentiation of lymph node diseases.
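A loose sketch of the general idea of per-modality feature selection, per-subset SVMs, and a boosted fusion stage; mutual information stands in for the information-theoretic selectors named above, the variable names are placeholders, and in practice the fused scores should come from cross-validated predictions rather than from refitting on the same data as here.

```python
# Illustrative sketch: select features per modality, score with per-modality SVMs,
# then combine the stacked class probabilities with an AdaBoost fusion stage.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier

def modality_scores(X_bmode, X_elasto, y, k=15):
    scores = []
    for X in (X_bmode, X_elasto):                          # one feature block per modality
        X_sel = SelectKBest(mutual_info_classif, k=k).fit_transform(X, y)
        clf = SVC(probability=True).fit(X_sel, y)
        scores.append(clf.predict_proba(X_sel))
    return np.hstack(scores)                               # stacked per-class probabilities

# meta = AdaBoostClassifier(n_estimators=100)              # simple fusion stage
# meta.fit(modality_scores(X_bmode, X_elasto, y), y)       # hypothetical feature arrays
```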


Subject(s)
Humans , Elasticity Imaging Techniques , Lymph Nodes , Lymphadenopathy , Retrospective Studies , Ultrasonography
18.
Journal of Southern Medical University ; (12): 547-553, 2019.
Article in Chinese | WPRIM | ID: wpr-772045

ABSTRACT

To explore the application of radiomic analysis in the differential diagnosis of renal cell carcinoma in patients with hydronephrosis and renal calculi using supervised machine learning methods. The abdominal CT scan data were retrospectively analyzed for 66 patients with pathologically confirmed hydronephrosis and renal calculi, among whom 35 patients had renal cell carcinoma. In each case 18 non-texture features and 344 texture features were extracted from the region of interest (ROI). An infinite feature selection (InfFS)-based forward feature selection method coupled with a support vector machine (SVM) classifier was used to select the optimal feature subset. The SVM was trained and performed prediction using the selected feature subset to classify whether hydronephrosis with renal calculi was associated with renal cell carcinoma. A total of 12 texture features were selected as the optimal features. The area under the curve (AUC), accuracy, sensitivity, specificity, false positive rate and false negative rate of the SVM-InfFS model for predicting accompanying renal tumors in patients with hydronephrosis and calculi were 0.907, 81.0%, 70.0%, 90.9%, 9.1%, and 30.0%, respectively. The diagnostic accuracy, sensitivity, specificity, false positive and false negative rates of clinicians provided with these classification results were 90.5%, 80.0%, 100%, 0.00%, and 20.0%, respectively. The computer-aided classification model based on supervised machine learning can effectively extract diagnostic information and improve the diagnostic rate of renal cell carcinoma associated with hydronephrosis and renal calculi.


Subject(s)
Humans , Carcinoma, Renal Cell , Diagnosis , Diagnosis, Differential , Hydronephrosis , Diagnosis , Kidney Calculi , Kidney Neoplasms , Diagnosis , Retrospective Studies
19.
International Journal of Biomedical Engineering ; (6): 336-341, 2019.
Article in Chinese | WPRIM | ID: wpr-789113

ABSTRACT

Objective: To predict the 5-year survival of patients with non-small cell lung cancer (NSCLC) by machine learning, and to improve prediction efficiency and accuracy. Methods: The experiments were performed using NSCLC data from the SEER database. Given the imbalance of the patient data, the Borderline-SMOTE method was used for data sampling. A perturbation-based feature selection (PFS) method and a decision tree (DT) algorithm were used to screen the features and construct the postoperative survival prediction model. Results: The patient data were balanced, and seven prognostic variables were screened, including primary site, stage group, surgical primary site, international classification of diseases, race and grade. Compared with the LASSO, Tree-based, PFS-SVM and PFS-kNN models, the model constructed using PFS-DT had the best predictive effect. Conclusions: The patient survival prediction model based on PFS-DT can effectively improve the accuracy of postoperative survival prediction in patients with NSCLC, and can provide a reference for doctors to guide treatment and improve prognosis.
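A minimal sketch of the rebalancing plus decision-tree step using imbalanced-learn's BorderlineSMOTE; the loader, label encoding, tree depth, and evaluation split are assumptions, and the perturbation-based feature selection stage is not reproduced.

```python
# Illustrative sketch: Borderline-SMOTE oversampling followed by a decision tree.
from imblearn.over_sampling import BorderlineSMOTE
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def fit_survival_tree(X, y):
    """X: screened prognostic variables, y: 1 if 5-year survival, else 0 (assumed encoding)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    X_bal, y_bal = BorderlineSMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample minority class
    tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_bal, y_bal)
    print("AUC:", roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]))
    return tree
```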

20.
Journal of Biomedical Engineering ; (6): 183-188, 2019.
Article in Chinese | WPRIM | ID: wpr-774222

ABSTRACT

The early diagnosis of children with autism spectrum disorder (ASD) is essential, and electroencephalography (EEG) is one of the most commonly used neuroimaging techniques, being the most accessible and informative method. In this study, approximate entropy (ApEn), sample entropy (SaEn), permutation entropy (PeEn) and wavelet entropy (WaEn) were extracted from the EEGs of children with ASD and of a control group, and Student's t-test was used to analyze between-group differences. A support vector machine (SVM) algorithm was used to build classification models for each entropy measure derived from different brain regions. A permutation test was applied to search for the optimal subset of features, with which the SVM model achieved its best performance. The results showed that the complexity of the EEGs of children with autism was lower than that of the normal control group. Among the four entropies, WaEn yielded better classification performance than the others. Classification results varied across regions, with the frontal lobe showing the best performance. After feature selection, six features were retained and the accuracy rate increased to 84.55%, which supports the use of this approach to assist the early diagnosis of autism.
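A minimal permutation-entropy implementation plus an SVM over per-channel entropy features, to make the feature concept concrete; the embedding order, delay, and channel layout are assumptions, and the other three entropy measures are not shown.

```python
# Illustrative sketch: normalised permutation entropy per channel, then SVM classification.
import numpy as np
from itertools import permutations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy of a 1-D signal (0 = fully regular, 1 = maximally irregular)."""
    patterns = list(permutations(range(order)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - (order - 1) * delay):
        window = x[i:i + order * delay:delay]
        counts[patterns.index(tuple(np.argsort(window)))] += 1   # ordinal pattern of the window
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p)) / np.log2(len(patterns))

# X = np.array([[permutation_entropy(ch) for ch in eeg] for eeg in recordings])  # hypothetical EEGs
# print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```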


Subject(s)
Child , Humans , Algorithms , Autism Spectrum Disorder , Classification , Diagnosis , Electroencephalography , Entropy , Support Vector Machine