1.
Chinese Journal of Neonatology ; (6): 150-156, 2024.
Article in Chinese | WPRIM | ID: wpr-1022553

ABSTRACT

Objective: To construct prediction models for necrotizing enterocolitis (NEC) using machine learning (ML) methods. Methods: From January 2015 to October 2021, neonates with suspected NEC symptoms who received abdominal ultrasound examinations in our hospital were retrospectively analyzed. The neonates were assigned to an NEC group (modified Bell's staging ≥ Ⅱ) and a non-NEC group for diagnostic prediction analysis (dataset 1). The NEC group was further divided into a surgical NEC group (staging ≥ Ⅲ) and a conservative NEC group for severity analysis (dataset 2). Feature selection algorithms including extremely randomized trees, elastic net and recursive feature elimination were used to screen all variables. Diagnostic and severity prediction models for NEC were established using logistic regression, support vector machine (SVM), random forest, light gradient boosting machine and other ML methods. The performance of the different models was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, negative predictive value and positive predictive value. Results: A total of 536 neonates were enrolled, including 234 in the NEC group and 302 in the non-NEC group (dataset 1); 70 were in the surgical NEC group and 164 in the conservative NEC group (dataset 2). The variables selected by extremely randomized trees showed the best predictive performance in both datasets. Among the diagnostic prediction models, the SVM model performed best, with an AUC of 0.932 (95% CI 0.891-0.973) and an accuracy of 0.844 (95% CI 0.793-0.895); 11 predictive variables were determined, including portal venous gas, intestinal dilation, neutrophil percentage and absolute monocyte count at the onset of illness. Among the NEC severity prediction models, the SVM model again performed best, with an AUC of 0.835 (95% CI 0.737-0.933) and an accuracy of 0.787 (95% CI 0.703-0.871); 25 predictive variables were identified, including age of onset, C-reactive protein and absolute neutrophil count at clinical onset. Conclusions: An NEC prediction model built with a feature selection algorithm and an SVM classifier is helpful for diagnosing NEC and grading disease severity.
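
As a rough illustration of the pipeline described in this abstract (extremely randomized trees for variable screening, an SVM classifier, and AUC-based evaluation), a minimal scikit-learn sketch might look as follows; the synthetic data, the number of retained variables and all parameter values are assumptions, not the study's.

```python
# Illustrative only: synthetic data stand in for the clinical/ultrasound variables.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score

X, y = make_classification(n_samples=536, n_features=40, n_informative=11, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Screen variables with extremely randomized trees and keep the top-ranked ones.
et = ExtraTreesClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
top = np.argsort(et.feature_importances_)[::-1][:11]   # 11 variables, as in the abstract

# SVM with probability outputs so that AUC can be computed.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
clf.fit(X_tr[:, top], y_tr)

proba = clf.predict_proba(X_te[:, top])[:, 1]
print("AUC:", round(roc_auc_score(y_te, proba), 3),
      "accuracy:", round(accuracy_score(y_te, clf.predict(X_te[:, top])), 3))
```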

2.
Article in Chinese | WPRIM | ID: wpr-1031682

ABSTRACT

Objective To propose a heart sound segmentation method based on a multi-feature fusion network. Methods Data were obtained from the CinC/PhysioNet 2016 Challenge dataset (a total of 3,153 recordings from 764 patients, about 91.93% of whom were male, with an average age of 30.36 years). First, features were extracted in the time domain and the time-frequency domain, and redundant features were removed by dimensionality reduction. Then, the best-performing features were selected separately from the two feature spaces. Next, multi-feature fusion was performed through multi-scale dilated convolution, cooperative fusion, and a channel attention mechanism. Finally, the fused features were fed into a bidirectional gated recurrent unit (BiGRU) network to obtain the heart sound segmentation results. Results The proposed method achieved a precision, recall and F1 score of 96.70%, 96.99%, and 96.84%, respectively. Conclusion The proposed multi-feature fusion network achieves better heart sound segmentation performance and can provide high-accuracy segmentation support for the design of automatic heart sound-based analysis of heart disease.
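
The segmentation stage described above could, under broad assumptions, be sketched as a bidirectional GRU that labels each frame of a fused feature sequence; the layer sizes, sequence length and four-state output below are illustrative choices, not the authors' configuration.

```python
import tensorflow as tf

n_frames, n_features, n_states = 400, 64, 4    # assumed sequence length, feature size, states

inputs = tf.keras.Input(shape=(n_frames, n_features))            # fused per-frame features
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.GRU(64, return_sequences=True))(inputs)      # bidirectional GRU encoder
outputs = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(n_states, activation="softmax"))(x)    # per-frame state posteriors

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```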

3.
Article in Chinese | WPRIM | ID: wpr-1026238

ABSTRACT

Tumors are serious diseases that threaten human health, and early diagnosis is essential to improve treatment success and patient survival. The study of tumor gene expression data has become a major tool for revealing tumor disease mechanisms, in which artificial intelligence plays an important role. The potential advantages of supervised learning, unsupervised learning and deep learning in tumor prediction and classification are explored from the perspective of machine learning methods. Special attention is paid to the impact of feature selection algorithms on gene screening and their importance in high-dimensional gene expression data. By providing a comprehensive overview of the application and development of artificial intelligence in the analysis of tumor gene expression data, the study aims to outline future research directions and promote further development.

4.
Article in Chinese | WPRIM | ID: wpr-981536

ABSTRACT

Hypertension is a primary disease endangering human health, and a convenient and accurate blood pressure measurement method can help prevent it. This paper proposes a continuous blood pressure measurement method based on facial video signals. First, color distortion filtering and independent component analysis were used to extract the video pulse wave from the region of interest in the facial video, and multi-dimensional features of the pulse wave were extracted based on the time-frequency domain and physiological principles. Second, an integrated feature selection method was designed to extract a universal optimal feature subset. Single-person blood pressure measurement models built with a particle-swarm-optimized Elman neural network, a support vector machine (SVM) and a deep belief network were then compared. Finally, an SVM algorithm was used to build a general blood pressure prediction model, which was evaluated against reference blood pressure values. The experimental results showed that blood pressure estimated from facial video was in good agreement with the standard blood pressure values: the mean absolute error (MAE) of systolic blood pressure was 4.9 mm Hg with a standard deviation (STD) of 5.9 mm Hg, and the MAE of diastolic blood pressure was 4.6 mm Hg with an STD of 5.0 mm Hg, which meets the AAMI standard. The proposed non-contact, video-based method can therefore be used for blood pressure measurement.
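
A hedged sketch of the pulse-extraction step only: independent component analysis applied to mean RGB traces of a facial region of interest, with the pulse component identified by its dominant frequency. The traces below are synthetic stand-ins, and the frame rate and frequency band are assumed values.

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 30.0                                           # assumed camera frame rate (Hz)
t = np.arange(0, 20, 1 / fs)
pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)          # ~72 beats/min pulse component
rng = np.random.default_rng(0)
# Mean R, G, B traces of the region of interest (synthetic: pulse + noise + baseline).
rgb = np.vstack([pulse + 0.01 * rng.standard_normal(t.size) + c for c in (0.6, 0.4, 0.3)]).T

sources = FastICA(n_components=3, random_state=0).fit_transform(rgb)

# Keep the component whose dominant frequency lies in a plausible heart-rate band.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = [freqs[np.argmax(np.abs(np.fft.rfft(s)))] for s in sources.T]
in_band = [0.7 <= f <= 3.0 for f in dominant]       # 42-180 beats/min
print("dominant frequencies (Hz):", np.round(dominant, 2), "in heart-rate band:", in_band)
```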


Subjects
Humans, Blood Pressure/physiology, Blood Pressure Determination/methods, Algorithms, Hypertension/diagnosis, Sexually Transmitted Infections
5.
Article in Chinese | WPRIM | ID: wpr-1019669

ABSTRACT

Objective To quantify lung cancer medical record data using feature selection and a Likert grading method, to construct a deep extreme learning machine model optimized by the sparrow search algorithm (SSA-DELM), to classify and predict the traditional Chinese medicine (TCM) syndrome types in lung cancer medical records, and to provide a scientific and effective means for research on TCM syndrome classification. Methods Medical records of 497 patients diagnosed with lung cancer from January 2015 to December 2021 were collected from the Affiliated Hospital of Jiangxi University of Traditional Chinese Medicine, and 412 records were screened as the study objects. Syndrome factors of the different syndromes were summarized by feature selection and feature importance ranking, and the syndrome factors were quantified by the Likert grading method. A deep extreme learning machine optimized by the sparrow search algorithm was built, trained and tested. Finally, the model was compared with other machine learning models according to three evaluation criteria. Results The average classification accuracy of the SSA-DELM model was 88.44%, while the average accuracies of the support vector machine and the Bayesian network were 83.39% and 84.53%, respectively. The recall rate and F1 value of the SSA-DELM model on the five syndrome types were mostly above 80%, also better than the other traditional machine learning models. Conclusion Quantifying the lung cancer medical record data with feature selection combined with the Likert grading method, compared with 0-1 encoding of the data, better represents the characteristics of the data and improves the accuracy of the classification model. Compared with other traditional machine learning classification models, the SSA-DELM model has better representation learning ability and learning speed. This model provides a scientific and technical means for the clinical treatment of lung cancer, and also offers a useful reference for the informatization and intelligent development of TCM syndrome differentiation and treatment.

6.
Article in Chinese | WPRIM | ID: wpr-998764

ABSTRACT

Background The identification and analysis of factors influencing occupational injury is an important application of feature selection. In recent years, with the rise of machine learning algorithms, feature selection combined with Boosting algorithms has provided a new analysis approach for constructing occupational injury prediction models. Objective To evaluate the applicability of Boosting-based models in predicting the severity of miners' non-fatal occupational injuries, and to provide a basis for rationally predicting the severity level of miners' non-fatal occupational injuries. Methods The publicly available data of the US Mine Safety and Health Administration (MSHA) from 2001 to 2021 on metal miners' non-fatal occupational injuries were used, and the outcome variable was lost working days < 105 d (minor injury) or ≥ 105 d (serious injury). Four different feature sets were screened out by four feature selection methods: least absolute shrinkage and selection operator (Lasso) regression, stepwise regression, single factor + Lasso regression, and single factor + stepwise regression. Logistic regression, gradient boosting decision tree (GBDT), and extreme gradient boosting (XGBoost) were selected to construct prediction models trained on the four feature sets. A total of 12 prediction models of the severity of miners' non-fatal occupational injuries were built, and their area under the curve (AUC), sensitivity, specificity, and Youden index were calculated for model evaluation. Results According to the results of the four feature selection methods, age, time of accident occurrence, total length of service, cause of injury, activity that triggered the injury, body part injured, nature of injury, and outcome of injury were identified as influencing factors of non-fatal occupational injury severity in miners. Feature set 4, screened out by single factor + stepwise regression, was the optimal set, and the GBDT model presented the best performance in predicting the severity of non-fatal occupational injuries; the associated specificity, sensitivity, and Youden index were 0.7530, 0.9490, and 0.7020, respectively. The AUC values of the logistic regression, GBDT, and XGBoost models trained on feature set 4 were 0.8526 (95%CI: 0.8387, 0.8750), 0.8640 (95%CI: 0.8474, 0.8806), and 0.8603 (95%CI: 0.8439, 0.8773), respectively, higher than the AUC values obtained with feature set 2 [0.8487 (95%CI: 0.8203, 0.8669), 0.8110 (95%CI: 0.8012, 0.8344), and 0.8439 (95%CI: 0.8245, 0.8561), respectively]. The AUC values of the GBDT and XGBoost models trained on feature set 4 were higher than that of the logistic regression model. Conclusion Prediction models constructed with predictors screened out by combined (two-step) feature selection methods perform better than those using single feature selection methods. At the same time, with the optimal feature set, Boosting-based models predict better than the traditional logistic regression model.
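
One arm of the comparison above can be sketched as follows: an L1-penalised (Lasso-type) screen for a binary outcome, followed by logistic regression and gradient boosting on the retained features, compared by AUC. The data are synthetic and no MSHA variable definitions or cut-offs are reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=30, n_informative=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# L1-penalised logistic regression as a Lasso-type screen for a binary outcome.
screen = SelectFromModel(
    LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5)).fit(X_tr, y_tr)
X_tr_s, X_te_s = screen.transform(X_tr), screen.transform(X_te)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("GBDT", GradientBoostingClassifier(random_state=1))]:
    auc = roc_auc_score(y_te, model.fit(X_tr_s, y_tr).predict_proba(X_te_s)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```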

7.
Article in Chinese | WPRIM | ID: wpr-1008905

ABSTRACT

Attention level evaluation refers to evaluating a person's level of attention through observation or experimental testing, and its research results have great application value in education and teaching, intelligent driving, medical health and other fields. With their objective reliability and security, electroencephalogram signals have become one of the most important technical means to analyze and characterize attention level. At present, little review literature comprehensively summarizes the application of electroencephalogram signals in the field of attention evaluation. To this end, this paper first summarizes the research progress on attention evaluation; then the important methods for electroencephalogram-based attention evaluation are analyzed, including data preprocessing, feature extraction and selection, and attention evaluation methods; finally, the shortcomings of current work in the field are discussed and future development trends are outlined, to provide a reference for researchers in related fields.


Subjects
Humans, Reproducibility of Results, Electroencephalography
8.
Journal of Biomedical Engineering ; (6): 1126-1134, 2023.
Article in Chinese | WPRIM | ID: wpr-1008942

ABSTRACT

Due to the high complexity and subject variability of motor imagery electroencephalogram (EEG), its decoding is limited by the inadequate accuracy of traditional recognition models. To resolve this problem, a recognition model for motor imagery EEG based on the flicker noise spectrum (FNS) and weighted filter bank common spatial pattern (wFBCSP) was proposed. First, the FNS method was used to analyze the motor imagery EEG. Using the second derivative moment as the structure function, precursor time series were generated with a sliding window strategy, so that hidden dynamic information in the transition phase could be captured. Then, based on the frequency band characteristics of the signal, features of the transition phase precursor time series and of the reaction phase series were extracted by wFBCSP, yielding features representing the transition and reaction phases. To make the selected features adapt to subject variability and generalize better, the minimum redundancy maximum relevance algorithm was further used for feature selection. Finally, a support vector machine was used as the classifier. In motor imagery EEG recognition, the proposed method yielded an average accuracy of 86.34%, higher than the comparison methods. Thus, the proposed method provides a new idea for decoding motor imagery EEG.
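
This is not the authors' FNS/wFBCSP pipeline, but a minimal common-spatial-pattern baseline in the same spirit: spatial filtering of motor imagery epochs, a feature-selection step (mutual information is used here as a simple stand-in for mRMR), and an SVM classifier. The random arrays stand in for band-pass-filtered EEG epochs.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 16, 500))   # (trials, channels, samples): synthetic EEG
labels = rng.integers(0, 2, size=120)          # two motor imagery classes

pipe = make_pipeline(
    CSP(n_components=6, log=True),                  # spatial-pattern features per trial
    SelectKBest(mutual_info_classif, k=4),          # stand-in for mRMR feature selection
    SVC(kernel="rbf"))
print("cross-validated accuracy:", cross_val_score(pipe, epochs, labels, cv=5).mean())
```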


Subjects
Brain-Computer Interfaces, Imagination, Signal Processing, Computer-Assisted, Electroencephalography/methods, Algorithms, Spectrum Analysis
9.
Journal of Biomedical Engineering ; (6): 1160-1167, 2023.
Article in Chinese | WPRIM | ID: wpr-1008946

ABSTRACT

Heart valve disease (HVD) is one of the common cardiovascular diseases, and heart sounds are an important physiological signal for diagnosing HVDs. This paper proposed a model based on a combination of basic component features and envelope autocorrelation features to detect early HVDs. Initially, heart sound signals lasting 5 minutes were denoised by the empirical mode decomposition (EMD) algorithm and segmented. Then the basic component features and envelope autocorrelation features of the heart sound segments were extracted to construct a heart sound feature set, and the max-relevance and min-redundancy (MRMR) algorithm was utilized to select the optimal mixed feature subset. Finally, decision tree, support vector machine (SVM) and k-nearest neighbor (KNN) classifiers were trained to distinguish early HVDs from normal heart sounds, achieving a best accuracy of 99.9% on a clinical database. Normal valve, abnormal semilunar valve and abnormal atrioventricular valve heart sounds were classified with a best accuracy of 99.8%, and normal valve, single-valve abnormal and multi-valve abnormal heart sounds were classified with a best accuracy of 98.2%. On a public database, the method also achieved good overall accuracy. The results demonstrate that the proposed method has important value for the clinical diagnosis of early HVDs.
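
To make the "envelope autocorrelation feature" idea concrete, a small sketch is shown below: compute the Hilbert envelope of a heart-sound segment and summarise its autocorrelation as a few scalar features. The signal is synthetic and the specific feature definitions are illustrative guesses, not the paper's.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_autocorr_features(x, n_lags=50):
    env = np.abs(hilbert(x))                         # amplitude envelope of the segment
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[env.size - 1:]
    ac = ac / ac[0]                                  # autocorrelation normalised at lag 0
    lags = ac[1:n_lags + 1]
    return np.array([lags.max(), lags.argmax() + 1, lags.mean()])  # peak, peak lag, mean

fs = 2000
t = np.arange(0, 2, 1 / fs)
# Toy heart-sound-like segment: short 40 Hz bursts repeating at ~1.2 Hz.
segment = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)
print(envelope_autocorr_features(segment))
```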


Subjects
Humans, Heart Sounds, Heart Valve Diseases/diagnosis, Algorithms, Support Vector Machine, Signal Processing, Computer-Assisted
10.
Article in Chinese | WPRIM | ID: wpr-1027434

ABSTRACT

Objective: To explore the feasibility and validity of constructing an intensity-modulated radiotherapy gamma pass rate prediction model by combining SHAP values with the extreme gradient boosting tree (XGBoost) algorithm for feature selection, and to provide a corresponding interpretation of the model. Methods: The dose verification results of 196 patients with pelvic tumors receiving fixed-field intensity-modulated radiotherapy, obtained with modality-based measurements using a gamma pass rate criterion of 3%/2 mm and a 10% dose threshold at Hunan Provincial Tumor Hospital from November 2020 to November 2021, were retrospectively analyzed. Prediction models were constructed by extracting radiomic features from the dose files and using SHAP values combined with the XGBoost algorithm for feature filtering. Four machine learning classification models were constructed with 50, 80, 110 and 140 features, respectively. The area under the receiver operating characteristic curve (AUC), recall rate and F1 score were calculated to assess the classification performance of the prediction models. Results: The prediction model constructed with 110 features selected by SHAP values achieved an AUC of 0.81, a recall of 0.93 and an F1 score of 0.82, all better than the other three models. Conclusion: For intensity-modulated radiotherapy of pelvic tumors, SHAP values can be combined with the XGBoost algorithm to select an optimal subset of radiomic features for constructing gamma pass rate prediction models, and SHAP values can interpret the model output, which may help in understanding the predictions of machine learning models.
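
A hedged sketch of SHAP-based feature ranking with XGBoost: train a classifier, rank features by mean absolute SHAP value, and keep a top-k subset. The synthetic feature matrix and the choice of k = 110 (the subset size reported above) only illustrate the mechanics, not the study's actual features.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic stand-in for the radiomic feature matrix extracted from dose files.
X, y = make_classification(n_samples=196, n_features=300, n_informative=30, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)            # (n_samples, n_features) for binary XGBoost
importance = np.abs(shap_values).mean(axis=0)     # mean |SHAP value| per feature

k = 110                                           # subset size reported in the abstract
top_k = np.argsort(importance)[::-1][:k]
print("top features by mean |SHAP|:", top_k[:10], "...")
```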

11.
Rev. mex. ing. bioméd ; 44(spe1): 38-52, Aug. 2023. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1565605

ABSTRACT

Abstract It is estimated that depression affects more than 300 million people worldwide. Unfortunately, current methods of psychiatric evaluation require a great effort on the part of clinicians to collect complete information. The aim of this paper is to determine the optimal time intervals for detecting depression using genetic algorithms and machine learning techniques, from motor activity readings of 55 participants taken at one-minute intervals over one week. The time intervals with the best performance in detecting depression were selected by applying Genetic Algorithms (GA). Methodology. 385 observations of the study participants were evaluated, obtaining an accuracy of 83.0% with Logistic Regression (LR). Conclusion. There is a relationship between motor activity and depression, since depression can be detected from motor activity using machine learning techniques. However, the choice of time-interval variables is a key factor: the same intervals may give good or poor results at different times because patients' motor activity varies. The results nevertheless offer a first approximation toward tools that support the timely and objective diagnosis of depression.
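
A toy version of the interval-selection idea might look like the following: each chromosome is a binary mask over candidate time intervals, and fitness is the cross-validated accuracy of a logistic-regression classifier on the selected intervals. The population size, mutation rate and synthetic data are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns stand in for candidate time intervals summarising motor activity.
X, y = make_classification(n_samples=385, n_features=24, n_informative=6, random_state=0)
n_intervals = X.shape[1]

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

pop = rng.integers(0, 2, size=(20, n_intervals))           # initial population of masks
for _ in range(15):                                        # a few generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                # keep the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, n_intervals)                 # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_intervals) < 0.05              # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents] + children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected intervals:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))
```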



12.
Article in Chinese | WPRIM | ID: wpr-928858

ABSTRACT

This paper reviews recent studies on the recognition and evaluation of facial paralysis based on artificial intelligence. The research methods can be divided into two categories: facial paralysis evaluation based on manually selected feature values from patients' facial images, and facial paralysis evaluation based on neural networks applied to patients' facial images. The analysis shows that the manual feature selection approach is suitable for small sample sizes, but its ability to distinguish adjacent grades of facial paralysis needs further optimization. The neural network approach can distinguish neighboring grades of facial paralysis relatively well, but it requires a larger sample size. Both approaches have good prospects. Manually selecting the features most closely related to the evaluation scale and extracting time-domain features may be a common direction of development for improving the accuracy of facial paralysis evaluation.


Subjects
Humans, Artificial Intelligence, Face, Facial Paralysis/diagnosis, Neural Networks, Computer
13.
Article in English | WPRIM | ID: wpr-987199

ABSTRACT

Background: Cardiovascular diseases are among the top three leading causes of mortality in the Philippines, accounting for 17.8% of total deaths. Lifestyle-related habits such as alcohol consumption, smoking, poor diet and nutrition, high sedentary behavior, overweight, and obesity have been increasingly implicated in the high rates of heart disease among Filipinos, placing a significant burden on the country's healthcare system. The objective of this study was to predict the presence of heart disease using various machine learning algorithms (support vector machine, naïve Bayes, random forest, logistic regression, decision tree, and adaptive boosting) evaluated on an anonymized, publicly available cardiovascular disease dataset. Methodology: Various machine learning algorithms were applied to an anonymized, publicly available cardiovascular dataset from a machine learning data repository (IEEE Dataport). A web-based application named Heart Alert was developed based on the best machine learning model to predict the risk of developing heart disease. The effects of different optimization techniques on the classification performance of the machine learning algorithms were assessed, both for imputation methods (mean, median, mode, and multiple imputation by chained equations) and for the feature selection method (recursive feature elimination). All simulation experiments were implemented in Python 3.8 and its machine learning libraries (Scikit-learn, Keras, Tensorflow, Pandas, Matplotlib, Seaborn, NumPy). Results: The support vector machine without imputation and feature selection obtained the highest performance metrics (90.2% accuracy, 87.7% sensitivity, 93.6% specificity, 94.9% precision, 91.2% F1-score and an area under the receiver operating characteristic curve of 0.902) and was used to implement the heart disease prediction system (Heart Alert). Following very closely were random forest with mean or median imputation and logistic regression with mode imputation, all without feature selection, which also performed well. Conclusion: The performance of the best four machine learning models suggests that, for this dataset, imputation of missing values may or may not be done. Likewise, recursive feature elimination may not be needed, as all variables appear to be important for heart disease prediction. An early, accurate diagnosis leading to prompt intervention is crucial, as it improves patients' quality of life and diminishes the risk of cardiac events.
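
The imputation-versus-feature-selection comparison can be illustrated with a small scikit-learn sketch: the same classifier is trained with different imputers, with and without recursive feature elimination. The dataset, missingness pattern and parameter values are synthetic assumptions, not the IEEE Dataport data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.feature_selection import RFE
from sklearn.impute import IterativeImputer, SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6, random_state=0)
X[rng.random(X.shape) < 0.05] = np.nan           # inject 5% missing values

imputers = {"mean": SimpleImputer(strategy="mean"),
            "median": SimpleImputer(strategy="median"),
            "mode": SimpleImputer(strategy="most_frequent"),
            "MICE-like": IterativeImputer(random_state=0)}

for name, imp in imputers.items():
    plain = make_pipeline(imp, StandardScaler(), SVC())
    with_rfe = make_pipeline(imp, StandardScaler(),
                             RFE(LogisticRegression(max_iter=1000), n_features_to_select=8),
                             SVC())
    print(name,
          "| no RFE:", cross_val_score(plain, X, y, cv=5).mean().round(3),
          "| RFE:", cross_val_score(with_rfe, X, y, cv=5).mean().round(3))
```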


Subjects
Machine Learning, Support Vector Machine
14.
Article in Chinese | WPRIM | ID: wpr-1015784

ABSTRACT

Early diagnosis of cancer can significantly improve the survival rate of cancer patients, especially patients with hepatocellular carcinoma (HCC). Machine learning is an effective tool in cancer classification. How to select low-dimensional feature subsets with high classification accuracy from complex, high-dimensional cancer datasets is a difficult problem in cancer classification. In this paper, we propose a novel feature selection method, SC-BPSO: a two-stage feature selection method implemented by combining a filter based on the Spearman correlation coefficient and the chi-square independence test with a wrapper based on binary particle swarm optimization (BPSO). It has been applied to the classification of high-dimensional cancer data to distinguish normal samples from HCC samples. The dataset in this paper consists of 130 liver tissue microRNA sequence samples (64 hepatocellular carcinoma, 66 normal liver tissue) from the National Center for Biotechnology Information (NCBI) and the European Bioinformatics Institute (EBI). First, the liver tissue microRNA sequence data were preprocessed to extract three types of features: microRNA expression, editing level and post-editing expression. Then, the parameters of the SC-BPSO algorithm were adjusted for the liver cancer classification to select a subset of key features. Finally, classifiers were used to establish classification models and predict the results, and the classification results were compared with the feature subsets selected by an information gain filter, an information gain ratio filter and a BPSO wrapper feature selection algorithm using the same classifier. Using the feature subset selected by the SC-BPSO algorithm, the classification accuracy reached 98.4%. The experimental results showed that, compared with the other three feature selection algorithms, the SC-BPSO algorithm can effectively find feature subsets of relatively small size and higher accuracy. This may have important implications for cancer classification with a small number of samples and high-dimensional features.
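
A sketch of the filter stage only (the BPSO wrapper stage is omitted): rank each candidate feature by its absolute Spearman correlation with the class label and by a chi-square independence test, and keep features that pass both screens. The thresholds, the median-split contingency table and the data are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import chi2_contingency, spearmanr

rng = np.random.default_rng(0)
X = rng.random((130, 200))                       # 130 samples x 200 candidate features
y = rng.integers(0, 2, size=130)                 # HCC vs normal labels (synthetic)

keep = []
for j in range(X.shape[1]):
    rho, _ = spearmanr(X[:, j], y)
    # Chi-square independence test on a 2x2 table from a median split of the feature.
    high = X[:, j] > np.median(X[:, j])
    table = np.array([[np.sum(high & (y == 1)), np.sum(high & (y == 0))],
                      [np.sum(~high & (y == 1)), np.sum(~high & (y == 0))]])
    _, p, _, _ = chi2_contingency(table)
    if abs(rho) > 0.2 and p < 0.05:              # arbitrary screening thresholds
        keep.append(j)
print(len(keep), "features pass the filter stage; the wrapper (BPSO) stage would follow")
```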

15.
International Eye Science ; (12): 1644-1648, 2021.
Article in Chinese | WPRIM | ID: wpr-886453

ABSTRACT

AIM: To build a prediction model of dry eye with data mining techniques. METHODS: From March 2020 to January 2021, 218 patients (436 eyes) with dry eye were selected as the dry eye group, and 212 patients (424 eyes) without dry eye were selected as the control group. The Schirmer Ⅰ test (SⅠt), fluorescein staining tear film break-up time (FBUT), non-contact tear film break-up time (NI-BUT), tear meniscus height (TMH), corneal fluorescein staining (FL) and meibomian gland function score (MG-SCORE) were performed in both groups. A total of 100 samples (200 eyes) were randomly selected from each of the dry eye group and the control group to form a test set of 200 samples (400 eyes). The remaining 118 samples (236 eyes) in the dry eye group and 112 samples (224 eyes) in the control group were used as the training set. The correlation feature searching (CFS) feature selection algorithm was used to search for the factors related to the detection of dry eye. C4.5, Random Forest, Random Tree, Naïve Bayes, KNN, SVM, Decision Stump and Bagging methods were used to construct prediction models. RESULTS: Using the CFS feature selection algorithm, an optimal feature subset including SⅠt, NI-BUT, TMH and FL was obtained. Based on these four features, eight machine learning algorithms were employed to build prediction models. The prediction accuracies were all higher than 75%. Among the eight prediction models, the Random Forest model had the highest prediction accuracy, reaching 91.8% and 88.3%, respectively, with a total prediction accuracy of 90.1%. In addition, single-factor modeling showed that FL and NI-BUT had the highest prediction accuracy, exceeding 74%. CONCLUSION: Random Forest can be considered a stable algorithm with good generalization for building a dry eye prediction model. NI-BUT and FL have a strong correlation with dry eye and can be considered as standards for the clinical examination of dry eye.
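
A hedged sketch of the modelling step: a random-forest prediction of dry eye from the four selected indices (SⅠt, NI-BUT, TMH, FL). The values below are fabricated placeholders, not study data; only the structure of the pipeline is illustrated.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
features = pd.DataFrame({
    "SIt":    rng.normal(10, 5, n),       # Schirmer I test (mm/5 min), synthetic values
    "NI_BUT": rng.normal(9, 4, n),        # non-contact tear film break-up time (s)
    "TMH":    rng.normal(0.25, 0.08, n),  # tear meniscus height (mm)
    "FL":     rng.integers(0, 4, n),      # corneal fluorescein staining score
})
labels = rng.integers(0, 2, size=n)       # dry eye vs control (synthetic labels)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, features, labels, cv=5).mean().round(3))
```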

16.
Article in Chinese | WPRIM | ID: wpr-888624

ABSTRACT

OBJECTIVE: Based on the digital image features of corneal opacity, a multi-class support vector machine (SVM) model was established to explore an objective method for quantifying corneal opacity. METHODS: Digital images of the corneas of dead pigs were collected, some of the color and texture features were extracted according to previous experience, and the SVM multi-classification model was established. The test results of the model were evaluated by precision, sensitivity and … RESULTS: In the classification of corneal opacity, the highest … CONCLUSIONS: The SVM multi-classification model can classify the degree of corneal opacity.


Subjects
Animals, Corneal Opacity, Support Vector Machine, Swine
17.
Braz. arch. biol. technol ; 64: e21210240, 2021. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1355817

ABSTRACT

Abstract Medical data classification is an ambitious task in the domain of medical informatics; the aim of easing the human burden of interpreting medical datasets calls for well-designed classification schemes. Medical data classification is the main focus of this paper, where a Decision Tree based Salp Swarm Optimization (DT-SWO) algorithm is proposed. After pre-processing, a hybrid feature selection method selects the medical data features, the high-dimensional features are reduced by Discriminant Independent Component Analysis (DICA), and DT-SWO classifies the most relevant class of medical data. Four datasets, namely Leukemia, Diffuse Large B-cell Lymphoma (DLBCL), Lung cancer and Colon, relating to four diseases (heart, liver, cancer and lungs), were collected from the UCI machine learning repository. The experimental outcomes demonstrate that the proposed DT-SWO algorithm is better suited to medical data classification than the other algorithms.

18.
Neuroscience Bulletin ; (6): 985-996, 2020.
Article in English | WPRIM | ID: wpr-828333

ABSTRACT

Hydrocephalus is often treated with a cerebrospinal fluid shunt (CFS) to drain excess cerebrospinal fluid from the brain. However, it is very difficult to distinguish whether ventricular enlargement is due to hydrocephalus or to other causes, such as brain atrophy after brain damage and surgery. A careful evaluation of the level of consciousness, along with a continuous drainage test of the lumbar cistern, is thus clinically important before the decision for a CFS is made. We studied 32 patients with secondary mild hydrocephalus and different levels of consciousness, who received T1 and diffusion tensor imaging magnetic resonance scans before and after lumbar cerebrospinal fluid drainage. We applied a novel machine-learning method to find the most discriminative features in the multi-modal neuroimages, and then built a regression model to predict the JFK Coma Recovery Scale-Revised (CRS-R) scores and quantify the level of consciousness. The experimental results showed that our method not only approximated the CRS-R scores but also tracked temporal changes in individual patients. The regression model has high potential for the evaluation of consciousness in clinical practice.
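
A rough regression sketch in the spirit of this study: select the most discriminative imaging-derived features and regress a CRS-R-like consciousness score, reporting cross-validated error. The feature matrix, score range and choice of support vector regression are assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 500))        # 32 patients x imaging-derived features (synthetic)
y = rng.uniform(0, 23, size=32)           # CRS-R-like scores (total scale 0-23)

model = make_pipeline(StandardScaler(), SelectKBest(f_regression, k=20), SVR(C=1.0))
mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
print("cross-validated MAE:", round(mae, 2))
```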

20.
Article in Chinese | WPRIM | ID: wpr-828121

ABSTRACT

How to extract highly discriminative features that help classification from complex resting-state fMRI (rs-fMRI) data is the key to improving the accuracy of recognizing brain diseases such as schizophrenia. In this work, we use a weighted sparse model for brain network construction and utilize the Kendall correlation coefficient (KCC) to extract discriminative connectivity features for schizophrenia classification, which is conducted with a linear support vector machine. Experimental results based on the rs-fMRI of 57 schizophrenia patients and 64 healthy controls show that our proposed method is more effective than other competing methods, achieving a significantly higher classification accuracy of 81.82%. Specifically, compared with traditional network construction methods (Pearson's correlation and sparse representation) and commonly used feature selection methods (two-sample t-test and least absolute shrinkage and selection operator (Lasso)), the proposed algorithm can more effectively extract the discriminative connectivity features between schizophrenia patients and healthy controls and further improve the classification accuracy. At the same time, the discriminative connectivity features extracted in this work could serve as potential clinical biomarkers to assist the identification of schizophrenia.
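
A minimal stand-in for the connectivity-feature step: rank functional-connectivity features by the Kendall correlation coefficient between each feature and the group label, keep the top-ranked ones, and classify with a linear SVM. The data are synthetic, and for brevity the selection is done on all samples rather than inside each cross-validation fold as it should be in practice.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_subjects, n_edges = 121, 4005            # e.g. upper triangle of a 90-node network
X = rng.standard_normal((n_subjects, n_edges))
y = rng.integers(0, 2, size=n_subjects)    # patients vs healthy controls (synthetic)

# Kendall correlation coefficient between each connectivity feature and the label.
kcc = np.array([abs(kendalltau(X[:, j], y)[0]) for j in range(n_edges)])
top = np.argsort(kcc)[::-1][:100]          # keep the 100 most discriminative edges

clf = LinearSVC(C=1.0, max_iter=10000)
print("cross-validated accuracy:", cross_val_score(clf, X[:, top], y, cv=5).mean().round(3))
```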


Subjects
Humans, Algorithms, Brain, Brain Mapping, Magnetic Resonance Imaging, Schizophrenia, Diagnostic Imaging