1.
J Neural Eng; 18(2), 2021 Feb 25.
Article in English | MEDLINE | ID: mdl-33418548

ABSTRACT

Objective. The novelty of this study lies in the exploration of several new approaches to pre-processing brainwave signals, wherein statistical features are extracted and then formatted as visual images based on the order in which dimensionality reduction algorithms select them. These data are then treated as visual input for 2D and 3D convolutional neural networks (CNNs), which further extract 'features of features'.

Approach. Statistical features derived from three electroencephalography (EEG) datasets are presented in visual space and processed in 2D and 3D space as pixels and voxels, respectively. Three datasets are benchmarked: mental attention states and emotional valences from the four 10-20 electrodes TP9, AF7, AF8 and TP10, and eye state data from 64 electrodes. In each case, 729 features are selected through three selection methods to form 27 × 27 images and 9 × 9 × 9 cubes from the same datasets. CNNs engineered for the 2D and 3D representations learn to convolve useful graphical features from the data.

Main results. A 70/30 split shows that the strongest feature-selection methods for classification accuracy are One Rule for attention state and Relative Entropy for emotional state, both in 2D. For the eye state dataset, 3D space is best, with features selected by Symmetrical Uncertainty. Finally, 10-fold cross validation is used to train the best topologies. The final best 10-fold results are 97.03% for attention state (2D CNN), 98.4% for emotional state (3D CNN), and 97.96% for eye state (3D CNN).

Significance. The findings of the framework presented in this work show that CNNs can successfully convolve useful features from a set of pre-computed statistical temporal features derived from raw EEG waves. The high performance of the K-fold validated algorithms argues that the features learnt by the CNNs hold useful knowledge for classification in addition to the pre-computed features.
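The data-shaping idea reads simply in code: since 729 = 27² = 9³, one ranked feature vector can be reshaped into either a 2D image or a 3D cube. Below is a minimal, hypothetical Keras sketch of that step on random placeholder data; the datasets, the feature-selection rankings, and the tuned CNN topologies from the paper are not reproduced here.

```python
# Minimal sketch (not the paper's code): a 729-dimensional statistical
# feature vector reshapes into both a 27x27 image and a 9x9x9 cube,
# since 729 = 27^2 = 9^3. All data here are random placeholders.
import numpy as np
from tensorflow.keras import layers, models

n_windows, n_features = 1000, 729
# Placeholder: pre-computed statistical features per EEG window, assumed
# already ordered by the chosen selection score (e.g. One Rule).
features = np.random.rand(n_windows, n_features).astype("float32")
labels = np.random.randint(0, 3, size=n_windows)  # e.g. attention states

images = features.reshape(-1, 27, 27, 1)   # pixels for a 2D CNN
cubes = features.reshape(-1, 9, 9, 9, 1)   # voxels for a 3D CNN

# Illustrative 2D CNN; the tuned topologies in the paper will differ.
cnn2d = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(27, 27, 1)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),
])
cnn2d.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
cnn2d.fit(images, labels, epochs=5, validation_split=0.3)  # 70/30 split
```

A 3D variant would swap in `Conv3D`/`MaxPooling3D` over `cubes`; the point of the sketch is only that the same selected feature vector feeds both representations.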


Subject(s)
Electroencephalography; Neural Networks, Computer; Algorithms; Electroencephalography/methods; Emotions; Research Design
2.
PLoS One; 15(10): e0241332, 2020.
Article in English | MEDLINE | ID: mdl-33112931

ABSTRACT

In this work we present a three-stage machine learning strategy for country-level risk classification based on countries that are reporting COVID-19 information. K% binning discretisation (K = 25) is used to create four risk groups of countries for each of the risk of transmission (coronavirus cases per million population), risk of mortality (coronavirus deaths per million population), and risk of inability to test (coronavirus tests per million population). The four risk groups produced by K% binning are labelled 'low', 'medium-low', 'medium-high', and 'high'. Coronavirus-related data are then removed, and the attributes for predicting the three types of risk are the geopolitical and demographic data describing each country; thus, the class labels are calculated from coronavirus data, but the input attributes are country-level information independent of coronavirus data. The three four-class classification problems are then explored and benchmarked through leave-one-country-out cross validation to find the strongest models: a Stack of Gradient Boosting and Decision Tree algorithms for risk of transmission, a Stack of Support Vector Machine and Extra Trees for risk of mortality, and a Gradient Boosting algorithm for risk of inability to test. Notably, high risk of inability to test is often coupled with low risks of transmission and mortality, so the risk of inability to test should be interpreted first, before the predicted transmission and mortality risks are considered. Finally, the approach is applied to more recent data from September 2020, where weaker results are observed; the growth of international collaboration reduces the predictive value of country-level attributes, suggesting that similar machine learning approaches are most useful early on, before such situations unfold.
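As a rough illustration of the labelling stage, the sketch below applies a four-bin discretisation with pandas to placeholder per-million metrics. Whether K% binning means equal-frequency or equal-width bins is not stated in the abstract, so equal-frequency bins (`pd.qcut`) are an assumption here, as are the column names and the inversion of the testing metric.

```python
# Hypothetical sketch of the K% binning labelling stage (K = 25), read
# here as four equal-frequency bins via pd.qcut; column names, data, and
# the inversion of the testing metric are all assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "country": [f"country_{i}" for i in range(100)],
    "cases_per_million": rng.lognormal(5, 1, 100),
    "deaths_per_million": rng.lognormal(3, 1, 100),
    "tests_per_million": rng.lognormal(7, 1, 100),
})

risk_labels = ["low", "medium-low", "medium-high", "high"]
df["transmission_risk"] = pd.qcut(df["cases_per_million"], 4, labels=risk_labels)
df["mortality_risk"] = pd.qcut(df["deaths_per_million"], 4, labels=risk_labels)
# Assumed inversion: fewer tests per million implies a higher risk of
# inability to test.
df["testing_risk"] = pd.qcut(df["tests_per_million"], 4,
                             labels=risk_labels[::-1])
```

For the benchmarking stage, the reported strongest model for risk of transmission, a Stack of Gradient Boosting and Decision Tree, can be sketched with scikit-learn under leave-one-out cross validation. The feature matrix below is random placeholder data standing in for the geopolitical and demographic attributes, which the abstract does not enumerate.

```python
# Hypothetical sketch of the benchmarking stage for risk of transmission:
# a Stack of Gradient Boosting and Decision Tree evaluated with
# leave-one-country-out cross validation on placeholder data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((100, 5))          # placeholder country-level attributes
y = rng.integers(0, 4, size=100)  # the four binned risk classes

stack = StackingClassifier(estimators=[
    ("gb", GradientBoostingClassifier()),
    ("dt", DecisionTreeClassifier()),
])
scores = cross_val_score(stack, X, y, cv=LeaveOneOut())
print("leave-one-country-out accuracy:", scores.mean())
```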


Subject(s)
Betacoronavirus; Coronavirus Infections/epidemiology; Disaster Planning; Machine Learning; Models, Theoretical; Pandemics; Pneumonia, Viral/epidemiology; Risk Assessment/methods; Algorithms; COVID-19; COVID-19 Testing; Classification; Clinical Laboratory Techniques; Coronavirus Infections/diagnosis; Coronavirus Infections/mortality; Coronavirus Infections/transmission; Decision Trees; Forecasting; Global Health; Humans; International Cooperation; Pneumonia, Viral/diagnosis; Pneumonia, Viral/mortality; Pneumonia, Viral/transmission; Reagent Kits, Diagnostic/supply & distribution; SARS-CoV-2; Support Vector Machine
3.
Sensors (Basel); 20(18), 2020 Sep 09.
Article in English | MEDLINE | ID: mdl-32917024

ABSTRACT

In this work, we show that a late fusion approach to multimodality in sign language recognition improves the overall ability of the model compared with the singular approaches of image classification (88.14%) and Leap Motion data classification (72.73%). Using a large synchronous dataset of 18 BSL gestures collected from multiple subjects, two deep neural networks are benchmarked and compared to derive the best topology for each: the vision model is implemented by a convolutional neural network and an optimised artificial neural network, and the Leap Motion model by an evolutionary search of artificial neural network topologies. Next, the two best networks are fused for synchronised processing, which yields a better overall result (94.44%), as complementary features are learnt in addition to the original task. The hypothesis is further supported by applying the three models to a set of completely unseen data, where the multimodality approach achieves the best results relative to the single-sensor methods. When transfer learning is performed with weights trained on British Sign Language, all three models outperform standard random weight initialisation when classifying American Sign Language (ASL), and the best overall model for ASL classification is the transfer-learning multimodality approach, which scores 82.55% accuracy.
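The late-fusion idea can be sketched as a two-branch Keras model: a small CNN branch for camera frames and a dense branch for Leap Motion features, joined near the output. Input shapes, layer sizes, and branch depths below are illustrative assumptions; the evolved topologies from the paper are not reproduced.

```python
# Hypothetical late-fusion sketch: two modality branches concatenated
# before a shared softmax over the 18 BSL gestures. Shapes are assumed.
from tensorflow.keras import layers, models

image_in = layers.Input(shape=(64, 64, 3))   # camera frame (assumed size)
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(64, activation="relu")(x)

leap_in = layers.Input(shape=(40,))          # Leap Motion features (assumed)
y = layers.Dense(64, activation="relu")(leap_in)

# Late fusion: concatenate the two learnt representations, then classify
# the 18 BSL gestures from the joint vector.
fused = layers.concatenate([x, y])
out = layers.Dense(18, activation="softmax")(fused)

model = models.Model(inputs=[image_in, leap_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```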


Subject(s)
Machine Learning; Neural Networks, Computer; Sign Language; Computers; Humans; Movement; United Kingdom; United States
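The transfer-learning comparison in the abstract amounts to initialising an ASL network from BSL-trained weights rather than at random. The sketch below shows one way to express that step; the data, layer sizes, and ASL class count are placeholders, and the paper's actual multimodal networks are not reproduced.

```python
# Hypothetical transfer-learning sketch: copy hidden-layer weights from a
# BSL-trained network into a fresh ASL network instead of random
# initialisation. All data and sizes are placeholders.
import numpy as np
from tensorflow.keras import layers, models

def build_model(n_classes):
    m = models.Sequential([
        layers.Dense(64, activation="relu", input_shape=(40,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return m

rng = np.random.default_rng(0)
bsl = build_model(18)  # 18 BSL gestures, as in the abstract
bsl.fit(rng.random((200, 40)), rng.integers(0, 18, 200), epochs=2, verbose=0)

asl = build_model(18)  # placeholder ASL class count (not stated above)
for src, dst in zip(bsl.layers[:-1], asl.layers[:-1]):
    dst.set_weights(src.get_weights())  # transfer instead of random init
asl.fit(rng.random((200, 40)), rng.integers(0, 18, 200), epochs=2, verbose=0)
```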