1.
Comput Intell Neurosci ; 2022: 9737511, 2022.
Article in English | MEDLINE | ID: mdl-35528349

ABSTRACT

The brain is the most complex organ in the human body, and indeed in the entire biological world. Recent studies show that properly characterising the EEG signal yields clear classification accuracy for human activities, which distinguishes this work from previous research. Electroencephalography (EEG) recordings contain distinct brain-wave patterns associated with common activities such as sleeping, reading, and watching a movie; these activities produce emotion-related signals in several frequency bands, including the Delta, Theta, and Alpha bands. Because EEG recordings are nonstationary, time-frequency-domain techniques are more likely to produce good results. The ability of time-frequency representation to identify neural rhythms at different scales has been shown to be a legitimate EEG marker and a powerful tool for investigating small-scale neural brain oscillations. This paper presents, for the first time, a frequency analysis of EEG dynamics in this setting. An augmenting decomposition combining the "Versatile Inspiring Wavelet Transform" and the "Adaptive Wavelet Transform" is applied to the collected EEG rhythms to provide adequate temporal and spectral resolution. Data are collected from children through wearable sensors, and the signal is conveyed over the Internet of Things (IoT). The proposed approach is evaluated on two EEG datasets, one recorded in a noisy (i.e., nonshielded) environment and the other in a shielded environment.
The results illustrate the resilience of the proposed training strategy; the method therefore contributes to identifying specific brain activity in the children taking part in the research. The performance of the proposed system was evaluated in MATLAB simulation on measures such as filtering response, accuracy, precision, recall, and F-measure.
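As a minimal illustration of the band separation the abstract describes (not the paper's "Versatile Inspiring Wavelet Transform", whose details are not given here), a crude FFT-based split of an EEG trace into Delta (0.5-4 Hz), Theta (4-8 Hz), and Alpha (8-13 Hz) bands might look like the following sketch; the sampling rate and synthetic signal are assumptions for demonstration only:

```python
import numpy as np

def extract_band(signal, fs, low, high):
    """Zero out FFT coefficients outside [low, high] Hz and invert.

    A crude band-pass split; a real pipeline would use wavelets or
    properly designed filters as the paper proposes.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return np.fft.irfft(spectrum * mask, n=len(signal))

fs = 128  # hypothetical sampling rate in Hz
t = np.arange(0, 4, 1.0 / fs)
# Synthetic trace: 2 Hz (Delta) + 6 Hz (Theta) + 10 Hz (Alpha) components
eeg = (np.sin(2 * np.pi * 2 * t)
       + np.sin(2 * np.pi * 6 * t)
       + np.sin(2 * np.pi * 10 * t))

bands = {
    "delta": extract_band(eeg, fs, 0.5, 4.0),
    "theta": extract_band(eeg, fs, 4.0, 8.0),
    "alpha": extract_band(eeg, fs, 8.0, 13.0),
}
```

Each extracted band retains only the rhythm in its range; a time-frequency method such as the wavelet decomposition used in the paper additionally localises these rhythms in time, which a global FFT mask cannot do.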


Subject(s)
Internet of Things , Wearable Electronic Devices , Acoustics , Algorithms , Child , Child Health , Electroencephalography/methods , Humans , Vocabulary
2.
Comput Intell Neurosci ; 2022: 8777355, 2022.
Article in English | MEDLINE | ID: mdl-35378817

ABSTRACT

Sign language is the native language of deaf people, used in daily life to facilitate communication among them, and this study targets that need. Sign language communicates through the arms and hands, and it varies from person to person and region to region, so there is no single standard: American, British, Chinese, and Arabic sign languages are all distinct. In this study we trained a model to classify Arabic sign language, which comprises 32 Arabic alphabet sign classes; in images, the sign is detected from the pose of the hand. We propose a framework of two CNN models, each trained individually on the training set, whose final predictions are ensembled to achieve higher accuracy. The dataset used in this study, ArSL2018, was released in 2019 at Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia. The main contributions are resizing the images to 64 × 64 pixels, converting them from grayscale to three-channel images, and applying a median filter, which acts as low-pass filtering to smooth the images, reduce noise, and make the model more robust against overfitting. The preprocessed image is then fed into two different models, ResNet50 and MobileNetV2, implemented together. After applying several preprocessing techniques, different hyperparameters for each model, and different data augmentation techniques, we achieved an accuracy of about 97% on the test set for the whole dataset.
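As a rough, library-agnostic sketch of the preprocessing and ensembling steps described above (the authors' exact resizing and filtering code is not given; every helper name below is hypothetical, and the probability vectors stand in for real model outputs), the pipeline might look like:

```python
import numpy as np

def resize_nearest(img, size=64):
    """Nearest-neighbour resize of a 2-D grayscale image to size x size."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows[:, None], cols]

def median_filter3(img):
    """3x3 median filter; edges handled by padding with edge values."""
    padded = np.pad(img, 1, mode="edge")
    windows = [padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

def preprocess(gray):
    """Resize to 64x64, median-filter, then stack to three channels."""
    img = median_filter3(resize_nearest(gray, 64))
    return np.stack([img, img, img], axis=-1)  # shape (64, 64, 3)

def ensemble(p1, p2):
    """Average the two models' class probabilities, take the argmax."""
    return int(np.argmax((p1 + p2) / 2.0))

gray = np.random.default_rng(0).integers(0, 256, (100, 120)).astype(float)
x = preprocess(gray)

# Hypothetical softmax outputs over the 32 ArSL2018 classes,
# standing in for the real ResNet50 and MobileNetV2 predictions
p_resnet = np.full(32, 1 / 32); p_resnet[5] += 0.2
p_mobilenet = np.full(32, 1 / 32); p_mobilenet[5] += 0.1
p_resnet /= p_resnet.sum(); p_mobilenet /= p_mobilenet.sum()
pred = ensemble(p_resnet, p_mobilenet)
```

Averaging the two models' probability vectors is one common ensembling choice; the abstract only says the predictions were "ensembled", so other schemes (e.g., weighted averaging or majority voting) are equally plausible readings.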


Subject(s)
Communication Aids for Disabled , Gestures , Computers , Humans , Language , Sign Language , United States