Results 1 - 8 of 8
1.
Sci Data ; 10(1): 132, 2023 03 11.
Article in English | MEDLINE | ID: mdl-36906700

ABSTRACT

Human Muscular Manipulability is a metric that measures the comfort of a specific pose and can be used for a variety of healthcare-related applications. For this reason, we introduce KIMHu: a Kinematic, Imaging and electroMyography dataset for Human muscular manipulability index prediction. The dataset comprises images, depth maps, skeleton tracking data, electromyography recordings and three different Human Muscular Manipulability indices for 20 participants performing different physical exercises with their arm. The methodology followed to acquire and process the data is also presented to allow future replication. A specific analysis framework for Human Muscular Manipulability is proposed in order to provide benchmarking tools based on this dataset.


Subject(s)
Musculoskeletal System , Humans , Biomechanical Phenomena , Electromyography , Diagnostic Imaging
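The abstract does not define the three manipulability indices in the dataset, but a standard formulation in the literature is Yoshikawa's manipulability measure, w = sqrt(det(J J^T)), computed from the arm's kinematic Jacobian. The following is a minimal illustrative sketch of that standard measure (the toy two-link Jacobian is an assumption for demonstration, not part of KIMHu):

```python
import numpy as np

def yoshikawa_manipulability(jacobian: np.ndarray) -> float:
    """Yoshikawa's manipulability index w = sqrt(det(J @ J.T)).

    `jacobian` is the m x n kinematic Jacobian at the current pose
    (m task-space dimensions, n joints). Larger values indicate a
    pose farther from a kinematic singularity.
    """
    return float(np.sqrt(np.linalg.det(jacobian @ jacobian.T)))

def planar_2link_jacobian(q1, q2, l1=0.3, l2=0.25):
    """Toy Jacobian of a planar two-link arm (link lengths in metres)."""
    j11 = -l1 * np.sin(q1) - l2 * np.sin(q1 + q2)
    j12 = -l2 * np.sin(q1 + q2)
    j21 = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    j22 = l2 * np.cos(q1 + q2)
    return np.array([[j11, j12], [j21, j22]])

print(yoshikawa_manipulability(planar_2link_jacobian(0.4, 1.1)))
```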
2.
J Vis Exp ; (202)2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38163270

ABSTRACT

The attention level of students in a classroom can be improved through the use of Artificial Intelligence (AI) techniques. By automatically identifying the attention level, teachers can employ strategies to regain students' focus. This can be achieved through various sources of information. One source is to analyze the emotions reflected on students' faces. AI can detect emotions such as neutrality, disgust, surprise, sadness, fear, happiness, and anger. Additionally, the direction of the students' gaze can also indicate their level of attention. Another source is the students' body posture. Using cameras and deep learning techniques, posture can be analyzed to determine the level of attention; for example, students who are slouching or resting their heads on their desks may have a lower level of attention. Smartwatches distributed to the students can provide biometric and other data, including heart rate and inertial measurements, which can also serve as indicators of attention. By combining these sources of information, an AI system can be trained to identify the level of attention in the classroom. However, integrating the different types of data poses a challenge that requires creating a labeled dataset; expert input and existing studies are consulted for accurate labeling. In this paper, we propose the integration of such measurements, the creation of a dataset, and a potential attention classifier. To provide feedback to the teacher, we explore various methods, such as smartwatches or computers. Once the teacher becomes aware of attention issues, they can adjust their teaching approach to re-engage and motivate the students. In summary, AI techniques can automatically identify students' attention level by analyzing their emotions, gaze direction, body posture, and biometric data, and this information can assist teachers in optimizing the teaching-learning process.


Subject(s)
Artificial Intelligence , Students , Humans , Students/psychology , Emotions/physiology , Fear , Attention
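As an illustration of the multimodal fusion the abstract describes, the sketch below concatenates per-window features from the four sources (face emotion, gaze, posture, smartwatch signals) and trains an off-the-shelf classifier. The feature dimensions, the random stand-in data, and the choice of a random forest are assumptions for demonstration only; the paper's dataset and classifier may differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-window feature vector (real labels would come from
# expert annotation, as the paper describes):
#   7 emotion probabilities + 2 gaze angles + 3 posture scores
#   + 2 smartwatch features (heart rate, motion energy) = 14 dims.
X = rng.normal(size=(500, 14))
y = rng.integers(0, 2, size=500)          # 0 = inattentive, 1 = attentive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```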
3.
Data Brief ; 42: 108172, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35510259

ABSTRACT

In recent years, several works have addressed urban object detection from the point of view of a pedestrian. These works are intended to provide an enhanced understanding of the environment for blind and visually challenged people. The mentioned approaches mostly rely on deep learning and machine learning methods. Nonetheless, these approaches only work with direct and bright light; namely, they only perform correctly in daylight conditions. This is because deep learning algorithms require large amounts of data and the currently available datasets do not address this matter. In this work, we propose UrOAC, a dataset of urban objects captured under a range of lighting conditions, from bright daylight to low and poor night-time lighting. In the latter, objects are lit only by low ambient light, street lamps and the headlights of passing vehicles. The dataset depicts the following objects: pedestrian crosswalks, green traffic lights and red traffic lights. The annotations include the category and bounding box of each object. This dataset could be used to improve the night-time and low-light performance of any vision-based method that involves urban objects, for instance, guidance and object-detection devices for the visually challenged or self-driving and intelligent vehicles.
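The abstract states that each annotation carries a category and a bounding box but does not specify the file format. A minimal sketch of one plausible in-memory representation follows; the record layout and field names are assumptions, not UrOAC's actual schema:

```python
from dataclasses import dataclass

# Hypothetical annotation record; UrOAC's actual on-disk format may differ.
@dataclass
class UrbanObjectAnnotation:
    image_id: str
    category: str          # "crosswalk", "green_light", or "red_light"
    x_min: int
    y_min: int
    x_max: int
    y_max: int

    def area(self) -> int:
        """Bounding-box area in pixels."""
        return (self.x_max - self.x_min) * (self.y_max - self.y_min)

ann = UrbanObjectAnnotation("frame_0001", "crosswalk", 120, 340, 410, 460)
print(ann.category, ann.area())
```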

4.
Comput Intell Neurosci ; 2021: 6690590, 2021.
Article in English | MEDLINE | ID: mdl-33868399

ABSTRACT

The most common approaches to classification rely on inferring a specific class. However, every category can be naturally organized within a taxonomic tree, from the most general concept to the most specific element, which is how human knowledge is structured. This representation avoids the need to learn roughly the same features for a range of very similar categories, is easier to understand and work with, and provides a classification at each abstraction level. In this paper, we carry out an exhaustive study of different methods for multilevel classification applied to the task of classifying wild animal and plant species. Different convolutional backbones, data setups, and ensembling techniques are explored to find the model that provides the best performance. As our experiments show, the best performance on datasets arranged in a tree-like structure is achieved by a classifier with an EfficientNetB5 backbone and an input size of 300 × 300 px, followed by a multilevel classifier, together with a Multiscale Crop data augmentation process. This setup reaches 62% top-1 and 88% top-5 accuracy. The architecture could gain a further accuracy boost as part of an ensemble of cascade classifiers, but the computational demand is prohibitive for any real application.


Subject(s)
Animals, Wild , Animals , Humans
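The architecture the abstract describes (a shared EfficientNetB5 backbone at 300 × 300 px feeding one classification head per taxonomy level) can be sketched in Keras as below. The taxonomy depth, level names, and class counts are placeholders, not the paper's actual datasets, and training details are omitted:

```python
import tensorflow as tf

# Hypothetical taxonomy sizes; the paper's datasets define the real ones.
LEVEL_SIZES = {"kingdom": 4, "family": 60, "species": 800}

inputs = tf.keras.Input(shape=(300, 300, 3))
backbone = tf.keras.applications.EfficientNetB5(
    include_top=False, weights="imagenet", pooling="avg")
features = backbone(inputs)

# One softmax head per taxonomic level, all sharing the same backbone,
# so the model emits a prediction at every abstraction level.
outputs = {
    level: tf.keras.layers.Dense(n, activation="softmax", name=level)(features)
    for level, n in LEVEL_SIZES.items()
}
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss={level: "sparse_categorical_crossentropy"
                    for level in LEVEL_SIZES})
model.summary()
```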
5.
Sci Data ; 6(1): 162, 2019 08 29.
Article in English | MEDLINE | ID: mdl-31467361

ABSTRACT

In this paper, we propose a new dataset for outdoor depth estimation from single and stereo RGB images. The dataset was acquired from the point of view of a pedestrian. Currently, the most novel approaches take advantage of deep learning-based techniques, which have proven to outperform traditional state-of-the-art computer vision methods. Nonetheless, these methods require large amounts of reliable ground-truth data. Although several datasets already exist that could be used for depth estimation, almost none of them are outdoor-oriented from an egocentric point of view. Our dataset introduces a large number of high-definition pairs of color frames and corresponding depth maps from a human perspective. In addition, the proposed dataset also features human interaction and great variability of data, as shown in this work.
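For readers who want to benchmark a depth estimator on RGB-depth pairs like those described above, one widely used metric is the scale-invariant log error of Eigen et al. The sketch below is illustrative only; the abstract does not prescribe a specific evaluation metric, and the random arrays stand in for real predictions and ground truth:

```python
import numpy as np

def scale_invariant_log_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Scale-invariant log error (Eigen et al.), a common metric for
    benchmarking depth estimators against ground-truth depth maps."""
    mask = gt > 0                      # ignore pixels without ground truth
    d = np.log(pred[mask]) - np.log(gt[mask])
    return float(np.mean(d ** 2) - np.mean(d) ** 2)

# Stand-in data: a 480x640 predicted depth map and its ground truth.
pred = np.random.uniform(1.0, 50.0, size=(480, 640))
gt = np.random.uniform(1.0, 50.0, size=(480, 640))
print(scale_invariant_log_error(pred, gt))
```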

6.
Comput Intell Neurosci ; 2019: 9412384, 2019.
Article in English | MEDLINE | ID: mdl-31065258

ABSTRACT

Ambient assisted living (AAL) environments are currently a key focus of interest as an option for assisting and monitoring disabled and elderly people. These systems can improve quality of life and personal autonomy by detecting events such as entering potentially dangerous areas, potential falls, or extended stays in the same place. Nonetheless, some areas remain outside the scope of AAL systems due to camera placement. There are also sources of danger within the camera's field of view that the AAL system cannot detect, because they are relatively small, occluded, or non-static. To solve this problem, we propose the inclusion of a robot that maps such uncovered areas, looking for new potentially dangerous areas that would otherwise go unnoticed by the AAL system. The robot then sends this information to the AAL system in order to improve its performance. Experimentation in real-life scenarios successfully validates our approach.


Subject(s)
Algorithms , Delivery of Health Care , Quality of Life , Robotics , Aging , Humans , Risk
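To make the robot-to-AAL hand-off concrete, the sketch below flags hazard locations on a 2D occupancy grid so they can be forwarded to the AAL system. This is an illustrative assumption about the data exchanged, not the paper's actual implementation or message format:

```python
import numpy as np

# Hypothetical 2D occupancy grid built by the robot (0 = free, 1 = obstacle).
grid = np.zeros((100, 100), dtype=np.uint8)

def flag_danger_zones(grid, detections):
    """Mark detected hazards (e.g., small, occluded, or non-static objects
    outside camera coverage) on the map as circular danger zones."""
    danger = np.zeros_like(grid, dtype=bool)
    rr, cc = np.ogrid[:grid.shape[0], :grid.shape[1]]
    for (row, col, radius) in detections:
        danger |= (rr - row) ** 2 + (cc - col) ** 2 <= radius ** 2
    return danger

zones = flag_danger_zones(grid, [(20, 35, 3), (70, 12, 5)])
print(int(zones.sum()), "cells flagged for the AAL system")
```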
7.
Sensors (Basel) ; 19(2)2019 Jan 17.
Article in English | MEDLINE | ID: mdl-30658480

ABSTRACT

Every year, a significant number of people lose a body part in an accident, through sickness or in high-risk manual jobs. Several studies and research works have tried to reduce the constraints and risks in their lives through the use of technology. This work proposes a learning-based approach that performs gesture recognition using a surface electromyography-based device, the Myo Armband released by Thalmic Labs, a commercial device with eight non-intrusive, low-cost sensors. Using the Myo Armband, which records data at about 200 Hz, we collected a dataset of six dissimilar hand gestures from 35 able-bodied subjects. We used a gated recurrent unit network to train a system that takes as input the raw signals extracted from the surface electromyography sensors. The proposed approach obtained 99.90% training accuracy and 99.75% validation accuracy. We also evaluated the proposed system on a test set of new subjects, obtaining an accuracy of 77.85%. In addition, we report the test prediction results for each gesture separately and analyze which gestures our suggested network finds difficult to distinguish accurately with the Myo Armband. Moreover, we studied for the first time the capability of the gated recurrent unit network in gesture recognition approaches. Finally, we integrated our method into a system that is able to classify live hand gestures.


Subject(s)
Costs and Cost Analysis , Electromyography/economics , Electromyography/instrumentation , Gestures , Hand/physiology , Humans , Neural Networks, Computer , Pattern Recognition, Automated , Signal Processing, Computer-Assisted
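A minimal Keras sketch of the described setup follows: raw 8-channel sEMG windows feeding a gated recurrent unit layer and a 6-way softmax. The window length and GRU width are assumptions (the abstract gives neither), so this illustrates the architecture family rather than reproducing the paper's exact network:

```python
import tensorflow as tf

WINDOW = 200    # assumed 1 s window at the Myo's ~200 Hz sampling rate
CHANNELS = 8    # the armband's eight sEMG sensors
GESTURES = 6    # six dissimilar hand gestures in the dataset

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, CHANNELS)),   # raw sEMG, no hand-crafted features
    tf.keras.layers.GRU(64),                    # gated recurrent unit layer (width assumed)
    tf.keras.layers.Dense(GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```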
8.
Comput Intell Neurosci ; 2018: 4350272, 2018.
Article in English | MEDLINE | ID: mdl-30687398

ABSTRACT

The accelerated growth in the percentage of elderly people and of persons with brain injury-related conditions or intellectual disabilities is one of the main concerns of developed countries. These persons often require special care and even near-permanent caregivers who help them carry out daily tasks. With this issue in mind, we propose an automated schedule system deployed on a social robot. The robot keeps track of the tasks that the patient has to fulfill on a daily basis. When a task is triggered, the robot guides the patient through its completion. The system is also able to detect whether the steps are being properly carried out, issuing alerts otherwise. To do so, an ensemble of deep learning techniques is used. The schedule is customizable by the carers and authorized relatives. Our system could enhance the quality of life of the patients and improve their self-autonomy. The experimentation, which was supervised by the ADACEA foundation, validates the achievement of these goals.


Subject(s)
Brain Injuries/physiopathology , Cognitive Dysfunction/physiopathology , Intelligence/physiology , Robotics , Aging/physiology , Brain/physiology , Humans , Quality of Life
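The schedule-tracking loop the abstract describes can be sketched as below: tasks with due times are triggered, each step is verified, and an alert is issued on failure. All names here are hypothetical, and the `step_ok` callback merely stands in for the paper's deep-learning ensemble that verifies step completion:

```python
from dataclasses import dataclass, field
from datetime import datetime, time

@dataclass
class ScheduledTask:
    name: str
    due: time                          # time of day the task is triggered
    steps: list = field(default_factory=list)

def run_due_tasks(tasks, step_ok, now=None):
    """Guide the patient through each due task; `step_ok` stands in for
    the deep-learning ensemble that checks whether a step was completed."""
    now = now or datetime.now().time()
    for task in tasks:
        if task.due <= now:
            for step in task.steps:
                if not step_ok(step):
                    print(f"ALERT: step '{step}' of '{task.name}' not completed")

tasks = [ScheduledTask("take medication", time(9, 0),
                       ["pick up pillbox", "take pill", "drink water"])]
run_due_tasks(tasks, step_ok=lambda step: step != "drink water",
              now=time(9, 30))
```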