Results 1-20 of 3,696
1.
J Robot Surg ; 18(1): 245, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38847926

ABSTRACT

Previously, our group established a surgical gesture classification system that deconstructs robotic tissue dissection into basic surgical maneuvers. Here, we evaluate gestures by correlating the metric with surgeon experience and technical skill assessment scores in the apical dissection (AD) of robotic-assisted radical prostatectomy (RARP). Additionally, we explore the association between AD performance and early continence recovery following RARP. Seventy-eight AD surgical videos from 2016 to 2018 across two international institutions were included. Surgeons were grouped by median robotic caseload (range 80-5,800 cases) into a less experienced group (< 475 cases) and a more experienced group (≥ 475 cases). Videos were decoded with gestures and assessed using the Dissection Assessment for Robotic Technique (DART). More experienced surgeons (n = 10) used greater proportions of cold cut (p = 0.008) and smaller proportions of peel/push, spread, and two-hand spread (p < 0.05) than less experienced surgeons (n = 10). Correlations between gestures and technical skills assessments ranged from -0.397 to 0.316 (p < 0.05). Surgeons utilizing more retraction gestures had lower total DART scores (p < 0.01), suggesting less dissection proficiency. Those who used more gestures and spent more time per gesture had lower efficiency scores (p < 0.01). More coagulation and hook gestures were found in cases of patients with continence recovery compared to those with ongoing incontinence (p < 0.04). Gestures performed during AD vary based on surgeon experience level and patient continence recovery duration. Significant correlations were demonstrated between gestures and dissection technical skills. Gestures can serve as a novel method to objectively evaluate dissection performance and anticipate outcomes.


Subjects
Clinical Competence, Dissection, Prostatectomy, Robotic Surgical Procedures, Prostatectomy/methods, Humans, Robotic Surgical Procedures/methods, Male, Dissection/methods, Gestures, Prostatic Neoplasms/surgery, Surgeons
2.
Nat Commun ; 15(1): 4791, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38839754

ABSTRACT

The planum temporale (PT), a key language area, is specialized in the left hemisphere in prelinguistic infants and considered a marker of the pre-wired language-ready brain. However, studies have reported a similar structural PT left-asymmetry not only in various adult non-human primates but also in newborn baboons; its shared functional links with language are not fully understood. Here we demonstrate, using previously obtained MRI data, that early detection of PT left-asymmetry among 27 newborn baboons (Papio anubis, age range of 4 days to 2 months) predicts the future development of right-hand preference for communicative gestures but not for non-communicative actions. Specifically, only newborns with a larger left-than-right PT were more likely to develop right-handed communication once juvenile, a contralateral brain-gesture link that is maintained in a group of 70 mature baboons. This finding suggests that early PT asymmetry may be a common inherited prewiring of the primate brain for the ontogeny of ancient lateralised properties shared between monkey gesture and human language.


Subjects
Animals, Newborn, Functional Laterality, Gestures, Magnetic Resonance Imaging, Animals, Functional Laterality/physiology, Female, Male, Papio anubis, Temporal Lobe/physiology, Temporal Lobe/diagnostic imaging, Language
3.
Sci Rep ; 14(1): 10607, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719866

ABSTRACT

Guilt is a negative emotion elicited by realizing one has caused actual or perceived harm to another person. One of guilt's primary functions is to signal that one is aware of the harm that was caused and regrets it, an indication that the harm will not be repeated. Verbal expressions of guilt are often deemed insufficient by observers when not accompanied by nonverbal signals such as facial expression, gesture, posture, or gaze. Some research has investigated isolated nonverbal expressions of guilt; however, none to date has explored multiple nonverbal channels simultaneously. This study explored facial expression, gesture, posture, and gaze during the real-time experience of guilt when response demands are minimal. Healthy adults completed a novel task involving watching videos designed to elicit guilt, as well as comparison emotions. During the video task, participants were continuously recorded to capture nonverbal behaviour, which was then analyzed via automated facial expression software. We found that while feeling guilt, individuals engaged less in several nonverbal behaviours than they did while experiencing the comparison emotions. This may reflect the highly social aspect of guilt, suggesting that an audience is required to prompt a guilt display, or may suggest that guilt does not have clear nonverbal correlates.


Subjects
Facial Expression, Guilt, Humans, Male, Female, Adult, Young Adult, Nonverbal Communication/psychology, Emotions/physiology, Gestures
4.
BMC Med Educ ; 24(1): 509, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38715008

ABSTRACT

BACKGROUND: In this era of rapid technological development, medical schools have had to use modern technology to enhance traditional teaching, and online teaching was preferred by many medical schools. However, due to the complexity of intracranial anatomy, it was challenging for students to study this material online, and they were likely to lose interest in neurosurgery, which is disadvantageous to the field's development. Therefore, we developed this database to help students learn neuroanatomy better. MAIN BODY: The data in this database were sourced from Rhoton's Cranial Anatomy and Surgical Approaches and Neurosurgery Tricks of the Trade. We then designed many hand gesture figures connected with the anatomical atlas. Our database is divided into three parts: intracranial arteries, intracranial veins, and neurosurgical approaches. Each section contains an anatomical atlas and gestures representing vessels and nerves. Pictures of hand gestures and the anatomical atlas are available to view on GRAVEN ( www.graven.cn ) without restrictions for all teachers and students. We recruited 50 undergraduate students and randomly divided them into two groups: one using traditional teaching methods and one using the GRAVEN database combined with those traditional teaching methods. Results revealed a significant improvement in academic performance when the GRAVEN database was combined with traditional teaching methods compared to traditional teaching methods alone. CONCLUSION: This database is a valuable aid for helping students learn intracranial anatomy and neurosurgical approaches. Gesture teaching can effectively simulate the relationships between human organs and tissues through the flexibility of hands and fingers, improving interest in anatomy and education.


Subjects
Databases, Factual, Education, Medical, Undergraduate, Gestures, Neurosurgery, Humans, Neurosurgery/education, Education, Medical, Undergraduate/methods, Students, Medical, Neuroanatomy/education, Teaching, Female, Male
5.
Autism Res ; 17(5): 989-1000, 2024 May.
Article in English | MEDLINE | ID: mdl-38690644

ABSTRACT

Prior work examined how minimally verbal (MV) children with autism use their gestural communication during social interactions. However, interactions are exchanges between social partners, and examining parent-child social interactions is critically important given the influence of parent responsivity on children's communicative development. Specifically, parent responses that are semantically contingent on the child's communication play an important role in further shaping children's language learning. This study examines whether MV autistic children's (N = 47; 48-95 months; 10 females) modality and form of communication are associated with parent responsivity during an in-home parent-child interaction (PCI). The PCI was collected using natural language sampling methods and coded for child modality and form of communication and parent responses. Findings from Kruskal-Wallis H tests revealed no significant difference in parent semantically contingent responses based on child communication modality (spoken language, gesture, gesture-speech combinations, and AAC) or form of communication (precise vs. imprecise). Findings highlight the importance of examining multiple modalities and forms of communication in MV children with autism to obtain a more comprehensive understanding of their communication abilities, and they underscore the value of interactionist models of communication for examining how children's input shapes parent responses and, in turn, language-learning experiences.


Subjects
Autistic Disorder, Communication, Parent-Child Relations, Humans, Female, Male, Child, Child, Preschool, Autistic Disorder/psychology, Gestures, Parents, Language Development, Speech
6.
J Neural Eng ; 21(3)2024 May 17.
Article in English | MEDLINE | ID: mdl-38722304

ABSTRACT

Discrete myoelectric control-based gesture recognition has recently gained interest as a possible input modality for many emerging ubiquitous computing applications. Unlike the continuous control commonly employed in powered prostheses, discrete systems seek to recognize the dynamic sequences associated with gestures to generate event-based inputs. More akin to those used in general-purpose human-computer interaction, these could include, for example, a flick of the wrist to dismiss a phone call or a double tap of the index finger and thumb to silence an alarm. Myoelectric control systems have been shown to achieve near-perfect classification accuracy, but in highly constrained offline settings. Real-world, online systems are subject to 'confounding factors' (i.e. factors that hinder the real-world robustness of myoelectric control that are not accounted for during typical offline analyses), which inevitably degrade system performance, limiting their practical use. Although these factors have been widely studied in continuous prosthesis control, there has been little exploration of their impacts on discrete myoelectric control systems for emerging applications and use cases. Correspondingly, this work examines, for the first time, three confounding factors and their effect on the robustness of discrete myoelectric control: (1) limb position variability, (2) cross-day use, and (3) gesture elicitation speed, a newly identified confound faced by discrete systems. Results from four different discrete myoelectric control architectures, namely Majority Vote LDA, Dynamic Time Warping, an LSTM network trained with Cross Entropy, and an LSTM network trained with Contrastive Learning, show that classification accuracy is significantly degraded (p < 0.05) by each of these confounds. This work establishes that confounding factors are a critical barrier that must be addressed to enable the real-world adoption of discrete myoelectric control for robust and reliable gesture recognition.
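As a concrete illustration of the first architecture named above, the following minimal sketch shows a Majority Vote LDA pipeline: classify each window of an EMG sequence, then vote over the whole sequence. The window length, the mean-absolute-value feature, and all names are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a Majority Vote LDA discrete gesture classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mav_windows(emg, win=150, step=50):
    """Slice (samples, channels) EMG into windows; mean absolute value per channel."""
    return np.asarray([np.abs(emg[s:s + win]).mean(axis=0)
                       for s in range(0, len(emg) - win + 1, step)])

def fit(train_segments):
    """train_segments: list of (emg_array, gesture_label) pairs."""
    X = np.vstack([mav_windows(e) for e, _ in train_segments])
    y = np.concatenate([[g] * len(mav_windows(e)) for e, g in train_segments])
    return LinearDiscriminantAnalysis().fit(X, y)

def predict_gesture(model, emg_segment):
    """Classify every window, then take the majority vote over the sequence."""
    votes = model.predict(mav_windows(emg_segment))
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```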


Subjects
Electromyography, Gestures, Pattern Recognition, Automated, Humans, Electromyography/methods, Male, Pattern Recognition, Automated/methods, Female, Adult, Young Adult, Artificial Limbs
7.
J Neural Eng ; 21(3)2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38754410

ABSTRACT

Objective. Upper limb loss can profoundly impact an individual's quality of life, posing challenges to both physical capabilities and emotional well-being. To restore limb function by decoding electromyography (EMG) signals, in this paper we present a novel deep prototype learning method for accurate and generalizable EMG-based gesture classification. Existing methods suffer from limitations in generalization across subjects due to the diverse nature of individual muscle responses, impeding seamless applicability in broader populations. Approach. By leveraging deep prototype learning, we introduce a method that goes beyond direct output prediction. Instead, it matches new EMG inputs to a set of learned prototypes and predicts the corresponding labels. Main results. This novel methodology significantly enhances the model's classification performance and generalizability by discriminating subtle differences between gestures, making it more reliable and precise in real-world applications. Our experiments on four Ninapro datasets suggest that our deep prototype learning classifier outperforms state-of-the-art methods in terms of intra-subject and inter-subject classification accuracy in gesture prediction. Significance. The results from our experiments validate the effectiveness of the proposed method and pave the way for future advancements in the field of EMG gesture classification for upper limb prosthetics.
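For readers unfamiliar with prototype-based classification, the abstract's core idea (match an embedding to learned per-class prototypes rather than predicting class scores directly) can be sketched as below. The encoder, embedding size, and all names are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch of prototype-based EMG gesture classification.
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, n_classes: int, emb_dim: int):
        super().__init__()
        self.encoder = encoder  # maps EMG windows to embeddings
        self.prototypes = nn.Parameter(torch.randn(n_classes, emb_dim))

    def forward(self, x):
        z = self.encoder(x)  # (batch, emb_dim)
        # Negative squared distance to each prototype acts as the logit,
        # so the nearest prototype gets the highest score.
        return -torch.cdist(z, self.prototypes) ** 2

# Toy encoder: 8 EMG channels x 200 samples per window, 64-dim embedding.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(8 * 200, 64), nn.ReLU())
model = PrototypeClassifier(encoder, n_classes=10, emb_dim=64)
logits = model(torch.randn(4, 8, 200))  # batch of 4 windows
pred = logits.argmax(dim=1)             # index of the nearest prototype
```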


Subjects
Electromyography, Gestures, Semantics, Humans, Electromyography/methods, Male, Female, Adult, Deep Learning, Young Adult
8.
J Acoust Soc Am ; 155(5): 3521-3536, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38809098

ABSTRACT

This electromagnetic articulography study explores the kinematic profile of Intonational Phrase (IP) boundaries in Seoul Korean. Recent findings suggest that the scope of phrase-final lengthening is conditioned by word- and/or phrase-level prominence. However, evidence comes mainly from head-prominence languages, which conflate positions of word prosody with positions of phrasal prominence. Here, we examine phrase-final lengthening in Seoul Korean, an edge-prominence language with no word prosody, with respect to focus location as an index of phrase-level prominence and Accentual Phrase (AP) length as an index of word demarcation. Results show that phrase-final lengthening extends over the phrase-final syllable. The effect is greater the further away the focus occurs. It also interacts with the domains of AP and prosodic word: lengthening is greater in smaller APs, whereas shortening is observed in the initial gesture of the phrase-final word. Additional analyses of kinematic displacement and peak velocity revealed that Korean phrase-final gestures bear the kinematic profile of IP boundaries concurrently with what is typically considered prominence marking. Based on these results, a gestural coordination account is proposed in which boundary-related events interact systematically with phrase-level prominence as well as lower prosodic levels, and how this proposal relates to the findings in head-prominence languages is discussed.


Subjects
Phonetics, Speech Acoustics, Humans, Male, Female, Young Adult, Biomechanical Phenomena, Adult, Language, Gestures, Speech Production Measurement, Republic of Korea, Voice Quality, Time Factors
9.
Article in English | MEDLINE | ID: mdl-38771682

ABSTRACT

Gesture recognition has emerged as a significant research domain in computer vision and human-computer interaction. One of the key challenges in gesture recognition is how to select the most useful channels that can effectively represent gesture movements. In this study, we developed a channel selection algorithm that determines the number and placement of sensors that are critical to gesture classification. To validate this algorithm, we constructed a Force Myography (FMG)-based signal acquisition system. The algorithm considers each sensor as a distinct channel, with the most effective channel combinations and recognition accuracy determined by assessing the correlation between each channel and the target gesture, as well as the redundant correlation between different channels. The database was created by collecting experimental data from 10 healthy individuals who wore 16 sensors to perform 13 unique hand gestures. The results indicate that the average number of channels across the 10 participants was 3, corresponding to a 75% decrease from the initial channel count, with an average recognition accuracy of 94.46%. This outperforms four widely adopted feature selection algorithms: Relief-F, mRMR, CFS, and ILFS. Moreover, we established a universal model for the position of gesture measurement points and verified it with an additional five participants, resulting in an average recognition accuracy of 96.3%. This study provides a sound basis for identifying the optimal and minimum number and location of channels on the forearm and designing specialized arm rings with unique shapes.
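The relevance-redundancy criterion the abstract describes can be pictured as a greedy loop: keep the channel most correlated with the gesture label, then repeatedly add the channel whose relevance most exceeds its average redundancy with the channels already chosen. The scoring rule below is an assumed reading of that idea, not the authors' exact algorithm.

```python
# Hypothetical greedy relevance-redundancy channel selection.
import numpy as np

def select_channels(X, y, k=3):
    """X: (trials, channels) FMG feature matrix; y: numeric gesture labels."""
    n_ch = X.shape[1]
    # Relevance: |correlation| between each channel and the target gesture.
    relevance = np.array([abs(np.corrcoef(X[:, c], y)[0, 1]) for c in range(n_ch)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for c in range(n_ch):
            if c in selected:
                continue
            # Redundancy: mean |correlation| with already-selected channels.
            redundancy = np.mean([abs(np.corrcoef(X[:, c], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[c] - redundancy
            if score > best_score:
                best, best_score = c, score
        selected.append(best)
    return selected
```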


Subjects
Algorithms, Gestures, Pattern Recognition, Automated, Humans, Male, Female, Adult, Pattern Recognition, Automated/methods, Young Adult, Myography/methods, Hand/physiology, Healthy Volunteers, Reproducibility of Results
10.
Cognition ; 248: 105806, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38749291

ABSTRACT

The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, much of which result from the fast-paced nature of conversation. One core ingredient of turn coordination is the anticipation of upcoming turn ends, which allows one to ready the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.


Subjects
Anticipation, Psychological, Humans, Female, Male, Adult, Young Adult, Anticipation, Psychological/physiology, Visual Perception/physiology, Gestures, Communication, Reaction Time/physiology
11.
Commun Biol ; 7(1): 472, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724671

ABSTRACT

Many species communicate by combining signals into multimodal combinations. Elephants live in multi-level societies where individuals regularly separate and reunite. Upon reunion, elephants often engage in elaborate greeting rituals, where they use vocalisations and body acts produced with different body parts and of various sensory modalities (e.g., audible, tactile). However, whether these body acts represent communicative gestures and whether elephants combine vocalisations and gestures during greeting is still unknown. Here we use separation-reunion events to explore the greeting behaviour of semi-captive elephants (Loxodonta africana). We investigate whether elephants use silent-visual, audible, and tactile gestures, directing them at their audience based on its state of visual attention, and how they combine these gestures with vocalisations during greeting. We show that elephants select gesture modality appropriately according to their audience's visual attention, suggesting evidence of first-order intentional communicative use. We further show that elephants integrate vocalisations and gestures into different combinations and orders. The most frequent combination consists of rumble vocalisations with ear-flapping gestures, used most often between females. By showing that a species evolutionarily distant from our own primate lineage shows sensitivity to its audience's visual attention in its gesturing and combines gestures with vocalisations, our study advances our understanding of the emergence of first-order intentionality and multimodal communication across taxa.


Subjects
Animal Communication, Elephants, Gestures, Vocalization, Animal, Animals, Elephants/physiology, Female, Male, Vocalization, Animal/physiology, Social Behavior
12.
Appl Ergon ; 119: 104306, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38714102

ABSTRACT

Industry 5.0 promotes collaborative robots (cobots). This research studies the impacts of cobot collaboration using an experimental setup. 120 participants performed a simple and a complex assembly task; 50% collaborated with another human (H/H) and 50% with a cobot (H/C). The workload and the acceptability of the cobotic collaboration were measured. Working with a cobot decreases the effect of task complexity on the human workload and on the output quality. However, it increases completion time and the number of gestures (while decreasing their frequency). The H/C couples have a higher chance of success, but they take more time and more gestures to complete the task. The results of this research could help developers and stakeholders understand the impacts of implementing a cobot in production chains.


Subjects
Cooperative Behavior, Gestures, Robotics, Task Performance and Analysis, Workload, Humans, Workload/psychology, Male, Female, Adult, Young Adult, Man-Machine Systems, Time Factors
13.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732808

ABSTRACT

Currently, surface EMG signals have a wide range of applications in human-computer interaction systems. However, selecting features for gesture recognition models based on traditional machine learning can be challenging and may not yield satisfactory results. Considering the strong nonlinear generalization ability of neural networks, this paper proposes a two-stream residual network model with an attention mechanism for gesture recognition. One branch processes surface EMG signals, while the other processes hand acceleration signals. Segmented networks are utilized to fully extract the physiological and kinematic features of the hand. To enhance the model's capacity to learn crucial information, we introduce an attention mechanism after global average pooling. This mechanism strengthens relevant features and weakens irrelevant ones. Finally, the deep features obtained from the two branches of learning are fused to further improve the accuracy of multi-gesture recognition. The experiments conducted on the NinaPro DB2 public dataset resulted in a recognition accuracy of 88.25% for 49 gestures. This demonstrates that our network model can effectively capture gesture features, enhancing accuracy and robustness across various gestures. This approach to multi-source information fusion is expected to provide more accurate and real-time commands for exoskeleton robots and myoelectric prosthetic control systems, thereby enhancing the user experience and the naturalness of robot operation.
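The "attention mechanism after global average pooling" step described here resembles squeeze-and-excitation-style channel attention. The sketch below illustrates that reading with illustrative layer sizes, and a simple concatenation stands in for the paper's fusion step; none of it is the authors' exact architecture.

```python
# Hypothetical channel attention after global average pooling, applied
# per branch (EMG and acceleration), followed by feature fusion.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))       # global average pool -> channel weights
        return x * w.unsqueeze(2)        # strengthen relevant, weaken irrelevant

emg_feat = torch.randn(4, 64, 50)        # EMG-branch feature maps
acc_feat = torch.randn(4, 32, 50)        # acceleration-branch feature maps
fused = torch.cat([ChannelAttention(64)(emg_feat).mean(dim=2),
                   ChannelAttention(32)(acc_feat).mean(dim=2)], dim=1)
```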


Subjects
Electromyography, Gestures, Neural Networks, Computer, Humans, Electromyography/methods, Signal Processing, Computer-Assisted, Pattern Recognition, Automated/methods, Acceleration, Algorithms, Hand/physiology, Machine Learning, Biomechanical Phenomena/physiology
14.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732843

ABSTRACT

As the number of electronic gadgets in our daily lives increases and most of them require some kind of human interaction, innovative and convenient input methods are in demand. State-of-the-art (SotA) ultrasound-based hand gesture recognition (HGR) systems have limitations in terms of robustness and accuracy. This research presents a novel machine learning (ML)-based end-to-end solution for hand gesture recognition with low-cost micro-electromechanical system (MEMS) ultrasonic transducers. In contrast to prior methods, our ML model processes the raw echo samples directly instead of using pre-processed data. Consequently, the processing flow presented in this work leaves it to the ML model to extract the important information from the echo data. The success of this approach is demonstrated as follows. Four MEMS ultrasonic transducers are placed in three different geometrical arrangements. For each arrangement, different types of ML models are optimized and benchmarked on datasets acquired with the presented custom hardware (HW): convolutional neural networks (CNNs), gated recurrent units (GRUs), long short-term memory (LSTM) networks, vision transformers (ViT), and cross-attention multi-scale vision transformers (CrossViT). The last three models reached more than 88% accuracy. The most important innovation described in this research paper is the demonstration that little pre-processing is necessary to obtain high accuracy in ultrasonic HGR for several arrangements of cost-effective and low-power MEMS ultrasonic transducer arrays. Even the computationally intensive Fourier transform can be omitted. The presented approach is further compared to HGR systems using other sensor types such as vision, WiFi, radar, and state-of-the-art ultrasound-based HGR systems. Direct processing of the sensor signals by a compact model makes ultrasonic hand gesture recognition a true low-cost and power-efficient input method.
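The "raw samples in, gestures out" pipeline can be pictured as a small 1D CNN that consumes unprocessed echo samples from the four transducers, with no Fourier transform or hand-crafted features. The architecture and sizes below are illustrative stand-ins, not one of the benchmarked models.

```python
# Hypothetical end-to-end classifier over raw ultrasonic echo samples.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=7, stride=2), nn.ReLU(),  # 4 transducer channels
    nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 8))                                       # 8 gesture classes

echo = torch.randn(2, 4, 1024)   # batch of 2 recordings, 1024 raw samples each
logits = model(echo)             # the network extracts features itself
```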


Subjects
Gestures, Hand, Machine Learning, Neural Networks, Computer, Humans, Hand/physiology, Pattern Recognition, Automated/methods, Ultrasonography/methods, Ultrasonography/instrumentation, Ultrasonics/instrumentation, Algorithms
15.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732846

ABSTRACT

Brain-computer interfaces (BCIs) allow information to be transmitted directly from the human brain to a computer, enhancing the ability of human brain activity to interact with the environment. In particular, BCI-based control systems are highly desirable because they can control equipment used by people with disabilities, such as wheelchairs and prosthetic legs. BCIs make use of electroencephalograms (EEGs) to decode the human brain's status. This paper presents an EEG-based facial gesture recognition method based on a self-organizing map (SOM). The proposed facial gesture recognition uses α, β, and θ power bands of the EEG signals as the features of the gesture. The SOM-Hebb classifier is utilized to classify the feature vectors. We utilized the proposed method to develop an online facial gesture recognition system. The facial gestures were defined by combining facial movements that are easy to detect in EEG signals. The recognition accuracy of the system was examined through experiments. The recognition accuracy of the system ranged from 76.90% to 97.57% depending on the number of gestures recognized. The lowest accuracy (76.90%) occurred when recognizing seven gestures, though this is still quite accurate when compared to other EEG-based recognition systems. The implemented online recognition system was developed using MATLAB, and the system took 5.7 s to complete the recognition flow.
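The feature extraction described here (θ, α, and β band powers per EEG channel, later fed to the SOM-Hebb classifier) might look like the following sketch; the sampling rate, band edges, and all names are assumptions for illustration.

```python
# Hypothetical EEG band-power feature extraction with Welch's method.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz

def band_power_features(eeg, fs=250):
    """eeg: (channels, samples) -> flat vector of per-channel band powers."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs)     # PSD per channel
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))    # mean power in the band
    return np.concatenate(feats)

x = band_power_features(np.random.randn(8, 1000))  # 8 channels, 4 s at 250 Hz
```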


Subjects
Brain-Computer Interfaces, Electroencephalography, Gestures, Humans, Electroencephalography/methods, Face/physiology, Algorithms, Pattern Recognition, Automated/methods, Signal Processing, Computer-Assisted, Brain/physiology, Male
16.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732933

ABSTRACT

This paper investigates a method for precise mapping of human arm movements using sEMG signals. A multi-channel approach captures the sEMG signals, which, combined with joint angles accurately calculated from an Inertial Measurement Unit, allows for action recognition and mapping through deep learning algorithms. First, signal acquisition and processing were carried out, which involved sensor placement and acquiring data from various movements (hand gestures, single-degree-of-freedom joint movements, and continuous joint actions). Then, interference was filtered out, and the signals were preprocessed using normalization and moving averages to obtain sEMG signals with clear features. Additionally, this paper constructs a hybrid network model, combining Convolutional Neural Networks and Artificial Neural Networks, and employs a multi-feature fusion algorithm to enhance the accuracy of gesture recognition. Furthermore, a nonlinear fit between sEMG signals and joint angles was established based on a backpropagation neural network, incorporating a momentum term and adaptive learning rate adjustments. Finally, based on the gesture recognition and joint angle prediction model, prosthetic arm control experiments were conducted, achieving highly accurate arm movement prediction and execution. This paper not only validates the potential application of sEMG signals in the precise control of robotic arms but also lays a solid foundation for the development of more intuitive and responsive prostheses and assistive devices.
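As a toy illustration of backpropagation with a momentum term for the sEMG-to-joint-angle fit described above, the sketch below trains a one-hidden-layer network on a single sample. Layer sizes, the learning rate, and the momentum coefficient are illustrative assumptions; the adaptive learning-rate schedule is omitted.

```python
# Hypothetical one-step backpropagation update with momentum.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(0, 0.1, (16, 8)), rng.normal(0, 0.1, (1, 16))
V1, V2 = np.zeros_like(W1), np.zeros_like(W2)   # momentum buffers
lr, mu = 0.01, 0.9                               # learning rate, momentum

def step(x, angle):
    """x: (8,) sEMG feature vector; angle: scalar joint-angle target."""
    global W1, W2, V1, V2
    h = np.tanh(W1 @ x)                          # hidden activations
    y = W2 @ h                                   # predicted joint angle
    err = y - angle
    g2 = np.outer(err, h)                        # output-layer gradient
    g1 = np.outer((W2.T @ err) * (1 - h**2), x)  # hidden-layer gradient
    V2 = mu * V2 - lr * g2; W2 = W2 + V2         # momentum updates
    V1 = mu * V1 - lr * g1; W1 = W1 + V1
    return float(err**2)

loss = step(rng.normal(size=8), 0.7)
```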


Subjects
Algorithms, Arm, Electromyography, Movement, Neural Networks, Computer, Signal Processing, Computer-Assisted, Humans, Electromyography/methods, Arm/physiology, Movement/physiology, Gestures, Male, Adult
17.
CBE Life Sci Educ ; 23(2): ar16, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38620007

ABSTRACT

Interpreting three-dimensional models of biological macromolecules is a key skill in biochemistry, closely tied to students' visuospatial abilities. As students interact with these models and explain biochemical concepts, they often use gesture to complement verbal descriptions. Here, we utilize an embodied cognition-based approach to characterize undergraduate students' gesture production as they described and interpreted an augmented reality (AR) model of potassium channel structure and function. Our analysis uncovered two emergent patterns of gesture production employed by students, as well as common sets of gestures linked across categories of biochemistry content. Additionally, we present three cases that highlight changes in gesture production following interaction with a 3D AR visualization. Together, these observations highlight the importance of attending to gesture in learner-centered pedagogies in undergraduate biochemistry education.


Subjects
Gestures, Students, Humans, Biochemistry/education
18.
PLoS One ; 19(4): e0298699, 2024.
Article in English | MEDLINE | ID: mdl-38574042

ABSTRACT

Sign language recognition presents significant challenges due to the intricate nature of hand gestures and the necessity to capture fine-grained details. In response to these challenges, a novel approach is proposed: the Lightweight Attentive VGG16 with Random Forest (LAVRF) model. LAVRF introduces a refined adaptation of the VGG16 model integrated with attention modules, complemented by a Random Forest classifier. By streamlining the VGG16 architecture, the Lightweight Attentive VGG16 effectively manages complexity while incorporating attention mechanisms that dynamically concentrate on pertinent regions within input images, resulting in enhanced representation learning. Leveraging the Random Forest classifier provides notable benefits, including proficient handling of high-dimensional feature representations, reduction of variance and overfitting concerns, and resilience against noisy and incomplete data. Additionally, model performance is further optimized through hyperparameter optimization, utilizing Optuna in conjunction with hill climbing, which efficiently explores the hyperparameter space to discover optimal configurations. The proposed LAVRF model demonstrates outstanding accuracy on three datasets, achieving remarkable results of 99.98%, 99.90%, and 100% on the American Sign Language, American Sign Language with Digits, and NUS Hand Posture datasets, respectively.
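Optuna's define-by-run search, which the abstract pairs with hill climbing, works roughly as sketched below. The search space and the stand-in objective (a cross-validated random forest on a toy dataset rather than the full LAVRF pipeline) are illustrative assumptions.

```python
# Hypothetical Optuna hyperparameter search for a random forest classifier.
import optuna
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def objective(trial):
    # Optuna samples the search space trial by trial.
    n_estimators = trial.suggest_int("n_estimators", 50, 300)
    max_depth = trial.suggest_int("max_depth", 3, 20)
    clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth)
    return cross_val_score(clf, X, y, cv=3).mean()  # maximize CV accuracy

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```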


Subjects
Random Forest, Sign Language, Humans, Pattern Recognition, Automated/methods, Gestures, Upper Extremity
19.
Sci Rep ; 14(1): 7906, 2024 04 04.
Article in English | MEDLINE | ID: mdl-38575710

ABSTRACT

This paper delves into the specialized domain of human action recognition, focusing on the identification of Indian classical dance poses, specifically in Bharatanatyam. Within the dance context, a "Karana" embodies a synchronized and harmonious movement encompassing body, hands, and feet, as defined by the Natyashastra. The essence of Karana lies in the amalgamation of nritta hasta (hand movements), sthaana (body postures), and chaari (leg movements). The Natyashastra codifies 108 karanas, showcased in the intricate stone carvings adorning the Nataraj temples of Chidambaram, where Lord Shiva's association with these movements is depicted. Automating pose identification in Bharatanatyam is challenging due to the vast array of variations, encompassing hand and body postures, mudras (hand gestures), facial expressions, and head gestures. To simplify this intricate task, this research employs image processing and automation techniques. The proposed methodology comprises four stages: acquisition and pre-processing of images involving skeletonization and data augmentation techniques, feature extraction from images, classification of dance poses using a convolutional neural network model (InceptionResNetV2), and visualization of 3D models through mesh creation from point clouds. The use of advanced technologies, such as the MediaPipe library for body key point detection and deep learning networks, streamlines the identification process. Data augmentation, a pivotal step, expands small datasets, enhancing the model's accuracy. The convolutional neural network model showcased its effectiveness in accurately recognizing intricate dance movements, paving the way for streamlined analysis and interpretation. This innovative approach not only simplifies the identification of Bharatanatyam poses but also sets a precedent for enhancing accessibility and efficiency for practitioners and researchers in Indian classical dance.
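The MediaPipe body key-point step mentioned here can be sketched as follows: the Pose solution turns a dance-pose image into 33 landmark coordinates that a downstream classifier could consume. The file name is a placeholder, and the classification and 3D-mesh stages are omitted.

```python
# Hypothetical body key-point extraction with MediaPipe Pose.
import cv2
import mediapipe as mp

image = cv2.imread("karana_pose.jpg")  # placeholder image path
with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    # MediaPipe expects RGB input; OpenCV loads BGR.
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    # 33 landmarks, each with normalized x, y, z (plus a visibility score).
    keypoints = [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
```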


Subjects
Augmented Reality, Humans, Neural Networks, Computer, Image Processing, Computer-Assisted/methods, Head, Gestures
20.
Infant Behav Dev ; 75: 101953, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38653005

ABSTRACT

The emergence of the pointing gesture is a major developmental milestone in human infancy. Pointing fosters preverbal communication and is key for language and theory of mind development. Little is known about its ontogenetic origins and whether its pathway is similar across different cultures. The goal of this study was to examine the theoretical proposal that social pointing is preceded by a non-social use of the index finger and later becomes a social-communicative gesture. Moreover, the study investigated to what extent the emergence of social pointing differs cross-culturally. We assessed non-social index-finger use and social pointing in 647 infants aged 3 to 24 months from 4 different countries (China, Germany, Japan, and Türkiye). Non-social index-finger use and social pointing increased with infants' age, such that social pointing became more dominant than non-social index-finger use with age. Whereas social pointing was reported across countries, its reported frequency differed between cultures, with significantly greater social pointing frequency in infants from Türkiye, China, and Germany compared to Japanese infants. Our study supports theoretical proposals of the dominance of non-social index-finger use during early infancy, with social pointing becoming more prominent as infants get older. These findings contribute to our understanding of infants' use of their index finger for social and non-social purposes during the first two years of life.


Subjects
Cross-Cultural Comparison, Fingers, Gestures, Humans, Infant, Male, Female, Fingers/physiology, Child Development/physiology, Child, Preschool, Social Behavior, Germany, Japan