Results 1 - 20 of 456
1.
PeerJ Comput Sci ; 10: e2063, 2024.
Article in English | MEDLINE | ID: mdl-38983191

ABSTRACT

The lack of an effective early sign language learning framework for the hard-of-hearing population can have traumatic consequences, causing social isolation and unfair treatment in workplaces. Alphabet and digit detection methods have been the basic framework for early sign language learning but are restricted by performance and accuracy, making it difficult to detect signs in real life. This article proposes an improved sign language detection method for early sign language learners based on the You Only Look Once version 8.0 (YOLOv8) algorithm, referred to as the intelligent sign language detection system (iSDS), which exploits the power of deep learning to detect sign language-distinct features. The iSDS method reduces false positive rates and improves both the accuracy and the speed of sign language detection. The proposed iSDS framework for early sign language learners consists of three basic steps: (i) image pixel processing to extract features that are underrepresented in the frame, (ii) inter-dependence pixel-based feature extraction using YOLOv8, and (iii) web-based signer-independence validation. The proposed iSDS enables faster response times and reduces misinterpretation and inference delay time. The iSDS achieved state-of-the-art performance of over 97% for precision, recall, and F1-score, with a best mAP of 87%. The proposed iSDS method has several potential applications, including continuous sign language detection systems and intelligent web-based sign recognition systems.
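For readers who want to experiment with a comparable pipeline, the sketch below shows how a YOLOv8 detector could be run on a single frame with the ultralytics package. The weights file isds_signs.pt, the class names, and the confidence threshold are illustrative assumptions, not artifacts released with the paper.

```python
# Minimal sketch of YOLOv8-based sign detection with the ultralytics package.
# "isds_signs.pt" is a hypothetical fine-tuned weights file; the class labels
# depend on the training data and are assumptions here.
from ultralytics import YOLO

model = YOLO("isds_signs.pt")  # hypothetical weights fine-tuned on sign alphabets/digits

# Run inference on a single frame; conf filters weak detections, which is one
# way a pipeline like iSDS could hold down false positives.
results = model.predict("frame.jpg", conf=0.5)

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        print(f"{cls_name}: confidence={float(box.conf):.2f}, xyxy={box.xyxy.tolist()}")
```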

2.
Data Brief ; 55: 110566, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38948409

ABSTRACT

Sign language is a complete language with its own grammatical rules, akin to any spoken language used worldwide. It comprises two main components: static words and ideograms. Ideograms involve hand movements and contact with various parts of the body to convey meaning. Variations in sign language are evident across different countries, necessitating comprehensive documentation of each country's sign language. In Mexico, there is a lack of formal datasets for Mexican Sign Language (MSL). To address this issue, we structured a dataset of 249 MSL words divided into 17 subsets. We used black backgrounds and clothing to enhance the areas of interest (hands and face). For each word we recorded an average of 11 individuals, and from every video sequence we obtained an average of 15 frames per individual, yielding 31,442 JPG images.
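A minimal sketch of the frame-extraction step described above, sampling roughly 15 evenly spaced frames per video with OpenCV; the file names and the exact sampling scheme are assumptions, not the dataset's published procedure.

```python
# Sketch: sample ~15 evenly spaced frames from a video with OpenCV and save
# them as JPGs, mirroring the frame-extraction step the dataset describes.
import cv2

def extract_frames(video_path: str, out_prefix: str, n_frames: int = 15) -> int:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // n_frames, 1)
    saved = 0
    for i in range(0, total, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(f"{out_prefix}_{saved:03d}.jpg", frame)
        saved += 1
    cap.release()
    return saved

# Hypothetical file name following the dataset's word/signer organization.
print(extract_frames("msl_word_001_person_01.mp4", "msl_word_001_person_01"))
```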

3.
Artif Intell Med ; 154: 102923, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38970987

ABSTRACT

Computerized cognitive training (CCT) is a scalable, well-tolerated intervention that shows promise for slowing cognitive decline. The effectiveness of CCT is often limited by a lack of effective engagement. Mental fatigue is the primary factor compromising effective engagement in CCT, particularly in older adults at risk for dementia. There is a need for scalable, automated measures that can continuously monitor and reliably detect mental fatigue during CCT. Here, we develop and validate a novel Recurrent Video Transformer (RVT) method for monitoring real-time mental fatigue in older adults with mild cognitive impairment using their video-recorded facial gestures during CCT. The RVT model achieved the highest balanced accuracy (79.58%) and precision (0.82) compared to prior models for binary and multi-class classification of mental fatigue. We also validated our model by showing that its predictions related significantly to reaction time across CCT tasks (Wald χ² = 5.16, p = 0.023). By leveraging dynamic temporal information, the RVT model demonstrates the potential to accurately measure real-time mental fatigue, laying the foundation for future CCT research aiming to enhance effective engagement through timely prevention of mental fatigue.
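The abstract does not specify the RVT architecture; the PyTorch sketch below is a loose illustration of the general idea of combining transformer self-attention over frame embeddings with a recurrent readout. All layer sizes, and the structure itself, are assumptions rather than the authors' model.

```python
# Loose PyTorch sketch of a recurrent-transformer style classifier over
# per-frame feature vectors (not the authors' RVT; sizes are assumptions).
import torch
import torch.nn as nn

class FatigueClassifier(nn.Module):
    def __init__(self, feat_dim=512, n_heads=8, n_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.rnn = nn.GRU(feat_dim, 128, batch_first=True)  # recurrent readout
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):            # x: (batch, frames, feat_dim)
        x = self.encoder(x)          # self-attention across video frames
        _, h = self.rnn(x)           # summarize the sequence recurrently
        return self.head(h[-1])      # logits for fatigued / not fatigued

logits = FatigueClassifier()(torch.randn(4, 16, 512))
print(logits.shape)  # torch.Size([4, 2])
```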

4.
J Robot Surg ; 18(1): 245, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38847926

ABSTRACT

Previously, our group established a surgical gesture classification system that deconstructs robotic tissue dissection into basic surgical maneuvers. Here, we evaluate gestures by correlating the metric with surgeon experience and technical skill assessment scores in the apical dissection (AD) of robotic-assisted radical prostatectomy (RARP). Additionally, we explore the association between AD performance and early continence recovery following RARP. 78 AD surgical videos from 2016 to 2018 across two international institutions were included. Surgeons were grouped by median robotic caseload (range 80-5,800 cases) into a less experienced group (< 475 cases) and a more experienced group (≥ 475 cases). Videos were annotated with gestures and assessed using the Dissection Assessment for Robotic Technique (DART). More experienced surgeons (n = 10) used greater proportions of cold cut (p = 0.008) and smaller proportions of peel/push, spread, and two-hand spread (p < 0.05) than less experienced surgeons (n = 10). Correlations between gestures and technical skills assessments ranged from -0.397 to 0.316 (p < 0.05). Surgeons utilizing more retraction gestures had lower total DART scores (p < 0.01), suggesting less dissection proficiency. Those who used more gestures and spent more time per gesture had lower efficiency scores (p < 0.01). More coagulation and hook gestures were found in cases of patients with continence recovery compared to those with ongoing incontinence (p < 0.04). Gestures performed during AD vary based on surgeon experience level and patient continence recovery duration. Significant correlations were demonstrated between gestures and dissection technical skills. Gestures can serve as a novel method to objectively evaluate dissection performance and anticipate outcomes.
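A brief sketch of the kind of rank-correlation analysis reported above, using scipy; the arrays are random placeholders standing in for per-case gesture proportions and DART scores, not study data.

```python
# Sketch: correlate per-case gesture proportions with DART technical-skill
# scores, in the spirit of the reported -0.397 to 0.316 range.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
cold_cut_prop = rng.uniform(0, 0.3, size=78)   # proportion of gestures per case (placeholder)
dart_total = rng.uniform(10, 30, size=78)      # total DART score per case (placeholder)

rho, p = spearmanr(cold_cut_prop, dart_total)
print(f"Spearman rho={rho:.3f}, p={p:.3f}")
```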


Subject(s)
Clinical Competence , Dissection , Prostatectomy , Robotic Surgical Procedures , Prostatectomy/methods , Humans , Robotic Surgical Procedures/methods , Male , Dissection/methods , Gestures , Prostatic Neoplasms/surgery , Surgeons
5.
J Psycholinguist Res ; 53(4): 56, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926243

ABSTRACT

The present paper examines how English native speakers produce scopally ambiguous sentences and how they use gestures and prosody for disambiguation. As a case in point, the participants in the present study produced English negative quantifiers. These appear in two different positions, as in (1) The election of no candidate was a surprise (a: 'for those elected, none of them was a surprise'; b: 'no candidate was elected, and that was a surprise') and (2) No candidate's election was a surprise (a: 'for those elected, none of them was a surprise'; b: # 'no candidate was elected, and that was a surprise'). This allowed us to investigate the gesture production and prosodic patterns of positional effects (i.e., the a-interpretation is available in two different positions in 1 and 2) and interpretation effects (i.e., two different interpretations are available in the same position in 1). We found that participants tended to produce more head shakes in the (a) interpretation despite the different positions, but more head nods/beats in the (b) interpretation. While there is no difference in the prosody of no between the (a) and (b) interpretations in (1), there are pitch and durational differences between the (a) interpretations in (1) and (2). This study points out abstract similarities in gestural movements across languages such as Catalan and Spanish (Prieto et al. in Lingua 131:136-150, 2013. 10.1016/j.lingua.2013.02.008; Tubau et al. in Linguist Rev 32(1):115-142, 2015. 10.1515/tlr-2014-0016), and shows that meaning is crucial for gesture patterns. We emphasize that gesture patterns disambiguate ambiguous interpretations when prosody cannot do so.


Subject(s)
Gestures , Psycholinguistics , Humans , Adult , Male , Female , Speech/physiology , Language , Young Adult
6.
Biomed Tech (Berl) ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38826069

ABSTRACT

OBJECTIVES: The objective of this study is to develop a system for automatic sign language recognition to improve the quality of life of the deaf-mute community in Egypt. The system aims to bridge the communication gap by identifying right-hand gestures and converting them into audible sounds or displayed text. METHODS: To achieve the objectives, a convolutional neural network (CNN) model is employed. The model is trained to recognize right-hand gestures captured by an affordable web camera. A dataset was created with the help of six volunteers for training, testing, and validation purposes. RESULTS: The proposed system achieved an impressive average accuracy of 99.65% in recognizing right-hand gestures, with a high precision of 95.11%. The system effectively addressed the issue of gesture similarity between certain alphabets by successfully distinguishing between their respective gestures. CONCLUSIONS: The proposed system offers a promising solution for automatic sign language recognition, benefiting the deaf-mute community in Egypt. By accurately identifying and converting right-hand gestures, the system facilitates communication and interaction with the wider world. This technology has the potential to greatly enhance the quality of life of individuals who are unable to speak or hear, promoting inclusivity and accessibility.
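As an illustration of the kind of CNN described in the methods, here is a minimal Keras sketch; the input size, layer configuration, and class count are assumptions rather than the paper's exact architecture.

```python
# Minimal Keras sketch of a CNN for static hand-gesture classification from
# webcam frames; sizes and layers are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 28  # e.g., an alphabet of gestures; assumed

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),        # grayscale right-hand crops
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```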

7.
Sci Educ ; 108(2): 495-523, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38827519

ABSTRACT

Although adults are known to play an important role in young children's development, little work has focused on the enactive features of scaffolding in informal learning settings or on the embodied dynamics of intergenerational interaction. To address this gap, this paper undertakes a microinteractional analysis of intergenerational collaborative interaction in a science museum setting. The paper presents a fine-grained, moment-by-moment analysis of video-recorded interaction of children and their adult carers around science-themed objects. Taking an enactive cognition perspective, the analysis provides access to subtle shifts in interactants' perception, action, gesture, and movement, examining how young children engage with exhibits and the role adult action plays in supporting that engagement and in developing ideas about science. Our findings demonstrate that intergenerational "embodied scaffolding" is instrumental in making "enactive potentialities" in the environment more accessible for children, thus deepening and enriching children's engagement with science. Adult action is central to revealing the scientific dimensions of objects' interactions and relationships, exposing novel perception and action opportunities that shape science experiences and meaning making. This has implications for science education practices, since it foregrounds not only "doing" science through active hands-on activities but also the interconnectedness of the senses and the role of the body in thinking. Drawing on the findings, this paper also offers design implications for informal science learning environments.

8.
Exp Brain Res ; 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38842756

ABSTRACT

Recent studies on the imitation of intransitive gestures suggest that the body part effect relies mainly upon the direct route of the dual-route model, through a visuo-transformation mechanism. Here, we test the visuo-constructive hypothesis, which posits that visual complexity may directly potentiate the body part effect for meaningless gestures. We predicted that the difference between imitation of hand and finger gestures would increase with the visuo-spatial complexity of gestures. Second, we aimed to identify some of the visuo-spatial predictors of meaningless finger imitation skills. Thirty-eight participants underwent an imitation task containing three distinct sets of gestures: meaningful gestures, meaningless gestures with low visual complexity, and meaningless gestures with higher visual complexity than the first meaningless set. Our results were in general agreement with the visuo-constructive hypothesis, showing an increase in the difference between hand and finger gestures, but only for meaningless gestures with higher visuo-spatial complexity. Regression analyses confirmed that imitation accuracy decreases with resource-demanding visuo-spatial factors. Taken together, our results suggest that the body part effect is highly dependent on the visuo-spatial characteristics of the gestures.

9.
Lang Speech ; : 238309241258162, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38877720

ABSTRACT

Human communication is inherently multimodal. Not only auditory speech but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech, such as lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were unable to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.

10.
Front Psychol ; 15: 1324667, 2024.
Article in English | MEDLINE | ID: mdl-38882511

ABSTRACT

Research on the adaptations talkers make to different communication conditions during interactive conversations has primarily focused on speech signals. We extended this type of investigation to two other important communicative signals, i.e., partner-directed gaze and iconic co-speech hand gestures, with the aim of determining whether the adaptations made by older adults differ from those of younger adults across communication conditions. We recruited 57 pairs of participants, comprising 57 primary talkers and 57 secondary ones. Primary talkers consisted of three groups: 19 older adults with mild hearing loss (older adult-HL); 17 older adults with normal hearing (older adult-NH); and 21 younger adults. The DiapixUK "spot the difference" conversation-based task was used to elicit conversations in participant pairs. One easy (No Barrier: NB) and three difficult communication conditions were tested. The three difficult conditions consisted of two in which the primary talker could hear clearly but the secondary talker could not, due to multi-talker babble noise (BAB1) or a less familiar hearing loss simulation (HLS), and one in which both the primary and secondary talkers heard each other in babble noise (BAB2). For primary talkers, we measured the mean number of partner-directed gazes, mean total gaze duration, and mean number of co-speech hand gestures. We found a robust effect of communication condition that interacted with participant group. Effects of age were found for both gaze and gesture in BAB1, i.e., older adult-NH participants looked and gestured less than younger adults did when the secondary talker experienced babble noise. For hearing status, a difference in gaze between older adult-NH and older adult-HL was found for the BAB1 condition; for gesture this difference was significant in all three difficult communication conditions (older adult-HL participants gazed and gestured more). We propose that the age effect may be due to a decline in older adults' attention to cues signaling how well a conversation is progressing. To explain the hearing status effect, we suggest that older adults' attentional decline is offset by hearing loss because these participants have learned to pay greater attention to visual cues for understanding speech.

11.
HNO ; 2024 Jun 11.
Article in German | MEDLINE | ID: mdl-38861032

ABSTRACT

BACKGROUND: Very early bilateral cochlear implant (CI) provision is today's established standard for children. Therefore, the assessment of preverbal and verbal performance at very early stages of development is becoming increasingly important. Performance data from cohorts of children were evaluated and presented based on diagnostic assessment using chronological age (CA) and hearing age (HA). METHODS: The present study, part of a retrospective multicentre study, included 4 cohorts (N = 72-233) of children with bilateral CIs without additional disabilities. Their results in the German parent questionnaires Elternfragebögen zur Früherkennung von Risikokindern (ELFRA-1 and ELFRA-2), subdivided by CA and HA, were statistically analysed. The data were also analysed in terms of mono-/bilingualism and age at CI provision. RESULTS: Overall, verbal performance in relation to CA was lower than in relation to HA. Preverbal skills were largely CA-appropriate. Children with bi-/multilingual language acquisition performed significantly lower. Verbal performance in ELFRA-2 referenced to CA was negatively correlated with age at CI provision. CONCLUSION: In the case of early CI provision, CA should be the preferred reference in preverbal and verbal assessment in order to obtain exact individual performance levels and avoid biased results. The percentiles determined are of limited use as generally valid reference values against which the individual performance of bilaterally implanted children could be compared. Further multicentre studies should be initiated.

12.
J Med Internet Res ; 26: e51695, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38819900

ABSTRACT

BACKGROUND: Informal carers play an important role in the everyday care of patients and the delivery of health care services. They aid patients in transportation to and from appointments, and they provide assistance during the appointments (eg, answering questions on the patient's behalf). Video consultations are often seen as a way of providing patients with easier access to care. However, few studies have considered how this affects the role of informal carers and how they are needed to make video consultations safe and feasible. OBJECTIVE: This study aims to identify how informal carers, usually friends or family who provide unpaid assistance, support patients and clinicians during video consultations. METHODS: We conducted an in-depth analysis of the communication in a sample of video consultations drawn from 7 clinical settings across 4 National Health Service Trusts in the United Kingdom. The data set consisted of 52 video consultation recordings (of patients with diabetes, gestational diabetes, cancer, heart failure, orthopedic problems, long-term pain, and neuromuscular rehabilitation) and interviews with all participants involved in these consultations. Using Linguistic Ethnography, which embeds detailed analysis of verbal and nonverbal communication in the context of the interaction, we examined the interactional, technological, and clinical work carers did to facilitate video consultations and help patients and clinicians overcome challenges of the remote and video-mediated context. RESULTS: Most patients (40/52, 77%) participated in the video consultation without support from an informal carer. Only 23% (12/52) of the consultations involved an informal carer. In addition to facilitating the clinical interaction (eg, answering questions on behalf of the patient), we identified 3 types of work that informal carers did: facilitating the use of technology; addressing problems when the patient could not hear or understand the clinician; and assisting with physical examinations, acting as the eyes, ears, and hands of the clinician. Carers often stayed in the background, monitoring the consultation to identify situations where they might be needed. In doing so, copresent carers reassured patients and helped them conduct the activities that make up a consultation. However, carers did not necessarily help patients solve all the challenges of a video consultation (eg, aiming the camera while laying hands on the patient during an examination). We compared cases where an informal carer was copresent with cases where the patient was alone, which showed that carers provided an important safety net, particularly for patients who were frail and experienced mobility difficulties. CONCLUSIONS: Informal carers play a critical role in making video consultations safe and feasible, particularly for patients with limited technological experience or complex needs. Guidance and research on video consulting need to consider the availability and work done by informal carers and how they can be supported in providing patients access to digital health care services.


Subject(s)
Anthropology, Cultural , Caregivers , Heart Failure , Neoplasms , Qualitative Research , Humans , Caregivers/psychology , Heart Failure/psychology , Female , Neoplasms/psychology , Anthropology, Cultural/methods , Male , United Kingdom , Video Recording , Adult , Middle Aged , Linguistics , Aged
13.
J Neural Eng ; 21(3)2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38806038

ABSTRACT

Objective. Decoding gestures from the upper limb using noninvasive surface electromyogram (sEMG) signals is of keen interest for the rehabilitation of amputees, artificial supernumerary limb augmentation, gestural control of computers, and virtual/augmented realities. We show that sEMG signals recorded across an array of sensor electrodes in multiple spatial locations around the forearm evince a rich geometric pattern of global motor unit (MU) activity that can be leveraged to distinguish different hand gestures. Approach. We demonstrate a simple technique to analyze spatial patterns of muscle MU activity within a temporal window and show that distinct gestures can be classified in both supervised and unsupervised manners. Specifically, we construct symmetric positive definite covariance matrices to represent the spatial distribution of MU activity in a time window of interest, calculated as pairwise covariance of electrical signals measured across different electrodes. Main results. This allows us to understand and manipulate multivariate sEMG time series on a more natural subspace: the Riemannian manifold. Furthermore, it directly addresses signal variability across individuals and sessions, which remains a major challenge in the field. sEMG signals measured at a single electrode lack contextual information such as how various anatomical and physiological factors influence the signals and how their combined effect alters the evident interaction among neighboring muscles. Significance. As we show here, analyzing spatial patterns using covariance matrices on Riemannian manifolds allows us to robustly model complex interactions across spatially distributed MUs and provides a flexible and transparent framework to quantify differences in sEMG signals across individuals. The proposed method is novel in the study of sEMG signals and its performance exceeds current benchmarks while being computationally efficient.
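The covariance-on-manifold approach described above can be prototyped with the pyriemann package, as in the hedged sketch below: windowed multichannel sEMG is mapped to symmetric positive definite covariance matrices and classified by minimum distance to the Riemannian mean. Shapes, labels, and the random data are illustrative only.

```python
# Sketch of the covariance-on-Riemannian-manifold idea using pyriemann:
# windowed sEMG -> SPD covariance matrices -> minimum distance to mean.
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8, 256))   # 100 windows, 8 electrodes, 256 samples (placeholder)
y = rng.integers(0, 4, size=100)         # 4 hypothetical gestures

covs = Covariances(estimator="oas").fit_transform(X)  # one SPD matrix per window
clf = MDM(metric="riemann").fit(covs, y)              # geodesic-distance classifier
print(clf.predict(covs[:5]))
```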


Subject(s)
Electromyography , Gestures , Hand , Muscle, Skeletal , Humans , Electromyography/methods , Hand/physiology , Male , Female , Adult , Muscle, Skeletal/physiology , Young Adult , Algorithms
14.
Front Neurosci ; 18: 1329411, 2024.
Article in English | MEDLINE | ID: mdl-38737097

ABSTRACT

Myoelectric prostheses have recently shown significant promise for restoring hand function in individuals with upper limb loss or deficiencies, driven by advances in machine learning and increasingly accessible bioelectrical signal acquisition devices. Here, we first introduce and validate a novel experimental paradigm using a virtual reality headset equipped with hand-tracking capabilities to facilitate the recording of synchronized EMG signals and hand pose estimates. Using both the phasic and tonic EMG components of data acquired through the proposed paradigm, we compare hand gesture classification pipelines based on standard signal processing features, convolutional neural networks, and covariance matrices with Riemannian geometry computed from raw or xDAWN-filtered EMG signals, and demonstrate the performance of the latter approach for gesture classification. We further hypothesize that introducing physiological knowledge into machine learning models will enhance their performance, leading to better myoelectric prosthesis control. We demonstrate the potential of this approach by using the neurophysiological integration of the "move command" to better separate the phasic and tonic components of the EMG signals, significantly improving the performance of sustained posture recognition. These results pave the way for the development of new cutting-edge machine learning techniques, likely refined by neurophysiology, that will further improve the decoding of real-time natural gestures and, ultimately, the control of myoelectric prostheses.
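The paper's "move command" integration is not detailed in the abstract; the sketch below shows one generic way to split a rectified EMG envelope into tonic (slow) and phasic (fast) components by low-pass filtering. The 3 Hz cutoff and this decomposition are assumptions for illustration, not the authors' method.

```python
# Illustrative split of a synthetic EMG envelope into tonic and phasic parts
# via low-pass filtering; cutoff and signal are assumptions, not study data.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 2, 1 / fs)
emg = np.abs(np.sin(2 * np.pi * 80 * t)) + 0.2 * np.sin(2 * np.pi * 1 * t)

b, a = butter(4, 3 / (fs / 2), btype="low")   # 4th-order low-pass at 3 Hz
tonic = filtfilt(b, a, emg)                   # slow, sustained component
phasic = emg - tonic                          # fast, movement-related residue
print(tonic.mean(), phasic.std())
```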

15.
J Infect Dev Ctries ; 18(3): 362-370, 2024 Mar 31.
Article in English | MEDLINE | ID: mdl-38635617

ABSTRACT

INTRODUCTION: Coronavirus disease 2019 (COVID-19) is caused by the SARS-CoV-2 virus. It has impacted millions of individuals and caused numerous casualties. Consequently, there was a race to develop vaccines against the virus. However, there has been unequal vaccine distribution among nations, and concerns over side effects have resulted in vaccine hesitancy, reducing vaccination rates in many countries and hindering pandemic eradication. Preventive measures like well-fitted masks, frequent hand washing, alcohol-based sanitizers, and maintaining physical distance remain crucial to curb SARS-CoV-2 transmission. This study examined the adoption of these preventive measures among sellers in the Beni Mellal region of Morocco. RESULTS: We analyzed a cohort of 700 merchants, of whom 40.28% were middle-aged males. Among them, 53% (371/700) wore masks, with 61.08% using medical masks and 44.05% positioning their masks correctly. Additionally, 20.29% (142/700) carried disinfectants, of whom 117 used them at least once in 30 minutes. However, physical distancing was lacking in 78.29% of sellers, particularly among young and middle-aged males (18% and 31.86%, respectively). More than 80% of the vendors had frequent physical contact with others, primarily through the hands. Surprisingly, only 1% (7/700) of participants combined the following preventive measures: using a disinfectant at least once, wearing a well-fitted mask, practicing physical distancing, and avoiding contact with others. Two individuals (0.29%) refrained from touching any surfaces. Money accounted for 76.57% of commonly touched surfaces, yet only 0.29% adhered to the preventive measures while handling money. Furthermore, a majority of individuals (92.14%, 645/700) were observed touching their faces at least once.


Subject(s)
COVID-19 , Disinfectants , Male , Middle Aged , Humans , COVID-19/epidemiology , COVID-19/prevention & control , SARS-CoV-2 , Morocco/epidemiology , Masks , Pandemics/prevention & control
16.
Neuropsychol Rev ; 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38448754

ABSTRACT

Researchers and clinicians have long used meaningful intransitive (i.e., not tool-related; MFI) gestures to assess apraxia, a complex and frequent motor-cognitive disorder. Nevertheless, the neurocognitive bases of these gestures remain incompletely understood. Models of apraxia have assumed that meaningful intransitive gestures depend either on long-term memory (i.e., semantic memory and action lexicons) stored in the left hemisphere, or on social cognition and the right hemisphere. This meta-analysis of 42 studies reports the performance of 2659 patients with either left or right hemisphere damage in tests of meaningful intransitive gestures, as compared to other gestures (i.e., MFT, meaningful transitive, and MLI, meaningless intransitive) and cognitive tests. The key findings are as follows: (1) deficits of meaningful intransitive gestures are more frequent and severe after left than right hemisphere lesions, but they have been reported in both groups; (2) we found a transitivity effect in patients with lesions of the left hemisphere (i.e., meaningful transitive gestures more difficult than meaningful intransitive gestures) but a "reverse" transitivity effect in patients with lesions of the right hemisphere (i.e., meaningful transitive gestures easier than meaningful intransitive gestures); (3) there is a strong association between meaningful intransitive and transitive (but not meaningless) gestures; (4) isolated deficits of meaningful intransitive gestures are more frequent in cases with right than left hemisphere lesions; (5) these deficits may occur in the absence of language and semantic memory impairments; (6) meaningful intransitive gesture performance seems to vary according to the emotional content of gestures (i.e., body-centered gestures and emotional valence-intensity). These findings are partially consistent with the social cognition hypothesis. Methodological recommendations are given for future studies.

17.
Res Dev Disabil ; 148: 104711, 2024 May.
Article in English | MEDLINE | ID: mdl-38520885

ABSTRACT

BACKGROUND: Studies on late talkers (LTs) have highlighted their heterogeneity and the relevance of describing different communicative profiles. AIMS: To examine lexical skills and gesture use in expressive (E-LTs) vs. receptive-expressive (R/E-LTs) LTs through a structured task. METHODS AND PROCEDURES: Forty-six 30-month-old screened LTs were classified into E-LTs (n = 35) and R/E-LTs (n = 11) according to their receptive skills. Lexical skills and gesture use were assessed with a Picture Naming Game by coding answer accuracy (correct, incorrect, no response), modality of expression (spoken, spoken-gestural, gestural), type of gestures (deictic, representational), and the semantic relationship of spoken-gestural answers (complementary, equivalent, supplementary). OUTCOMES AND RESULTS: R/E-LTs showed lower scores than E-LTs for noun and predicate comprehension, with fewer correct answers, and for production, with fewer correct and incorrect answers and more no responses. R/E-LTs also exhibited lower scores in spoken answers, representational gestures, and equivalent spoken-gestural answers for noun production, and in all spoken and gestural answers for predicate production. CONCLUSIONS AND IMPLICATIONS: The findings highlighted more impaired receptive and expressive lexical skills and lower gesture use in R/E-LTs compared to E-LTs, underlining the relevance of assessing both lexical and gestural skills through a structured task, besides parental questionnaires and developmental scales, to describe LTs' communicative profiles.


Subject(s)
Gestures , Language Development Disorders , Humans , Comprehension/physiology , Parents , Language Tests , Vocabulary
18.
Data Brief ; 54: 110299, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38524840

ABSTRACT

The dataset includes thermal videos of various hand gestures captured by the FLIR Lepton thermal camera. A large dataset was created to accurately classify hand gestures captured from eleven different individuals. The dataset consists of 9 classes corresponding to various hand gestures from different people, collected at different time instances and with complex backgrounds. The gestures are flat/leftward, flat/rightward, flat/contract, spread/leftward, spread/rightward, spread/contract, V-shape/leftward, V-shape/rightward, and V-shape/contract. There are 110 videos for each gesture, for a total of 990 videos across the 9 gestures. Each video is provided in three different frame lengths (15, 10, and 5 frames).
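A hedged sketch of how such a dataset might be iterated over, assuming one directory per gesture class; the actual folder layout of this dataset may differ.

```python
# Sketch: walk an assumed folder layout (one directory per gesture class) and
# read thermal videos frame by frame with OpenCV. Paths are assumptions.
import os
import cv2

def load_dataset(root: str):
    for label in sorted(os.listdir(root)):
        class_dir = os.path.join(root, label)
        if not os.path.isdir(class_dir):
            continue
        for fname in sorted(os.listdir(class_dir)):
            cap = cv2.VideoCapture(os.path.join(class_dir, fname))
            frames = []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                frames.append(frame)
            cap.release()
            yield label, frames

for label, frames in load_dataset("thermal_gestures"):  # hypothetical root dir
    print(label, len(frames))
```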

19.
Top Cogn Sci ; 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-38493475

ABSTRACT

Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation.

20.
Anim Cogn ; 27(1): 18, 2024 Mar 02.
Article in English | MEDLINE | ID: mdl-38429467

ABSTRACT

Gestures play a central role in the communication systems of several animal families, including primates. In this study, we provide a first assessment of the gestural systems of a Platyrrhine species, Geoffroy's spider monkey (Ateles geoffroyi). We observed a wild group of 52 spider monkeys and assessed the distribution of visual and tactile gestures in the group, the size of individual repertoires, and the intentionality and effectiveness of individuals' gestural production. Our results showed that younger spider monkeys were more likely than older ones to use tactile gestures. In contrast, we found no inter-individual differences in the probability of producing visual gestures. Repertoire size did not vary with age, but the probability of accounting for recipients' attentional state was higher for older monkeys than for younger ones, especially for gestures in the visual modality. Using vocalizations right before the gesture increased the probability of gesturing towards attentive recipients and of receiving a response, although age had no effect on the probability of gestures being responded to. Overall, our study provides the first evidence of gestural production in a Platyrrhine species and confirms this taxon as a valid candidate for research on animal communication.


Subject(s)
Ateles geoffroyi , Atelinae , Humans , Animals , Gestures , Animal Communication , Individuality