Results 1 - 20 of 2,945
1.
MedEdPORTAL ; 20: 11396, 2024.
Article in English | MEDLINE | ID: mdl-38722734

ABSTRACT

Introduction: People with disabilities and those with non-English language preferences have worse health outcomes than their counterparts due to barriers to communication and poor continuity of care. As members of both groups, people who are Deaf users of American Sign Language have compounded health disparities. Provider discomfort with these specific demographics is a contributing factor, often stemming from insufficient training in medical programs. To help address these health disparities, we created a session on disability, language, and communication for undergraduate medical students. Methods: This 2-hour session was developed as part of a 2020 curriculum shift for a total of 404 second-year medical student participants. We utilized a retrospective postsession survey to analyze learning objective achievement through a comparison of medians using the Wilcoxon signed-rank test (α = .05) for the first 2 years of course implementation. Results: When assessing 158 students' self-perceived abilities to perform each of the learning objectives, students reported significantly higher confidence after the session than their retrospective presession confidence for all four learning objectives (all ps < .001). Responses signifying learning objective achievement (scores of 4, probably yes, or 5, definitely yes), when averaged across the first 2 years of implementation, increased from 73% before the session to 98% after the session. Discussion: Our evaluation suggests medical students could benefit from increased educational initiatives on disability culture and health disparities caused by barriers to communication, to strengthen cultural humility, the delivery of health care, and, ultimately, health equity.
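The paired pre/post comparison this abstract describes can be sketched with SciPy's Wilcoxon signed-rank test. The Likert scores below are invented for illustration, not the study's data:

```python
from scipy.stats import wilcoxon

# Illustrative retrospective pre/post Likert scores (1-5) for one
# learning objective -- NOT the study's actual data.
pre  = [2, 3, 2, 3, 2, 3, 2, 3, 3, 2]
post = [4, 5, 4, 5, 4, 4, 5, 4, 5, 4]

# Paired, non-parametric comparison of medians (two-sided, alpha = .05).
stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")  # p < .05 -> significant pre/post shift
```

Because every paired difference here is positive, the rank-sum statistic is 0 and the test rejects the null at α = .05.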


Subject(s)
Curriculum , Decision Making, Shared , Disabled Persons , Education, Medical, Undergraduate , Students, Medical , Humans , Students, Medical/psychology , Students, Medical/statistics & numerical data , Retrospective Studies , Education, Medical, Undergraduate/methods , Communication Barriers , Surveys and Questionnaires , Male , Female , Sign Language , Language
2.
CBE Life Sci Educ ; 23(2): ar22, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38709798

ABSTRACT

In recent years, an increasing number of deaf and hard of hearing (D/HH) undergraduates have chosen to study in STEM fields and pursue careers in research. Yet, very little research has examined the barriers and inclusive experiences of D/HH undergraduates who prefer to use spoken English in research settings rather than American Sign Language (ASL). To identify barriers and inclusive strategies, we studied six English-speaking D/HH undergraduate students working in research laboratories, along with their eight hearing mentors and three hearing peers, who shared their experiences. Three researchers observed the interactions among all three groups and conducted interviews and focus groups, along with utilizing the Communication Assessment Self-Rating Scale (CASS). The main themes identified in the findings were communication and environmental barriers in research laboratories, creating accessible and inclusive laboratory environments, communication strategies, and self-advocating for effective communication. Recommendations for mentors include understanding the key elements of creating an inclusive laboratory environment for English-speaking D/HH students and effectively demonstrating cultural competence to engage in inclusive practices.


Subject(s)
Students , Humans , Deafness , Male , Female , Persons With Hearing Impairments , Research , Sign Language , Mentors , Language , Communication , Communication Barriers
3.
Biosensors (Basel) ; 14(5)2024 May 03.
Article in English | MEDLINE | ID: mdl-38785701

ABSTRACT

At the heart of the non-implantable electronic revolution lie ionogels, which are remarkably conductive, thermally stable, and even antimicrobial materials. Yet, their potential has been hindered by poor mechanical properties. Herein, a double-network (DN) ionogel was constructed from 1-ethyl-3-methylimidazolium chloride ([Emim]Cl), acrylamide (AM), and polyvinyl alcohol (PVA). With adjustable mechanical properties, such as tensile strength (0.06-5.30 MPa) and fracture elongation (363-1373%), this ionogel possesses both robustness and flexibility; tensile strength, fracture elongation, and conductivity can be tuned across a wide range, enabling researchers to fabricate the material to meet specific needs. The ionogel exhibits a bi-modal response to temperature and strain, making it an ideal candidate for strain-sensor applications. It also functions as a flexible strain sensor that can detect physiological signals in real time, opening doors to personalized health monitoring and disease management. Moreover, these gels' ability to decode the intricate movements of sign language paves the way for improved communication accessibility for the deaf and hard-of-hearing community. This DN ionogel lays the foundation for a future in which e-skins and wearable sensors seamlessly integrate into our lives, revolutionizing healthcare, human-machine interaction, and beyond.


Subject(s)
Sign Language , Humans , Polyvinyl Alcohol/chemistry , Monitoring, Physiologic , Wearable Electronic Devices , Gels/chemistry , Imidazoles/chemistry , Biosensing Techniques , Acrylamide , Tensile Strength
5.
Am Ann Deaf ; 168(5): 274-295, 2024.
Article in English | MEDLINE | ID: mdl-38766939

ABSTRACT

Extant research on learners who are d/Deaf or hard of hearing with disabilities who come from Asian immigrant families is extremely sparse. The authors conducted an intrinsic case study of a deaf student with autism who comes from a Korean immigrant family. To acquire a comprehensive understanding of language and communication characteristics, they analyzed (a) interview data of three administrators who worked with the student and family and (b) school documents/reports issued to the parents. Themes are reported across the three components of the tri-focus framework (Siegel-Causey & Bashinski, 1997): the learner, partner, and environment. Implications for practitioners who work with these learners and their families are discussed, including (a) compiling an individualized language and communication profile that encompasses the framework; (b) utilizing culturally and linguistically responsive practices with the family; (c) practicing interprofessional collaboration; and (d) modifying physical and social environments to increase accessibility.


Subject(s)
Autism Spectrum Disorder , Deafness , Emigrants and Immigrants , Humans , Autism Spectrum Disorder/psychology , Autism Spectrum Disorder/ethnology , Emigrants and Immigrants/psychology , Deafness/psychology , Deafness/rehabilitation , Deafness/ethnology , Male , Communication , Persons With Hearing Impairments/psychology , Education of Hearing Disabled , Child , Republic of Korea , Female , Communication Barriers , Sign Language , Social Environment , Language
6.
Am Ann Deaf ; 168(5): 296-310, 2024.
Article in English | MEDLINE | ID: mdl-38766940

ABSTRACT

This article describes the current landscape of teaching literacy to Filipino Deaf students in a multilingual, multicultural classroom amid the pandemic. The article highlights the uniqueness of Filipino Deaf students as multilingual learners in a multicultural classroom and the lack of literature and research on Deaf multilingualism both locally and globally. Moreover, the article focuses on the role of Deaf teachers in teaching Filipino Deaf students, especially in their literacy development. The steps being taken to ensure that the curriculum is inclusive of Deaf learners who use Filipino Sign Language (FSL), teacher preparation and materials development, and the challenges in the shift to distance learning amid the COVID-19 pandemic are also discussed. Future directions and recommendations include review and adaptation of the curriculum, enhancement of teacher preparation, promotion of collaborative teaching and research efforts, and production of more appropriate and accessible instructional materials for Deaf students.


Subject(s)
COVID-19 , Curriculum , Education of Hearing Disabled , Literacy , Multilingualism , Persons With Hearing Impairments , Sign Language , Humans , COVID-19/epidemiology , Philippines/ethnology , Education of Hearing Disabled/methods , Persons With Hearing Impairments/psychology , Deafness/psychology , SARS-CoV-2 , Child , Education, Distance , Pandemics , Students/psychology
7.
PLoS One ; 19(5): e0304040, 2024.
Article in English | MEDLINE | ID: mdl-38814896

ABSTRACT

This study investigates head nods in natural dyadic German Sign Language (DGS) interaction, with the aim of finding out whether head nods serving different functions vary in their phonetic characteristics. Earlier research on spoken and sign language interaction has revealed that head nods vary in the form of the movement. However, most claims about the phonetic properties of head nods have been based on manual annotation without reference to naturalistic text types, and the head nods produced by the addressee have been largely ignored. Detailed information is lacking about the phonetic properties of the addressee's head nods and their interaction with manual cues in DGS as well as in other sign languages, and the existence of a form-function relationship for head nods remains uncertain. We hypothesize that head nods functioning in the context of affirmation differ from those signaling feedback in their form and their co-occurrence with manual items. To test this hypothesis, we apply OpenPose, a computer vision toolkit, to extract head nod measurements from video recordings and examine head nods in terms of their duration, amplitude, and velocity. We describe the basic phonetic properties of head nods in DGS and their interaction with manual items in naturalistic corpus data. Our results show that the phonetic properties of affirmative nods differ from those of feedback nods. Feedback nods are on average slower in production and smaller in amplitude than affirmation nods, and they are commonly produced without a co-occurring manual element. We attribute these variations in phonetic properties to the distinct roles the two cues fulfill in the turn-taking system. This research underlines the importance of non-manual cues in shaping the turn-taking system of sign languages, establishing links between research fields such as sign language linguistics, conversation analysis, quantitative linguistics, and computer vision.
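Once head keypoints are available (the study used OpenPose), duration, amplitude, and velocity can be derived from the vertical head-position trace. A minimal sketch on synthetic data, assuming a 30 fps recording and normalized image coordinates (the trace and frame rate are illustrative, not from the study):

```python
import numpy as np

FPS = 30  # assumed frame rate

def nod_metrics(y, fps=FPS):
    """Duration (s), peak-to-peak amplitude, and peak velocity
    (units/s) of a vertical head-position trace y."""
    duration = len(y) / fps
    amplitude = float(np.max(y) - np.min(y))
    # Finite-difference velocity over the whole trace, keep the peak.
    velocity = float(np.max(np.abs(np.gradient(y, 1.0 / fps))))
    return duration, amplitude, velocity

# Synthetic 0.5 s nod: one 2 Hz oscillation, 0.05 units deep.
t = np.arange(0, 0.5, 1.0 / FPS)
y = 0.05 * np.sin(2 * np.pi * 2 * t)
dur, amp, vel = nod_metrics(y)
print(dur, amp, vel)
```

The same three measurements could then be compared across functional categories of nods, as the study does for affirmation versus feedback.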


Subject(s)
Phonetics , Sign Language , Humans , Germany , Male , Head/physiology , Female , Language , Head Movements/physiology
8.
Brain Lang ; 253: 105416, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38703524

ABSTRACT

Geometry has been identified as a cognitive domain where deaf individuals exhibit relative strength, yet the neural mechanisms underlying geometry processing in this population remain poorly understood. This fMRI study aimed to investigate the neural correlates of geometry processing in deaf and hearing individuals. Twenty-two adult deaf signers and 25 hearing non-signers completed a geometry decision task. We found no group differences in performance, while there were some differences in parietal activation. As expected, the posterior superior parietal lobule (SPL) was recruited for both groups. The anterior SPL was significantly more activated in the deaf group, and the inferior parietal lobule was significantly more deactivated in the hearing group. In conclusion, despite similar performance across groups, there were differences in the recruitment of parietal regions. These differences may reflect inherent differences in brain organization due to different early sensory and linguistic experiences.


Subject(s)
Brain Mapping , Deafness , Magnetic Resonance Imaging , Parietal Lobe , Sign Language , Humans , Parietal Lobe/diagnostic imaging , Parietal Lobe/physiology , Male , Adult , Female , Deafness/physiopathology , Deafness/diagnostic imaging , Young Adult , Middle Aged
9.
Sensors (Basel) ; 24(10)2024 May 14.
Article in English | MEDLINE | ID: mdl-38793964

ABSTRACT

Deaf and hard-of-hearing people mainly communicate using sign language, a set of signs made with hand gestures combined with facial expressions to form meaningful and complete sentences. The problem facing deaf and hard-of-hearing people is the lack of automatic tools that translate sign languages into written or spoken text, which has led to a communication gap between them and their communities. Most state-of-the-art vision-based sign language recognition approaches focus on translating non-Arabic sign languages, with few targeting Arabic Sign Language (ArSL) and even fewer targeting Saudi Sign Language (SSL). This paper proposes a mobile application that helps deaf and hard-of-hearing people in Saudi Arabia communicate efficiently with their communities. The prototype is an Android-based mobile application that applies deep learning techniques to translate isolated SSL to text and audio, and it includes unique features that are not available in other related applications targeting ArSL. When evaluated on a comprehensive dataset, the proposed approach demonstrated its effectiveness by outperforming several state-of-the-art approaches and producing results comparable to the rest. Moreover, testing the prototype on several deaf and hard-of-hearing users, in addition to hearing users, proved its usefulness. In the future, we aim to improve the accuracy of the model and enrich the application with more features.


Subject(s)
Deep Learning , Sign Language , Humans , Saudi Arabia , Mobile Applications , Deafness/physiopathology , Persons With Hearing Impairments
10.
PLoS One ; 19(4): e0298479, 2024.
Article in English | MEDLINE | ID: mdl-38625906

ABSTRACT

OBJECTIVES: (i) To identify peer-reviewed publications reporting the mental and/or physical health outcomes of Deaf adults who are sign language users and to synthesise the evidence; (ii) if data were available, to analyse how the health of the adult Deaf population compares to that of the general population; (iii) to evaluate the quality of evidence in the identified publications; (iv) to identify limitations of the current evidence base and suggest directions for future research. DESIGN: Systematic review. DATA SOURCES: Medline, Embase, PsychINFO, and Web of Science. ELIGIBILITY CRITERIA FOR SELECTING STUDIES: The inclusion criteria were Deaf adult populations who used a signed language and all study types, including methods-focused papers that also contain results relating to health outcomes of Deaf signing populations. Full-text articles published in peer-reviewed journals were searched up to 13th June 2023, published in English or a signed language such as ASL (American Sign Language). DATA EXTRACTION: Supported by the Rayyan systematic review software, two authors independently reviewed identified publications at each screening stage (primary and secondary). A third reviewer was consulted to settle any disagreements. Comprehensive data extraction included research design, study sample, methodology, findings, and a quality assessment. RESULTS: Of the 35 included studies, the majority (25 of 35) concerned mental health outcomes. The findings from this review highlight the inequalities in health and mental health outcomes for Deaf signing populations in comparison with the general population, gaps in the range of conditions studied in relation to Deaf people, and the poor quality of available data. CONCLUSIONS: Population sample definition and consistency of standards of reporting of health outcomes for Deaf people who use sign language should be improved. Further research on health outcomes not previously reported is needed to gain a better understanding of Deaf people's state of health.


Subject(s)
Outcome Assessment, Health Care , Sign Language , Adult , Humans
11.
PLoS One ; 19(4): e0298699, 2024.
Article in English | MEDLINE | ID: mdl-38574042

ABSTRACT

Sign language recognition presents significant challenges due to the intricate nature of hand gestures and the necessity of capturing fine-grained details. In response to these challenges, a novel approach is proposed: the Lightweight Attentive VGG16 with Random Forest (LAVRF) model. LAVRF introduces a refined adaptation of the VGG16 model integrated with attention modules, complemented by a Random Forest classifier. By streamlining the VGG16 architecture, the Lightweight Attentive VGG16 effectively manages complexity while incorporating attention mechanisms that dynamically concentrate on pertinent regions within input images, resulting in enhanced representation learning. Leveraging the Random Forest classifier provides notable benefits, including proficient handling of high-dimensional feature representations, reduction of variance and overfitting concerns, and resilience against noisy and incomplete data. Additionally, model performance is further optimized through hyperparameter optimization, utilizing Optuna in conjunction with hill climbing, which efficiently explores the hyperparameter space to discover optimal configurations. The proposed LAVRF model demonstrates outstanding accuracy on three datasets, achieving remarkable results of 99.98%, 99.90%, and 100% on the American Sign Language, American Sign Language with Digits, and NUS Hand Posture datasets, respectively.
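The "CNN features into a Random Forest head" pattern this abstract describes can be sketched with scikit-learn. The CNN (VGG16 with attention) and the Optuna tuning are out of scope here, so synthetic feature vectors stand in for the learned representations; all sizes and names are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for CNN (e.g., VGG16) feature vectors: 64-D synthetic
# features for a toy 4-class "hand sign" problem.
X, y = make_classification(n_samples=400, n_features=64, n_informative=12,
                           n_classes=4, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)

# Random Forest classifier head over the (stand-in) deep features.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, rf.predict(X_te))
print(f"held-out accuracy: {acc:.3f}")
```

In the actual pipeline the feature matrix would come from the truncated, attention-augmented VGG16, and the forest's hyperparameters would be chosen by the Optuna/hill-climbing search rather than left at defaults.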


Subject(s)
Random Forest , Sign Language , Humans , Pattern Recognition, Automated/methods , Gestures , Upper Extremity
12.
Brain Lang ; 252: 105413, 2024 May.
Article in English | MEDLINE | ID: mdl-38608511

ABSTRACT

Sign languages (SLs) are expressed through different bodily actions, ranging from re-enactment of physical events (constructed action, CA) to sequences of lexical signs with internal structure (plain telling, PT). Despite the prevalence of CA in signed interactions and its significance for SL comprehension, its neural dynamics remain unexplored. We examined the processing of different types of CA (subtle, reduced, and overt) and PT in 35 adult deaf or hearing native signers. The electroencephalographic-based processing of signed sentences with incongruent targets was recorded. Attenuated N300 and early N400 were observed for CA in deaf but not in hearing signers. No differences were found between sentences with CA types in all signers, suggesting a continuum from PT to overt CA. Deaf signers focused more on body movements; hearing signers on faces. We conclude that CA is processed less effortlessly than PT, arguably because of its strong focus on bodily actions.


Subject(s)
Comprehension , Deafness , Electroencephalography , Sign Language , Humans , Comprehension/physiology , Adult , Male , Female , Deafness/physiopathology , Young Adult , Brain/physiology , Evoked Potentials/physiology
14.
Sensors (Basel) ; 24(5)2024 Feb 24.
Article in English | MEDLINE | ID: mdl-38475008

ABSTRACT

Sign language serves as the primary mode of communication for the deaf community. With technological advancements, it is crucial to develop systems capable of enhancing communication between deaf and hearing individuals. This paper reviews recent state-of-the-art methods in sign language recognition, translation, and production. Additionally, we introduce a rule-based system, called ruLSE, for generating synthetic datasets in Spanish Sign Language. To check the usefulness of these datasets, we conduct experiments with two state-of-the-art models based on Transformers, MarianMT and Transformer-STMC. In general, we observe that the former achieves better results (+3.7 points in the BLEU-4 metric) although the latter is up to four times faster. Furthermore, the use of pre-trained word embeddings in Spanish enhances results. The rule-based system demonstrates superior performance and efficiency compared to Transformer models in Sign Language Production tasks. Lastly, we contribute to the state of the art by releasing the generated synthetic dataset in Spanish named synLSE.


Subject(s)
Deep Learning , Humans , Sign Language , Hearing , Communication
16.
BMJ ; 384: 2615, 2024 02 28.
Article in English | MEDLINE | ID: mdl-38418094

Subject(s)
Deafness , Sign Language , Humans
18.
Science ; 383(6682): 519-523, 2024 Feb 02.
Article in English | MEDLINE | ID: mdl-38301028

ABSTRACT

Sign languages are naturally occurring languages. As such, their emergence and spread reflect the histories of their communities. However, limitations in historical recordkeeping and linguistic documentation have hindered the diachronic analysis of sign languages. In this work, we used computational phylogenetic methods to study family structure among 19 sign languages from deaf communities worldwide. We used phonologically coded lexical data from contemporary languages to infer relatedness and suggest that these methods can help study regular form changes in sign languages. The inferred trees are consistent in key respects with known historical information but challenge certain assumed groupings and surpass analyses made available by traditional methods. Moreover, the phylogenetic inferences are not reducible to geographic distribution but do affirm the importance of geopolitical forces in the histories of human languages.
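The study's phylogenetic inference is far richer than a snippet allows, but the core idea (estimating relatedness from phonologically coded lexical data) can be illustrated with Hamming distances and UPGMA clustering over toy binary codings. The language labels and feature vectors below are invented:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Invented binary phonological codings for four toy "sign languages".
langs = ["SL-A", "SL-B", "SL-C", "SL-D"]
X = np.array([
    [1, 1, 1, 0, 0, 0],  # SL-A
    [1, 1, 0, 0, 0, 0],  # SL-B (coded similarly to SL-A)
    [0, 0, 0, 1, 1, 1],  # SL-C
    [0, 0, 1, 1, 1, 1],  # SL-D (coded similarly to SL-C)
])

# Hamming distance + average linkage (UPGMA) builds a rough tree;
# cutting it into two groups recovers the two "families".
Z = linkage(pdist(X, metric="hamming"), method="average")
groups = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(langs, groups)))
```

The paper's actual methods are model-based (not simple distance clustering), which is what lets them go beyond geography and test assumed family groupings; this sketch only conveys the shape of the data and the tree-building goal.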


Subject(s)
Language , Linguistics , Sign Language , Humans , Language/history , Linguistics/classification , Linguistics/history , Phylogeny
19.
Sensors (Basel) ; 24(3)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38339542

ABSTRACT

Japanese Sign Language (JSL) is vital for communication in Japan's deaf and hard-of-hearing community. However, likely because of the large number of patterns (46 types) mixing static and dynamic signs, the dynamic signs have been excluded in most studies. Few researchers have worked on a dynamic JSL alphabet, and their recognition accuracy remains unsatisfactory. We propose a dynamic JSL recognition system using effective feature extraction and feature selection approaches to overcome these challenges. The procedure follows hand pose estimation, effective feature extraction, and machine learning techniques. We collected a video dataset capturing JSL gestures through standard RGB cameras and employed MediaPipe for hand pose estimation. Four types of features were proposed; their significance is that the same feature generation method can be used regardless of the number of frames or whether the gestures are dynamic or static. We employed a Random Forest (RF)-based feature selection approach to select the potential features. Finally, we fed the reduced features into a kernel-based Support Vector Machine (SVM) classifier. Evaluations conducted on our newly created dynamic Japanese sign language alphabet dataset and the LSA64 dynamic dataset yielded recognition accuracies of 97.20% and 98.40%, respectively. This innovative approach not only addresses the complexities of JSL but also holds the potential to bridge communication gaps, offering effective communication for the deaf and hard-of-hearing, and has broader implications for sign language recognition systems globally.
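The back half of this pipeline (Random-Forest-based feature selection feeding a kernel SVM) can be sketched with scikit-learn. The MediaPipe hand-pose extraction is omitted, so synthetic vectors stand in for the pose-derived features; every size and name here is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for hand-landmark features (e.g., from MediaPipe).
X, y = make_classification(n_samples=500, n_features=80, n_informative=15,
                           n_classes=5, n_clusters_per_class=1,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=1, stratify=y)

# RF importances select a reduced feature set; an RBF-kernel SVM
# then classifies the reduced vectors.
clf = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=1)),
    StandardScaler(),
    SVC(kernel="rbf"),
)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.3f}")
```

`SelectFromModel` keeps features whose forest importance exceeds the mean, which mirrors the "reduced features into the SVM" step; the study's own selection rule and kernels may differ.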


Subject(s)
Pattern Recognition, Automated , Sign Language , Humans , Japan , Pattern Recognition, Automated/methods , Hand , Algorithms , Gestures
20.
J Biomech ; 165: 112011, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38382174

ABSTRACT

Prior studies suggest that native (born to at least one deaf or signing parent) and non-native signers have different musculoskeletal health outcomes from signing, but the individual and combined biomechanical factors driving these differences are not fully understood. Such group differences in signing may be explained by the five biomechanical factors of American Sign Language that have been previously identified: ballistic signing, hand and wrist deviations, work envelope, muscle tension, and "micro" rests. Prior work used motion capture and surface electromyography to collect joint kinematics and muscle activations, respectively, from ten native and thirteen non-native signers as they signed for 7.5 min. Each factor was individually compared between groups. A factor analysis was used to determine the relative contributions of each biomechanical factor between signing groups. No significant differences were found between groups for ballistic signing, hand and wrist deviations, work envelope volume, excursions from recommended work envelope, muscle tension, or "micro" rests. Factor analysis revealed that "micro" rests had the strongest contribution for both groups, while hand and wrist deviations had the weakest contribution. Muscle tension and work envelope had stronger contributions for native compared to non-native signers, while ballistic signing had a stronger contribution for non-native compared to native signers. Using a factor analysis enabled discernment of relative contributions of biomechanical variables across native and non-native signers that could not be detected through isolated analysis of individual measures. Differences in the contributions of these factors may help explain the differences in signing across native and non-native signers.
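The factor-analytic step can be sketched with scikit-learn's `FactorAnalysis` on synthetic data standing in for the five biomechanical measures. The variable names, loadings, and latent structure below are invented for illustration and are not the study's data or results:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200  # synthetic signing observations

# One invented latent "signing style" factor driving five measures.
latent = rng.normal(size=n)
measures = np.column_stack([
    1.0 * latent + 0.3 * rng.normal(size=n),  # "micro" rests
    0.8 * latent + 0.5 * rng.normal(size=n),  # muscle tension
    0.7 * latent + 0.5 * rng.normal(size=n),  # work envelope
    0.6 * latent + 0.6 * rng.normal(size=n),  # ballistic signing
    0.2 * latent + 1.0 * rng.normal(size=n),  # hand/wrist deviations
])

fa = FactorAnalysis(n_components=1, random_state=0).fit(measures)
loadings = fa.components_[0]
print(np.round(loadings, 2))  # each measure's weight on the latent factor
```

Comparing the magnitudes of such loadings between groups is what lets a factor analysis rank the relative contributions of the measures, as the study does for native versus non-native signers.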


Subject(s)
Hand , Sign Language , Humans , United States , Upper Extremity , Wrist , Factor Analysis, Statistical