Results 1 - 20 of 42
1.
Disabil Rehabil Assist Technol ; : 1-11, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38962994

ABSTRACT

Purpose: Visual impairment poses significant challenges in daily life, especially when navigating unfamiliar environments, resulting in inequalities and reduced quality of life. This study aimed to gain an in-depth understanding of the needs and perspectives of visually impaired people in sports-related contexts through surveys and focus groups, and to determine whether their needs are being met by current technological solutions. Materials and methods: Opinions gathered from focus groups and interviews were compared with the technological solutions found in the literature. Since many unmet needs were identified, participants from associations and organizations were asked to identify key characteristics for the development of a robot guide. The results underscored the paramount importance of an easy-to-use guide that offers accurate and personalized assistance. Participants expressed a strong desire for advanced features such as object recognition and navigation in complex environments, as well as adaptability to the user's walking speed, together with the safety features necessary to ensure a high level of autonomy. Results: This research serves as a bridge between technological advances and the needs of the visually impaired, contributing to a more accessible and inclusive society. By addressing the unique challenges faced by visually impaired individuals and tailoring technology to meet their needs, this study takes a significant step toward reducing disparities and improving the independence and quality of life of this community. Conclusions: As technology continues to advance, it has the potential to be a powerful tool in breaking down barriers and fostering a world where everyone, regardless of visual ability, can navigate with confidence and ease.


Inclusive design: Recognizing the importance of incorporating the unique requirements and perspectives of visually impaired individuals can guide the development of rehabilitation technology and services, ensuring they effectively support daily activities and active participation in sports and physical pursuits.
Tailored assistive technology: Understanding the specific needs of visually impaired individuals with regard to assistive technology, such as dependable robotic guides and essential features, can inform the design and customization of rehabilitation aids to enhance mobility and independence.
Promising technologies: Exploring promising technologies like Aira, Be My Eyes, RoboCart, and Wayband can inspire the integration of these innovations into rehabilitation programs, facilitating better orientation, mobility, and accessibility for individuals with visual impairments.
Continued research and development: Emphasizing the necessity of ongoing research and development underscores the importance of advancing rehabilitation solutions that effectively address the distinct needs of visually impaired individuals, particularly in navigating unfamiliar environments.

2.
Life (Basel) ; 14(3)2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38541681

ABSTRACT

The ability of individuals with visual impairment to recognize an obstacle by hearing is called "obstacle sense". This ability is facilitated while they are moving, though the exact reason remains unknown. This study aims to clarify which acoustic factors may contribute to obstacle sense, especially obstacle distance perception. First, we conducted a comparative experiment on obstacle distance localization by blind individuals (N = 5, five men with blindness aged 22-42 (average: 29.8)) while they were standing and walking. The results indicate that the localized distance was more accurate while walking than while standing. Subsequently, the head rotation angle while walking and the acoustic characteristics with respect to obstacle distance and head rotation angle were investigated. The peaks of the absolute head rotation angle during walking ranged from 2.78° to 11.11° (average: 6.55°, S.D.: 2.05°). Regarding the acoustic characteristics, acoustic coloration occurred, and spectral interaural differences and interaural intensity differences were observed in the participants (N = 4: two blind and two sighted control men aged 25-38 (average: 30.8)). To determine which acoustic factors contribute, we examined the detection thresholds for head-rotation-dependent changes in interaural time difference (ITD) and interaural intensity difference (IID) (N = 11, seven men and four women with blindness aged 21-35 (average: 27.4)), as well as interaural coloration difference (ICD) (N = 6, seven men and a woman with blindness aged 21-38 (average: 29.9)). Notably, the ITD and IID thresholds were 86.2 µs and 1.28 dB, and the corresponding head rotation angles were 23.5° and 9.17°, respectively. The angle at the ICD threshold was 6.30° on average. Consequently, IID might be a contributing factor, and ICD can be used as a cue facilitating obstacle distance perception while walking.

3.
Disabil Rehabil Assist Technol ; : 1-16, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38469665

ABSTRACT

PURPOSE: Visually impaired people (VIP) find it challenging to understand and gain awareness of their surroundings; most activities require the use of the auditory or tactile senses. As such, assistive systems capable of helping visually impaired people understand, navigate and form a mental representation of their environment are being extensively studied and developed. The aim of this paper is to provide insight into the characteristics, advantages and drawbacks of different types of sonification strategies in assistive systems, and to assess their suitability for certain use cases. MATERIALS AND METHODS: To this end, we reviewed a sizeable number of assistive solutions for VIP that provide some form of auditory feedback to the user, found in different scientific databases (Scopus, IEEE Xplore, ACM and Google Scholar) through direct searches and cross-referencing. RESULTS: We classified these solutions based on the aural information they provide to the VIP: alerts, guidance, and information about their environment, be it spatial or semantic. Our intention is not to provide an exhaustive review, but to select representative implementations from recent literature that highlight the particularities of each sonification approach. CONCLUSIONS: Thus, anyone intent on developing an assistive solution will be able to choose the desired sonification class, being aware of its advantages and disadvantages and having a fairly wide selection of articles from that class to draw on.


The motivation behind this paper is to provide an overview of sonification strategies in the context of assistive systems for visually impaired people. While surveys and reviews that provide in-depth insights into assistive technologies and sonification exist, papers that provide a combined view of these topics are lacking. The analysis of the selected papers provides insight into the characteristics of different types of sonification strategies in assistive systems for visually impaired people and their suitability for certain use cases.
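To make the "alert" class of sonification concrete, here is a minimal sketch that maps obstacle distance to a beep repetition rate. The mapping and the parameter values are illustrative assumptions and are not taken from any specific system in the review.

```python
# Minimal sketch of "alert"-style sonification: map obstacle distance to a beep repetition
# rate (closer obstacle -> faster beeps). Parameter values are illustrative only; they are
# not taken from any system discussed in the review.
def beep_interval_seconds(distance_m: float,
                          min_interval: float = 0.1,
                          max_interval: float = 1.5,
                          max_range_m: float = 4.0) -> float:
    """Return the pause between beeps for a given obstacle distance."""
    clamped = max(0.0, min(distance_m, max_range_m))
    # Linear mapping: 0 m -> fastest beeping, max_range_m -> slowest beeping.
    return min_interval + (clamped / max_range_m) * (max_interval - min_interval)

if __name__ == "__main__":
    for d in (0.3, 1.0, 2.5, 4.0):
        print(f"{d:.1f} m -> beep every {beep_interval_seconds(d):.2f} s")
```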

4.
Cogitare Enferm. (Online) ; 29: e92082, 2024. graf
Article in Portuguese | LILACS-Express | LILACS, BDENF - Nursing | ID: biblio-1534257

ABSTRACT

ABSTRACT Objective: To develop educational technologies on prenatal care with and for visually impaired women. Method: A methodological study with a participatory interface and qualitative approach. It was carried out at a Specialized Technical Unit in the municipality of Belém, Pará, Brazil. Data production took place between August and September 2021 with six women. DOSVOX was used as a communication resource for the participants to answer four instruments with a view to developing the technologies. The analysis was of the thematic content type. Results: Women with visual impairment want respect for their autonomy, inclusion, and information from the professionals. The technologies produced point to the specific demands of visually impaired women and to the importance of preserving autonomy during prenatal care. Conclusion: Technologies produced in a participatory way point out women's specific perspectives and needs regarding prenatal care and may support both the nurses' actions in consultations and favor women with visual impairment during prenatal care.


5.
Disabil Rehabil Assist Technol ; : 1-13, 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38018463

ABSTRACT

PURPOSE: In vocational education and training in computer literacy as part of vocational rehabilitation, learners often work on problem-solving exercises as self-study assignments and check whether their answers are correct. Sighted learners can obtain information about their incorrect answers by comparing them with the correct answers; however, learners with visual impairments largely depend on their teachers for this feedback. To remove this dependence, we designed a self-checking system that lets learners with visual impairments verify the correctness of their answers. In this paper, we report the results of a usability study evaluating whether learners with visual impairments can self-check spreadsheet problem-solving exercises using our system in a teacherless environment. METHODS: A usability evaluation experiment was conducted using a 2 × 2 crossover design with people with visual impairments (n = 11). The participants checked their answers (detected and corrected errors) after working on problem-solving exercises in two ways: (i) manually; and (ii) using our system. Usability was evaluated by measuring the Detection-And-Correction (DAC) ratio as effectiveness, the time taken and the number of steps required for DAC as efficiency, and the System Usability Scale score as satisfaction. RESULTS AND CONCLUSIONS: The results show that all participants could complete the DAC task using our system, and the time required for the DAC task was significantly reduced compared with checking manually. Our system enables learners with visual impairments to self-check their answers to problem-solving exercises. However, to increase user satisfaction, the number of required keystrokes needs to be decreased.


Vocational rehabilitation to improve the computer literacy of learners with visual impairments is becoming increasingly important. Learners with visual impairments have the potential to acquire computer literacy in a teacherless environment by using simple assistive software such as our self-checking system. Simple assistive software of this kind may have a positive effect not only on learners with visual impairments but also on sighted people. Moreover, our system reduces the teaching load of teachers, so that they can be more effective in helping learners with visual impairments.
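As a rough illustration of the kind of automated answer checking described above, the sketch below compares a learner's spreadsheet against an answer-key workbook cell by cell and prints short, screen-reader-friendly messages. The file names, sheet layout and message wording are assumptions of mine; this is not the authors' system.

```python
# Minimal sketch: compare a learner's spreadsheet with an answer key, cell by cell.
# File names, sheet choice and message wording are assumptions for illustration only;
# this is not the self-checking system described in the article.
from openpyxl import load_workbook

def check_answers(learner_path: str, key_path: str) -> list[str]:
    learner = load_workbook(learner_path, data_only=True).active
    key = load_workbook(key_path, data_only=True).active
    messages = []
    for row in key.iter_rows():
        for key_cell in row:
            if key_cell.value is None:
                continue
            learner_value = learner[key_cell.coordinate].value
            if learner_value != key_cell.value:
                # Speak-friendly output: one short sentence per mismatch.
                messages.append(
                    f"Cell {key_cell.coordinate}: expected {key_cell.value}, found {learner_value}."
                )
    return messages or ["All checked cells are correct."]

if __name__ == "__main__":
    for line in check_answers("learner_answer.xlsx", "answer_key.xlsx"):
        print(line)  # Could be passed on to a screen reader or speech synthesizer.
```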

6.
Sensors (Basel) ; 23(13)2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37447778

ABSTRACT

There are many visually impaired people globally, and it is important to support their ability to walk independently. Acoustic signals and escort zones have been installed at pedestrian crossings so that visually impaired people can walk safely; however, pedestrian accidents, including those involving the visually impaired, continue to occur. Therefore, to enable safe walking for the visually impaired at pedestrian crossings, we present an automatic sensing method for pedestrian crossings using images from cameras worn by the user. Because the white rectangular stripes that mark pedestrian crossings are aligned, we focused on the edges of these stripes and propose a novel pedestrian crossing sensing method based on the dispersion of line slopes in Hough space. The proposed method handles challenging scenarios that traditional methods struggle with: it detects crosswalks even in low-light conditions at night, when illumination levels vary, and even when parts of the crossing are partially obscured by objects or obstructions. By minimizing computational cost, the method achieves high real-time performance, ensuring efficient and timely crosswalk detection in real-world environments. Specifically, it demonstrates an accuracy of 98.47%, and the algorithm runs at near real-time speed (approximately 10.5 fps) on a Jetson Nano small form-factor computer, showing its suitability for a wearable device.
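A minimal sketch of the core idea as I read it from the abstract: detect straight edges with a Hough transform and use the spread of their angles as a crosswalk cue, since parallel stripe edges cluster around one angle. The preprocessing, thresholds and angle-spread limit below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: use the dispersion of Hough-line angles as a crosswalk cue.
# Parameter values (Canny thresholds, vote count, angle-spread limit) are illustrative
# assumptions, not the values used in the article.
import cv2
import numpy as np

def looks_like_crosswalk(image_bgr: np.ndarray,
                         min_lines: int = 8,
                         max_angle_std_deg: float = 10.0) -> bool:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None or len(lines) < min_lines:
        return False
    thetas = lines[:, 0, 1]                      # Hough line angles in radians (0..pi)
    # Treat angles as axial data (theta and theta + pi are the same line direction):
    # double the angle, take the circular resultant length, convert to an angular spread.
    resultant = np.abs(np.mean(np.exp(2j * thetas)))
    spread = 0.5 * np.sqrt(max(-2.0 * np.log(max(resultant, 1e-12)), 0.0))
    return np.degrees(spread) < max_angle_std_deg

if __name__ == "__main__":
    frame = cv2.imread("frame.jpg")              # hypothetical input frame
    if frame is not None:
        print("crosswalk-like stripes:", looks_like_crosswalk(frame))
```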


Subject(s)
Pedestrians , Visually Impaired Persons , Humans , Accidents, Traffic , Safety , Algorithms , Walking
7.
Sensors (Basel) ; 23(11)2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37299996

ABSTRACT

Visually impaired people seek social integration, yet their mobility is restricted. They need a personal navigation system that preserves privacy and increases their confidence, improving quality of life. In this paper, based on deep learning and neural architecture search (NAS), we propose an intelligent navigation assistance system for visually impaired people. Deep learning models have achieved significant success through well-designed architectures, and NAS has proved to be a promising technique for automatically searching for an optimal architecture, reducing the human effort required for architecture design. However, this technique requires extensive computation, which limits its wide use; as a result, NAS has been less investigated for computer vision tasks, especially object detection. We therefore propose a fast NAS that searches for an object detection framework with efficiency in mind. The NAS is used to explore the feature pyramid network and the prediction stage of an anchor-free object detection model, and is based on a tailored reinforcement learning technique. The searched model was evaluated on a combination of the COCO dataset and the Indoor Object Detection and Recognition (IODR) dataset. The resulting model outperformed the original model by 2.6% in average precision (AP) with acceptable computational complexity. These results demonstrate the efficiency of the proposed NAS for custom object detection.
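The abstract does not give enough detail to reproduce the tailored reinforcement-learning controller, so the sketch below shows architecture search in its simplest form: sample candidate FPN/head configurations from a small search space and keep the one with the best proxy score. The search space and the scoring function are placeholders of my own, meant only to illustrate the search loop, not the paper's method.

```python
# Highly simplified architecture-search loop (random search), illustrating the idea of
# exploring FPN/prediction-head configurations. The search space and proxy score are
# invented placeholders; the article uses a tailored reinforcement-learning controller
# and validates candidates on COCO + IODR, which is not reproduced here.
import random

SEARCH_SPACE = {
    "fpn_channels": [64, 96, 128, 192],
    "head_depth": [2, 3, 4],
    "use_depthwise": [True, False],
}

def sample_config(rng: random.Random) -> dict:
    return {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}

def proxy_score(config: dict) -> float:
    # Placeholder: reward smaller heads and depthwise convolutions as a stand-in for
    # "accuracy per unit of computation". Real NAS would train and evaluate each candidate.
    cost = config["fpn_channels"] * config["head_depth"]
    if config["use_depthwise"]:
        cost *= 0.4
    return 1000.0 / cost

def search(num_trials: int = 20, seed: int = 0) -> dict:
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(num_trials):
        config = sample_config(rng)
        score = proxy_score(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config

if __name__ == "__main__":
    print("best candidate:", search())
```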


Subject(s)
Deep Learning , Self-Help Devices , Sensory Aids , Visually Impaired Persons , Humans
8.
Polymers (Basel) ; 15(9)2023 May 03.
Article in English | MEDLINE | ID: mdl-37177326

ABSTRACT

We conducted research on how tactile content is created for visually impaired individuals. From the data collected, an experiment was developed and applied. It investigated alternative materials to serve as a base for 3D printing in order to reduce production costs, and evaluated the adhesion of contour lines of different widths, heights, and angles, as well as different geometric shapes and top/bottom fill patterns, on these materials. The results show that it is possible to use cellulose-based materials weighing between 120 g/m2 and 180 g/m2 to support the prints instead of printing a base for the information, with gains of up to 40 times in production time and up to 29 times in material consumption if the manufactured content does not need to be folded. Based on everyday activities of visually impaired people, such as locating and following a line (exploration), discerning different textures (tactile discrimination), identifying figures (picture comprehension), and locating copies of them (spatial comprehension), the ideal line widths for 3D printing adhesion in tactile content creation were found to be between 0.8 mm and 1.2 mm, while 0.4 mm was the maximum height that did not compromise adhesion. When bending the 3D-printed material on the surface, lines at angles between 0° and 20° from the bending direction kept their adhesion as well. Shapes should receive a small rounding at the corners and preferably be aligned with the angles mentioned. The top/bottom fill patterns did not affect adhesion. The infill can be used as a texture generator and should be adjusted to densities of 10% to 50%, or 10% to 90% when combined with other textures. In the first case, users were able to perceive differences in the tactile content whenever a single infill pattern was used; in the latter, combining two infill patterns leads to a more discriminable surface, resulting in a larger number of textures available for tactile content production (analogous to the number of colors used in an image for a person with no visual impairment).
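For readers who want to turn these guidelines into print settings, the sketch below encodes the reported ranges as a simple validation function. The field names and overall structure are my own; only the numeric ranges come from the abstract.

```python
# Minimal sketch: validate tactile-print parameters against the ranges reported in the
# abstract (line width 0.8-1.2 mm, max height 0.4 mm, line angle within 20 degrees of the
# bending direction, infill 10-50% alone or 10-90% when combined with another texture).
# Field names and the overall structure are illustrative, not from the article.
from dataclasses import dataclass

@dataclass
class TactilePrintSettings:
    line_width_mm: float
    line_height_mm: float
    angle_from_bend_deg: float
    infill_percent: float
    combined_infill: bool = False

def check_settings(s: TactilePrintSettings) -> list[str]:
    problems = []
    if not 0.8 <= s.line_width_mm <= 1.2:
        problems.append("line width outside the 0.8-1.2 mm range recommended for adhesion")
    if s.line_height_mm > 0.4:
        problems.append("line height above the 0.4 mm maximum that preserved adhesion")
    if not 0.0 <= s.angle_from_bend_deg <= 20.0:
        problems.append("line angle more than 20 degrees from the bending direction")
    upper = 90.0 if s.combined_infill else 50.0
    if not 10.0 <= s.infill_percent <= upper:
        problems.append(f"infill outside the 10-{upper:.0f}% range for discriminable textures")
    return problems

if __name__ == "__main__":
    settings = TactilePrintSettings(1.0, 0.4, 10.0, 30.0)
    print(check_settings(settings) or ["settings within the reported ranges"])
```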

9.
Sensors (Basel) ; 23(8)2023 Apr 17.
Article in English | MEDLINE | ID: mdl-37112374

ABSTRACT

In this work, we developed a prototype that adopts sound-based systems for the localization of visually impaired individuals. The system was implemented on a wireless ultrasound network that helps blind and visually impaired users navigate and maneuver autonomously. Ultrasonic systems use high-frequency sound waves to detect obstacles in the environment and provide location information to the user. Voice recognition and long short-term memory (LSTM) techniques were used to design the algorithms, and the Dijkstra algorithm was used to determine the shortest distance between two places. Assistive hardware, including an ultrasonic sensor network, a global positioning system (GPS) receiver, and a digital compass, was used to implement this method. For the indoor evaluation, three nodes were placed on the doors of different rooms inside the house (kitchen, bathroom, and bedroom). For the outdoor evaluation, the coordinates (latitude and longitude points) of four outdoor locations (mosque, laundry, supermarket, and home) were identified and stored in a microcomputer's memory. The results showed that the root mean square error for the indoor setting after 45 trials was about 0.192, and the Dijkstra algorithm determined the shortest distance between two places with an accuracy of 97%.
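As a reminder of how the routing step works, here is a minimal Dijkstra sketch over a toy graph of the four outdoor locations named in the abstract. The edge weights are invented for illustration and have nothing to do with the study's data.

```python
# Minimal Dijkstra sketch over a toy graph of the outdoor locations named in the abstract.
# Edge weights are invented for illustration; they are not the study's data.
import heapq

GRAPH = {
    "home":        {"mosque": 400, "laundry": 250},
    "mosque":      {"home": 400, "supermarket": 300},
    "laundry":     {"home": 250, "supermarket": 150},
    "supermarket": {"mosque": 300, "laundry": 150},
}

def dijkstra(graph: dict, start: str, goal: str):
    queue = [(0, start, [start])]          # (distance so far, node, path)
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return float("inf"), []

if __name__ == "__main__":
    distance, route = dijkstra(GRAPH, "home", "supermarket")
    print(f"shortest distance: {distance} m via {' -> '.join(route)}")
```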


Subject(s)
Self-Help Devices , Visually Impaired Persons , Humans , Geographic Information Systems , Ultrasonography , Algorithms
10.
Sensors (Basel) ; 23(3)2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36772117

ABSTRACT

Current artificial intelligence systems for determining a person's emotions rely heavily on lip and mouth movement and on other facial features such as the eyebrows, eyes, and forehead. Furthermore, low-light images are typically classified incorrectly because of the dark region around the eyes and eyebrows. In this work, we propose a facial emotion recognition method for masked facial images that uses low-light image enhancement and analysis of the upper facial features with a convolutional neural network. The approach employs the AffectNet image dataset, which includes eight types of facial expressions and 420,299 images. First, the lower part of the input facial image is covered by a synthetic mask. Boundary and regional representation methods are used to indicate the head and the upper features of the face. Second, we adopt a feature extraction strategy based on facial landmark detection, applied to the features of the partially masked face. Finally, these features, the coordinates of the identified landmarks, and histograms of oriented gradients are fed into a convolutional neural network for classification. An experimental evaluation shows that the proposed method surpasses others, achieving an accuracy of 69.3% on the AffectNet dataset.
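A minimal sketch of one ingredient mentioned above: extracting a histogram-of-oriented-gradients descriptor from the upper half of a face crop. The crop heuristic and HOG parameters are assumptions, and the landmark detection and CNN stages of the article are not reproduced.

```python
# Minimal sketch: HOG descriptor of the upper half of a face crop, one ingredient of the
# pipeline described above. The "upper half" crop is a crude stand-in for landmark-based
# masking, and the HOG parameters are illustrative; the article's CNN stage is omitted.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def upper_face_hog(face_gray: np.ndarray) -> np.ndarray:
    """face_gray: 2-D grayscale face crop."""
    upper = face_gray[: face_gray.shape[0] // 2, :]          # keep eyes, eyebrows, forehead
    upper = resize(upper, (64, 128), anti_aliasing=True)     # fixed size -> fixed-length descriptor
    return hog(upper, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

if __name__ == "__main__":
    dummy_face = np.random.rand(128, 128)                    # stand-in for a real face crop
    descriptor = upper_face_hog(dummy_face)
    print("HOG descriptor length:", descriptor.shape[0])     # would be fed to a classifier/CNN
```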


Subject(s)
Deep Learning , Facial Recognition , Humans , Artificial Intelligence , Emotions , Neural Networks, Computer , Facial Expression
11.
Bioengineering (Basel) ; 9(12)2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36550958

ABSTRACT

The use of phosphenes has been discussed as a means of informing the visually impaired of the position of obstacles. Obstacles underfoot pose a particular risk, so it is necessary to warn the visually impaired about them. A previous study established a method of presenting phosphenes in three directions in the lower visual field; however, the simultaneous presentation of these phosphenes has not been discussed. Another study, examining the effect of electrical interference when stimulating the eyeball with multiple electrodes, indicated that appropriate stimulation factors must be selected to avoid this effect. However, when the stimulation electrodes are placed very close together, the stimulation factors reported in that study are unlikely to apply. In this study, a method for simultaneously presenting phosphenes in the lower visual field is presented. The electrode arrangements reported in the previous study for presenting phosphenes in the lower visual field are used, and the focus is on the difficulty of simultaneously presenting multiple phosphenes there. The design of the stimulation factors when the electrodes are placed very close together is discussed numerically. The results show that stimulation factors different from those of the previous research are appropriate, depending on the distance between the electrodes.

12.
Sensors (Basel) ; 22(14)2022 Jul 12.
Article in English | MEDLINE | ID: mdl-35890881

ABSTRACT

Visually impaired people face many challenges that limit their ability to perform daily tasks and interact with the surrounding world. Navigating around places is one of the biggest of these challenges, especially for those with complete loss of vision. As the Internet of Things (IoT) concept starts to play a major role in smart city applications, visually impaired people can be among the beneficiaries. In this paper, we propose a smart IoT-based mobile sensor unit that can be attached to an off-the-shelf cane, hereafter a smart cane, to facilitate independent movement for visually impaired people. The proposed unit consists of a six-axis accelerometer/gyroscope, ultrasonic sensors, a GPS sensor, cameras, a digital motion processor and a credit-card-sized single-board microcomputer. The unit collects information about the cane user and the surrounding obstacles while on the move. An embedded machine learning algorithm, stored in the microcomputer's memory, identifies the detected obstacles and alerts the user to their nature. In addition, in case of emergencies such as a cane fall, the unit alerts the cane user and their guardian. Moreover, a mobile application allows the guardian to track the cane user via Google Maps on a mobile handset to ensure safety. To validate the system, a prototype was developed and tested.
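One of the functions mentioned, detecting a cane fall from the accelerometer, can be sketched as a simple threshold pattern on acceleration magnitude. The thresholds, units and timing below are my own assumptions; the paper's embedded algorithm is not described in enough detail to reproduce.

```python
# Minimal sketch: detect a probable cane fall from 3-axis accelerometer samples using a
# free-fall / impact threshold pattern. Thresholds, units (g) and timing are illustrative
# assumptions, not the algorithm used in the article.
import math

FREE_FALL_G = 0.3   # magnitude well below 1 g suggests free fall
IMPACT_G = 2.5      # a large spike shortly afterwards suggests an impact

def detect_fall(samples: list[tuple[float, float, float]], window: int = 25) -> bool:
    """samples: (ax, ay, az) in g, ordered in time; window: samples to look ahead for the impact."""
    magnitudes = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    for i, magnitude in enumerate(magnitudes):
        if magnitude < FREE_FALL_G:
            # Look for an impact spike within the next `window` samples.
            if any(m > IMPACT_G for m in magnitudes[i + 1 : i + 1 + window]):
                return True
    return False

if __name__ == "__main__":
    still = [(0.0, 0.0, 1.0)] * 50
    fall = still + [(0.0, 0.0, 0.1)] * 5 + [(0.5, 0.3, 3.2)] + still
    print("fall detected:", detect_fall(fall))   # expected: True
```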


Subject(s)
Internet of Things , Sensory Aids , Visually Impaired Persons , Canes , Humans , Machine Learning
13.
Sensors (Basel) ; 22(12)2022 Jun 16.
Article in English | MEDLINE | ID: mdl-35746319

ABSTRACT

Nowadays, improving the traffic safety of visually impaired people is a topic of widespread concern. To help them avoid the risks and hazards of road traffic in daily life, we propose a wearable device that combines object detection with a novel tactile display made from shape-memory alloy (SMA) actuators. After detecting obstacles in real time, the tactile display attached to the user's hands presents different tactile sensations to convey the position of the obstacles. To run the computationally intensive object detection algorithm on a low-memory mobile device, we introduced a slimming compression method that removes 90% of the redundant structure of the neural network. We also designed a dedicated driving circuit board that can efficiently drive the SMA-based tactile displays. In addition, we conducted several experiments to verify the performance of the wearable assistive device. The results showed that the subject was able to recognize the left or right position of a stationary obstacle with 96% accuracy and successfully avoided collisions with moving obstacles while using the device.
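"Slimming" compression usually refers to pruning channels whose batch-normalization scale factors are small; assuming that is the scheme meant here (the abstract does not say), the sketch below shows how such a channel-keep mask could be computed. It covers only the selection step, not the full prune-and-fine-tune pipeline.

```python
# Sketch of the channel-selection step of network slimming: rank BatchNorm scale factors
# (gamma) across the network and mark the smallest ones for pruning. This assumes the
# "slimming compression" in the abstract follows that common scheme; rebuilding the pruned
# network and fine-tuning it are not shown.
import torch
import torch.nn as nn

def slimming_masks(model: nn.Module, prune_ratio: float = 0.9) -> dict[str, torch.Tensor]:
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)     # one global threshold over all BN layers
    return {name: (m.weight.data.abs() > threshold)     # True = keep this channel
            for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}

if __name__ == "__main__":
    # Tiny stand-in network, only to show the mask computation.
    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU(),
                          nn.Conv2d(16, 32, 3), nn.BatchNorm2d(32), nn.ReLU())
    for m in model.modules():                           # randomize gammas so the demo is meaningful
        if isinstance(m, nn.BatchNorm2d):
            nn.init.uniform_(m.weight, 0.0, 1.0)
    for layer_name, keep in slimming_masks(model, prune_ratio=0.5).items():
        print(layer_name, "keep", int(keep.sum()), "of", keep.numel(), "channels")
```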


Subject(s)
Pedestrians , Self-Help Devices , Visually Impaired Persons , Wearable Electronic Devices , Blindness , Humans , Touch
14.
Front Hum Neurosci ; 16: 1058093, 2022.
Article in English | MEDLINE | ID: mdl-36776219

ABSTRACT

Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in blind and visually impaired users. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations, in a manner that corresponds with the unique way our brains acquire and process information. This is done by conveying spatial information, customarily acquired through vision, through the auditory channel, in a combination of sensory (auditory) features and symbolic language (named/spoken) features. Topo-Speech sweeps the visual scene or image and represents each object's identity by naming it in a spoken word, while simultaneously conveying its location by mapping the x-axis of the visual scene or image to the time at which it is announced and the y-axis to the pitch of the voice. This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings show that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind showed an accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels. To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, and convergent models) and the capacity for spatial representation in the blind. We suggest that the present study's findings support the convergence model and the view that the blind are capable of some aspects of spatial representation, as conveyed by the algorithm, comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
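The x-to-time and y-to-pitch mapping described above can be sketched in a few lines. The sweep duration, pitch range and object format below are assumptions for illustration, not the actual Topo-Speech parameters.

```python
# Minimal sketch of the mapping described above: sweep the scene left to right, announce
# each object's name at a time proportional to its x position, and set the voice pitch
# from its y position. Sweep duration, pitch range and the object format are assumptions,
# not the actual Topo-Speech parameters.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    name: str
    x: float   # normalized horizontal position, 0 (left) .. 1 (right)
    y: float   # normalized vertical position, 0 (bottom) .. 1 (top)

def topo_speech_schedule(objects: list[DetectedObject],
                         sweep_seconds: float = 2.0,
                         pitch_low_hz: float = 120.0,
                         pitch_high_hz: float = 400.0) -> list[tuple[float, float, str]]:
    """Return (onset time in s, voice pitch in Hz, spoken word) for each object."""
    schedule = []
    for obj in sorted(objects, key=lambda o: o.x):           # left-to-right sweep order
        onset = obj.x * sweep_seconds                         # x axis -> announcement time
        pitch = pitch_low_hz + obj.y * (pitch_high_hz - pitch_low_hz)  # y axis -> pitch
        schedule.append((onset, pitch, obj.name))
    return schedule

if __name__ == "__main__":
    scene = [DetectedObject("door", 0.8, 0.5), DetectedObject("chair", 0.2, 0.1)]
    for onset, pitch, word in topo_speech_schedule(scene):
        print(f"t={onset:.2f}s  pitch={pitch:.0f}Hz  say '{word}'")
```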

15.
Texto & contexto enferm ; 31: e20210236, 2022. tab, graf
Article in English | LILACS-Express | LILACS, BDENF - Nursing | ID: biblio-1390494

ABSTRACT

ABSTRACT Objective: to investigate the scientific evidence about technologies that exist and/or are used for the health education of people with visual impairment. Method: integrative review performed in the MEDLINE/PubMed, CINAHL and LILACS databases (via the Virtual Health Library), Web of Science, Scopus and the Cochrane Library, in November 2021. Results: 18 articles were identified, of which eight were published in nursing journals. Regarding the countries where the research was conducted, ten studies were published in Brazil and the others in countries such as the United States, Iran, India, Turkey and Portugal. The themes most addressed by the technologies were sexual and reproductive health and oral health; the others addressed breastfeeding, occupational health, hypertension, diabetes and drugs. Regarding the types of accessibility resources used in the technologies, the use of audio, through text or CD, prevailed in ten studies, while resources exploring the tactile sense, through anatomical didactic prototypes and educational manuals with embossed figures and different textures, appeared in nine articles. Other accessibility features were audio description, technologies mediated by the Internet and/or computer, and braille printed materials. Methodological studies predominated, and in fourteen studies the technology was applied with visually impaired people. Conclusion: the studies showed the adequacy and feasibility of the health education technologies developed for people with visual impairment, because they offer knowledge about the proposed themes and equal access to educational materials for this group.


16.
Micromachines (Basel) ; 12(9)2021 Sep 07.
Article in English | MEDLINE | ID: mdl-34577725

ABSTRACT

In this article, a new design for a wearable navigation support system for blind and visually impaired people (BVIP) is proposed. The proposed navigation system relies primarily on sensors, real-time processing boards, a fuzzy logic-based decision support system, and a user interface. It uses sensor data as inputs and provides a safe orientation to the BVIP, who are informed of the decision through a mixed voice-haptic interface. The navigation aid contains two wearable obstacle detection systems managed by an embedded controller. The control system adopts the Robot Operating System (ROS) architecture, supported by a BeagleBone Black master board that meets the real-time constraints. Data acquisition and obstacle avoidance are carried out by several nodes managed by ROS, which finally deliver a mixed haptic-voice message to guide the BVIP. A fuzzy logic-based decision support system was implemented to help the BVIP choose a safe direction. The system was tested with blindfolded persons and visually impaired persons; both types of users found the system promising and pointed out its potential to become a good navigation aid in the future.
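To give a feel for the fuzzy decision step, here is a minimal hand-rolled sketch that fuzzifies left/front/right obstacle distances and picks the direction with the highest "safe" membership. The membership functions and the decision rule are invented for illustration and are not the rule base of the system described above.

```python
# Minimal hand-rolled fuzzy-logic sketch: fuzzify obstacle distances from three sensors
# (left, front, right) and pick the direction whose "safe" membership is highest.
# Membership functions and the decision rule are invented for illustration; they are not
# the rule base of the system described in the article.

def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal membership: rises over [a, b], flat over [b, c], falls over [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def safe_membership(distance_m: float) -> float:
    # "Safe" if the nearest obstacle is far enough; fully safe beyond ~1.5 m.
    return trapezoid(distance_m, 0.5, 1.5, 10.0, 10.1)

def choose_direction(left_m: float, front_m: float, right_m: float) -> str:
    safety = {"left": safe_membership(left_m),
              "straight": safe_membership(front_m),
              "right": safe_membership(right_m)}
    direction, degree = max(safety.items(), key=lambda item: item[1])
    return "stop" if degree == 0.0 else direction

if __name__ == "__main__":
    print(choose_direction(left_m=0.6, front_m=0.4, right_m=2.0))   # expected: "right"
```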

17.
Sensors (Basel) ; 21(10)2021 May 20.
Article in English | MEDLINE | ID: mdl-34065360

ABSTRACT

Scene sonification is a powerful technique for helping visually impaired people (VIP) understand their surroundings. Existing methods usually perform sonification either on the entire image of the surrounding scene acquired by a standard camera, or on a priori static obstacles extracted from the RGB image by image processing algorithms. However, if all the information in the scene is delivered to the VIP simultaneously, it causes information redundancy. In fact, biological vision is more sensitive to moving objects in the scene than to static objects, which is also the original motivation for the event-based camera. In this paper, we propose a real-time sonification framework to help VIP understand the moving objects in the scene. First, we capture the events in the scene using an event-based camera and cluster them into multiple moving objects without relying on any prior knowledge. Then, MIDI-based sonification is performed on these objects synchronously. Finally, we conducted comprehensive experiments on scene videos with sonification audio, attended by 20 VIP and 20 sighted people (SP). The results show that our method allows both groups of participants to clearly distinguish the number, size, motion speed, and motion trajectories of multiple objects, and that it is more comfortable to listen to than existing methods in terms of aesthetics.
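The clustering-then-sonification pipeline can be sketched roughly as below: cluster (x, y, t) events with DBSCAN as a stand-in for the paper's unsupervised grouping, then map each cluster's vertical position and size to a MIDI-style pitch and velocity. The parameters, scaling and event format are assumptions for illustration only.

```python
# Rough sketch of the pipeline described above: cluster raw (x, y, t) events into moving
# objects with DBSCAN (a stand-in for the paper's clustering, which is not specified here),
# then map each cluster to a MIDI-style note (pitch from vertical position, velocity from
# cluster size). Parameters, scaling and the event format are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def events_to_notes(events: np.ndarray, height: int = 180):
    """events: array of shape (N, 3) with columns x (px), y (px), t (ms)."""
    # Scale t so spatial and temporal distances are comparable before clustering.
    scaled = events.astype(float) * np.array([1.0, 1.0, 0.1])
    labels = DBSCAN(eps=8.0, min_samples=20).fit_predict(scaled)
    notes = []
    for label in sorted(set(labels) - {-1}):                  # -1 marks noise events
        cluster = events[labels == label]
        y_center = cluster[:, 1].mean()
        pitch = int(48 + (1.0 - y_center / height) * 36)      # higher in the image -> higher pitch
        velocity = int(min(127, 40 + cluster.shape[0] / 20))  # bigger cluster -> louder note
        notes.append({"pitch": pitch, "velocity": velocity, "onset_ms": float(cluster[:, 2].min())})
    return notes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blob = np.column_stack([rng.normal(120, 3, 200), rng.normal(60, 3, 200), rng.uniform(0, 50, 200)])
    print(events_to_notes(blob))
```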


Subject(s)
Algorithms , Visually Impaired Persons , Humans , Image Processing, Computer-Assisted , Motion
18.
Sensors (Basel) ; 21(9)2021 Apr 29.
Article in English | MEDLINE | ID: mdl-33946857

ABSTRACT

Blind and visually impaired people (BVIP) face a range of practical difficulties when undertaking outdoor journeys as pedestrians. Over the past decade, a variety of assistive devices have been researched and developed to help BVIP navigate more safely and independently. In addition, research in overlapping domains is addressing the problem of automatic environment interpretation using computer vision and machine learning, particularly deep learning, approaches. Our aim in this article is to present a comprehensive review of research directly in, or relevant to, assistive outdoor navigation for BVIP. We break down the navigation area into a series of navigation phases and tasks, and use this structure for a systematic review of the research, analysing articles, methods, datasets and current limitations by task. We also provide an overview of commercial and non-commercial navigation applications targeted at BVIP. Our review contributes to the body of knowledge by providing a comprehensive, structured analysis of work in the domain, including the state of the art, and guidance on future directions. It will support both researchers and other stakeholders in the domain in establishing an informed view of research progress.


Subject(s)
Self-Help Devices , Sensory Aids , Visually Impaired Persons , Blindness , Humans , Machine Learning
19.
Sensors (Basel) ; 21(4)2021 Feb 23.
Article in English | MEDLINE | ID: mdl-33672146

ABSTRACT

Wearable auxiliary devices for visually impaired people are a highly attractive research topic. Although many proposed wearable navigation devices can assist visually impaired people with obstacle avoidance and navigation, these devices cannot feed back detailed information about the obstacles or help the visually impaired understand the environment. In this paper, we propose a wearable navigation device for the visually impaired that integrates semantic visual SLAM (Simultaneous Localization And Mapping) with a newly launched, powerful mobile computing platform. The system uses a structured-light Image-Depth (RGB-D) camera as the sensor and the mobile computing platform as the control center. We also focused on combining SLAM with the extraction of semantic information from the environment, which ensures that the computing platform understands the surrounding environment in real time and can feed this information back to the visually impaired in the form of voice broadcasts. Finally, we tested the performance of the proposed semantic visual SLAM system on this device. The results indicate that the system can run in real time on a wearable navigation device with sufficient accuracy.


Subject(s)
Visually Impaired Persons , Wearable Electronic Devices , Humans , Semantics
20.
Front Psychol ; 12: 731693, 2021.
Article in English | MEDLINE | ID: mdl-35069313

ABSTRACT

Visually impaired people have unique perceptions of, and usage requirements for, various urban spaces; understanding these perceptions can help create reasonable layouts and guide the construction of urban infrastructure. This study recruited 26 visually impaired volunteers to evaluate 24 sound environments, collected in seven different types of urban spaces, with respect to clarity, comfort, safety, vitality, and depression. An independent-samples non-parametric test was used to determine the significance of the differences between the environmental evaluations for each dimension and to summarize the compositions of sound and spatial elements in the positively and negatively perceived spaces. The results suggest that visually impaired people (1) feel comfort, safety, and clarity in parks, residential communities, and shopping streets; (2) have negative perceptions of vegetable markets, bus stops, hospitals, and urban functional departments; (3) feel anxious when traffic sounds, horn sounds, manhole cover sounds, and construction sounds occur; and (4) prefer spaces away from traffic, with fewer and slower vehicles, a suitable spatial scale, and moderate crowd density. These results provide a reference for the future design of activity venues (i.e., residential communities, vegetable markets, bus stops, parks, shopping streets, hospitals, and urban functional departments) and the planning of accessibility systems for visually impaired urban residents.
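As an illustration of the kind of independent-samples non-parametric comparison mentioned above, the sketch below runs a Mann-Whitney U test on two hypothetical groups of comfort ratings. The data are invented, and the abstract does not state which specific test the authors applied.

```python
# Illustrative independent-samples non-parametric comparison (Mann-Whitney U test) between
# two hypothetical sets of comfort ratings for two space types. The data are invented, and
# the article does not specify which non-parametric test the authors used.
from scipy.stats import mannwhitneyu

park_comfort = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]        # hypothetical 1-5 ratings
bus_stop_comfort = [2, 3, 2, 1, 3, 2, 2, 1, 3, 2]    # hypothetical 1-5 ratings

statistic, p_value = mannwhitneyu(park_comfort, bus_stop_comfort, alternative="two-sided")
print(f"U = {statistic:.1f}, p = {p_value:.4f}")
# A small p-value would indicate that the two space types are rated differently on comfort.
```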
