Vision Transformer Customized for Environment Detection and Collision Prediction to Assist the Visually Impaired.
Bayat, Nasrin; Kim, Jong-Hwan; Choudhury, Renoa; Kadhim, Ibrahim F; Al-Mashhadani, Zubaidah; Aldritz Dela Virgen, Mark; Latorre, Reuben; De La Paz, Ricardo; Park, Joon-Hyuk.
Affiliations
  • Bayat N; Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA.
  • Kim JH; AI R&D Center, Korea Military Academy, Seoul 01805, Republic of Korea.
  • Choudhury R; Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA.
  • Kadhim IF; Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA.
  • Al-Mashhadani Z; Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA.
  • Aldritz Dela Virgen M; Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA.
  • Latorre R; Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA.
  • De La Paz R; Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA.
  • Park JH; Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA.
J Imaging; 9(8), 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37623693
This paper presents a system that utilizes vision transformers and multimodal feedback modules to facilitate navigation and collision avoidance for the visually impaired. By implementing vision transformers, the system achieves accurate object detection, enabling the real-time identification of objects in front of the user. Semantic segmentation and the algorithms developed in this work provide a means to generate a trajectory vector for each object identified by the vision transformer and to detect objects that are likely to intersect with the user's walking path. Audio and vibrotactile feedback modules are integrated to convey collision warnings through multimodal feedback. The dataset used to create the model was captured in both indoor and outdoor settings under different weather conditions, at different times, across multiple days, resulting in 27,867 photos spanning 24 classes. Classification results showed good performance (95% accuracy), supporting the efficacy and reliability of the proposed model. The design and control methods of the multimodal feedback modules for collision warning are also presented, while experimental validation of their usability and efficiency remains future work. The demonstrated performance of the vision transformer and the presented algorithms, in conjunction with the multimodal feedback modules, shows promise for the feasibility and applicability of the system for navigation assistance of individuals with vision impairment.
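The abstract's core idea of extrapolating a per-object trajectory vector and flagging objects likely to cross the user's walking path can be sketched in a few lines. The sketch below is illustrative only and not the authors' implementation: the corridor bounds, frame horizon, and function names are assumptions, and the linear-motion model stands in for whatever prediction the paper actually uses over its segmentation output.

```python
import numpy as np

def trajectory_vector(centroids):
    """Fit a linear motion model to an object's centroid history.

    centroids: list of (x, y) image coordinates over recent frames.
    Returns the latest position and the per-frame velocity vector.
    """
    pts = np.asarray(centroids, dtype=float)
    t = np.arange(len(pts))
    # Least-squares line fit of x(t) and y(t); slope = per-frame velocity.
    vx = np.polyfit(t, pts[:, 0], 1)[0]
    vy = np.polyfit(t, pts[:, 1], 1)[0]
    return pts[-1], np.array([vx, vy])

def will_intersect_path(centroids, corridor_x=(200, 440), horizon=30):
    """Predict whether the extrapolated trajectory enters a band of image
    columns taken to represent the user's walking path (hypothetical
    corridor bounds and horizon) within `horizon` future frames."""
    pos, vel = trajectory_vector(centroids)
    for k in range(1, horizon + 1):
        x = (pos + k * vel)[0]
        if corridor_x[0] <= x <= corridor_x[1]:
            return True
    return False
```

For example, an object drifting rightward from column 100 at 10 px/frame would be flagged, while a stationary object at column 50 would not; in the full system such a flag would trigger the audio and vibrotactile warnings.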
Full text: 1 Collection: 01-international Database: MEDLINE Study type: Diagnostic_studies / Prognostic_studies / Risk_factors_studies Language: En Journal: J Imaging Year: 2023 Document type: Article Country of affiliation: United States Country of publication: Switzerland