ABSTRACT
Using a mixed-methods approach that triangulated qualitative and observational techniques, this research investigated international student perceptions of the usability, interactivity and inclusiveness of a university website. The research was guided by activity theory. Qualitative data were analysed to understand international student perceptions of usability and interactivity in relation to their intentions to use the university website. Additionally, the findings established the importance of making university websites more inclusive as international students continue to face increasing uncertainties owing to the COVID-19 pandemic and racial inequalities in the USA and worldwide. Observational methods provided methodological and data triangulation. This research offers guidance for future research on higher education digital learning tools based on integrated theoretical mixed methods, and it also provides managerial implications for academic institutions in the design of student-centred and inclusive websites.
ABSTRACT
The impact of COVID-19 on shopping preferences has accelerated the adoption of e-commerce and increased the traffic of first-time e-commerce shoppers worldwide. This study compared experienced and inexperienced mobile consumers' shopping experiences on smartphones. A mixed-methods design combining mobile eye-tracking technology and interviews was employed. The comparison of experienced and inexperienced users showed significant differences in the time spent on various stages of the shopping journey, the website elements used, and the problem areas encountered. Inexperienced users had higher expectations of the fashion retailer's website. Mobile consumers' prior experience with a retailer's digital shopping platform is therefore a key parameter in user experience research and participant recruitment. The findings have managerial and methodological implications: they can be used to understand behavioural differences between current and potential customers and to develop personalised shopping experiences on smartphones by feeding them into retailers' digital analytics databases and marketing strategies.
ABSTRACT
Interacting with computer systems through speech is more natural than conventional interaction methods. It is also more accessible, since it does not require precise selection of small targets or rely entirely on visual elements like virtual keys and buttons. Speech also enables contactless interaction, which is of particular interest when touching public devices is to be avoided, as during the recent COVID-19 pandemic. However, speech is unreliable in noisy places and can compromise users' privacy and security in public. Image-based silent speech, which primarily converts tongue and lip movements into text, can mitigate many of these challenges. Since it does not rely on acoustic features, users can silently speak without vocalizing the words. It has also been demonstrated as a promising input method on mobile devices and has been explored for a variety of audiences and contexts where the acoustic signal is unavailable (e.g., people with speech disorders) or unreliable (e.g., noisy environments). Though the method shows promise, very little is known about people's perceptions of using it, their anticipated performance with silent speech input, and their approaches to avoiding potential misrecognition errors. Moreover, existing silent speech recognition models are slow and error-prone, or use stationary, external devices that are not scalable. In this dissertation, we attempt to address these issues. Towards this, we first conduct a user study to explore users' attitudes towards silent speech, with a particular focus on social acceptance. Results show that people perceive silent speech as more socially acceptable than speech input but are concerned about input recognition, privacy, and security issues. We then conduct a second study examining users' error tolerance with speech and silent speech input methods. Results reveal that users are willing to tolerate more errors with silent speech input than with speech input, as it offers a higher degree of privacy and security. We conduct another study to identify a suitable method for providing real-time feedback on silent speech input. Results show that users find the examined feedback method effective and significantly more private and secure than a commonly used video feedback method. In light of these findings, which establish silent speech as an acceptable and desirable mode of interaction, we take a step forward to address the technological limitations of existing image-based silent speech recognition models, to make them more usable and reliable on computer systems. Towards this, we first develop LipType, an optimized version of LipNet with improved speed and accuracy. We then develop an independent repair model that processes video input for poor lighting conditions, when applicable, and corrects potential errors in the output for increased accuracy. We test this model with LipType and other speech and silent speech recognizers to demonstrate its effectiveness. In an evaluation, the model reduced the word error rate by 57% compared to the state of the art without compromising the overall computation time. However, we identify that the model is still susceptible to failure due to the variability of user characteristics. A person's speaking rate, for instance, is a fundamental user characteristic that can influence speech recognition performance due to variation in the acoustic properties of human speech production. We therefore formally investigate the effects of speaking rate on silent speech recognition.
Results revealed that native users speak about 8% faster than non-native users, but both groups slow down at comparable rates (34-40%) when interacting with silent speech, mostly to increase recognition accuracy. A follow-up experiment confirmed that slowing down does improve the accuracy of silent speech recognition.
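For readers unfamiliar with the metric, the word error rate (WER) reported above is the word-level edit distance between a reference transcript and the recognizer's hypothesis, normalized by the reference length. A minimal sketch (the example sentences are illustrative, not from the dissertation):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # substitution
                          d[i - 1][j] + 1,                               # deletion
                          d[i][j - 1] + 1)                               # insertion
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("set an alarm for seven", "set on alarm for seven"))  # 0.2
```

A 57% relative reduction thus corresponds, for example, to a baseline WER of 0.30 falling to roughly 0.13 after the repair model is applied.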
ABSTRACT
The use of videoconferencing has risen since COVID-19, and it is now common to look at the screen and see someone typing. A side-channel attack may be launched to infer the typed text from video of the typist's face. In this paper, we analyse the feasibility of such an attack; ours is the first proposal that works with a complete keyset (50 keys) and natural text. We use different scenarios, lighting conditions and natural texts to increase realism. Our study involves 30 participants, who typed 49,365 keystrokes. We characterize the effects of lighting, gender, age and the use of glasses. Our results show that on average 13.71% of keystrokes are revealed without error, and up to 31.8%, 52.5% and 61.2% are guessed with a maximum error of 1, 2 and 3 keys, respectively.
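One plausible way to read the "maximum error of k keys" metric is that the attack outputs a ranked list of candidate keys per keystroke, and a guess counts as correct when the true key is among the top k+1 candidates. A hedged sketch of that evaluation (this interpretation and the toy predictions are assumptions, not the paper's data):

```python
def guess_rates(true_keys, ranked_predictions, max_errors=(0, 1, 2, 3)):
    """Fraction of keystrokes whose true key appears among the
    attack's top (k+1) ranked candidates, for each error budget k."""
    rates = {}
    for k in max_errors:
        hits = sum(true_key in candidates[: k + 1]
                   for true_key, candidates in zip(true_keys, ranked_predictions))
        rates[k] = hits / len(true_keys)
    return rates

# Toy example: five keystrokes, each with a ranked candidate list.
true_keys = list("hello")
ranked = [["h", "j"], ["3", "e"], ["l", "k"], ["l", "o"], ["i", "p", "o"]]
print(guess_rates(true_keys, ranked))  # {0: 0.6, 1: 0.8, 2: 1.0, 3: 1.0}
```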
ABSTRACT
The COVID-19 pandemic has disrupted educational institutions in over 150 nations, affecting billions of students. Many governments have forced a transition in higher education from in-person to remote learning. After this abrupt, worldwide transition away from the classroom, some question whether online education will continue to grow in acceptance in post-pandemic times. However, new technologies, such as brain-computer interfaces and eye-tracking, have the potential to improve the remote learning environment, which currently faces several obstacles and deficiencies. Cognitive brain-computer interfaces can help us develop a better understanding of brain functions, allowing for the development of more effective learning methodologies and the enhancement of brain-based skills. We carried out a systematic literature review of research on the use of brain-computer interfaces and eye-tracking to measure students' cognitive skills during online learning. We found that, because many experimental tasks depend on recorded rather than real-time video, students do not have direct, real-time interaction with their teacher. Further, we found no evidence in any of the reviewed papers for brain-to-brain synchronization during remote learning. This points to a potentially fruitful future application of brain-computer interfaces in education: investigating whether the brains of student-teacher pairs who interact with the same course content develop increasingly similar activity patterns.
ABSTRACT
Misinformation is an important topic in the Information Retrieval (IR) context and has implications for both system-centered and user-centered IR. While it has been established that performance in discerning misinformation is affected by a person's cognitive load, the variation in cognitive load when judging the veracity of news is less understood. To understand the variation in cognitive load imposed by reading news headlines related to COVID-19 claims within the context of a fact-checking system, we conducted a within-subject, lab-based quasi-experiment (N=40) with eye-tracking. Our results suggest that examining true claims imposed a higher cognitive load on participants when news headlines provided incorrect evidence for a claim and were inconsistent with the person's prior beliefs. In contrast, checking false claims imposed a higher cognitive load when the news headlines provided correct evidence for a claim and were consistent with the participants' prior beliefs. However, changing beliefs after examining a claim did not have a significant relationship with cognitive load while reading the news headlines. The results illustrate that reading news headlines related to true and false claims in the fact-checking context imposes different levels of cognitive load. Our findings suggest that user engagement with tools for discerning misinformation needs to account for the possible variation in the mental effort involved in different information contexts.
ABSTRACT
COVID-19 has grown increasingly serious worldwide since 2019, causing large-scale loss of life and disrupting production and daily life. Methods of detecting COVID-19 mainly include evaluation of disease characteristics, clinical examination and medical imaging. Among these, CT and X-ray screening allow doctors and patients' families to observe and diagnose the severity and progression of COVID-19 more intuitively. However, manual diagnosis from medical images is inefficient, and prolonged, fatiguing viewing reduces diagnostic accuracy. A fully automated method is therefore needed to assist in processing and analysing medical images. Deep learning methods can rapidly help differentiate COVID-19 from other pneumonia-related diseases or healthy subjects. However, because labelled images are limited and models and data lack diversity, learning results are biased, resulting in inaccurate auxiliary diagnosis. To address these issues, a hybrid model, the deep channel-attention correlative capsule network, is proposed for channel-attention-based spatial feature extraction, correlative feature extraction, and fused feature classification. Experiments are validated on X-ray and CT image datasets, and the results outperform a large number of existing state-of-the-art studies.
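The paper's full architecture is not reproduced in the abstract, but the channel-attention mechanism it names is well established. Below is a minimal, generic squeeze-and-excitation-style block, sketched purely to illustrate the idea; the layer sizes and reduction ratio are assumptions, not the authors' design:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic channel attention: pool each feature map to a scalar,
    learn per-channel weights, and reweight the maps accordingly."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global context per channel
        self.fc = nn.Sequential(             # excite: learn channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # emphasize informative channels, suppress others

# Example: reweighting a batch of feature maps from a CT/X-ray backbone.
feats = torch.randn(4, 64, 32, 32)
print(ChannelAttention(64)(feats).shape)  # torch.Size([4, 64, 32, 32])
```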
ABSTRACT
In the aftermath of the COVID-19 pandemic, the educational system is increasingly incorporating twenty-first-century skills, such as online learning, that require learners to demonstrate cognitive flexibility. Cognitive flexibility is the ability to quickly reconfigure our minds to meet changing task demands. This study investigates the degree of cognitive flexibility along the wholistic-intermediate-analytic dimension by classifying patterns of eye movements (EM) and behavioural data. Using the E-CSA-W/A test, 113 participants were classified based on their tendency towards a particular style (wholistic, intermediate or analytic). Results indicate that wholistics and intermediates demonstrated greater cognitive flexibility in adapting to the task requirements than analytics. Analytics were slower at completing the test and made more transitions between Areas of Interest than the other groups. Finally, while the behavioural data demonstrate quantitative differences between the groups, EM provide qualitative information regarding the cognitive process that leads to the response. Theoretical, methodological, and practical contributions are discussed.
ABSTRACT
• Observers fixated longer on the mouth and torso of speakers when those speakers were deceptive.
• Observers fixated longer on the hands of speakers when those speakers were honest.
• When assessing veracity, observers unexpectedly fixated on the mouth the most, compared to the eyes, torso, or hands of the speakers.
• Longer fixations on the mouth and torso of the speakers were associated with less credible assessments of the speakers.
• Longer gaze fixations on the torso and left hand of the speakers worsened deception detection accuracy.
Throughout the early part of this century, and especially during the peak of the global pandemic of 2020, the world has come to rely increasingly on computer-mediated communication (CMC). The study of computer-based media and their role in mediating communication has long been a part of the academic study of information systems. Unfortunately, human communication, regardless of the medium over which it occurs, involves deception. Despite the growing reliance on CMC, a limited amount of work has considered deception and its detection in mediated environments. The study reported here investigates the communication issues associated with cue restrictions in CMC, specifically videoconferencing, and how these restrictions affect deception detection success. We employed eye-tracking technology to analyze the visual behavior of veracity judges and how it influenced their assessments. We found that the visual foci of the judges varied with message veracity: judges fixated longer on the mouth and torso of speakers when messages were deceptive and longer on the hands of speakers when messages were truthful. We also found that fixating longer on the mouth and torso of the speakers was associated with less credible assessments of the speakers. Finally, longer gaze fixations on the torso and left hand of the speakers resulted in less accurate deception detection performance.
ABSTRACT
This article uses risk communication theory and cognitive load theory to analyse the stress experienced by interpreters involved in crisis communication within Covid-19 medical scenarios. It considers the nature of stress from both psychological (mental) and physiological perspectives, exploring the relationship between the level of cognitive load, interpreters' stress, and the quality of interpreting in crisis communication. This research identifies the strategies used by interpreters when operating in pandemic working environments and compares their cognitive load and physiological stress changes within and outside contexts of crisis communication. We hypothesize that, compared with normal situations, interpreters in crises experience greater psychological stress and an increased cognitive load, which adversely affect their interpreting. To test this hypothesis, an experiment combined eye-tracking technology with heart rate (HR) and galvanic skin response (GSR) measurement. Twenty-five novice interpreters interpreted simulated medical scenarios for a Covid-19 patient and a diabetes patient respectively. This is one of the first studies to apply the multimodal approach of eye-tracking, HR, and GSR technology to record the physiological stress and mental status of interpreters. We advocate more systematic interdisciplinary research concerning interpreters' stress in crisis communication, and outline recommendations for future crisis interpreting training and for individual professionals involved in crisis management.
ABSTRACT
Objective: An alarming proportion (>30%) of patients affected by SARS-CoV-2 (COVID-19) continue to experience neurological symptoms, including headache, dizziness, smell and/or taste abnormalities, and impaired consciousness (brain fog), after recovery from the acute infection. These symptoms are self-reported and vary from patient to patient, making it difficult to accurately diagnose and initiate a proper treatment course. Objective measures to identify and quantify neural deficits underlying the symptom profiles are lacking. This study tested the hypothesis that oculomotor, vestibular, reaction time, and cognitive (OVRT-C) testing using eye-tracking can objectively identify and measure functional neural deficits after COVID-19 infection. Methods: Subjects diagnosed with COVID-19 (n = 77) were tested post-infection with a battery of 20 OVRT-C tests delivered on a portable eye-tracking device (Neurolign Dx100). Data from 14 tests were compared to previously collected normative data from subjects with similar demographics. Post-COVID subjects were also administered the Neurobehavioral Symptom Inventory (NSI) for symptom evaluation. Results: A significant percentage of post-COVID-19 patients (up to 86%) scored outside the norms in 12 out of 14 tests, with smooth pursuit and optokinetic responses being the most severely affected. A multivariate model constructed using stepwise logistic regression identified six metrics as significant indicators of post-COVID status. The area under the receiver operating characteristic curve (AUC) was 0.89, the estimated specificity was 98% (with a cutoff value of 0.5), and the sensitivity was 88%. There were moderate but significant correlations between NSI domain key variables and OVRT-C tests. Conclusions: This study demonstrates the feasibility of OVRT-C testing for providing objective measures of neural deficits in people recovering from COVID-19 infection. Such testing may serve as an efficient tool for identifying hidden neurological deficits after COVID-19 and screening patients at risk of developing long COVID, and it may help guide rehabilitation and treatment strategies.
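As a sketch of the modelling step described in the Results, the snippet below fits a logistic-regression classifier over a handful of metrics and reports AUC, sensitivity, and specificity at the stated 0.5 cutoff. The synthetic features stand in for the study's OVRT-C data, and no stepwise selection is shown:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Rows = subjects, columns = six selected OVRT-C metrics;
# label 1 = post-COVID, 0 = normative control (synthetic data).
X = np.vstack([rng.normal(0.0, 1.0, (80, 6)),    # controls
               rng.normal(0.8, 1.0, (77, 6))])   # post-COVID, shifted means
y = np.r_[np.zeros(80), np.ones(77)]

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
pred = (scores >= 0.5).astype(int)               # the study's 0.5 cutoff

auc = roc_auc_score(y, scores)
sensitivity = pred[y == 1].mean()                # true-positive rate
specificity = 1 - pred[y == 0].mean()            # true-negative rate
print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```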
ABSTRACT
Due to hygiene concerns, COVID-19 has accelerated the use of touchless technology, expediting the transition to the Zero User Interface (UI). Zero UI refers to a user interface that enables interaction with technology using gestures, voice, eye tracking, and biometrics such as contactless fingerprints and facial recognition. The advancement of touchless interaction through hand gesture interfaces is the main topic of this study. These interfaces are specialized programs that track and predict hand gestures to provide alternative controls and interaction techniques. The hand gesture interface consists of four main layers: the hand gesture interface itself, gesture-to-action mapping, the input simulator, and the graphical user interface. In addition to the interface, we employed a new algorithm for hand-type classification. We validated our methodology through trials on a gastronomy application, conducting a small-scale user study with five volunteers to evaluate and test the hand gesture interface. User feedback indicates that the hand gesture interface is simple to use.
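To make the layered design concrete, here is a minimal sketch of a gesture-to-action mapping layer feeding an input simulator. The gesture names, actions, and class names are hypothetical illustrations, not the authors' implementation:

```python
from typing import Callable, Dict

class InputSimulator:
    """Stand-in for the layer that injects synthetic input events."""
    def scroll(self, amount: int) -> None:
        print(f"scroll {amount}")

    def click(self) -> None:
        print("click")

def build_gesture_map(sim: InputSimulator) -> Dict[str, Callable[[], None]]:
    """Route recognized gesture labels to simulator actions."""
    return {
        "swipe_up":   lambda: sim.scroll(+3),
        "swipe_down": lambda: sim.scroll(-3),
        "pinch":      sim.click,
    }

sim = InputSimulator()
actions = build_gesture_map(sim)
for gesture in ["swipe_up", "pinch", "wave"]:   # "wave" is unmapped
    actions.get(gesture, lambda: None)()        # unknown gestures are ignored
```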
ABSTRACT
E-commerce is a lifeline for many today. Since the COVID-19 pandemic, its popularity has increased rapidly, and people have become more accustomed to it. Analysing the impact of an e-commerce web page on the customer is important for understanding customer behaviour: where a person looks on the page is highly informative. However, it is challenging to analyse visual events while tracking them. Additionally, interaction patterns in significant regions of an interactive platform can be found using eye-tracking data. Attention modelling for applications has become a new area of study in computer vision, but without temporal information, existing models are unable to capture the dynamic aspects of the actual attention process in free-viewing applications. To solve this problem, we propose a solution based on an application-based saccadic model that simulates human visual dynamics while viewing applications. Eye fixations and saccades can be used to interpret a person's intention or goal in a given situation. To track the user's vision, the proposed methodology uses a pupil saccade-fixation mechanism. We collect pupil imprints based on the view, analyse the data from view samples, and recommend the optimal location to display an advertisement on e-commerce websites. The system proves to be a better option than traditional methods, achieving 94% accuracy.
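The abstract does not specify how fixations and saccades are separated, but a standard approach is dispersion-threshold identification (I-DT): a run of gaze samples counts as a fixation when it is long enough and tightly clustered. A hedged sketch with illustrative thresholds:

```python
def _dispersion(window):
    """Horizontal plus vertical spread of a window of (x, y) samples."""
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(points, max_dispersion=30.0, min_samples=6):
    """Return (start, end) index pairs of fixations in a gaze trace
    sampled at a fixed rate; everything between them is saccadic."""
    fixations, i = [], 0
    while i + min_samples <= len(points):
        j = i + min_samples
        if _dispersion(points[i:j]) <= max_dispersion:
            # Grow the window while the samples stay tightly clustered.
            while j < len(points) and _dispersion(points[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1  # slide past the saccade sample
    return fixations

gaze = [(100, 100), (102, 101), (101, 99), (103, 100), (102, 102), (101, 101),
        (400, 300), (402, 298), (401, 301), (399, 300), (400, 302), (401, 299)]
print(idt_fixations(gaze))  # [(0, 5), (6, 11)] -> two fixations, one saccade
```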
ABSTRACT
Two years into Covid-19, touchless technology has evolved from a symbol of luxury into a necessity. The eye tracker is one such touchless technology, using the user's gaze to interact with a computer without touching the screen. Development of spontaneous gaze-based interaction is progressing very rapidly, and researchers have developed various object selection methods that require no prior gaze-to-screen calibration. Recently, the conventional approach of setting a threshold was developed into a gaze-based object selection method. However, threshold values are non-adaptive and require additional data pre-processing to handle noise. To overcome this problem, deep learning is used as an object selection method for spontaneous gaze-based interaction; it requires no data pre-processing to achieve accurate object selection. Of the five deep learning algorithms evaluated, LSTM (Long Short-Term Memory) and BiLSTM (Bidirectional Long Short-Term Memory) networks achieved comparable accuracies of 95.17 ± 0.95% and 95.15 ± 1.17%, respectively. In the future, this research is promising for the development of real-time object selection techniques for touchless public displays.
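As a sketch of the kind of model evaluated here, the snippet below defines an LSTM that consumes a short window of raw (x, y) gaze samples and outputs logits over candidate on-screen objects; the window length, layer sizes, and number of objects are illustrative assumptions:

```python
import torch
import torch.nn as nn

class GazeLSTMSelector(nn.Module):
    """Classify which object a gaze-sample window is dwelling on."""
    def __init__(self, n_objects: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_objects)

    def forward(self, gaze: torch.Tensor) -> torch.Tensor:
        # gaze: (batch, time, 2) raw coordinates, no pre-processing needed.
        _, (h, _) = self.lstm(gaze)
        return self.head(h[-1])           # logits over candidate objects

model = GazeLSTMSelector(n_objects=4)
window = torch.randn(8, 30, 2)            # batch of 30-sample gaze windows
print(model(window).shape)                # torch.Size([8, 4])
```

A BiLSTM variant would pass bidirectional=True to nn.LSTM and double the linear head's input size.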
ABSTRACT
The Covid-19 outbreak has caused disruptions in the education sector, making remote education the dominant mode of lecture delivery. The lack of visual feedback and physical interaction makes it very hard for teachers to measure students' engagement levels during lectures. This paper proposes a time-bounded window operation to extract statistical features from raw gaze data captured in a remote teaching experiment and to link them with students' attention levels. Feature selection (dimensionality reduction) is performed to reduce convergence time and avoid over-fitting. Recursive feature elimination (RFE) and SelectFromModel (SFM) are used with different machine learning (ML) algorithms, and a subset of the optimal feature space is obtained based on feature scores. The model trained on the optimal feature subset showed significant improvement in accuracy and computational complexity; for instance, a support vector classifier (SVC) yielded a 2.39% improvement in accuracy along with an approximately 66% reduction in convergence time.
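As a sketch of the RFE step, the snippet below wraps a linear support vector classifier (whose coefficients supply the feature scores) and keeps a small subset of synthetic gaze statistics; the feature counts and data are illustrative, not the experiment's:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))             # 20 windowed gaze statistics (synthetic)
# Attention label depends on only a few of the features.
y = (X[:, 0] + 0.8 * X[:, 3] - X[:, 7] + rng.normal(0, 0.5, 200) > 0).astype(int)

svc = SVC(kernel="linear")                 # linear kernel exposes coef_ for ranking
rfe = RFE(estimator=svc, n_features_to_select=5).fit(X, y)
kept = np.flatnonzero(rfe.support_)
print("kept features:", kept)
print("training accuracy on subset:",
      svc.fit(X[:, kept], y).score(X[:, kept], y))
```

SelectFromModel works analogously, thresholding the same coefficient magnitudes instead of eliminating features recursively.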
ABSTRACT
In the midst of the COVID-19 pandemic, the use of non-face-to-face information and communication technology (ICT) such as kiosks has increased. While kiosks are useful overall, those who do not adapt well to these technologies experience technostress. The two most serious technostressors are inclusion and overload issues, which denote, respectively, a sense of inferiority due to a perceived inability to use ICT well and a sense of being overwhelmed by too much information. This study investigated the differing effects of hybrid technostress (induced by both inclusion and overload issues) on cognitive load among low-stress and high-stress people using kiosks to complete daily-life tasks. We developed a 'virtual kiosk test' to evaluate participants' cognitive load from eye-tracking features and performance features when ordering burgers, sides, and drinks at the kiosk. Twelve low-stress participants and 13 high-stress participants performed the virtual kiosk test. Regarding eye-tracking features, high-stress participants generated more blinks, a longer scanpath length, a more distracted heatmap, and a more complex gaze plot than low-stress participants. Regarding performance features, high-stress participants took significantly longer to order and made more errors than low-stress participants. A support-vector machine (SVM) using both eye-tracking features (number of blinks, scanpath length) and a performance feature (time to completion) best differentiated low-stress from high-stress participants (89% accuracy, 100% sensitivity, 83.3% specificity, 75% precision, 85.7% F1 score). Overall, under technostress, high-stress participants experienced cognitive overload and consequently decreased performance, whereas low-stress participants felt moderate arousal and improved performance. These varying effects of technostress can be interpreted through the Yerkes-Dodson law. Based on our findings, we propose an adaptive interface, multimodal interaction, and virtual reality training as three implications for technostress relief in non-face-to-face ICT.
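A hedged sketch of the reported classifier: an SVM over the two eye-tracking features and one performance feature named above, separating the two groups. The synthetic numbers are illustrative stand-ins for the study's measurements:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Columns: number of blinks, scanpath length (px), time to completion (s).
low  = rng.normal([12, 4000,  60], [3, 800, 15], size=(12, 3))   # low-stress
high = rng.normal([20, 7000, 110], [4, 900, 20], size=(13, 3))   # high-stress
X = np.vstack([low, high])
y = np.r_[np.zeros(12), np.ones(13)]        # 1 = high-stress

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
pred = clf.predict(X)
sensitivity = pred[y == 1].mean()           # high-stress correctly detected
specificity = 1 - pred[y == 0].mean()       # low-stress correctly detected
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```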
ABSTRACT
The COVID-19 pandemic has severely impacted the tourism and hospitality industries worldwide. Tourism destination marketing has been a heated focus in tourism and hospitality academia, and it is widely believed to promote the revival of these industries in the post-pandemic era. However, there is a lack of research on different graphic presentation forms in tourism advertisements. To bridge this gap, this study examines the impact of the image and text presentation forms of a scenic spot's name in tourism advertisements on tourists' intention to visit the destination city, combining constructivist theory from cognitive psychology, the SOR model, and the affective-cognitive model in a 2 × 2 between-group experiment. The study found that when the text part contains the scenic spot's name, the tourism advertisement has a significant impact on tourists' perceived advertising effectiveness, destination affective image, and visit intention. Eye-tracking analysis likewise showed that fixation points were primarily distributed in the text part. Furthermore, this study explored the chain mediating mechanism of perceived advertising effectiveness and destination affective image, and discovered that the impact of the text presentation form on visit intention can be realized through the mediating effects of perceived advertising effectiveness and destination affective image. The study puts forward suggestions for using high-familiarity scenic spots in the tourism advertising and destination marketing of low-familiarity destination cities, and for improving the image of tourist destination cities.
ABSTRACT
Gaze-based interaction has until now been almost the exclusive prerogative of the assistive field, as it is considered insufficiently performant compared with traditional input methods based on keyboards, pointing devices, and touch screens. However, situations such as the one we are experiencing due to the COVID-19 pandemic highlight the importance of touchless interaction for minimizing the spread of disease. In this paper, as an example of the potential pervasive use of eye-tracking technology in public contexts, we propose and study five interfaces for a gaze-controlled scale to be used in supermarkets for weighing fruits and vegetables. Given the great heterogeneity of potential users, the interaction must be as simple and intuitive as possible and occur without the need for calibration. The experiments carried out confirm that this goal is achievable and show the strengths and weaknesses of the five interfaces.
ABSTRACT
The worldwide COVID-19 pandemic has led to the development of stress disorders and increased anxiety in society. One of the strongest factors driving anxiety and stress in society during a pandemic is the mass media, yet the mechanisms behind the stressogenic effects of mass media remain unclear. The aim of this study was to evaluate age-specific characteristics of gaze behavior related to the perception of anxiety-provoking information. The study involved 189 volunteers: 164 students aged 17 to 22 (control group) and 25 people aged 59 to 71 (experimental group). We surveyed participants to determine their levels of stress, depression, and anxiety, and analyzed eye-tracking data during text perception using web-based tracking (EyePass). Results showed significant age-related differences in gaze behavior while reading texts containing negative elements: older adults had shorter median fixation durations, while there was no between-group difference in the number of fixations. We assume that factors besides age may have contributed to this result, notably the occupation of the older participants (professors at the Institute of Journalism), with well-developed professional skills in reading patterns and information perception, but also their higher vulnerability to adverse COVID-19 outcomes compared with younger adults.
ABSTRACT
Virtual Environments (VEs) are on the rise as an instrument in various sectors involving emotional states and educational research. Studies to date have explored the effectiveness of VR in a variety of emotional health interventions, in the treatment of learning phobias, and in providing virtual support to students worldwide. Research has demonstrated that VR immersive environments and VR experiences create a significant impact on users' psyches, and a learning experience is related to the emotional state of the person (O'Regan, 2003). Therefore, it is worth studying the influence of the VR experience on the emotional states of learners. Students around the globe were already struggling with emotional crises before COVID, as reported by multiple agencies, and the situation has since grown more serious. Hence the need for enriched learning experiences in virtual learning environments (VLEs). This study investigates the impact of two different VR 3D learning environments, drawing a comparison between students' emotional states, VR experience, and VR design elements using neurophysiological tools such as Galvanic Skin Response (GSR) alongside self-report questionnaires. In the experiment, participants went through two different VR learning simulations, differentiated by their space and interaction design elements, while their physiological responses were recorded for analysis. The study suggests that well-designed virtual 3D environments in an educational setting can help students reduce stress levels, and it points to ways of eliciting positive emotions and facilitating a better learning experience.