Results 1 - 6 of 6
1.
Appl Ergon; 105: 103825, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35777182

ABSTRACT

Automated vehicles (AVs) can perform low-level control tasks but are not always capable of proper decision-making. This paper presents a concept of eye-based maneuver control for AV-pedestrian interaction. Previously, it was unknown whether the AV should conduct a stopping maneuver when the driver looks at the pedestrian or looks away from the pedestrian. A two-agent experiment was conducted using two head-mounted displays with integrated eye-tracking. Seventeen pairs of participants (pedestrian and driver) each interacted in a road crossing scenario. The pedestrians' task was to hold a button when they felt safe to cross the road, and the drivers' task was to direct their gaze according to instructions. Participants completed three 16-trial blocks: (1) Baseline, in which the AV was pre-programmed to yield or not yield, (2) Look to Yield (LTY), in which the AV yielded when the driver looked at the pedestrian, and (3) Look Away to Yield (LATY), in which the AV yielded when the driver did not look at the pedestrian. The driver's eye movements in the LTY and LATY conditions were visualized using a virtual light beam. Crossing performance was assessed based on whether the pedestrian held the button when the AV yielded and released the button when the AV did not yield. Furthermore, the pedestrians' and drivers' acceptance of the mappings was measured through a questionnaire. The results showed that the LTY and LATY mappings yielded better crossing performance than Baseline. Furthermore, the LTY condition was best accepted by drivers and pedestrians. Eye-tracking analyses indicated that the LTY and LATY mappings attracted the pedestrian's attention, while pedestrians still distributed their attention between the AV and a second vehicle approaching from the other direction. In conclusion, LTY control may be a promising means of AV control at intersections before full automation is technologically feasible.
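The two gaze-contingent mappings described above reduce to a simple decision rule. The following is a minimal sketch, not the study's implementation; the function name, the boolean gaze input, and the string labels are assumptions made for illustration:

```python
def should_yield(driver_looks_at_pedestrian: bool, mapping: str) -> bool:
    """Return True if the AV should perform a stopping maneuver.

    mapping: "LTY" (Look to Yield) or "LATY" (Look Away to Yield).
    """
    if mapping == "LTY":
        # Look to Yield: the AV yields when the driver looks at the pedestrian.
        return driver_looks_at_pedestrian
    if mapping == "LATY":
        # Look Away to Yield: the AV yields when the driver looks away.
        return not driver_looks_at_pedestrian
    raise ValueError(f"unknown mapping: {mapping!r}")
```

In the experiment the gaze state additionally drove a virtual light beam visible to the pedestrian; that visualization layer is omitted here.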

2.
Ergonomics; 64(11): 1416-1428, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33950791

ABSTRACT

It may be necessary to introduce new modes of communication between automated vehicles (AVs) and pedestrians. This research proposes using the AV's lateral deviation within the lane to communicate whether the AV will yield to the pedestrian. In an online experiment, animated video clips depicting an approaching AV were shown to participants. Each of 1104 participants viewed 28 videos twice in random order. The videos differed in deviation magnitude, deviation onset, turn indicator usage, and deviation-yielding mapping. Participants had to press and hold a key as long as they felt safe to cross, and report the perceived intuitiveness of the AV's behaviour after each trial. The results showed that the AV moving towards the pedestrian to indicate yielding and away to indicate continuing driving was more effective than the opposite combination. Furthermore, the turn indicator was regarded as intuitive for signalling that the AV will yield. Practitioner Summary: Future automated vehicles (AVs) may have to communicate with vulnerable road users. Many researchers have explored explicit communication via text messages and LED strips on the outside of the AV. The present study examines the viability of implicit communication via the lateral movement of the AV.
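The deviation-yielding mapping amounts to a sign convention on the AV's lateral offset. A minimal sketch, assuming a positive offset means "towards the pedestrian" and a 0.3 m magnitude; both the mapping labels and the default magnitude are illustrative, not values from the study:

```python
def deviation_signal(yields: bool, mapping: str, magnitude_m: float = 0.3) -> float:
    """Lateral in-lane deviation (metres) the AV applies to signal intent.

    mapping: "towards_to_yield" (deviate towards the pedestrian to yield)
             or "away_to_yield" (the opposite combination).
    Positive values mean deviation towards the pedestrian.
    """
    towards = magnitude_m if mapping == "towards_to_yield" else -magnitude_m
    return towards if yields else -towards
```

The study found the "towards_to_yield" combination more effective than its opposite.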


Subject(s)
Automobile Driving; Pedestrians; Text Messaging; Accidents, Traffic; Communication; Humans
3.
Appl Ergon; 95: 103428, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34020096

ABSTRACT

An important question in the development of automated vehicles (AVs) is which driving style AVs should adopt and how other road users perceive them. The current study aimed to determine which AV behaviours contribute to pedestrians' judgements as to whether the vehicle is driving manually or automatically, as well as judgements of likeability. We tested five target trajectories of an AV in curves: playback manual driving, two stereotypical automated driving conditions (road centre tendency, lane centre tendency), and two stereotypical manual driving conditions, which slowed down for curves and cut curves. In addition, four braking patterns for approaching a zebra crossing were tested: manual braking, stereotypical automated driving (fixed deceleration), and two variations of stereotypical manual driving (sudden stop, crawling forward). The AV was observed by 24 participants standing on the curb of the road in groups. After each passing of the AV, participants rated whether the car was driven manually or automatically, and the degree to which they liked the AV's behaviour. Results showed that the playback manual trajectory was considered more manual than the other trajectory conditions. The stereotypical automated 'road centre tendency' and 'lane centre tendency' trajectories received likeability ratings similar to the playback manual driving. An analysis of written comments showed that curve cutting was a reason to believe the car was driven manually, whereas driving at a constant speed or in the centre was associated with automated driving. The sudden stop was the least likeable way to decelerate, but there was no consensus on whether this behaviour was manual or automated. It is concluded that AVs do not have to drive like a human in order to be liked.


Subject(s)
Automobile Driving; Pedestrians; Accidents, Traffic/prevention & control; Humans
4.
Ergonomics; 64(6): 793-805, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33306460

ABSTRACT

We examined what pedestrians look at when walking through a parking garage. Thirty-six participants walked a short route in a parking garage while their eye movements and head rotations were recorded with a Tobii Pro Glasses 2 eye-tracker. The participants' fixations were then classified into 14 areas of interest. The results showed that pedestrians often looked at the back (20.0%), side (7.5%), and front (4.2%) of parked cars, and at approaching cars (8.8%). Much attention was also paid to the ground (20.1%). The wheels of cars (6.8%) and the driver in approaching cars (3.2%) received attention as well. In conclusion, this study showed that eye movements are largely functional in the sense that they appear to assist in safe navigation through the parking garage. Pedestrians look at a variety of sides and features of the car, suggesting that displays on future automated cars should be omnidirectionally visible. Practitioner summary: This study measured where pedestrians look when walking through a parking garage. It was found that the back, side, and wheels of cars attract considerable attention. This knowledge may be important for the development of automated cars that feature so-called external human-machine interfaces (eHMIs).
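The percentages reported above come from classifying each fixation into one of 14 areas of interest (AOIs) and computing the share of fixations per AOI. A minimal sketch of that tally; the AOI label strings below are hypothetical, not the study's actual category names:

```python
from collections import Counter

def aoi_proportions(fixation_labels):
    """Percentage of fixations falling in each area of interest (AOI).

    fixation_labels: sequence of AOI names, one per classified fixation.
    Returns a dict mapping AOI name -> percentage of all fixations.
    """
    counts = Counter(fixation_labels)
    total = sum(counts.values())
    return {aoi: 100.0 * n / total for aoi, n in counts.items()}

# Hypothetical usage with four classified fixations:
props = aoi_proportions(["ground", "car_back", "ground", "approaching_car"])
```

A duration-weighted variant (summing fixation durations per AOI rather than counting fixations) is a common alternative when fixation lengths vary widely.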


Subject(s)
Pedestrians; Accidents, Traffic; Automobiles; Eye Movements; Eye-Tracking Technology; Humans; Walking
5.
Hum Factors; 60(8): 1192-1206, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30036098

ABSTRACT

OBJECTIVE: This study was designed to replicate past research concerning reaction times to audiovisual stimuli with different stimulus onset asynchrony (SOA) using a large sample of crowdsourcing respondents. BACKGROUND: Research has shown that reaction times are fastest when an auditory and a visual stimulus are presented simultaneously and that SOA causes an increase in reaction time, this increase being dependent on stimulus intensity. Research on audiovisual SOA has been conducted with small numbers of participants. METHOD: Participants (N = 1,823) each performed 176 reaction time trials consisting of 29 SOA levels and three visual intensity levels, using CrowdFlower, with a compensation of US$0.20 per participant. Results were verified with a local Web-in-lab study (N = 34). RESULTS: The results replicated past research, with a V shape of mean reaction time as a function of SOA, the V shape being stronger for lower-intensity visual stimuli. The level of SOA affected mainly the right side of the reaction time distribution, whereas the fastest 5% was hardly affected. The variability of reaction times was higher for the crowdsourcing study than for the Web-in-lab study. CONCLUSION: Crowdsourcing is a promising medium for reaction time research that involves small temporal differences in stimulus presentation. The observed effects of SOA can be explained by an independent-channels mechanism and also by some participants not perceiving the auditory or visual stimulus, hardware variability, misinterpretation of the task instructions, or lapses in attention. APPLICATION: The obtained knowledge on the distribution of reaction times may benefit the design of warning systems.
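The independent-channels mechanism mentioned in the conclusion can be illustrated with a toy race model: each modality processes its stimulus independently, and the response is triggered by whichever channel finishes first, which produces the V shape of mean reaction time around SOA = 0. All parameter values below (channel means, standard deviation, trial counts) are illustrative assumptions, not estimates from the study:

```python
import random

def race_rt(soa_ms, rng, mu_aud=180.0, mu_vis=220.0, sd=30.0):
    """One simulated audiovisual reaction time under a race model.

    soa_ms: visual onset minus auditory onset, in milliseconds.
    RT is measured from the onset of the first stimulus.
    """
    aud_onset = max(0.0, -soa_ms)   # auditory starts later if soa_ms < 0
    vis_onset = max(0.0, soa_ms)    # visual starts later if soa_ms > 0
    finish_aud = aud_onset + rng.gauss(mu_aud, sd)
    finish_vis = vis_onset + rng.gauss(mu_vis, sd)
    # The faster channel wins the race and triggers the response.
    return min(finish_aud, finish_vis)

def mean_rt(soa_ms, n=2000, seed=1):
    rng = random.Random(seed)
    return sum(race_rt(soa_ms, rng) for _ in range(n)) / n
```

At large |SOA| the response is effectively driven by one channel alone, while near SOA = 0 the minimum of two finishing times yields statistical facilitation, so simulated mean RT is lowest near simultaneity.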


Subject(s)
Auditory Perception/physiology; Crowdsourcing; Psychomotor Performance/physiology; Psychophysics/methods; Reaction Time/physiology; Visual Perception/physiology; Adult; Female; Humans; Male; Time Factors; Young Adult
6.
Appl Ergon; 62: 204-215, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28411731

ABSTRACT

When a highly automated car reaches its operational limits, it needs to provide a take-over request (TOR) in order for the driver to resume control. The aim of this simulator-based study was to investigate the effects of TOR modality and left/right directionality on drivers' steering behaviour when facing a head-on collision without having received specific instructions regarding the directional nature of the TORs. Twenty-four participants drove three sessions in a highly automated car, each session with a different TOR modality (auditory, vibrotactile, and auditory-vibrotactile). Six TORs were provided per session, warning the participants about a stationary vehicle that had to be avoided by changing lane left or right. Two TORs were issued from the left, two from the right, and two from both the left and the right (i.e., nondirectional). The auditory stimuli were presented via speakers in the simulator (left, right, or both), and the vibrotactile stimuli via a tactile seat (with tactors activated at the left side, right side, or both). The results showed that the multimodal TORs yielded statistically significantly faster steer-touch times than the unimodal vibrotactile TOR, while no statistically significant differences were observed for brake times and lane change times. The unimodal auditory TOR yielded relatively low self-reported usefulness and satisfaction ratings. Almost all drivers overtook the stationary vehicle on the left regardless of the directionality of the TOR, and a post-experiment questionnaire revealed that most participants had not realized that some of the TORs were directional. We conclude that between the three TOR modalities tested, the multimodal approach is preferred. Moreover, our results show that directional auditory and vibrotactile stimuli do not evoke a directional response in uninstructed drivers. More salient and semantically congruent cues, as well as explicit instructions, may be needed to guide a driver into a specific direction during a take-over scenario.
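The modality-by-directionality design above can be summarised as a mapping from TOR condition to the actuators triggered in the simulator. A minimal sketch; the actuator identifiers are assumptions for illustration, not names from the study's apparatus:

```python
def tor_channels(modality: str, direction: str) -> list[str]:
    """Actuators to trigger for a take-over request (TOR).

    modality: "auditory", "vibrotactile", or "auditory-vibrotactile"
    direction: "left", "right", or "both" (nondirectional)
    """
    sides = ["left", "right"] if direction == "both" else [direction]
    channels = []
    if modality in ("auditory", "auditory-vibrotactile"):
        channels += [f"speaker_{s}" for s in sides]   # simulator speakers
    if modality in ("vibrotactile", "auditory-vibrotactile"):
        channels += [f"tactors_{s}" for s in sides]   # tactile-seat tactors
    if not channels:
        raise ValueError(f"unknown modality: {modality!r}")
    return channels
```

Crossing the three modalities with the three directions reproduces the six TOR conditions presented per session.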


Subject(s)
Acoustic Stimulation; Automobile Driving; Physical Stimulation; Adult; Automobiles; Computer Simulation; Cues; Female; Humans; Male; Man-Machine Systems; Physical Stimulation/methods; Reaction Time; Task Performance and Analysis; Touch; Vibration; Young Adult