Results 1 - 5 of 5
1.
Sensors (Basel) ; 23(17)2023 Aug 25.
Article in English | MEDLINE | ID: mdl-37687892

ABSTRACT

Despite advances in advanced driver assistance systems (ADAS) and autonomous driving systems, surpassing level 3 of driving automation remains challenging. At level 3, the system assumes full responsibility for the vehicle's actions, which demands safer and more interpretable visual cues. To approach level 3, we propose a novel method for detecting driving vehicles and their brake light status, a crucial visual cue relied upon by human drivers. Our proposal consists of two main components. First, we introduce a fast and accurate one-stage brake light status detection network based on YOLOv8. Through transfer learning on a custom dataset, we enable YOLOv8 not only to detect the driving vehicle but also to determine its brake light status. Second, we present a publicly available custom dataset, which includes over 11,000 forward-facing images along with manual annotations. We evaluate the performance of the proposed method in terms of detection accuracy and inference time on an edge device. The experimental results demonstrate high detection performance, with an mAP50 (mean average precision at an IoU threshold of 0.50) ranging from 0.766 to 0.793 on the test dataset, along with a short inference time of 133.30 ms on a Jetson Nano device. In conclusion, the proposed method achieves high accuracy and fast inference in detecting brake light status. This contribution improves safety, interpretability, and comfort by providing valuable input information for ADAS and autonomous driving technologies.
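To make the mAP50 criterion above concrete: a detection counts as a true positive only when its intersection-over-union (IoU) with a ground-truth box is at least 0.50. A minimal IoU sketch follows; the boxes and values are illustrative, not taken from the paper.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle corners
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A slightly shifted prediction still clears the 0.50 threshold,
# so it would count toward mAP50.
pred = (10, 10, 50, 50)
gt = (12, 12, 52, 52)
print(iou(pred, gt) >= 0.50)
```
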

2.
Sensors (Basel) ; 23(3)2023 Jan 28.
Article in English | MEDLINE | ID: mdl-36772481

ABSTRACT

Detecting whether the driver's hands are on or off the wheel is very important for safety in current autonomous vehicles. Several studies have attempted to create a precise algorithm, but the proposed approaches have limitations in robustness and reliability. We therefore propose a deep learning model that utilizes in-vehicle data. We also established a data collection system that gathers in-vehicle data auto-labeled for efficient and reliable data acquisition. For robustness, we devised a confidence logic that prevents outliers from swaying the prediction. To evaluate the model in more detail, we propose a new metric that accounts for state transitions when scoring events. In addition, we conducted an extensive experiment on new drivers to demonstrate the model's generalization ability. We verified that the proposed system resolves the drawbacks of previous studies and achieves better performance: it detects hands on/off transitions in 0.37 s on average, with an accuracy of 95.7%.
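The abstract does not specify the form of the confidence logic; a common realization is temporal debouncing, where the reported state flips only after several consecutive frames agree, so a single-frame outlier cannot sway the output. The sketch below is an assumption in that spirit; the function name and threshold are illustrative.

```python
def smooth_states(raw_states, hold=3):
    """Debounce a stream of per-frame on/off predictions: the
    reported state changes only after `hold` consecutive frames
    disagree with the current state."""
    if not raw_states:
        return []
    current = raw_states[0]
    run = 0  # length of the current disagreeing streak
    out = []
    for s in raw_states:
        if s == current:
            run = 0
        else:
            run += 1
            if run >= hold:
                current = s
                run = 0
        out.append(current)
    return out

# A lone outlier "off" frame is suppressed; a sustained change is accepted.
frames = ["on", "on", "off", "on", "off", "off", "off", "on"]
print(smooth_states(frames, hold=3))
```
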

3.
Sensors (Basel) ; 22(23)2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36502010

ABSTRACT

Recently, research using point clouds has been increasing with the development of 3D scanner technology. In line with this trend, the demand for high-quality point clouds is growing, but obtaining them remains expensive. Therefore, with the recent remarkable progress of deep learning, point cloud up-sampling, which uses deep learning to generate high-quality point clouds from low-quality ones, has become a field attracting considerable attention. This paper proposes a new point cloud up-sampling method called Point cloud Up-sampling via Multi-scale Features Attention (PU-MFA). Inspired by prior studies that reported good performance in generating high-quality dense point sets using multi-scale features or attention mechanisms, PU-MFA merges the two through a U-Net structure. In addition, PU-MFA adaptively uses multi-scale features to refine global features effectively. PU-MFA was compared with other state-of-the-art methods on various evaluation metrics through experiments using the PU-GAN dataset, a synthetic point cloud dataset, and the KITTI dataset, a real-scanned point cloud dataset. Across these experiments, PU-MFA showed superior performance in generating high-quality dense point sets in both quantitative and qualitative evaluation compared with other state-of-the-art methods, demonstrating the effectiveness of the proposed approach. The attention map of PU-MFA was also visualized to show the effect of the multi-scale features.
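The abstract refers to "various evaluation metrics" without naming them; a standard metric for point cloud up-sampling quality is the Chamfer distance between the generated dense set and the ground truth. The brute-force sketch below shows the idea on tiny point sets; it is an illustration, not the paper's evaluation code.

```python
def chamfer_distance(set_a, set_b):
    """Symmetric Chamfer distance between two point sets: the mean
    squared distance from each point to its nearest neighbour in the
    other set, summed over both directions. Lower is better."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    def one_way(src, dst):
        return sum(min(sq_dist(p, q) for q in dst) for p in src) / len(src)

    return one_way(set_a, set_b) + one_way(set_b, set_a)

dense = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(chamfer_distance(dense, dense))  # identical sets -> 0.0
```
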


Subject(s)
Benchmarking , Technology
4.
Sensors (Basel) ; 22(12)2022 Jun 10.
Article in English | MEDLINE | ID: mdl-35746182

ABSTRACT

As vehicles provide more services to drivers, research on driver emotion recognition has been expanding. However, current driver emotion datasets are limited by inconsistencies in the collected data and by emotional-state annotations inferred by others. To overcome this limitation, we propose a data collection system that gathers multimodal datasets during real-world driving. The proposed system includes a self-report HMI application into which the driver directly inputs their current emotional state, designed to minimize behavioral and cognitive disturbance. Using the system, data collection was completed without any accidents over more than 122 h of real-world driving. To demonstrate the validity of the collected dataset, we also provide case studies on statistical analysis, driver face detection, and personalized driver emotion recognition. The proposed data collection system enables the construction of reliable large-scale datasets on real-world driving and facilitates research on driver emotion recognition. The proposed system is available on GitHub.
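The abstract does not describe how the self-reported labels are attached to the continuously collected sensor streams; one straightforward scheme is to label each sensor sample with the most recent self-report at or before its timestamp. The sketch below illustrates that assumed alignment; names and data are hypothetical.

```python
def label_samples(samples, reports):
    """Attach to each sensor sample the most recent self-reported
    emotion label at or before its timestamp (None before the first
    report). Both inputs are (timestamp, value) tuples sorted by time."""
    labeled = []
    idx = -1  # index of the last report already applied
    for t, value in samples:
        # Advance through reports whose timestamp has passed
        while idx + 1 < len(reports) and reports[idx + 1][0] <= t:
            idx += 1
        label = reports[idx][1] if idx >= 0 else None
        labeled.append((t, value, label))
    return labeled

reports = [(2.0, "neutral"), (5.0, "happy")]   # (time, self-report)
samples = [(1.0, 0.1), (3.0, 0.2), (6.0, 0.3)]  # (time, sensor value)
print(label_samples(samples, reports))
```
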


Subject(s)
Automobile Driving , Accidents, Traffic/prevention & control , Automobile Driving/psychology , Data Collection , Emotions
5.
Sensors (Basel) ; 21(6)2021 Mar 19.
Article in English | MEDLINE | ID: mdl-33808922

ABSTRACT

In intelligent vehicles, it is essential to monitor the driver's condition; however, recognizing the driver's emotional state is one of the most challenging and important tasks. Most previous studies have focused on facial expression recognition to monitor the driver's emotional state. While driving, however, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose a deep learning-based driver's real emotion recognizer (DRER), an algorithm that recognizes drivers' real emotions that cannot be fully identified from facial expressions alone. The proposed algorithm comprises two models: (i) a facial expression recognition model based on a state-of-the-art convolutional neural network structure; and (ii) a sensor fusion emotion recognition model, which fuses the recognized facial expression state with electrodermal activity, a bio-physiological signal representing the electrical characteristics of the skin, to recognize the driver's real emotional state. We categorized the driver's emotions and conducted human-in-the-loop experiments to acquire the data. Experimental results show that the proposed fusion approach achieves a 114% increase in accuracy compared to using only facial expressions and a 146% increase compared to using only electrodermal activity. In conclusion, the proposed method achieves 86.8% accuracy in recognizing the driver's induced emotion while driving.
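The paper's sensor fusion model is learned; the simplest form of the same idea is late fusion, where the per-class probabilities from the two models are combined by a weighted average and the highest fused score wins. The sketch below shows only that baseline form as an assumption; the weights, class names, and probabilities are illustrative, not the paper's fusion network.

```python
def late_fusion(face_probs, eda_probs, w_face=0.5):
    """Late fusion of two per-class probability vectors by weighted
    averaging. Returns the winning class index and the fused vector."""
    fused = [w_face * f + (1.0 - w_face) * e
             for f, e in zip(face_probs, eda_probs)]
    return fused.index(max(fused)), fused

classes = ["neutral", "happy", "angry"]
face = [0.5, 0.3, 0.2]  # facial expression model output (hypothetical)
eda = [0.2, 0.2, 0.6]   # electrodermal activity model output (hypothetical)

# The face model alone says "neutral", but the fused scores
# let the strong EDA signal override it.
idx, fused = late_fusion(face, eda, w_face=0.4)
print(classes[idx])
```
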


Subject(s)
Automobile Driving , Deep Learning , Emotions , Facial Expression , Humans , Neural Networks, Computer