Results 1 - 2 of 2
1.
Sensors (Basel) ; 24(16)2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39204832

ABSTRACT

Camera-based object detection is integral to advanced driver assistance systems (ADAS) and autonomous vehicle research, and RGB cameras remain indispensable for their spatial resolution and color information. This study investigates exposure time optimization for such cameras, considering image quality in dynamic ADAS scenarios. Exposure time, the period during which the camera sensor is exposed to light, directly influences the amount of information captured. In dynamic scenarios, such as those encountered in typical driving, optimizing exposure time is challenging because of the inherent trade-off between signal-to-noise ratio (SNR) and motion blur: extending exposure time to maximize information capture increases SNR, but also increases the risk of motion blur and overexposure, a trade-off that is most acute in low-light conditions where objects may not be fully illuminated. The study introduces a comprehensive methodology for exposure time optimization under various lighting conditions, examining its impact on image quality and computer vision performance. Traditional image quality metrics correlate poorly with computer vision performance, highlighting the need for new metrics with improved correlation. The research presented in this paper offers guidance on enhancing single-exposure camera-based systems for automotive applications. By addressing the balance between exposure time, image quality, and computer vision performance, the findings provide a road map for optimizing camera settings for ADAS and autonomous driving technologies, contributing to safety and performance advances in the automotive landscape.
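The SNR/motion-blur trade-off described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's methodology: it assumes a shot-noise-limited sensor (SNR grows as the square root of collected signal) and a simple pinhole projection for the pixels swept by a moving object during the exposure. All parameter names and values are hypothetical.

```python
import math

def exposure_tradeoff(t_exp_s, photon_rate, speed_mps, distance_m, focal_px):
    """Illustrative exposure-time trade-off.

    Longer exposure collects more signal (higher shot-noise-limited SNR)
    but lets a moving object sweep more pixels (more motion blur).
    """
    signal = photon_rate * t_exp_s          # photoelectrons collected
    snr = math.sqrt(signal)                 # shot-noise-limited SNR ~ sqrt(signal)
    # Pixels swept: angular speed (rad/s) * focal length (px) * exposure (s)
    blur_px = (speed_mps / distance_m) * focal_px * t_exp_s
    return snr, blur_px

# Hypothetical scene: object at 20 m moving 15 m/s laterally, 1200 px focal length
snr_short, blur_short = exposure_tradeoff(0.005, 2.0e5, 15.0, 20.0, 1200.0)
snr_long, blur_long = exposure_tradeoff(0.020, 2.0e5, 15.0, 20.0, 1200.0)
```

Quadrupling the exposure here doubles SNR but quadruples the blur extent, which is the tension the study's optimization methodology addresses.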

2.
J Imaging ; 9(12)2023 Nov 24.
Article in English | MEDLINE | ID: mdl-38132678

ABSTRACT

In advanced driver assistance system (ADAS) and autonomous vehicle research, acquiring semantic information about the surrounding environment relies heavily on camera-based object detection. Image signal processors (ISPs) in cameras are generally tuned for human perception; in most cases, ISP parameters are selected subjectively, so the resulting image differs depending on the individual who tuned it. While cameras were first installed on cars to give the driver a view of the vehicle's environment, they are increasingly part of safety-critical object detection systems for ADAS. Deep learning-based object detection has become prominent, but the performance impact of varying ISP parameters remains largely unexplored. In this study, we analyze the performance of 14 popular object detection models under changes to the ISP parameters. We consider eight ISP blocks: demosaicing, gamma, denoising, edge enhancement, local tone mapping, saturation, contrast, and hue angle. We investigate two raw datasets: PASCALRAW and a custom raw dataset collected from an ADAS perspective. We found that deviating from the default ISP degrades object detection performance and that models differ in their sensitivity to ISP parameter variation. Finally, we propose a novel methodology that increases object detection model robustness via ISP-variation data augmentation.
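The ISP-variation augmentation idea can be illustrated with one of the eight ISP blocks the abstract lists, gamma. The sketch below is a hypothetical simplification of the proposed augmentation, not the authors' implementation: it re-renders a normalized frame with a randomly sampled gamma curve, mimicking a differently tuned ISP; the function names and the gamma range are illustrative assumptions.

```python
import random

def gamma_adjust(pixels, gamma):
    """Apply a gamma curve to pixel values normalized to [0, 1]."""
    return [p ** gamma for p in pixels]

def isp_variation_augment(pixels, rng, gamma_range=(0.7, 1.5)):
    """Hypothetical ISP-variation augmentation step.

    Samples a gamma from gamma_range and re-renders the frame with it,
    simulating the output of an ISP tuned by a different individual.
    """
    gamma = rng.uniform(*gamma_range)
    return gamma_adjust(pixels, gamma), gamma

# Usage: augment a tiny "frame" deterministically for reproducibility
rng = random.Random(0)
frame = [0.0, 0.25, 0.5, 0.75, 1.0]
augmented, gamma = isp_variation_augment(frame, rng)
```

A full pipeline in the spirit of the abstract would sample all eight ISP blocks jointly during training, so the detector sees many plausible ISP tunings of the same raw capture.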
