TY - GEN
T1 - Classification of Traffic Signaling Motion in Automotive Applications Using FMCW Radar
AU - Biswas, Sabyasachi
AU - Bartlett, Benjamin
AU - Ball, John E.
AU - Gurbuz, Ali C.
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Advanced driver-assistance systems (ADAS) typically include sensors such as radar, lidar, or cameras to make vehicles aware of their surroundings. These systems are exposed to a wide variety of traffic situations, such as impending collisions, lane changes, intersections, sudden changes in speed, and other common instances of driving errors. One of the key barriers to automotive autonomy is the inability of self-driving cars to navigate unstructured environments, which typically have no traffic lights present or operational for directing traffic. In these circumstances, it is much more common for a person to be tasked with directing vehicles, either by signaling with an appropriate sign or via gesturing. Interpreting human body language and gestures in traffic-directing scenarios is a great challenge for autonomous vehicles. In this study, we present a new dataset of traffic signaling motions collected using millimeter-wave (mmWave) radar, a camera, lidar, and a motion-capture system. The signaling motions are based on those used in the US traffic system. Initial classification results from radar micro-Doppler (μ-D) signature analysis using basic convolutional neural networks (CNNs) demonstrate that deep learning can classify traffic signaling motions in automotive applications with high accuracy (around 92%).
AB - Advanced driver-assistance systems (ADAS) typically include sensors such as radar, lidar, or cameras to make vehicles aware of their surroundings. These systems are exposed to a wide variety of traffic situations, such as impending collisions, lane changes, intersections, sudden changes in speed, and other common instances of driving errors. One of the key barriers to automotive autonomy is the inability of self-driving cars to navigate unstructured environments, which typically have no traffic lights present or operational for directing traffic. In these circumstances, it is much more common for a person to be tasked with directing vehicles, either by signaling with an appropriate sign or via gesturing. Interpreting human body language and gestures in traffic-directing scenarios is a great challenge for autonomous vehicles. In this study, we present a new dataset of traffic signaling motions collected using millimeter-wave (mmWave) radar, a camera, lidar, and a motion-capture system. The signaling motions are based on those used in the US traffic system. Initial classification results from radar micro-Doppler (μ-D) signature analysis using basic convolutional neural networks (CNNs) demonstrate that deep learning can classify traffic signaling motions in automotive applications with high accuracy (around 92%).
KW - ADAS
KW - CNN
KW - Micro-Doppler
KW - autonomy
KW - mmWave
KW - traffic gesture classification
UR - http://www.scopus.com/inward/record.url?scp=85163752121&partnerID=8YFLogxK
U2 - 10.1109/RadarConf2351548.2023.10149728
DO - 10.1109/RadarConf2351548.2023.10149728
M3 - Conference contribution
AN - SCOPUS:85163752121
T3 - Proceedings of the IEEE Radar Conference
BT - RadarConf23 - 2023 IEEE Radar Conference, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE Radar Conference, RadarConf23
Y2 - 1 May 2023 through 5 May 2023
ER -