TY - JOUR
T1 - Optimizing Ego Vehicle Trajectory Prediction
T2 - IS&T International Symposium on Electronic Imaging 2024: Autonomous Vehicles and Machines, AVM 2024
AU - Sharma, Sushil
AU - Singh, Aryan
AU - Sistu, Ganesh
AU - Halton, Mark
AU - Eising, Ciarán
N1 - Publisher Copyright:
© 2024, Society for Imaging Science and Technology.
PY - 2024
Y1 - 2024
N2 - Predicting the trajectory of an ego vehicle is a critical component of autonomous driving systems. Current state-of-the-art methods typically rely on Deep Neural Networks (DNNs) and sequential models to process front-view images for future trajectory prediction. However, these approaches often struggle with perspective issues affecting object features in the scene. To address this, we advocate for the use of Bird’s Eye View (BEV) perspectives, which offer unique advantages in capturing spatial relationships and object homogeneity. In our work, we leverage Graph Neural Networks (GNNs) and positional encoding to represent objects in a BEV, achieving competitive performance compared to traditional DNN-based methods. While the BEV-based approach loses some detailed information inherent to front-view images, we compensate by enriching the BEV data, representing it as a graph in which relationships between the objects in a scene are captured effectively.
AB - Predicting the trajectory of an ego vehicle is a critical component of autonomous driving systems. Current state-of-the-art methods typically rely on Deep Neural Networks (DNNs) and sequential models to process front-view images for future trajectory prediction. However, these approaches often struggle with perspective issues affecting object features in the scene. To address this, we advocate for the use of Bird’s Eye View (BEV) perspectives, which offer unique advantages in capturing spatial relationships and object homogeneity. In our work, we leverage Graph Neural Networks (GNNs) and positional encoding to represent objects in a BEV, achieving competitive performance compared to traditional DNN-based methods. While the BEV-based approach loses some detailed information inherent to front-view images, we compensate by enriching the BEV data, representing it as a graph in which relationships between the objects in a scene are captured effectively.
UR - http://www.scopus.com/inward/record.url?scp=85185538499&partnerID=8YFLogxK
U2 - 10.2352/EI.2024.36.17.AVM-115
DO - 10.2352/EI.2024.36.17.AVM-115
M3 - Conference article
AN - SCOPUS:85185538499
SN - 2470-1173
VL - 36
JO - IS&T International Symposium on Electronic Imaging Science and Technology
JF - IS&T International Symposium on Electronic Imaging Science and Technology
IS - 17
M1 - 115
Y2 - 21 January 2024 through 25 January 2024
ER -