TY - JOUR
T1 - DriveSpace
T2 - 2019 Autonomous Vehicles and Machines Conference, AVM 2019
AU - Hughes, Ciarán
AU - Chandra, Sunil
AU - Sistu, Ganesh
AU - Horgan, Jonathan
AU - Deegan, Brian
AU - Chennupati, Sumanth
AU - Yogamani, Senthil
N1 - Publisher Copyright:
© 2019, Society for Imaging Science and Technology
PY - 2019/1/13
Y1 - 2019/1/13
N2 - Free space is an essential component of any autonomous driving system. It describes the region around the vehicle, typically the road surface, that is free from obstacles. However, in practice, free space should not solely describe the area where a vehicle can plan a trajectory. For instance, on a single-lane road with two-way traffic, the opposite lane should not be included as an area where the vehicle can plan a driving path, although it will be detected as free space. In this paper, we introduce a new conceptual representation called DriveSpace, which corresponds to the semantic understanding and context of the scene. We formulate it based on a combination of dense 3D reconstruction and semantic segmentation. We use a graphical model approach to fuse and learn the drivable area. As the drivable region is highly dependent on the situation and the dynamics of other objects, it remains somewhat subjective. We analyze various scenarios of DriveSpace and propose a general method to detect all of them. As this is a new concept, no datasets are available for development and testing; however, we are working on creating one to show quantitative results of the proposed method.
AB - Free space is an essential component of any autonomous driving system. It describes the region around the vehicle, typically the road surface, that is free from obstacles. However, in practice, free space should not solely describe the area where a vehicle can plan a trajectory. For instance, on a single-lane road with two-way traffic, the opposite lane should not be included as an area where the vehicle can plan a driving path, although it will be detected as free space. In this paper, we introduce a new conceptual representation called DriveSpace, which corresponds to the semantic understanding and context of the scene. We formulate it based on a combination of dense 3D reconstruction and semantic segmentation. We use a graphical model approach to fuse and learn the drivable area. As the drivable region is highly dependent on the situation and the dynamics of other objects, it remains somewhat subjective. We analyze various scenarios of DriveSpace and propose a general method to detect all of them. As this is a new concept, no datasets are available for development and testing; however, we are working on creating one to show quantitative results of the proposed method.
UR - http://www.scopus.com/inward/record.url?scp=85080028379&partnerID=8YFLogxK
U2 - 10.2352/ISSN.2470-1173.2019.15.AVM-042
DO - 10.2352/ISSN.2470-1173.2019.15.AVM-042
M3 - Conference article
AN - SCOPUS:85080028379
SN - 2470-1173
VL - 2019
JO - IS and T International Symposium on Electronic Imaging Science and Technology
JF - IS and T International Symposium on Electronic Imaging Science and Technology
IS - 15
M1 - AVM-042
Y2 - 13 January 2019 through 17 January 2019
ER -