TY - JOUR
T1 - Low power indirect time of flight near field LiDAR depth correction
AU - Nagiub, Mena
AU - Beuth, Thorsten
AU - Sistu, Ganesh
AU - Gotzig, Heinrich
AU - Eising, Ciarán
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Indirect time-of-flight (iToF) LiDAR sensors are a viable alternative to direct time-of-flight (dToF) LiDAR sensors and cameras for near-field applications, particularly in autonomous vehicles and mobile robots, but they come with technical challenges of their own, most notably depth range ambiguity. Many modern Deep Neural Network (DNN) computer vision models running on graphics processing units (GPUs) can effectively resolve this ambiguity. However, the high electrical energy consumption of GPUs poses a significant challenge when they are integrated into battery-powered embedded systems. Nonetheless, viable alternatives can provide sufficient computational power while maintaining a low electric power profile; one such option is the low-power ARM processor architecture with single-instruction, multiple-data (SIMD) accelerators. In this paper, we examine the system architecture differences between GPUs and CPUs with integrated SIMD units, explore the potential use cases for running DNN models, identify which models are suitable for which architectures, propose methods for running the depth correction algorithm on such low-power platforms, discuss the challenges of these methods, and suggest ways to overcome them, thereby enabling the use of iToF sensors in low-power applications. We pose several questions about the embedded deployment of DNN models and offer recommendations to address them. We achieve real-time depth correction at 10 frames per second with acceptable accuracy on the low-power ARM Cortex-A76 architecture, with no need for a GPU.
AB - Indirect time-of-flight (iToF) LiDAR sensors are a viable alternative to direct time-of-flight (dToF) LiDAR sensors and cameras for near-field applications, particularly in autonomous vehicles and mobile robots, but they come with technical challenges of their own, most notably depth range ambiguity. Many modern Deep Neural Network (DNN) computer vision models running on graphics processing units (GPUs) can effectively resolve this ambiguity. However, the high electrical energy consumption of GPUs poses a significant challenge when they are integrated into battery-powered embedded systems. Nonetheless, viable alternatives can provide sufficient computational power while maintaining a low electric power profile; one such option is the low-power ARM processor architecture with single-instruction, multiple-data (SIMD) accelerators. In this paper, we examine the system architecture differences between GPUs and CPUs with integrated SIMD units, explore the potential use cases for running DNN models, identify which models are suitable for which architectures, propose methods for running the depth correction algorithm on such low-power platforms, discuss the challenges of these methods, and suggest ways to overcome them, thereby enabling the use of iToF sensors in low-power applications. We pose several questions about the embedded deployment of DNN models and offer recommendations to address them. We achieve real-time depth correction at 10 frames per second with acceptable accuracy on the low-power ARM Cortex-A76 architecture, with no need for a GPU.
KW - ambiguity
KW - depth correction
KW - electric vehicles
KW - GPU
KW - iToF
KW - LiDAR
KW - low power computing
KW - SIMD
UR - https://www.scopus.com/pages/publications/105017183506
U2 - 10.1109/OJVT.2025.3613141
DO - 10.1109/OJVT.2025.3613141
M3 - Article
AN - SCOPUS:105017183506
SN - 2644-1330
VL - 6
SP - 2736
EP - 2760
JO - IEEE Open Journal of Vehicular Technology
JF - IEEE Open Journal of Vehicular Technology
ER -