Convolutional Neural Network-Based Pedestrian Trajectory Forecasting with Sequence Modelling
Keywords:
Convolutional Neural Network, Trajectory Prediction, Spatial-Temporal, Scene Semantics, Social Dynamics

Abstract
Predicting the future path of pedestrians is a vital task for enhancing the safety and reliability of autonomous technologies, including self-driving vehicles, intelligent surveillance platforms, and robotics. The complex and dynamic nature of human movement, shaped by social interactions and environmental context, poses a significant challenge to accurate forecasting. Deep learning-based prediction methods have demonstrated better performance than conventional approaches on various kinds of trajectory data, yet they still face challenges in accuracy, efficiency, and dependability. In this work, an optimized Convolutional Neural Network (CNN) is proposed for forecasting future human trajectories. A compact CNN architecture is developed to extract spatial-temporal dependencies, while scene semantics and social dynamics are incorporated to enhance contextual understanding. Computational efficiency is maintained throughout the model design to ensure suitability for real-time applications. Despite its lightweight nature, the model performs well against state-of-the-art techniques, recording ADE and FDE scores of 1.0 and 1.28, respectively. Faster inference is also achieved, with the reduced inference time further confirming the model's suitability for real-time pedestrian trajectory forecasting. These findings confirm that accurate and reliable predictions can be delivered by the proposed model while maintaining a computationally efficient and lightweight architecture.
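For concreteness, the sketch below illustrates the kind of compact temporal CNN and the ADE/FDE metrics referred to above. It is a minimal PyTorch illustration under stated assumptions, not the authors' architecture: the layer sizes, the 8-step observation / 12-step prediction horizon, and all identifiers (CompactTrajectoryCNN, ade_fde) are hypothetical choices made for this example.

# Minimal sketch (illustrative only): a compact 1-D CNN that maps an observed
# trajectory of (x, y) points to predicted future coordinates, plus the ADE/FDE
# metrics cited in the abstract. Layer sizes and horizons are assumptions.
import torch
import torch.nn as nn

OBS_LEN, PRED_LEN = 8, 12  # assumed observation/prediction split, not stated in the text

class CompactTrajectoryCNN(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        # Temporal convolutions over the observed sequence extract
        # spatial-temporal dependencies from the (x, y) coordinate stream.
        self.encoder = nn.Sequential(
            nn.Conv1d(2, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # A linear head decodes the flattened features into future coordinates.
        self.head = nn.Linear(channels * OBS_LEN, PRED_LEN * 2)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, OBS_LEN, 2) -> (batch, 2, OBS_LEN) as expected by Conv1d
        feats = self.encoder(obs.transpose(1, 2))
        out = self.head(feats.flatten(1))
        return out.view(-1, PRED_LEN, 2)

def ade_fde(pred: torch.Tensor, gt: torch.Tensor) -> tuple[float, float]:
    """Average / Final Displacement Error over (batch, PRED_LEN, 2) tensors."""
    dist = torch.linalg.norm(pred - gt, dim=-1)  # per-step Euclidean error
    return dist.mean().item(), dist[:, -1].mean().item()

model = CompactTrajectoryCNN()
obs = torch.randn(4, OBS_LEN, 2)        # random stand-in for observed tracks
pred = model(obs)
print(ade_fde(pred, torch.randn(4, PRED_LEN, 2)))

ADE averages the Euclidean error over all predicted time steps, while FDE measures the error at the final step only, which is why the reported FDE (1.28) is higher than the ADE (1.0).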