Turn-Aware LSTM Model for Vehicle Trajectory Forecasting

Xingnan Zhou, Ciprian Alecsandru, Saman Bashbaghi, Yunseo Jeong, Ye Chen
Concordia University, Montreal · Ericsson AI Hub Canada
Published in Advances in Transportation Studies (ATS), Vol. LXVIII, pp. 381–396, April 2026

Abstract

Accurate trajectory prediction is essential for autonomous driving safety at intersections. Existing deep learning models often overlook turning behaviors, leading to curvature misestimation. This study proposes a Turn-Aware LSTM network that explicitly encodes maneuver type (left, right, straight) through cumulative heading-change features and one-hot indicators. Evaluated on UAV-captured intersection trajectories in Montreal, the model reduces FDE by 15–20% for turning maneuvers compared to a vanilla LSTM, while maintaining real-time inference at ~2.5 ms per trajectory.
Highlights: 15–20% FDE reduction on turns · ~2.5 ms inference time · 30 fps UAV video · 3 prediction horizons

Study Site & Data Collection

Vehicle trajectories were collected using a DJI drone hovering 80 m above a four-arm signalized intersection in Châteauguay, in the Greater Montreal area, QC. Video was captured at 30 fps during peak hours, then stabilized with a Fourier–Mellin transform to correct UAV motion artifacts.
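The paper applies a full Fourier–Mellin transform, which also handles rotation and zoom; as a minimal sketch of its core translation step, frame-to-frame jitter can be estimated by FFT phase correlation (all array sizes and the synthetic shift below are illustrative assumptions, not values from the study):

```python
import numpy as np

def phase_correlate(ref, frame):
    """Estimate the integer (dy, dx) translation of `frame` relative to
    `ref` via FFT phase correlation (the translation step of a
    Fourier-Mellin pipeline; rotation/scale handling is omitted)."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(frame)
    cross /= np.abs(cross) + 1e-12            # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak indices to signed shifts (wrap-around aware)
    dy, dx = (int(p) if p <= s // 2 else int(p) - s
              for p, s in zip(peak, corr.shape))
    return dy, dx

# synthetic check: shift a frame by (3, -5) pixels, then undo it
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shaky = np.roll(ref, shift=(3, -5), axis=(0, 1))
dy, dx = phase_correlate(ref, shaky)
stabilized = np.roll(shaky, shift=(-dy, -dx), axis=(0, 1))
```

In a real stabilization pipeline each frame is registered against a reference frame and warped accordingly; the Fourier–Mellin extension resamples the spectrum in log-polar coordinates so the same correlation trick also recovers rotation and scale.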

Figure: Drone positioned 80 m above the study intersection with YOLOv8 detections
Figure: Real-time YOLOv8 detection with Deep SORT tracking IDs

Vehicles were detected using YOLOv8 retrained on 18,000 custom-labeled images, and tracked across frames with Deep SORT. The dataset includes passenger vehicles, trucks, and buses, split into 70% training, 15% validation, and 15% test sets.
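For a 70/15/15 split, grouping by track ID (an assumption about the authors' protocol, but standard practice) keeps every frame of one vehicle in a single subset and avoids train/test leakage:

```python
import numpy as np

def split_tracks(track_ids, train=0.70, val=0.15, seed=42):
    """Shuffle unique track IDs, then split 70/15/15 so that all frames
    of one vehicle land in exactly one subset (no train/test leakage)."""
    ids = np.unique(track_ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(round(train * n))
    n_val = int(round(val * n))
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

# 1000 hypothetical track IDs -> 700 / 150 / 150
train_ids, val_ids, test_ids = split_tracks(np.arange(1000))
```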

Method

Figure: End-to-end pipeline: UAV video → YOLOv8 detection → Deep SORT tracking → Turn-Aware LSTM forecasting

Turn-Aware Feature Encoding

The key insight is that turning maneuvers are the hardest to predict, yet standard LSTMs carry no explicit representation of maneuver intent. We address this with two turn-aware inputs derived from the observed trajectory: a cumulative heading-change feature and a one-hot maneuver indicator (left, right, straight).

Figure: Vehicle trajectories color-coded by maneuver type: left turns (orange), right turns (blue), straight (green)
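The two turn-aware inputs can be sketched as follows; the 30° classification threshold and the east/north axis convention are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

LEFT, RIGHT, STRAIGHT = 0, 1, 2
TURN_THRESHOLD = np.deg2rad(30)   # assumed threshold, for illustration only

def turn_features(xy):
    """xy: (T, 2) observed positions. Returns the cumulative heading-change
    signal and a one-hot maneuver indicator [left, right, straight]."""
    headings = np.arctan2(np.diff(xy[:, 1]), np.diff(xy[:, 0]))
    # wrap per-step heading changes into (-pi, pi]
    dtheta = np.angle(np.exp(1j * np.diff(headings)))
    cum = np.cumsum(dtheta)
    total = cum[-1]
    if total > TURN_THRESHOLD:        # net CCW heading gain = left turn
        maneuver = LEFT               # (with x pointing east, y north)
    elif total < -TURN_THRESHOLD:
        maneuver = RIGHT
    else:
        maneuver = STRAIGHT
    return cum, np.eye(3)[maneuver]

# quarter-circle counter-clockwise arc -> classified as a left turn
t = np.linspace(0.0, np.pi / 2, 30)
xy = 10.0 * np.c_[np.cos(t), np.sin(t)]
cum, onehot = turn_features(xy)
```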

Encoder–Decoder Architecture

The Turn-Aware LSTM uses a 2-layer stacked encoder (128 hidden units) to compress the observed trajectory, then a decoder LSTM generates future positions autoregressively. The turn features are concatenated with kinematic features at the input, providing explicit maneuver context throughout the encoding process.
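A minimal PyTorch sketch of this encoder–decoder, assuming the "2-layer, 128 hidden units" stated above; the input feature sizes, the (x, y) decoder seeding, and all other dimensions are assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class TurnAwareLSTM(nn.Module):
    """Encoder-decoder LSTM; turn features are concatenated with the
    kinematic features at every input step (sketch, not official code)."""
    def __init__(self, kin_dim=4, turn_dim=4, hidden=128, layers=2):
        super().__init__()
        in_dim = kin_dim + turn_dim           # e.g. x, y, v, heading + turn feats
        self.encoder = nn.LSTM(in_dim, hidden, num_layers=layers, batch_first=True)
        self.decoder = nn.LSTM(2, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # predicts the next (x, y)

    def forward(self, obs, horizon):
        # obs: (B, T_obs, kin_dim + turn_dim)
        _, state = self.encoder(obs)          # compress the observed trajectory
        step = obs[:, -1:, :2]                # seed with the last observed (x, y)
        preds = []
        for _ in range(horizon):              # autoregressive decoding
            out, state = self.decoder(step, state)
            step = self.head(out)
            preds.append(step)
        return torch.cat(preds, dim=1)        # (B, horizon, 2)

model = TurnAwareLSTM()
future = model(torch.randn(8, 30, 8), horizon=90)   # 3 s at 30 fps
```

Passing the encoder's final (hidden, cell) state directly into the decoder is what carries the maneuver context through the whole rollout.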

Results

Models were evaluated at 1s, 2s, and 3s horizons (30, 60, 90 frames) against three baselines: Constant Velocity (CV), Vanilla LSTM, and a Tiny Transformer.
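ADE and FDE are the two metrics used throughout; a reference implementation (the constant-offset example trajectory is synthetic, for illustration only):

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean Euclidean error over all
    predicted timesteps. pred, gt: (T, 2) positions in metres."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def fde(pred, gt):
    """Final Displacement Error: Euclidean error at the last timestep."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

# synthetic check: a constant 0.3 m lateral offset over a 90-frame horizon
gt = np.stack([np.arange(90, dtype=float), np.zeros(90)], axis=1)
pred = gt + np.array([0.0, 0.3])
# ade(pred, gt) == 0.3 and fde(pred, gt) == 0.3
```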

Overall Performance

Figure: Average Displacement Error (ADE) across prediction horizons
Figure: Final Displacement Error (FDE) across prediction horizons

The Tiny Transformer achieves the lowest overall errors, while the Turn-Aware LSTM consistently outperforms the Vanilla LSTM at all horizons, narrowing the gap to the Transformer at negligible added inference cost.

Per-Maneuver Breakdown

The targeted benefit of turn encoding is most visible in maneuver-specific analysis:

Figure: ADE by maneuver type at 1 s, 2 s, 3 s horizons
Figure: FDE by maneuver type at 1 s, 2 s, 3 s horizons
Maneuver     Metric      Vanilla LSTM   Turn-Aware LSTM   Improvement
Right Turn   FDE @ 3 s   ~0.51 m        ~0.42 m           ~18%
Left Turn    FDE @ 3 s   ~0.35 m        ~0.28 m           ~20%
Straight     FDE @ 3 s   ~0.18 m        ~0.17 m           ~6%

Turn encoding provides the largest gains precisely where prediction is hardest: turning maneuvers.
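The improvement column follows directly from the FDE pairs in the table; a quick arithmetic check:

```python
# Relative FDE improvement: (vanilla - turn_aware) / vanilla, in percent
pairs = {
    "right":    (0.51, 0.42),
    "left":     (0.35, 0.28),
    "straight": (0.18, 0.17),
}
improvement = {k: round(100 * (v - t) / v) for k, (v, t) in pairs.items()}
# → {'right': 18, 'left': 20, 'straight': 6}
```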

Computational Efficiency

Model               Inference Time   Hardware
Constant Velocity   < 1 ms           CPU
Vanilla LSTM        ~2.3 ms          RTX 4090
Turn-Aware LSTM     ~2.5 ms          RTX 4090
Tiny Transformer    ~4.8 ms          RTX 4090

The turn-aware features add only ~0.2 ms overhead vs. the vanilla LSTM, making the model fully suitable for real-time autonomous driving applications.
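Per-trajectory latencies like these are typically measured by warming up and then averaging many calls; a generic sketch (the workload below is a stand-in for a model forward pass, not the actual benchmark protocol of the paper):

```python
import time

def mean_latency_ms(fn, warmup=10, iters=100):
    """Average wall-clock latency of fn() in milliseconds, after a short
    warm-up that excludes one-time setup costs (allocation, JIT, caches)."""
    for _ in range(warmup):
        fn()
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return 1000 * (time.perf_counter() - t0) / iters

# stand-in workload for a trajectory-model forward pass
latency = mean_latency_ms(lambda: sum(i * i for i in range(1000)))
```

For GPU models the device must be synchronized (e.g. `torch.cuda.synchronize()`) before reading the clock, otherwise only the asynchronous kernel launch is timed.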

Code & Data

The processed trajectory datasets, maneuver annotations, and model code are publicly available:

GitHub: Turn-Aware-LSTM_SUPP

Note: Raw video data cannot be publicly released due to privacy and data-sharing restrictions.

Acknowledgments

This work was funded by the Ericsson Global Artificial Intelligence Accelerator (GAIA) AI Hub Canada in Montréal through the Mitacs Accelerate Program.

Citation

@article{zhou2026turnaware,
  title={Turn-Aware LSTM Model for Vehicle Trajectory Forecasting},
  author={Zhou, Xingnan and Alecsandru, Ciprian and Bashbaghi, Saman and Jeong, Yunseo and Chen, Ye},
  journal={Advances in Transportation Studies},
  volume={LXVIII},
  pages={381--396},
  year={2026}
}