End-to-end (E2E) autonomous driving must maintain global route consistency while preserving local precision, yet existing E2E approaches rarely achieve both simultaneously. We therefore propose the multimodal coarse-to-local transformer (MC2L-Transformer), a hierarchical transformer architecture. Multimodal inputs are fused into a shared embedding from which global waypoints are produced, and a local refinement stage then captures fine-grained interactions around the ego vehicle. A temporal encoder summarizes recent context, while embeddings of the navigation target and current velocity guide route- and speed-aware decoding. Evaluations in the CARLA simulator show lower collision and off-route rates, even under sudden events. These results indicate that combining a coarse-to-local hierarchical transformer with lightweight temporal context is a practical step toward reliable E2E autonomous driving.
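The data flow described above (fuse modalities, summarize recent context, condition on target and velocity, decode coarse waypoints, then refine locally) can be sketched minimally. This is an illustrative NumPy sketch only: all weight matrices are random stand-ins for learned parameters, the names (`W_fuse`, `fuse`, etc.) and dimensions are hypothetical, and mean pooling stands in for the temporal encoder; the actual model uses transformer blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the paper
D = 64   # shared embedding size
T = 4    # recent frames summarized by the temporal context
K = 10   # number of predicted waypoints

def fuse(img_feat, lidar_feat, W):
    # Project concatenated modality features into a shared embedding
    return np.tanh(np.concatenate([img_feat, lidar_feat]) @ W)

# Random stand-ins for learned weights and per-frame modality features
W_fuse = rng.normal(size=(2 * D, D))
frames = [fuse(rng.normal(size=D), rng.normal(size=D), W_fuse)
          for _ in range(T)]

# Temporal context: mean pooling as a stand-in for the temporal encoder
context = np.mean(frames, axis=0)

# Navigation target (x, y) and current speed, embedded into the context
target, speed = np.array([30.0, -5.0]), 8.0
W_goal = rng.normal(size=(3, D))
cond = context + np.concatenate([target, [speed]]) @ W_goal

# Coarse stage: global waypoints decoded from the conditioned embedding
W_coarse = rng.normal(size=(D, K * 2))
coarse = (cond @ W_coarse).reshape(K, 2)

# Local refinement: small residual corrections near the ego vehicle
W_refine = rng.normal(size=(D, K * 2))
waypoints = coarse + 0.1 * np.tanh(cond @ W_refine).reshape(K, 2)

print(waypoints.shape)  # (10, 2): K waypoints, each an (x, y) offset
```

The key design point illustrated here is that refinement is residual: the coarse stage fixes the global route shape, and the local stage only adds bounded corrections, so route consistency and local precision are handled by separate stages.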