Position Overview:
In this role, you will be responsible for the design and development of end-to-end autonomous driving frameworks. You will integrate mainstream perception, prediction, and planning technologies into a unified modeling system that supports autonomous driving tasks across both urban and highway scenarios.
Responsibilities:
- Lead the design and implementation of both one-stage (sensor-to-control) and two-stage (perception and planning decoupled) end-to-end models. Define model architectures, training pipelines, and optimization strategies that yield robust planning outputs.
- Drive the deployment and performance tuning of models on embedded systems, including inference acceleration, post-processing, efficient integration, system stability checks, and on-road testing.
- Build pure vision-based end-to-end modeling capabilities, integrating multiple tasks such as BEV perception, static/dynamic occupancy inference, and trajectory prediction.
- Deliver production-ready models tailored for elevated highways and urban roads, ensuring scalable deployment and continuous advancement toward full autonomy.
Qualifications/Requirements:
- Master's or Ph.D. degree in Computer Science, Artificial Intelligence, Robotics, or a related field.
- Solid understanding of and experience with autonomous driving systems, especially end-to-end deep learning modeling.
- Hands-on experience in developing and deploying planning or control modules using deep learning techniques.
- Strong coding skills in C/C++ and Python, with experience in real-time inference deployment and performance optimization.
- Familiarity with BEV-based modeling, occupancy prediction, and multi-task learning frameworks.
- Experience with system-level integration and testing on real vehicles is a plus.
- Strong problem-solving skills and ability to adapt quickly to complex real-world driving scenarios.
- Excellent communication and teamwork abilities, with a proactive attitude toward innovation and delivery.