
Responsibilities:
Assist in building and refining perception algorithms for 3D object detection, tracking, segmentation, and scene understanding
Analyze sensor data (e.g., images, video, lidar, radar) to improve model accuracy and robustness
Support data preprocessing, labeling, augmentation, and validation workflows
Conduct experiments to benchmark perception models and propose improvements
Collaborate with cross-functional teams (Behaviors, Actions, Platforms teams) to integrate perception outputs into downstream decision-making systems
Document findings, tools, and results in clear and reproducible formats
Required Skills:
A Master's or PhD degree in Computer Science, Machine Learning, Robotics, or a related field
Proficiency in Python and/or C++
Coursework or project experience in computer vision, machine learning, and data processing
Familiarity with popular deep learning frameworks such as PyTorch or TensorFlow
Experience with image/video processing tools and libraries
Preferred Skills:
Experience with 3D perception, point cloud processing, or multi-sensor fusion
Exposure to robotics, autonomous systems, or real-time perception pipelines
Knowledge of evaluation frameworks and datasets
Project or internship experience with VLMs, VLAs, and other foundation models for robotics and self-driving
Job ID: 146961995