About the Role
Join us to build AI perception systems for autonomous outdoor robots operating in complex human environments. You will help develop perception capabilities that enable robots to understand their surroundings and operate safely, robustly, and intelligently in dynamic real-world settings.
This role sits at the intersection of robotics, computer vision, 3D perception, and edge AI deployment. You will work closely with robotics, software, and mechatronics engineers to translate advanced perception methods into reliable, production-ready robotic systems.
Key Responsibilities
- Develop and optimize real-time perception pipelines for autonomous robots, including object detection, tracking, segmentation, and scene understanding
- Build and deploy AI perception modules on embedded and edge computing platforms with a strong focus on real-time performance and robustness
- Integrate perception outputs into ROS 2-based robotic systems for navigation, safety, and mission-level behaviors
- Design and improve multi-sensor perception workflows involving cameras, LiDAR, and other onboard sensors
- Support perception model development across data collection, training, evaluation, deployment, and field validation
- Work on perception-related system tasks such as sensor calibration, synchronization, projection, and 3D geometric processing
- Validate and benchmark algorithms in simulation and real-world robot deployments
- Contribute to scalable perception software infrastructure, tools, and engineering practices
Requirements
Education
- Bachelor's degree in Computer Science, Robotics, Computer Engineering, Electrical/Electronics Engineering, or a related field
- A Master's or PhD is advantageous
Experience
- Experience developing and deploying perception modules for autonomous robots or intelligent robotic systems
- Experience taking perception solutions from prototype to real-world deployment
- Experience with multi-sensor systems such as cameras, LiDAR, IMU, or GPS
- Experience deploying AI or perception workloads on embedded or edge compute platforms
- Experience in robotics, autonomous systems, computer vision, or related applied AI domains
Technical Skills
- Strong knowledge of robotic perception algorithms such as object detection, tracking, segmentation, 3D perception, or sensor fusion
- Strong hands-on experience with deep learning and deployment tools such as PyTorch, ONNX, TensorRT, or similar
- Strong C++ and Python programming skills
- Experience with ROS 2, Nav2, and Linux-based development
- Familiarity with camera calibration, geometric vision, point cloud processing, and coordinate transforms
- Strong debugging, benchmarking, optimization, and system integration skills
- Good understanding of how perception interfaces with planning, control, navigation, or safety systems in autonomous robots
Nice to Have
- Experience with NVIDIA Jetson/AGX, CUDA, or GPU-accelerated perception pipelines
- Experience with simulation tools such as Gazebo or NVIDIA Isaac Sim
- Experience with multi-camera systems, visual localization, or 3D scene understanding
- Experience with MLOps, model lifecycle tooling, or scalable data/evaluation pipelines
- Experience with outdoor robotics, autonomous mobile robots, or field robotics
Why Join Us
- Build perception systems for autonomous robots deployed in real-world outdoor environments
- Work on applied AI and robotics problems with direct product impact
- Help shape the perception capability of a growing robotics team
- Collaborate across software, robotics, and hardware to bring advanced autonomy into production