At Serve Robotics, we’re reimagining how things move in cities. Our personable sidewalk robot embodies our vision for the future. It’s designed to take deliveries off congested streets, make delivery available to more people, and benefit local businesses.
The Serve fleet has been making commercial deliveries in Los Angeles, delighting merchants, customers, and pedestrians along the way. We’re looking for talented individuals who will help grow robotic delivery from surprising novelty to efficient ubiquity.
We are tech industry veterans in software, hardware, and design who are pooling our skills to build the future we want to live in. We solve real-world problems by leveraging robotics, machine learning, and computer vision, among other disciplines, with a mindful eye toward the end-to-end user experience. Our team is agile, diverse, and driven. We believe the best way to solve complicated, dynamic problems is collaboratively and respectfully.
At Serve Robotics, we’re committed to developing reliable, cutting-edge sidewalk autonomy software. We’re looking for a driven engineer to join our prediction pipeline team, where you’ll apply advanced machine learning techniques to refine agent prediction models using point-cloud (lidar/radar) and camera data. You’ll also lead efforts to incorporate a geometry-based module for redundancy, ensuring robust and dependable performance across the prediction pipeline.
Develop and maintain prediction pipelines that integrate both lidar and camera inputs and feed into the planner.
Research and implement state-of-the-art algorithms for 3D perception, tracking, and prediction, staying current with the latest prediction research and how it connects to end-to-end models.
Run ablation studies and parameter sweeps in simulation to thoroughly validate the safety and trust implications of accurately predicting the semantics of high-risk objects.
Curate diverse, real-world datasets to train and evaluate new models for the tracking and prediction pipelines.
Analyze performance metrics and optimize models for reliability, efficiency, and deployability.
Collaborate with Behavior Planning to define SLAs and integrate prediction outputs; partner cross-functionally to evaluate new sensors' impact on prediction performance.
Master’s degree in Computer Science, Robotics, Electrical Engineering, or a related field, plus 3+ years of industry experience building perception and prediction modules for robotics or AV stacks.
Hands-on experience with machine learning frameworks (TensorFlow, PyTorch, etc.).
Exposure to state-of-the-art research or publications in perception and prediction approaches.
Strong background in sensor fusion (especially lidar and camera) and computer vision related to 3D perception and spatio-temporal tracking.
Proven track record of building ML pipelines and deploying models in production.
Proficiency in Python and C++, with a high bar for writing production-quality code.
Experience with large-scale real-world dataset curation and management.
In-depth knowledge of simulation environments or real-time testing in robotics.
Experience working with foundation models, including end-to-end architectures, Vision-Language Models (VLMs), and Vision-Language-Action (VLA) models.
Contributions to open-source projects.
Experience with GCP or AWS, Kubernetes, and Docker.