Learning a semantic and geometric understanding of the world from visual data is the core of our autonomy system. We're pushing the boundaries of what's possible with real-time deep networks to accelerate progress in intelligent mobile robots.
If you're interested in leveraging massive amounts of structured video data to solve open problems in object detection and tracking, motion prediction, depth estimation, and total scene understanding, we would love to hear from you.
Responsibilities:
- Design and implement real-time tracking, classification, and detection models to equip drones with total scene understanding
- Predict trajectories for objects based on semantic understanding of the scene
- Implement deep-learning-based models for robustly estimating depth and optical flow
- Refine and optimize models for low-latency inference on embedded hardware
Qualifications:
- Real experience building, training, and deploying deep neural networks
- Ability to contextualize and keep up-to-date with recent literature
- Solid software engineering foundation and a commitment to writing clean, well-architected code
- High proficiency with Python or C++, preferably both
- Experience with the state of the art in object detection, multi-object tracking, and instance segmentation
- Experience working on high-efficiency deep networks for real-time embedded systems
- Strong fundamental knowledge of linear algebra and projective geometry
- Data curation, annotation, and management experience
- Experience with TensorFlow and TensorFlow Lite