Sr. Software Engineer, Data Platform in Sunnyvale, CA

$100K - $150K (Ladders Estimates)

Blue River Technology

Sunnyvale, CA 94086

Industry: Enterprise Technology


5 - 7 years

Posted 56 days ago

About Blue River

Blue River Technology serves the agricultural industry by designing and building advanced farm machines that use computer vision and machine learning to enable farmers to understand and manage every plant. These machines help farmers improve profitability, protect the environment by reducing pesticide use, and capture valuable plant-by-plant data. Blue River is a pioneer in the agricultural robotics space and has developed the See & Spray precision sprayer, which applies pesticide only where needed and can reduce pesticide use by 90%.

John Deere & Company, with over 180 years of experience in designing, manufacturing, and distributing innovative products to farmers, acquired Blue River Technology in the fall of 2017 as an independently-run subsidiary. In partnership with John Deere, Blue River has expanded rapidly and together both companies see many opportunities to apply advanced computer vision, machine learning, and robotics technologies to other areas in agriculture beyond spraying.

Blue River is based in Sunnyvale, CA and has over 100 team members with diverse experience including: computer vision, machine learning, systems software, autonomous vehicles, and precision agriculture. Our working environment is fast paced and highly collaborative, and employees are excited to use their talents to improve food production and protect the environment.

Position Summary

We're seeking a talented Sr. Software Engineer specializing in big data applications to join our team. In this role, you will collaborate closely with roboticists, researchers, and software engineers to define the data infrastructure for computer vision, robotics system introspection, and machine learning applications. Data is critical to the success of our product, and in this role you'll have the opportunity to build our data pipeline and ensure data is distributed across our entire organization.

A well-qualified candidate for this highly visible role will thrive in the journey from prototype to product and understand that scale and reproducibility dictate execution speed. On a daily basis you'll keep bandwidth, indices, and compute/memory trade-offs top of mind. You'll bring expertise to our team with the knowledge that query optimization alone will never solve an ill-specified metadata problem. This role also requires a continuous-improvement mindset coupled with intellectual curiosity.

Position Description

As our Data Platform SME, you'll collaborate with a team of engineers to design, prototype, and build the data platform for Blue River's technologies. You will own various parts of our data storage, transport, and processing components, from strategy through implementation. You will use a combination of the latest data warehouse technologies available inside and outside the AWS ecosystem. The tools you develop will enable researchers and roboticists to monitor, analyze, and introspect our robotics and perception systems. You will be our domain expert in ensuring the constant flow of data from robots to our engineers and research scientists.

Role Responsibilities:

  • Own Blue River's core data pipeline for our growing array of sensor data, ranging from speedometers to LiDAR
  • Design and architect data models and schema to satisfy the growing and evolving needs across all of Blue River
  • Develop tools in support of self-service data operations: ETL, exploration, inference
  • Work side by side with roboticists, software engineers, and research scientists to transform prototypes into efficient, reliable products

Required Professional Skills & Experience:

  • Bachelor's degree in Computer Science or a quantitative discipline
  • 5+ years building production-level software; experience with high-reliability systems preferred
  • Proficiency in Python and Scala
  • Proficiency with ETL-related technologies: Spark/MapReduce, Arrow, Airflow, and MLflow
  • Experience with Hadoop-related technologies such as HDFS, Hive, and Pig
  • Experience with the AWS ecosystem: e.g., EMR, Redshift, DynamoDB
  • Experience with configuration/deployment tools such as Ansible and Terraform
  • Experience with advanced query optimization
  • Strong verbal and written communicator

Preferred Skills & Experience:

  • Strong under-the-hood understanding of databases such as MongoDB and PostgreSQL
  • Experience working with Jupyter Notebook and its associated ecosystem.
  • Experience working with nD image processing systems
  • Experience productizing ML frameworks: e.g., TensorFlow, PyTorch
  • Master's or PhD in a quantitative discipline

Valid Through: 2019-10-21