About The Team
Our teams build an open platform that moves data across hundreds of services, transforms it into a usable form, and loads it into systems that power insightful decisions. The platform enables all data scientists and engineers at Uber to produce data daily, and it scales to a large number of data workflows running in a multi-tenant environment with strong security and isolation. We also build pre-built workflows and ETL frameworks, and we provide developer tools and ecosystems that let advanced data engineers build, test, deploy, and monitor their workflows.
Learn more from our engineering blog: Managing Uber's Data Workflows at Scale
About The Role/What You'll Do
We are looking for a strong engineer to join the Data Workflow Platform team. You will develop the platform and make it more reliable and scalable, tackle the challenges of large-scale orchestration, scheduling, and distribution services, implement in-demand workflow frameworks and solutions, and build developer tools and ecosystems for data scientists and engineers.
You Will Have a Chance To
• Design and implement platform services, frameworks and ecosystems
• Build a scalable, reliable, operable and performant big data workflow platform for Uber's data scientists/engineers, AI/ML engineers, and operations folks.
• Drive efficiency and reliability improvements through design and automation: performance, scaling, observability, and monitoring
• Support your fellow teammates, review the team's technical design, code, and documentation
Basic Qualifications
• 2+ years of backend/data software engineering experience, including familiarity with design, planning, implementation, maintenance, and documentation
• Strong problem-solving and coding skills in one or more object-oriented programming languages (e.g. Python, Go, Java, C++) and the eagerness to learn more
• Experience with large-scale distributed storage and database systems (SQL or NoSQL, e.g. MySQL, Cassandra, Hadoop)
• Bachelor's Degree in Computer Science or related field
Preferred Qualifications
• Deep understanding of big data architecture and hands-on experience building pipelines/frameworks/services (e.g. Hadoop, Hive, HDFS, Kafka, Presto, etc.)
• Demonstrated experience working collaboratively in cross-functional teams
• Passion for learning new technologies/domains and for challenging the status quo
• You feel ownership over everything you touch. You pride yourself on efficient monitoring, strong documentation, and proper test coverage, and you call something "done" only when these are in place
• You believe that you can achieve more on a team - that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement and you help others by returning the favor.
Valid through: 6/7/2021
$100K — $150K