The opportunity:
As an engineer on the Data Engineering Infrastructure team, you will design, build, scale, and evolve our data engineering platform, services, and tooling. Your work will have a critical impact on all areas of Datadog's business: powering core data pipelines, supporting detailed internal analytics, calculating customer usage, securing our platform, and much more.
You will:
- Build a big data platform-as-a-service (PaaS) for Spark, Luigi, Airflow, Kubernetes, and other open-source technologies on AWS and GCP
- Engage with other teams at Datadog about how they use the platform to ensure we’re always building the right thing
- Use Datadog products to provide observability for our engineers so they can easily debug, scale, and tune their Spark jobs and data pipelines
- Join a tightly knit team solving hard problems the right way
Requirements:
- You have a BS/MS/PhD in a scientific field or equivalent experience
- You have experience contributing to a software engineering team
- You have experience with a mix of backend programming, operations, and hands-on data work
- You value code simplicity and performance
- You want to work in a fast-paced, high-growth startup environment that respects its engineers and customers
Bonus points:
- You have production experience operating state-of-the-art data processing frameworks, technologies, and platforms
- You have built and operated data pipelines for real customers in production systems
- You’ve built applications that run on AWS or GCP