$80K – $100K *
1. Build and scale critical data pipelines for use-cases like performance monitoring, cost analysis, capacity planning, and other analytical scenarios.
2. Design and develop visualizations, monitors, and alerting systems to catch system issues and data anomalies, and build automation to handle those issues intelligently.
3. Work with performance engineers and product engineering teams to analyze data from billions of mobile clients and hundreds of thousands of servers and systems, delivering insightful, actionable information on an ongoing basis.
1. 3+ years of experience with data warehouse and open-source big data technologies
2. Experience building infrastructure to support real-time or offline data pipelines processing petabytes of data
3. Experience with Spark, Hive, Hadoop, SQL, Kafka, Parquet, HDFS, or HBase
4. Proficiency in multiple systems languages (Scala, …)
Valid through: 7/8/2020