Sr. Hadoop DevOps Engineer

Apple

Santa Clara, CA

Industry: Business Services


Experience: 5 – 7 years



  • Job Number: 112934644
  • Santa Clara Valley, California, United States
  • Posted: 30-Aug-2017

Job Summary

At Apple, we work every day to create products that enrich people’s lives. Our Advertising Platforms group makes it possible for people around the world to easily access informative and imaginative content on their devices while helping publishers and developers promote and monetize their work. Our technology and services power advertising in Apple News and Search Ads in the App Store. Our platforms are highly performant, deployed at scale, and set new standards for enabling effective advertising while protecting user privacy. We are looking to hire outstanding individuals to join our team of Site Reliability Engineers. Candidates should be able to troubleshoot various types of network, system, and application-related issues. Our team provides infrastructure and support by building and maintaining Hadoop clusters and the application stack. We work in a high-intensity environment with our end users and have very aggressive project delivery timelines. Strong communication skills with project managers and engineers are an absolute must.

Key Qualifications

  • Sound knowledge of UNIX and TCP/IP network fundamentals
  • Expertise with Hadoop and its ecosystem: Hive, Pig, Spark, HDFS, HBase, Oozie, Sqoop, Flume, ZooKeeper, etc.
  • 5+ years managing clustered services, distributed systems, and production data stores
  • 3+ years’ experience administering and operating Hadoop clusters
  • Cloudera CDH4/CDH5 cluster management and capacity planning experience
  • Ability to code well in at least one language (Shell, Ruby, Python, Java, Perl, Go)
  • Ability to rapidly learn new languages, frameworks, and APIs
  • Sharp and tenacious troubleshooting skills
  • Experience scripting for automation and config management (Chef, Puppet)
  • Multi-datacenter deployment experience a plus


Description

  • Design and implement scalable data platforms for our customer-facing services
  • Deploy and scale Hadoop infrastructure
  • Hadoop / HDFS maintenance and operations
  • Data cluster monitoring and troubleshooting
  • Hadoop capacity planning
  • OS integration and application installation
  • Create runbooks for offshore teams
  • Partner with program management, network engineering, site reliability operations, and other related groups
  • Willingness to participate in a 24x7 on-call rotation for escalations


Education

Bachelor's degree in Computer Science or equivalent is required. Master's degree preferred.