Hadoop Operations Engineer


Sunnyvale, CA

Industry: IT Consulting/Services


Experience: 8 - 10 years


Requirements:

Hadoop Operations Engineer

(ticket support and solutions)

3-4 junior engineers (Level 1: understand how Hadoop works under the hood, with general exposure and at least 1 year of exposure to Spark)

2-3 senior engineers (Level 2: hands-on operations and development experience, roughly 50/50 ops and dev)

Job Summary:

We are seeking a solid Hadoop engineer focused on operations to administer and scale our multi-petabyte Hadoop clusters and the related services that run alongside them. This role focuses primarily on provisioning, ongoing capacity planning, monitoring, and management of the Hadoop platform and the applications/middleware that run on it.

Key Qualifications:

Responsible for maintaining and scaling production Hadoop, Kafka, and Spark clusters. 

Deep understanding of the Hadoop/Spark stack and hands-on experience resolving issues with Hadoop/Spark jobs.

Responsible for the implementation and ongoing administration of Hadoop infrastructure, including monitoring, tuning, and troubleshooting.

Provide hardware architectural guidance, plan and estimate cluster capacity, and create roadmaps for the Hadoop cluster deployment.

Able to support the shift plan, with some odd-hours coverage on a weekly basis.

Work with other operational teams to triage production issues when they occur.

Conduct ongoing maintenance across our large-scale deployments around the world.

Write automation code for managing large Big Data clusters 

Work with development and QA teams to design ingestion pipelines and integration APIs, and to provide Hadoop ecosystem services.

Participate in the occasional on-call rotation supporting the infrastructure.

Hands-on troubleshooting of incidents: formulate theories, test hypotheses, and narrow down possibilities to find the root cause.

Job Description:

• Hands-on experience managing production clusters (Hadoop, Kafka, Spark, and more).

• Strong development/automation skills. Must be very comfortable reading and writing Python and Java code.

• Senior resources: 8+ years overall, with at least 5 years of Hadoop/Spark debugging experience in production on medium to large clusters.

• Junior resources: 4+ years overall, with at least 2 years of Hadoop/Spark debugging experience.

• Tools-first mindset. You build tools for yourself and others to increase efficiency and to make hard or repetitive tasks easy and quick (a minimal illustrative sketch follows this list).

• Experience with configuration management and automation.

• Organized; focused on building, improving, resolving, and delivering.

• A good communicator within and across teams.
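
Below is a minimal, purely illustrative Python sketch (not part of the original posting) of the kind of automation tooling this role involves: it shells out to the standard hdfs dfsadmin -report command and flags dead DataNodes. It assumes the hdfs CLI and client configuration are available on the host; the exact report layout varies by Hadoop version.

#!/usr/bin/env python3
"""Illustrative sketch only: flag dead DataNodes by parsing the output of
`hdfs dfsadmin -report`. Assumes the `hdfs` CLI and client configs are on
the host; the report layout varies somewhat across Hadoop versions."""

import subprocess
import sys


def dead_datanode_count(report: str) -> int:
    # The report includes a "Dead datanodes (N):" heading; if the section
    # is absent (no dead nodes on some versions), treat the count as zero.
    for line in report.splitlines():
        if line.startswith("Dead datanodes"):
            digits = "".join(ch for ch in line if ch.isdigit())
            return int(digits) if digits else 0
    return 0


def main() -> int:
    try:
        result = subprocess.run(
            ["hdfs", "dfsadmin", "-report"],
            capture_output=True, text=True, check=True, timeout=120,
        )
    except (OSError, subprocess.SubprocessError) as exc:
        print(f"Could not query HDFS: {exc}", file=sys.stderr)
        return 2

    dead = dead_datanode_count(result.stdout)
    if dead:
        print(f"WARNING: {dead} dead DataNode(s) reported")
        return 1
    print("HDFS report: no dead DataNodes")
    return 0


if __name__ == "__main__":
    sys.exit(main())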

Ticket volume: 20-30+ tickets a week, expected to rise to 50-70 tickets due to growth and usage.

Shifts: 7am-7pm, with overlap and rotation.

Compensation: $80K - $140K