Within the Big Data environment, you have a deep understanding of the complex nature of Hadoop. You feel as comfortable building simple data flows as you do digging into Hadoop source code to understand the subtle and obscure problems that can arise in this environment. You work with people, technical and non-technical alike, to understand their Big Data needs and to help them understand what they're really trying to achieve. You're a company resource, providing best practices, guidelines, and feedback on internal tools working with Hadoop. You have your finger on the pulse of the cluster, understanding when it's not working right and diving in to diagnose the problem before it becomes systemic.
You have a cool head under pressure. When a technical fire occurs, you understand that putting it out should always avoid collateral damage. When you cause a fire (as everyone inevitably does), you take responsibility for it and work with the team to figure out the right way to put it out. You believe blaming is a waste of time; when something goes wrong, you figure out why it happened and how to prevent it from happening again. Better yet, you look for what went right in the first place and build on it.
- The design, care, and feeding of our multi-petabyte Big Data environments built upon technologies in the Hadoop Ecosystem
- Day-to-day troubleshooting of problems and performance issues in our clusters
- Investigate and characterize non-trivial performance issues in various environments
- Work with Systems and Network Engineers to evaluate new and different types of hardware to improve performance or capacity
- Deep understanding of system architecture and the ability to validate system configurations from the hardware layer to the Hadoop application layer
- Work closely with developers, engineering, and operations teams on key deliverables; evaluate their Hadoop use cases and provide feedback and design guidance
- Work simultaneously on multiple projects competing for your time and understand how to prioritize accordingly
- Be part of the on-call rotation
- Willingness to mentor and teach people around you
As a member of this team, you seek out feedback on your designs and ideas and provide the same to others.
You constantly ask 'What am I missing?' and 'How will this NOT work?' You don't shy away from what you don't know; you readily admit that you don't know everything, and use every resource available to learn what you need to know.
- Bachelor's degree in Computer Science or a closely related computer technical field and 5+ years of Hadoop Administration experience
- Intimate and extensive knowledge of Linux administration and engineering
- We use CentOS/Red Hat Enterprise Linux (RHEL); you should too
- Experience running workloads on bare metal, in private or public clouds, or in hybrid environments, using platforms such as AWS, OpenStack, and GCP
- Experience designing, implementing, and administering large (200–1,000 node), highly available Hadoop clusters secured with Kerberos, preferably using the Cloudera Hadoop distribution
- In-depth knowledge of capacity planning, management, and troubleshooting for HDFS, YARN/MapReduce, Hive, Presto, Spark and HBase
- Understanding of system capacity and bottlenecks, and the basics of memory, CPU, OS, storage, and networking
- An advanced background with common automation tools such as Puppet
- An advanced background with a higher-level scripting language, such as Python or Ruby
- Experience with monitoring tools used in the Hadoop ecosystem, such as Nagios and Cloudera Manager
- Experience with modern data pipelines, data streaming, and real-time analytics using tools such as Apache Kafka, Spark Streaming, Elasticsearch, or similar
- Experience with configuration management and orchestration tools (e.g. Chef, Puppet, Ansible, Bosh, Terraform) is a plus
- Experience with containerization and related technologies (e.g. Docker, Kubernetes) is a plus
Indeed provides a variety of benefits that help us focus on our mission of helping people get jobs.