Job ID: 11331
Buchanan Technologies is currently seeking an experienced Hadoop Administrator to join an emerging data technologies team in a large enterprise environment, for a full-time, direct hire career opportunity in the Dallas/Ft. Worth, TX area!
Deployment of big data platform technologies includes identifying and enabling opportunities to automate fast integration and deployment of applications. Operation of big data platform technologies includes monitoring and troubleshooting incidents, enabling security policies, and managing data storage and compute resources. Responsibilities also include coding, testing, and documenting new or modified automation for deployment and monitoring. This role works with team counterparts to architect an end-to-end framework built on a group of core data technologies. Other aspects of the role include developing standards and processes for big data platforms in support of projects and initiatives.
- Manage Hadoop and Spark cluster environments, on bare-metal and container infrastructure, including service allocation and configuration for the cluster, capacity planning, performance tuning, and ongoing monitoring.
- Work with data engineering groups to support the deployment of Hadoop and Spark jobs.
- Work with IT Operations and Information Security Operations on monitoring and troubleshooting incidents to maintain service levels.
- Work with Information Security Vulnerability Management and vendors to remediate known, impactful vulnerabilities.
- Contribute to the evolving distributed systems architecture to meet changing requirements for scaling, reliability, performance, manageability, and cost.
- Report utilization and performance metrics to user communities.
- Contribute to planning and implementation of new or upgraded hardware and software releases.
- Monitor the Linux, Hadoop, and Spark communities and vendors, and report important defects, feature changes, and enhancements to the team.
- Research and recommend innovative and, where possible, automated approaches to administration tasks. Identify approaches that improve resource utilization, provide economies of scale, and simplify support.
- Excellent knowledge of Linux, AIX, or other Unix flavors
- Deep understanding of Hadoop and Spark cluster security, network connectivity, and I/O throughput, along with other factors that affect distributed system performance
- Strong working knowledge of disaster recovery, incident management, and security best practices
- Working knowledge of containers (e.g., Docker) and major orchestrators (e.g., Mesos, Kubernetes, Docker Datacenter)
- Working knowledge of automation tools (e.g., Puppet, Chef, Ansible)
- Working knowledge of software-defined networking
- Working knowledge of parcel-based upgrades with Hadoop (i.e., Cloudera)
- Working knowledge of hardening Hadoop with Kerberos, TLS, and HDFS encryption
- Ability to quickly perform critical analysis and use creative approaches for solving complex problems
- Excellent written and verbal communication skills
- 5+ years hands-on experience with supporting Linux production environments
- 3+ years hands-on experience with supporting Hadoop and/or Spark ecosystem technologies in production
- 3+ years hands-on experience scripting in Bash, Perl, Ruby, or Python
- 2+ years hands-on development/administration experience with HBase, Solr, Kafka, and Hue
- Experience with networking infrastructure, including VLANs and firewalls
- Proven track record with Red Hat Enterprise Linux administration
- Proven track record with Cloudera Distribution of Hadoop administration
- Proven track record with troubleshooting YARN jobs
- Proven track record with HBase administration, including tuning
- Proven track record with Apache Spark development and/or administration is preferred
- Experience with BlueData administration preferred