Cyber Security Analyst

Cybercore Technologies • Baltimore, MD

Industry: Business Services • 8 - 10 years

Posted 68 days ago

This job is no longer available.

Description:

  • Provide support for the installation, configuration, tuning, and management of large Hadoop (Apache Accumulo) clusters that support data intensive computing.
  • Includes installation and maintenance of the tools required for the cloud environment, as well as monitoring and sustainment of that environment, to include the identification and resolution of related issues, problems, and trouble tickets
  • Configure, troubleshoot, and manage large Hadoop clusters ranging from 200 to 2,400 nodes
  • Identify and run software tools to manage and monitor large compute clusters efficiently (a minimal health-check sketch follows this list)
  • Identify and resolve hardware and software problems related to the Hadoop / Apache Accumulo cluster environment
  • Install, configure, and maintain tools needed for Hadoop / Apache Accumulo cluster environment

Required Experience:

  • Within the last ten (10) years, a minimum of seven (7) years' experience managing and monitoring large Hadoop clusters (>200 nodes)
  • Within the last ten (10) years, a minimum of five (5) years' experience writing software automation scripts in the following scripting languages: Perl, Python, and Ruby (see the sketch after this list)
  • Within the last ten (10) years, a minimum of five (5) years' experience implementing and providing technical support for multi-platform, multi-system networks, including those composed of Cisco and UNIX- or Linux-based hardware platforms, to include the diagnosis and resolution of issues
  • Within the last ten (10) years, a minimum of five (5) years' experience implementing network solutions for complex, high-performance systems composed of UNIX- or Linux-based hardware platforms
  • A minimum of two (2) years' experience utilizing open-source NoSQL products that support highly distributed, massively parallel computation, such as HBase, Apache Accumulo, and/or Bigtable
  • A minimum of three (3) years' experience with the Hadoop Distributed File System (HDFS)

Desired Experience:

  • Demonstrated experience provisioning and sustaining network infrastructure, to include the development, operation, and management of networks operating in a secure PKI, IPsec, or VPN-enabled environment
  • Demonstrated experience with Puppet and/or Software Management Console (SMC) to provision software loads to compute clusters
  • Demonstrated experience with LDAP protocol configuration and management
  • Demonstrated experience with cluster performance management (e.g., Nagios; see the plugin sketch after this list)
  • Demonstrated experience with peer-to-peer distributed storage networks, peer-to-peer routing, and application messaging frameworks
  • Experience with Docker
  • Experience with Kubernetes