Senior Software Engineer, Cloud Engineering

ExtraHop Networks • Seattle, WA

Industry: Technology

Big Data, the cloud, elastic computing, SaaS, AWS, BYOD, HPC, SDN—we do it all. The solutions we build at ExtraHop are transforming the IT industry. From retail websites and point-of-sale systems to financial services and the healthcare industry, ExtraHop helps modern businesses handle the growing demands on their applications and infrastructures.

The cloud engineering team is responsible for ExtraHop’s hosted analytics and anomaly detection services. We’re growing fast and looking for people who love to learn and build things: products, companies, markets, and their careers! Do you want to grow with one of the best teams in the industry? Read on.

Responsibilities

  • Work on a large-scale, in-production, secure cloud service platform, using state-of-the-art container technologies and cutting-edge microservices architecture
  • Take advantage of rich analytics exposed by the ExtraHop stream processing platform
  • Collaborate with a cross-functional team to tackle complex problems in scalability, end-to-end security, continuous delivery, automation, logging / monitoring, and incident response
  • Drive continuous integration and deployment (CI/CD) practices using developer-driven testing and highly automated test environments
  • Help choose the best technologies to solve problems as they arise
  • Work closely with product managers, data scientists, and operations teams to build new cloud analytic services from early designs to production code

Requirements

  • Solid knowledge of Go, Python, or an equivalent programming language
  • Self-starter with a strong problem-solving track record and the ability to grow and learn
  • Excellent communicator and collaborator who can iterate quickly

Desired Skills

  • Experience with containers and related technologies (Kubernetes / Docker / HashiCorp Packer, Vault, and Terraform / etcd)
  • Experience building and scaling distributed, highly available systems
  • Experience with cloud services on AWS or Azure (RDS / S3 / SQS / EC2 / EMR)
  • Experience with data processing pipelines for data science (Spark / Presto / SQL / Hadoop / HBase)