Senior Machine Learning Engineer

Anaplan • San Francisco, CA

Industry: Professional, Scientific & Technical Services

Experience: 8 - 10 years

Originally founded in York, UK, and now headquartered in San Francisco, Anaplan has offices all over the world. In fact, we may be the most successful company you’ve never heard of. We are on a mission to revolutionize how companies analyze and plan their businesses. And it’s working: we are already trusted by hundreds of Fortune 2000 companies, and we’ve only just begun.

 

The Team

We are looking for Machine Learning Evangelists to be part of our rapidly expanding global engineering effort.

 

The Role

As a Senior Machine Learning Engineer, you will help disrupt the future of business planning through Machine Learning. This is a big opportunity to help shape the full pipeline by building a scalable backend solution for automating data processing. You will analyze massive amounts of data across Supply Chain, Sales, Marketing, Finance, and IT & Ops, and develop predictive models that help our enterprise customers make better business decisions. Unlike many startups, we have many customers and a wealth of real business data for tackling real problems.

 

Our preferred tech stack (Skills we are looking for)

  • Python, Java, and R.
  • Hands-on experience with Spark, TensorFlow, Hadoop, and Cassandra is a plus.
  • SQL databases.
  • Kafka or similar distributed messaging services.
  • Spark or similar distributed data processing service.
  • Docker and Mesos.
  • Angular, React, Redux, Node.js or equivalent.
  • Chef, Puppet, Ansible or equivalent.

 

YOU HAVE …

  • 7+ years of software engineering experience.
  • 5+ years Machine Learning experience.
  • 2+ years Deep Learning experience.
  • 6+ years of real-world experience with production systems.
  • 5+ years with JVM-based languages, including asynchronous programming.
  • Worked/developed in a Linux or Unix environment.
  • Worked in AWS (particularly EMR).
  • Mac/Linux: working knowledge of the file system and simple bash scripting.
  • Hadoop: understanding of and familiarity with HDFS and MapReduce.
  • Spark: how to work with RDDs and DataFrames (with an emphasis on DataFrames) to query and manipulate data, or similar distributed data processing; a brief sketch follows this list.
  • Real hands-on experience developing applications or scripts for a Hadoop environment (Cloudera, Hortonworks, MapR, Apache Hadoop); by that, we mean having written significant code for at least one of these distributions.
  • Source control management: Git.
  • Stream processing technologies and concurrency frameworks.
  • Strong understanding of the nature of distributed development and its pitfalls.
  • BS in Computer Science, Engineering, Technology, or a related field. A Master's or Ph.D. is a plus.
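
To illustrate the Spark item above, here is a minimal PySpark sketch of the kind of DataFrame querying and manipulation we mean. The file path, column names, and aggregation are hypothetical placeholders for illustration, not part of the role or its data.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Start a local Spark session; in production this would run against a cluster.
    spark = SparkSession.builder.appName("planning-data-prep").getOrCreate()

    # Load a hypothetical sales extract into a DataFrame, inferring column types.
    df = spark.read.csv("sales.csv", header=True, inferSchema=True)

    # Typical DataFrame manipulation: filter, aggregate, and sort.
    monthly_totals = (
        df.filter(F.col("region") == "EMEA")
          .groupBy("month")
          .agg(F.sum("revenue").alias("total_revenue"),
               F.countDistinct("customer_id").alias("customers"))
          .orderBy("month")
    )

    monthly_totals.show()
    spark.stop()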