Machine Learning Architect
We are growing! Join our “Innovation and Innovations” team and work on transformative, cutting-edge technologies that produce real results. Our philosophy is think big, start small, act fast – we value open-source technologies, solve challenging and unique problems, and innovate quickly. We work in a unique onsite-offshore model, and encourage creativity from our architects and engineers every step of the way.
You will be working with various teams including product, user experience, portals, operations, core, and systems. Our teams are small enough to make fast decisions, and our audience and reach are large enough that your work and your voice will have an immediate and tremendous impact. This is a full-time role based out of San Diego, reporting to the SVP, Engineering. You will assist in an organizational transformation from traditional transactional databases and warehousing to cloud data platforms, deep learning, and advanced analytics.
- You will help implement strategies, roadmaps and solutions in the analytics and deep learning space and evangelize the vision to the organization.
- You will be primarily responsible for the design, execution, and delivery of exploratory concepts, rapid prototypes, and pilot solutions designed to test hypotheses and incubate transformative new capabilities by applying machine learning and data mining techniques, performing statistical analysis, and building high-quality prediction systems.
- You will be hands-on. You will implement and deploy these solutions on an enterprise-grade technology stack, and be responsible for all aspects of the solution – data pipelines, model generation, and training and inference engines.
- Your solutions will span the gamut from highly available data lakes to high-performance compute clusters, and from storage and networking infrastructure to platforms and microservices.
- You will help build an internal team, including recruiting new members and coaching and mentoring existing ones.
- Master's degree in Machine Learning, Data Mining, Computer Science, Statistical Inference, Mathematical Modeling, or a similar field, with 7–10 years of strong, demonstrable SDLC experience – a minimum of 4 of these years should be direct experience in the machine learning and big data space.
- Experience implementing at least two Machine Learning pipelines in production.
- Deep hands-on technical ability. Excellent understanding of machine learning techniques and algorithms, and a deep understanding of statistics and probability.
- Experience with Hortonworks or Cloudera distributions.
- Very strong written and oral communication skills – must be able to present complex ideas in an understandable way.
- Prior experience working with the ELK stack, including Elasticsearch.
- Proficiency with one or more of Python, Java, or Scala, preferably in a Linux environment.
- Experience with deep learning frameworks such as MXNet, Gluon, TensorFlow, Theano, Caffe/Caffe2, or Keras, and with data science libraries such as Spark ML, pandas, NumPy, and scikit-learn.
- Experience with distributed computing frameworks, containers, and microservices (YARN, Kubernetes, AWS ECS, Mesos).
- Experience with at least two NoSQL variants (Hive, MongoDB, Cassandra, Impala), and expertise with Kafka and MLlib.
- Excellent understanding of algorithms and data structures for optimization.
- Prior experience with traditional RDBMSs (Oracle, SQL Server, etc.) and/or large-scale traditional data warehousing is a plus.
- The degree requirement may be relaxed if the candidate possesses equivalent applied machine learning experience and expertise.