We're looking for engineers with broad and deep skill sets and experiences to help move us forward. We want self-starters who will bring distinctive expertise but who are excited to step outside their comfort zone and learn new skills. If you're eager to pick up new languages, paradigms, and technologies, we want to talk.
What you'll do
Our engineers are expected to wear a number of hats and have the opportunity to touch all parts of the stack. As an engineer, you might work to improve our optimization algorithms, write code using Apache Spark, participate in architectural decisions for new components, or improve performance in collaboration with DevOps. In the same week, you could work on user-facing interfaces and reports with front-end developers, write code to import, process and QC terabytes of new data, and work with analysts and statisticians to ensure the validity of our processes.
What we're looking for:
Bachelor's degree in Computer Science or a related field (or 4 additional years of relevant work experience)
A strong understanding of data structures, algorithms, and effective software design
Significant development experience with a major modern language (e.g., Java, Scala, Python, Ruby, or C/C++)
Significant experience working with structured and unstructured data at scale, and comfort with a variety of different stores (key-value, document, columnar, etc.) as well as traditional RDBMSes and data warehouses
Experience writing unit and functional tests
Comfort with version control systems (e.g., Git, SVN)
Excellent verbal and written communication skills; must work well in an agile, collaborative team environment
Any of the following will make us really excited:
Master's degree in Computer Science or a related field
Basic understanding of statistics and experience with statistical packages such as R, MATLAB, or SPSS
Practical experience with supervised machine learning techniques
Practical experience with Apache Spark
Experience with AWS products (Redshift, EMR, S3, IAM, RDS, etc.)
Experience wrangling terabytes of big, complicated, imperfect data
Strong background with test-driven development