DRIVIN is looking to expand its data team as we continue to grow our data platform. The candidate should have a strong background in Python and SQL. As a member of the data team, the main responsibilities are implementing and maintaining ETL jobs, using Python to ingest external data sources into the Data Warehouse, and working closely with the Product and Data Science teams to deliver data in usable formats to the appropriate data sources.
DRIVIN has a polyglot data model built on many cutting-edge data platforms. We currently use MPP Postgres (Greenplum, Netezza, DBX) as our Data Warehouse, Elasticsearch for location-based searching, and Postgres for transactional data.
The candidate should be a self-starter who is interested in learning new systems and environments and in building new solutions.
The candidate will also work closely with the Data Science team to identify interesting data points for its use.
DRIVIN's tech stack is cutting-edge. MPP Postgres drives the Data Warehouse, Elasticsearch enables our location-based searching and metrics, and Apache Spark is used to train our models. All environments run on AWS EC2/RDS/S3, and our data processing framework is written in Python.
- Implement ETL jobs for various functions
- Support and maintain daily ETL jobs
- Support the development teams by optimizing data access
- Work with the Data Science team to deliver metrics to consumers
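To give a flavor of the day-to-day work, here is a minimal sketch of the kind of Python ETL job described above. The source records, field names, and the `vehicle_listings` table are hypothetical, and `sqlite3` stands in for the MPP Postgres warehouse so the sketch is self-contained; a real job would pull from an external API or vendor feed and load via a Postgres driver.

```python
import sqlite3

def extract():
    # Hypothetical stand-in for calling an external API or reading a vendor feed.
    return [
        {"vin": "1HGCM82633A004352", "price": 13950, "mileage": 67000},
        {"vin": "2T1BURHE5JC123456", "price": 15500, "mileage": 42000},
    ]

def transform(records):
    # Drop incomplete rows and shape records into tuples for loading.
    return [
        (r["vin"], r["price"], r["mileage"])
        for r in records
        if r.get("vin") and r.get("price") is not None
    ]

def load(rows, conn):
    # Idempotent load: re-running the daily job upserts rather than duplicates.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS vehicle_listings "
        "(vin TEXT PRIMARY KEY, price INTEGER, mileage INTEGER)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO vehicle_listings VALUES (?, ?, ?)", rows
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*) FROM vehicle_listings").fetchone()[0])  # → 2
```

The extract/transform/load split keeps each stage independently testable, which matters when jobs run daily and need to be supported and maintained over time.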