As a Lead Data Engineer supporting Cox Automotive Enterprise Platforms, and working within our Scaled Agile Framework, you will be responsible for the delivery of strategic, cloud-based analytics data solutions. This role, in partnership with counterpart technology teams, is accountable for the design, development, quality, support, and adoption of production-grade data and analytics solutions. A successful Lead Data Engineer is one who thrives as a mentor, collaborator, organizer, and quality evangelist: they will lead efforts to optimize our code delivery processes while simultaneously keeping the existing production processes humming.
Technology We Use: AWS, S3, Glue, DMS, EMR, SFDC Einstein Analytics, Python, Oracle, Redshift, Mulesoft, and Splunk
- In partnership with Product Owner and Enterprise Architecture, and as the technical leader of the development team(s), deliver analytics solutions, including collecting data from providers, building transformations and integrations, persisting within repositories, and distributing to consuming systems.
- Working primarily within AWS, deliver event-driven data processing pipelines, and ensure data sets are captured, designed, and housed effectively (consistently, optimized for cost, and easy to support and maintain).
- Transition Minimum Viable Product (MVP) solutions into operationally hardened systems, including introducing reusable objects and patterns to drive automation, maintainability, and supportability.
- Code quality: ensure the team’s delivered code is of high quality. Conduct code reviews and ensure delivery patterns are followed. Coach, mentor, and evangelize the team on Built-In Quality.
- Environment management: ensure lower environments are governed, healthy, and “clean” (e.g., via QA automation)
- Code deployment management: ensure best-practice processes and controls are in place and adopted
- Cloud account management: define, refine, and facilitate adoption of patterns for management of access policies, security patterns, and security policies.
- Operational hardening: increase the system’s fault tolerance and self-recovery capabilities.
- Liaison with counterpart technical and business teams (e.g., Cloud Ops, DBAs, SFDC, Enterprise Data Services, Data Governance, Financial Reporting, and Business Unit technology and analytics teams).
- Platform evolution: advise on applications, service offerings, newly available cloud services, and emerging technologies which may be solutions to immediate and/or future state goals.
- Participate in backlog refinement and request decomposition, including data discovery and data analysis.
- Proactively identify, communicate, and resolve issues and risks that interfere with delivery commitments.
- Self-directed problem solving: research and collaborate with peers to drive technical solutions.
- Rapid response and cross-functional work to resolve technical, procedural, and operational issues.
Required Experience (minimum):
- A minimum of 8 years of experience delivering analytics, reporting, or business intelligence solutions
- A minimum of 4 years of experience developing in big data technologies (cloud-based analytics, Hadoop, NoSQL)
- Proficient in SQL and at least one of these programming languages: Java, Scala, or Python (preferred)
- Experience designing event-driven, data processing pipelines
- At ease developing data solutions leveraging both databases and file systems via CLI
- Strong, hands-on technical skills and self-directed problem solving
- Experience working within Scaled Agile (SAFe) and on Agile teams (Scrum)
- Experience with data modeling (normalization, slowly changing dimensions, star schema, data vault)
- Experience with MPP databases (Teradata, Exadata, Netezza, Redshift)
- Foundational understanding of Lean software development
- Experience administering or developing within SFDC Sales Cloud CRM, SFDC Einstein Analytics
What We Look For (preferred):
- Experience as a technical team lead
- Experience maturing production systems (introducing QA automation, fault tolerance, self-recovery)
- Experience developing within AWS, especially EMR, DMS, Glue, Lambda, Athena, and Redshift
- Experience developing in Spark (Spark Streaming, DataFrames, Datasets)
- Bachelor’s degree in Business, Management, Information Systems, Computer Science, or Engineering