As a Data Engineer, you should be an expert in the technical components of data warehousing (e.g., data modeling, ETL, and reporting), the underlying infrastructure (hardware and software), and their integration. You should have a deep understanding of enterprise-level data warehouse architectures spanning multiple platforms (RDBMS, columnar, cloud), and be an expert in the design, creation, management, and business use of extremely large datasets. You should have excellent business and communication skills, enabling you to work with business owners to develop and define key business questions and to build datasets that answer those questions. You are expected to build efficient, flexible, extensible, and scalable ETL and reporting solutions. You should be enthusiastic about learning new technologies and able to apply them to deliver new functionality to users or to scale the existing platform. Excellent written and verbal communication skills are required, as you will work closely with diverse teams. Strong analytical skills are a plus. Above all, you should be passionate about working with huge datasets and love bringing data together to answer business questions and drive change.
Our ideal candidate thrives in a fast-paced environment, relishes working with large transactional volumes and big data, enjoys the challenge of highly complex business contexts (often defined in real time), and, above all, is passionate about data and analytics. In this role you will join a team of engineers building the world's largest financial data warehouses and BI tools for Amazon's expanding global footprint.
· Design, implement, and support a platform providing secured access to large datasets.
· Interface with tax, finance and accounting customers, gathering requirements and delivering complete BI solutions.
· Model data and metadata to support ad-hoc and pre-built reporting.
· Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.
· Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
· Tune application and query performance using profiling tools and SQL.
· Analyze and solve problems at their root, stepping back to understand the broader context.
· Learn and understand a broad range of Amazon’s data resources, and know when, how, and which of them to use (and which not to).
· Keep up to date with advances in big data technologies and run pilots to design data architectures that scale with increasing data volume using AWS.
· Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for datasets.
· Triage many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment.
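The query-tuning responsibility above typically starts by inspecting the query plan before and after adding an index. A minimal, hypothetical sketch using Python's built-in sqlite3 module (the table, index, and data are invented for illustration; production warehouses such as Redshift expose an analogous EXPLAIN command):

```python
import sqlite3

# Hypothetical dataset for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def query_plan(sql):
    """Return the plan detail strings for a statement."""
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

sql = "SELECT SUM(total) FROM orders WHERE customer_id = 42"
before = query_plan(sql)  # full table scan: no usable index yet
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = query_plan(sql)   # now an index search on idx_orders_customer
```

The same workflow applies on any engine: profile the plan, identify scans on large tables, and add or adjust indexes (or sort/distribution keys on columnar systems) until the plan changes.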
· Bachelor’s degree in CS or related technical field
· 6+ years of experience in dimensional data modeling, ETL development, and data warehousing
· Experience with Redshift and/or other distributed computing systems.
· Excellent knowledge of SQL and Linux OS
· SQL performance tuning
· Server management and administration including basic scripting
· Basic DBA tasks
· Solid experience with at least one business intelligence reporting tool
· Master’s degree in Information Systems or a related field.
· Knowledge of big data solutions; experience with Hadoop, Hive, or Pig.
· Experience with Redshift and other AWS services.
· Excellent verbal and written communication and interpersonal skills, with an ability to effectively communicate with both business and technical teams.
· Experience with Java and MapReduce frameworks such as Hadoop/Hive.
· Strong organizational and multitasking skills with ability to balance competing priorities.
· An ability to work in a fast-paced environment where continuous innovation is occurring and ambiguity is the norm.
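The MapReduce pattern named in the qualifications above can be sketched in plain Python to show what the frameworks automate: a map phase emits key–value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. This is a toy word count (the canonical MapReduce example), not any specific framework's API:

```python
from collections import defaultdict

def map_phase(records):
    # Mapper: emit (word, 1) for every word, as a Hadoop mapper would.
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle/sort: group all values by key (done by the framework in Hadoop).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: aggregate each key's values into a single result.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data big wins", "big data"])))
# counts == {"big": 3, "data": 2, "wins": 1}
```

Real frameworks distribute each phase across a cluster and handle partitioning, spilling, and fault tolerance; the data flow, however, is exactly this three-step pipeline.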
Job ID: 623617