Are you an experienced Big Data Engineer passionate about building scalable, enterprise-level data engineering solutions? This role is a great fit for someone who enjoys large-scale computing and transforming large datasets using AWS distributed systems. Beyond technical expertise, you will invest time in understanding the needs of the business, the data behind it, and how to turn that information into technical solutions that enable Amazon's Finance leadership to take action.
Key job responsibilities
- Collaborate with finance and business stakeholders to understand requirements and translate them into technical specifications.
- Design and develop scalable big data pipelines to ingest, transform, and publish large volumes of data efficiently.
- Troubleshoot system and data quality issues.
- Identify bottlenecks in the current architecture and propose efficient, long-term solutions.
- Provide operational support by participating in the team's on-call rotations.
- Contribute to system documentation and on-call runbooks.
- Mentor junior engineers and drive the successful implementation of projects.
A day in the life
As the data engineering team supporting Amazon's central finance organization, our primary customer will always be our CFO. Day to day, a Sr. Big Data Engineer on this team focuses on solving data engineering problems involved in transforming our ever-growing financial data. You will work with other finance teams to understand their business problems and deliver scalable technical solutions.
About the team
The GFT Data Services team is the data engineering team within Corp FP&A. On our team, we enjoy a unique vantage point into everything happening within Amazon. This role sits on the team responsible for the company's enterprise-wide financial planning & analytics environment. The data flowing through our platform directly informs decision-making by our CFO and finance leadership at all levels. If you're passionate about building tools that enhance productivity, improve financial accuracy, reduce waste, and improve work-life harmony for a large and rapidly growing finance user base, come join us!
BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets
- Experience with data modeling, warehousing and building ETL pipelines
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience with SQL
PREFERRED QUALIFICATIONS
- Bachelor's degree
Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $118,900/year in our lowest geographic market up to $205,600/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.