As the world's number 1 job site, our mission is to help people get jobs. We need talented, passionate people working together to make this happen. We are looking to grow our teams with people who share our energy and enthusiasm for creating the best experience for job seekers.
We are a rapidly growing and highly capable engineering team building the most popular job site on the planet. Every month, over 250 million people count on us to help them find jobs, publish their resumes, process their job applications, and connect them to qualified candidates for their job openings. With engineering hubs in Seattle, San Francisco, Austin, Tokyo, and Hyderabad, we are improving people's lives all around the world, one job at a time.
The base salary range below represents the low and high end of the Indeed salary range for this position. Actual salaries will vary and may be above or below the range based on various factors including but not limited to location, experience, and performance. The range listed is just one component of Indeed's total compensation package for employees. Other rewards include quarterly bonuses, Long Term Incentive Plan units, an open Paid Time Off policy, and many region-specific benefits.
Austin Base Salary Range: 138,000 - 174,000 USD
As a Data Engineer at Indeed, your role is to deliver the information your business partners need to grow the business. You will assist business teams in drawing insights from our data. You are someone who wants to see the impact of their work and make a difference every day. You know what it takes to deliver quality reporting and business intelligence solutions to the organization. You understand the value and benefit of solid data practices and how to translate that into satisfied business partners.
You are an experienced Software Engineer. You are skilled in extracting, transforming, and loading data. You are able to translate business requests into database design.
You are someone who is passionate about data-driven approaches. You enjoy exploring large data sets and get excited about learning new technologies and learning in a collaborative environment. You are skilled at eliciting requirements from a wide range of different teams.
- Create and manage data sources
- Integrate with diverse APIs
- Contribute to the ongoing development of the data warehouse ecosystem
- Work closely with stakeholders on the data demand side (finance, analysts, and data scientists)
- Work closely with stakeholders on the data supply side (domain experts on source systems of the data)
- Design and build optimized OLAP and Star Schema data structures
- Build self-monitoring, robust, scalable batch and streaming data pipelines for 24/7 global operations
- Create highly reusable code modules and packages that can be leveraged across the data pipeline
- Develop and maintain data dictionaries for governance of published data sources
- Develop and improve continuous release and testing processes
- Elicit requirements from a wide range of different teams
- Bachelor's degree in computer science, computer engineering, or an engineering discipline
- Strong CS fundamentals, problem-solving skills and software engineering skills
- 2+ years industry experience in software development and/or data engineering
- Experience in a hands-on, data-centric role involving data engineering, streaming, or warehousing
- Ability to communicate effectively with stakeholders to define requirements
- Strong ability to understand and organize data from various sources
- Strong expertise in an object-oriented language (preferably Python or Java)
- Strong SQL skills
- Experience in columnar relational data stores and NoSQL technologies
- Experience with big data tools such as Hadoop, Hive, and Spark, as well as knowledge of more traditional data warehouses
- Experience delivering data pipelines and managing the resulting data stores using managed cloud services (such as AWS or Google Cloud)
- Ability to identify and resolve performance and data quality issues
- Experience with modern data pipelines, data streaming, and real-time analytics using tools such as Apache Kafka, AWS Kinesis, Spark Streaming, Elasticsearch, or similar technologies
- Knowledge of machine learning tools and concepts a plus