Software Development Engineer - Big Data Pipeline

Smartsheet

Bellevue, WA

5 - 7 years


The Product Intelligence team is looking for a skilled software development engineer with prior experience designing and implementing data pipelines, data warehouses, and business intelligence systems that integrate transactional, analytical, and big data components. Together with the team, you will drive the design and implementation of the data platform to meet the data needs of the organization. This is a highly visible role in which you will be instrumental in designing and building out the data platform and its features. The team comprises data engineers, data analysts, and data scientists focused on enabling Smartsheet to act on insights that help sustain the company's rapid growth, putting you at the forefront of data analysis for a growing customer base.

If you enjoy big data challenges, working with Apache Spark in an AWS cloud environment at a fun, fast-growing company, contact us. This role is located at our Bellevue, WA headquarters.

Responsibilities:
  • Design and code data pipeline features and data processing jobs that encompass innovative business intelligence and analysis to help Smartsheet on its growth trajectory
  • Lead the development of capabilities in all facets, from strategic to tactical implementation and from conception to post-deployment
  • Ensure smooth ongoing operations of data platform with high availability while making continuous improvements
  • Help key users across the entire organization to understand and consume the data sets and platform for enhanced decision making and analytics
  • Advance the data architecture and platform and ensure adherence to key architectural tenets and best practices
  • Design, implement and maintain data models

Qualifications:
  • A love of learning; open-minded and action-oriented
  • 6+ years of development experience with schema design, data architecture, and data pipeline and processing
  • Experience designing and delivering large-scale, 24/7, mission-critical data pipelines and features using today's big data architectures
  • Must have data engineering experience with one or more non-SQL languages such as Python, Scala, or Java
  • Strong data modeling skills (relational, dimensional, and flattened); strong analytical and SQL skills, with attention to detail
  • Deft problem solver and strong collaborator
  • Self-driven and highly dependable in an agile and results-oriented environment
  • Familiarity with both established and emerging data technologies and the ability to evaluate and ascertain their applicability
  • Experience with big data technologies (Apache Spark, AWS S3, Hadoop, Apache Parquet) a big plus
  • Expert knowledge of ETL and data integration techniques
  • Legally eligible to work in the U.S. on an ongoing basis
  • BS or MS in Computer Science, or equivalent