The City of Tacoma is seeking a Data Integration Developer responsible for performing data-level integration for operational and analytics applications.
The current opening is within Tacoma Power's Utility Technology Services department, which is building a new data engineering practice for Tacoma Public Utilities (TPU). As a Data Integration Developer, you will design and deliver innovative data solutions on Amazon Web Services and the Snowflake data warehouse.
- Develop, support, and maintain data pipelines for utility analytics, including ETL/ELT and streaming pipelines.
- Develop web services, perform ESB configurations, and create messaging and transformation logic.
- Ingest and transform data from source systems into the data lake and data warehouse platforms, leveraging a variety of integration and data engineering patterns, including batch processing, web services, direct database connections, ESB configurations, and third-party ETL tool access (a minimal batch-ingestion sketch follows this list).
- Collaborate with customers, analysts, architects, and other stakeholders to identify user requirements; assess available data pipeline, data transfer, and data integration technologies; and recommend solution options.
- Perform data engineering and data integration design, implementation, and support services, including web services management, registry/repository maintenance, adapter design, transformation development, and implementation support.
- Create detailed solutions documentation for development and maintenance of data pipelines, cloud systems configuration, web services, ESB integration, and other data-related solutions.
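To make the day-to-day work concrete, here is a minimal sketch of the kind of batch ELT pipeline described above, using the Snowflake Python connector. The account, credentials, stage, and table names are illustrative assumptions, not TPU's actual systems:

```python
# Minimal batch ELT sketch: copy raw files from an S3 external stage into
# Snowflake, then transform with SQL. All names below are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",      # hypothetical Snowflake account
    user="etl_service_user",        # hypothetical service credentials
    password="...",                 # in practice, pulled from a secrets manager
    warehouse="ETL_WH",
    database="UTILITY_ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Stage-to-table load: @METER_STAGE is assumed to point at an S3 bucket.
    cur.execute("""
        COPY INTO RAW.METER_READINGS
        FROM @METER_STAGE/daily/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    # ELT transform step: reshape raw data into an analytics-ready table.
    cur.execute("""
        INSERT INTO ANALYTICS.DAILY_USAGE
        SELECT meter_id, reading_date, SUM(kwh)
        FROM RAW.METER_READINGS
        GROUP BY meter_id, reading_date
    """)
finally:
    conn.close()
```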
Qualifications
- Bachelor's degree in computer science, information systems, engineering, or a related field and five years of professional experience as a software developer, data engineer, or in an equivalent data-related position, or an equivalent combination of education and experience. Demonstrated success working on small to medium application development teams.
Knowledge & Skills
- Advanced SQL knowledge and experience with relational databases, including query authoring, as well as working familiarity with a variety of database platforms.
- Experience building and optimizing 'big data' pipelines, architectures, and data sets.
- Experience building ETL/ELT processes that support data transformation, data structures, metadata, dependency management, and workload management.
- Knowledge of message queuing, stream processing, and highly scalable 'big data' stores (a minimal streaming sketch follows this list).
- Experience supporting and working with cross-functional teams in a dynamic environment.
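As an illustration of the message queuing and stream processing skills above, here is a minimal sketch of a stream consumer using boto3 and Amazon Kinesis. The stream name and record handling are assumptions for illustration only:

```python
# Minimal stream-consumption sketch using boto3 and Amazon Kinesis.
# "meter-events" and the record handling are illustrative assumptions.
import json
import time

import boto3

kinesis = boto3.client("kinesis")

# Start from the oldest available record in a single shard.
shard_iterator = kinesis.get_shard_iterator(
    StreamName="meter-events",          # hypothetical stream name
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

while shard_iterator:
    response = kinesis.get_records(ShardIterator=shard_iterator, Limit=100)
    for record in response["Records"]:
        event = json.loads(record["Data"])  # assumes JSON-encoded payloads
        print(event)                        # placeholder for transformation logic
    shard_iterator = response.get("NextShardIterator")
    time.sleep(1)  # simple throttle to respect per-shard read limits
```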