We are passionate about data, we are passionate about code, and we are passionate about the product we offer. We are seeking an exceptional and experienced data engineer who shares our passion and obsession with quality. You'll be a core member of our product and engineering team, dedicated to helping our clients replace time-consuming, manual processes so they can make informed, real-time decisions about government markets, competitors, and agency relationships.
We need a skilled and dedicated data nerd to join our team and lead us in uncovering truth and meaning in data. You must be a hands-on engineer with a strong understanding of both data management and governance standards. You must also have strong interpersonal skills to work cross-functionally across internal teams as well as directly with end users and Govini platform SMEs.
As an engineering leader, you will be responsible for assessing our current data capabilities, recommending process enhancements and quantifying their benefit to Govini's growth, strategy and mission.
This is a team member position, working onsite in our Pittsburgh, PA office.
Scope of Responsibilities
- Define and lead Govini's data lifecycle strategy across data acquisition, data ingestion, data cleansing, normalization and linkage.
- Identify data sources, assess their value and quality and estimate the level of effort required to integrate them into the existing data model, infrastructure and products.
- Ensure key entities within datasets are identified, resolved and linked to existing entities within the current master data repository.
- Apply techniques including data pre-processing, indexing, blocking, field and record comparison and classification to solve large-scale optimization problems.
- Develop, refine and oversee master data management standards, including establishing and enforcing governance procedures and ensuring data integrity across multiple functions. Responsible for owning data quality metrics and meeting defined data accuracy goals according to industry best practices.
- Improve data sharing, increase data repurposing and improve cost efficiency associated with data management efforts.
- Build best practices that help with chain of custody of data so it can be easily traced back to the source for accuracy and consistency.
- Collaborate with Data Science team in the development of predictive models using machine learning, natural language and statistical analysis methods.
- Perform exploratory data analyses, generate and test working hypotheses, prepare and analyze historical data and identify patterns.
- Work directly with users as well as SMEs to establish, create and populate optimal data architectures and structures, as well as articulate techniques and results using non-technical language.
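To give a flavor of the entity-resolution work described above (blocking, field comparison and classification), here is a minimal, purely illustrative sketch. The records, field names and similarity threshold are hypothetical; in practice this work involves much larger datasets and dedicated tooling:

```python
from itertools import combinations

# Toy vendor records; the names and fields here are hypothetical examples.
records = [
    {"id": 1, "name": "Acme Corp",        "zip": "15222"},
    {"id": 2, "name": "ACME Corporation", "zip": "15222"},
    {"id": 3, "name": "Globex LLC",       "zip": "15222"},
    {"id": 4, "name": "Initech",          "zip": "90210"},
]

def block_by_zip(recs):
    """Blocking: only records sharing a key (here, zip code) are compared,
    shrinking the candidate space from all n*(n-1)/2 pairs to per-block pairs."""
    blocks = {}
    for r in recs:
        blocks.setdefault(r["zip"], []).append(r)
    return blocks

def name_similarity(a, b):
    """Field comparison: Jaccard similarity over lowercased name tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def candidate_matches(recs, threshold=0.3):
    """Classification: pairs whose similarity clears the threshold are
    treated as candidate matches for entity linkage."""
    matches = []
    for block in block_by_zip(recs).values():
        for r1, r2 in combinations(block, 2):
            if name_similarity(r1["name"], r2["name"]) >= threshold:
                matches.append((r1["id"], r2["id"]))
    return matches

print(candidate_matches(records))  # → [(1, 2)]
```

Here "Acme Corp" and "ACME Corporation" share a block and a name token, so they surface as a candidate match, while Initech is never compared against the others at all.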
In order to do this job well, you must be a curious and eager problem solver with a hunger for delivering high quality data solutions. You have a passion for great work and nothing less than your best will do. You share our intolerance of mediocrity. You're uber-smart, challenged by figuring things out and produce simple solutions to complex problems. Knowing there are always multiple answers to a problem, you know how to engage in a constructive dialogue to find the best path forward. You're scrappy. We like scrappy.
Candidate must be able to work in the United States without sponsorship.
- Bachelor's degree in Computer Science, Mathematics or related technical field
- Minimum of 3 years direct experience creating sustainable, automated processes for data discovery, curation and synthesis
- 3-5 years Master Data Management experience including data consolidation, linkage, federation and dissemination
- 5-7 years of experience programmatically manipulating data
- Experience with PostgreSQL or similar RDBMS
- Expert at advanced SQL programming
- Strong expertise with scripting languages such as Python, Ruby, Perl
- Experience utilizing open-source technologies such as Linux, PostgreSQL, Elasticsearch
- Proficient usage of common data formats such as CSV, XML, and JSON
- Requires strong analytical ability and attention to detail
- Ability to work independently with little supervision
- A burning desire to tackle hard problems and create sustainable solutions
- Experience using Amazon Web Services
- Experience in or exposure to the nuances of a startup or other entrepreneurial environment