Hadoop Database Development Lead


Columbia, SC

11 - 15 years

Posted 254 days ago by Eric Howell

This job is no longer available.


Alpharetta, GA

Full Time/Perm/Direct Hire

Our client is looking for candidates with data warehouse and people-management experience. Experience with Hadoop technology is not mandatory at this time.

Leads the database development staff, focusing on the analysis, development and maintenance of the Data Lake / Data Hub as well as the feeds to and from all subscriber applications using the Hadoop ecosystem.

The role involves developing and supporting integrations with multiple systems and ensuring the accuracy and quality of data by implementing business and technical reconciliations.

Development and support will be required for the Data Lake / Hub implementation on the Hadoop environment.

Key Accountabilities

  • Leads the database development staff, which consists of a mid-sized team of Data Warehouse developers, primarily experienced in MSBI technology and responsible for:

1. development and maintenance of the Hadoop Platform and various associated components for data ingestion, transformation and processing

2. development and maintenance of integrations with various bespoke Financial (Applications and Management Information Systems), Underwriting and Claims Management Systems

  • Supports and guides the team on work prioritization
  • Supports performance reviews and determines professional development needs and opportunities for staff
  • Responsible for mentoring, training, and motivating team members. Must always exhibit a positive, "can-do" attitude and serve as a role model to junior team members
  • Working with architects and project staff to deliver high-level or certified (detailed) estimates
  • Design and implement reusable, multi-layered Extract Transform Load (ETL) and Extract Load Transform (ELT) patterns and integration frameworks as requirements evolve for the enterprise
  • Ensure data quality and accuracy by implementing business and technical reconciliations via scripts and data analysis.
  • Develop and support RDBMS objects and code for data profiling, extraction, load and updates
  • Provide and maintain documentation for all developed objects and processes within strict timelines
  • Integration testing of team deliverables
  • Develop and run test scripts to ensure code quality and data integrity
  • Use a source code control repository (GitHub or an equivalent for the Hadoop ecosystem)
  • Follow data / integration design patterns and architecture as prescribed by the Architects, ensure team members adhere to them, and work with the Architects to enhance the design patterns as requirements evolve
  • Supports new initiatives and/or recommendations for database growth and integration
  • Follow policies, procedures, controls and processes for the development and testing life cycle
  • In addition to the above key responsibilities, you may be required to undertake other duties from time to time as the Company may reasonably require.


- Minimum of eleven years working as a productive member of a development team

- Bachelor's degree in Computer Science or another computer-related field

- Experience leading a mid-sized team of data warehouse / Hadoop / integration developers

- Experience working both independently and collaboratively with various teams and global stakeholders (Business Analysts / Architects / Support / Business) in an agile approach on projects and data quality issues

- Experience in development and support of Data Lakes, using Hadoop Platform and associated technologies such as Hive, Spark, Sqoop

- Demonstrable experience with the Hadoop ecosystem (including HDFS, Spark, Sqoop, Flume, Hive, Impala, MapReduce, Sentry, Navigator)

- Experience on Hadoop data ingestion using ETL tools (e.g. Talend, Pentaho, Informatica) and Hadoop transformation (including MapReduce, Scala)

- Experience working on Unix / Linux environment, as well as Windows environment

- Experience on Java or Scala or Python in addition to exposure to Web Application Servers preferred

- Exposure to NoSQL (HBase) preferred

- Experience in creating Low Level Designs

- Prior experience of analysis and resolution of data quality and integration issues

- Experience in providing and maintaining technical documentation, specifically Data Mapping and Low Level / ETL (or ELT) Design Documentation

- Experience supervising the work of, and mentoring, junior team members in a project environment

- Experience on Continuous Integration preferred

- Experience on large scale (>1TB raw) data processing, ETL and Stream processing preferred

- Experience in working as SME for enterprise systems or integrations

- Experience with database query (SQL) and stored procedure performance tuning

- Experience developing complex / enterprise data warehouses implemented on a standard RDBMS, preferably using SQL Server Integration Services (SSIS) and SQL Server objects, preferred

- Insurance domain experience preferred


  • Possess and demonstrate deep knowledge of data warehouse concepts, big data, architecture, various design alternatives, and overall data warehouse strategies.
  • Possess and demonstrate deep knowledge of the Hadoop Ecosystem
  • Knowledge of Type 2 Dimension Data model and data warehouse ETL techniques for historical data
  • Ability to design, architect and code to an enterprise, commercial, best-practices standard (preferred):

1. for BizTalk Server

2. for all SQL Server database objects

3. for T-SQL query and stored procedure optimization, table indexing, and constraints

4. for SQL Server Integration Services (SSIS) data integration tasks

  • Exposure to modern BI Tools like Platfora preferred
  • Knowledge of integration using BizTalk Server preferred

$90K - $130K