Big Data Engineer

PRI Technology  •  New York, NY
e-Commerce
$170K - $180K
Posted on 11/22/17 by Lori Sklarski

The Big Data Engineer will be comfortable working across the full spectrum of data engineering disciplines, spanning database architecture, database design, and analytic solution implementation. The Big Data Engineer will work as part of an interdisciplinary Agile team closely aligned with business units to develop and deploy descriptive, predictive, and prescriptive analytical solutions, and will work closely with peers and management to strengthen analytic competency across the enterprise. The ideal candidate will be distinguished not only by mastery of data engineering in a cloud-based, multi-tenant environment, but also by high levels of creativity and tenacity.

Key Responsibilities:

  • Build scalable databases for the consumption of structured and unstructured data using batch, micro-batch, and streaming data acquisition
  • Parse semi-structured data into structured formats using regular expressions or scripting
  • Administer Hadoop clusters
  • Develop ETL/ELT and data integration routines
  • Develop RESTful web services
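As an illustration of the semi-structured parsing work described above, here is a minimal sketch in Python using regular expressions. The log-line format shown is a hypothetical example, not one specified in this posting:

```python
import re

# Hypothetical semi-structured log line (format is an assumption for illustration)
line = "2017-11-22 14:03:55 user=alice action=login status=ok"

# Capture the timestamp, then treat the remainder as key=value pairs
pattern = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(.*)$")
match = pattern.match(line)
timestamp, rest = match.groups()

# Turn the key=value pairs into a structured record (a dict)
record = dict(pair.split("=", 1) for pair in rest.split())
record["timestamp"] = timestamp

print(record)
```

In practice this kind of routine would feed the parsed records into an ETL/ELT pipeline rather than printing them.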

Requirements:

  • Bachelor's Degree in Computer Science, Math, Statistics, or other quantitative discipline
  • Very strong knowledge of AWS databases, primarily Redshift (Redshift and AWS experience is a must)
  • Very strong knowledge of data warehousing concepts and best practices for optimal performance in an MPP environment
  • Strong knowledge of Chef for infrastructure/application automation and of AWS CloudFormation
  • Experience with middleware integration platforms such as WSO2
  • Working knowledge of at least one ETL tool (Talend, Pentaho, SSIS)
  • Proficient with continuous deployment methodology, DevOps and Agile
  • Experience with developing against NoSQL data stores
  • Experience with the setup and administration of Hadoop, and the use of Hive and Pig
  • Programming in at least one language (Python, Java, C/C++)
  • Experience working in a Data Warehousing environment
  • Experience developing ETL
  • Linux experience

Preferred Skills and Experience

  • Experience working in Media or Publishing
  • Experience with Apache Spark
  • Experience with real-time streaming using AWS Kinesis or Apache Kafka/Storm