Data Engineer - Scala, HBase, Hive

Confidential Company (Staffing & Recruiting)  •  New York, NY
Salary depends on experience
Posted on 11/03/17 by Mohamed Maideen

Role: Data Engineer

Location: New York City, NY

 

Minimum Qualifications:

·        Proficient in Scala

·        Experience working with cloud technologies such as Amazon Web Services

·        Strong understanding of EMR (Elastic MapReduce) and other big data technologies (e.g., Redshift, Presto, Druid, MongoDB)

·        Experience with C# or Java

·        Strong knowledge of writing Web APIs and consuming RESTful and SOAP APIs

·        Interest in emerging technologies such as MapReduce, MPP, NoSQL, etc.

·        Strong knowledge of SQL

·        Proficient with code versioning tools such as Bitbucket and SVN

·        Familiarity with continuous integration tools like TeamCity

·        Bachelor's degree in Computer Science or a related field is preferred

 

Description:

A data engineer who will be responsible for the design, development, implementation, and ongoing support of data processing systems and the larger data platform. The position consists of data processing tasks, data architecture, and overall platform development. Must be able to work on multiple projects simultaneously, including both enhancements and new project development. The data engineer's responsibilities will be to design and develop complex ETL flows and to coordinate with the rest of the team working on different layers of the infrastructure. The candidate must be a self-starter with a sense of urgency and a commitment to quality and professionalism.

 

Responsibilities:

·        Translate application storyboards and use cases into functional applications

·        Design, build, and maintain efficient, reusable, and reliable ETL processes

·        Integrate with 3rd-party APIs for data consumption

·        Build complex workflows using open source ETL tools such as Talend

·        Write data processing and transformation routines using Apache Spark (Scala), Hadoop (Hive), Redshift, and Presto (a brief sketch follows this list)
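
For candidates curious about the day-to-day work, here is a minimal sketch of the kind of Spark (Scala) transformation routine referenced above. It is illustrative only: the table and column names (raw.click_events, analytics.daily_user_activity, user_id, event_ts) are hypothetical, and it assumes a Spark build with Hive support on the classpath.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyActivityEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DailyActivityEtl")
      .enableHiveSupport() // read from and write to Hive-managed tables
      .getOrCreate()

    // Read raw events from a hypothetical Hive table.
    val raw = spark.table("raw.click_events")

    // Clean and aggregate: drop rows without a user, count events per user per day.
    val daily = raw
      .filter(col("user_id").isNotNull)
      .withColumn("event_date", to_date(col("event_ts")))
      .groupBy("user_id", "event_date")
      .agg(count("*").as("event_count"))

    // Write the result back as a partitioned Hive table.
    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("analytics.daily_user_activity")

    spark.stop()
  }
}

In practice, a routine like this would typically run as one step of a larger workflow, orchestrated by an ETL tool such as Talend, per the workflow responsibility above.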
