Software Engineer

Capgemini

Virtual / Travel

8 - 10 years

Posted 177 days ago

This job is no longer available.

Job Description:

Job Title: Spark Developer/Architect

Position Type: Permanent/Full-time

Duties & Responsibilities:

• Develop and deploy distributed computing Big Data applications using Apache Spark on the MapR Hadoop distribution (Hortonworks or Cloudera experience is also acceptable)

• Help drive cross-team design and development through technical leadership and mentoring

• Work with business partners to develop business rules and business rule execution

• Perform process improvement and re-engineering with an understanding of technical problems and solutions as they relate to the current and future business environment.

• Design and develop innovative solutions for demanding business situations

• Analyze complex distributed production deployments, and make recommendations to optimize performance

Essential skills

• At least 9 years of professional programming experience in Java or Scala (3+)

• 3 or more years of experience with the Hadoop Stack

• 2+ years of experience with distributed computing frameworks such as Apache Spark and Hadoop

• Experience with Elasticsearch and Spark (a plus)

• Strong knowledge of Object-Oriented Analysis and Design, software design patterns, and Java coding principles

• A core Java development background is a must

• Familiarity with Agile engineering practices

• Proficiency with MapR Hadoop distribution components and custom packages is a huge plus

• Proven understanding of, and hands-on experience with, Hadoop, HBase, Hive, Pig, Sqoop, Flume, and/or MapReduce

• Excellent RDBMS (Oracle, SQL Server) knowledge for development using SQL and PL/SQL

• Solid UNIX OS and Shell Scripting skills

• Strong initiative with the ability to identify areas of improvement with little direction

• Team player excited to work in a fast-paced environment; Agile experience preferred

• Bachelor’s degree in computer science/data processing or equivalent