Big Data Developer

Sun Life Financial

Waterloo, ON

Industry: Accounting, Finance & Insurance

Experience: Less than 5 years

Job Description:

About the Business

Enterprise Services: Working better together.

Our teams are dedicated to providing the services and technology our business partners need to help customers achieve lifetime financial security. Through innovation and collaboration, we're striving to continually find new and better ways to bring value to Sun Life. Enterprise Services has employees in Canada, Ireland, the U.S. and Asia. We partner closely with groups and individuals throughout Sun Life Financial to provide products and services that deliver business value.

As an Intermediate Big Data Developer, you will be accountable for the development, testing and implementation of data sources into the Enterprise Data Lake.

This position can be based at the Sun Life Office in Atria (Victoria Park and Sheppard) or Waterloo, depending on where a suitable candidate is found.

Responsibilities

  • Designing, developing, testing, tuning and deploying software solutions within the Hadoop ecosystem
  • Designing and implementing product features in collaboration with business and IT stakeholders
  • Designing reusable Java components, frameworks and libraries
  • Working closely with the Architecture group and driving solutions
  • Implementing the data management framework for the Data Lake
  • Supporting the implementation and driving to stable state in production
  • Reviewing code and providing feedback relative to best practices, performance improvements etc.
  • Demonstrating substantial depth of knowledge and experience in a specific area of Big Data and Java development
  • Leveraging existing frameworks and standards, contributing ideas to or resolving issues with current framework owners; where no framework or pattern exists, creating them
  • Professionally influencing and negotiating with other technical leaders to arrive at and implement the optimal solution, considering standards and project constraints

Skills and Experience Requirements:

  • 2+ years of hands-on expertise with Big Data technologies (HBase, Hive, Sqoop, Pig)
  • Proficient understanding of distributed computing principles
  • Experience with Spark and Scala
  • Proficiency with MapReduce, HDFS, Tez
  • Experience building stream-processing systems using solutions such as Storm or Spark Streaming
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
  • Knowledge of various ETL techniques and frameworks, such as Flume and Sqoop
  • Experience with various messaging systems, such as Kafka or RabbitMQ
  • Experience with Big Data ML toolkits, such as Mahout and Spark MLlib
  • Good understanding of Lambda Architecture, along with its advantages and drawbacks
  • Experience with Hortonworks or Cloudera distribution
  • Collaborative personality, able to engage in interactive discussions with the rest of the team
  • Excellent technical analysis/design skills
  • Ability to work with technical and business-oriented teams
  • Track record of delivering projects on time
  • Ability to work with non-technical resources on the team to translate data needs into Big Data solutions using the appropriate tools
  • Ability to develop technical resources, including methods, procedures and standards, for use during the design, development and unit testing phases of the project
  • Excellent communication skills (both written and oral) combined with strong interpersonal skills
  • Strong analytical skills and thought processes, combined with the flexibility to work analytically in a problem-solving environment
  • Strong attention to detail
  • Strong organizational & multi-tasking skills

JR00001779