Engineer II - Big Data

TD Bank

Toronto, ON

Industry: Financial Services

Experience: Less than 5 years

Requisition ID: 180407BR

Position Overview
The Platform Engineer II, Infrastructure and Platform Design, is part of the Information Excellence team within Enterprise Information Management (EIM). Reporting to the Senior Manager, Infrastructure and Platform Design, this role is responsible for designing, building, and testing an automated and resilient Big Data infrastructure and platform for the Information Excellence program. The position works proactively and effectively with EIM, ITS, and other technology and business partners to provide technical direction, support, expertise, and best practices for the systems and infrastructure that make up the Information Excellence platform.

Job Description

Accountabilities

  • Provide expertise regarding systems and infrastructure to various project stakeholders.
  • Develop and document system and infrastructure configurations utilizing the SDLC methodology.
  • Participate in the preparation of system implementation plans and support procedures.
  • Provide ongoing system automation management support to Information Excellence teams and related business partners.
  • Contribute to the ongoing development of the team by sharing information, knowledge, expertise, and lessons learned on a regular basis.
  • Evaluate value-added Hadoop tools and utilities to enhance Information Excellence services and the platform.

Requirements

Academic and Experience Requirements:

  • Post-secondary degree in Computer Science, Engineering, or a similar field preferred.
  • A minimum of 3 to 5 years of experience in system administration, information management, system automation, and testing.
  • A minimum of 1 year of Big Data and Hadoop experience preferred, or strong proficiency in Linux shell scripting and system administration (see the sketch after this list).
  • Experience with information technology and with data and systems management. Knowledge of Unix/Linux, especially RHEL, is a requirement; Hadoop administration and utilities, Java, virtual environments, configuration and deployment automation, and RESTful API-based web services are preferred but not mandatory.
  • Demonstrated history of being self-motivated, energetic, and results-driven, and of executing with excellence.
  • Effective interpersonal skills and the ability to work well in a fast-moving team; able to build and maintain strong relationships with business and technology partners.
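
To make the scripting expectation above concrete, the sketch below shows the flavour of Linux/Hadoop administration scripting this role calls for: parsing `hdfs dfsadmin -report` output to flag DataNodes above a usage threshold. It is a minimal, hypothetical example; the 80% threshold and the standalone-script form are illustrative assumptions, not a TD tool or standard.

    #!/usr/bin/env python3
    """Minimal sketch: flag HDFS DataNodes above a usage threshold.

    Assumes the `hdfs` CLI is on the PATH and the caller can read the
    cluster report; the 80% threshold is an illustrative value only.
    """
    import re
    import subprocess

    USAGE_THRESHOLD = 80.0  # percent; arbitrary example value

    def hdfs_report():
        # `hdfs dfsadmin -report` prints capacity and usage per DataNode.
        return subprocess.run(
            ["hdfs", "dfsadmin", "-report"],
            check=True, capture_output=True, text=True,
        ).stdout

    def flag_busy_nodes(report):
        flagged, host = [], None
        for line in report.splitlines():
            if line.startswith("Name:"):  # start of a DataNode block
                host = line.split()[1]
            m = re.match(r"DFS Used%:\s*([\d.]+)%", line.strip())
            if m and host and float(m.group(1)) > USAGE_THRESHOLD:
                flagged.append(f"{host} at {m.group(1)}% used")
        return flagged

    if __name__ == "__main__":
        for warning in flag_busy_nodes(hdfs_report()):
            print("WARNING:", warning)

Run on a schedule against a live cluster, the same pattern extends naturally to alerting, which is the kind of proactive system management the posting emphasizes.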

Additional Information

Competencies and Personal Attributes:

  • Demonstrated ability to work and deliver on multiple complex projects on time.
  • Understanding of Hadoop tools and utilities (HDFS, Pig, Hive, MapReduce, Sqoop, Flume, Spark, Kafka) and CDH.
  • Understanding of Linux/Unix, especially RHEL.
  • Working experience using a scripting language such as Bash, Python, or Perl.
  • Ability to debug/trace Java or Scala code is an asset.
  • Good understanding of, and experience with, systems automation, scheduling, agile code promotion, system access, and proactive system management (DevOps).
  • Familiarity with orchestration workflows and high-level configuration management concepts and implementations.
  • Experience in process analytics and process flow documentation.
  • Knowledge of source code repository systems and data lineage standards, including the ability to use revision control systems such as Git.
  • Proficient with operating and/or developing Java applications.
  • Familiarity with hosting models consistent with those of Google, Amazon, Microsoft, and other next-generation technology companies.
  • Experience using RESTful API-based web services and applications (a brief sketch follows this list).
  • Familiarity with using orchestration systems, and automation tools such as Puppet, Chef, Ansible or Saltstack.
  • Database experience with MySQL, PostgreSQL, DB2, or Oracle.
  • Experience with cloud infrastructure and virtual environments such as KVM, Docker, or Kubernetes.
  • Familiarity with networking, firewalls and load balancing.
  • Proactive and organized, with excellent analytical and problem-solving skills.
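
As a companion to the RESTful services item above, here is a small, hypothetical health-check sketch in the same spirit: poll a JSON health endpoint and report its status. The URL and the `status` field are placeholders; a real check would target whatever API the platform actually exposes (for example, a Hadoop service's web endpoint).

    #!/usr/bin/env python3
    """Minimal sketch: poll a RESTful health endpoint (hypothetical URL and JSON shape)."""
    import json
    import urllib.request

    HEALTH_URL = "http://example.internal:8080/api/v1/health"  # placeholder

    def check_health(url, timeout=5.0):
        # Fetch the endpoint and parse the JSON body; treat any network
        # or parse failure as unhealthy rather than crashing the check.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                payload = json.load(resp)
        except (OSError, ValueError) as exc:
            print(f"UNREACHABLE: {url} ({exc})")
            return False
        status = payload.get("status", "unknown")  # assumed field name
        print(f"{url}: {status}")
        return status == "healthy"

    if __name__ == "__main__":
        check_health(HEALTH_URL)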