Big Data Engineer / Lead

Cynet Systems • Chicago, IL

Industry: Technology • 5 - 7 years

Posted 33 days ago by Anne Lee

Job Title: Big Data Engineer / Lead - Direct Hire / Full Time / Perm

Job Location: Chicago, IL / Houston, TX

Job Type: Full Time / Perm / Direct Hire + Benefits

Job Description:

Mandatory Skills:

  • Hive SQL, Linux shell scripting, HDFS, Java/Scala, Python, Spark, Kafka; some Node.js experience is a plus

Role Summary/Purpose:

  • We are looking for a Technical Data Engineer Lead to lead the development of a consumer-centric, low-latency analytics environment leveraging Big Data technologies and to transform legacy systems.

Essential Responsibilities:

  • Lead a development team of big data engineers
  • Implement a big data enterprise data lake and BI/analytics system using Hive, Spark, and Hadoop
  • Responsible for design, advanced development, testing oversight, and implementation
  • Work closely with program managers, scrum masters, and architects to convey technical impacts to development timelines and risks
  • Coordinate with data engineers and API developers to drive program delivery
  • Drive technical development and application standards across the enterprise data lake

Qualifications/Requirements:

  • BS in Computer Science or Engineering; Master's degree preferred
  • 4+ years of hands-on experience with Hadoop, Spark, Kafka, MapReduce, HDFS, Hive, Pig, Sqoop, Git, and Ab Initio.
  • 3+ years of development experience in Agile environments
  • Implementation and leadership experience across the technology stack.
  • Ability to lead and influence across departments and across levels of leadership
  • Proven ability to organize/manage multiple priorities coupled with the flexibility to quickly adapt to ever-changing business needs.
  • Excellent written and oral communication skills. Adept at presenting complex topics, influencing, and executing with timely, actionable follow-through.
  • Strong analytical and problem-solving skills with the ability to convert information into practical deliverables.
  • Uses rigorous logic and methods to solve difficult problems.
  • Experience working in real-time data ingestion is required.
  • Experienced in sourcing and processing structured, semi-structured and unstructured data
  • Experience in data cleansing/transformation and performance tuning.
  • Experience in Storm, Kafka, and Flume would be a plus.
  • Hortonworks or Cloudera or MapR Certification would be a plus
  • Basic knowledge of Big Data administration (Ambari).

Desired Characteristics:

  • Extensive experience working with big data platforms
  • Demonstrated experience building strong relationships with senior leaders
  • Strong leadership and influencing skills
  • Outstanding written and verbal skills and the ability to influence and motivate teams