Education/Qualification: Bachelor’s degree in Computer Science or a related field, or equivalent education and experience
- Must be a subject matter expert in Apache Spark
- Proficiency with Big Data processing technologies (Hadoop, Spark) and cloud platforms (AWS)
- Experience building data pipelines and analysis tools using Python, PySpark, and Scala
- Create Scala/Spark jobs for data transformation and aggregation
- Produce unit tests for Spark transformations and helper methods
- Write Scaladoc-style documentation for all code
- Design data processing pipelines
- Good to have experience with Hadoop and AWS
- Good to have a Spark certification
- Java background preferred
- Passionate about learning new technologies
- Ability to learn new concepts and software quickly
- Analytical approach to problem-solving; ability to use technology to solve business problems
- Familiarity with database-centric applications
- Ability to communicate effectively with both technical and non-technical audiences
- Ability to work in a fast-paced environment
- Spark/Scala
- AWS Glue
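As a rough illustration of the "Scala jobs for data transformation and aggregation" and "Scaladoc-style documentation" responsibilities above, the following is a minimal sketch of the kind of documented aggregation logic involved. All names (`SalesAggregator`, `Sale`, `totalsByRegion`) are hypothetical; plain Scala collections are used here for self-containment, though the same `groupBy`-then-sum shape carries over to Spark's Dataset API.

```scala
/** Hypothetical aggregation helper of the kind a Spark job might call.
  *
  * Demonstrates Scaladoc-style documentation alongside a simple
  * transform-and-aggregate step over sales records.
  */
object SalesAggregator {

  /** One raw sales record.
    *
    * @param region region code the sale occurred in
    * @param amount sale amount in a single agreed currency
    */
  final case class Sale(region: String, amount: Double)

  /** Sums sale amounts per region.
    *
    * @param sales the raw records to aggregate
    * @return a map from region code to total sales amount
    */
  def totalsByRegion(sales: Seq[Sale]): Map[String, Double] =
    sales
      .groupBy(_.region)                               // bucket records by region
      .map { case (region, rs) => region -> rs.map(_.amount).sum } // sum each bucket
}
```

Usage: `SalesAggregator.totalsByRegion(Seq(Sale("EU", 10.0), Sale("EU", 5.0), Sale("US", 2.0)))` yields `Map("EU" -> 15.0, "US" -> 2.0)`. A unit test of this method (another responsibility listed above) would assert exactly that equality.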