Big Data Engineer

Maximus • Owings Mills, MD

Industry: Education, Government & Non-Profit

Experience: 5 - 7 years


Job Description Summary

The Data Engineer is an accomplished technical leader and team player with substantial software engineering experience, preferably including experience in the healthcare industry. The candidate must have hands-on experience with enterprise-level software development, integration, and implementation using advanced data science or big data technologies. The ideal candidate will have a strong foundation in Java, Spark, MapReduce, REST APIs, AWS, S3, PostgreSQL, NoSQL, and graph database technologies.


The candidate must demonstrate a willingness to learn new cutting-edge technologies and overcome technical challenges in a fast-paced environment. The candidate will provide design and implementation expertise to a cross-functional software development team. The Data Engineer will play a key role in developing a next-generation data analytics platform leveraging the latest data science technologies, DevOps, cloud computing, and Data Lake / big data technologies.


Job Responsibilities:

• Architect, design, code, and implement a next-generation data analytics platform using software engineering best practices and the latest technologies, including:

• Apache Spark, Java, Scala, and R

• Graph database technologies such as Neo4j and Amazon Neptune

• NoSQL technologies such as Cassandra, HBase, and DynamoDB

• Spark integration with big data (Hadoop) and Amazon EMR

• Provide software expertise in Spark-based applications, Java application integration, web services, and cloud computing.

• Develop solutions to enable a metadata/rules-engine-driven data analytics application leveraging open-source and/or cloud-native components.

• Mentor team members in the fine art of data engineering and advanced data science technologies, and demonstrate a passion for learning and growing.

• Develop solutions in a highly collaborative and agile environment.

• Perform all other duties as assigned or directed


Education and Experience:

• Bachelor's degree in computer science or a related field

• At least five (5) to six (6) years of experience with full lifecycle development

• At least four (4) years of combined experience with Java, Scala, or R

• At least two (2) years of combined experience with Apache Spark or MapReduce

• Experience on an Agile development team, preferably SAFe

• Education and/or formal training may substitute for the experience requirement

• U.S. citizenship or the legal right to work in the United States without sponsorship


Technical Skills:

• Excellent knowledge of Spark SQL and ORM technologies (JPA2, Hibernate)

• Excellent knowledge of NoSQL and Data Lake best practices

• Excellent knowledge of core Java, the collections framework, generics, and multithreading

• Basic knowledge of Unix/Linux commands, especially for processing data files

• Preferred experience with the Spring Framework (Boot, Cloud, Security, Data)

• Preferred experience with ATDD and associated technologies (FitNesse, JUnit, Karma/Jasmine)

• Preferred experience delivering code to production using Continuous Integration and Continuous Delivery (CI/CD) best practices and DevOps

• Preferred experience with AWS cloud technologies (S3, Redshift, Lambda, Glue, QuickSight)