Data Engineer in Weehawken, NJ

$80K - $100K (Ladders Estimates)

UBS

Weehawken, NJ 07086

Industry: Finance & Insurance

5 - 7 years

Posted 55 days ago

Your role

Are you interested in pursuing your career in Asset Management? Does working in an Engineering business excite you? We're looking for someone to:

- create and implement highly scalable and reliable distributed data architectures

- develop using core Java, Apache Cassandra, Apache Spark, Apache Hadoop & open-source technologies, including data distribution networks

- deliver data into the data services layer and API components

- work with business and technical leads to define the implementation design and code the assigned modules/responsibilities to the highest quality

- determine the technical approaches to be used and define the appropriate methodologies

- work in a collaborative, multi-site environment to support rapid development and delivery of results and capabilities (i.e. Agile SDLC)

- communicate technical analyses, recommendations, status, and results to the project management team

Your team

You'll be working in Group Technology, on the Asset Management Technology team based in Weehawken, NJ.

Your expertise

You have:

- a 4-year bachelor's degree or the international equivalent in Computer Science, Math, Physics, Engineering, or a related field (Master's degree preferred)

- 4-5 years of strong hands-on experience with Apache Cassandra and Cassandra data modeling (conceptual, logical, and physical data modeling)

- strong experience developing with at least two programming languages such as Java, Python, or Scala

- experience working with distributed systems technologies and distributed computing frameworks

- a strong drive and interest to learn new technologies quickly and work in a fast-paced software development environment

- strong hands-on experience with data ingestion technologies for Big Data applications, including Cassandra or DataStax data stores and Apache Solr index/table creation

- strong hands-on experience with data extraction and delivery from traditional RDBMSs including Oracle, MySQL, and SQL Server, and with unstructured data ingestion

- experience with ETL services, including ETL within distributed processing environments

- experience with Apache Spark for distributed programming



You are:

- capable of working in a collaborative, multi-site environment to support rapid development and delivery of results and capabilities (i.e. Agile SDLC)

- experienced with cloud architectures & services like AWS and Azure

- experienced working with distributed systems technologies and distributed computing frameworks

- well-versed with development toolkits like Maven, Git, SBT, Continuous Integration suites, automated deployments, JIRA, and wikis


Valid Through: 2019-10-14