- Job ID: 18014070
Citi is looking for a Developer to help design and develop an enterprise-level data and analytics platform. The developer will be part of a team building a scalable platform used by our clients for discovery, access, and analytical processing of data. The platform will support batch and real-time analytics and expose its capabilities through a set of APIs. It will also provide sandboxing capabilities that allow clients to further process the data.
- The candidate will play a central role in the development of a scalable data platform used for discovery, access, and analytics on cross-business data
- Team members will be part of a global team with a presence in the US, Warsaw, and many other Citi locations
- Requires strong analytical skills to filter, prioritize, and validate potentially complex material from multiple sources
- Citi also has strict coding and engineering standards, from proper unit testing to continuous integration; the candidate should be familiar with the relevant tools
- Get the chance to learn the entire business flow of the Investment Bank
- Be responsible for design and development of key components in the big-data analytics platform
- Build and manage highly available, secure, and scalable big-data clusters
- Architect, implement, and/or validate integrations with 3rd party applications
- Follow Citi’s engineering standards for all phases of software development
- Consult on architecture and design, bootstrap, and/or implement strategic projects
- A minimum of 5 years of experience
- Experience with Agile development and Scrum
- Banking and securities domain knowledge would be an added advantage
- Programming experience in one or more application or systems languages (Python, Java, Scala, etc.)
- A significant background in functional programming, preferably Scala
- Distributed systems design experience, including an understanding of distributed systems concepts and principles
- Hadoop ecosystem tools (Spark, Hive, Impala, MapReduce, etc.)
- Experience extending and implementing core functionality and libraries in data processing platforms (Hive/Pig UDFs, Spark / Spark SQL, etc.)
- Strong understanding of different storage architectures and their appropriate application.
- Database performance concepts such as indices, segmentation, projections, and partitions
- A commitment to writing understandable, maintainable, and reusable software.
- Willingness to learn new languages and methodologies.
- Experience working with business partners and engineers to gather, understand, and bridge definitions and requirements.
- An innate desire to deliver and a strong sense of accountability for your work.
Qualifications: Bachelor’s degree (in science, computer science, information technology, or engineering)