The team not only provides storage solutions for large datasets but also exposes high-performance access APIs and data visualization tools. We constantly push boundaries by applying state-of-the-art technologies in big data, Kafka queues, and distributed computing to support enhanced business analytics and data insights, allowing our customers to make savvy financial decisions. You’ll gain a deep understanding of multi-dimensional data modeling and processing, how to architect high-performance, low-latency backend systems, and intuitive visual design.
We’ll trust you to:
- Define and design large-scale distributed systems that serve both interactive and batch use cases.
- Evaluate emerging open-source technologies for distributed computing and integrate them so our platform improves and scales better.
- Contribute back to the open-source community.
- Collaborate with infrastructure, business, and data teams to gather client requirements for next-generation product development.
You’ll need to have:
- 3+ years of experience with application programming, building large-scale distributed systems, data structures, algorithms, and all phases of the software life cycle.
- Knowledge of programming languages such as Python, C++, and/or Java.
- Experience delivering high-performance, production-quality software to clients.
- Functional understanding of distributed systems architecture.
- An aptitude for analytical problem solving.
- Knowledge of distributed computing platforms such as Kafka and Spark is a plus.
- A BA, BS, MS, or PhD in Computer Science or Engineering, or relevant experience in a technology field.