Our client is a leading global asset management firm that manages some $1.5 trillion in assets for savers and
other clients around the world. We manage investments and develop solutions across the full spectrum of
investment strategies and vehicles: fixed income, equities, commodities, real estate, asset allocation, ETFs, hedge
funds and private equity.
We are looking for a Big Data DevOps Engineer who will be responsible for managing the big data
infrastructure underpinning our proprietary trading systems, risk management systems, and portfolio
management reporting applications. The DevOps Engineer will be responsible for building enterprise
monitoring solutions, implementing configuration management and runbook automation, standardizing
processes, and managing big
data applications and their underlying infrastructure, helping drive scalability and stability improvements
early in the development lifecycle while ensuring operational best practices are followed. The ideal
candidate is an energetic self-starter and a team player with a passion for software engineering and
building automation, who works well under pressure and has strong communication and technical skills.
• Minimum Bachelor’s degree in computer science or a related field.
• Minimum of 3 years’ experience with Big Data technologies, with expertise in HDFS, YARN, Spark,
Hive/Impala, Kafka, and Oozie.
• Minimum of 4 years of DevOps, or combined development and operations, experience.
• Hands-on experience managing distributed systems and clusters.
• Expertise in scripting technologies such as Python.
• Understanding of C++/Java and microservice/SOA architecture.
• Data mining and machine learning experience with excellent quantitative analytics skills is a plus.
• Excellent communication skills; strong problem-solving, analytical, and time management skills.
• Experience analyzing and resolving performance, scalability, and reliability issues.