Technical Stats about Criteo:
· One of the largest Hadoop clusters in Europe with 40PB of stored data and 3.6PB of data processed every day – comparable to Netflix.
· Analytics Infrastructure comparable in size to Uber or Airbnb.
· Excellent scalability: 1) 30B HTTP requests and close to 4B unique banners displayed per day; 2) 3M HTTP requests per second handled during peak times; 3) 500B log lines processed per day; 4) 90Gbps of bandwidth, half of it through peering exchanges. We see ~4B cookies/devices per month, corresponding to more than half of the overall Internet population.
What you will be doing:
· Contribute directly to the development of Criteo’s fundamental infrastructure for doing analytics on large data sets.
· Write high-quality, maintainable code. Mentor other engineers.
· Build scalable, distributed big-data processing systems: develop infrastructure that processes huge amounts of data using Hadoop, MapReduce, Java and Scala, or Spark.
· Collaborate with the rest of the team on system architecture and design.
· Develop open source projects. Consider becoming a committer. We are big users of open source and want to give back to the community.
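To give a flavor of the map/shuffle/reduce model these systems are built on, here is a minimal single-JVM sketch in Java: it counts log lines by HTTP status code, the same shape a Hadoop or Spark job runs distributed across machines. The log format and names here are illustrative, not Criteo's actual pipeline.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative map/shuffle/reduce on one machine; a real Hadoop/Spark
// job distributes the same three phases across a cluster.
public class LogCount {
    // Map phase: extract the key (HTTP status code) from a log line.
    // Assumes a hypothetical "METHOD STATUS" line format for illustration.
    static String statusOf(String line) {
        return line.split(" ")[1];
    }

    // Shuffle + reduce: group lines by key, then count each group.
    static Map<String, Long> countByStatus(List<String> lines) {
        return lines.stream()
                .map(LogCount::statusOf)                    // map
                .collect(Collectors.groupingBy(s -> s,      // shuffle: group by key
                        Collectors.counting()));            // reduce: count per key
    }

    public static void main(String[] args) {
        List<String> logs = List.of("GET 200", "GET 404", "POST 200");
        System.out.println(countByStatus(logs));
    }
}
```

At cluster scale, the `map` step runs on each machine holding a slice of the 500B daily log lines, the grouping step moves pairs with the same key to the same machine, and the counting step reduces locally.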
What you bring to the role:
· MS degree in Software Engineering or a related field.
· 8+ years of programming experience in Java, Scala, C++ or C#.
· 5+ years of experience with large scale big-data processing.
· A good understanding of database processing and SQL. Presto experience is a plus.
· 3+ years of experience with Hadoop and its ecosystem.
· Passion for exceptionally high-quality code, algorithms, creative problem solving, empowerment, agility, teamwork, and superb communication.
· Curiosity and drive - you love solving the hardest problems.