Summary
RecVue has immediate opportunities for extremely talented big data developers who want their lines of code to have a significant, measurable, positive impact on users, the company's bottom line, and the industry. You will work with a group of world-class engineers to build breakthrough features our customers will love, adopt, and use, while keeping our platform stable and scalable. The senior big data developer role at RecVue encompasses architecture, design, implementation, and testing to ensure we build products right and release them with high quality.
Responsibilities
- Architect, design, implement, test, and deliver highly scalable products.
- Implement high-quality programs for large-scale distributed Spark systems, loading and processing disparate data sets with appropriate technologies, including but not limited to those listed in the skills section.
- Develop custom batch-oriented and real-time streaming data pipelines within the Spark ecosystem.
- Work closely with a team of engineers and product managers to build new features our customers will love, adopt, and use, while keeping our trusted platform stable and scalable.
- Resolve technical issues through debugging, research, and investigation, relying on experience and judgment to plan and accomplish goals.
- Act in a technical leadership capacity: mentor junior engineers and new team members, and apply technical expertise to challenging programming and design problems.
- Present your own designs to internal/external groups and review designs of others.
- Possess a quality mindset, squash bugs with a passion, and work hard to prevent them in the first place through unit testing, test-driven development, version control, continuous integration, and deployment.
Skills & Background
- Bachelor’s degree in Computer Science or an equivalent field, plus 5+ years of relevant experience.
- 3+ years of experience with Spark and Kafka.
- 5+ years of programming expertise in Java and Scala.
- Hands-on experience with a variety of big data infrastructures, such as:
  - Processing: Spark, Flink, Hadoop
  - Messaging: Kafka, ZooKeeper
  - Storage: Hive, MongoDB
  - Machine Learning: SageMaker, H2O, Keras
- Expertise in database technologies such as Oracle and Spark SQL.
- Extensive knowledge of Spark, with the ability to tune big data workloads.
- Track record of being a top performer in current and past roles.
- Excellent interpersonal and communication skills.
- Experience building highly scalable web applications.
- Master’s degree in Computer Science or an equivalent field (preferred).