We are seeking a detail-oriented candidate with outstanding technical abilities and accomplishments, a strong passion for technology and software craftsmanship, and the willingness to go the extra mile for our customers.
What You’ll Do:
- Contribute to the ongoing development of SnapLogic Extreme and SnapLogic ELT.
- Quickly debug complex issues in code, identify root causes, and validate fixes with unit and integration tests.
- Cycle between projects in weeks rather than years, working closely with customers to harden an early-stage product as it reaches product-market fit.
- Keep on top of emerging trends and technologies in cloud computing and distributed data processing, including open source products.
What We’re Looking For:
- You have deep experience with Java in multithreaded environments and a thorough understanding of object-oriented programming and managing software complexity.
- You have experience building and delivering enterprise SaaS software at scale in public cloud environments (AWS, Azure, GCP, etc.).
- You have experience developing and deploying containerized microservices in public clouds such as AWS and Azure.
- You have experience with batch and stream processing using Java, Scala, and Spark in cloud environments (AWS, Azure, GCP, Databricks, etc.).
- You have experience and expertise with various public cloud technologies and concepts (EC2, EMR, S3, CloudWatch, SNS, VPC, IAM roles, WASB, ADLS, Databricks, etc.).
- You have experience building and debugging complex SQL queries.
- You have a good understanding of ANSI SQL specifications.
- You have experience collaborating with stakeholders such as product management and QA to deliver features in an agile development environment.
- You appreciate the level of code quality required for delivering and maintaining successful enterprise products and you always write code with testability in mind.
- You have excellent communication skills, are comfortable working with little supervision, and have a preference for taking initiative.
- You have a pragmatic and customer-centric view of products and processes.
- Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or a related field.
Nice to have:
- Experience working with cloud data warehouse architectures (Redshift, Snowflake, BigQuery, Azure Synapse, etc.).
- Knowledge of Spark internals.
- Experience with database query optimization.
- Contributions to open source projects like Spark, Hadoop, Flink, etc.
- Interest in AI/ML technologies such as MLlib, scikit-learn, TensorFlow, etc.