- Work with Big Data/Hadoop architecture and related technologies, Spark (RDDs, Datasets, DataFrames, Spark SQL), and streaming technologies such as Spark Streaming and Kafka.
- Work with the Java, Python, and Scala programming languages to develop enterprise applications.
- Work with relational databases such as Oracle and MySQL using SQL and PL/SQL, and with NoSQL databases such as HBase, MongoDB, and Cassandra.
- Work with Unix/Linux shell scripting to build and configure applications.
- Build scalable and resilient applications in private or public cloud environments using cloud technologies.
- Build enterprise applications that enable logging, monitoring, alerting, operational control, and scheduling of big data jobs.
Master’s degree or equivalent in Computer Science/Applications, Information Technology/Systems, or Electronics/Electrical Engineering, plus a minimum of 1 year of experience as a Big Data Engineer, Big Data Analyst, IT Consultant, or in a related role.
Must be willing to travel/relocate to anywhere in the US.