Software Engineers (Multiple openings)
- Create Hadoop applications over different Hadoop distributions using Java, Spark, and HBase
- Design and implement data masking and encryption algorithms for distributed big data systems
- Design a distributed platform for large-dataset processing using Java and C/C++
- Implement Java code to build a Hadoop file-processing agent
- Install, evaluate, and research leading big data technologies
- Develop API interfaces using Java, Python, and Spark
- Develop virtualized sandbox environments enabling prospective clients to trial solutions on their own systems
- Collect, clean, and assemble large amounts of data from public sources, or use machine-generated data, to test the scalability of solutions
- Evaluate and improve the performance of large-scale distributed systems processing terabytes of data daily
40 hours per week
Education & Experience
Requires a Master’s degree in EE, CE, CS, or a closely related field.