Sr. Big Data Engineer 6 - Apps Systems Engineer 6

Wells Fargo

Concord, NC

Industry: Financial Services


11 - 15 years

Posted 369 days ago

Job Description

The Data Quality and Governance team, under Enterprise Information Management Services, has an opening for a Sr. Big Data leader (Hadoop/Spark) to be part of an exciting and fast-paced team focused on delivering data quality, governance, and analytical capabilities such as data lineage, data management, data quality, metrics, and reporting. The resource would be responsible for leading the design and implementation of big-data-based technologies, and for providing services such as APIs for other teams to leverage the offerings built in this space. The resource would also have management responsibility for a couple of full-time employees.

While there are hundreds of opportunities to make your mark on this team, here are a few other things that you will be doing:

  • Take a project from scoping and requirements through to launch
  • Work comfortably on multiple deliverables, often with tight deadlines
  • Write code, create test data, and conduct interface and unit tests
  • Provide technical support, advice, and consultation on issues relating to supported applications
  • Maintain and manage the continuous integration and continuous deployment tools and processes
  • Create and maintain process documentation for programs
  • Assure quality, security, and compliance for supported systems and applications
  • Conduct POCs and build solutions on new technologies


  • 10+ years of application development and implementation experience
  • 4+ years of Hadoop experience
  • 4+ years of Big Data experience
  • 2+ years of Core Java experience
  • 2+ years of Python experience
  • 4+ years of database experience
  • 2+ years of RESTful or SOAP web services experience


  • Excellent verbal, written, and interpersonal communication skills
  • A BS/BA degree or higher
  • Knowledge and understanding of Python, R, or Scala
  • 3+ years of machine learning experience
  • Knowledge and understanding of BI platforms and tools such as SSAS, SSIS, SSRS, and T-SQL


  • Prior experience building data governance solutions on top of a data lake is highly preferred
  • 4+ years of experience with any of these: HiveQL, SparkSQL, Drill
  • Experience writing and implementing MapReduce and Spark jobs
  • 2+ years of experience with messaging systems such as Kafka or RabbitMQ
  • 2+ years of experience building stream-processing systems using solutions such as Storm, Spark Streaming, or MapR Streams
  • Experience with NoSQL databases such as HBase, Cassandra, or MongoDB
  • Knowledge of ETL tools for Hadoop such as Talend, Informatica, or Ab Initio
  • Experience with code/build/deployment tools such as Git, SVN, Maven, sbt, and Jenkins
  • Experience with MapR, Hortonworks, or Cloudera; MapR preferred
  • Experience with technologies such as J2EE, Spring Framework, Spring Data, XML processing, SOAP, REST APIs, HTTP, JSON, UNIX, Elasticsearch, AngularJS, and ReactJS

Job Expectations

  • Flexibility to work in a 24/7 environment, including weekends and holidays
  • Ability to work on call as assigned
  • Ability to travel up to 10% of the time