JCPenney, one of the nation's largest apparel and home furnishings retailers, combines an expansive footprint of more than 850 stores across the United States and Puerto Rico with a powerful e-commerce site, jcpenney.com, to connect with shoppers how, when and where they prefer to shop. At every customer touchpoint, she will get her Penney's worth of a broad assortment of products from an extensive portfolio of private, exclusive and national brands. Powering this shopping experience is the customer service and warrior spirit of over 100,000 associates across the globe, all driving toward the Company's three strategic priorities: strengthening private brands, becoming a world-class omnichannel retailer and increasing revenue per customer. We're looking for motivated, talented individuals who can emerge as Warriors in our organization.
We are looking for a Senior Specialist to join our CRM team working on customer marketing initiatives, with a focus on Big Data technology. Reporting directly to the Sr. Manager of the CRM Big Data team, the Senior Specialist works on challenging, mission-critical projects and will collaborate with other technology and business partner teams to drive and develop cutting-edge data solutions in an agile environment.
- Partner with IT on growing the data infrastructure and pipelines to support data and analytics, including quality, governance, lifecycle management, and compliance.
- Partner with marketing analysts and data scientists to build an infrastructure/pipeline to collect and analyze data.
- Be the key business SME on tools for analysts to access and use the platform.
- Analyze large amounts of data and help derive insights and value from it.
- Stay on top of evolving technology (streaming, etc.) to suggest, prototype, and implement improvements to the data architecture.
- Mentor data consumers on the data and analytics teams.
Core Competencies & Accomplishments:
- Bachelor’s degree in computer science or engineering is required.
- 5+ years of experience with data science languages such as R and Python.
- 5-7 years of strong system design/development experience building large-scale distributed systems and products.
- 3 to 5 years of hands-on experience building scalable data pipelines at multi-terabyte scale using big data technologies such as Kafka, Hadoop, Spark, Hive, streaming technologies (Storm, Spark Streaming, Kafka Streams, etc.), and HBase/Cassandra.
- 3+ years of experience using data management tools, including SQL/DBMS and NoSQL technologies such as HBase, Cassandra, and Elasticsearch.
- Ability to solve complex problems, such as challenging programming problems.
- Knowledge of machine learning/distributed systems is preferred.