Responsible for many phases of the software development lifecycle, including technical research, requirements analysis, high-level and technical design, implementation of product enhancements, unit testing, debugging, and maintenance.
The candidate will work on multiple projects, including but not limited to our public-facing website and applications for various units such as Telecom, GUFPA, FACTS, Gas, and Electric.
Ability to integrate with large teams, demonstrating strong verbal and written communication skills; problem-solving skills and critical thinking are essential to carry out the required tasks effectively.
Participate as a member of a small team to unlock the full potential of Big Data through a combination of platform technology, collective human intelligence, and the vast data resources available to our company.
Gather and process raw, structured, semi-structured, and unstructured data at scale, including writing scripts, developing programmatic interfaces against web APIs, scraping web pages, and processing Twitter feeds.
Assists in defining software architectures. Collaborates with leads to explore existing systems, determine areas of complexity and potential risks to successful implementation, and learn the applications' capabilities.
Work in cross-disciplinary teams with KPMG industry experts to understand client needs and ingest rich data sources such as social media, news, internal/external documents, emails, financial data, and operational data.
Will provide programming and analysis to new and existing Fiserv clients during implementations and conversions to ensure accurate and timely implementations and data conversions, and to drive overall client satisfaction.
Perform all aspects of the software lifecycle, from requirements definition through software release, including documentation, test planning and execution, software maintenance, architecture development and design, debugging, implementation, and integration.
Use strong technical and analytical expertise to explore and examine data from multiple disparate sources, with the goal of discovering patterns and previously hidden insights that further the improvement of organizational data quality and enterprise IT architecture management.