Key responsibilities
- Develop Scala/Spark programs, scripts, and macros for data extraction, transformation, and analysis (a minimal example job is sketched after this list)
- Design and implement solutions to meet business requirements
- Support and maintain existing Hadoop applications and related technologies
- Develop and maintain metadata, user access and security controls
- Develop and maintain technical documentation, including data models, process flows and system diagrams
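By way of illustration, here is a minimal sketch of the kind of Scala/Spark extraction-and-aggregation job this role involves. The object name, input/output paths, schema options, and column names are hypothetical, not taken from the posting:

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

// Hypothetical batch job: read raw trade records, drop invalid rows,
// and derive total notional per counterparty and trade date.
object DailyNotionalJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-notional")
      .getOrCreate()

    // Assumed input location and format; a real job would take these as parameters.
    val trades = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/raw/trades")

    val dailyNotional = trades
      .filter(F.col("notional") > 0)                       // discard invalid records
      .groupBy(F.col("counterparty"), F.col("trade_date")) // aggregate per key
      .agg(F.sum("notional").as("total_notional"))

    // Write curated output for downstream consumers.
    dailyNotional.write
      .mode("overwrite")
      .parquet("hdfs:///data/curated/daily_notional")

    spark.stop()
  }
}
```

In practice such a job would be packaged as a JAR (e.g., via sbt assembly) and submitted with spark-submit, which is what the CI/CD and Jenkins points under Requirements refer to.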
Requirements
- Minimum 3-5 years of experience on Scala/Spark-related projects and/or engagements
- Create Scala/Spark jobs for data transformation and aggregation per complex business requirements
- Should be able to work in a challenging, agile environment with quick turnaround times and strict deadlines
- Perform unit tests of the Scala code (see the test sketch after this list)
- Raise PRs, trigger builds, and release JAR versions for deployment via the Jenkins pipeline
- Should be familiar with CI/CD concepts and processes
- Peer-review the code
- Perform root cause analysis (RCA) of raised bugs
- Should have an excellent understanding of the Hadoop ecosystem
- Should be well versed in the technologies listed below
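As a sketch of the unit-testing expectation above: one common approach is to factor pure transformation logic out of the Spark job so it can be tested with ScalaTest without a cluster. The function and test names here are hypothetical, assuming ScalaTest 3.x:

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical pure transformation extracted from a Spark job
// so it can be unit tested in isolation.
object Transformations {
  def normalizeCounterparty(name: String): String =
    name.trim.toUpperCase.replaceAll("\\s+", " ")
}

class TransformationsSpec extends AnyFunSuite {
  test("normalizeCounterparty trims, upcases, and collapses whitespace") {
    assert(Transformations.normalizeCounterparty("  acme   corp ") == "ACME CORP")
  }

  test("normalizeCounterparty leaves clean names unchanged") {
    assert(Transformations.normalizeCounterparty("ACME CORP") == "ACME CORP")
  }
}
```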
Preferred
- Relevant certifications (e.g., Scala, Spark, Hadoop, Performance)
- Knowledge of other programming languages (e.g., Python, R)
- Insight into cloud-based solutions such as Snowflake
- Experience in financial services, preferably in the credit risk domain
Ref. code 1723578
Posted on 03 Oct 2023
Experience level Experienced Professional
When you join Capgemini, you don't just start a new job. You become part of something bigger.
We bring together passionate, skilled people, a tech-driven approach to innovation, and a deep commitment to our clients to help organizations unlock the true value of technology.
As a graduate or an experienced professional, you will be working with the world's leading brands to enhance and transform the way they do business.