Company:
Ampcus Incorporated
Location: Jersey City
Closing Date: 04/12/2024
Hours: Full Time
Type: Permanent
Job Requirements / Description
Role: Scala Consultant
Location: US, Remote
Experience: 8 to 12 years
Job Type: FTE
What You'll Do:
This role is for a consultant with strong Scala skills and solid DynamoDB experience.
The selected developer will work with Scala and DynamoDB to improve the performance of Scala Spark jobs.
Expertise You'll Bring:
Develop and optimize Scala-based Spark jobs to process large datasets efficiently.
Implement best practices in Scala coding to ensure high performance, reliability, and scalability of Spark applications.
Design and implement data models and storage solutions using Amazon DynamoDB to support high-performance data processing tasks.
Identify performance bottlenecks in existing Scala Spark jobs and DynamoDB interactions, and provide actionable solutions.
Optimize Spark job configurations and data partitioning to reduce processing time and resource consumption.
Fine-tune DynamoDB read/write operations, indexing, and data distribution for optimal performance.
Work closely with data engineers, architects, and DevOps teams to integrate Scala Spark jobs into a larger data processing ecosystem.
Collaborate with cross-functional teams to understand business requirements and translate them into scalable technical solutions.