Company:
Aloden LLC
Location: Wilmington, DE
Closing Date: 19/10/2024
Hours: Full Time
Type: Permanent
Job Requirements / Description
ETL Engineer (Ab Initio / Java Spark)
Location: Wilmington, DE (Hybrid: 3 days onsite, 2 days WFH)
Onsite Requirement: Candidate must be able to work onsite from day one
Experience: 4-5 years minimum
Essential Skills:
ETL Expertise: Strong hands-on experience with Ab Initio (mandatory) or other ETL tools such as Informatica or Talend.
Big Data: Proven experience working with big data in a large, complex organization.
Programming: Proficiency in Core Java development (Scala or Python may be considered in place of Java experience).
ETL Pipeline Development: Hands-on experience in designing, building, and maintaining ETL pipelines.
Preferred Skills:
Cloud: Experience with AWS cloud services
Big Data Processing: Knowledge of Hadoop and Spark (especially valuable for legacy ETL migration)
Data Streaming: Familiarity with Kafka
Scripting: Experience with Unix scripting and Python
Responsibilities:
Design, develop, and maintain ETL pipelines primarily using Ab Initio
Actively participate in the migration of legacy ETL processes to a Spark-based framework
Collaborate with data analysts and stakeholders to gather and understand data requirements
Optimize ETL processes for performance and efficiency
Troubleshoot and resolve any issues related to data extraction, transformation, and loading
Ensure data quality and integrity throughout the ETL process