Company: Expert In Recruitment Solutions
Location: Tampa
Closing Date: 04/12/2024
Hours: Full Time
Type: Permanent
Job Requirements / Description
Job Title: Senior Azure Data Engineer (Python/C#)
Location: 100% Remote
TOP SKILLS:
• PySpark or Scala
• Spark SQL
• Delta Lake
• Python/C# (prior experience is acceptable; it does not need to be current)
PRIMARY DUTIES AND ACCOUNTABILITIES (% of time in parentheses)
1. Create and maintain optimal data pipeline architecture (20%)
2. Assemble large, complex data sets that meet functional / non-functional business requirements (20%)
3. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. (20%)
4. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Big Data technologies (20%)
5. Deliver automation & lean processes to ensure high-quality throughput & performance of the entire data & analytics platform (10%)
6. Work with data and analytics experts to strive for greater functionality in our analytics platforms (10%)
POSITION SPECIFICATIONS
Minimum:
• Experience building, operating, and maintaining fault-tolerant, scalable data processing integrations using Azure
• Strong problem-solving skills with an emphasis on optimizing data pipelines
• Excellent written and verbal communication skills for coordinating across teams
• A drive to learn and master new technologies and techniques
• Experienced in DevOps and Agile
Preferred:
• Experience using Docker or Kubernetes
• Demonstrated capabilities with cloud and multi-cloud infrastructures (e.g., Azure, AWS, IBM Cloud) and CI/CD pipelines
• Experience with Delta Lake
• Experienced using Databricks and Apache Spark
• Experienced using Azure Data Factory or Synapse Analytics