Company:
eTeam Inc.
Location: Plano
Closing Date: 04/12/2024
Hours: Full Time
Type: Permanent
Job Requirements / Description
Overview:
We are seeking a talented AWS DevOps/MLOps Lead to develop big data and data science platforms on AWS. As models, applications, and data pipelines are created and operationalized, the big data and data science team needs engineers with a strong understanding of cloud-native technology to develop, manage, automate, and support the team's operational capabilities.
Required Skills:
Experience in AWS system and network architecture design, with a specific focus on AWS SageMaker and AWS ECS
Experience developing and maintaining ML systems built with open-source tools
Experience developing with containers and Kubernetes in cloud computing environments
Experience with one or more data-oriented workflow orchestration frameworks (Kubeflow, Airflow, Argo)
Ability to design data pipelines and engineering infrastructure to support our clients' enterprise machine learning systems at scale
Ability to develop and deploy scalable tools and services for our clients to handle machine learning training and inference
Ability to support model development, with a focus on auditability, versioning, and data security
Experience with data security and privacy solutions such as Denodo, Protegrity, and synthetic data generation
Ability to develop applications in Python and deploy them to AWS Lambda and API Gateway (a minimal sketch appears after this list)
Ability to develop Jenkins pipelines using Groovy scripting
Good understanding of testing frameworks such as pytest
Ability to work with AWS services such as S3, DynamoDB, Glue, Redshift, and RDS
Proficient understanding of Git and version control systems
Familiarity with continuous integration and continuous deployment (CI/CD)
Ability to develop Terraform modules to deploy standard infrastructure
Ability to develop deployment pipelines using Jenkins and XL Release
Experience using Python (boto3) to automate cloud operations
Experience documenting technical solutions and solution diagrams
Good understanding of simple Python applications that can be deployed as Docker containers
Experience creating workflows using AWS Step Functions (a second sketch appears below)
Ability to create Docker images that include custom Python libraries
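For illustration only, the sketch below shows the kind of Python work described in this list: a Lambda handler behind API Gateway that uses boto3 to write to DynamoDB. The table name, field names, and handler are assumptions made for the example, not details of the client's actual environment.

    import json

    import boto3

    # Assumed table name for illustration; a real deployment would reference its own resource.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("model-runs")


    def lambda_handler(event, context):
        """Handle an API Gateway proxy request and record a model-run entry in DynamoDB."""
        body = json.loads(event.get("body") or "{}")
        item = {
            "run_id": body.get("run_id", "unknown"),
            "status": body.get("status", "submitted"),
        }
        table.put_item(Item=item)
        return {
            "statusCode": 200,
            "body": json.dumps({"stored": item["run_id"]}),
        }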
Required Skills:
AWS (experience mandatory): S3, KMS, IAM (roles and policies), EC2, ECS, AWS Batch, ECR, Lambda, DataSync, EFS, CloudTrail, Cost Explorer, ACM, Route 53, SNS, SQS, ELB, CloudWatch, VPC, Service Catalog
Automation (experience mandatory): Terraform, Python (boto3), Serverless, Jenkins (Groovy), Node.js
Big data (knowledge): Redshift, DynamoDB, Databricks, Glue, and Athena
Data science (experience): SageMaker, Athena, Glue, DynamoDB, Databricks, MWAA (Airflow)
DevOps (experience mandatory): Python, Terraform, Jenkins, GitHub, Makefiles, and shell scripting
Data Virtualization (knowledge): Denodo
Data Security (knowledge): Protegrity
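As a second purely illustrative sketch of the Python (boto3) automation and AWS Step Functions items above, the snippet below starts a Step Functions execution from Python. The state machine ARN, execution name, and input payload are placeholders, not details of the client's environment.

    import json

    import boto3

    sfn = boto3.client("stepfunctions")


    def start_training_workflow(state_machine_arn: str, run_id: str) -> str:
        """Start a Step Functions execution that could orchestrate a training pipeline."""
        response = sfn.start_execution(
            stateMachineArn=state_machine_arn,
            name=f"training-{run_id}",  # execution names must be unique per state machine
            input=json.dumps({"run_id": run_id}),
        )
        return response["executionArn"]


    if __name__ == "__main__":
        # Placeholder ARN for illustration only.
        arn = "arn:aws:states:us-east-1:123456789012:stateMachine:training-pipeline"
        print(start_training_workflow(arn, "example-001"))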
Qualifications:
Bachelor's degree from a reputed institution/university.
14+ years of experience building end-to-end systems as a Platform Engineer, ML DevOps Engineer, or Data Engineer.
4+ years of experience in Python, Groovy, and Java programming.
Experience working in a Scrum environment.