Company: Han IT Staffing
Location: Iselin
Closing Date: 05/11/2024
Hours: Full Time
Type: Permanent
Job Requirements / Description
Job Title: Full Stack ML Engineers
Location: Charlotte, NC or Malvern, PA
Responsibilities:
Integrate AI/ML models with multiple data sources: Ensure seamless data flow in and out of models.
Fine-tune existing models: Optimize performance and adapt models to evolving requirements.
Build and maintain data pipelines: Design and implement ETL processes to support model integration.
Monitor and manage ML models in production: Implement MLOps practices for model monitoring, tracking, and maintenance.
Collaborate with cross-functional teams: Work closely with data scientists, data engineers, and other stakeholders to deliver robust ML solutions.
Drive architecture and engineering best practices: Lead efforts to establish and enforce best practices in building the integration framework.
Technical Skills:
Proficiency in Python and SQL: Essential for data manipulation and integration tasks.
Experience with AWS cloud services: Including but not limited to:
SageMaker
Lambda
Glue
S3
IAM
CodeCommit
CodePipeline
Bedrock
Experience with data pipeline and workflow management tools: Such as Apache Airflow or AWS Step Functions.
Understanding of ETL techniques, data modeling, and data warehousing concepts: To build efficient data pipelines.
Familiarity with AI/ML platforms and tools: Including TensorFlow, PyTorch, MLflow, and others.
Knowledge of MLOps practices: Including model monitoring, data drift detection, and pipeline automation.
Experience with Docker and AWS ECR: For containerization of ML applications.