Senior DevOps Engineer (Data Solutions)-Austin, San Antonio, or Dallas

Company:  H-E-B
Location: Austin
Closing Date: 10/11/2024
Hours: Full Time
Type: Permanent
Job Requirements / Description
Responsibilities:
Since H-E-B Digital Technology's inception, we've been investing heavily in our customers' digital experience, reinventing how they find inspiration from food, how they make food decisions, and how they ultimately get food into their homes. This is an exciting time to join H-E-B Digital--we're using the best available technologies to deliver modern, engaging, reliable, and scalable experiences to meet the needs of our growing audience. If you enjoy taking on new challenges, working in a rapidly changing environment, learning new skills, and applying it all to solve large and impactful business problems, we want you as part of our team.

Our Partners thrive The H-E-B Way. In the Senior DevOps Engineer, Data Solutions job, that means you have a...
HEART FOR PEOPLE... you can organize multiple engineers, negotiate solutions, and provide upward communication
HEAD FOR BUSINESS... you consistently demonstrate and uphold the standards of coding, infrastructure, and process
PASSION FOR RESULTS... you're capable of high-velocity contributions in multiple technical domains

What You'll Do

As a Senior DevOps Engineer, you will be responsible for automating, scaling, supporting, and cost-optimizing our enterprise-scale machine learning pipelines.

You will collaborate closely with data scientists, data engineers, and platform engineers to establish CI/CD and infrastructure-as-code frameworks, develop reusable pipelining libraries, design scalable, highly fault-tolerant architectures, implement model observability and performance monitoring, and deploy and support enterprise-grade ML solutions. You will also optimize machine learning models for production efficiency and cost-optimize each solution to maximize its ROI to the business.

You will build MLOps solutions for a wide array of deeply interesting problem domains, including clickstream, product demand, product quality, logistics, pricing, customer satisfaction, manufacturing, store operations, workforce management, energy usage, and much more.

Collaborate with data scientists, data engineers, and platform engineers to design, build, and deploy ML pipeline tools and frameworks for performance, reliability, and optimal cost.

Take offline models that data scientists build and turn them into real machine learning production systems.

Design, build, and maintain the ML CI/CD pipelines and workflows that automate data collection, data analysis, experimentation, model training, model serving, and monitoring in production (see the illustrative sketch after this list).

Instrument observability and monitoring of pipeline performance, model accuracy, and drift so Data Science can plan for and take corrective action.

Support production ML solutions, particularly those behind mission-critical applications such as (url removed) and the H-E-B App.

Translate models into optimized production code and work with Data Science to validate the results before deployment.

Evaluate third-party tools and frameworks to determine which make sense for H-E-B to adopt.

Establish and enforce team development standards.
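
To make the responsibilities above concrete, here is a minimal illustrative sketch (not H-E-B's actual system) of the kind of ML CI/CD workflow this role would own. It assumes Airflow 2.4+ and its TaskFlow API, one of the orchestration frameworks named in the qualifications below; every task, path, and threshold in it is hypothetical.

    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["mlops"])
    def ml_training_pipeline():
        """Hypothetical daily collect/train/evaluate/deploy workflow -- a sketch only."""

        @task
        def collect_data() -> str:
            # Pull the latest versioned feature snapshot (hypothetical location).
            return "s3://example-bucket/features/latest.parquet"

        @task
        def train_model(features_path: str) -> str:
            # Train a candidate model on the snapshot and persist the artifact (hypothetical path).
            return "s3://example-bucket/models/candidate.pkl"

        @task
        def evaluate_model(model_path: str) -> bool:
            # Compare the candidate against the production model on a holdout set;
            # promote only if it clears an agreed accuracy threshold.
            candidate_accuracy, threshold = 0.92, 0.90  # placeholder values
            return candidate_accuracy >= threshold

        @task
        def deploy_model(promote: bool) -> None:
            # Push a promoted model to the registry/serving layer and emit
            # performance and drift metrics for monitoring dashboards.
            if promote:
                print("Promoting candidate model to production serving.")
            else:
                print("Keeping the current production model; candidate rejected.")

        deploy_model(evaluate_model(train_model(collect_data())))


    ml_training_pipeline()

In practice each task would call versioned training code, a model registry, and monitoring instrumentation rather than the placeholder logic shown here; the point is the automated collect, train, evaluate, deploy flow described in the responsibilities above.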

Who You Are

Bachelor’s degree or higher in computer science, engineering, or related quantitative field

5+ years of experience as an MLOps engineer, or

3+ years of experience as a DevOps engineer and 2+ years of experience building, training, and deploying ML models

Solid experience supporting a Data Science or Data Solutions Team

Solid knowledge of model versioning, model deployment, model serving, model monitoring, and data versioning.

Hands-on experience with DevOps (development operations), MLOps (machine learning operations), or container-based application development (Docker, Kubernetes, etc.)

Solid experience setting up CI/CD pipeline processes, managing versioned data sets, optimizing model code, deploying pipelines, and resolving support incidents.

Expertise designing and implementing data pipelines using modern data engineering approaches and tools: Spark, PySpark, Java, Docker, Kafka/Confluent, etc.

Solid experience with and knowledge of data and ML frameworks: Spark/PySpark, Dask, scikit-learn, SciPy, statsmodels, and Plotly.

Exposure to deep learning approaches and modeling frameworks (PyTorch, TensorFlow, Keras, etc.)

Hands-on experience with cloud computing platforms such as AWS, Google Cloud, etc.

Proficiency in Python and strong object-oriented design skills

Knowledge of other analytical programming languages such as R, Scala, and SAS.

Familiarity with one or more workflow orchestration frameworks (Kubeflow, Airflow, Argo, etc.)

Knowledge of RESTful API design

A mindset to understand real-world business problems before building technical solutions

A technical mindset and analytical approach

Great attention to detail

Good leadership skills

A sense of ownership and pride in your performance and its impact on the company’s success

Critical thinking and problem-solving skills

DATA3232

Apply Now