k0deHut is hiring a

Data Engineer

Job Overview

  • Posted 2 months ago
  • Full Time
  • Sandton City, Rivonia Road, Sandhurst, Sandton, South Africa
  • 90000

Roles & Responsibilities

We are seeking a talented and experienced Data Engineer to join our MLOps team, which drives critical business applications. As a key member of our team, you will play a crucial role in designing, building, testing, deploying, and monitoring end-to-end data pipelines for both batch and streaming use cases. You will work closely with data scientists, actuaries, software engineers, and other data engineers to contribute to architecting our Client’s modern Machine Learning ecosystem.

Areas of responsibility may include, but are not limited to:

Data Pipeline Development:

Design, build, and maintain ETL pipelines for both batch and streaming use cases (a minimal batch sketch follows this list).
Optimize and refactor existing ETL pipelines to improve efficiency, scalability, and cost-effectiveness.
Build data visualizations and reports.
Re-architect data pipelines for a modern data stack, leveraging current data tools to support actuarial, machine learning, and AI use cases.
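
To give a concrete flavour of the batch side of this work, here is a minimal extract-transform-load sketch in plain Python; the file, table, and column names (claims.csv, claims, claim_id, amount) are illustrative placeholders rather than details of this role, and a production pipeline would typically run under an orchestrator such as Airflow or Azure Data Factory.

    # Minimal batch ETL sketch; all names are illustrative placeholders.
    import csv
    import sqlite3

    def run_pipeline(src_csv: str, db_path: str) -> None:
        # Extract: read raw rows from a CSV export.
        with open(src_csv, newline="") as fh:
            rows = list(csv.DictReader(fh))
        # Transform: keep well-formed records and normalise the amount field.
        cleaned = [
            (r["claim_id"], float(r["amount"]))
            for r in rows
            if r.get("claim_id") and r.get("amount")
        ]
        # Load: write the cleaned records into the target table.
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS claims (claim_id TEXT, amount REAL)")
        conn.executemany("INSERT INTO claims VALUES (?, ?)", cleaned)
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        run_pipeline("claims.csv", "warehouse.db")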

Technology Stack:

Utilize expertise in Python and SQL for data pipeline development.
Use Linux and shell scripting for system automation (see the sketch after this list).
Hands-on experience working with Docker and container orchestration tools is advantageous.
Knowledge of Spark is advantageous.
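
As a sketch of the Linux and shell-scripting point, a small cron-style task can be wrapped in Python's standard library; the log and archive paths below are hypothetical.

    # Archive yesterday's application logs, a tiny shell-style automation task.
    import subprocess
    from datetime import date, timedelta

    yesterday = (date.today() - timedelta(days=1)).isoformat()
    # Equivalent to: tar -czf /tmp/app-logs-<date>.tar.gz /var/log/myapp
    subprocess.run(
        ["tar", "-czf", f"/tmp/app-logs-{yesterday}.tar.gz", "/var/log/myapp"],
        check=True,  # raise if the command fails
    )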

Platforms and Tools:

Experience working with ETL tools such as Azure Data Factory, dbt, Airflow, and Step Functions.
Use Databricks, Kafka, and Spark Streaming for big data processing across multiple data sources (a streaming sketch follows this list).
Work with both relational and NoSQL databases. Knowledge of and experience with high-performance in-memory databases is advantageous.
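
For the streaming side, a minimal Spark Structured Streaming consumer reading from Kafka might look like the sketch below; the broker address and topic name are placeholders, and the spark-sql-kafka connector package is assumed to be on the Spark classpath.

    # Sketch: consume a Kafka topic with Spark Structured Streaming.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
        .option("subscribe", "events")                      # placeholder topic
        .load()
    )

    # Kafka values arrive as bytes; cast to string before downstream parsing.
    query = (
        events.selectExpr("CAST(value AS STRING) AS value")
        .writeStream.format("console")
        .outputMode("append")
        .start()
    )
    query.awaitTermination()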

DevOps and Automation:

Work with Azure DevOps to automate workflows and collaborate with cross-functional teams (see the sketch after this list).
Familiarity with Terraform for managing infrastructure as code (IaC) is advantageous.
Experience with other big data platforms is advantageous.
Create and maintain documentation of processes, technologies, and code bases.
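
As one example of workflow automation against Azure DevOps, its REST API can be scripted from Python; the organisation and project names here are hypothetical, and the api-version value should be checked against what your organisation supports.

    # Sketch: list pipeline definitions via the Azure DevOps REST API.
    import os
    import requests

    ORG, PROJECT = "my-org", "my-project"  # hypothetical names
    pat = os.environ["AZDO_PAT"]           # personal access token

    resp = requests.get(
        f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines",
        params={"api-version": "7.1"},  # verify the supported version
        auth=("", pat),                 # the PAT goes in the password slot
        timeout=30,
    )
    resp.raise_for_status()
    for pipeline in resp.json()["value"]:
        print(pipeline["id"], pipeline["name"])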

Collaboration:

Collaborate closely with data scientists, actuaries, software engineers, and other data engineers to understand and address their data needs.
Contribute actively to the architecture of our Client’s modern Machine Learning data ecosystem.

Personal Attributes and Skills

Strong proficiency in Python, SQL, and Linux shell scripting.
Experience with Spark is advantageous.
Previous exposure to ETL tools, relational and NoSQL databases, and big data platforms, with experience in Databricks and Azure Data Factory being highly beneficial.
Knowledge of DevOps practices and tools, with experience in Azure DevOps being highly beneficial.
Familiarity with Terraform for infrastructure automation.
Ability to collaborate with cross-functional tech teams as well as business/product teams.
Ability to architect data pipelines for advanced analytics use cases.
A willingness to embrace a strong DevOps culture.
Excellent communication skills.
Commitment to excellence and high-quality delivery.
Passion for personal development and growth, with a high learning potential.

Skills Required

  • Machine Learning
  • Python
