Databricks Developer

Remote: Full Remote

Offer summary

Qualifications:

  • Strong background in big data technologies and Spark programming.
  • Experience with data lake architectures and cloud platforms, preferably Azure or AWS.
  • Ability to translate business requirements into scalable data solutions.
  • Familiarity with data governance, security, and compliance requirements.

Key responsibilities:

  • Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks and Data Factory.
  • Implement and optimize Spark jobs and data processing workflows in Databricks.
  • Collaborate with data architects and analysts to deliver scalable solutions based on data requirements.
  • Troubleshoot and resolve issues in production data pipelines.

Workiy Inc. Information Technology & Services SME https://www.workiy.com/
11 - 50 Employees

Job description

This is a remote position.

Job Description:

We are seeking an experienced Databricks Developer to design, develop, and optimize large-scale data processing pipelines using the Databricks platform. The ideal candidate will have a strong background in big data technologies, Spark programming, data lake architectures, and cloud platforms (preferably Azure or AWS). This role requires the ability to translate business requirements into scalable data solutions, maintain code quality, and collaborate across data engineering, analytics, and business teams.

Responsibilities:

  • Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks, Data Factory, and other Azure services.
  • Implement and optimize Spark jobs, data transformations, and data processing workflows in Databricks.
  • Collaborate with data architects and analysts to understand data requirements and deliver scalable solutions.
  • Optimize Spark jobs for performance and cost efficiency.
  • Integrate data from multiple structured and unstructured sources, including APIs, flat files, and relational databases.
  • Use Databricks Notebooks, Jobs, and Workflows for scheduling and automation of data pipelines.
  • Ensure robust documentation and version control for all developed solutions.
  • Support data governance, security, and compliance requirements.
  • Collaborate with DevOps teams to implement CI/CD pipelines for data engineering workloads.
  • Troubleshoot and resolve issues in production data pipelines.

Required profile

Experience

Industry :
Information Technology & Services
Spoken language(s):
English

Other Skills

  • Collaboration
  • Problem Solving
