This is a remote position.
Job Description:
Seeking an experienced Databricks Developer to design, develop, and optimize large-scale data processing pipelines on the Databricks platform. The ideal candidate will have a strong background in big data technologies, Spark programming, data lake architectures, and cloud platforms (preferably Azure or AWS). This role requires the ability to translate business requirements into scalable data solutions, maintain code quality, and collaborate across data engineering, analytics, and business teams.
Responsibilities:
· Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks, Azure Data Factory, and other Azure services.
· Implement and optimize Spark jobs, data transformations, and data processing workflows in Databricks.
· Collaborate with data architects and analysts to understand data requirements and deliver scalable solutions.
· Optimize Spark jobs for performance and cost efficiency.
· Integrate data from multiple structured and unstructured sources including APIs, flat files, and relational databases.
· Use Databricks Notebooks, Jobs, and Workflows to schedule and automate data pipelines.
· Ensure robust documentation and version control for all developed solutions.
· Support data governance, security, and compliance requirements.
· Collaborate with DevOps teams to implement CI/CD pipelines for data engineering workloads.
· Troubleshoot and resolve issues in production data pipelines.