Data Engineer (Databricks, Apache Spark)

Remote: Full Remote

Offer summary

Qualifications:

  • 3+ years of experience in data engineering.
  • Proficiency with Databricks and Apache Spark.
  • Strong SQL skills and experience with relational databases.
  • Knowledge of data warehousing concepts and ETL processes.

Key responsibilities:

  • Design, develop, and maintain scalable data pipelines on Databricks using PySpark.
  • Collaborate with data analysts and scientists to understand data requirements.
  • Optimize and troubleshoot existing data pipelines for performance and reliability.
  • Monitor data pipeline performance and conduct necessary maintenance and updates.

Infosys Poland (https://www.infosysbpm.com/)
1001 - 5000 Employees

Job description

Role: Data Engineer (Databricks/Apache Spark)

Location: any city in Poland

Type of work: remote (only from Poland)

Compensation: base salary + financial bonus


Job responsibilities:

  • Design, develop, and maintain scalable data pipelines on Databricks using PySpark (see the sketch after this list).
  • Collaborate with data analysts and scientists to understand data requirements and deliver solutions.
  • Optimize and troubleshoot existing data pipelines for performance and reliability.
  • Ensure data quality and integrity across various data sources.
  • Implement data security and compliance best practices.
  • Monitor data pipeline performance and conduct necessary maintenance and updates.
  • Document data pipeline processes and technical specifications.
  • Be flexible to work on both development and L2 support tasks.
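
For illustration only, here is a minimal PySpark sketch of the kind of pipeline these responsibilities describe: read a raw table, apply basic transformations and a simple data-quality gate, then write a curated Delta table. The table names, paths, and columns (raw.orders, curated.orders_daily, order_id, order_ts) are hypothetical placeholders, not details of this role's actual codebase.

```python
# Minimal illustrative sketch, not production code. On Databricks the `spark`
# session is provided by the runtime; getOrCreate() simply reuses it.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_pipeline").getOrCreate()

# Extract: read a hypothetical raw table.
raw = spark.read.table("raw.orders")

# Transform: deduplicate, drop rows without a timestamp, derive a date column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_ts").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Simple data-quality gate: fail fast if key fields are missing.
null_keys = clean.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"Data quality check failed: {null_keys} rows with null order_id")

# Load: write a curated Delta table partitioned by date.
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("curated.orders_daily"))
```

In practice a job like this would run as a scheduled Databricks job, with the data-quality gate and write step monitored as described above.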


Required skills:

  • 3+/5+ years of experience in data engineering.
  • Proficiency with Databricks and Apache Spark.
  • Strong SQL skills and experience with relational databases.
  • Experience with big data technologies (e.g., Hadoop, Kafka); see the streaming sketch after this list.
  • Knowledge of data warehousing concepts and ETL processes.
  • Experience with CI/CD tools, particularly Jenkins.
  • Excellent problem-solving and analytical skills.
  • Solid understanding of big data fundamentals and experience with Apache Spark.
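
To illustrate the Spark and Kafka items above, the sketch below uses Spark Structured Streaming to read a Kafka topic and append parsed events to a Delta table. The broker address, topic name, event schema, checkpoint path, and target table (curated.events) are assumed placeholders, not details of this role.

```python
# Minimal illustrative sketch, not production code. Broker address, topic,
# schema, checkpoint path, and target table are assumed placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka_events_ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read the Kafka topic as a streaming DataFrame and parse the JSON payload
# (Kafka delivers the message value as bytes).
events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker-1:9092")
         .option("subscribe", "events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

# Append parsed events to a Delta table; the checkpoint lets the stream
# resume from where it left off after a restart.
query = (
    events.writeStream
          .format("delta")
          .option("checkpointLocation", "/tmp/checkpoints/events")
          .outputMode("append")
          .toTable("curated.events")
)
```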


Preferred skills:

  • Familiarity with cloud platforms (e.g., AWS, Azure).
  • Experience with version control systems (e.g., Bitbucket).
  • Understanding of DevOps principles and tools (e.g., CI/CD, Jenkins).
  • Databricks certification is a plus.


We offer:

  • Work in a dynamic, international, friendly company
  • Free medical care
  • Financial bonus
  • Benefit Platform
  • Life insurance
  • Language courses

Required profile

Experience

Spoken language(s): English

Other Skills

  • Analytical Skills
  • Problem Solving
  • Flexibility
  • Collaboration
