Senior Data Engineer

Remote: Full Remote
Contract: Full-Time (Long Term)
Work from: Anywhere in LATAM countries

Offer summary

Qualifications:

  • Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
  • 8+ years of hands-on experience in data engineering with a proven track record.
  • Strong expertise in SQL, including performance tuning and complex query development.
  • Proven experience with Databricks or Snowflake in a production environment.

Key responsibilities:

  • Design, build, and maintain robust data pipelines for ingesting and transforming large datasets.
  • Develop and optimize SQL and DBT code to model, transform, and load data into a centralized data warehouse.
  • Collaborate with data scientists, analysts, and business stakeholders to deliver clean, reliable, and well-governed data.
  • Mentor and support junior team members in their technical growth.

Company: Sky Systems, Inc. (SkySys)
Industry: Information Technology & Services (startup, 11-50 employees)
Website: https://myskysys.com/

Job description

Role: Senior Data Engineer
Position Type: Full-Time Contract (40hrs/week)
Contract Duration: Long Term
Work Schedule: 8 hours/day (Mon-Fri)
Work Timezone: US Time
Location: 100% Remote (candidates can work from anywhere in LATAM countries)

What You'll Be Doing

We're looking for a highly skilled Senior Data Engineer to lead the development of scalable data pipelines and modern data warehouse solutions. This role will be pivotal in architecting, implementing, and optimizing high-performance data workflows using tools such as SQL, Python, DBT, Databricks, and Snowflake.
You'll partner closely with stakeholders across the organization to understand complex business problems and translate them into robust, data-driven solutions. This is a hands-on role for a data engineering expert who thrives in a collaborative, fast-paced environment.

Key Responsibilities
  • Design, build, and maintain robust data pipelines for ingesting and transforming large datasets from multiple sources.
  • Develop and optimize SQL and DBT code to model, transform, and load data into a centralized data warehouse.
  • Lead the design and implementation of modern data architectures using Databricks or Snowflake.
  • Write efficient, reusable, and scalable Python code to support ETL/ELT workflows (see the illustrative sketch after this list).
  • Collaborate with data scientists, analysts, and business stakeholders to deliver clean, reliable, and well-governed data.
  • Advocate for and implement automated testing, monitoring, and deployment pipelines.
  • Champion data quality, governance, and best practices across the team.
  • Solve complex technical challenges across various layers of the data stack.
  • Mentor and support junior team members in their technical growth.
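For flavor, the sketch below shows, in Python, the kind of transform-and-load step such pipelines contain: cast raw records, apply a simple data-quality rule, and hand clean rows to a loader. It is a minimal illustration under assumed names (raw_orders, clean_orders, and the sample schema are hypothetical), not this team's actual code.

    # Minimal illustrative ELT transform step with a basic data-quality check.
    # All field names and the sample records are hypothetical.
    from datetime import date

    raw_orders = [  # stand-in for records ingested from an upstream source
        {"order_id": "1", "amount": "19.99", "ordered_at": "2024-05-01"},
        {"order_id": "2", "amount": "5.00", "ordered_at": "2024-05-02"},
    ]

    def clean_orders(rows):
        """Cast types and reject rows that fail simple quality checks."""
        for row in rows:
            amount = float(row["amount"])
            if amount < 0:  # example data-quality rule: no negative amounts
                raise ValueError(f"bad amount in order {row['order_id']}")
            yield {
                "order_id": int(row["order_id"]),
                "amount": amount,
                "ordered_at": date.fromisoformat(row["ordered_at"]),
            }

    for record in clean_orders(raw_orders):
        print(record)  # a real pipeline would load these rows into the warehouse
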
Qualifications
  • Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
  • 8+ years of hands-on experience in data engineering, with a proven track record of delivering scalable solutions.
  • Strong expertise in SQL, including performance tuning and complex query development.
  • Proven experience with Databricks or Snowflake in a production environment.
  • Hands-on experience developing data pipelines and ETL/ELT workflows using DBT and Python.
  • Solid understanding of data modeling techniques (e.g., 3NF, dimensional modeling); a star-schema sketch follows this list.
  • Experience with cloud platforms such as Azure, AWS, or GCP.
  • Familiarity with infrastructure-as-code tools like Terraform.
  • Passion for data quality, security, privacy, and governance.
  • Strong communication and problem-solving skills, with the ability to work cross-functionally.
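As a quick illustration of the dimensional modeling mentioned above, the sketch below builds a toy star schema: a fact table joined to a surrogate-keyed dimension. sqlite3 merely stands in for a warehouse such as Databricks or Snowflake, and every table and column name is a hypothetical example.

    # Toy star schema: a surrogate-keyed dimension plus a fact table.
    # sqlite3 stands in for the warehouse; all names are hypothetical.
    import sqlite3

    with sqlite3.connect(":memory:") as conn:
        conn.executescript("""
            CREATE TABLE dim_customer (
                customer_key INTEGER PRIMARY KEY,  -- surrogate key
                customer_id  TEXT NOT NULL,        -- natural/business key
                segment      TEXT
            );
            CREATE TABLE fct_sales (
                sale_id      INTEGER PRIMARY KEY,
                customer_key INTEGER REFERENCES dim_customer (customer_key),
                amount       REAL NOT NULL,
                sold_at      TEXT NOT NULL
            );
        """)
        conn.execute("INSERT INTO dim_customer VALUES (1, 'C-1001', 'enterprise')")
        conn.execute("INSERT INTO fct_sales VALUES (10, 1, 250.0, '2024-05-01')")
        print(conn.execute(
            "SELECT d.segment, SUM(f.amount) FROM fct_sales f "
            "JOIN dim_customer d USING (customer_key) GROUP BY d.segment"
        ).fetchone())  # prints ('enterprise', 250.0)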

Required profile

Experience

Industry: Information Technology & Services
Spoken language(s): English

Other Skills

  • Communication
  • Problem Solving
