Data Engineer

Remote: Full Remote

Offer summary

Qualifications:

  • 4-6 years of experience in data engineering or a similar role.
  • Extensive hands-on experience with Google Cloud Platform (GCP), especially BigQuery, Vertex AI, and Dataform.
  • Deep experience writing and tuning Spark jobs using PySpark.
  • Strong SQL skills and excellent communication skills in English (C1 preferred).

Key responsibilities:

  • Design, implement, and optimize cloud-native data pipelines using PySpark and GCP services.
  • Collaborate with technical teams and stakeholders to understand data requirements and deliver reliable solutions.
  • Write efficient, production-ready Spark jobs and ensure they are properly tuned for performance.
  • Support and improve existing pipelines while contributing to new development efforts.

FusionHit https://www.fusionhit.com
51 - 200 Employees

Job description

We are looking for a Data Engineer to join our fast-paced, data-driven team at FusionHit! In this role, you’ll design, build, and tune scalable data pipelines using PySpark within the Google Cloud Platform (GCP) ecosystem. You’ll be responsible for both creating new pipelines and improving existing ones, playing a critical role in transforming and delivering high-quality data that powers key business decisions.

Our client operates in the retail sector, using advanced analytics and AI to enhance customer experiences and operational efficiency. This role supports their Vertex AI-powered ML workflows and real-time analytics using BigQuery and Dataform.

This project focuses on large-scale data processing, building cloud-native pipelines, and supporting ML workflows through GCP technologies.

Location: Must reside and have work authorization in Latin America.

Availability: Must be available to work with significant overlap with Central Time.
The Ideal Candidate Has:
  • 4-6 years of experience in data engineering or a similar role.
  • Extensive hands-on experience with Google Cloud Platform (GCP), especially BigQuery, Vertex AI, and Dataform.
  • Deep experience writing and tuning Spark jobs using PySpark.
  • Proven ability to build and maintain scalable, cloud-native data pipelines.
  • Understanding of how to support machine learning workflows via data pipelines.
  • Strong SQL skills and familiarity with BigQuery analytics (25% of the workload).
  • Excellent collaboration skills, with experience working with both technical teams and stakeholders.
  • Excellent communication skills in English (C1 preferred, strong B2 may be considered).
Nice to Have:
  • Experience with Scala and the Akka toolkit.
  • Familiarity with React (purely optional, no frontend work expected).
  • Previous experience in the retail industry is a plus, but not required.
Key Responsibilities:
  • Design, implement, and optimize cloud-native data pipelines using PySpark and GCP services.
  • Collaborate with technical teams and stakeholders to understand data requirements and deliver reliable solutions.
  • Write efficient, production-ready Spark jobs and ensure they are properly tuned for performance.
  • Support and improve existing pipelines while contributing to new development efforts.
  • Leverage BigQuery for analytical processing and Vertex AI for supporting ML-related pipelines.
  • Maintain high-quality code, participate in code reviews, and contribute to the technical roadmap.
  • Work within a collaborative team environment, stepping into the role of a previous engineer and supporting the team’s daily workflows.
Perks of working at FusionHit:
  • Certified as a Great Place to Work, offering a supportive and inclusive work culture.
  • Work from Home position.
  • Private Medical Insurance.
  • Corporate Access to FusionHit Udemy Account.
  • Personal and Professional Development Courses & Certifications.
  • Flexible Schedule.
  • 3 Sick Days per year.
  • Birthday Off.
  • Extra Days for Special Occasions.
  • Team Building Meal Reimbursement.
  • Equipment Granted.
  • Monthly Recognitions.
  • High Impact Committees.

Are you curious already?
