
Senior Data Engineer (Python)

Remote: Full Remote
Salary: 8 - 8K yearly
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

3+ years in software development or Big Data; excellent knowledge of Python; proficiency in PySpark and Spark; experience with Data Lakes or Warehouses, preferably Snowflake.

Key responsibilities:

  • Build, optimize, and maintain ETL pipelines
  • Analyze requirements and handle coding, testing, debugging, deployment, maintenance

Sigma Software Group (https://www.sigma.software)
1001 - 5000 Employees

Job description

Company Description

Sigma Software is looking for an experienced Senior Data Engineer to join our growing engineering team. This opportunity is for you if you want to work with a tightly knit team of Data Engineers solving challenging issues using cutting-edge data collection, transformation, analysis, and monitoring tools in the cloud. 

We build and support high-quality data solutions to process terabytes of data on the AWS cloud-native data platform. You will be involved in the hands-on development of ETL pipelines, requirements analysis, code development and deployment, and continuous improvement of complex data solutions. 

Enthusiastic about tackling complex data challenges and seeing your impact in real time? Join us! 

CUSTOMER

Our client has top-tier end-to-end technology, a premium marketplace, and best-in-market advisory services that power the advertising businesses of the largest media and entertainment companies, including Fox, NBC Universal, Viacom in the USA, Sky, Channel 4, RTE, and Mediaset in Europe. 

PROJECT

The project focuses on delivering comprehensive ad platforms for publishers, advertisers, and media buyers. We build and support advanced data solutions on AWS to efficiently process large volumes of data. By joining this project, you will help develop, optimize, and maintain data pipelines, ensuring reliability and high performance for global media clients. 

Job Description
  • Build, optimize, and maintain ETL pipelines 
  • Modify existing application code or interfaces, or develop new components 
  • Analyze requirements, participate in design, and handle coding, testing, debugging, deployment, and maintenance 
  • Develop and implement databases, data collection systems, and data analysis strategies for better efficiency and quality 
  • Conduct code and design reviews to maintain a high-quality product 
  • Mentor and guide junior colleagues and new team members 
  • Collaborate with other Data Engineers and Product Managers to prioritize business needs 
  • Share technical insights with the wider engineering teams 

Qualifications
  • 3+ years of hands-on experience in software development and/or Big Data 
  • Excellent knowledge of Python 
  • Proficiency in PySpark and strong understanding of Spark 
  • Experience in building and maintaining Data Lakes or Data Warehouses (preferably Snowflake) 
  • Strong understanding of ETL frameworks (e.g., dbt) 
  • Good knowledge of AWS (IAM, S3, and Security Groups) 
  • Familiarity with Infrastructure-as-Code tools (Terraform or similar) 
  • Great communication skills: able to articulate status updates, blockers, and design considerations 

Additional Information

PERSONAL PROFILE

  • Strong problem-solving mindset and willingness to take initiative 
  • Ability to communicate clearly and produce concise technical documentation 
  • Collaborative attitude and willingness to mentor others 
  • Willingness to learn and adopt new data technologies 

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s):
English

Other Skills

  • Collaboration
  • Communication
  • Problem Solving
