Big Data Engineer

Remote: Full Remote

Offer summary

Qualifications:

  • At least 5 years of experience in data engineering, ETL, and data warehousing.
  • Expertise in Python and PySpark for big data processing.
  • Advanced skills in SQL and PL-SQL, including complex queries and performance tuning.
  • Hands-on experience with Snowflake, Oracle Database, and Unix Shell Scripting.

Key responsibilities:

  • Design, build, and optimize data pipelines for ETL/ELT processes.
  • Develop and maintain complex stored procedures, DWH schemas, and SQL scripts.
  • Implement PySpark-based solutions for large-scale data processing.
  • Collaborate on Snowflake database architecture and integrate data workflows with AWS services.

Sequoia Global Services (Startup, 11 - 50 Employees)
http://www.sequoia-connect.com

Job description

Description

Our client is a rapidly growing, automation-led service provider specializing in IT, business process outsourcing (BPO), and consulting services. With a strong focus on digital transformation, cloud solutions, and AI-driven automation, they help businesses optimize operations and enhance customer experiences. Backed by a global workforce of over 32,000 employees, our client fosters a culture of innovation, collaboration, and continuous learning, making it an exciting environment for professionals looking to advance their careers.

Committed to excellence, our client serves 31 Fortune 500 companies across industries such as financial services, healthcare, and manufacturing. Their approach is driven by the Automate Everything, Cloudify Everything, and Transform Customer Experiences strategy, ensuring they stay ahead in an evolving digital landscape. 

As a company that values growth and professional development, our client offers global career opportunities, a dynamic work environment, and exposure to high-impact projects. With 54 offices worldwide and 39 delivery centers across 28 countries, employees benefit from an international network of expertise and innovation. Their commitment to a 'customer success, first and always' philosophy ensures a rewarding and forward-thinking workplace for driven professionals.

We are currently searching for a Big Data Engineer: 

Responsibilities:

  • Design, build, and optimize data pipelines for ETL/ELT processes in data warehousing and BI projects.
  • Develop and maintain complex stored procedures, DWH schemas, and SQL/PL-SQL scripts.
  • Implement PySpark-based solutions for large-scale data processing and transformation.
  • Collaborate on Snowflake database architecture, performance tuning, and troubleshooting.
  • Integrate data workflows with AWS services (S3, Lambda) and orchestration tools (Jenkins, GitHub).
  • Manage JIRA workflows for task tracking and Agile project delivery.

Requirements:

  • 5+ years of experience in data engineering, ETL, and data warehousing.
  • Expertise in Python/PySpark for big data processing.
  • Advanced SQL/PL-SQL skills (complex queries, stored procedures, performance tuning).
  • Hands-on experience with Snowflake, Oracle Database, and Unix Shell Scripting.
  • Familiarity with AWS cloud services (S3, Lambda).
  • Proficiency in CI/CD tools (GitHub, Jenkins).
  • Strong analytical skills and ability to troubleshoot data pipeline issues.

Desired:

  • Experience with Kafka for real-time data streaming.
  • Knowledge of Netezza DB, Informatica, or Talend.
  • Basic understanding of data governance and workflow automation.


Languages

  • Advanced Oral English.
  • Native Spanish.

Note:

  • Fully remote


If you meet these qualifications and are pursuing new challenges, start your application on our website to join an award-winning employer. Explore all our job openings on the Sequoia Careers Page: https://www.sequoia-connect.com/careers/



Required profile

Experience

Spoken language(s):
English, Spanish

Other Skills

  • Troubleshooting (Problem Solving)
  • Analytical Skills
