DBT Data Engineer - Remote

Remote: Full Remote
Contract: Contractual Position

Offer summary

Qualifications:

  • Bachelor’s or Master’s degree in Information Technology, Bioinformatics, Computer Science, or a related field.
  • 6 to 7 years of experience in data engineering with a focus on ETL data pipelines.
  • Proficiency in Python, SQL, and AWS services like Glue and Redshift.
  • Strong problem-solving abilities and excellent communication skills.

Key responsibilities:

  • Design, implement, and manage ETL data pipelines for commercial and scientific data.
  • Utilize AWS services to process and store data efficiently.
  • Develop and maintain data transformation pipelines using Python and SQL.
  • Collaborate with cross-functional teams to support data-driven decision-making.

Cognisol (Scaleup), http://www.cognisolglobal.com/
11 - 50 Employees

Job description

Key Skillset: DBT, Python, SQL, AWS, PySpark
Years of Experience: 6 to 7 years
Work Mode: Work From Home (candidate should be available at the Chennai location for the first week of onboarding)
Shift Time: UK shift

Notice: Immediate to 15 days only

Placement Type: Contractual Position

 
Key Responsibilities:
  • Data Pipeline Development: Design, implement, and manage ETL data pipelines that ingest vast amounts of commercial and scientific data from various sources into cloud platforms like AWS.
  • Cloud Integration: Utilize AWS services such as Glue, Step Functions, Redshift, and Lambda to process and store data efficiently.
  • Data Transformation: Develop and maintain accurate data transformation pipelines using Python and SQL, covering aggregations, wrangling, quality control, and calculations (a minimal sketch follows this list).
  • Workflow Automation: Enhance end-to-end workflows with automation tools to accelerate data flow and pipeline management.
  • Collaboration: Work closely with business analysts, data scientists, and cross-functional teams to understand data requirements and develop solutions that support data-driven decision-making.
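For illustration only, here is a minimal sketch of the kind of transformation step described above, using PySpark from the key skillset. Every path, table, and column name below is a hypothetical placeholder, not a detail taken from this posting:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("commercial-sales-etl").getOrCreate()

# Ingest raw commercial data from S3 (placeholder path)
raw = spark.read.parquet("s3://example-bucket/raw/sales/")

# Quality control: drop rows missing key fields, then deduplicate on order_id
clean = (
    raw.dropna(subset=["order_id", "sale_date"])
       .dropDuplicates(["order_id"])
)

# Aggregation: daily revenue per product
daily = (
    clean.groupBy("product_id", F.to_date("sale_date").alias("day"))
         .agg(F.sum("amount").alias("revenue"))
)

# Write curated output back to S3, partitioned by day, for downstream loading into Redshift
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)

In a dbt-centric stack, the same aggregation would typically be expressed instead as a SQL model that dbt materializes as a table or view in Redshift.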
Qualifications:
  • Educational Background: Bachelor’s or Master’s degree in Information Technology, Bioinformatics, Computer Science, or a related field.
  • Professional Experience: 6 to 7 years of experience in data engineering, with hands-on expertise in:
    • Developing and managing large-scale ETL data pipelines on AWS.
    • Python and SQL for data pipeline development.
    • AWS services such as Glue, Step Functions, Redshift, and Lambda.
    • Tools such as Docker, Linux shell scripting, Pandas, PySpark, and NumPy.
  • Soft Skills: Strong problem-solving abilities, excellent communication skills, and the capacity to work collaboratively in a dynamic environment.

Required profile

Experience

Spoken language(s):
English

Other Skills

  • Collaboration
  • Communication
  • Problem Solving
