
AWS Data Engineer (Remote)

Remote: Full Remote

Offer summary

Qualifications:

8+ years of experience; expertise in AWS, Spark, Python, SQL, PySpark, and ETL.

Key responsibilities:

  • Collaborate with business analysts and stakeholders
  • Design and develop Airflow DAGs for ETL workflows
  • Write PySpark scripts and custom Python functions
  • Monitor and troubleshoot ETL pipelines for smooth operation
  • Stay updated on new data engineering trends
CodersBrain SME https://www.codersbrain.com/
201 - 500 Employees

Job description

About us - Coders Brain is a global leader in IT services, digital and business solutions, partnering with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers. We achieved our success because of how successfully we integrate with our clients.
Quick Implementation - We offer quick implementation for new onboarding clients.
Experienced Team - We’ve built an elite and diverse team that brings its unique blend of talent, expertise, and experience to make you more successful, ensuring our services are uniquely customized to your specific needs.
One Stop Solution - Coders Brain provides end-to-end solutions for businesses at an affordable price, with uninterrupted and effortless service.
Ease of Use - All of our products are user-friendly and scalable across multiple platforms. Our dedicated team at Coders Brain implements solutions with the interests of both the enterprise and its users in mind.
Secure - We treat your security with the utmost importance, blending security and scalability into our implementations with the long-term impact on your business in mind.

Experience:- 8+ years
Location:- Remote
Notice period:- Immediate joiners
Role:- AWS Data Engineer

Main Skills Required:- AWS, Spark, Python, SQL, PySpark, ETL

Job Description

  • Collaborate with business analysts to understand and gather requirements for existing or new ETL pipelines.
  • Connect with stakeholders daily to discuss project progress and updates.
  • Work within an Agile process to deliver projects in a timely and efficient manner.
  • Design and develop Airflow DAGs to schedule and manage ETL workflows.
  • Transform SQL queries into Spark SQL code for ETL pipelines.
  • Develop custom Python functions to handle data quality and validation.
  • Write PySpark scripts to process data and perform transformations.
  • Perform data validation and ensure data accuracy and completeness by creating automated tests and implementing data validation processes.
  • Run Spark jobs on an AWS EMR cluster using Airflow DAGs (see the sketch after this list).
  • Monitor and troubleshoot ETL pipelines to ensure smooth operation.
  • Implement best practices for data engineering, including data modeling, data warehousing (Redshift), and data pipeline architecture.
  • Collaborate with other members of the data engineering team to improve processes and implement new technologies.
  • Stay up to date with emerging trends and technologies in data engineering and suggest ways to improve the team's efficiency and effectiveness.
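For orientation, here is a minimal, hypothetical sketch of the kind of Airflow DAG this role describes: submitting a PySpark step to an existing EMR cluster and waiting for it to finish. The cluster ID, S3 script path, and schedule are placeholder assumptions rather than details from this posting, and the imports assume a recent apache-airflow-providers-amazon release.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator
from airflow.providers.amazon.aws.sensors.emr import EmrStepSensor

# Placeholder values for illustration only; a real DAG would read these
# from Airflow Variables/Connections or environment-specific config.
EMR_CLUSTER_ID = "j-XXXXXXXXXXXXX"                               # hypothetical cluster ID
PYSPARK_SCRIPT = "s3://example-bucket/jobs/transform_orders.py"  # hypothetical script path

SPARK_STEP = [
    {
        "Name": "nightly_etl_transform",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "--deploy-mode", "cluster", PYSPARK_SCRIPT],
        },
    }
]

with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # assumed cadence
    catchup=False,
) as dag:
    # Submit the PySpark job to the running EMR cluster as a step.
    add_step = EmrAddStepsOperator(
        task_id="add_step",
        job_flow_id=EMR_CLUSTER_ID,
        steps=SPARK_STEP,
        aws_conn_id="aws_default",
    )

    # Poll the step until it succeeds or fails, so downstream tasks
    # (validation, Redshift loads, alerting) only run on success.
    watch_step = EmrStepSensor(
        task_id="watch_step",
        job_flow_id=EMR_CLUSTER_ID,
        step_id="{{ task_instance.xcom_pull(task_ids='add_step', key='return_value')[0] }}",
        aws_conn_id="aws_default",
    )

    add_step >> watch_step
```

The PySpark script submitted here would typically load the source data, apply the Spark SQL transformations, run data-quality checks (row counts, null and duplicate checks) before writing, and fail loudly so the sensor surfaces the error in Airflow.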

Required profile

Experience

Industry: Management Consulting
Spoken language(s): English
