Requirements:
Bachelor’s or Master’s degree in Information Technology, Bioinformatics, Computer Science, or a related field.
6 to 7 years of experience in data engineering with a focus on ETL data pipelines.
Proficiency in Python, SQL, and AWS services like Glue and Redshift.
Strong problem-solving abilities and excellent communication skills.
Key responsibilities:
Design, implement, and manage ETL data pipelines for commercial and scientific data.
Utilize AWS services to process and store data efficiently.
Develop and maintain data transformation pipelines using Python and SQL.
Collaborate with cross-functional teams to support data-driven decision-making.
Cognisol is a distinguished ISO 9001:2015 certified company committed to excellence and innovation. With a focus on delivering cutting-edge services, we pride ourselves on being at the forefront of industry trends and technological advancements. Our commitment to quality management ensures that we consistently meet and exceed our customers' expectations, setting us apart as a reliable and forward-thinking partner in the ever-evolving business landscape.
As a trusted partner, we are dedicated to fostering long-term relationships with our clients by delivering high-quality and futuristic services that contribute to their success.
Our Services: Web Applications, Mobile Applications, Custom Product Applications
Write us at: info@cognisolglobal.com
Sales enquiries: sales@cognisolglobal.com
Work Mode: Work From Home (candidate must be available on-site in Chennai for the first week of onboarding)
Shift Time: UK shift
Notice Period: Immediate to 15 days only
Placement Type: Contractual Position
Key Responsibilities:
Data Pipeline Development: Design, implement, and manage ETL data pipelines that ingest vast amounts of commercial and scientific data from various sources into cloud platforms like AWS.
Cloud Integration: Utilize AWS services such as Glue, Step Functions, Redshift, and Lambda to process and store data efficiently.
Data Transformation: Develop and maintain accurate data pipelines using Python and SQL, transforming data for aggregations, wrangling, quality control, and calculations (a brief illustrative sketch follows this list).
Workflow Automation: Enhance end-to-end workflows with automation tools to accelerate data flow and pipeline management.
Collaboration: Work closely with business analysts, data scientists, and cross-functional teams to understand data requirements and develop solutions that support data-driven decision-making.
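For illustration only: a minimal Python/pandas sketch of the kind of transformation step described under Data Transformation above. The dataset and column names (site_id, sales_usd, recorded_at) are hypothetical assumptions, not part of this role's actual systems.

import pandas as pd

def transform_sales(raw: pd.DataFrame) -> pd.DataFrame:
    # Quality control: drop rows missing key fields and enforce types.
    cleaned = raw.dropna(subset=["site_id", "sales_usd", "recorded_at"]).copy()
    cleaned["recorded_at"] = pd.to_datetime(cleaned["recorded_at"])
    cleaned["sales_usd"] = cleaned["sales_usd"].astype(float)

    # Aggregation: total monthly sales per site.
    monthly = (
        cleaned.assign(month=cleaned["recorded_at"].dt.to_period("M").dt.to_timestamp())
        .groupby(["site_id", "month"], as_index=False)["sales_usd"]
        .sum()
    )
    return monthly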
Qualifications:
Educational Background: Bachelor’s or Master’s degree in Information Technology, Bioinformatics, Computer Science, or a related field.
Professional Experience: 6 to 7 years of experience in data engineering, with hands-on expertise in:
Developing and managing large-scale ETL data pipelines on AWS.
Writing Python and SQL for data pipeline development.
Utilizing AWS services such as Glue, Step Functions, Redshift, and Lambda.
Working with tools such as Docker, Linux shell scripting, Pandas, PySpark, and NumPy.
Soft Skills: Strong problem-solving abilities, excellent communication skills, and the capacity to work collaboratively in a dynamic environment.
Spoken language(s): English