About Skaleart (2–10 employees)
Skaleart is a dynamic talent solutions firm based in Sri Lanka, specializing in workforce expansion and talent acquisition. We are dedicated to connecting top-tier professionals with leading organizations, providing customized solutions to meet each client’s unique staffing needs. Our services span a wide range of industries, ensuring the right fit for every role, whether for local businesses or international clients.
At Skaleart, we focus on building strong partnerships, delivering exceptional results, and supporting growth for both clients and candidates. With a team of experienced professionals and a commitment to excellence, Skaleart is your trusted partner in finding the ideal talent to help scale your business.
We are looking for a seasoned Lead Data Engineer (Python & AWS) to spearhead our data engineering initiatives and guide a growing team. The ideal candidate will have deep expertise in Python and data-centric backend development, coupled with strong AWS experience. You will lead architectural decisions, ensure code quality, and play a key role in delivering scalable and efficient data solutions in a fast-paced, remote-first environment.
Key Responsibilities:
Lead end-to-end data engineering projects from requirement gathering to deployment and maintenance.
Architect and implement scalable data pipelines and ETL processes using Python and AWS.
Mentor and collaborate with a remote, agile team to deliver robust, production-grade solutions.
Conduct peer code reviews and provide constructive feedback.
Propose and implement architectural enhancements and best practices.
Integrate third-party APIs and implement event-driven data workflows using AWS services.
Ensure high quality through rigorous testing, monitoring, and documentation.
Collaborate closely with stakeholders and cross-functional teams to deliver business-driven solutions.
Required Qualifications:
Minimum 5 years of industry experience with a strong focus on data engineering or backend development.
Advanced proficiency in Python and related libraries such as Pandas and NumPy.
Experience with data processing, ETL, data scraping, and API integration.
Hands-on experience with AWS services (Lambda, EC2, S3, and AWS data tools).
Solid knowledge of SQL and NoSQL database design and optimization.
Familiarity with CI/CD pipelines and DevOps practices.
Experience with containerization tools like Docker.
Understanding of multi-tenancy architectures.
Skilled in using Git and version control best practices.
Strong communication and leadership skills in a remote, agile environment.
Benefits:
Competitive compensation pegged to USD.
Medical insurance coverage.
Support for higher education and professional certifications.
Rapid career growth in a dynamic and empowering culture.
Spoken language(s):
English