Data Engineer

Remote: Full Remote

Offer summary

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.
  • 2+ years of experience developing and operating large-scale ETL/ELT processes and data warehouses on AWS.
  • Proficiency in AWS Glue and Redshift, and strong skills in Python and SQL.
  • Familiarity with big data frameworks like Apache Spark, Hadoop, Kafka, or AWS Kinesis.

Key responsibilities:

  • Build and optimize scalable, efficient ETL pipelines using AWS.
  • Collaborate closely with cross-functional teams to deliver impactful data solutions.
  • Ensure seamless data extraction, transformation, and loading processes that support advanced analytics and machine learning workflows.
  • Leverage cutting-edge technologies to enhance data workflows and support machine learning initiatives.

ShippyPro (Scaleup) · https://www.shippypro.com/
51-200 employees

Job description

About Us

Our journey began with a vision to "Make people work better" and today, following our recent Series B funding round of $15M, we're on a rapid growth trajectory that's transforming how companies worldwide deliver their products.

This is not just a job; it's an opportunity to be part of something extraordinary. As we surge ahead, you'll be at the heart of a movement that's setting new standards and leaving an indelible mark. You won't be a cog in the wheel; you'll be the engine driving our success.

Our fast-paced, collaborative environment is the breeding ground for creativity and innovation. You'll be surrounded by a diverse team of experts, all driven by a shared purpose. Together, we'll navigate challenges, celebrate victories, and constantly push each other to reach new heights.

Join Us in Shaping the Future 🚀

If you're passionate, driven, and eager to make a real impact, we want you on our team. Together, let's rewrite the playbook and set new standards for excellence.

The Product

ShippyPro is the platform that simplifies shipping and fulfillment processes for merchants, helping them automate and speed up their operations. We are currently undertaking a major initiative to modernize our tech stack to a fully distributed system to support our business growth for the coming years.

Who You Are

You’re a data engineer with a passion for building and optimizing data pipelines on AWS. You thrive in dynamic environments where innovation, problem-solving, and continuous learning are essential. You’re not just about managing data - you’re driven by the desire to leverage cutting-edge technologies to enhance data workflows and support machine learning initiatives. You’re ready to take ownership of your projects and collaborate closely with cross-functional teams to deliver impactful data solutions.

The Data Pipelines You'll Be Working On

We’re on a mission to build and optimize scalable, efficient ETL pipelines using AWS. Our goal is to ensure seamless data extraction, transformation, and loading processes that support advanced analytics and machine learning workflows. We aim to integrate large language models (LLMs) to automate and enhance data engineering tasks, driving innovation and efficiency.
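
For a flavor of the day-to-day work, here is a minimal sketch of a Glue-style PySpark ETL job. The database, table, column, and bucket names are purely illustrative assumptions, not our actual resources:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read raw events from the Glue Data Catalog (hypothetical table)
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="shipment_events"
)

# Transform: drop rows missing the keys downstream analytics depends on
df = source.toDF().dropna(subset=["order_id", "carrier", "event_date"])

# Load: write curated Parquet to S3, partitioned by event date
df.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/shipment_events/"
)

job.commit()
```

A production pipeline would typically go one step further and load the curated data into Redshift (for example via COPY from S3) rather than stop at Parquet files.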

Our tech stack includes:

  • AWS Services: Glue, Redshift, S3, Lambda, Step Functions, EventBridge, Managed Workflows for Apache Airflow
  • Programming Languages: Python, SQL
  • Big Data Frameworks: Apache Spark, Hadoop, Kafka, AWS Kinesis
  • Infrastructure as Code: AWS CDK, Serverless Framework
  • Machine Learning: Amazon SageMaker, Bedrock
  • Databases: Redshift, DynamoDB, PostgreSQL
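
Since Bedrock appears in the stack and LLM-assisted data engineering is part of the roadmap above, here is one hedged illustration of what that can look like: asking a Bedrock-hosted model to draft a data-quality query. The region, model ID, table name, and prompt are assumptions for the sketch, and any generated SQL would need review before running:

```python
import boto3

# Bedrock runtime client using the Converse API (boto3 >= 1.34)
client = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = (
    "Write a Redshift SQL query that counts rows in shipment_events "
    "where order_id is NULL or event_date lies in the future."
)

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)

# Extract the drafted SQL; validate it before executing anything
draft_sql = response["output"]["message"]["content"][0]["text"]
print(draft_sql)
```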


The Data Engineering Team

We’re a high-impact, fast-moving team focused on performance, scalability, and continuous improvement. We handle large-scale data processing and analytics, shipping regular updates and new features.

Here, your work truly matters, and you’ll have the opportunity to make a real difference in how we manage and utilize data.

About You

  • Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.
  • Experience: 2+ years of experience developing and operating large-scale ETL/ELT processes and data warehouses on AWS.
  • Technical Skills:
    • Proficiency in AWS Glue and Redshift for building and maintaining data pipelines.
    • Strong skills in Python and SQL for automating data workflows and querying large datasets.
    • Experience with relational and NoSQL databases, including data modeling and optimization.
    • Familiarity with big data frameworks like Apache Spark, Hadoop, Kafka, or AWS Kinesis.
    • Knowledge of Infrastructure as Code tools like AWS CloudFormation or the Serverless Framework, and CI/CD pipelines.
    • Experience with AWS services for ETL orchestration and workflow automation (a minimal orchestration sketch follows this list).
  • Data Science and ML:
    • Understanding of core machine learning concepts and data science workflows.
    • Experience building or supporting data analysis pipelines on AWS.
    • Familiarity with data cleansing, transformation, and visualization techniques.
  • LLM Familiarity:
    • Awareness of how LLMs can automate and enhance data engineering tasks.
    • Ability to work alongside AI/LLM tools to augment data engineering processes.
    • A continuous learning mindset to stay updated on LLM advancements.
  • Soft Skills:
    • Excellent problem-solving skills and the ability to work autonomously.
    • Strong English communication skills, both written and verbal.
    • A team player who thrives in collaboration and embraces challenges with enthusiasm.
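
As referenced in the orchestration point above, here is a minimal sketch of how a pipeline like the Glue job shown earlier could be scheduled on Managed Workflows for Apache Airflow. The DAG ID, Glue job name, and region are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

# A daily DAG that triggers a (hypothetical) pre-defined Glue ETL job
with DAG(
    dag_id="shipment_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = GlueJobOperator(
        task_id="run_shipment_events_etl",
        job_name="shipment-events-etl",  # hypothetical Glue job name
        region_name="eu-west-1",  # assumption
    )
```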

Why Join Us?

  • AI-Enhanced Development: Boost productivity with GitHub Copilot.
  • Remote-First Culture: Work from anywhere, with monthly HQ meetups in Florence for those who live in Italy.
  • Meal Perks: Enjoy daily meal vouchers, wherever you work.
  • Professional Development and Growth Opportunities: Join a team that values collaboration, celebrates wins, and learns from challenges. We also foster continuous learning with dedicated training programs and curated courses to support your personal and professional development.
  • Career Growth Program & Continuous Feedback: Clear paths for growth with structured goals and continuous open feedback.
  • Commitment to Transparency and Feedback: We value openness and fairness in everything we do. That’s why we foster a culture where feedback - even anonymous - is encouraged and welcomed, helping us improve, grow, and build a workplace we all believe in.


Hiring Process

We take the time to find the right fit for our team. Here’s what you can expect after the initial review of your application:

  • A video interview with the P&C team
  • A tech interview
  • A verbal assessment of your technical skills, including a live coding challenge

Thanks for considering joining our team. We look forward to hearing from you! ✌️

Do you want to know more? Visit https://www.shippypro.com/en/work-with-us
