Data Engineer

Remote: Full Remote

Offer summary

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
  • 5 years of experience in data engineering or a similar role.
  • Proficiency in SQL and experience with relational databases like PostgreSQL and MySQL.
  • Strong experience with Databricks and ETL/ELT tools such as Apache Airflow.

Key responsibilities:

  • Develop, test, and maintain data architectures including databases, data lakes, and data warehouses.
  • Design and implement scalable ETL/ELT pipelines to ingest, transform, and load data from various sources.
  • Collaborate with cross-functional teams to understand data needs and provide efficient solutions.
  • Monitor and troubleshoot data pipelines, ensuring timely resolution of issues.

RHEI (https://www.rhei.com)
201 - 500 Employees

Job description

Position: Data Engineer

Location: Brazil

About RHEI:

RHEI is a creator economy company advancing the industry by helping creators, media companies and brands find success through digital content. We provide end-to-end solutions to help creators and media companies grow their audiences and revenue, while helping brands connect to hard-to-reach digital fans. Our proprietary technologies leverage generative AI, machine learning, digital signal processing and big data to power our platform and ecosystem, and we are the largest multi-vertical video publisher in the world, reaching tens of billions of monthly views and over 600 million monthly uniques.

About the role:

In this role, you will be responsible for designing, developing, and maintaining scalable data pipelines, ensuring efficient data integration, and optimizing data storage solutions. You will collaborate closely with data scientists, analysts, and software engineers to build robust data architectures that support our business intelligence and analytics initiatives.

As RHEI is a high-growth company, you should enjoy working in an entrepreneurial, fast-changing environment. RHEI has a remote work model that supports a healthy work-life balance.

Key Responsibilities:

● Develop, test, and maintain data architectures, including databases, data lakes, and data warehouses.

● Design and implement scalable and reliable ETL/ELT pipelines to ingest, transform, and load data from various sources.

● Optimize and improve data processing workflows for performance, scalability, and cost-effectiveness.

● Ensure data integrity, consistency, and security across all data platforms.

● Collaborate with cross-functional teams to understand data needs and provide efficient solutions.

● Analyze relational database performance and provide actionable recommendations for improvement.

● Monitor and troubleshoot data pipelines, ensuring timely resolution of issues.

● Implement best practices for data governance, metadata management, and documentation.

● Work with cloud-based data platforms (AWS, GCP, Azure) and leverage services such as S3, Redshift, BigQuery, Snowflake, Databricks, or similar technologies.

Required Qualifications:

● Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.

● 5 years of experience in data engineering or a similar role.

● Proficiency in SQL and experience working with relational databases (PostgreSQL, MySQL, SQL Server).

● Strong experience with Databricks.

● Hands-on experience with ETL/ELT tools like Apache Airflow or similar tools.

● Proficiency in Python.

● Experience with cloud platforms (AWS, GCP, Azure) and related data services.

● Knowledge of data modeling, warehousing concepts, and best practices.

● Strong problem-solving skills and ability to work in a fast-paced environment.

● Experience integrating data solutions with REST APIs.

● Understanding of CI/CD pipelines and DevOps practices for data infrastructure.

● Experience with Snowflake or similar cloud-based data warehouse technology.

Preferred Qualifications:

● Experience with NoSQL databases like DynamoDB or MongoDB.

● Familiarity with data streaming technologies such as Apache Kafka or AWS Kinesis.

● Experience working with containerized applications using Docker or Kubernetes.

● Knowledge of machine learning model deployment and MLOps concepts.

● Experience with machine learning.

Required profile

Experience

Spoken language(s):
English

Other Skills

  • Problem Solving
