Our purpose
After a long and demanding process of reflecting on and identifying, as a company, what drives and guides us, we built our purpose around a result fully focused on what we do and profess:
"We care for our environment, multiplying successful experiences"
We care for our clients, our sofkianos, and our community,
multiplying our technical excellence, creating communities of practice, and managing our knowledge.
We commit to:
* Delivering in the shortest possible time
* Staying within budget
* Delivering with quality
* Achieving high client satisfaction
For us, client satisfaction means our clients enjoy the complete experience of building their technology solutions.
That is why, at Sofka Technologies, we are convinced that our clients' participation and collaboration throughout the entire solution-building process is vital to achieving the final objectives. This does not mean neglecting their business or having to become technology experts.
We do not believe in heavyweight requirements definition and specification processes that demand great effort and wear out teams. Our philosophy is to build software incrementally and functionally, so the business can quickly reap the benefits of the solution.
We are looking for a highly skilled and experienced Senior Data Engineer to join our team. The ideal candidate will have a strong background in data engineering, data architecture, and cloud-based data platforms (AWS, Azure, or GCP). You will be responsible for designing, developing, and maintaining scalable and reliable data pipelines that support analytics, reporting, and data science initiatives.
Key Responsibilities:
* Design, build, and maintain scalable and efficient data pipelines for batch and real-time processing.
* Develop and optimize data architectures in cloud environments (AWS, Azure, or GCP).
* Work closely with data scientists, analysts, and software engineers to ensure data solutions meet business needs.
* Implement data quality and validation checks to ensure data integrity and accuracy.
* Manage and monitor data workflows, troubleshoot issues, and ensure high availability.
* Define and enforce data engineering best practices and coding standards.
* Participate in architectural discussions and contribute to cloud and data strategy.
* Ensure data security, governance, and compliance with organizational and regulatory standards.
Requirements:
* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* 5+ years of hands-on experience in data engineering or a similar role.
* Strong experience with at least one major cloud platform (AWS, Azure, or GCP).
* Proficiency in Python, SQL, and one or more data processing frameworks (e.g., Apache Spark, Kafka, Beam, Flink).
* Experience with cloud-native data tools such as AWS Glue, Redshift, BigQuery, Azure Data Factory, or similar.
* Solid understanding of data modeling, ETL processes, and data warehousing concepts.
* Experience with CI/CD pipelines and infrastructure as code (e.g., Terraform, CloudFormation).
* Strong problem-solving and communication skills.
* Familiarity with containerization (Docker, Kubernetes) is a plus.
Nice to Have:
* Experience with orchestration tools (e.g., Airflow, Prefect).
* Knowledge of machine learning pipelines and MLOps.
* Exposure to data governance and cataloging tools (e.g., Apache Atlas, Alation).
Important: This is a proactive opportunity to join our talent pool. While there currently isn't an open position, joining our network puts you first in line when a suitable opportunity arises, ensuring a swift and seamless hiring process.
💼 What’s in it for you?
🌍 100% Remote work – Join from anywhere in LATAM
📈 Career growth – Contribute to enterprise data initiatives with global impact
🤝 Collaborative culture – Work with cross-functional, cloud-first teams
🚀 High-impact projects with cutting-edge data and AI tech
🎓 Certification support and career development programs
🧘 Emphasis on work-life balance and wellness
📝 Permanent contract – We invest in long-term collaboration
👉 Apply now and help us shape the future of data-driven transformation!
Required profile
Industry: Information Technology & Services
Spoken language(s): English