Offer summary
Qualifications:
- Passion for building and deploying data-driven products using technologies such as Kubernetes, Docker, and CI/CD practices, plus experience with AWS/GCP/Azure or Databricks/Snowflake.
- Ability to write clean code in Python, Java, SQL, and/or Scala.
- Experience with Data Engineering, or a willingness to learn; knowledge of Backend/Frontend development is a plus.
Key responsibilities:
- Support Big Data, Data Engineering, and Data Warehouse implementations.
- Build scalable data platforms using Apache Spark, Hadoop, and SQL/NoSQL databases.
- Integrate data in batch or real time using Apache Kafka or RabbitMQ.
- Develop data pipelines with Airflow or Luigi.
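As a loose illustration of the batch pipeline work described above, here is a minimal extract-transform-load sketch in plain Python. The function names, stages, and sample records are hypothetical; a production pipeline would read from Kafka or object storage, transform with Spark, and be scheduled by an orchestrator such as Airflow or Luigi:

```python
# Minimal batch ETL sketch. All names and sample records are
# hypothetical illustrations, not part of any real system.

def extract():
    # In practice: read from Kafka, a database, or object storage.
    return [
        {"user": "a", "amount": "10.5"},
        {"user": "b", "amount": "3.0"},
        {"user": "a", "amount": "2.5"},
    ]

def transform(records):
    # Cast amounts to float and aggregate totals per user.
    totals = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0.0) + float(r["amount"])
    return totals

def load(totals, sink):
    # In practice: write to a warehouse table (e.g. via JDBC or a connector).
    sink.update(totals)

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse)  # → {'a': 13.0, 'b': 3.0}
```

Each stage maps onto a task in an orchestrated DAG, which is how tools like Airflow structure this kind of pipeline.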