5+ years of experience in data engineering or a relevant field. Familiarity with Databricks, Airflow, Spark, Snowflake, and AWS.
Key responsibilities:
Drive data platform architecture and modernization
Ensure reliability and cost-effectiveness of data pipelines
Assist customer support teams in problem-solving
Contribute to data governance and security practices
Participate in on-call rotation for team
About CM Group
1,001-5,000 employees
Where relationships take root. The Marigold approach to Relationship Marketing stands alone in a world of one-size-fits-all marketing technology companies. Our solutions are designed for your specific size, industry, and maturity, giving you the technology and expertise you need to grow the relationships that grow your business, from customer acquisition to engagement to loyalty. And, with a team of strategists who provide insights into what’s working, what’s not, and what’s changing in your industry, you’re able to maximize ROI every step of the way.
Great marketing isn’t just about conversion, but true connection. Learn why 40,000 businesses around the world trust Marigold to be the firm foundation they need to help relationships take root.
Marigold is the largest sender of personalized email on the planet. But we’re so much more than an email provider or cross-channel marketing hub. We’re committed to creating true partnerships with our clients, not just being another vendor. Working with some of the biggest names in ecommerce and publishing, we help deliver personalized email, mobile messaging, and onsite experiences to billions of consumers every year.
The Role
Marigold Engage by Sailthru is putting together a team to support and operate our data engineering platform, including our data warehouse, data pipelines, and machine learning systems.
This job requires a high level of technical competency and a desire to own and evolve the data platform that our product relies upon. If you’re passionate about building cutting-edge data solutions, we want you!
Responsibilities
Driving the technical direction of our data platform’s architecture, whilst modernizing legacy components.
Ensuring reliable and cost-effective operation of our data pipeline and warehouses. This makes up a critical component of our product and is a key production platform for us.
Helping our customer success and support teams with escalations, and working with the team to diagnose and fix rare and interesting problems.
Being part of our regular on-call rotation with the other team members (a team of approximately four people).
Driving our data governance and security practices.
Requirements
This isn’t your first swim in the data lake. You have experience working with technologies such as Databricks, Airflow, Spark, Snowflake, and AWS, and you can hit the ground running, helping us grow and develop our architecture.
Approximately 5+ years of experience in data engineering or other relevant technical field.
Comfort writing and reviewing code in Python and Java.
An understanding of applications that contribute to and consume from the data lake, including event-driven architecture, Kafka, and a conventional SaaS stack.
Enough AWS knowledge to work with S3, IAM, and compute workloads, and to keep costs under control.
An interest in moving toward the data science side of the work.
(Nice to have) Experience with machine learning, including training models.