SDE II | Data & Reporting

Remote: Full Remote

Offer summary

Qualifications:

  • 3+ years of experience in data engineering or backend roles.
  • Strong programming skills in Scala, Python, or Java.
  • Solid understanding of Apache Spark and big data processing.
  • Experience with Google BigQuery and BI/reporting tools.

Key responsibilities:

  • Design and implement scalable data pipelines using Spark and manage workflows in Databricks.
  • Create curated data layers to serve various reporting needs.
  • Collaborate with cross-functional teams to deliver dashboards and reports.
  • Troubleshoot data discrepancies and ensure timely delivery of critical reports.

Editorialist | Online Marketplace and E-commerce Scaleup | http://editorialist.com/
51 - 200 Employees

Job description

What’s Editorialist?

Editorialist melds personal styling, editorial content, and shopping into one seamless digital experience powered by proprietary technology and e-commerce tools. Editorialist.com, our media property, delivers sophisticated content and commerce to aspirational and affluent consumers. Our stories connect readers with bespoke product and service solutions for fashion, accessories, beauty, and wellness needs. The cornerstone of our tech platform—the Editorialist app—blends content, digital services, and e-commerce for our elite clientele, individuals with an average net worth in excess of $550 million. Our co-founder and CEO Rafael Ortiz previously co-founded NexTag, the largest comparison shopping site for products and services, and was responsible for marketing and business development until its sale for $1.2 billion.

We're seeking a Software Development Engineer II (SDE2) to join our Data & Reporting team. This role is ideal
for someone who thrives at the intersection of data engineering and backend engineering, helping transform raw
data into meaningful insights that drive business decisions across sales and revenue, billing, traffic, and user
engagement and behavior.

Your Responsibilities
  • Design and implement scalable data pipelines using Spark (Scala/PySpark) and manage workflows in Databricks.
  • Work with structured/unstructured data and create curated data layers (bronze, silver, gold) to serve various reporting needs (see the sketch after this list).
  • Develop and maintain integrations with internal APIs and external data sources to ingest and transform operational data.
  • Collaborate with cross-functional teams to deliver dashboards and reports using BigQuery and Looker Studio.
  • Partner with backend teams and data stakeholders to improve data quality, lineage, and performance.
  • Troubleshoot data discrepancies and ensure timely delivery of critical reports for business stakeholders.
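
To make the curated-layer work concrete, here is a minimal PySpark sketch of a bronze/silver/gold pipeline of the kind this role owns. It is illustrative only: the source path, schema, and table names are hypothetical, and it assumes a Databricks-style environment with Delta Lake available.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-medallion").getOrCreate()

# Bronze: land raw operational events as-is (path and columns are hypothetical).
bronze = spark.read.json("/mnt/raw/orders/")
bronze.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")

# Silver: cleaned, de-duplicated records with typed columns.
silver = (
    spark.table("bronze.orders")
    .dropDuplicates(["order_id"])
    .filter(F.col("order_status").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: a curated, report-ready aggregate (e.g. daily revenue for dashboards).
gold = (
    silver.groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("daily_revenue"),
         F.count("*").alias("order_count"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue")
```

A production pipeline would usually write each layer incrementally (e.g. with Delta MERGE) rather than a full overwrite; overwrite simply keeps the sketch short.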

More About You
  • 3+ years of experience in a similar data engineering or backend-focused role.
  • Strong programming skills in Scala, Python, and/or Java.
  • Solid understanding of Apache Spark, Databricks, and big data processing.
  • Experience with Google BigQuery, Looker Studio, or other BI/reporting tools.
  • Working knowledge of backend development: Java/Python, RESTful APIs, MySQL/PostgreSQL.
  • Experience with batch and near-real-time data pipelines, preferably on GCP or other cloud platforms (a near-real-time sketch follows this list).
  • Strong problem-solving skills and the ability to work independently in a remote setup.
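
For the near-real-time side, a minimal Spark Structured Streaming sketch is below. Again, the paths, checkpoint location, and table name are hypothetical assumptions for illustration, not part of the job description.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("orders-streaming").getOrCreate()

# Streaming file sources require an explicit schema (hypothetical here).
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("order_ts", TimestampType()),
])

# Incrementally pick up new files as they land in the raw bucket.
events = spark.readStream.schema(schema).json("/mnt/raw/orders/")

# Append into the bronze Delta table; the checkpoint tracks progress
# so the job can restart without reprocessing old files.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/bronze_orders")
    .outputMode("append")
    .trigger(processingTime="1 minute")
    .toTable("bronze.orders")
)
query.awaitTermination()
```

In recent Spark versions, swapping the trigger for `trigger(availableNow=True)` turns the same code into an incremental batch job, which is one common way to cover batch and near-real-time needs with a single pipeline.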

Bonus points:
  • Familiarity with Airflow or other orchestration tools (a minimal DAG sketch follows this list).
  • Experience in fashion or e-commerce domains.
  • Experience working with US tech companies.
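
Since Airflow comes up as a bonus, here is a minimal sketch of how such a pipeline might be orchestrated as a daily DAG. The task names and bodies are hypothetical placeholders; the only assumption is the TaskFlow API of Airflow 2.4+.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_reporting():
    """Hypothetical daily pipeline: ingest raw orders, refresh curated layers."""

    @task
    def ingest_bronze():
        # In practice this might trigger a Databricks job or spark-submit.
        print("landing raw order files")

    @task
    def build_silver_and_gold():
        print("refreshing silver and gold tables")

    @task
    def refresh_dashboards():
        print("notifying BI that curated tables are ready")

    ingest_bronze() >> build_silver_and_gold() >> refresh_dashboards()


orders_reporting()
```
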
Benefits:
  • Work-from-home setup provided.
  • Opportunity to work with a global team.
  • Health insurance for self & family.

Required profile

Industry:
Online Marketplace and E-commerce
Spoken language(s):
English

Other Skills

  • Collaboration
  • Problem Solving
