Data Infrastructure Engineer

Remote: Full Remote

Offer summary

Qualifications:

  • 5+ years of experience in engineering and data analytics, preferably in a startup environment.
  • Advanced skills in SQL and Python for complex data analysis.
  • Experience with data pipeline technologies like Snowflake, Databricks, and AWS Glue.
  • Familiarity with infrastructure-as-code tools and container orchestration platforms.

Key responsibilities:

  • Design and maintain real-time and batch data analytics pipelines.
  • Collaborate with cross-functional teams to create dashboards and reports.
  • Implement data governance policies and ensure data security.
  • Manage data ingestion integrations from various sources and oversee BI tooling.

Intellectsoft | Computer Software / SaaS | SME | https://www.intellectsoft.net/
51 - 200 Employees

Job description

Intellectsoft is a software development company delivering innovative solutions since 2007. We operate across North America, Latin America, the Nordic region, the UK, and Europe. We specialize in industries like Fintech, Healthcare, EdTech, Construction, Hospitality, and more, partnering with startups, mid-sized businesses, and Fortune 500 companies to drive growth and scalability. Our clients include Jaguar Motors, Universal Pictures, Harley-Davidson, Qualcomm, and the London Stock Exchange. Together, our team delivers solutions that make a difference. Learn more at www.intellectsoft.net

Project description

As a Data Infrastructure Engineer, you will shape and maintain the foundation of our data-driven operations. You will design, build, and optimize the data pipelines that power our compliance strategies and product offerings, and you will collaborate closely with teams across the organization, including compliance, legal, and finance, to ensure our data infrastructure supports overarching business objectives.

Given the foundational nature of this role, your responsibilities will extend beyond technical implementation. You will help shape our technology landscape by selecting the most suitable tools, managing relationships with external vendors, and fostering a data-driven culture across the company. You will architect and implement both real-time and batch data pipelines that ingest and process data from many sources and deliver it to our data warehouse in a readily accessible, actionable format. You will also establish strict security and privacy controls to safeguard sensitive data and ensure compliance with all relevant regulations.

Requirements

  • 5+ years of professional engineering and data analytics experience, preferably including time in a startup environment.
  • Advanced SQL and Python skills for complex data analysis.
  • Demonstrated ability to build automation tools and pipelines using Python, Go, and/or TypeScript.
  • Hands-on experience with data pipeline and warehouse technologies such as Snowflake, Databricks, Apache Spark, and AWS Glue.
  • Experience building declarative data models and transformations with modern tools such as dbt.
  • Experience building and operating cloud-based data lakes.
  • Experience integrating real-time data streaming technologies such as Kafka and Spark.
  • Experience setting up and maintaining modern data orchestration platforms such as Airflow.
  • Familiarity with infrastructure-as-code tools like Terraform and container orchestration platforms like Kubernetes.
  • An appreciation for simplicity, rapid delivery, and pragmatic solutions.
  • Self-motivation and the ability to work independently.
  • Ability to overlap with US Eastern Time (ET).

Nice-to-have skills

  • Professional Web3/Crypto experience.

Responsibilities

  • Design, build, and maintain modern, resilient real-time and batch data analytics pipelines.
  • Develop and maintain declarative data models and transformations.
  • Implement data ingestion integrations for streaming and conventional sources such as Postgres, Kafka, and DynamoDB.
  • Deploy and configure BI tooling for data analysis.
  • Work closely with product, finance, legal, and compliance teams to build dashboards and reports that support business operations, regulatory obligations, and customer needs.
  • Establish, communicate, and enforce data governance policies.
  • Document and share best practices for schema management, data integrity, availability, and security.
  • Protect and restrict access to sensitive data by implementing a secure permissioning model and establishing data masking and tokenization processes.
  • Identify and communicate data platform needs, including additional tooling and staffing.
  • Collaborate with cross-functional teams to define requirements, plan projects, and execute the strategy.

Benefits

  • 35 absence days per year for work-life balance
  • Udemy courses of your choice
  • English courses with a native speaker
  • Regular soft-skills training
  • Excellence Centers meetups
  • Online and offline team-building events

Required profile

Experience

Industry: Computer Software / SaaS
Spoken language(s): English

Other Skills

  • Self-Motivation
