Senior Data Engineer

extra holidays - fully flexible
Remote: Full Remote
Experience: Senior (5-10 years)

Offer summary

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field
  • 2+ years developing large-scale software and 5+ years in data engineering
  • Strong experience with Spark, Scala, Python, and AWS services
  • Experience with data warehousing, SQL, and Airflow
  • Strong problem-solving skills
  • Knowledge of data governance and security best practices
  • High-level English

Key responsibilities:

  • Work with stakeholders to prioritize data projects and align infrastructure with business goals
  • Design and maintain optimal data pipeline architecture from various sources
  • Monitor and optimize data performance, identify opportunities for improvement
  • Solve complex problems, implement data privacy and security solutions
  • Enhance team's DevOps capabilities, stay updated on data engineering trends
Seeking Alpha · Information Technology & Services · SME · https://seekingalpha.com/
201 - 500 Employees
HQ: New York

Job description

Description

Seeking Alpha is looking for a talented and experienced Senior Data Engineer to join us.

In this role, you will design, build, and maintain the infrastructure required to analyze large data sets. As an expert in data management, ETL (extract, transform, load) processes, and data warehousing, you will work with various big data technologies such as Spark, Hadoop, and NoSQL databases.

Beyond your technical expertise, you will need strong communication and collaboration skills. You will work closely with the data and analytics team and other stakeholders to identify and prioritize data engineering projects, ensuring the data infrastructure aligns with overall business goals and objectives.

Why we’re a great company to work for

Seeking Alpha is the leading online destination for engaged investors. We have an awesome product. Our crowdsourced research and cutting-edge investing tools are helping nearly 300,000 paying subscribers exceed their financial goals.

We care about work-life balance: We work primarily from home, provide lots of perks, and insist that you enjoy them.

We invest in people. We consider each employee a long-term investment, and we see value in continuously nurturing and training our teammates.

If that's what you're also looking for, go ahead and apply!


Responsibilities

  • Work closely with data scientists, analysts, and other stakeholders to identify and prioritize data engineering projects and to ensure that the data infrastructure is aligned with business goals and objectives
  • Design, build, and maintain optimal data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources, including external APIs, data streams, and data stores
  • Continuously monitor and optimize the performance and reliability of the data infrastructure, and identify and implement solutions to improve scalability, efficiency, and security
  • Stay up to date with the latest trends and developments in data engineering, and leverage this knowledge to identify opportunities for improvement and innovation within the organization
  • Solve challenging problems in a fast-paced and evolving environment while maintaining uncompromising quality
  • Implement data privacy and security requirements to ensure solutions comply with security standards and frameworks
  • Enhance the team's DevOps capabilities

Requirements

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field
  • 2+ years of proven experience developing large-scale software using an object-oriented or functional language
  • 5+ years of professional experience in data engineering, focusing on building and maintaining data pipelines and data warehouses
  • Strong experience with Spark, Scala, and Python, including the ability to write high-performance, maintainable code
  • Experience with AWS services, including EC2, S3, Athena, Lambda, and EMR
  • Familiarity with data warehousing concepts and technologies, such as columnar storage, data lakes, and SQL
  • Experience with data pipeline orchestration and scheduling using tools such as Airflow
  • Strong problem-solving skills and the ability to work independently as well as part of a team
  • High-level English - a must
  • A team player with excellent collaboration skills

Nice to Have:

  • Expertise with Vertica or Redshift, including experience with query optimization and performance tuning
  • Experience with machine learning and/or data science projects
  • Knowledge of data governance and security best practices, including data privacy regulations such as GDPR and CCPA
  • Knowledge of Spark internals (tuning, query optimization)

Required profile

Experience

Level of experience: Senior (5-10 years)
Industry:
Information Technology & Services
Spoken language(s):
English

Other Skills

  • Teamwork
  • Verbal Communication Skills
  • Collaboration
  • Problem Solving
