Pay rate range: $65/hr to $68/hr on W2
100% Remote
Must Have
4+ years of industry experience in software development, data engineering, business intelligence, data science, or a related field.
SQL, PySpark, Python, R, Bash, Redshift, S3, EMR, Glue
Experience with data modeling, data warehousing, and building ETL pipelines
Leadership Principles
Deliver Results, Dive Deep, Earn Trust.
Job Description:
We are looking for a Data Engineer who has a deep understanding of the full data lifecycle, from data generation to the end-user application.
As a leader on the team, you will maintain the analytical infrastructure that supports business-critical analyses such as operational reporting, forecasting, causal analysis, and operational health monitoring. You will directly influence the success of our organization by working on critical data engineering problems and building high-quality, accurate, and architecturally sound data pipelines that align with our business needs.
You will work across diverse science, engineering, and business teams, acting as the business-facing subject matter expert for data storage, feature instrumentation, and data privacy, and you will be responsible for managing end-to-end execution and delivery across projects.
KEY PERFORMANCE AREAS
• Design, implement, and support an analytical data infrastructure.
• Manage AWS resources including EC2, EMR, S3, Glue, and Redshift.
• Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies.
• Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency.
• Collaborate with Product Managers, Analysts, and Business Intelligence Engineers (BIEs) to recognize and help adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
• Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
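For a concrete sense of the ETL work described above, the following is a minimal, illustrative PySpark sketch of the kind of job this role would build and maintain; the bucket paths, schema handling, and column names are hypothetical placeholders, not a prescribed design.

# Minimal illustrative PySpark ETL sketch: raw CSV in S3 -> cleaned, partitioned Parquet in S3.
# Bucket paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw order events from S3 (schema inferred here for brevity).
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Transform: type the columns, drop malformed rows, derive a partition key.
orders = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["order_id", "order_ts"])
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet for downstream Glue catalog / Redshift Spectrum use.
(
    orders.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)

spark.stop()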
BASIC QUALIFICATIONS
This role is purely implementation-focused.
• 4+ years of industry experience in software development, data engineering, business intelligence, data science, or related field.
• Experience with data modeling, data warehousing, and building ETL pipelines
• Strong SQL expertise
• Expertise with PySpark and SQL, including creating, tuning, and optimizing queries
• Experience using big data technologies (distributed storage, columnar data warehouses, Spark, etc.)
• Experience in one or more of: Python, R, Bash, or other scripting languages.
PREFERRED QUALIFICATIONS
• Master's degree in computer science, mathematics, statistics, economics, or another quantitative field.
• 4+ years of experience as a Data Engineer or in a similar role.
• Experience working with AWS big data technologies (Redshift, S3, EMR, Glue).
• Demonstrated strength in data modeling, ETL development, and data warehousing.