
Data Quality Engineer

Remote: Full Remote
Contract: 
Experience: Mid-level (2-5 years)
Work from: Texas (USA), United States

Offer summary

Qualifications:

  • 3-5 years of experience with databases
  • Strong proficiency in Python
  • Hands-on experience with Databricks
  • Degree in Computer Science or a relevant field
  • Familiarity with healthcare data management

Key responsibilities:

  • Enforce and maintain data quality standards
  • Monitor and evaluate data quality metrics
  • Collaborate with teams on data issues
  • Develop and implement data quality processes
  • Automate data quality monitoring and validation
Loopback Analytics (11-50 employees) · https://www.loopbackanalytics.com/

Job description

About Loopback

Loopback is a leading provider of data-driven solutions for health systems and life science organizations. The company’s comprehensive analytics platform drives growth for specialty pharmacy programs while connecting pharmacy activities with clinical and economic outcomes. Loopback partners with life science leaders to tackle their most important challenges and capture their most exciting opportunities. Loopback’s clients include leading academic medical centers, health systems, and life sciences companies. Founded in 2009, Loopback was rated one of the best places to work in Dallas by the DBJ. For more information about our company and services, please visit our website at www.loopbackanalytics.com.


About the Job 

We are seeking a skilled and detail-oriented Data Quality Engineer to join our team. The ideal candidate will be responsible for ensuring the accuracy, completeness, and reliability of our data assets. This role requires a strong understanding of data quality concepts, as well as hands-on experience with Python, Databricks, and ideally Snowflake. 

 

Job Duties to Include 

  • Enforce and maintain robust data quality standards and protocols
  • Monitor and evaluate data quality metrics to identify and address anomalies and inconsistencies
  • Participate in efforts to identify and address data-related errors and discrepancies promptly
  • Collaborate with technical and non-technical teams to resolve issues and prevent future occurrences
  • Develop and implement data quality assurance processes and procedures
  • Design and execute data quality checks and validations to ensure accuracy, completeness, and consistency of data
  • Collaborate with data engineers and data analysts to understand data pipelines and identify potential sources of data quality issues
  • Investigate and report data quality issues in a timely manner
  • Develop and maintain data quality metrics and reports to monitor and communicate the health of our data assets
  • Develop, document, and maintain manual and automated tests for ETL data pipelines using frameworks such as PyTest or Great Expectations (see the sketch after this list)
  • Work closely with cross-functional teams to define data quality requirements and standards
  • Provide technical expertise and guidance on data quality best practices
  • Automate data quality monitoring and validation processes using Python and Databricks
  • Perform data profiling and analysis to identify patterns and trends in data quality issues
  • Stay up to date with industry trends and emerging technologies related to data quality management
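
To illustrate the kind of automated ETL test the duties above describe, here is a minimal sketch using plain pandas and PyTest. The table and column names (dispense_df, claim_id, fill_date, days_supply) are hypothetical examples, not part of this posting; in practice the data would come from Databricks or Snowflake, and a framework such as Great Expectations could express the same checks declaratively.

```python
# Minimal sketch of PyTest-style data quality checks for an ETL output table.
# The frame and column names are illustrative only.
import pandas as pd
import pytest


@pytest.fixture
def dispense_df() -> pd.DataFrame:
    # Small in-memory stand-in for a pharmacy dispense extract.
    return pd.DataFrame(
        {
            "claim_id": [101, 102, 103],
            "fill_date": pd.to_datetime(["2024-01-02", "2024-01-05", "2024-01-09"]),
            "days_supply": [30, 90, 30],
        }
    )


def test_claim_id_is_unique_and_not_null(dispense_df):
    # Completeness and uniqueness checks on the primary key.
    assert dispense_df["claim_id"].notna().all()
    assert not dispense_df["claim_id"].duplicated().any()


def test_days_supply_within_expected_range(dispense_df):
    # Validity check: days_supply should fall within a plausible range.
    assert dispense_df["days_supply"].between(1, 365).all()
```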
     

 

Requirements 

  • A deep understanding of data management principles, data governance, data integration, data quality, and data architecture.
  • 3-5 years of experience working with relational databases, NoSQL databases, data warehouses, and big data technologies such as Snowflake, Databricks, and Spark.
  • Bachelor's or Master's degree, or relevant experience, in a field such as Computer Science, Data Science, or Healthcare Informatics.
  • Experience working with healthcare data and familiarity with the intricacies of healthcare data management.
  • Proven experience working as a Data Quality Engineer or in a similar role.
  • Strong proficiency in Python for data manipulation, analysis, and automation.
  • Hands-on experience with Databricks for data processing and analytics.
  • Familiarity with the Snowflake data warehouse platform is a plus.
  • Solid understanding of data quality concepts, frameworks, methodologies, commercial tools, and best practices.
  • Experience with data profiling tools and techniques (a minimal profiling sketch follows this list).
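
As a rough illustration of the data profiling mentioned in the last requirement, the sketch below computes per-column null rates and distinct counts with pandas. The function name and sample columns (mrn, ndc) are hypothetical; a dedicated profiling tool or a Databricks/Snowflake query would serve the same purpose at scale.

```python
# Minimal data profiling sketch: per-column null rates, distinct counts, and dtypes.
import pandas as pd


def profile_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Return basic quality metrics for each column of a DataFrame."""
    return pd.DataFrame(
        {
            "null_rate": df.isna().mean(),            # fraction of missing values
            "distinct_values": df.nunique(dropna=True),
            "dtype": df.dtypes.astype(str),
        }
    )


if __name__ == "__main__":
    sample = pd.DataFrame(
        {"mrn": ["A1", "A2", None, "A2"], "ndc": ["0002-1433", None, None, "0002-1433"]}
    )
    print(profile_columns(sample).sort_values("null_rate", ascending=False))
```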

 
Travel expectation: less than 15%

 

This employer will not sponsor applicants for employment visa status (e.g., H1-B) for this position.  All applicants must be currently authorized to work in the United States on a full-time basis. 

 

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, or national origin. 

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s): English

Other Skills

  • Technical Acumen
  • Collaboration
