Evnek Technologies strives to be the leader in cloud and analytics services. The insights and quality services we deliver help build trust and confidence with our clients. We develop outstanding relationships with our clients and deliver on our promises. We also play a crucial role in challenging traditional technologies and continually look for opportunities to innovate.
Evnek Technologies specializes in Cloud and Enterprise Data Management. We offer services in data integration, business intelligence, data warehousing, architecture, modeling and analytics. We are thought leaders and work with the latest technologies in the Big Data, Cloud, and Data Science space.
Evnek Technologies is passionate about adding value. We partner with our clients to ensure a positive user experience while building elegant, scalable solutions that evolve with the company and remain dynamic in an ever-changing business landscape.
Responsibilities:
Develop, maintain, and optimize large-scale data pipelines and workflows using Spark, PySpark, Airflow, and other Big Data frameworks (a minimal illustrative sketch follows this list).
Work extensively with SQL, Impala, Hive, and PL/SQL to perform advanced data transformations and analytics.
Design and implement scalable data storage and retrieval systems, ensuring high availability and performance.
Utilize Sqoop and other data ingestion tools to integrate structured and unstructured data from various sources into Hadoop-based systems.
Collaborate with cross-functional teams to define and implement data warehousing strategies, leveraging business intelligence tools to drive insights.
Develop scripts and automate processes using Python and Unix/Linux to ensure efficient data handling and monitoring.
Troubleshoot and resolve issues with big data pipelines, ensuring data quality and integrity across platforms.
Stay updated on the latest trends in big data technologies and recommend new tools and techniques to enhance data processing and analytics capabilities.
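For illustration only, the sketch below shows the kind of workflow orchestration this role involves: a minimal Airflow DAG that schedules a daily PySpark job. It assumes Airflow 2.4+ with the apache-airflow-providers-apache-spark package installed; the DAG id, script path, and configuration values are hypothetical examples, not Evnek's actual pipelines.

```python
# Minimal illustrative sketch: a daily Airflow DAG that submits a PySpark job.
# Assumes Airflow 2.4+ and the Apache Spark provider; all names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_sales_aggregation",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submits a (hypothetical) PySpark script to the cluster once per day.
    aggregate_sales = SparkSubmitOperator(
        task_id="aggregate_sales",
        application="/opt/jobs/aggregate_sales.py",  # hypothetical script path
        conn_id="spark_default",
        conf={"spark.sql.shuffle.partitions": "200"},
    )
```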
Qualifications:
Education: Bachelor’s degree in Computer Science, Information Systems, or a related field.
Experience:
3+ years of experience as a Big Data Engineer or similar role.
Proficiency in big data frameworks such as Sqoop, Spark, Hadoop, Hive, and Impala.
Strong SQL, Impala, Hive, and PL/SQL skills, with a deep understanding of query optimization and performance tuning (see the illustrative example after this list).
Solid understanding of data warehousing concepts and experience with business intelligence (BI) tools like Tableau, Power BI, or Looker.
Hands-on experience with Python programming, including PySpark, for data manipulation and pipeline development.
Knowledge of Airflow for workflow orchestration and scheduling.
Working knowledge of Unix/Linux environments, including shell scripting.
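As a small, hedged example of the query-tuning skills listed above: the PySpark snippet below reads a partitioned Hive table and filters on the partition column so that only matching partitions are scanned (partition pruning), a standard Hive/Impala optimization. The table and column names are hypothetical.

```python
# Minimal illustrative sketch: partition pruning on a Hive table via Spark SQL.
# Assumes a Hive-enabled Spark deployment; table/column names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("partition-pruning-example")
    .enableHiveSupport()  # needed to resolve tables from the Hive metastore
    .getOrCreate()
)

# Filtering on the partition column (event_date) lets the engine scan only
# the matching partitions instead of the entire table.
daily_totals = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_amount
    FROM sales.transactions          -- hypothetical partitioned table
    WHERE event_date = '2024-01-01'  -- partition column: prunes the scan
    GROUP BY customer_id
""")

daily_totals.show()
```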
Required profile
Level of experience: Mid-level (2-5 years)
Spoken language(s): English