Skills you need to succeed in this role
Most Important: Integrity of character, diligence, and a commitment to do your best
Must Have: SQL, Databricks (Scala/PySpark), Azure Data Factory, Test-Driven Development
Nice to Have: SSIS, Power BI, Kafka, Data Modeling, Data Warehousing
Self-Learner: You must be extremely hands-on and obsessive about implementing engineering best practices in your solutions
Sense of Ownership: Do whatever it takes to deliver the highest-quality solution in the most pragmatic manner
Experience in creating end-to-end data pipelines
Experience in Azure Data Factory (ADF), creating multiple pipelines and activities for full and incremental data loads into Azure Data Lake Store and Azure SQL DW
Working experience with Databricks
Strong in BI/DW/Data Lake architecture, design, and ETL
Strong Requirement Analysis, Data Analysis, and Data Modeling capabilities
Experience in object-oriented programming, data structures, algorithms, and software engineering methodologies
Experience working with Agile and eXtreme Programming methodologies in a continuous-deployment environment
Interest in mastering relational DBMSs, TDD, CI tools such as Azure DevOps, complexity analysis, and performance optimization
Working knowledge of server configuration/deployment
Experience using source control and bug-tracking systems, and in writing user stories and technical documentation
Expertise in creating tables, procedures, functions, triggers, indexes, views, and joins, and in optimizing complex queries
Experience with database versioning, backups, restores, and migration
Expertise in data security and integrity
Ability to perform database performance tuning and query optimization
Notice period: Immediate to 15 days