NexTurn is a next-generation engineering services firm specializing in cloud-native solutions. We help clients accelerate their innovation and digital transformation journey by unlocking the full value of the cloud.
We are a team of passionate technologists with strong experience in architecting, developing, deploying, and operating large-scale modern applications and infrastructure in hybrid multi-cloud environments. Our consultants and architects are high-performing engineering talent with integrated full-stack competencies, strong technology transformation experience, and a product-centric mindset.
NexTurn enables clients to navigate the paradigm shift in digital engineering with cloud-first solutions. Our services align with each client's digital transformation journey, covering strong cloud foundation and security practices, cloud-native engineering, and data engineering. Our platform-driven approach across the engineering lifecycle accelerates experimentation, creates new value, and drives intelligent automation.
Responsibilities
Partner with client teams, Engineering, and IT functions to create clear requirements for build needs and convey that vision to multiple scrum teams.
Demonstrate a deep understanding of the technology stack and its impact on the final product.
Collaborate with the architecture team to design optimal solutions that meet business needs.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Understand and work with multiple data sources to meet business rules and support analytical needs.
Qualifications
Bachelor’s degree in Computer Science or another STEM major (Science, Technology, Engineering, and Math).
Minimum of 5 years of relevant experience.
Desired Requirements
Good understanding of cloud computing and big data concepts.
Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of databases.
Experience building and optimizing big data pipelines, architectures, and data sets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytical skills for working with structured, semi-structured, and unstructured datasets.
A proven track record of manipulating, processing, and extracting value from large datasets.
Experience with big data tools: Hadoop, Hive, Spark, Kafka, etc.
Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Experience with any of the cloud platforms: AWS, GCP or Azure.
Experience with object-oriented or functional scripting languages: Python, Java, Scala, etc.
Location: Remote-first role; however, if the client mandates work from office, the candidate will need to relocate.
Required Profile
Level of experience: Senior (5-10 years)
Spoken language(s): English