Protagona
51-200 Employees
About Protagona
We’re Protagona – an AWS Advanced Tier partner that helps industry leaders propel innovation with cutting-edge data solutions.
Exclusively focused on AWS, our expertise and bias for action translate into faster delivery, scalable systems, and cost-efficient outcomes for our customers. The cornerstone of our humanized delivery approach is co-innovation, ensuring your team receives as much attention as your technology.
We work with organizations of all sizes, from startups to midsize companies to Fortune 500 corporations. While our core focus is data, our expertise spans cloud acceleration, application modernization, data engineering, and DevOps & SRE.
As a Data Engineer, you will be part of a talented team of engineers responsible for the deployment and configuration of cloud resources to meet individual client business needs in AWS. Client engagements cover a wide variety of business requirements and require our engineers to adapt quickly and stay on top of recent cloud technology trends. Candidates should be able to identify and remediate issues within cloud-based systems, based on their knowledge of industry standards and best practices.
Data Engineer Responsibilities
Work with the team to evaluate business needs and priorities, liaise with key business partners, and address team needs related to data systems and management
Translate business requirements into technical specifications; establish and define details, definitions, and requirements of applications, components and enhancements
Participate in project planning: identify milestones, deliverables, and resource requirements; track activities and task execution
Produce design documents, development and test plans, detailed functional specifications, user interface designs, and process flow charts to guide implementation
Develop data pipelines and APIs using Python, SQL, and potentially Spark, on AWS, Azure, or GCP
Use an analytical, data-driven approach to develop a deep understanding of a fast-changing business
Build large-scale batch and real-time data pipelines with data processing frameworks on AWS, Azure, or GCP
Migrate data from on-premises systems to the cloud and perform cloud data conversions
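The batch-pipeline work described above can be sketched, at its simplest, as an extract-transform-load job. The following is a minimal illustration in plain Python, with sqlite3 standing in for a cloud warehouse such as Redshift or RDS; all table and column names are hypothetical, not taken from this post.

```python
import sqlite3

def run_pipeline(conn: sqlite3.Connection) -> int:
    """Minimal batch ETL sketch: extract raw rows, transform, load a report table."""
    cur = conn.cursor()
    # Extract: read raw order rows from the source table (illustrative schema).
    rows = cur.execute("SELECT order_id, amount_cents FROM raw_orders").fetchall()
    # Transform: convert cents to dollars and drop non-positive amounts.
    cleaned = [(oid, cents / 100.0) for oid, cents in rows if cents > 0]
    # Load: write the cleaned rows into a reporting table in one transaction.
    cur.execute(
        "CREATE TABLE IF NOT EXISTS orders_report (order_id INTEGER, amount_usd REAL)"
    )
    cur.executemany("INSERT INTO orders_report VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (order_id INTEGER, amount_cents INTEGER)")
    conn.executemany(
        "INSERT INTO raw_orders VALUES (?, ?)", [(1, 1999), (2, -50), (3, 500)]
    )
    print(run_pipeline(conn))  # prints 2: one row was dropped in the transform step
```

In a production engagement the extract and load steps would target managed services (e.g. S3, Glue, or Redshift) rather than an in-memory database, but the shape of the job is the same.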
Desired Skills & Experience
Experience in data engineering with an emphasis on data analytics and reporting
Exposure to the AWS Cloud Platform
Experience in SQL, data transformations, and troubleshooting across at least one database platform (Redshift, Amazon RDS, Cassandra, Snowflake, PostgreSQL, Databricks, etc.)
Experience in the design and build of data extraction, transformation, and loading processes by writing custom data pipelines
Experience in a scripting language such as Python
Experience designing and building solutions utilizing various cloud services such as EC2, S3, EMR, Kinesis, RDS, Redshift/Spectrum, Lambda, Glue, Athena, API Gateway, etc.
Required profile
Spoken language(s): English