Location: Remote – LATAM
Industry: AI / SaaS / Technology
Seniority Level: Senior
We are seeking a highly skilled and experienced Machine Learning Engineer to join our AI Engineering team and drive the design, development, and operationalization of advanced Artificial Intelligence and Machine Learning systems. The ideal candidate will bring deep technical expertise in large-scale model development, including deep learning and generative AI architectures, and a strong background in deploying these systems within cloud-native infrastructures.
This role is ideal for an engineer with more than five years of hands-on experience who thrives in cross-functional environments and is committed to delivering scalable, real-time AI solutions.
Key Responsibilities

Model Development: Architect, implement, and optimize machine learning models, including Large Language Models (LLMs), transformer-based systems, and generative models.
Framework Implementation: Leverage industry-standard frameworks such as TensorFlow, PyTorch, and Hugging Face to build and productionize AI models.
Cloud-Native Engineering: Build and manage high-performance, containerized ML services and APIs using Kubernetes and cloud technologies.
Data Pipeline Development: Engineer scalable, real-time data pipelines leveraging distributed systems like Apache Kafka, Spark, or Flink.
Product Integration: Partner with software engineers, data scientists, and product teams to embed ML functionality into applications and services.
DevOps for ML: Contribute to continuous integration and deployment workflows using GitHub Actions, Docker, and MLOps best practices to streamline model lifecycle management.
Requirements

5+ years of hands-on experience designing and deploying ML/AI solutions in production environments.
Deep proficiency in Python and experience with ML libraries such as TensorFlow, PyTorch, Hugging Face, scikit-learn, and MLflow.
Strong knowledge of training and fine-tuning LLMs and deep learning models.
Solid experience working with Kubernetes and orchestrating containerized workloads in cloud environments.
Proficiency in real-time and batch processing using Apache Kafka, Spark, or similar technologies.
Understanding of modern DevOps practices and tools, particularly for AI systems (e.g., Docker, CI/CD pipelines, model monitoring).
Experience with public cloud platforms (AWS, GCP, Azure).
Knowledge of responsible AI principles and practices (fairness, bias mitigation, explainability).
Familiarity with vector databases, embedding techniques, and retrieval-augmented generation (RAG) pipelines.
What We Offer

A chance to work on next-generation AI systems with a tangible global impact.
A culture that promotes technical excellence, autonomy, and continuous learning.
Flexible remote work from LATAM, competitive compensation, and an international, inclusive team environment.