Your mission
The AI teams at neoshare design and build cutting-edge solutions that transform how our customers collaborate on financing and transaction cases. We turn vast collections of documents into structured insights, empower users to interact with their data in natural language, and enhance transparency, efficiency, and decision-making. Our goal is not just to automate but to elevate: giving our customers greater control, clarity, and even joy in their workflows.
As an MLOps Engineer at neoshare, you will be at the core of scaling AI into production, ensuring that models are efficiently deployed, monitored, and continuously improved. Working at the intersection of AI and DevOps, you will design scalable ML pipelines, automate workflows, and enable seamless AI operations across teams.
Where your experience is needed
Design and maintain scalable, reliable, and automated MLOps infrastructure, enabling seamless model deployment, versioning, and monitoring. Build self-service tools that empower AI teams to deploy models efficiently while ensuring high availability and operational excellence.
Develop and optimize model serving infrastructure for real-time inference, batch processing, and API-based AI services. Ensure low-latency, high-throughput execution across cloud and on-prem environments while collaborating with DevOps to scale AI workloads effectively.
Establish best practices for AI observability and monitoring, implementing tools to track model drift, accuracy, inference latency, and reliability. Drive continuous improvements in performance and stability, ensuring models operate securely and efficiently in production.
Foster a culture of technical excellence and collaboration. Share knowledge, refine best practices, and guide teams in adopting cutting-edge MLOps solutions that streamline AI development and deployment.