MLOps Engineer
Required candidate profile:
- 3+ years' experience developing continuous integration and deployment (CI/CD) pipelines (e.g. Jenkins, GitHub Actions) and bringing ML models into CI/CD pipelines
- Candidate with strong Azure expertise
- Experience productionizing models
- Complete knowledge of the Azure ecosystem, especially in the area of data engineering (DE)
- Prior experience designing, building, testing, and maintaining machine learning infrastructure that empowers data scientists to iterate rapidly on model development
- Experience developing continuous integration and deployment (CI/CD) pipelines on Azure, including Azure ML, MLflow, and Azure DevOps
- Proficiency with Git, Docker and containers, and Kubernetes
- Familiarity with Terraform
- End-to-end (E2E) production experience with Azure ML and Azure ML Pipelines
- Experience with the Azure ML extension for Azure DevOps
- Experience monitoring model drift (concept drift and data drift), preferably on Azure ML
- Candidate will be part of a cross-functional team that builds and delivers production-ready data science projects. You will work with team members and stakeholders to creatively identify, design, and implement solutions that reduce operational burden, increase reliability and resiliency, ensure disaster recovery and business continuity, enable CI/CD, optimize ML and AI services, and keep everything in version control as infrastructure as code.
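The drift-monitoring requirement above can be illustrated with a minimal, framework-free sketch. In practice Azure ML's dataset monitors handle this, but the underlying idea is a distribution comparison such as the Population Stability Index (PSI); the function, thresholds, and sample data below are illustrative only.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    A rough data-drift signal: PSI < 0.1 is usually read as stable,
    0.1-0.25 as moderate drift, and > 0.25 as significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Floor at a tiny fraction to avoid log(0) on empty buckets.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]   # training-time feature values
identical = list(baseline)                 # same distribution -> PSI near 0
shifted = [x + 0.5 for x in baseline]      # shifted distribution -> high PSI
```

A scheduled pipeline step would compute this per feature against the training baseline and raise an alert when the value crosses the chosen threshold.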

About Cyphertree Technologies Pvt. Ltd.
ROLES AND RESPONSIBILITIES:
- Plan, schedule, and manage all releases across product and customer projects.
- Define and maintain the release calendar, identifying dependencies and managing risks proactively.
- Partner with engineering, QA, DevOps, and product management to ensure release readiness.
- Create release documentation (notes, guides, videos) for both internal stakeholders and customers.
- Run a release review process with product leads before publishing.
- Publish releases and updates to the company website release section.
- Drive communication of release details to internal teams and customers in a clear, concise way.
- Manage post-release validation and rollback procedures when required.
- Continuously improve release management through automation, tooling, and process refinement.
IDEAL CANDIDATE:
- 3+ years of experience in Release Management, DevOps, or related roles.
- Strong knowledge of CI/CD pipelines, source control (Git), and build/deployment practices.
- Experience creating release documentation and customer-facing content (videos, notes, FAQs).
- Excellent communication and stakeholder management skills; able to translate technical changes into business impact.
- Familiarity with SaaS, iPaaS, or enterprise software environments is a strong plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive salary package.
- Opportunity to learn from and work with senior leadership & founders.
- Build solutions for large enterprises that move from concept to real-world impact.
- Exceptional career growth pathways in a highly innovative and rapidly scaling environment.
Your challenge
As a DevOps Engineer, you’re responsible for automating the deployment of our software solutions. You interact with software engineers, functional product managers, and ICT professionals daily. Using your technical skills, you provide internal tooling for development and QA teams around the globe.
We believe in an integrated approach, where every team member is involved in all steps of the software development life cycle: analysis, architectural design, programming, and maintenance. We expect you to be the proud owner of your work and take responsibility for it.
Together with a tight-knit group of 5-6 team players, you develop, maintain and support key elements of our infrastructure:
- Continuous integration and production systems
- Release and build management
- Package management
- Containerization and orchestration
Your team
As our new DevOps Engineer, you’ll be part of a large, fast-growing, international team located in Belgium (Antwerp, Ghent, Wavre), Spain (Barcelona), Ukraine (Lviv), and the US (Atlanta). Software Development creates leading software solutions that make a difference to our customers. We make smart, robust, and scalable software to solve complex supply chain planning challenges.
Your profile
We are looking for someone who meets the following qualifications:
- A bachelor’s or master’s degree in a field related to Computer Science.
- Pride in developing high-quality solutions and taking responsibility for their maintenance.
- Minimum 6 years' experience in a similar role
- Good knowledge of the following technologies: Kubernetes, PowerShell or bash scripting, Jenkins, Azure Pipelines or similar automation systems, Git.
- Familiarity with the Cloud-Native Landscape. Terraform, Ansible, and Helm are tools we use daily.
- A supportive attitude towards users.
Bonus points if you have:
- A background in DevOps, ICT, or technical support.
- Customer support experience or other relevant work experience, including internships.
- Understanding of Windows networks and Active Directory.
- Experience with transferring applications into the cloud.
- Programming skills.
Soft skills
Teamwork
Pragmatic attitude
Passionate
Analytical thinker
Tech Savvy
Fast Learner
Hard skills
Kubernetes
CI/CD
Git
PowerShell
Your future
At OMP, we’re eager to find your best career fit. Our talent management program supports your personal development and empowers you to build a career in line with your ambitions.
Many of our team members who start as DevOps Engineers grow into roles in DevOps/Cloud architecture, project management, or people management.
Company Overview
Adia Health revolutionizes clinical decision support by enhancing diagnostic accuracy and personalizing care. It modernizes the diagnostic process by automating optimal lab test selection and interpretation, utilizing a combination of expert medical insights, real-world data, and artificial intelligence. This approach not only streamlines the diagnostic journey but also ensures precise, individualized patient care by integrating comprehensive medical histories and collective platform knowledge.
Position Overview
We are seeking a talented and experienced Site Reliability Engineer/DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our infrastructure and applications. You will collaborate closely with development, operations, and product teams to automate processes, implement best practices, and improve system reliability.
Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure solutions using modern DevOps practices.
- Automate deployment, monitoring, and maintenance processes to streamline operations and increase efficiency.
- Monitor system performance and troubleshoot issues, ensuring timely resolution to minimize downtime and impact on users.
- Implement and manage CI/CD pipelines to automate software delivery and ensure code quality.
- Manage and configure cloud-based infrastructure services to optimize performance and cost.
- Collaborate with development teams to design and implement scalable, reliable, and secure applications.
- Implement and maintain monitoring, logging, and alerting solutions to proactively identify and address potential issues.
- Conduct periodic security assessments and implement appropriate measures to ensure the integrity and security of systems and data.
- Continuously evaluate and implement new tools and technologies to improve efficiency, reliability, and scalability.
- Participate in on-call rotation and respond to incidents promptly to ensure system uptime and availability.
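The monitoring-and-alerting responsibility above can be sketched as a rolling error-rate check. Real deployments would typically express this as an alert rule in a tool like Prometheus/Alertmanager rather than application code; the class, window, and threshold below are purely illustrative.

```python
from collections import deque

class ErrorRateAlert:
    """Fires when the error fraction over the last `window` requests
    exceeds `threshold` -- a toy version of the alerting rules usually
    written declaratively in monitoring systems."""

    def __init__(self, window=100, threshold=0.05):
        self.events = deque(maxlen=window)
        self.threshold = threshold

    def record(self, ok):
        """Record one request outcome; return True if the alert should fire."""
        self.events.append(0 if ok else 1)
        # Only evaluate once the window is full, to avoid noisy early alerts.
        if len(self.events) < self.events.maxlen:
            return False
        return sum(self.events) / len(self.events) > self.threshold

# Demo: 8 successes, then 3 failures, over a 10-request window.
alert = ErrorRateAlert(window=10, threshold=0.2)
fired = [alert.record(ok) for ok in [True] * 8 + [False] * 3]
```

Only the final failure pushes the windowed error rate above 20%, so only the last recorded event triggers the alert.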
Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- Proven experience (5+ years) as a Site Reliability Engineer, DevOps Engineer, or similar role
- Strong understanding of cloud computing principles and experience with AWS
- Experience building and supporting complex CI/CD pipelines using GitHub
- Experience building and supporting infrastructure as code using Terraform
- Proficiency in scripting and automation tools
- Solid understanding of networking concepts and protocols
- Understanding of security best practices and experience implementing security controls in cloud environments
- Knowledge of modern security and compliance requirements such as SOC 2, HIPAA, and HITRUST is a solid advantage.
● Auditing, monitoring, and improving existing infrastructure components of a highly available, scaled product on cloud with Ubuntu servers
● Running daily maintenance tasks and improving them with automation where possible
● Deploying new components, servers, and other infrastructure when needed
● Coming up with innovative ways to automate tasks
● Working with telecom carriers to obtain rates and destinations, and updating them regularly on the system
● Working with Docker containers, Tinc, iptables, HAProxy, etcd, MySQL, MongoDB, CouchDB, and Ansible
You would bring the following skills to our team:
● Expertise with Docker containers and their networking, Tinc, iptables, HAProxy, etcd, and Ansible
● Extensive experience with setup, maintenance, monitoring, backup, and replication of MySQL
● Expertise with Ubuntu servers, the OS, and server-level networking
● Good experience working with MongoDB and CouchDB
● Proficiency with networking tools
● Experience with open-source server monitoring solutions such as Nagios and Zabbix
● Experience with highly scaled, distributed applications running on datacenter Ubuntu VPS instances
● An innovative, out-of-the-box thinker with multitasking skills who works efficiently in a small team
● Working knowledge of a scripting language such as Bash, Node, or Python
● Experience with calling platforms such as FreeSWITCH, OpenSIPS, or Kamailio, and basic knowledge of the SIP protocol, is an advantage
You will be responsible for:
- Managing all DevOps and infrastructure for Sizzle
- We have both cloud and on-premise servers
- Work closely with all AI and backend engineers on processing requirements and managing both development and production requirements
- Optimize the pipeline to ensure ultra-fast processing
- Work closely with management team on infrastructure upgrades
You should have the following qualities:
- 3+ years of experience in DevOps, and CI/CD
- Deep experience in: GitLab, GitOps, Ansible, Docker, Grafana, Prometheus
- Strong background in Linux system administration
- Deep expertise with AI/ML pipeline processing, especially GPU processing. This doesn't need to include model training, data gathering, etc.; we're looking more for experience with model deployment and inference tasks at scale
- Deep expertise in Python including multiprocessing / multithreaded applications
- Performance profiling including memory, CPU, GPU profiling
- Error handling and building robust scripts that will be expected to run for weeks to months at a time
- Deploying to production servers and monitoring and maintaining the scripts
- DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
- Expertise in Docker-based virtualization including - creating & maintaining custom Docker images, deployment of Docker images on cloud and on-premise services, monitoring of production Docker images with robust error handling
- Expertise in AWS infrastructure, networking, availability
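The error-handling requirement above, i.e. "robust scripts that will be expected to run for weeks to months at a time", usually comes down to wrapping flaky steps (DB writes, GPU jobs, network calls) so transient failures do not kill the whole job. A minimal sketch using only the standard library; the flaky step and delays are illustrative, not part of any particular codebase:

```python
import time

def retry(fn, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call fn, retrying with exponential backoff on any exception.

    Long-running pipeline scripts wrap transient-failure-prone steps
    like this so one hiccup does not abort weeks of processing."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                         # out of retries: surface the error
            sleep(base_delay * 2 ** attempt)  # back off: 1s, 2s, 4s, ...

# Example: a step that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "done"

result = retry(flaky, sleep=lambda s: None)  # no real sleeping in the demo
```

In production code you would typically narrow the caught exception types and log each retry rather than failing silently.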
Optional but beneficial to have:
- Experience with running Nvidia GPU / CUDA-based tasks
- Experience with image processing in Python (e.g. OpenCV, Pillow)
- Experience with PostgreSQL and MongoDB (Or SQL familiarity)
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor's or master's degree in computer science or a related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
DevOps, Ansible, CI/CD, GitLab, GitOps, Docker, Python, AWS, GCP, Grafana, Prometheus, SQLAlchemy, Linux/Ubuntu system administration
Seniority: We are looking for a mid to senior level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 3 years to 6 years
• DevOps/Build and Release Engineer with the maturity to help define and automate processes.
• Configure, install, and manage source control tools such as AWS CodeCommit, GitHub, or Bitbucket.
• Automate implementation and deployment of code in cloud-based infrastructure (AWS preferred).
• Set up monitoring of infrastructure and applications with alerting frameworks.
Requirements:
• Able to code in Python.
• Extensive experience building and supporting Docker and Kubernetes in production.
• Understanding of AWS (Amazon Web Services) and the ability to jump right into our environment.
• Security Clearance will be required.
• Lambda used in conjunction with S3, CloudTrail and EC2.
• CloudFormation (Infrastructure as code)
• CloudWatch and CloudTrail
• Version control (SVN, Git, Artifactory, Bitbucket)
• CI/CD (Jenkins or similar)
• Docker Compose or other orchestration tools
• Rest API
• Databases (Postgres/Oracle/SQL Server, NoSQL, or graph DB)
• Bachelor's degree in Computer Science, Computer Engineering, or a closely related field.
• Server orchestration using tools like Puppet, Chef, Ansible, etc.
Please send your CV to priyanka.sharma@neotas.com
Neotas.com
About RaRa Delivery
Not just a delivery company…
RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data driven logistics.
RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on 'one-to-one' deliveries, the company has developed proprietary, real-time batching tech to do 'many-to-many' deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia, like Blibli, Sayurbox, Kopi Kenangan, and many more.
We are a distributed team with the company headquartered in Singapore 🇸🇬 , core operations in Indonesia 🇮🇩 and technology team based out of India 🇮🇳
Future of eCommerce Logistics.
- Data driven logistics company that is bringing in same day delivery revolution in Indonesia 🇮🇩
- Revolutionising delivery as an experience
- Empowering D2C Sellers with logistics as the core technology
- Build and maintain CI/CD tools and pipelines.
- Designing and managing highly scalable, reliable, and fault-tolerant infrastructure & networking that forms the backbone of distributed systems at RaRa Delivery.
- Continuously improve code quality, product execution, and customer delight.
- Communicate, collaborate and work effectively across distributed teams in a global environment.
- Strengthen teams across the product by sharing your knowledge base
- Contribute to improving team relatedness, and help build a culture of camaraderie.
- Continuously refactor applications to ensure high-quality design
- Pair with team members on functional and non-functional requirements and spread design philosophy and goals across the team
- Excellent Bash and scripting fundamentals, with hands-on scripting experience in languages such as Python, Ruby, or Go
- Good understanding of distributed systems fundamentals and the ability to troubleshoot issues in large distributed infrastructure
- Working knowledge of the TCP/IP stack, internet routing, and load balancing
- Basic understanding of cluster orchestrators and schedulers (Kubernetes)
- Deep knowledge of Linux as a production environment, container technologies (e.g. Docker), infrastructure as code (e.g. Terraform), and large-scale Kubernetes administration
- Experience with production distributed systems and an understanding of microservices architecture, RESTful services, and CI/CD
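The load-balancing bullet above can be grounded with a minimal layer-4 health probe of the kind load balancers run against backends: can a TCP handshake be completed? This is a standard-library sketch; the demo binds an ephemeral local port rather than probing any real service.

```python
import socket

def tcp_healthy(host, port, timeout=1.0):
    """Layer-4 health probe: report whether a TCP handshake completes.

    Load balancers use checks like this to decide whether a backend
    should keep receiving traffic."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: bind an ephemeral local port, probe it, then close it and probe again.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
up = tcp_healthy("127.0.0.1", port)     # handshake completes: healthy
srv.close()
down = tcp_healthy("127.0.0.1", port)   # connection refused: unhealthy
```

Production checks usually go a layer higher (an HTTP `/healthz` endpoint) so the probe verifies the application, not just the socket.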
- 4+ years of experience in IT and infrastructure
- 2+ years of experience in Azure DevOps
- Experience using Azure DevOps as both a CI/CD tool and an Agile framework
- Practical experience building and maintaining automated operational infrastructure
- Experience building React or Angular applications; .NET is a must
- Practical experience using version control systems with Azure Repos
- Experience developing and maintaining scripts using PowerShell and ARM templates/Terraform for infrastructure as code
- Experience with Linux shell scripting (Ubuntu) is a must
- Hands-on experience with release automation, configuration, and debugging
- Good knowledge of branching and merging
- Experience integrating static code analysis tools such as SonarQube and Snyk is a must
Responsibilities
- Building and maintaining resilient, scalable production infrastructure
- Improving monitoring systems
- Creating and supporting development automation processes (CI/CD)
- Participating in infrastructure development
- Detecting problems in the architecture and proposing solutions
- Creating tasks for system improvements in scalability, performance, and monitoring
- Analyzing product requirements from a DevOps perspective
- Incident analysis and remediation
Skills and Experience
- Understanding of distributed systems principles
- Understanding of the principles of building resilient network infrastructure
- Experience administering Ubuntu Linux (other Debian-like distributions a plus)
- Strong knowledge of Bash
- Experience working with LXC containers
- Understanding of and experience with the infrastructure-as-code approach
- Experience developing idempotent Ansible roles
- Experience working with Git
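Idempotency, the property the Ansible-roles bullet asks for, means applying an operation once or many times yields the same end state, and the operation reports whether it actually changed anything. A plain-Python sketch of the contract an Ansible `lineinfile`-style task follows (the helper and config lines are illustrative):

```python
def ensure_line(lines, line):
    """Idempotent 'ensure this config line exists' operation.

    Mirrors the convergence contract of configuration management tasks:
    the result is the same however many times you apply it, and the
    second return value reports changed (True) vs. already-ok (False)."""
    if line in lines:
        return lines, False           # already converged: no change
    return lines + [line], True       # converged now: changed

cfg = ["PermitRootLogin no"]
cfg, changed_first = ensure_line(cfg, "PasswordAuthentication no")
cfg, changed_second = ensure_line(cfg, "PasswordAuthentication no")
```

The first application reports a change; the second is a no-op, which is exactly what makes such roles safe to re-run on every deploy.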
Preferred experience
- Experience with relational databases (PostgreSQL) and the ability to write simple SQL queries
- Experience with monitoring and metrics collection systems (Prometheus, Grafana, Zabbix)
- Understanding of dynamic routing (OSPF)
- Knowledge and experience working with Cisco network equipment
- Experience working with Cisco NX-OS
- Experience working with IPsec, VXLAN, and Open vSwitch
- Knowledge of the principles of multicast protocols (IGMP, PIM)
- Experience setting up multicast on Cisco equipment
- Experience administering Atlassian products
We are looking for a full-time remote DevOps Engineer who has worked with CI/CD automation, big data pipelines, and cloud infrastructure to solve complex technical challenges at scale that will reshape the healthcare industry for generations. You will get the opportunity to be involved in the latest tech in big data engineering, novel machine learning pipelines, and highly scalable backend development. The successful candidate will work in a team of highly skilled and experienced developers, data scientists, and the CTO.
Job Requirements
- Experience deploying, automating, maintaining, and improving complex services and pipelines
- Strong understanding of DevOps tools, processes, and methodologies
- Experience with AWS CloudFormation and the AWS CLI is essential
- The ability to work to project deadlines efficiently and with minimum guidance
- A positive attitude and enjoys working within a global distributed team
Skills
- Highly proficient working with CI/CD and automating infrastructure provisioning
- Deep understanding of the AWS cloud platform and hands-on experience setting up and maintaining large-scale implementations
- Experience with JavaScript/TypeScript, Node, Python and Bash/Shell Scripting
- Hands on experience with Docker and container orchestration
- Experience setting up and maintaining big data pipelines, Serverless stacks and containers infrastructure
- An interest in healthcare and medical sectors
- Technical degree with 4+ years' infrastructure and automation experience