
MLOps Lead Engineer
at an IT solutions company specializing in application lifecycle management.
- Automate and maintain ML and Data pipelines at scale
- Collaborate with Data Scientists and Data Engineers on feature development teams to containerize and build out deployment pipelines for new modules
- Maintain and expand our on-prem deployments with Spark clusters
- Design, build, and optimize application containerization and orchestration with Docker, Kubernetes, and AWS or Azure
- 5 years of IT experience in data-driven or AI technology products
- Understanding of ML Model Deployment and Lifecycle
- Extensive experience with Apache Airflow for MLOps workflow automation (a minimal DAG sketch follows this list)
- Experience in building and automating data pipelines
- Experience working with Spark cluster architecture
- Extensive experience with Unix/Linux environments
- Experience with standard concepts and technologies used in CI/CD build and deployment pipelines using Jenkins
- Strong experience in Python and PySpark and building required automation (using standard technologies such as Docker, Jenkins, and Ansible).
- Experience with Kubernetes or Docker Swarm
- Working technical knowledge of current systems software, protocols, and standards, including firewalls, Active Directory, etc.
- Basic knowledge of Multi-tier architectures: load balancers, caching, web servers, application servers, and databases.
- Experience with various virtualization technologies and multi-tenant, private and hybrid cloud environments.
- Hands-on software and hardware troubleshooting experience.
- Experience documenting and maintaining configuration and process information.
- Basic knowledge of machine learning frameworks: TensorFlow, Caffe/Caffe2, PyTorch
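As a minimal illustration of the Airflow-based workflow automation mentioned above, here is a hedged sketch of a daily ML pipeline DAG; the DAG id, task names, and the extract/train/validate callables are hypothetical placeholders, not part of the posting, and assume Apache Airflow 2.x.

```python
# Minimal sketch of an Airflow DAG for an ML pipeline (hypothetical task names).
# Assumes Apache Airflow 2.x; extract/train/validate are placeholder callables.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_features():
    print("pull raw data and build features")


def train_model():
    print("train the model on the latest features")


def validate_model():
    print("evaluate the candidate model before promotion")


with DAG(
    dag_id="ml_training_pipeline",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    validate = PythonOperator(task_id="validate_model", python_callable=validate_model)

    # Linear dependency: features are built, then the model is trained and validated.
    extract >> train >> validate
```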

Roles & Responsibilities:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Knowledge in Linux/Unix Administration and Python/Shell Scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Knowledge of deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and monitoring tools like Zabbix, CloudWatch, and Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, and caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
• Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior Driven Development, Test Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms viz. Slack, Jira, Git, Jenkins, Code Quality & Security Plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in virtualization & containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of the categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, or Security, Identity, & Compliance; (or) equivalent demonstrable cloud platform experience.
• Well-versed, with demonstrable working experience, in API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption tools & platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; freshers passing out of college or lateral movers into IT must be able to code in the languages they have studied.
• Well-versed with Storage, Networks, and Storage Networking basics, which will enable you to work in a Cloud environment.
• Well-versed with Network, Data, and Application Security basics, which will enable you to work in a Cloud as well as a Business Applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
Multiplier enables companies to employ anyone, anywhere in a few clicks. Our SaaS platform combines the multi-local complexities of hiring & paying employees anywhere in the world, and automates everything. We are passionate about creating a world where people can get a job they love, without having to leave the people they love.

We are an early-stage startup with a "Day one" attitude, and we are building a team that will make Multiplier the market leader in this space. Every day is an exciting one at Multiplier right now because we are figuring out a real problem in the market and building a first-of-its-kind product around it. We are looking for smart and talented people who will add to our collective energy and share the same excitement in making Multiplier a big deal. We are headquartered in Singapore, but our team is remote.
What will I be doing? 👩‍💻👨‍💻
- Owning and managing our cloud infrastructure on AWS.
- Working as part of product development from inception to launch, and owning deployment pipelines through to site reliability.
- Ensuring a high-availability production site with proper alerting, monitoring, and security in place.
- Creating an efficient environment for product development teams to build, test, and deploy features quickly by providing multiple environments for testing and staging.
- Using infrastructure as code and the best DevOps methods and tools to innovate and keep improving.
- Creating an automation culture and adding automation wherever it is needed.
What do I need? 🤓
- 4 years of industry experience in a similar DevOps role, preferably as part of a SaaS product team. You can demonstrate the significant impact your work has had on the product and/or the team.
- Deep knowledge of AWS and the services available. 2 years of experience building complex architecture on cloud infrastructure.
- Exceptional understanding of containerisation technologies and Docker. Hands-on experience with Kubernetes, AWS ECS, and AWS EKS.
- Experience with Terraform or any other infrastructure-as-code solution (a minimal Python-based sketch follows this list).
- Able to comfortably use at least one high-level programming language such as Java, JavaScript, or Python. Hands-on experience scripting in Bash, Groovy, and others.
- Good understanding of security in web technologies and cloud infrastructure.
- Able to work on, and enjoy, problems of a very complex nature.
- Willingness to quickly learn and use new technologies or frameworks.
- Clear and responsive communication.
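Terraform configurations are written in HCL; to keep examples in one language, here is a minimal, hedged infrastructure-as-code sketch in Python using the AWS CDK, one of the "other infrastructure as code" options the posting allows. The stack and bucket names are made up, and the sketch assumes aws-cdk-lib v2 and the constructs package are installed.

```python
# Minimal AWS CDK v2 sketch: one stack containing a versioned S3 bucket.
# Hypothetical names; requires `pip install aws-cdk-lib constructs` and `cdk deploy`.
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct


class ArtifactStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # A versioned bucket for build artifacts (placeholder resource).
        s3.Bucket(self, "ArtifactBucket", versioned=True)


app = App()
ArtifactStack(app, "artifact-stack")  # placeholder stack name
app.synth()
```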
- Building and setting up new development tools and infrastructure
- Understanding the needs of stakeholders and conveying this to developers
- Working on ways to automate and improve development and release processes
- Ensuring that systems are safe and secure against cybersecurity threats
- Identifying technical problems and developing software updates and 'fixes'
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
Daily and Monthly Responsibilities:
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a relevant field
- Experience as a DevOps Engineer or similar software engineering role
- Proficient with git and git workflows
- Good knowledge of Python
- Working knowledge of SQL and of databases such as MySQL and PostgreSQL
- Problem solving attitude
- Collaborative team spirit
- Detailed knowledge of Linux systems (Ubuntu)
- Proficient with the AWS console; should have handled the infrastructure of a product (including dev and prod environments)
Mandatory hands-on experience in the following (a minimal boto3 sketch follows this list):
- Python-based application deployment and maintenance
- NGINX web server
- AWS modules: EC2, VPC, EBS, S3
- IAM setup
- Database configuration: MySQL, PostgreSQL
- Linux-flavoured OS
- Instance/Disaster management
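As a hedged, minimal illustration of the AWS items above (EC2, S3, IAM), here is a boto3 sketch; the region and bucket name are placeholders, and it assumes valid AWS credentials are already configured.

```python
# Minimal boto3 sketch: list EC2 instances, create an S3 bucket, list IAM users.
# Placeholder region/bucket name; assumes AWS credentials are configured locally.
import boto3

REGION = "ap-south-1"                       # placeholder region
BUCKET = "example-devops-artifacts-bucket"  # placeholder; bucket names must be globally unique

# EC2: print instance IDs and their current state.
ec2 = boto3.client("ec2", region_name=REGION)
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])

# S3: create a bucket in the chosen region.
s3 = boto3.client("s3", region_name=REGION)
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# IAM: list user names in the account.
iam = boto3.client("iam")
for user in iam.list_users()["Users"]:
    print(user["UserName"])
```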
We are hiring DevOps Engineers for a luxury-commerce platform that is well-funded and now ready for its next level of growth. It is backed by reputed investors and is already a leader in its space. The focus for the coming years will be heavily on scaling the platform through technology. Market-driven competitive salary for the right candidate.
Job Title : DevOps System Engineer
Responsibilities:
- Implementing, maintaining, monitoring, and supporting the IT infrastructure
- Writing scripts for service quality analysis, monitoring, and operation
- Designing procedures for system troubleshooting and maintenance
- Investigating and resolving technical issues by deploying updates/fixes
- Implementing automation tools and frameworks for automatic code deployment (CI/CD)
- Quality control and management of the codebase
- Ownership of infrastructure and deployments in various environments
Requirements:
- Degree in Computer Science, Engineering or a related field
- Prior experience as a DevOps engineer
- Good knowledge of various operating systems - Linux, Windows, Mac.
- Good knowledge of networking, virtualization, and containerization technologies
- Familiarity with software release management and deployment (Git, CI/CD)
- Familiarity with one or more popular cloud platforms such as AWS, Azure, etc.
- Solid understanding of DevOps principles and practices
- Knowledge of systems and platforms security
- Good problem-solving skills and attention to detail
Skills: Linux, Networking, Docker, Kubernetes, AWS/Azure, Git/GitHub, Jenkins, Selenium, Puppet/Chef/Ansible, Nagios
Experience: 5+ years
Location: Prabhadevi, Mumbai
Interested candidates can apply with their updated profiles.
Regards,
HR Team
Aza Fashions
The AWS Cloud/DevOps Engineer will work with the engineering team and focus on AWS infrastructure and automation. A key part of the role is championing and leading infrastructure as code. The Engineer will work closely with the Manager of Operations and DevOps to build, manage, and automate our AWS infrastructure.
Duties & Responsibilities:
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure and maintain AWS cloud infrastructure defined as code
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization
Qualifications:
- 1-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
- Strong understanding of how to secure AWS environments and meet compliance requirements
- Expertise using Chef for configuration management
- Hands-on experience deploying and managing infrastructure with Terraform
- Solid foundation of networking and Linux administration
- Experience with CI/CD, Docker, GitLab, Jenkins, ELK, and deploying applications on AWS
- Ability to learn/use a wide variety of open source technologies and tools
- Strong bias for action and ownership
Job Summary
You'd meticulously analyze project requirements and carry forward the development of highly robust, scalable, and easily maintainable backend applications, working independently, and you'll have the support & opportunity to thrive in a fast-paced environment.
Responsibilities and Duties:
- building and setting up new development tools and infrastructure
- understanding the needs of stakeholders and conveying this to developers
- working on ways to automate and improve development and release processes
- testing and examining code written by others and analysing results
- ensuring that systems are safe and secure against cybersecurity threats
- identifying technical problems and developing software updates and ‘fixes’
- working with software developers and software engineers to ensure that development follows established processes and works as intended
- planning out projects and being involved in project management decisions
Skill Requirements:
- Managing GitHub (for example: creating branches for test, QA, development, and production; creating release tags; resolving merge conflicts)
- Setting up of the servers based on the projects in either AWS or Azure (test, development, QA, staging and production)
- Configuring AWS S3 and S3 web hosting; archiving data from S3 to S3 Glacier (a minimal boto3 lifecycle sketch follows this list)
- Deploying the build (application) to the servers using AWS CI/CD and Jenkins (automated and manual)
- AWS Networking and Content delivery (VPC, Route 53 and CloudFront)
- Managing databases like RDS, Snowflake, Athena, Redis and Elasticsearch
- Managing IAM roles and policies for services like Lambda, SNS, Amazon Cognito, Secrets Manager, Certificate Manager, GuardDuty, Inspector, EC2, and S3
- AWS analytics (Elasticsearch, Athena, Glue, and Kinesis)
- AWS containers (Elastic Container Registry, Elastic Container Service, Elastic Kubernetes Service, Docker Hub, and Docker Compose)
- AWS Auto Scaling groups (launch configurations, launch templates) and load balancers
- EBS (snapshots, volumes, and AMIs)
- AWS CI/CD buildspec scripting, Jenkins Groovy scripting, shell scripting, and Python scripting
- SageMaker, Textract, Forecast, Lightsail
- Android and iOS build automation
- Monitoring tools like CloudWatch, CloudWatch log groups, alarms, metric dashboards, SNS (Simple Notification Service), and SES (Simple Email Service)
- Amazon MQ
- Operating systems: Linux and Windows
- X-Ray, Cloud9, CodeStar
- Fluent Shell Scripting
- Soft Skills
- Scripting skills; good-to-have knowledge of Python, JavaScript, Java, Node.js
- Knowledge On Various DevOps Tools And Technologies
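To illustrate the "archiving data from S3 to S3 Glacier" item above, here is a hedged boto3 sketch that applies a lifecycle rule transitioning objects to the Glacier storage class after 90 days; the bucket name, prefix, and the 90-day threshold are made-up examples.

```python
# Minimal boto3 sketch: lifecycle rule moving objects under a prefix to Glacier.
# Bucket name, prefix, and transition age are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",          # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},  # placeholder prefix
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)
```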
Qualifications and Skills
Job Type: Full-time
Experience: 4 - 7 yrs
Qualification: BE/ BTech/MCA.
Location: Bengaluru, Karnataka
Technical Experience/Knowledge Needed:
- Cloud-hosted services environment.
- Proven ability to work in a Cloud-based environment.
- Ability to manage and maintain Cloud Infrastructure on AWS
- Must have strong experience in technologies such as Docker, Kubernetes, Functions, etc.
- Knowledge of orchestration tools such as Ansible
- Experience with ELK Stack
- Strong knowledge in Micro Services, Container-based architecture and the corresponding deployment tools and techniques.
- Hands-on knowledge of implementing multi-staged CI / CD with tools like Jenkins and Git.
- Sound knowledge of tools like Kibana, Kafka, Grafana, Instana, and so on.
- Proficient in Bash scripting.
- Must have in-depth knowledge of Clustering, Load Balancing, High Availability and Disaster Recovery, Auto Scaling, etc.
- AWS Certified Solutions Architect and/or Linux System Administrator
- Strong ability to work independently on complex issues
- Collaborate efficiently with internal experts to resolve customer issues quickly
- No objection to working night shifts, as the production support team operates on a 24x7 basis. Rotational shifts will be assigned weekly so that candidates get an equal opportunity to work day and night shifts. If candidates are willing to work night shifts only on a need basis, discuss with us.
- Early Joining
- Willingness to work in Delhi NCR
About the client:
Asia’s largest global sports media property in history with a global broadcast to 150+ countries. As the world’s largest martial arts organization, they are a celebration of Asia’s greatest cultural treasure, and its deep-rooted Asian values of integrity, humility, honor, respect, courage, discipline, and compassion. Has achieved some of the highest TV ratings and social media engagement metrics across Asia with its unique brand of Asian values, world-class athletes, and world-class production. Broadcast partners include Turner Sports, Star India, TV Tokyo, Fox Sports, ABS-CBN, Astro, ClaroSports, Bandsports, Startimes, Premier Sports, Thairath TV, Skynet, Mediacorp, OSN, and more. Institutional investors include Sequoia Capital, Temasek Holdings, GIC, Iconiq Capital, Greenoaks Capital, and Mission Holdings. Currently has offices in Singapore, Tokyo, Los Angeles, Shanghai, Milan, Beijing, Bangkok, Manila, Jakarta, and Bangalore.
Position: DevOps Engineer – SDE3
As part of the engineering team, you would be expected to have deep technology expertise with a passion for building highly scalable products. This is a unique opportunity where you can impact the lives of people across 150+ countries!
Responsibilities
• Collaborate in large-scale systems design discussions.
• Deploy and maintain in-house/customer systems, ensuring high availability, performance, and optimal cost.
• Automate build pipelines, ensuring the right architecture for CI/CD.
• Work with engineering leaders to ensure cloud security
• Develop standard operating procedures for various facets of Infrastructure services (CI/CD, Git Branching, SAST, Quality gates, Auto Scaling)
• Perform & automate regular backups of servers & databases. Ensure rollback and restore capabilities are real-time and zero-downtime.
• Lead the entire DevOps charter for ONE Championship. Mentor other DevOps engineers. Ensure industry standards are followed.
Requirements
• Overall 5+ years of experience as a DevOps Engineer/Site Reliability Engineer
• B.E/B.Tech in CS or equivalent streams from an institute of repute
• Experience in Azure is a must. AWS experience is a plus
• Experience in Kubernetes, Docker, and containers
• Proficiency in developing and deploying fully automated environments using Puppet/Ansible and Terraform
• Experience with monitoring tools like Nagios/Icinga, Prometheus, AlertManager, and New Relic (a minimal Prometheus exporter sketch follows this list)
• Good knowledge of source code control (git)
• Expertise in Continuous Integration and Continuous Deployment setup using Azure Pipeline or Jenkins
• Strong experience in programming languages. Python is preferred
• Experience in scripting and unit testing
• Basic knowledge of SQL & NoSQL databases
• Strong Linux fundamentals
• Experience in SonarQube, Locust & BrowserStack is a plus
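The monitoring requirement above mentions Prometheus; as a minimal, hedged Python sketch, here is a tiny exporter built with the prometheus_client library that exposes a counter for Prometheus to scrape. The metric name and port are placeholders.

```python
# Minimal prometheus_client sketch: expose a counter metric on /metrics for scraping.
# Metric name and port are placeholders; requires `pip install prometheus-client`.
import time

from prometheus_client import Counter, start_http_server

deploys_total = Counter("example_deploys_total", "Number of example deployments run")

if __name__ == "__main__":
    start_http_server(8000)   # placeholder port; Prometheus scrapes http://host:8000/metrics
    while True:
        deploys_total.inc()   # simulate work being counted
        time.sleep(10)
```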









