
Key responsibilities include, but are not limited to:
Help identify and drive speed, performance, scalability, and reliability optimizations based on experience and learnings from production incidents.
Work in an agile DevSecOps environment to create, maintain, monitor, and automate the overall solution deployment.
Understand and explain the effect of product architecture decisions on systems.
Identify issues and/or opportunities for improvements that are common across multiple services/teams.
This role will require weekend deployments.
Skills and Qualifications:
1. 3+ years of experience in a DevOps end-to-end development process, with a heavy focus on service monitoring and site reliability engineering work.
2. Advanced knowledge of programming/scripting languages (Bash, Perl, Python, Node.js).
3. Experience in Agile/Scrum enterprise-scale software development, including working with Git, JIRA, Confluence, etc.
4. Advanced experience with core microservice technologies (RESTful development).
5. Working knowledge of advanced AI/ML tools is a plus.
6. Working knowledge of one or more cloud services: Amazon AWS, Microsoft Azure.
7. Bachelor's or Master's degree in Computer Science, or equivalent related field experience.
Key Behaviours / Attitudes:
Professional curiosity and a desire to develop a deep understanding of services and technologies.
Experience building and running systems to drive high availability, performance, and operational improvements.
Excellent written and oral communication skills: the ability to ask pertinent questions and to assess, aggregate, and report the responses.
Ability to quickly grasp and analyze complex and rapidly changing systems.
Soft skills:
1. Self-motivated and self-managing.
2. Excellent communication / follow-up / time management skills.
3. Ability to fulfill role/duties independently within defined policies and procedures.
4. Ability to multi-task and balance multiple priorities while maintaining a high level of customer satisfaction.
5. Be able to work in an interrupt-driven environment.
Work with Dori AI's world-class technology to develop, implement, and support Dori's global infrastructure.
As a member of the IT organization, assist with the analysis of existing complex programs and formulate logic for new complex internal systems. Prepare flowcharts, perform coding, and test/debug programs. Develop conversion and system implementation plans. Recommend changes to development, maintenance, and system standards.
A leading contributor individually and as a team member, providing direction and mentoring to others. Work is non-routine and very complex, involving the application of advanced technical/business skills in a specialized area. BS or equivalent experience in programming on enterprise or departmental servers or systems.

About Dori AI
At Dori, we develop platforms and services that enable artificial intelligence centered application development for mobile edge devices, embedded IoT devices, on-premise servers, and cloud platforms. The company provides a turnkey solution to add intelligence in applications by simplifying model development and deployment.
We have developed an AI-as-a-service platform that provides prebuilt and custom engines to evaluate, deploy, and monitor artificial intelligence systems for consumer and enterprise applications. Application developers can rapidly develop and deploy AI-enabled applications for multiple operating systems, hardware architectures, and cloud infrastructures.
Job Title: Senior DevOps Engineer (Full-time)
Location: Mumbai, Onsite
Experience Required: 5+ Years
Job Description
We are seeking an experienced DevOps Engineer to build and manage infrastructure for a FinTech product company operating with stateful microservices. The deployment environments include hybrid cloud and on-premise setups. The ideal candidate must have strong production experience with Kubernetes, cloud platforms, and infrastructure automation.
Key Responsibilities
- Design, build, and manage infrastructure for stateful microservices (databases, queues, caching layers).
- Work on Kubernetes environments—both managed (EKS/AKS/GKE) and self-managed clusters.
- Build, enhance, and maintain custom Helm Charts for complex deployments.
- Set up and manage CI/CD pipelines using ArgoCD, FluxCD, or similar GitOps tools.
- Architect and optimize multi-tenant deployment models.
- Implement and manage high availability, load balancing, and certificate management (SSL/TLS).
- Design deployment architectures based on business requirements.
- Manage cloud infrastructure on AWS/Azure including VPC, IAM, cloud networking, and security.
- Work with Infrastructure-as-Code (IaC) tools (Terraform/CloudFormation/Pulumi), including writing reusable modules.
- Monitor, troubleshoot, and optimize performance across production environments.
- Ensure security best practices in networking, access control, and secrets management.
Mandatory Skills
- 5+ years of DevOps experience in product-based companies (not services/consulting).
- Strong hands-on experience with stateful microservices in production.
- Deep expertise in Kubernetes (managed + self-managed).
- Strong ability to write custom Helm Charts.
- Experience with multi-tenant production environments.
- Expertise in AWS or Azure (cloud networking, IAM, VPC, security groups, etc.).
- Experience setting up GitOps-based CI/CD (ArgoCD/FluxCD).
- Strong understanding of HA, load balancing, DNS, SSL/TLS certificates.
- Ability to justify architectural decisions and propose deployment designs.
- Hands-on experience with IaC tools and writing custom Terraform/Pulumi modules.
Nice to Have
- Exposure to hybrid cloud deployments
- Knowledge of on-premise orchestration & networking
- Experience with service mesh (e.g., Istio, Linkerd)
- Experience with monitoring/logging tools (Prometheus, Grafana, Loki, ELK)
- Candidates should have good platform experience on Azure with Terraform.
- The DevOps engineer needs to help developers create pipelines and K8s deployment manifests.
- Good to have: experience migrating data from AWS to Azure.
- Manage and automate infrastructure using Terraform. Jenkins is the key CI/CD tool we use, and it will be used to run these Terraform jobs.
- Provision and manage VMs on Azure Cloud.
- Good hands-on experience with cloud networking is required.
- Ability to set up databases on VMs as well as managed DBs, and to properly configure cloud-hosted microservices to communicate with the database services.
- Kubernetes, storage, Key Vault, networking (load balancing and routing), and VMs are the key areas of infrastructure expertise, which are essential.
- The requirement is to administer Kubernetes clusters end to end (application deployment, managing namespaces, load balancing, policy setup, using blue-green/canary deployment models, etc.).
- Experience in AWS is desirable.
- Python experience is optional; however, PowerShell is mandatory.
- Know-how in the use of GitHub.
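The blue-green/canary administration bullet above can be made concrete. Below is a small, hypothetical Python sketch of the promote-or-roll-back decision behind a canary rollout; the function name, step size, and error-rate threshold are illustrative assumptions, not anything specified in the posting.

```python
# Hypothetical sketch of the decision logic behind a canary rollout:
# promote the canary in fixed traffic steps while healthy, roll back on errors.

def next_canary_weight(current_weight: int, error_rate: float,
                       max_error_rate: float = 0.01, step: int = 10) -> int:
    """Return the canary's next traffic share (0-100).

    Promote in fixed steps while the observed error rate stays at or under
    the threshold; roll back to 0% the moment it is exceeded.
    """
    if error_rate > max_error_rate:
        return 0                            # roll back: all traffic to stable
    return min(100, current_weight + step)  # promote gradually, cap at 100%

# Walk a healthy canary from 10% of traffic to full traffic.
weight = 10
while weight < 100:
    weight = next_canary_weight(weight, error_rate=0.002)
```

In a real cluster, the same decision would typically be expressed through an ingress controller's traffic-split annotations or a progressive-delivery tool, with the error rate read from a monitoring system.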
DevOps Pre-Sales Consultant
Ashnik
Established in 2009, Ashnik is a leading open source solutions and consulting company in South East Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecting, and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open source technologies. Our offerings form a critical part of digital transformation, big data platforms, cloud and web acceleration, and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent, and HashiCorp as their key partners in the region. Our team members bring decades of experience in delivering confidence to enterprises in adopting open source software and are known for their thought leadership.
For more details, kindly visit www.ashnik.com
THE POSITION
Ashnik is looking for an experienced technology consultant to work in the DevOps team within the sales function. The primary areas of focus will be microservices, CI/CD pipelines, Docker, Kubernetes, containerization, and container security. The person in this role will be responsible for leading technical discussions with customers and partners and helping them arrive at the final solution.
RESPONSIBILITIES
- Own pre-sales or sales engineering responsibility to design, present, and deliver technical solutions
- Be the point of contact for all technical queries for Sales team and partners
- Build full-fledged solution proposals with details of implementation and scope of work for customers
- Contribute to technical writing through blogs, whitepapers, and solution demos
- Make presentations at and participate in events.
- Conduct customer workshops to educate them about features in Docker Enterprise Edition
- Coordinate technical escalations with principal vendor
- Get an understanding of various other components and considerations involved in the areas mentioned above
- Be able to articulate the value of products from technology vendors that Ashnik partners with, e.g., Docker, Sysdig, HashiCorp, Ansible, Jenkins, etc.
- Work with partners and the sales team on responding to RFPs and tenders
QUALIFICATION AND EXPERIENCE
- Engineering or equivalent degree
- Must have at least 8 years of experience in the IT industry designing and delivering solutions
- Must have at least 3 years of hands-on experience with Linux and operating systems
- Must have at least 3 years of experience working in an environment with highly virtualized or cloud-based infrastructure
- Must have at least 2 years of hands-on experience with CI/CD pipelines, microservices, containerization, and Kubernetes
- Though coding is not needed in this role, the person should have the ability to understand and debug code if required
- Should be able to explain complex solutions in simpler ways
- Should be ready to travel 20-40% in a month
- Should be able to engage with customers to understand the fundamental/driving requirement
DESIRED SKILLS
- Past experience of working with Docker and/or Kubernetes at Scale
- Past experience of working in a DevOps team
- Prior experience in Pre-sales role
Salary: up to 20L
Location: Mumbai
Job description
The role requires you to design development pipelines from the ground up, create Dockerfiles, and design and operate highly available systems in AWS Cloud environments. It also involves configuration management, web services architectures, DevOps implementation, database management, backups, and monitoring.
Key responsibility area
- Ensure reliable operation of CI/CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics and alerting management.
- Creation of Bash/Python scripts for automation
- Performing root cause analysis for production errors.
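The automation-scripting, alerting, and root-cause-analysis duties above often begin with something as small as a log scanner. The following is a minimal, hypothetical Python sketch; the log-line format, service-name position, and alert threshold are all assumptions made for illustration.

```python
# Illustrative automation script: scan application log lines and flag
# services whose ERROR count reaches an alert threshold.

from collections import Counter

def services_to_alert(log_lines, threshold=3):
    """Count ERROR lines per service; return services at/over the threshold.

    Assumes lines look like: '<timestamp> <LEVEL> <service> <message>'.
    """
    errors = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[1] == "ERROR":
            errors[parts[2]] += 1
    return sorted(s for s, n in errors.items() if n >= threshold)

logs = [
    "2024-05-01T10:00:00 ERROR payments timeout",
    "2024-05-01T10:00:01 INFO  checkout ok",
    "2024-05-01T10:00:02 ERROR payments refused",
    "2024-05-01T10:00:03 ERROR payments timeout",
]
print(services_to_alert(logs))  # ['payments']
```

In practice, a script like this would read from a log aggregator rather than a list, and would push its findings to an alerting channel instead of printing them.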
Requirements
- 2 years' experience as a Team Lead.
- Good command of Kubernetes.
- Proficient in the Linux command line and troubleshooting.
- Proficient in AWS services: deploying, monitoring, and troubleshooting applications in AWS.
- Hands-on experience with CI tooling, preferably Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of infrastructure management tools (Infrastructure as Code) such as Terraform, AWS CloudFormation, etc.
- Proficient in deploying applications behind load balancers and proxy servers such as NGINX and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana), Nagios, Graylog, and Splunk; Prometheus and Grafana are a plus.
Must Have:
Linux, CI/CD (Jenkins), AWS, scripting (Bash/shell, Python, Go), NGINX, Docker.
Good to have
Configuration management (Ansible or similar tool), logging tools (ELK or similar), monitoring tools (Nagios or similar), IaC (Terraform, CloudFormation).
- 3+ years' experience leading a team of DevOps engineers
- 8+ years' experience managing DevOps for large engineering teams developing cloud-native software
- Strong in networking concepts
- In-depth knowledge of AWS and cloud architectures/services.
- Experience within the container and container orchestration space (Docker, Kubernetes)
- Passion for CI/CD pipeline using tools such as Jenkins etc.
- Familiarity with config management tools like Ansible, Terraform, etc.
- Proven record of measuring and improving DevOps metrics
- Familiarity with observability tools and experience setting them up
- Passion for building tools and productizing services that empower development teams.
- Excellent knowledge of Linux command-line tools and ability to write bash scripts.
- Strong in Unix/Linux administration and management.
KEY ROLES/RESPONSIBILITIES:
- Own and manage the entire cloud infrastructure
- Create the entire CI/CD pipeline to build and release
- Explore new technologies and tools and recommend those that best fit the team and organization
- Own and manage the site reliability
- Strong decision-making skills and metric-driven approach
- Mentor and coach other team members
One of our US-based clients is looking for a DevOps professional who can handle technical work as well as training for them in the US.
If hired, you will be sent to the US to work from there. The training-to-technical-work ratio will be 70% to 30%, respectively.
The company will sponsor the US visa.
If you are an experienced DevOps professional who has also delivered professional training, feel free to connect with us for more details.
Implement integrations requested by customers
Deploy updates and fixes
Provide Level 2 technical support
Build tools to reduce occurrences of errors and improve customer experience
Develop software to integrate with internal back-end systems
Perform root cause analysis for production errors
Investigate and resolve technical issues
Develop scripts to automate visualization
Design procedures for system troubleshooting and maintenance
Hands-on experience with multiple clouds (AWS/Azure/GCP)
Good experience with Docker implementation at scale.
Kubernetes implementation and orchestration.
• At least 4 years of hands-on experience with cloud infrastructure on GCP
• Hands-on-Experience on Kubernetes is a mandate
• Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
• Knowledge of and hands-on experience with DevOps tools (e.g. Jenkins, Groovy, and Gradle)
• Knowledge of and hands-on experience with various platforms (e.g. GitLab, CircleCI, and Spinnaker)
• Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team
Preferable Skills:
• Familiarity with standard IT security practices such as encryption, credentials, and key management.
• Proven experience in various coding languages (Java, Python, …) to support DevOps operations and cloud transformation
• Familiarity and knowledge of the web standards (e.g. REST APIs, web security mechanisms)
• Hands on experience with GCP
• Experience in performance tuning, services outage management and troubleshooting.
Attributes:
• Good verbal and written communication skills
• Exceptional leadership, time management, and organizational skills. Ability to operate independently and make decisions with little direct supervision
Intuitive is the fastest-growing top-tier cloud solutions and services company supporting global enterprise customers across the Americas, Europe, and the Middle East. This is an excellent opportunity to join ITP's global, world-class technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career working with some of the largest customers.
Job Description:
Must-Have’s:
We are looking for a Sr. DevOps and SysOps Engineer responsible for managing AWS and Azure cloud computing. Your primary focus will be to help multiple projects with various cloud service implementations, create and manage CI/CD pipelines for deployment, and explore new cloud services and help projects implement them.
Technical Requirements & Responsibilities
- Have 4+ years’ experience as a DevOps and SysOps Engineer.
- Apply cloud computing skills to deploy upgrades and fixes on AWS and Azure (GCP is optional / Good to have).
- Design, develop, and implement software integrations based on user feedback.
- Troubleshoot production issues and coordinate with the development team to streamline code deployment.
- Implement automation tools and frameworks (CI/CD pipelines).
- Analyze code and communicate detailed reviews to development teams to ensure a marked improvement in applications and the timely completion of projects.
- Collaborate with team members to improve the company’s engineering tools, systems and procedures, and data security.
- Optimize the company’s computing architecture.
- Conduct systems tests for security, performance, and availability.
- Develop and maintain design and troubleshooting documentation.
- Expert in code deployment tools (Puppet, Ansible, and Chef).
- Can maintain Java / PHP / Ruby on Rails / .NET web applications.
- Experience in network, server, and application-status monitoring.
- Possess a strong command of software-automation production systems (Jenkins and Selenium).
- Expertise in software development methodologies.
- You have working knowledge of known DevOps tools like Git and GitHub.
- Possess a problem-solving attitude.
- Can work independently and as part of a team.
Soft Skills Requirements
- Strong communication skills
- Agility and quick learner
- Attention to detail
- Organizational skills
- Understanding of the Software development life cycle
- Good Analytical and problem-solving skills
- Self-motivated with the ability to prioritize, meet deadlines, and manage changing priorities
- Should have a high level of energy working as an individual contributor and as part of a team.
- Good command over verbal and written English communication
DevOps Engineer Skills
- Building a scalable and highly available infrastructure for data science
- Knows data science project workflows
- Hands-on with deployment patterns for online/offline predictions (server/serverless)
- Experience with either Terraform or Kubernetes
- Experience with ML deployment frameworks like Kubeflow, MLflow, SageMaker
- Working knowledge of Jenkins or a similar tool
Responsibilities
- Owns all the ML cloud infrastructure (AWS)
- Helps build out an entire CI/CD ecosystem with auto-scaling
- Works with a testing engineer to design testing methodologies for ML APIs
- Ability to research and implement new technologies
- Helps with cost optimization of infrastructure
- Knowledge sharing
Nice to Have
- Develop APIs for machine learning
- Can write Python servers for ML systems with API frameworks
- Understanding of task queue frameworks like Celery
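As a rough illustration of the task-queue pattern that frameworks like Celery implement (real deployments use a broker such as Redis or RabbitMQ and separate worker processes), here is a minimal in-process Python sketch; the queue, worker, and task names are illustrative, and the "prediction" is a stand-in dummy.

```python
# Minimal in-process sketch of the producer/worker task-queue pattern.
# Celery generalizes this across machines via a message broker.

from queue import Queue

task_queue: Queue = Queue()
results: list = []

def predict(features):
    """Stand-in for an ML inference task (a dummy linear score)."""
    return sum(features)

def enqueue(task, args):
    """Producer side: put a (task, args) pair on the queue."""
    task_queue.put((task, args))

def worker_drain():
    """Worker side: process every queued task once, collecting results."""
    while not task_queue.empty():
        task, args = task_queue.get()
        results.append(task(args))

enqueue(predict, [0.5, 1.5])
enqueue(predict, [2.0, 3.0])
worker_drain()
print(results)  # [2.0, 5.0]
```

The design point the pattern buys you is decoupling: request handlers stay fast because slow inference work is deferred to workers that can be scaled independently.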












