
- DevOps Pre-Sales Consultant
Ashnik
Established in 2009, Ashnik is a leading open source solutions and consulting company in Southeast Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecture, and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open source technologies. Our offerings form a critical part of digital transformation, big data platforms, cloud and web acceleration, and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent, and HashiCorp as their key partners in the region. Our team members bring decades of experience in giving enterprises the confidence to adopt open source software and are known for their thought leadership.
For more details, kindly visit www.ashnik.com
THE POSITION
Ashnik is looking for an experienced technology consultant to work in the DevOps team within the sales function. The primary areas of focus will be microservices, CI/CD pipelines, Docker, Kubernetes, containerization, and container security. A person in this role will be responsible for leading technical discussions with customers and partners and helping them arrive at the final solution.
RESPONSIBILITIES
- Own pre-sales/sales engineering responsibility to design, present, and deliver technical solutions
- Be the point of contact for all technical queries from the sales team and partners
- Build full-fledged solution proposals with details of implementation and scope of work for customers
- Contribute to technical writing through blogs, whitepapers, and solution demos
- Present at and participate in industry events
- Conduct customer workshops to educate them on features in Docker Enterprise Edition
- Coordinate technical escalations with the principal vendor
- Develop an understanding of the various other components and considerations involved in the areas mentioned above
- Articulate the value of products from Ashnik's technology partners, e.g. Docker, Sysdig, HashiCorp, Ansible, Jenkins, etc.
- Work with partners and the sales team to respond to RFPs and tenders
QUALIFICATION AND EXPERIENCE
- Engineering or equivalent degree
- Must have at least 8 years of experience in the IT industry designing and delivering solutions
- Must have at least 3 years of hands-on experience with Linux and operating systems
- Must have at least 3 years of experience working in environments with highly virtualized or cloud-based infrastructure
- Must have at least 2 years of hands-on experience with CI/CD pipelines, microservices, containerization, and Kubernetes
- Though coding is not required in this role, should be able to understand and debug code if needed
- Should be able to explain complex solutions in simple terms
- Should be ready to travel 20-40% in a month
- Should be able to engage with customers to understand their fundamental, driving requirements
DESIRED SKILLS
- Past experience working with Docker and/or Kubernetes at scale
- Past experience working in a DevOps team
- Prior experience in a pre-sales role
Salary: up to 20L
Location: Mumbai

Similar jobs
Lead DevSecOps Engineer
Location: Pune, India (In-office) | Experience: 3–5 years | Type: Full-time
Apply here → https://lnk.ink/CLqe2
About FlytBase:
FlytBase is a Physical AI platform powering autonomous drones and robots across industrial sites. Our software enables 24/7 operations in critical infrastructure like solar farms, ports, oil refineries, and more.
We're building intelligent autonomy — not just automation — and security is core to that vision.
What You’ll Own
You'll lead and build the backbone of our AI-native drone orchestration platform, used by global industrial giants for autonomous operations.
Expect to:
- Design and manage multi-region, multi-cloud infrastructure (AWS, Kubernetes, Terraform, Docker)
- Own infrastructure provisioning through GitOps, Ansible, Helm, and IaC
- Set up observability stacks (Prometheus, Grafana) and write custom alerting rules
- Build for Zero Trust security — logs, secrets, audits, access policies
- Lead incident response, postmortems, and playbooks to reduce MTTR
- Automate and secure CI/CD pipelines with SAST, DAST, image hardening
- Script your way out of toil using Python, Bash, or LLM-based agents
- Work alongside dev, platform, and product teams to ship secure, scalable systems
What We’re Looking For:
You’ve probably done a lot of this already:
- 3–5+ years in DevOps / DevSecOps for high-availability SaaS or product infra
- Hands-on with Kubernetes, Terraform, Docker, and cloud-native tooling
- Strong in Linux internals, OS hardening, and network security
- Built and owned CI/CD pipelines, IaC, and automated releases
- Written scripts (Python/Bash) that saved your team hours
- Familiar with SOC 2, ISO 27001, threat detection, and compliance work
Bonus if you’ve:
- Played with LLMs or AI agents to streamline ops
- Built bots that monitor, patch, or auto-deploy
What It Means to Be a Flyter
- AI-native instincts: You don’t just use AI — you think in it. Your terminal window has a co-pilot.
- Ownership without oversight: You own outcomes, not tasks. No one micromanages you here.
- Joy in complexity: Security + infra + scale = your happy place.
- Radical candor: You give and receive sharp feedback early — and grow faster because of it.
- Loops over lines: we prioritize continuous feedback, iteration, and learning over one-way execution or rigid, linear planning.
- H3: Happy. Healthy. High-Performing. We believe long-term performance stems from an environment where you feel emotionally fulfilled, physically well, and deeply motivated.
- Systems > Heroics: We value well-designed, repeatable systems over last-minute firefighting or one-off effort.
Perks:
▪ Unlimited leave & flexible hours
▪ Top-tier health coverage
▪ Budget for AI tools, courses
▪ International deployments
▪ ESOPs and high-agency team culture
Apply here → https://lnk.ink/CLqe2

Senior Software Engineer I - DevOps Engineer
Exceptional software engineering is challenging. Amplifying it to ensure that multiple teams can concurrently create and manage a vast, intricate product escalates the complexity. As a Senior Software Engineer within the Release Engineering team at Sumo Logic, your task will be to develop and sustain automated tooling for the release processes of all our services. You will contribute significantly to establishing automated delivery pipelines, empowering autonomous teams to create independently deployable services. Your role is integral to our overarching strategy of enhancing software delivery and progressing Sumo Logic’s internal Platform-as-a-Service.
What you will do:
• Own the Delivery pipeline and release automation framework for all Sumo services
• Educate and collaborate with teams during both design and development phases to ensure best practices.
• Mentor a team of Engineers (Junior to Senior) and improve software development processes.
• Evaluate, test, and provide technology and design recommendations to executives.
• Write detailed design documents and documentation on system design and implementation.
• Ensure the engineering teams are set up to deliver quality software quickly and reliably.
• Enhance and maintain infrastructure and tooling for development, testing and debugging
What you already have
• B.S. or M.S. Computer Sciences or related discipline
• Ability to influence: Understand people’s values and motivations and influence them towards making good architectural choices.
• Collaborative working style: You can work with other engineers to come up with good decisions.
• Bias towards action: You need to make things happen. It is essential you don’t become an inhibitor of progress, but an enabler.
• Flexibility: You are willing to learn and change. Admit past approaches might not be the right ones now.
Technical skills:
- 4+ years of experience in the design, development, and use of release automation tooling, DevOps, CI/CD, etc.
- 2+ years of experience in software development in Java/Scala/Golang or similar
- 3+ years of experience with software delivery technologies like Jenkins, including experience writing and developing CI/CD pipelines, and knowledge of build tools like make/gradle/npm etc.
- Experience with cloud technologies, such as AWS/Azure/GCP
- Experience with Infrastructure-as-Code and tools such as Terraform
- Experience with scripting languages such as Groovy, Python, Bash etc.
- Knowledge of monitoring tools such as Prometheus/Grafana or similar tools
- Understanding of GitOps and ArgoCD concepts/workflows
- Understanding of security and compliance aspects of DevSecOps
About Us
Sumo Logic, Inc. empowers the people who power modern, digital business. Sumo Logic enables customers to deliver reliable and secure cloud-native applications through its Sumo Logic SaaS Analytics Log Platform, which helps practitioners and developers ensure application reliability, secure and protect against modern security threats, and gain insights into their cloud infrastructures. Customers worldwide rely on Sumo Logic to get powerful real-time analytics and insights across observability and security solutions for their cloud-native applications. For more information, visit www.sumologic.com.
Sumo Logic Privacy Policy: Employees will be responsible for complying with applicable federal privacy laws and regulations, as well as organizational policies related to data protection.

Role : Senior Engineer Infrastructure
Key Responsibilities:
● Infrastructure Development and Management: Design, implement, and manage robust and scalable infrastructure solutions, ensuring optimal performance, security, and availability. Lead transition and migration projects, moving legacy systems to cloud-based solutions.
● Develop and maintain applications and services using Golang.
● Automation and Optimization: Implement automation tools and frameworks to optimize operational processes. Monitor system performance, optimizing and modifying systems as necessary.
● Security and Compliance: Ensure infrastructure security by implementing industry best practices and compliance requirements. Respond to and mitigate security incidents and vulnerabilities.
Qualifications:
● Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
● Good understanding of prominent backend languages like Golang, Python, Node.js, or others.
● In-depth knowledge of network architecture, system security, and infrastructure scalability.
● Proficiency with development tools, server management, and database systems.
● Strong experience with cloud services (AWS), deployment, scaling, and management.
● Knowledge of Azure is a plus.
● Familiarity with containers and orchestration services, such as Docker, Kubernetes, etc.
● Strong problem-solving skills and analytical thinking.
● Excellent verbal and written communication skills.
● Ability to thrive in a collaborative team environment.
● Genuine passion for backend development and keen interest in scalable systems.
● Manage AWS services and day-to-day cloud operations.
● Work closely with the development and QA teams to make the deployment process smooth, and devise new tools and technologies to automate as many components as possible.
● Strengthen the infrastructure in terms of reliability (configuring HA, etc.), security (cloud network management, VPCs, etc.), and scalability (configuring clusters, load balancers, etc.).
● Expert-level understanding of DB replication, sharding (MySQL DB systems), HA clusters, failovers, and recovery mechanisms.
● Build and maintain CI/CD (continuous integration/deployment) workflows.
● Expert knowledge of AWS EC2, S3, RDS, CloudFront, and other AWS services and products.
● Install and manage software systems to support the development team, e.g. DB installation and administration, web servers, caching, and other such systems.
Requirements:
● B. Tech or Bachelor's in a related field.
● 2-5 years of hands-on experience with AWS cloud services such as EC2, ECS, CloudWatch, SQS, S3, CloudFront, and Route 53.
● Experience setting up CI/CD pipelines and successfully running large-scale systems.
● Experience with source control systems (SVN, Git, etc.) and deployment/build automation tools like Jenkins, Bamboo, Ansible, etc.
● Good experience and understanding of Linux/Unix-based systems, with hands-on experience in networking, security, and administration.
● At least 1-2 years of experience with shell/Python/Perl scripting; experience with Bash scripting is an added advantage.
● Experience with automation tasks such as automated backups, configuring failovers, and automating deployment processes is a must-have.
● Good to have: knowledge of setting up the ELK stack; Infrastructure-as-Code services like Terraform; working with and automating processes using the AWS SDK/CLI tools with scripts.

Responsibilities
- Build and maintain resilient, scalable production infrastructure
- Improve monitoring systems
- Create and support development automation processes (CI/CD)
- Participate in infrastructure development
- Detect problems in the architecture and propose solutions
- Create tasks for improvements to system scalability, performance, and monitoring
- Analyze product requirements from a DevOps perspective
- Manage a team of DevOps engineers and oversee task delivery
- Analyze and fix incidents
Technology stack
Linux, Bash, Salt/Ansible, LXC, libvirt, IPsec, VXLAN, Open vSwitch, OpenVPN, OSPF, BIRD, Cisco NX-OS, Multicast, PIM, LVM, software RAID, LUKS, PostgreSQL, nginx, haproxy, Prometheus, Grafana, Zabbix, GitLab, Capistrano
Skills and Experience
- Understanding of distributed systems principles
- Understanding of principles for building resilient network infrastructure
- Experience administering Ubuntu Linux (other Debian-like distributions a plus)
- Strong knowledge of Bash
- Experience working with LXC containers
- Understanding of and experience with the infrastructure-as-code approach
- Experience developing idempotent Ansible roles
- Experience with relational databases (PostgreSQL), including the ability to write simple SQL queries
- Experience with Git
- Experience with monitoring and metrics collection systems (Prometheus, Grafana, Zabbix)
- Understanding of dynamic routing (OSPF)
Preferred experience
- Experience working with high-load, zero-downtime environments
- Experience coding in Python
- Experience working with IPsec, VXLAN, and Open vSwitch
- Knowledge of and experience with Cisco network equipment
- Experience working with Cisco NX-OS
- Knowledge of multicast protocol principles (IGMP, PIM)
- Experience configuring multicast on Cisco equipment
- Experience working with Solarflare Onload
- Experience administering Atlassian products
We are a growth-oriented, dynamic, multi-national startup, so those that are looking for that startup excitement, dynamics, and buzz are here at the right place. Read on -
FrontM (www.frontm.com) is an edge AI company with a platform that is redefining how businesses and people in remote and isolated environments (maritime, aviation, mining, and more) collaborate and drive smart decisions.
The successful candidate will lead the back-end architecture, working alongside the VP of Delivery, the CTO, and the CEO.
The problem you will be working on:
- Take ownership of the AWS cloud infrastructure
- Oversee tech ops with hands-on CI/CD and administration
- Develop Node.js, Java, and backend system procedures for stability, scale, and performance
- Understand the FrontM platform roadmap and contribute to planning strategic and tactical capabilities
- Integrate APIs and abstractions for complex requirements
Who you are:
- You are an experienced cloud architect and back-end developer
- You have experience creating backends with AWS serverless Lambdas, EC2, and MongoDB
- You have extensive CI/CD and DevOps experience
- You can take ownership of continuous server uptime, maintenance, stability, and performance
- You can lead a team of backend developers and architects
- You are a die-hard problem solver and a never-say-no person
- You have 10+ years of experience
- You have a very sound command of the English language
- You have the ability to initiate and lead teams, working with senior management
Additional benefits
- Generous pay package, flexible for the right candidate
- Career development and growth planning
- Entrepreneurial environment that nurtures and promotes innovation
- Multi-national team with an enjoyable culture
We'd love to talk to you if you find this interesting and would like to join us on our exciting journey.
We are looking for a Sr. DevOps and SysOps Engineer to be responsible for managing AWS and Azure cloud computing. Your primary focus will be to help multiple projects with various cloud service implementations, create and manage CI/CD pipelines for deployment, and explore new cloud services and help projects implement them.
Technical Requirements & Responsibilities
- Have 4+ years’ experience as a DevOps and SysOps Engineer.
- Apply cloud computing skills to deploy upgrades and fixes on AWS and Azure (GCP is optional / Good to have).
- Design, develop, and implement software integrations based on user feedback.
- Troubleshoot production issues and coordinate with the development team to streamline code deployment.
- Implement automation tools and frameworks (CI/CD pipelines).
- Analyze code and communicate detailed reviews to development teams to ensure a marked improvement in applications and the timely completion of projects.
- Collaborate with team members to improve the company’s engineering tools, systems and procedures, and data security.
- Optimize the company’s computing architecture.
- Conduct systems tests for security, performance, and availability.
- Develop and maintain design and troubleshooting documentation.
- Expert in code deployment tools (Puppet, Ansible, and Chef).
- Can maintain Java / PHP / Ruby on Rails / DotNet web applications.
- Experience in network, server, and application-status monitoring.
- Possess a strong command of software-automation production systems (Jenkins and Selenium).
- Expertise in software development methodologies.
- You have working knowledge of known DevOps tools like Git and GitHub.
- Possess a problem-solving attitude.
- Can work independently and as part of a team.
Soft Skills Requirements
- Strong communication skills
- Agile and a quick learner
- Attention to detail
- Organizational skills
- Understanding of the Software development life cycle
- Good Analytical and problem-solving skills
- Self-motivated with the ability to prioritize, meet deadlines, and manage changing priorities
- Should have a high level of energy working as an individual contributor and as part of a team.
- Good command over verbal and written English communication
Job Location: Jaipur
Experience Required: Minimum 3 years
About the role:
As a DevOps Engineer at Punchh, you will work with our developers, SRE, and DevOps teams implementing our next-generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies, and solve engineering problems, all while delivering business objectives. The DevOps culture here is one of immense trust and responsibility. You will be given the opportunity to make an impact, as there are no silos here.
Responsibilities:
- Deliver SLAs and business objectives through whole-lifecycle design of services, from inception to implementation.
- Ensure availability, performance, security, and scalability of AWS production systems.
- Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment.
- Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health.
- Write and maintain software that runs the infrastructure powering the Loyalty and Data platform for some of the world’s largest brands.
- Participate in a 24x7 on-call shift rotation for Level 2 and higher escalations.
- Respond to incidents and write blameless RCAs/postmortems.
- Implement and practice proper security controls and processes.
- Provide recommendations for architecture and process improvements.
- Define and deploy systems for metrics, logging, and monitoring on the platform.
Must have:
- Minimum 3 years of experience in DevOps.
- BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
- Strong interpersonal skills.
- Must have experience with CI/CD tooling such as Jenkins, CircleCI, or TravisCI.
- Must have experience with Docker and Kubernetes, Amazon ECS, or Mesos.
- Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy.
- Proficient in shell scripting, and most importantly, knows when to stop scripting and start developing.
- Experience creating highly automated infrastructures with configuration management tools such as Terraform, CloudFormation, or Ansible.
- In-depth knowledge of the Linux operating system and its administration.
- Production experience with a major cloud provider such as AWS.
- Knowledge of web server technologies such as Nginx or Apache.
- Knowledge of Redis, Memcached, or another in-memory data store.
- Experience with load balancing technologies such as Amazon ALB/ELB, HAProxy, or F5.
- Comfortable with large-scale, highly available distributed systems.
Good to have:
- Understanding of web standards (REST, SOAP APIs, OWASP, HTTP, TLS)
- Production experience with HashiCorp products such as Vault or Consul
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems
- Experience in a PCI environment
- Experience with Big Data distributions from Cloudera, MapR, or Hortonworks
- Experience maintaining and scaling database applications
- Knowledge of fundamental systems engineering principles such as the CAP theorem, concurrency control, etc.
- Understanding of network fundamentals: OSI, TCP/IP, topologies, etc.
- Understanding of infrastructure auditing, helping the organization control infrastructure costs
- Experience with Kafka, RabbitMQ, or any messaging bus


