
- DevOps Pre Sales Consultant
Ashnik
Established in 2009, Ashnik is a leading open source solutions and consulting company in South East Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecting, and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open source technologies. Our offerings form a critical part of digital transformation, big data platforms, cloud and web acceleration, and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent, and HashiCorp as their key partners in the region. Our team members bring decades of experience in delivering confidence to enterprises adopting open source software and are known for their thought leadership.
For more details, kindly visit www.ashnik.com
THE POSITION
Ashnik is looking for an experienced technology consultant to work in the DevOps team within the sales function. The primary areas of focus will be microservices, CI/CD pipelines, Docker, Kubernetes, containerization, and container security. A person in this role will be responsible for leading technical discussions with customers and partners and helping them arrive at the final solution.
RESPONSIBILITIES
- Own pre-sales and sales engineering responsibility to design, present, and deliver technical solutions
- Be the point of contact for all technical queries for Sales team and partners
- Build full-fledged solution proposals with details of implementation and scope of work for customers
- Contribute to technical writing through blogs, whitepapers, and solution demos
- Present at and participate in industry events
- Conduct customer workshops to educate them about features in Docker Enterprise Edition
- Coordinate technical escalations with principal vendor
- Develop an understanding of the other components and considerations involved in the areas mentioned above
- Be able to articulate the value of products from the technology vendors Ashnik partners with, e.g. Docker, Sysdig, HashiCorp, Ansible, Jenkins, etc.
- Work with partners and the sales team to respond to RFPs and tenders
QUALIFICATION AND EXPERIENCE
- Engineering or equivalent degree
- Must have at least 8 years of experience in the IT industry designing and delivering solutions
- Must have at least 3 years of hands-on experience with Linux and operating systems
- Must have at least 3 years of experience working in an environment with highly virtualized or cloud-based infrastructure
- Must have at least 2 years of hands-on experience with CI/CD pipelines, microservices, containerization, and Kubernetes
- Though coding is not needed in this role, the person should have the ability to understand and debug code if required
- Should be able to explain complex solutions in simple terms
- Should be ready to travel 20-40% in a month
- Should be able to engage with customers to understand the fundamental/driving requirement
DESIRED SKILLS
- Past experience of working with Docker and/or Kubernetes at scale
- Past experience of working in a DevOps team
- Prior experience in Pre-sales role
Salary: up to 20L
Location: Mumbai

Job Description
Position Title: Senior System Engineer
Position Type: Full Time
Department: RSG
Reports to: First Level Manager, Indian Development Centre
Company Background:
Cglia is a software development company building highly available, highly secure, cloud-based enterprise software products that help speed the research process, resulting in new drugs, new devices, and new treatments that improve the health and wellbeing of the world's population.
At Cglia, our work shows our dedication and passion for innovative, quality software products that are intuitive, easy to use, and exceed every aspect of customer expectations.
Cglia is the place for world-class professionals who want to be innovative and creative, learn continuously, and build a solid foundation for products that are special and delight the customer.
Job Description:
The Senior System Engineer will have expertise in managing both Linux and Windows environments, along with hands-on experience in containerization technologies such as Kubernetes and Docker. Proficiency in Ansible for automation and configuration management is essential. This role is critical in ensuring the seamless operation, deployment, and maintenance of our IT infrastructure.
The ideal candidate will oversee and participate in the installation, monitoring, maintenance, support, optimization, and documentation of all network hardware and software. This includes managing multiple projects, planning network technology roadmaps, and configuring/optimizing network services, both internal and those integrated with Internet-based services.
Job Responsibilities:
· Manage, maintain, and monitor Linux and Windows servers to ensure high availability and performance.
· Perform system upgrades, patches, and performance tuning for both operating systems and database servers.
· Deploy, manage, and troubleshoot containerized applications using Kubernetes and Docker.
· Design and implement Kubernetes clusters to ensure scalability, security, and reliability.
· Develop and maintain Ansible playbooks for automation of repetitive tasks, configuration management, and system provisioning.
· Implement security best practices for both Linux and Windows environments.
· Set up and manage backup and disaster recovery solutions for critical systems and data.
· Work closely with development teams to support CI/CD pipelines and troubleshoot application issues.
· Manage VMware in a high-availability environment with disaster recovery
· Manage RAID storage and firewall configurations
· Maintain and manage SQL database server support
· Write and maintain scripts in Unix shell, Bash, or PowerShell
· Assist Quality Assurance with testing program changes, new releases, or user documentation, and support new product release activities, including testing customer flows
· Must have the ability to work a flexible schedule and is required to participate in the on-call rotation, which includes different shift timings, weekends, and holidays
· Work across multiple time zones with remote team members
· Perform other duties as deemed necessary to provide quality service to the clients
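A key idea behind the Ansible automation responsibility above is idempotency: a task changes the system only when it is not already in the desired state. The following is a minimal stdlib Python sketch of that idea, loosely mirroring what Ansible's `lineinfile` module does; the file name and setting are invented for illustration, not taken from any real playbook.

```python
import os
import tempfile

def ensure_line(path, line):
    """Idempotently ensure `line` exists in the file at `path`.

    Returns True if the file was changed, False if it was already
    in the desired state (the "changed"/"ok" distinction Ansible reports).
    """
    with open(path, "a+") as f:
        f.seek(0)  # "a+" positions at end; rewind to scan existing lines
        if line in (existing.rstrip("\n") for existing in f):
            return False  # already present: no-op
        f.write(line + "\n")  # writes always append in "a+" mode
        return True

# Hypothetical config file in a throwaway directory for the demo
path = os.path.join(tempfile.mkdtemp(), "sshd_config")
print(ensure_line(path, "PermitRootLogin no"))  # True: first run changes the file
print(ensure_line(path, "PermitRootLogin no"))  # False: second run is a no-op
```

Running the same task twice and getting "no change" the second time is exactly the property that makes configuration management safe to re-run on every provisioning cycle.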
Experience and Skills Required:
· 4+ years of experience in Linux and Windows administration
· 3 years of experience with VMware in a high-availability environment with disaster recovery
· Good experience with RAID and firewalls
· 2+ years of experience in SQL database server support
· Ability to quickly acquire an in-depth knowledge of multiple custom applications
· Experience in setting up IT policies based on best practices and monitoring them
· Experience in shell scripting and automating tasks
· Experience in hardware and software monitoring tools
· Experience in administration and best practices for Apache and Tomcat
· Experience in handling Cisco router and firewall configurations and management
· Working knowledge of SQL Server, Oracle, and other RDBMS databases
· Must be proactive and possess strong interpersonal, communication and organization skills
· Must possess excellent written and verbal presentation skills
· Must be self-motivated
· Certification in Linux/Windows administration is preferable.
Academics:
· Bachelor's / Master's degree (or equivalent) in computer science or related field or equivalent experience.
Overview
Adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are seeking a skilled DevOps Service Desk Specialist to join our global DevOps team. As a DevOps Service Desk Specialist, you will be responsible for managing service desk operations, system monitoring, troubleshooting, and supporting automation workflows to ensure operational stability and excellence for enterprise IT projects. You will be providing 24/7 support for critical application environments for industry leaders in the automotive industry.
Responsibilities:
Incident Management: Monitor and respond to tickets raised by the DevOps team or end-users.
Support users with prepared troubleshooting guides. Maintain detailed incident logs, track SLAs, and prepare root cause analysis reports.
Change & Problem Management: Support scheduled changes, releases, and maintenance activities. Assist in identifying and tracking recurring issues.
Documentation & Communication: Maintain process documentation, runbooks, and knowledge base articles. Provide regular updates to stakeholders on incidents and resolutions.
On-Call Duty: Staff a 24/7 on-call service; handle incidents outside normal office hours; escalate to and coordinate with the onsite team, customers, and third parties where necessary.
Tool & Platform Support: Manage and troubleshoot CI/CD tools (e.g., Jenkins, GitLab), container platforms (e.g., Docker, Kubernetes), and cloud services (e.g., AWS, Azure).
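As a rough illustration of the SLA-tracking part of incident management above, here is a minimal Python sketch. The priority labels and target durations are invented for the example; in practice the targets would come from the ticketing tool (e.g. ServiceNow) rather than being hard-coded.

```python
from datetime import datetime, timedelta

# Hypothetical SLA resolution targets per incident priority
SLA_TARGETS = {
    "P1": timedelta(hours=1),
    "P2": timedelta(hours=4),
    "P3": timedelta(hours=24),
}

def sla_breached(priority, opened, resolved):
    """Return True if the incident took longer to resolve than its SLA target."""
    return (resolved - opened) > SLA_TARGETS[priority]

opened = datetime(2024, 1, 10, 9, 0)
resolved = datetime(2024, 1, 10, 14, 30)
print(sla_breached("P2", opened, resolved))  # True: 5.5h against a 4h target
```

A service desk report would typically aggregate this per team or per month to show SLA attainment percentages.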
Requirements:
Technical experience in Java Enterprise environment as developer or DevOps specialist.
Familiarity with DevOps principles and ticketing tools like ServiceNow.
Strong analytical, problem-solving, communication, and organizational abilities; easy to work with.
Ability to work in a fast-paced 24/7 environment.
Optional: Experience in our relevant business domain (automotive industry). Familiarity with IT process frameworks such as Scrum and ITIL.
Skills & Requirements
Java Enterprise, DevOps, Service Desk, Incident Management, Change Management, Problem Management, System Monitoring, Troubleshooting, Automation Workflows, CI/CD, Jenkins, GitLab, Docker, Kubernetes, AWS, Azure, ServiceNow, Root Cause Analysis, Documentation, Communication, On-Call Support, ITIL, SCRUM, Automotive Industry.
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
The client’s department DPS, Digital People Solutions, offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which comprises the 40 largest and most liquid companies on the Frankfurt Stock Exchange.
We are seeking talented DevOps-Engineers with focus on Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack (ELK). You will independently handle tasks related to architecture, setup, technical migration, and documentation.
The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for the customer’s employees and customers alike.
Responsibilities:
Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, the APM server, APM agents, and interface configuration.
Create and develop regular "Default Dashboards" for visualizing metrics from various sources such as the Apache web server, application servers, and databases.
Improve and fix bugs in installation and automation routines.
Monitor CPU usage, security findings, and AWS alerts.
Develop and extend "Default Alerting" for issues like OOM errors, datasource issues, and LDAP errors.
Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).
Implement machine learning, uptime monitoring including SLA, JIRA integration, security analysis, anomaly detection, and other useful ELK Stack features.
Integrate data from AWS CloudWatch.
Document all relevant information and train involved personnel in the used technologies.
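The "Default Alerting" responsibility above amounts to classifying log events (OOM errors, datasource issues, LDAP errors) and firing alerts on matches. In production this would be expressed as Elasticsearch/Kibana alert rules; the Python sketch below shows only the matching logic, with invented log lines and illustrative regex patterns.

```python
import re

# Illustrative patterns for the alert categories named in the posting
ALERT_PATTERNS = {
    "oom": re.compile(r"java\.lang\.OutOfMemoryError"),
    "datasource": re.compile(r"Cannot get a connection"),
    "ldap": re.compile(r"LDAP(Exception| error)", re.IGNORECASE),
}

def classify(line):
    """Return the alert categories a log line matches (possibly empty)."""
    return [name for name, pat in ALERT_PATTERNS.items() if pat.search(line)]

log = [
    "2024-05-01 12:00:01 ERROR java.lang.OutOfMemoryError: Java heap space",
    "2024-05-01 12:00:05 INFO request served in 12ms",
    "2024-05-01 12:00:09 ERROR LDAPException: invalid credentials",
]
alerts = [category for line in log for category in classify(line)]
print(alerts)  # ['oom', 'ldap']
```

An alerting pipeline would then route each category to its notification channel and deduplicate repeats within a time window.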
Requirements:
Experience with Elastic Stack (ELK) components and related technologies.
Proficiency in automation tools like Ansible and CloudFormation.
Strong knowledge of AWS Cloud services.
Experience in creating and managing dashboards and alerts.
Familiarity with IAM roles and rights management.
Ability to document processes and train team members.
Excellent problem-solving skills and attention to detail.
Skills & Requirements
Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.
Hi,
We are looking for a candidate with experience in DevSecOps.
Please find the JD below for your reference.
Responsibilities:
Execute shell scripts for seamless automation and system management.
Implement infrastructure as code using Terraform for AWS, Kubernetes, Helm, kustomize, and kubectl.
Oversee AWS security groups, VPC configurations, and utilize Aviatrix for efficient network orchestration.
Contribute to the OpenTelemetry Collector for enhanced observability.
Implement microsegmentation using AWS native resources and Aviatrix for commercial routes.
Enforce policies through Open Policy Agent (OPA) integration.
Develop and maintain comprehensive runbooks for standard operating procedures.
Utilize packet tracing for network analysis and security optimization.
Apply OWASP tools and practices for robust web application security.
Integrate container vulnerability scanning tools seamlessly within CI/CD pipelines.
Define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaborate with software and platform engineers to infuse security principles into DevOps teams.
Regularly monitor and report project status to the management team.
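Integrating container vulnerability scanning into CI/CD, as described above, usually reduces to a gate: parse the scanner's report and fail the pipeline stage when findings of blocking severity exist. The sketch below is illustrative Python; the report shape is made up for the example (real tools such as Trivy or Grype emit their own JSON schemas), as is the severity policy.

```python
# Hypothetical scanner output for a container image
scan_report = [
    {"id": "CVE-2023-0001", "severity": "CRITICAL"},
    {"id": "CVE-2023-0002", "severity": "MEDIUM"},
    {"id": "CVE-2023-0003", "severity": "HIGH"},
]

# Illustrative policy: which severities block a release
BLOCKING = {"CRITICAL", "HIGH"}

def gate(findings):
    """Return the IDs of findings that should block the build."""
    return [f["id"] for f in findings if f["severity"] in BLOCKING]

blockers = gate(scan_report)
if blockers:
    # In a real pipeline this would be a non-zero exit code failing the stage
    print("build blocked by:", ", ".join(blockers))
```

Keeping the policy (the `BLOCKING` set) separate from the parsing makes it easy to tighten or relax the gate per environment.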
Qualifications:
Proficient in shell scripting and automation.
Strong command of Terraform, AWS, Kubernetes, Helm, kustomize, and kubectl.
Deep understanding of AWS security practices, VPC configurations, and Aviatrix.
Familiarity with OpenTelemetry for observability and OPA for policy enforcement.
Experience in packet tracing for network analysis.
Practical application of OWASP tools and web application security.
Integration of container vulnerability scanning tools within CI/CD pipelines.
Proven ability to define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaboration expertise with DevOps teams for security integration.
Regular monitoring and reporting capabilities.
Site Reliability Engineering experience.
Hands-on proficiency with source code management tools, especially Git.
Cloud platform expertise (AWS, Azure, or GCP) with hands-on experience in deploying and managing applications.
Please send across your updated profile.
Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company with a fully remote approach, with our main teams in the US and India for time zone coverage. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- 4+ years of relevant work experience in a DevOps role
- 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
What do we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events
About Hive
Hive is the leading provider of cloud-based AI solutions for content understanding, trusted by the world’s largest, fastest growing, and most innovative organizations. The company empowers developers with a portfolio of best-in-class, pre-trained AI models, serving billions of customer API requests every month. Hive also offers turnkey software applications powered by proprietary AI models and datasets, enabling breakthrough use cases across industries. Together, Hive’s solutions are transforming content moderation, brand protection, sponsorship measurement, context-based ad targeting, and more.
Hive has raised over $120M in capital from leading investors, including General Catalyst, 8VC, Glynn Capital, Bain & Company, Visa Ventures, and others. We have over 250 employees globally in our San Francisco, Seattle, and Delhi offices. Please reach out if you are interested in joining the future of AI!
About Role
Our unique machine learning needs led us to open our own data centers, with an emphasis on distributed high-performance computing integrating GPUs. Even with these data centers, we maintain a hybrid infrastructure with public clouds where they are the right fit. As we continue to commercialize our machine learning models, we also need to grow our DevOps and Site Reliability team to maintain the reliability of our enterprise SaaS offering for our customers. Our ideal candidate is someone who is able to thrive in an unstructured environment and takes automation seriously. You believe there is no task that can’t be automated and no server scale too large. You take pride in optimizing performance at scale in every part of the stack and never manually performing the same task twice.
Responsibilities
● Create tools and processes for deploying and managing hardware for Private Cloud Infrastructure.
● Improve workflows of developer, data, and machine learning teams
● Manage integration and deployment tooling
● Create and maintain monitoring and alerting tools and dashboards for various services, and audit infrastructure
● Manage a diverse array of technology platforms, following best practices and procedures
● Participate in on-call rotation and root cause analysis
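As an illustration of the monitoring-and-alerting responsibility above, one common building block is a rolling-window threshold check: alert on the sustained average of a metric rather than a single spike. The sketch below is stdlib Python with invented window and threshold values; in a real deployment this logic would live as an alert rule in the monitoring stack, not a hand-rolled script.

```python
from collections import deque

class RollingAlert:
    """Fire when the rolling mean of a metric exceeds a threshold.

    Window size and threshold here are illustrative defaults, not tuned values.
    """
    def __init__(self, window=5, threshold=90.0):
        self.samples = deque(maxlen=window)  # old samples fall off automatically
        self.threshold = threshold

    def observe(self, value):
        self.samples.append(value)
        mean = sum(self.samples) / len(self.samples)
        return mean > self.threshold

alert = RollingAlert(window=3, threshold=80.0)
readings = [70, 85, 95, 99]  # simulated CPU% samples
print([alert.observe(r) for r in readings])  # [False, False, True, True]
```

Averaging over a window suppresses one-off spikes, which is why a single 85% reading does not fire while the sustained 85-99% run does.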
Requirements
● 5-10 years of previous experience working directly with software engineering teams as a developer, DevOps Engineer, or Site Reliability Engineer
● Experience with infrastructure as a service, distributed systems, and software design at a high level
● Comfortable working on Linux infrastructure (Debian) via the CLI
● Able to learn quickly in a fast-paced environment
● Able to debug, optimize, and automate routine tasks
● Able to multitask, prioritize, and manage time efficiently independently
● Can communicate effectively across teams and management levels
● Degree in computer science, or similar, is an added plus!
Technology Stack
● Operating Systems - Linux/Debian Family/Ubuntu
● Configuration Management - Chef
● Containerization - Docker
● Container Orchestrators - Mesosphere/Kubernetes
● Scripting Languages - Python/Ruby/Node/Bash
● CI/CD Tools - Jenkins
● Network hardware - Arista/Cisco/Fortinet
● Hardware - HP/SuperMicro
● Storage - Ceph, S3
● Database - Scylla, Postgres, Pivotal GreenPlum
● Message Brokers: RabbitMQ
● Logging/Search - ELK Stack
● AWS: VPC/EC2/IAM/S3
● Networking: TCP/IP, ICMP, SSH, DNS, HTTP, SSL/TLS, storage systems, RAID, distributed file systems, NFS/iSCSI/CIFS
Who we are
We are a group of ambitious individuals who are passionate about creating a revolutionary AI company. At Hive, you will have a steep learning curve and an opportunity to contribute to one of the fastest growing AI start-ups in San Francisco. The work you do here will have a noticeable and direct impact on the development of the company.
Thank you for your interest in Hive; we hope to meet you soon!
We are looking for an experienced DevOps (Development and Operations) professional to join our growing organization. In this position, you will be responsible for finding and reporting bugs in web and mobile apps and assisting senior DevOps engineers in managing infrastructure projects and processes. Keen attention to detail, problem-solving abilities, and a solid knowledge base are essential.
As a DevOps engineer, you will work in a Kubernetes-based microservices environment. Experience with Microsoft Azure cloud and Kubernetes is preferred, but not mandatory.
Ultimately, you will ensure that our products, applications, and systems work correctly.
Responsibilities:
- Detect and track software defects and inconsistencies
- Apply quality engineering principles throughout the Agile product lifecycle
- Handle code deployments in all environments
- Monitor metrics and develop ways to improve
- Consult with peers for feedback during testing stages
- Build, maintain, and monitor configuration standards
- Maintain day-to-day management and administration of projects
- Manage CI and CD tools with team
- Follow all best practices and procedures as established by the company
- Provide support and documentation
Required Technical and Professional Expertise
- 2+ years of DevOps experience
- Have experience in SaaS infrastructure development and Web Apps
- Experience in delivering microservices at scale; designing microservices solutions
- Proven Cloud experience/delivery of applications on Azure
- Proficient in configuration management tools such as Ansible, Terraform, Puppet, Chef, Salt, etc.
- Hands-on experience in networking/network configuration, application performance monitoring, container performance, and security
- Understanding of Kubernetes and Python, along with scripting languages like Bash/shell
- Good to have: experience in Linux internals, Linux packaging, release engineering (branching, versioning, tagging), artifact repositories (Artifactory, Nexus), and CI/CD tooling (Concourse CI, Travis, Jenkins)
- Must be a proactive person
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting edge technologies
- An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work
- An intuitive individual with an ability to manage change and proven time management skills
- Proven interpersonal skills, contributing to team effort by accomplishing related results as needed
Below is the job description for the position of DevOps Azure Engineer at Xceedance.
Qualifications: BE/B.Tech/MCA in computer science
Key Requirements for the Position:
• Develop Azure application design and connectivity patterns, Azure networking topologies, and Azure storage facilities.
• Run code conformance tools as part of releases.
• Design Azure app service web app by using Azure CLI, PowerShell, and other tools.
• Implement containerized solution using Docker and Azure Kubernetes Service
• Automating the build and deployment process through Azure DevOps approach and tools from development to production
• Design and implement CI/CD pipelines
• Script and update build and deployments.
• Coordinate environment usage and alignment.
• Develop, maintain, and optimize automated deployments code for development, test, staging and production environments.
• Configure the application and container platform with proactive monitoring tools and trigger alerts through communication channels
• Develop infrastructure and platform code
• Effectively contribute to building the overall knowledge and expertise of the technical team
• Provide Level 2/3 technical support
Location Noida or Gurgaon
- Proven experience in handling large infrastructure and distributed systems like Kafka, YARN, Elasticsearch, etc.
- Familiarity with Python-related technologies and frameworks like Django or Pyramid.
- Experience with Unix/Linux operating systems internals and administration (e.g. filesystems, inodes, system calls, etc) or networking (e.g. TCP/IP, routing, network topologies, and hardware, SDN, etc)
- Familiarity with at least one of the cloud computing infrastructures - GCP / Azure / AWS
- Familiarity with task queue frameworks like Celery or Pika is a plus.
- Source code management and Implementation of security best practices.
- Experienced in building monitoring/metrics and alerting tools (APM tools) and custom dashboards for each application stack across the supported environments
- Good understanding & implementation experience using 12-factor App principles
- Awareness of Cloud Security concepts
- Awareness of Information Security concepts and Best Practices
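One of the 12-factor principles listed above is strict separation of config from code: settings are read from the environment so the same build runs unchanged in every environment. A minimal Python sketch of that pattern; the variable names and defaults are illustrative, not from any real service.

```python
import os

def load_config(env=None):
    """Build application config from environment variables (12-factor style).

    Anything not set in the environment falls back to a development default.
    """
    if env is None:
        env = os.environ
    return {
        "db_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "workers": int(env.get("WEB_CONCURRENCY", "2")),
    }

# Passing a dict stands in for a deployment's environment
cfg = load_config({"LOG_LEVEL": "DEBUG"})
print(cfg["log_level"], cfg["workers"])  # DEBUG 2
```

Because the config surface is just environment variables, the same code deploys to dev, staging, and production with no per-environment branches in the source.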
KaiOS is a mobile operating system for smart feature phones that stormed the scene to become the 3rd largest mobile OS globally (2nd in India ahead of iOS). We are on 100M+ devices in 100+ countries. We recently closed a Series B round with Cathay Innovation, Google and TCL.
What we are looking for:
- BE/B-Tech in Computer Science or related discipline
- 3+ years of overall commercial DevOps / infrastructure experience, cross-functional delivery-focused agile environment; previous experience as a software engineer and ability to see into and reason about the internals of an application a major plus
- Extensive experience designing, delivering and operating highly scalable resilient mission-critical backend services serving millions of users; experience operating and evolving multi-regional services a major plus
- Strong engineering skills; proficiency in programming languages
- Experience in Infrastructure-as-code tools, for example Terraform, CloudFormation
- Proven ability to execute on a technical and product roadmap
- Extensive experience with iterative development and delivery
- Extensive experience with observability for operating robust distributed systems – instrumentation, log shipping, alerting, etc.
- Extensive experience with test and deployment automation, CI / CD
- Extensive experience with infrastructure-as-code and tooling, such as Terraform
- Extensive experience with AWS, including services such as ECS, Lambda, DynamoDB, Kinesis, EMR, Redshift, Elasticsearch, RDS, CloudWatch, CloudFront, IAM, etc.
- Strong knowledge of backend and web technology; infrastructure experience with at-scale modern data technology a major plus
- Deep expertise in server-side security concerns and best practices
- Fluency with at least one programming language. We mainly use Golang, Python and JavaScript so far
- Outstanding analytical thinking, problem solving skills and attention to detail
- Excellent verbal and written communication skills
Requirements
Designation: DevOps Engineer
Location: Bangalore OR Hong Kong
Experience: 4 to 7 Years
Notice period: 30 days or Less

