
DevOps Engineer (Automation)
at a leading open-source solutions and consulting company
ABOUT US
Established in 2009, Ashnik is a leading open-source solutions and consulting company in Southeast Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecture, and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open-source technologies. Our offerings form a critical part of digital transformation, big data platforms, cloud and web acceleration, and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent, and HashiCorp as their key partner in the region. Our team members bring decades of experience in delivering confidence to enterprises adopting open-source software and are known for their thought leadership.
LOCATION: Mumbai
THE POSITION
Ashnik is looking for a talented and passionate Technical Consultant to be part of the training team and work with customers on DevOps solutions. You will be responsible for implementation and consulting work for customers across SEA and India. We are looking for candidates with qualities like:
- Passion for working for different customers and different environments.
- Excellent communication and articulation skills
- Aptitude for learning new technology and willingness to understand technologies they are not directly working on.
- Willingness to travel within and outside the country.
- Ability to independently work at the customer site and navigate through different teams.
RESPONSIBILITIES
First 2 months:
- Get an in-depth understanding of Containers, Kubernetes, CI/CD, IaC.
- Get hands-on experience with technologies such as Mirantis Kubernetes Engine, Terraform, Vault, and Sysdig.
After 2 months: The ideal candidate will ensure the following outcomes from every client deployment:
- Utilize various open source technologies and tools to orchestrate solutions.
- Write scripts and automation using Perl/Python/Groovy/Java/Bash
- Build independent tools and solutions to effectively scale the infrastructure.
- Automate workflows using CI/CD and DevOps practices.
- Be able to document procedures for building and deploying.
- Work on a cloud-based infrastructure spanning Amazon Web Services and Microsoft Azure
- Work with pre-sales and sales team to help customers during their evaluation of Terraform, Vault and other open source technologies.
- Conduct workshops for customers as needed for technical hand-holding and technical handover.
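The scripting responsibilities above (Perl/Python/Groovy/Java/Bash) typically start with small deployment helpers. Below is a minimal Python sketch of one such helper; the commands, registry name, and step list are entirely hypothetical illustrations, not taken from the posting:

```python
import subprocess

# Hypothetical deployment steps; the registry and deployment names
# are placeholders for illustration only.
DEPLOY_STEPS = [
    "docker build -t app:latest .",
    "docker push registry.example.com/app:latest",
    "kubectl rollout restart deployment/app",
]

def run_deploy(steps, dry_run=True):
    """Run each step in order; in dry-run mode, only report the plan.

    Returning the plan makes the procedure easy to document and review
    before anything touches production.
    """
    executed = []
    for step in steps:
        if dry_run:
            executed.append(f"[dry-run] {step}")
        else:
            subprocess.run(step, shell=True, check=True)
            executed.append(step)
    return executed

for line in run_deploy(DEPLOY_STEPS):
    print(line)
```

A dry-run mode like this doubles as living documentation of the build-and-deploy procedure the responsibilities list asks candidates to write down.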
SKILLS AND EXPERIENCE
- Graduate/Post Graduate in any technology.
- Hands-on experience with Terraform, AWS CloudFormation, Ansible, Jenkins, Docker, Git, Jira, etc.
- Hands-on experience with at least one scripting language like Perl/Python/Groovy/Bash
- Knowledge of Java/JVM based languages.
- Experience in Jenkins maintenance and scalability, designing and implementing advanced automation pipelines with Jenkins.
- Experience with a repository manager like JFrog Artifactory
- Strong background in Git, GitHub/Bitbucket, and code branching/merging strategies
- Ability to understand and make trade-offs among different DevOps tools.
ADDITIONAL SKILLS
- Experience with Kubernetes, AWS, Google Cloud and/or Azure is a strong plus
- Some experience with secrets/key management preferably with HashiCorp Vault
- Experience using monitoring solutions, e.g. Datadog, Prometheus, ELK Stack, New Relic, Nagios, etc.
Package: 25-30 lakhs

Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines
OVERVIEW
We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.
The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.
CORE TECHNICAL REQUIREMENTS
Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.
Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.
CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.
Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.
PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.
WHAT YOU WILL OWN
Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.
Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.
VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.
Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.
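The cleanup policies mentioned above can be expressed as a small retention rule. The following Python sketch keeps the newest N tags plus anything inside a grace period; the counts and grace period are made-up defaults for illustration, not this team's actual policy:

```python
from datetime import datetime, timedelta

def tags_to_delete(tags, keep=5, grace_days=30, now=None):
    """Return image tags that are safe to delete.

    tags: list of (tag_name, pushed_at datetime) pairs.
    Keeps the `keep` most recent tags unconditionally, and never
    deletes anything pushed within the last `grace_days` days.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=grace_days)
    ordered = sorted(tags, key=lambda t: t[1], reverse=True)  # newest first
    candidates = ordered[keep:]                               # spare the newest N
    return [name for name, pushed in candidates if pushed < cutoff]
```

In practice a rule like this would be fed from the registry's API and run on a schedule; both Artifact Registry and GitHub's container registry support automated cleanup along these lines.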
Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.
Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.
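A minimal sketch of the secrets-management stance above (keep secrets out of code, fail fast when configuration is missing); the variable names here are hypothetical examples, not from the posting:

```python
import os

# Configuration comes from the environment, never from source code.
# These names are illustrative placeholders.
REQUIRED_VARS = ["DATABASE_URL", "API_TOKEN"]

def load_config(env=None):
    """Read required secrets from the environment, failing fast if any are absent."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError("missing required secrets: " + ", ".join(missing))
    return {name: env[name] for name in REQUIRED_VARS}
```

Failing at startup, with the missing names listed, is usually preferable to a vague crash deep inside a request handler; it also keeps local and production environments honest about parity.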
WHAT SUCCESS LOOKS LIKE
Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.
ENGINEERING STANDARDS
Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.
Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.
Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.
Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.
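One common building block for the automatic recovery described above is retry with exponential backoff. This is a generic sketch, not this team's implementation; the attempt count and delays are illustrative, and the sleep function is injectable so the behavior can be tested without waiting:

```python
import time

def with_retries(fn, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call fn(); on failure, wait base_delay * 2**n seconds and retry."""
    for n in range(attempts):
        try:
            return fn()
        except Exception:
            if n == attempts - 1:
                raise  # out of retries: surface the failure
            sleep(base_delay * (2 ** n))  # 0.5s, 1s, 2s, ...
```

Real systems would narrow the caught exception types and add jitter so many clients do not retry in lockstep, but the shape is the same: transient failures heal themselves, persistent ones surface loudly.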
CURRENT ENVIRONMENT
GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.
WHAT WE ARE LOOKING FOR
Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.
Calm Under Pressure: When production breaks, you diagnose methodically.
Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.
Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.
EDUCATION
University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.
Job Responsibilities:
- Managing and maintaining the efficient functioning of containerized applications and systems within an organization
- Design, implement, and manage scalable Kubernetes clusters in cloud or on-premise environments
- Develop and maintain CI/CD pipelines to automate infrastructure and application deployments, and track all automation processes
- Implement workload automation using configuration management tools, as well as infrastructure as code (IaC) approaches for resource provisioning
- Monitor, troubleshoot, and optimize the performance of Kubernetes clusters and underlying cloud infrastructure
- Ensure high availability, security, and scalability of infrastructure through automation and best practices
- Establish and enforce cloud security standards, policies, and procedures
- Work with agile methodologies
Primary Requirements:
- Kubernetes: Proven experience in managing Kubernetes clusters (min. 2-3 years)
- Linux/Unix: Proficiency in administering complex Linux infrastructures and services
- Infrastructure as Code: Hands-on experience with CM tools like Ansible, as well as knowledge of resource provisioning with Terraform or other cloud-based utilities
- CI/CD Pipelines: Expertise in building and monitoring complex CI/CD pipelines to manage the build, test, packaging, containerization, and release processes of software
- Scripting & Automation: Strong scripting and process automation skills in Bash and Python
- Monitoring Tools: Experience with monitoring and logging tools (Prometheus, Grafana)
- Version Control: Proficient with Git and familiar with GitOps workflows.
- Security: Strong understanding of security best practices in cloud and containerized environments.
Skills/Traits that would be an advantage:
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
Sr. DevOps Engineer (5 to 8 yrs. exp.)
Location: Ahmedabad
- Strong Experience in Infrastructure provisioning in cloud using Terraform & AWS CloudFormation Templates.
- Strong experience in serverless and containerization technologies such as Kubernetes, Docker, etc.
- Strong Experience in Jenkins & AWS Native CI/CD implementation using code
- Strong experience in cloud operations automation using Python, shell scripts, AWS CLI, AWS Systems Manager, AWS Lambda, etc.
- Day-to-day AWS Cloud administration tasks
- Strong Experience in Configuration management using Ansible and PowerShell.
- Strong experience in Linux and at least one scripting language is required.
- Knowledge of monitoring tools will be an added advantage.
- Understanding of DevOps practices which involves Continuous Integration, Delivery and Deployment.
- Hands on with application deployment process
Key Skills: AWS, Terraform, Serverless, Jenkins, DevOps, CI/CD, Python, CLI, Linux, Git, Kubernetes
Role: Software Developer
Industry Type: IT-Software, Software Services
Functional Area: IT Software - Application Programming, Maintenance
Employment Type: Full Time, Permanent
Education: Any computer graduate.
Salary: Best in industry.
Hiring for a funded fintech startup based out of Bangalore!
Our Ideal Candidate
We are looking for a Senior DevOps engineer to join the engineering team and help us automate the build, release, packaging and infrastructure provisioning and support processes. The candidate is expected to own the full life-cycle of provisioning, configuration management, monitoring, maintenance and support for cloud as well as on-premise deployments.
Requirements
- 5-plus years of DevOps experience managing the Big Data application stack, including HDFS, YARN, Spark, Hive, and HBase
- Deep understanding of all the configurations required for installing and maintaining the infrastructure in the long run
- Experience setting up high availability, configuring resource allocation, setting up capacity schedulers, handling data recovery tasks
- Experience with middle-layer technologies including web servers (httpd, nginx), application servers (JBoss, Tomcat), and database systems (PostgreSQL, MySQL)
- Experience setting up enterprise security solutions including setting up active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Experience maintaining and hardening the infrastructure by regularly applying required security packages and patches
- Experience supporting on-premise solutions as well as on AWS cloud
- Experience working with and supporting Spark-based applications on YARN
- Experience with one or more automation tools such as Ansible, Terraform, etc.
- Experience working with CI/CD tools like Jenkins and various test report and coverage plugins
- Experience defining and automating the build, versioning and release processes for complex enterprise products
- Experience supporting clients remotely and on-site
- Experience working with and supporting Java- and Python-based tech stacks would be a plus
Desired Non-technical Requirements
- Very strong communication skills both written and verbal
- Strong desire to work with start-ups
- Must be a team player
Job Perks
- Attractive variable compensation package
- Flexible working hours – everything is results-oriented
- Opportunity to work with an award-winning organization in the hottest space in tech – artificial intelligence and advanced machine learning
" Skills : Strong experience in Ansible, Cloud, Linux, Python or Shell or Bash scripting
" Experience : 3 - 6 Years
" Location : Bangalore
Good to have cloud skills - Docker / Kubernetes
Scripting skills - Any of Shell/Perl/Bash/Python
Good to have Terraform
Position: DevOps Lead
Job Description
● Research, evangelize and implement best practices and tools for GitOps, DevOps, continuous integration, build automation, deployment automation, configuration management, infrastructure as code.
● Develop software solutions to support DevOps tooling; including investigation of bug fixes, feature enhancements, and software/tools updates
● Participate in the full systems life cycle with solution design, development, implementation, and product support using Scrum and/or other Agile practices
● Evaluating, implementing, and streamlining DevOps practices.
● Design and drive the implementation of fully automated CI/CD pipelines.
● Designing and creating Cloud services and architecture for highly available and scalable environments.
● Lead the monitoring, debugging, and enhancing of pipelines for optimal operation and performance.
● Supervising, examining, and handling technical operations.
Qualifications
● 5 years of experience in managing application development, the software delivery lifecycle, and/or infrastructure development and administration
● Experience with source code repository management tools, code merge and quality checks, continuous integration, and automated deployment & management using tools like Bitbucket, Git, Ansible, Terraform, Artifactory, ServiceNow, SonarQube, Selenium.
● Minimum of 4 years of experience with approaches and tooling for automated build, delivery, and release of the software
● Experience and/or knowledge of CI/CD tools: Jenkins, Bitbucket Pipelines, Gitlab CI, GoCD.
● Experience with Linux systems (CentOS, RHEL, Ubuntu, SELinux) and Linux administration.
● Minimum of 4 years of experience with managing medium/large teams, including progress monitoring and reporting
● Experience and/or knowledge of Docker, Cloud, and Orchestration: GCP, AWS, Kubernetes.
● Experience and/or knowledge of system monitoring, logging, high availability, redundancy, autoscaling, and failover.
● Experience automating manual and/or repetitive processes.
● Experience and/or knowledge of networking and load balancing: NGINX, firewalls, IP networks
Designation - DevOps Engineer
Urgently required (notice period of maximum 15 days).
Location:- Mumbai
Experience:- 3-5 years.
Package Offered:- Rs. 4,00,000 to Rs. 7,00,000 p.a.
DevOps Engineer Job Description:-
Responsibilities
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer & client experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Requirements
- Work experience as a DevOps Engineer or similar software engineering role
- Work experience on AWS
- Good knowledge of Ruby or Python
- Working knowledge of databases and SQL
- Problem-solving attitude
- Team spirit
- BSc in Computer Science, Engineering or relevant field
Numerator is a data and technology company reinventing market research. Headquartered in Chicago, IL, Numerator has 1,600 employees worldwide. The company blends proprietary data with advanced technology to create unique insights for the market research industry that has been slow to change. The majority of Fortune 100 companies are Numerator clients.
Job Description
What We Do and How?
We are a market research company, revolutionizing how it's done! We mix fast-paced development and unique approaches to bring best practices and strategy to our technology. Our tech stack is deep, leveraging several languages and frameworks including Python, C#, Java, Kotlin, React, Angular, and Django, among others. Our engineering hurdles sit at the intersection of technologies ranging from mobile, computer vision, and crowdsourcing to machine learning and big data analytics.
Our Team
From San Francisco to Chicago to Ottawa, our R&D team is made up of talented individuals spanning a robust tech stack. It includes product, data analytics, and engineers across Front End, Back End, DevOps, Business Intelligence, ETL, Data Science, Mobile Apps, and much more. Across these different groups, we work towards one common goal: to build products into efficient and seamless user experiences that help our clients succeed.
Numerator is looking for an Infrastructure Engineer to join our growing team. This is a unique opportunity where you will get a chance to work with established and rapidly evolving platforms that handle millions of requests and massive amounts of data. In this position, you will be responsible for taking on new initiatives to automate, enhance, maintain, and scale services in a rapidly scaling SaaS environment.
As a member of our team, you will make an immediate impact as you help build out and expand our technology platforms across several software products. This is a fast-paced role with high growth, visibility, impact, and where many of the decisions for new projects will be driven by you and your team from inception through production.
Some of the technologies we frequently use include: Terraform, Ansible, SumoLogic, Kubernetes, and many AWS-native services.
• Develop and test the cloud infrastructure to scale a rapidly growing ecosystem.
• Monitor and improve DevOps tools and processes, automate mundane tasks, and improve system reliability.
• Provide deep expertise to help steer scalability and stability improvements early in the life-cycle of development while working with the rest of the team to automate existing processes that deploy, test, and lead our production environments.
• Train teams to improve self-healing and self-service cloud-based ecosystems in an evolving AWS infrastructure.
• Build internal tools to demonstrate performance and operational efficiency.
• Develop comprehensive monitoring solutions to provide full visibility to the different platform components using tools and services like Kubernetes, Sumologic, Prometheus, Grafana.
• Identify and troubleshoot any availability and performance issues at multiple layers of deployment, from hardware, operating environment, network, and application.
• Work cross-functionally with various teams to improve Numerator’s infrastructure through automation.
• Work with other teams to assist with issue resolutions related to application configuration, deployment, or debugging.
• Lead by example and evangelize DevOps best practice within other engineering teams at Numerator.
Skills & Requirements
What you bring
• A minimum of 3 years of work experience in backend software, DevOps, or a related field.
• A passion for software engineering, automation and operations and are excited about reliability, availability and performance.
• Availability to participate in after-hours on-call support with your fellow engineers.
• Strong analytical and problem-solving mindset combined with experience troubleshooting large scale systems.
• Fundamental knowledge of networking, operating systems, and package/build systems (IP subnets and routing, ACLs, core Ubuntu, pip and npm).
• Experience with automation technologies to build, deploy and integrate both infrastructure and applications (e.g., Terraform, Ansible).
• Experience using scripting languages like Python and *nix tools (Bash, sed/awk, Make).
• You enjoy developing and managing real-time distributed platforms and services that scale to billions of requests.
• Ability to manage multiple systems across stratified environments.
• A deep enthusiasm for the Cloud and DevOps and keen to get other people involved.
• Experience with scaling and operationalizing distributed data stores, file systems and services.
• Running services in AWS or other cloud platforms, strong experience with Linux systems.
• Experience in modern software paradigms including cloud applications and serverless architectures.
• You look ahead to identify opportunities and foster a culture of innovation.
• BS, MS or Ph.D. in Computer Science or a related field, or equivalent work experience.
Nice to haves
• Previous experience working with a geographically distributed software engineering team.
• Experience working with Jenkins or CircleCI
• Experience with storage optimizations and management
• Solid understanding of building scalable, highly performant systems and services
• Expertise with big data, analytics, machine learning, and personalization.
• Start-up or CPG industry experience
If this sounds like something you would like to be part of, we’d love for you to apply! Don't worry if you think that you don't meet all the qualifications here. The tools, technology, and methodologies we use are constantly changing and we value talent and interest over specific experience.
Disclaimer: We do not charge any fee for employment and the same applies to the Recruitment Partners who we work with. Numerator is an equal opportunity employer. Employment decisions are based on merit. Additionally, we do not ask for any refundable security deposit to be paid in bank accounts for employment purposes. We request candidates to be cautious of misleading communications and not pay any fee/ deposit to individuals/ agencies/ employment portals on the pretext of attending Numerator interview process or seeking employment with us. These would be fraudulent in nature. Anyone dealing with such individuals/agencies/
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding & due-diligence platforms by profiling millions of entities and trillions of associations amongst them, using data collated from more than 700 publicly available government sources. Primarily in the B2B Fintech Enterprise space, we are headquartered in Mumbai in Lower Parel with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes, and mitigating risks seamlessly, in real time, and at a fraction of the current cost.
A few recognitions:
- Recognized as one of the Top 25 startups in India to work with in 2019 by LinkedIn
- Winner of HDFC Bank's Digital Innovation Summit 2020
- Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
- Winner of Amazon AI Award 2019 for Fintech
- Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
- Winner of FinShare 2018 challenge held by ShareKhan
- Only startup in Yes Bank Global Fintech Accelerator to win the account during the Cohort
- 2nd place Citi India FinTech Challenge 2018 by Citibank
- Top 3 in Viacom18's Startup Engagement Programme VStEP
What your average day would look like:
- Deploy and maintain mission-critical information extraction, analysis, and management systems
- Manage low cost, scalable streaming data pipelines
- Provide direct and responsive support for urgent production issues
- Contribute ideas towards secure and reliable Cloud architecture
- Use open source technologies and tools to accomplish specific use cases encountered within the project
- Use coding languages or scripting methodologies to solve automation problems
- Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
- Identify processes and practices to streamline development & deployment to minimize downtime and maximize turnaround time
What you need to work with us:
- Proficiency in at least one of the general-purpose programming languages like Python, Java, etc.
- Experience in managing the IAAS and PAAS components on popular public Cloud Service Providers like AWS, Azure, GCP etc.
- Proficiency in Unix Operating systems and comfortable with Networking concepts
- Experience with developing/deploying a scalable system
- Experience with the Distributed Database & Message Queues (like Cassandra, ElasticSearch, MongoDB, Kafka, etc.)
- Experience in managing Hadoop clusters
- Understanding of containers and experience managing them in production using container orchestration services.
- Solid understanding of data structures and algorithms.
- Applied exposure to continuous delivery pipelines (CI/CD).
- Keen interest and proven track record in automation and cost optimization.
Experience:
- 1-4 years of relevant experience
- BE in Computer Science / Information Technology
