
Role Overview
We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal
candidate will bridge the gap between development and operations, implementing and maintaining our
cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our
client projects.
Responsibilities:
- Design, implement, and maintain CI/CD pipelines.
- Containerize applications using Docker and orchestrate deployments
- Manage and optimize cloud infrastructure on AWS and Azure platforms
- Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability.
- Troubleshoot and resolve infrastructure and deployment issues
- Create and maintain documentation for processes and configurations
- Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
- Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.
Requirements:
- 2+ years of hands-on experience with AWS cloud services
- Strong proficiency in CI/CD pipeline configuration
- Expertise in Docker containerisation and container management (an illustrative sketch follows this list)
- Proficiency in shell scripting (Bash/PowerShell)
- Working knowledge of monitoring and logging tools
- Knowledge of network security and firewall configuration
- Strong communication and collaboration skills, with the ability to work effectively within a team environment
- Understanding of networking concepts and protocols in AWS and/or Azure
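
By way of illustration, here is a minimal Bash sketch of the kind of containerized build-and-smoke-test step this role involves. It is a sketch only: the image name, port, and /health endpoint are hypothetical placeholders, not part of this posting.

```bash
#!/usr/bin/env bash
# Build, run, and smoke-test a containerized application.
# Image name, port, and health endpoint are hypothetical placeholders.
set -euo pipefail

IMAGE="myapp:$(git rev-parse --short HEAD)"   # tag the image with the current commit

docker build -t "$IMAGE" .                    # build from the Dockerfile in the repo root
docker run -d --rm --name myapp-smoke -p 8080:8080 "$IMAGE"

# Wait briefly, then verify the container answers on its (assumed) health endpoint.
sleep 5
curl -fsS http://localhost:8080/health && echo "smoke test passed"

docker stop myapp-smoke                       # clean up the test container
```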

About Codemonk
We are a product engineering company that empowers other startups and enterprises by building simple and elegant software solutions. Through our expertise in the domains of AI and Enterprise Applications, we have helped brands such as Unilever, IndiaMART, GreytHR, Fyle, Skylark Drones, and others to craft world-class products and improve their business. We are churning out amazing software for our clients located across the globe from our headquarters in Bengaluru.
Codemonk is on a mission to transform the way industries work by leveraging the power of AI, Blockchain and IoT. There is something special when you know that every line of code that you write impacts thousands of human lives!
By joining us, you can expect newness and challenges every day. As a member of the team, you will help shape the company's future, fuelling its growth and defining its culture.
Codemonk is a global engineering and design studio that builds and scales digital products, specializing in product development, UX design, and AI/ML solutions for startups and enterprises.
Objectives of this role
• Building and implementing new development tools and infrastructure
• Understanding the needs of stakeholders and conveying them to developers
• Working on ways to automate and improve development and release processes
• Testing and examining code written by others and analysing results
• Ensuring that systems are safe and secure against cybersecurity threats
• Identifying technical problems and developing software updates and fixes
• Working with software developers and software engineers to ensure that development follows established processes and works as intended
• Planning projects and being involved in project management decisions
Responsibilities:
• Set up CI/CD pipelines for automated deployment and delivery
• Set up and manage new and existing cloud-based Kubernetes cluster services
• Write ad-hoc Bash/Python scripts to automate operational tasks (a brief illustrative sketch follows this list)
• Design, maintain, and manage tools that automate different operational processes
• Provide critical system security by applying best practices and proven cloud security solutions
• Troubleshoot systems and resolve problems across various application domains and platforms
• Support and maintain development, UAT, and production infrastructure
• Provide recommendations for architecture and process improvements
• Respond to L2 calls and emails
• Help administer monitoring systems, alerting, log management, and other IT infrastructure systems
• Perform root cause analysis of production errors and resolve technical issues
• Design procedures for system troubleshooting and maintenance
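
As a hedged illustration of the ad-hoc operational scripting mentioned above, here is a small Bash sketch that warns when disk usage crosses a threshold. The 80% threshold and the log file path are hypothetical examples.

```bash
#!/usr/bin/env bash
# Ad-hoc ops script: warn when any filesystem exceeds a usage threshold.
# The 80% threshold and log file location are hypothetical examples.
set -euo pipefail

THRESHOLD=80
LOGFILE=/var/log/disk-usage-check.log

# df -P gives POSIX output; skip the header, read usage% and mount point.
df -P | awk 'NR > 1 {gsub("%", "", $5); print $5, $6}' | while read -r used mount; do
  if [ "$used" -ge "$THRESHOLD" ]; then
    echo "$(date -u +%FT%TZ) WARNING: ${mount} is ${used}% full" | tee -a "$LOGFILE"
  fi
done
```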
Technical Skill Requirements:
• Experience in a DevOps role in an AWS/OCI cloud environment
• Must have experience with CI/CD pipelines and hands-on experience with DevOps tools such as Jenkins, Git, Docker, Kubernetes, Ansible, etc.
• Strong knowledge of Terraform for multi-stack cloud infrastructure provisioning (see the Terraform workflow sketch after this list)
• Strong knowledge of OCI/AWS-based Kubernetes service management
• Must have experience with Python/Bash as scripting languages
• Good knowledge of software debugging and of web applications and services (Apache, Nginx, HAProxy)
• Must have knowledge of monitoring setups using Prometheus, Alertmanager, Grafana, Thanos, Loki, Fluentbit, etc.
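
For the Terraform requirement above, a minimal command-line sketch of a multi-environment provisioning workflow using workspaces; the "staging" workspace and the staging.tfvars file are hypothetical examples, not prescribed by this posting.

```bash
#!/usr/bin/env bash
# Minimal Terraform workflow for provisioning one environment of a multi-stack setup.
# The "staging" workspace and staging.tfvars file are hypothetical examples.
set -euo pipefail

terraform init -input=false                     # download providers and modules
terraform workspace select staging || terraform workspace new staging
terraform plan -input=false -var-file=staging.tfvars -out=tfplan
terraform apply -input=false tfplan             # apply exactly what was planned
```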
Good To Have Skills
• PostgreSQL, MySQL, MongoDB, Redis, Keycloak
• Migrating applications from one cloud to another; OCI certifications
• Test-Driven Development
Soft Skill Requirements:
• Able to learn new skills and technologies quickly
• Energetic, with amazing customer-service skills and a team-oriented approach
• Strong verbal and written communication skills
Job Requirements
Required Experience
5–6 years of hands-on experience in Salesforce DevOps, release engineering, or deployment
management.
Strong expertise in Salesforce deployment processes, including CI/CD pipelines.
Significant hands-on experience with at least two of the following tools: Gearset, Copado,
Flosum.
Solid understanding of Salesforce architecture, metadata, and development lifecycle.
Familiarity with version control systems (e.g., Git) and agile methodologies.
Key Responsibilities
Design, implement, and manage CI/CD pipelines for Salesforce deployments using Gearset,
Copado, or Flosum.
Automate and optimize deployment processes to ensure efficient, reliable, and repeatable
releases across Salesforce environments.
Collaborate with development, QA, and operations teams to gather requirements and ensure
alignment of deployment strategies.
Monitor, troubleshoot, and resolve deployment and release issues.
Maintain documentation for deployment processes and provide training on best practices.
Stay updated on the latest Salesforce DevOps tools, features, and best practices.
Technical Skills
Deployment Tools: Hands-on with Gearset, Copado, Flosum for Salesforce deployments
CI/CD: Building and maintaining pipelines, automation, and release management
Version Control: Proficiency with Git and related workflows
Salesforce Platform: Understanding of metadata, SFDX, and environment management
Scripting: Familiarity with scripting (e.g., Shell, Python) for automation (preferred)
Communication: Strong written and verbal communication skills
Preferred Qualifications
Bachelor’s degree in Computer Science, Information Technology, or related field.
Certifications
Salesforce certifications (e.g., Salesforce Administrator, Platform Developer I/II) are a plus.
Experience with additional DevOps tools (Jenkins, GitLab, Azure DevOps) is beneficial.
Experience with Salesforce DX and deployment strategies for large-scale orgs (a check-only deployment sketch follows).
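
As one possible illustration of a validation step in such a pipeline (a sketch, not this team's actual process): a check-only deployment against a sandbox using the legacy `sfdx force:source:deploy` command. The org alias and manifest path are hypothetical, and newer Salesforce CLI versions expose the same capability via `sf project deploy start`.

```bash
#!/usr/bin/env bash
# Sketch: validate (check-only deploy) Salesforce metadata against a sandbox.
# Org alias "uat-sandbox" and the manifest path are hypothetical placeholders.
set -euo pipefail

sfdx force:source:deploy \
  --manifest manifest/package.xml \
  --targetusername uat-sandbox \
  --testlevel RunLocalTests \
  --checkonly \
  --wait 30
```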
Job Description
Role: Sr. DevOps Architect
Location: Bangalore
Who are we looking for?
A senior-level DevOps consultant with deep DevOps-related expertise. The individual should be passionate about technology and demonstrate depth and breadth of expertise in similar roles and in Enterprise Systems/Enterprise Architecture Frameworks.
Technical Skills:
• 8+ years of relevant DevOps/Operations/Development experience working in an Agile DevOps culture on large-scale distributed systems.
• Experience in building a DevOps platform by integrating the DevOps tool chain using REST/SOAP/ESB technologies.
• Hands-on programming skills in developing automation modules using one of these scripting languages: Python, Perl, Ruby, or Bash.
• Hands-on experience with public clouds such as AWS, Azure, OpenStack, Pivotal Cloud Foundry, etc.; Azure experience is a must.
• Experience working with more than one configuration management tool (Chef/Puppet/Ansible) and building your own cookbooks/manifests is required.
• Experience with Docker and Kubernetes.
• Experience in building CI/CD pipelines using continuous integration tools such as Jenkins, Bamboo, etc.
• Experience with planning tools like Jira, Rally, etc.
• Hands-on experience with continuous integration and build tools (Jenkins, Bamboo, CruiseControl, etc.), version control systems (Git, SVN, GitHub, TFS, etc.), build automation tools (Maven/Gradle/Ant), and dependency management tools (Artifactory/Nexus).
• Experience with more than one deployment automation tool, such as IBM UrbanCode, CA Automic, XL Deploy, etc.
• Experience setting up and managing DevOps tooling for repositories, monitoring, log analysis, etc., using tools such as New Relic, Splunk, and AppDynamics.
• Understanding of applications, networking, and open source tools.
• Experience on the security side of DevOps, i.e. DevSecOps.
• Good to have: an understanding of microservices architecture.
• Experience working with remote/offshore teams is a huge plus.
• Experience in building dashboards based on modern JS technologies such as NodeJS.
• Experience with NoSQL databases such as MongoDB.
• Experience in working with REST APIs (a toolchain-integration sketch follows this list).
• Experience with tools like NPM, Gulp.
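
Since toolchain integration over REST comes up above, here is a minimal hedged sketch of triggering a parameterized Jenkins job through its REST API with curl. The Jenkins URL, job name, parameters, and the user/token pair are hypothetical placeholders.

```bash
#!/usr/bin/env bash
# Sketch: trigger a parameterized Jenkins job over its REST API.
# JENKINS_URL, the job name, the parameters, and the user:token pair are hypothetical.
set -euo pipefail

JENKINS_URL="https://jenkins.example.com"
JOB="deploy-service"

# Jenkins exposes POST /job/<name>/buildWithParameters for parameterized jobs.
curl -fsS -X POST \
  --user "ci-bot:${JENKINS_API_TOKEN}" \
  "${JENKINS_URL}/job/${JOB}/buildWithParameters" \
  --data-urlencode "ENVIRONMENT=staging" \
  --data-urlencode "VERSION=1.4.2"
```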
Process Skills:
• Ability to perform rapid assessments of clients' internal technology landscapes and to identify target use cases and deployment targets
• Develop and create program blueprints, case studies, and supporting technical documentation so that DevOps work can be commercialized and replicated across different business customers
• Compile, deliver, and evangelize roadmaps that guide the evolution of services
• Grasp and communicate big-picture, enterprise-wide issues to the team
• Experience working in an Agile / Scrum / SAFe environment preferred
Behavioral Skills:
• Should have directly worked on creating enterprise-level operating models and architecture options
• Model as-is and to-be architectures based on business requirements
• Good communication & presentation skills
• Self-driven, disciplined, organized, result-oriented, focused, and passionate about work
• Flexible for short term travel
Primary Duties / Responsibilities:
• Build Automations and modules for DevOps platform
• Build integrations between various DevOps tools
• Interface with other teams to provide support and understand the overall vision of the transformation platform.
• Understand customer deployment scenarios, and continuously improve and update the platform based on agile requirements.
• Prepare HLDs and LLDs.
• Present status to leadership and key stakeholders at regular intervals.
Qualification:
• 12+ years of work experience in software development.
• 5+ years of industry experience in DevOps architecture related to Continuous Integration/Delivery solutions and platform automation, including technology consulting experience.
• Education qualification: B.Tech, BE, BCA, MCA, M.Tech, or an equivalent technical degree from a reputed college.
- Development/technical support experience, preferably in DevOps.
- Looking for an engineer to be part of GitHub Actions support. Experience with CI/CD tools such as Bamboo, Harness, Ansible, and Salt scripting.
- Hands-on expertise with GitHub Actions and CI/CD tools such as Bamboo and Harness, CI/CD pipeline stages, build tools, SonarQube, Artifactory, NuGet, ProGet, Veracode, LaunchDarkly, GitHub/Bitbucket repos, and monitoring tools (a command-line sketch for inspecting Actions runs follows the tool list below).
- Handling xMatters, Techlines, and incidents.
- Strong scripting skills (PowerShell, Python, Bash/shell) for implementing automation scripts and tools to streamline administrative tasks and improve efficiency.
- An Atlassian Tools Administrator is responsible for managing and maintaining Atlassian products such as Jira, Confluence, Bitbucket, and Bamboo.
- Expertise in Bitbucket and GitHub for version control and collaboration at a global level.
- Good experience with Linux/Windows system activities and databases.
- Aware of SLA and error concepts and their implementation; provide support and participate in incident management and Jira stories. Continuously monitor system performance and availability, and respond to incidents promptly to minimize downtime.
- Well-versed with observability tools such as Splunk for monitoring, alerting, and logging solutions to identify and address potential issues, especially in infrastructure.
- Expert at troubleshooting production issues and bugs; identifying and resolving issues in production environments.
- Experience in providing 24x5 support.
- GitHub Actions
- Atlassian Tools (Bamboo, Bitbucket, Jira, Confluence)
- Build Tools (Maven, Gradle, MS Build, NodeJS)
- SonarQube, Veracode.
- Nexus, JFrog, Nuget, Proget
- Harness
- Salt Services, Ansible
- PowerShell, Shell scripting
- Splunk
- Linux, Windows
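
As a small illustration of day-to-day GitHub Actions support work, a hedged sketch of inspecting and re-running workflow runs from the command line. It assumes the GitHub CLI (`gh`) is installed and authenticated; the ci.yml workflow name is a hypothetical placeholder.

```bash
#!/usr/bin/env bash
# Sketch: inspect and re-run GitHub Actions workflow runs with the GitHub CLI.
# Assumes `gh` is installed and authenticated; the ci.yml workflow name is hypothetical.
set -euo pipefail

gh run list --workflow=ci.yml --limit 10        # recent runs of one workflow

RUN_ID=$(gh run list --workflow=ci.yml --limit 1 --json databaseId --jq '.[0].databaseId')

gh run view "$RUN_ID" --log-failed              # show logs of failed steps only
gh run rerun "$RUN_ID" --failed                 # re-run only the failed jobs
```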
Preferred Education & Experience:
• Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. Relevant experience of at least 3 years in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices and hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior-Driven Development, Test-Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
• Demonstrable hands-on working experience with DevOps tools and platforms, viz. Slack, Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in virtualization & containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc. (a minimal kubectl sketch appears after this list).
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, or Security, Identity, & Compliance; or equivalent demonstrable cloud platform experience.
• Well-versed, with demonstrable working experience, in API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption tools and platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; fresh graduates or lateral movers into IT must be able to code in the languages they have studied.
• Well-versed with Storage, Networks and Storage Networking basics
which will enable you to work in a Cloud environment.
• Well-versed with Network, Data, and
Application Security basics which will enable you to work in a Cloud as well as Business
Applications / API services environment.
• Extra points if you are certified in AWS and/or Azure
and/or Google Cloud.
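
To ground the containerization expectations above, a minimal hedged sketch of routine Kubernetes checks with kubectl; the manifest path, the "payments" namespace, and the deployment name are hypothetical placeholders.

```bash
#!/usr/bin/env bash
# Sketch: routine Kubernetes operations with kubectl.
# The manifest path, "payments" namespace, and deployment name are hypothetical.
set -euo pipefail

kubectl apply -f k8s/deployment.yaml -n payments     # create/update the deployment
kubectl rollout status deployment/payments-api -n payments
kubectl get pods -n payments -o wide                 # confirm pods are Running
kubectl logs deployment/payments-api -n payments --tail=100
```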
Required Experience: 5+ Years
Job Location: Remote/Pune
We are looking for a Senior Platform Engineer responsible for handling our GCP/AWS clouds. The
candidate will be responsible for automating the deployment of cloud infrastructure and services to
support application development and hosting (architecting, engineering, deploying, and operationally
managing the underlying logical and physical cloud computing infrastructure).
Location: Bangalore
Reporting Manager: VP, Engineering
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless,
microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Responsible for deploying highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning; someone with a knack for benchmarking and optimization
● Hiring, developing, and cultivating a strong and reliable cloud support team
● Building and operating complex CI/CD pipelines at scale
● Work with GCP services such as Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, and networking in general
● Collaborate with Product Management and Product Engineering teams to drive excellence in
Google Cloud products and features.
● Ensure efficient data storage and processing functions in accordance with company security policies and cloud security best practices
● Ensure scaled database setup/monitoring with near-zero downtime
Key Skills:
● Hands-on software development experience in Python, NodeJS, or Java
● 5+ years of Linux/Unix administration, covering monitoring, reliability, and security of Linux-based, online, high-traffic services and web/e-commerce properties
● 5+ years of production experience in large-scale cloud-based infrastructure (GCP preferred)
● Strong experience with log analysis and monitoring tools such as CloudWatch, Splunk, Dynatrace, Nagios, etc.
● Hands-on experience with AWS Cloud – EC2, S3 buckets, RDS (a brief CLI sketch follows this list)
● Hands-on experience with Infrastructure as Code (e.g., CloudFormation, ARM, Terraform, Ansible, Chef, Puppet) and version control tools
● Hands-on experience with configuration management (Chef/Ansible)
● Experience in designing High Availability infrastructure and planning for Disaster Recovery solutions
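
A brief hedged sketch of the kind of CLI-level checks implied by the GCP/AWS requirements above; it assumes authenticated `aws` and `gcloud` CLIs, and the region and resource names are hypothetical.

```bash
#!/usr/bin/env bash
# Sketch: quick inventory/health checks across AWS and GCP from the CLI.
# Assumes authenticated aws/gcloud CLIs; region and resource names are hypothetical.
set -euo pipefail

# AWS: running EC2 instances, S3 buckets, RDS instances
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" --output text
aws s3 ls
aws rds describe-db-instances --query "DBInstances[].DBInstanceIdentifier" --output text

# GCP: Cloud Run services in one region
gcloud run services list --region=asia-south1
```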
Technical Experience/Knowledge Needed:
- Proven ability to work in a cloud-hosted services environment.
- Ability to manage and maintain cloud infrastructure on AWS.
- Must have strong experience in technologies such as Docker, Kubernetes, Functions, etc.
- Knowledge of orchestration tools such as Ansible.
- Experience with the ELK Stack.
- Strong knowledge of microservices, container-based architecture, and the corresponding deployment tools and techniques.
- Hands-on knowledge of implementing multi-stage CI/CD with tools like Jenkins and Git.
- Sound knowledge of tools like Kibana, Kafka, Grafana, Instana, and so on.
- Proficient in Bash scripting.
- Must have in-depth knowledge of Clustering, Load Balancing, High Availability and Disaster Recovery, Auto Scaling, etc.
- AWS Certified Solutions Architect and/or Linux System Administrator
- Strong ability to work independently on complex issues
- Collaborate efficiently with internal experts to resolve customer issues quickly
- No objection to working night shifts, as the production support team works on a 24x7 basis; rotational shifts are assigned weekly so that candidates get equal opportunity to work day and night shifts. Candidates willing to work night shifts only on a need basis may also be discussed with us.
- Early Joining
- Willingness to work in Delhi NCR
Responsibilities
- Designing and building infrastructure to support AWS, Azure, and GCP-based Cloud services and infrastructure.
- Creating and utilizing tools to monitor our applications and services in the cloud including system health indicators, trend identification, and anomaly detection.
- Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
- Participating in on-call escalation to troubleshoot customer-facing issues
- Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
- Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.
Skills
- Should have strong experience (a couple of years) in leading a DevOps team and in planning, defining, and executing a DevOps roadmap together with the team
- Familiarity with the AWS cloud, JSON templates, Python, and AWS CloudFormation templates
- Designing solutions using one or more AWS features, tools, and technologies such as EC2, EBS, Glacier, S3, ELB, CloudFormation, Lambda, CloudWatch, VPC, RDS, Direct Connect, AWS CLI, and REST APIs
- Design and implement system architecture with the AWS cloud
- Develop automation using scripts, ARM templates, Ansible, Chef, Python, and PowerShell
- Knowledge of AWS services and cloud design patterns
- Knowledge of cloud fundamentals such as autoscaling and serverless
- Experience with DevOps and Infrastructure as Code: AWS environment and application automation utilizing CloudFormation and third-party tools; CI/CD pipeline setup
- CI experience with the following is a must: Jenkins, Bitbucket/Git, Nexus or Artifactory, SonarQube, WireMock or another mocking solution
- Expert knowledge of Windows/Linux/macOS with at least 5-6 years of system administration experience
- Should have strong skills in using the Jira tool
- Should have knowledge of managing CI/CD pipelines for public cloud deployments on AWS
- Should have strong skills in using tools like Jenkins, Docker, Kubernetes (AWS EKS, Azure AKS), and CloudFormation.
- Experience in monitoring tools like Pingdom, Nagios, etc.
- Experience in reverse proxy services like Nginx and Apache
- Desirable: experience in Bitbucket and version control tools like Git/SVN
- Experience with manual/automated testing of application deployments is desired
- Experience in database technologies such as PostgreSQL, MySQL
- Knowledge of Helm and Terraform (a brief deployment sketch follows this list)
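
As flagged in the last bullet, a minimal hedged Helm deployment sketch; the release name, chart path, namespace, and image tag are hypothetical placeholders, and the image.tag value assumes the chart exposes such a setting.

```bash
#!/usr/bin/env bash
# Sketch: deploy (or upgrade) an application with Helm and verify the rollout.
# Release name, chart path, namespace, and image tag are hypothetical placeholders.
set -euo pipefail

helm upgrade --install webapp ./charts/webapp \
  --namespace prod --create-namespace \
  --set image.tag="1.7.3"

kubectl rollout status deployment/webapp -n prod --timeout=120s
```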
• Bachelor's or Master's degree in Computer Science or Software Engineering from a reputed university.
• 5-8 years of experience in building scalable, secure, and compliant systems.
• More than 2 years of experience working with GCP deployments for millions of daily visitors
• 5+ years hosting experience in a large heavy-traffic environment
• 5+ years production application support experience in a high uptime environment
• Software development and monitoring knowledge with Automated builds
• Technology:
o Cloud: AWS or Google Cloud
o Source Control: Gitlab or Bitbucket or Github
o Container Concepts: Docker, Microservices
o Continuous Integration: Jenkins, Bamboo
o Infrastructure Automation: Puppet, Chef or Ansible
o Deployment Automation: Jenkins, VSTS or Octopus Deploy
o Orchestration: Kubernetes, Mesos, Swarm
o Automation: Node JS or Python
o Linux environment network administration, DNS, firewall and security management
• Ability to adapt to the startup culture, handle multiple competing priorities, meet deadlines, and troubleshoot problems.
DevOps Engineer Skills
- Building a scalable and highly available infrastructure for data science
- Knows data science project workflows
- Hands-on with deployment patterns for online/offline predictions (server/serverless)
- Experience with either Terraform or Kubernetes
- Experience with ML deployment frameworks like Kubeflow, MLflow, SageMaker
- Working knowledge of Jenkins or a similar tool
Responsibilities
- Owns all the ML cloud infrastructure (AWS)
- Helps build out an entire CI/CD ecosystem with auto-scaling (see the image-publishing sketch after this list)
- Works with a testing engineer to design testing methodologies for ML APIs
- Ability to research & implement new technologies
- Helps with cost optimization of infrastructure
- Knowledge sharing
Nice to Have
- Develop APIs for machine learning
- Can write Python servers for ML systems with API frameworks
- Understanding of task queue frameworks like Celery
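
To illustrate the CI/CD responsibility referenced above, a hedged sketch of publishing a model-serving image to Amazon ECR; the account ID, region, and repository name are hypothetical placeholders.

```bash
#!/usr/bin/env bash
# Sketch: build a model-serving image and push it to Amazon ECR.
# Account ID, region, and repository name are hypothetical placeholders.
set -euo pipefail

ACCOUNT_ID="123456789012"
REGION="ap-south-1"
REPO="ml-inference-api"
TAG="$(git rev-parse --short HEAD)"
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$REGISTRY"

docker build -t "${REGISTRY}/${REPO}:${TAG}" .
docker push "${REGISTRY}/${REPO}:${TAG}"
```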
