
About the role:
We are seeking an experienced DevOps Engineer with deep expertise in Jenkins, Docker, Ansible, and Kubernetes to architect and maintain secure, scalable infrastructure and CI/CD pipelines. This role emphasizes security-first DevOps practices, on-premises Kubernetes operations, and integration with data engineering workflows.
🛠 Required Skills & Experience
Technical Expertise
- Jenkins (Expert): Advanced pipeline development, DSL scripting, security integration, troubleshooting
- Docker (Expert): Secure multi-stage builds, vulnerability management, optimisation for Java/Scala/Python
- Ansible (Expert): Complex playbook development, configuration management, automation at scale
- Kubernetes (Expert - Primary Focus): On-premises cluster operations, security hardening, networking, storage management
- SonarQube/Code Quality (Strong): Integration, quality gate enforcement, threshold management
- DevSecOps (Strong): Security scanning, compliance automation, vulnerability remediation, workload governance
- Spark ETL/ETA (Moderate): Understanding of distributed data processing, job configuration, runtime behavior
Core Competencies
- Deep understanding of DevSecOps principles and security-first automation
- Strong troubleshooting and problem-solving abilities across complex distributed systems
- Experience with infrastructure-as-code and GitOps methodologies
- Knowledge of compliance frameworks and security standards
- Ability to mentor teams and drive best practice adoption
🎓 Qualifications
- 6-10 years of hands-on DevOps experience
- Proven track record with Jenkins, Docker, Kubernetes, and Ansible in production environments
- Experience managing on-premises Kubernetes clusters (bare-metal preferred)
- Strong background in security hardening and compliance automation
- Familiarity with data engineering platforms and big data technologies
- Excellent communication and collaboration skills
🚀 Key Responsibilities
1. CI/CD Pipeline Architecture & Security
- Design, implement, and maintain enterprise-grade CI/CD pipelines in Jenkins with embedded security controls:
  - Build greenfield pipelines and enhance/stabilize existing pipeline infrastructure
  - Diagnose and resolve build, test, and deployment failures across multi-service environments
  - Integrate security gates, compliance checks, and automated quality controls at every pipeline stage
- Manage and optimize SonarQube and static code analysis tooling (a minimal quality-gate check is sketched after this list):
  - Enforce code quality and security scanning standards across all services
  - Maintain organizational coding standards, vulnerability thresholds, and remediation workflows
  - Automate quality gates as integral components of CI/CD processes
- Engineer optimized Docker images for Java, Scala, and Python applications:
  - Implement multi-stage builds, layer optimization, and minimal base images
  - Conduct image vulnerability scanning and enforce compliance policies
  - Apply containerization best practices for security and performance
- Develop comprehensive Ansible automation:
  - Create modular, reusable, and secure playbooks for configuration management
  - Automate environment provisioning and application lifecycle operations
  - Maintain infrastructure-as-code standards and version control
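As a purely illustrative aid (not part of this role's actual tooling), here is a minimal Python sketch of the kind of automated quality-gate enforcement described above: it asks SonarQube for a project's quality-gate status via the standard `/api/qualitygates/project_status` Web API and exits non-zero so the calling Jenkins stage fails when the gate is red. The server URL, environment variable names, and project key are assumptions.

```python
"""Minimal quality-gate check, intended to run as a step inside a Jenkins stage.

Assumptions: a SonarQube server reachable at SONAR_URL, a user token in
SONAR_TOKEN, and the project key in SONAR_PROJECT_KEY (all illustrative names).
"""
import os
import sys

import requests

SONAR_URL = os.environ.get("SONAR_URL", "https://sonarqube.example.com")  # hypothetical host
SONAR_TOKEN = os.environ["SONAR_TOKEN"]        # token injected by the pipeline credentials store
PROJECT_KEY = os.environ["SONAR_PROJECT_KEY"]  # e.g. "payments-service" (illustrative)


def quality_gate_status(project_key: str) -> str:
    """Return the SonarQube quality-gate status ("OK", "WARN", or "ERROR")."""
    resp = requests.get(
        f"{SONAR_URL}/api/qualitygates/project_status",
        params={"projectKey": project_key},
        auth=(SONAR_TOKEN, ""),  # token as username, empty password
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["projectStatus"]["status"]


if __name__ == "__main__":
    status = quality_gate_status(PROJECT_KEY)
    print(f"Quality gate for {PROJECT_KEY}: {status}")
    if status != "OK":
        sys.exit(1)  # non-zero exit fails the Jenkins stage that runs this script
```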
2. Kubernetes Platform Operations & Security
- Lead complete lifecycle management of on-premises/bare-metal Kubernetes clusters:
  - Cluster provisioning, version upgrades, node maintenance, and capacity planning
  - Configure and manage networking (CNI), persistent storage solutions, and ingress controllers
  - Troubleshoot workload performance, resource constraints, and reliability issues
- Implement and enforce Kubernetes security best practices (a minimal RBAC audit is sketched after this list):
  - Design and manage RBAC policies, service account isolation, and least-privilege access models
  - Apply Pod Security Standards, network policies, secrets encryption, and certificate lifecycle management
  - Conduct cluster hardening, security audits, monitoring, and policy governance
- Provide technical leadership to development teams:
  - Guide secure deployment patterns and containerized application best practices
  - Establish workload governance frameworks for distributed systems
  - Drive adoption of security-first mindsets across engineering teams
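For illustration only, and assuming the official `kubernetes` Python client and whatever kubeconfig context is active, the sketch below shows the sort of RBAC audit implied above: it lists ClusterRoleBindings and flags every subject bound to the `cluster-admin` ClusterRole, a common starting point when enforcing least-privilege access.

```python
"""Minimal RBAC audit sketch: report subjects bound to cluster-admin.

Assumes the `kubernetes` Python client is installed and a kubeconfig (or
in-cluster config) is available; purely illustrative, not an official tool.
"""
from kubernetes import client, config


def cluster_admin_subjects():
    """Yield (binding_name, subject) pairs bound to the cluster-admin ClusterRole."""
    config.load_kube_config()  # use config.load_incluster_config() when running in a pod
    rbac = client.RbacAuthorizationV1Api()
    for binding in rbac.list_cluster_role_binding().items:
        if binding.role_ref.name != "cluster-admin":
            continue
        for subject in binding.subjects or []:
            yield binding.metadata.name, subject


if __name__ == "__main__":
    for binding_name, subject in cluster_admin_subjects():
        namespace = getattr(subject, "namespace", None) or "-"
        print(f"{binding_name}: {subject.kind} {namespace}/{subject.name} has cluster-admin")
```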
3. Data Engineering Support
- Collaborate with data engineering teams on Spark-based workloads (a minimal PySpark job is sketched after this list):
  - Support deployment and operational tuning of Spark ETL/ETA jobs
  - Understand cluster integration, job orchestration, and performance optimization
  - Debug and troubleshoot Spark workflow issues in production environments
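As a rough illustration of the Spark ETL work referenced above, here is a minimal PySpark job sketch; the input/output paths, column names, and tuning values are hypothetical and would be replaced by the real pipeline's configuration.

```python
"""Minimal PySpark ETL sketch (paths, columns, and settings are hypothetical)."""
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-daily-etl")                    # illustrative job name
    .config("spark.sql.shuffle.partitions", "200")  # tune to data volume and cluster size
    .getOrCreate()
)

# Extract: read raw order events (hypothetical path)
orders = spark.read.parquet("hdfs:///data/raw/orders")

# Transform: keep completed orders and aggregate revenue per day
daily_revenue = (
    orders.filter(F.col("status") == "COMPLETED")
          .groupBy(F.to_date("created_at").alias("order_date"))
          .agg(F.sum("amount").alias("revenue"))
)

# Load: write curated output as partitioned Parquet (hypothetical path)
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "hdfs:///data/curated/daily_revenue"
)

spark.stop()
```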

About CoffeeBeans
CoffeeBeans Consulting is a technology partner dedicated to driving business transformation. With deep expertise in Cloud, Data, MLOps, AI, infrastructure services, application modernization, Blockchain, and Big Data, we help organizations tackle complex challenges and seize growth opportunities in today’s fast-paced digital landscape. We’re more than just a tech service provider; we're a catalyst for meaningful change.
CoffeeBeans Consulting, founded in 2017, is a high-end technology consulting firm that helps businesses build better products and improve delivery quality through a mix of engineering, product, and process expertise. They work across domains to deliver scalable backend systems, data engineering pipelines, and AI-driven solutions, often using modern stacks like Java, Spring Boot, Python, Spark, Snowflake, Azure, and AWS. With a strong focus on clean architecture, performance optimization, and practical problem-solving, CoffeeBeans partners with clients for both internal and external projects—driving meaningful business outcomes through tech excellence.
Similar jobs
Key Qualifications :
- At least 2 years of hands-on experience with cloud infrastructure on AWS or GCP
- Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
- Knowledge of DevOps tools (e.g. Jenkins, Groovy, and Gradle)
- Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
- Proven ability to work independently or as an integral member of a team
Preferable Skills :
- Familiarity with standard IT security practices such as encryption, credentials and key management
- Proven ability to pick up various coding languages (e.g. Java, Python) to support DevOps operations and cloud transformation
- Familiarity with web standards (e.g. REST APIs, web security mechanisms)
- Multi-cloud management experience with GCP / Azure
- Experience in performance tuning, services outage management and troubleshooting
Senior Software Engineer I - DevOps Engineer
Exceptional software engineering is challenging. Amplifying it to ensure that multiple teams can concurrently create and manage a vast, intricate product escalates the complexity. As a Senior Software Engineer within the Release Engineering team at Sumo Logic, your task will be to develop and sustain automated tooling for the release processes of all our services. You will contribute significantly to establishing automated delivery pipelines, empowering autonomous teams to create independently deployable services. Your role is integral to our overarching strategy of enhancing software delivery and progressing Sumo Logic’s internal Platform-as-a-Service.
What you will do:
• Own the Delivery pipeline and release automation framework for all Sumo services
• Educate and collaborate with teams during both design and development phases to ensure best practices.
• Mentor a team of Engineers (Junior to Senior) and improve software development processes.
• Evaluate, test, and provide technology and design recommendations to executives.
• Write detailed design documents and documentation on system design and implementation.
• Ensure that engineering teams are set up to deliver quality software quickly and reliably.
• Enhance and maintain infrastructure and tooling for development, testing and debugging
What you already have
• B.S. or M.S. Computer Sciences or related discipline
• Ability to influence: Understand people’s values and motivations and influence them towards making good architectural choices.
• Collaborative working style: You can work with other engineers to come up with good decisions.
• Bias towards action: You need to make things happen. It is essential you don’t become an inhibitor of progress, but an enabler.
• Flexibility: You are willing to learn and change. Admit past approaches might not be the right ones now.
Technical skills:
- 4+ years of experience in the design, development, and use of release automation tooling, DevOps, CI/CD, etc.
- 2+ years of experience in software development in Java/Scala/Golang or similar
- 3+ years of experience with software delivery technologies like Jenkins, including experience writing and developing CI/CD pipelines, and knowledge of build tools like make/gradle/npm, etc.
- Experience with cloud technologies, such as AWS/Azure/GCP
- Experience with Infrastructure-as-Code and tools such as Terraform
- Experience with scripting languages such as Groovy, Python, Bash etc.
- Knowledge of monitoring tools such as Prometheus/Grafana or similar tools
- Understanding of GitOps and ArgoCD concepts/workflows
- Understanding of security and compliance aspects of DevSecOps
About Us
Sumo Logic, Inc. empowers the people who power modern, digital business. Sumo Logic enables customers to deliver reliable and secure cloud-native applications through its Sumo Logic SaaS Analytics Log Platform, which helps practitioners and developers ensure application reliability, secure and protect against modern security threats, and gain insights into their cloud infrastructures. Customers worldwide rely on Sumo Logic to get powerful real-time analytics and insights across observability and security solutions for their cloud-native applications. For more information, visit www.sumologic.com.
Sumo Logic Privacy Policy: Employees will be responsible for complying with applicable federal privacy laws and regulations, as well as organizational policies related to data protection.
We are looking to fill the role of AWS DevOps Engineer. To join our growing team, please review the list of responsibilities and qualifications.
Responsibilities:
- Engineer solutions using AWS services (CloudFormation, EC2, Lambda, Route 53, ECS, EFS)
- Balance hardware, network, and software layers to arrive at a scalable and maintainable solution that meets requirements for uptime, performance, and functionality
- Monitor server applications and use tools and log files to troubleshoot and resolve problems
- Maintain 99.99% availability of the web and integration services
- Anticipate, identify, mitigate, and resolve issues relating to client facing infrastructure
- Monitor, analyse, and predict trends for system performance, capacity, efficiency, and reliability, and recommend enhancements to better meet client SLAs and standards (a minimal monitoring sketch follows this list)
- Research and recommend innovative and automated approaches for system administration and DevOps tasks
- Deploy and decommission client environments for multi-tenant and single-tenant hosted applications, following established processes and procedures and updating them as needed
- Follow and develop CPA change control processes for modifications to systems and associated components
- Practice configuration management, including maintenance of component inventory and related documentation per company policies and procedures.
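As an illustrative sketch only (the services and duties above come from the posting, but this script is an assumption), the following boto3 snippet shows a basic monitoring check: it pages through CloudWatch metric alarms currently in the ALARM state, the kind of signal this role would triage. AWS credentials and region are assumed to come from the environment.

```python
"""Minimal boto3 sketch: list CloudWatch metric alarms currently firing.

Assumes AWS credentials/region come from the environment or an instance role.
"""
import boto3

cloudwatch = boto3.client("cloudwatch")

# Page through every metric alarm whose state is ALARM
paginator = cloudwatch.get_paginator("describe_alarms")
for page in paginator.paginate(StateValue="ALARM"):
    for alarm in page["MetricAlarms"]:
        print(f"{alarm['AlarmName']}: {alarm['StateReason']}")
```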
Qualifications:
- Git/GitHub version control tools
- Linux and/or Windows virtualisation (VMware, Xen, KVM, VirtualBox)
- Cloud computing (AWS, Google App Engine, Rackspace Cloud)
- Application Servers, servlet containers and web servers (WebSphere, Tomcat)
- Bachelor's/Master's degree; 2+ years of experience in software development
- Must have experience with AWS VPC networking and security
- Strong knowledge of Windows and Linux
- Experience working with version control systems like Git
- Hands-on experience with tools such as Docker, SonarQube, Ansible, Kubernetes, and ELK
- Basic understanding of SQL commands
- Experience working on Azure Cloud DevOps
YOUR ‘OKR’ SUMMARY
OKR stands for Objectives and Key Results.
As a Cloud Engineer, you will understand the overall movement of data in the entire platform, find bottlenecks, define solutions, develop key pieces, write APIs, and own their deployment. You will work with internal and external development teams to discover these opportunities and to solve hard problems. You will also guide engineers in solving complex problems, developing your acceptance tests for those, and reviewing the work and the test results.
What you will do
- End-to-end RHOSP deployment (undercloud and overcloud), treated as an NFVi deployment
- Installing Red Hat’s OpenStack technology using the OSP-director
- Deploying a Red Hat OpenStack Platform based on Red Hat's reference architecture
- Deploying managed hosts with required OpenStack parameters
- Deploying three (3) node highly available (HA) controller hosts using Pacemaker and HAProxy
- Deploying all the supplied compute hosts that will be hosting multiple VNFs (SR-IOV & DPDK)
- Implementing Ceph
- Integrating software-defined storage (Ceph) with RHOSP and Red Hat OpenStack Platform operational tools per industry standards & best practices
- Detailed network configuration and implementation using Neutron networking with the VXLAN network type and the Modular Layer 2 (ML2) Open vSwitch plugin
- Integrating Monitoring Solution with RHOSP
- Design and deployment of common Alarm and Performance Management solution
- Red Hat OpenStack management & monitoring (a minimal health-check sketch follows this list)
- VM Alarm and Performance management
- Configuring the Cloud Management Platform with day-to-day operational tools to measure CPU/memory/network utilization, etc., at the VM level
- Baseline Security Standard and Vulnerability Assessment (VA) for RHOSP
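Purely as a sketch of post-deployment checks (not part of the RHOSP tooling listed above), the snippet below uses the `openstacksdk` Python library to confirm that Nova compute services and Neutron agents on the overcloud report healthy; the `overcloud` cloud name in clouds.yaml is an assumption.

```python
"""Minimal openstacksdk sketch: spot-check overcloud service and agent health.

Assumes a clouds.yaml entry named "overcloud"; names are illustrative.
"""
import openstack

conn = openstack.connect(cloud="overcloud")  # hypothetical clouds.yaml entry

# Nova compute services should be enabled and up on controller/compute hosts
for svc in conn.compute.services():
    print(f"compute {svc.binary} on {svc.host}: status={svc.status}, state={svc.state}")

# Neutron agents (OVS, DHCP, L3, etc.) should all report alive
for agent in conn.network.agents():
    print(f"network {agent.agent_type} on {agent.host}: alive={agent.is_alive}")
```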
Additional Advantage:
- Deep understanding of technology and passionate about what you do.
- Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
- Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
- Strong commitment to get the most performance out of a system being worked on.
- Prior development of a large software project using a service-oriented architecture operating under real-time constraints.
What's In It for You
- You will get a chance to work on cloud-native and hyper-scale products
- You will be working with industry leaders in cloud.
- You can expect a steep learning curve.
- You will gain experience solving real-time problems and, over time, become a better problem solver.
Benefits & Perks
- Competitive Salary
- Health Insurance
- Open Learning - 100% Reimbursement for online technical courses.
- Fast Growth - opportunities to grow quickly and surely
- Creative Freedom + Flat hierarchy
- Sponsorship for employees who represent the company at events and meetups.
- Flexible working hours
- 5-day work week
- Hybrid Working model (Office and WFH)
Our Hiring Process
Candidates for this position can expect the following hiring process (subject to successfully clearing each round):
- Initial Resume screening call with our Recruiting team
- Next, candidates will be invited to solve coding exercises.
- Next, candidates will be invited for the first technical interview
- Next, candidates will be invited for the final technical interview
- Finally, candidates will be invited for Culture Plus interview with HR
- Candidates may be asked to interview with the Leadership team
- Successful candidates will subsequently be made an offer via email
As always, the interviews and screening call will be conducted via a mix of telephone and video calls.
So, if you are looking for an opportunity to really make a difference, make it with us…
Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state or local laws.
About the company:
Our client is a B2B2C Web3 tech startup founded by IITB graduates experienced in retail, e-commerce, and fintech.
Vision: Our client aims to change the way that customers, creators, and retail investors interact and transact with brands of all shapes and sizes, essentially becoming the Web3 version of a brand-driven social e-commerce and investment platform.
Role Description
We are looking for a DevOps Engineer responsible for managing cloud technologies, deployment automation, and CI/CD.
Key Responsibilities
Building and setting up new development tools and infrastructure
Understanding the needs of stakeholders and conveying this to developers
Working on ways to automate and improve development and release processes
Testing and examining code written by others and analyzing results
Ensuring that systems are safe and secure against cybersecurity threats
Identifying technical problems and developing software updates and ‘fixes’
Working with software developers and software engineers to ensure that development follows established processes and works as intended
Planning out projects and being involved in project management decisions
Required Skills and Qualifications
BE / MCA / B.Sc-IT / B.Tech in Computer Science or a related field.
4+ years of overall development experience.
Strong understanding of cloud deployment and setup.
Hands-on experience with tools like Jenkins, Gradle etc.
Deploy updates and fixes.
Provide Level 2 technical support.
Build tools to reduce occurrences of errors and improve customer experience.
Perform root cause analysis for production errors.
Investigate and resolve technical issues.
Develop scripts to automate deployment.
Design procedures for system troubleshooting and maintenance.
Proficient with git and git workflows.
Working knowledge of databases and SQL.
Problem-solving attitude.
Collaborative team spirit
Regards
Team Merito
- Works independently without any supervision
- Work on continuous improvement of the products through innovation and learning. Someone with a knack for benchmarking and optimization
- Experience in deploying highly complex, distributed transaction processing systems.
- Stay abreast with new innovations and the latest technology trends and explore ways of leveraging these for improving the product in alignment with the business.
- As a component owner whose component impacts multiple platforms (a 5-10 member team), work with customers to obtain their requirements and deliver the end-to-end project.
Required Experience, Skills, and Qualifications
- 5+ years of experience as a DevOps Engineer. Experience with the Golang cycle is a plus
- At least one End to End CI/CD Implementation experience
- Excellent problem-solving and debugging skills in the DevOps area
- Good understanding of containerization (Docker/Kubernetes)
- Hands-on build/package tool experience
- Experience with AWS services: Glue, Athena, Lambda, EC2, RDS, EKS/ECS, ALB, VPC, SSM, Route 53
- Experience with setting up CI/CD pipeline for Glue jobs, Athena, Lambda functions
- Experience architecting interaction with services and application deployments on AWS
- Experience with Groovy and writing Jenkinsfile
- Experience with repository management, code scanning/linting, secure scanning tools
- Experience with deployments and application configuration on Kubernetes
- Experience with microservice orchestration tools (e.g. Kubernetes, Openshift, HashiCorp Nomad)
- Experience with time-series and document databases (e.g. Elasticsearch, InfluxDB, Prometheus)
- Experience with message buses (e.g. Apache Kafka, NATS)
- Experience with key-value stores and service discovery mechanisms (e.g. Redis, HashiCorp Consul, etc)
Job Description (8-12 years):
○ Develop best practices for the team and take responsibility for the architecture, solutions, and documentation operations in order to meet the engineering department's quality standards
○ Participate in production outages, handle complex issues, and work towards resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Deep understanding of Kernel, Networking and OS fundamentals
○ Strong experience in writing helm charts.
○ Deep understanding of K8s.
○ Good knowledge in service mesh.
○ Good Database understanding
Notice Period: 30 days max
- Experience in implementing DevOps practices and DevOps tools in areas like CI/CD using Jenkins, environment automation, release automation, virtualization, infrastructure as code, or metrics tracking.
- Hands on experience in DevOps tools configuration in different environments.
- Strong knowledge of working with DevOps design patterns, processes and best practices
- Hands-on experience in setting up build pipelines.
- Prior working experience in system administration or architecture in Windows or Linux.
- Must have experience in Git (Bitbucket, GitHub, GitLab)
- Hands-on experience on Jenkins pipeline scripting.
- Hands-on knowledge in one scripting language (Nant, Perl, Python, Shell or PowerShell)
- Configuration level skills in tools like SonarQube (or similar tools) and Artifactory.
- Expertise on Virtual Infrastructure (VMWare or VirtualBox or QEMU or KVM or Vagrant) and environment automation/provisioning using SaltStack/Ansible/Puppet/Chef
- Deploying, automating, maintaining and managing Azure cloud based production systems including monitoring capacity.
- Good to have experience in migrating code repositories from one source control to another.
- Hands-on experience in Docker container and orchestration based deployments like Kubernetes, Service Fabric, Docker swarm.
- Must have good communication skills and problem solving skills
Must-haves:
- Hands-on DevOps (Git, Ansible, Terraform, Jenkins, Python/Ruby)
Job Description:
- Knowledge of what a DevOps CI/CD pipeline is
- Understanding of version control systems like Git, including branching and merging strategies
- Knowledge of continuous delivery and integration tools like Jenkins and GitHub
- Knowledge of developing code using Ruby or Python, and Java or PHP
- Knowledge of writing Unix shell (bash, ksh) scripts
- Knowledge of automation/configuration management using Ansible, Terraform, Chef, or Puppet
- Experience and willingness to keep learning in a Linux environment
- Ability to provide after-hours support as needed for emergency or urgent situations
Nice-to-haves:
- Proficient with container-based products like Docker and Kubernetes
- Excellent communication skills (verbal and written)
- Able to work in a team and be a team player
- Knowledge of PHP, MySQL, Apache and other open source software
- BA/BS in computer science or similar