This role requires a balance of hands-on infrastructure-as-code deployments with involvement in operational architecture and technology advocacy initiatives across the Numerator portfolio.

About Numerator
Numerator, backed by Vista Equity Partners, is a data-tech company reinventing market research. Headquartered in Chicago, USA, Numerator has more than 1,600 employees worldwide. We blend proprietary data with advanced technology and elite services to create unique insights in a market research industry that has been slow to change. The majority of Fortune 100 companies are Numerator clients.
About the Role
We are looking for a highly skilled Senior DevOps Engineer with expertise in Java-based applications. You will lead automation, deployment, and cloud infrastructure efforts, ensuring efficient CI/CD pipelines and scalable, secure environments.
Key Responsibilities
- CI/CD Management: Design, implement, and optimize CI/CD pipelines for Java applications using Jenkins/GitLab.
- Cloud Infrastructure: Deploy and manage cloud resources (AWS, Azure, or GCP) for scalable applications.
- Containerization & Orchestration: Manage Docker containers and Kubernetes clusters for streamlined deployment.
- Automation & Scripting: Write efficient scripts (Python, Bash) for automation tasks related to infrastructure and deployments.
- Security & Compliance: Implement security best practices for cloud environments and containerized applications.
- Monitoring & Performance Tuning: Utilize tools like Prometheus, Grafana, and the ELK stack for monitoring system health and optimizing performance (a minimal exporter sketch follows this list).
- Collaboration: Work with developers to enhance deployment workflows and troubleshoot production issues.
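As a hedged illustration of the automation and monitoring responsibilities above, the sketch below shows a minimal Python exporter that probes an HTTP health endpoint and exposes availability and latency gauges for Prometheus to scrape (and Grafana to chart). The endpoint URL, exporter port, and metric names are placeholder assumptions, not details from this posting.

```python
# Minimal health-probe exporter sketch (endpoint, port, and metric names are assumptions).
import time

import requests
from prometheus_client import Gauge, start_http_server

SERVICE_URL = "http://localhost:8080/actuator/health"  # placeholder health endpoint
EXPORTER_PORT = 9100                                    # placeholder exporter port

service_up = Gauge("service_up", "1 if the health endpoint returned HTTP 200")
probe_latency = Gauge("service_probe_latency_seconds", "Health probe latency in seconds")

def probe() -> None:
    """Hit the health endpoint once and record availability and latency."""
    start = time.time()
    try:
        resp = requests.get(SERVICE_URL, timeout=5)
        service_up.set(1 if resp.status_code == 200 else 0)
    except requests.RequestException:
        service_up.set(0)
    probe_latency.set(time.time() - start)

if __name__ == "__main__":
    start_http_server(EXPORTER_PORT)  # exposes /metrics for Prometheus to scrape
    while True:
        probe()
        time.sleep(15)
```

In a real setup the same signals would usually come from the application's own instrumentation (for example Spring Boot Actuator with a Prometheus registry); the sketch is only meant to make the monitoring loop concrete.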
Required Skills
- Programming: Strong knowledge of Java, Spring Boot, and Microservices architecture.
- DevOps Tools: Experience with Jenkins, GitLab CI/CD, Terraform, and Ansible.
- Cloud Platforms: Expertise in AWS, Azure, or GCP with hands-on infrastructure management.
- Containers: Proficiency in Docker, Kubernetes, Helm.
- Networking & Security: Understanding of VPNs, firewalls, and IAM policies.
- Version Control: Git and GitHub/Bitbucket experience.
Preferred Qualifications
- Certifications in AWS, Kubernetes, or DevOps.
- Knowledge of Istio, Service Mesh, or Kafka.
- Experience in high-traffic production environments.
Benefits
- Competitive salary & bonuses.
- Flexible work arrangements (hybrid/remote options).
- Training programs & certifications.
- Seeking an individual with around 5+ years of experience.
- Must-have skills: Jenkins, Groovy, Ansible, shell scripting, Python, Linux administration.
- Deep knowledge of Terraform and AWS to automate and provision EC2, EBS, and SQL Server, drive cost optimization, and build CI/CD pipelines using Jenkins; serverless automation is a plus (a minimal provisioning sketch in Python follows this list).
- Excellent writing and communication skills in English; enjoys writing crisp and understandable documentation.
- Comfortable programming in one or more scripting languages
- Enjoys tinkering with tooling and researching easier ways to manage systems; strong awareness of build-vs-buy trade-offs.
- 2+ years work experience in a DevOps or similar role
- Knowledge of OO programming and concepts (Java, C++, C#, Python)
- A drive towards automating repetitive tasks (e.g., scripting via Bash, Python, etc.)
- Fluency in one or more scripting languages such as Python or Ruby.
- Familiarity with Microservice-based architectures
- Practical experience with Docker containerization and clustering (Kubernetes/ECS)
- In-depth, hands-on experience with Linux, networking, server, and cloud architectures.
- Experience with CI/CD tools: Azure DevOps, AWS CloudFormation, Lambda functions, Jenkins, and Ansible
- Experience with AWS, Azure, or another cloud PaaS provider.
- Solid understanding of configuration, deployment, management, and maintenance of large cloud-hosted systems; including auto-scaling, monitoring, performance tuning, troubleshooting, and disaster recovery
- Proficiency with source control, continuous integration, and testing pipelines
- Effective communication skills
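The listing above asks for Terraform and AWS depth to automate provisioning of EC2; Terraform itself is written in HCL, but as a hedged sketch of the same provisioning step in Python (a scripting language the listing also calls for), the boto3 snippet below launches a single tagged EC2 instance. The AMI ID, instance type, region, and tag values are placeholder assumptions.

```python
# Hedged EC2 provisioning sketch (AMI ID, instance type, region, and tags are assumptions).
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")  # assumed region

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",          # placeholder instance type
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "Name", "Value": "devops-sketch"},
            {"Key": "Environment", "Value": "staging"},
        ],
    }],
)
print("Launched:", instances[0].id)
```

In day-to-day work the same resource would typically be declared in Terraform so that state, drift, and teardown are handled declaratively; the imperative snippet is only to make the provisioning step concrete.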
Job Responsibilities:
- Deploy and maintain critical applications on a cloud-native microservices architecture (a minimal rollout-check sketch follows this list).
- Implement automation, effective monitoring, and infrastructure-as-code.
- Deploy and maintain CI/CD pipelines across multiple environments.
- Streamline the software development lifecycle by identifying pain points and productivity barriers and determining ways to resolve them.
- Analyze how customers are using the platform and help drive continuous improvement.
- Support and work alongside a cross-functional engineering team on the latest technologies.
- Iterate on best practices to increase the quality & velocity of deployments.
- Sustain and improve the process of knowledge sharing throughout the engineering team.
- Identify and prioritize technical debt that risks instability or creates wasteful operational toil.
- Own daily operational goals with the team.
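As a hedged illustration of the deployment responsibilities above, the sketch below uses the official Kubernetes Python client to check that a Deployment has finished rolling out before a pipeline proceeds. The deployment name and namespace are placeholder assumptions.

```python
# Hedged rollout-check sketch (deployment name and namespace are assumptions).
import time

from kubernetes import client, config

def wait_for_rollout(name: str, namespace: str, timeout_s: int = 300) -> bool:
    """Poll a Deployment until all desired replicas are ready or the timeout expires."""
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    apps = client.AppsV1Api()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        dep = apps.read_namespaced_deployment(name, namespace)
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if desired > 0 and ready == desired:
            return True
        time.sleep(5)
    return False

if __name__ == "__main__":
    ok = wait_for_rollout("orders-service", "staging")  # assumed deployment and namespace
    print("Rollout complete" if ok else "Rollout timed out")
```

A CI/CD job could run a check like this after a `kubectl apply` or Helm upgrade and fail the pipeline if the rollout does not converge.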
Description
DevOps Engineer / SRE
- Understanding of maintaining existing systems (virtual machines) and the Linux stack
- Experience running, operating, and maintaining Kubernetes pods
- Strong Scripting skills
- Experience in AWS
- Knowledge of configuring/optimizing open source tools like Kafka, etc.
- Strong automation skills: ability to identify opportunities to speed up the build and deploy process with strong validation and automation
- Optimizing and standardizing monitoring and alerting.
- Experience in Google Cloud Platform
- Experience/knowledge of Python will be an added advantage
- Experience with operations and monitoring tools like Jenkins, Kubernetes, Nagios, Terraform, etc.
- 3-6 years of software development and operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Experience managing any distributed NoSQL system (Kafka/Cassandra/etc.)
- Experience with Containers, Microservices, deployment and service orchestration using Kubernetes, EKS (preferred), AKS or GKE.
- Strong scripting language knowledge, such as Python, Shell
- Experience and a deep understanding of Kubernetes.
- Experience in Continuous Integration and Delivery.
- Work collaboratively with software engineers to define infrastructure and deployment requirements
- Provision, configure and maintain AWS cloud infrastructure
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux-based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization (a minimal cost-audit sketch appears after this list)
- AWS
- Docker
- Kubernetes
- Envoy
- Istio
- Jenkins
- Cloud Security & SIEM stacks
- Terraform
- Strong communication skills (written and verbal)
- Responsive, reliable and results oriented with the ability to execute on aggressive plans
- A background in software development, with experience of working in an agile product software development environment
- An understanding of modern deployment tools (Git, Bitbucket, Jenkins, etc.), workflow tools (Jira, Confluence) and practices (Agile (SCRUM), DevOps, etc.)
- Expert-level experience with AWS tools, technologies, and associated APIs: IAM, CloudFormation, CloudWatch, AMIs, SNS, EC2, EBS, EFS, S3, RDS, VPC, ELB, Route 53, Security Groups, Lambda, etc.
- Hands on experience with Kubernetes (EKS preferred)
- Strong DevOps skills across CI/CD and configuration management using Jenkins, Ansible, Terraform, Docker.
- Experience provisioning and spinning up AWS Clusters using Terraform, Helm, Helm Charts
- Ability to work across multiple projects simultaneously
- Ability to manage and work with teams and customers across the globe
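To make the cost-analysis duty above concrete, here is a hedged boto3 sketch that flags unattached EBS volumes, a common source of avoidable AWS spend. The region and what to do with the findings are assumptions, not anything specified by the listing.

```python
# Hedged cost-audit sketch: list unattached (status "available") EBS volumes.
import boto3

def unattached_volumes(region: str = "us-east-1"):
    """Return (volume_id, size_gib, create_time) for volumes not attached to any instance."""
    ec2 = boto3.client("ec2", region_name=region)  # assumed region
    paginator = ec2.get_paginator("describe_volumes")
    findings = []
    for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
        for vol in page["Volumes"]:
            findings.append((vol["VolumeId"], vol["Size"], str(vol["CreateTime"])))
    return findings

if __name__ == "__main__":
    for vol_id, size_gib, created in unattached_volumes():
        print(f"{vol_id}\t{size_gib} GiB\tcreated {created}")
```

Similar audits (idle load balancers, oversized instances, old snapshots) are usually wired into a scheduled job and fed into whatever cost reporting the team already uses.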
Karkinos Healthcare Pvt. Ltd.
The fundamental principle of Karkinos healthcare is democratization of cancer care in a participatory fashion with existing health providers, researchers and technologists. Our vision is to provide millions of cancer patients with affordable and effective treatments and have India become a leader in oncology research. Karkinos will be with the patient every step of the way, to advise them, connect them to the best specialists, and to coordinate their care.
Karkinos has an eclectic founding team with strong technology, healthcare and finance experience, and a panel of eminent clinical advisors in India and abroad.
Roles and Responsibilities:
- A critical role that involves setting up and owning the dev, staging, and production infrastructure for a platform that uses microservices, data warehouses, and a data lake.
- Demonstrate technical leadership with incident handling and troubleshooting.
- Provide software delivery operations and application release management support, including scripting, automated build and deployment processing and process reengineering.
- Build automated deployments for consistent software releases with zero downtime
- Deploy new modules, upgrades and fixes to the production environment.
- Participate in the development of contingency plans including reliable backup and restore procedures.
- Participate in the development of the end-to-end CI/CD process and follow through with other team members to ensure high-quality, predictable delivery
- Work on implementing DevSecOps and GitOps practices
- Work with the Engineering team to integrate more complex testing into a containerized pipeline to ensure minimal regressions
- Build platform tools that the rest of the engineering teams can use.
Apply only if you have:
- 2+ years of software development/technical support experience.
- 1+ years of software development, operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- 2+ years of experience in public cloud services: AWS (VPC, EC2, ECS, Lambda, Redshift, S3, API Gateway) or GCP (Kubernetes Engine, Cloud SQL, Cloud Storage, BigQuery, API Gateway, Container Registry), preferably GCP.
- Experience managing infrastructure for distributed NoSQL systems (Kafka/MongoDB), containers, microservices, and deployment and service orchestration using Kubernetes.
- Experience with and a good understanding of Kubernetes, Service Mesh (Istio preferred), API gateways, network proxies, etc.
- Experience setting up centralized infrastructure monitoring, with the ability to debug and trace issues
- Experience and deep understanding of Cloud Networking and Security
- Experience in Continuous Integration and Delivery (Jenkins/Maven, GitHub/GitLab).
- Strong scripting language knowledge, such as Python, Shell.
- Experience in Agile development methodologies and release management techniques.
- Excellent analytical and troubleshooting skills.
- Ability to continuously learn and make decisions with minimal supervision. You understand that making mistakes means that you are learning.
Interested Applicants can share their resume at sajal.somani[AT]karkinos[DOT]in with subject as "DevOps Engineer".
Designation: DevOps Engineer
Location: HSR, Bangalore
About the Company
Making an impact, driven by data.
Vumonic Datalabs is a data-driven startup providing business insights to e-commerce and e-tail companies to help them make data-driven decisions to scale up their business and understand their competition better. As one of the EU's fastest-growing (and coolest) data companies, we believe in revolutionizing the way businesses make their most important decisions by providing first-hand, transaction-based insights in real time.
About the Role
We are looking for an experienced and ambitious DevOps engineer who will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. As a DevOps engineer at Vumonic Datalabs, you will have the opportunity to work with a thriving global team to help us build functional systems that improve customer experience. If you have a strong background in software engineering, are hungry to learn, are passionate about your work, and are familiar with the technical skills mentioned below, we’d love to speak with you.
What you’ll do
- Optimize and engineer the DevOps infrastructure for high availability, scalability, and reliability.
- Monitor logs on servers and manage cloud resources (a minimal log-scan sketch follows this list)
- Build and set up new development tools and infrastructure to reduce occurrences of errors
- Understand the needs of stakeholders and convey this to developers
- Design scripts to automate and improve development and release processes
- Test and examine code written by others and analyze results
- Ensure that systems are safe and secure against cybersecurity threats
- Identify technical problems, perform root cause analysis for production errors and develop software updates and ‘fixes’
- Work with software developers and engineers to ensure that development follows established processes and actively communicates with the operations team.
- Design procedures for system troubleshooting and maintenance.
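As a hedged illustration of the log-monitoring and scripting duties above, the sketch below scans an application log for error lines and prints an alert when the count exceeds a threshold. The log path, error pattern, and threshold are placeholder assumptions.

```python
# Hedged log-scan sketch (log path, error pattern, and threshold are assumptions).
import re
import sys
from pathlib import Path

LOG_PATH = Path("/var/log/app/app.log")   # assumed log location
ERROR_PATTERN = re.compile(r"\bERROR\b")  # assumed error marker
THRESHOLD = 10                            # assumed alert threshold

def count_errors(path: Path) -> int:
    """Count lines matching the error pattern in the given log file."""
    if not path.exists():
        print(f"log file not found: {path}", file=sys.stderr)
        return 0
    with path.open(encoding="utf-8", errors="replace") as handle:
        return sum(1 for line in handle if ERROR_PATTERN.search(line))

if __name__ == "__main__":
    errors = count_errors(LOG_PATH)
    if errors > THRESHOLD:
        print(f"ALERT: {errors} error lines in {LOG_PATH} (threshold {THRESHOLD})")
    else:
        print(f"OK: {errors} error lines in {LOG_PATH}")
```

In practice this kind of check is more often expressed as a query against a central log store such as the ELK stack rather than as a per-server script; the snippet only makes the idea concrete.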
What you need to have
TECHNICAL SKILLS
- Experience working with the following tools: Google Cloud Platform, Kubernetes, Docker, Elasticsearch, Terraform, Redis
- Experience working with the following tools preferred: Python, Node.js, MongoDB, Rancher, Cassandra
- Experience with real-time monitoring of cloud infrastructure using publicly available tools and servers
- 2 or more years of experience in a DevOps role (startup/technical experience preferred)
You are
- Excited to learn; a hustler and a “do-er”
- Passionate about building products that create impact.
- Updated with the latest technological developments & enjoy upskilling yourself with market trends.
- Willing to experiment with novel ideas & take calculated risks.
- Someone with a problem-solving attitude and the ability to handle multiple tasks while meeting expected deadlines.
- Interested to work as part of a supportive, highly motivated and fun team.
Position Summary
DevOps is a department of Horizontal Digital, within which we have three different practices:
- Cloud Engineering
- Build and Release
- Managed Services
This opportunity is for a Cloud Engineering role and suits someone who also has some experience with infrastructure migrations. It is a completely hands-on job focused on migrating clients' workloads to the cloud, reporting to the Solution Architect/Team Lead; alongside that, you are also expected to work on different projects building out Sitecore infrastructure from scratch.
We are a Sitecore Platinum Partner, and the majority of the infrastructure work that we do is for Sitecore.
Sitecore is a .NET-based, enterprise-level web CMS that can be deployed on-prem or on IaaS, PaaS, and containers.
So, most of our DevOps work currently involves planning, architecting, and deploying infrastructure for Sitecore.
Key Responsibilities:
- This role includes ownership of technical, commercial, and service elements related to cloud migration and infrastructure deployments.
- The person selected for this position will ensure high customer satisfaction when delivering infrastructure and migration projects.
- The candidate should expect to work across multiple projects in parallel and must have a fully flexible approach to working hours.
- The candidate should stay up to date with the rapid technological advancements and developments taking place in the industry.
- The candidate should also have know-how in infrastructure as code, Kubernetes, AKS/EKS, Terraform, Azure DevOps, and CI/CD pipelines.
Requirements:
- Bachelor’s degree in computer science or equivalent qualification.
- Total work experience of 6 to 8 Years.
- Total migration experience of 4 to 6 Years.
- Multiple Cloud Background (Azure/AWS/GCP)
- Implementation knowledge of VMs, VNets, etc.
- Know-how of Cloud Readiness and Assessment
- Good understanding of the 6 R's of migration (rehost, replatform, repurchase, refactor, retire, retain).
- Detailed understanding of the cloud offerings
- Ability to Assess and perform discovery independently for any cloud migration.
- Working experience with containers and Kubernetes.
- Good knowledge of Azure Site Recovery/Azure Migrate/CloudEndure
- Understanding of vSphere and Hyper-V virtualization.
- Working experience with Active Directory.
- Working experience with AWS CloudFormation/Terraform templates.
- Working experience with VPN, ExpressRoute, peering, Network Security Groups, route tables, NAT Gateway, etc.
- Experience working with CI/CD tools like Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps, and GitHub Actions.
- High-availability and disaster-recovery implementations, taking RTO and RPO aspects into consideration.
- Candidates with AWS/Azure/GCP Certifications will be preferred.
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead (and/or CTO) to identify and establish DevOps practices in the company.
You will help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks. Lastly, you’ll build our CI pipeline and train and guide the team in DevOps practices.
Responsibilities
- Implement and own the CI.
- Manage CD tooling.
- Implement and maintain monitoring and alerting.
- Build and maintain highly available production systems.
Qualification: B.Tech in IT

