Job Description: Senior DevOps Engineer
Role Details
Position: Senior DevOps Engineer
Location: Bangalore/Remote
About SCRUT Automation
Scrut Automation is an information security and compliance monitoring platform, aimed at helping small and medium cloud-native enterprises develop and maintain a robust security posture, and comply with various infosec standards such as SOC 2, ISO 27001, GDPR, and the like with ease. With the help of the Scrut platform, customers reduce their manual effort for security and compliance tasks by 70% and build real-time visibility of their security posture.
Founded by IIT/ISB/McKinsey alumni, the founding team has over 15 years of combined Infosec experience. Scrut is built out of India for the world, with customers across India, APAC, North America, Europe and the Middle East. Scrut is backed by Lightspeed Ventures, MassMutual Ventures and Endiya Partners, along with prominent angels from the global SaaS community.
About the Job:
The Senior DevOps Engineer will be pivotal in driving our infrastructure's growth and efficiency. This role will focus on ensuring agile deployments, scalable infrastructure, and tight collaboration with our software development teams. Leveraging our extensive tech stack, the ideal candidate will be both a technical guru and a strategic team player.
Responsibilities
- CI/CD Management: Oversee and refine CI/CD processes using CodePipeline and GitHub Actions.
- Cloud Infrastructure: Manage our cloud services on AWS and GCP, focusing on tools like ECS, Fargate, EKS, and more.
- Containerization: Lead orchestration efforts using Kubernetes and manage containerized solutions with Docker.
- Networking: Implement and maintain solutions with VPC Networking, DNS, Route53, and Nginx.
- Content Delivery: Ensure optimal service delivery using CloudFront.
- Monitoring and Performance: Utilize APM tools for system monitoring and guarantee high availability.
- Infrastructure as Code (IaC): Embrace best practices, leveraging tools such as Terraform and AWS CloudFormation.
- Security: Maintain top-tier security standards across all systems.
- Collaboration: Engage with development teams to ensure the best infrastructure solutions.
- On-call Support: Offer essential support for production systems on a rotational basis.
- Scripting: Develop and maintain automation scripts using languages like Bash and Python.
- Industry Awareness: Remain updated with the latest trends, tools, and best practices in the DevOps arena.
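The scripting and monitoring bullets above can be illustrated with a short sketch. This is a hypothetical example, not Scrut's actual tooling: it classifies health-check results for a set of services (the service names and status codes are invented for illustration).

```python
# Minimal sketch of the kind of automation script the Scripting bullet
# describes: classify service health-check results and flag services
# that need attention. Service names and codes below are illustrative.

def unhealthy_services(results, ok_codes=(200, 204)):
    """Return sorted names of services whose last HTTP status is not OK."""
    return sorted(name for name, code in results.items() if code not in ok_codes)

if __name__ == "__main__":
    checks = {"api-gateway": 200, "billing": 503, "auth": 200, "reports": 500}
    bad = unhealthy_services(checks)
    print(f"unhealthy: {', '.join(bad) if bad else 'none'}")
```

In practice a script like this would sit behind a cron job or an APM webhook, feeding the high-availability monitoring described above.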
Requirements
- Bachelor's degree in Computer Science or a relevant field.
- 4+ years of hands-on DevOps experience.
- Mastery of AWS and GCP platforms.
- Proficiency in scripting languages, notably Bash and Python.
- Comprehensive understanding of container orchestration with Kubernetes and Docker.
- Robust expertise in networking and database management.
Must-have skills
- Certifications in AWS or GCP.
- Experience with serverless architectures.
- Familiarity with microservices deployment and management.
Why should this job excite you?
- Opportunity to make an early impact on one of the most promising, high-growth SaaS startups in India
- A high-performing action-oriented team
- Immense exposure to the founders and the leadership
- Opportunity to shape the future of the B2B SaaS Technology team with YOUR innovative ideas
- Competitive compensation package, benefits, and employee-friendly work culture

Similar jobs

GCP Cloud Engineer:
- Proficiency in infrastructure as code (Terraform).
- Scripting and automation skills (e.g., Python, Shell); Python is a must.
- Collaborate with teams across the company (i.e., network, security, operations) to build complete cloud offerings.
- Design Disaster Recovery and backup strategies to meet application objectives.
- Working knowledge of Google Cloud
- Working knowledge of various tools, open-source technologies, and cloud services
- Experience working on Linux-based infrastructure.
- Excellent problem-solving and troubleshooting skills
- Good platform experience on Azure with Terraform.
- Help developers create pipelines and Kubernetes deployment manifests.
- Experience migrating data from AWS to Azure is a plus.
- Manage and automate infrastructure using Terraform. Jenkins is the key CI/CD tool in use and runs these Terraform workflows.
- Provision and manage VMs on Azure.
- Good hands-on experience with cloud networking is required.
- Ability to set up databases both on VMs and as managed services, and to configure cloud-hosted microservices to communicate with those database services.
- Kubernetes, Storage, Key Vault, networking (load balancing and routing), and VMs are the essential areas of infrastructure expertise.
- AWS experience is desirable.
- Python experience is optional; PowerShell is mandatory.
- Working knowledge of GitHub.
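The bullet about helping developers create Kubernetes deployment manifests could look something like the sketch below: building one colour of a blue-green pair programmatically. The app name, image, and label keys are illustrative assumptions, not anything from this job post.

```python
import json

# Hypothetical sketch of "create K8s deployment manifests": build a
# Deployment for one colour of a blue-green pair. App name, image, and
# label conventions here are invented for illustration.

def deployment_manifest(app, image, color, replicas=2):
    labels = {"app": app, "color": color}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"{app}-{color}", "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": app, "image": image}]},
            },
        },
    }

if __name__ == "__main__":
    # JSON is a valid YAML subset, so this output can be applied directly.
    print(json.dumps(deployment_manifest("web", "web:1.4.2", "green"), indent=2))
```

In the blue-green model mentioned above, cutting traffic over is then just repointing the Service selector's `color` label from `blue` to `green`.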
Contract review and lifecycle management is no longer a niche idea. It is one of the fastest-growing sectors within legal operations automation, with a market size of $10B growing at 15% YoY. InkPaper helps corporations and law firms optimize their contract workflow and lifecycle management by providing workflow automation, process transparency, efficiency, and speed. Automation and blockchain have the power to transform legal contracts as we know them today; if you are interested in being part of that journey, keep reading!
InkPaper.AI is looking for a passionate DevOps Engineer who can drive and build next-generation AI-powered products in legal technology: document workflow management and e-signature platforms. You will be part of the product engineering team based in Gurugram, India, working closely with our team in Austin, USA.
If you are a highly skilled DevOps Engineer with expertise in GCP, Azure, AWS ecosystems, and Cybersecurity, and you are passionate about designing and maintaining secure cloud infrastructure, we would love to hear from you. Join our team and play a critical role in driving our success while ensuring the highest standards of security.
Responsibilities:
- Solid experience in building enterprise-level cloud solutions on one of the big-3(AWS/Azure/GCP)
- Collaborate with development teams to automate software delivery pipelines, utilizing CI/CD tools and technologies.
- Responsible for configuring and overseeing cloud services, including virtual machines, containers, serverless functions, databases, and networking components, ensuring their effective management and operation.
- Responsible for implementing robust monitoring, logging, and alerting solutions to ensure optimal system health and performance
- Develop and maintain documentation for infrastructure, deployment processes, and security procedures.
- Troubleshoot and resolve infrastructure and deployment issues, ensuring system availability and reliability.
- Conduct regular security assessments, vulnerability scans, and penetration tests to identify and address potential threats.
- Implement security controls and best practices to protect systems, data, and applications in compliance with industry standards and regulations
- Stay updated on emerging trends and technologies in DevOps, cloud, and cybersecurity. Recommend improvements to enhance system efficiency and security.
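The security-assessment and CI/CD bullets above often meet in a "security gate" step. The sketch below is a hedged illustration, not InkPaper's actual process: it decides whether a CI stage should fail given vulnerability-scan findings, with an invented severity scheme and policy.

```python
# Illustrative helper for the security bullets above: decide whether a
# CI stage should pass given vulnerability-scan findings. The severity
# labels and gate policy are assumptions made for this example.

FAIL_LEVELS = {"critical", "high"}

def gate_passes(findings):
    """findings: list of {'id': ..., 'severity': ...} dicts from a scanner."""
    return not any(f["severity"].lower() in FAIL_LEVELS for f in findings)
```

A pipeline would call this after the scan stage and exit non-zero when it returns False, blocking the deploy.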
An ideal candidate would credibly demonstrate various aspects of the InkPaper Culture Code:
- We solve for the customer
- We practice good judgment
- We are action-oriented
- We value deep work over shallow work
- We reward work over words
- We value character over only skills
- We believe the best perk is amazing peers
- We favor autonomy
- We value contrarian ideas
- We strive for long-term impact
You Have:
- B.Tech in Computer Science.
- 2 to 4 years of relevant experience in DevOps.
- Proficiency in GCP, Azure, AWS ecosystems, and Cybersecurity
- Experience with: CI/CD automation, cloud service configuration, monitoring, troubleshooting, security implementation.
- Familiarity with Blockchain will be an edge.
- Excellent verbal communication skills.
- Good problem-solving skills.
- Attention to detail
At InkPaper, we hire people who will help us change the future of legal services. Even if you do not think you check off every bullet point on this list, we still encourage you to apply! We value both current experience and future potential.
Benefits
- Hybrid environment to work from our Gurgaon Office and from the comfort of your home.
- Great compensation package!
- Tools you need on us!
- Our insurance plan offers medical, dental, vision, and short- and long-term disability coverage, plus supplemental coverage, for all employees and dependents
- 15 planned leaves + 10 Casual Leaves + Company holidays as per government norms
InkPaper is committed to creating a welcoming and inclusive workplace for everyone. We value and celebrate our differences because those differences are what make our team shine. We hire great people from diverse backgrounds, not just because it is the right thing to do, but because it makes us stronger. We are an equal opportunity employer and do not discriminate against candidates based on race, ethnicity, religion, sex, gender, sexual orientation, gender identity, or disability.
Location: Gurugram or remote
About BootLabs
https://www.bootlabs.in/
- We are a Boutique Tech Consulting partner, specializing in Cloud Native Solutions.
- We are obsessed with anything "CLOUD". Our goal is to seamlessly automate the development lifecycle, and modernize infrastructure and its associated applications.
- With a product mindset, we enable start-ups and enterprises with cloud transformation, cloud migration, end-to-end automation, and managed cloud services.
- We are eager to research, discover, automate, adapt, empower, and deliver quality solutions on time.
- We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.
Technical Skills:
• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management.
- AWS
Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
Data: RDS, DynamoDB, Elasticsearch
Workload: EC2, EKS, Lambda, etc.
- Azure
Data: Azure MySQL, Azure MSSQL, etc.
Workload: AKS, Virtual Machines, Azure Functions
- GCP
Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
Workload: GKE, Instances, App Engine, Batch, etc.
• Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating, and configuration.
• Kubernetes (EKS/AKS/GKE) or Ansible experience, covering basics like pods, deployments, networking, and service mesh; has used a package manager such as Helm.
• Scripting experience (Bash/Python), pipeline automation where required, and system services.
• Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines, and versioning the code.
Optional:
• Experience in any programming language is not required but is appreciated.
• Good experience with Git, SVN, or any other code management tool.
• DevSecOps tools (Qualys/SonarQube/BlackDuck) for security scanning of artifacts, infrastructure, and code.
• Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
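The "automation in pipelines" bullet typically means small glue scripts like the one sketched below: deriving a container image tag from a branch name and commit SHA. The tag rules here are assumptions for illustration, not BootLabs conventions.

```python
import re

# Illustrative pipeline helper of the kind the scripting bullet describes:
# derive a registry-safe container image tag from a branch name and a
# commit SHA. The sanitization rules are assumptions, not a real standard.

def image_tag(branch, sha):
    """Replace characters that are invalid in image tags and append a short SHA."""
    safe = re.sub(r"[^a-zA-Z0-9_.-]+", "-", branch).strip("-").lower()
    return f"{safe}-{sha[:7]}"
```

A CI runner would call this once per build and pass the result to `docker build -t`, so every image is traceable back to its commit.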
Our client is a call management solutions company that helps small to mid-sized businesses use its virtual call center to manage customer calls and queries. It is an AI- and cloud-based call operating facility that is affordable as well as feature-optimized. Advanced features like call recording, IVR, toll-free numbers, and call tracking are based on automation and enhance the call handling quality and process for each client as per their requirements. They service over 6,000 business clients, including large accounts like Flipkart and Uber.
- Being involved in Configuration Management, Web Services Architectures, DevOps Implementation, Build & Release Management, database management, backups, and monitoring.
- Creating and managing CI/CD pipelines for microservice architectures.
- Creating and managing application configuration.
- Researching and planning architectures and tools for smooth deployments.
- Logging, metrics, and alerting management.
What you need to have:
- Proficient in the Linux command line and troubleshooting.
- Proficient in designing CI/CD pipelines using Jenkins. Experience in deployment using Ansible.
- Experience in microservices architecture deployment; hands-on experience with Docker, Kubernetes, EKS.
- Knowledge of infrastructure management tools (Infrastructure as Code) such as Terraform, AWS CloudFormation, etc.
- Proficient in AWS services: deployment, monitoring, and troubleshooting of applications in AWS.
- Configuration management tools like Ansible/Chef/Puppet.
- Proficient in deployment of applications behind load balancers and proxy servers such as Nginx and Apache.
- Proficient in Bash scripting; Python scripting is an advantage.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios. Graylog, Splunk, Prometheus, and Grafana are a plus.
- Proficient in Configuration Management.
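The logging and alerting bullets above come down to work like the following sketch: computing a 5xx error rate from Nginx-style access log lines and deciding whether to alert. The log format and the 5% threshold are assumptions for illustration.

```python
# Minimal sketch of the logging/alerting work described above: parse
# Nginx "combined"-style access log lines, compute the 5xx error rate,
# and decide whether to page. Format and threshold are illustrative.

def error_rate(lines):
    """Fraction of requests whose status code is 5xx."""
    # In the combined format the request line is the first quoted field,
    # so the status code is the first token after the second quote.
    statuses = [int(line.split('"')[2].split()[0]) for line in lines if '"' in line]
    if not statuses:
        return 0.0
    return sum(1 for s in statuses if s >= 500) / len(statuses)

def should_alert(lines, threshold=0.05):
    return error_rate(lines) > threshold
```

A real setup would tail the log (or query Elasticsearch/Prometheus) and route the alert through Nagios or a pager, but the decision logic is the same shape.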
- Experienced with Azure DevOps, CI/CD, and Jenkins.
- Experience is needed in Kubernetes (AKS), Ansible, Terraform, and Docker.
- Good understanding of Azure networking, Azure Application Gateway, and other Azure components.
- Experienced Azure DevOps Engineer ready for a Senior role or already at a Senior level.
- Demonstrable experience with the following technologies:
- Microsoft Azure Platform as a Service (PaaS) products such as Azure SQL, App Services, Logic Apps, Functions, and other serverless services.
- Understanding of Microsoft Identity and Access Management products, including Azure AD or AD B2C.
- Microsoft Azure operational and monitoring tools, including Azure Monitor, App Insights, and Log Analytics.
- Knowledge of PowerShell, GitHub, ARM templates, version control/hotfix strategy, and deployment automation.
- Ability and desire to quickly pick up new technologies, languages, and tools.
- Excellent communication skills; a good team player.
- Passion for code quality and best practices is an absolute must.
- Must show evidence of your passion for technology and continuous learning.
Karkinos Healthcare Pvt. Ltd.
The fundamental principle of Karkinos Healthcare is the democratization of cancer care, pursued in a participatory fashion with existing health providers, researchers, and technologists. Our vision is to provide millions of cancer patients with affordable and effective treatments, and to make India a leader in oncology research. Karkinos will be with the patient every step of the way: advising them, connecting them to the best specialists, and coordinating their care.
Karkinos has an eclectic founding team with strong technology, healthcare and finance experience, and a panel of eminent clinical advisors in India and abroad.
Roles and Responsibilities:
- Critical role that involves setting up and owning the dev, staging, and production infrastructure for a platform that uses microservices, data warehouses, and a data lake.
- Demonstrate technical leadership with incident handling and troubleshooting.
- Provide software delivery operations and application release management support, including scripting, automated build and deployment processing and process reengineering.
- Build automated deployments for consistent software releases with zero downtime.
- Deploy new modules, upgrades and fixes to the production environment.
- Participate in the development of contingency plans including reliable backup and restore procedures.
- Participate in the development of the end-to-end CI/CD process and follow through with other team members to ensure high-quality, predictable delivery.
- Work on implementing DevSecOps and GitOps practices
- Work with the Engineering team to integrate more complex testing into a containerized pipeline to ensure minimal regressions
- Build platform tools that rest of the engineering teams can use.
Apply only if you have:
- 2+ years of software development/technical support experience.
- 1+ years of software development, operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- 2+ years of experience in public cloud services: AWS (VPC, EC2, ECS, Lambda, Redshift, S3, API Gateway) or GCP (Kubernetes Engine, Cloud SQL, Cloud Storage, BigQuery, API Gateway, Container Registry) - preferably GCP.
- Experience managing infrastructure for distributed NoSQL systems (Kafka/MongoDB), containers, microservices, and deployment and service orchestration using Kubernetes.
- Experience and a good understanding of Kubernetes, service mesh (Istio preferred), API gateways, network proxies, etc.
- Experience in setting up infra for central monitoring of infrastructure, ability to debug, trace
- Experience and deep understanding of Cloud Networking and Security
- Experience in Continuous Integration and Delivery (Jenkins/Maven, GitHub/GitLab).
- Strong scripting language knowledge, such as Python, Shell.
- Experience in Agile development methodologies and release management techniques.
- Excellent analytical and troubleshooting skills.
- Ability to continuously learn and make decisions with minimal supervision. You understand that making mistakes means that you are learning.
Interested Applicants can share their resume at sajal.somani[AT]karkinos[DOT]in with subject as "DevOps Engineer".
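The "central monitoring of infrastructure" requirement above, in a Kafka/MongoDB stack, often starts with something as small as the sketch below: flagging Kafka partitions whose consumer lag exceeds a threshold. The partition names and threshold are illustrative; real lag figures would come from the consumer-group API or an exporter.

```python
# Hedged sketch of the central-monitoring bullet: given per-partition
# consumer lag (as reported by a Kafka consumer-group API or exporter),
# flag partitions that have fallen too far behind. Values are illustrative.

def lagging_partitions(lag_by_partition, max_lag=1000):
    """Return sorted partition names whose lag exceeds max_lag messages."""
    return sorted(p for p, lag in lag_by_partition.items() if lag > max_lag)
```

Wiring this into an alerting loop gives early warning before a consumer outage becomes a data-freshness incident.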
At Neurosensum we are committed to making customer feedback more actionable. We have developed a platform called SurveySensum which breaks the conventional market research turnaround time.
SurveySensum is becoming a great tool not only to capture feedback but also to extract useful insights through quick workflow setups and dashboards. We have more than 7 channels through which we can collect feedback. This makes us challenge conventional software development design principles. The team likes to grind and helps each other out in tough situations.
Day to day responsibilities include:
- Work on the deployment of code via Bitbucket, AWS CodeDeploy, and manual processes
- Work on Linux/Unix OS and multi-technology application patching
- Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers.
- Create and modify scripts or applications to perform tasks
- Provide input on ways to improve the stability, security, efficiency, and scalability of the environment
- Easing developers’ life so that they can focus on the business logic rather than deploying and maintaining it.
- Managing release of the sprint.
- Educating the team on best practices.
- Finding ways to avoid human error and save time by automating the processes using Terraform, CloudFormation, Bitbucket pipelines, CodeDeploy, scripting
- Implementing cost effective measure on cloud and minimizing existing costs.
Skills and prerequisites
- OOPS knowledge
- Problem solving nature
- Willing to do the R&D
- Works with the team and support their queries patiently
- Bringing new things on the table - staying updated
- Offering solutions rather than just pointing out problems.
- Willing to learn and experiment
- Techie at heart
- Git basics
- Basic AWS or any cloud platform: creating and managing EC2, Lambda, IAM, S3, etc.
- Basic Linux handling
- Docker and orchestration (Great to have)
- Scripting: Python (preferred) / Bash
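The "avoid human error and save time by automating" bullet above usually begins with small reliability helpers. The sketch below is one hypothetical example: retrying a flaky step (say, a deploy API call) with exponential backoff; the attempt count and delays are illustrative.

```python
import time

# Sketch of a small reliability helper in the spirit of "avoid human
# error by automating": retry a flaky callable with exponential backoff.
# Attempt counts and delays here are illustrative defaults.

def retry(func, attempts=3, base_delay=0.1):
    """Call func(); on exception, wait base_delay * 2**i and try again."""
    for i in range(attempts):
        try:
            return func()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** i))
```

Wrapping one-off shell steps in a helper like this is what turns a manual, error-prone release ritual into a repeatable pipeline stage.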

- Have 3+ years of experience in Python development
- Be familiar with common database access patterns
- Have experience designing systems, monitoring metrics, and reading graphs.
- Have knowledge of AWS, Kubernetes and Docker.
- Be able to work well in a remote development environment.
- Be able to communicate in English at a native speaking and writing level.
- Be responsible to your fellow remote team members.
- Be highly communicative and go out of your way to contribute to the team and help others.

