
At Neurosensum we are committed to making customer feedback more actionable. We have developed a platform called SurveySensum that breaks the conventional market research turnaround time.
SurveySensum is becoming a great tool not only for capturing feedback but also for extracting useful insights through quick workflow setups and dashboards. We collect feedback through more than seven channels, which pushes us to challenge conventional software design principles. The team likes to grind and helps each other through tough situations.
Day-to-day responsibilities include:
- Work on code deployment via Bitbucket, AWS CodeDeploy, and manual processes
- Work on Linux/Unix operating systems and multi-technology application patching
- Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers.
- Create and modify scripts or applications to perform tasks
- Provide input on ways to improve the stability, security, efficiency, and scalability of the environment
- Ease developers’ lives so that they can focus on business logic rather than on deploying and maintaining it.
- Manage sprint releases.
- Educate the team on best practices.
- Find ways to avoid human error and save time by automating processes with Terraform, CloudFormation, Bitbucket Pipelines, CodeDeploy, and scripting (a minimal deployment sketch follows this list)
- Implement cost-effective measures on the cloud and minimize existing costs.
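For illustration, a minimal sketch (Python with boto3) of what one piece of this deployment automation might look like: triggering an AWS CodeDeploy deployment from an S3 artifact. The application, deployment group, bucket, and region names are hypothetical placeholders, not details of the actual stack.

```python
# Illustrative only: kick off an AWS CodeDeploy deployment from a script
# instead of doing it manually. All names below are hypothetical.
import boto3

codedeploy = boto3.client("codedeploy", region_name="ap-south-1")

response = codedeploy.create_deployment(
    applicationName="surveysensum-api",        # hypothetical application
    deploymentGroupName="production",          # hypothetical deployment group
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "example-artifacts",     # hypothetical artifact bucket
            "key": "api/build-1234.zip",
            "bundleType": "zip",
        },
    },
    description="Automated deployment from the release script",
)
print("Deployment started:", response["deploymentId"])
```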
Skills and prerequisites
- OOP knowledge
- Problem-solving mindset
- Willingness to do R&D
- Works with the team and patiently supports their queries
- Brings new ideas to the table and stays updated
- Focuses on solutions rather than dwelling on problems
- Willing to learn and experiment
- Techie at heart
- Git basics
- Basic AWS or any cloud platform – creating and managing EC2, Lambda, IAM, S3, etc.
- Basic Linux handling
- Docker and orchestration (Great to have)
- Scripting – Python (preferred) or Bash

Similar jobs
Senior DevOps Engineer (8–10 years)
Location: Mumbai
Role Summary
As a Senior DevOps Engineer, you will own end-to-end platform reliability and delivery automation for mission-critical lending systems. You’ll architect cloud infrastructure, standardize CI/CD, enforce DevSecOps controls, and drive observability at scale—ensuring high availability, performance, and compliance consistent with BFSI standards.
Key Responsibilities
Platform & Cloud Infrastructure
- Design, implement, and scale multi-account, multi-VPC cloud architectures on AWS and/or Azure (compute, networking, storage, IAM, RDS, EKS/AKS, Load Balancers, CDN).
- Champion Infrastructure as Code (IaC) using Terraform (and optionally Pulumi/Crossplane) with GitOps workflows for repeatable, auditable deployments (a minimal sketch follows this subsection).
- Lead capacity planning, cost optimization, and performance tuning across environments.
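As a rough illustration of the IaC approach described above, here is a minimal sketch using Pulumi's Python SDK (the same idea expressed in Terraform HCL would be the more common choice); all resource names, CIDRs, regions, and tags are assumptions.

```python
# A minimal IaC sketch with Pulumi's Python SDK; values are illustrative only.
import pulumi
import pulumi_aws as aws

# One VPC per environment, defined in code so it is repeatable and auditable.
vpc = aws.ec2.Vpc(
    "platform-vpc",
    cidr_block="10.0.0.0/16",
    enable_dns_hostnames=True,
    tags={"Environment": "staging", "ManagedBy": "pulumi"},
)

# A private subnet inside that VPC.
private_subnet = aws.ec2.Subnet(
    "platform-private-a",
    vpc_id=vpc.id,
    cidr_block="10.0.1.0/24",
    availability_zone="ap-south-1a",
    tags={"Tier": "private"},
)

pulumi.export("vpc_id", vpc.id)
```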
CI/CD & Release Engineering
- Build and standardize CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps, ArgoCD) for microservices, data services, and frontends; enable blue‑green/canary releases and feature flags.
- Drive artifact management, environment promotion, and release governance with compliance-friendly controls.
Containers, Kubernetes & Runtime
- Operate production-grade Kubernetes (EKS/AKS), including cluster lifecycle, autoscaling, ingress, service mesh, and workload security; manage Docker/containerd images and registries.
Reliability, Observability & Incident Management
- Implement end-to-end monitoring, logging, and tracing (Prometheus, Grafana, ELK/EFK, CloudWatch/Log Analytics, Datadog/New Relic) with SLO/SLI error budgets (a small error-budget sketch follows this subsection).
- Establish on-call rotations, run postmortems, and continuously improve MTTR and change failure rate.
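To make the SLO/SLI error-budget idea concrete, a small back-of-the-envelope sketch in Python; the availability target, window, and traffic numbers are purely hypothetical.

```python
# Back-of-the-envelope error-budget math for an availability SLO.
# All numbers (99.9% target, 30-day window, request counts) are assumptions.

slo_target = 0.999                 # 99.9% availability objective
window_days = 30
total_requests = 45_000_000        # hypothetical traffic over the window
failed_requests = 28_000           # hypothetical errors over the window

error_budget = 1.0 - slo_target                      # fraction of requests allowed to fail
budget_requests = total_requests * error_budget      # absolute budget in requests
budget_consumed = failed_requests / budget_requests  # share of the budget already burned

# The same budget read as allowed full-outage downtime for the window.
downtime_minutes = window_days * 24 * 60 * error_budget

print(f"Error budget: {budget_requests:,.0f} failed requests over {window_days} days")
print(f"Budget consumed so far: {budget_consumed:.1%}")
print(f"Equivalent allowed downtime: {downtime_minutes:.1f} minutes")
```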
Security & Compliance (DevSecOps)
- Enforce cloud and container hardening, secrets management (AWS Secrets Manager / HashiCorp Vault), vulnerability scanning (Snyk/SonarQube), and policy-as-code (OPA/Conftest); a minimal secrets-retrieval sketch follows this subsection.
- Partner with infosec/risk to meet BFSI regulatory expectations for DR/BCP, audits, and data protection.
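As one concrete example of the secrets-management pattern mentioned above, a minimal boto3 sketch that pulls a database credential from AWS Secrets Manager at startup (Vault usage would be analogous); the secret name, region, and JSON layout are assumptions.

```python
# Illustrative sketch: fetch a database credential from AWS Secrets Manager
# at startup instead of baking it into config files. The secret name is hypothetical.
import json
import boto3

def get_db_credentials(secret_id: str = "prod/lending-api/db") -> dict:
    client = boto3.client("secretsmanager", region_name="ap-south-1")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

creds = get_db_credentials()
# Use creds["username"] / creds["password"] to build the DB connection;
# never log the secret values themselves.
```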
Data, Networking & Edge
- Optimize networking (DNS, TCP/IP, routing, OSI layers) and edge delivery (CloudFront/Fastly), including WAF rules and caching strategies.
- Support persistence layers (MySQL, Elasticsearch, DynamoDB) for performance and reliability.
Ways of Working & Leadership
- Lead cross-functional squads (Product, Engineering, Data, Risk) and mentor junior DevOps/SREs.
- Document runbooks, architecture diagrams, and operating procedures; drive automation-first culture.
Must‑Have Qualifications
- 8–10 years of total experience with 5+ years hands-on in DevOps/SRE roles.
- Strong expertise in AWS and/or Azure, Linux administration, Kubernetes, Docker, and Terraform.
- Proven track record building CI/CD with Jenkins/GitHub Actions/Azure DevOps/ArgoCD.
- Solid grasp of networking fundamentals (DNS, TLS, TCP/IP, routing, load balancing).
- Experience implementing observability stacks and responding to production incidents.
- Scripting in Bash/Python; ability to automate ops workflows and platform tasks.
Good-to-Have / Preferred
- Exposure to BFSI/fintech systems and compliance standards; DR/BCP planning.
- Secrets management (Vault), policy-as-code (OPA), and security scanning (Snyk/SonarQube).
- Experience with GitOps patterns, service tiering, and SLO/SLI design.
- Knowledge of CDNs (CloudFront/Fastly) and edge caching/WAF rule authoring.
Education
- Bachelor’s/Master’s in Computer Science, Information Technology, or related field (or equivalent experience).
Job Title: Lead DevOps Engineer
Experience Required: 8+ years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible (a minimal provisioning sketch follows this list).
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
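For illustration, a minimal sketch of the provisioning-automation idea above: driving a CloudFormation stack from a Python script, the kind of step that would typically run inside a CI/CD pipeline. The stack name, template location, and parameters are hypothetical.

```python
# Illustrative sketch: automate provisioning by driving CloudFormation from a script.
# Stack name, template URL, and parameters are hypothetical placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

stack = cfn.create_stack(
    StackName="app-network-staging",
    TemplateURL="https://example-bucket.s3.amazonaws.com/templates/network.yaml",
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "staging"},
        {"ParameterKey": "VpcCidr", "ParameterValue": "10.20.0.0/16"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
    Tags=[{"Key": "ManagedBy", "Value": "automation"}],
)

# Block until the stack is created (or fails), so the pipeline step reports a result.
cfn.get_waiter("stack_create_complete").wait(StackName=stack["StackId"])
print("Stack created:", stack["StackId"])
```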
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
8+ years of experience in DevOps or Site Reliability Engineering (SRE).
3+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
● Manage staging, QA, and development cloud infrastructures running in 24×7 environments.
● Most of our deployments are in K8s; you will work with the team to run and manage multiple K8s environments 24/7.
● Implement and oversee all aspects of the cloud environment, including provisioning, scale, monitoring, and security.
● Nurture cloud computing expertise internally and externally to drive cloud adoption.
● Implement systems, solutions, and processes needed to manage cloud cost, monitoring, scalability, and redundancy.
● Ensure all cloud solutions adhere to security and compliance best practices.
● Collaborate with Enterprise Architecture, Data Platform, DevOps, and Integration Teams to ensure cloud adoption follows standard best practices.
Requirements:
● Bachelor’s degree in Computer Science, Computer Engineering, or Information Technology, or equivalent experience.
● Experience with Kubernetes on cloud and deployment technologies such as Helm is a major plus
● Expert-level hands-on experience with AWS (Azure and GCP experience is a big plus).
● 10 or more years of experience.
● Minimum of 5 years’ experience building and supporting cloud solutions
Role – Sr. DevOps Engineer
Location - Bangalore
Experience: 5+ years
Responsibilities
- Implementing various development, testing, automation tools, and IT infrastructure
- Planning the team structure, activities, and involvement in project management.
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Troubleshooting issues and fixing code bugs
- Monitoring processes during the entire lifecycle for adherence, and updating or creating new processes to drive improvement and minimize waste
- Encouraging and building automated processes wherever possible
- Incident management and root cause analysis.
- Selecting and deploying appropriate CI/CD tools
- Striving for continuous improvement and building continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)
- Mentoring and guiding the team members
- Monitoring and measuring customer experience and KPIs
Requirements
- 5-6 years of relevant experience in a DevOps role.
- Good knowledge of cloud technologies such as AWS/Google Cloud.
- Familiarity with container orchestration services, especially Kubernetes (mandatory), and good knowledge of Docker.
- Experience administering and deploying development CI/CD tools such as Git, Jira, GitLab, or Jenkins.
- Good knowledge of complex debugging mechanisms (especially the JVM) and Java programming experience (mandatory)
- Significant experience with Windows and Linux operating system environments
- A team player with excellent communication skills.
About Quantela
We are a technology company that offers outcomes business models. We empower our customers with the right digital infrastructure to deliver greater economic, social, and environmental outcomes for their constituents.
When the company was founded in 2015, we specialized in smart cities technology alone. Today, working with cities and towns, utilities, and public venues, our team of 280+ experts offers a vast array of outcomes business models through technologies like digital advertising, smart lighting, smart traffic, and digitized citizen services.
We pride ourselves on our agility, innovation, and passion for using technology for a higher purpose. Unlike other technology companies, we tailor our offerings (what we can digitize) and the business model (how we partner with our customers to deliver that digitization) to drive measurable impact where our customers need it most. Over the last several months alone, we have helped customers deliver outcomes like faster medical response times to save lives, reduced traffic congestion to keep cities moving, and new revenue streams to tackle societal issues like homelessness.
We are headquartered in Billerica, Massachusetts in the United States with offices across Europe, and Asia.
The company has been recognized with the World Economic Forum’s ‘Technology Pioneers’ award in 2019 and CRN’s IoT Innovation Award in 2020.
For the latest news and updates, please visit us at www.quantela.com.
Overview of the Role
The ideal candidate should be able to automate infrastructure and microservices deployment through automation tools, and should have handled Kubernetes clusters in production, both in the cloud and on-premises.
Key Responsibilities
- 6+ years of overall experience; should have handled production Kubernetes clusters in both cloud and on-premises environments.
- Build monitoring that alerts on symptoms rather than on outages.
- Migrate VM-based applications to Kubernetes clusters.
- Automate infrastructure component provisioning.
- Document every task; your findings turn into repeatable actions and then into automation.
- Follow the Agile process and plan the work accordingly.
Must have Skills
- Knowledge of container solutions like Docker and Kubernetes, and an understanding of virtualization concepts.
- Experience with configuration management tools like Ansible and Chef.
- Experience with Terraform, CloudFormation, or other infrastructure-as-code tools.
- Experience with CI/CD in Jenkins.
- Good knowledge of AWS/Azure cloud environments.
- Good hands-on experience with web server configuration and administration (Nginx, Apache, Tomcat).
- Experience working with and maintaining package management systems (e.g., Artifactory, APT).
- Knowledge of scripting in PowerShell/Python/Bash.
- Experience building automation and pipelines (integration, testing, deployment)
- Experience with Docker containers (images and registry management).
- Understanding of metrics collectors such as Metricbeat, Heartbeat, or Prometheus is good to have (a minimal exporter sketch follows this list).
- Ability to work collaboratively on a cross-functional team with a wide range of experience levels
- Ability to analyze existing services and identify technical debt to work on.
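To make the metrics-collector point concrete, a minimal custom-exporter sketch using the prometheus_client Python library; the metric names, port, and the simulated check are illustrative only.

```python
# A minimal custom-metrics exporter sketch with prometheus_client.
# Metric names and the simulated check are illustrative assumptions.
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("app_job_queue_depth", "Number of jobs waiting in the queue")
last_success = Gauge("app_last_successful_run_timestamp", "Unix time of the last successful run")

def collect_once() -> None:
    # Stand-in for a real check (DB query, API call, etc.).
    queue_depth.set(random.randint(0, 50))
    last_success.set(time.time())

if __name__ == "__main__":
    start_http_server(9100)          # Prometheus scrapes http://host:9100/metrics
    while True:
        collect_once()
        time.sleep(15)
```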
Desired Background
Bachelor’s/Master’s degree in Computer Science or Computer Applications
Company Introduction:
My Next Developer is a global network of top talent in business, design, and technology that enables companies to scale their teams, on-demand.
We take the best elements of virtual teams and combine them with a support structure that encourages innovation, social interaction, and fun. We see no borders, move at a fast pace, and are never afraid to break the mold.
Job Responsibilities
- Ensure security in the development activities.
- Implement risk management techniques and threat modeling.
- Implement infrastructure automation, monitoring and alerts as part of ISO 27001 and SOC 2 certifications.
- Collaborate with internal teams to produce the best security solutions
Minimum requirements:
- Bachelor's/Master's degree in computer science, cybersecurity, engineering, or an equivalent field.
- 3+ years of experience as a DevSecOps engineer.
- Proficiency in back-end technologies such as NodeJs or Python.
- Expertise in using DevOps tools like GitHub, dependency management, and CI/CD.
- Profound knowledge of AWS cloud, DevOps culture and automation tools.
- Fluent in both spoken and written English communication.
- Ability to work full-time (40 hours/week) with a 2-3 hour overlap with European time zone.
- Stay up-to-date on cybersecurity threats and follow the best practices.
- Previous project experience related to ISO 27001 and SOC 2 certifications.
• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management.
  o AWS
    Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
    Data: RDS, DynamoDB, Elasticsearch
    Workload: EC2, EKS, Lambda, etc.
  o Azure
    Networking: VNET, VNET Peering
    Data: Azure MySQL, Azure MSSQL, etc.
    Workload: AKS, Virtual Machines, Azure Functions
  o GCP
    Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
    Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
    Workload: GKE, Instances, App Engine, Batch, etc.
• Experience in any one of the CI/CD tools (Gitlab/Github/Jenkins) including runner setup, templating and configuration.
• Kubernetes (EKS/AKS/GKE) or Ansible experience, with basics like pods, deployments, networking, and service mesh; experience with a package manager like Helm.
• Scripting experience (Bash/Python), automation in pipelines where required, and system services.
• Infrastructure automation (Terraform/Pulumi/CloudFormation): write modules, set up pipelines, and version the code (a minimal plan-guard sketch follows right after this item).
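As a small illustration of scripting inside an infrastructure-automation pipeline, a sketch that parses `terraform show -json` output and fails the job if the plan would destroy anything; the file name and exit-code convention are assumptions about the pipeline setup.

```python
# Illustrative pipeline-step sketch: read a Terraform plan exported as JSON
# (`terraform show -json tfplan > plan.json`) and fail the job on any destroys.
import json
import sys

def destroyed_resources(plan_path: str) -> list[str]:
    with open(plan_path) as fh:
        plan = json.load(fh)
    destroyed = []
    for change in plan.get("resource_changes", []):
        if "delete" in change.get("change", {}).get("actions", []):
            destroyed.append(change["address"])
    return destroyed

if __name__ == "__main__":
    doomed = destroyed_resources(sys.argv[1] if len(sys.argv) > 1 else "plan.json")
    if doomed:
        print("Plan would destroy resources:", ", ".join(doomed))
        sys.exit(1)      # fail the pipeline so a human reviews the plan
    print("No destroys in plan; safe to apply.")
```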
Optional
• Experience in any programming language is not required but is appreciated.
• Good experience with Git, SVN, or any other code management tool is required.
• DevSecOps tools (Qualys/SonarQube/Black Duck) for security scanning of artifacts, infrastructure, and code.
• Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.).
What is the role?
As a DevOps Engineer, you are responsible for setting up and maintaining the Git repository and DevOps tools like Jenkins, UCD, Docker, Kubernetes, and JFrog Artifactory, as well as cloud monitoring tools and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multiple cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Work on Docker images and maintain Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters and patch them when necessary (a minimal health-check sketch follows this list).
- Work on Cloud security tools to keep applications secured.
- Participate in the software development lifecycle, specifically the infra design, execution, and debugging required to successfully implement integrated solutions within the portfolio.
- Have the necessary technical and professional expertise.
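For illustration, a minimal cluster health-check sketch using the official Kubernetes Python client: it lists pods that are not Running or Succeeded so they can be flagged or alerted on. Access via kubeconfig is assumed; namespaces and thresholds would be environment-specific.

```python
# Illustrative monitoring sketch with the official kubernetes Python client:
# flag pods that are not Running/Succeeded across all namespaces.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

unhealthy = []
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        unhealthy.append((pod.metadata.namespace, pod.metadata.name, pod.status.phase))

for ns, name, phase in unhealthy:
    print(f"{ns}/{name}: {phase}")
```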
What are we looking for?
- 5-12 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipeline.
- Experience in DevOps automation tools. Well versed with DevOps Frameworks, and Agile.
- Working knowledge of scripting using Shell, Python, Terraform, Ansible, Puppet, or Chef.
- Experience with and a good understanding of any cloud such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Experience with Jira is a plus.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts while maintaining the quality of content, interact and share your ideas, and enjoy loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being here.
We are
It is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, it offers a suite of three products: Plum, Empuls, and Compass. It works with more than 2,000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, it is a 300+ strong team with four global offices in San Francisco, Dublin, Singapore, and New Delhi.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
- Install, configure, performance-tune, and monitor web, app, and database servers.
- Install, set up, and manage Java, PHP, and NodeJS stacks with software load balancers.
- Install, set up, and administer MySQL, Mongo, Elasticsearch, and PostgreSQL databases.
- Install, set up, and maintain monitoring solutions like Nagios and Zabbix.
- Design and implement DevOps processes for new projects following the department's objectives of automation.
- Collaborate on projects with development teams to provide recommendations, support and guidance.
- Work towards full automation, monitoring, virtualization and containerization.
- Create and maintain tools for deployment, monitoring, and operations (a minimal endpoint health-check sketch follows this list).
- Automate processes in a scalable, easy-to-understand way that can be detailed and understood through documentation.
- Develop and deploy software that will help drive improvements towards the availability, performance, efficiency, and security of services.
- Maintain 24/7 availability for responsible systems and be open to on-call rotation.
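As a small illustration of the kind of monitoring tooling described above, a minimal endpoint health-check script in Python, similar in spirit to a Nagios/Zabbix external check; the URLs and exit-code convention are illustrative.

```python
# Minimal endpoint health-check sketch (the kind of check Nagios/Zabbix might run).
# Endpoint URLs and the timeout are hypothetical.
import sys
import urllib.request

ENDPOINTS = [
    "https://example.com/healthz",        # hypothetical service endpoints
    "https://api.example.com/healthz",
]

def check(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception as exc:
        print(f"CRITICAL: {url} -> {exc}")
        return False

if __name__ == "__main__":
    results = [check(url) for url in ENDPOINTS]   # check every endpoint, not just the first failure
    ok = all(results)
    print("OK: all endpoints healthy" if ok else "CRITICAL: one or more endpoints failing")
    sys.exit(0 if ok else 2)                      # exit code 2 == CRITICAL in Nagios convention
```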











