
Job Title : Senior DevOps Engineer
Experience : 5+ Years
Location : Gurgaon, Sector 39
About the Role :
We are seeking an experienced Senior DevOps Engineer to lead our DevOps practices, manage a small team, and build functional, scalable systems that enhance customer experience. You will be responsible for deployments, automation, troubleshooting, integrations, monitoring, and team mentoring while ensuring secure and efficient operations.
Mandatory Skills :
Linux Administration, Shell Scripting, CI/CD (Jenkins), Git/GitHub, Docker, Kubernetes, AWS, Ansible, Database Administration (MariaDB/MySQL/MongoDB), Apache httpd/Tomcat, HAProxy, Nagios, Keepalived, Monitoring/Logging/Alerting, and On-premise Server Management.
Key Responsibilities :
- Implement and manage integrations as per business and customer needs.
- Deploy product updates, fixes, and enhancements.
- Provide Level 2 technical support and resolve production issues.
- Build tools to reduce errors and improve system performance.
- Develop scripts and automation for CI/CD, monitoring, and visualization.
- Perform root cause analysis of incidents and implement long-term fixes.
- Ensure robust monitoring, logging, and alerting systems are in place.
- Manage on-premise servers and ensure smooth deployments.
- Collaborate with development teams for system integration.
- Mentor and guide a team of 3 to 4 engineers.
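As an illustration of the scripting and alerting work described above, here is a minimal, hypothetical sketch of a log-scanning alert helper. The log format, service names, and threshold are assumptions for the example, not part of this role's actual stack.

```python
from collections import Counter

def scan_for_alerts(log_lines, threshold=3):
    """Count ERROR entries per service and flag services that meet or
    exceed the threshold, ready to hand off to an alerting hook."""
    errors = Counter()
    for line in log_lines:
        # Assumed format: "<timestamp> <LEVEL> <service>: <message>"
        parts = line.split()
        if len(parts) >= 3 and parts[1] == "ERROR":
            errors[parts[2].rstrip(":")] += 1
    return {svc: n for svc, n in errors.items() if n >= threshold}

logs = [
    "2024-05-01T10:00:01 ERROR payments: timeout",
    "2024-05-01T10:00:02 INFO payments: retry ok",
    "2024-05-01T10:00:03 ERROR payments: timeout",
    "2024-05-01T10:00:04 ERROR payments: timeout",
    "2024-05-01T10:00:05 ERROR auth: bad token",
]
print(scan_for_alerts(logs))  # → {'payments': 3}
```

In practice a script like this would tail real log streams and post to a paging or chat integration rather than print.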
Required Qualifications & Experience :
- Bachelor’s degree in Computer Science, Software Engineering, IT, or related field (Master’s preferred).
- 5+ years of experience in DevOps engineering with team management exposure.
- Strong expertise in:
- Linux Administration & Shell Scripting
- CI/CD pipelines (Jenkins or similar)
- Git/GitHub, branching, and code repository standards
- Docker, Kubernetes, AWS, Ansible
- Database administration (MariaDB, MySQL, MongoDB)
- Web servers (Apache httpd, Apache Tomcat)
- Networking & Load Balancing tools (HAProxy, Keepalived)
- Monitoring & alerting tools (Nagios, logging frameworks)
- On-premise server management
- Strong debugging, automation, and system troubleshooting skills.
- Knowledge of security best practices including data encryption.
Personal Attributes :
- Excellent problem-solving and analytical skills.
- Strong communication and leadership abilities.
- Detail-oriented with a focus on reliability and performance.
- Ability to mentor juniors and collaborate with cross-functional teams.
- Keen interest in emerging DevOps and cloud trends.

About NeoGenCode Technologies Pvt Ltd
Welcome to Neogencode Technologies, an IT services and consulting firm that provides innovative solutions to help businesses achieve their goals. Our team of experienced professionals is committed to providing tailored services to meet the specific needs of each client. Our comprehensive range of services includes software development, web design and development, mobile app development, cloud computing, cybersecurity, digital marketing, and skilled resource acquisition. We specialize in helping our clients find the right skilled resources to meet their unique business needs. At Neogencode Technologies, we prioritize communication and collaboration with our clients, striving to understand their unique challenges and provide customized solutions that exceed their expectations. We value long-term partnerships with our clients and are committed to delivering exceptional service at every stage of the engagement. Whether you are a small business looking to improve your processes or a large enterprise seeking to stay ahead of the competition, Neogencode Technologies has the expertise and experience to help you succeed. Contact us today to learn more about how we can support your business growth and provide skilled resources to meet your business needs.
Candid answers by the company
1. IT & Engineering Talent Staffing
- Provides full-time and contract-based hiring, delivering handpicked, pre-screened developers across tech stacks, from web and mobile to AI/ML and Web3/blockchain.
- Maintains a bench of vetted candidates, offering fast delivery of interview-ready profiles, often within 24 hours.
- Offers payroll management, handling compliance, tax, attendance, and documentation for both contractors and full-time employees.
2. End-to-End Project Delivery
- Delivers full-stack development solutions: web, mobile, cloud, AI/ML, Blockchain/Web3.
- Manages the entire project lifecycle: requirements gathering, design (UI/UX), development, deployment, and ongoing support.
3. Additional Offerings
- Expands into cybersecurity consulting, digital marketing, and cloud platform services (AWS, GCP, Azure).
- Provides strategic IT consulting to align technology solutions with business objectives.
Similar jobs
We are seeking a skilled and detail-oriented SRE Release Engineer to lead and streamline the CI/CD pipeline for our C and Python codebase. You will be responsible for coordinating, automating, and validating biweekly production releases, ensuring operational stability, high deployment velocity, and system reliability.
Requirements
● Bachelor’s degree in Computer Science, Engineering, or related field.
● 3+ years in SRE, DevOps, or release engineering roles.
● Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
● Experience automating deployments for C and Python applications.
● Strong understanding of Git version control, merge/rebase strategies, tagging, and submodules (if used).
● Familiarity with containerization (Docker) and deployment orchestration (e.g., Kubernetes, Ansible, or Terraform).
● Solid scripting experience (Python, Bash, or similar).
● Understanding of observability, monitoring, and incident response tooling (e.g., Prometheus, Grafana, ELK, Sentry).
Preferred Skills
● Experience with release coordination in data networking environments
● Familiarity with build tools like Make, CMake, or Bazel.
● Exposure to artifact management systems (e.g., Artifactory, Nexus).
● Experience deploying to Linux production systems with service uptime guarantees.
Responsibilities
● Own the release process: Plan, coordinate, and execute biweekly software releases across multiple services.
● Automate release pipelines: Build and maintain CI/CD workflows using tools such as GitHub Actions, Jenkins, or GitLab CI.
● Version control: Manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
● Integrate testing frameworks: Ensure automated test coverage (unit, integration, regression) is enforced pre-release.
● Release validation: Develop pre-release verification tools/scripts to validate build integrity and backward compatibility.
● Deployment strategy: Implement and refine blue/green, rolling, or canary deployments in staging and production environments.
● Incident readiness: Partner with SREs to ensure rollback strategies, monitoring, and alerting are release-aware.
● Collaboration: Work closely with developers, QA, and product teams to align on release timelines and feature readiness.
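To illustrate the tagging and release-versioning piece of the workflow above, here is a minimal helper of the kind a release pipeline might call when cutting a biweekly release. The `vMAJOR.MINOR.PATCH` tag scheme is an assumption (per Semantic Versioning), not a statement about this team's actual convention.

```python
def bump_version(tag, part="patch"):
    """Return the next release tag given a 'vMAJOR.MINOR.PATCH' tag,
    resetting the lower-order components per Semantic Versioning."""
    major, minor, patch = (int(x) for x in tag.lstrip("v").split("."))
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    else:
        patch += 1
    return f"v{major}.{minor}.{patch}"

print(bump_version("v1.4.2"))           # → v1.4.3
print(bump_version("v1.4.2", "minor"))  # → v1.5.0
```

A CI job can read the latest Git tag, compute the next one with a helper like this, and push the new tag to trigger the release build.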
Success Metrics
● Achieve >95% release success rate with minimal hotfix rollbacks.
● Reduce mean release deployment time by 30% within 6 months.
● Maintain a weekly release readiness report with zero critical blockers.
● Enable full traceability of builds from commit to deployment.
Benefits
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Why Join RtBrick
Enjoy the excitement of a start-up without the risk!
We're revolutionizing the Internet's backbone by using cutting-edge software development techniques. The internet and, more specifically, broadband networks are among the world's most critical technologies, relied on by billions of people every day. RtBrick is revolutionizing the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure and distributed edge network functions. This shift mirrors transformations seen in computing and cloud technologies, marking the most profound change in networking since the inception of IP technology.
We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a DevOps philosophy, and warehouse-scale tools to drive innovation.
And although RtBrick is a young innovative company, RtBrick stands on solid financial ground: we are already cash-flow positive, backed by major telco investors like Swisscom Ventures and T-Capital, and our solutions are actively deployed by Tier-1 telcos including Deutsche Telekom (Europe's largest carrier), Regional ISPs and City ISPs—with expanding operations across Europe, North America and Asia.
Joining RtBrick offers you the unique thrill of a startup environment, coupled with the security that comes from working in a business with substantial market presence and significant revenue streams.
We'd love you to come and join us so why don't you embrace the opportunity to be part of a team that's not just participating in the market but actively shaping the future of telecommunications worldwide.
Requirements
- 3+ years work experience writing clean production code
- Well versed with maintaining infrastructure as code (Terraform, CloudFormation, etc.). High proficiency with Terraform/Terragrunt is absolutely critical
- Experience of setting CI/CD pipelines from scratch
- Experience with AWS (EC2, ECS, RDS, ElastiCache, etc.), AWS Lambda, Kubernetes, Docker, and service meshes
- Experience with ETL pipelines and big data infrastructure
- Understanding of common security issues
Roles / Responsibilities:
- Write Terraform modules for deploying different components of infrastructure in AWS, such as Kubernetes, RDS, Prometheus, Grafana, and static websites
- Configure networking, autoscaling, continuous deployment, security, and multiple environments
- Make sure the infrastructure is SOC 2, ISO 27001, and HIPAA compliant
- Automate all the steps to provide a seamless experience to developers.
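Compliance baselines like the SOC 2/ISO 27001/HIPAA requirement above are often enforced with small automated policy checks run against the infrastructure definition. The sketch below is illustrative only: the required tag names and the dict-based resource representation (as might be extracted from a Terraform plan) are assumptions, not part of this role's actual stack.

```python
# Hypothetical tagging-policy check over resources parsed from a Terraform plan.
REQUIRED_TAGS = {"Owner", "Environment", "DataClassification"}

def untagged_resources(resources):
    """Return names of resources missing any tag required by the
    compliance baseline (a SOC 2 / ISO 27001 style tagging policy)."""
    return [
        r["name"]
        for r in resources
        if REQUIRED_TAGS - set(r.get("tags", {}))  # any required tag absent
    ]

plan = [
    {"name": "aws_db_instance.main",
     "tags": {"Owner": "data", "Environment": "prod", "DataClassification": "phi"}},
    {"name": "aws_s3_bucket.logs",
     "tags": {"Owner": "platform"}},
]
print(untagged_resources(plan))  # → ['aws_s3_bucket.logs']
```

A check like this can run in CI so a non-compliant resource fails the pipeline before it is ever applied.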
Job Overview:
You will work in engineering and development teams to integrate and develop cloud solutions and the virtualized deployment of a software-as-a-service product. This requires understanding the software system architecture and its function, as well as performance and security requirements. The DevOps Engineer is also expected to have expertise in available cloud solutions and services, administration of virtual machine clusters, performance tuning and configuration of cloud computing resources, configuration of security, and scripting and automation of monitoring functions. This position requires the deployment and management of multiple virtual clusters and working with compliance organizations to support security audits. The design and selection of cloud computing solutions that are reliable, robust, extensible, and easy to migrate are also important.
Experience:
- Experience working on billing and budgets for a GCP project - MUST
- Experience working on optimizations on GCP based on vendor recommendations - NICE TO HAVE
- Experience in implementing the recommendations on GCP
- Architect Certifications on GCP - MUST
- Excellent communication skills (both verbal & written) - MUST
- Excellent documentation skills on processes, steps, and instructions - MUST
- At least 2 years of experience on GCP.
Basic Qualifications:
- Bachelor’s/Master’s Degree in Engineering OR Equivalent.
- Extensive scripting or programming experience (Shell Script, Python).
- Extensive experience working with CI/CD (e.g. Jenkins).
- Extensive experience working with GCP, Azure, or Cloud Foundry.
- Experience working with databases (PostgreSQL, Elasticsearch).
- Must have 2 years of minimum experience with GCP certification.
Benefits :
- Competitive salary.
- Work from anywhere.
- Learning and gaining experience rapidly.
- Reimbursement for basic working set up at home.
- Insurance (including top-up insurance for COVID).
Location :
Remote - work from anywhere.
Ideal joining preferences:
Immediate or 15 days
Pixuate is a deep-tech AI start-up enabling businesses to make smarter decisions with our edge-based video analytics platform, offering innovative solutions across traffic management, industrial digital transformation, and smart surveillance. We aim to serve enterprises globally as a preferred partner for the digitization of visual information.
Job Description
We at Pixuate are looking for highly motivated and talented Senior DevOps Engineers to support building the next generation of innovative, deep-tech AI based products. If you have a passion for building great software, an analytical mindset, and enjoy solving complex problems; if you thrive in a challenging environment, are self-driven, constantly explore and learn new technologies, and want to succeed on your own merits and fast-track your career growth, we would love to talk!
What do we expect from this role?
- This role’s key area of focus is to coordinate and manage the product from development through deployment, working with the rest of the engineering team to ensure smooth functioning.
- Work closely with the Head of Engineering in building out the infrastructure required to deploy, monitor and scale the services and systems.
- Act as the technical expert, innovator, and strategic thought leader within the Cloud Native Development, DevOps and CI/CD pipeline technology engineering discipline.
- Should be able to understand how technology works and how various structures fall in place, with a high-level understanding of working with various operating systems and their implications.
- Troubleshoots basic software or DevOps stack issues
You would be great at this job, if you have below mentioned competencies
- B.Tech/M.Tech/MCA/BSc/MSc/BCA, preferably in Computer Science
- 5+ years of relevant work experience
- Knowledge of various DevOps tools and technologies (https://www.edureka.co/blog/devops-skills#knowledge)
- Should have worked on tools like Docker, Kubernetes, and Ansible in a production environment for data-intensive systems.
- Experience in developing Continuous Integration/Continuous Delivery (CI/CD) pipelines (https://www.edureka.co/blog/continuous-delivery/), preferably using Jenkins, scripting (Shell/Python), and Git and Git workflows (https://www.edureka.co/blog/what-is-git/)
- Experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment.
- Experience with the design and implementation of big data backup/recovery solutions.
- Strong Linux fundamentals and scripting; experience as Linux Admin is good to have.
- Working knowledge in Python is a plus
- Working knowledge of TCP/IP networking, SMTP, HTTP, load-balancers (ELB, HAProxy) and high availability architecture is a plus
- Strong interpersonal and communication skills
- Proven ability to complete projects according to outlined scope and timeline
- Willingness to travel within India and internationally whenever required
- Demonstrated leadership qualities in past roles
More about Pixuate:
Pixuate, owned by Cocoslabs Innovative Solutions Pvt. Ltd., is a leading AI startup building the most advanced Edge-based video analytics products. We are recognized for our cutting-edge R&D in deep learning, computer vision and AI and we are solving some of the most challenging problems faced by enterprises. Pixuate’s plug-and-play platform revolutionizes monitoring, compliance to safety, and efficiency improvement for Industries, Banks & Enterprises by providing actionable real-time insights leveraging CCTV cameras.
We have enabled customers such as Hindustan Unilever, Godrej, Secuira, L&T, Bigbasket, Microlabs, and Karnataka Bank, and are rapidly expanding our business to cater to the needs of the Manufacturing & Logistics and Oil & Gas sectors.
Rewards & Recognitions:
- Winner of Elevate by Startup Karnataka (https://pixuate.ai/thermal-analytics/).
- Winner of Manufacturing Innovation Challenge in the 2nd edition of Fusion 4.0’s MIC2020 organized by the NASSCOM Centre of Excellence in IoT & AI in 2021
- Winner of SASACT program organized by MEITY in 2021
Why join us?
You will get an opportunity to work with the founders and be part of 0 to 1 journey& get coached and guided. You will also get an opportunity to excel your skills by being innovative and contributing to the area of your personal interest. Our culture encourages innovation, freedom and rewards high performers with faster growth and recognition.
Where to find us?
Website: http://pixuate.com/
Linked in: https://www.linkedin.com/company/pixuate-ai
Place of Work: Work from Office – Bengaluru
Hands on experience in:
- Deploying, managing, securing and patching enterprise applications on large scale in Cloud preferably AWS.
- Experience leading End-to-end DevOps projects with modern tools encompassing both Applications and Infrastructure
- AWS CodeDeploy, CodeBuild, Jenkins, SonarQube.
- Incident management and root cause analysis.
- Strong understanding of immutable infrastructure and infrastructure-as-code concepts. Participate in capacity planning and provisioning of new resources. Importing already-deployed infra into IaC.
- Utilizing AWS cloud services such as EC2, S3, IAM, Route53, RDS, VPC, NAT/Internet Gateway, Lambda, Load Balancers, CloudWatch, and API Gateway.
- AWS ECS: managing multi-cluster container environments (ECS with EC2, and Fargate with service discovery using Route53)
- Monitoring/analytics tools like Nagios/Datadog and logging tools like Logstash/Sumo Logic
- Simple Notification Service (SNS)
- Version Control System: Git, Gitlab, Bitbucket
- Participate in Security Audit of Cloud Infrastructure.
- Exceptional documentation and communication skills.
- Ready to work in shifts
- Knowledge of Akamai is a plus.
- Microsoft Azure is a plus.
- Adobe AEM is a plus.
- AWS Certified DevOps Professional is a plus.
MLOps Engineer
Required Candidate profile :
- 3+ years’ experience in developing continuous integration and deployment (CI/CD) pipelines (e.g. Jenkins, Github Actions) and bringing ML models to CI/CD pipelines
- Candidate with strong Azure expertise
- Exposure to productionizing models
- Candidate should have complete knowledge of the Azure ecosystem, especially in the area of DE
- Candidate should have prior experience designing, building, testing, and maintaining machine learning infrastructure to empower data scientists to rapidly iterate on model development
- Develop continuous integration and deployment (CI/CD) pipelines on top of Azure that include Azure ML, MLflow, and Azure DevOps
- Proficient knowledge of Git, Docker and containers, and Kubernetes
- Familiarity with Terraform
- E2E production experience with Azure ML, Azure ML Pipelines
- Experience with the Azure ML extension for Azure DevOps
- Worked on model drift (concept drift, data drift), preferably on Azure ML
- Candidate will be part of a cross-functional team that builds and delivers production-ready data science projects. You will work with team members and stakeholders to creatively identify, design, and implement solutions that reduce operational burden, increase reliability and resiliency, ensure disaster recovery and business continuity, enable CI/CD, optimize ML and AI services, and maintain it all in an infrastructure-as-code, everything-in-version-control manner.
- Automate deployments of infrastructure components and repetitive tasks.
- Drive changes strictly via the infrastructure-as-code methodology.
- Promote the use of source control for all changes including application and system-level changes.
- Design & Implement self-recovering systems after failure events.
- Participate in system sizing and capacity planning of various components.
- Create and maintain technical documents such as installation/upgrade MOPs.
- Coordinate & collaborate with internal teams to facilitate installation & upgrades of systems.
- Support 24x7 availability for corporate sites & tools.
- Participate in rotating on-call schedules.
- Actively involved in researching, evaluating & selecting new tools & technologies.
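One widely used heuristic for the data-drift monitoring mentioned above is the Population Stability Index (PSI), which compares a feature's binned distribution in production against a training baseline. The sketch below is a generic illustration, not Azure ML's own drift detector; the bin proportions and the interpretation thresholds are the commonly quoted rules of thumb.

```python
import math

def population_stability_index(expected, actual):
    """PSI over pre-binned proportion lists. Common rule of thumb:
    PSI < 0.1 stable, 0.1 to 0.25 moderate drift, > 0.25 significant."""
    eps = 1e-6  # guard against empty bins in the log ratio
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time bin proportions
current  = [0.10, 0.20, 0.30, 0.40]  # production bin proportions
print(round(population_stability_index(baseline, current), 4))
```

A scheduled job computing PSI per feature and alerting above a threshold is a simple way to make retraining decisions data-driven.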
- Cloud computing – AWS, OCI, OpenStack
- Automation/Configuration management tools such as Terraform & Chef
- Atlassian tools administration (JIRA, Confluence, Bamboo, Bitbucket)
- Scripting languages - Ruby, Python, Bash
- Systems administration experience – Linux (Red Hat), Mac, Windows
- SCM systems - Git
- Build tools - Maven, Gradle, Ant, Make
- Networking concepts - TCP/IP, Load balancing, Firewall
- High-Availability, Redundancy & Failover concepts
- SQL scripting & queries - DML, DDL, stored procedures
- Decisive and ability to work under pressure
- Prioritizing workload and multi-tasking ability
- Excellent written and verbal communication skills
- Database systems – Postgres, Oracle, or other RDBMS
- Mac automation tools - JAMF or other
- Atlassian Datacenter products
- Project management skills
Qualifications
- 3+ years of hands-on experience in the field or related area
- Requires MS or BS in Computer Science or equivalent field
- Cloud and virtualization-based technologies (Amazon Web Services (AWS), VMWare).
- Java Application Server Administration (WebLogic, WildFly, JBoss, Tomcat).
- Docker and Kubernetes (EKS)
- Linux/UNIX Administration (Amazon Linux and RedHat).
- Developing and supporting cloud infrastructure designs and implementations and guiding application development teams.
- Configuration Management tools (Chef, Puppet, or Ansible).
- Log aggregation tools such as Elastic and/or Splunk.
- Automate infrastructure and application deployment-related tasks using Terraform.
- Automate repetitive tasks required to maintain a secure and up-to-date operational environment.
Responsibilities
- Build and support always-available private/public cloud-based software-as-a-service (SaaS) applications.
- Build AWS or other public cloud infrastructure using Terraform.
- Deploy and manage Kubernetes (EKS) based Docker applications in AWS.
- Create custom OS images using Packer.
- Create and revise infrastructure and architectural designs and implementation plans and guide the implementation with operations.
- Liaison between application development, infrastructure support, and tools (IT Services) teams.
- Development and documentation of Chef recipes and/or Ansible scripts. Support throughout the entire deployment lifecycle (development, quality assurance, and production).
- Help developers leverage infrastructure, application, and cloud platform features and functionality; participate in code and design reviews; and support developers by building CI/CD pipelines using Bamboo, Jenkins, or Spinnaker.
- Create knowledge-sharing presentations and documentation to help developers and operations teams understand and leverage the system's capabilities.
- Learn on the job and explore new technologies with little supervision.
- Leverage scripting (Bash, Perl, Ruby, Python) to build required automation and tools on an ad-hoc basis.
Who we have in mind:
- Solid experience in building a solution on AWS or other public cloud services using Terraform.
- Excellent problem-solving skills with a desire to take on responsibility.
- Extensive knowledge in containerized application and deployment in Kubernetes
- Extensive knowledge of the Linux operating system, RHEL preferred.
- Proficiency with shell scripting.
- Experience with Java application servers.
- Experience with Git and Subversion.
- Excellent written and verbal communication skills with the ability to communicate technical issues to non-technical and technical audiences.
- Experience working in a large-scale operational environment.
- Internet and operating system security fundamentals.
- Extensive knowledge of massively scalable systems. Linux operating system/application development desirable.
- Programming in scripting languages such as Python. Other object-oriented languages (C++, Java) are a plus.
- Experience with Configuration Management Automation tools (Chef or Puppet).
- Experience with virtualization, preferably on multiple hypervisors.
- BS/MS in Computer Science or equivalent experience.
- Excellent written and verbal skills.
Education or Equivalent Experience:
- Bachelor's degree or equivalent education in related fields
- Certificates of training in associated fields/equipment
Hi,
Greetings from ToppersEdge.com India Pvt Ltd
We have job openings for our client. Kindly find the details below:
Work Location : Bengaluru (remote basis presently; candidates should later relocate to Bangalore)
Shift Timings – general shift
Job Type – Permanent Position
Experience – 3-7 years
Candidates should be from a product-based company only
Job Description
We are looking to expand our DevOps team. This team is responsible for writing scripts to set up infrastructure to support 24x7 availability of the Netradyne services, for setting up monitoring and alerting to troubleshoot any issues reported in multiple environments, and for triaging production issues and providing appropriate and timely responses to customers.
Requirements
- B Tech/M Tech/MS in Computer Science or a related field from a reputed university.
- Total industry experience of around 3-7 years.
- Programming experience in Python, Ruby, Perl or equivalent is a must.
- Good knowledge and experience with configuration management tools (e.g., Ansible)
- Good knowledge and experience with provisioning tools (e.g., Terraform)
- Good knowledge and experience with AWS.
- Experience with setting up CI/CD pipelines.
- Experience, in an individual capacity, managing multiple live SaaS applications with high volume, high load, low latency, and high availability (24x7).
- Experience setting up web servers like Apache, application servers like Tomcat/WebSphere, and databases (RDBMS and NoSQL).
- Good knowledge of UNIX (Linux) administration tools.
- Good knowledge of security best practices and relevant tools (firewalls, VPNs, etc.).
- Good knowledge of networking concepts and UNIX administration tools.
- Ability to troubleshoot issues quickly is required.
- 3-6 years of relevant work experience in a DevOps role.
- Deep understanding of Amazon Web Services or equivalent cloud platforms.
- Proven record of infra automation and programming skills in any of these languages: Python, Ruby, Perl, JavaScript.
- Implement DevOps Industry best practices and the application of procedures to achieve a continuously deployable system
- Continuously improve and increase the capabilities of the CI/CD pipeline
- Support engineering teams in the implementation of life-cycle infrastructure solutions and documentation operations in order to meet the engineering department's quality and standards
- Participate in production outages, handle complex issues, and work towards resolution
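For the kind of pipeline and outage-handling work described above, a capped exponential backoff schedule for re-checking a service before escalating is a common building block. The retry count, base delay, and cap below are illustrative defaults, not values from any particular deployment.

```python
def backoff_schedule(retries=5, base=1.0, cap=30.0):
    """Capped exponential backoff delays (in seconds) for re-checking a
    service health endpoint before paging, a common alerting pattern."""
    return [min(cap, base * (2 ** i)) for i in range(retries)]

print(backoff_schedule())  # → [1.0, 2.0, 4.0, 8.0, 16.0]
```

Spacing retries this way avoids hammering a recovering service while still escalating quickly when it stays down.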







