50+ DevOps Jobs in India

CTC: up to 20 LPA
Required Skills:
- Strong experience in SAP EWM Technical Development.
- Proficiency in ABAP (Reports, Interfaces, Enhancements, Forms, BAPIs, BADIs).
- Hands-on experience with RF developments, PPF framework, and queue monitoring.
- Understanding of EWM master data, inbound/outbound processes, and warehouse tasks.
- Experience with SAP integration technologies (IDoc, ALE, Web Services).
- Good analytical, problem-solving, and communication skills.
Nice to Have:
- Exposure to S/4HANA EWM.
- Knowledge of Functional EWM processes.
- Experience in Agile / DevOps environments.
If interested, kindly share your updated resume at 82008 31681.

A global technology-driven performance apparel retailer

Core Focus:
- Operate with a full DevOps mindset, owning the software lifecycle from development through production support.
- Participate in Agile ceremonies and global team collaboration, including on-call support.
Mandatory/Strong Technical Skills (6–8+ years of relevant experience required):
- Java: 4.5 to 6.5 years of experience.
- AWS: Strong knowledge of cloud technologies with a minimum of 2 years of working experience.
- Kafka: At least 2 years of strong working experience with data integration technologies.
- Databases: Experience with SQL/NoSQL databases (e.g., Postgres, MongoDB).
Other Key Technologies & Practices:
- Python, Spring Boot, and API-based system design.
- Containers/Orchestration (Kubernetes).
- CI/CD and observability tools (GitLab, Splunk, Datadog).
- Familiarity with Terraform and Airflow.
- Experience in Agile methodology (Jira, Confluence).
🚀 Hiring: PL/SQL Developer
⭐ Experience: 5+ Years
📍 Location: Pune
⭐ Work Mode: Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
What We’re Looking For:
☑️ Hands-on PL/SQL developer with strong database and scripting skills, ready to work onsite and collaborate with cross-functional financial domain teams.
Key Skills:
✅ Must Have: PL/SQL, SQL, Databases, Unix/Linux & Shell Scripting
✅ Nice to Have: DevOps tools (Jenkins, Artifactory, Docker, Kubernetes), AWS/Cloud, basic Python, AML/Fraud/Financial domain, Actimize (AIS/RCM/UDM)
Role: Lead Java Developer
Work Location: Chennai, Pune
Experience: 8+ years
Hybrid (3 days office and 2 days home)
Type: Full-time
Skill Set: Java + Spring Boot + SQL + Microservices + DevOps
Job Responsibilities:
Design, develop, and maintain high-quality software applications using Java and Spring Boot.
Develop and maintain RESTful APIs to support various business requirements.
Write and execute unit tests using TestNG to ensure code quality and reliability.
Work with NoSQL databases to design and implement data storage solutions.
Collaborate with cross-functional teams in an Agile environment to deliver high-quality software solutions.
Utilize Git for version control and collaborate with team members on code reviews and merge requests.
Troubleshoot and resolve software defects and issues in a timely manner.
Continuously improve software development processes and practices.
Description:
8+ years of professional experience in backend development using Java, including experience leading a team.
Strong expertise in Spring Boot, Apache Camel, Hibernate, JPA, and REST API design
Hands-on experience with PostgreSQL, MySQL, or other SQL-based databases
Working knowledge of AWS cloud services (EC2, S3, RDS, etc.)
Experience in DevOps activities.
Proficiency in using Docker for containerization and deployment.
Strong understanding of object-oriented programming, multithreading, and performance tuning
Self-driven and capable of working independently with minimal supervision
We are seeking a disciplined, serious, and highly skilled Lead JavaScript Engineer with hands-on experience in SvelteKit and Generative AI (RAG & MCP).
Key Responsibilities:
- Lead and mentor a small team of 5 Software Developers.
- Architect, develop, and maintain web and native applications.
- Integrate Generative AI solutions (RAG & MCP).
- Manage authentication, security, and DevOps pipelines.
- Enforce disciplined coding practices, processes, and timely delivery.
- Collaborate with product and design teams to implement scalable solutions.
Requirements:
- Strong expertise in JavaScript/TypeScript, primarily with the SvelteKit framework, is a must.
- Hands-on experience with Generative AI (RAG & MCP).
- Experience leading small teams.
- Deep understanding of web app architecture, security, and DevOps.
- Knowledge of native app frameworks is a plus.
- Disciplined, self-driven, and result-oriented mindset.
- Local candidates preferred.
Additional Notes:
- Full-time office role, 6 days/week.
- Long-term commitment required; frequent job hoppers need not apply.
- Immediate joining.

Dear Candidate,
Greetings from Wissen Technology.
We have an exciting Job opportunity for SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.
About Wissen Technology:
- The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
- Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
- Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
- Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
- Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
- We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
- Wissen Technology has been certified as a Great Place to Work®.
- Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
- Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
- The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
Job Description:
Please find below details:
Experience - 6+ Years
Location- Bangalore/Mumbai/Pune
Team Responsibilities
The successful candidate will be part of the S&C – SRE Team. Our team provides Tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams such as Client Services, Product, and Research, as well as Infrastructure/Technology and Application Development teams, to perform environment and application maintenance and support.
Key Responsibilities
• Provide Tier 2/3 product technical support.
• Building software to help operations and support activities.
• Manage system/software configurations and troubleshoot environment issues.
• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.
• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.
• Collaborate with development and software testing teams throughout the release process.
• Analyze release and deployment processes to identify key areas for automation and optimization.
• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production/Non-Production).
• Must spend a minimum of one week per month on call to help with off-hour emergencies and maintenance activities.
Required skills and experience
• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)
• Master’s degree a plus
• 6-8 years of experience in a Production Support / Application Management / Application Development (support/maintenance) role.
• Excellent problem-solving/troubleshooting skills, fast learner
• Strong knowledge of Unix Administration.
• Strong scripting skills in Shell, Python, and Batch are a must (a minimal Python example follows this list).
• Strong Database experience – Oracle
• Strong knowledge of Software Development Life Cycle
• PowerShell is nice to have
• Software development skillsets in Java or Ruby.
• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have
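For illustration only, here is a minimal sketch of the kind of scripted database health check this role might automate in Python; the DSN, user, and password below are hypothetical placeholders, not Wissen's actual environment details.

```python
# A minimal sketch of an Oracle connectivity check using python-oracledb.
# All connection details are hypothetical placeholders.
import oracledb


def check_oracle(dsn: str, user: str, password: str) -> bool:
    """Return True if a trivial query succeeds against the target database."""
    try:
        with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT 1 FROM dual")
                return cur.fetchone()[0] == 1
    except oracledb.Error as exc:
        print(f"Oracle check failed: {exc}")
        return False


if __name__ == "__main__":
    ok = check_oracle("dbhost.example.com/ORCLPDB1", "monitor_user", "secret")
    print("healthy" if ok else "unhealthy")
```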
Job Specification:
- Job Location - Noida
- Experience - 2-5 Years
- Qualification - B.Tech, BE, MCA (Technical background required)
- Working Days - 5
- Job nature - Permanent
- Role - IT Cloud Engineer
- Proficient in Linux.
- Hands on experience with AWS cloud or Google Cloud.
- Knowledge of container technology like Docker.
- Expertise in scripting languages (shell scripting or Python scripting).
- Working knowledge of the LAMP/LEMP stack, networking, and version control systems like GitLab or GitHub.
Job Description:
The incumbent would be responsible for:
- Deployment of various infrastructures on Cloud platforms like AWS, GCP, Azure, OVH etc.
- Server monitoring, analysis and troubleshooting.
- Deploying multi-tier architectures using microservices.
- Integration of Container technologies like Docker, Kubernetes etc as per application requirement.
- Automating workflow with python or shell scripting.
- CI and CD integration for application lifecycle management.
- Hosting and managing websites on Linux machines.
- Frontend, backend and database optimization.
- Protecting operations by keeping information confidential.
- Providing information by collecting, analyzing, and summarizing development and service issues.
- Preparing and installing solutions by determining and designing system specifications, standards, and programming.

About Us:
TVL Media is a creative digital agency helping brands grow through design, technology, and marketing. We work with e-commerce and lifestyle businesses to build powerful digital experiences and scalable solutions.
Role Overview:
We’re looking for a passionate Full Stack Developer Intern (MERN) who’s eager to work on real-world web applications and internal tools. You’ll collaborate with our design and product teams to build, test, and deploy dynamic applications using the MERN stack.
Key Responsibilities:
- Develop and maintain front-end components using React.js.
- Build scalable back-end APIs using Node.js and Express.js.
- Manage and query databases with MongoDB.
- Collaborate with designers and developers to integrate UI/UX designs.
- Test, debug, and optimize applications for performance.
- Support deployment and version control using Git and GitHub.
Requirements:
- Basic understanding of MongoDB, Express.js, React.js, Node.js.
- Familiarity with RESTful APIs and JSON.
- Knowledge of HTML, CSS, and JavaScript fundamentals.
- Strong problem-solving and analytical skills.
- Pursuing or recently completed a degree in Computer Science, IT, or related field.
- Experience with Git and basic deployment tools is a plus.
Perks:
- Internship certificate & letter of recommendation (based on performance).
- Opportunity to convert to a full-time role.
- Work on live projects and build portfolio-worthy applications.
- Mentorship from experienced developers.
Profile: Sr. DevOps Engineer
Location: Gurugram
Experience: 5+ Years
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell).
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate DevOps best practices, automation, and continuous improvement
Job Title : DevOps Engineer
Experience : 3+ Years
Location : Mumbai
Employment Type : Full-time
Job Overview :
We’re looking for an experienced DevOps Engineer to design, build, and manage Kubernetes-based deployments for a microservices data discovery platform.
The ideal candidate has strong hands-on expertise with Helm, Docker, CI/CD pipelines, and cloud networking — and can handle complex deployments across on-prem, cloud, and air-gapped environments.
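As a hedged illustration of the deployment-verification work described above, the sketch below uses the official Kubernetes Python client to confirm that all deployments in a namespace are fully rolled out after a Helm or Argo CD release; the namespace name is hypothetical and the snippet is not the employer's actual tooling.

```python
# A minimal sketch: verify that every deployment in a namespace has all
# replicas ready. Assumes a reachable cluster and a valid kubeconfig.
from kubernetes import client, config


def unready_deployments(namespace: str = "default"):
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    problems = []
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            problems.append(f"{dep.metadata.name}: {ready}/{desired} ready")
    return problems


if __name__ == "__main__":
    issues = unready_deployments("data-discovery")  # hypothetical namespace
    print("all deployments ready" if not issues else "\n".join(issues))
```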
Mandatory Skills :
✅ Helm, Kubernetes, Docker
✅ Jenkins, ArgoCD, GitOps
✅ Cloud Networking (VPCs, bare metal vs. VMs)
✅ Storage (MinIO, Ceph, NFS, S3/EBS)
✅ Air-gapped & multi-tenant deployments
Key Responsibilities :
- Build and customize Helm charts from scratch.
- Implement CI/CD pipelines using Jenkins, ArgoCD, GitOps.
- Manage containerized deployments across on-prem/cloud setups.
- Work on air-gapped and restricted environments.
- Optimize for scalability, monitoring, and security (Prometheus, Grafana, RBAC, HPA).


Exp: 10 to 15 Years
CTC: up to 25 LPA
Core skill required:
- In-depth knowledge of Angular 8 or above, TypeScript, JavaScript, HTML, and CSS
- Should have adequate knowledge of API Development Technologies to guide the Team to develop the API code and get it tested
- Excellent communication and interpersonal skills, with the ability to lead and mentor technical teams
- Should have good knowledge of the current Technology trends to implement techniques which can enhance the security, performance and stability of the product
- Should have good knowledge in preparing the Low Level Design and ensure the developers are having full understanding before commencement of work
- Good Knowledge of the DevOps process for CI/CD will be an added advantage
- Should have a solid understanding of the SDLC process using Waterfall, Iterative, or Agile methodology
- Good Knowledge of Quality Processes and Quality Standards
- Have experience in handling risk and providing mitigation strategies to the Product Manager
Primary skills:
- 8+ years of experience with Angular 8+ and TypeScript
- Minimum 5 years of experience in web application development with HTML, CSS, JavaScript/jQuery, Entity Framework, and LINQ queries
- Been in a lead role and led a team of 3-5 people for a period of 1-2 years
- Must have good exposure to query writing and DB management, including writing stored procedures/user-defined functions
- Should have a very good understanding of the project architecture
- Should provide Technical guidance to the team to get the task completed on time.
- Assist project manager in the project coordination/management
Kindly share your resume at 82008 31681

Data Engineer
Experience: 4–6 years
Key Responsibilities
- Design, build, and maintain scalable data pipelines and workflows.
- Manage and optimize cloud-native data platforms on Azure with Databricks and Apache Spark (1–2 years).
- Implement CI/CD workflows and monitor data pipelines for performance, reliability, and accuracy.
- Work with relational databases (Sybase, DB2, Snowflake, PostgreSQL, SQL Server) and ensure efficient SQL query performance.
- Apply data warehousing concepts including dimensional modelling, star schema, data vault modelling, Kimball and Inmon methodologies, and data lake design.
- Develop and maintain ETL/ELT pipelines using open-source frameworks such as Apache Spark and Apache Airflow (a minimal Airflow sketch follows this list).
- Integrate and process data streams from message queues and streaming platforms (Kafka, RabbitMQ).
- Collaborate with cross-functional teams in a geographically distributed setup.
- Leverage Jupyter notebooks for data exploration, analysis, and visualization.
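As referenced in the responsibilities above, here is a minimal, illustrative Airflow DAG sketch. It assumes Airflow 2.4+; the DAG name and task logic are hypothetical placeholders, not the actual pipeline.

```python
# A minimal Airflow 2.x DAG sketch: extract a batch, then transform it.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull a batch from a source system (e.g., a topic or SQL table).
    return [{"id": 1, "value": 42}]


def transform(ti, **context):
    rows = ti.xcom_pull(task_ids="extract")
    # Placeholder: apply business rules before loading into the warehouse.
    print(f"transformed {len(rows)} rows")


with DAG(
    dag_id="example_etl",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```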
Required Skills
- 4+ years of experience in data engineering or a related field.
- Strong programming skills in Python with experience in Pandas, NumPy, Flask.
- Hands-on experience with pipeline monitoring and CI/CD workflows.
- Proficiency in SQL and relational databases.
- Familiarity with Git for version control.
- Strong communication and collaboration skills with ability to work independently.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
About the Role
We are seeking a Site Reliability Engineer (SRE) with a minimum of 2 years of experience who is passionate about monitoring, observability, and ensuring system reliability. The ideal candidate will have strong expertise in Grafana, Prometheus, Opensearch, and AWS CloudWatch, with the ability to design insightful dashboards and proactively optimize system performance.
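As a hedged illustration of the observability work this role involves, a service could expose custom Prometheus metrics for Grafana dashboards as in the Python sketch below; the metric names and port are hypothetical, not MyOperator's actual instrumentation.

```python
# A minimal sketch of exporting custom metrics with prometheus_client.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


def handle_request(endpoint: str) -> None:
    with LATENCY.time():                       # records the observed duration
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(endpoint=endpoint).inc()


if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:
        handle_request("/api/v1/ping")
```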
Key Responsibilities
- Design, develop, and maintain monitoring and alerting systems using Grafana, Prometheus, and AWS CloudWatch.
- Create and optimize dashboards to provide actionable insights into system and application performance.
- Collaborate with development and operations teams to ensure high availability and reliability of services.
- Proactively identify performance bottlenecks and drive improvements.
- Continuously explore and adopt new monitoring/observability tools and best practices.
Required Skills & Qualifications
- Minimum 2 years of experience in SRE, DevOps, or related roles.
- Hands-on expertise in Grafana, Prometheus, and AWS CloudWatch.
- Proven experience in dashboard creation, visualization, and alerting setup.
- Strong understanding of system monitoring, logging, and metrics collection.
- Excellent problem-solving and troubleshooting skills.
- Quick learner with a proactive attitude and adaptability to new technologies.
Good to Have (Optional)
- Experience with AWS services beyond CloudWatch.
- Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines.
- Scripting knowledge (Python, Bash, or similar).
Why Join Us
At MyOperator, you will play a key role in ensuring the reliability, scalability, and performance of systems that power AI-driven business communication for leading global brands. You’ll work in a fast-paced, innovation-driven environment where your expertise will directly impact thousands of businesses worldwide.



Job Title : Senior Technical Consultant (Polyglot)
Experience Required : 5 to 10 Years
Location : Bengaluru / Chennai (Remote Available)
Positions : 2
Notice Period : Immediate to 1 Month
Role Overview :
We seek passionate polyglot developers (Java/Python/Go) who love solving complex problems and building elegant digital products.
You’ll work closely with clients and teams, applying Agile practices to deliver impactful digital experiences.
Mandatory Skills :
Strong in Java/Python/Go (any 2), with frontend experience in React/Angular, plus knowledge of HTML, CSS, CI/CD, Unit Testing, and DevOps.
Key Skills & Requirements :
Backend (80% Focus) :
- Strong expertise in Java, Python, or Go (at least 2 backend stacks required).
- Additional exposure to Node.js, Ruby on Rails, or Rust is a plus.
- Hands-on experience in building scalable, high-performance backend systems.
Frontend (20% Focus) :
- Proficiency in React or Angular
- Solid knowledge of HTML, CSS, JavaScript
Other Must-Haves :
- Strong understanding of unit testing, CI/CD pipelines, and DevOps practices.
- Ability to write clean, testable, and maintainable code.
- Excellent communication and client-facing skills.
Roles & Responsibilities :
- Tackle technically challenging and mission-critical problems.
- Collaborate with teams to design and implement pragmatic solutions.
- Build prototypes and showcase products to clients.
- Contribute to system design and architecture discussions.
- Engage with the broader tech community through talks and conferences.
Interview Process :
- Technical Round (Online Assessment)
- Technical Round with Client (Code Pairing)
- System Design & Architecture (Build from Scratch)
✅ This is a backend-heavy polyglot developer role (80% backend, 20% frontend).
✅ The right candidate is hands-on, has multi-stack expertise, and thrives in solving complex technical challenges.
🚀 We’re Hiring: Senior Software Engineer – Backend | Remote | Full-time
Are you a backend engineering expert who thrives in high-growth startup environments?
Do you enjoy solving complex challenges with the latest technologies like Java 18+, Spring Boot 3+, and scalable microservices?
We’re looking for a Senior Software Engineer – Backend to help us build a world-class data science platform that powers cutting-edge AI solutions.
What You’ll Do:
🔹 Build and optimize scalable, secure backend systems
🔹 Collaborate with product owners & architects to shape business solutions
🔹 Deliver high-quality, production-ready code with best practices (unit testing, CI/CD, automation)
🔹 Work with modern tools like Kubernetes, Docker, NodeJS, React
🔹 Contribute to a high-performance engineering culture and drive innovation
What We’re Looking For:
✔️ 6+ years of experience with Java/Python, Spring Boot, REST APIs, microservices
✔️ Strong CS fundamentals, algorithms, and system design skills
✔️ Experience in secure web applications & scalable backend architectures
✔️ Knowledge of cloud (AWS/GCP/Azure), GitHub Actions, and Unix scripting
✔️ Startup mindset – fast learner, problem solver, impact-driven
🌍 Remote | High-growth environment | Global impact
About Us:
Teknobuilt is an innovative construction technology company accelerating a digital and AI platform that supports all aspects of program management and execution, covering workflow automation, collaborative manual tasks, and siloed systems. Our platform has received innovation awards and grants in Canada, the UK, and S. Korea, and we are at the frontiers of solving key challenges in the built environment and in digital health, safety, and quality.
Teknobuilt's vision is helping the world build better: safely, smartly, and sustainably. We are on a mission to modernize construction by bringing PACE, our Digitally Integrated Project Execution System, and expert services to midsize to large construction and infrastructure projects. PACE is an end-to-end digital solution that helps in real-time project execution, health and safety, quality, and field management for greater visibility and cost savings. PACE enables digital workflows, remote working, and AI-based analytics to bring speed, flow, and surety to project delivery. Our platform has received recognition globally for innovation and we are experiencing a period of significant growth for our solutions.
Job description:
IT Infrastructure & System Administration:
- Manage Windows/Linux servers, desktop systems, Active Directory, DNS, DHCP, and virtual environments (VMware/Hyper-V).
- Monitor system performance and implement improvements for efficiency and availability.
- Oversee patch management, backups, disaster recovery, and security configurations.
- Ensure IT compliance, conduct audits, and maintain detailed documentation.
DevOps & Cloud Operations:
- Design, implement, and manage CI/CD pipelines using Jenkins, GitHub Actions, or similar tools.
- Manage container orchestration using Kubernetes; deploy infrastructure using Terraform.
- Administer and optimize AWS cloud infrastructure.
- Automate deployment, monitoring, and alerting solutions for production environments (a minimal example follows this section).
Security, Maintenance & Support:
- Define and enforce IT and DevOps security policies and procedures.
- Perform root cause analysis (RCA) for system failures and outages.
- Provide Tier 2/3 support and resolve complex system and production issues.
Collaboration & Communication:
- Coordinate IT projects (e.g., upgrades, migrations, cloud implementations).
- Collaborate with engineering and product teams for release cycles and production deployments.
- Maintain clear communication with internal stakeholders and provide regular reporting.
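As referenced above, here is a minimal example of the kind of alerting automation described, using boto3 to create a CloudWatch alarm; the alarm name, instance ID, and SNS topic ARN are hypothetical, and AWS credentials are assumed to be configured.

```python
# A minimal sketch: create a CPU alarm for an EC2 instance with boto3.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="prod-api-high-cpu",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical SNS topic
)
print("alarm created")
```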
Qualification:
8+ years of experience in IT systems administration and/or DevOps roles
Minimum of 8-10 years of experience as a Windows Administrator or in a similar role.
Strong knowledge of Windows Server (2016/2019/2022) and Windows operating systems.
Experience with Active Directory, Group Policy, DNS, DHCP, and other Windows-based services.
Familiarity with virtualization technologies (e.g., VMware, Hyper-V). Proficiency in scripting languages (e.g., PowerShell).
Strong understanding of networking principles and protocols.
Relevant certifications (e.g., MCSA, MCSE) are a plus.
Salary Range: Competitive
Employment Type: Full Time
Location: Mumbai / Navi Mumbai
Qualification: Any graduate or master’s degree in science, engineering or technology
Job Description:
We are looking for a Lead Java Developer – Backend with a strong foundation in software engineering and hands-on experience in designing and building scalable, high-performance backend systems. You’ll be working within our Digital Engineering Studios on impactful and transformative projects in a fast-paced environment.
Key Responsibilities:
- Lead and mentor backend development teams.
- Design and develop scalable backend applications using Java and Spring Boot.
- Ensure high standards of code quality through best practices such as SOLID principles and clean code.
- Participate in pair programming, code reviews, and continuous integration processes.
- Drive Agile, Lean, and Continuous Delivery practices like TDD, BDD, and CI/CD.
- Collaborate with cross-functional teams and clients for successful delivery.
Required Skills & Experience:
- 9–12+ years of experience in backend development (Up to 17 years may be considered).
- Strong programming skills in Java and backend frameworks such as Spring Boot.
- Experience in designing and building large-scale, custom-built, scalable applications.
- Sound understanding of Object-Oriented Design (OOD) and SOLID principles.
- Hands-on experience with Agile methodologies, TDD/BDD, CI/CD pipelines.
- Familiarity with DevOps practices, Docker, Kubernetes, and Infrastructure as Code.
- Good understanding of cloud technologies – especially AWS, and exposure to GCP or Azure.
- Experience working in a product engineering environment is a plus.
- Startup experience or working in fast-paced, high-impact teams is highly desirable.
Job Overview
We are hiring an experienced Senior Database Administrator to manage mission-critical database systems and support the growing scale of our data infrastructure. The ideal candidate will be responsible for maintaining database performance, availability, integrity, and security across various platforms.
Key Responsibilities
- Administer and manage MySQL, MongoDB, PostgreSQL, Oracle, and Redis databases
- Handle performance tuning, query optimization, and indexing strategies (a minimal indexing sketch follows this list)
- Implement and manage database backup, recovery, replication, and high availability setups
- Oversee schema design, user roles, access control, and partitioning
- Conduct vulnerability assessments and implement database security best practices
- Collaborate with DevOps teams to support CI/CD pipelines and cloud-based deployments
- Ensure smooth operation of cloud-hosted databases on AWS (RDS, EC2, etc.)
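As referenced in the responsibilities list, here is a minimal, illustrative sketch of index management and query-plan inspection with PyMongo; the database, collection, and field names are hypothetical.

```python
# A minimal sketch: create a compound index and confirm it is used (IXSCAN).
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # hypothetical URI
orders = client["shop"]["orders"]

# Support a common query pattern with a compound index.
orders.create_index(
    [("customer_id", ASCENDING), ("created_at", ASCENDING)],
    name="customer_created_idx",
)

# Inspect the query plan to confirm the index is used (IXSCAN vs COLLSCAN).
plan = orders.find({"customer_id": 42}).explain()
print(plan["queryPlanner"]["winningPlan"])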
Must-Have Skills
- 6–8 years of proven DBA experience in production environments
- Strong expertise in MySQL (replication, performance tuning, XtraBackup)
- Proficiency in MongoDB (replica sets, sharding, indexing, aggregations, security)
- Hands-on experience with schema management, database monitoring, and partitioning
- Cloud infrastructure knowledge, particularly AWS (RDS, EC2, backups)
- Familiarity with tools such as MySQL Workbench, MongoDB Compass, and pgAdmin
Preferred Skills
- Exposure to PostgreSQL (logical replication, PITR, PostGIS)
- Experience with Oracle (RAC, ASM, RMAN, Data Guard)
- Understanding of Redis and MSSQL
- Scripting knowledge in Python, Bash, or PowerShell
- Experience in CI/CD and DevOps environments
- Database certifications (MySQL, MongoDB, PostgreSQL, Oracle) are a plus

About the Role
NeoGenCode Technologies is looking for a Senior Technical Architect with strong expertise in enterprise architecture, cloud, data engineering, and microservices. This is a critical role demanding leadership, client engagement, and architectural ownership in designing scalable, secure enterprise systems.
Key Responsibilities
- Design scalable, secure, and high-performance enterprise software architectures.
- Architect distributed, fault-tolerant systems using microservices and event-driven patterns.
- Provide technical leadership and hands-on guidance to engineering teams.
- Collaborate with clients, understand business needs, and translate them into architectural designs.
- Evaluate, recommend, and implement modern tools, technologies, and processes.
- Drive DevOps, CI/CD best practices, and application security.
- Mentor engineers and participate in architecture reviews.
Must-Have Skills
- Architecture: Enterprise Solutions, EAI, Design Patterns, Microservices (API & Event-driven)
- Tech Stack: Java, Spring Boot, Python, Angular (recent 2+ years experience), MVC
- Cloud Platforms: AWS, Azure, or Google Cloud
- Client Handling: Strong experience with client-facing roles and delivery
- Data: Data Modeling, RDBMS & NoSQL, Data Migration/Retention Strategies
- Security: Familiarity with OWASP, PCI DSS, InfoSec principles
Good to Have
- Experience with Mobile Technologies (native, hybrid, cross-platform)
- Knowledge of tools like Enterprise Architect, TOGAF frameworks
- DevOps tools, containerization (Docker), CI/CD
- Experience in financial services / payments domain
- Familiarity with BI/Analytics, AI/ML, Predictive Analytics
- 3rd-party integrations (e.g., MuleSoft, BizTalk)
Full Stack Developer Intern
Company: BizDateUp Pvt Ltd
Location: Charni Road, Mumbai
Employment Type: Internship
(Immediate Joining Preferred)
Experience Level: Entry / Mid-Level (0–1 year of experience)
Stipend: As per company standards
About BizDateUp Pvt Ltd
BizDateUp Pvt Ltd is a dynamic and fast-paced technology startup committed to building innovative solutions. We believe in fostering a collaborative environment where our team members can learn, grow, and make a tangible impact on real-world projects. Join us to kickstart your career in a supportive and challenging setting.
About the Role
We are looking for a highly motivated and enthusiastic Full Stack Developer Intern to join our growing tech team. This is an exceptional opportunity for aspiring developers to gain hands-on experience with modern web technologies, contribute to live projects, and receive mentorship from experienced professionals. If you're passionate about building scalable full-stack applications and eager to learn in a startup environment, we encourage you to apply.
Key Responsibilities
As a Full Stack Developer Intern, you'll be an integral part of our development team, contributing to the full software development lifecycle.
Your responsibilities will include:
• Code Review and Analysis:
  o Review and analyze the existing codebase to understand functionality, identify areas for optimization, and ensure adherence to best practices.
  o Participate in code reviews with senior developers, providing constructive feedback and learning from their insights.
• Front-End Development:
  o Assist in building responsive and intuitive user interfaces using React.js.
  o Implement new features and functionalities based on design specifications and user stories.
  o Debug and resolve issues related to the front-end application.
• Back-End Development:
  o Support the development of robust and scalable server-side applications using Node.js and Express.js.
  o Contribute to the design and implementation of RESTful APIs to facilitate communication between front-end and back-end systems.
  o Work with PostgreSQL to manage and query databases, ensuring data integrity and efficient retrieval.
• Collaboration and Learning:
  o Collaborate closely with senior developers, product managers, and other team members to understand project requirements and deliver high-quality solutions.
  o Actively participate in team meetings, stand-ups, and brainstorming sessions.
  o Proactively seek feedback and mentorship from experienced developers to enhance your technical skills and understanding.
• Testing and Debugging:
  o Assist in writing unit and integration tests to ensure the reliability and stability of applications.
  o Identify, analyze, and resolve bugs and performance issues across both front-end and back-end systems.
• Version Control:
  o Utilize Git for version control, including branching, merging, and pull requests, to manage the codebase effectively within a team environment.
• Documentation:
  o Contribute to the creation and maintenance of technical documentation for features, APIs, and system architecture.
Required Skills
To succeed in this role, you should have a foundational understanding and practical experience with the following technologies:
• Programming Languages: Strong command of TypeScript and JavaScript.
• Frontend Framework: Proficiency in React.js.
• Backend Technologies: Experience with Node.js and Express.js.
• Database: Familiarity with PostgreSQL.
• Version Control: Working knowledge of Git.
• APIs: Understanding of RESTful APIs.
Eligibility Criteria
• Currently pursuing or have completed a BCA / B.Tech / B.Sc in Computer Science or a closely related field.
• Possess strong problem-solving skills and a genuine eagerness to learn and adapt to new technologies.
• Demonstrate a strong passion for building full-stack applications and a desire to contribute to innovative projects.
Perks & Benefits
We are committed to providing a rewarding internship experience. As a Full Stack Developer Intern at BizDateUp Pvt Ltd, you will enjoy:
• Hands-on Experience: Work directly with modern web technologies and contribute to real-time projects.
• Mentorship: Receive guidance and support from experienced senior developers who are invested in your growth.
• Learning Opportunity: Gain invaluable insights into the full software development lifecycle in a fast-paced startup.
• Internship Certificate: A certificate of internship will be provided upon successful completion of your tenure.
• Potential for Full-Time Conversion: Exceptional performance during your internship may lead to an offer for a full-time position.

Job Title: Technical Content Writer
Experience: 3–5 Years
Location: [Insert Location / Remote / Hybrid]
Industry: Enterprise Application Development / IT Services
Employment Type: Full-Time
About Us
We’re a fast-growing technology startup on a mission to build impactful, scalable, and custom enterprise applications that solve real business challenges. From intelligent automation to full-stack product development, we partner with leading enterprises to build digital solutions that power growth.
As we expand, we’re looking for a Technical Content Writer who can bring our technology stories to life and position our expertise in front of global enterprise decision-makers.
Role Overview
You will be responsible for crafting high-quality, technically sound, and engaging content that showcases our capabilities, highlights client success stories, and strengthens our brand voice. Your content will help educate, inform, and convert enterprise audiences.
This is a hands-on role where you’ll work closely with founders, engineers, and sales teams to turn technical ideas into powerful narratives.
Key Responsibilities
- Write and edit content such as blogs, white papers, solution briefs, website copy, case studies, and product collateral.
- Collaborate with tech teams to translate complex software solutions (custom apps, cloud-native solutions, AI/ML, APIs, automation) into clear and compelling content.
- Support marketing campaigns with SEO-optimized articles, email content, and social media posts.
- Interview subject matter experts and clients to create authentic success stories and thought leadership content.
- Build content that communicates our differentiation in a competitive IT services landscape.
- Own and maintain a content calendar and ensure timely delivery.
- Ensure consistency of tone, voice, and brand across all content.
What We’re Looking For
- 3–5 years of experience in technical content writing, preferably in a startup or IT services environment.
- Excellent written and verbal communication skills with a keen eye for detail.
- Strong understanding of enterprise technology trends – application development, cloud, APIs, DevOps, microservices, and more.
- Ability to simplify technical jargon and communicate to business and tech audiences alike.
- Proficiency with SEO writing, keyword research tools, and CMS platforms.
- Self-starter with the ability to manage multiple priorities in a fast-paced environment.
Nice to Have
- Experience writing for B2B SaaS or enterprise software products.
- Exposure to agile development environments and product teams.
- Familiarity with design tools (e.g., Canva or Figma) to create visual content briefs.
- Experience contributing to RFPs, pitch decks, or sales enablement material.
Why Join Us?
- Be part of a high-impact team building enterprise-grade products from scratch.
- Flat hierarchy, ownership-driven culture, and direct access to founders.
- Flexible work environment with performance-led growth.
- Opportunity to shape the voice of a fast-scaling tech brand.
- Experience comparable to a DevOps SRE providing SME-level application or platform support, with responsibility for designing and automating operational procedures and best practices
- Experience writing Python and shell scripts to perform health checks and automation (a minimal example follows this list)
- Experience with Linux System Administration (preferably Red Hat)
- Hands-on experience with multi-tenant hosting environments for middleware applications (for example: centrally managed platform or infrastructure as a service)
- Experience with implementing observability, monitoring, and alerting tools
- Excellent written and oral English communication skills. The candidate must write user-facing documentation, prepare and deliver presentations to an internal audience and effectively interact with upper management, colleagues, and customers
- Independent problem-solving skills, self-motivated, and a mindset for taking ownership
- A minimum of 5 years of infrastructure production support or DevOps experience
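As referenced above, here is a minimal sketch of the kind of Python health-check script this role writes; the hosts, ports, and disk threshold are hypothetical.

```python
# A minimal health-check sketch: TCP reachability of brokers plus disk usage.
import shutil
import socket

SERVICES = {
    "kafka-broker": ("broker1.internal", 9092),  # hypothetical hosts/ports
    "ibm-mq": ("mq1.internal", 1414),
}
DISK_THRESHOLD = 0.90  # alert when a filesystem is more than 90% full


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def disk_usage_ratio(path: str = "/") -> float:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total


if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        print(f"{name}: {'up' if port_open(host, port) else 'DOWN'}")
    ratio = disk_usage_ratio("/")
    print(f"root filesystem {ratio:.0%} full"
          + (" - ALERT" if ratio > DISK_THRESHOLD else ""))
```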
Additional Technical Skills
- Experience with broker-based messaging infrastructure such as Apache Kafka or IBM MQ (or similar technology like ActiveMQ, Azure Service Bus), including configuration and performance tuning
- Experience with public/private cloud and containerization technologies (e.g., Kubernetes)
- Experience with Agile development methodology, CI/CD, and automated build pipelines
- Experience with DevOps methodology (e.g., The Phoenix Project)
- Experience with tools such as Jira, Confluence, and ServiceNow
- Experience working with JSON, XML, Google Protocol Buffers, Avro, FIX
- Experience with troubleshooting tools such as tcpdump and Wireshark
- Experience with NoSQL databases such as MongoDB and Redis
- Interest in and understanding of emerging IT trends
- Experience with system architecture design
Job Title: Senior DevOps Engineer
Location: Sector 39, Gurgaon (Onsite)
Employment Type: Full-Time
Working Days: 6 Days (Alternate Saturdays Working)
Experience Required: 5+ Years
Team Role: Lead & Mentor a team of 3–4 engineers
About the Role
We are seeking a highly skilled Senior DevOps Engineer to lead our infrastructure and automation initiatives while mentoring a small team. This role involves setting up and managing physical and cloud-based servers, configuring storage systems, and implementing automation to ensure high system availability and reliability. The ideal candidate will have strong Linux administration skills, hands-on experience with DevOps tools, and the leadership capabilities to guide and grow the team.
Key Responsibilities
Infrastructure & Server Management (60%)
- Set up, configure, and manage bare-metal (physical) servers as well as cloud-based environments.
- Configure network bonding, firewalls, and system security for optimal performance and reliability.
- Implement and maintain high-availability solutions for mission-critical systems.
Queue Systems (Kafka / RabbitMQ) (15%)
- Deploy and manage message queue systems to support high-throughput, real-time data exchange (a minimal producer sketch follows this subsection).
- Ensure reliable event-driven communication between distributed services.
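As referenced above, here is a minimal Kafka producer sketch using the confluent-kafka Python client; the broker address, topic, and payload are hypothetical.

```python
# A minimal sketch: produce one message and confirm delivery.
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker1.internal:9092"})  # hypothetical broker


def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}] @ offset {msg.offset()}")


producer.produce(
    "device-events",                 # hypothetical topic
    key="device-42",
    value=b'{"temp": 21.5}',
    on_delivery=delivery_report,
)
producer.flush()  # block until all queued messages are delivered
```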
Storage Systems (SAN/NAS) (15%)
- Configure and manage Storage Area Networks (SAN) and Network Attached Storage (NAS).
- Optimize storage performance, redundancy, and availability.
Database Administration (5%)
- Administer and optimize MariaDB, MySQL, MongoDB, Redis, and Elasticsearch.
- Handle backup, recovery, replication, and performance tuning.
General DevOps & Automation
- Deploy product updates, patches, and fixes while ensuring minimal downtime.
- Design and manage CI/CD pipelines using Jenkins or similar tools.
- Administer and automate workflows with Docker, Kubernetes, Ansible, AWS, and Git.
- Manage web and application servers (Apache httpd, Tomcat).
- Implement monitoring, logging, and alerting systems (Nagios, HAProxy, Keepalived).
- Conduct root cause analysis and implement automation to reduce manual interventions.
- Mentor a team of 3–4 engineers, fostering best practices and continuous improvement.
Required Skills & Qualifications
✅ 5+ years of proven DevOps engineering experience
✅ Strong expertise in Linux administration & shell scripting
✅ Hands-on experience with bare-metal server management & storage systems
✅ Proficiency in Docker, Kubernetes, AWS, Jenkins, Git, and Ansible
✅ Experience with Kafka or RabbitMQ in production environments
✅ Knowledge of CI/CD, automation, monitoring, and high-availability tools (Nagios, HAProxy, Keepalived)
✅ Excellent problem-solving, troubleshooting, and leadership abilities
✅ Strong communication skills with the ability to mentor and lead teams
Good to Have
- Experience in Telecom projects involving SMS, voice, or real-time data handling.

Role: GenAI Full Stack Engineer
Fulltime
Work Location: Remote
Job Description:
• Proficiency in Python and familiarity with AI/GenAI frameworks. Experience with data manipulation libraries like Pandas and NumPy is crucial.
• Specific expertise in implementing and managing large language models (LLMs) is a must.
• FastAPI experience for API development (a minimal sketch follows this list).
• A solid grasp of software engineering principles, including version control (Git), continuous integration and continuous deployment (CI/CD) practices, and automated testing, is required. Experience in MLOps, ML engineering, and Data Science, with a proven track record of developing and maintaining AI solutions, is essential.
• We also need proficiency in DevOps tools such as Docker, Kubernetes, Jenkins, and Terraform, along with advanced CI/CD practices.
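As referenced in the list above, here is a minimal FastAPI sketch for serving an LLM-backed endpoint; the generate() function is a hypothetical placeholder rather than a specific vendor SDK call.

```python
# A minimal FastAPI sketch for an LLM completion endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class PromptRequest(BaseModel):
    prompt: str
    max_tokens: int = 256


class CompletionResponse(BaseModel):
    completion: str


def generate(prompt: str, max_tokens: int) -> str:
    # Placeholder for an LLM call (hosted or self-managed model).
    return f"[model output for: {prompt[:40]}...]"


@app.post("/v1/complete", response_model=CompletionResponse)
def complete(req: PromptRequest) -> CompletionResponse:
    return CompletionResponse(completion=generate(req.prompt, req.max_tokens))

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```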
Job Position: DevOps Engineer
Experience Range: 2 - 3 years
Type: Full-time
Location: India (Remote)
Desired Skills: DevOps, Kubernetes (EKS), Docker, Kafka, HAProxy, MQTT brokers, Redis, PostgreSQL, TimescaleDB, Shell Scripting, Terraform, AWS (API Gateway, ALB, ECS, EKS, SNS, SES, CloudWatch Logs), Prometheus, Grafana, Jenkins, GitHub
Your key responsibilities:
- Collaborate with developers to design and implement scalable, secure, and reliable infrastructure.
- Manage and automate CI/CD pipelines (Jenkins - Groovy Scripts, GitHub Actions), ensuring smooth deployments.
- Containerise applications using Docker and manage workloads on Kubernetes (EKS).
- Work with AWS services (ECS, EKS, API Gateway, SNS, SES, CloudWatch Logs) to provision and maintain infrastructure.
- Implement infrastructure as code using Terraform.
- Set up and manage monitoring and alerting using Prometheus and Grafana.
- Manage and optimize Kafka, Redis, PostgreSQL, TimescaleDB deployments.
- Troubleshoot issues in distributed systems and ensure high availability using HAProxy, load balancing, and failover strategies.
- Drive automation initiatives across development, testing, and production environments (a minimal automation check follows this list).
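As referenced above, here is a minimal sketch of an automated environment check against Redis and PostgreSQL; the connection details are hypothetical and would normally come from secrets management.

```python
# A minimal sketch: verify Redis and PostgreSQL connectivity before a deploy.
import psycopg2
import redis


def check_redis(host: str = "redis.internal", port: int = 6379) -> bool:
    try:
        return redis.Redis(host=host, port=port, socket_timeout=3).ping()
    except redis.RedisError:
        return False


def check_postgres(dsn: str = "dbname=app user=monitor host=pg.internal") -> bool:
    try:
        with psycopg2.connect(dsn, connect_timeout=3) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT 1")
                return cur.fetchone() == (1,)
    except psycopg2.Error:
        return False


if __name__ == "__main__":
    print(f"redis: {'ok' if check_redis() else 'FAIL'}")
    print(f"postgres: {'ok' if check_postgres() else 'FAIL'}")
```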
What you’ll bring
Required:
- 2–3 years of hands-on DevOps experience.
- Strong proficiency in Shell Scripting.
- Practical experience with Docker and Kubernetes (EKS).
- Knowledge of Terraform or other IaC tools.
- Experience with Jenkins pipelines (Groovy scripting preferred).
- Exposure to AWS cloud services (ECS, EKS, API Gateway, SNS, SES, CloudWatch).
- Understanding of microservices deployment and orchestration.
- Familiarity with monitoring/observability tools (Prometheus, Grafana).
- Good communication and collaboration skills.
Nice to have:
- Experience with Kafka, HAProxy, MQTT brokers.
- Knowledge of Redis, PostgreSQL, TimescaleDB.
- Exposure to DevOps best practices in agile environments.
About CoverSelf: We are an InsurTech start-up based out of Bangalore, with a focus on Healthcare. CoverSelf empowers healthcare insurance companies with a truly NEXT-GEN cloud-native, holistic & customizable platform preventing and adapting to the ever-evolving claims & payment inaccuracies. Reduce complexity and administrative costs with a unified healthcare dedicated platform.
Overview about the role: We are looking for a Junior DevOps Engineer who will work on the bleeding edge of technologies. The role primarily involves maintaining, monitoring, securing, and automating our cloud infrastructure and applications. If you have a solid background in Kubernetes and Terraform, we’d love to speak with you.
Responsibilities:
➔ Implement, and maintain application infrastructure, databases, and networks.
➔ Develop and implement automation scripts using Terraform for infrastructure deployment (a minimal drift-check sketch follows this list).
➔ Implement and maintain containerized applications using Docker and Kubernetes.
➔ Work with other DevOps Engineers in the team on deploying applications, provisioning infrastructure, Automation, routine audits, upgrading systems, capacity planning, and benchmarking.
➔ Work closely with our Engineering Team to ensure seamless integration of new tools and perform day-to-day activities which can help developers deploy and release their code seamlessly.
➔ Respond to service outages/incidents and ensure system uptime requirements are met.
➔ Ensure the security and compliance of our applications and infrastructure.
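As referenced above, here is a minimal sketch of Terraform-based drift detection driven from Python; it assumes the Terraform CLI is on PATH, and the module path is hypothetical. The -detailed-exitcode flag returns 0 for no changes, 1 for an error, and 2 when changes are pending.

```python
# A minimal sketch: run `terraform plan` and interpret its exit code.
import subprocess
import sys


def terraform_drift(workdir: str) -> int:
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false"],
        cwd=workdir,
    )
    return result.returncode


if __name__ == "__main__":
    code = terraform_drift("./infrastructure")  # hypothetical module path
    if code == 2:
        print("drift detected: infrastructure differs from code")
    elif code == 1:
        print("terraform plan failed")
        sys.exit(1)
    else:
        print("no changes: infrastructure matches code")
```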
Requirements:
➔ Must have a B.Tech degree
➔ Must have at least 2 years of experience as a DevOps engineer
➔ Operating Systems: Good understanding of any of the UNIX/Linux platforms; Windows is good to have.
➔ Source Code Management: Expertise in Git for version control and managing branching strategies.
➔ Networking: Basic understanding of network fundamentals like networks, DNS, ports, routes, NAT gateways, and VPN.
➔ Cloud Platforms: Should have a minimum of 2 years of experience working with AWS; an understanding of other cloud platforms, such as Microsoft Azure and Google Cloud Platform, is good to have.
➔ Infrastructure Automation: Experience with Terraform to automate infrastructure provisioning and configuration.
➔ Container Orchestration: Must have at least 1 year of experience in managing Kubernetes clusters.
➔ Containerization: Experience in containerization of applications using Docker.
➔ CI/CD and Scripting: Experience with CI/CD concepts and tools (e.g., Gitlab CI) and scripting languages like Python or Shell for automation.
➔ Monitoring and Observability: Familiarity with monitoring tools like Prometheus, Grafana, CloudWatch, and troubleshooting using logs and metrics analysis.
➔ Security Practices: Basic understanding of security best practices in a DevOps environment and Integration of security into the CI/CD pipeline (DevSecOps).
➔ Databases: Good to have knowledge of one of the databases like MySQL, Postgres, or MongoDB.
➔ Problem Solving and Troubleshooting: Debugging and troubleshooting skills for resolving issues in development, testing, and production environments.
Work Location: Jayanagar - Bangalore.
Work Mode: Work from Office.
Benefits: Best in the Industry Compensation, Friendly & Flexible Leave Policy, Health Benefits, Certifications & Courses Reimbursements, Chance to be part of rapidly growing start-up & the next success story, and many more.
Additional Information: At CoverSelf, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires someone who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. You’ll lead modernization initiatives, shape infrastructure strategy, and work with cutting-edge cloud-native technologies.
🚀 Key Responsibilities
Infrastructure Design & Implementation
- Architect and design enterprise-grade Kubernetes clusters across AWS, Azure, and GCP.
- Build production-ready Kubernetes infrastructure with HA, scalability, and security best practices.
- Implement Infrastructure as Code with Terraform, Helm, and GitOps workflows.
- Set up monitoring, logging, and observability for Kubernetes workloads.
- Design and execute backup and disaster recovery strategies for containerized applications.
Leadership & Team Management
- Lead a team of 3–4 DevOps engineers, providing technical mentorship.
- Drive best practices in containerization, orchestration, and cloud-native development.
- Collaborate with development teams to optimize deployment strategies.
- Conduct code reviews and maintain infrastructure quality standards.
- Build knowledge-sharing culture with documentation and training.
Operational Excellence
- Manage and scale CI/CD pipelines integrated with Kubernetes.
- Implement security policies (RBAC, network policies, container scanning).
- Optimize cluster performance and cost-efficiency.
- Automate operations to minimize manual interventions.
- Ensure 99.9% uptime for production workloads (a minimal SLO check sketch follows this list).
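As referenced above, here is a minimal sketch of checking a 99.9% availability target against the Prometheus HTTP API; the Prometheus address and job label are hypothetical.

```python
# A minimal sketch: query 30-day availability of a target via PromQL.
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # hypothetical service address
QUERY = 'avg_over_time(up{job="api-gateway"}[30d])'  # hypothetical job label


def availability() -> float:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


if __name__ == "__main__":
    avail = availability()
    print(f"30-day availability: {avail:.4%}"
          + ("" if avail >= 0.999 else " - below 99.9% SLO"))
```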
Strategic Planning
- Define the infrastructure roadmap aligned with business needs.
- Evaluate and adopt new cloud-native technologies.
- Perform capacity planning and cloud cost optimization.
- Drive risk assessment and mitigation strategies.
🛠 Must-Have Technical Skills
Kubernetes Expertise
- 6+ years of hands-on Kubernetes experience in production.
- Deep knowledge of Kubernetes architecture (etcd, API server, scheduler, kubelet).
- Advanced Kubernetes networking (CNI, Ingress, Service mesh).
- Strong grasp of Kubernetes storage (CSI, PVs, StorageClasses).
- Experience with Operators and Custom Resource Definitions (CRDs).
Infrastructure as Code
- Terraform (advanced proficiency).
- Helm (developing and managing complex charts).
- Config management tools (Ansible, Chef, Puppet).
- GitOps workflows (ArgoCD, Flux).
Cloud Platforms
- Hands-on experience with at least 2 of the following:
- AWS: EKS, EC2, VPC, IAM, CloudFormation
- Azure: AKS, VNets, ARM templates
- GCP: GKE, Compute Engine, Deployment Manager
CI/CD & DevOps Tools
- Jenkins, GitLab CI, GitHub Actions, Azure DevOps
- Docker (advanced optimization and security practices)
- Container registries (ECR, ACR, GCR, Docker Hub)
- Strong Git workflows and branching strategies
Monitoring & Observability
- Prometheus & Grafana (metrics and dashboards)
- ELK/EFK stack (centralized logging)
- Jaeger/Zipkin (tracing)
- AlertManager (intelligent alerting)
💡 Good-to-Have Skills
- Service Mesh (Istio, Linkerd, Consul)
- Serverless (Knative, OpenFaaS, AWS Lambda)
- Running databases in Kubernetes (Postgres, MongoDB operators)
- ML pipelines (Kubeflow, MLflow)
- Security tools (Aqua, Twistlock, Falco, OPA)
- Compliance (SOC2, PCI-DSS, GDPR)
- Python/Go for automation
- Advanced Shell scripting (Bash/PowerShell)
🎓 Qualifications
- Bachelor’s in Computer Science, Engineering, or related field.
- Certifications (preferred):
- Certified Kubernetes Administrator (CKA)
- Certified Kubernetes Application Developer (CKAD)
- Cloud provider certifications (AWS/Azure/GCP).
Experience
- 6–7 years of DevOps/Infrastructure engineering.
- 4+ years of Kubernetes in production.
- 2+ years in a lead role managing teams.
- Experience with large-scale distributed systems and microservices.

Senior Cloud & ML Infrastructure Engineer
Location: Bangalore / Bengaluru, Hyderabad, Pune, Mumbai, Mohali, Panchkula, Delhi
Experience: 6–10+ Years
Night Shift - 9 pm to 6 am
About the Role:
We’re looking for a Senior Cloud & ML Infrastructure Engineer to lead the design, scaling, and optimization of cloud-native machine learning infrastructure. This role is ideal for someone passionate about solving complex platform engineering challenges across AWS, with a focus on model orchestration, deployment automation, and production-grade reliability. You’ll architect ML systems at scale, provide guidance on infrastructure best practices, and work cross-functionally to bridge DevOps, ML, and backend teams.
Key Responsibilities:
● Architect and manage end-to-end ML infrastructure using SageMaker, AWS Step Functions, Lambda, and ECR
● Design and implement multi-region, highly available AWS solutions for real-time inference and batch processing
● Create and manage IaC blueprints for reproducible infrastructure using AWS CDK (a minimal CDK sketch follows this list)
● Establish CI/CD practices for ML model packaging, validation, and drift monitoring
● Oversee infrastructure security, including IAM policies, encryption at rest/in transit, and compliance standards
● Monitor and optimize compute/storage cost, ensuring efficient resource usage at scale
● Collaborate on data lake and analytics integration
● Serve as a technical mentor and guide AWS adoption patterns across engineering teams
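As referenced in the responsibilities above, here is a minimal AWS CDK (Python, v2) sketch of an IaC blueprint; the stack and bucket names are hypothetical, not the team's real infrastructure.

```python
# A minimal CDK v2 sketch: a versioned, encrypted bucket for model artifacts.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct


class ModelArtifactsStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "ModelArtifacts",  # hypothetical construct id
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )


app = cdk.App()
ModelArtifactsStack(app, "ml-model-artifacts")  # deploy with `cdk deploy`
app.synth()
```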
Required Skills:
● 6+ years designing and deploying cloud infrastructure on AWS at scale
● Proven experience building and maintaining ML pipelines with services like SageMaker, ECS/EKS, or custom Docker pipelines
● Strong knowledge of networking, IAM, VPCs, and security best practices in AWS
● Deep experience with automation frameworks, IaC tools, and CI/CD strategies
● Advanced scripting proficiency in Python, Go, or Bash
● Familiarity with observability stacks (CloudWatch, Prometheus, Grafana)
Nice to Have:
● Background in robotics infrastructure, including AWS IoT Core, Greengrass, or OTA deployments
● Experience designing systems for physical robot fleet telemetry, diagnostics, and control
● Familiarity with multi-stage production environments and robotic software rollout processes
● Competence in frontend hosting for dashboard or API visualization
● Involvement with real-time streaming, MQTT, or edge inference workflows
● Hands-on experience with ROS 2 (Robot Operating System) or similar robotics frameworks, including launch file management, sensor data pipelines, and deployment to embedded Linux devices

🚀 We’re Hiring: Senior Cloud & ML Infrastructure Engineer 🚀
We’re looking for an experienced engineer to lead the design, scaling, and optimization of cloud-native ML infrastructure on AWS.
If you’re passionate about platform engineering, automation, and running ML systems at scale, this role is for you.
What you’ll do:
🔹 Architect and manage ML infrastructure with AWS (SageMaker, Step Functions, Lambda, ECR)
🔹 Build highly available, multi-region solutions for real-time & batch inference
🔹 Automate with IaC (AWS CDK, Terraform) and CI/CD pipelines
🔹 Ensure security, compliance, and cost efficiency
🔹 Collaborate across DevOps, ML, and backend teams
What we’re looking for:
✔️ 6+ years AWS cloud infrastructure experience
✔️ Strong ML pipeline experience (SageMaker, ECS/EKS, Docker)
✔️ Proficiency in Python/Go/Bash scripting
✔️ Knowledge of networking, IAM, and security best practices
✔️ Experience with observability tools (CloudWatch, Prometheus, Grafana)
✨ Nice to have: Robotics/IoT background (ROS2, Greengrass, Edge Inference)
📍 Location: Bengaluru, Hyderabad, Mumbai, Pune, Mohali, Delhi
5 days working, Work from Office
Night shifts: 9pm to 6am IST
👉 If this sounds like you (or someone you know), let’s connect!
Apply here:
Job Title: AWS DevOps Engineer
Experience Level: 5+ Years
Location: Bangalore, Pune, Hyderabad, Chennai and Gurgaon
Summary:
We are looking for a hands-on Platform Engineer with strong execution skills to provision and manage cloud infrastructure. The ideal candidate will have experience with Linux, AWS services, Kubernetes, and Terraform, and should be capable of troubleshooting complex issues in cloud and container environments.
Key Responsibilities:
- Provision AWS infrastructure using Terraform (IaC).
- Manage and troubleshoot Kubernetes clusters (EKS/ECS).
- Work with core AWS services: VPC, EC2, S3, RDS, Lambda, ALB, WAF, and CloudFront.
- Support CI/CD pipelines using Jenkins and GitHub.
- Collaborate with teams to resolve infrastructure and deployment issues.
- Maintain documentation of infrastructure and operational procedures.
Required Skills:
- 3+ years of hands-on experience in AWS infrastructure provisioning using Terraform.
- Strong Linux administration and troubleshooting skills.
- Experience managing Kubernetes clusters.
- Basic experience with CI/CD tools like Jenkins and GitHub.
- Good communication skills and a positive, team-oriented attitude.
Preferred:
- AWS Certification (e.g., Solutions Architect, DevOps Engineer).
- Exposure to Agile and DevOps practices.
- Experience with monitoring and logging tools.
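As a small illustration of the Kubernetes/EKS troubleshooting skills this role asks for, the sketch below uses boto3 to list EKS clusters and report their status. It assumes AWS credentials are already configured; the region is a placeholder.

```python
# Minimal sketch: check the status of EKS clusters in an account with boto3.
# Assumes AWS credentials are configured; the region is a placeholder.
import boto3

eks = boto3.client("eks", region_name="ap-south-1")

for name in eks.list_clusters()["clusters"]:
    cluster = eks.describe_cluster(name=name)["cluster"]
    # "status" is ACTIVE / CREATING / UPDATING / DELETING / FAILED.
    print(f"{name}: status={cluster['status']} version={cluster['version']}")
```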
Job Responsibilities:
Identify and develop new business opportunities in the defined territory.
Drive end-to-end sales cycle: lead generation, proposal preparation, negotiation, and closure.
Achieve and exceed assigned revenue targets and other key performance matrices.
Build and maintain strong client relationships with multi-level decision-makers.
Understand client requirements and propose suitable solutions & services.
Collaborate with technical and delivery teams to ensure successful customer onboarding.
Develop and maintain knowledge of our solutions, industry trends, target markets etc.
Maintain accurate sales pipeline and reporting in CRM system.
Required Qualifications and Experience:
5-8 years of experience selling Software Solution and Services to enterprise customers.
Bachelor’s or Master’s degree in Engineering, Business, Marketing or appropriate field.
Familiar with solution selling techniques and strong account management skills.
Must have passion for selling and constant desire to learn new technologies.
Proven track record of consistently and successfully achieving assigned targets.
Ability to communicate and influence people at all levels in an organization.
Must have good industry network and existing relationship with C -level contacts.
Must be able to work in a fast -paced, goal oriented and collaborative work environment.
Ability to organize and prioritize work independently with minimal supervision.
Must have excellent communication, organizational and relationship building skills.
Possess high degree of self-motivation, initiative, integrity, discipline and commitment.
Must be highly adaptable, process oriented, result driven, quick learner and team player.
Job Summary:
We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background in software engineering and are familiar with Ruby or Python, we’d like to meet you. Ultimately, you will execute and automate operational processes quickly, accurately, and securely.
Roles & Responsibilities:
- Strong experience with essential DevOps tools and technologies, including Kubernetes, Terraform, Azure DevOps, Jenkins, Maven, Git, GitHub, and Docker.
- Hands-on experience in Azure cloud services, including:
a) Virtual Machines (VMs)
b) Blob Storage
c) Virtual Network (VNet)
d) Load Balancer & Application Gateway
e) Azure Resource Manager (ARM)
f) Azure Key Vault
g) Azure Functions
h) Azure Kubernetes Service (AKS)
i) Azure Monitor, Log Analytics, and Application Insights
j) Azure Container Registry (ACR) and Azure Container Instances (ACI)
k) Azure Active Directory (AAD) and RBAC
- Creative in automating, configuring, and deploying infrastructure and applications across Azure environments and hybrid cloud data centers.
- Build and maintain CI/CD pipelines using Azure DevOps, Jenkins, and scripting for scalable SaaS deployments.
- Develop automation and infrastructure-as-code (IaC) using Terraform, ARM Templates, or Bicep for managing and provisioning cloud resources.
- Expert in managing containerized applications using Docker and orchestrating them via Kubernetes (AKS).
- Proficient in setting up monitoring, logging, and alerting systems using Azure-native tools and integrating with third-party observability stacks.
- Experience implementing auto-scaling, load balancing, and high-availability strategies for cloud-native SaaS applications.
- Configure and maintain CI/CD pipelines and integrate with quality and security tools for automated testing, compliance, and secure deployments.
- Deep knowledge in writing Ansible playbooks and ad hoc commands for automating provisioning and deployment tasks across environments.
- Experience integrating Ansible with Azure DevOps/Jenkins for configuration management and workflow automation.
- Proficient in using Maven and Artifactory for build management and writing POM.xml scripts for Java-based applications.
- Skilled in GitHub repository management, including setting up project-specific access, enforcing code quality standards, and managing pull requests.
- Experience with web and application servers such as Apache Tomcat for deploying and troubleshooting enterprise-grade Java applications.
- Ability to design and maintain scalable, resilient, and secure infrastructure to support rapid growth of SaaS applications.
Qualifications & Requirements:
- Proven experience as a DevOps Engineer, Site Reliability Engineer, or in a similar software engineering role.
- Strong experience working in SaaS environments with a focus on scalability, availability, and performance.
- Proficiency in Python or Ruby for scripting and automation.
- Working knowledge of SQL and database management tools.
- Strong analytical and problem-solving skills with a collaborative and proactive mindset.
- Familiarity with Agile methodologies and ability to work in cross-functional teams.
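The Azure services listed above include Key Vault and Python-based automation. As a hedged sketch of how an automation script might fetch a deployment secret, the example below uses the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders, not values from the job description.

```python
# Minimal sketch: read a deployment secret from Azure Key Vault in an automation script.
# Requires azure-identity and azure-keyvault-secrets; vault URL and secret name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # picks up env vars, managed identity, or az login
client = SecretClient(vault_url="https://demo-vault.vault.azure.net", credential=credential)

secret = client.get_secret("db-connection-string")
print("retrieved secret:", secret.name)  # never log secret.value in real pipelines
```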
Job Description
We are seeking a highly skilled DevOps / Kubernetes Engineer. The ideal candidate will have strong expertise in container orchestration, infrastructure as code, and GitOps workflows, with hands-on experience in Azure cloud environments. You will be responsible for designing, deploying, and managing modern cloud-native infrastructure and applications at scale.
Key Responsibilities:
· Manage and operate Kubernetes clusters (AKS / K3s) for large-scale applications.
· Implement infrastructure as code using Terraform or OpenTofu for scalable, reliable, and secure infrastructure provisioning.
· Deploy and manage applications using Helm and ArgoCD with GitOps best practices.
· Work with Podman and Docker as container runtimes for development and production environments.
· Collaborate with cross-functional teams to ensure smooth deployment pipelines and CI/CD integrations.
· Optimize infrastructure for cost, performance, and reliability within Azure cloud.
· Troubleshoot, monitor, and maintain system health, scalability, and performance.
Required Skills & Experience:
· Strong hands-on experience with Kubernetes (AKS / K3s) cluster orchestration.
· Proficiency in Terraform or OpenTofu for infrastructure as code.
· Experience with Helm and ArgoCD for application deployment and GitOps.
· Solid understanding of Docker / Podman container runtimes.
· Cloud expertise in Azure with experience deploying and scaling workloads.
· Familiarity with CI/CD pipelines, monitoring, and logging frameworks.
· Knowledge of best practices around cloud security, scalability, and high availability.
Preferred Qualifications:
· Contributions to open-source projects under Apache 2.0 / MPL 2.0 licenses.
· Experience working in global distributed teams across CST/PST time zones.
· Strong problem-solving skills and ability to work independently in a fast-paced environment.
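Since this role deploys applications with Helm and ArgoCD using GitOps, here is a minimal, hedged sketch of declaring an Argo CD Application from Python and applying it with kubectl. The repository URL, chart path, namespaces, and names are placeholders, and it assumes kubectl access to the cluster already exists.

```python
# Minimal sketch: declare an Argo CD Application (GitOps) and apply it with kubectl.
# Repo URL, path, and names are placeholders; assumes kubectl and cluster access exist.
import json
import subprocess

application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "demo-app", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://github.com/example-org/demo-gitops.git",
            "targetRevision": "main",
            "path": "charts/demo-app",   # Helm chart tracked in Git
        },
        "destination": {"server": "https://kubernetes.default.svc", "namespace": "demo"},
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

# kubectl accepts JSON as well as YAML on stdin.
subprocess.run(["kubectl", "apply", "-f", "-"], input=json.dumps(application).encode(), check=True)
```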

Role: DevOps Engineer
Exp: 4 - 7 Years
CTC: up to 28 LPA
Key Responsibilities
• Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)
• Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)
• Develop and maintain CI/CD pipelines for multiple services and environments
• Manage infrastructure as code using tools like Terraform and/or Pulumi
• Automate operations with Python and shell scripting for deployment, monitoring, and maintenance
• Ensure high availability and performance of production systems and troubleshoot incidents effectively
• Monitor system metrics and implement observability best practices using tools like Prometheus, Grafana, ELK, etc.
• Collaborate with development, security, and product teams to align infrastructure with business needs
• Apply best practices in cloud networking, Linux administration, and configuration management
• Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)
• Participate in on-call rotations and incident response activities
If Interested kindly share your updated resume on 82008 31681
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
Experience working in Agile/Scrum environments.
Strong problem-solving and analytical skills.

NOTE- This is a contractual role for a period of 3-6 months.
Responsibilities:
● Set up and maintain CI/CD pipelines across services and environments
● Monitor system health and set up alerts/logs for performance & errors
● Work closely with backend/frontend teams to improve deployment velocity
● Manage cloud environments (staging, production) with cost and reliability in mind
● Ensure secure access, role policies, and audit logging
● Contribute to internal tooling, CLI automation, and dev workflow improvements
Must-Haves:
● 2–3 years of hands-on experience in DevOps, SRE, or Platform Engineering
● Experience with Docker, CI/CD (especially GitHub Actions), and cloud providers (AWS/GCP)
● Proficiency in writing scripts (Bash, Python) for automation
● Good understanding of system monitoring, logs, and alerting
● Strong debugging skills, ownership mindset, and clear documentation habits
● Infra monitoring tools like Grafana dashboards
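The must-haves above mention system monitoring and alerting. As a hedged illustration, the sketch below queries the standard Prometheus HTTP API for targets that are currently down; the Prometheus URL is a placeholder.

```python
# Minimal sketch: query the Prometheus HTTP API for instances that are currently down.
# The Prometheus URL is a placeholder; uses the standard /api/v1/query endpoint.
import requests

PROM_URL = "http://prometheus.internal:9090"

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "up == 0"}, timeout=10)
resp.raise_for_status()
payload = resp.json()

for series in payload["data"]["result"]:
    labels = series["metric"]
    print("DOWN:", labels.get("job"), labels.get("instance"))
```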

We are seeking an experienced Operations Lead to drive operational excellence and lead a dynamic team in our fast-paced environment. The ideal candidate will combine strong technical expertise in Python with proven leadership capabilities to optimize processes, ensure system reliability, and deliver results.
Key Responsibilities
- Team & stakeholder leadership - Lead 3-4 operations professionals and work cross-functionally with developers, system administrators, quants, and traders
- DevOps automation & deployment - Develop deployment pipelines, automate configuration management, and build Python-based tools for operational processes and system optimization
- Technical excellence & standards - Drive code reviews, establish development standards, ensure regional consistency with DevOps practices, and maintain technical documentation
- System operations & performance - Monitor and optimize system performance for high availability, scalability, and security while managing day-to-day operations
- Incident management & troubleshooting - Coordinate incident response, resolve infrastructure and deployment issues, and implement automated solutions to prevent recurring problems
- Strategic technical leadership - Make infrastructure decisions, identify operational requirements, design scalable architecture, and stay current with industry best practices
- Reporting & continuous improvement - Report on operational metrics and KPIs to senior leadership while actively contributing to DevOps process improvements
Qualifications and Experience
- Bachelor's degree in Computer Science, Engineering, or related technical field
- Proven experience of at least 5 years as a Software Engineer including at least 2 years as a DevOps Engineer or similar role, working with complex software projects and environments.
- Excellent knowledge of cloud technologies, containers, and orchestration.
- Proficiency in scripting and programming languages such as Python and Bash.
- Experience with Linux operating systems and command-line tools.
- Proficient in using Git for version control.
Good to Have
- Experience with Nagios or similar monitoring and alerting systems
- Backend and/or frontend development experience for operational tooling
- Previous experience working in a trading firm or financial services environment
- Knowledge of database management and SQL
- Familiarity with cloud platforms (AWS, Azure, GCP)
- Experience with DevOps practices and CI/CD pipelines
- Understanding of network protocols and system administration
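This role builds Python-based tooling around deployments and incident prevention. The following is a minimal sketch (under assumed conventions) of a retrying deploy step that notifies a chat webhook on failure; the deploy command and webhook URL are hypothetical placeholders.

```python
# Minimal sketch: run a deployment command with retries and notify a chat webhook on failure.
# The command and webhook URL are placeholders for whatever the team actually uses.
import subprocess
import time

import requests

WEBHOOK_URL = "https://chat.example.com/hooks/deploy-alerts"
DEPLOY_CMD = ["./deploy.sh", "--env", "staging"]

def deploy(max_attempts: int = 3, backoff_seconds: int = 30) -> bool:
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(DEPLOY_CMD, capture_output=True, text=True)
        if result.returncode == 0:
            return True
        time.sleep(backoff_seconds * attempt)  # simple linear backoff between retries
    # All attempts failed: alert the on-call channel with the last stderr snippet.
    requests.post(WEBHOOK_URL, json={"text": f"deploy failed: {result.stderr[-500:]}"}, timeout=10)
    return False

if __name__ == "__main__":
    raise SystemExit(0 if deploy() else 1)
```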
Why You’ll Love Working Here
We’re a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity.
Here’s what we offer:
💰 Competitive salary – Get paid what you’re worth.
🌴 Generous paid time off – Recharge and come back sharper.
🌍 Work with the best – Collaborate with top-tier global talent.
✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings.
🎯 Performance rewards – Multiple bonuses for those who go above and beyond.
🏥 Health covered – Comprehensive insurance so you’re always protected.
⚡ Fun, not just work – On-site sports, games, and a lively workspace.
🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers.
📚 Annual Education Stipend – Take any external course, bootcamp, or certification that makes you better at your craft.
🏋️ Stay fit – Gym memberships with equal employer contribution to keep you at your best.
🚚 Relocation support – Smooth move? We’ve got your back.
🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting.
We work hard, play hard, and grow together. Join us.
(P.S. We hire for talent, not pedigree—but if you’ve worked at a top tech co or fintech startup, we’d love to hear how you’ve shipped great products.)
- Looking to manage IaC modules
- Terraform experience is a must
- Terraform modules as part of a central platform team
- Azure/GCP experience is a must
- C#/Python/Java coding is good to have
Key Responsibilities:-
- Design, build, and enhance Salesforce applications using Apex, Lightning Web Components (LWC), Visualforce, and SOQL.
- Implement integrations with external systems using REST APIs and event-driven messaging (e.g., Kafka).
- Collaborate with architects and business analysts to translate requirements into scalable, maintainable solutions.
- Establish and follow engineering best practices, including source control (Git), code reviews, branching strategies, CI/CD pipelines, automated testing, and environment management.
- Establish and maintain Azure DevOps-based workflows (repos, pipelines, automated testing) for Salesforce engineering.
- Ensure solutions follow Salesforce security, data modeling, and performance guidelines.
- Participate in Agile ceremonies, providing technical expertise and leadership within sprints and releases.
- Optimize workflows, automations, and data processes across Sales Cloud, Service Cloud, and custom Salesforce apps.
- Provide technical mentoring and knowledge sharing when required.
- Support production environments, troubleshoot issues, and drive root-cause analysis for long-term reliability.
- Stay current on Salesforce platform updates, releases, and new features, recommending adoption where beneficial.
Required Qualifications:-
- Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience).
- 6+ years of Salesforce development experience with strong knowledge of Apex, Lightning Web Components, and Salesforce APIs.
- Proven experience with Salesforce core clouds (Sales Cloud, Service Cloud, or equivalent).
- Strong hands-on experience with API integrations (REST/SOAP) and event-driven architectures (Kafka, Pub/Sub).
- Solid understanding of engineering practices: Git-based source control (Salesforce DX/metadata), branching strategies, CI/CD, automated testing, and deployment management.
- Familiarity with Azure DevOps repositories and pipelines.
- Strong knowledge of Salesforce data modeling, security, and sharing rules.
- Excellent problem-solving skills and ability to collaborate across teams.
Preferred Qualifications:-
- Salesforce Platform Developer II certification (or equivalent advanced credentials).
- Experience with Health Cloud, Financial Services Cloud, or other industry-specific Salesforce products.
- Experience implementing logging, monitoring, and observability within Salesforce and integrated systems.
- Background in Agile/Scrum delivery with strong collaboration skills.
- Prior experience establishing or enforcing engineering standards across Salesforce teams.

Quantalent AI is hiring for a fast-growing fin-tech firm
Job Title: DevOps - 3
Roles and Responsibilities:
- Develop deep understanding of the end-to-end configurations, dependencies, customer requirements, and overall characteristics of the production services as the accountable owner for overall service operations
- Implementing best practices, challenging the status quo, and keeping tabs on industry and technical trends, changes, and developments to ensure the team is always striving for best-in-class work
- Lead incident response efforts, working closely with cross-functional teams to resolve issues quickly and minimize downtime. Implement effective incident management processes and post-incident reviews
- Participate in on-call rotation responsibilities, ensuring timely identification and resolution of infrastructure issues
- Possess expertise in designing and implementing capacity plans, accurately estimating costs and efforts for infrastructure needs.
- Systems and Infrastructure maintenance and ownership for production environments, with a continued focus on improving efficiencies, availability, and supportability through automation and well defined runbooks
- Provide mentorship and guidance to a team of DevOps engineers, fostering a collaborative and high-performing work environment. Mentor team members in best practices, technologies, and methodologies.
- Design for Reliability - Architect and implement solutions that keep Infrastructure running with always-on availability and ensure a high uptime SLA for the Infrastructure
- Manage individual project priorities, deadlines, and deliverables related to your technical expertise and assigned domains
- Collaborate with Product & Information Security teams to ensure the integrity and security of Infrastructure and applications. Implement security best practices and compliance standards.
Must Haves
- 5-8 years of experience as Devops / SRE / Platform Engineer.
- Strong expertise in automating Infrastructure provisioning and configuration using tools like Ansible, Packer, Terraform, Docker, Helm Charts etc.
- Strong skills in network services such as DNS, TLS/SSL, HTTP, etc
- Expertise in managing large-scale cloud infrastructure (preferably AWS and Oracle)
- Expertise in managing production grade Kubernetes clusters
- Experience in scripting using programming languages like Bash, Python, etc.
- Expertise in skill sets for centralized logging systems, metrics, and tooling frameworks such as ELK, Prometheus/VictoriaMetrics, and Grafana etc.
- Experience in managing and building high-scale API Gateways, Service Mesh, etc.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive
- Have a working knowledge of a backend programming language
- Deep knowledge & experience with Unix/Linux operating system internals (e.g., filesystems, user management, etc.)
- A working knowledge and deep understanding of cloud security concepts
- Proven track record of driving results and delivering high-quality solutions in a fast-paced environment
- Demonstrated ability to communicate clearly with both technical and non-technical project stakeholders, with the ability to work effectively in a cross-functional team environment.
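The must-haves above include capacity planning and cost estimation for large-scale AWS infrastructure. As one hedged example of cost governance tooling, the sketch below flags running EC2 instances that lack a cost-allocation tag; the tag key "team" and the region are assumed conventions, not details from the posting.

```python
# Minimal sketch: flag running EC2 instances that lack a cost-allocation tag.
# The tag key "team" and the region are placeholder conventions.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate(Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "team" not in tags:
                print("untagged:", instance["InstanceId"], instance["InstanceType"])
```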

A modern configuration management platform based on advanced
Key Responsibilities:
Kubernetes Management:
Deploy, configure, and maintain Kubernetes clusters on AKS, EKS, GKE, and OKE.
Troubleshoot and resolve issues related to cluster performance and availability.
Database Migration:
Plan and execute database migration strategies across multicloud environments, ensuring data integrity and minimal downtime.
Collaborate with database teams to optimize data flow and management.
Coding and Development:
Develop, test, and optimize code with a focus on enhancing algorithms and data structures for system performance.
Implement best coding practices and contribute to code reviews.
Cross-Platform Integration:
Facilitate seamless integration of services across different cloud providers to enhance interoperability.
Collaborate with development teams to ensure consistent application performance across environments.
Performance Optimization:
Monitor system performance metrics, identify bottlenecks, and implement effective solutions to optimize resource utilization.
Conduct regular performance assessments and provide recommendations for improvements.
Experience:
Minimum of 2+ years of experience in cloud computing, with a strong focus on Kubernetes management across multiple platforms.
Technical Skills:
Proficient in cloud services and infrastructure, including networking and security considerations.
Strong programming skills in languages such as Python, Go, or Java, with a solid understanding of algorithms and data structures.
Problem-Solving:
Excellent analytical and troubleshooting skills with a proactive approach to identifying and resolving issues.
Communication:
Strong verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams.
Preferred Skills:
- Familiarity with CI/CD tools and practices.
- Experience with container orchestration and management tools.
- Knowledge of microservices architecture and design patterns.
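Because this role maintains Kubernetes clusters across AKS, EKS, GKE, and OKE, here is a minimal sketch using the official Python client to report node readiness per kubeconfig context. The context names are placeholders and the script assumes a kubeconfig containing them already exists.

```python
# Minimal sketch: report node readiness across several kubeconfig contexts (e.g. AKS/EKS/GKE/OKE).
# Context names are placeholders; assumes a kubeconfig with these contexts already exists.
from kubernetes import client, config

CONTEXTS = ["aks-prod", "eks-prod", "gke-prod", "oke-prod"]

for ctx in CONTEXTS:
    config.load_kube_config(context=ctx)
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
        )
        print(f"{ctx}/{node.metadata.name}: Ready={ready}")
```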

Leading provider of electronic trading solutions in India. With over 1,000 clients and a presence in more than 400 cities, we have established ourselves as a trusted partner for brokerages across the nation. Our commitment to excellence is reflected in millions of active end users and our reputation for delivering the best customer service in the industry.
Looking for an experienced Cloud Engineering & DevOps Leader with proven expertise in building and managing large-scale SaaS platforms in the financial services or high-transaction domain. The ideal candidate will have a strong background in AWS cloud infrastructure, DevOps automation, compliance frameworks (ISO, VAPT), and cost optimization strategies.
● Cloud Platforms: AWS (Lambda, EC2, VPC, CloudFront, Auto Scaling, etc.)
● DevOps & Automation: Python, CI/CD, Infrastructure as Code, Monitoring/Alerting systems
● Monitoring & Logging: ELK stack, Kafka, Redis, Grafana
● Networking & Virtualization: Virtual Machines, Firewalls, Load Balancers, DR setup
● Compliance & Security: ISO Audits, VAPT, ISMS, DR drills, high-availability planning
● Leadership & Management: Team leadership, project management, stakeholder collaboration
Preferred Profile
● Experience: 15–20 years in infrastructure, cloud engineering, or DevOps roles, with at least 5 years in a leadership position.
● Domain Knowledge: Experience in broking, financial services, or high-volume trading platforms is strongly preferred.
● Education: Bachelor’s Degree in Engineering / Computer Science / Electronics or related field.
● Soft Skills: Strong problem-solving, cost-conscious approach, ability to work under pressure, cross-functional collaboration.
Requirements
- Design, implement, and manage CI/CD pipelines using Azure DevOps, GitHub, and Jenkins for automated deployments of applications and infrastructure changes.
- Architect and deploy solutions on Kubernetes clusters (EKS and AKS) to support containerized applications and microservices architecture.
- Collaborate with development teams to streamline code deployments, releases, and continuous integration processes across multiple environments.
- Configure and manage Azure services including Azure Synapse Analytics, Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), and other data services for efficient data processing and analytics workflows.
- Utilize AWS services such as Amazon EMR, Amazon Redshift, Amazon S3, Amazon Aurora, IAM policies, and Azure Monitor for data management, warehousing, and governance.
- Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate provisioning and management of cloud resources.
- Ensure high availability, performance monitoring, and disaster recovery strategies for cloud-based applications and services.
- Develop and enforce security best practices and compliance policies, including IAM policies, encryption, and access controls across Azure environments.
- Collaborate with cross-functional teams to troubleshoot production issues, conduct root cause analysis, and implement solutions to prevent recurrence.
- Stay current with industry trends, best practices, and evolving technologies in cloud computing, DevOps, and container orchestration.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or related field; or equivalent work experience.
- 5+ years of experience as a DevOps Engineer or similar role with hands-on expertise in AWS and Azure cloud environments.
- Strong proficiency in Azure DevOps, Git, GitHub, Jenkins, and CI/CD pipeline automation.
- Experience deploying and managing Kubernetes clusters (EKS, AKS) and container orchestration platforms.
- Deep understanding of cloud-native architectures, microservices, and serverless computing.
- Familiarity with Azure Synapse, ADF, ADLS, and AWS data services (EMR, Redshift, Glue) for data integration and analytics.
- Solid grasp of infrastructure as code (IaC) tools like Terraform, CloudFormation, or ARM templates.
- Experience with monitoring tools (e.g., Prometheus, Grafana) and logging solutions for cloud-based applications.
- Excellent troubleshooting skills and ability to resolve complex technical issues in production environments.



We are seeking a skilled and detail-oriented SRE Release Engineer to lead and streamline the CI/CD pipeline for our C and Python codebase. You will be responsible for coordinating, automating, and validating biweekly production releases, ensuring operational stability, high deployment velocity, and system reliability.
Requirements
● Bachelor’s degree in Computer Science, Engineering, or related field.
● 3+ years in SRE, DevOps, or release engineering roles.
● Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
● Experience automating deployments for C and Python applications.
● Strong understanding of Git version control, merge/rebase strategies, tagging, and submodules (if used).
● Familiarity with containerization (Docker) and deployment orchestration (e.g., Kubernetes, Ansible, or Terraform).
● Solid scripting experience (Python, Bash, or similar).
● Understanding of observability, monitoring, and incident response tooling (e.g., Prometheus, Grafana, ELK, Sentry).
Preferred Skills
● Experience with release coordination in data networking environments
● Familiarity with build tools like Make, CMake, or Bazel.
● Exposure to artifact management systems (e.g., Artifactory, Nexus).
● Experience deploying to Linux production systems with service uptime guarantees.
Responsibilities
● Own the release process: Plan, coordinate, and execute biweekly software releases across multiple services.
● Automate release pipelines: Build and maintain CI/CD workflows using tools such as GitHub Actions, Jenkins, or GitLab CI.
● Version control: Manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
● Integrate testing frameworks: Ensure automated test coverage (unit, integration, regression) is enforced pre-release.
● Release validation: Develop pre-release verification tools/scripts to validate build integrity and backward compatibility.
● Deployment strategy: Implement and refine blue/green, rolling, or canary deployments in staging and production environments.
● Incident readiness: Partner with SREs to ensure rollback strategies, monitoring, and alerting are release-aware.
● Collaboration: Work closely with developers, QA, and product teams to align on release timelines and feature readiness.
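Among the responsibilities above are pre-release verification scripts and enforced tagging conventions. The sketch below, under assumed conventions (a vX.Y.Z semver tag on HEAD and a dist/app.tar.gz artifact), shows what such a release-readiness check might look like.

```python
# Minimal sketch: pre-release check that HEAD is tagged with a semver release tag
# and that the build artifact exists. The tag pattern and artifact path are assumptions.
import pathlib
import re
import subprocess
import sys

SEMVER_TAG = re.compile(r"^v\d+\.\d+\.\d+$")
ARTIFACT = pathlib.Path("dist/app.tar.gz")

def head_tags() -> list[str]:
    out = subprocess.run(
        ["git", "tag", "--points-at", "HEAD"], capture_output=True, text=True, check=True
    )
    return [t for t in out.stdout.splitlines() if t]

def main() -> int:
    tags = [t for t in head_tags() if SEMVER_TAG.match(t)]
    if not tags:
        print("FAIL: HEAD is not tagged with a vX.Y.Z release tag")
        return 1
    if not ARTIFACT.exists():
        print(f"FAIL: expected artifact {ARTIFACT} is missing")
        return 1
    print(f"OK: release {tags[0]} ready")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```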
Success Metrics
● Achieve >95% release success rate with minimal hotfix rollbacks.
● Reduce mean release deployment time by 30% within 6 months.
● Maintain a weekly release readiness report with zero critical blockers.
● Enable full traceability of builds from commit to deployment.
Benefits
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Why Join RtBrick
Enjoy the excitement of a start-up without the risk!
We're revolutionizing the Internet's backbone by using cutting-edge software development techniques. The internet and, more specifically, broadband networks are among the most world's most critical technologies, that billions of people rely on every day. Rtbrick is revolutionizing the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure and distributed edge network functions. This shift mirrors transformations seen in computing and cloud technologies, marking the most profound change in networking since the inception of IP technology.
We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a devops philosophy, and warehouse scale tools to drive innovation.
And although RtBrick is a young innovative company, RtBrick stands on solid financial ground: we are already cash-flow positive, backed by major telco investors like Swisscom Ventures and T-Capital, and our solutions are actively deployed by Tier-1 telcos including Deutsche Telekom (Europe's largest carrier), Regional ISPs and City ISPs—with expanding operations across Europe, North America and Asia.
Joining RtBrick offers you the unique thrill of a startup environment, coupled with the security that comes from working in a business with substantial market presence and significant revenue streams.
We'd love you to come and join us so why don't you embrace the opportunity to be part of a team that's not just participating in the market but actively shaping the future of telecommunications worldwide.
Job Title : Senior DevOps Engineer
Experience : 5+ Years
Location : Gurgaon, Sector 39
About the Role :
We are seeking an experienced Senior DevOps Engineer to lead our DevOps practices, manage a small team, and build functional, scalable systems that enhance customer experience. You will be responsible for deployments, automation, troubleshooting, integrations, monitoring, and team mentoring while ensuring secure and efficient operations.
Mandatory Skills :
Linux Administration, Shell Scripting, CI/CD (Jenkins), Git/GitHub, Docker, Kubernetes, AWS, Ansible, Database Administration (MariaDB/MySQL/MongoDB), Apache httpd/Tomcat, HAProxy, Nagios, Keepalived, Monitoring/Logging/Alerting, and On-premise Server Management.
Key Responsibilities :
- Implement and manage integrations as per business and customer needs.
- Deploy product updates, fixes, and enhancements.
- Provide Level 2 technical support and resolve production issues.
- Build tools to reduce errors and improve system performance.
- Develop scripts and automation for CI/CD, monitoring, and visualization.
- Perform root cause analysis of incidents and implement long-term fixes.
- Ensure robust monitoring, logging, and alerting systems are in place.
- Manage on-premise servers and ensure smooth deployments.
- Collaborate with development teams for system integration.
- Mentor and guide a team of 3 to 4 engineers.
Required Qualifications & Experience :
- Bachelor’s degree in Computer Science, Software Engineering, IT, or related field (Master’s preferred).
- 5+ years of experience in DevOps engineering with team management exposure.
- Strong expertise in:
- Linux Administration & Shell Scripting
- CI/CD pipelines (Jenkins or similar)
- Git/GitHub, branching, and code repository standards
- Docker, Kubernetes, AWS, Ansible
- Database administration (MariaDB, MySQL, MongoDB)
- Web servers (Apache httpd, Apache Tomcat)
- Networking & Load Balancing tools (HAProxy, Keepalived)
- Monitoring & alerting tools (Nagios, logging frameworks)
- On-premise server management
- Strong debugging, automation, and system troubleshooting skills.
- Knowledge of security best practices including data encryption.
Personal Attributes :
- Excellent problem-solving and analytical skills.
- Strong communication and leadership abilities.
- Detail-oriented with a focus on reliability and performance.
- Ability to mentor juniors and collaborate with cross-functional teams.
- Keen interest in emerging DevOps and cloud trends.
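The stack above includes Nagios-style monitoring alongside HAProxy and web servers. As a hedged illustration, the sketch below is a simple HTTP health check written to the conventional Nagios plugin exit codes (0 OK, 1 WARNING, 2 CRITICAL); the URL and latency thresholds are placeholders.

```python
# Minimal sketch: a Nagios-style HTTP check using the conventional exit codes
# (0 = OK, 1 = WARNING, 2 = CRITICAL). URL and latency thresholds are placeholders.
import sys
import time

import requests

URL = "https://app.internal/health"
WARN_SECONDS, CRIT_SECONDS = 1.0, 3.0

try:
    start = time.monotonic()
    resp = requests.get(URL, timeout=CRIT_SECONDS)
    elapsed = time.monotonic() - start
except requests.RequestException as exc:
    print(f"CRITICAL - {URL} unreachable: {exc}")
    sys.exit(2)

if resp.status_code != 200:
    print(f"CRITICAL - {URL} returned HTTP {resp.status_code}")
    sys.exit(2)
if elapsed > WARN_SECONDS:
    print(f"WARNING - {URL} responded in {elapsed:.2f}s")
    sys.exit(1)
print(f"OK - {URL} responded in {elapsed:.2f}s")
sys.exit(0)
```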
Job Title: SRE Lead Engineer
Location: Hyderabad, India
Company: Client of Options Executive Search, AI Saas Product Development Company
We are seeking a DevOps / SRE Lead Engineer to architect and scale our client's multi-tenant SaaS platform with AI/ML at the core.
Our client, a fast-growing AI-powered SaaS company in the FinTech space, is looking for a Site Reliability Engineering (SRE) Lead Engineer to join their dynamic team. This is an opportunity to design and operate large-scale SaaS systems that integrate cutting-edge AI/ML capabilities.
About the Role
As the SRE Lead Engineer, you will be responsible for architecting, building, and maintaining infrastructure that powers a multi-tenant SaaS platform. You’ll drive reliability, scalability, and security, while supporting AI/ML pipelines in production. This is a hands-on role with significant ownership, requiring both technical depth and leadership in site reliability practices.
Key Responsibilities
- Architect, design, and deploy end-to-end infrastructure for large-scale, microservices-based SaaS platforms.
- Ensure system reliability, scalability, and security for AI/ML model integrations and data pipelines.
- Automate environment provisioning and management using Terraform in AWS (EKS-focused).
- Implement full-stack observability across applications, networks, and operating systems.
- Lead incident management and participate in 24/7 on-call rotation.
- Optimize SaaS reliability while enabling REST APIs, SSO integrations (Okta/Auth0), and cloud data services (RDS/MySQL, Elasticsearch).
- Define and maintain backup and disaster recovery for critical workloads.
Required Skills & Experience
- 8+ years in SRE/DevOps roles, managing enterprise SaaS applications in production.
- Minimum 1 year experience with AI/ML infrastructure or model-serving environments.
- Strong expertise in AWS cloud, particularly EKS, container orchestration, and Kubernetes.
- Hands-on experience with Infrastructure as Code (Terraform), Docker, and scripting (Python, Bash).
- Solid Linux OS and networking fundamentals.
- Experience in monitoring and observability with ELK, CloudWatch, or similar tools.
- Strong track record with microservices, REST APIs, SSO, and cloud databases.
Nice-to-Have Skills
- Experience with MLOps and AI/ML pipeline observability.
- Cost optimization and security hardening in multi-tenant SaaS.
- Prior exposure to FinTech or enterprise finance solutions.
Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related discipline.
- AWS Certified Solutions Architect (strongly preferred).
- Experience in early-stage or high-growth startups is an advantage.
Why Join?
- Be at the forefront of AI/ML-powered SaaS innovation in FinTech.
- Work with a high-energy, entrepreneurial team building next-gen infrastructure.
- Take ownership of mission-critical reliability challenges.
- Grow your career in an environment that values impact, adaptability, and innovation.
If you’re passionate about building secure, scalable, and intelligent platforms, we’d love to hear from you. Apply now to be part of our client’s journey in redefining enterprise finance operations.
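One responsibility above is defining backup and disaster recovery for cloud data services such as RDS. As a hedged example of the kind of audit script that supports this, the sketch below uses boto3 to check whether automated backups are enabled; the region is a placeholder and no identifiers come from the posting.

```python
# Minimal sketch: audit RDS instances for automated backups (part of a backup/DR checklist).
# Assumes AWS credentials are configured; the region is a placeholder.
import boto3

rds = boto3.client("rds", region_name="ap-south-1")

for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        retention = db["BackupRetentionPeriod"]  # 0 means automated backups are disabled
        status = "OK" if retention > 0 else "NO BACKUPS"
        print(f"{db['DBInstanceIdentifier']}: retention={retention}d [{status}]")
```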


About Simprosys InfoMedia:
Simprosys is a diverse team of E-commerce enthusiasts with a simple yet powerful goal of empowering E-commerce merchants with easily adaptable product listing and ad management solutions.
Our crew consists of budding techie developers who build and maintain the technological interventions that automate our product listing and ad management solutions, support executives who are digital marketers themselves, and passionate designers with exceptional UI design, motion graphics, animation, and video editing skills. Our marketing team consists of versatile content creators and brand strategists.
Be a part of our E-commerce enthusiasts crew.
Job Title: Sr. Python Developer
Location: Ahmedabad (Onsite)
Skill Set: Python, JavaScript, Python frameworks (Flask, Django, Django Rest Framework), AWS, Data Science, Machine Learning.
Responsibilities:
- Develop and maintain Python applications, using frameworks like Flask or Django to create and manage APIs and web services.
- Integrate various data sources and databases, including SQL (MySQL, PostgreSQL) and NoSQL (MongoDB, Redis) systems, into a unified solution.
- Model data for reporting and analysis, leveraging libraries like NumPy, Pandas, and Matplotlib to provide insights and communicate results to stakeholders.
- Utilize AWS services, such as DynamoDB and Lambda, to build and deploy efficient, cloud-based solutions.
- Manage code versions with GIT, ensuring effective tracking and collaborative development practices.
- Employ strong debugging and optimization skills to ensure high performance and resolve issues promptly.
Requirements:
- Strong knowledge and hands-on experience with Python, including its standard libraries, toolkits, and APIs.
- Experience with web frameworks like Flask or Django, and familiarity with REST framework principles for web services.
- Proficiency in database structures, with practical experience in SQL (MySQL, PostgreSQL) and NoSQL (MongoDB, Redis) databases.
- Experience with cloud platforms, especially AWS, and knowledge of services like DynamoDB and Lambda.
- Skilled in Python libraries for data analysis, such as NumPy, Pandas, and Matplotlib, with an understanding of big data frameworks.
- Excellent analytical and problem-solving skills, capable of debugging and resolving complex issues efficiently.
- Strong grasp of data structures and algorithms, crucial for building efficient applications.
- Thoroughly understand version control systems, particularly GIT, for effective code management and collaboration.
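The responsibilities above combine Flask APIs with Pandas-based reporting. As a hedged, self-contained sketch, the example below exposes a small reporting endpoint that serves a Pandas aggregation as JSON; the CSV path and column names ("campaign", "clicks") are illustrative placeholders.

```python
# Minimal sketch: a Flask endpoint that serves a Pandas aggregation as JSON.
# The CSV path and column names ("campaign", "clicks") are illustrative placeholders.
import pandas as pd
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/report/clicks-by-campaign")
def clicks_by_campaign():
    df = pd.read_csv("data/ad_clicks.csv")            # columns: campaign, clicks, ...
    summary = df.groupby("campaign")["clicks"].sum()  # total clicks per campaign
    return jsonify(summary.to_dict())

if __name__ == "__main__":
    app.run(debug=True)
```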

Salary (Lacs): Up to 22 LPA
Required Qualifications
• 4–7 years of total experience, with a minimum of 4 years in a full-time DevOps role
• Hands-on experience with major cloud platforms (GCP, AWS, Azure, OCI); experience with more than one is a plus
• Proficient in Kubernetes administration and container technologies (Docker, containerd)
• Strong Linux fundamentals
• Scripting skills in Python and shell scripting
• Knowledge of infrastructure as code with hands-on experience in Terraform and/or Pulumi (mandatory)
• Experience in maintaining and troubleshooting production environments
• Solid understanding of CI/CD concepts with hands-on experience in tools like Jenkins, GitLab CI, GitHub Actions, ArgoCD, Devtron, GCP Cloud Build, or Bitbucket Pipelines
If Interested kindly share your updated resume on 82008 31681
Job Title: AWS DevOps Engineer – Manager, Business Solutions
Location: Gurgaon, India
Experience Required: 8-12 years
Industry: IT
We are looking for a seasoned AWS DevOps Engineer with robust experience in AWS middleware services and MongoDB Cloud Infrastructure Management. The role involves designing, deploying, and maintaining secure, scalable, and high-availability infrastructure, along with developing efficient CI/CD pipelines and automating operational processes.
Key Deliverables (Essential functions & Responsibilities of the Job):
· Design, deploy, and manage AWS infrastructure, with a focus on middleware services such as API Gateway, Lambda, SQS, SNS, ECS, and EKS.
· Administer and optimize MongoDB Atlas or equivalent cloud-based MongoDB solutions for high availability, security, and performance.
· Develop, manage, and enhance CI/CD pipelines using tools like AWS CodePipeline, Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
· Automate infrastructure provisioning using Terraform, AWS CloudFormation, or AWS CDK.
· Implement monitoring and logging solutions using CloudWatch, Prometheus, Grafana, or the ELK Stack.
· Enforce cloud security best practices — IAM, VPC setups, encryption, certificate management, and compliance controls.
· Work closely with development teams to improve application reliability, scalability, and performance.
· Manage containerized environments using Docker, Kubernetes (EKS), or AWS ECS.
· Perform MongoDB administration tasks such as backups, performance tuning, indexing, and sharding.
· Participate in on-call rotations to ensure 24/7 infrastructure availability and quick incident resolution.
Knowledge Skills and Abilities:
· 7+ years of hands-on AWS DevOps experience, especially with middleware services.
· Strong expertise in MongoDB Atlas or other cloud MongoDB services.
· Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
· Solid experience with CI/CD tools: Jenkins, CodePipeline, GitHub Actions, GitLab, Bitbucket, etc.
· Excellent scripting skills in Python, Bash, or PowerShell.
· Experience in containerization and orchestration: Docker, EKS, ECS.
· Familiarity with monitoring tools like CloudWatch, ELK, Prometheus, Grafana.
· Strong understanding of AWS networking and security: IAM, VPC, KMS, Security Groups.
· Ability to solve complex problems and thrive in a fast-paced environment.
Preferred Qualifications
· AWS Certified DevOps Engineer – Professional or AWS Solutions Architect – Associate/Professional.
· MongoDB Certified DBA or Developer.
· Experience with serverless services like AWS Lambda, Step Functions.
· Exposure to multi-cloud or hybrid cloud environments.
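Since the role covers MongoDB administration tasks such as indexing and performance tuning, here is a minimal pymongo sketch of a routine indexing task against MongoDB Atlas; the connection string, database, collection, and field names are all placeholders.

```python
# Minimal sketch: connect to MongoDB Atlas and create an index (routine admin/tuning task).
# The connection string, database, collection, and field names are placeholders.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb+srv://user:pass@demo-cluster.mongodb.net/?retryWrites=true")
orders = client["shop"]["orders"]

# Compound index to support the most common query pattern (by customer, newest first).
index_name = orders.create_index([("customer_id", ASCENDING), ("created_at", -1)])
print("created index:", index_name)

# Inspect existing indexes, e.g. when investigating slow queries.
for spec in orders.list_indexes():
    print(spec["name"], dict(spec["key"]))
```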
Mail updated resume with current salary-
Email: jobs[at]glansolutions[dot]com
Satish; 88O 27 49 743
Google search: Glan management consultancy