50+ Kubernetes Jobs in India

Position Overview
We are a UAE-based company seeking a skilled DevOps Engineer to join our team remotely from India (work-from-home) and help manage our cloud infrastructure and deployment processes. You will be responsible for maintaining and improving our AWS-based systems while ensuring reliable, scalable, and secure operations.
Key Responsibilities
- Design, implement, and maintain AWS cloud infrastructure using services including EKS, ECS, ECR, EC2, CloudWatch, S3, and IAM
- Manage and optimize Kubernetes clusters for container orchestration
- Build and maintain CI/CD pipelines using GitHub Actions
- Work with Docker containers for application deployment and scaling
- Monitor system performance and troubleshoot issues using CloudWatch and other monitoring tools (a minimal CloudWatch sketch follows this list)
- Collaborate with development teams to streamline deployment processes
- Implement and maintain security best practices across all infrastructure components
- Automate routine tasks and infrastructure provisioning
- Participate in on-call rotation for production support
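For illustration only, here is a minimal sketch of the kind of CloudWatch automation this role involves, written with boto3 and assuming AWS credentials are already configured; the namespace, metric, and alarm names are hypothetical, not part of this posting.

```python
import boto3

# Hypothetical namespace/metric names, shown only to illustrate the CloudWatch API.
cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Publish a custom metric (e.g., queue depth reported by an internal service).
cloudwatch.put_metric_data(
    Namespace="MyApp/Workers",
    MetricData=[{
        "MetricName": "QueueDepth",
        "Value": 42,
        "Unit": "Count",
        "Dimensions": [{"Name": "Environment", "Value": "production"}],
    }],
)

# Create an alarm that fires when the metric stays high for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-queue-depth",
    Namespace="MyApp/Workers",
    MetricName="QueueDepth",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    Dimensions=[{"Name": "Environment", "Value": "production"}],
)
```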
Required Skills & Experience
- Strong experience with AWS services, particularly EKS, ECS, ECR, EC2, CloudWatch, S3, and IAM
- Proficiency with Kubernetes for container orchestration
- Experience with Docker containerization
- Knowledge of GitHub Actions for CI/CD pipeline development
- Understanding of infrastructure as code principles
- Experience with monitoring and logging systems
- Strong problem-solving and troubleshooting skills
- Ability to work collaboratively in a team environment
Nice to Have
- Experience with Prometheus for advanced monitoring and alerting
- Knowledge of REST API development and integration
- PostgreSQL database management and query optimization skills
- Experience with additional monitoring and observability tools
- Knowledge of security scanning and compliance tools
What We Offer
- Competitive salary and benefits package
- Opportunity to work with modern cloud technologies
- Collaborative and innovative work environment
- Professional development opportunities
- Flexible work arrangements
How to Apply
Please submit your resume along with a cover letter highlighting your relevant AWS and DevOps experience.

Position : Tech Lead – Fullstack Developer
Experience : 7 to 15 Years
Location : MG Road, Bengaluru (Hybrid – 3 Days in Office)
Notice Period : Immediate / Serving / 15 Days or Less
About the Opportunity :
We are hiring a Tech Lead – Fullstack Developer for a well-funded product startup building an enterprise-grade SaaS platform in the Cybersecurity domain.
The role involves designing and delivering scalable microservices and cloud-native applications in a high-performing, Agile engineering environment.
You'll work alongside industry veterans from billion-dollar digital firms, contributing to technical design, product architecture, and engineering best practices.
Mandatory Skills : Java, Spring Boot, ReactJS (or any modern JavaScript framework), RESTful APIs, PostgreSQL, Docker, Kubernetes, CI/CD, Hibernate/JPA, Multithreading, and Microservices architecture.
Role Highlights :
- Lead fullstack product development using Java (Spring Boot) and ReactJS (or similar frameworks).
- Design, develop, test, and deploy scalable microservices and RESTful APIs.
- Collaborate with Product, DevOps, and QA teams in a fast-paced Agile environment.
- Write modular, secure, and efficient code optimized for performance and maintainability.
- Mentor junior developers and influence architecture decisions across the team.
- Participate in all stages of the product lifecycle, from design to deployment.
- Create technical documentation, UML diagrams, and contribute to knowledge-sharing through blogs or whitepapers.
Key Skills Required :
- Strong expertise in Java (mandatory) and Spring Boot.
- Proficient in frontend development using ReactJS or similar frameworks.
- Hands-on experience building and consuming RESTful APIs.
- Solid knowledge of PostgreSQL, Hibernate/JPA, and transaction management.
- Familiarity with Docker, Kubernetes, and cloud platforms (Azure/GCP).
- Understanding of API Gateway, ACID properties, multithreading, and performance tuning.
- Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI) and Agile methodologies.
- Strong debugging and profiling skills for performance bottlenecks.
Nice to Have :
- Experience with data integration tools (e.g., Pentaho, Apache NiFi).
- Exposure to the Healthcare or Cybersecurity domain.
- Familiarity with OpenAI APIs or real-time analytics tools.
- Willingness to contribute to internal documentation, blog posts, or whitepapers.
Perks & Benefits :
- Opportunity to build a product from scratch.
- Flat hierarchy and direct access to leadership.
- Strong focus on learning, mentorship, and technical innovation.
- Collaborative startup culture with long-term growth opportunities.
Interview Process :
- Technical Round – Technical Assessment
- Technical Interview – Core Development
- Advanced Technical Interview – Design & Problem Solving
- Final Round – CTO Discussion
About the Company
We are hiring for a fast-growing, well-funded product startup backed by a leadership team with a proven track record of building billion-dollar digital businesses. The company is focused on delivering enterprise-grade SaaS products in the Cybersecurity domain for B2B markets. You’ll be part of a passionate and dynamic engineering team building innovative solutions using modern tech stacks.
Key Responsibilities
- Design and develop scalable microservices using Java and Spring Boot
- Build and manage robust RESTful APIs
- Collaborate with cross-functional teams in an Agile setup
- Lead and mentor junior engineers, driving technical excellence
- Contribute to architecture discussions and code reviews
- Work with PostgreSQL, implement data integrity and consistency
- Deploy and manage services on cloud platforms like GCP or Azure
- Utilize Docker/Kubernetes for containerization and orchestration
Must-Have Skills
- Strong backend experience with Java, Spring Boot, REST APIs
- Proficiency in frontend development with React.js
- Experience with PostgreSQL and database optimization
- Hands-on with cloud platforms (GCP or Azure)
- Familiarity with Docker and Kubernetes
- Strong understanding of:
  - API Gateways
  - Hibernate & JPA
  - Transaction management & ACID properties
  - Multi-threading and context switching
Good to Have
- Experience in Cybersecurity or Healthcare domain
- Exposure to CI/CD pipelines and DevOps practices
About the Role
We are looking for a highly motivated DevOps Engineer with a strong background in cloud technologies, big data ecosystems, and software development lifecycles to lead cross-functional teams in delivering high-impact projects. The ideal candidate will combine excellent project management skills with technical acumen in GCP, DevOps, and Python-based applications.
Key Responsibilities
- Lead end-to-end project planning, execution, and delivery, ensuring alignment with business goals and timelines.
- Create and maintain project documentation including detailed timelines, sprint boards, risk logs, and weekly status reports.
- Facilitate Agile ceremonies: daily stand-ups, sprint planning, retrospectives, and backlog grooming.
- Actively manage risks, scope changes, resource allocation, and project dependencies to ensure delivery without disruptions.
- Ensure compliance with QA processes and security/compliance standards throughout the SDLC.
- Collaborate with stakeholders and senior leadership to communicate progress, blockers, and key milestones.
- Provide mentorship and support to cross-functional team members to drive continuous improvement and team performance.
- Coordinate with clients and act as a key point of contact for requirement gathering, updates, and escalations.
Required Skills & Experience
Cloud & DevOps
- Proficient in Google Cloud Platform (GCP) services: Compute, Storage, Networking, IAM.
- Hands-on experience with cloud deployments and infrastructure as code.
- Strong working knowledge of CI/CD pipelines, Docker, Kubernetes, and Terraform (or similar tools).
Big Data & Data Engineering
- Experience with large-scale data processing using tools like PySpark, Hadoop, Hive, HDFS, and Spark Streaming (preferred); a minimal PySpark sketch follows this list.
- Proven experience in managing and optimizing big data pipelines and ensuring high performance.
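As a rough illustration of the large-scale processing mentioned above, here is a minimal PySpark aggregation sketch; the input path, column names, and job name are placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical paths and column names, purely illustrative.
spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

events = spark.read.parquet("gs://example-bucket/events/date=2024-01-01/")

# Aggregate events per campaign and write the result back as Parquet.
rollup = (
    events
    .filter(F.col("event_type") == "impression")
    .groupBy("campaign_id")
    .agg(F.count("*").alias("impressions"),
         F.countDistinct("user_id").alias("unique_users"))
)

rollup.write.mode("overwrite").parquet("gs://example-bucket/rollups/2024-01-01/")
spark.stop()
```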
Programming & Frameworks
- Strong proficiency in Python with experience in Django (REST APIs, ORM, deployment workflows).
- Familiarity with Git and version control best practices.
- Basic knowledge of Linux administration and shell scripting.
Nice to Have
- Knowledge or prior experience in the Media & Advertising domain.
- Experience in client-facing roles and handling stakeholder communications.
- Proven ability to manage technical teams (5–6 members).
Why Join Us?
- Work on cutting-edge cloud and data engineering projects
- Collaborate with a talented, fast-paced team
- Flexible work setup and culture of ownership

Role overview:
As a founding senior software engineer, you will play a key role in shaping our AI-powered visual search engine for fashion and e-commerce. Responsibilities include solving complex deep-tech challenges to build scalable AI/ML solutions, leading backend development for performance and scalability, and architecting and integrating software aligned with product strategy and innovation goals. You will collaborate with cross-functional teams to address real consumer problems and build robust AI/ML pipelines to drive product innovation.
What we’re looking for:
- 3–5 years of Python experience (Golang is a plus), with expertise in concurrency, FastAPI, RESTful APIs, and microservices (a minimal FastAPI sketch follows this list).
- Proficiency in PostgreSQL/MongoDB, cloud platforms (AWS/GCP/Azure), and containerization tools like Docker/Kubernetes.
- Strong experience in asynchronous programming, CI/CD pipelines, and version control (git).
- Excellent problem-solving and communication skills are essential.
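For context, a minimal sketch of the kind of asynchronous FastAPI microservice endpoint this role describes; the route, request model, and placeholder response are assumptions made for illustration, not part of the actual product.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Hypothetical request/response models for a visual-search style endpoint.
class SearchRequest(BaseModel):
    image_url: str
    top_k: int = 10

class SearchResult(BaseModel):
    product_id: str
    score: float

@app.post("/search", response_model=list[SearchResult])
async def search(req: SearchRequest) -> list[SearchResult]:
    if req.top_k < 1:
        raise HTTPException(status_code=422, detail="top_k must be positive")
    # In a real service this would call an embedding model and a vector index;
    # here we return a placeholder result to keep the sketch self-contained.
    return [SearchResult(product_id="demo-item", score=0.99)][: req.top_k]
```

Run locally with `uvicorn app:app --reload` (assuming the file is named app.py).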
What we offer:
- Competitive salary and ESOPs, along with HackerHouse living: live and work with a Gen Z team in a 7BHK house on MG Road, Gurgaon.
- Hands-on experience in shipping world-class products, professional development opportunities, flexible hours, and a collaborative, supportive culture.

As a senior software engineer (AI/ML), you will play a key role in shaping our AI-powered visual search engine for fashion and e-commerce. Responsibilities include solving complex deep-tech challenges to build scalable AI/ML solutions, leading backend development for performance and scalability, and architecting and integrating software aligned with product strategy and innovation goals. You will collaborate with cross-functional teams to address real consumer problems and build robust AI/ML pipelines to drive product innovation.
What we’re looking for:
- Design and deploy advanced machine learning models in computer vision, including object detection and similarity matching (a toy similarity-matching sketch follows this list)
- Implement scalable data pipelines, optimize models for performance and accuracy, and ensure they are production-ready with MLOps
- 3–5 years of Python experience (Golang is a plus), with expertise in concurrency, FastAPI, RESTful APIs, and microservices.
- Proficiency in PostgreSQL/MongoDB, cloud platforms (AWS/GCP/Azure), and containerization tools like Docker/Kubernetes.
- Take part in code reviews, share knowledge, and lead by example to maintain high-quality engineering practices
- Strong experience in asynchronous programming, CI/CD pipelines, and version control (git). Excellent problem-solving and communication skills are essential.
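To make the similarity-matching item concrete, here is a toy NumPy sketch that compares a query embedding against a catalogue of embeddings; the vectors are random placeholders and the dimensions are assumptions, not details of the actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder embeddings: 1,000 catalogue items and one query, 512-dim each.
catalogue = rng.normal(size=(1000, 512))
query = rng.normal(size=512)

# L2-normalise so that a dot product equals cosine similarity.
catalogue /= np.linalg.norm(catalogue, axis=1, keepdims=True)
query /= np.linalg.norm(query)

scores = catalogue @ query             # cosine similarity per catalogue item
top_k = np.argsort(scores)[::-1][:5]   # indices of the 5 closest items
print(top_k, scores[top_k])
```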
What we offer:
- Competitive salary and ESOPs, along with HackerHouse living: live and work with a Gen Z team in a 7BHK house on MG Road, Gurgaon.
- Hands-on experience in shipping world-class products, professional development opportunities, flexible hours, and a collaborative, supportive culture.
What does a successful Senior DevOps Engineer do at Fiserv?
This role focuses on contributing to and enhancing our DevOps environment within the Issuer Solution group, where our cross-functional Scrum teams deliver solutions built on cutting-edge mobile technology and products. You will be expected to provide support across the wider business unit, leading DevOps practices and initiatives.
What will you do:
• Build, manage, and deploy CI/CD pipelines.
• Work with Helm charts, Rundeck, and OpenShift.
• Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines.
• Implement various development, testing, and automation tools, and supporting IT infrastructure.
• Optimize and automate release/development cycles and processes.
• Be part of and help promote our DevOps culture.
• Identify and implement continuous improvements to the development practice
What you must have:
• 3+ years of experience in DevOps, with hands-on experience in the following:
- Writing automation scripts for deployments and housekeeping using shell scripts (Bash) and Ansible playbooks
- Building Docker images and running/managing Docker instances
- Building Jenkins pipelines using Groovy scripts
- Working knowledge of Kubernetes, including application deployments, managing application configurations, and persistent volumes
• Good understanding of infrastructure as code
• Ability to write and update documentation
• Demonstrate a logical, process-oriented approach to problems and troubleshooting
• Ability to collaborate with multiple development teams
What you are preferred to have:
• 8+ years of development experience
• Jenkins administration experience
• Hands-on experience in building and deploying helm charts
Process Skills:
• Should have worked on Agile projects

Job Title: Java Full Stack Developer
Experience: 6+ Years
Locations: Bangalore, Mumbai, Pune, Gurgaon
Work Mode: Hybrid
Notice Period: Immediate Joiners Preferred / Candidates Who Have Completed Their Notice Period
About the Role
We are looking for a highly skilled and experienced Java Full Stack Developer with a strong command over backend technologies and modern frontend frameworks. The ideal candidate will have deep experience with Java, ReactJS, and DevOps tools like Jenkins, Docker, and basic Kubernetes knowledge. You’ll be contributing to complex software solutions across industries, collaborating with cross-functional teams, and deploying production-grade systems in a cloud-native, CI/CD-driven environment.
Key Responsibilities
- Design and develop scalable web applications using Java (Spring Boot) and ReactJS
- Collaborate with UX/UI designers and backend developers to implement robust, efficient front-end interfaces
- Develop and maintain CI/CD pipelines using Jenkins, ensuring high-quality software delivery
- Containerize applications using Docker and ensure smooth deployment and orchestration using Kubernetes (basic level)
- Write clean, modular, and testable code and participate in code reviews
- Troubleshoot and resolve performance, reliability, and functional issues in production
- Work in Agile teams and participate in daily stand-ups, sprint planning, and retrospective meetings
- Ensure all security, compliance, and performance standards are met in the development lifecycle
Mandatory Skills
- Backend: Java, Spring Boot
- Frontend: ReactJS
- DevOps Tools: Jenkins, Docker
- Containers & Orchestration: Basic knowledge of Kubernetes
- Strong understanding of RESTful services and APIs
- Familiarity with Git and version control workflows
- Good understanding of SDLC, Agile/Scrum methodologies

Position Overview:
We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.
Key Responsibilities
- Design, develop, and maintain scalable applications using Java and Spring Boot framework
- Build robust web services and APIs using Python and Flask framework
- Implement event-driven architectures using the NATS messaging server (a minimal pub/sub sketch follows this list)
- Deploy, manage, and optimize applications in Kubernetes environments
- Develop microservices following best practices and design patterns
- Collaborate with cross-functional teams to deliver high-quality software solutions
- Write clean, maintainable code with comprehensive documentation
- Participate in code reviews and contribute to technical architecture decisions
- Troubleshoot and optimize application performance in containerized environments
- Implement CI/CD pipelines and follow DevOps best practices
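For context, a minimal pub/sub sketch using the nats-py client against a local NATS server; the subject name and payload are hypothetical and only illustrate the publish/subscribe pattern named in this posting.

```python
import asyncio
import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")

    async def handle_order(msg):
        # Subscriber side: react to an event published on the subject.
        print(f"received on {msg.subject}: {msg.data.decode()}")

    await nc.subscribe("orders.created", cb=handle_order)

    # Publisher side: emit an event that the subscriber above will receive.
    await nc.publish("orders.created", b'{"order_id": "demo-123"}')
    await nc.flush()

    await asyncio.sleep(0.5)  # give the callback a moment to run
    await nc.close()

asyncio.run(main())
```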
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field
- 4+ years of experience in software development
- Strong proficiency in Java with deep understanding of web technology stack
- Hands-on experience developing applications with Spring Boot framework
- Solid understanding of Python programming language with practical Flask framework experience
- Working knowledge of NATS server for messaging and streaming data
- Experience deploying and managing applications in Kubernetes
- Understanding of microservices architecture and RESTful API design
- Familiarity with containerization technologies (Docker)
- Experience with version control systems (Git)
Skills & Competencies
Technical Skills
- Java (Spring Boot, Spring Cloud, Spring Security)
- Python (Flask, SQLAlchemy, REST APIs)
- NATS messaging patterns (pub/sub, request/reply, queue groups)
- Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
- Web technologies (HTTP, REST, WebSocket, gRPC)
- Container orchestration and management
Soft Skills
- Problem-solving and analytical thinking
- Strong communication and collaboration
- Self-motivated with ability to work independently
- Attention to detail and code quality
- Continuous learning mindset
- Team player with mentoring capabilities
About the Role
We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–7 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.
Note: Experience with Kubernetes is mandatory.
Key Responsibilities
- Design and develop scalable, reliable backend services and cloud-native applications.
- Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
- Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
- Implement and manage CI/CD pipelines and infrastructure automation.
- Collaborate with frontend, DevOps, and product teams in an agile environment.
- Ensure high code quality through testing, reviews, and documentation.
Required Skills
- At least 2 years of strong hands-on experience with Kubernetes in production environments (mandatory).
- Expertise in at least one public cloud platform [GCP (Preferred), AWS, Azure, or OCI].
- Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
- Solid understanding of distributed systems, microservices, and cloud-native architecture.
- Experience with containerization using Docker and Kubernetes-native deployment workflows.
- Working knowledge of SQL and relational databases.
Preferred Qualifications
- Experience working across multiple cloud platforms.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.
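For illustration of the analytics integration mentioned in the last item, a minimal google-cloud-bigquery sketch; the project, dataset, table, and query are hypothetical and assume application-default credentials.

```python
from google.cloud import bigquery

# Hypothetical project/dataset/table, shown only to illustrate the client API.
client = bigquery.Client(project="example-project")

query = """
    SELECT user_id, COUNT(*) AS events
    FROM `example-project.analytics.events`
    WHERE event_date = @day
    GROUP BY user_id
    ORDER BY events DESC
    LIMIT 10
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("day", "DATE", "2024-01-01")]
)

for row in client.query(query, job_config=job_config).result():
    print(row.user_id, row.events)
```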
Profile: Senior Java Developer
🔷 Experience: 5+ Years
🔷 Location: Remote
🔷 Shift: Day Shift
(Only immediate joiners & candidates who have completed notice period)
✨ What we want:
✅ AWS cloud services (MANDATORY)
✅ Docker containerization (MANDATORY)
✅ Spring/Spring Boot framework
✅ RESTful API development
✅ Microservices architecture
✅ Database experience (SQL/NoSQL)
✅ Git version control & CI/CD
✅ Kubernetes orchestration
About Sun King
Sun King is the world’s leading off-grid solar energy company, delivering energy access to 1.8 billion people without reliable grid connections through innovative product design, fintech solutions, and field operations.
Key highlights:
- Connected over 20 million homes to solar power across Africa and Asia, adding 200,000 homes monthly.
- Affordable ‘pay-as-you-go’ financing model; after 1-2 years, customers own their solar equipment.
- Saved customers over $4 billion to date.
- Collect 650,000 daily payments via 28,000 field agents using mobile money systems.
- Products range from home lighting to high-energy appliances, with expansion into clean cooking, electric mobility, and entertainment.
With 2,800 staff across 12 countries, our team includes experts in various fields, all passionate about serving off-grid communities.
Diversity Commitment:
44% of our workforce are women, reflecting our commitment to gender diversity.
About the role:
Sun King is looking for a self-driven Infrastructure engineer, who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IAC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments.
What you would be expected to do:
- Work with engineering, automation, and data teams on various infrastructure requirements.
- Designing modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform.
- Managing AWS services for multiple teams.
- Managing custom data store deployments like sharded MongoDB clusters, Elasticsearch clusters, and upcoming services.
- Deployment and management of Kubernetes resources.
- Deploy and manage custom metrics exporters, trace data, and custom application metrics; design dashboards and query metrics from multiple sources as part of an end-to-end observability stack (a minimal exporter sketch follows this list).
- Set up incident response services and design effective processes.
- Deployment and management of critical platform services like OPA and Keycloak for IAM.
- Advocate best practices for high availability and scalability when designing AWS infrastructure, observability dashboards, implementing IAC, deploying to Kubernetes, and designing GitOps CI/CD pipelines.
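As a small example of the custom metrics exporters mentioned above, here is a minimal sketch using prometheus_client; the metric name and the queue being measured are hypothetical.

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical metric: depth of an internal work queue.
QUEUE_DEPTH = Gauge("worker_queue_depth", "Number of jobs waiting in the queue",
                    ["queue"])

def read_queue_depth(queue_name: str) -> int:
    # Stand-in for a real check (e.g., querying Redis, SQS, or a database).
    return random.randint(0, 50)

if __name__ == "__main__":
    start_http_server(9100)  # exposes /metrics for Prometheus to scrape
    while True:
        QUEUE_DEPTH.labels(queue="payments").set(read_queue_depth("payments"))
        time.sleep(15)
```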
You might be a strong candidate if you have/are:
- Hands-on experience with Docker or any other container runtime environment and Linux with the ability to perform basic administrative tasks.
- Experience working with web servers (nginx, apache) and cloud providers (preferably AWS).
- Hands-on scripting and automation experience (Python, Bash), experience debugging and troubleshooting Linux environments and cloud-native deployments.
- Experience building CI/CD pipelines, with familiarity with monitoring & alerting systems (Grafana, Prometheus, and exporters).
- Knowledge of web architecture, distributed systems, and single points of failure.
- Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottlenecks.
- Good networking fundamentals — SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls.
Good to have:
- Experience with backend development and setting up databases and performance tuning using parameter groups.
- Working experience in Kubernetes cluster administration and Kubernetes deployments.
- Experience working alongside sec ops engineers.
- Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing.
- Setup and use of OpenTelemetry, central logging, and monitoring systems.
What Sun King offers:
- Professional growth in a dynamic, rapidly expanding, high-social-impact industry.
- An open-minded, collaborative culture made up of enthusiastic colleagues who are driven by the challenge of innovation towards profound impact on people and the planet.
- A truly multicultural experience: you will have the chance to work with and learn from people from different geographies, nationalities, and backgrounds.
- Structured, tailored learning and development programs that help you become a better leader, manager, and professional through the Sun King Center for Leadership.
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a data science and product engineering organization building products and solutions for technology and enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
What we are looking for:
Experience: 3–6 years
Education: BTech / BE / MCA / MSc Computer Science
About Role:
Primary Skills: DevOps, with strong CircleCI, ArgoCD, GitHub, Terraform, Helm, Kubernetes, and Google Cloud experience
Required Skills and Experience:
- 3+ years of experience in DevOps, infrastructure automation, or related fields.
- Strong proficiency with CircleCI for building and managing CI/CD pipelines.
- Advanced expertise in Terraform for infrastructure as code.
- Solid experience with Helm for managing Kubernetes applications.
- Hands-on knowledge of ArgoCD for GitOps-based deployment strategies.
- Proficient with GitHub for version control, repository management, and workflows.
- Extensive experience with Kubernetes for container orchestration and management.
- In-depth understanding of Google Cloud Platform (GCP) services and architecture.
- Strong scripting and automation skills (e.g., Python, Bash, or equivalent).
- Familiarity with monitoring and logging tools like Prometheus, Grafana, and ELK stack.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities in agile development environments.
Note: Kindly share your LinkedIn profile when applying.

About the Role
We are looking for a highly motivated Project Manager with a strong background in cloud technologies, big data ecosystems, and software development lifecycles to lead cross-functional teams in delivering high-impact projects. The ideal candidate will combine excellent project management skills with technical acumen in GCP, DevOps, and Python-based applications.
Key Responsibilities
- Lead end-to-end project planning, execution, and delivery, ensuring alignment with business goals and timelines.
- Create and maintain project documentation including detailed timelines, sprint boards, risk logs, and weekly status reports.
- Facilitate Agile ceremonies: daily stand-ups, sprint planning, retrospectives, and backlog grooming.
- Actively manage risks, scope changes, resource allocation, and project dependencies to ensure delivery without disruptions.
- Ensure compliance with QA processes and security/compliance standards throughout the SDLC.
- Collaborate with stakeholders and senior leadership to communicate progress, blockers, and key milestones.
- Provide mentorship and support to cross-functional team members to drive continuous improvement and team performance.
- Coordinate with clients and act as a key point of contact for requirement gathering, updates, and escalations.
Required Skills & Experience
Cloud & DevOps
- Proficient in Google Cloud Platform (GCP) services: Compute, Storage, Networking, IAM.
- Hands-on experience with cloud deployments and infrastructure as code.
- Strong working knowledge of CI/CD pipelines, Docker, Kubernetes, and Terraform (or similar tools).
Big Data & Data Engineering
- Experience with large-scale data processing using tools like PySpark, Hadoop, Hive, HDFS, and Spark Streaming (preferred).
- Proven experience in managing and optimizing big data pipelines and ensuring high performance.
Programming & Frameworks
- Strong proficiency in Python with experience in Django (REST APIs, ORM, deployment workflows).
- Familiarity with Git and version control best practices.
- Basic knowledge of Linux administration and shell scripting.
Nice to Have
- Knowledge or prior experience in the Media & Advertising domain.
- Experience in client-facing roles and handling stakeholder communications.
- Proven ability to manage technical teams (5–6 members).
Why Join Us?
- Work on cutting-edge cloud and data engineering projects
- Collaborate with a talented, fast-paced team
- Flexible work setup and culture of ownership
- Continuous learning and upskilling environment
- Inclusive health benefits
About Company:
Auditoria is an AI-driven SaaS automation provider for corporate finance that automates back-office business processes involving tasks, analytics, and responses in Vendor Management, Accounts Payable, Accounts Receivable, and Planning. By leveraging natural language processing, artificial intelligence, and machine learning, Auditoria removes friction and repetition from mundane tasks while automating complex functions and providing real-time visibility into cash performance. Corporate finance and accounting teams use Auditoria to accelerate business value while minimizing heavy IT involvement, improving business resilience, lowering attrition, and accelerating business insights.
Founded in 2019 and backed by Venrock, Workday Ventures, Neotribe Ventures, Engineering Capital, and Firebolt Ventures, we build intelligent automation by combining fine-grained analytical orchestration of a company's typical financial and audit workflows with conversational AI, delivering rapid value to the finance/audit back office.
In 2021, Auditoria earned industry recognition by being named to the Intelligent Apps Top 40 List, SSON's Shared Services & Outsourcing Impact Awards, the Constellation Research ShortList for AI-Driven Cognitive Applications, HFS Research Hot Vendors, 2021 CRN Emerging Vendors List, TiE50 Award, and the winner of the inaugural Pitch Event by Constellation Research.
The opportunity for you:
We are building an AI/ML-enabled SAAS solution to help manage the cash performance of enterprises. You would be working on solving complex problems in the FinTech space.
Responsibilities:
- Own the design and development of the core areas of Auditoria’s product, leveraging the latest tech stack hosted on AWS cloud.
- Collaborate across multiple teams/time zones to help deliver quality solutions per the roadmap.
- Partner with business teams to deliver incremental value to clients
- Champion Auditoria’s values and tech culture.
Requirements:
- Worked with many of the following: multi-tenant SaaS, CI/CD environments, monitoring tools, Kubernetes and containers, Istio, workflow engines, data stores (RDBMS, Cassandra, Neo4j), AWS services, integrations with enterprise systems (SSO, email, etc.).
- Experience with system design & data modeling; familiarity with RDBMS and Big Data
Job Description:
- Hands-on experience building applications leveraging AWS services like Step Functions, RDS, Cassandra, Kinesis, and ELK is a big plus.
- Fluent coding skills in Node.js.
- 10+ years of professional, hands-on experience developing and shipping complex software systems.
- Embraces startup setup, can work through unknowns, resource constraints & multiple priorities with creativity & resourcefulness.
- Experience with the Agile development process and zeal for engineering best practices.
- BS or MS in Computer Science or related Engineering degree. Preferably from IIT/NIT/BITS

Senior Software Engineer - Backend
A Senior Software Backend Engineer is responsible for designing, building, and maintaining the server-side logic and infrastructure of web applications or software systems. They typically work closely with frontend engineers, DevOps teams, and other stakeholders to ensure that the back-end services perform optimally and meet business requirements. Below is an outline of a typical Senior Backend Developer job profile:
Key Responsibilities:
1. System Architecture & Design:
- Design scalable, high-performance backend services and APIs.
- Participate in the planning, design, and development of new features.
- Ensure that systems are designed with fault tolerance, security, and scalability in mind.
2. Development & Implementation:
- Write clean, maintainable, and efficient code.
- Implement server-side logic, databases, and data storage solutions.
- Work with technologies like REST, GraphQL, and other backend communication methods.
- Design and optimize database schemas, queries, and indexes.
3. Performance Optimization:
- Diagnose and fix performance bottlenecks.
- Optimize backend processes and database queries for speed and efficiency.
- Implement caching strategies and load balancing (a cache-aside sketch follows this list).
4. Security:
- Ensure the security of the backend systems by implementing secure coding practices.
- Protect against common security threats such as SQL injection, cross-site scripting (XSS), and others.
5. Collaboration & Leadership:
- Collaborate with frontend teams, product managers, and DevOps engineers.
- Mentor junior developers and guide them in best practices.
- Participate in code reviews and ensure that the development team follows consistent coding standards.
6. Testing & Debugging:
- Develop and run unit, integration, and performance tests to ensure code quality.
- Troubleshoot, debug, and upgrade existing systems.
7. Monitoring & Maintenance:
- Monitor system performance and take preventive measures to ensure uptime and reliability.
- Maintain technical documentation for reference and reporting.
- Stay updated on emerging technologies and incorporate them into the backend tech stack.
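To make the caching item above concrete, here is a minimal cache-aside sketch using redis-py; the key scheme, TTL, and backing "database query" are hypothetical and only illustrate the pattern.

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_user_from_db(user_id: int) -> dict:
    # Stand-in for a real database query.
    return {"id": user_id, "name": "demo"}

def get_user(user_id: int, ttl_seconds: int = 300) -> dict:
    """Cache-aside: try Redis first, fall back to the DB, then populate the cache."""
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    user = fetch_user_from_db(user_id)
    r.setex(key, ttl_seconds, json.dumps(user))
    return user

print(get_user(42))
```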
Required Skills:
1. Programming Languages:
- Expertise in one or more backend programming languages such as Python, Java, Go, or Rust.
2. Database Management:
- Strong understanding of both relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis).
- Knowledge of data modeling, query optimization, and database scaling strategies.
3. API Design & Development:
- Proficiency in designing and implementing RESTful and GraphQL APIs.
- Experience with microservices architecture.
- Good understanding of containers
4. Cloud & DevOps:
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
- Understanding of DevOps principles, CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
5. Version Control:
- Proficiency with Git and branching strategies.
6. Testing & Debugging Tools:
- Familiarity with testing frameworks, debugging tools, and performance profiling.
7. Soft Skills:
- Strong problem-solving skills.
- Excellent communication and teamwork abilities.
- Leadership and mentorship qualities.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or related field.
- 5+ years of experience in backend development or software engineering.
- Proven experience with system design, architecture, and high-scale application development.
Preferred Qualifications:
- Experience with distributed systems, event-driven architectures, and asynchronous processing.
- Familiarity with message queues (e.g., RabbitMQ, Kafka) and caching layers (e.g., Redis, Memcached).
- Knowledge of infrastructure as code (IaC) tools like Terraform or Ansible.
Tools & Technologies:
- Languages: Python, Java, Golang, Rust.
- Databases: PostgreSQL, MySQL, MongoDB, Redis, Cassandra.
- Frameworks: Django, Flask, Spring Boot, Go Micro.
- Cloud Providers: AWS, Azure, Google Cloud.
- Containerization: Docker, Kubernetes.
- CI/CD: Jenkins, GitLab CI, CircleCI.
This job profile will vary depending on the company and industry, but the core principles of designing, developing, and maintaining back-end systems remain the same.
- Minimum 5 years of experience in a customer-facing role such as pre-sales, solutions engineering, or technical architecture.
- Exceptional communication and presentation skills.
- Proven ability in technical integrations and conducting POCs.
- Proficiency in coding with high-level programming languages (Java, Go, Python).
- Solid understanding of Monitoring, Observability, Log Management, SIEM.
- Background in Engineering/DevOps will be considered an advantage.
- Previous experience in Technical Sales of Log Analytics, Monitoring, APM, RUM, SIEM is desirable.
Technical Expertise :
- In-depth knowledge of Kubernetes, AWS, Azure, GCP, Docker, Prometheus, OpenTelemetry.
- Candidates should have hands-on experience and the ability to integrate these technologies into customer environments, providing tailored solutions that meet diverse operational requirements.
Job Description:
Job Title: Java Full Stack Developer - Azure
Experience Level: 6–10 Years
Job Summary:
We are looking for a highly capable and motivated Full Stack Developer with strong expertise in Java (Spring Boot), React.js, Angular, and solid hands-on experience with Microsoft Azure cloud services. The role involves end-to-end application development—frontend to backend—and deploying scalable, secure, and resilient applications on Azure.
Key Responsibilities:
- Develop scalable, full-stack web applications using Java (Spring Boot) on the backend and React.js/Angular on the frontend.
- Create and consume RESTful APIs and integrate them with modern frontend UIs.
- Build responsive and dynamic user interfaces using React and Angular, as per project needs.
- Deploy and manage applications on Microsoft Azure using services like App Services, Azure SQL, Azure Functions, Azure Storage, etc.
- Containerize applications using Docker, and deploy them using Azure Kubernetes Service (AKS).
- Automate infrastructure provisioning using ARM templates or Bicep, and implement CI/CD pipelines with Azure DevOps.
- Ensure security best practices across applications using OAuth 2.0, JWT, and Azure Identity services.
- Collaborate with cross-functional teams in Agile/Scrum methodologies.
- Participate in code reviews, unit testing, and troubleshooting in a cloud-native environment.
Required Skills:
Backend Development:
- Strong experience with Java 8+, Spring Boot, Spring MVC, RESTful services
- ORM tools: JPA/Hibernate
- Experience with SQL and Azure SQL Database
Frontend Development:
- Hands-on experience with React.js and Angular 8+
- Proficient in JavaScript, TypeScript, HTML5, CSS3, and responsive design
- Familiarity with Redux, RxJS, or similar state management libraries
Cloud & DevOps (Azure):
- Deep understanding of the Microsoft Azure ecosystem:
  - App Services, Azure Functions, Azure SQL, Azure Blob Storage
  - Azure Kubernetes Service (AKS) and Container Registry (ACR)
  - Azure Key Vault, Azure Monitor, Application Insights
- CI/CD with Azure DevOps, GitHub Actions, or Jenkins
- Docker-based application packaging and deployment
Preferred Skills:
- Experience with microservices architecture
- NoSQL databases such as Cosmos DB or MongoDB
- Basic scripting knowledge (PowerShell or Bash)
- Exposure to event-driven patterns using Azure Event Hub or Service Bus
Educational qualification:
B.E/B.Tech/MCA

Product company for financial operations automation platform

Mandatory Criteria
- Strong hands-on experience with Kubernetes, with at least 2 years in production environments.
- Expertise in at least one public cloud platform [GCP (preferred), AWS, Azure, or OCI].
- Proficiency in backend programming with Python, Java, or Kotlin (at least one is required).
- Strong backend experience.
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.
About the Role
We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–7 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.
Note: Experience with Kubernetes is mandatory.
Key Responsibilities
- Design and develop scalable, reliable backend services and cloud-native applications.
- Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
- Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
- Implement and manage CI/CD pipelines and infrastructure automation.
- Collaborate with frontend, DevOps, and product teams in an agile environment.
- Ensure high code quality through testing, reviews, and documentation.
Required Skills
- At least 2 years of strong hands-on experience with Kubernetes in production environments (mandatory).
- Expertise in at least one public cloud platform [GCP (Preferred), AWS, Azure, or OCI].
- Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
- Solid understanding of distributed systems, microservices, and cloud-native architecture.
- Experience with containerization using Docker and Kubernetes-native deployment workflows.
- Working knowledge of SQL and relational databases.
Preferred Qualifications
- Experience working across multiple cloud platforms.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.
Nice to Have
- Knowledge of NoSQL databases or event-driven/message-based architectures.
- Experience with serverless services, managed data pipelines, or data lake platforms.
Requirements
- Bachelors/Masters in Computer Science or a related field
- 5-8 years of relevant experience
- Proven track record of Team Leading/Mentoring a team successfully.
- Experience with web technologies and microservices architecture both frontend and backend.
- Java, Spring framework, hibernate
- MySQL, Mongo, Solr, Redis,
- Kubernetes, Docker
- Strong understanding of Object-Oriented Programming, Data Structures, and Algorithms.
- Excellent teamwork skills, flexibility, and ability to handle multiple tasks.
- Experience with API Design, ability to architect and implement an intuitive customer and third-party integration story
- Ability to think and analyze both breadth-wise (client, server, DB, control flow) and depth-wise (threads, sessions, space-time complexity) while designing and implementing services
- Exceptional design and architectural skills
- Experience of cloud providers/platforms like GCP and AWS
Roles & Responsibilities
- Develop new user-facing features.
- Work alongside the product to understand our requirements, and design, develop and iterate, think through the complex architecture.
- Writing clean, reusable, high-quality, high-performance, maintainable code.
- Encourage innovation and efficiency improvements to ensure processes are productive.
- Ensure the training and mentoring of the team members.
- Ensure the technical feasibility of UI/UX designs and optimize applications for maximum speed.
- Research and apply new technologies, techniques, and best practices.
- Team mentorship and leadership.

Job description
Job Title: Cloud Migration Consultant – (AWS to Azure)
Experience: 4+ years in application assessment and migration
About the Role
We’re looking for a Cloud Migration Consultant with hands-on experience assessing and migrating complex applications to Azure. You'll work closely with Microsoft business units, participating in Intake & Assessment and Planning & Design phases, creating migration artifacts, and leading client interactions. You’ll also support application modernization efforts in Azure, with exposure to AWS as needed.
Key Responsibilities
- Assess application readiness and document architecture, dependencies, and migration strategy.
- Conduct interviews with stakeholders and generate discovery insights using tools like Azure Migrate, CloudockIt, PowerShell.
- Create architecture diagrams, migration playbooks, and maintain Azure DevOps boards.
- Set up applications both on-premises and in cloud environments (primarily Azure).
- Support proof-of-concepts (PoCs) and advise on migration options.
- Collaborate with application, database, and infrastructure teams to enable smooth transition to migration factory teams.
- Track progress, blockers, and risks, reporting timely status to project leadership.
Required Skills
- 4+ years of experience in cloud migration and assessment
- Strong expertise in Azure IaaS/PaaS (VMs, App Services, ADF, etc.)
- Familiarity with AWS IaaS/PaaS (EC2, RDS, Glue, S3)
- Experience with Java (SpringBoot)/C#, .Net/Python, Angular/React.js, REST APIs
- Working knowledge of Kafka, Docker/Kubernetes, Azure DevOps
- Network infrastructure understanding (VNets, NSGs, Firewalls, WAFs)
- IAM knowledge: OAuth, SAML, Okta/SiteMinder
- Experience with Big Data tools like Databricks, Hadoop, Oracle, DocumentDB
Preferred Qualifications
- Azure or AWS certifications
- Prior experience with enterprise cloud migrations (especially in Microsoft ecosystem)
- Excellent communication and stakeholder management skills
Educational qualification:
B.E/B.Tech/MCA
Experience :
4+ Years
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.
· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation (a housekeeping sketch follows this list).
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
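As a small example of the Python automation mentioned above, here is a housekeeping sketch using google-cloud-storage that deletes objects older than a retention window; the bucket name, prefix, and retention period are hypothetical, and application-default credentials are assumed.

```python
from datetime import datetime, timedelta, timezone

from google.cloud import storage

# Hypothetical bucket/prefix; assumes application-default credentials.
BUCKET = "example-build-artifacts"
PREFIX = "ci/"
RETENTION = timedelta(days=30)

def purge_old_artifacts() -> int:
    client = storage.Client()
    cutoff = datetime.now(timezone.utc) - RETENTION
    deleted = 0
    for blob in client.list_blobs(BUCKET, prefix=PREFIX):
        if blob.time_created < cutoff:
            blob.delete()
            deleted += 1
    return deleted

if __name__ == "__main__":
    print(f"deleted {purge_old_artifacts()} objects")
```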

Role Overview:
As a Backend Developer at LearnTube.ai, you will ship the backbone that powers 2.3 million learners in 64 countries—owning APIs that crunch 1 billion learning events & the AI that supports it with <200 ms latency.
What You'll Do:
At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns. As a Backend Engineer, your roles and responsibilities will include:
- Ship Micro-services – Build FastAPI services that handle ≈ 800 req/s today and will triple within a year (sub-200 ms p95).
- Power Real-Time Learning – Drive the quiz-scoring & AI-tutor engines that crunch millions of events daily.
- Design for Scale & Safety – Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch (a minimal SQS worker sketch follows this list).
- Deploy Globally – Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes.
- Automate Releases – GitLab CI/CD + blue-green / canary = multiple safe prod deploys each week.
- Own Reliability – Instrument with Prometheus / Grafana, chase 99.9 % uptime, trim infra spend.
- Expose Gen-AI at Scale – Publish LLM inference & vector-search endpoints in partnership with the AI team.
- Ship Fast, Learn Fast – Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod in < 2 weeks.
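As a rough illustration of the event-processing side referenced above, here is a minimal long-polling SQS worker sketch using boto3; the queue URL and handler logic are hypothetical and stand in for whatever scoring work the real service does.

```python
import json

import boto3

# Hypothetical queue URL; assumes AWS credentials are configured.
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/quiz-events"
sqs = boto3.client("sqs", region_name="ap-south-1")

def handle_event(event: dict) -> None:
    # Stand-in for real scoring logic.
    print("scoring", event.get("quiz_id"))

def poll_forever() -> None:
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            handle_event(json.loads(msg["Body"]))
            # Delete only after successful processing (at-least-once delivery).
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    poll_forever()
```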
What makes you a great fit?
Must-Haves:
- 2+ yrs Python back-end experience (FastAPI)
- Strong with Docker & container orchestration
- Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS) or GCP (GKE / Compute) in production
- SQL/NoSQL (Postgres, MongoDB) + You’ve built systems from scratch & have solid system-design fundamentals
Nice-to-Haves
- k8s at scale, Terraform
- Experience with AI/ML inference services (LLMs, vector DBs)
- Go / Rust for high-perf services
- Observability: Prometheus, Grafana, OpenTelemetry
About Us:
At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with:
- AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback.
- Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.
Meet the Founders:
LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes. We’re proud to be recognised by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.
Why Work With Us?
At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:
- Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
- Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
- Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
- Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
- Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to.
- Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.
Who we are
At CoinCROWD, we’re building the next-gen wallet for real-world crypto utility. Our flagship product, CROWD Wallet, is secure, intuitive, gasless, and designed to bring digital currencies into everyday spending, from a coffee shop to cross-border payments.
We’re redefining the wallet experience for everyday users, combining the best of Web3 + AI to create a secure, scalable, and delightful platform.
We’re more than just a blockchain company; we’re an AI-native, crypto-forward startup. We ship fast, think long, and believe in building agentic, self-healing infrastructure that can scale across geographies and blockchains. If that excites you, let’s talk.
What You’ll Be Doing :
As the DevOps Lead at CoinCROWD, you’ll own our infrastructure from end to end, designing, deploying, and scaling secure systems to support blockchain transactions, AI agents, and token operations across global users.
You will :
- Lead the CI/CD, infra automation, observability, and multi-region deployments of CoinCROWD products.
- Manage cloud and container infrastructure using GCP, Docker, Kubernetes, Terraform.
- Deploy and maintain scalable, secure blockchain infrastructure using QuickNode, Alchemy, Web3Auth, and other Web3 APIs.
- Implement infrastructure-level AI agents or scripts for auto-scaling, failure prediction, anomaly detection, and alert management (using LangChain, LLMs, or tools like n8n).
- Ensure 99.99% uptime for wallet systems, APIs, and smart contract layers.
- Build and optimize observability across on-chain/off-chain systems using tools like Prometheus, Grafana, Sentry, Loki, ELK Stack.
- Create auto-healing, self-monitoring pipelines that reduce human ops time via Agentic AI workflows.
- Collaborate with engineering and security teams on smart contract deployment pipelines, token rollouts, and app store release automation.
Agentic Ops : What it means
- Use GPT-based agents to auto-document infra changes or failure logs.
- Run LangChain agents that triage alerts, perform log analysis, or suggest infra optimizations (see the sketch after this list).
- Build CI/CD workflows that self-update or auto-tune based on system usage.
- Integrate AI to detect abnormal wallet behaviors, fraud attempts, or suspicious traffic spikes.
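As a rough illustration of the triage idea (not CoinCROWD's actual tooling), the sketch below wraps an alert in a prompt and asks a model for a summary; `call_llm` is a hypothetical stand-in for whichever LLM or LangChain agent is used.

```python
# Illustrative alert-triage hook. call_llm() is a hypothetical placeholder,
# not a real library function; swap in your LLM provider or LangChain agent.
import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; returns a canned answer for demonstration."""
    return "Severity 3: p99 latency spike on wallet-api; check the latest deploy first."

def triage_alert(alert: dict) -> dict:
    prompt = (
        "Summarize this infrastructure alert, rate severity 1-5, "
        "and suggest a first remediation step:\n" + json.dumps(alert)
    )
    return {"alert_id": alert.get("id"), "triage": call_llm(prompt)}

if __name__ == "__main__":
    print(triage_alert({"id": "demo-1", "service": "wallet-api", "error": "p99 latency spike"}))
```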
What We’re Looking For :
- 5 to 10 years of DevOps/SRE experience, with at least 2 to 3 years in Web3, fintech, or high-scale infra.
- Deep expertise with Docker, Kubernetes, Helm, and cloud providers (GCP preferred).
- Hands-on with Terraform, Ansible, GitHub Actions, Jenkins, or similar IaC and pipeline tools.
- Experience maintaining or scaling blockchain infra (EVM nodes, RPC endpoints, APIs).
- Understanding of smart contract CI/CD, token lifecycle (ICO, vesting, etc.), and wallet integrations.
- Familiarity with AI DevOps tools, or interest in building LLM-enhanced internal tooling.
- Strong grip on security best practices, key management, and secrets infrastructure (Vault, SOPS, AWS KMS).
Bonus Points :
- You've built or run infra for a token launch, DEX, or high-TPS crypto wallet.
- You've deployed or automated a blockchain node network at scale.
- You've used AI/LLMs to write ops scripts, manage logs, or analyze incidents.
- You've worked with systems handling real-money movement with tight uptime and security requirements.
Why Join CoinCROWD :
- Equity-first model: Build real value as we scale.
- Be the architect of infrastructure that supports millions of real-world crypto transactions.
- Build AI-powered ops that scale without a 24/7 pager culture.
- Work remotely with passionate people who ship fast and iterate faster.
- Be part of one of the most ambitious crossovers of AI + Web3 in 2025.
Job Title: QA Engineer
Location- Bangalore / Hybrid
Desired skills- Java / Node.js, Docker, Kubernetes, Nomad, Grafana, Kibana
Exp range- 5-8 yrs
Job Description:
- Develop automated tests and test frameworks to enhance software quality.
- Conduct functional and non-functional testing, including performance, security, resiliency, and chaos testing.
- Optimize and improve CI/CD pipelines for faster and more efficient deployments.
- Identify and implement quality processes to enhance engineering efficiency.
- Automate various aspects of the software development lifecycle.
- Review code and designs, providing constructive feedback for improvement.
- Continuously upskill and mentor team members to drive growth and excellence.
- Strong automation testing expertise for REST APIs, with coding proficiency in Node.js or Java (see the sketch after this list).
- Prior software development experience before transitioning into testing and automation is a plus.
- Hands-on experience with testing frameworks and CI/CD tools like Jenkins or GitHub Actions.
- Experience working with Docker, Kubernetes, and Nomad for containerized environments.
- Familiarity with cloud platforms, particularly AWS.
- Experience with observability and monitoring tools like Grafana and Kibana.
- Ensure comprehensive test coverage through automation and manual testing where necessary.
- Research automation tools and infrastructure for improvement.
- Review test plans and mentor junior QAs.
- Excellent problem-solving skills and attention to detail.
- Strong communication and interpersonal skills.
- Define E2E testing requirements (scenarios, conditions, testing types, metrics for monitoring)
- Execute E2E manual tests
- Automate E2E regression
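For reference, an automated REST API check of the kind described above might look like the pytest sketch below (shown in Python for brevity; the posting prefers Node.js or Java, and the structure carries over). The URL and response fields are placeholders.

```python
# Illustrative API test (pytest + requests). BASE_URL and the response fields
# are placeholders, not a real service contract.
import requests

BASE_URL = "http://localhost:8080"

def test_get_user_returns_200_and_expected_fields():
    resp = requests.get(f"{BASE_URL}/api/users/1", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    assert "id" in body and "email" in body
```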
Role: Sr. Java Developer
Experience: 6+ Years
Location: Bangalore (Whitefield)
Work Mode: Hybrid (2-3 days WFO)
Shift Timing: Regular Morning Shift
About the Role:
We are looking for a seasoned Java Developer with 6+ years of experience to join our growing engineering team. The ideal candidate should have strong expertise in Java, Spring Boot, Microservices, and cloud-based deployment using AWS or DevOps tools. This is a hybrid role based out of our Whitefield, Bangalore location.
Key Responsibilities:
- Participate in agile development processes and scrum ceremonies.
- Translate business requirements into scalable and maintainable technical solutions.
- Design and develop applications using Java, Spring Boot, and Microservices architecture.
- Ensure robust and reliable code through full-scale unit testing and TDD/BDD practices.
- Contribute to CI/CD pipeline setup and cloud deployments.
- Work independently and as an individual contributor on complex features.
- Troubleshoot production issues and optimize application performance.
Mandatory Skills:
- Strong Core Java and Spring Boot expertise.
- Proficiency in AWS or DevOps (Docker & Kubernetes).
- Experience with relational and/or non-relational databases (SQL, NoSQL).
- Sound understanding of Microservices architecture and RESTful APIs.
- Containerization experience using Docker and orchestration via Kubernetes.
- Familiarity with Linux-based development environments.
- Exposure to modern SDLC tools – Maven, Git, Jenkins, etc.
- Good understanding of CI/CD pipelines and cloud-based deployment.
Soft Skills:
- Self-driven, proactive, and an individual contributor.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal abilities.
- Able to plan, prioritize, and manage tasks independently.
Nice-to-Have Skills:
- Exposure to frontend technologies like Angular, JavaScript, HTML5, and CSS.
Looking for fresher developers.
Responsibilities:
- Implement integrations requested by customers
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Requirements and skills:
Experience in a DevOps Engineer or similar software engineering role
Good knowledge of Terraform and Kubernetes
Working knowledge of AWS and Google Cloud
You can directly contact me on nine three one six one two zero one three two

Job Title : Senior Consultant (Java / NodeJS + Temporal)
Experience : 5 to 12 Years
Location : Bengaluru, Chennai, Hyderabad, Pune, Mumbai, Gurugram, Coimbatore
Work Mode : Remote (Must be open to travel for occasional team meetups)
Notice Period : Immediate Joiners or Serving Notice
Interview Process :
- R1 : Tech Interview (60 mins)
- R2 : Technical Interview
- R3 : (Optional) Interview with Client
Job Summary :
We are seeking a Senior Backend Consultant with strong hands-on expertise in Temporal (BPM/Workflow Engine) and either Node.js or Java.
The ideal candidate will have experience in designing and developing microservices and process-driven applications, as well as orchestrating complex workflows using Temporal.io.
You will work on high-scale systems, collaborating closely with cross-functional teams.
Mandatory Skills :
Temporal.io, Node.js (or Java), React.js, Keycloak IAM, PostgreSQL, Terraform, Kubernetes, Azure, Jest, OpenAPI
Key Responsibilities :
- Design and implement scalable backend services using Node.js or Java.
- Build and manage complex workflow orchestrations using Temporal.io (see the sketch after this list).
- Integrate with IAM solutions like Keycloak for role-based access control.
- Work with React (v17+), TypeScript, and component-driven frontend design.
- Use PostgreSQL for structured data persistence and optimized queries.
- Manage infrastructure using Terraform and orchestrate via Kubernetes.
- Leverage Azure Services like Blob Storage, API Gateway, and AKS.
- Write and maintain API documentation using Swagger/Postman/Insomnia.
- Conduct unit and integration testing using Jest.
- Participate in code reviews and contribute to architectural decisions.
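For orientation, here is a minimal Temporal workflow sketch, shown with Temporal's Python SDK for brevity; the posting's stack uses Node.js or Java, where the same workflow/activity shape applies. The workflow, activity, and argument names are illustrative.

```python
# Minimal Temporal sketch: a workflow that calls one retryable activity.
# Names are illustrative; the worker/client registration (omitted) runs this
# against a Temporal cluster.
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def notify_customer(order_id: str) -> str:
    # Side-effecting work (API calls, DB writes) belongs in activities.
    return f"notified customer for {order_id}"

@workflow.defn
class OrderWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> str:
        # Temporal persists and retries this step across worker restarts.
        return await workflow.execute_activity(
            notify_customer,
            order_id,
            start_to_close_timeout=timedelta(seconds=30),
        )
```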
Must-Have Skills :
- Temporal.io – BPMN modeling, external task workers, Operate, Tasklist
- Node.js + TypeScript (preferred) or strong Java experience
- React.js (v17+) and component-driven UI development
- Keycloak IAM, PostgreSQL, and modern API design
- Infrastructure automation with Terraform, Kubernetes
- Experience in using GitFlow, OpenAPI, Jest for testing
Nice-to-Have Skills :
- Blockchain integration experience for secure KYC/identity flows
- Custom Camunda Connectors or exporter plugin development
- CI/CD experience using Azure DevOps or GitHub Actions
- Identity-based task completion authorization enforcement
About the Role
We are looking for a skilled Backend Engineer with strong experience in building scalable microservices, integrating with distributed data systems, and deploying web APIs that serve UI applications in the cloud. You’ll work on high-performance systems involving Kafka, DynamoDB, Redis, and other modern backend technologies.
Responsibilities
- Design, develop, and deploy backend microservices and APIs that power UI applications.
- Implement event-driven architectures using Apache Kafka or similar messaging platforms.
- Build scalable and highly available systems using NoSQL databases (e.g., DynamoDB, MongoDB).
- Optimize backend systems using caching layers like Redis to enhance performance.
- Ensure seamless deployment and operation of services in cloud environments (AWS, GCP, or Azure).
- Write clean, maintainable, and well-tested code; contribute to code reviews and architecture discussions.
- Collaborate closely with frontend, DevOps, and product teams to deliver integrated solutions.
- Monitor and troubleshoot production issues and participate in on-call rotations as needed.
Required Qualifications
- 3–7 years of professional experience in backend development.
- Strong programming skills in one or more languages: Java, Python, Go, Node.js.
- Hands-on experience with microservices architecture and API design (REST/gRPC).
- Practical experience with Kafka, RabbitMQ, or other event streaming/message queue systems.
- Solid knowledge of NoSQL databases, especially DynamoDB or equivalents.
- Experience using Redis or Memcached for caching or pub/sub mechanisms (see the sketch after this list).
- Proficiency with cloud platforms (preferably AWS – e.g., Lambda, ECS, EKS, API Gateway).
- Familiarity with Docker, Kubernetes, and CI/CD pipelines.
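As a small illustration of the caching pattern mentioned above, here is a cache-aside sketch with redis-py; the backing lookup is a placeholder for a real DynamoDB or SQL query.

```python
# Illustrative cache-aside pattern with Redis (redis-py). The database lookup
# is a placeholder for a real DynamoDB/SQL query.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_from_database(user_id: str) -> dict:
    return {"id": user_id, "name": "example"}  # placeholder query

def get_user(user_id: str) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached:
        return json.loads(cached)
    user = fetch_from_database(user_id)
    r.setex(key, 300, json.dumps(user))  # cache for 5 minutes
    return user
```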
Job Description
What does a successful Senior DevOps Engineer do at Fiserv?
This role focuses on contributing to and enhancing our DevOps environment within the Issuer Solutions group, where our cross-functional Scrum teams deliver solutions built on cutting-edge mobile technology and products. You will be expected to provide support across the wider business unit, leading DevOps practices and initiatives.
What will you do:
• Build, manage, and deploy CI/CD pipelines.
• Work with Helm charts, Rundeck, and OpenShift.
• Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines.
• Implement development, testing, and automation tools, and IT infrastructure.
• Optimize and automate release/development cycles and processes.
• Be part of and help promote our DevOps culture.
• Identify and implement continuous improvements to the development practice
What you must have:
• 3+ years of experience in DevOps with hands-on experience in the following:
- Writing automation scripts for deployments and housekeeping using shell scripts (Bash) and Ansible playbooks
- Building Docker images and running/managing Docker instances
- Building Jenkins pipelines using Groovy scripts
- Working knowledge of Kubernetes, including application deployments, managing application configurations, and persistent volumes
• Good understanding of infrastructure as code
• Ability to write and update documentation
• Demonstrates a logical, process-oriented approach to problems and troubleshooting
• Ability to collaborate with multiple development teams
What you are preferred to have:
• 8+ years of development experience
• Jenkins administration experience
• Hands-on experience in building and deploying helm charts
Process Skills:
• Should have worked on Agile projects
Behavioral Skills :
• Good Communication skills
Skills
PRIMARY COMPETENCY : Cloud Infra
PRIMARY SKILL : DevOps
PRIMARY SKILL PERCENTAGE : 100
About the Role
We are looking for a DevOps Engineer to build and maintain scalable, secure, and high-performance infrastructure for our next-generation healthcare platform. You will be responsible for automation, CI/CD pipelines, cloud infrastructure, and system reliability, ensuring seamless deployment and operations.
Responsibilities
1. Infrastructure & Cloud Management
• Design, deploy, and manage cloud-based infrastructure (AWS, Azure, GCP)
• Implement containerization (Docker, Kubernetes) and microservices orchestration
• Optimize infrastructure cost, scalability, and performance
2. CI/CD & Automation
• Build and maintain CI/CD pipelines for automated deployments
• Automate infrastructure provisioning using Terraform, Ansible, or CloudFormation
• Implement GitOps practices for streamlined deployments
3. Security & Compliance
• Ensure adherence to ABDM, HIPAA, GDPR, and healthcare security standards
• Implement role-based access controls, encryption, and network security best practices
• Conduct Vulnerability Assessment & Penetration Testing (VAPT) and compliance audits
4. Monitoring & Incident Management
• Set up monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, Datadog, etc.)
• Optimize system reliability and automate incident response mechanisms
• Improve MTTR (Mean Time to Recovery) and system uptime KPIs
5. Collaboration & Process Improvement
• Work closely with development and QA teams to streamline deployments
• Improve DevSecOps practices and cloud security policies
• Participate in architecture discussions and performance tuning
Required Skills & Qualifications
• 2+ years of experience in DevOps, cloud infrastructure, and automation
• Hands-on experience with AWS and Kubernetes
• Proficiency in Docker and CI/CD tools (Jenkins, GitHub Actions, ArgoCD, etc.)
• Experience with Terraform, Ansible, or CloudFormation
• Strong knowledge of Linux, shell scripting, and networking
• Experience with cloud security, monitoring, and logging solutions
Nice to Have
• Experience in healthcare or other regulated industries
• Familiarity with serverless architectures and AI-driven infrastructure automation
• Knowledge of big data pipelines and analytics workflows
What You'll Gain
• Opportunity to build and scale a mission-critical healthcare infrastructure
• Work in a fast-paced startup environment with cutting-edge technologies
• Growth potential into Lead DevOps Engineer or Cloud Architect roles
Job Role : Azure DevSecOps Engineer (Security-Focused)
Experience : 12 to 18 Years
Location : Preferably Delhi NCR (Hybrid); Remote possible with 1–2 office visits per quarter (Gurgaon)
Joining Timeline : Max 45 days (Buyout option available)
Work Mode : Full-time | 5 Days Working
About the Role :
We are looking for a highly experienced Azure DevSecOps Engineer with a strong focus on cloud security practices.
This role is 60–70% security-driven, involving threat modeling, secure cloud architecture, and infrastructure security on Azure using Terraform.
Key Responsibilities :
- Architect and maintain secure, scalable Azure cloud infrastructure using Terraform.
- Implement security best practices : IAM, threat modeling, network security, data protection, and compliance (e.g., GDPR).
- Build CI/CD pipelines and automate deployments using Azure DevOps, Jenkins, Prometheus.
- Monitor, analyze, and proactively improve security posture.
- Collaborate with global teams to ensure secure design, development, and operations.
- Stay updated on cloud security trends and lead mitigation efforts.
Mandatory Skills :
Azure, Terraform, DevSecOps, Cloud Security, Threat Modelling, IAM, CI/CD (Azure DevOps), Docker, Kubernetes, Prometheus, Infrastructure as Code (IaC), Compliance Frameworks (GDPR)
Preferred Certifications :
Certified DevSecOps Professional (CDP), Microsoft Azure Certifications

Job Role : DevOps Engineer (Python + DevOps)
Experience : 4 to 10 Years
Location : Hyderabad
Work Mode : Hybrid
Mandatory Skills : Python, Ansible, Docker, Kubernetes, CI/CD, Cloud (AWS/Azure/GCP)
Job Description :
We are looking for a skilled DevOps Engineer with expertise in Python, Ansible, Docker, and Kubernetes.
The ideal candidate will have hands-on experience automating deployments, managing containerized applications, and ensuring infrastructure reliability.
Key Responsibilities :
- Design and manage containerization and orchestration using Docker & Kubernetes.
- Automate deployments and infrastructure tasks using Ansible & Python (see the sketch after this list).
- Build and maintain CI/CD pipelines for streamlined software delivery.
- Collaborate with development teams to integrate DevOps best practices.
- Monitor, troubleshoot, and optimize system performance.
- Enforce security best practices in containerized environments.
- Provide operational support and contribute to continuous improvements.
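To illustrate the Python-plus-Kubernetes side of the role, here is a small sketch using the official Kubernetes Python client to check whether a deployment has finished rolling out; the deployment and namespace names are placeholders.

```python
# Illustrative rollout check with the official Kubernetes Python client.
# Deployment and namespace names are placeholders.
from kubernetes import client, config

def deployment_ready(name: str, namespace: str = "default") -> bool:
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment(name=name, namespace=namespace)
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    return ready == desired

if __name__ == "__main__":
    print("ready" if deployment_ready("web-api") else "not ready")
```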
Required Qualifications :
- Bachelor’s in Computer Science/IT or related field.
- 4+ years of DevOps experience.
- Proficiency in Python and Ansible.
- Expertise in Docker and Kubernetes.
- Hands-on experience with CI/CD tools and pipelines.
- Experience with at least one cloud provider (AWS, Azure, or GCP).
- Strong analytical, communication, and collaboration skills.
Preferred Qualifications :
- Experience with Infrastructure-as-Code tools like Terraform.
- Familiarity with monitoring/logging tools like Prometheus, Grafana, or ELK.
- Understanding of Agile/Scrum practices.

You will:
- Collaborate with the I-Stem Voice AI team and CEO to design, build and ship new agent capabilities
- Develop, test and refine end-to-end voice agent models (ASR, NLU, dialog management, TTS)
- Stress-test agents in noisy, real-world scenarios and iterate for improved robustness and low latency
- Research and prototype cutting-edge techniques (e.g. robust speech recognition, adaptive language understanding)
- Partner with backend and frontend engineers to seamlessly integrate AI components into live voice products
- Monitor agent performance in production, analyze failure cases, and drive continuous improvement
- Occasionally demo our Voice AI solutions at industry events and user forums
You are:
- An AI/Software Engineer with hands-on experience in speech-centric ML (ASR, NLU or TTS)
- Skilled in building and tuning transformer-based speech models and handling real-time audio pipelines
- Obsessed with reliability: you design experiments to push agents to their limits and root-cause every error
- A clear thinker who deconstructs complex voice interactions from first principles
- Passionate about making voice technology inclusive and accessible for diverse users
- Comfortable moving fast in a small team, yet dogged about code quality, testing and reproducibility
We are seeking an experienced and passionate Cloud and DevOps Trainer to join our training and development team. The trainer will be responsible for delivering high-quality, hands-on training in Cloud technologies (such as AWS, Azure, or GCP) and DevOps tools and practices to students or working professionals.


Skill Sets:
- Expertise in ML/DL, model lifecycle management, and MLOps (MLflow, Kubeflow)
- Proficiency in Python, TensorFlow, PyTorch, Scikit-learn, and Hugging Face models
- Strong experience in NLP, fine-tuning transformer models, and dataset preparation
- Hands-on with cloud platforms (AWS, GCP, Azure) and scalable ML deployment (Sagemaker, Vertex AI)
- Experience in containerization (Docker, Kubernetes) and CI/CD pipelines
- Knowledge of distributed computing (Spark, Ray), vector databases (FAISS, Milvus), and model optimization (quantization, pruning)
- Familiarity with model evaluation, hyperparameter tuning, and model monitoring for drift detection
Roles and Responsibilities:
- Design and implement end-to-end ML pipelines from data ingestion to production
- Develop, fine-tune, and optimize ML models, ensuring high performance and scalability
- Compare and evaluate models using key metrics (F1-score, AUC-ROC, BLEU, etc.); see the sketch after this list
- Automate model retraining, monitoring, and drift detection
- Collaborate with engineering teams for seamless ML integration
- Mentor junior team members and enforce best practices
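For example, comparing candidate models on a validation set can be as simple as the scikit-learn sketch below; the labels and scores are toy values standing in for real validation data.

```python
# Illustrative model evaluation with scikit-learn; y_true/y_score are toy data.
from sklearn.metrics import f1_score, roc_auc_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_score = [0.1, 0.8, 0.7, 0.3, 0.9, 0.4, 0.6, 0.2]  # predicted probabilities
y_pred = [1 if s >= 0.5 else 0 for s in y_score]     # thresholded labels

print("F1:", f1_score(y_true, y_pred))
print("AUC-ROC:", roc_auc_score(y_true, y_score))
```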
What you’ll do
- Tame data → pull, clean, and shape structured & unstructured data.
- Orchestrate pipelines → Airflow / Step Functions / ADF… your call.
- Ship models → build, tune, and push to prod on SageMaker, Azure ML, or Vertex AI.
- Scale → Spark / Databricks for the heavy lifting.
- Automate everything → Docker, Kubernetes, CI/CD, MLflow, Seldon, Kubeflow (see the sketch after this list).
- Pair up → work with engineers, architects, and business folks to solve real problems, fast.
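As a taste of the tooling, logging a run to MLflow looks roughly like the sketch below; the parameter and metric values are toy numbers.

```python
# Illustrative MLflow experiment tracking; parameters and metrics are toy values.
import mlflow

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model", "logistic_regression")
    mlflow.log_param("C", 1.0)
    mlflow.log_metric("f1", 0.83)
    mlflow.log_metric("auc_roc", 0.91)
    # mlflow.sklearn.log_model(model, "model") would also version the artifact
```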
What you bring
- 3+ yrs hands-on MLOps (4-5 yrs total software experience).
- Proven chops on one hyperscaler (AWS, Azure, or GCP).
- Confidence with Databricks / Spark, Python, SQL, TensorFlow / PyTorch / Scikit-learn.
- You debug Kubernetes in your sleep and treat Dockerfiles like breathing.
- You prototype with open-source first, choose the right tool, then make it scale.
- Sharp mind, low ego, bias for action.
Nice-to-haves
- Sagemaker, Azure ML, or Vertex AI in production.
- Love for clean code, clear docs, and crisp PRs.
DevOps CI/CD
- Managing large-scale AWS deployments using Infrastructure as Code (IaC) and Kubernetes developer tools
- Managing build/test/deployment of very large-scale systems, bridging between developers and live stacks
- Actively troubleshoot issues that arise during development and production
- Owning, learning, and deploying software in support of customer-facing applications
- Help establish DevOps best practices
- Actively work to reduce system costs
- Work with open-source technologies, helping to ensure their robustness and security
- Actively work with CI/CD, Git, and other components of the build and deployment system
- Leading skills with the AWS cloud stack
- Proven implementation experience with Infrastructure as Code (Terraform, Terragrunt, Flux, Helm charts) at scale
- Proven experience with Kubernetes at scale
- Proven experience with cloud management tools beyond the AWS console (k9s, Lens)
- Strong communicator who people want to work with – must be thought of as the ultimate collaborator
- Solid team player
- Strong experience with Linux-based infrastructures and AWS
- Strong experience with databases such as MySQL, Redshift, Elasticsearch, Mongo, and others
- Strong knowledge of JavaScript and Git
- Agile practitioner

Job Title : Python Data Engineer
Experience : 4+ Years
Location : Bangalore / Hyderabad (On-site)
Job Summary :
We are seeking a skilled Python Data Engineer to work on cloud-native data platforms and backend services.
The role involves building scalable APIs, working with diverse data systems, and deploying containerized services using modern cloud infrastructure.
Mandatory Skills : Python, AWS, RESTful APIs, Microservices, SQL/PostgreSQL/NoSQL, Docker, Kubernetes, CI/CD (Jenkins/GitLab CI/AWS CodePipeline)
Key Responsibilities :
- Design, develop, and maintain backend systems using Python.
- Build and manage RESTful APIs and microservices architectures.
- Work extensively with AWS cloud services for deployment and data storage (see the sketch after this list).
- Implement and manage SQL, PostgreSQL, and NoSQL databases.
- Containerize applications using Docker and orchestrate with Kubernetes.
- Set up and maintain CI/CD pipelines using Jenkins, GitLab CI, or AWS CodePipeline.
- Collaborate with teams to ensure scalable and reliable software delivery.
- Troubleshoot and optimize application performance.
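For a concrete flavour, a small boto3 helper for writing and reading JSON records in S3 might look like the sketch below; the bucket and key names are placeholders.

```python
# Illustrative S3 access with boto3; bucket and key names are placeholders.
import json
import boto3

s3 = boto3.client("s3")

def write_record(bucket: str, key: str, record: dict) -> None:
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(record).encode("utf-8"))

def read_record(bucket: str, key: str) -> dict:
    obj = s3.get_object(Bucket=bucket, Key=key)
    return json.loads(obj["Body"].read())

if __name__ == "__main__":
    write_record("my-data-bucket", "events/demo.json", {"id": 1, "status": "ok"})
    print(read_record("my-data-bucket", "events/demo.json"))
```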
Must-Have Skills :
- 4+ years of hands-on experience in Python backend development.
- Strong experience with AWS cloud infrastructure.
- Proficiency in building microservices and APIs.
- Good knowledge of relational and NoSQL databases.
- Experience with Docker and Kubernetes.
- Familiarity with CI/CD tools and DevOps processes.
- Strong problem-solving and collaboration skills.

A leader in telecom, fintech, and AI-led marketing automation.

We are looking for a talented MERN Developer with expertise in MongoDB/MySQL, Kubernetes, Python, ETL, Hadoop, and Spark. The ideal candidate will design, develop, and optimize scalable applications while ensuring efficient source code management and implementing Non-Functional Requirements (NFRs).
Key Responsibilities:
- Develop and maintain robust applications using MERN Stack (MongoDB, Express.js, React.js, Node.js).
- Design efficient database architectures (MongoDB/MySQL) for scalable data handling.
- Implement and manage Kubernetes-based deployment strategies for containerized applications.
- Ensure compliance with Non-Functional Requirements (NFRs), including source code management, development tools, and security best practices.
- Develop and integrate Python-based functionalities for data processing and automation.
- Work with ETL pipelines for smooth data transformations.
- Leverage Hadoop and Spark for processing and optimizing large-scale data operations (see the sketch after this list).
- Collaborate with solution architects, DevOps teams, and data engineers to enhance system performance.
- Conduct code reviews, troubleshooting, and performance optimization to ensure seamless application functionality.
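To illustrate the Spark side of the stack, a minimal PySpark ETL step might look like the sketch below; the input path, columns, and output location are placeholders.

```python
# Illustrative PySpark ETL step; paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-demo").getOrCreate()

# Extract raw events, aggregate per user per day, load as Parquet.
events = spark.read.json("s3a://raw-bucket/events/")
daily = (
    events
    .withColumn("day", F.to_date("timestamp"))
    .groupBy("user_id", "day")
    .agg(F.count("*").alias("event_count"))
)
daily.write.mode("overwrite").parquet("s3a://curated-bucket/daily_events/")
```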
Required Skills & Qualifications:
- Proficiency in MERN Stack (MongoDB, Express.js, React.js, Node.js).
- Strong understanding of database technologies (MongoDB/MySQL).
- Experience working with Kubernetes for container orchestration.
- Hands-on knowledge of Non-Functional Requirements (NFRs) in application development.
- Expertise in Python, ETL pipelines, and big data technologies (Hadoop, Spark).
- Strong problem-solving and debugging skills.
- Knowledge of microservices architecture and cloud computing frameworks.
Preferred Qualifications:
- Certifications in cloud computing, Kubernetes, or database management.
- Experience in DevOps, CI/CD automation, and infrastructure management.
- Understanding of security best practices in application development.
Minimum requirements
5+ years of industry software engineering experience (excluding internships and co-ops)
Strong coding skills in any programming language (we understand new languages can be learned on the job so our interview process is language agnostic)
Strong collaboration skills, can work across workstreams within your team and contribute to your peers’ success
Ability to thrive with a high level of autonomy and responsibility, with an entrepreneurial mindset
Interest in working as a generalist across varying technologies and stacks to solve problems and delight both internal and external users
Preferred Qualifications
Experience with large-scale financial tracking systems
Good understanding and practical knowledge of cloud-based services (e.g., gRPC, GraphQL, Docker/Kubernetes, cloud services such as AWS, etc.)
Job Summary:
We are seeking a skilled DevOps Engineer to design, implement, and manage CI/CD pipelines, containerized environments, and infrastructure automation. The ideal candidate should have hands-on experience with ArgoCD, Kubernetes, and Docker, along with a deep understanding of cloud platforms and deployment strategies.
Key Responsibilities:
- CI/CD Implementation: Develop, maintain, and optimize CI/CD pipelines using ArgoCD, GitOps, and other automation tools.
- Container Orchestration: Deploy, manage, and troubleshoot containerized applications using Kubernetes and Docker.
- Infrastructure as Code (IaC): Automate infrastructure provisioning with Terraform, Helm, or Ansible.
- Monitoring & Logging: Implement and maintain observability tools like Prometheus, Grafana, ELK, or Loki.
- Security & Compliance: Ensure best security practices in containerized and cloud-native environments.
- Cloud & Automation: Manage cloud infrastructure on AWS, Azure, or GCP with automated deployments.
- Collaboration: Work closely with development teams to optimize deployments and performance.
Required Skills & Qualifications:
- Experience: 5+ years in DevOps, Site Reliability Engineering (SRE), or Infrastructure Engineering.
- Tools & Tech: Strong knowledge of ArgoCD, Kubernetes, Docker, Helm, Terraform, and CI/CD pipelines.
- Cloud Platforms: Experience with AWS, GCP, or Azure.
- Programming & Scripting: Proficiency in Python, Bash, or Go.
- Version Control: Hands-on with Git and GitOps workflows.
- Networking & Security: Knowledge of ingress controllers, service mesh (Istio/Linkerd), and container security best practices.
Nice to Have:
- Experience with Kubernetes Operators, Kustomize, or FluxCD.
- Exposure to serverless architectures and multi-cloud deployments.
- Certifications in CKA, AWS DevOps, or similar.
Job Title : Java Backend Developer
Experience : 3 to 6 Years
Locations : Bangalore / Gurgaon (Hybrid – 3 Days Work From Office)
Shift Timings : 11:00 AM – 8:00 PM IST
Notice Period : Immediate to 15 Days Only
Job Description :
We are looking for experienced Java Backend Developers with strong expertise in building scalable microservices-based architectures. The ideal candidate should have hands-on experience with Spring Boot, containerized deployments, and DevOps tools.
✅ Must-Have Skills :
- Java – Strong programming skills in core Java.
- Spring Boot (2.x / 3.x) – Deep understanding of microservices architecture and patterns.
- Microservices – Design and implementation experience.
- Kubernetes – Experience deploying and managing microservices.
- Jenkins & Maven – Build and CI/CD pipeline experience.
- PostgreSQL – Experience with relational database management.
✨ Good-to-Have Skills :
- Git – Source control management
- CI/CD Pipeline Tools – End-to-end pipeline automation
- Cloud & DevOps Knowledge – Experience with cloud-based deployments
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls, and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams, including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 3-6 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
Job description
● Design effective, scalable architectures on top of cloud technologies such as AWS and Kubernetes
● Mentor other software engineers, including actively participating in peer code and architecture review
● Participate in all parts of the development lifecycle from design to coding to deployment to maintenance and operations
● Kickstart new ideas, build proof of concepts and jumpstart newly funded projects
● Demonstrate ability to work independently with minimal supervision
● Embed with other engineering teams on challenging initiatives and time sensitive projects
● Collaborate with other engineering teams on challenging initiatives and time sensitive projects
Education and Experience
● BS degree in Computer Science or related technical field or equivalent practical experience.
● 9+ years of professional software development experience focused on payments and/or billing and customer accounts. Worked with worldwide payments, billing systems, PCI Compliance & payment gateways.
Technical and Functional
● Extensive knowledge of microservice development using Spring, Spring Boot, and Java, built on top of Kubernetes and public cloud services such as AWS (Lambda, S3).
● Experience with relational databases (MySQL, DB2 or Oracle) and NoSQL databases
● Experience with unit testing and test driven development
Technologies at Constant Contact
Working on the Constant Contact platform provides our engineers with an opportunity to produce high-impact work inside of our multifaceted platform (Email, Social, SMS, E-Commerce, CRM, Customer Data Platform, ML-Based Recommendations & Insights, and more).
As a member of our team, you'll be utilizing the latest technologies and frameworks (React/SPA, JavaScript/TypeScript, Swift, Kotlin, GraphQL, etc) and deploying code to our cloud-first microservice infrastructure (declarative CI/CD, GitOps managed kubernetes) with regular opportunities to level up your skills.
● Past experience of working with and integrating payment gateways and processors, online payment methods, and billing systems.
● Familiar with integrating Stripe/Plaid/PayPal/Adyen/Cybersource or similar systems along with PCI compliance.
● International software development and payments experience is a plus.
● Knowledge of DevOps and CI/CD, and automated test and build tools (Jenkins & Gradle/Maven)
● Experience integrating with sales tax engines is a plus.
● Familiar with tools like Splunk and New Relic, or similar tools such as Datadog, Elastic ELK, and Amazon CloudWatch.
● Good to have - Experience with React, Backbone, Marionette, or other front-end frameworks.
Cultural
● Strong verbal and written communication skills.
● Flexible attitude and willingness to frequently move between different teams, software architectures and priorities.
● Desire to collaborate with our other product teams to think strategically about how to solve problems.
Our team
● We focus on cross-functional team collaboration where engineers, product managers, and designers all work together to solve customer problems and build exciting features.
● We love new ideas and are eager to see what your experiences can bring to help influence our technical and product vision.
● Collaborate/Overlap with the teams in Eastern Standard Time (EST), USA.
About PGAGI:
At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.
Position Overview:
PGAGI Consultancy Pvt. Ltd. is seeking a proactive and motivated DevOps Intern with around 3-6 months of hands-on experience to support our AI model deployment and infrastructure initiatives. This role is ideal for someone looking to deepen their expertise in DevOps practices tailored to AI/ML environments, including CI/CD automation, cloud infrastructure, containerization, and monitoring.
Key Responsibilities:
AI Model Deployment & Integration
- Assist in containerizing and deploying AI/ML models into production using Docker.
- Support integration of models into existing systems and APIs.
Infrastructure Management
- Help manage cloud and on-premise environments to ensure scalability and consistency.
- Work with Kubernetes for orchestration and environment scaling.
CI/CD Pipeline Automation
- Collaborate on building and maintaining automated CI/CD pipelines (e.g., GitHub Actions, Jenkins).
- Implement basic automated testing and rollback mechanisms.
Hosting & Web Environment Management
- Assist in managing hosting platforms, web servers, and CDN configurations.
- Support DNS, load balancer setups, and ensure high availability of web services.
Monitoring, Logging & Optimization
- Set up and maintain monitoring/logging tools like Prometheus and Grafana.
- Participate in troubleshooting and resolving performance bottlenecks.
Security & Compliance
- Apply basic DevSecOps practices including security scans and access control implementations.
- Follow security and compliance checklists under supervision.
Cost & Resource Management
- Monitor resource usage and suggest cost optimization strategies in cloud environments.
Documentation
- Maintain accurate documentation for deployment processes and incident responses.
Continuous Learning & Innovation
- Suggest improvements to workflows and tools.
- Stay updated with the latest DevOps and AI infrastructure trends.
Requirements:
- Around 6 months of experience in a DevOps or related technical role (internship or professional).
- Basic understanding of Docker, Kubernetes, and CI/CD tools like GitHub Actions or Jenkins.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and monitoring tools (e.g., Prometheus, Grafana).
- Exposure to scripting languages (e.g., Bash, Python) is a plus.
- Strong problem-solving skills and eagerness to learn.
- Good communication and documentation abilities.
Compensation
- Joining Bonus: INR 2,500 one-time bonus upon joining.
- Monthly Stipend: Base stipend of INR 8,000 per month, with the potential to increase up to INR 20,000 based on performance evaluations.
- Performance-Based Pay Scale: Eligibility for monthly performance-based bonuses, rewarding exceptional project contributions and teamwork.
- Additional Benefits: Access to professional development opportunities, including workshops, tech talks, and mentoring sessions.
Ready to kick-start your DevOps journey in a dynamic AI-driven environment? Apply now
#Devops #Docker #Kubernetes #DevOpsIntern
About the Role:
We are looking for a skilled AWS DevOps Engineer to join our Cloud Operations team in Bangalore. This hybrid role is ideal for someone with hands-on experience in AWS and a strong background in application migration from on-premises to cloud environments. You'll play a key role in driving cloud adoption, optimizing infrastructure, and ensuring seamless cloud operations.
Key Responsibilities:
- Manage and maintain AWS cloud infrastructure and services.
- Lead and support application migration projects from on-prem to cloud.
- Automate infrastructure provisioning using Infrastructure as Code (IaC) tools.
- Monitor cloud environments and optimize cost, performance, and reliability.
- Collaborate with development, operations, and security teams to implement DevOps best practices.
- Troubleshoot and resolve infrastructure and deployment issues.
Required Skills:
- 3–5 years of experience in AWS cloud environment.
- Proven experience with on-premises to cloud application migration.
- Strong understanding of AWS core services (EC2, VPC, S3, IAM, RDS, etc.).
- Solid scripting skills (Python, Bash, or similar).
Good to Have:
- Experience with Terraform for Infrastructure as Code.
- Familiarity with Kubernetes for container orchestration.
- Exposure to CI/CD tools like Jenkins, GitLab, or AWS CodePipeline.
Key Responsibilities:
- ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
- 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
- 🐧 Administer Linux servers and ensure their security and performance.
- 🚀 Deploy .NET applications (ASP.Net, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
- 🔗 Manage source code repositories using GitLab or GitHub.
- 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
- 🤝 Collaborate with development teams to support application deployments and maintenance.
- 🔒 Implement security best practices across cloud and server environments.
Required Skills:
- ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
- 🖥️ Strong understanding of Windows Server administration and IIS.
- 🐧 Proficiency in Linux server management.
- 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
- 🔗 Knowledge of version control systems such as GitLab or GitHub.
- 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
- 📝 Strong documentation and communication skills.
Preferred Skills:
- 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
- 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
- 🔒 Understanding of networking concepts, firewalls, and security best practices.

🔹 Job Title: Full Stack Java Developer
🔹 Experience: 4 to 6 Years
🔹 Location: Gurugram (Hybrid)
📝 Job Description:
Deqode is hiring a skilled Full Stack Java Developer to join our team of technology experts building scalable, enterprise-grade systems. We’re looking for passionate developers who thrive in fast-paced environments and love solving real-world challenges.
Key Responsibilities:
- Design, develop, and deploy scalable Java-based microservices using Spring Boot.
- Develop front-end components using any modern JavaScript framework.
- Build robust APIs and integrate third-party services.
- Work with Quarkus to enhance Java runtime performance (preferred).
- Implement containerized services using Kubernetes and manage deployments.
- Ensure clean, testable, and maintainable code in an Agile environment.
- Collaborate with cross-functional teams to define and deliver high-quality products.
Must-Have Skills:
- 4–6 years of hands-on experience in Java and Spring Boot
- Proven experience in building and deploying Microservices
- Experience with Kubernetes and Kafka
- Proficiency in any modern frontend framework (React, Angular, etc.)
- Exposure to Quarkus is a strong plus
- Strong understanding of API design and RESTful services
- Excellent problem-solving and communication skills