50+ Kubernetes Jobs in India
About Autonomize AI
Autonomize AI is on a mission to help organizations make sense of the world's data. We help organizations harness the full potential of their data to unlock business outcomes. Unstructured "dark" data contains nuggets of information that, when paired with human context, can unlock some of the most impactful insights for an organization, and our goal is to make that process effortless and accessible.
We are an ambitious team committed to human-machine collaboration. Our founders are serial entrepreneurs passionate about data and AI and have started and scaled several companies to successful exits. We are a global, remote company with expertise in building amazing data products, captivating human experiences, disrupting industries, being ridiculously funny, and of course scaling AI.
The Opportunity
We’re seeking a Senior DevOps Engineer to design, build, and secure our cloud infrastructure. You’ll play a key role in delivering scalable, highly secure systems — with a strong focus on Azure Cloud, Kubernetes, automation, observability, and cloud security best practices. Experience with Google Cloud is a plus.
What you'll do:
- Design, deploy, and maintain secure and scalable Kubernetes clusters in production.
- Develop and manage Helm charts for deploying applications securely.
- Implement GitOps workflows using ArgoCD, ensuring secure and auditable deployments.
- Set up and manage observability stacks, including Prometheus, Grafana, and Loki, for monitoring, alerting, and logging.
- Implement security best practices, including network policies, RBAC, pod security standards, and secrets management in Kubernetes.
- Automate infrastructure provisioning and security compliance using Terraform, Ansible, or Pulumi.
- Secure cloud infrastructure and enforce security policies in AWS, Azure, or GCP, focusing on IAM, encryption, VPC security, and firewall rules.
- Implement CI/CD pipelines with security scanning (SAST, DAST, container image scanning, and dependency management).
- Enhance system reliability, security, and performance through continuous monitoring, auditing, and automated remediation.
- Collaborate with development and security teams to ensure security and compliance in all DevOps processes.
- Respond to security incidents, conduct forensic analysis, and apply remediation measures.
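Several of the security items above (RBAC, least privilege, auditable deployments) lend themselves to automated auditing. A minimal sketch, assuming role manifests have already been parsed into dicts; the roles below are invented examples, not tied to any real cluster:

```python
# Sketch: flag overly permissive Kubernetes RBAC rules.
# Each "role" is a dict mirroring the structure of a parsed
# Role/ClusterRole YAML (an assumption for illustration).

def find_wildcard_rules(role):
    """Return rules that grant '*' on verbs, resources, or apiGroups."""
    risky = []
    for rule in role.get("rules", []):
        if any("*" in rule.get(key, []) for key in ("verbs", "resources", "apiGroups")):
            risky.append(rule)
    return risky

admin_role = {
    "metadata": {"name": "debug-admin"},
    "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}],
}
scoped_role = {
    "metadata": {"name": "log-reader"},
    "rules": [{"apiGroups": [""], "resources": ["pods/log"], "verbs": ["get", "list"]}],
}

print(find_wildcard_rules(admin_role))   # flags the all-wildcard rule
print(find_wildcard_rules(scoped_role))  # nothing to flag
```

In practice the same check would run in CI against manifests rendered from Helm charts, so a wildcard grant fails the pipeline before it reaches the cluster.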
You’re a Fit If You Have
- 6+ years of experience in DevOps, Site Reliability Engineering (SRE), or Cloud Engineering roles.
- Strong expertise in Kubernetes security, including RBAC, network policies, pod security, and secrets management.
- Hands-on experience with Helm for secure and automated Kubernetes deployments.
- Proficiency in ArgoCD and GitOps methodologies for managing infrastructure as code securely.
- Experience with observability tools such as Prometheus, Grafana, and Loki.
- Expertise in one or more cloud providers (AWS, Azure, or GCP), including IAM, VPC security, and compliance.
- Strong knowledge of Terraform, Ansible, or Pulumi for infrastructure security automation.
- Experience securing CI/CD pipelines using SAST, DAST, and container security scanning (Trivy, Aqua, or Snyk).
- Proficiency in scripting languages like Bash, Python, or Go for security automation.
- Strong understanding of network security, firewall management, TLS, and certificate management.
- Experience with logging, security monitoring, SIEM solutions, and automated alerting.
Bonus Points
- Experience with Service Mesh security (Istio, Linkerd, or Consul).
- Hands-on experience with Zero Trust Security models and policy-as-code frameworks (OPA/Gatekeeper).
- Knowledge of container runtime security using tools like Falco or Sysdig.
- Familiarity with SOC 2, HIPAA, or other compliance frameworks.
- Experience with incident response, forensic analysis, and security auditing.
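The policy-as-code bonus item (OPA/Gatekeeper) treats admission rules as ordinary code. A rough, stdlib-only Python analogue of what a Rego policy evaluates; the pod spec below is an invented example:

```python
# Each policy inspects a pod spec (a dict shaped like a parsed
# manifest, assumed here) and returns a violation message or None.

def deny_privileged(pod):
    for c in pod["spec"].get("containers", []):
        if c.get("securityContext", {}).get("privileged"):
            return f"container {c['name']!r} must not run privileged"
    return None

def require_limits(pod):
    for c in pod["spec"].get("containers", []):
        if "limits" not in c.get("resources", {}):
            return f"container {c['name']!r} must set resource limits"
    return None

POLICIES = [deny_privileged, require_limits]

def admit(pod):
    """Return (admitted, violation messages) for a pod spec."""
    violations = [msg for p in POLICIES if (msg := p(pod))]
    return (len(violations) == 0, violations)

pod = {"spec": {"containers": [
    {"name": "app", "securityContext": {"privileged": True}, "resources": {}},
]}}
ok, why = admit(pod)
print(ok, why)
```

Gatekeeper enforces the same idea at the API-server admission step, with the policies written in Rego instead of Python.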
Note: This role is on-site in Bangalore. Apply only if you are open to working on-site.
About Autonomize AI
Autonomize AI is revolutionizing healthcare by streamlining knowledge workflows with AI. We reduce administrative burdens and elevate outcomes, empowering professionals to focus on what truly matters — improving lives. We're growing fast and looking for bold, driven teammates to join us.
The Opportunity
The Senior Software Development Engineer is responsible for architecting, developing, and testing the Autonomize Health platform and applications, including the UI, middleware, backend data pipelines, APIs, and cloud operations architecture.
You’re a Fit If You Have
- 7+ years of experience building web applications from scratch.
- Experience designing/architecting scalable and resilient web applications.
- In-depth knowledge of JavaScript, TypeScript, React.js, and Node.js.
- Sound exposure to Python and/or JavaScript.
- Sound exposure to microservices architecture.
- Experience with frameworks such as Django, Flask, or similar.
- Experience with at least one public cloud offering (AWS, Azure, or GCP).
- Experience integrating and deploying Machine Learning/Deep Learning models.
- Experience with containerization technologies such as Docker and Kubernetes.
- Familiarity with Agile/rapid development processes.
- Strong interpersonal communication and client interaction skills.
Bonus Points:
- Experience leading a team of developers, driving quality and deliverables through direction, mentoring and code reviews
- Experience working with stakeholders to translate business needs into actionable development requirements
- Familiarity with visualization libraries such as D3.js
- Familiarity with AI/ML, Data Analytics frameworks
- Familiarity with various databases such as Postgres, Elasticsearch (ELK), or graph databases (Neo4j, TigerGraph, etc.)
- Exposure to OpenAI APIs.
- Experience with Kafka, ActiveMQ, and/or Redis.
- Owner mentality: self-directed, needs minimal supervision, a hands-on leader who drives quality in usability and experience.
- Always experimenting rather than hypothesizing (learn-it-all vs. know-it-all).
- Ability to design, architect and quickly complete projects with minimal supervision and direction
- Work as a team while owning key product deliverables
- You are passionate, unafraid & loyal to the team & mission
- You love to learn & win together
- You communicate well through voice, writing, chat or video, and work well with a remote/global team
What we offer:
- Influence & Impact: Lead and shape the future of healthcare AI implementations
- Outsized Opportunity: Join at the ground floor of a rapidly scaling, VC-backed startup
- Ownership, Autonomy & Mastery: Full-stack ownership of customer programs, with freedom to innovate
- Learning & Growth: Constant exposure to founders, customers, and new technologies—plus a professional development budget
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or other legally protected statuses.
About JobTwine
JobTwine is an AI-powered platform offering Interview as a Service, helping companies hire 50% faster while doubling the quality of hire. AI Interviews, Human Decisions, Zero Compromises. We combine AI with human expertise to discover, assess, and hire top talent. JobTwine automates scheduling, uses an AI Copilot to guide human interviewers for consistency, and generates structured, high-quality automated feedback.
Role Overview
We are looking for a Senior DevOps Engineer with 4–5 years of experience, a product-based mindset, and the ability to thrive in a startup environment.
Key Skills & Requirements
- 4–5 years of hands-on DevOps experience
- Experience in product-based companies and startups
- Strong expertise in CI/CD pipelines
- Hands-on experience with AWS / GCP / Azure
- Experience with Docker & Kubernetes
- Strong knowledge of Linux and Shell scripting
- Infrastructure as Code: Terraform / CloudFormation
- Monitoring & logging: Prometheus, Grafana, ELK stack
- Experience in scalability, reliability and automation
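For the Prometheus/Grafana requirement above, it helps to know how a dashboard turns monotonically increasing counters into a rate. A minimal sketch of that calculation, with made-up sample values:

```python
# Prometheus-style counters only ever increase; a per-second rate is
# derived from two samples of the same counter.

def per_second_rate(sample_a, sample_b):
    """Each sample is (unix_timestamp, counter_value)."""
    (t0, v0), (t1, v1) = sample_a, sample_b
    if v1 < v0:          # counter reset, e.g. the process restarted
        v0 = 0
    return (v1 - v0) / (t1 - t0)

# 1200 requests counted over 60 seconds -> 20 req/s
print(per_second_rate((1000, 4800), (1060, 6000)))
```

PromQL's `rate()` does the same thing over a sliding window, plus extrapolation and reset handling across many samples.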
What You Will Do
- Work closely with Sandip, CTO of JobTwine, on Gen AI DevOps initiatives
- Build, optimize, and scale infrastructure supporting AI-driven products
- Ensure high availability, security and performance of production systems
- Collaborate with engineering teams to improve deployment and release processes
Why Join JobTwine?
- Direct exposure to leadership and real product decision-making
- Steep learning curve with high ownership and accountability
- Opportunity to build and scale a core B2B SaaS product
Job Description
Key Responsibilities
- API & Service Development:
- Build RESTful and GraphQL APIs for e-commerce, order management, inventory, pricing, and promotions.
- Database Management:
- Design efficient schemas and optimize performance across SQL and NoSQL data stores.
- Integration Development:
- Implement and maintain integrations with ERP (SAP B1, ERPNext), CRM, logistics, and third-party systems.
- System Performance & Reliability:
- Write scalable, secure, and high-performance code to support real-time retail operations.
- Collaboration:
- Work closely with frontend, DevOps, and product teams to ship new features end-to-end.
- Testing & Deployment:
- Contribute to CI/CD pipelines, automated testing, and observability improvements.
- Continuous Improvement:
- Participate in architecture discussions and propose improvements to scalability and code quality.
Requirements
Required Skills & Experience
- 3–5 years of hands-on backend development experience in Node.js, Python, or Java.
- Strong understanding of microservices, REST APIs, and event-driven architectures.
- Experience with databases such as MySQL/PostgreSQL (SQL) and MongoDB/Redis (NoSQL).
- Hands-on experience with AWS / GCP and containerization (Docker, Kubernetes).
- Familiarity with Git, CI/CD, and code review workflows.
- Good understanding of API security, data protection, and authentication frameworks.
- Strong problem-solving skills and attention to detail.
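The microservices and event-driven architecture requirements above can be illustrated with a tiny in-process event bus; it mirrors the publish/subscribe shape that Kafka or RabbitMQ provide across services (the topic and field names here are invented):

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process pub/sub: handlers register per topic."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        # Every subscriber sees the event; publishers know nothing
        # about who consumes it — the core of event-driven design.
        for handler in self.handlers[topic]:
            handler(event)

bus = EventBus()
audit_log = []
bus.subscribe("order.created", lambda e: audit_log.append(e["order_id"]))
bus.subscribe("order.created", lambda e: print("reserve stock for", e["order_id"]))
bus.publish("order.created", {"order_id": "ord-42", "total": 1999})
```

In a real deployment each subscriber would be a separate service consuming from a broker topic, but the decoupling principle is the same.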
Nice to Have
- Experience in e-commerce or omnichannel retail platforms.
- Exposure to ERP / OMS / WMS integrations.
- Familiarity with GraphQL, Serverless, or Kafka / RabbitMQ.
- Understanding of multi-brand or multi-country architecture challenges.
Profile: DevOps Lead
Location: Gurugram
Experience: 8+ years
Notice Period: Immediate to 1 week
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell).
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate DevOps best practices, automation, and continuous improvement.
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Mumbai
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, we'd love to hear from you.
Hey there budding tech wizard! Are you ready to take on a new challenge?
As a Senior Software Developer 1 (Full Stack) at Techlyticaly, you'll be responsible for solving problems and flexing your tech muscles to build amazing stuff, mentoring and guiding others. You'll work under the guidance of mentors and be responsible for developing high-quality, maintainable code modules that are extensible and meet the technical guidelines provided.
Responsibilities
We want you to show off your technical skills, but we also want you to be creative and think outside the box. Here are some of the ways you'll be flexing your tech muscles:
- Use your superpowers to solve complex technical problems, combining your excellent abstract reasoning ability with problem-solving skills.
- Efficient in at least one product or technology of strategic importance to the organisation, and a true tech ninja.
- Stay up-to-date with emerging trends in the field, so that you can keep bringing fresh ideas to the table.
- Implement robust and extensible code modules as per guidelines. We love all code that's functional (Don’t we?)
- Develop good quality, maintainable code modules without any defects, exhibiting attention to detail. Nothing should look sus!
- Manage assigned tasks well and schedule them appropriately for self and team, while providing visibility to the mentor and understanding the mentor's expectations of work. But don't be afraid to add your own twist to the work you're doing.
- Consistently apply and improve team software development processes such as estimations, tracking, testing, code and design reviews, etc., but do it with a funky twist that reflects your personality.
- Clarify requirements and provide end-to-end estimates. We all love it when requirements are clear (Don’t we?)
- Participate in release planning and design complex modules & features.
- Work with product and business teams directly for critical issue ownership. Isn’t it better when one of us understands what they say?
- Feel empowered by managing deployments and assisting in infra management.
- Act as a role model for the team and guide them to brilliance. We all feel secure when we have someone to look up to.
Qualifications
We want to make sure you're a funky, tech-loving person with a passion for learning and growing. Here are some of the things we're looking for:
- You have a Bachelor's or Master’s degree in Computer Science or a related field, but you also have a creative side that you're not afraid to show.
- You have excellent abstract reasoning ability and a strong understanding of core computer science fundamentals.
- You're proficient with web programming languages such as HTML, CSS, JavaScript with at least 5+ years of experience, but you're also open to learning new languages and technologies that might not be as mainstream.
- You have 5+ years of experience with the backend web frameworks Django and DRF.
- You have 5+ years of experience with the frontend web framework React.
- Your knowledge of cloud service providers like AWS, GCP, Azure, etc. will be an added bonus.
- You have experience with testing, code, and design reviews.
- You have strong written and verbal communication skills, but you're also not afraid to show your personality and let your funky side shine through.
- You can work independently and in a team environment, but you're also excited to collaborate with others and share your ideas.
- You've demonstrated your ability to lead a small team of developers.
- And most important, you're also excited to learn about new things and try out new ideas.
Compensation:
We know you're passionate and talented, and we want to reward you for that. That's why we're offering a compensation package of 15 - 17 LPA!
This is a mid-level position where you'll get to flex your coding muscles, work on exciting projects, and grow your skills in a fast-paced, dynamic environment. So, if you're passionate about all things tech and ready to take your skills to the next level, we want YOU to apply! Let's make some magic happen together!
We are located in Delhi. This post may require relocation.
About the Role
Hudson Data is looking for a Senior / Mid-Level SQL Engineer to design, build, optimize, and manage our data platforms. This role requires strong hands-on expertise in SQL, Google Cloud Platform (GCP), and Linux to support high-performance, scalable data solutions.
We are also hiring Python Programmers / Software Developers / Front-End and Back-End Engineers.
Key Responsibilities:
- Develop and optimize complex SQL queries, views, and stored procedures
- Build and maintain data pipelines and ETL workflows on GCP (e.g., BigQuery, Cloud SQL)
- Manage database performance, monitoring, and troubleshooting
- Work extensively in Linux environments for deployments and automation
- Partner with data, product, and engineering teams on data initiatives
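Query optimization of the kind described above usually starts with reading the query plan. A self-contained sketch using the stdlib sqlite3 module (the table and data are invented; production work would target BigQuery or Cloud SQL instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

query = "SELECT total FROM orders WHERE customer_id = 7"

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in
    # their last column.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

print(plan(query))   # a full table scan before indexing
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(plan(query))   # the same query now uses the index
```

The same workflow (run the plan, spot the scan, add or adjust an index, re-check) carries over to BigQuery's execution details and PostgreSQL's `EXPLAIN ANALYZE`.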
Required Skills & Qualifications
Must-Have Skills (Essential)
- Expert-level SQL and GCP experience (mandatory)
- Strong Linux / shell scripting skills (mandatory)
Nice to Have
- Experience with data warehousing and ETL frameworks
- Python / scripting for automation
- Performance tuning and query optimization experience
Soft Skills
- Strong analytical, problem-solving, and critical-thinking abilities.
- Excellent communication and presentation skills, including data storytelling.
- Curiosity and creativity in exploring and interpreting data.
- Collaborative mindset, capable of working in cross-functional and fast-paced environments.
Education & Certifications
- Bachelor's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
- Master's degree in Data Analytics, Machine Learning, or Business Intelligence preferred.
Why Join Hudson Data
At Hudson Data, you'll be part of a dynamic, innovative, and globally connected team that uses cutting-edge tools, from AI and ML frameworks to cloud-based analytics platforms, to solve meaningful problems. You'll have the opportunity to grow, experiment, and make a tangible impact in a culture that values creativity, precision, and collaboration.
JOB DETAILS:
- Job Title: Senior DevOps Engineer 2
- Industry: Ride-hailing
- Experience: 5-7 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidates must come from a product-based or scalable app-based startup, with experience handling large-scale production traffic.
2. Minimum 5 years of experience working as a DevOps/Infrastructure Consultant.
3. Own end-to-end infrastructure, from non-prod to prod environments, including self-managed DBs.
4. Must have experience with database migration from scratch.
5. Must have a firm hold on the container orchestration tool Kubernetes.
6. Must have expertise in configuration management tools such as Ansible, Terraform, and Chef/Puppet.
7. Understanding of programming languages like Go, Python, and Java.
8. Working knowledge of databases like Mongo, Redis, Cassandra, Elasticsearch, and Kafka.
9. Working experience on the AWS cloud platform.
10. Minimum 1.5 years of stability per organization, and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will build and operate infrastructure at scale, design and implement tools that enable product teams to build and deploy their services independently, improve observability across the board, and design for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this role is your fit.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and sail through the ambiguities
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Communicate and collaborate effectively to break down silos
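The 99.99% uptime target above translates into a concrete downtime budget; the arithmetic is simple enough to sketch:

```python
# Convert an availability target into allowed downtime per period.

def downtime_budget_minutes(availability, days):
    """Allowed downtime, in minutes, over the given number of days."""
    return (1 - availability) * days * 24 * 60

print(round(downtime_budget_minutes(0.9999, 365), 1))  # ~52.6 min/year
print(round(downtime_budget_minutes(0.9999, 30), 1))   # ~4.3 min/month
```

About 52 minutes of total downtime per year is why "four nines" teams lean on automated rollbacks and redundancy; a single slow manual incident response can spend the whole annual budget.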
Job Requirements:
● B.Tech./B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 5 years of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools such as Ansible, Terraform, and Chef/Puppet
● Strong problem-solving skills and the ability to write scripts in any scripting language
● Understanding of programming languages like Go, Python, and Java
● Comfortable working with databases like Mongo, Redis, Cassandra, Elasticsearch, and Kafka
What’s there for you?
The company's team handles everything: infra, tooling, and a fleet of self-managed databases. For example:
● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● 100s of TB of data in self-managed databases
● 100s of real-time continuous deployments to production
● Self-managed infra supporting
● 100% OSS
JOB DETAILS:
- Job Title: Lead DevOps Engineer
- Industry: Ride-hailing
- Experience: 6-9 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidates must come from a product-based or scalable app-based startup, with experience handling large-scale production traffic.
2. Minimum 6 years of experience working as a DevOps/Infrastructure Consultant.
3. Must have 2 years of experience as a lead (handling a team of at least 3 to 4 members).
4. Own end-to-end infrastructure, from non-prod to prod environments, including self-managed DBs.
5. Must have hands-on experience with database migration from scratch.
6. Must have a firm hold on the container orchestration tool Kubernetes.
7. Should have expertise in configuration management tools such as Ansible, Terraform, and Chef/Puppet.
8. Understanding of programming languages like Go, Python, and Java.
9. Working knowledge of databases like Mongo, Redis, Cassandra, Elasticsearch, and Kafka.
10. Working experience on the AWS cloud platform.
11. Minimum 1.5 years of stability per organization, and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will build and operate infrastructure at scale, design and implement tools that enable product teams to build and deploy their services independently, improve observability across the board, and design for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this role is your fit.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and sail through the ambiguities
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Communicate and collaborate effectively to break down silos
Job Requirements:
● B.Tech./B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 6 years of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools such as Ansible, Terraform, and Chef/Puppet
● Strong problem-solving skills and the ability to write scripts in any scripting language
● Understanding of programming languages like Go, Python, and Java
● Comfortable working with databases like Mongo, Redis, Cassandra, Elasticsearch, and Kafka
What’s there for you?
The company's team handles everything: infra, tooling, and a fleet of self-managed databases. For example:
● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● 100s of TB of data in self-managed databases
● 100s of real-time continuous deployments to production
● Self-managed infra supporting
● 100% OSS
JOB DETAILS:
- Job Title: Senior DevOps Engineer 1
- Industry: Ride-hailing
- Experience: 4-6 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidates must come from a product-based or scalable app-based startup, with experience handling large-scale production traffic.
2. Strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).
3. Solid experience with Kubernetes.
4. Strong knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus and Grafana are a plus.
5. Must be an individual contributor with strong ownership.
6. Hands-on experience with database migrations and observability tools such as Prometheus and Grafana.
7. Working knowledge of Go/Python and Java.
8. Working experience on the AWS cloud platform.
9. Minimum 1.5 years of stability per organization, and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will build and operate infrastructure at scale, design and implement tools that enable product teams to build and deploy their services independently, improve observability across the board, and design for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this role is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.
- Understanding the needs of stakeholders and conveying this to developers.
- Working on ways to automate and improve development and release processes.
- Identifying technical problems and developing software updates and ‘fixes’.
- Working with software developers to ensure that development follows established processes and works as intended.
- Do what it takes to keep the uptime above 99.99%.
- Understand DevOps philosophy and evangelize the principles across the organization.
- Communicate and collaborate effectively to break down silos.
Job Requirements:
- B.Tech./B.E. degree in Computer Science or equivalent software engineering degree/experience.
- Minimum 4 years of experience working as a DevOps/Infrastructure Consultant.
- Strong background in operating systems like Linux.
- Understands the container orchestration tool Kubernetes.
- Proficient knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus and Grafana are a plus.
- Problem-solving attitude and the ability to write scripts in any scripting language.
- Understanding of programming languages like Go, Python, and Java.
- Basic understanding of databases and middleware like Mongo, Redis, Cassandra, Elasticsearch, and Kafka.
- Able to take ownership of tasks, and must be responsible.
- Good communication skills.
About the role
We are looking for an experienced AWS Cloud Engineer with strong Java and Python/Golang expertise to design, modernize, and migrate applications and infrastructure to AWS. The ideal candidate will have hands-on experience with cloud-native development, Java application modernization, and end-to-end AWS migrations, with a strong focus on scalability, security, performance, and cost optimization.
This role involves working across application migration, cloud-native development, and infrastructure automation, collaborating closely with DevOps, security, and product teams.
Key Responsibilities
- Lead and execute application and infrastructure migrations from on-premises or other cloud platforms to AWS
- Assess legacy Java-based applications and define migration strategies (rehost, re-platform, refactor)
- Design and develop cloud-native applications and services using Java, Python, or Golang
- Modify and optimize applications for AWS readiness and scalability
- Design and implement AWS-native architectures ensuring high availability, security, and cost efficiency
- Build and maintain serverless and containerized solutions on AWS
- Develop RESTful APIs and microservices for system integrations
- Implement Infrastructure as Code (IaC) using CloudFormation, Terraform, or AWS CDK
- Support and improve CI/CD pipelines for deployment and migration activities
- Plan and execute data migration, backup, and disaster recovery strategies
- Monitor, troubleshoot, and resolve production and migration-related issues with minimal downtime
- Ensure adherence to AWS security best practices, governance, and compliance standards
- Create and maintain architecture diagrams, runbooks, and migration documentation
- Perform post-migration validation, performance tuning, and optimization
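As one concrete shape a refactor-to-serverless migration can take, here is a minimal AWS Lambda-style handler for an API Gateway proxy event. The route and payload are invented, nothing here calls AWS, and this is a sketch rather than a definitive implementation:

```python
import json

def handler(event, context=None):
    """Lambda-style handler for an API Gateway proxy-integration event."""
    order_id = (event.get("pathParameters") or {}).get("orderId")
    if not order_id:
        return {"statusCode": 400, "body": json.dumps({"error": "orderId required"})}
    # In a real migration this lookup would hit RDS or DynamoDB;
    # here a canned response keeps the sketch self-contained.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"orderId": order_id, "status": "shipped"}),
    }

resp = handler({"pathParameters": {"orderId": "A-17"}})
print(resp["statusCode"], resp["body"])
```

Because the handler is a plain function over a dict, it is trivially unit-testable locally, which is one practical advantage of this migration pattern.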
Required Skills & Experience
- 5–10 years of overall IT experience with strong AWS exposure
- Hands-on experience with AWS services, including:
- EC2, Lambda, S3, RDS
- ECS / EKS, API Gateway
- VPC, Subnets, Route Tables, Security Groups
- IAM, Load Balancers (ALB/NLB), Auto Scaling
- CloudWatch, SNS, CloudTrail
- Strong development experience in Java (8+), Python, or Golang
- Experience migrating Java applications (Spring / Spring Boot preferred)
- Strong understanding of cloud-native, serverless, and microservices architectures
- Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.)
- Hands-on experience with Linux/UNIX environments
- Proficiency with Git-based version control
- Strong troubleshooting, analytical, and problem-solving skills
Good to Have / Nice to Have
- Experience with Docker and Kubernetes (EKS)
- Knowledge of application modernization patterns
- Experience with Terraform, CloudFormation, or AWS CDK
- Database experience: MySQL, PostgreSQL, Oracle, DynamoDB
- Understanding of the AWS Well-Architected Framework
- Experience in large-scale or enterprise migration programs
- AWS Certifications (Developer Associate, Solutions Architect, or Professional)
Education
- Bachelor’s degree in Computer Science, Engineering, or a related field
Responsibilities:
1. Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models.
2. Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes.
3. Automate the training, testing, and deployment processes for machine learning models.
4. Continuously monitor and maintain models in production, ensuring optimal performance, accuracy, and reliability.
5. Implement best practices for version control, model reproducibility, and governance.
6. Optimize machine learning pipelines for scalability, efficiency, and cost-effectiveness.
7. Troubleshoot and resolve issues related to model deployment and performance.
8. Ensure compliance with security and data privacy standards in all MLOps activities.
9. Keep up to date with the latest MLOps tools, technologies, and trends.
10. Provide support and guidance to other team members on MLOps practices.
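The production-monitoring responsibility above often reduces to a simple automated gate: compare a deployed model's recent performance against its validation baseline and flag it for retraining when it degrades. A minimal illustrative sketch follows; the function name and the 0.05 threshold are assumptions for the example, not part of this listing:

```python
def should_trigger_retraining(baseline_accuracy, recent_accuracies, max_drop=0.05):
    """Flag a deployed model for retraining when its recent mean accuracy
    falls more than `max_drop` below the validation baseline."""
    if not recent_accuracies:
        return False  # no recent data; nothing to compare against
    recent_mean = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent_mean) > max_drop


# Example: a model validated at 0.92 accuracy, scored on recent batches
degraded = should_trigger_retraining(0.92, [0.85, 0.84])   # drop of ~0.075
healthy = should_trigger_retraining(0.92, [0.91, 0.90])    # drop of ~0.015
```

In a real pipeline this check would run on scheduled evaluation batches and feed an alerting or retraining job; the threshold should be tuned per model.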
Required skills and experience:
- 3–10 years of experience in MLOps, DevOps, or a related field
- Bachelor's degree in Computer Science, Data Science, or a related field
- Strong understanding of machine learning principles and model lifecycle management
- Experience in Jenkins pipeline development
- Experience in automation scripting
We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
Responsibilities:
* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.
* Deploy and orchestrate containerized applications using Kubernetes.
* Implement and maintain infrastructure as code (IaC) using Terraform.
* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.
* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.
* Collaborate with cross-functional teams to define technical requirements and deliver solutions.
* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
Requirements:
* 2+ years of experience with AWS, including practical exposure to its services in production environments.
* Demonstrated expertise in Kubernetes for container orchestration.
* Proficiency in using Terraform for managing infrastructure as code.
* Exposure to at least one CI/CD tool, such as Jenkins or Chef.
* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
Backend Engineer (Python / Django + DevOps)
Company: SurgePV (A product by Heaven Designs Pvt. Ltd.)
About SurgePV
SurgePV is an AI-first solar design software built from more than a decade of hands-on experience designing and engineering thousands of solar installations at Heaven Designs. After working with nearly every solar design tool in the market, we identified major gaps in speed, usability, and intelligence—particularly for rooftop solar EPCs.
Our vision is to build the most powerful and intuitive solar design platform for rooftop installers, covering fast PV layouts, code-compliant engineering, pricing, proposals, and financing in a single workflow. SurgePV enables small and mid-sized solar EPCs to design more systems, close more deals, and accelerate the clean energy transition globally.
As SurgePV scales, we are building a robust backend platform to support complex geometry, pricing logic, compliance rules, and workflow automation at scale.
Role Overview
We are seeking a Backend Engineer (Python / Django + DevOps) to own and scale SurgePV’s core backend systems. You will be responsible for designing, building, and maintaining reliable, secure, and high-performance services that power our solar design platform.
This role requires strong ownership—you will work closely with the founders, frontend engineers, and product team to make architectural decisions and ensure the platform remains fast, observable, and scalable as global usage grows.
Key Responsibilities
- Design, develop, and maintain backend services and REST APIs that power PV design, pricing, and core product workflows.
- Collaborate with the founding team on system architecture, including authentication, authorization, billing, permissions, integrations, and multi-tenant design.
- Build secure, scalable, and observable systems with structured logging, metrics, alerts, and rate limiting.
- Own DevOps responsibilities for backend services, including Docker-based containerization, CI/CD pipelines, and production deployments.
- Optimize PostgreSQL schemas, migrations, indexes, and queries for computation-heavy and geospatial workloads.
- Implement caching strategies and performance optimizations where required.
- Integrate with third-party APIs such as CRMs, financing providers, mapping platforms, and satellite or irradiance data services.
- Write clean, maintainable, well-tested code and actively participate in code reviews to uphold engineering quality.
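One of the responsibilities listed above, rate limiting, is commonly implemented as a token bucket. The sketch below is a minimal illustrative version, not SurgePV's actual implementation; the class name and parameters are assumptions:

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: refills at `rate` tokens/second,
    allowing bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)      # start full: allow an initial burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In a Django service, a limiter like this (typically backed by Redis rather than in-process state) would sit in middleware keyed by tenant or API key.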
Required Skills & Qualifications (Must-Have)
- 2–5 years of experience as a Backend Engineer.
- Strong proficiency in Python and Django / Django REST Framework.
- Solid computer science fundamentals, including data structures, algorithms, and basic distributed systems concepts.
- Proven experience designing and maintaining REST APIs in production environments.
- Hands-on DevOps experience, including:
- Docker and containerized services
- CI/CD pipelines (GitHub Actions, GitLab CI, or similar)
- Deployments on cloud platforms such as AWS, GCP, Azure, or DigitalOcean
- Strong working knowledge of PostgreSQL, including schema design, migrations, indexing, and query optimization.
- Strong debugging skills and a habit of instrumenting systems using logs, metrics, and alerts.
- Ownership mindset with the ability to take systems from spec → implementation → production → iteration.
Good-to-Have Skills
- Experience working in early-stage startups or building 0→1 products.
- Familiarity with Kubernetes or other container orchestration tools.
- Experience with Infrastructure as Code (Terraform, Pulumi).
- Exposure to monitoring and observability stacks such as Prometheus, Grafana, ELK, or similar tools.
- Prior exposure to solar, CAD/geometry, geospatial data, or financial/pricing workflows.
What We Offer
- Real-world impact: every feature you ship helps accelerate solar adoption on real rooftops.
- Opportunity to work across backend engineering, DevOps, integrations, and performance optimization.
- A mission-driven, fast-growing product focused on sustainability and clean energy.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
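The Python automation work described above often comes down to small, reliable operational helpers. As a hedged example, here is a minimal exponential-backoff retry wrapper of the kind used around flaky deployment or API calls; the function name and defaults are illustrative assumptions:

```python
import time


def retry(operation, attempts=3, base_delay=0.1, sleep=time.sleep):
    """Run `operation`, retrying on any exception with exponential backoff.

    Waits base_delay, then 2*base_delay, 4*base_delay, ... between attempts;
    re-raises the last exception if all attempts fail. `sleep` is injectable
    so tests can skip real waiting.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            sleep(base_delay * (2 ** attempt))
```

Production variants usually narrow the caught exception types and add jitter to the delay to avoid thundering-herd retries.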
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
Key Responsibilities
- Automation & Reliability: Automate infrastructure and operational processes to ensure high reliability, scalability, and security.
- Cloud Infrastructure Design: Gather GCP infrastructure requirements, evaluate solution options, and implement best-fit cloud architectures.
- Infrastructure as Code (IaC): Design, develop, and maintain infrastructure using Terraform and Ansible.
- CI/CD Ownership: Build, manage, and maintain robust CI/CD pipelines using Jenkins, ensuring system reliability and performance.
- Container Orchestration: Manage Docker containers and self-managed Kubernetes clusters across multiple cloud environments.
- Monitoring & Observability: Implement and manage cloud-native monitoring solutions using Prometheus, Grafana, and the ELK stack.
- Proactive Issue Resolution: Troubleshoot and resolve infrastructure and application issues across development, testing, and production environments.
- Scripting & Automation: Develop efficient automation scripts using Python and one or more of Node.js, Go, or Shell scripting.
- Security Best Practices: Maintain and enhance the security of cloud services, Kubernetes clusters, and deployment pipelines.
- Cross-functional Collaboration: Work closely with engineering, product, and security teams to design and deploy secure, scalable infrastructure.
About PGAGI:
At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.
Position Overview:
PGAGI Consultancy Pvt. Ltd. is seeking a proactive and motivated DevOps Intern with around 3-6 months of hands-on experience to support our AI model deployment and infrastructure initiatives. This role is ideal for someone looking to deepen their expertise in DevOps practices tailored to AI/ML environments, including CI/CD automation, cloud infrastructure, containerization, and monitoring.
Key Responsibilities:
AI Model Deployment & Integration
- Assist in containerizing and deploying AI/ML models into production using Docker.
- Support integration of models into existing systems and APIs.
Infrastructure Management
- Help manage cloud and on-premise environments to ensure scalability and consistency.
- Work with Kubernetes for orchestration and environment scaling.
CI/CD Pipeline Automation
- Collaborate on building and maintaining automated CI/CD pipelines (e.g., GitHub Actions, Jenkins).
- Implement basic automated testing and rollback mechanisms.
Hosting & Web Environment Management
- Assist in managing hosting platforms, web servers, and CDN configurations.
- Support DNS, load balancer setups, and ensure high availability of web services.
Monitoring, Logging & Optimization
- Set up and maintain monitoring/logging tools like Prometheus and Grafana.
- Participate in troubleshooting and resolving performance bottlenecks.
Security & Compliance
- Apply basic DevSecOps practices including security scans and access control implementations.
- Follow security and compliance checklists under supervision.
Cost & Resource Management
- Monitor resource usage and suggest cost optimization strategies in cloud environments.
Documentation
- Maintain accurate documentation for deployment processes and incident responses.
Continuous Learning & Innovation
- Suggest improvements to workflows and tools.
- Stay updated with the latest DevOps and AI infrastructure trends.
Requirements:
- Around 6 months of experience in a DevOps or related technical role (internship or professional).
- Basic understanding of Docker, Kubernetes, and CI/CD tools like GitHub Actions or Jenkins.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and monitoring tools (e.g., Prometheus, Grafana).
- Exposure to scripting languages (e.g., Bash, Python) is a plus.
- Strong problem-solving skills and eagerness to learn.
- Good communication and documentation abilities.
Compensation
- Joining Bonus: INR 2,500 one-time bonus upon joining.
- Monthly Stipend: Base stipend of INR 8,000 per month, with the potential to increase up to INR 20,000 based on performance evaluations.
- Performance-Based Pay Scale: Eligibility for monthly performance-based bonuses, rewarding exceptional project contributions and teamwork.
- Additional Benefits: Access to professional development opportunities, including workshops, tech talks, and mentoring sessions.
Ready to kick-start your DevOps journey in a dynamic AI-driven environment? Apply now
#Devops #Docker #Kubernetes #DevOpsIntern
We are looking for a DevOps Engineer with hands-on experience in automating, monitoring, and scaling cloud-native infrastructure.
You will play a critical role in building and maintaining high-availability, secure, and scalable CI/CD pipelines for our AI- and blockchain-powered FinTech platforms.
You will work closely with Engineering, QA, and Product teams to streamline deployments, optimize cloud environments, and ensure reliable production systems.
Key Responsibilities
- Design, build, and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
- Manage cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform, Ansible, or CloudFormation
- Deploy, manage, and monitor applications on AWS, Azure, or GCP
- Ensure high availability, scalability, and performance of production environments
- Implement security best practices across infrastructure and DevOps workflows
- Automate environment provisioning, deployments, backups, and monitoring
- Configure and manage containerized applications using Docker and Kubernetes
- Collaborate with developers to improve build, release, and deployment processes
- Monitor systems using tools like Prometheus, Grafana, ELK Stack, or CloudWatch
- Perform root cause analysis (RCA) and support production incident response
Required Skills & Experience
- 2+ years of experience in DevOps, Cloud Engineering, or Infrastructure Automation
- Strong hands-on experience with AWS, Azure, or GCP
- Proven experience in setting up and managing CI/CD pipelines
- Proficiency in Docker, Kubernetes, and container orchestration
- Experience with Terraform, Ansible, or similar IaC tools
- Knowledge of monitoring, logging, and alerting systems
- Strong scripting skills using Shell, Bash, or Python
- Good understanding of Git, version control, and branching strategies
- Experience supporting production-grade SaaS or enterprise platforms
Python Backend Developer
We are seeking a skilled Python Backend Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be on developing server-side logic to ensure high performance and responsiveness to requests from the front end. You will also be responsible for integrating front-end elements built by your coworkers into the application, as well as managing AWS resources.
Roles & Responsibilities
- Develop and maintain scalable, secure, and robust backend services using Python
- Design and implement RESTful APIs and/or GraphQL endpoints
- Integrate user-facing elements developed by front-end developers with server-side logic
- Write reusable, testable, and efficient code
- Optimize components for maximum performance and scalability
- Collaborate with front-end developers, DevOps engineers, and other team members
- Troubleshoot and debug applications
- Implement data storage solutions (e.g., PostgreSQL, MySQL, MongoDB)
- Ensure security and data protection
Mandatory Technical Skill Set
- Implementing optimal data storage solutions (e.g., PostgreSQL, MySQL, MongoDB, S3)
- Python backend development experience
- Designing, implementing, and maintaining CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions
- Implementing and managing containerization platforms such as Docker and orchestration tools like Kubernetes
- Previous hands-on experience with:
- EC2, S3, ECS, EMR, VPC, Subnets, SQS, CloudWatch, CloudTrail, Lambda, SageMaker, RDS, SES, SNS, IAM, AWS Backup, AWS WAF
- SQL
Key Responsibilities
- Lead the design, development, and evolution of our AWS platform services and automation frameworks.
- Build scalable, secure, and reusable infrastructure using Terraform, AWS CDK, and GitOps workflows.
- Develop and maintain CI/CD pipelines to support rapid delivery and reliable releases.
- Architect solutions leveraging core AWS services such as EKS, Lambda, IAM, CloudFront, Control Tower, and other platform components.
- Define, enforce, and monitor security, governance, and compliance guardrails using AWS Config, SCPs, AWS SSO, and least-privilege IAM policies.
- Enable observability and reliability with tools like CloudWatch, Grafana, Prometheus, and OpenTelemetry.
- Collaborate with cross-functional teams to deliver internal platform capabilities with clear SLAs and support models.
- Mentor and guide teams in AWS best practices to improve productivity, operational excellence, and cloud cost optimization.
Mandatory Skills & Expertise
- AWS Platform Mastery: In-depth experience with Kubernetes (EKS), serverless (Lambda), IAM, CloudFront, Control Tower, and related platform services.
- Infrastructure as Code: Expert in Terraform, AWS CDK, or equivalent for building production-grade IaC.
- CI/CD & Observability: Strong experience with GitHub Actions and observability stacks (CloudWatch, Grafana, Prometheus, OpenTelemetry).
- Security & Governance: Demonstrated ability to implement governance guardrails (SCPs, AWS Config) and enforce least-privilege access.
- Platform-as-a-Product: Mindset for treating internal platform as a product with service ownership, SLAs, and measurable developer experience improvements.
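Governance guardrails like the least-privilege enforcement above are frequently expressed as policy-as-code checks that run in CI. A minimal illustrative sketch, scanning an IAM policy document for wildcard grants, is shown below; the function name and heuristic are assumptions for the example:

```python
def find_wildcard_statements(policy: dict) -> list:
    """Return the Sids of IAM policy statements that grant overly broad
    access, i.e. where Action or Resource contains a bare '*'."""
    findings = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM allows either a string or a list for these fields.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions or "*" in resources:
            findings.append(stmt.get("Sid", "<unnamed>"))
    return findings


# Example policy with one over-broad and one scoped statement
policy = {
    "Statement": [
        {"Sid": "Broad", "Action": "*", "Resource": "arn:aws:s3:::bucket/*"},
        {"Sid": "Scoped", "Action": ["s3:GetObject"],
         "Resource": ["arn:aws:s3:::bucket/*"]},
    ]
}
```

Real guardrails would layer checks like this with AWS Config rules and SCPs rather than rely on a single lint pass.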
🚀 Hiring: Java Developer at Deqode
⭐ Experience: 2+ Years
📍 Location: Mumbai
⭐ Work Mode: 5 Days Work from Office
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
We are looking for a Java Developer (Mid/Senior) to join our Implementation & Application Support team supporting critical fintech platforms. The role involves backend development, application monitoring, incident management, and close collaboration with customers. Senior developers will handle escalations, mentor juniors, and drive operational excellence.
Key Responsibilities (Brief)
✅ Develop and support Java applications (Spring Boot / Quarkus).
✅ Monitor applications and resolve production issues.
✅ Manage incidents, perform root cause analysis, and handle ITSM tickets.
✅ Collaborate with customers and internal teams.
✅ (Senior) Lead escalations and mentor junior engineers.
Top Skills Required
✅ Java, Spring Boot, Quarkus
✅ Application Support & Incident Management
✅ ServiceNow / JIRA / ITSM tools
✅ Monitoring & Production Support
✅ Kafka, Redis, Solace, Aerospike (good to have)
✅ Docker, Kubernetes, CI/CD (plus)
We’re looking for an experienced Site Reliability Engineer to fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.
6-Month Accomplishments
- Familiarize yourself with the Poshmark tech stack and functional requirements.
- Get comfortable with the automation tools/frameworks used within the CloudOps organization and their associated deployment processes.
- Gain in-depth knowledge of the relevant product functionality and the infrastructure required for it.
- Start contributing by working on small- to medium-scale projects.
- Join the on-call rotation as a secondary to become familiar with the on-call process.
12+ Month Accomplishments
- Execute projects related to comms functionality independently, with little guidance from the lead.
- Create meaningful alerts and dashboards for the various sub-systems involved in the targeted infrastructure.
- Identify gaps in infrastructure and suggest or implement improvements.
- Get involved in on-call rotation.
Responsibilities
- Serve as a primary point of responsibility for the overall health, performance, and capacity of one or more of our Internet-facing services.
- Gain deep knowledge of our complex applications.
- Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth.
- Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
- Work closely with development teams to ensure that platforms are designed with "operability" in mind.
- Function well in a fast-paced, rapidly-changing environment.
- Participate in a 24x7 on-call rotation.
Desired Skills
- 5+ years of experience in Systems Engineering/Site Reliability Operations role is required, ideally in a startup or fast-growing company.
- 5+ years in a UNIX-based large-scale web operations role.
- 5+ years of experience in doing 24/7 support for large scale production environments.
- Battle-proven, real-life experience in running a large scale production operation.
- Experience working on cloud-based infrastructure, e.g., AWS, GCP, Azure.
- Hands-on experience with continuous integration tools such as Jenkins, configuration management with Ansible, and systems monitoring and alerting with tools such as Nagios, New Relic, and Graphite.
- Scripting/coding experience.
- Ability to use a wide variety of open-source technologies and tools.
Technologies we use:
- Ruby, JavaScript, Node.js, Tomcat, Nginx, HAProxy
- MongoDB, RabbitMQ, Redis, Elasticsearch
- Amazon Web Services (EC2, RDS, CloudFront, S3, etc.)
- Terraform, Packer, Jenkins, Datadog, Kubernetes, Docker, Ansible and other DevOps tools.
About Poshmark
Poshmark is a leading fashion resale marketplace powered by a vibrant, highly engaged community of buyers and sellers and real-time social experiences. Designed to make online selling fun, more social and easier than ever, Poshmark empowers its sellers to turn their closet into a thriving business and share their style with the world. Since its founding in 2011, Poshmark has grown its community to over 130 million users and generated over $10 billion in GMV, helping sellers realize billions in earnings, delighting buyers with deals and one-of-a-kind items, and building a more sustainable future for fashion. For more information, please visit www.poshmark.com, and for company news, visit newsroom.poshmark.com.
We’re looking for an experienced Site Reliability Engineer to fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.
6-Month Accomplishments
- Familiarize yourself with the Poshmark tech stack and functional requirements.
- Get comfortable with the automation tools/frameworks used within the CloudOps organization and their associated deployment processes.
- Gain in-depth knowledge of the relevant product functionality and the infrastructure required for it.
- Start contributing by working on small- to medium-scale projects.
- Join the on-call rotation as a secondary to become familiar with the on-call process.
12+ Month Accomplishments
- Execute projects related to comms functionality independently, with little guidance from the lead.
- Create meaningful alerts and dashboards for the various sub-systems involved in the targeted infrastructure.
- Identify gaps in infrastructure and suggest or implement improvements.
- Get involved in on-call rotation.
Responsibilities
- Serve as a primary point of responsibility for the overall health, performance, and capacity of one or more of our Internet-facing services.
- Gain deep knowledge of our complex applications.
- Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth.
- Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
- Work closely with development teams to ensure that platforms are designed with "operability" in mind.
- Function well in a fast-paced, rapidly-changing environment.
- Participate in a 24x7 on-call rotation.
Desired Skills
- 4+ years of experience in Systems Engineering/Site Reliability Operations role is required, ideally in a startup or fast-growing company.
- 4+ years in a UNIX-based large-scale web operations role.
- 4+ years of experience in doing 24/7 support for large scale production environments.
- Battle-proven, real-life experience in running a large scale production operation.
- Experience working on cloud-based infrastructure, e.g., AWS, GCP, Azure.
- Hands-on experience with continuous integration tools such as Jenkins, configuration management with Ansible, and systems monitoring and alerting with tools such as Nagios, New Relic, and Graphite.
- Scripting/coding experience.
- Ability to use a wide variety of open-source technologies and tools.
Technologies we use:
- Ruby, JavaScript, Node.js, Tomcat, Nginx, HAProxy
- MongoDB, RabbitMQ, Redis, Elasticsearch
- Amazon Web Services (EC2, RDS, CloudFront, S3, etc.)
- Terraform, Packer, Jenkins, Datadog, Kubernetes, Docker, Ansible and other DevOps tools.
SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)
Key Skills: Software Development Life Cycle (SDLC), CI/CD
About Company: Consumer Internet / E-Commerce
Company Size: Mid-Sized
Experience Required: 6 - 10 years
Working Days: 5 days/week
Office Location: Bengaluru [Karnataka]
Review Criteria:
Mandatory:
- Strong DevSecOps profile
- Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
- Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
- Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
- Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
- Must have a solid understanding of core security domains, including network security, encryption, identity and access management, key management, and security governance, along with cloud-native security services like GuardDuty, Azure Security Center, etc.
- Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
- Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
- Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
- Experience in B2B SaaS product companies
- Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments
Preferred:
- Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
- Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
- Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language
Roles & Responsibilities:
We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.
This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.
If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.
What You’ll Do-
Cloud & Infrastructure Security:
- Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
- Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
- Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
Application & DevSecOps Security:
- Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
- Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
- Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
Security Monitoring & Incident Response:
- Monitor security alerts and investigate potential threats across cloud and application layers.
- Lead or support incident response efforts, root-cause analysis, and corrective actions.
- Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
- Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
- Continuously improve detection, response, and testing maturity.
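The alert-monitoring and triage work described above can be sketched with a small detection rule. This is a minimal, stdlib-only illustration (not any specific SIEM's API); the log format, field names, and thresholds are hypothetical:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical log line format:
#   "2024-01-01T10:00:00 FAILED_LOGIN user=alice src=10.0.0.5"
def detect_bursts(events, threshold=5, window=timedelta(minutes=1)):
    """Flag source IPs with `threshold`+ failed logins inside a sliding window."""
    by_src = defaultdict(list)
    for line in events:
        ts_str, kind, _user, src = line.split()
        if kind != "FAILED_LOGIN":
            continue  # only failed logins feed this rule
        by_src[src.split("=")[1]].append(datetime.fromisoformat(ts_str))
    alerts = []
    for src, times in by_src.items():
        times.sort()
        # slide a window of `threshold` consecutive events
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                alerts.append(src)
                break
    return alerts
```

In practice this kind of rule lives in a SIEM's correlation engine rather than hand-rolled code, but the windowing logic is the same.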
Security Tools & Platforms:
- Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
- Ensure tools are well-integrated, actionable, and aligned with operational needs.
Compliance, Governance & Awareness:
- Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
- Promote secure engineering practices through training, documentation, and ongoing awareness programs.
- Act as a trusted security advisor to engineering and product teams.
Continuous Improvement:
- Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
- Continuously raise the bar on the company's security posture through automation and process improvement.
Endpoint Security (Secondary Scope):
- Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.
Ideal Candidate:
- Strong hands-on experience in cloud security across AWS and Azure.
- Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
- Experience securing containerized and Kubernetes-based environments.
- Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
- Solid understanding of network security, encryption, identity, and access management.
- Experience with application security testing tools (SAST, DAST, SCA).
- Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
- Strong analytical, troubleshooting, and problem-solving skills.
Nice to Have:
- Experience with DevSecOps automation and security-as-code practices.
- Exposure to threat intelligence and cloud security monitoring solutions.
- Familiarity with incident response frameworks and forensic analysis.
- Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.
Perks, Benefits and Work Culture:
A wholesome opportunity in a fast-paced environment where you can juggle multiple concepts while maintaining quality, share your ideas, and learn continuously on the job. Work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.
Experience: 3+ years
Responsibilities:
- Build, train, and fine-tune ML models
- Develop features to improve model accuracy and outcomes
- Deploy models into production using Docker, Kubernetes, and cloud services
- Proficiency in Python and MLOps, with expertise in data processing and large-scale datasets
- Hands-on experience with cloud AI/ML services
- Exposure to RAG architecture
About the company:
Aptroid Consulting (India) Pvt Ltd is a web development company focused on helping marketers transform the customer experience, using customer data to inform every interaction in real time based on each individual's behavior, increasing engagement and driving revenue.
About the Role:
We are hiring Senior Full Stack Developers to strengthen the LiveIntent engineering team. The role requires strong backend depth combined with solid frontend expertise to build and scale high-performance, data-intensive systems.
Candidates are expected to demonstrate excellent analytical and problem-solving skills, along with strong system design capabilities for large-scale, distributed applications. Prior experience in AdTech or similar high-throughput domains is highly desirable.
Required Skills & Experience:
- 7–12 years of hands-on experience in full-stack development
- Strong proficiency in Python with Django (ORM, REST APIs, performance tuning)
- Solid experience with Angular (modern versions, component architecture)
- Hands-on experience with Docker and Kubernetes in production environments
- Strong understanding of MySQL, including query optimization and schema design
- Experience using Datadog for monitoring, metrics, and observability
- Excellent analytical, problem-solving, and debugging skills
- Proven experience in system design for scalable, distributed systems
Good to Haves:
- Experience with Node.js
- Strong background in database schema design and data modeling
- Prior experience working in AdTech / MarTech / digital advertising platforms
- Exposure to event-driven systems, real-time data pipelines, or high-volume traffic systems
- Experience with CI/CD pipelines and cloud platforms (AWS)
Key Responsibilities:
- Design, develop, and maintain scalable full-stack applications using Python (Django) and Angular
- Build and optimize backend services handling large data volumes and high request throughput
- Design and implement RESTful APIs with a focus on performance, security, and reliability
- Lead and contribute to system design discussions covering scalability, fault tolerance, and observability
- Containerize applications using Docker and deploy/manage workloads on Kubernetes
- Design, optimize, and maintain MySQL database schemas, queries, and indexes
- Implement monitoring, logging, and alerting using Datadog
- Perform deep debugging and root-cause analysis of complex production issues
- Collaborate with product, platform, and data teams to deliver business-critical features
- Mentor junior engineers and promote engineering best practices
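The MySQL query-optimization expectation above boils down to one habit: check the query plan before and after adding an index. A minimal sketch using stdlib sqlite3 (the listing targets MySQL, but the EXPLAIN-driven workflow is the same; table and index names are illustrative):

```python
import sqlite3

# In-memory database with a table large enough for the planner to care.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, kind TEXT)")
conn.executemany("INSERT INTO events (user_id, kind) VALUES (?, ?)",
                 [(i % 100, "click") for i in range(1000)])

# Without an index, filtering on user_id forces a full-table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = ?", (7,)).fetchall()

# Adding an index turns the scan into an index search.
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = ?", (7,)).fetchall()

print(plan_before[-1][-1])  # e.g. a SCAN over events
print(plan_after[-1][-1])   # e.g. a SEARCH using idx_events_user
```

MySQL's `EXPLAIN` output looks different (rows/type/key columns), but the scan-vs-index-lookup distinction carries over directly.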
Procedure is hiring for Drover.
This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.
About Drover
Ranching is getting harder. Increased labor costs and a volatile climate are placing mounting pressure on ranchers to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce carbon footprint through rotational grazing. Not only is this a $46B opportunity; you'll also be working on a climate solution with the potential for real, meaningful impact.
We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.
Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.
We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.
About The Role
As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.
Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.
What You'll Do
- Develop Drover IoT cloud architecture from the ground up (it's a greenfield project)
- Design and implement services to support wearable devices, mobile app, and backend API
- Implement data processing and storage pipelines
- Create and maintain Infrastructure-as-Code
- Support the engineering team across all aspects of early-stage development -- after all, this is a startup
Requirements
- 5+ years of experience developing cloud architecture on AWS
- In-depth understanding of various AWS services, especially those related to IoT
- Expertise in cloud-hosted, event-driven, serverless architectures
- Expertise in programming languages suitable for AWS microservices (e.g., TypeScript, Python)
- Experience with networking and socket programming
- Experience with Kubernetes or similar orchestration platforms
- Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
- Familiarity with relational databases (PostgreSQL)
- Familiarity with Continuous Integration and Continuous Deployment (CI/CD)
Nice To Have
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field
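The event-driven, serverless architecture this role calls for typically centers on small, stateless handlers. A minimal Lambda-style sketch for device telemetry; the event shape, field names, and battery threshold are hypothetical, not Drover's actual schema:

```python
import json

def handler(event, context=None):
    """Validate an IoT telemetry event and decide whether to raise an alert."""
    try:
        # Accept either an API-Gateway-style wrapped body or a raw dict.
        payload = json.loads(event["body"]) if isinstance(event.get("body"), str) else event
        device_id = payload["device_id"]
        battery = float(payload["battery_pct"])
    except (KeyError, ValueError, TypeError):
        return {"statusCode": 400, "body": json.dumps({"error": "malformed telemetry"})}

    # Illustrative rule: flag devices that need charging before the next rotation.
    alert = battery < 15.0
    return {
        "statusCode": 200,
        "body": json.dumps({"device_id": device_id, "low_battery_alert": alert}),
    }
```

In a real deployment the handler would be wired to an event source (IoT Core, SQS, API Gateway) via Infrastructure-as-Code rather than invoked directly.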
About Tarento:
Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.
We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.
Job Summary:
We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment
Key Responsibilities:
- Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
- Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS CodeBuild/CodeDeploy, Azure DevOps, etc.
- Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
- Manage and monitor production deployments to ensure high availability and performance
- Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
- Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
- Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
- Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
- Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
- Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
- Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
- Collaborate with development and QA teams to align infrastructure with application needs
- Troubleshoot infrastructure and deployment issues efficiently and proactively
- Ensure cloud cost optimization and usage tracking
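The caching-strategy responsibility above spans several layers (CDN, reverse proxy, application). The application layer is the easiest to sketch: a minimal in-process TTL cache, stdlib only. Real deployments would typically reach for Redis or HTTP cache headers instead; this is just the core expiry mechanic:

```python
import time

class TTLCache:
    """Tiny time-to-live cache with lazy eviction on read."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
```

The same expires-then-refetch idea underlies CDN edge caching and `Cache-Control: max-age` at the browser, just enforced at different layers.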
Required Skills & Experience:
- 3-4 years of hands-on experience in a DevOps role
- Strong expertise with both AWS and Azure cloud platforms
- Proficient in Git, branching strategies, and pull request workflows
- Deep understanding of CI/CD concepts and experience with pipeline tools
- Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
- Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
- Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
- Experience with Infrastructure as Code tools (Terraform preferred)
- Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
- Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
- Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
- Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
- Working knowledge of monitoring and logging tools
- Strong troubleshooting and problem-solving skills
Good to Have (Bonus Points):
- Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
- Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
- Experience with compliance/security best practices (SOC2, ISO, etc.)
- Familiarity with Service Mesh (Istio, Linkerd) and API gateways
- Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)
About the Role
We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps, helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.
The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.
Key Responsibilities
- AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
- Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
- Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
- Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
- Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
- Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
- Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- 4+ years of experience in MLOps, DevOps, SRE, or Data Engineering, with at least 2 years in AI/ML-focused operations.
- Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
- Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
- Proficiency in Python and/or other scripting languages for automation.
- Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
- Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
- Knowledge of data governance, model drift detection, and compliance in AI systems.
- Excellent problem-solving, communication, and collaboration skills.
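The model-drift-detection knowledge listed above is often put into practice with the Population Stability Index (PSI) over binned feature or score distributions. A minimal stdlib sketch; the 0.1/0.25 cutoffs are common rules of thumb, not a standard mandated by any particular MLOps framework:

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """PSI = sum over bins of (actual - expected) * ln(actual / expected)."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time bin distribution
stable   = [0.24, 0.26, 0.25, 0.25]  # serving traffic, little change
shifted  = [0.05, 0.15, 0.30, 0.50]  # serving traffic after drift

# Rule-of-thumb thresholds: < 0.1 stable, > 0.25 significant drift.
print(psi(baseline, stable) < 0.1)
print(psi(baseline, shifted) > 0.25)
```

In a production pipeline this check would run on a schedule against fresh serving data, with the result feeding the alerting stack rather than a print.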
Nice-to-Have
- Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
- Familiarity with data science concepts, and frameworks such as scikit-learn, Keras, PyTorch, Tensorflow, etc.
- Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
- Contributions to open-source MLOps/AI Ops tools or platforms.
- Exposure to Responsible AI practices, model fairness, and explainability frameworks
Why Join Us
- Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
- Work alongside leading data scientists and engineers on cutting-edge AI solutions.
- Competitive compensation, benefits, and career growth opportunities.
Role Overview
We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.
Responsibilities:
- Design, implement, and maintain CI/CD pipelines.
- Containerize applications using Docker and orchestrate deployments
- Manage and optimize cloud infrastructure on AWS and Azure platforms
- Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability
- Troubleshoot and resolve infrastructure and deployment issues
- Create and maintain documentation for processes and configurations
- Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
- Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.
Requirements:
- 2+ years of hands-on experience with AWS cloud services
- Strong proficiency in CI/CD pipeline configuration
- Expertise in Docker containerisation and container management
- Proficiency in shell scripting (Bash/PowerShell)
- Working knowledge of monitoring and logging tools
- Knowledge of network security and firewall configuration
- Strong communication and collaboration skills, with the ability to work effectively within a team environment
- Understanding of networking concepts and protocols in AWS and/or Azure
Immediately available performance test engineer with real-time exposure to LoadRunner and JMeter, who has tested Java applications in AWS environments.
We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
Key Responsibilities
• Develop, maintain, and optimize backend applications using Python.
• Build and integrate RESTful APIs and microservices.
• Work with relational and NoSQL databases for data storage, retrieval, and optimization.
• Write clean, efficient, and reusable code while following best practices.
• Collaborate with cross-functional teams (frontend, QA, DevOps) to deliver high quality features.
• Participate in code reviews to maintain high coding standards.
• Troubleshoot, debug, and upgrade existing applications.
• Ensure application security, performance, and scalability.
Required Skills & Qualifications:
• 2–4 years of hands-on experience in Python development.
• Strong command over Python frameworks such as Django, Flask, or FastAPI.
• Solid understanding of Object-Oriented Programming (OOP) principles.
• Experience working with databases such as PostgreSQL, MySQL, or MongoDB.
• Proficiency in writing and consuming REST APIs.
• Familiarity with Git and version control workflows.
• Experience with unit testing and frameworks like PyTest or Unittest.
• Knowledge of containerization (Docker) is a plus.
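The "clean, reusable code" and unit-testing expectations above pair naturally: keep validation in a small service-layer function and test it directly. A minimal sketch using stdlib `unittest`; the signup rules and field names are illustrative only, not a real product's schema:

```python
import unittest

def create_user(payload):
    """Validate a signup payload and return the record to persist."""
    email = payload.get("email", "").strip().lower()
    if "@" not in email:
        raise ValueError("invalid email")
    age = payload.get("age")
    if not isinstance(age, int) or age < 0:
        raise ValueError("invalid age")
    return {"email": email, "age": age}

class CreateUserTests(unittest.TestCase):
    def test_normalizes_email(self):
        user = create_user({"email": "  Alice@Example.COM ", "age": 30})
        self.assertEqual(user["email"], "alice@example.com")

    def test_rejects_bad_email(self):
        with self.assertRaises(ValueError):
            create_user({"email": "not-an-email", "age": 30})
```

Run with `python -m unittest`; the same function slots unchanged behind a Django, Flask, or FastAPI view, which is what keeps it easy to test.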
Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python
Criteria:
Looking for candidates with a notice period of 15 to 30 days.
Looking for candidates from the Hyderabad location only.
Looking for candidates from EPAM only.
1. 4+ years of software development experience
2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.
3. Hands-on with NATS for event-driven architecture and streaming.
4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.
5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.
6. Proficient in Python (Flask) for building scalable applications and APIs.
7. Focus: Java, Python, Kubernetes, Cloud-native development
8. SQL databases
Description
Position Overview
We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.
Key Responsibilities
- Design, develop, and maintain scalable applications using Java and Spring Boot framework
- Build robust web services and APIs using Python and Flask framework
- Implement event-driven architectures using NATS messaging server
- Deploy, manage, and optimize applications in Kubernetes environments
- Develop microservices following best practices and design patterns
- Collaborate with cross-functional teams to deliver high-quality software solutions
- Write clean, maintainable code with comprehensive documentation
- Participate in code reviews and contribute to technical architecture decisions
- Troubleshoot and optimize application performance in containerized environments
- Implement CI/CD pipelines and follow DevOps best practices
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field
- 4+ years of experience in software development
- Strong proficiency in Java with deep understanding of web technology stack
- Hands-on experience developing applications with Spring Boot framework
- Solid understanding of Python programming language with practical Flask framework experience
- Working knowledge of NATS server for messaging and streaming data
- Experience deploying and managing applications in Kubernetes
- Understanding of microservices architecture and RESTful API design
- Familiarity with containerization technologies (Docker)
- Experience with version control systems (Git)
Skills & Competencies
- Skills: Java (Spring Boot, Spring Cloud, Spring Security)
- Python (Flask, SQLAlchemy, REST APIs)
- NATS messaging patterns (pub/sub, request/reply, queue groups)
- Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
- Web technologies (HTTP, REST, WebSocket, gRPC)
- Container orchestration and management
- Soft Skills: Problem-solving and analytical thinking
- Strong communication and collaboration
- Self-motivated with ability to work independently
- Attention to detail and code quality
- Continuous learning mindset
- Team player with mentoring capabilities
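The NATS messaging patterns listed above (pub/sub, queue groups) can be illustrated without a server. This is an in-memory toy broker, not the real `nats` client: plain subscribers each receive every message, while queue-group members share a subject with round-robin delivery:

```python
import itertools
from collections import defaultdict

class ToyBroker:
    """In-memory sketch of NATS-style fan-out and queue-group semantics."""

    def __init__(self):
        self._subs = defaultdict(list)                          # subject -> callbacks
        self._members = defaultdict(lambda: defaultdict(list))  # subject -> group -> callbacks
        self._queues = defaultdict(dict)                        # subject -> group -> round-robin iter

    def subscribe(self, subject, callback, queue=None):
        if queue is None:
            self._subs[subject].append(callback)
        else:
            members = self._members[subject][queue]
            members.append(callback)
            # rebuild the round-robin cycle so it covers all current members
            self._queues[subject][queue] = itertools.cycle(members)

    def publish(self, subject, msg):
        for cb in self._subs[subject]:  # fan out: every plain subscriber gets a copy
            cb(msg)
        for rr in self._queues[subject].values():  # exactly one member per queue group
            next(rr)(msg)
```

With the real client the shape is similar (`subscribe(subject, queue=...)`, `publish(subject, payload)`), but delivery crosses a running NATS server and the callbacks are async.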
We're Hiring: Golang Developer
Location: Bangalore
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, we'd love to hear from you.
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Mumbai
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, we'd love to hear from you.
Job Description: DevOps Engineer
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.
Role Summary:
We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This role involves designing and implementing scalable infrastructure, improving system reliability, and driving automation across our cloud ecosystem.
Key Responsibilities:
• Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS
• Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring
• Deploy and manage Kubernetes clusters and containerized microservices
• Define and implement infrastructure as code using Terraform/CloudFormation
• Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana
• Support MongoDB and MySQL database administration and optimization
• Ensure high availability, performance tuning, and cost optimization
• Guide and mentor junior engineers, and enforce DevOps best practices
• Drive system security, compliance, and audit readiness in cloud environments
• Collaborate with engineering, product, and QA teams to streamline release processes
Required Qualifications:
• 5+ years of DevOps/Infrastructure experience in production-grade environments
• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.
• Proven experience with Kubernetes and Docker in production
• Proficient with Terraform, CloudFormation, or similar IaC tools
• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar
• Advanced scripting in Python, Bash, or Go
• Solid understanding of networking, firewalls, DNS, and security protocols
• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)
• Experience with MongoDB and MySQL in cloud environments
Preferred Qualifications:
• AWS Certified DevOps Engineer or Solutions Architect
• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD
• Familiarity with Zero Downtime Deployments, Canary Releases, and Blue/Green Deployments
• Background in high-availability systems and incident response
• Prior experience in a SaaS, ML, or hospitality-tech environment
Tools and Technologies You’ll Use:
• Cloud: AWS
• Containers: Docker, Kubernetes, Helm
• CI/CD: Jenkins, GitHub Actions
• IaC: Terraform, CloudFormation
• Monitoring: Prometheus, Grafana, CloudWatch
• Databases: MongoDB, MySQL
• Scripting: Bash, Python
• Collaboration: Git, Jira, Confluence, Slack
Why Join Us?
• Competitive salary and performance bonuses.
• Remote-friendly work culture.
• Opportunity to work on cutting-edge tech in AI and ML.
• Collaborative, high-growth startup environment.
• For more information, visit http://www.lodgiq.com

Global digital transformation solutions provider.
Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.
Key Responsibilities:
- Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
- Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
- Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
- Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
- Ensure compliance with security best practices and organizational policies across GCP environments.
- Document processes, configurations, and architectural decisions to maintain operational transparency.
- Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
Mandatory Skills:
- Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
- Jenkins – Expertise in Declarative Pipeline creation and optimization.
- CI/CD – Strong understanding of automated build, test, and deployment workflows.
- Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
- Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
Preferred Skills:
- Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
- Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
- Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
- GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
Skills
GCP, Jenkins, CI/CD, AWS
Notice period - 0 to 15 days only
Location – Pune, Trivandrum, Kochi, Chennai
Job Title: Full Stack Developer
Location: Bangalore, India
About Us:
Meraki Labs stands at the forefront of India's deep-tech innovation landscape, operating as a dynamic venture studio established by the visionary entrepreneur Mukesh Bansal. Our core mission revolves around the creation and rapid scaling of AI-first and truly "moonshot" startups, nurturing them from their nascent stages into industry leaders. We achieve this through an intensive, hands-on partnership model, working side-by-side with exceptional founders who possess both groundbreaking ideas and the drive to execute them.
Currently, Meraki Labs is channeling its significant expertise and resources into a particularly ambitious endeavor: a groundbreaking EdTech platform. This initiative is poised to revolutionize the field of education by democratizing access to world-class STEM learning for students globally. Our immediate focus is on fundamentally redefining how physics is taught and experienced, moving beyond traditional methodologies to deliver an immersive, intuitive, and highly effective learning journey that transcends geographical and socioeconomic barriers. Through this platform, we aim to inspire a new generation of scientists, engineers, and innovators, ensuring that cutting-edge educational resources are within reach of every aspiring learner, everywhere.
Role Overview:
As a Full Stack Developer, you will be at the foundation of building this intelligent learning ecosystem by connecting the front-end experience, backend architecture, and AI-driven components that bring the platform to life. You’ll own key systems that power the AI Tutor, Simulation Lab, and learning content delivery, ensuring everything runs smoothly, securely, and at scale. This role is ideal for engineers who love building end-to-end products that blend technology, user experience, and real-time intelligence.
Your Core Impact
- You will build the spine of the platform, ensuring seamless communication between AI models, user interfaces, and data systems.
- You’ll translate learning and AI requirements into tangible, performant product features.
- Your work will directly shape how thousands of students experience physics through our AI Tutor and simulation environment.
Key Responsibilities:
Platform Architecture & Backend Development
- Design and implement robust, scalable APIs that power user authentication, course delivery, and AI Tutor integration.
- Build the data pipelines connecting LLM responses, simulation outputs, and learner analytics.
- Create and maintain backend systems that ensure real-time interaction between the AI layer and the front-end interface.
- Ensure security, uptime, and performance across all services.
Front-End Development & User Experience
- Develop responsive, intuitive UIs (React, Next.js or similar) for learning dashboards, course modules, and simulation interfaces.
- Collaborate with product designers to implement layouts for AI chat, video lessons, and real-time lab interactions.
- Ensure smooth cross-device functionality for students accessing the platform on mobile or desktop.
AI Integration & Support
- Work closely with the AI/ML team to integrate the AI Tutor and Simulation Lab outputs within the platform experience.
- Build APIs that pass context, queries, and results between learners, models, and the backend in real time.
- Optimize for low latency and high reliability, ensuring students experience immediate and natural interactions with the AI Tutor.
Data, Analytics & Reporting
- Build dashboards and data views for educators and product teams to derive insights from learner behavior.
- Implement secure data storage and export pipelines for progress analytics.
Collaboration & Engineering Culture
- Work closely with AI Engineers, Prompt Engineers, and Product Leads to align backend logic with learning outcomes.
- Participate in code reviews, architectural discussions, and system design decisions.
- Help define engineering best practices that balance innovation, maintainability, and performance.
Required Qualifications & Skills
- 3–5 years of professional experience as a Full Stack Developer or Software Engineer.
- Strong proficiency in Python or Node.js for backend services.
- Hands-on experience with React / Next.js or equivalent modern front-end frameworks.
- Familiarity with databases (SQL/NoSQL), REST APIs, and microservices.
- Experience with real-time data systems (WebSockets or event-driven architectures).
- Exposure to AI/ML integrations or data-intensive backends.
- Knowledge of AWS/GCP/Azure and containerized deployment (Docker, Kubernetes).
- Strong problem-solving mindset and attention to detail.
Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
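As a rough illustration of the canary idea above: traffic is split by a deterministic hash so a given user consistently lands on the same version while the canary percentage ramps up. This is a minimal sketch, not tied to any particular load balancer, and the function name is made up for illustration:

```python
import hashlib

def route(user_id: str, canary_pct: int) -> str:
    """Deterministically route a user to 'canary' or 'stable'.

    Hashing the user id (rather than picking randomly per request)
    keeps each user pinned to one version during the rollout.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_pct else "stable"

# Ramping the canary from 0% to 100% only ever moves users from
# stable to canary, never back and forth, because each user's
# bucket is fixed and only the threshold changes.
for pct in (0, 10, 50, 100):
    canary_users = sum(route(f"user-{i}", pct) == "canary" for i in range(1000))
```

In practice the weight shift is done at the load balancer or service mesh, but the same stable-hashing principle applies.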
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
4. Security, Compliance & Risk Highlighting
• Conduct frequent risk assessments and identify vulnerabilities in:
o Cloud architecture
o Access policies (IAM)
o Secrets & key management
o Data flows & network exposure
• Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
o Microservices
o Caching layers
o CDN distribution (CloudFront)
o Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing.
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
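A small Terraform fragment in the spirit of the IaC responsibility above; the bucket name and tags are placeholders, and a real module would also define a state backend, access policies, and per-environment variables:

```hcl
resource "aws_s3_bucket" "app_logs" {
  bucket = "example-app-logs" # placeholder name

  tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}

# Versioning and encryption at rest, declared as separate
# resources per current AWS provider conventions.
resource "aws_s3_bucket_versioning" "app_logs" {
  bucket = aws_s3_bucket.app_logs.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "app_logs" {
  bucket = aws_s3_bucket.app_logs.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
```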
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why Join company?
• Build and operate infrastructure powering millions of monthly users.
• Opportunity to shape DevOps culture and cloud architecture from the ground up.
• High-impact role in a fast-scaling Indian OTT product.
Job Summary:
We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
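For the Kubernetes orchestration point above, a Spring Boot service is typically described by a Deployment manifest along these lines. The service name, image, and probe paths are assumptions (Actuator-style health endpoints), not an actual service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service   # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0.0
          ports:
            - containerPort: 8080
          # Spring Boot Actuator-style probes (assumed paths)
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8080
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              memory: 1Gi
```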
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment
Dear Candidate
Candidate must have:
- Minimum 3-5 years of experience working as a NOC Engineer / Senior NOC Engineer in the telecom/product industry (preferably telecom monitoring).
- BE in CS, EE, or Telecommunications from a recognized university.
- Knowledge of NOC Process
- Technology exposure to telecom (5G, 4G, IMS) with a solid understanding of telecom performance KPIs and/or the Radio Access Network. Knowledge of call flows will be an advantage.
- Experience with Linux OS and SQL – mandatory.
- Residence in Delhi – mandatory.
- Ready to work in a 24×7 environment.
- Ability to monitor alarms based on our environment.
- Capability to identify and resolve issues occurring in the RADCOM environment.
- Any relevant technical certification will be an added advantage.
Responsibilities:
- Based in RADCOM India offices, Delhi.
- Responsible for all NOC monitoring and technical support (T1/T2) aspects required by the process for RADCOM’s solutions.
- Ready to participate in customer-planned activities, execution, and monitoring.
Job Overview:
As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use-case development, and product creation, along with strong expertise in cloud-based architectures.
Key Responsibilities:
AI Tutor & Simulation Intelligence
- Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
- Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes.
- Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
- Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.
Platform & System Architecture
- Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
- Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
- Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.
Reliability, Security & Analytics
- Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
- Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
- Set up real-time learning analytics to measure comprehension and identify concept gaps.
Leadership & Collaboration
- Mentor and elevate engineers across backend, ML, and front-end teams.
- Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
- Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.
Qualifications & Skills:
- 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
- Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
- Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
- Experience designing microservices and API ecosystems for high-concurrency platforms.
- Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
- Demonstrated ability to work with educational data, content pipelines, and real-time systems.
Bonus Skills (Nice to Have):
- Experience with multi-modal AI models (text, image, audio, video).
- Knowledge of AI safety, ethical AI, and explainability techniques.
- Prior work in AI-powered automation tools or AI-driven SaaS products.
Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems
Criteria:
- Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
- Must be strong in one core backend language: Node.js, Go, Java, or Python.
- Deep understanding of distributed systems, caching, high availability, and microservices architecture.
- Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
- Strong command over system design, data structures, performance tuning, and scalable architecture
- Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
Description
What This Role Is All About
We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.
What You’ll Own
● Architect backend systems that handle India-scale traffic without breaking a sweat.
● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.
● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.
● Partner with Product, Data, and Infra to ship features that are reliable and delightful.
● Set high engineering standards—clean architecture, performance, automation, and testing.
● Lead discussions on system design, performance tuning, and infra choices.
● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.
● Identify gaps proactively and push for improvements instead of waiting for fires.
What Makes You a Great Fit
● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.
● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.
● Deep understanding of distributed systems, caching, high-availability, and microservices.
● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.
● You think data structures and system design are not interviews — they’re daily tools.
● You write code that future-you won’t hate.
● Strong communication and a “let’s figure this out” attitude.
Bonus Points If You Have
● Built or scaled consumer apps with millions of DAUs.
● Experimented with event-driven architecture, streaming systems, or real-time pipelines.
● Love startups and don’t mind wearing multiple hats.
● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.
Why company Might Be Your Best Move
● Work on products used by real people every single day.
● Ownership from day one—your decisions will shape our core architecture.
● No unnecessary hierarchy; direct access to founders and senior leadership.
● A team that cares about quality, speed, and impact in equal measure.
● Build for Bharat — complex constraints, huge scale, real impact.
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Mumbai
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, share your resume.
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Bangalore
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, share your resume.
ROLE & RESPONSIBILITIES:
We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
KEY RESPONSIBILITIES:
1. Cloud Security (AWS)-
- Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
- Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
- Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
- Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
- Ensure encryption of data at rest/in transit across all cloud services.
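“IAM least privilege” above usually translates into narrowly scoped policies like the sketch below, which grants read-only access to a single bucket and prefix; the ARNs and statement ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyModelArtifacts",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-ml-artifacts",
        "arn:aws:s3:::example-ml-artifacts/models/*"
      ]
    }
  ]
}
```

The key habit is scoping `Action` and `Resource` to exactly what the workload needs, rather than granting `s3:*` on `*`.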
2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)-
Infrastructure as Code & Automation Security:
- Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
- Enforce misconfiguration scanning and automated remediation.
CI/CD Security:
- Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
- Implement secure build, artifact signing, and deployment workflows.
Containers & Kubernetes:
- Harden Docker images, private registries, runtime policies.
- Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
- Apply CIS Benchmarks for Kubernetes and Linux.
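The “network policies” item above refers to Kubernetes NetworkPolicy objects. A common starting point is a default-deny policy per namespace, then explicit allows; the namespace and labels below are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ml-workloads   # placeholder namespace
spec:
  podSelector: {}           # applies to every pod in the namespace
  policyTypes:
    - Ingress
---
# Then allow only the traffic that is actually needed,
# e.g. ingress to the API pods from the gateway pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-gateway-to-api
  namespace: ml-workloads
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: gateway
  policyTypes:
    - Ingress
```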
Monitoring & Reliability:
- Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
- Ensure audit logging across cloud/platform layers.
3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-
Pipeline & Workflow Security:
- Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
- Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
ML Platform Security:
- Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
- Control model access, artifact protection, model registry security, and ML metadata integrity.
Data Security:
- Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
- Enforce data versioning security, lineage tracking, PII protection, and access governance.
ML Observability:
- Implement drift detection (data drift/model drift), feature monitoring, audit logging.
- Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
4. Network & Endpoint Security-
- Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
- Conduct vulnerability assessments, penetration test coordination, and network segmentation.
- Secure remote workforce connectivity and internal office networks.
5. Threat Detection, Incident Response & Compliance-
- Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
- Build security alerts, automated threat detection, and incident workflows.
- Lead incident containment, forensics, RCA, and remediation.
- Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
- Maintain security policies, procedures, RRPs (Runbooks), and audits.
IDEAL CANDIDATE:
- 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
- Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
- Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
- Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
- Strong Linux security (CIS hardening, auditing, intrusion detection).
- Proficiency in Python, Bash, and automation/scripting.
- Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
- Understanding of microservices, API security, serverless security.
- Strong understanding of vulnerability management, penetration testing practices, and remediation plans.
EDUCATION:
- Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
- Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Job Summary
We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack—building scalable, high-performance frontends using TypeScript & Next.js and developing robust backend services using Python (FastAPI/Django).
This role is crucial in shaping product experiences and driving innovation at scale.
Mandatory Candidate Background
- Experience working in product-based companies only
- Strong academic background
- Stable work history
- Excellent coding skills and hands-on development experience
- Strong foundation in Data Structures & Algorithms (DSA)
- Strong problem-solving mindset
- Understanding of clean architecture and code quality best practices
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications
- Build responsive, performant, user-friendly UIs using TypeScript & Next.js
- Develop APIs and backend services using Python (FastAPI/Django)
- Collaborate with product, design, and business teams to translate requirements into technical solutions
- Ensure code quality, security, and performance across the stack
- Own features end-to-end: architecture, development, deployment, and monitoring
- Contribute to system design, best practices, and the overall technical roadmap
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience
- Strong expertise in TypeScript/Next.js or Python (FastAPI, Django), with working familiarity in both areas
- Experience building RESTful APIs and microservices
- Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
- Strong debugging, optimization, and problem-solving abilities
- Comfortable working in fast-paced startup environments
Good-to-Have:
- Experience with containerization (Docker/Kubernetes)
- Exposure to message queues or event-driven architectures
- Familiarity with modern DevOps and observability tooling
Job Description – Full Stack Developer (React + Node.js)
Experience: 5–8 Years
Location: Pune
Work Mode: WFO
Employment Type: Full-time
About the Role
We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications using React and Node.js.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
- Work with relational databases such as PostgreSQL or MySQL.
- Deploy and manage applications in cloud environments (preferably GCP or AWS).
- Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
- Utilize containerization tools like Docker for efficient development and deployment workflows.
- Integrate third-party services and APIs, including AI APIs and tools.
- Contribute to improving development processes, documentation, and best practices.
Required Skills
- Strong experience with React.js (frontend).
- Solid hands-on experience with Node.js (backend).
- Good understanding of relational databases: PostgreSQL / MySQL.
- Experience working in production environments and debugging live systems.
- Strong understanding of OOP or Functional Programming, and clean coding standards.
- Knowledge of Docker or other containerization tools.
- Experience with cloud platforms (GCP or AWS).
- Excellent written and verbal communication skills.
Good to Have
- Experience with Golang or Elixir.
- Familiarity with Kubernetes, RabbitMQ, Redis, etc.
- Contributions to open-source projects.
- Previous experience working with AI APIs or machine learning tools.