Kubernetes jobs

50+ Kubernetes Jobs in India

Apply to 50+ Kubernetes Jobs on CutShort.io. Find your next job, effortlessly. Browse Kubernetes Jobs and apply today!

Remote only
1 - 4 yrs
₹5L - ₹10L / yr
DevOps
Azure
Cloud
Docker
Kubernetes

About the role

 

We’re looking for a hands-on Junior System/Cloud Engineer to help keep our cloud infrastructure and internal IT humming. You’ll work closely with a senior IT/Cloud Engineer across Azure, AWS, and Google Cloud—provisioning VMs/EC2/Compute Engine instances, basic database setup, and day-to-day maintenance. You’ll also install and maintain team tools (e.g., Elasticsearch, Tableau, MSSQL) and pitch in with end-user support for laptops and any other system issues when needed.

 

What you’ll do

 

Cloud provisioning & operations (multi-cloud)

·         Create, configure, and maintain virtual machines and related resources in Azure, AWS (EC2), and Google Compute Engine (networks, security groups/firewalls, storage, OS patching and routine maintenance/backups).

·         Assist with database setup (managed services or VM-hosted), backups, and access controls under guidance from senior engineers.

·         Implement tagging, least-privilege IAM, and routine patching for compliance and cost hygiene.

Tooling installation & maintenance

·         Install, configure, upgrade, and monitor required tools such as Elasticsearch and Tableau Server, and manage their service lifecycle (systemd/Windows services), security basics, and health checks (a minimal health-check sketch follows this list).

·         Document installation steps, configurations, and runbooks.
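For a concrete flavour of the health checks mentioned above, here is a minimal sketch that polls an Elasticsearch cluster-health endpoint; the URL and the green/yellow threshold are placeholder assumptions.

```python
"""Minimal service health-check sketch; assumes a reachable Elasticsearch node."""
import sys

import requests  # third-party: pip install requests

ES_URL = "http://localhost:9200"  # placeholder host/port; adjust per environment


def check_cluster_health(base_url: str = ES_URL, timeout: float = 5.0) -> int:
    """Return 0 if the cluster reports green/yellow, 1 otherwise (cron/systemd friendly)."""
    try:
        resp = requests.get(f"{base_url}/_cluster/health", timeout=timeout)
        resp.raise_for_status()
        status = resp.json().get("status", "unknown")
    except requests.RequestException as exc:
        print(f"health check failed: {exc}")
        return 1
    print(f"cluster status: {status}")
    return 0 if status in ("green", "yellow") else 1


if __name__ == "__main__":
    sys.exit(check_cluster_health())
```

A script like this can be wired into cron or a systemd timer so failed checks surface as alerts rather than surprises.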

Monitoring & incident support

·         Set up basic monitoring/alerts using cloud-native tools and logs.

End-user & endpoint support (as needed)

·         Provide first-line support for laptops/desktops (Windows/macOS), peripherals, conferencing, VPN, and common apps; escalate when appropriate.

·         Assist with device setup, imaging, patching, and inventory; keep tickets and resolutions well documented.

What you’ll bring

·         Experience: 1–3 years in DevOps/system administration/IT operations with exposure to at least one major cloud (primarily Azure; AWS or GCP skills are a plus), and eagerness to work across all three.

Core skills:

·         Linux and Windows server admin basics (users, services, networking, storage).

·         VM provisioning and troubleshooting in Azure, AWS EC2, or GCE; understanding of security groups/firewalls, SSH/RDP, and snapshots/images.

·         Installation/maintenance of team tools (e.g., Elasticsearch, Tableau Server).

·         Scripting (Bash and/or PowerShell); Git fundamentals; comfort with ticketing systems.

·         Clear documentation and communication habits.

Nice to have:

·         Terraform or ARM/CloudFormation basics; container fundamentals (Docker).

·         Monitoring/logging familiarity (Elastic/Azure Monitor).

·         Basic networking (DNS, HTTP, TLS, VPN, Nginx)

·         Azure certification (AZ-104, AZ-204)

 

To be considered for the next round, please fill out the Google form below with your updated resume.


https://forms.gle/b8JeStRaWrWv3tzLA

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Mumbai, Navi Mumbai
5 - 9 yrs
₹10L - ₹17L / yr
Amazon Web Services (AWS)
Kubernetes
IaC
Terraform
Python
+4 more

5+ yrs of experience in Cloud/DevOps roles.


 Strong hands-on experience with AWS architecture, operations & automation (70% focus).


Solid Kubernetes/EKS administration experience (30% focus).


IaC experience (Terraform preferred). 


Scripting (Python / Bash); see the provisioning sketch after this list.


 CI/CD tools (Jenkins, GitLab, GitHub Actions). 


Experience working with BFSI or Managed Service projects is mandatory
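To give the AWS automation and Python scripting expectations a concrete shape, here is a minimal boto3 provisioning sketch; the AMI ID, region, and tag values are placeholders, and configured AWS credentials are assumed.

```python
"""Sketch: launch a tagged EC2 instance with boto3 (all identifiers are placeholders)."""
import boto3  # third-party: pip install boto3; assumes AWS credentials are configured

ec2 = boto3.client("ec2", region_name="ap-south-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "Project", "Value": "demo"},
            {"Key": "Environment", "Value": "staging"},
            {"Key": "Owner", "Value": "devops"},
        ],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"launched {instance_id}")

# Block until the instance is running before handing it over for configuration.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```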

Aptroid Consulting India PVT LTD
Posted by Eman Khan
Hyderabad
7 - 12 yrs
Upto ₹47L / yr (Varies)
Python
Angular (2+)
React.js
Kubernetes
SQL

About the company:

Aptroid Consulting (India) Pvt Ltd is a web development company focused on helping marketers transform the customer experience, increasing engagement and driving revenue by using customer data to inform every interaction in real time, tailored to each individual's behavior.


About the Role:

We are hiring for Senior Full Stack Developers to strengthen the LiveIntent engineering team. The role requires strong backend depth combined with solid frontend expertise to build and scale high-performance, data-intensive systems.


Candidates are expected to demonstrate excellent analytical and problem-solving skills, along with strong system design capabilities for large-scale, distributed applications. Prior experience in AdTech or similar high-throughput domains is highly desirable.


Required Skills & Experience:

  • 7–12 years of hands-on experience in full-stack development
  • Strong proficiency in Python with Django (ORM, REST APIs, performance tuning)
  • Solid experience with Angular (modern versions, component architecture)
  • Hands-on experience with Docker and Kubernetes in production environments
  • Strong understanding of MySQL, including query optimization and schema design
  • Experience using Datadog for monitoring, metrics, and observability
  • Excellent analytical, problem-solving, and debugging skills
  • Proven experience in system design for scalable, distributed systems


Good to Haves:

  • Experience with Node.js
  • Strong background in database schema design and data modeling
  • Prior experience working in AdTech / MarTech / digital advertising platforms
  • Exposure to event-driven systems, real-time data pipelines, or high-volume traffic systems
  • Experience with CI/CD pipelines and cloud platforms (AWS)


Key Responsibilities:

  • Design, develop, and maintain scalable full-stack applications using Python (Django) and Angular
  • Build and optimize backend services handling large data volumes and high request throughput
  • Design and implement RESTful APIs with a focus on performance, security, and reliability
  • Lead and contribute to system design discussions covering scalability, fault tolerance, and observability
  • Containerize applications using Docker and deploy/manage workloads on Kubernetes
  • Design, optimize, and maintain MySQL database schemas, queries, and indexes
  • Implement monitoring, logging, and alerting using Datadog (see the custom-metric sketch after this list)
  • Perform deep debugging and root-cause analysis of complex production issues
  • Collaborate with product, platform, and data teams to deliver business-critical features
  • Mentor junior engineers and promote engineering best practices 
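As a small illustration of the Datadog responsibility above, here is a sketch that emits custom metrics through DogStatsD; the metric names and tags are hypothetical, and a local Datadog agent listening on the default port is assumed.

```python
"""Sketch: emit custom request metrics to a local Datadog agent via DogStatsD."""
import time

from datadog import statsd  # third-party: pip install datadog; agent assumed on localhost:8125


def handle_request() -> None:
    start = time.monotonic()
    # ... real application work would happen here ...
    statsd.increment("app.requests.count", tags=["service:api", "env:staging"])
    statsd.histogram(
        "app.requests.latency_ms",
        (time.monotonic() - start) * 1000,
        tags=["service:api"],
    )


if __name__ == "__main__":
    handle_request()
```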
Procedure
Posted by Adithya K
Remote only
5 - 10 yrs
₹40L - ₹60L / yr
Software Development
Amazon Web Services (AWS)
Python
TypeScript
PostgreSQL
+3 more

Procedure is hiring for Drover.


This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.


About Drover

Ranching is getting harder. Increased labor costs and a volatile climate are placing mounting pressure on ranchers to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce carbon footprint with rotational grazing. Not only is this a $46B opportunity, you'll be working on a climate solution with the potential for real, meaningful impact.


We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.


Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.


We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.


About The Role

As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.


Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.


What You'll Do

  • Develop Drover IoT cloud architecture from the ground up (it’s a green field project)
  • Design and implement services to support wearable devices, mobile app, and backend API
  • Implement data processing and storage pipelines
  • Create and maintain Infrastructure-as-Code
  • Support the engineering team across all aspects of early-stage development -- after all, this is a startup


Requirements

  • 5+ years of experience developing cloud architecture on AWS
  • In-depth understanding of various AWS services, especially those related to IoT
  • Expertise in cloud-hosted, event-driven, serverless architectures
  • Expertise in programming languages suitable for AWS micro-services (e.g., TypeScript, Python); see the handler sketch after this list
  • Experience with networking and socket programming
  • Experience with Kubernetes or similar orchestration platforms
  • Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
  • Familiarity with relational databases (PostgreSQL)
  • Familiarity with Continuous Integration and Continuous Deployment (CI/CD)
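For illustration of the event-driven, serverless style this role calls for, here is a minimal Lambda-style handler for device telemetry; the event schema and field names are hypothetical.

```python
"""Sketch: AWS Lambda-style handler for a (hypothetical) device telemetry event."""
import json


def handler(event: dict, context: object) -> dict:
    """Summarize a device message; a real system would persist it to a data store."""
    payload = event.get("payload") or {}
    battery = payload.get("battery_pct")
    gps = payload.get("gps", {})
    summary = {
        "device_id": payload.get("device_id", "unknown"),
        "battery_low": battery is not None and battery < 20,
        "lat": gps.get("lat"),
        "lon": gps.get("lon"),
    }
    print(json.dumps(summary))  # stdout ends up in CloudWatch Logs
    return {"statusCode": 200, "body": json.dumps(summary)}


if __name__ == "__main__":  # local smoke test
    print(handler({"payload": {"device_id": "collar-42", "battery_pct": 17,
                               "gps": {"lat": -27.5, "lon": 152.9}}}, None))
```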


Nice To Have

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field


Tarento Group
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
8yrs+
Upto ₹30L / yr (Varies)
Java
Spring Boot
Microservices
Windows Azure
RESTful APIs
+7 more

About Tarento:

 

Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.

 

We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.


Job Summary:

We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.


Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.


Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.


Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)


Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.


Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
Service Co

Agency job
via Vikash Technologies by Rishika Teja
Mumbai, Navi Mumbai
7 - 12 yrs
₹15L - ₹25L / yr
Amazon Web Services (AWS)
Windows Azure
Kubernetes
Docker
Terraform
+5 more

Hiring for SRE Lead


Exp: 7 - 12 yrs

Work Location: Mumbai (Kurla West)

WFO


Skills :

Proficient in cloud platforms (AWS, Azure, or GCP), containerization (Kubernetes/Docker), and Infrastructure as Code (Terraform, Ansible, or Puppet). 


Coding/Scripting: Strong programming or scripting skills in at least one language (e.g., Python, Go, Java) for automation and tooling development.


 System Knowledge: Deep understanding of Linux/Unix fundamentals, networking concepts, and distributed systems.

TrumetricAI
Posted by Yashika Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹20L / yr
Amazon Web Services (AWS)
CI/CD
Git
Docker
Kubernetes

Key Responsibilities:

  • Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
  • Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS CodeBuild/CodeDeploy, Azure DevOps, etc.
  • Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
  • Manage and monitor production deployments to ensure high availability and performance
  • Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction (see the invalidation sketch after this list)
  • Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
  • Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
  • Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
  • Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
  • Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
  • Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
  • Collaborate with development and QA teams to align infrastructure with application needs
  • Troubleshoot infrastructure and deployment issues efficiently and proactively
  • Ensure cloud cost optimization and usage tracking
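As a concrete example of the CloudFront work referenced above, here is a minimal cache-invalidation sketch with boto3; the distribution ID and paths are placeholders.

```python
"""Sketch: invalidate cached CDN paths after a deploy (AWS CloudFront via boto3)."""
import time

import boto3  # assumes AWS credentials are configured

cloudfront = boto3.client("cloudfront")


def invalidate_paths(distribution_id: str, paths: list[str]) -> str:
    resp = cloudfront.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": paths},
            "CallerReference": str(time.time()),  # must be unique per request
        },
    )
    return resp["Invalidation"]["Id"]


if __name__ == "__main__":
    print(invalidate_paths("E1234567890ABC", ["/index.html", "/static/*"]))
```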


Required Skills & Experience:

  • 3–4 years of hands-on experience in a DevOps role
  • Strong expertise with both AWS and Azure cloud platforms
  • Proficient in Git, branching strategies, and pull request workflows
  • Deep understanding of CI/CD concepts and experience with pipeline tools
  • Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
  • Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
  • Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
  • Experience with Infrastructure as Code tools (Terraform preferred)
  • Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
  • Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
  • Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
  • Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
  • Working knowledge of monitoring and logging tools
  • Strong troubleshooting and problem-solving skills


Good to Have (Bonus Points):

  • Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
  • Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
  • Experience with compliance/security best practices (SOC2, ISO, etc.)
  • Familiarity with Service Mesh (Istio, Linkerd) and API gateways
  • Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)


CoffeeBeans
Posted by Ariba Khan
Mumbai, Hyderabad
8 - 11 yrs
Upto ₹35L / yr (Varies)
Kubernetes
Jenkins
Docker
Ansible
SonarQube

About the role:

We are seeking an experienced DevOps Engineer with deep expertise in Jenkins, Docker, Ansible, and Kubernetes to architect and maintain secure, scalable infrastructure and CI/CD pipelines. This role emphasizes security-first DevOps practices, on-premises Kubernetes operations, and integration with data engineering workflows.


🛠 Required Skills & Experience

Technical Expertise

  • Jenkins (Expert): Advanced pipeline development, DSL scripting, security integration, troubleshooting
  • Docker (Expert): Secure multi-stage builds, vulnerability management, optimisation for Java/Scala/Python
  • Ansible (Expert): Complex playbook development, configuration management, automation at scale
  • Kubernetes (Expert - Primary Focus): On-premises cluster operations, security hardening, networking, storage management
  • SonarQube/Code Quality (Strong): Integration, quality gate enforcement, threshold management
  • DevSecOps (Strong): Security scanning, compliance automation, vulnerability remediation, workload governance
  • Spark ETL/ETA (Moderate): Understanding of distributed data processing, job configuration, runtime behavior


Core Competencies

  • Deep understanding of DevSecOps principles and security-first automation
  • Strong troubleshooting and problem-solving abilities across complex distributed systems
  • Experience with infrastructure-as-code and GitOps methodologies
  • Knowledge of compliance frameworks and security standards
  • Ability to mentor teams and drive best practice adoption


🎓Qualifications

  • 6–10 years of hands-on DevOps experience
  • Proven track record with Jenkins, Docker, Kubernetes, and Ansible in production environments
  • Experience managing on-premises Kubernetes clusters (bare-metal preferred)
  • Strong background in security hardening and compliance automation
  • Familiarity with data engineering platforms and big data technologies
  • Excellent communication and collaboration skills


🚀 Key Responsibilities

1. CI/CD Pipeline Architecture & Security

  • Design, implement, and maintain enterprise-grade CI/CD pipelines in Jenkins with embedded security controls:
  • Build greenfield pipelines and enhance/stabilize existing pipeline infrastructure
  • Diagnose and resolve build, test, and deployment failures across multi-service environments
  • Integrate security gates, compliance checks, and automated quality controls at every pipeline stage
  • Manage and optimize SonarQube and static code analysis tooling:
  • Enforce code quality and security scanning standards across all services
  • Maintain organizational coding standards, vulnerability thresholds, and remediation workflows
  • Automate quality gates as integral components of CI/CD processes
  • Engineer optimized Docker images for Java, Scala, and Python applications:
  • Implement multi-stage builds, layer optimization, and minimal base images
  • Conduct image vulnerability scanning and enforce compliance policies
  • Apply containerization best practices for security and performance
  • Develop comprehensive Ansible automation:
  • Create modular, reusable, and secure playbooks for configuration management
  • Automate environment provisioning and application lifecycle operations
  • Maintain infrastructure-as-code standards and version control

2. Kubernetes Platform Operations & Security

  • Lead complete lifecycle management of on-premises/bare-metal Kubernetes clusters:
  • Cluster provisioning, version upgrades, node maintenance, and capacity planning
  • Configure and manage networking (CNI), persistent storage solutions, and ingress controllers
  • Troubleshoot workload performance, resource constraints, and reliability issues
  • Implement and enforce Kubernetes security best practices:
  • Design and manage RBAC policies, service account isolation, and least-privilege access models (see the sketch after this section)
  • Apply Pod Security Standards, network policies, secrets encryption, and certificate lifecycle management
  • Conduct cluster hardening, security audits, monitoring, and policy governance
  • Provide technical leadership to development teams:
  • Guide secure deployment patterns and containerized application best practices
  • Establish workload governance frameworks for distributed systems
  • Drive adoption of security-first mindsets across engineering teams
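To illustrate the least-privilege RBAC work referenced above, here is a minimal sketch that creates a read-only Role with the official Kubernetes Python client; the namespace and role name are placeholders, and a valid kubeconfig is assumed.

```python
"""Sketch: create a read-only, namespace-scoped Role with the Kubernetes Python client."""
from kubernetes import client, config  # third-party: pip install kubernetes

config.load_kube_config()  # assumes a local kubeconfig; use load_incluster_config() inside a pod

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="dev"),
    rules=[client.V1PolicyRule(
        api_groups=[""],               # "" is the core API group
        resources=["pods", "pods/log"],
        verbs=["get", "list", "watch"],
    )],
)

rbac = client.RbacAuthorizationV1Api()
rbac.create_namespaced_role(namespace="dev", body=role)  # namespace "dev" must already exist
print("created Role dev/pod-reader")
```

Binding the Role to a service account via a RoleBinding (not shown) completes the least-privilege setup.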

3. Data Engineering Support

  • Collaborate with data engineering teams on Spark-based workloads:
  • Support deployment and operational tuning of Spark ETL/ETA jobs
  • Understand cluster integration, job orchestration, and performance optimization
  • Debug and troubleshoot Spark workflow issues in production environments
AsperAI
Posted by Bisman Gill
Bengaluru (Bangalore)
3 - 6 yrs
Upto ₹33L / yr (Varies)
CI/CD
Kubernetes
Docker
Kubeflow
TensorFlow
+7 more

About the Role

We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps— helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.

The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.


Key Responsibilities

  • AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
  • Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
  • Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
  • Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
  • Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
  • Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models (see the tracking sketch after this list).
  • Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.
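As a small illustration of the versioning and auditing practices referenced above, here is a minimal MLflow tracking sketch; the experiment name, parameters, and metric values are placeholders.

```python
"""Sketch: record a training run's parameters, metrics, and tags with MLflow."""
import mlflow  # third-party: pip install mlflow

# mlflow.set_tracking_uri("http://mlflow.internal:5000")  # hypothetical server; omit to log to ./mlruns
mlflow.set_experiment("demand-forecast")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.05)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("rmse", 3.42)
    mlflow.set_tag("pipeline", "nightly-retrain")
    # In a real pipeline the trained model artifact would also be logged here, e.g.
    # mlflow.sklearn.log_model(model, "model", registered_model_name="demand-forecast")
```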


Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
  • 4+ years of experience in MLOps, DevOps, SRE, or Data Engineering, with at least 2 years in AI/ML-focused operations.
  • Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
  • Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
  • Proficiency in Python and/or other scripting languages for automation.
  • Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
  • Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
  • Knowledge of data governance, model drift detection, and compliance in AI systems.
  • Excellent problem-solving, communication, and collaboration skills.

Nice-to-Have

  • Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
  • Familiarity with data science concepts, and frameworks such as scikit-learn, Keras, PyTorch, Tensorflow, etc.
  • Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
  • Contributions to open-source MLOps/AI Ops tools or platforms.
  • Exposure to Responsible AI practices, model fairness, and explainability frameworks

Why Join Us

  • Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
  • Work alongside leading data scientists and engineers on cutting-edge AI solutions.
  • Competitive compensation, benefits, and career growth opportunities.
Codemonk
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
1yr+
Upto ₹10L / yr (Varies)
DevOps
Amazon Web Services (AWS)
CI/CD
Docker
Kubernetes
+3 more

Role Overview

We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.


Responsibilities:

  • Design, implement, and maintain CI/CD pipelines.
  • Containerize applications using Docker and orchestrate deployments (see the sketch after this list)
  • Manage and optimize cloud infrastructure on AWS and Azure platforms
  • Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability.
  • Troubleshoot and resolve infrastructure and deployment issues
  • Create and maintain documentation for processes and configurations
  • Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
  • Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.
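To make the containerization responsibility above concrete, here is a minimal sketch using the Docker SDK for Python; the image tag and port mapping are placeholders, and a local Docker daemon plus a Dockerfile in the current directory are assumed.

```python
"""Sketch: build and run an application image with the Docker SDK for Python."""
import docker  # third-party: pip install docker; assumes a local Docker daemon

client = docker.from_env()

# Build an image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag="myapp:dev")
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Run the freshly built image detached, mapping container port 8000 to host port 8000.
container = client.containers.run("myapp:dev", detach=True, ports={"8000/tcp": 8000})
print(f"started container {container.short_id}")
```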


Requirements:

  • 2+ years of hands-on experience with AWS cloud services
  • Strong proficiency in CI/CD pipeline configuration
  • Expertise in Docker containerisation and container management
  • Proficiency in shell scripting (Bash/PowerShell)
  • Working knowledge of monitoring and logging tools
  • Knowledge of network security and firewall configuration
  • Strong communication and collaboration skills, with the ability to work effectively within a team environment
  • Understanding of networking concepts and protocols in AWS and/or Azure
Trential Technologies
Posted by Garima Jangid
Gurugram
3 - 5 yrs
₹20L - ₹35L / yr
JavaScript
NodeJS (Node.js)
Amazon Web Services (AWS)
NoSQL Databases
Google Cloud Platform (GCP)
+7 more

What you'll be doing:

As a Software Developer at Trential, you will be the bridge between technical strategy and hands-on execution. You will be working with our dedicated engineering team designing, building, and deploying our core platforms and APIs. You will ensure our solutions are scalable, secure, interoperable, and aligned with open standards and our core vision, and you will build and maintain back-end interfaces using modern frameworks.

  • Design & Implement: Lead the design, implementation and management of Trential’s products.
  • Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.
  • Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.
  • Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.
  • Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.


What we're looking for:

  • 3+ years of experience in backend development.
  • Deep proficiency in JavaScript and Node.js, with experience building and operating distributed, fault-tolerant systems.
  • Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).
  • Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.

Preferred Qualifications (Nice to Have)

  • Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)
  • Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and designing systems that comply with them.
  • Experience integrating AI/ML models into verification or data extraction workflows.
Bengaluru (Bangalore)
5 - 7 yrs
₹5L - ₹8L / yr
HP LoadRunner
JMeter
Java
Kubernetes
Amazon Web Services (AWS)

Immediately available performance test engineer with real-time exposure to LoadRunner and JMeter, and experience testing Java applications in AWS environments.

prep study
Posted by Pooja Sharma
Mumbai, Navi Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
Python
Microservices
Elasticsearch
RabbitMQ
Kubernetes
+2 more


We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and

a deep interest in scalable, low-latency systems.


Key Responsibilities

• Develop, maintain, and optimize backend applications using Python.

• Build and integrate RESTful APIs and microservices.

• Work with relational and NoSQL databases for data storage, retrieval, and optimization.

• Write clean, efficient, and reusable code while following best practices.

• Collaborate with cross-functional teams (frontend, QA, DevOps) to deliver high quality features.

• Participate in code reviews to maintain high coding standards.

• Troubleshoot, debug, and upgrade existing applications.

• Ensure application security, performance, and scalability.


Required Skills & Qualifications:

• 2–4 years of hands-on experience in Python development.

• Strong command over Python frameworks such as Django, Flask, or FastAPI.

• Solid understanding of Object-Oriented Programming (OOP) principles.

• Experience working with databases such as PostgreSQL, MySQL, or MongoDB.

• Proficiency in writing and consuming REST APIs.

• Familiarity with Git and version control workflows.

• Experience with unit testing and frameworks like PyTest or Unittest.

• Knowledge of containerization (Docker) is a plus.

Financial Services Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 5 yrs
₹10L - ₹20L / yr
Python
CI/CD
SQL
Kubernetes
Stakeholder management
+14 more

Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python

 

Criteria:

Looking for candidates with a notice period of 15 to 30 days.

Candidates from the Hyderabad location only.

Only candidates currently at EPAM will be considered.

1. 4+ years of software development experience

2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.

3. Hands-on with NATS for event-driven architecture and streaming.

4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.

5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.

6.  Proficient in Python (Flask) for building scalable applications and APIs.

7. Focus: Java, Python, Kubernetes, Cloud-native development

8. SQL database 

 

Description

Position Overview

We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.


Key Responsibilities

  • Design, develop, and maintain scalable applications using Java and Spring Boot framework
  • Build robust web services and APIs using Python and Flask framework
  • Implement event-driven architectures using NATS messaging server
  • Deploy, manage, and optimize applications in Kubernetes environments
  • Develop microservices following best practices and design patterns
  • Collaborate with cross-functional teams to deliver high-quality software solutions
  • Write clean, maintainable code with comprehensive documentation
  • Participate in code reviews and contribute to technical architecture decisions
  • Troubleshoot and optimize application performance in containerized environments
  • Implement CI/CD pipelines and follow DevOps best practices

Required Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or related field
  • 4+ years of experience in software development
  • Strong proficiency in Java with deep understanding of web technology stack
  • Hands-on experience developing applications with Spring Boot framework
  • Solid understanding of Python programming language with practical Flask framework experience
  • Working knowledge of NATS server for messaging and streaming data
  • Experience deploying and managing applications in Kubernetes
  • Understanding of microservices architecture and RESTful API design
  • Familiarity with containerization technologies (Docker)
  • Experience with version control systems (Git)


Skills & Competencies

  • Technical skills: Java (Spring Boot, Spring Cloud, Spring Security)
  • Python (Flask, SQLAlchemy, REST APIs)
  • NATS messaging patterns (pub/sub, request/reply, queue groups); see the sketch after this list
  • Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
  • Web technologies (HTTP, REST, WebSocket, gRPC)
  • Container orchestration and management
  • Soft skills: problem-solving and analytical thinking
  • Strong communication and collaboration
  • Self-motivated with ability to work independently
  • Attention to detail and code quality
  • Continuous learning mindset
  • Team player with mentoring capabilities
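As a concrete illustration of the NATS pub/sub and queue-group patterns listed above, here is a minimal sketch with the nats-py client; the server URL and subject names are placeholders.

```python
"""Sketch: NATS publish/subscribe with a queue group (asyncio, nats-py)."""
import asyncio

import nats  # third-party: pip install nats-py


async def main() -> None:
    nc = await nats.connect("nats://localhost:4222")  # placeholder server URL

    async def on_order(msg) -> None:
        print(f"received on {msg.subject}: {msg.data.decode()}")

    # queue="workers" makes this a queue-group subscriber, so multiple instances share the load.
    await nc.subscribe("orders.created", queue="workers", cb=on_order)

    await nc.publish("orders.created", b'{"order_id": 42}')
    await nc.flush()
    await asyncio.sleep(0.1)  # give the callback a moment to fire in this demo
    await nc.drain()


if __name__ == "__main__":
    asyncio.run(main())
```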


prep study
Posted by Pooja Sharma
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹15L / yr
Go Programming (Golang)
Docker
Kubernetes

We're Hiring: Golang Developer

Location: Bangalore

 

We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.

 

Key Responsibilities

• Develop and maintain backend services using Golang

• Build scalable, secure, and high-performance microservices

• Work with REST APIs, WebSockets, message queues, and distributed systems

• Collaborate with DevOps, frontend, and product teams for smooth project delivery

• Optimize performance, troubleshoot issues, and ensure system stability

 

Skills & Experience Required

• 3–5 years of experience in Golang development

• Strong understanding of data structures, concurrency, and networking

• Hands-on experience with MySQL / Redis / Kafka or similar technologies

• Good understanding of microservices architecture, APIs, and cloud environments

• Experience in fintech/trading systems is an added advantage

• Immediate joiners or candidates with up to 30 days notice period preferred

 

If you are passionate about backend engineering and want to build fast, scalable trading systems, we'd love to hear from you.

prep study
Posted by Pooja Sharma
Mumbai, Navi Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
Go Programming (Golang)
Docker
Kubernetes

We're Hiring: Golang Developer (3–5 Years Experience)

Location: Mumbai

 

We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.

 

Key Responsibilities

• Develop and maintain backend services using Golang

• Build scalable, secure, and high-performance microservices

• Work with REST APIs, WebSockets, message queues, and distributed systems

• Collaborate with DevOps, frontend, and product teams for smooth project delivery

• Optimize performance, troubleshoot issues, and ensure system stability

 

Skills & Experience Required

• 3–5 years of experience in Golang development

• Strong understanding of data structures, concurrency, and networking

• Hands-on experience with MySQL / Redis / Kafka or similar technologies

• Good understanding of microservices architecture, APIs, and cloud environments

• Experience in fintech/trading systems is an added advantage

• Immediate joiners or candidates with up to 30 days notice period preferred

 

If you are passionate about backend engineering and want to build fast, scalable trading systems, we'd love to hear from you.

Hashone Careers
Posted by Madhavan I
Remote only
5 - 10 yrs
₹20L - ₹40L / yr
DevOps
Amazon Web Services (AWS)
Kubernetes
CI/CD
Python
+1 more

Job Description: DevOps Engineer

Location: Bangalore / Hybrid / Remote

Company: LodgIQ

Industry: Hospitality / SaaS / Machine Learning


About LodgIQ

Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.


Role Summary:

We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This role involves designing and implementing scalable infrastructure, improving system reliability, and driving automation across our cloud ecosystem.


Key Responsibilities:

• Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS

• Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring

• Deploy and manage Kubernetes clusters and containerized microservices

• Define and implement infrastructure as code using Terraform/CloudFormation

• Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana (see the metrics sketch after this list)

• Support MongoDB and MySQL database administration and optimization

• Ensure high availability, performance tuning, and cost optimization

• Guide and mentor junior engineers, and enforce DevOps best practices

• Drive system security, compliance, and audit readiness in cloud environments

• Collaborate with engineering, product, and QA teams to streamline release processes

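As a small illustration of the Prometheus monitoring mentioned in the responsibilities above, here is a sketch that exposes basic request metrics for scraping; the metric names and port are placeholders.

```python
"""Sketch: expose request counters and latency histograms for Prometheus to scrape."""
import random
import time

from prometheus_client import Counter, Histogram, start_http_server  # pip install prometheus-client

REQUESTS = Counter("app_requests_total", "Total requests handled", ["route"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


def handle_request() -> None:
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(route="/healthz").inc()


if __name__ == "__main__":
    start_http_server(9100)  # metrics served at http://localhost:9100/metrics
    while True:
        handle_request()
```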

Required Qualifications:

• 5+ years of DevOps/Infrastructure experience in production-grade environments

• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.

• Proven experience with Kubernetes and Docker in production

• Proficient with Terraform, CloudFormation, or similar IaC tools

• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar

• Advanced scripting in Python, Bash, or Go

• Solid understanding of networking, firewalls, DNS, and security protocols

• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)

• Experience with MongoDB and MySQL in cloud environments

Preferred Qualifications:

• AWS Certified DevOps Engineer or Solutions Architect

• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD

• Familiarity with Zero Downtime Deployments, Canary Releases, and Blue/Green Deployments

• Background in high-availability systems and incident response

• Prior experience in a SaaS, ML, or hospitality-tech environment


Tools and Technologies You’ll Use:

• Cloud: AWS

• Containers: Docker, Kubernetes, Helm

• CI/CD: Jenkins, GitHub Actions

• IaC: Terraform, CloudFormation

• Monitoring: Prometheus, Grafana, CloudWatch

• Databases: MongoDB, MySQL

• Scripting: Bash, Python

• Collaboration: Git, Jira, Confluence, Slack


Why Join Us?

• Competitive salary and performance bonuses.

• Remote-friendly work culture.

• Opportunity to work on cutting-edge tech in AI and ML.

• Collaborative, high-growth startup environment.

• For more information, visit http://www.lodgiq.com

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai, Kochi (Cochin), Pune, Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
Jenkins
CI/CD
Docker
Kubernetes
+15 more

Job Description

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.


Key Responsibilities:

  • Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
  • Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
  • Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
  • Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
  • Ensure compliance with security best practices and organizational policies across GCP environments.
  • Document processes, configurations, and architectural decisions to maintain operational transparency.
  • Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.


Mandatory Skills:

  • Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
  • Jenkins – Expertise in Declarative Pipeline creation and optimization.
  • CI/CD – Strong understanding of automated build, test, and deployment workflows.
  • Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
  • Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
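To give the Python-for-automation requirement a concrete shape, here is a minimal Cloud Storage upload sketch with the google-cloud-storage client; the bucket and object names are placeholders, and Application Default Credentials are assumed.

```python
"""Sketch: upload a build artifact to Google Cloud Storage as part of an automation task."""
from google.cloud import storage  # third-party: pip install google-cloud-storage


def upload_build_artifact(bucket_name: str, local_path: str, object_name: str) -> None:
    client = storage.Client()          # uses Application Default Credentials
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(object_name)
    blob.upload_from_filename(local_path)
    print(f"uploaded gs://{bucket_name}/{object_name}")


if __name__ == "__main__":
    upload_build_artifact("my-artifacts-bucket", "dist/app.tar.gz", "releases/app-1.0.0.tar.gz")
```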


Preferred Skills:

  • Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
  • Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
  • Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
  • GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.


Skills

GCP, Jenkins, CI/CD, AWS



 

******

Notice period: 0 to 15 days only

Location – Pune, Trivandrum, Kochi, Chennai

Meraki Labs
Bengaluru (Bangalore)
3 - 4 yrs
₹30L - ₹50L / yr
Python
NodeJS (Node.js)
React.js
NextJs (Next.js)
RESTful APIs
+4 more

Job Title: Full Stack Developer

Location: Bangalore, India


About Us:


Meraki Labs stands at the forefront of India's deep-tech innovation landscape, operating as a dynamic venture studio established by the visionary entrepreneur Mukesh Bansal. Our core mission revolves around the creation and rapid scaling of AI-first and truly "moonshot" startups, nurturing them from their nascent stages into industry leaders. We achieve this through an intensive, hands-on partnership model, working side-by-side with exceptional founders who possess both groundbreaking ideas and the drive to execute them.


Currently, Meraki Labs is channeling its significant expertise and resources into a particularly ambitious endeavor: a groundbreaking EdTech platform. This initiative is poised to revolutionize the field of education by democratizing access to world-class STEM learning for students globally. Our immediate focus is on fundamentally redefining how physics is taught and experienced, moving beyond traditional methodologies to deliver an immersive, intuitive, and highly effective learning journey that transcends geographical and socioeconomic barriers. Through this platform, we aim to inspire a new generation of scientists, engineers, and innovators, ensuring that cutting-edge educational resources are within reach of every aspiring learner, everywhere.


Role Overview:


As a Full Stack Developer, you will be at the foundation of building this intelligent learning ecosystem by connecting the front-end experience, backend architecture, and AI-driven components that bring the platform to life. You’ll own key systems that power the AI Tutor, Simulation Lab, and learning content delivery, ensuring everything runs smoothly, securely, and at scale. This role is ideal for engineers who love building end-to-end products that blend technology, user experience, and real-time intelligence.

Your Core Impact

  • You will build the spine of the platform, ensuring seamless communication between AI models, user interfaces, and data systems.
  • You’ll translate learning and AI requirements into tangible, performant product features.
  • Your work will directly shape how thousands of students experience physics through our AI Tutor and simulation environment.


Key Responsibilities:


Platform Architecture & Backend Development

  • Design and implement robust, scalable APIs that power user authentication, course delivery, and AI Tutor integration.
  • Build the data pipelines connecting LLM responses, simulation outputs, and learner analytics.
  • Create and maintain backend systems that ensure real-time interaction between the AI layer and the front-end interface.
  • Ensure security, uptime, and performance across all services.

Front-End Development & User Experience

  • Develop responsive, intuitive UIs (React, Next.js or similar) for learning dashboards, course modules, and simulation interfaces.
  • Collaborate with product designers to implement layouts for AI chat, video lessons, and real-time lab interactions.
  • Ensure smooth cross-device functionality for students accessing the platform on mobile or desktop.

AI Integration & Support

  • Work closely with the AI/ML team to integrate the AI Tutor and Simulation Lab outputs within the platform experience.
  • Build APIs that pass context, queries, and results between learners, models, and the backend in real time (see the WebSocket sketch after this list).
  • Optimize for low latency and high reliability, ensuring students experience immediate and natural interactions with the AI Tutor.
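As a sketch of the real-time relay described above, here is a minimal FastAPI WebSocket endpoint; the route and the answer_question() stand-in for the AI layer are hypothetical.

```python
"""Sketch: WebSocket relay between a learner client and a (hypothetical) tutoring backend."""
from fastapi import FastAPI, WebSocket, WebSocketDisconnect  # pip install fastapi uvicorn

app = FastAPI()


async def answer_question(question: str) -> str:
    """Placeholder for a call to the model/tutor service."""
    return f"(tutor) Let's break that down: {question}"


@app.websocket("/ws/tutor")
async def tutor_socket(websocket: WebSocket) -> None:
    await websocket.accept()
    try:
        while True:
            question = await websocket.receive_text()
            await websocket.send_text(await answer_question(question))
    except WebSocketDisconnect:
        pass  # client disconnected; nothing to clean up in this sketch

# Run locally with: uvicorn tutor_ws:app --reload  (module name is hypothetical)
```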

Data, Analytics & Reporting

  • Build dashboards and data views for educators and product teams to derive insights from learner behavior.
  • Implement secure data storage and export pipelines for progress analytics.

Collaboration & Engineering Culture

  • Work closely with AI Engineers, Prompt Engineers, and Product Leads to align backend logic with learning outcomes.
  • Participate in code reviews, architectural discussions, and system design decisions.
  • Help define engineering best practices that balance innovation, maintainability, and performance.


Required Qualifications & Skills

  • 3–5 years of professional experience as a Full Stack Developer or Software Engineer.
  • Strong proficiency in Python or Node.js for backend services.
  • Hands-on experience with React / Next.js or equivalent modern front-end frameworks.
  • Familiarity with databases (SQL/NoSQL), REST APIs, and microservices.
  • Experience with real-time data systems (WebSockets or event-driven architectures).
  • Exposure to AI/ML integrations or data-intensive backends.
  • Knowledge of AWS/GCP/Azure and containerized deployment (Docker, Kubernetes).
  • Strong problem-solving mindset and attention to detail.
Media and Entertainment Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
5 - 7 yrs
₹15L - ₹25L / yr
DevOps
Amazon Web Services (AWS)
CI/CD
Infrastructure
Scripting
+28 more

Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting


Criteria:

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
  • Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
  • Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
  • Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
  • Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
  • Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
  • Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
  • Strong scripting skills (Bash, Shell, Python) for automation
  • Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
  • Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
  • Strong experience in incident management, root cause analysis & production firefighting

 

Description

Role Overview

Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.

 

 Key Responsibilities

1. Cloud Infrastructure — AWS (Primary Focus)

  • Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
  • Optimize cloud cost, resource utilization, and performance across environments.
  • Design high-availability, fault-tolerant systems for streaming workloads.

 

2. CI/CD Automation

  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
  • Automate deployments for microservices, mobile apps, and backend APIs.
  • Implement blue/green and canary deployments for seamless production rollouts.

 

3. Observability & Monitoring

  • Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
  • Perform proactive performance analysis to minimize downtime and bottlenecks.
  • Set up dashboards for real-time visibility into system health and user traffic spikes.

 

4. Security, Compliance & Risk Highlighting

• Conduct frequent risk assessments and identify vulnerabilities in:

  o Cloud architecture

  o Access policies (IAM)

  o Secrets & key management

  o Data flows & network exposure


• Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.

 

5. Scalability & Reliability Engineering

  • Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
  • Identify scalability gaps and propose solutions across:
    o Microservices
    o Caching layers
    o CDN distribution (CloudFront)
    o Database workloads
  • Perform capacity planning and load testing to ensure readiness for 10x traffic growth.

 

6. Database & Storage Support

  • Administer and optimize MongoDB for high-read/low-latency use cases.
  • Design backup, recovery, and data replication strategies.
  • Work closely with backend teams to tune query performance and indexing (see the indexing sketch after this list).

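To make the MongoDB indexing work above concrete, here is a minimal pymongo sketch; the connection URI, collection, and field names are placeholders.

```python
"""Sketch: a compound index for a high-read, low-latency "latest items per user" query."""
from pymongo import ASCENDING, DESCENDING, MongoClient  # third-party: pip install pymongo

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
history = client["media"]["watch_history"]

# Compound index supporting "most recent items for a user" without a collection scan.
history.create_index([("user_id", ASCENDING), ("watched_at", DESCENDING)],
                     name="user_recent_watch")

cursor = history.find({"user_id": "u-123"}).sort("watched_at", DESCENDING).limit(20)
print(cursor.explain().get("queryPlanner", {}))  # confirm the winning plan uses the index
```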
 

7. Automation & Infrastructure as Code

  • Implement IaC using Terraform, CloudFormation, or Ansible.
  • Automate repetitive infrastructure tasks to ensure consistency across environments.

 

Required Skills & Experience

Technical Must-Haves

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
  • Strong hands-on experience with AWS (core and advanced services).
  • Expertise in Jenkins CI/CD pipelines.
  • Solid background working with MongoDB in production environments.
  • Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
  • Strong scripting experience (Bash, Python, Shell).
  • Experience handling risk identification, root cause analysis, and incident management.

 

Nice to Have

  • Experience with OTT, video streaming, media, or any content-heavy product environments.
  • Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
  • Understanding of CDN, caching, and streaming pipelines.

 

Personality & Mindset

  • Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
  • Proactive problem solver with ability to think about long-term scalability.
  • Comfortable working with cross-functional engineering teams.

 

Why Join company?

• Build and operate infrastructure powering millions of monthly users.

• Opportunity to shape DevOps culture and cloud architecture from the ground up.

• High-impact role in a fast-scaling Indian OTT product.

Tarento Group
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
4yrs+
Best in industry
Java
Spring Boot
Microservices
Windows Azure
RESTful APIs
+5 more

Job Summary:

We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.

Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.

Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.

Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)

Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.

Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
RADCOM
Shreya Tiwari
Posted by Shreya Tiwari
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 6 yrs
₹4L - ₹10L / yr
Linux/Unix
SQL
DOS/4G
Telecom
Kubernetes
+1 more

Dear Candidate


Candidate must have:

 

  • Minimum 3-5 years of experience working as a NOC Engineer / Senior NOC Engineer in the telecom/Product (preferably telecom monitoring) industry.
  • BE in CS, EE, or Telecommunications from a recognized university.
  • Knowledge of NOC Process
  • Technology exposure towards Telecom – 5G, 4G, IMS with a solid understanding of Telecom Performance KPIs and/or Radio Access Network. Knowledge of call flows will be an advantage.
  • Experience with Linux OS and SQL – mandatory.
  • Residence in Delhi – mandatory.
  • Ready to work in a 24×7 environment.
  • Ability to monitor alarms based on our environment.
  • Capability to identify and resolve issues occurring in the RADCOM environment.
  • Any relevant technical certification will be an added advantage.

 

Responsibilities:

 

  • Based in RADCOM India offices, Delhi.
  • Responsible for all NOC monitoring and technical support (T1/T2) aspects required by the process for RADCOM’s solutions.
  • Ready to participate in customer-planned activities, execution, and monitoring.

 

Meraki Labs
Agency job
via ENTER by Rajkishor Mishra
Bengaluru (Bangalore)
8 - 12 yrs
₹60L - ₹70L / yr
Machine Learning (ML)
Generative AI
Python
Artificial Intelligence (AI)
Large Language Models (LLM) tuning
+9 more

Job Overview:


As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use case development, and product creation, along with strong expertise in cloud-based architectures.


Key Responsibilities:


AI Tutor & Simulation Intelligence

  • Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
  • Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes.
  • Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
  • Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.
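
For orientation only, the snippet below sketches the retrieval step of such a RAG pipeline: it ranks pre-embedded syllabus passages by cosine similarity against the student's question and hands the top matches to a generator. embed() and generate_answer() are stand-ins for whichever embedding model and LLM the team ultimately adopts.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder for a real embedding model (e.g. a sentence encoder or hosted API).
    raise NotImplementedError

def generate_answer(question: str, context: list[str]) -> str:
    # Placeholder for the LLM call that produces a grounded, syllabus-aligned answer.
    raise NotImplementedError

def retrieve(question: str, passages: list[str], passage_vectors: np.ndarray, k: int = 3) -> list[str]:
    q = embed(question)
    # Cosine similarity between the question and every pre-embedded passage.
    sims = passage_vectors @ q / (np.linalg.norm(passage_vectors, axis=1) * np.linalg.norm(q))
    top = np.argsort(sims)[::-1][:k]
    return [passages[i] for i in top]

def answer(question: str, passages: list[str], passage_vectors: np.ndarray) -> str:
    context = retrieve(question, passages, passage_vectors)
    return generate_answer(question, context)  # the answer stays grounded in retrieved text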


Platform & System Architecture

  • Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
  • Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
  • Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.


Reliability, Security & Analytics

  • Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
  • Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
  • Set up real-time learning analytics to measure comprehension and identify concept gaps.


Leadership & Collaboration

  • Mentor and elevate engineers across backend, ML, and front-end teams.
  • Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
  • Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.


Qualifications & Skills:


  • 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
  • Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
  • Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
  • Experience designing microservices and API ecosystems for high-concurrency platforms.
  • Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
  • Demonstrated ability to work with educational data, content pipelines, and real-time systems.


Bonus Skills (Nice to Have):

  • Experience with multi-modal AI models (text, image, audio, video).
  • Knowledge of AI safety, ethical AI, and explainability techniques.
  • Prior work in AI-powered automation tools or AI-driven SaaS products.
IndArka Energy Pvt Ltd

at IndArka Energy Pvt Ltd

3 recruiters
Mita Hemant
Posted by Mita Hemant
Bengaluru (Bangalore)
4 - 5 yrs
₹15L - ₹18L / yr
Microsoft Windows Azure
CI/CD
Scripting language
Docker
Kubernetes
+3 more

About Us

At Arka Energy, we're redefining how renewable energy is experienced and adopted in homes. Our focus is on developing next-generation residential solar energy solutions through a unique combination of custom product design, intuitive simulation software, and high-impact technology. With engineering teams in Bangalore and the Bay Area, we’re committed to building innovative products that transform rooftops into smart energy ecosystems.

Our flagship product is a 3D simulation platform that models rooftops and commercial sites, allowing users to design solar layouts and generate accurate energy estimates — streamlining the residential solar design process like never before.

 

What We're Looking For

We're seeking a Senior DevOps Engineer who will be responsible for managing and automating cloud infrastructure and services, ensuring seamless integration and deployment of applications, and maintaining high availability and reliability. You will work closely with development and operations teams to streamline processes and enhance productivity.

Key Responsibilities

  • Design and implement CI/CD pipelines using Azure DevOps.
  • Automate infrastructure provisioning and configuration in the Azure cloud environment.
  • Monitor and manage system health, performance, and security.
  • Collaborate with development teams to ensure smooth and secure deployment of applications.
  • Troubleshoot and resolve issues related to deployment and operations.
  • Implement best practices for configuration management and infrastructure as code.
  • Maintain documentation of processes and solutions.

 

Requirements

  • Total relevant experience of 4 to 5 years.
  • Proven experience as a DevOps Engineer, specifically with Azure.
  • Experience with CI/CD tools and practices.
  • Strong understanding of infrastructure as code (IaC) using tools like Terraform or ARM templates.
  • Knowledge of scripting languages such as PowerShell or Python.
  • Familiarity with containerization technologies like Docker and Kubernetes.
  • Good to have: knowledge of AWS, DigitalOcean, and GCP
  • Excellent troubleshooting and problem-solving skills
  • High ownership, self-starter attitude, and ability to work independently
  • Strong aptitude and reasoning ability with a growth mindset

 

Nice to Have

  • Experience working in a SaaS or product-driven startup
  • Familiarity with the solar industry (preferred but not required)

Technology, Information and Internet Company


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹65L / yr
Data Structures
CI/CD
Microservices
Architecture
Cloud Computing
+19 more

Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems


Criteria:

  • Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
  • Must be strong in one core backend language: Node.js, Go, Java, or Python.
  • Deep understanding of distributed systems, caching, high availability, and microservices architecture.
  • Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
  • Strong command over system design, data structures, performance tuning, and scalable architecture
  • Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.


Description

What This Role Is All About

We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.

 

What You’ll Own

● Architect backend systems that handle India-scale traffic without breaking a sweat.

● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.

● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.

● Partner with Product, Data, and Infra to ship features that are reliable and delightful.

● Set high engineering standards—clean architecture, performance, automation, and testing.

● Lead discussions on system design, performance tuning, and infra choices.

● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.

● Identify gaps proactively and push for improvements instead of waiting for fires.
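
To illustrate the production-monitoring ownership above, one common approach is to instrument services with Prometheus metrics; a minimal sketch using the Python prometheus_client (metric names, port, and endpoint are arbitrary examples) is shown below.

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("api_requests_total", "Total API requests", ["endpoint", "status"])
LATENCY = Histogram("api_request_seconds", "Request latency in seconds", ["endpoint"])

def handle_request(endpoint: str) -> None:
    start = time.perf_counter()
    status = "200" if random.random() > 0.05 else "500"  # stand-in for real handler logic
    LATENCY.labels(endpoint=endpoint).observe(time.perf_counter() - start)
    REQUESTS.labels(endpoint=endpoint, status=status).inc()

if __name__ == "__main__":
    start_http_server(9000)  # Prometheus scrapes http://host:9000/metrics
    while True:
        handle_request("/v1/feed")
        time.sleep(0.1)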

 

What Makes You a Great Fit

● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.

● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.

● Deep understanding of distributed systems, caching, high-availability, and microservices.

● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.

● You think data structures and system design are not interviews — they’re daily tools.

● You write code that future-you won’t hate.

● Strong communication and a let’s figure this out attitude.

 

Bonus Points If You Have

● Built or scaled consumer apps with millions of DAUs.

● Experimented with event-driven architecture, streaming systems, or real-time pipelines.

● Love startups and don’t mind wearing multiple hats.

● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.

 

Why This Company Might Be Your Best Move

● Work on products used by real people every single day.

● Ownership from day one—your decisions will shape our core architecture.

● No unnecessary hierarchy; direct access to founders and senior leadership.

● A team that cares about quality, speed, and impact in equal measure.

● Build for Bharat — complex constraints, huge scale, real impact.


Payal
Payal Sangoi
Posted by Payal Sangoi
Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
Go Programming (Golang)
Docker
Kubernetes

We're Hiring: Golang Developer (3–5 Years Experience)

Location: Mumbai

 

We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.

 

Key Responsibilities

• Develop and maintain backend services using Golang

• Build scalable, secure, and high-performance microservices

• Work with REST APIs, WebSockets, message queues, and distributed systems

• Collaborate with DevOps, frontend, and product teams for smooth project delivery

• Optimize performance, troubleshoot issues, and ensure system stability

 

Skills & Experience Required

• 3–5 years of experience in Golang development

• Strong understanding of data structures, concurrency, and networking

• Hands-on experience with MySQL / Redis / Kafka or similar technologies

• Good understanding of microservices architecture, APIs, and cloud environments

• Experience in fintech/trading systems is an added advantage

• Immediate joiners or candidates with up to 30 days notice period preferred

 

If you are passionate about backend engineering and want to build fast, scalable trading systems, we would love to hear from you.

IT Industry


Agency job
via Truetech by Nithya A
Remote only
4 - 8 yrs
₹20L - ₹30L / yr
Python
Go Programming (Golang)
Docker
Kubernetes
Amazon Web Services (AWS)

Key Responsibilities

●     Design and maintain high-performance backend applications and microservices

●     Architect scalable, cloud-native systems and collaborate across engineering teams

●     Write high-quality, performant code and conduct thorough code reviews

●     Build and operate CI/CD pipelines and production systems

●     Work with databases, containerization (Docker/Kubernetes), and cloud platforms

●     Lead agile practices and continuously improve service reliability

Required Qualifications

●     4+ years of professional software development experience

●     2+ years contributing to service design and architecture

●     Strong expertise in modern languages like Golang, Python

●     Deep understanding of scalable, cloud-native architectures and microservices

●     Production experience with distributed systems and database technologies

●     Experience with Docker, software engineering best practices

●     Bachelor's Degree in Computer Science or related technical field

Preferred Qualifications

●     Experience with Golang, AWS, and Kubernetes

●     CI/CD pipeline experience with GitHub Actions

●     Start-up environment experience

Tradelab Software Private Limited
Pooja Sharma
Posted by Pooja Sharma
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹15L / yr
Go Programming (Golang)
Docker
Kubernetes

We're Hiring: Golang Developer (3–5 Years Experience)

Location: Bangalore


We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.


Key Responsibilities

• Develop and maintain backend services using Golang

• Build scalable, secure, and high-performance microservices

• Work with REST APIs, WebSockets, message queues, and distributed systems

• Collaborate with DevOps, frontend, and product teams for smooth project delivery

• Optimize performance, troubleshoot issues, and ensure system stability


Skills & Experience Required

• 3–5 years of experience in Golang development

• Strong understanding of data structures, concurrency, and networking

• Hands-on experience with MySQL / Redis / Kafka or similar technologies

• Good understanding of microservices architecture, APIs, and cloud environments

• Experience in fintech/trading systems is an added advantage

• Immediate joiners or candidates with up to 30 days notice period preferred


If you are passionate about backend engineering and want to build fast, scalable trading systems, share your resume.

AdTech Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.
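
As one small, illustrative slice of these checks (an assumption about approach, not the team's actual scripts), a boto3 sketch that flags S3 buckets missing default encryption or a full public-access block:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError:
        print(f"[WARN] {name}: no default encryption configured")
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(config.values()):
            print(f"[WARN] {name}: public access block is not fully enabled")
    except ClientError:
        print(f"[WARN] {name}: no public access block configured")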

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
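
To make the drift-detection bullet concrete, a minimal sketch using a two-sample Kolmogorov-Smirnov test on one numeric feature; the baseline and live arrays below are synthetic stand-ins, and the 0.01 significance level is an arbitrary choice to be tuned per feature.

import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    # True when the live distribution differs significantly from the training baseline.
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha

baseline = np.random.normal(0.0, 1.0, 10_000)  # stand-in for a stored training sample
live = np.random.normal(0.3, 1.0, 10_000)      # stand-in for recent production values
print("drift detected:", feature_drifted(baseline, live))

The boolean (or the underlying statistic) can then be emitted as a CloudWatch or Prometheus metric so Grafana alerts fire when drift persists.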


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
NeoGenCode Technologies Pvt Ltd
Shivank Bhardwaj
Posted by Shivank Bhardwaj
Bengaluru (Bangalore)
1 - 8 yrs
₹5L - ₹30L / yr
Python
React.js
PostgreSQL
TypeScript
NextJs (Next.js)
+11 more


Job Summary

We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack—building scalable, high-performance frontends using Typescript & Next.js and developing robust backend services using Python (FastAPI/Django).

This role is crucial in shaping product experiences and driving innovation at scale.


Mandatory Candidate Background

  • Experience working in product-based companies only
  • Strong academic background
  • Stable work history
  • Excellent coding skills and hands-on development experience
  • Strong foundation in Data Structures & Algorithms (DSA)
  • Strong problem-solving mindset
  • Understanding of clean architecture and code quality best practices


Key Responsibilities

  • Design, develop, and maintain scalable full-stack applications
  • Build responsive, performant, user-friendly UIs using Typescript & Next.js
  • Develop APIs and backend services using Python (FastAPI/Django)
  • Collaborate with product, design, and business teams to translate requirements into technical solutions
  • Ensure code quality, security, and performance across the stack
  • Own features end-to-end: architecture, development, deployment, and monitoring
  • Contribute to system design, best practices, and the overall technical roadmap


Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience
  • Strong expertise in Typescript / Next.js OR Python (FastAPI, Django) — must be familiar with both areas
  • Experience building RESTful APIs and microservices
  • Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
  • Strong debugging, optimization, and problem-solving abilities
  • Comfortable working in fast-paced startup environments


Good-to-Have:

  • Experience with containerization (Docker/Kubernetes)
  • Exposure to message queues or event-driven architectures
  • Familiarity with modern DevOps and observability tooling


NeoGenCode Technologies Pvt Ltd
Shivank Bhardwaj
Posted by Shivank Bhardwaj
Pune
6 - 8 yrs
₹12L - ₹22L / yr
NodeJS (Node.js)
React.js
Javascript
Go Programming (Golang)
Elixir
+10 more


Job Description – Full Stack Developer (React + Node.js)

Experience: 5–8 Years

Location: Pune

Work Mode: WFO

Employment Type: Full-time


About the Role

We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.


Key Responsibilities

  • Design, develop, and maintain scalable full-stack applications using React and Node.js.
  • Collaborate with cross-functional teams to define, design, and deliver new features.
  • Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
  • Work with relational databases such as PostgreSQL or MySQL.
  • Deploy and manage applications in cloud environments (preferably GCP or AWS).
  • Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
  • Utilize containerization tools like Docker for efficient development and deployment workflows.
  • Integrate third-party services and APIs, including AI APIs and tools.
  • Contribute to improving development processes, documentation, and best practices.


Required Skills

  • Strong experience with React.js (frontend).
  • Solid hands-on experience with Node.js (backend).
  • Good understanding of relational databases: PostgreSQL / MySQL.
  • Experience working in production environments and debugging live systems.
  • Strong understanding of OOP or Functional Programming, and clean coding standards.
  • Knowledge of Docker or other containerization tools.
  • Experience with cloud platforms (GCP or AWS).
  • Excellent written and verbal communication skills.


Good to Have

  • Experience with Golang or Elixir.
  • Familiarity with Kubernetes, RabbitMQ, Redis, etc.
  • Contributions to open-source projects.
  • Previous experience working with AI APIs or machine learning tools.


Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Mumbai
2 - 5 yrs
₹7L - ₹18L / yr
Docker
Kubernetes
CI/CD
Jenkins

Job Title: DevOps Engineer

Location: Mumbai

Experience: 2–4 Years

Department: Technology

About InCred

InCred is a new-age financial services group leveraging technology and data science to make lending quick, simple, and hassle-free. Our mission is to empower individuals and businesses by providing easy access to financial services while upholding integrity, innovation, and customer-centricity. We operate across personal loans, education loans, SME financing, and wealth management, driving financial inclusion and socio-economic progress.

Role Overview

As a DevOps Engineer, you will play a key role in automating, scaling, and maintaining our cloud infrastructure and CI/CD pipelines. You will collaborate with development, QA, and operations teams to ensure high availability, security, and performance of our systems that power millions of transactions.

Key Responsibilities

  • Cloud Infrastructure Management: Deploy, monitor, and optimize infrastructure on AWS (EC2, EKS, S3, VPC, IAM, RDS, Route53) or similar platforms.
  • CI/CD Automation: Build and maintain pipelines using tools like Jenkins, GitLab CI, or similar.
  • Containerization & Orchestration: Manage Docker and Kubernetes clusters for scalable deployments.
  • Infrastructure as Code: Implement and maintain IaC using Terraform or equivalent tools.
  • Monitoring & Logging: Set up and manage tools like Prometheus, Grafana, ELK stack for proactive monitoring.
  • Security & Compliance: Ensure systems adhere to security best practices and regulatory requirements.
  • Performance Optimization: Troubleshoot and optimize system performance, network configurations, and application deployments.
  • Collaboration: Work closely with developers and QA teams to streamline release cycles and improve deployment efficiency.

Required Skills

  • 2–4 years of hands-on experience in DevOps roles.
  • Strong knowledge of Linux administration and shell scripting (Bash/Python).
  • Experience with AWS services and cloud architecture.
  • Proficiency in CI/CD tools (Jenkins, GitLab CI) and version control systems (Git).
  • Familiarity with Docker, Kubernetes, and container orchestration.
  • Knowledge of Terraform or similar IaC tools.
  • Understanding of networking, security, and performance tuning.
  • Exposure to monitoring tools (Prometheus, Grafana) and log management.

Preferred Qualifications

  • Experience in financial services or fintech environments.
  • Knowledge of microservices architecture and enterprise-grade SaaS setups.
  • Familiarity with compliance standards in BFSI (Banking & Financial Services Industry).

Why Join InCred?

  • Culture: High-performance, ownership-driven, and innovation-focused environment.
  • Growth: Opportunities to work on cutting-edge tech and scale systems for millions of users.
  • Rewards: Competitive compensation, ESOPs, and performance-based incentives.
  • Impact: Be part of a mission-driven organization transforming India’s credit landscape.


Tradelab Software Private Limited
Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
CI/CD

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.


What We Expect:

• You should already be exceptional at Golang. If you need hand-holding, this isn’t the place for you.

• You thrive on challenges, not on perks or financial rewards.

• You measure success by your own growth, not external validation.

• Taking calculated risks excites you—you’re here to build, break, and learn.

• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.

• You understand the stakes—milliseconds can make or break trades, and precision is everything.


What You Will Do:

• Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.

• Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.

• Build event-driven, fault-tolerant systems that can handle massive real-time data streams.

• Own your work—no babysitting, no micromanagement.

• Work alongside equally driven engineers who expect nothing less than brilliance.

• Learn faster than you ever thought possible.


Must-Have Skills:

• Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).

• Deep understanding of concurrency, memory management, and system design.

• Experience with Trading, market data processing, or low-latency systems.

• Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.

• Hands-on with Docker, Kubernetes, and CI/CD pipelines.

• A portfolio of work that speaks louder than a resume.


Nice-to-Have Skills:

• Past experience in fintech.

• Contributions to open-source Golang projects.

• A history of building something impactful from scratch.

• Understanding of FIX protocol, WebSockets, and streaming APIs.

Virtana

at Virtana

2 candid answers
Krutika Devadiga
Posted by Krutika Devadiga
Pune
4 - 10 yrs
Best in industry
Java
Kubernetes
Go Programming (Golang)
Python
Apache Kafka
+13 more

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 7+ years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java. Deep experience with one of these languages is required.
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana:  Virtana delivers the industry’s only broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

AdTech Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹0.1L - ₹0.1L / yr
Python
MLOps
Apache Airflow
Apache Spark
AWS CloudFormation
+23 more

Review Criteria

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, including at recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria

  • CV attachment is mandatory
  • Please provide the CTC breakup (Fixed + Variable)
  • Are you open to a face-to-face (F2F) round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
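
For orientation, a minimal Airflow DAG sketch of the shape such a pipeline might take, assuming Airflow 2.x on MWAA; the DAG id, schedule, and task bodies are placeholders for the real Spark/EMR, training, and evaluation steps.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def prepare_data(**_):
    ...  # placeholder: e.g. trigger an EMR/Glue job and wait for completion

def train_model(**_):
    ...  # placeholder: launch a training job and register the resulting artifact

def evaluate_model(**_):
    ...  # placeholder: compute metrics and gate promotion to production

with DAG(
    dag_id="ml_training_pipeline",   # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    prep = PythonOperator(task_id="prepare_data", python_callable=prepare_data)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate_model", python_callable=evaluate_model)

    prep >> train >> evaluate  # simple linear dependency chain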

 

Ideal Candidate

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.

 

Education:

  • Master’s degree in computer science, Machine Learning, Data Engineering, or related field.

 

Media and Entertainment Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
4 - 8 yrs
₹20L - ₹45L / yr
TypeScript
MongoDB
Microservices
MVC Framework
Google Cloud Platform (GCP)
+14 more

Required Skills: TypeScript, MVC, Cloud experience (Azure, AWS, etc.), MongoDB, Express.js, Nest.js

 

Criteria:

Need candidates from growing startups or product-based companies only

1. 4–8 years’ experience in backend engineering

2. Minimum 2+ years hands-on experience with:

  • TypeScript
  • Express.js / Nest.js

3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)

4. Strong understanding of system design & scalable architecture

5. Hands-on experience in:

  • Event-driven architecture / Domain-driven design
  • MVC / Microservices

6. Strong in automated testing (especially integration tests)

7. Experience with CI/CD pipelines (GitHub Actions or similar)

8. Experience managing production systems

9. Solid understanding of performance, reliability, observability

10. Cloud experience (AWS preferred; GCP/Azure acceptable)

11. Strong coding standards — Clean Code, code reviews, refactoring

 

Description 

About the opportunity

We are looking for an exceptional Senior Software Engineer to join our Backend team. This is a unique opportunity to join a fast-growing company where you will get to solve real customer and business problems, shape the future of a product built for Bharat and build the engineering culture of the team. You will have immense responsibility and autonomy to push the boundaries of engineering to deliver scalable and resilient systems.

As a Senior Software Engineer, you will be responsible for shipping innovative features at breakneck speed, designing the architecture, mentoring other engineers on the team and pushing for a high bar of engineering standards like code quality, automated testing, performance, CI/CD, etc. If you are someone who loves solving problems for customers, technology, the craft of software engineering, and the thrill of building startups, we would like to talk to you.

 

What you will be doing

  • Build and ship features in our Node.js (and now migrating to TypeScript) codebase that directly impact user experience and help move the top and bottom line of the business.
  • Collaborate closely with our product, design and data team to build innovative features to deliver a world class product to our customers. At company, product managers don’t “tell” what to build. In fact, we all collaborate on how to solve a problem for our customers and the business. Engineering plays a big part in it.
  • Design scalable platforms that empower our product and marketing teams to rapidly experiment.
  • Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
  • Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
  • Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
  • Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.

 

The role could be ideal for you if you

  • Experience of 4-8 years of working in backend engineering with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js) and MongoDB (or any popular database like MySQL, PostgreSQL, DynamoDB, etc.).
  • Well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
  • Experienced in writing automated tests (especially integration tests) and Continuous Integration. At company, engineers own quality and hence, writing automated tests is crucial to the role.
  • Experience with managing production infrastructure using technologies like public cloud providers (AWS, GCP, Azure, etc.). Bonus: if you have experience in using Kubernetes.
  • Experience in observability techniques like code instrumentation for metrics, tracing and logging.
  • Care deeply about code quality, code reviews, software architecture (think about Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: if you have experience in this from your past roles.
  • Understand the importance of shipping fast in a startup environment and constantly try to find ingenious ways to achieve the same.
  • Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other members on the team sooner than later.
  • Can take ownership of goals and deliver them with high accountability.

 

Don’t hesitate to try out new technologies. At company, nobody is limited to a role. Every engineer in our team is an expert in at least one technology but often ventures into adjacent technologies like React.js, Flutter, Data Platforms, AWS and Kubernetes. If you are not excited by this, you will not like working at company. Bonus: if you have experience in adjacent technologies like AWS (or any public cloud provider), GitHub Actions (or CircleCI), Kubernetes, Infrastructure as Code (Terraform, Pulumi, etc.), etc.

 

 

GrowthArc

at GrowthArc

2 candid answers
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote, Bengaluru (Bangalore)
7yrs+
Upto ₹38L / yr (Varies)
Amazon Web Services (AWS)
Kubernetes
Docker
Terraform
Amazon EC2
+7 more

Seeking an experienced AWS Migration Engineer with 7+ years of hands-on experience to lead cloud migration projects, assess legacy systems, and ensure seamless transitions to AWS infrastructure. The role focuses on strategy, execution, optimization, and minimizing downtime during migrations.


Key Responsibilities:

  • Conduct assessments of on-premises and legacy systems for AWS migration feasibility.
  • Design and execute migration strategies using AWS Migration Hub, DMS, and Server Migration Service.
  • Plan and implement lift-and-shift, re-platforming, and refactoring approaches.
  • Optimize workloads post-migration for cost, performance, and security.
  • Collaborate with stakeholders to define migration roadmaps and timelines.
  • Perform data migration, application re-architecture, and hybrid cloud setups.
  • Monitor migration progress, troubleshoot issues, and ensure business continuity.
  • Document processes and provide post-migration support and training.
  • Manage and troubleshoot Kubernetes/EKS networking components including VPC CNI, Service Mesh, Ingress controllers, and Network Policies.


Required Qualifications:

  • 7+ years of IT experience, with minimum 4 years focused on AWS migrations.
  • AWS Certified Solutions Architect or Migration Specialty certification preferred.
  • Expertise in AWS services: EC2, S3, RDS, VPC, Direct Connect, DMS, SMS.
  • Strong knowledge of cloud migration tools and frameworks (AWS MGN, Snowball).
  • Experience with infrastructure as code (CloudFormation, Terraform).
  • Proficiency in scripting (Python, PowerShell) and automation.
  • Familiarity with security best practices (IAM, encryption, compliance).
  • Hands-on experience with Kubernetes/EKS networking components and best practices.


Preferred Skills:

  • Experience with hybrid/multi-cloud environments.
  • Knowledge of DevOps tools (Jenkins, GitLab CI/CD).
  • Excellent problem-solving and communication skills.
Blurgs AI

at Blurgs AI

2 candid answers
Nikita Sinha
Posted by Nikita Sinha
Hyderabad
4 - 8 yrs
Upto ₹25L / yr (Varies)
Python
Data engineering
MongoDB
SQL
Docker
+1 more

We are seeking a Technical Lead with strong expertise in backend engineering, real-time data streaming, and platform/infrastructure development to lead the architecture and delivery of our on-premise systems.

You will design and build high-throughput streaming pipelines (Apache Pulsar, Apache Flink), backend services (FastAPI), data storage models (MongoDB, ClickHouse), and internal dashboards/tools (Angular).

In this role, you will guide engineers, drive architectural decisions, and ensure reliable systems deployed on Docker + Kubernetes clusters.


Key Responsibilities

1. Technical Leadership & Architecture

  • Own the end-to-end architecture for backend, streaming, and data systems.
  • Drive system design decisions for ingestion, processing, storage, and DevOps.
  • Review code, enforce engineering best practices, and ensure production readiness.
  • Collaborate closely with founders and domain experts to translate requirements into technical deliverables.

2. Data Pipeline & Streaming Systems

  • Architect and implement real-time, high-throughput data pipelines using Apache Pulsar and Apache Flink.
  • Build scalable ingestion, enrichment, and stateful processing workflows.
  • Integrate multi-sensor maritime data into reliable, unified streaming systems.

3. Backend Services & Platform Engineering

  • Lead development of microservices and internal APIs using FastAPI (or equivalent backend frameworks).
  • Build orchestration, ETL, and system-control services.
  • Optimize backend systems for latency, throughput, resilience, and long-term maintainability.
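
As a rough sketch of what such an internal API might look like (assumed names, and an in-memory store standing in for MongoDB/ClickHouse), a minimal FastAPI service with a health probe and a lookup endpoint:

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="track-service")  # placeholder service name

class Track(BaseModel):
    vessel_id: str
    latitude: float
    longitude: float

FAKE_STORE: dict[str, Track] = {}  # stand-in for the real MongoDB/ClickHouse access layer

@app.get("/healthz")
def healthz() -> dict:
    return {"status": "ok"}

@app.get("/tracks/{vessel_id}", response_model=Track)
def get_track(vessel_id: str) -> Track:
    track = FAKE_STORE.get(vessel_id)
    if track is None:
        raise HTTPException(status_code=404, detail="unknown vessel")
    return track

Run locally with uvicorn (for example, uvicorn main:app --reload) and deploy behind the platform's ingress once containerized.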

4. Data Storage & Modeling

  • Design scalable, efficient data models using MongoDB, ClickHouse, and other on-prem databases.
  • Implement indexing, partitioning, retention, and lifecycle strategies for large datasets.
  • Ensure high-performance APIs and analytics workflows.

5. Infrastructure, DevOps & Containerization

  • Deploy and manage distributed systems using Docker and Kubernetes.
  • Own observability, monitoring, logging, and alerting for all critical services.
  • Implement CI/CD pipelines tailored for on-prem and hybrid cloud environments.

6. Team Management & Mentorship

  • Provide technical guidance to engineers across backend, data, and DevOps teams.
  • Break down complex tasks, review designs, and ensure high-quality execution.
  • Foster a culture of clarity, ownership, collaboration, and engineering excellence.

Required Skills & Experience

  • 5–10+ years of strong software engineering experience.
  • Expertise with streaming platforms like Apache Pulsar, Apache Flink, or similar technologies.
  • Strong backend engineering proficiency — preferably FastAPI, Python, Java, or Scala.
  • Hands-on experience with MongoDB and ClickHouse.
  • Solid experience deploying, scaling, and managing services on Docker + Kubernetes.
  • Strong understanding of distributed systems, high-performance data flows, and system tuning.
  • Experience working with Angular for internal dashboards is a plus.
  • Excellent system-design, debugging, and performance-optimization skills.
  • Prior experience owning critical technical components or leading engineering teams.

Nice to Have

  • Experience with sensor data (AIS, Radar, SAR, EO/IR).
  • Exposure to maritime, defence, or geospatial technology.
  • Experience with bare-metal / on-premise deployments.
Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹15L - ₹30L / yr
Machine Learning (ML)
Amazon Web Services (AWS)
Kubernetes
ECS
Amazon Redshift
+14 more

Core Responsibilities:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency.
  • Model Development: Algorithms and architectures range from traditional statistical methods to deep learning, along with employing LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.

 

Skills:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and Neo4j graph.
  • Model Deployment and Monitoring: MLOps experience in deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.

 

Required experience:

  • Amazon SageMaker: Deep understanding of SageMaker’s capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)

 

Skills: AWS, AWS Cloud, Amazon Redshift, EKS

 

Must-Haves

Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker

Notice period: 0 to 15 days only

Hybrid work mode: 3 days in office, 2 days at home

Virtana

at Virtana

2 candid answers
Krutika Devadiga
Posted by Krutika Devadiga
Pune
8 - 13 yrs
Best in industry
Java
Kubernetes
Amazon Web Services (AWS)
Spring Boot
Go Programming (Golang)
+13 more

Company Overview:

Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud. 

As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.

 

Position Overview:

As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in an AI-enabled data center as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential to solve complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.



Work Location: Pune


Job Type: Hybrid

 

Key Responsibilities:

  • Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
  • Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
  • Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
  • Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
  • Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
  • Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
  • Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
  • Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
  • 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
  • Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
  • Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
  • Practical knowledge of OO design patterns and frameworks such as Spring and Hibernate.
  • Extensive experience with cloud platforms such as AWS, Azure, or GCP, and development expertise with Kubernetes, Docker, etc.
  • Solid experience designing and delivering features with high quality on aggressive schedules.
  • Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
  • Familiarity with performance optimization techniques and principles for backend systems.
  • Excellent problem-solving and critical-thinking abilities.
  • Outstanding communication and collaboration skills.


Why Join Us:

  • Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
  • Collaborative and innovative work environment.
  • Competitive salary and benefits package.
  • Professional growth and development opportunities.
  • Chance to work on cutting-edge technology and products that make a real impact.


If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.

Hashone Careers
Bengaluru (Bangalore), Pune, Hyderabad
5 - 10 yrs
₹12L - ₹25L / yr
DevOps
Python
CI/CD
Kubernetes
Docker
+1 more

Job Description

Experience: 5 - 9 years

Location: Bangalore/Pune/Hyderabad

Work Mode: Hybrid (3 days WFO)


Senior Cloud Infrastructure Engineer for Data Platform 


The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.


Key Responsibilities:


Cloud Infrastructure Design & Management

Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.

Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.

Optimize cloud costs and ensure high availability and disaster recovery for critical systems


Databricks Platform Management

Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.

Automate cluster management, job scheduling, and monitoring within Databricks.

Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.
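
To give a flavour of the Databricks job automation involved, below is a minimal, illustrative sketch that triggers an existing Databricks job and polls its state over the REST API; the workspace URL, token environment variable, and job ID are placeholder assumptions, and the endpoint paths follow the Jobs API 2.1, which may differ across workspaces and API versions.

```python
import os
import requests

# Placeholder workspace URL and token; endpoint paths assume Jobs API 2.1.
HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

def trigger_job(job_id: int) -> int:
    """Kick off an existing Databricks job and return its run id."""
    resp = requests.post(f"{HOST}/api/2.1/jobs/run-now",
                         headers=HEADERS, json={"job_id": job_id})
    resp.raise_for_status()
    return resp.json()["run_id"]

def run_state(run_id: int) -> str:
    """Fetch the current life-cycle state of a job run."""
    resp = requests.get(f"{HOST}/api/2.1/jobs/runs/get",
                        headers=HEADERS, params={"run_id": run_id})
    resp.raise_for_status()
    return resp.json()["state"]["life_cycle_state"]

if __name__ == "__main__":
    run_id = trigger_job(job_id=123)  # assumed existing job
    print(f"run {run_id}: {run_state(run_id)}")
```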


CI/CD Pipeline Development

Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.

Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.


Monitoring & Incident Management

Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.

Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.


Security & Compliance

Enforce security best practices, including identity and access management (IAM), encryption, and network security.

Ensure compliance with organizational and regulatory standards for data protection and cloud operations.


Collaboration & Documentation

Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.

Maintain comprehensive documentation for infrastructure, processes, and configurations.


Required Qualifications

Education: Bachelor’s degree in Computer Science, Engineering, or a related field.


Must Have Experience:

6+ years of experience in DevOps or Cloud Engineering roles.

Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.

Hands-on experience with Databricks for data engineering and analytics.


Technical Skills:

Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.

Strong scripting skills in Python or Bash.

Experience with containerization and orchestration tools like Docker and Kubernetes.

Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).


Soft Skills:

Strong problem-solving and analytical skills.

Excellent communication and collaboration abilities.

Inferigence Quotient
Posted by Neeta Trivedi
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹15L / yr
Python
NodeJS (Node.js)
FastAPI
Docker
Javascript
+16 more

3-5 years of experience as a full-stack developer, with essential requirements across the following technologies: FastAPI, JavaScript, React.js-Redux, Node.js, Next.js, MongoDB, Python, Microservices, Docker, and MLOps.


Experience in Cloud Architecture using Kubernetes (K8s), Google Kubernetes Engine, Authentication and Authorisation Tools, DevOps Tools and Scalable and Secure Cloud Hosting is a significant plus.


Ability to manage a hosting environment and scale applications to handle load changes; knowledge of accessibility and security compliance.

 

Testing of API endpoints.
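
As an illustration of what this typically looks like in practice, here is a minimal sketch that exercises a FastAPI endpoint with the framework's built-in test client; the route and expected payload are assumptions for the example.

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int) -> dict:
    return {"item_id": item_id}

client = TestClient(app)

def test_read_item() -> None:
    # Assert both the HTTP status and the JSON body of the endpoint.
    response = client.get("/items/42")
    assert response.status_code == 200
    assert response.json() == {"item_id": 42}
```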

 

Ability to code and create functional web applications and optimise them to improve response time and efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, and table partitioning.

 

Expert knowledge of Python and its corresponding frameworks along with their best practices; expert knowledge of relational and NoSQL databases.


Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.

 

Must be conversant with Agile software development methodology. Must be able to write technical documents and coordinate with test teams. Proficient in using Git for version control.

One2n
Posted by Reshika Mendiratta
Pune
6yrs+
Upto ₹35L / yr (Varies)
Kubernetes
Monitoring
Amazon Web Services (AWS)
JVM
Docker
+7 more

About the role:

We are looking for a Senior Site Reliability Engineer who understands the nuances of production systems. If you care about building and running reliable software systems in production, you'll like working at One2N.

You will primarily work with our startup and mid-size clients. We work on one-to-N kinds of problems (hence the name One2N): those where the proof of concept is done and the work revolves around scalability, maintainability, and reliability. In this role, you will be responsible for architecting and optimizing our observability and infrastructure to provide actionable insights into performance and reliability.


Responsibilities:

  • Conceptualise, think, and build platform engineering solutions with a self-serve model to enable product engineering teams.
  • Provide technical guidance and mentorship to young engineers.
  • Participate in code reviews and contribute to best practices for development and operations.
  • Design and implement comprehensive monitoring, logging, and alerting solutions to collect, analyze, and visualize data (metrics, logs, traces) from diverse sources.
  • Develop custom monitoring metrics, dashboards, and reports to track key performance indicators (KPIs), detect anomalies, and troubleshoot issues proactively.
  • Improve Developer Experience (DX) to help engineers improve their productivity.
  • Design and implement CI/CD solutions to optimize velocity and shorten the delivery time.
  • Help SRE teams set up on-call rosters and coach them for effective on-call management.
  • Automating repetitive manual tasks from CI/CD pipelines, operations tasks, and infrastructure as code (IaC) practices.
  • Stay up-to-date with emerging technologies and industry trends in cloud-native, observability, and platform engineering space.
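
As an example of the custom monitoring metrics mentioned above, the sketch below exposes a request counter and a latency histogram for Prometheus to scrape using the prometheus_client library; the metric names, route label, and port are illustrative assumptions.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative KPIs; real metric names would come from the product domain.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["route"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request(route: str) -> None:
    REQUESTS.labels(route=route).inc()
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(9000)  # metrics scraped by Prometheus at :9000/metrics
    while True:
        handle_request("/checkout")
```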


Requirements:

  • 6-9 years of professional experience in DevOps practices or software engineering roles, with a focus on Kubernetes on an AWS platform.
  • Expertise in observability and telemetry tools and practices, including hands-on experience with some of Datadog, Honeycomb, ELK, Grafana, and Prometheus.
  • Working knowledge of programming using Golang, Python, Java, or equivalent.
  • Skilled in diagnosing and resolving Linux operating system issues.
  • Strong proficiency in scripting and automation to build monitoring and analytics solutions.
  • Solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies.
  • Experience with infrastructure as code (IaC) tools such as Terraform, Pulumi.
  • Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
  • Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Bengaluru (Bangalore)
1 - 8 yrs
₹12L - ₹34L / yr
Python
React.js
Django
FastAPI
TypeScript
+7 more

Please note that salary will be based on experience.


Job Title: Full Stack Engineer

Location: Bengaluru (Indiranagar) – Work From Office (5 Days)

Job Summary

We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.

Responsibilities

  • Design, develop, and maintain scalable full-stack applications.
  • Build responsive, high-performance UIs using Typescript & Next.js.
  • Develop backend services and APIs using Python (FastAPI/Django).
  • Work closely with product, design, and business teams to translate requirements into intuitive solutions.
  • Contribute to architecture discussions and drive technical best practices.
  • Own features end-to-end — design, development, testing, deployment, and monitoring.
  • Ensure robust security, code quality, and performance optimization.
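
For a flavour of the backend side of this stack, here is a minimal, illustrative FastAPI sketch with a health check and one POST endpoint; the Order model and routes are assumptions, and persistence is stubbed out.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    sku: str
    quantity: int

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/orders")
def create_order(order: Order) -> dict:
    # Persistence is stubbed out; a real service would write to PostgreSQL/MongoDB.
    return {"sku": order.sku, "quantity": order.quantity, "status": "queued"}
```

It can be run locally with uvicorn main:app --reload, assuming the file is saved as main.py.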

Tech Stack

Frontend: Typescript, Next.js, React, Tailwind CSS

Backend: Python, FastAPI, Django

Databases: PostgreSQL, MongoDB, Redis

Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD

Other Tools: Git, GitHub, Elasticsearch, Observability tools

Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience.
  • Strong expertise in either frontend (Typescript/Next.js) or backend (Python/FastAPI/Django) with familiarity in both.
  • Experience building RESTful services and microservices.
  • Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
  • Strong debugging, problem-solving, and optimization skills.
  • Ability to thrive in fast-paced, high-ownership startup environments.

Good-to-Have:

  • Exposure to Docker, Kubernetes, and observability tools.
  • Experience with message queues or event-driven architecture.


Perks & Benefits

  • Upskilling support – courses, tools & learning resources.
  • Fun team outings, hackathons, demos & engagement initiatives.
  • Flexible Work-from-Home: 12 WFH days every 6 months.
  • Menstrual WFH: up to 3 days per month.
  • Mobility benefits: relocation support & travel allowance.
  • Parental support: maternity, paternity & adoption leave.
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
1 - 8 yrs
₹8L - ₹35L / yr
Python
FastAPI
Django
TypeScript
NextJs (Next.js)
+11 more

Job Title : Full Stack Engineer (Python + React.js/Next.js)

Experience : 1 to 6+ Years

Location : Bengaluru (Indiranagar)

Employment : Full-Time

Working Days : 5 Days WFO

Notice Period : Immediate to 30 Days


Role Overview :

We are seeking Full Stack Engineers to build scalable, high-performance fintech products.

You will work on both frontend (Typescript/Next.js) and backend (Python/FastAPI/Django), owning features end-to-end and contributing to architecture, performance, and product innovation.


Main Tech Stack :

Frontend : Typescript, Next.js, React

Backend : Python, FastAPI, Django

Database : PostgreSQL, MongoDB, Redis

Cloud : AWS/GCP, Docker, Kubernetes

Tools : Git, GitHub, CI/CD, Elasticsearch


Key Responsibilities :

  • Develop full-stack applications with clean, scalable code.
  • Build fast, responsive UIs using Typescript, Next.js, React.
  • Develop backend APIs using Python, FastAPI, Django.
  • Collaborate with product/design to implement solutions.
  • Own development lifecycle: design → build → deploy → monitor.
  • Ensure performance, reliability, and security.


Requirements :

Must-Have :

  • 1–6+ years of full-stack experience.
  • Product-based company background.
  • Strong DSA + problem-solving skills.
  • Proficiency in either frontend or backend with familiarity in both.
  • Hands-on experience with APIs, microservices, Git, CI/CD, cloud.
  • Strong communication & ownership mindset.

Good-to-Have :

  • Experience with containers, system design, observability tools.

Interview Process :

  1. Coding Round : DSA + problem solving
  2. System Design : LLD + HLD, scalability, microservices
  3. CTO Round : Technical deep dive + cultural fit
Gemba Concepts
Posted by Ariba Khan
Bengaluru (Bangalore)
4 - 6 yrs
Upto ₹22L / yr (Varies)
Java
Spring Boot
Kubernetes
Microservices
Angular (2+)

About Gemba Concepts:

Gemba Concepts is an innovative IT solutions provider helping organizations transform digitally with scalable and reliable software solutions. We focus on delivering business value through cutting-edge technologies, quality engineering, and a strong culture of collaboration.

Role Overview:

We are seeking a Full Stack Developer (Angular + Java) with 4–6 years of experience to join our dynamic development team. The role requires strong expertise in frontend development with Angular and backend development with Java/Spring Boot, along with good knowledge of modern software engineering practices.


Key Responsibilities:

  • Design, develop, and maintain web applications using Angular (frontend) and Java/Spring Boot (backend).
  • Collaborate with Team Lead, QA, and DevOps to deliver high-quality software within timelines.
  • Experience working with RESTful services, authentication and state management.
  • Build responsive, scalable, and secure applications following best practices.
  • Write clean, efficient, and reusable code while adhering to coding standards.
  • Troubleshoot and debug issues across the application stack.
  • Work with DevOps team on application deployment and CI/CD pipelines.
  • Contribute to knowledge sharing, documentation, and process improvement.


Required Skills & Qualifications:

  • Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience).
  • 4–6 years of professional experience as a Full Stack Developer.
  • Strong expertise in Angular 12+ (TypeScript, RxJS, HTML5, CSS3, PrimeNG, JavaScript, IONIC, SCSS).
  • Solid experience in Java, Spring Boot, REST APIs, JPA/Hibernate.
  • Proficiency in relational databases (MySQL, PostgreSQL, or Oracle).
  • Good understanding of Microservices Architecture.
  • Experience with Git, CI/CD tools (Jenkins, GitLab CI, GitHub Actions, or Azure DevOps).
  • Strong problem-solving and debugging skills.
  • Excellent communication and teamwork abilities.


Good to Have:

  • Exposure to cloud platforms (AWS, Azure, or GCP).
  • Familiarity with Docker and Kubernetes.
  • Knowledge of testing frameworks (JUnit, Jasmine/Karma).
  • Experience in Agile/Scrum development environments.


What We Offer:

  • Opportunity to work on challenging and impactful projects.
  • Collaborative team culture with mentorship and career growth.
  • Competitive compensation and benefits.
  • Learning and upskilling opportunities in the latest technologies.
Watsoo Express
Gurgaon, Udyog Vihar Phase 5
6 - 10 yrs
₹9L - ₹11L / yr
Docker
Kubernetes
Helm
CI/CD
GitHub
+9 more

Profile: Sr. DevOps Engineer

Location: Gurugram

Experience: 05+ Years

Notice Period: Immediate to 1 week

Company: Watsoo

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 5+ years of proven hands-on DevOps experience.
  • Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
  • Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
  • Hands-on experience with cloud platforms (AWS, Azure, or GCP).
  • Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
  • Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
  • Proficiency in scripting languages (Python, Bash, or Shell).
  • Knowledge of networking, security, and system administration.
  • Strong problem-solving skills and ability to work in fast-paced environments.
  • Troubleshoot production issues, perform root cause analysis, and implement preventive measures.

Advocate DevOps best practices, automation, and continuous improvement

Bengaluru (Bangalore)
6 - 10 yrs
₹10L - ₹15L / yr
Android Development
Docker
Kubernetes
Kotlin
Leadership
+1 more

Experience: 5-8 years of professional experience in software engineering, with a strong background in developing and deploying scalable applications.

● Technical Skills:

○ Architecture: Demonstrated experience in architecture/system design for scale, preferably as a digital public good.

○ Full Stack: Extensive experience with full-stack development, including mobile app development and backend technologies.

○ App Development: Hands-on experience building and launching mobile applications, preferably for Android.

○ Cloud Infrastructure: Familiarity with cloud platforms and containerization technologies (Docker, Kubernetes).

○ (Bonus) ML Ops: Proven experience with ML Ops practices and tools.

● Soft Skills:

○ Experience in hiring team members.

○ A proactive and independent problem-solver, comfortable working in a fast-paced environment.

○ Excellent communication and leadership skills, with the ability to mentor junior engineers.

○ A strong desire to use technology for social good.


Preferred Qualifications

● Experience working in a startup or smaller team environment.

● Familiarity with the healthcare or public health sector.

● Experience in developing applications for low-resource environments.

● Experience with data management in privacy and security-sensitive applications.

AI Industry
Agency job via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 12 yrs
₹20L - ₹46L / yr
Data Science
Artificial Intelligence (AI)
Machine Learning (ML)
Generative AI
Deep Learning
+14 more

Review Criteria

  • Strong Senior Data Scientist (AI/ML/GenAI) Profile
  • 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
  • Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
  • 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
  • Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows

 

Preferred

  • Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
  • Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • The virtual interview requires video to be on; are you okay with that?


Role & Responsibilities

The company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems, including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.

 

Responsibilities:

  • Own the full ML lifecycle: model design, training, evaluation, deployment
  • Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
  • Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
  • Build agentic workflows for reasoning, planning, and decision-making
  • Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
  • Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
  • Collaborate with product and engineering teams to integrate AI models into business applications
  • Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
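
To illustrate the RAG building block in its simplest form, the sketch below embeds a few documents, retrieves by cosine similarity, and assembles a grounded prompt; the embedding model, sample documents, and the omission of the actual LLM call are all assumptions for brevity (a production pipeline would add chunking, a vector store such as Weaviate or PGVector, and generation).

```python
import numpy as np
from sentence_transformers import SentenceTransformer

DOCS = [
    "Refunds are processed within 5 business days.",
    "Premium users get 24/7 phone support.",
    "Passwords must be rotated every 90 days.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = model.encode(DOCS, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    return [DOCS[i] for i in np.argsort(-scores)[:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # In a real pipeline this prompt would be passed to a fine-tuned or hosted LLM.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How fast are refunds?"))
```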


Ideal Candidate

  • 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
  • Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
  • Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
  • Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
  • Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
  • Strong software engineering background with experience in testing, version control, and APIs
  • Proven ability to balance innovation with scalable deployment
  • B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
  • Bonus: Open-source contributions, GenAI research, or applied systems at scale


Tops Infosolutions
Posted by Zurin Momin
Ahmedabad
5 - 12 yrs
₹9.5L - ₹14L / yr
Laravel
Javascript
DevOps
CI/CD
Kubernetes
+2 more

Job Title: DevOps Engineer


Job Description: We are seeking an experienced DevOps Engineer to support our Laravel, JavaScript (Node.js, React, Next.js), and Python development teams. The role involves building and maintaining scalable CI/CD pipelines, automating deployments, and managing cloud infrastructure to ensure seamless delivery across multiple environments.


Responsibilities:

Design, implement, and maintain CI/CD pipelines for Laravel, Node.js, and Python projects.

Automate application deployment and environment provisioning using AWS and containerization tools.

Manage and optimize AWS infrastructure (EC2, ECS, RDS, S3, CloudWatch, IAM, Lambda).

Implement Infrastructure as Code (IaC) using Terraform or AWS CloudFormation. Manage configuration automation using Ansible.

Build and manage containerized environments using Docker (Kubernetes is a plus).

Monitor infrastructure and application performance using CloudWatch, Prometheus, or Grafana.

Ensure system security, data integrity, and high availability across environments.

Collaborate with development teams to streamline builds, testing, and deployments.

Troubleshoot and resolve infrastructure and deployment-related issues.
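
As an illustration of the CloudWatch monitoring mentioned above, a basic CPU alarm can be created with boto3 as sketched below; the instance ID, thresholds, and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when average EC2 CPU stays above 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-app-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```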


Required Skills:

AWS (EC2, ECS, RDS, S3, IAM, Lambda)

CI/CD Tools: Jenkins, GitLab CI/CD, AWS CodePipeline, CodeBuild, CodeDeploy

Infrastructure as Code: Terraform or AWS CloudFormation

Configuration Management: Ansible

Containers: Docker (Kubernetes preferred)

Scripting: Bash, Python

Version Control: Git, GitHub, GitLab

Web Servers: Apache, Nginx (preferred)

Databases: MySQL, MongoDB (preferred)


Qualifications:

3+ years of experience as a DevOps Engineer in a production environment.

Proven experience supporting Laravel, Node.js, and Python-based applications.

Strong understanding of CI/CD, containerization, and automation practices.

Experience with infrastructure monitoring, logging, and performance optimization.

Familiarity with agile and collaborative development processes.
