50+ Docker Jobs in India
At BigThinkCode, our technology solves complex problems. We are looking for a talented Cloud DevOps Engineer to join our Cloud Infrastructure team in Chennai.
Our job description is below; if you are interested, apply or reply with your profile to connect and discuss.
Company: BigThinkCode Technologies
URL: https://www.bigthinkcode.com/
Role: DevOps Engineer
Experience required: 2–3 years
Work location: Chennai
Joining time: Immediate – 4 weeks
Work Mode: Work from office (Hybrid)
About the Role:
We are looking for a DevOps Engineer with 2+ years of hands-on experience to support our infrastructure, CI/CD pipelines, and cloud environments. The candidate will work closely with senior DevOps engineers and development teams to ensure reliable deployments, system stability, and operational efficiency. The ideal candidate should be eager to learn new tools and technologies as required by the project.
Key Responsibilities:
· Assist in designing, implementing, and maintaining CI/CD pipelines using Jenkins, GitHub Actions, or similar tools
· Deploy, manage, and monitor applications on AWS cloud environments
· Manage and maintain Linux servers (Ubuntu/CentOS)
· Support Docker-based application deployments and container lifecycle management
· Work closely with developers to troubleshoot build, deployment, and runtime issues
· Assist in implementing security best practices, including IAM, secrets management, and basic system hardening
· Document processes, runbooks, and standard operating procedures
Core Requirements:
· 2–3 years of experience in a DevOps / Cloud / Infrastructure role
· Core understanding of DevOps principles and cloud computing
· Strong understanding of Linux fundamentals
· Hands-on experience with Docker for application deployment and container management
· Working knowledge of CI/CD tools, especially Jenkins and GitHub Actions
· Experience with AWS services, including EC2, IAM, S3, VPC, RDS, and other basic services.
· Good understanding of networking concepts, including:
  o DNS, ports, firewalls, security groups
  o Load balancing basics
· Experience working with web servers such as Nginx or Apache
· Understanding of SSL/TLS certificates and HTTPS configuration
· Basic knowledge of databases (PostgreSQL/MySQL) from an operational perspective
· Ability to troubleshoot deployment, server, and application-level issues
· Willingness and ability to learn new tools and technologies as required for project needs
Nice to Have
· Experience with monitoring and logging tools such as Prometheus, Grafana, CloudWatch, ELK.
· Basic experience or understanding of Kubernetes.
· Basic Python or shell scripting to automate routine operational tasks.
Why Join Us:
· Collaborative work environment.
· Exposure to modern tools and scalable application architectures.
· Medical cover for employee and eligible dependents.
· Tax-beneficial salary structure.
· Comprehensive leave policy
· Competency development training programs.
Hiring: Hyperledger Fabric Developer
📍 Location: Gurugram
🏢 Work Mode: Onsite
📅 Working Days: Monday to Saturday (6 Days)
🔹 Role Overview
Join our team to build and maintain secure, scalable private blockchain networks. You’ll play a key role in deployments, upgrades, and live network management, ensuring high performance and stability.
🔹 Key Responsibilities
- Develop and maintain chaincode and Fabric network components
- Set up and manage Fabric 1.4 private networks (Peers, Orderers, Channels, MSPs, CAs)
- Support Fabric 2.x lifecycle processes (packaging, approvals, upgrades)
- Handle production environments (monitoring, troubleshooting, performance tuning)
- Add and onboard new peers into live networks with minimal/no downtime
- Collaborate with DevOps for automation, security, and deployments
🔹 Must-Have Skills
- Strong hands-on experience with Hyperledger Fabric 1.4
- Experience in end-to-end deployment of private networks
- Exposure to Fabric 2.x environments
- Experience working in live production setups
- Deep understanding of:
  - MSP updates & certificate handling
  - Channel configuration & peer onboarding
  - Anchor peers & endorsement policies
🔹 Good-to-Have
- Experience with Docker, Kubernetes, Helm, CI/CD pipelines
- Knowledge of Fabric CA, TLS, and network security
- Strong debugging skills across chaincode and network logs
🔹 Preferred Skills
- Proficiency in Go (Golang) for chaincode
- Linux, scripting, and Git
Job Details
- Job Title: Lead Software Engineer - Java, Python, API Development
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 8-10 years
- Employment Type: Full Time
- Job Location: Pune & Trivandrum/ Thiruvananthapuram
- CTC Range: Best in Industry
Job Description
Job Summary
We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using Java and Python
- Build and optimize Java-based APIs for large-scale data processing
- Ensure high performance, scalability, and reliability of backend systems
- Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
- Collaborate with cross-functional teams to deliver production-ready solutions
- Lead technical design discussions and guide best practices
Requirements
- 8+ years of experience in backend software development
- Strong proficiency in Java and Python
- Proven experience building scalable APIs and data-driven applications
- Hands-on experience with cloud services and distributed systems
- Solid understanding of databases, microservices, and API performance optimization
Nice to Have
- Experience with Spring Boot, Flask, or FastAPI
- Familiarity with Docker, Kubernetes, and CI/CD pipelines
- Exposure to Kafka, Spark, or other big data tools
Skills
Java, Python, API Development, Data Processing, AWS Backend
Must-Haves
Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices
Mandatory Skills: Java, API Development, and AWS
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Pune, Trivandrum
🚀 Job Title : Backend Engineer (Go / Python / Java)
Experience : 3+ Years
Location : Bangalore (Client Location – Work From Office)
Notice Period : Immediate to 15 Days
Open Positions : 4
Working Days : 6 Days a Week
🧠 Job Summary :
We are looking for a highly skilled Backend Engineer to build scalable, reliable, and high-performance systems in a fast-paced product environment.
You will own large features end-to-end — from design and development to deployment and monitoring — while collaborating closely with product, frontend, and infrastructure teams.
This role requires strong backend fundamentals, distributed systems exposure, and a mindset of operational ownership.
⭐ Mandatory Skills :
Strong backend development experience in Go / Python (FastAPI) / Java (Spring Boot) with hands-on expertise in Microservices, REST APIs, PostgreSQL, Redis, Kafka/SQS, AWS/GCP, Docker, Kubernetes, CI/CD, and strong DSA & System Design fundamentals.
🔧 Key Responsibilities :
- Design, develop, test, and deploy backend services end-to-end.
- Build scalable, modular, and production-grade microservices.
- Develop and maintain RESTful APIs.
- Architect reliable distributed systems with performance and fault tolerance in mind.
- Debug complex cross-system production issues.
- Implement secure development practices (authentication, authorization, data integrity).
- Work with monitoring dashboards, alerts, and performance metrics.
- Participate in code reviews and enforce engineering best practices.
- Contribute to CI/CD pipelines and release processes.
- Collaborate with product, frontend, and DevOps teams.
✅ Required Skills :
- Strong proficiency in Go OR Python (FastAPI) OR Java (Spring Boot).
- Hands-on experience building Microservices-based architectures.
- Strong understanding of REST APIs & distributed systems.
- Experience with PostgreSQL and Redis.
- Exposure to Kafka / SQS or other messaging systems.
- Hands-on experience with AWS or GCP.
- Experience with Docker and Kubernetes.
- Familiarity with CI/CD pipelines.
- Strong knowledge of Data Structures & System Design.
- Ability to independently own features and solve ambiguous engineering problems.
⭐ Preferred Background :
- Experience in product-based companies.
- Exposure to high-throughput or event-driven systems.
- Strong focus on code quality, observability, and reliability.
- Comfortable working in high-growth, fast-paced environments.
🧑💻 Interview Process :
- 1 Internal Screening Round
- HR Discussion (Project & Communication Evaluation)
- 3 Technical Rounds with Client
This is a fresh requirement, and interviews will be scheduled immediately.
About Us:
Join a dynamic startup environment where you will work in a small, agile team closely with the founders. This is a great opportunity to be part of an innovative company building impactful tools.
Responsibilities:
- Develop and maintain server-side applications using Node.js
- Build RESTful APIs and internal tools
- Integrate third-party services and AI platforms, including Agentic AI SDK
- Manage and optimize databases (MongoDB/PostgreSQL)
- Collaborate with front-end developers to integrate user-facing elements
- Write unit tests and ensure high code quality
- Handle deployment and maintenance on VPS
- Use Docker and Git for development and deployment workflows
- Implement data validation using Zod, keeping schemas self-documenting so documentation stays consistent with code logic
- Develop real-time features using WebSockets
Nice-to-Have:
- Experience with Redis, BullMQ, and GraphQL
- Familiarity with CI/CD pipelines
- Exposure to front-end frameworks
- Knowledge of AWS or other cloud platforms
- Experience working with AI platforms and SDKs, especially Agentic AI SDK
Hiring for Azure DevOps Engineer
Exp : 5 - 9 yrs
Edu : BE/B.Tech
Work Location : Noida WFO
Notice Period : Immediate - 15 days
Skills :
5+ years of hands-on experience in Azure DevOps, cloud deployment, and security.
Strong expertise in designing and implementing CI/CD pipelines using Azure DevOps or similar tools (Jenkins, GitLab CI/CD).
Experience with Azure services such as Azure App Service, AKS, Azure SQL, Azure AD, and networking components (VNet, NSG).
Solid understanding of Infrastructure as Code (IaC) using Terraform, ARM templates, or Azure Bicep.
Experience with containerization technologies like Docker and Kubernetes (AKS preferred).
Knowledge of DevSecOps practices and integrating security into CI/CD pipelines.
Proficiency in scripting and automation using PowerShell, Bash, or Python.
Familiarity with monitoring and logging tools such as Azure Monitor, Log Analytics, Application Insights, Prometheus, Grafana, or ELK stack.
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.
Roles and Responsibilities:
● Design, implement, and manage CI/CD pipelines for multiple environments
● Automate infrastructure provisioning using Infrastructure as Code tools
● Manage and optimize cloud infrastructure on AWS, Azure, or GCP
● Monitor system performance, availability, and security
● Implement logging, monitoring, and alerting solutions
● Collaborate with development teams to streamline release processes
● Troubleshoot production issues and ensure high availability
● Implement containerization and orchestration solutions such as Docker and Kubernetes
● Enforce DevOps best practices across the engineering lifecycle
● Ensure security compliance and data protection standards are maintained
Requirements:
● 4 to 7 years of experience in DevOps or Site Reliability Engineering
● Strong experience with cloud platforms such as AWS, Azure, or GCP; relevant certifications are a great advantage
● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
● Experience working in microservices architecture
● Exposure to DevSecOps practices
● Experience in cost optimization and performance tuning in cloud environments
● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM
● Strong knowledge of containerization using Docker
● Experience with Kubernetes in production environments
● Good understanding of Linux systems and shell scripting
● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog
● Strong troubleshooting and debugging skills
● Understanding of networking concepts and security best practices
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
• Strong hands-on experience with AWS services.
• Expertise in Terraform and IaC principles.
• Experience building CI/CD pipelines and working with Git.
• Proficiency with Docker and Kubernetes.
• Solid understanding of Linux administration, networking fundamentals, and IAM.
• Familiarity with monitoring and observability tools (CloudWatch, Prometheus, Grafana, ELK, Datadog).
• Knowledge of security and compliance tools (Trivy, SonarQube, Checkov, Snyk).
• Scripting experience in Bash, Python, or PowerShell.
• Exposure to GCP, Azure, or multi-cloud architectures is a plus.
Responsibilities:
• End-to-end design, development, and deployment of enterprise-grade AI solutions leveraging Azure AI, Google Vertex AI, or comparable cloud platforms.
• Architect and implement advanced AI systems, including agentic workflows, LLM integrations, MCP-based solutions, RAG pipelines, and scalable microservices.
• Oversee the development of Python-based applications, RESTful APIs, data processing pipelines, and complex system integrations.
• Define and uphold engineering best practices, including CI/CD automation, testing frameworks, model evaluation procedures, observability, and operational monitoring.
• Partner closely with product owners and business stakeholders to translate requirements into actionable technical designs, delivery plans, and execution roadmaps.
• Provide hands-on technical leadership, conducting code reviews, offering architectural guidance, and ensuring adherence to security, governance, and compliance standards.
• Communicate technical decisions, delivery risks, and mitigation strategies effectively to senior leadership and cross-functional teams.
About Company:
Snapsight is an AI-powered platform that delivers real-time event summaries in 75+ languages. We work with conferences worldwide and won the 2024 Skift Award for Most Innovative Event Tech. We're an early-stage startup scaling fast.
Join us if you want to become part of a vibrant and fast-moving product company that's on a mission to connect people around the world through events.
Location: Remote/Work From Home
What you'll be doing:
- Writing reusable, testable, and efficient code in Node.js for back-end services.
- Ensuring optimal and high-performance code logic for the data from/to the database.
- Collaborating with front-end developers on the integrations.
- Implementing effective security protocols, data protection measures, and storage solutions.
- Preparing technical specification documents for the developed features.
- Providing technical recommendations and suggesting improvements to the product.
- Writing unit test cases for APIs.
- Documenting code standards and practicing it.
- Staying updated on the advancements in the field of Node.js development.
- Staying open to new challenges and comfortable taking on new exploration tasks.
Skills:
- 3-5 years of strong proficiency in Node.js and its core principles.
- Experience in test-driven development.
- Experience with NoSQL databases like MongoDB is required
- Experience with MySQL database
- RESTful/GraphQL API design and development
- Docker and AWS experience is a plus
- Extensive knowledge of JavaScript, PHP, web stacks, libraries, and frameworks.
- Strong interpersonal, communication, and collaboration skills.
- Exceptional analytical and problem-solving aptitude
- Experience with a version control system like Git
- Knowledge about the Software Development Life Cycle Model, secure development best practices and standards, source control, code review, build and deployment, continuous integration
The Senior Developer will work on core product features, uphold architectural standards, and collaborate closely with the Team Leader to deliver high-quality, scalable solutions.
Responsibilities:
- Develop and maintain major product features
- Follow technical guidance from the Team Leader
- Ensure adherence to coding standards and system architecture
- Perform thorough testing of assigned tasks
Required Skills:
- Strong expertise in Next.js, React.js, Express.js, and PostgreSQL
- Solid understanding of system design, clean code practices, and performance optimization
JOB DETAILS:
* Job Title: Java Lead-Java, MS, Kafka-TVM - Java (Core & Enterprise), Spring/Micronaut, Kafka
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 9 to 12 years
* Location: Trivandrum, Thiruvananthapuram
Job Description
Experience
- 9+ years of experience in Java-based backend application development
- Proven experience building and maintaining enterprise-grade, scalable applications
- Hands-on experience working with microservices and event-driven architectures
- Experience working in Agile and DevOps-driven development environments
Mandatory Skills
- Advanced proficiency in core Java and enterprise Java concepts
- Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
- Strong expertise in SQL, including database design, query optimization, and performance tuning
- Hands-on experience with PostgreSQL or other relational database management systems
- Strong experience with Kafka or similar event-driven messaging and streaming platforms
- Practical knowledge of CI/CD pipelines using GitLab
- Experience with Jenkins for build automation and deployment processes
- Strong understanding of GitLab for source code management and DevOps workflows
Responsibilities
- Design, develop, and maintain robust, scalable, and high-performance backend solutions
- Develop and deploy microservices using Spring or Micronaut frameworks
- Implement and integrate event-driven systems using Kafka
- Optimize SQL queries and manage PostgreSQL databases for performance and reliability
- Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
- Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
- Ensure code quality through best practices, reviews, and automated testing
Good-to-Have Skills
- Strong problem-solving and analytical abilities
- Experience working with Agile development methodologies such as Scrum or Kanban
- Exposure to cloud platforms such as AWS, Azure, or GCP
- Familiarity with containerization and orchestration tools such as Docker or Kubernetes
Skills: Java, Spring Boot, Kafka development, CI/CD, PostgreSQL, GitLab
Must-Haves
Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)
*******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: only Trivandrum
F2F Interview on 21st Feb 2026
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position is part of a healthcare division of Focus Group specializing in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills
Role Overview
We are hiring a Principal Datacenter Backend Developer to architect and build highly scalable, reliable backend platforms for modern data centers. This role owns control-plane and data-plane services powering orchestration, monitoring, automation, and operational intelligence across large-scale on-prem, hybrid, and cloud data center environments.
This is a hands-on principal IC role with strong architectural ownership and technical leadership responsibilities.
Key Responsibilities
- Own end-to-end backend architecture for datacenter platforms (orchestration, monitoring, DCIM, automation).
- Design and build high-availability distributed systems at scale.
- Develop backend services using Java (Spring Boot / Micronaut / Quarkus) and/or Python (FastAPI / Flask / Django).
- Build microservices for resource orchestration, telemetry ingestion, capacity and asset management.
- Design REST/gRPC APIs and event-driven systems.
- Drive performance optimization, scalability, and reliability best practices.
- Embed SRE principles, observability, and security-by-design.
- Mentor senior engineers and influence technical roadmap decisions.
Required Skills
- Strong hands-on experience in Java and/or Python.
- Deep understanding of distributed systems and microservices.
- Experience with Kubernetes, Docker, CI/CD, and cloud-native deployments.
- Databases: PostgreSQL/MySQL, NoSQL, time-series data.
- Messaging systems: Kafka / Pulsar / RabbitMQ.
- Observability tools: Prometheus, Grafana, ELK/OpenSearch.
- Secure backend design (OAuth2, RBAC, audit logging).
Nice to Have
- Experience with DCIM, NMS, or infrastructure automation platforms.
- Exposure to hyperscale or colocation data centers.
- AI/ML-based monitoring or capacity planning experience.
Why Join
- Architect mission-critical platforms for large-scale data centers.
- High-impact principal role with deep technical ownership.
- Work on complex, real-world distributed systems problems.
About the role:
We are looking for a Staff Site Reliability Engineer who can operate at a staff level across multiple teams and clients. If you care about designing reliable platforms, influencing system architecture, and raising reliability standards across teams, you’ll enjoy working at One2N.
At One2N, you will work with our startups and enterprise clients, solving One-to-N scale problems where the proof of concept is already established and the focus is on scalability, maintainability, and long-term reliability. In this role, you will drive reliability, observability, and infrastructure architecture across systems, influencing design decisions, defining best practices, and guiding teams to build resilient, production-grade systems.
Key responsibilities:
- Own and drive reliability and infrastructure strategy across multiple products or client engagements
- Design and evolve platform engineering and self-serve infrastructure patterns used by product engineering teams
- Lead architecture discussions around observability, scalability, availability, and cost efficiency.
- Define and standardize monitoring, alerting, SLOs/SLIs, and incident management practices.
- Build and review production-grade CI/CD and IaC systems used across teams
- Act as an escalation point for complex production issues and incident retrospectives.
- Partner closely with engineering leads, product teams, and clients to influence system design decisions early.
- Mentor junior engineers through design reviews, technical guidance, and best practices.
- Improve Developer Experience (DX) by reducing cognitive load, toil, and operational friction.
- Help teams mature their on-call processes, reliability culture, and operational ownership.
- Stay ahead of trends in cloud-native infrastructure, observability, and platform engineering, and bring relevant ideas into practice
About you:
- 9+ years of experience in SRE, DevOps, or software engineering roles
- Strong experience designing and operating Kubernetes-based systems on AWS at scale
- Deep hands-on expertise in observability and telemetry, including tools like OpenTelemetry, Datadog, Grafana, Prometheus, ELK, Honeycomb, or similar.
- Proven experience with infrastructure as code (Terraform, Pulumi) and cloud architecture design.
- Strong understanding of distributed systems, microservices, and containerized workloads.
- Ability to write and review production-quality code (Golang, Python, Java, or similar)
- Solid Linux fundamentals and experience debugging complex system-level issues
- Experience driving cross-team technical initiatives.
- Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
- Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
Nice to have:
- Experience working in consulting or multi-client environments.
- Exposure to cost optimization or large-scale AWS account management.
- Experience building internal platforms or shared infrastructure used by multiple teams.
- Prior experience influencing or defining engineering standards across organizations.
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
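The large-file step in the responsibilities above is mechanical enough to sketch. The snippet below is a minimal, hypothetical illustration (not part of the posting) of how a migration script might flag files over GitHub's 100 MB push limit for Git LFS tracking before running the git-p4 import; the helper names and per-file tracking are illustrative assumptions — in practice patterns are usually grouped by extension.

```python
import os

# GitHub rejects individual files larger than 100 MB, so a Perforce
# workspace is typically scanned for such files before migration and
# the offenders moved to Git LFS. Threshold and helpers are illustrative.
GITHUB_LIMIT = 100 * 1024 * 1024  # bytes

def find_lfs_candidates(root, threshold=GITHUB_LIMIT):
    """Walk a checked-out workspace and return relative paths over the limit."""
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > threshold:
                candidates.append(os.path.relpath(path, root))
    return sorted(candidates)

def lfs_track_commands(paths):
    """Emit the `git lfs track` invocations for the flagged files."""
    return ["git lfs track '{}'".format(p) for p in paths]
```

A real migration would run these commands (and `git lfs migrate` for history already committed) before pushing to GitHub Enterprise.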
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
JOB DETAILS:
* Job Title: Head of Engineering/Senior Product Manager
* Industry: Digital transformation excellence provider
* Salary: Best in Industry
* Experience: 12-20 years
* Location: Mumbai
Job Description
Role Overview
The VP / Head of Technology will lead the company's technology function across engineering, product development, cloud infrastructure, security, and AI-led initiatives. This role focuses on delivering scalable, high-quality technology solutions across the company's core verticals, including eCommerce, Procurement & e-Sourcing, ERP integrations, Sustainability/ESG, and Business Services.
This leader will drive execution, ensure technical excellence, modernize platforms, and collaborate closely with business and delivery teams.
Roles and Responsibilities:
Technology Execution & Architecture Leadership
· Own and execute the technology roadmap aligned with business goals.
· Build and maintain scalable architecture supporting multiple verticals.
· Enforce engineering best practices, code quality, performance, and security.
· Lead platform modernization including microservices, cloud-native architecture, API-first systems, and integration frameworks.
Product & Engineering Delivery
· Manage multi-product engineering teams across eCommerce platforms, procurement systems, ERP integrations, analytics, and ESG solutions.
· Own the full SDLC — requirements, design, development, testing, deployment, support.
· Implement Agile, DevOps, CI/CD for faster releases and improved reliability.
· Oversee product/platform interoperability across all company systems.
Vertical-Specific Technology Leadership
Procurement Tech:
· Lead architecture and enhancements of procurement and indirect spend platforms.
· Ensure interoperability with SAP Ariba, Coupa, Oracle, MS Dynamics, etc.
eCommerce:
· Drive development of scalable B2B/B2C commerce platforms, headless commerce, marketplace integrations, and personalization capabilities.
Sustainability/ESG:
· Support development of GHG tracking, reporting systems, and sustainability analytics platforms.
Business Services:
· Enhance operational platforms with automation, workflow management, dashboards, and AI-driven efficiency tools.
Data, Cloud, Security & Infrastructure
· Own cloud infrastructure strategy (Azure/AWS/GCP).
· Ensure adherence to compliance standards (SOC2, ISO 27001, GDPR).
· Lead cybersecurity policies, monitoring, threat detection, and recovery planning.
· Drive observability, cost optimization, and system scalability.
AI, Automation & Innovation
· Integrate AI/ML, analytics, and automation into product platforms and service delivery.
· Build frameworks for workflow automation, supplier analytics, personalization, and operational efficiency.
· Lead R&D for emerging tech aligned to business needs.
Leadership & Team Management
· Lead and mentor engineering managers, architects, developers, QA, and DevOps.
· Drive a culture of ownership, innovation, continuous learning, and performance accountability.
· Build capability development frameworks and internal talent pipelines.
Stakeholder Collaboration
· Partner with Sales, Delivery, Product, and Business Teams to align technology outcomes with customer needs.
· Ensure transparent reporting on project status, risks, and technology KPIs.
· Manage vendor relationships, technology partnerships, and external consultants.
Education, Training, Skills, and Experience Requirements:
Experience & Background
· 16+ years in technology execution roles, including 5–7 years in senior leadership.
· Strong background in multi-product engineering for B2B platforms or enterprise systems.
· Proven delivery experience across: eCommerce, ERP integrations, procurement platforms, ESG solutions, and automation.
Technical Skills
· Expertise in cloud platforms (Azure/AWS/GCP), microservices architecture, API frameworks.
· Strong grasp of procurement tech, ERP integrations, eCommerce platforms, and enterprise-scale systems.
· Hands-on exposure to AI/ML, automation tools, data engineering, and analytics stacks.
· Strong understanding of security, compliance, scalability, performance engineering.
Leadership Competencies
· Execution-focused technology leadership.
· Strong communication and stakeholder management skills.
· Ability to lead distributed teams, manage complexity, and drive measurable outcomes.
· Innovation mindset with practical implementation capability.
Education
· Bachelor’s or Master’s in Computer Science/Engineering or equivalent.
· Additional leadership education (MBA or similar) is a plus, not mandatory.
Travel Requirements
· Occasional travel for client meetings, technology reviews, or global delivery coordination.
Must-Haves
· 10+ years of technology experience, with at least 6 years leading large (50-100+) multi-product engineering teams.
· Must have worked on B2B platforms, with experience in Procurement Tech or Supply Chain.
· Min. 10+ years of expertise in cloud-native architecture: expert-level design in Azure, AWS, or GCP using microservices, Kubernetes (K8s), and Docker.
· Min. 8+ years of expertise in modern engineering practices: advanced DevOps, CI/CD pipelines, and automated testing frameworks (Selenium, Cypress, etc.).
· Hands-on leadership experience in Security & Compliance.
· Min. 3+ years of expertise in AI & data engineering: practical implementation of LLMs, predictive analytics, or AI-driven automation.
· Strong technology execution leadership, with ownership of end-to-end technology roadmaps aligned to business outcomes.
· Min. 6+ years of expertise in B2B eCommerce: architecture of headless commerce, marketplace integrations, and complex B2B catalog management.
· Strong product management exposure
· Proven experience in leading end-to-end team operations
· Relevant experience in product-driven organizations or platforms
· Strong Subject Matter Expertise (SME)
Education: Master's degree.
**************
Joining time / Notice Period: Immediate - 45 days.
Location: Andheri
5 days working (3 days' work from office, 2 from home)
💼 Job Title: Full Stack Developer (experienced only)
🏢 Company: SDS Softwares
💻 Location: Work from Home
💸 Salary range: ₹10,000 - ₹15,000 per month (based on knowledge and interview)
🕛 Shift Timings: 12 PM to 9 PM (5 days working)
About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.
⚜️ Key Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain high-quality web applications (frontend + backend )
- Troubleshoot and debug applications to ensure peak performance.
- Participate in code reviews and contribute to the team’s knowledge base.
⚜️ Required Skills:
- Proficiency in HTML, CSS, JavaScript, Redux, and React.js for front-end development. ✅
- Understanding of server-side languages such as Node.js or Python. ✅
- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL. ✅
- Basic knowledge of version control systems, particularly Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and a team-oriented mindset.
💠 Qualifications:
- Individuals with 1 to 2 years of full-time work experience in software development.
- Must have a personal laptop and stable internet connection.
- Ability to join immediately is preferred.
If you are passionate about coding and eager to learn, we would love to hear from you. 👍
- 3+ years hands-on Azure cloud & automation experience.
- Experience managing high-availability enterprise systems.
- Microsoft Azure (AKS, VNets, App Gateway, Load Balancers).
- Kubernetes (AKS) & Docker.
- Networking (VPN, DNS, routing, firewalls, NSGs).
- Infra-as-Code (Terraform / Bicep optional).
- Monitoring tools: Azure Monitor, Grafana, Prometheus.
- CI/CD: Azure DevOps, GitLab/Jenkins (added advantage).
- Security: Key Vault, certificates, encryption, RBAC.
- Understanding of PostgreSQL/PostGIS networking.
- Design and manage Azure infrastructure (VMs, VNets, NSGs, Load Balancers, AKS, Storage).
- Deploy and maintain AKS workloads for NiFi, PostGIS, and microservices.
- Architect secure network topology including VNet peering, VPNs, Private Endpoints, DNS & Zero Trust policies.
- Implement monitoring and alerting using Azure Monitor, Log Analytics, Grafana & Prometheus.
- Ensure high uptime, DR planning, backup and failover strategies.
- Automate deployments with Azure DevOps, Helm, ArgoCD & GitOps principles.
- Enforce security, RBAC, compliance, and audit standards across environments.
- Good to have: knowledge/experience in Linux administration (Ubuntu/Debian).
Job Details
- Job Title: Specialist I - Software Engineering-.Net Fullstack Lead-TVM
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 5-9 years
- Employment Type: Full Time
- Job Location: Trivandrum, Thiruvananthapuram
- CTC Range: Best in Industry
Job Description
· Senior/Lead .NET developer with a minimum of 5 years' experience, including the full development lifecycle and post-live support.
· Significant experience delivering software using Agile iterative delivery methodologies.
· JIRA knowledge preferred.
· Excellent ability to understand requirement/story scope and visualise technical elements required for application solutions.
· Ability to clearly articulate complex problems and solutions in terms that others can understand.
· Lots of experience working with .Net backend API development.
· Significant experience of pipeline design, build and enhancement to support release cadence targets, including Infrastructure as Code (preferably Terraform).
· Strong understanding of HTML and CSS, including cross-browser compatibility and performance.
· Excellent knowledge of unit and integration testing techniques.
· Azure knowledge (Web/Container Apps, Azure Functions, SQL Server).
· Kubernetes / Docker knowledge. Knowledge of JavaScript UI frameworks, ideally Vue.
· Extensive experience with source control (preferably Git).
· Strong understanding of RESTful services (JSON) and API Design.
· Broad knowledge of Cloud infrastructure (PaaS, DBaaS).
· Experience of mentoring and coaching engineers operating within a co-located environment.
Skills: .Net Fullstack, Azure Cloudformation, Javascript, Angular
Must-Haves:
.Net (5+ years), Agile methodologies, RESTful API design, Azure (Web/Container Apps, Functions, SQL Server), Git source control
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum
F2F Weekend Interview on 14th Feb 2026
Role Overview:
Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- Minimum of 4 years of experience in a web-application-centric client-server development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Able to understand integrated infrastructure platforms, with experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
- Minimum of 4 years of development experience in a high-level language such as Python, Java, or Go.
- Bachelor's (B.E., B.Tech) or Master's degree (M.E., M.Tech, MCA) in Computer Science, Computer Engineering, or equivalent.
- 2 years of development experience in a public cloud environment using Kubernetes and similar technologies (Google Cloud and/or AWS).
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana:
Virtana delivers the industry's broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana's software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) software market is ripe for disruption, and Virtana is uniquely positioned for success.
About us:
Trential is engineering the future of digital identity with W3C Verifiable Credentials—secure, decentralized, privacy-first. We make identity and credentials verifiable anywhere, instantly.
We are looking for a Team lead to architect, build, and scale high-performance web applications that power our core products. You will lead the full development lifecycle—from system design to deployment—while mentoring the team and driving best engineering practices across frontend and backend stacks.
Design & Implement: Lead the design, implementation and management of Trential products.
Lead by example: Be the most senior and impactful engineer on the team, setting the technical bar through your direct contributions.
Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.
Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.
Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.
Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.
What we're looking for:
Experience: 5+ years of experience in software development, with at least 2 years as a Technical Lead.
Technical Depth: Deep proficiency in JavaScript and experience in building and operating distributed, fault-tolerant systems.
Cloud & Infrastructure: Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).
Databases: Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.
Preferred Qualifications (Nice to Have)
Identity & Credentials: Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)
Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and designing systems complying to these laws.
Experience integrating AI/ML models into verification or data extraction workflows
Job Location: Kharadi, Pune
Job Type: Full-Time
About Us:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have 10 years of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the "givers gain" philosophy and strive to provide value in order to seek value. We are committed to delivering top-notch solutions to our clients and are looking for a talented Web UI Developer to join our dynamic team.
Qualifications:
- Strong Experience in JavaScript and React
- Experience in building multi-tier SaaS applications with exposure to micro-services, caching, pub-sub, and messaging technologies
- Experience with design patterns
- Familiarity with UI components library (such as material-UI or Bootstrap) and RESTful APIs
- Experience with web frontend technologies such as HTML5, CSS3, LESS, Bootstrap
- A strong foundation in computer science, with competencies in data structures, algorithms, and software design
- Bachelor's / Master's Degree in CS
- Experience in Git is mandatory
- Exposure to AWS, Docker, and CI/CD systems like Jenkins is a plus
ABOUT THE ROLE
The Principal Software Engineer – Java will play a pivotal role in designing, developing, mentoring, and maintaining high-performance IT systems for Hapag-Lloyd. The role requires deep expertise in Java and microservices-based architecture, along with a strong focus on code quality, performance, and scalability. As a senior technical expert, you will be responsible for low-level and high-level architectural design, mentoring developers, and working closely with teams in Hamburg & Chennai. The ideal candidate is a passionate, solution-oriented engineer who thrives in an agile Java development environment, brings a collaborative mindset to product development, and has a proven track record of leading technical projects.
WORK EXPERIENCE:
5–10 years of hands-on experience in development using Java, JEE, JPA, JUnit, Kafka, and Microservices. Good Experience in AWS. Strong experience in architectural design – both low-level and high-level. Experience building distributed systems and working in microservices-based architecture. Proficient with Kafka and message-driven architecture. Strong experience with relational databases (e.g., PostgreSQL). Sound understanding of modern DevOps practices, including CI/CD pipelines, containerization, and cloud deployment. Experience working in Agile/Scrum-based teams with exposure to software lifecycle tools (e.g., Git, Jenkins, JIRA).
Technical Skills: Java, JEE, JPA, JUnit, Microservices, Kafka (desired), REST API development, SQL, PostgreSQL, Git, Maven, Jenkins (desired); familiarity with Docker, Kubernetes, and cloud platforms (e.g. AWS).
EDUCATION & QUALIFICATIONS: Bachelor's degree in Computer Science, Engineering, or a related discipline.
KEY RESPONSIBILITIES & TASKS
Software Development & Design: Design and develop scalable, reliable, and high-performance applications using Java, JPA, Kafka, microservices, JUnit, APIs, and PostgreSQL. Lead low-level and high-level design discussions and decisions for scalable architecture. Build and maintain microservices architecture using industry best practices. Drive technology innovation. Write clean, efficient, well-documented code with high unit test coverage using JUnit.
Mentorship :
Mentor and guide developers and team members in coding standards, best practices, and problem-solving. Conduct regular code reviews, peer programming, and provide technical leadership to ensure code quality and continuous improvement.
Systems Integration & Tools:
Work with messaging systems such as Kafka to build real-time data processing services. Implement and optimize data access using SQL /PostgreSQL databases. Participate in the design and implementation of DevOps pipelines for CI/CD.
Quality, Testing & Documentation:
Conduct regular code reviews and participate in peer programming. Perform system testing, validation, and verification across development stages. Contribute to technical documentation throughout the software development lifecycle.
Agile Collaboration & Continuous Improvement:
Collaborate closely with Product Managers, Engineering Managers, Scrum Masters, and developers in agile teams. Participate in sprint planning, retrospectives, and demos. Remain current on new technologies and drive adoption of best engineering practices across the team.
BEHAVIOURS & APPROACH
Strong analytical and problem-solving skills. Team-oriented with excellent communication and collaboration skills. Passion for clean code, architecture, and continuous learning. Ability to work independently with a proactive approach to problem-solving.
Job Details
- Job Title: SDE-3
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 5-8 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Role & Responsibilities
As a Software Development Engineer - 3, Backend Engineer at company, you will play a critical role in architecting, designing, and delivering robust backend systems that power our platform. You will lead by example, driving technical excellence and mentoring peers while solving complex engineering problems. This position offers the opportunity to work with a highly motivated team in a fast-paced and innovative environment.
Key Responsibilities:
Technical Leadership-
- Design and develop highly scalable, fault-tolerant, and maintainable backend systems using Java and related frameworks.
- Provide technical guidance and mentorship to junior developers, fostering a culture of learning and growth.
- Review code and ensure adherence to best practices, coding standards, and security guidelines.
System Architecture and Design-
- Collaborate with cross-functional teams, including product managers and frontend engineers, to translate business requirements into efficient technical solutions.
- Own the architecture of core modules and contribute to overall platform scalability and reliability.
- Advocate for and implement microservices architecture, ensuring modularity and reusability.
Problem Solving and Optimization-
- Analyze and resolve complex system issues, ensuring high availability and performance of the platform.
- Optimize database queries and design scalable data storage solutions.
- Implement robust logging, monitoring, and alerting systems to proactively identify and mitigate issues.
Innovation and Continuous Improvement-
- Stay updated on emerging backend technologies and incorporate relevant advancements into our systems.
- Identify and drive initiatives to improve codebase quality, deployment processes, and team productivity.
- Contribute to and advocate for a DevOps culture, supporting CI/CD pipelines and automated testing.
Collaboration and Communication-
- Act as a liaison between the backend team and other technical and non-technical teams, ensuring smooth communication and alignment.
- Document system designs, APIs, and workflows to maintain clarity and knowledge transfer across the team.
Ideal Candidate
- Strong Java Backend Engineer.
- Must have 5+ years of backend development with strong focus on Java (Spring / Spring Boot)
- Must have been SDE-2 for at least 2.5 years
- Hands-on experience with RESTful APIs and microservices architecture
- Strong understanding of distributed systems, multithreading, and async programming
- Experience with relational and NoSQL databases
- Exposure to Kafka/RabbitMQ and Redis/Memcached
- Experience with AWS / GCP / Azure, Docker, and Kubernetes
- Familiar with CI/CD pipelines and modern DevOps practices
- Background in product companies (B2B SaaS preferred)
- Has stayed at least 2 years with each of their previous companies
- Education: B.Tech in Computer Science from a Tier 1 or Tier 2 college
Job Title : Senior Backend Developer (Node.js + AWS + MongoDB)
Experience : 4+ Years
Location : Andheri, Mumbai (Work From Office)
About the Role :
We are looking for a highly skilled Senior Backend Developer with strong expertise in Node.js (NestJS), AWS, and MongoDB to join our growing engineering team.
This role requires someone who takes ownership, is proactive, and enjoys building scalable, high-performance backend systems in a fast-paced environment.
Key Responsibilities :
- Architect, design, and develop scalable backend services using Node.js (NestJS).
- Design and manage cloud infrastructure on AWS Services (EC2, ECS, RDS, Lambda, etc.).
- Develop and maintain high-performance database solutions using MongoDB.
- Work with Kafka, Docker, and serverless frameworks (SST) for efficient deployments.
- Optimize system performance, scalability, and reliability across services.
- Ensure application security, best practices, and compliance standards.
- Collaborate with cross-functional teams to deliver robust product features.
- Take end-to-end ownership of features from design to deployment.
Technical Requirements :
- 4+ years of backend development experience.
- 3+ years of hands-on experience with Node.js.
- 2+ years of hands-on experience with AWS.
- Strong experience with NestJS framework.
- Solid experience with MongoDB and database design.
- Experience with Kafka, Docker, and serverless architecture.
- Understanding of system design, scalability, and performance optimization.
Good to Have (Bonus Skills) :
- Experience with Python or other backend languages.
- Exposure to Agentic AI use cases or implementations.
- Strong understanding of security best practices.
What We’re Looking For :
- Curious mindset and eagerness to learn new technologies.
- Proactive problem solver with strong ownership attitude.
- Strong team player with effective communication skills.
- Positive, energetic, and passionate about building great systems.
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including at recent companies
- (Company) Product companies preferred; exceptions for service-company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you okay for F2F round?
- Has the candidate filled the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
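The pipeline responsibilities above boil down to dependency-ordered task execution. As a rough stdlib-only sketch (task names and logic hypothetical, standing in for real Airflow/MWAA operators), the train/validate/deploy ordering looks like:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical pipeline steps; in production these would be Airflow/MWAA tasks.
def extract(ctx):   ctx["rows"] = [1, 2, 3]
def train(ctx):     ctx["model"] = sum(ctx["rows"]) / len(ctx["rows"])
def validate(ctx):  ctx["ok"] = ctx["model"] > 0
def deploy(ctx):    ctx["deployed"] = ctx["ok"]

# Each key maps a task to its upstream dependencies, mirroring a typical
# extract -> train -> validate -> deploy DAG.
dag = {"train": {"extract"}, "validate": {"train"}, "deploy": {"validate"}}
steps = {"extract": extract, "train": train, "validate": validate, "deploy": deploy}

def run_pipeline():
    ctx = {}
    for name in TopologicalSorter(dag).static_order():
        steps[name](ctx)  # a task runs only after all its upstream tasks finish
    return ctx
```

In a real MWAA deployment the scheduler handles this ordering, retries, and backfills; the sketch only shows the dependency semantics.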
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Job Title:
Senior Full Stack Developer
Experience: 5 to 7 years (minimum 5 years of full-stack development experience mandatory)
Location: Bangalore (Onsite)
About ProductNova:
ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.
Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.
What We Do
Product Development
We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation.
Our end-to-end product development approach covers the full lifecycle:
● Product discovery and problem definition
● User research and product strategy
● Experience design and rapid prototyping
● AI-enabled engineering, testing, and platform architecture
● Product launch, adoption, and continuous improvement
From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.
Growth & Scale
For early-stage companies and startups, we act as product partners—shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.
For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.
Role Overview
We are looking for a Senior Full Stack Developer with strong expertise in frontend development using React JS, backend microservices architecture in C#/Python, and hands-on experience with AI-enabled development tools. The ideal candidate should be comfortable working in an onsite environment and collaborating closely with cross-functional teams to deliver scalable, high-quality applications.
Key Responsibilities:
• Develop and maintain responsive, high-performance frontend applications using React JS
• Design, develop, and maintain microservices-based backend systems using C#, Python
• Build and maintain the data layer and databases using MS SQL, Cosmos DB, and PostgreSQL
• Leverage AI-assisted development tools (Cursor / GitHub Copilot) to improve coding efficiency and quality
• Collaborate with product managers, designers, and backend teams to deliver end-to-end solutions
• Write clean, reusable, and well-documented code following best practices
• Participate in code reviews, debugging, and performance optimization
• Ensure application security, scalability, and reliability
Mandatory Technical Skills:
• Strong hands-on experience in React JS (Frontend Coding) – 3+ yrs
• Solid experience in Microservices Architecture C#, Python – 3+ yrs
• Experience building Data Layer and Databases using MS SQL – 2+ yrs
• Practical exposure to AI-enabled development using Cursor or GitHub Copilot – 1yr
• Good understanding of REST APIs and system integration
• Experience with version control systems (Git) and Azure DevOps (ADO)
Good to Have:
• Experience with cloud platforms (Azure)
• Knowledge of containerization tools like Docker and Kubernetes
• Exposure to CI/CD pipelines
• Understanding of Agile/Scrum methodologies
Why Join ProductNova
● Work on real-world, high-impact products used at scale
● Collaborate with experienced product, engineering, and AI leaders
● Solve complex problems with ownership and autonomy
● Build AI-first systems, not experimental prototypes
● Grow rapidly in a culture that values clarity, execution, and learning
If you are passionate about building meaningful products, solving hard problems, and shaping the future of AI-driven software, ProductNova offers the environment and challenges to grow your career.
JOB DETAILS:
* Job Title: DevOps Engineer (Azure)
* Industry: Technology
* Salary: Best in Industry
* Experience: 2-5 years
* Location: Bengaluru, Koramangala
Review Criteria
- Strong Azure DevOps Engineer Profiles.
- Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
- Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
- Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
- Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
- Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis
Preferred
- Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
- Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
- Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline
Role & Responsibilities
- Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
- Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
- Implement Git branching strategies and automate release workflows.
- Develop scripts using Bash, Python, or PowerShell for DevOps automation.
- Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
- Collaborate with dev and QA teams in an Agile/Scrum environment.
- Maintain documentation, runbooks, and participate in root cause analysis.
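For a sense of the DevOps scripting involved, here is a small sketch (the record shape is hypothetical, loosely following how a pipeline-runs API might summarize results) that computes a failure rate of the sort fed into Grafana or Datadog alerts:

```python
# Hypothetical run records, e.g. parsed from an Azure DevOps pipeline-runs API response.
def failure_rate(runs):
    """Fraction of failed runs; the kind of metric pushed to a monitoring dashboard."""
    if not runs:
        return 0.0
    failed = sum(1 for r in runs if r["result"] == "failed")
    return failed / len(runs)

def should_page(runs, threshold=0.25):
    """Open an incident when the failure rate crosses the alert threshold."""
    return failure_rate(runs) >= threshold
```

A real alerting script would also attach run URLs and recent commit metadata for root cause analysis; this only shows the thresholding logic.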
Ideal Candidate
- 2–5 years of experience as an Azure DevOps Engineer.
- Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
- Experience with Microsoft Azure (OCI/AWS exposure is a plus).
- Working knowledge of SQL Server, PostgreSQL, or Oracle.
- Good scripting, troubleshooting, and communication skills.
- Bonus: Docker, Kubernetes, Terraform, Ansible experience.
- Comfortable with WFO (Koramangala, Bangalore).
We are looking for a DevOps Engineer with hands-on experience in automating, monitoring, and scaling cloud-native infrastructure.
You will play a critical role in building and maintaining high-availability, secure, and scalable CI/CD pipelines for our AI- and blockchain-powered FinTech platforms.
You will work closely with Engineering, QA, and Product teams to streamline deployments, optimize cloud environments, and ensure reliable production systems.
Key Responsibilities
- Design, build, and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
- Manage cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform, Ansible, or CloudFormation
- Deploy, manage, and monitor applications on AWS, Azure, or GCP
- Ensure high availability, scalability, and performance of production environments
- Implement security best practices across infrastructure and DevOps workflows
- Automate environment provisioning, deployments, backups, and monitoring
- Configure and manage containerized applications using Docker and Kubernetes
- Collaborate with developers to improve build, release, and deployment processes
- Monitor systems using tools like Prometheus, Grafana, ELK Stack, or CloudWatch
- Perform root cause analysis (RCA) and support production incident response
Required Skills & Experience
- 2+ years of experience in DevOps, Cloud Engineering, or Infrastructure Automation
- Strong hands-on experience with AWS, Azure, or GCP
- Proven experience in setting up and managing CI/CD pipelines
- Proficiency in Docker, Kubernetes, and container orchestration
- Experience with Terraform, Ansible, or similar IaC tools
- Knowledge of monitoring, logging, and alerting systems
- Strong scripting skills using Shell, Bash, or Python
- Good understanding of Git, version control, and branching strategies
- Experience supporting production-grade SaaS or enterprise platforms
JOB DETAILS:
* Job Title: Lead I - Software Engineering (Kotlin, Java, Spring Boot, AWS)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 5 -7 years
* Location: Trivandrum, Thiruvananthapuram
Role Proficiency:
Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities
Skill Examples:
- Explain and communicate the design / development to the customer
- Perform and evaluate test results against product specifications
- Break down complex problems into logical components
- Develop user interfaces and business software components
- Use data models
- Estimate time and effort required for developing / debugging features / components
- Perform and evaluate test in the customer or target environment
- Make quick decisions on technical/project related challenges
- Manage a team, mentor members, and handle people-related issues within the team
- Maintain high motivation levels and positive dynamics in the team.
- Interface with other teams’ designers and other parallel practices
- Set goals for self and team. Provide feedback to team members
- Create and articulate impactful technical presentations
- Follow high level of business etiquette in emails and other business communication
- Drive conference calls with customers addressing customer questions
- Proactively ask for and offer help
- Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks
- Build confidence with customers by meeting the deliverables on time with quality.
- Estimate the time, effort, and resources required for developing / debugging features / components
- Make appropriate utilization of software and hardware
- Strong analytical and problem-solving abilities
Knowledge Examples:
- Appropriate software programs / modules
- Functional and technical designing
- Programming languages – proficient in multiple skill clusters
- DBMS
- Operating Systems and software platforms
- Software Development Life Cycle
- Agile – Scrum or Kanban Methods
- Integrated development environment (IDE)
- Rapid application development (RAD)
- Modelling technology and languages
- Interface definition languages (IDL)
- Knowledge of customer domain and deep understanding of sub domain where problem is solved
Additional Comments:
We are seeking an experienced Senior Backend Engineer with strong expertise in Kotlin and Java to join our dynamic engineering team.
The ideal candidate will have a deep understanding of backend frameworks, cloud technologies, and scalable microservices architectures, with a passion for clean code, resilience, and system observability.
You will play a critical role in designing, developing, and maintaining core backend services that power our high-availability e-commerce and promotion platforms.
Key Responsibilities
- Design, develop, and maintain backend services using Kotlin (JVM, Coroutines, Serialization) and Java.
- Build robust microservices with Spring Boot and related Spring ecosystem components (Spring Cloud, Spring Security, Spring Kafka, Spring Data).
- Implement efficient serialization/deserialization using Jackson and Kotlin Serialization.
- Develop, maintain, and execute automated tests using JUnit 5, Mockk, and ArchUnit to ensure code quality.
- Work with Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB, and Redis for data storage and caching needs.
- Deploy and manage services in an AWS environment leveraging DynamoDB, Lambdas, and IAM.
- Implement CI/CD pipelines with GitLab CI to automate build, test, and deployment processes.
- Containerize applications using Docker and integrate monitoring using Datadog for tracing, metrics, and dashboards.
- Define and maintain infrastructure as code using Terraform for services including GitLab, Datadog, Kafka, and Optimizely.
- Develop and maintain RESTful APIs with OpenAPI (Swagger) and JSON API standards.
- Apply resilience patterns using Resilience4j to build fault-tolerant systems.
- Adhere to architectural and design principles such as Domain-Driven Design (DDD), Object-Oriented Programming (OOP), and Contract Testing (Pact).
- Collaborate with cross-functional teams in an Agile Scrum environment to deliver high-quality features.
- Utilize feature flagging tools like Optimizely to enable controlled rollouts.
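The resilience patterns mentioned (Resilience4j on the JVM) are language-agnostic. A toy sketch of the circuit-breaker idea, written in Python purely for illustration (thresholds and naming are hypothetical, not Resilience4j's API):

```python
import time

class CircuitBreaker:
    """Toy circuit breaker illustrating the pattern Resilience4j provides on the JVM."""
    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open")  # fail fast, spare the dependency
            self.opened_at = None                   # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()       # trip the breaker
            raise
        self.failures = 0
        return result
```

Resilience4j adds sliding windows, metrics, and annotation-based wiring on top of this core closed/open/half-open state machine.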
Mandatory Skills & Technologies
- Languages: Kotlin (JVM, Coroutines, Serialization), Java
- Frameworks: Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data)
- Serialization: Jackson, Kotlin Serialization
- Testing: JUnit 5, Mockk, ArchUnit
- Data: Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB (NoSQL), Redis (caching)
- Cloud: AWS (DynamoDB, Lambda, IAM)
- CI/CD: GitLab CI
- Containers: Docker
- Monitoring & Observability: Datadog (tracing, metrics, dashboards, monitors)
- Infrastructure as Code: Terraform (GitLab, Datadog, Kafka, Optimizely)
- API: OpenAPI (Swagger), REST API, JSON API
- Resilience: Resilience4j
- Architecture & Practices: Domain-Driven Design (DDD), Object-Oriented Programming (OOP), Contract Testing (Pact), Feature Flags (Optimizely)
- Platforms: E-Commerce Platform (CommerceTools), Promotion Engine (Talon.One)
- Methodologies: Scrum, Agile
Skills: Kotlin, Java, Spring Boot, AWS
Must-Haves
Kotlin (JVM, Coroutines, Serialization), Java, Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data), AWS (DynamoDB, Lambda, IAM), Microservices Architecture
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum
Virtual Weekend Interview on 7th Feb 2026 - Saturday
JOB DETAILS:
* Job Title: Lead I - Web API, C# .NET, .NET Core, AWS (Mandatory)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 6 -9 years
* Location: Hyderabad
Job Description
Role Overview
We are looking for a highly skilled Senior .NET Developer who has strong experience in building scalable, high‑performance backend services using .NET Core and C#, with hands‑on expertise in AWS cloud services. The ideal candidate should be capable of working in an Agile environment, collaborating with cross‑functional teams, and contributing to both design and development. Experience with React and Datadog monitoring tools will be an added advantage.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using .NET Core and C#.
- Work with AWS services (Lambda, S3, ECS/EKS, API Gateway, RDS, etc.) to build cloud‑native applications.
- Collaborate with architects and senior engineers on solution design and implementation.
- Write clean, scalable, and well‑documented code.
- Use Postman to build and test RESTful APIs.
- Participate in code reviews and provide technical guidance to junior developers.
- Troubleshoot and optimize application performance.
- Work closely with QA, DevOps, and Product teams in an Agile setup.
- (Optional) Contribute to frontend development using React.
- (Optional) Use Datadog for monitoring, logging, and performance metrics.
Required Skills & Experience
- 6+ years of experience in backend development.
- Strong proficiency in C# and .NET Core.
- Experience building RESTful services and microservices.
- Hands‑on experience with AWS cloud platform.
- Solid understanding of API testing using Postman.
- Knowledge of relational databases (SQL Server, PostgreSQL, etc.).
- Strong problem‑solving and debugging skills.
- Experience working in Agile/Scrum teams.
Good to Have
- Experience with React for frontend development.
- Exposure to Datadog for monitoring and logging.
- Knowledge of CI/CD tools (GitHub Actions, Jenkins, AWS CodePipeline, etc.).
- Containerization experience (Docker, Kubernetes).
Soft Skills
- Strong communication and collaboration abilities.
- Ability to work in a fast‑paced environment.
- Ownership mindset with a focus on delivering high‑quality solutions.
Skills
.NET Core, C#, AWS, Postman
Notice period - 0 to 15 days only
Location: Hyderabad
Virtual Interview: 7th Feb 2026
First round will be Virtual
2nd round will be F2F
JOB DETAILS:
* Job Title: Tester III - Software Testing (Automation testing + Python + AWS)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4 -10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Develop, maintain, and execute automation test scripts using Python.
- Build reliable and reusable test automation frameworks for web and cloud-based applications.
- Work with AWS cloud services for test execution, environment management, and integration needs.
- Perform functional, regression, and integration testing as part of the QA lifecycle.
- Analyze test failures, identify root causes, raise defects, and collaborate with development teams.
- Participate in requirement review, test planning, and strategy discussions.
- Contribute to CI/CD setup and integration of automation suites.
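For a sense of the automation style expected, here is a minimal stdlib sketch (the client and endpoint are hypothetical) using unittest.mock to validate an API response without a live service:

```python
import unittest
from unittest import mock

# Hypothetical service helper; in a real suite this would wrap requests/boto3 calls.
def get_order_status(client, order_id):
    resp = client.get(f"/orders/{order_id}")
    if resp["status_code"] != 200:
        raise RuntimeError(f"unexpected status {resp['status_code']}")
    return resp["body"]["status"]

class OrderStatusTests(unittest.TestCase):
    def test_happy_path(self):
        client = mock.Mock()
        client.get.return_value = {"status_code": 200, "body": {"status": "SHIPPED"}}
        self.assertEqual(get_order_status(client, 7), "SHIPPED")
        client.get.assert_called_once_with("/orders/7")  # verify the request made

    def test_server_error_raises(self):
        client = mock.Mock()
        client.get.return_value = {"status_code": 500, "body": {}}
        with self.assertRaises(RuntimeError):
            get_order_status(client, 7)
```

The same structure ports directly to PyTest fixtures; mocking the transport keeps regression suites fast and deterministic in CI.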
Required Experience:
- Strong hands-on experience in Automation Testing.
- Proficiency in Python for automation scripting and framework development.
- Understanding and practical exposure to AWS services (Lambda, EC2, S3, CloudWatch, or similar).
- Good knowledge of QA methodologies, SDLC/STLC, and defect management.
- Familiarity with automation tools/frameworks (e.g., Selenium, PyTest).
- Experience with Git or other version control systems.
Good to Have:
- API testing experience (REST, Postman, REST Assured).
- Knowledge of Docker/Kubernetes.
- Exposure to Agile/Scrum environment.
Skills: Automation testing, Python, Java, ETL, AWS
JOB DETAILS:
* Job Title: Tester III - Software Testing (Automation Testing + Python + Azure)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4 -10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Design, develop, and execute automation test scripts using Python.
- Build and maintain scalable test automation frameworks.
- Work with Azure DevOps for CI/CD, pipeline automation, and test management.
- Perform functional, regression, and integration testing for web and cloud‑based applications.
- Analyze test results, log defects, and collaborate with developers for timely closure.
- Participate in requirement analysis, test planning, and strategy discussions.
- Ensure test coverage, maintain script quality, and optimize automation suites.
Required Experience:
- Strong hands-on expertise in automation testing for web/cloud applications.
- Solid proficiency in Python for creating automation scripts and frameworks.
- Experience working with Azure services and Azure DevOps pipelines.
- Good understanding of QA methodologies, SDLC/STLC, and defect lifecycle.
- Experience with tools like Selenium, PyTest, or similar frameworks (good to have).
- Familiarity with Git or other version control tools.
Good to Have:
- Experience with API testing (REST, Postman, or similar tools)
- Knowledge of Docker/Kubernetes
- Exposure to Agile/Scrum environments
Skills: Automation Testing, Python, Java, Azure
Job Title: Python Developer (Crawlers / APIs / Async Programming)
Experience: 3 to 6 Years
Notice Period: Immediate to 15 Days
Job Location: Bangalore
Interview Process: 1 Internal Round + 2 Client Rounds
Mandatory Skill:
Strong Python experience with crawlers, REST APIs, async/multithreading, and PostgreSQL/MySQL in a cloud environment.
Role Overview:
We are looking for a highly skilled Python Developer with strong hands-on experience in building web crawlers, REST APIs, and advanced Python applications. The ideal candidate should be proficient in writing clean, efficient, and scalable code, and comfortable working with asynchronous programming, multithreading, and cloud-native environments.
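The async requirement centers on bounded-concurrency I/O. A minimal asyncio sketch of a polite crawler (the fetch is simulated with a sleep; a real crawler would use aiohttp or httpx for the network round-trip):

```python
import asyncio

# Hypothetical fetcher: the sleep stands in for a real HTTP request.
async def fetch(url, semaphore):
    async with semaphore:           # bound concurrency, as a polite crawler should
        await asyncio.sleep(0.01)   # simulated network round-trip
        return url, f"<html>{url}</html>"

async def crawl(urls, max_concurrency=5):
    sem = asyncio.Semaphore(max_concurrency)
    results = await asyncio.gather(*(fetch(u, sem) for u in urls))
    return dict(results)

# Example: pages = asyncio.run(crawl(["https://example.com/a", "https://example.com/b"]))
```

The semaphore is the key design choice: it lets thousands of URLs be scheduled at once while keeping only a handful of requests in flight against any one host.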
Key Responsibilities:
- Build and ship new features and fixes in a fast-paced environment.
- Design, develop, test, and deploy scalable Python applications.
- Develop robust web crawlers and RESTful APIs.
- Write clean, secure, and maintainable code following SOLID principles.
- Design and document features, components, and systems.
- Collaborate closely with cross-functional teams.
- Support, monitor, and maintain existing products.
- Continuously learn and improve technical expertise.
Mandatory Skills:
- 3 to 5 years of strong hands-on experience in Python
- Experience in building web crawlers and REST APIs
- Strong knowledge of multithreading and async I/O in Python
- Experience with PostgreSQL or MySQL
- Strong understanding of OOP/Functional Programming and clean coding practices
- Experience with Docker / Containers
- Exposure to Cloud platforms (AWS or GCP)
- Excellent written and verbal communication skills
- High ownership mindset and accountability
Good to Have:
- Experience with Kubernetes, RabbitMQ, Redis
- Contributions to Open Source Projects
- Experience working with AI APIs and tools
What the Interview Will Focus On:
- Problem-solving and coding skills
- Hands-on Python programming
- Low-Level Design
- Database Design concepts
About the Role:
We are seeking a highly skilled and motivated individual to join our development team. The ideal candidate will have extensive experience with Node.js, AWS, and MongoDB, along with a strong sense of proactiveness and ownership.
Technical Expertise:
- Architect, design, and develop scalable and efficient backend services using Node.js (Nest.js).
- Design and manage cloud-based infrastructure on AWS, including EC2, ECS, RDS, Lambda, and other services.
- Work with MongoDB to design, implement, and maintain high-performance database solutions.
- Leverage Kafka, Docker and serverless technologies like SST to streamline deployments and infrastructure management.
- Optimize application performance and scalability across the stack.
- Ensure security and compliance standards are met across all development and deployment processes.
Bonus Points:
- Experience with other backend languages like Python and hands-on work on Agentic AI
- Security knowledge and best practices.
Notice period - Immediate Joiners
Work Mode - Remote
About the Role
We are seeking a Senior Cybersecurity Engineer to design, implement, and govern enterprise-grade security architectures across cloud-based healthcare platforms. The role involves working closely with Data, Software, ML, and Cloud Architects to secure complex systems such as AI platforms, digital diagnosis solutions, and software-as-a-medical-device offerings.
Key Responsibilities
- Design, implement, and maintain robust security architectures for applications, infrastructure, and cloud platforms
- Conduct threat modeling, security reviews, vulnerability assessments, and penetration testing
- Identify security gaps in existing and proposed architectures and recommend remediation
- Implement and support DevSecOps practices, including code and infrastructure security
- Define and enforce security policies, standards, and procedures
- Respond to security incidents with root-cause analysis and corrective actions
- Mentor development and security teams on secure design and best practices
- Evaluate and integrate third-party security tools and technologies
Mandatory Skills (Top 3)
- Cloud Security Expertise
- Hands-on experience with cloud security (AWS / GCP / Azure), including secure architecture design and cloud risk assessments.
- DevSecOps & Container Security
- Strong experience in DevSecOps/SecOps environments with tools and technologies such as Docker, Kubernetes, IDS, SIEM, SAST/DAST, and EDR.
- Threat Modeling & Vulnerability Management
- Proven expertise in threat modeling, penetration testing, vulnerability assessment, and risk management, including automation-driven security testing.
About the Role
We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps, helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.
The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.
Key Responsibilities
- AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
- Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
- Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
- Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
- Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
- Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
- Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.
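Drift monitoring, mentioned above, can start as simply as comparing live feature statistics against a training baseline. A toy stdlib sketch (the threshold is illustrative; production systems use proper tests such as PSI or Kolmogorov-Smirnov):

```python
import statistics

def drift_score(baseline, live):
    """Absolute z-score of the live mean against the baseline distribution.

    A stand-in for real drift tests (PSI, KS); scores above ~3 suggest the
    live feature distribution has shifted and an alert should fire.
    """
    mu = statistics.fmean(baseline)
    sigma = statistics.pstdev(baseline) or 1e-9  # guard against zero variance
    return abs(statistics.fmean(live) - mu) / sigma

def should_alert(baseline, live, threshold=3.0):
    return drift_score(baseline, live) >= threshold
```

In a pipeline this check would run per feature on each scoring batch, with the score exported to Prometheus/Grafana so alerting thresholds live in the monitoring layer rather than in code.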
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- 4+ years of experience in AI/MLOps, DevOps, SRE, Data Engineering, or with at least 2+ years in AI/ML-focused operations.
- Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
- Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
- Proficiency in Python and/or other scripting languages for automation.
- Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
- Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
- Knowledge of data governance, model drift detection, and compliance in AI systems.
- Excellent problem-solving, communication, and collaboration skills.
Nice-to-Have
- Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
- Familiarity with data science concepts, and frameworks such as scikit-learn, Keras, PyTorch, Tensorflow, etc.
- Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
- Contributions to open-source MLOps/AI Ops tools or platforms.
- Exposure to Responsible AI practices, model fairness, and explainability frameworks
Why Join Us
- Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
- Work alongside leading data scientists and engineers on cutting-edge AI solutions.
- Competitive compensation, benefits, and career growth opportunities.
Job Title: Full Stack Developer
Experience: 5+ Years (Mandatory)
Mandatory Tech Stack: Node.js (NestJS), React.js (Next.js), React Native, PostgreSQL, AWS (hybrid with on-premise infrastructure), Docker Swarm, Portainer
Location: Remote
Working Days: Monday to Saturday
Shift: Night Shift
Job Summary:
We are scaling rapidly and looking for a high-impact Full Stack Developer who thrives on solving complex problems across Web, Mobile, and Cloud Infrastructure.
The ideal candidate is hands-on, adaptable, and comfortable working in distributed systems and hybrid cloud environments, delivering end-to-end solutions with ownership and accountability.
Mandatory Technical Skills:
- Backend: Node.js with NestJS
- Frontend (Web): React.js with Next.js
- Mobile: React Native
- Database: PostgreSQL
- Cloud: AWS (hybrid with on-premise infrastructure)
- OS: Linux
- Containers & Orchestration: Docker Swarm
- Container Management: Portainer
Key Responsibilities:
- Design, develop, and maintain scalable full-stack applications (Web + Mobile)
- Build and manage microservices and RESTful APIs
- Work in distributed and hybrid cloud environments
- Develop cloud-ready solutions and manage deployments
- Handle containerized applications using Docker Swarm & Portainer
- Collaborate closely with Product, DevOps, and Engineering teams
- Ensure application performance, security, and reliability
- Participate in code reviews and follow best engineering practices
- Troubleshoot, debug, and optimize applications across the stack
Required Qualifications:
- Strong hands-on experience with Node.js (NestJS)
- Solid expertise in React.js (Next.js) and React Native
- Experience with PostgreSQL and backend data modeling
- Working knowledge of AWS services in hybrid environments
- Good understanding of Linux systems
- Hands-on experience with Docker Swarm & Portainer
- Strong understanding of microservices architecture
- Ability to manage end-to-end full-stack delivery
Good-to-Have Skills:
- Experience with CI/CD pipelines
- Exposure to monitoring & logging tools
- Knowledge of event-driven systems
- Experience working in high-availability systems
Job Description: Python Automation Engineer
Location: Bangalore (Office-based)
Experience: 1–2 Years
Joining: Immediate to 30 Days
Role Overview
We are looking for a Python Automation Engineer who combines strong programming skills with hands-on automation expertise. This role involves developing automation scripts, designing automation frameworks, and contributing independently to automation solutions, with leads delegating tasks and setting solution direction. The ideal candidate is not a novice: they have solid real-world Python experience and are comfortable working across API automation, automation tooling, and CI/CD-driven environments.
Key Responsibilities
- Design, develop, and maintain automation scripts and reusable automation frameworks using Python
- Build and enhance API automation for REST-based services and common backend frameworks
- Independently own automation tasks and deliver solutions with minimal supervision
- Collaborate with leads and engineering teams to understand automation requirements
- Maintain clean, modular, and scalable automation code
- Occasionally review automation code written by other team members
- Integrate automation suites with CI/CD pipelines
- Package and ship automation tools/frameworks using containerization
Required Skills & Qualifications
Python (Core Requirement)
Strong, in-depth hands-on experience in Python, including:
- Object-Oriented Programming (OOP) and modular design
- Writing reusable libraries and frameworks
- Exception handling, logging, and debugging
- Asynchronous concepts and performance-aware coding
- Unit testing and test automation practices
- Code quality, readability, and maintainability
API Automation
- Strong experience automating REST APIs
- Hands-on with common Python API libraries (e.g., requests, httpx, or equivalent)
- Understanding of API request/response handling, validations, and workflows
- Familiarity with common backend frameworks (e.g., FastAPI)
DevOps & Engineering Practices (Must-Have)
- Strong knowledge of Git
- Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab, or similar)
- Ability to integrate automation suites into pipelines
- Hands-on experience with Docker for shipping automation tools/frameworks
Good-to-Have Skills
- UI automation using Selenium (Page Object Model, cross-browser testing, headless execution)
- Exposure to Playwright for UI automation
- Basic working knowledge of Java and/or JavaScript (reading, writing small scripts, debugging)
- Understanding of API authentication, retries, mocking, and related best practices
Domain Exposure
- Experience or interest in SaaS platforms
- Exposure to AI/ML-based platforms is a plus
What We're Looking For
- A strong engineering mindset, not just tool usage
- Someone who can build automation systems, not only execute test cases
- Comfortable working independently while aligning with technical leads
- Passion for clean code, scalable automation, and continuous improvement
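The "reusable libraries" and "API request/response validation" work described above can be sketched in plain Python. This is a minimal, illustrative sketch: the payloads, field names, and the `/users/{id}` shape mentioned in the comment are invented, not from any real service.

```python
# Hedged sketch of a reusable API response validator, the kind of small
# helper library an automation framework accumulates. Field names and
# payload shapes below are illustrative only.

def validate_response(payload: dict, required: dict) -> list:
    """Return a list of validation errors; an empty list means the payload
    has every required field with the expected type."""
    errors = []
    for field, expected_type in required.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return errors

# Example: a hypothetical /users/{id} response body
ok = validate_response({"id": 7, "name": "asha"}, {"id": int, "name": str})
bad = validate_response({"id": "7"}, {"id": int, "name": str})
print(ok)   # []
print(bad)  # ['id: expected int, got str', 'missing field: name']
```

In a real suite, the payload would come from a `requests`/`httpx` call and the validator would sit behind fixtures, but the reusable-library idea is the same.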
Mentor Interns
Own Production
AI/ML based development
Full stack developer
with React.js
Node.js
Python with Flask or Django
RESTful services
Database
Docker
CI/CD
AWS
Our Technology
• Front-end: JavaScript, Angular (or good understanding of React, Vue.js, Knockout.js, or similar)
• Back-end: C#, ASP.NET, Web API, MVC, Entity Framework
• Database: SQL Server. Knowledge of NoSQL databases is a plus
• Cloud: Microsoft Azure, AWS
Responsibilities
• Design of the overall architecture of the web application
• Implementation of a robust set of services and APIs to power the web application
• Building reusable code and libraries for future use
• Optimization of the application for maximum speed and scalability
• Implementation of security and data protection
• Translation of UI/UX wireframes to visual elements
• Integration of the front-end and back-end aspects of the web application
Additional responsibilities for Project Lead
• Active participation in the design/build cycle of the software engineering life cycle (prototyping, architecture, detailed design, development, testing, and deployment).
• Providing expertise in technical analysis and solving technical issues during project delivery.
• Conduct code reviews and test case reviews, and ensure the code developed meets the requirements.
• Collaborate with product management and engineering to define and implement innovative solutions for the product direction, visuals and experience.
• Gather and understand requirements, analyze and convert functional requirements into concrete technical tasks, and provide reasonable effort estimates.
• Mentor and develop skills of junior software engineers in the team.
Tech Skills and Qualifications
• Contract Length: 1 year
• Expert knowledge of JavaScript and Node.js, good understanding of Angular and JavaScript testing frameworks (such as Jest, Mocha, etc.)
• Good understanding of Cloud Native architecture, containerisation, Docker, Microsoft Azure/AWS, CI/CD, and DevOps culture.
• Knowledge of cloud-based SaaS applications/architecture.
• Practical experience in the use of leading engineering practices and principles.
• Practical experience of building robust solutions at large scale.
• Appreciation for functions of Product and Design, experience working in cross-functional teams.
• Understanding differences between multiple delivery platforms (such as mobile vs. desktop), and optimizing output to match the specific platform.
Role: DevOps Engineer
Experience: 7+ Years
Location: Pune / Trivandrum
Work Mode: Hybrid
Key Responsibilities:
- Drive CI/CD pipelines for microservices and cloud architectures
- Design and operate cloud-native platforms (AWS/Azure)
- Manage Kubernetes/OpenShift clusters and containerized applications
- Develop automated pipelines and infrastructure scripts
- Collaborate with cross-functional teams on DevOps best practices
- Mentor development teams on continuous delivery and reliability
- Handle incident management, troubleshooting, and root cause analysis
Mandatory Skills:
- 7+ years in DevOps/SRE roles
- Strong experience with AWS or Azure
- Hands-on with Docker, Kubernetes, and/or OpenShift
- Proficiency in Jenkins, Git, Maven, JIRA
- Strong scripting skills (Shell, Python, Perl, Ruby, JavaScript)
- Solid networking knowledge and troubleshooting skills
- Excellent communication and collaboration abilities
Preferred Skills:
- Experience with Helm, monitoring tools (Splunk, Grafana, New Relic, Datadog)
- Knowledge of Microservices and SOA architectures
- Familiarity with database technologies
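The "strong scripting skills" and root cause analysis bullets above typically mean small, disposable tools like the sketch below. The log line format and the services involved are invented for illustration; a real triage script would read actual service logs.

```python
# Illustrative incident-triage helper: compute the error rate from a batch
# of log lines. The "<time> <LEVEL> <message>" format is an assumption.

def error_rate(lines: list) -> float:
    """Fraction of log lines whose level field (second token) is ERROR."""
    if not lines:
        return 0.0
    errors = sum(1 for line in lines if line.split(" ", 2)[1] == "ERROR")
    return errors / len(lines)

logs = [
    "12:00:01 INFO request served",
    "12:00:02 ERROR upstream timeout",
    "12:00:03 INFO request served",
    "12:00:04 ERROR upstream timeout",
]
print(error_rate(logs))  # 0.5
```

The same shape of script, pointed at real logs and wired to an alerting threshold, is what "automated pipelines and infrastructure scripts" often reduces to in day-to-day work.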
Job Details
- Job Title: Java Full Stack Developer
- Industry: Global digital transformation solutions provider
- Domain: Information technology (IT)
- Experience Required: 5-7 years
- Working Mode: 3 days in office, Hybrid model.
- Job Location: Bangalore
- CTC Range: Best in Industry
Job Description:
SDET (Software Development Engineer in Test)
Job Responsibilities:
• Test Automation:
  - Develop, maintain, and execute automated test scripts using test automation frameworks.
  - Design and implement testing tools and frameworks to support automated testing.
• Software Development:
  - Participate in the design and development of software components to improve testability.
  - Write code actively, contribute to the development of tools, and work closely with developers to debug complex issues.
• Quality Assurance:
  - Collaborate with the development team to understand software features and technical implementations.
  - Develop quality assurance standards and ensure adherence to best testing practices.
• Integration Testing:
  - Conduct integration and functional testing to ensure that components work as expected individually and when combined.
• Performance and Scalability Testing:
  - Perform performance and scalability testing to identify bottlenecks and optimize application performance.
• Test Planning and Execution:
  - Create detailed, comprehensive, and well-structured test plans and test cases.
  - Execute manual and/or automated tests and analyze results to ensure product quality.
• Bug Tracking and Resolution:
  - Identify, document, and track software defects using bug tracking tools.
  - Verify fixes and work closely with developers to resolve issues.
• Continuous Improvement:
  - Stay updated on emerging tools and technologies relevant to the SDET role.
  - Constantly look for ways to improve testing processes and frameworks.
Skills and Qualifications:
• Strong programming skills, particularly in languages such as COBOL, JCL, Java, C#, Python, or JavaScript.
• Strong experience in Mainframe environments.
• Experience with test automation tools and frameworks like Selenium, JUnit, TestNG, or Cucumber.
• Excellent problem-solving skills and attention to detail.
• Familiarity with CI/CD tools and practices, such as Jenkins, Git, Docker, etc.
• Good understanding of web technologies and databases is often beneficial.
• Strong communication skills for interfacing with cross-functional teams.
Qualifications
• 5+ years of experience as a software developer, QA Engineer, or SDET.
• 5+ years of hands-on experience with Java or Selenium.
• 5+ years of hands-on experience with Mainframe environments.
• 4+ years designing, implementing, and running test cases.
• 4+ years working with test processes, methodologies, tools, and technology.
• 4+ years performing functional and UI testing and quality reporting.
• 3+ years of technical QA management experience leading onshore and offshore resources.
• Passion for driving best practices in the testing space.
• Thorough understanding of functional, stress, performance, various forms of regression testing, and mobile testing.
• Knowledge of software engineering practices and agile approaches.
• Experience building or improving test automation frameworks.
• Proficiency in CI/CD integration and pipeline development in Jenkins, Spinnaker, or similar tools.
• Proficiency in UI automation (Serenity/Selenium, Robot, Watir).
• Experience in Gherkin (BDD/TDD).
• Ability to quickly tackle and diagnose issues within the quality assurance environment and communicate that knowledge to a varied audience of technical and non-technical partners.
• Strong desire to establish and improve product quality.
• Willingness to take challenges head-on while being part of a team.
• Ability to work under tight deadlines and within a team environment.
• Experience in test automation using UFT and Selenium.
• UFT/Selenium experience in building object repositories, standard & custom checkpoints, parameterization, reusable functions, recovery scenarios, descriptive programming, and API testing.
• Knowledge of VBScript, C#, Java, HTML, and SQL.
• Experience using Git or other version control systems.
• Experience developing, supporting, and/or testing web applications.
• Understanding of the need for testing of security requirements.
• Ability to understand APIs in JSON and XML formats, with experience using API testing tools like Postman, Swagger, or SoapUI.
• Excellent communication, collaboration, reporting, analytical, and problem-solving skills.
• Solid understanding of the release cycle and QA/testing methodologies.
• ISTQB certification is a plus.
Skills: Python, Mainframe, C#
Notice period - 0 to 15days only
About the Company
We are a well-established and growing software product company with decades of experience delivering innovative and scalable technology solutions. The organization continues to expand year on year through strong market presence, continuous investment in technology, and a culture that promotes learning and collaboration. Exciting growth opportunities lie ahead for motivated professionals.
Job Description
We are looking for a Full Stack Developer to join our engineering team. The ideal candidate will be comfortable working across both front-end and back-end technologies, with a preference for either side being acceptable, as long as you are open to contributing across the full stack.
Technology Stack
Front-end:
- JavaScript
- Angular (or good understanding of React, Vue.js, Knockout.js, or similar frameworks)
Back-end:
- C#
- ASP.NET
- Web API
- MVC
- Entity Framework
Database:
- SQL Server (knowledge of NoSQL databases is a plus)
Cloud:
- Microsoft Azure and/or AWS
Key Responsibilities
- Design and develop the overall architecture of the web application
- Implement robust services and APIs to support the application
- Build reusable code and libraries for future use
- Optimize applications for maximum speed and scalability
- Implement security and data protection measures
- Translate UI/UX wireframes into visual and functional elements
- Integrate front-end and back-end components seamlessly
Additional Responsibilities (For Senior / Lead-Level Candidates)
- Participate actively in the full SDLC (design, development, testing, deployment)
- Provide technical analysis and resolve complex issues during delivery
- Conduct code and test case reviews
- Collaborate with product and design teams on innovative solutions
- Convert functional requirements into technical tasks and effort estimates
- Mentor junior developers
Skills & Qualifications
- Bachelor’s degree in Software Engineering or related field
- 3–5 years of relevant experience in Full Stack development
- Strong experience in C#, ASP.NET MVC, Web API, and SQL Server
- Good knowledge of JavaScript and modern front-end frameworks
- Understanding of cloud-native architecture and SaaS applications
- Experience with CI/CD, Docker, and DevOps practices is a plus
- Experience working in cross-functional teams
- Ability to build scalable and robust enterprise-grade solutions
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience with AWS cloud, including in recent roles
- Company background: product companies preferred; exceptions made for service-company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide your CTC breakup (Fixed + Variable).
- Are you open to a face-to-face (F2F) interview round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
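The Airflow/MWAA orchestration responsibility above ultimately means running pipeline steps in dependency order. This toy sketch uses only the standard library (no real Airflow API) to show the ordering guarantee an orchestrator provides; the task names are invented.

```python
# Toy dependency-ordered ML pipeline. Real code would declare an Airflow
# DAG; here stdlib graphlib demonstrates the same topological guarantee.
from graphlib import TopologicalSorter

# task -> set of upstream tasks it must wait for (names are illustrative)
deps = {
    "validate_data": {"extract"},
    "train": {"validate_data"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # ['extract', 'validate_data', 'train', 'evaluate', 'deploy']
```

In Airflow the same dependencies would be expressed with operators and `>>` chaining, and the scheduler, retries, and backfills come for free, which is the point of using it over hand-rolled scripts.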
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master's degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Senior Full Stack Developer – Analytics Dashboard
Job Summary
We are seeking an experienced Full Stack Developer to design and build a scalable, data-driven analytics dashboard platform. The role involves developing a modern web application that integrates with multiple external data sources, processes large datasets, and presents actionable insights through interactive dashboards.
The ideal candidate should be comfortable working across the full stack and have strong experience in building analytical or reporting systems.
Key Responsibilities
- Design and develop a full-stack web application using modern technologies.
- Build scalable backend APIs to handle data ingestion, processing, and storage.
- Develop interactive dashboards and data visualisations for business reporting.
- Implement secure user authentication and role-based access.
- Integrate with third-party APIs using OAuth and REST protocols.
- Design efficient database schemas for analytical workloads.
- Implement background jobs and scheduled tasks for data syncing.
- Ensure performance, scalability, and reliability of the system.
- Write clean, maintainable, and well-documented code.
- Collaborate with product and design teams to translate requirements into features.
Required Technical Skills
Frontend
- Strong experience with React.js
- Experience with Next.js
- Knowledge of modern UI frameworks (Tailwind, MUI, Ant Design, etc.)
- Experience building dashboards using chart libraries (Recharts, Chart.js, D3, etc.)
Backend
- Strong experience with Node.js (Express or NestJS)
- REST and/or GraphQL API development
- Background job systems (cron, queues, schedulers)
- Experience with OAuth-based integrations
Database
- Strong experience with PostgreSQL
- Data modelling and performance optimisation
- Writing complex analytical SQL queries
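The "complex analytical SQL" bullet usually means window functions and aggregations. Below is a minimal sketch run against in-memory SQLite for self-containment; the query syntax is the same in PostgreSQL, and the table name and values are invented.

```python
# Running-total report via a window function, the bread and butter of
# analytics dashboards. SQLite stands in for PostgreSQL here.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events(day TEXT, clicks INTEGER)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [("d1", 10), ("d2", 30), ("d3", 20)])

rows = con.execute("""
    SELECT day,
           clicks,
           SUM(clicks) OVER (ORDER BY day) AS running_total
    FROM events
    ORDER BY day
""").fetchall()
print(rows)  # [('d1', 10, 10), ('d2', 30, 40), ('d3', 20, 60)]
```

On PostgreSQL, the same query plus appropriate indexes (and possibly materialized views) is typically what "performance optimisation for analytical workloads" starts from.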
DevOps / Infrastructure
- Cloud platforms (AWS)
- Docker and basic containerisation
- CI/CD pipelines
- Git-based workflows
Experience & Qualifications
- 5+ years of professional full stack development experience.
- Proven experience building production-grade web applications.
- Prior experience with analytics, dashboards, or data platforms is highly preferred.
- Strong problem-solving and system design skills.
- Comfortable working in a fast-paced, product-oriented environment.
Nice to Have (Bonus Skills)
- Experience with data pipelines or ETL systems.
- Knowledge of Redis or caching systems.
- Experience with SaaS products or B2B platforms.
- Basic understanding of data science or machine learning concepts.
- Familiarity with time-series data and reporting systems.
- Familiarity with Meta Ads / Google Ads APIs
Soft Skills
- Strong communication skills.
- Ability to work independently and take ownership.
- Attention to detail and focus on code quality.
- Comfortable working with ambiguous requirements.
Ideal Candidate Profile (Summary)
A senior-level full stack engineer who has built complex web applications, understands data-heavy systems, and enjoys creating analytical products with a strong focus on performance, scalability, and user experience.
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
About the role:
As a DevOps Engineer, you will play a critical role in bridging the gap between development, operations, and security teams to enable fast, secure, and reliable software delivery. With 5+ years of hands-on experience, the engineer is responsible for designing, implementing, and maintaining scalable, automated, and cloud-native infrastructure solutions.
Key Requirements:
- 5+ years of hands-on experience in DevOps or Cloud Engineering roles.
- Strong expertise in at least one public cloud provider (AWS / Azure / GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Solid experience with Kubernetes and containerized applications.
- Strong knowledge of CI/CD tools (Jenkins, GitHub Actions, GitLab CI, Azure DevOps, ArgoCD).
- Scripting/programming skills in Python, Shell, or Go for automation.
- Hands-on experience with monitoring, logging, and incident management.
- Familiarity with security practices in DevOps (secrets management, IAM, vulnerability scanning).
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
Skills - MLOps Pipeline Development | CI/CD (Jenkins) | Automation Scripting | Model Deployment & Monitoring | ML Lifecycle Management | Version Control & Governance | Docker & Kubernetes | Performance Optimization | Troubleshooting | Security & Compliance
Responsibilities:
1. Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models
2. Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
3. Automate the training, testing, and deployment processes for machine learning models
4. Continuously monitor and maintain models in production, ensuring optimal performance, accuracy, and reliability
5. Implement best practices for version control, model reproducibility, and governance
6. Optimize machine learning pipelines for scalability, efficiency, and cost-effectiveness
7. Troubleshoot and resolve issues related to model deployment and performance
8. Ensure compliance with security and data privacy standards in all MLOps activities
9. Keep up to date with the latest MLOps tools, technologies, and trends
10. Provide support and guidance to other team members on MLOps practices
Required skills and experience:
• 3-10 years of experience in MLOps, DevOps or a related field
• Bachelor's degree in Computer Science, Data Science, or a related field
• Strong understanding of machine learning principles and model lifecycle management
• Experience in Jenkins pipeline development
• Experience in automation scripting
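The production-monitoring responsibility above often starts with a simple statistical gate like the sketch below. The 3-sigma threshold and the feature values are illustrative assumptions; real systems would use proper drift tests (e.g., population stability metrics) and alerting.

```python
# Toy data-drift gate: flag when the live mean of a feature strays more
# than k sample standard deviations from the training baseline.
# Threshold and sample values are illustrative only.
import statistics

def drifted(baseline: list, live: list, k: float = 3.0) -> bool:
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) > k * sigma

train_vals = [1.0, 2.0, 3.0, 2.0, 2.0]   # training-time feature sample
print(drifted(train_vals, [2.1, 1.9]))    # False: close to baseline
print(drifted(train_vals, [10.0, 11.0]))  # True: mean shifted far out
```

Wired into a Jenkins pipeline, a check like this would run on a schedule against recent inference inputs and fail the job (or page) on drift, which is the "continuously monitor models in production" loop in miniature.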