50+ Docker Jobs in India
3-5 years of experience as a full-stack developer, with essential requirements in the following technologies: FastAPI, JavaScript, React.js/Redux, Node.js, Next.js, MongoDB, Python, microservices, Docker, and MLOps.
Experience in cloud architecture using Kubernetes (K8s), Google Kubernetes Engine, authentication and authorisation tools, DevOps tools, and scalable, secure cloud hosting is a significant plus.
Ability to manage a hosting environment, scale applications to handle load changes, and ensure accessibility and security compliance.
Experience testing API endpoints (a short illustrative sketch follows this listing).
Ability to code functional web applications and optimise them for response time and efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, and table partitioning.
Expert knowledge of Python and its frameworks along with their best practices, plus expert knowledge of relational and NoSQL databases.
Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.
Must be conversant with Agile software development methodology, able to write technical documents and coordinate with test teams, and proficient with Git version control.
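As a hedged illustration of the API endpoint testing mentioned above, here is a minimal sketch of a FastAPI health endpoint exercised with FastAPI's TestClient and pytest. The route, payload, and test name are hypothetical placeholders, not part of any listing's actual codebase.

```python
# Minimal FastAPI endpoint plus a pytest-style test.
# The /health route and its payload are illustrative placeholders.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()


@app.get("/health")
def health() -> dict:
    # A trivial liveness endpoint; real services would also check dependencies.
    return {"status": "ok"}


client = TestClient(app)


def test_health_endpoint():
    # Exercise the endpoint in-process, without a running server.
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}
```

Run with pytest after installing fastapi and httpx (the TestClient depends on httpx).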
About the role:
We are looking for a Senior Site Reliability Engineer who understands the nuances of production systems. If you care about building and running reliable software systems in production, you'll like working at One2N.
You will primarily work with our startup and mid-size clients. We work on one-to-N problems (hence the name One2N): the proof of concept is already done, and the work revolves around scalability, maintainability, and reliability. In this role, you will be responsible for architecting and optimizing our observability and infrastructure to provide actionable insights into performance and reliability.
Responsibilities:
- Conceptualise and build platform engineering solutions with a self-serve model to enable product engineering teams.
- Provide technical guidance and mentorship to junior engineers.
- Participate in code reviews and contribute to best practices for development and operations.
- Design and implement comprehensive monitoring, logging, and alerting solutions to collect, analyze, and visualize data (metrics, logs, traces) from diverse sources.
- Develop custom monitoring metrics, dashboards, and reports to track key performance indicators (KPIs), detect anomalies, and troubleshoot issues proactively (a minimal metrics sketch follows this list).
- Improve Developer Experience (DX) to help engineers improve their productivity.
- Design and implement CI/CD solutions to optimize velocity and shorten the delivery time.
- Help SRE teams set up on-call rosters and coach them for effective on-call management.
- Automate repetitive manual tasks across CI/CD pipelines and operations, and adopt infrastructure as code (IaC) practices.
- Stay up-to-date with emerging technologies and industry trends in cloud-native, observability, and platform engineering space.
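For the custom monitoring metrics item above, the following is a minimal sketch using the Python prometheus_client library. The metric names, labels, and port are assumptions chosen for illustration, not part of the role's actual stack.

```python
# Expose a request counter and a latency histogram for Prometheus to scrape.
# Metric names, labels, and the port are illustrative placeholders.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total handled requests", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


def handle_request(endpoint: str) -> None:
    REQUESTS.labels(endpoint=endpoint).inc()
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request("/health")
```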
Requirements:
- 6-9 years of professional experience in DevOps practices or software engineering roles, with a focus on Kubernetes on an AWS platform.
- Expertise in observability and telemetry tools and practices, including hands-on experience with tools such as Datadog, Honeycomb, ELK, Grafana, and Prometheus.
- Working knowledge of programming using Golang, Python, Java, or equivalent.
- Skilled in diagnosing and resolving Linux operating system issues.
- Strong proficiency in scripting and automation to build monitoring and analytics solutions.
- Solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies.
- Experience with infrastructure as code (IaC) tools such as Terraform, Pulumi.
- Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
- Strong written communication and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
Please note that salary will be based on experience.
Job Title: Full Stack Engineer
Location: Bengaluru (Indiranagar) – Work From Office (5 Days)
Job Summary
We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.
Responsibilities
- Design, develop, and maintain scalable full-stack applications.
- Build responsive, high-performance UIs using Typescript & Next.js.
- Develop backend services and APIs using Python (FastAPI/Django).
- Work closely with product, design, and business teams to translate requirements into intuitive solutions.
- Contribute to architecture discussions and drive technical best practices.
- Own features end-to-end — design, development, testing, deployment, and monitoring.
- Ensure robust security, code quality, and performance optimization.
Tech Stack
Frontend: Typescript, Next.js, React, Tailwind CSS
Backend: Python, FastAPI, Django
Databases: PostgreSQL, MongoDB, Redis
Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD
Other Tools: Git, GitHub, Elasticsearch, Observability tools
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience.
- Strong expertise in either frontend (Typescript/Next.js) or backend (Python/FastAPI/Django) with familiarity in both.
- Experience building RESTful services and microservices.
- Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
- Strong debugging, problem-solving, and optimization skills.
- Ability to thrive in fast-paced, high-ownership startup environments.
Good-to-Have:
- Exposure to Docker, Kubernetes, and observability tools.
- Experience with message queues or event-driven architecture.
Perks & Benefits
- Upskilling support – courses, tools & learning resources.
- Fun team outings, hackathons, demos & engagement initiatives.
- Flexible Work-from-Home: 12 WFH days every 6 months.
- Menstrual WFH: up to 3 days per month.
- Mobility benefits: relocation support & travel allowance.
- Parental support: maternity, paternity & adoption leave.
Job Title : Full Stack Engineer (Python + React.js/Next.js)
Experience : 1 to 6+ Years
Location : Bengaluru (Indiranagar)
Employment : Full-Time
Working Days : 5 Days WFO
Notice Period : Immediate to 30 Days
Role Overview :
We are seeking Full Stack Engineers to build scalable, high-performance fintech products.
You will work on both frontend (Typescript/Next.js) and backend (Python/FastAPI/Django), owning features end-to-end and contributing to architecture, performance, and product innovation.
Main Tech Stack :
Frontend : Typescript, Next.js, React
Backend : Python, FastAPI, Django
Database : PostgreSQL, MongoDB, Redis
Cloud : AWS/GCP, Docker, Kubernetes
Tools : Git, GitHub, CI/CD, Elasticsearch
Key Responsibilities :
- Develop full-stack applications with clean, scalable code.
- Build fast, responsive UIs using Typescript, Next.js, React.
- Develop backend APIs using Python, FastAPI, Django.
- Collaborate with product/design to implement solutions.
- Own development lifecycle: design → build → deploy → monitor.
- Ensure performance, reliability, and security.
Requirements :
Must-Have :
- 1–6+ years of full-stack experience.
- Product-based company background.
- Strong DSA + problem-solving skills.
- Proficiency in either frontend or backend with familiarity in both.
- Hands-on experience with APIs, microservices, Git, CI/CD, cloud.
- Strong communication & ownership mindset.
Good-to-Have :
- Experience with containers, system design, observability tools.
Interview Process :
- Coding Round : DSA + problem solving
- System Design : LLD + HLD, scalability, microservices
- CTO Round : Technical deep dive + cultural fit
Job Description: Senior Backend / DevOps Engineer
Location: Bangalore (Onsite)
Skills Required:
- Deep expertise in backend architecture using Python (FastAPI, Django), Node.js (NestJS, Express), or GoLang.
- Strong experience with cloud infrastructure - AWS, GCP, Azure, and containerization (Docker, Kubernetes).
- Proficiency in infrastructure-as-code (Terraform, Pulumi, Ansible).
- Mastery in CI/CD pipelines, GitOps workflows, and deployment automation (GitHub Actions, Jenkins, ArgoCD, Flux).
- Experience building high-performance distributed systems, APIs, and microservices architectures.
- Understanding of event-driven systems, message queues, and streaming platforms (Kafka, RabbitMQ, Redis Streams); see the producer sketch after this list.
- Familiarity with database design and scaling - PostgreSQL, MongoDB, DynamoDB, TimescaleDB.
- Deep understanding of system observability, tracing, and performance tuning (Prometheus, Grafana, OpenTelemetry).
- Familiarity with AI integration stacks - deploying and scaling LLMs, vector databases (Pinecone, Weaviate, Milvus), and inference APIs (vLLM, Ollama, TensorRT).
- Awareness of DevSecOps practices, zero-trust architecture, and cloud cost optimization.
- Bonus: Hands-on with Rust, WebAssembly, or edge computing platforms (Fly.io, Cloudflare Workers, AWS Greengrass).
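For the event-driven systems and message queues item above, here is a minimal producer sketch using the kafka-python client as one possible option. The broker address, topic name, and payload are hypothetical.

```python
# Publish JSON events to a Kafka topic with kafka-python.
# Broker address, topic, and payload are illustrative placeholders.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"order_id": 42, "status": "FILLED"}
producer.send("order-events", value=event)  # asynchronous send
producer.flush()  # block until buffered messages are delivered
```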
Who We Are Looking For:
Upsurge Labs builds across robotics, biotech, AI, and creative tech, each product running on the backbone of precision-engineered software.
We are looking for a Senior Backend / DevOps Engineer who can architect scalable, resilient systems that power machines, minds, and media.
You should be someone who is :
- Disciplined and detail-oriented, thriving in complex systems without compromising reliability.
- Organized enough to manage chaos and gritty enough to debug at 3 a.m. if that’s what the mission demands.
- Obsessed with clean code, system resilience, and real-world impact.
- Driven by the satisfaction of building infrastructure where reliability, scalability, and performance are central.
- Comfortable working at the intersection of AI, automation, and distributed systems.
- Aware that this work is challenging and fast-paced, but rewarding for those who push boundaries.
At Upsurge Labs, only the best minds build the foundations for the future. If you’ve ever dreamed of engineering systems that enable breakthroughs in AI and robotics, this is your arena.
Profile: Sr. DevOps Engineer
Location: Gurugram
Experience: 05+ Years
Notice Period: Immediate to 1 week
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell).
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate DevOps best practices, automation, and continuous improvement.
Experience: 5-8 years of professional experience in software engineering, with a strong background in developing and deploying scalable applications.
● Technical Skills:
○ Architecture: Demonstrated experience in architecture/system design for scale, preferably as a digital public good.
○ Full Stack: Extensive experience with full-stack development, including mobile app development and backend technologies.
○ App Development: Hands-on experience building and launching mobile applications, preferably for Android.
○ Cloud Infrastructure: Familiarity with cloud platforms and containerization technologies (Docker, Kubernetes).
○ (Bonus) ML Ops: Proven experience with ML Ops practices and tools.
● Soft Skills:
○ Experience in hiring team members.
○ A proactive and independent problem-solver, comfortable working in a fast-paced environment.
○ Excellent communication and leadership skills, with the ability to mentor junior engineers.
○ A strong desire to use technology for social good.
Preferred Qualifications
● Experience working in a startup or smaller team environment.
● Familiarity with the healthcare or public health sector.
● Experience in developing applications for low-resource environments.
● Experience with data management in privacy and security-sensitive applications.
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
- Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
- Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines (a retrieval sketch follows this list)
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
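For the RAG pipeline responsibility above, the sketch below shows only the retrieval-and-prompt-assembly step, using scikit-learn TF-IDF as a stand-in for a real embedding model and vector store. The documents, query, and prompt template are hypothetical, and the final LLM call is omitted.

```python
# Retrieval step of a RAG pipeline, with TF-IDF standing in for
# dense embeddings and a vector database. Documents and query are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Kubernetes schedules containers across a cluster of nodes.",
    "LoRA fine-tunes large language models with low-rank adapters.",
    "Spark processes large datasets with distributed executors.",
]
query = "How can I fine-tune an LLM cheaply?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Rank documents by similarity and keep the top two as context.
scores = cosine_similarity(query_vector, doc_vectors)[0]
top_docs = [documents[i] for i in scores.argsort()[::-1][:2]]

prompt = "Answer using only this context:\n" + "\n".join(top_docs) + f"\n\nQuestion: {query}"
print(prompt)  # in a full pipeline this prompt would be sent to an LLM
```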
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
Job Title: DevOps Engineer
Job Description: We are seeking an experienced DevOps Engineer to support our Laravel, JavaScript (Node.js, React, Next.js), and Python development teams. The role involves building and maintaining scalable CI/CD pipelines, automating deployments, and managing cloud infrastructure to ensure seamless delivery across multiple environments.
Responsibilities:
Design, implement, and maintain CI/CD pipelines for Laravel, Node.js, and Python projects.
Automate application deployment and environment provisioning using AWS and containerization tools.
Manage and optimize AWS infrastructure (EC2, ECS, RDS, S3, CloudWatch, IAM, Lambda).
Implement Infrastructure as Code (IaC) using Terraform or AWS CloudFormation. Manage configuration automation using Ansible.
Build and manage containerized environments using Docker (Kubernetes is a plus).
Monitor infrastructure and application performance using CloudWatch, Prometheus, or Grafana.
Ensure system security, data integrity, and high availability across environments.
Collaborate with development teams to streamline builds, testing, and deployments.
Troubleshoot and resolve infrastructure and deployment-related issues.
Required Skills:
AWS (EC2, ECS, RDS, S3, IAM, Lambda)
CI/CD Tools: Jenkins, GitLab CI/CD, AWS CodePipeline, CodeBuild, CodeDeploy
Infrastructure as Code: Terraform or AWS CloudFormation
Configuration Management: Ansible
Containers: Docker (Kubernetes preferred)
Scripting: Bash, Python
Version Control: Git, GitHub, GitLab
Web Servers: Apache, Nginx (preferred)
Databases: MySQL, MongoDB (preferred)
Qualifications:
3+ years of experience as a DevOps Engineer in a production environment.
Proven experience supporting Laravel, Node.js, and Python-based applications.
Strong understanding of CI/CD, containerization, and automation practices.
Experience with infrastructure monitoring, logging, and performance optimization.
Familiarity with agile and collaborative development processes.
Job Title: Full-Stack Developer
Experience: 5 to 8+ Years
Start: Immediate (ASAP)
Key Responsibilities
Develop and maintain end-to-end web applications, including frontend interfaces and backend services.
Build responsive and scalable UIs using React.js and Next.js.
Design and implement robust backend APIs using Python, FastAPI, Django, or Node.js.
Work with cloud platforms such as Azure (preferred) or AWS for application deployment and scaling.
Manage DevOps tasks, including containerization with Docker, orchestration with Kubernetes, and infrastructure as code with Terraform.
Set up and maintain CI/CD pipelines using tools like GitHub Actions or Azure DevOps.
Design and optimize database schemas using PostgreSQL, MongoDB, and Redis.
Collaborate with cross-functional teams in an agile environment to deliver high-quality features on time.
Troubleshoot, debug, and improve application performance and security.
Take full ownership of assigned modules/features and contribute to technical planning and architecture discussions.
Must-Have Qualifications
Strong hands-on experience with Python and at least one backend framework such as FastAPI, Django, or Flask (or Node.js).
Proficiency in frontend development using React.js and Next.js
Experience in building and consuming RESTful APIs
Solid understanding of database design and queries using PostgreSQL, MongoDB, and Redis
Practical experience with cloud platforms, preferably Azure, or AWS
Familiarity with containerization and orchestration tools like Docker and Kubernetes
Working knowledge of Infrastructure as Code (IaC) using Terraform
Experience with CI/CD pipelines using GitHub Actions or Azure DevOps
Ability to work in an agile development environment with cross-functional teams
Strong problem-solving, debugging, and communication skills
Start-up experience preferred – ability to manage ambiguity, rapid iterations, and hands-on leadership.
Technical Stack
Frontend: React.js, Next.js
Backend: Python, FastAPI, Django, Spring Boot, Node.js
DevOps & Cloud: Azure (preferred), AWS, Docker, Kubernetes, Terraform
CI/CD: GitHub Actions, Azure DevOps
Databases: PostgreSQL, MongoDB, Redis
Role Summary
We are hiring a Senior Java Developer with strong backend development experience to build scalable, high-performance applications and lead technical initiatives.
Key Responsibilities
- Develop and maintain applications using Java 8/11/17, Spring Boot, and REST APIs.
- Design and implement microservices and backend components.
- Work with SQL/NoSQL databases, API integrations, and performance optimization.
- Collaborate with cross-functional teams and participate in code reviews.
- Deploy applications using CI/CD, Docker, Kubernetes, and cloud platforms (AWS/Azure/GCP).
Skills Required
- Strong in Core Java, OOP, multithreading, and collections.
- Hands-on with Spring Boot, Hibernate/JPA, Microservices.
- Experience with REST APIs, Git, and CI/CD pipelines.
- Knowledge of Docker/Kubernetes and cloud basics.
- Good understanding of database queries and performance tuning.
Nice to Have
- Experience with messaging systems (Kafka/RabbitMQ).
- Basic frontend understanding (React/Angular).
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.
Key Responsibilities
CI/CD and Infrastructure Automation
- Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
- Automate deployments using tools such as Terraform, Helm, and Kubernetes
- Improve build and release processes to support high-performance and low-latency trading applications
- Work efficiently with Linux/Unix environments
Cloud and On-Prem Infrastructure Management
- Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
- Ensure system reliability, scalability, and high availability
- Implement Infrastructure as Code (IaC) to standardize and streamline deployments
Performance Monitoring and Optimization
- Monitor system performance and latency using Prometheus, Grafana, and ELK stack
- Implement proactive alerting and fault detection to ensure system stability
- Troubleshoot and optimize system components for maximum efficiency
Security and Compliance
- Apply DevSecOps principles to ensure secure deployment and access management
- Maintain compliance with financial industry regulations such as SEBI
- Conduct vulnerability assessments and maintain logging and audit controls
Required Skills and Qualifications
- 2+ years of experience as a DevOps Engineer in a software or trading environment
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
- Proficiency in cloud platforms such as AWS and GCP
- Hands-on experience with Docker and Kubernetes
- Experience with Terraform or CloudFormation for IaC
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
- Familiarity with Prometheus, Grafana, and ELK stack
- Proficiency in scripting using Python, Bash, or Go
- Solid understanding of security best practices including IAM, encryption, and network policies
Good to Have (Optional)
- Experience with low-latency trading infrastructure or real-time market data systems
- Knowledge of high-frequency trading environments
- Exposure to FIX protocol, FPGA, or network optimization techniques
- Familiarity with Redis or Nginx for real-time data handling
Why Join Us?
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
DevOps Engineer (1–2 Years Experience)
Job Description
We are seeking a motivated and enthusiastic DevOps Engineer with 1–2 years of hands-on experience in cloud and DevOps technologies. The ideal candidate will work closely with development and operations teams to support, automate, and optimize the deployment pipeline, cloud infrastructure, and application performance across AWS and Azure environments.
Roles & Responsibilities
- Deploy, manage, and monitor applications in AWS and Azure cloud environments.
- Work with Linux-based systems to ensure reliable performance, configuration, and maintenance.
- Build and maintain CI/CD pipelines using modern DevOps tools (GitHub Actions, GitLab CI, Jenkins, etc.).
- Create and maintain containerized applications using Docker.
- Assist in implementing IaC (Infrastructure as Code) and automation processes.
- Troubleshoot infrastructure, deployment, and application-related issues.
- Collaborate with development teams to enable smooth DevOps processes.
- Write scripts using Bash or Python to automate routine operational tasks.
- Participate in continuous improvement of deployment reliability and scalability.
Required Skills
- 1–2 years of hands-on experience in DevOps or Cloud Engineering.
- Practical working knowledge of AWS and Azure cloud services.
- Strong understanding and experience with Linux operating systems.
- Experience using Docker, Git, and CI/CD pipeline tools.
- Basic knowledge of Kubernetes concepts (pods, deployments, services); see the sketch after this list.
- Ability to write automation scripts using Bash or Python.
- Strong analytical thinking, problem-solving, and troubleshooting skills.
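For the Kubernetes concepts item above, here is a minimal sketch using the official Python kubernetes client. It assumes a local kubeconfig and the default namespace, both of which are illustrative assumptions.

```python
# List pods and deployments in a namespace using the Python kubernetes client.
# Assumes a kubeconfig is available locally; the namespace is a placeholder.
from kubernetes import client, config

config.load_kube_config()  # inside a cluster, use config.load_incluster_config()

core = client.CoreV1Api()
apps = client.AppsV1Api()

for pod in core.list_namespaced_pod(namespace="default").items:
    print("pod:", pod.metadata.name, pod.status.phase)

for deploy in apps.list_namespaced_deployment(namespace="default").items:
    ready = deploy.status.ready_replicas or 0
    print("deployment:", deploy.metadata.name, f"{ready}/{deploy.spec.replicas} ready")
```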
Good to Have Skills
- Experience with Terraform or other Infrastructure-as-Code tools.
- Familiarity with monitoring and logging tools (Prometheus, Grafana, CloudWatch, ELK, etc.).
- Understanding of networking fundamentals (TCP/IP, DNS, load balancing, routing).
- Exposure to microservices architecture and container orchestration concepts.
Job Type
- Full-time
Location
- On-site
Salary
- As per industry standards
Location: Bengaluru, India; Exp: 3-5 Yrs
Backend Developer (Golang) - Trading & Fintech
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance driven backend systems, we want you.
What we expect:
You should already be exceptional at Golang. If you need hand-holding, this isn’t the place for you.
You thrive on challenges, not on perks or financial rewards.
You measure success by your own growth, not external validation.
Taking calculated risks excites you—you’re here to build, break, and learn.
You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
You understand the stakes—milliseconds can make or break trades, and precision is everything.
What you will do:
Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.
Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.
Build event-driven, fault-tolerant systems that can handle massive real-time data streams.
Own your work—no babysitting, no micromanagement.
Work alongside equally driven engineers who expect nothing less than brilliance.
Learn faster than you ever thought possible.
Must have skills:
Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).
Deep understanding of concurrency, memory management, and system design.
Experience with Trading, market data processing, or low-latency systems.
Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.
Hands-on with Docker, Kubernetes, and CI/CD pipelines.
A portfolio of work that speaks louder than a resume.
Nice-to-Have Skills:
Past experience in fintech, trading systems, or algorithmic trading.
Contributions to open-source Golang projects.
A history of building something impactful from scratch.
Understanding of FIX protocol, WebSockets, and streaming APIs.
Why Join Us?
Work with a team that expects and delivers excellence.
A culture where risk-taking is rewarded, and complacency is not.
Limitless opportunities for growth—if you can handle the pace.
A place where learning is currency, and outperformance is the only metric that matters.
The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech. This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
About the Role
We’re looking for an Elixir Developer who is passionate about building scalable, high performance backend systems. You’ll work closely with our engineering team to design, develop, and maintain reliable applications that power mission-critical systems.
Key Responsibilities
• Develop and maintain backend services using Elixir and Phoenix framework.
• Build scalable, fault-tolerant, and distributed systems.
• Integrate APIs, databases, and message queues for real-time applications.
• Optimize system performance and ensure low latency and high throughput.
• Collaborate with frontend, DevOps, and product teams to deliver seamless solutions.
• Write clean, maintainable, and testable code with proper documentation.
• Participate in code reviews, architectural discussions, and deployment automation.
Required Skills & Experience
• 2–4 years of hands-on experience in Elixir (or strong functional programming background).
• Experience with Phoenix, Ecto, and RESTful API development.
• Solid understanding of OTP (Open Telecom Platform) concepts like GenServer, Supervisors, etc.
• Proficiency in PostgreSQL, Redis, or similar databases.
• Familiarity with Docker, Kubernetes, or cloud platforms (AWS/GCP/Azure).
• Understanding of CI/CD pipelines, version control (Git), and agile development.
Good to Have
• Experience with microservices architecture or real-time data systems.
• Knowledge of GraphQL, LiveView, or PubSub.
• Exposure to performance profiling, observability, or monitoring tools.
Job Description:
Technical Lead – Full Stack
Experience: 8–12 years (strong candidates: roughly 50% Java, 50% React)
Location – Bangalore/Hyderabad
Interview Levels – 3 Rounds
Tech Stack: Java, Spring Boot, Microservices, React, SQL
Focus: Hands-on coding, solution design, team leadership, delivery ownership
Must-Have Skills (Depth)
Java (8+): Streams, concurrency, collections, JVM internals (GC), exception handling.
Spring Boot: Security, Actuator, Data/JPA, Feign/RestTemplate, validation, profiles, configuration management.
Microservices: API design, service discovery, resilience patterns (Hystrix/Resilience4j), messaging (Kafka/RabbitMQ) optional.
React: Hooks, component lifecycle, state management, error boundaries, testing (Jest/RTL).
SQL: Joins, aggregations, indexing, query optimization, transaction isolation, schema design.
Testing: JUnit/Mockito for backend; Jest/RTL/Cypress for frontend.
DevOps: Git, CI/CD, containers (Docker), familiarity with deployment environments.
Job Description: Python Engineer
Role Summary
We are looking for a talented Python Engineer to design, develop, and maintain high-quality backend applications and automation solutions. The ideal candidate should have strong programming skills, familiarity with modern development practices, and the ability to work in a fast-paced, collaborative environment.
Key Responsibilities:
Python Development & Automation
- Design, develop, and maintain Python scripts, tools, and automation frameworks.
- Build automation for operational tasks such as deployment, monitoring, system checks, and maintenance (a minimal health-check sketch follows this section).
- Write clean, modular, and well-documented Python code following best practices.
- Develop APIs, CLI tools, or microservices when required.
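For the operational automation item above, here is a minimal Python health-check sketch of the kind of script this role describes. The service names and URLs are hypothetical placeholders.

```python
# Simple operational check: poll service health endpoints and log the results.
# Service names and URLs are illustrative placeholders.
import logging
import sys

import requests

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

SERVICES = {
    "api": "https://api.example.internal/health",
    "worker": "https://worker.example.internal/health",
}


def check(name: str, url: str, timeout: float = 5.0) -> bool:
    try:
        response = requests.get(url, timeout=timeout)
    except requests.RequestException as exc:
        logging.error("%s unreachable: %s", name, exc)
        return False
    logging.info("%s responded with HTTP %s", name, response.status_code)
    return response.status_code == 200


if __name__ == "__main__":
    healthy = all(check(name, url) for name, url in SERVICES.items())
    sys.exit(0 if healthy else 1)  # non-zero exit lets cron/CI flag failures
```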
Linux Systems Engineering
- Manage, configure, and troubleshoot Linux environments (RHEL, CentOS, Ubuntu).
- Perform system performance tuning, log analysis, and root-cause diagnostics.
- Work with system services, processes, networking, file systems, and security controls.
- Implement shell scripting (bash) alongside Python for system-level automation.
CI/CD & Infrastructure Support
- Support integration of Python automation into CI/CD pipelines (Jenkins).
- Participate in build and release processes for infrastructure components.
- Ensure automation aligns with established infrastructure standards and governance.
- Use Bash scripting together with Python to improve automation efficiency.
Cloud & DevOps Collaboration (if applicable)
- Collaborate with Cloud/DevOps engineers on automation for AWS or other cloud platforms.
- Integrate Python tools with configuration management tools such as Chef or Ansible, or with Terraform modules.
- Contribute to containerization efforts (Docker, Kubernetes) leveraging Python automation.
Strong Software Engineer, Engineering Manager Profiles
Mandatory (Experience) – Must have a minimum of 4+ years of hands-on experience in software development
Mandatory (Tech Skillset) – Must have 3+ years of hands-on experience in backend development using Java / Node.js / Go / Python (any 1) OR 3+ years of experience in frontend development using React / Angular / Vue (any 1)
Mandatory (Mentoring Skillset) – Must have at least 1+ year of experience leading or mentoring a team of software engineers.
Mandatory (Tech Skills) – Must have a solid understanding of microservices architecture, APIs, and cloud platforms (AWS / GCP / Azure).
Mandatory (DevOps Skillset) – Must have hands-on experience working with Docker, Kubernetes, and CI/CD pipelines.
Mandatory (Company) – Top-tier/renowned product-based company (preferably Enterprise B2B SaaS)
Mandatory (Education) – Undergraduate degree from a Tier-1 engineering college (IIT, BITS, IIIT, NSUT, DTU, etc.)
Mandatory (Note): No-hire policy applies to candidates from Sprinklr
Role: Azure AI Tech Lead
Experience: 3.5-7 Years
Location: Remote / Noida (NCR)
Notice Period: Immediate to 15 days
Mandatory Skills: Python, Azure AI/ML, PyTorch, TensorFlow, JAX, HuggingFace, LangChain, Kubeflow, MLflow, LLMs, RAG, MLOps, Docker, Kubernetes, Generative AI, Model Deployment, Prometheus, Grafana
JOB DESCRIPTION
As the Azure AI Tech Lead, you will serve as the principal technical expert leading the design, development, and deployment of advanced AI and ML solutions on the Microsoft Azure platform. You will guide a team of engineers, establish robust architectures, and drive end-to-end implementation of AI projects—transforming proof-of-concepts into scalable, production-ready systems.
Key Responsibilities:
- Lead architectural design and development of AI/ML solutions using Azure AI, Azure OpenAI, and Cognitive Services.
- Develop and deploy scalable AI systems with best practices in MLOps across the full model lifecycle.
- Mentor and upskill AI/ML engineers through technical reviews, training, and guidance.
- Implement advanced generative AI techniques including LLM fine-tuning, RAG systems, and diffusion models.
- Collaborate cross-functionally to translate business goals into innovative AI solutions.
- Enforce governance, responsible AI practices, and performance optimization standards.
- Stay ahead of trends in LLMs, agentic AI, and applied research to shape next-gen solutions.
Qualifications:
- Bachelor’s or Master’s in Computer Science or related field.
- 3.5–7 years of experience delivering end-to-end AI/ML solutions.
- Strong expertise in Azure AI ecosystem and production-grade model deployment.
- Deep technical understanding of ML, DL, Generative AI, and MLOps pipelines.
- Excellent analytical and problem-solving abilities; applied research or open-source contributions preferred.
Job Description: Node.js Developer (3+ Years Experience)
Division/Department: Engineering
Industry: Insurance / Fintech
Employment Type: Full-Time, Permanent
Job Summary
We are looking for a skilled Node.js Developer who is experienced with JavaScript/TypeScript, databases, and modern AI development tools. You will build and maintain backend applications, design scalable systems, and use AI tools to enhance development productivity. The role involves backend development, database management, API development, and cloud deployment.
What You’ll Be Doing
Backend Development
- Build and maintain Node.js applications, services, and APIs.
- Write clean, readable JavaScript/TypeScript code.
- Create and manage REST APIs; work with GraphQL when required.
- Develop microservices and containerized applications (Docker).
- Participate in code reviews and help maintain coding standards.
Database Work
- Design database schemas for PostgreSQL, MongoDB, and Redis.
- Write optimized SQL and NoSQL queries.
- Implement indexing and manage database performance tuning.
- Handle database scaling and caching strategies.
AI Tools & Productivity
- Use AI coding assistants (GitHub Copilot, Cursor AI, Tabnine).
- Integrate AI APIs (OpenAI, Claude) into backend applications.
- Build features powered by AI/ML capabilities.
- Use AI tools to debug and enhance code quality.
Deployment & Operations
- Deploy applications on AWS, Azure, or GCP.
- Set up CI/CD pipelines (GitHub Actions, Jenkins).
- Work with Docker and Kubernetes.
- Monitor backend services and perform troubleshooting.
Job Requirements
Must-Have (2–7 Years Experience)
- Strong hands-on experience with Node.js and Express.js.
- Excellent understanding of JavaScript/TypeScript, ES6+, async/await.
- Experience with at least one major database: PostgreSQL, MongoDB, Redis.
- Knowledge of API development (REST, GraphQL).
- Hands-on experience with Git and version control.
- Experience with testing tools like Jest, Mocha, or similar.
AI Tools Experience
- Familiarity with AI coding assistants.
- Working with AI APIs (OpenAI, Claude, etc.).
- Experience using AI tools for debugging and code optimization.
- Basic understanding of prompt engineering.
Bonus Skills (Good to Have)
- Cloud platforms: AWS, Azure, Google Cloud.
- Docker, Kubernetes.
- CI/CD pipelines.
- Frontend exposure (React/Vue/Angular).
- Message queues: RabbitMQ, Kafka.
- Caching: Redis, Memcached.
- API security: JWT, OAuth2.
Mandatory Skills
Technical
- Experience building and deploying Node.js applications.
- Strong command over databases and efficient query writing.
- Comfort with AI-based coding tools.
- Ability to debug and solve backend issues independently.
Soft Skills
- Strong communication and teamwork skills.
- Self-driven and eager to learn new technologies.
- Ability to mentor junior developers (for senior positions).
We are looking for highly experienced Senior Java Developers who can architect, design, and deliver high-performance enterprise applications using Spring Boot and microservices. The role requires a strong understanding of distributed systems, scalability, and data consistency.
MUST-HAVES:
- Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
- Notice period - 0 to 15 days only
- Hybrid work mode- 3 days office, 2 days at home
SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS
ADDITIONAL GUIDELINES:
- Interview process: 2 technical rounds + 1 client round
- 3 days in office, Hybrid model.
CORE RESPONSIBILITIES:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures range from traditional statistical methods to deep learning, including the use of LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
SKILLS:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touchpoints include ScyllaDB (similar to Bigtable), OpenSearch, and Neo4j graph.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
REQUIRED EXPERIENCE:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements (see the invocation sketch after this list)
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
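For the SageMaker experience described above, here is a minimal inference-call sketch using boto3's sagemaker-runtime client against an already-deployed endpoint. The endpoint name and payload are hypothetical.

```python
# Invoke an already-deployed SageMaker endpoint via boto3.
# Endpoint name and payload are illustrative placeholders.
import json

import boto3

ENDPOINT_NAME = "churn-model-prod"  # hypothetical endpoint name

runtime = boto3.client("sagemaker-runtime")
payload = {"features": [0.3, 1.2, 5.0]}

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Body=json.dumps(payload),
)
prediction = json.loads(response["Body"].read())
print(prediction)
```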
Job Description:
Infilect is a GenAI company pioneering the use of Image Recognition in Consumer Packaged Goods retail.
We are looking for a Senior DevOps Engineer to be responsible and accountable for the smooth running of our Cloud, AI workflows, and AI-based Computer Systems. Furthermore, the candidate will supervise the implementation and maintenance of the company’s computing needs including the in-house GPU & AI servers along with AI workloads.
Responsibilities
- Understanding and automating AI-based deployments and AI-based workflows
- Implementing various development, testing, automation tools, and IT infrastructure
- Manage Cloud, computer systems and other IT assets.
- Strive for continuous improvement and build a continuous integration, continuous delivery, and continuous deployment pipeline (CI/CD pipeline)
- Design, develop, implement, and coordinate systems, policies, and procedures for Cloud and on-premise systems
- Ensure the security of data, network access, and backup systems
- Act in alignment with user needs and system functionality to contribute to organizational policy
- Identify problematic areas, perform RCA and implement strategic solutions in time
- Preserve assets, information security, and control structures
- Handle monthly/annual cloud budget and ensure cost effectiveness
Requirements and skills
- Well versed in automation tools such as Docker, Kubernetes, Puppet, Ansible etc.
- Working knowledge of Python and a SQL database stack, or any full stack with relevant tools.
- Understanding agile development, CI/CD, sprints, code reviews, Git and GitHub/Bitbucket workflows
- Well versed with ELK stack or any other logging, monitoring and analysis tools
- Proven working experience of 2+ years as a DevOps/tech lead/IT manager or in a relevant position
- Excellent knowledge of technical management, information analysis, and computer hardware/software systems
- Hands-on experience with computer networks, network administration, and network installation
- Knowledge of ISO/SOC Type II implementation will be a plus
- BE/B.Tech/ME/M.Tech in Computer Science, IT, Electronics or a similar field
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance driven backend systems, we want you.
What We Expect:
• You should already be exceptional at Backend. If you need hand-holding, this isn’t the place for you.
• You thrive on challenges, not on perks or financial rewards.
• You measure success by your own growth, not external validation.
• Taking calculated risks excites you—you’re here to build, break, and learn.
• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
• You understand the stakes—milliseconds can make or break trades, and precision is everything.
What You Will Do:
• Develop and optimize high-performance backend systems for trading platforms and financial services.
• Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.
• Build event-driven, fault-tolerant systems that can handle massive real-time data streams.
• Own your work—no babysitting, no micromanagement.
• Work alongside equally driven engineers who expect nothing less than brilliance.
• Learn faster than you ever thought possible.
Must-Have Skills:
• Proven expertise in Backend (if you need to prove yourself, this isn’t the role for you).
• Deep understanding of concurrency, memory management, and system design.
• Experience with Trading, market data processing, or low-latency systems.
• Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.
• Hands-on with Docker, Kubernetes, and CI/CD pipelines.
• A portfolio of work that speaks louder than a resume.
Nice-to-Have Skills:
• Past experience in fintech, trading systems, or algorithmic trading.
• Contributions to open-source projects.
• A history of building something impactful from scratch.
• Understanding of FIX protocol, WebSockets, and streaming APIs.
Why Join Us?
• Work with a team that expects and delivers excellence.
• A culture where risk-taking is rewarded, and complacency is not.
• Limitless opportunities for growth—if you can handle the pace.
• A place where learning is currency, and outperformance is the only metric that matters.
• The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
Type: Client-Facing Technical Architecture, Infrastructure Solutioning & Domain Consulting (India + International Markets)
Role Overview
Tradelab is seeking a senior Solution Architect who can interact with both Indian and international clients (Dubai, Singapore, London, US), helping them understand our trading systems, OMS/RMS/CMS stack, HFT platforms, feed systems, and Matching Engine. The architect will design scalable, secure, and ultra-low-latency deployments tailored to global forex markets, brokers, prop firms, liquidity providers, and market makers.
Key Responsibilities
1. Client Engagement (India + International Markets)
- Engage with brokers, prop trading firms, liquidity providers, and financial institutions across India, Dubai, Singapore, and global hubs.
- Explain Tradelab’s capabilities, architecture, and deployment options.
- Understand region-specific latency expectations, connectivity options, and regulatory constraints.
2. Requirement Gathering & Solutioning
- Capture client needs, throughput, order concurrency, tick volumes, and market data handling.
- Assess infra readiness (cloud/on-prem/colo).
- Propose architecture aligned with forex markets.
3. Global Architecture & Deployment Design
- Design multi-region infrastructure using AWS/Azure/GCP.
- Architect low-latency routing between India–Singapore–Dubai.
- Support deployments in DCs like Equinix SG1/DX1.
4. Networking & Security Architecture
- Architect multicast/unicast feeds, VPNs, IPSec tunnels, BGP routes.
- Implement network hardening, segmentation, WAF/firewall rules.
5. DevOps, Cloud Engineering & Scalability
- Build CI/CD pipelines, Kubernetes autoscaling, cost-optimized AWS multi-region deployments.
- Design global failover models.
6. BFSI & Trading Domain Expertise
- Indian broking, international forex, LP aggregation, HFT.
- OMS/RMS, risk engines, LP connectivity, and matching engines.
7. Latency, Performance & Capacity Planning
- Benchmark and optimize cross-region latency.
- Tune performance for high tick volumes and volatility bursts.
8. Documentation & Consulting
- Prepare HLDs, LLDs, SOWs, cost sheets, and deployment playbooks.
Required Skills
- AWS: EC2, VPC, EKS, NLB, MSK/Kafka, IAM, Global Accelerator.
- DevOps: Kubernetes, Docker, Helm, Terraform.
- Networking: IPSec, GRE, VPN, BGP, multicast (PIM/IGMP).
- Message buses: Kafka, RabbitMQ, Redis Streams.
Domain Skills
- Deep Broking Domain Understanding.
- Indian broking + global forex/CFD.
- FIX protocol, LP integration, market data feeds.
- Regulations: SEBI, DFSA, MAS, ESMA.
Soft Skills
- Excellent communication and client-facing ability.
- Strong presales and solutioning mindset.
Preferred Qualifications
- B.Tech/BE/M.Tech in CS or equivalent.
- AWS Architect Professional, CCNP, CKA.
- Experience in colocation/global trading infra.
Why Join Us?
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
ROLES AND RESPONSIBILITIES:
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
KEY RESPONSIBILITIES:
Cloud & Infrastructure as Code (IaC):
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
- Optimize performance of containerized and microservices applications.
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, Guard Duty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, secure access policies, and AWS Organizations.
Databases & Analytics:
- Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Experience with Apache Airflow and Spark.
IDEAL CANDIDATE:
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening etc).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, Guard Duty, Inspector and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/bash, Python, or similar) for automation.
- Bachelor's or Master's degree
- Effective communication skills
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
We are looking for an AI/ML Engineer with 4-5 years of experience who can design, develop, and deploy scalable machine learning models and AI-driven solutions. The ideal candidate should have strong expertise in data processing, model building, and production deployment, along with solid programming and problem-solving skills.
Key Responsibilities
- Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
- Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
- Optimize model performance and ensure scalability in production environments.
- Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
- Conduct data analysis to identify trends and insights.
- Implement MLOps practices for versioning, monitoring, and automating ML workflows (an experiment-tracking sketch follows this list).
- Research and evaluate new AI/ML techniques, tools, and frameworks.
- Document system architecture, model design, and development processes.
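For the MLOps versioning-and-monitoring item above, here is a minimal experiment-tracking sketch with MLflow and scikit-learn. The dataset, hyperparameters, and run name are illustrative assumptions, not a prescribed workflow.

```python
# Track parameters, a metric, and the trained model with MLflow.
# Dataset, hyperparameters, and run name are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 5}
    mlflow.log_params(params)

    model = RandomForestClassifier(**params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact attached to the run
```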
Required Skills
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
- Hands-on experience in building and deploying ML/DL models in production.
- Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
- Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
- Strong analytical skills and ability to work with large datasets.
- Excellent communication and problem-solving abilities.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or related fields.
- Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
- Experience in LLM fine-tuning or generative AI is an added advantage.
Job Description.
1. Cloud experience (Any cloud is fine although AWS is preferred. If non-AWS cloud, then the experience should reflect familiarity with the cloud's common services)
2. Good grasp of scripting (Linux for sure, i.e., bash/sh/zsh, etc.; Windows: nice to have)
3. Basic knowledge of Python, Java, or JS (Python preferred)
4. Monitoring tools
5. Alerting tools
6. Logging tools
7. CI/CD
8. Docker/containers (K8s/Terraform nice to have)
9. Experience working on distributed applications with multiple services
10. Incident management
11. DB experience in terms of basic queries
12. Understanding of performance analysis of applications
13. Idea about data pipelines would be nice to have
14. Snowflake querying knowledge: nice to have
The person should be able to :
Monitor system issues
Create strategies to detect and address issues
Implement automated systems to troubleshoot and resolve issues.
Write and review post-mortems
Manage infrastructure for multiple product teams
Collaborate with product engineering teams to ensure best practices are being followed.
About the Company
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position Summary
We are seeking a highly experienced and visionary Senior Engineering Manager – Inference Services to lead and scale our team responsible for building high-performance inference systems that power cutting-edge AI/ML products. This role requires a blend of strong technical expertise, leadership skills, and product-oriented thinking to drive innovation, scalability, and reliability of our inference infrastructure.
Key Responsibilities
Leadership & Strategy
- Lead, mentor, and grow a team of engineers focused on inference platforms, services, and optimizations.
- Define the long-term vision and roadmap for inference services in alignment with product and business goals.
- Partner with cross-functional leaders in ML, Product, Data Science, and Infrastructure to deliver robust, low-latency, and scalable inference solutions.
Engineering Excellence
- Architect and oversee development of distributed, production-grade inference systems ensuring scalability, efficiency, and reliability.
- Drive adoption of best practices for model deployment, monitoring, and continuous improvement of inference pipelines.
- Ensure high availability, cost optimization, and performance tuning of inference workloads across cloud and on-prem environments.
Innovation & Delivery
- Evaluate emerging technologies, frameworks, and hardware accelerators (GPUs, TPUs, etc.) to continuously improve inference efficiency.
- Champion automation and standardization of model deployment and lifecycle management.
- Balance short-term delivery with long-term architectural evolution.
People & Culture
- Build a strong engineering culture focused on collaboration, innovation, and accountability.
- Provide coaching, feedback, and career development opportunities to team members.
- Foster a growth mindset and data-driven decision-making.
Basic Qualifications
Experience
- 12+ years of software engineering experience with at least 4–5 years in engineering leadership roles.
- Proven track record of managing high-performing teams delivering large-scale distributed systems or ML platforms.
- Experience in building and operating inference systems, ML serving platforms, or real-time data systems at scale.
Technical Expertise
- Strong understanding of machine learning model deployment, serving, and optimization (batch & real-time).
- Proficiency in cloud-native technologies (Kubernetes, Docker, microservices architecture).
- Hands-on knowledge of inference frameworks (TensorFlow Serving, Triton Inference Server, TorchServe, etc.) and hardware accelerators.
- Solid background in programming languages (Python, Java, C++ or Go) and performance optimization techniques.
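For the inference-serving frameworks mentioned above, a minimal client-side sketch might look like the following; it assumes a TorchServe-style REST endpoint running locally with a model already deployed, and the host, port, model name, and input file are all illustrative.

```python
# Client-side sketch only: POSTs raw image bytes to a TorchServe-style
# /predictions/<model_name> route and returns the JSON prediction.
import requests

def predict(image_path: str, model_name: str = "resnet18") -> dict:
    """Send raw bytes to the inference endpoint and return the parsed response."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"http://localhost:8080/predictions/{model_name}",  # assumed local server
            data=f.read(),
            timeout=5,
        )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(predict("sample.jpg"))  # hypothetical local image file
```

Production inference services would add batching, authentication, retries, and latency/error-budget monitoring around this kind of call path.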
Preferred Qualifications
- Experience with MLOps platforms and end-to-end ML lifecycle management.
- Prior work in high-throughput, low-latency systems (ad-tech, search, recommendations, etc.).
- Knowledge of cost optimization strategies for large-scale inference workloads.
About the Job
This is a full-time role for a Lead DevOps Engineer at Spark Eighteen. We are seeking an experienced DevOps professional to lead our infrastructure strategy, design resilient systems, and drive continuous improvement in our deployment processes. In this role, you will architect scalable solutions, mentor junior engineers, and ensure the highest standards of reliability and security across our cloud infrastructure. The job location is flexible with preference for the Delhi NCR region.
Responsibilities
- Lead and mentor the DevOps/SRE team
- Define and drive DevOps strategy and roadmaps
- Oversee infrastructure automation and CI/CD at scale
- Collaborate with architects, developers, and QA teams to integrate DevOps practices
- Ensure security, compliance, and high availability of platforms
- Own incident response, postmortems, and root cause analysis
- Budgeting, team hiring, and performance evaluation
Requirements
Technical Skills
- Bachelor's or Master's degree in Computer Science, Engineering, or related field.
- 7+ years of professional DevOps experience with demonstrated progression.
- Strong architecture and leadership background
- Deep hands-on knowledge of infrastructure as code, CI/CD, and cloud
- Proven experience with monitoring, security, and governance
- Effective stakeholder and project management
- Experience with tools like Jenkins, ArgoCD, Terraform, Vault, ELK, etc.
- Strong understanding of business continuity and disaster recovery
Soft Skills
- Cross-functional communication excellence with ability to lead technical discussions.
- Strong mentorship capabilities for junior and mid-level team members.
- Advanced strategic thinking and ability to propose innovative solutions.
- Excellent knowledge transfer skills through documentation and training.
- Ability to understand and align technical solutions with broader business strategy.
- Proactive problem-solving approach with focus on continuous improvement.
- Strong leadership skills in guiding team performance and technical direction.
- Effective collaboration across development, QA, and business teams.
- Ability to make complex technical decisions with minimal supervision.
- Strategic approach to risk management and mitigation.
What We Offer
- Professional Growth: Continuous learning opportunities through diverse projects and mentorship from experienced leaders
- Global Exposure: Work with clients from 20+ countries, gaining insights into different markets and business cultures
- Impactful Work: Contribute to projects that make a real difference, with solutions generating over $1B in revenue
- Work-Life Balance: Flexible arrangements that respect personal wellbeing while fostering productivity
- Career Advancement: Clear progression pathways as you develop skills within our growing organization
- Competitive Compensation: Attractive salary packages that recognize your contributions and expertise
Our Culture
At Spark Eighteen, our culture centers on innovation, excellence, and growth. We believe in:
- Quality-First: Delivering excellence rather than just quick solutions
- True Partnership: Building relationships based on trust and mutual respect
- Communication: Prioritizing clear, effective communication across teams
- Innovation: Encouraging curiosity and creative approaches to problem-solving
- Continuous Learning: Supporting professional development at all levels
- Collaboration: Combining diverse perspectives to achieve shared goals
- Impact: Measuring success by the value we create for clients and users
Apply Here - https://tinyurl.com/t6x23p9b
About the job
Key Responsibilities
1. Design, develop, and maintain dynamic, responsive web applications using React.js and Node.js.
2. Build efficient, reusable front-end and back-end components for seamless performance.
3. Integrate RESTful APIs and third-party services.
4. Collaborate with UI/UX designers, mobile teams, and backend developers to deliver high-quality products.
5. Write clean, maintainable, and efficient code following modern coding standards.
6. Debug, test, and optimize applications for maximum speed and scalability.
7. Manage databases using MongoDB and handle cloud deployment (AWS, Firebase, or similar).
8. Participate in code reviews, architectural discussions, and agile development cycles.
Required Skills & Experience
1. 1-5 years of proven experience in Full Stack Web Development using the MERN stack.
2. Proficiency in React.js, Node.js, Express.js, MongoDB, and JavaScript (ES6+).
3. Strong understanding of HTML5, CSS3, Bootstrap, and Tailwind CSS.
4. Hands-on experience with API design, state management (Redux, Context API), and authentication (JWT/OAuth).
5. Familiarity with version control tools (Git, Bitbucket).
6. Good understanding of database design, schema modeling, and RESTful architecture.
7. Strong problem-solving skills and ability to work in a collaborative team environment.
Perks & Benefits
- Onsite opportunity in our modern Greater Noida office.
- Competitive salary based on skills and experience.
- Exposure to real-world projects and latest tech stacks.
- Work with a creative and talented team.
- Career growth opportunities in full-stack and cross-platform development.
Benefits:
- Health insurance
- Leave encashment
- Paid sick time
- Paid time off
- Work from home
Application Question(s):
- How many years of _____ experience do you have?
- What is your current CTC?
- What is your expected CTC?
Position Overview:
We are seeking a hands-on Engineering Lead with a strong background in cloud-native application development, microservices architecture, and team leadership. The ideal candidate will have a proven track record of delivering complex enterprise-grade applications and will be capable of leading a large team to build scalable, secure, and high-performance systems. This person will not only be a technical expert but also an effective people manager, fostering growth and collaboration within their team.
Key Responsibilities:
- Lead by example, mentor junior engineers, and contribute to team knowledge-sharing efforts.
- Provide guidance on best practices, architecture, and development processes.
- Drive the design and implementation of cloud-native enterprise applications, ensuring scalability, reliability, and maintainability.
- Champion the adoption of microservices principles and design patterns across the engineering team.
- Maintain a hands-on approach in software development, contributing directly to code while balancing leadership responsibilities.
- Collaborate with cross-functional teams (Product, UI/UX, DevOps, QA, Security) to ensure successful delivery of features and enhancements.
- Continuously evaluate and improve the development process, from CI/CD pipelines to code quality and testing.
- Ensure application security best practices are followed, addressing vulnerabilities and potential threats in a proactive manner.
- Help define technical roadmaps and provide input on architectural decisions that meet both current and future customer needs.
- Foster a culture of collaboration, continuous learning, and innovation within the engineering team.
Required Skills & Experience:
Technical Skills:
- Core Technologies: Strong expertise in Node.js and JavaScript, with the ability to pick up new languages and technologies as required.
- Cloud Expertise: Hands-on experience with cloud technologies, particularly AWS, Azure, or Google Cloud Platform (GCP).
- Microservices Architecture: Proven experience in building and maintaining cloud-native, microservices-based applications.
- Security Awareness: Deep understanding of security principles, especially in the context of developing enterprise applications.
- Development Tools: Proficiency in version control systems (Git), CI/CD tools, containerization (Docker), and orchestration platforms (Kubernetes).
- Scalability & Performance: Strong knowledge of designing systems for scalability and performance, with experience managing large-scale systems.
Communication Skills:
- Exceptional verbal and written communication skills, with the ability to articulate complex business concepts to both technical and non-technical stakeholders.
- Strong presentation skills to effectively convey technical information and business value to clients.
- Ability to collaborate effectively with cross-functional teams and clients across different time zones and cultural backgrounds.
Experience:
- At least 5-10 years of experience in software engineering with at least 2-3 years in a leadership role managing a team of developers.
- Proven track-record for delivering performant and scalable applications.
- Experience working in client-facing roles, providing technical consulting, and managing client expectations.
Leadership Skills:
- Proven ability to manage, mentor, and motivate a team of engineers.
- Strong communication skills, capable of explaining complex technical concepts to non-technical stakeholders.
- Collaborative mindset with the ability to work effectively with cross-functional teams.
LIFE AT FOUNTANE:
- Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
- Competitive pay
- Health insurance
- Individual/team bonuses
- Employee stock ownership plan
- Fun/challenging variety of projects/industries
- Flexible workplace policy - remote/physical
- Flat organization - no micromanagement
- Individual contribution - set your deadlines
- Above all - culture that helps you grow exponentially.
Qualifications - No bachelor's degree required. Good communication skills are a must!
ABOUT US:
Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.
We’re a team of 80 strong from around the world, radically open-minded, and we believe in excellence, respecting one another, and pushing our boundaries further than they have ever been.
Our Mission
To make video as accessible to machines as text and voice are today.
At lookup, we believe the world's most valuable asset is trapped. Video is everywhere, but it's unsearchable—a black box of insight that no one can open, or at least not affordably. We’re changing that. We're building the search engine for the visual world, so anyone can find or do anything with video just by asking.
Text is queryable. Voice is transcribed. Video, the largest and richest data source of all, is still a black box. A computer can't understand it, and so its value remains trapped.
Our mission at lookup is to fix this.
About the Role
We are looking for founding Backend Engineers to build a highly performant, reliable, and scalable API platform that makes enterprise video knowledge readily available for video search, summarization, and natural‑language Q&A. You will partner closely with our ML team working on vision‑language models to productionize research and deliver fast, trustworthy APIs for customers.
Examples of technical challenges you will work on include: distributed video storage, a unified application framework and data model for indexing large video libraries, low‑latency clip retrieval, vector search at scale, and end‑to‑end build, test, deploy, and observability in cloud environments.
What You’ll Do:
- Design and build robust backend services and APIs (REST, gRPC) for vector search, video summarization, and video Q&A.
- Own API performance and reliability, including low‑latency retrieval, pagination, rate limiting, and backwards‑compatible versioning.
- Design schemas and tune queries in Postgres, and integrate with unstructured storage.
- Implement observability across metrics, logs, and traces. Set error budgets and SLOs.
- Write clear design docs and ship high‑quality, well‑tested code.
- Collaborate with ML engineers to integrate and productionize VLMs and retrieval pipelines.
- Take ownership of architecture from inception to production launch.
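As a rough sketch of the API-building work described above (and assuming the FastAPI stack mentioned later in this listing), the example below shows a paginated REST endpoint; the route, data model, and in-memory data source are hypothetical stand-ins for a real retrieval backend.

```python
# Minimal paginated listing endpoint. In a real service the data would come from
# Postgres or a vector store rather than an in-memory list.
from fastapi import FastAPI, Query
from pydantic import BaseModel

app = FastAPI()

class Clip(BaseModel):
    id: int
    title: str

# Stand-in for results that would normally come from a metadata / vector store
CLIPS = [Clip(id=i, title=f"clip-{i}") for i in range(100)]

@app.get("/v1/clips", response_model=list[Clip])
def list_clips(limit: int = Query(20, le=100), offset: int = 0) -> list[Clip]:
    """Simple limit/offset pagination; cursor-based paging scales better for large libraries."""
    return CLIPS[offset:offset + limit]
```

Run with `uvicorn module_name:app` (module name is whatever file holds the sketch); rate limiting and versioning, also called out above, would typically sit in a gateway or middleware layer in front of routes like this.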
Who You Are:
- 3+ years of professional experience in backend development.
- Proven experience building and scaling polished WebSocket, gRPC, and REST APIs.
- Exposure to distributed systems and container orchestration (Docker and Kubernetes).
- Hands‑on experience with AWS.
- Strong knowledge of SQL (Postgres) and NoSQL (e.g., Cassandra), including schema design, query optimization, and scaling.
- Familiarity with our stack is a plus, but not mandatory: Python (FastAPI), Celery, Kafka, Postgres, Redis, Weaviate, React.
- Ability to diagnose complex issues, identify root causes, and implement effective fixes.
- Comfortable working in a fast‑paced startup environment.
Nice to have:
- Hands-on work with LLM agents, vector embeddings, or RAG applications.
- Building video streaming pipelines and storage systems at scale (FFmpeg, RTSP, WebRTC).
- Proficiency with modern frontend frameworks (React, TypeScript, Tailwind CSS) and responsive UI design.
Location & Culture
- Full-time, in-office role in Bangalore (we’re building fast and hands-on).
- Must be comfortable with a high-paced environment and collaboration across PST time zones for our US customers and investors.
- Expect startup speed — daily founder syncs, rapid design-to-prototype cycles, and a culture of deep ownership.
Why You’ll Love This Role
- Work on the frontier of video understanding and real-world AI — products that can redefine trust and automation.
- Build core APIs that make video queryable and power real customer use.
- Own systems end to end: performance, reliability, and developer experience.
- Work closely with founders and collaborate in person in Bangalore.
- Competitive salary with meaningful early equity.
About the Role:
We are building cutting-edge AI products designed for enterprise-scale applications and are looking for a Senior Python Developer to join our core engineering team. You will be responsible for designing and delivering robust, scalable backend systems that power our advanced AI solutions.
Key Responsibilities:
- Design, develop, and maintain scalable Python-based backend applications and services.
- Collaborate with AI/ML teams to integrate machine learning models into production environments.
- Optimize applications for performance, reliability, and security.
- Write clean, maintainable, and testable code following best practices.
- Work with cross-functional teams including Data Science, DevOps, and UI/UX to ensure seamless delivery.
- Participate in code reviews, architecture discussions, and technical decision-making.
- Troubleshoot, debug, and upgrade existing systems.
Required Skills & Experience:
- Minimum 5 years of professional Python development experience.
- Strong expertise in Django / Flask / FastAPI.
- Hands-on experience with REST APIs, microservices, and event-driven architecture.
- Solid understanding of databases (PostgreSQL, MySQL, MongoDB, Redis).
- Familiarity with cloud platforms (AWS / Azure / GCP) and CI/CD pipelines.
- Experience with GenAI tooling such as RAG pipelines, LLMs, LangChain, LangGraph, etc.
- Experience with AI/ML pipeline integration is a strong plus.
- Strong problem-solving and debugging skills.
- Excellent communication skills and ability to work in a collaborative environment.
Good to Have:
- Experience with Docker, Kubernetes.
- Exposure to message brokers (RabbitMQ, Kafka).
- Knowledge of data engineering tools (Airflow, Spark).
- Familiarity with Neo4j or other graph databases.
Required Skills & Qualifications:
✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
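To illustrate the scripting and automation requirement above, here is a small hedged sketch using the google-cloud-storage client library; it assumes application-default credentials are already configured, and the project ID is a placeholder.

```python
# List Cloud Storage buckets in a project — the kind of small automation task the
# role describes. Requires `pip install google-cloud-storage` and ADC credentials.
from google.cloud import storage

def list_buckets(project_id: str) -> list[str]:
    """Return the names of all buckets visible in the given project."""
    client = storage.Client(project=project_id)
    return [bucket.name for bucket in client.list_buckets()]

if __name__ == "__main__":
    for name in list_buckets("my-gcp-project"):  # hypothetical project ID
        print(name)
```

Larger provisioning tasks (VPCs, GKE clusters, IAM) would normally go through Terraform rather than ad-hoc scripts, with scripts like this reserved for glue and reporting.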
About the Role
We’re looking for an Elixir Developer who is passionate about building scalable, high performance backend systems. You’ll work closely with our engineering team to design, develop, and maintain reliable applications that power mission-critical systems.
Key Responsibilities
• Develop and maintain backend services using Elixir and Phoenix framework.
• Build scalable, fault-tolerant, and distributed systems.
• Integrate APIs, databases, and message queues for real-time applications.
• Optimize system performance and ensure low latency and high throughput.
• Collaborate with frontend, DevOps, and product teams to deliver seamless solutions.
• Write clean, maintainable, and testable code with proper documentation.
• Participate in code reviews, architectural discussions, and deployment automation.
Required Skills & Experience
• 2–4 years of hands-on experience in Elixir (or strong functional programming background).
• Experience with Phoenix, Ecto, and RESTful API development.
• Solid understanding of OTP (Open Telecom Platform) concepts like GenServer, Supervisors, etc.
• Proficiency in PostgreSQL, Redis, or similar databases.
• Familiarity with Docker, Kubernetes, or cloud platforms (AWS/GCP/Azure).
• Understanding of CI/CD pipelines, version control (Git), and agile development.
Good to Have
• Experience with microservices architecture or real-time data systems.
• Knowledge of GraphQL, LiveView, or PubSub.
• Exposure to performance profiling, observability, or monitoring tools.
Why Join Us?
• Work with a team that expects and delivers excellence.
• A culture where risk-taking is rewarded, and complacency is not.
• Limitless opportunities for growth—if you can handle the pace.
• A place where learning is currency, and outperformance is the only metric that matters.
• The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
About the Role
We are looking for a passionate AI Engineer Intern (B.Tech, M.Tech / M.S. or equivalent) with strong foundations in Artificial Intelligence, Computer Vision, and Deep Learning to join our R&D team.
You will help us build and train realistic face-swap and deepfake video models, powering the next generation of AI-driven video synthesis technology.
This is a remote, individual-contributor role offering exposure to cutting-edge AI model development in a startup-like environment.
Key Responsibilities
- Research, implement, and fine-tune face-swap / deepfake architectures (e.g., FaceSwap, SimSwap, DeepFaceLab, LatentSync, Wav2Lip).
- Train and optimize models for realistic facial reenactment and temporal consistency.
- Work with GANs, VAEs, and diffusion models for video synthesis.
- Handle dataset creation, cleaning, and augmentation for face-video tasks.
- Collaborate with the AI core team to deploy trained models in production environments.
- Maintain clean, modular, and reproducible pipelines using Git and experiment-tracking tools.
Required Qualifications
- B.Tech, M.Tech / M.S. (or equivalent) in AI / ML / Computer Vision / Deep Learning.
- Certifications in AI or Deep Learning (DeepLearning.AI, NVIDIA DLI, Coursera, etc.).
- Proficiency in PyTorch or TensorFlow, OpenCV, FFmpeg.
- Understanding of CNNs, Autoencoders, GANs, Diffusion Models.
- Familiarity with datasets like CelebA, VoxCeleb, FFHQ, DFDC, etc.
- Good grasp of data preprocessing, model evaluation, and performance tuning.
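As a minimal, hedged illustration of the autoencoder building block mentioned in the qualifications above, the sketch below defines a tiny convolutional autoencoder in PyTorch and runs a random batch through it; the architecture, input size, and latent dimension are illustrative and far simpler than production face-swap models.

```python
# Tiny convolutional autoencoder for 64x64 RGB crops: encoder compresses to a latent
# vector, decoder reconstructs the image. Shapes are annotated per layer.
import torch
from torch import nn

class AutoEncoder(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),    # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=2, stride=2),     # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = AutoEncoder()
fake_batch = torch.rand(4, 3, 64, 64)   # random stand-in for face crops
print(model(fake_batch).shape)           # torch.Size([4, 3, 64, 64])
```

Face-swap frameworks build on this idea with shared encoders, per-identity decoders, adversarial losses, and temporal-consistency constraints.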
Preferred Skills
- Prior hands-on experience with face-swap or lip-sync frameworks.
- Exposure to 3D morphable models, NeRF, motion transfer, or facial landmark tracking.
- Knowledge of multi-GPU training and model optimization.
- Familiarity with Rust / Python backend integration for inference pipelines.
What We Offer
- Work directly on production-grade AI video synthesis systems.
- Remote-first, flexible working hours.
- Mentorship from senior AI researchers and engineers.
- Opportunity to transition into a full-time role upon outstanding performance.
Location: Remote | Stipend: ₹10,000/month | Duration: 3–6 months
We seek a skilled and motivated Azure DevOps engineer to join our dynamic team. The ideal candidate will design, implement, and manage CI/CD pipelines, automate deployments, and optimize cloud infrastructure using Azure DevOps tools and services. You will collaborate closely with development and IT teams to ensure seamless integration and delivery of software solutions in a fast-paced environment.
Responsibilities:
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Azure CLI.
- Monitor and optimize Azure environments to ensure high availability, performance, and security.
- Collaborate with development, QA, and IT teams to streamline the software development lifecycle (SDLC).
- Troubleshoot and resolve issues related to build, deployment, and infrastructure.
- Implement and manage version control systems, primarily using Git.
- Manage containerization and orchestration using tools like Docker and Kubernetes.
- Ensure compliance with industry standards and best practices for security, scalability, and reliability.
Junior DevOps Engineer
Experience: 2–3 years
About Us
We are a fast-growing fintech/trading company focused on building scalable, high-performance systems for financial markets. Our technology stack powers real-time trading, risk management, and analytics platforms. We are looking for a motivated Junior DevOps Engineer to join our dynamic team and help us maintain and improve our infrastructure.
Key Responsibilities
- Support deployment, monitoring, and maintenance of trading and fintech applications.
- Automate infrastructure provisioning and deployment pipelines using tools like Ansible, Terraform, or similar.
- Collaborate with development and operations teams to ensure high availability, reliability, and security of systems.
- Troubleshoot and resolve production issues in a fast-paced environment.
- Implement and maintain CI/CD pipelines for continuous integration and delivery.
- Monitor system performance and optimize infrastructure for scalability and cost-efficiency.
- Assist in maintaining compliance with financial industry standards and security best practices.
Required Skills
- 2–3 years of hands-on experience in DevOps or related roles.
- Proficiency in Linux/Unix environments.
- Experience with containerization (Docker) and orchestration (Kubernetes).
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Working knowledge of scripting languages (Bash, Python).
- Experience with configuration management tools (Ansible, Puppet, Chef).
- Understanding of networking concepts and security practices.
- Exposure to monitoring tools (Prometheus, Grafana, ELK stack).
- Basic understanding of CI/CD tools (Jenkins, GitLab CI, GitHub Actions).
Preferred Skills
- Experience in fintech, trading, or financial services.
- Knowledge of high-frequency trading systems or low-latency environments.
- Familiarity with financial data protocols and APIs.
- Understanding of regulatory requirements in financial technology.
What We Offer
- Opportunity to work on cutting-edge fintech/trading platforms.
- Collaborative and learning-focused environment.
- Competitive salary and benefits.
- Career growth in a rapidly expanding domain.
Job Summary :
We are looking for a proactive and skilled Senior DevOps Engineer to join our team and play a key role in building, managing, and scaling infrastructure for high-performance systems. The ideal candidate will have hands-on experience with Kubernetes, Docker, Python scripting, cloud platforms, and DevOps practices around CI/CD, monitoring, and incident response.
Key Responsibilities :
- Design, build, and maintain scalable, reliable, and secure infrastructure on cloud platforms (AWS, GCP, or Azure).
- Implement Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or similar.
- Manage Kubernetes clusters and configure namespaces, services, deployments, and autoscaling.
CI/CD & Release Management :
- Build and optimize CI/CD pipelines for automated testing, building, and deployment of services.
- Collaborate with developers to ensure smooth and frequent deployments to production.
- Manage versioning and rollback strategies for critical deployments.
Containerization & Orchestration using Kubernetes :
- Containerize applications using Docker, and manage them using Kubernetes.
- Write automation scripts using Python or Shell for infrastructure tasks, monitoring, and deployment flows.
- Develop utilities and tools to enhance operational efficiency and reliability.
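As an example of the kind of automation utility described above, here is a small sketch using the official Kubernetes Python client to flag pods that are not Running or Succeeded; it assumes a reachable cluster and a local kubeconfig, and is illustrative rather than a required tool.

```python
# Flag unhealthy pods across all namespaces — a simple operational helper script.
# Requires `pip install kubernetes` and a valid kubeconfig.
from kubernetes import client, config

def unhealthy_pods() -> list[str]:
    config.load_kube_config()           # use config.load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            bad.append(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
    return bad

if __name__ == "__main__":
    for line in unhealthy_pods():
        print(line)
```

A script like this could feed an alerting channel or run as a CronJob; it is deliberately read-only so it is safe to run against production clusters.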
Monitoring & Incident Management :
- Analyze system performance and implement infrastructure scaling strategies based on load and usage trends.
- Optimize application and system performance through proactive monitoring and configuration tuning.
Desired Skills and Experience :
- Experience Required - 8+ yrs.
- Hands-on experience with cloud services like AWS, EKS, etc.
- Ability to design a good cloud solution.
- Strong Linux troubleshooting, shell scripting, Kubernetes, Docker, Ansible, and Jenkins skills.
- Design and implement CI/CD pipelines following industry best practices using open-source tools.
- Use knowledge and research to constantly modernize our applications and infrastructure stacks.
- Be a team player and a strong problem-solver able to work with a diverse team.
- Good communication skills.
ROLES AND RESPONSIBILITIES:
We are looking for a Software Engineering Manager to lead a high-performing team focused on building scalable, secure, and intelligent enterprise software. The ideal candidate is a strong technologist who enjoys coding, mentoring, and driving high-quality software delivery in a fast-paced startup environment.
KEY RESPONSIBILITIES:
- Lead and mentor a team of software engineers across backend, frontend, and integration areas.
- Drive architectural design, technical reviews, and ensure scalability and reliability.
- Collaborate with Product, Design, and DevOps teams to deliver high-quality releases on time.
- Establish best practices in agile development, testing automation, and CI/CD pipelines.
- Build reusable frameworks for low-code app development and AI-driven workflows.
- Hire, coach, and develop engineers to strengthen technical capabilities and team culture.
IDEAL CANDIDATE:
- B.Tech/B.E. in Computer Science from a Tier-1 Engineering College.
- 3+ years of professional experience as a software engineer, with at least 1 year mentoring or managing engineers.
- Strong expertise in backend development (Java / Node.js / Go / Python) and familiarity with frontend frameworks (React / Angular / Vue).
- Solid understanding of microservices, APIs, and cloud architectures (AWS/GCP/Azure).
- Experience with Docker, Kubernetes, and CI/CD pipelines.
- Excellent communication and problem-solving skills.
PREFERRED QUALIFICATIONS:
- Experience building or scaling SaaS or platform-based products.
- Exposure to GenAI/LLM, data pipelines, or workflow automation tools.
- Prior experience in a startup or high-growth product environment.
Python Backend Developer
We are seeking a skilled Python Backend Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be on developing server-side logic to ensure high performance and responsiveness to requests from the front end. You will also be responsible for integrating front-end elements built by your coworkers into the application, as well as managing AWS resources.
Roles & Responsibilities
- Develop and maintain scalable, secure, and robust backend services using Python
- Design and implement RESTful APIs and/or GraphQL endpoints
- Integrate user-facing elements developed by front-end developers with server-side logic
- Write reusable, testable, and efficient code
- Optimize components for maximum performance and scalability
- Collaborate with front-end developers, DevOps engineers, and other team members
- Troubleshoot and debug applications
- Implement data storage solutions (e.g., PostgreSQL, MySQL, MongoDB)
- Ensure security and data protection
Mandatory Technical Skill Set
- Implementing optimal data storage (e.g., PostgreSQL, MySQL, MongoDB, S3)
- Python backend development experience
- Experience designing, implementing, and maintaining CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions
- Experience implementing and managing containerization platforms such as Docker and orchestration tools like Kubernetes
- Previous hands-on experience in:
- EC2, S3, ECS, EMR, VPC, Subnets, SQS, CloudWatch, CloudTrail, Lambda, SageMaker, RDS, SES, SNS, IAM, AWS Backup, AWS WAF
- SQL
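To illustrate the AWS resource management expected above, a minimal hedged sketch with boto3 might look as follows; the bucket name, key, and region are placeholders, and credentials are assumed to be configured in the environment.

```python
# Upload a small object to S3 — one of the AWS services listed above.
# Requires `pip install boto3` and configured AWS credentials.
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

def upload_report(bucket: str, key: str, body: bytes) -> None:
    """Upload a small object; large files would use multipart upload via upload_file()."""
    s3.put_object(Bucket=bucket, Key=key, Body=body)

if __name__ == "__main__":
    upload_report("example-reports-bucket", "daily/2024-01-01.csv", b"col1,col2\n1,2\n")
```

The same client pattern extends to the other services in the list (SQS, SNS, Lambda, and so on) by swapping the service name in `boto3.client(...)`.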
Role: DevOps Engineer
Experience: 2–3+ years
Location: Pune
Work Mode: Hybrid (3 days Work from office)
Mandatory Skills:
- Strong hands-on experience with CI/CD tools like Jenkins, GitHub Actions, or AWS CodePipeline
- Proficiency in scripting languages (Bash, Python, PowerShell)
- Hands-on experience with containerization (Docker) and container management
- Proven experience managing infrastructure (On-premise or AWS/VMware)
- Experience with version control systems (Git/Bitbucket/GitHub)
- Familiarity with monitoring and logging tools for system performance tracking
- Knowledge of security best practices and compliance standards
- Bachelor's degree in Computer Science, Engineering, or related field
- Willingness to support production issues during odd hours when required
Preferred Qualifications:
- Certifications in AWS, Docker, or VMware
- Experience with configuration management tools like Ansible
- Exposure to Agile and DevOps methodologies
- Hands-on experience with Virtual Machines and Container orchestration
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.
What We Expect:
• You should already be exceptional at Golang. If you need hand-holding, this isn’t the place for you.
• You thrive on challenges, not on perks or financial rewards.
• You measure success by your own growth, not external validation.
• Taking calculated risks excites you—you’re here to build, break, and learn.
• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
• You understand the stakes—milliseconds can make or break trades, and precision is everything.
What You Will Do:
• Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.
• Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.
• Build event-driven, fault-tolerant systems that can handle massive real-time data streams.
• Own your work—no babysitting, no micromanagement.
• Work alongside equally driven engineers who expect nothing less than brilliance.
• Learn faster than you ever thought possible.
Must-Have Skills:
• Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).
• Deep understanding of concurrency, memory management, and system design.
• Experience with trading, market data processing, or low-latency systems.
• Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.
• Hands-on with Docker, Kubernetes, and CI/CD pipelines.
• A portfolio of work that speaks louder than a resume.
Nice-to-Have Skills:
• Past experience in fintech, trading systems, or algorithmic trading.
• Contributions to open-source Golang projects.
• A history of building something impactful from scratch.
• Understanding of FIX protocol, WebSockets, and streaming APIs.
Why Join Us?
• Work with a team that expects and delivers excellence.
• A culture where risk-taking is rewarded, and complacency is not.
• Limitless opportunities for growth—if you can handle the pace.
• A place where learning is currency, and outperformance is the only metric that matters.
• The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
Review Criteria
- Strong DevOps /Cloud Engineer Profiles
- Must have 3+ years of experience as a DevOps / Cloud Engineer
- Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
- Must have strong hands-on experience in Linux administration and system management
- Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
- Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
- Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
- Must be proficient in scripting languages such as Python or Bash for automation
- Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
- Top tier Product-based company (B2B Enterprise SaaS preferred)
Preferred
- Experience in multi-tenant SaaS infrastructure scaling.
- Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.
Role & Responsibilities
We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.
Key Responsibilities:
- Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
- Build and optimize CI/CD pipelines to support rapid release cycles.
- Manage containerization & orchestration (Docker, Kubernetes).
- Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
- Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
- Drive cloud security automation (IAM, SSL, secrets management).
- Partner with engineering teams to embed DevOps into SDLC.
- Troubleshoot production issues and drive incident response.
- Support multi-tenant SaaS scaling strategies.
Ideal Candidate
- 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
- Strong expertise in AWS, Azure, or GCP.
- Strong expertise in Linux administration.
- Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
- Proficient in Terraform/Ansible/CloudFormation.
- Strong scripting skills (Python, Bash).
- Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
- Strong grasp of cloud security best practices.
Job Details
- Job Title: ML Engineer II - AWS, AWS Cloud
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 6-12 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description:
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency.
- Model Development: Algorithms and architectures range from traditional statistical methods to deep learning, including the use of LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other tech touch points are ScyllaDB (similar to BigTable), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements.
- AWS Cloud Infrastructure: Familiarity with S3, EC2, and Lambda, and with using these services in ML workflows.
- AWS data: Redshift, Glue.
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS).
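As a small, hedged illustration of working with SageMaker programmatically (per the required experience above), the sketch below lists recent training jobs with boto3; the region is illustrative and AWS credentials are assumed to be configured.

```python
# List the most recent SageMaker training jobs and their statuses — a simple
# building block for monitoring training runs. Requires boto3 and AWS credentials.
import boto3

sm = boto3.client("sagemaker", region_name="ap-south-1")
resp = sm.list_training_jobs(MaxResults=10, SortBy="CreationTime", SortOrder="Descending")
for job in resp["TrainingJobSummaries"]:
    print(job["TrainingJobName"], job["TrainingJobStatus"])
```

Full SageMaker Pipelines work would typically use the SageMaker Python SDK on top of calls like this, with Redshift/Glue feeding the training data.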
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
AWS, AWS Cloud, Amazon Redshift, EKS
NP: Immediate – 30 Days
Job Details
- Job Title: Lead I - Software Engineering - Java, J2EE, Spring
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 6-12 years
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Job Description:
Role Summary:
We are looking for an experienced Senior Java Developer with expertise in building robust, scalable web applications using Java/J2EE, Spring Boot, REST APIs, and modern microservices architectures. The ideal candidate will be skilled in both back-end and middleware technologies, with strong experience in cloud platforms (AWS), and capable of mentoring junior developers while contributing to high-impact enterprise projects.
The developer will be responsible for full-cycle application development: from interpreting specifications and writing clean, reusable code, to testing, integration, and deployment. You will also work closely with customers and project teams to understand requirements and deliver solutions that optimize cost, performance, and maintainability.
Key Responsibilities:
Application Development & Delivery
- Design, code, debug, test, and document Java-based web applications aligned with design specifications.
- Build scalable and secure microservices using Spring Boot and RESTful APIs.
- Optimize application performance, maintainability, and reusability by using proven design patterns.
- Handle complex data structures and develop multi-threaded, performance-optimized applications.
- Ensure code quality through TDD (JUnit) and best practices.
Cloud & DevOps
- Develop and deploy applications on AWS Cloud Services: EC2, S3, DynamoDB, SNS, SES, etc.
- Leverage containerization tools like Docker and orchestration using Kubernetes.
Integration & Configuration
- Integrate with various databases (PostgreSQL, MySQL, Oracle, NoSQL).
- Configure development environments and CI/CD pipelines as per project needs.
- Follow configuration management processes and ensure compliance.
Testing & Quality Assurance
- Review and create unit test cases, scenarios, and support UAT phases.
- Perform defect root cause analysis (RCA) and proactively implement quality improvements.
Documentation
- Create and review technical documents: HLD, LLD, SAD, user stories, design docs, test cases, and release notes.
- Contribute to project knowledge bases and code repositories.
Team & Project Management
- Mentor team members; conduct code and design reviews.
- Assist Project Manager in effort estimation, planning, and task allocation.
- Set and review FAST goals for yourself and your team; provide regular performance feedback.
Customer Interaction
- Engage with customers to clarify requirements and present technical solutions.
- Conduct product demos and design walkthroughs.
- Interface with customer architects for design finalization.
Key Skills & Tools:
Core Technologies:
- Java/J2EE, Spring Boot, REST APIs
- Object-Oriented Programming (OOP), Design Patterns, Domain-Driven Design (DDD)
- Multithreading, Data Structures, TDD using JUnit
Web & Data Technologies:
- JSON, XML, AJAX, Web Services
- Database Technologies: PostgreSQL, MySQL, Oracle, NoSQL (e.g., DynamoDB)
- Persistence Frameworks: Hibernate, JPA
Cloud & DevOps:
- AWS: S3, EC2, DynamoDB, SNS, SES
- Version Control & Containerization: GitHub, Docker, Kubernetes
Agile & Development Practices:
- Agile methodologies: Scrum or Kanban
- CI/CD concepts
- IDEs: Eclipse, IntelliJ, or equivalent
Expected Outcomes:
- Timely delivery of high-quality code and application components
- Improved performance, cost-efficiency, and maintainability of applications
- High customer satisfaction through accurate requirement translation and delivery
- Team productivity through effective mentoring and collaboration
- Minimal post-production defects and technical issues
Performance Indicators:
- Adherence to coding standards and engineering practices
- On-time project delivery and milestone completion
- Reduction in defect count and issue recurrence
- Knowledge contributions to project and organizational repositories
- Completion of mandatory compliance and technology/domain certifications
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- Relevant certifications (e.g., AWS Certified Developer, Oracle Certified, Scrum Master)
Soft Skills:
- Strong analytical and problem-solving mindset
- Excellent communication and presentation skills
- Team leadership and mentorship abilities
- High accountability and ability to work under pressure
- Positive team dynamics and proactive collaboration
Skills
Java, J2EE, Spring
Must-Haves
Java, J2EE, Spring
Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
NP: Immediate – 30 Days
🎯 Ideal Candidate Profile:
This role requires a seasoned engineer/scientist with a strong academic background from a premier institution and significant hands-on experience in deep learning (specifically image processing) within a hardware or product manufacturing environment.
📋 Must-Have Requirements:
Experience & Education Combinations:
Candidates must meet one of the following criteria:
- Doctorate (PhD) + 2 years of related work experience
- Master's Degree + 5 years of related work experience
- Bachelor's Degree + 7 years of related work experience
Technical Skills:
- Minimum 5 years of hands-on experience in all of the following:
- Python
- Deep Learning (DL)
- Machine Learning (ML)
- Algorithm Development
- Image Processing
- 3.5 to 4 years of strong proficiency with PyTorch OR TensorFlow / Keras.
Industry & Institute:
- Education: Must be from a premier institute (IIT, IISC, IIIT, NIT, BITS) or a recognized regional tier 1 college.
- Industry: Current or past experience in a Product, Semiconductor, or Hardware Manufacturing company is mandatory.
- Preference: Candidates from engineering product companies are strongly preferred.
ℹ️ Additional Role Details:
- Interview Process: 3 technical rounds followed by 1 HR round.
- Work Model: Hybrid (requiring 3 days per week in the office).
📝 Required Skills and Competencies:
💻 Programming & ML Prototyping:
- Strong Proficiency: Python, Data Structures, and Algorithms.
- Hands-on Experience: NumPy, Pandas, Scikit-learn (for ML prototyping).
🤖 Machine Learning Frameworks:
- Core Concepts: Solid understanding of:
- Supervised/Unsupervised Learning
- Regularization
- Feature Engineering
- Model Selection
- Cross-Validation
- Ensemble Methods: Experience with models like XGBoost and LightGBM.
🧠 Deep Learning Techniques:
- Frameworks: Proficiency with PyTorch OR TensorFlow / Keras.
- Architectures: Knowledge of:
- Convolutional Neural Networks (CNNs)
- Recurrent Neural Networks (RNNs)
- Long Short-Term Memory networks (LSTMs)
- Transformers
- Attention Mechanisms
- Optimization: Familiarity with optimization techniques (e.g., Adam, SGD), Dropout, and Batch Normalization.
💬 LLMs & RAG (Retrieval-Augmented Generation):
- Hugging Face: Experience with the Transformers library (tokenizers, embeddings, model fine-tuning).
- Vector Databases: Familiarity with Milvus, FAISS, Pinecone, or ElasticSearch.
- Advanced Techniques: Proficiency in:
- Prompt Engineering
- Function/Tool Calling
- JSON Schema Outputs
🛠️ Data & Tools:
- Data Management: SQL fundamentals; exposure to data wrangling and pipelines.
- Tools: Experience with Git/GitHub, Jupyter, and basic Docker.
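To illustrate the Hugging Face Transformers experience called out above (tokenizers and embeddings), here is a short hedged sketch that mean-pools token embeddings into sentence vectors; the checkpoint name is a public example model, not one mandated by the role.

```python
# Produce sentence embeddings suitable for a vector database: tokenize, run the
# encoder, then mean-pool token vectors while ignoring padding.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"  # illustrative public checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

sentences = ["Retrieval-augmented generation grounds answers in documents.",
             "Vector databases store embeddings for similarity search."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state          # (batch, seq_len, dim)

mask = inputs["attention_mask"].unsqueeze(-1)           # zero out padding tokens
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)                                  # torch.Size([2, 384])
```

These vectors would then be indexed in a store such as FAISS, Milvus, or Pinecone (listed above) to back a RAG pipeline.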
🎓 Minimum Qualifications (Experience & Education Combinations):
Candidates must have experience building AI systems/solutions with Machine Learning, Deep Learning, and LLMs, meeting one of the following criteria:
- Doctorate (Academic) Degree + 2 years of related work experience.
- Master's Level Degree + 5 years of related work experience.
- Bachelor's Level Degree + 7 years of related work experience.
⭐ Preferred Traits and Mindset:
- Academic Foundation: Solid academic background with strong applied ML/DL exposure.
- Curiosity: Eagerness to learn cutting-edge AI and willingness to experiment.
- Communication: Clear communicator who can explain ML/LLM trade-offs simply.
- Ownership: Strong problem-solving and ownership mindset.
Position: Tech Lead
Experience: 8–10 years
Job Location: Pune
We are seeking a highly skilled Tech Lead with strong expertise in Java, microservices architecture, and cloud-native application development. The ideal candidate will bring hands-on leadership experience in designing scalable solutions, guiding development teams, and collaborating with DevOps engineers on OpenShift (OCP) platforms. This role requires a blend of technical leadership, solution design, and delivery ownership.
Key Responsibilities
- Lead the design and development of Java / Spring Boot based microservices in a cloud-native environment.
- Provide technical leadership to a team of developers, ensuring adherence to coding, security, and architectural best practices.
- Collaborate with architects and DevOps engineers to deploy and manage microservices on Red Hat OpenShift (OCP).
- Oversee end-to-end delivery including requirement analysis, design, development, code review, testing, and deployment.
- Define and implement API specifications, integration patterns, and microservices orchestration.
- Work closely with DevOps teams to integrate CI/CD pipelines, containerized deployments, Helm, and GitOps workflows.
- Ensure application performance, scalability, and reliability with proactive observability practices (Grafana, Prometheus, etc.).
Required Skills & Qualifications
- 8–10 years of proven experience in Java application development, with at least 4+ years in microservices architecture.
- Strong expertise in Spring Boot, REST APIs, JPA/Hibernate, and messaging frameworks (Kafka, RabbitMQ, etc.).
- Hands-on experience with containerization (Docker) and orchestration (OpenShift/Kubernetes).
- Familiarity with OCP DevOps practices including CI/CD (ArgoCD, Tekton, Jenkins), Helm, and YAML deployments.
- Good understanding of observability stacks (Grafana, Prometheus, Loki, Alertmanager) and logging practices.
- Solid knowledge of cloud-native design principles, scalability, and fault tolerance.
- Exposure to security best practices (OAuth, RBAC, secrets management via Vault or similar).