
50+ Docker Jobs in India

Apply to 50+ Docker Jobs on CutShort.io. Find your next job, effortlessly. Browse Docker Jobs and apply today!

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Mumbai, Navi Mumbai
7 - 12 yrs
₹15L - ₹25L / yr
Amazon Web Services (AWS)
Windows Azure
Kubernetes
Docker
Terraform
+4 more

Hiring for SRE Lead


Exp: 7 - 12 yrs

Work Location: Mumbai (Kurla West)

WFO


Skills:

Proficient in cloud platforms (AWS, Azure, or GCP), containerization (Kubernetes/Docker), and Infrastructure as Code (Terraform, Ansible, or Puppet). 


Coding/Scripting: Strong programming or scripting skills in at least one language (e.g., Python, Go, Java) for automation and tooling development.


 System Knowledge: Deep understanding of Linux/Unix fundamentals, networking concepts, and distributed systems.


Read more
TrumetricAI
Posted by Yashika Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹20L / yr
Amazon Web Services (AWS)
CI/CD
Git
Docker
Kubernetes

Key Responsibilities:

  • Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
  • Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS CodeBuild/CodeDeploy, Azure DevOps, etc.
  • Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
  • Manage and monitor production deployments to ensure high availability and performance
  • Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
  • Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
  • Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
  • Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
  • Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
  • Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
  • Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
  • Collaborate with development and QA teams to align infrastructure with application needs
  • Troubleshoot infrastructure and deployment issues efficiently and proactively
  • Ensure cloud cost optimization and usage tracking


Required Skills & Experience:

  • 3–4 years of hands-on experience in a DevOps role
  • Strong expertise with both AWS and Azure cloud platforms
  • Proficient in Git, branching strategies, and pull request workflows
  • Deep understanding of CI/CD concepts and experience with pipeline tools
  • Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
  • Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
  • Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
  • Experience with Infrastructure as Code tools (Terraform preferred)
  • Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
  • Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
  • Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
  • Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
  • Working knowledge of monitoring and logging tools
  • Strong troubleshooting and problem-solving skills


Good to Have (Bonus Points):

  • Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
  • Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
  • Experience with compliance/security best practices (SOC2, ISO, etc.)
  • Familiarity with Service Mesh (Istio, Linkerd) and API gateways
  • Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)


Read more
CoffeeBeans

Posted by Ariba Khan
Mumbai, Hyderabad
8 - 11 yrs
Up to ₹35L / yr (Varies)
Kubernetes
Jenkins
Docker
Ansible
SonarQube

About the role:

We are seeking an experienced DevOps Engineer with deep expertise in Jenkins, Docker, Ansible, and Kubernetes to architect and maintain secure, scalable infrastructure and CI/CD pipelines. This role emphasizes security-first DevOps practices, on-premises Kubernetes operations, and integration with data engineering workflows.


🛠 Required Skills & Experience

Technical Expertise

  • Jenkins (Expert): Advanced pipeline development, DSL scripting, security integration, troubleshooting
  • Docker (Expert): Secure multi-stage builds, vulnerability management, optimisation for Java/Scala/Python
  • Ansible (Expert): Complex playbook development, configuration management, automation at scale
  • Kubernetes (Expert - Primary Focus): On-premises cluster operations, security hardening, networking, storage management
  • SonarQube/Code Quality (Strong): Integration, quality gate enforcement, threshold management
  • DevSecOps (Strong): Security scanning, compliance automation, vulnerability remediation, workload governance
  • Spark ETL/ETA (Moderate): Understanding of distributed data processing, job configuration, runtime behavior


Core Competencies

  • Deep understanding of DevSecOps principles and security-first automation
  • Strong troubleshooting and problem-solving abilities across complex distributed systems
  • Experience with infrastructure-as-code and GitOps methodologies
  • Knowledge of compliance frameworks and security standards
  • Ability to mentor teams and drive best practice adoption


🎓Qualifications

  • 6–10 years of hands-on DevOps experience
  • Proven track record with Jenkins, Docker, Kubernetes, and Ansible in production environments
  • Experience managing on-premises Kubernetes clusters (bare-metal preferred)
  • Strong background in security hardening and compliance automation
  • Familiarity with data engineering platforms and big data technologies
  • Excellent communication and collaboration skills


🚀 Key Responsibilities

1. CI/CD Pipeline Architecture & Security

  • Design, implement, and maintain enterprise-grade CI/CD pipelines in Jenkins with embedded security controls:
  • Build greenfield pipelines and enhance/stabilize existing pipeline infrastructure
  • Diagnose and resolve build, test, and deployment failures across multi-service environments
  • Integrate security gates, compliance checks, and automated quality controls at every pipeline stage
  • Manage and optimize SonarQube and static code analysis tooling:
  • Enforce code quality and security scanning standards across all services
  • Maintain organizational coding standards, vulnerability thresholds, and remediation workflows
  • Automate quality gates as integral components of CI/CD processes
  • Engineer optimized Docker images for Java, Scala, and Python applications:
  • Implement multi-stage builds, layer optimization, and minimal base images
  • Conduct image vulnerability scanning and enforce compliance policies
  • Apply containerization best practices for security and performance
  • Develop comprehensive Ansible automation:
  • Create modular, reusable, and secure playbooks for configuration management
  • Automate environment provisioning and application lifecycle operations
  • Maintain infrastructure-as-code standards and version control

2. Kubernetes Platform Operations & Security

  • Lead complete lifecycle management of on-premises/bare-metal Kubernetes clusters:
  • Cluster provisioning, version upgrades, node maintenance, and capacity planning
  • Configure and manage networking (CNI), persistent storage solutions, and ingress controllers
  • Troubleshoot workload performance, resource constraints, and reliability issues
  • Implement and enforce Kubernetes security best practices:
  • Design and manage RBAC policies, service account isolation, and least-privilege access models
  • Apply Pod Security Standards, network policies, secrets encryption, and certificate lifecycle management
  • Conduct cluster hardening, security audits, monitoring, and policy governance
  • Provide technical leadership to development teams:
  • Guide secure deployment patterns and containerized application best practices
  • Establish workload governance frameworks for distributed systems
  • Drive adoption of security-first mindsets across engineering teams

3. Data Engineering Support

  • Collaborate with data engineering teams on Spark-based workloads:
  • Support deployment and operational tuning of Spark ETL/ETA jobs
  • Understand cluster integration, job orchestration, and performance optimization
  • Debug and troubleshoot Spark workflow issues in production environments
Read more
Capital Squared
Remote only
5 - 10 yrs
₹25L - ₹55L / yr
MLOps
DevOps
Google Cloud Platform (GCP)
CI/CD
PostgreSQL
+4 more

Role: Full-Time, Long-Term. Required: Docker, GCP, CI/CD. Preferred: Experience with ML pipelines.


OVERVIEW

We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.


The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.


CORE TECHNICAL REQUIREMENTS

Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.


Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.
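
For illustration, a minimal sketch in that "script it rather than click it" spirit, using the google-cloud-storage client; the bucket and object names are placeholders:

```python
# Illustrative only: upload a build artifact to Cloud Storage from a script.
# Bucket and object names are hypothetical placeholders.
from google.cloud import storage  # pip install google-cloud-storage


def upload_artifact(bucket_name: str, local_path: str, dest_blob: str) -> str:
    """Upload a local file to Cloud Storage and return its gs:// URI."""
    client = storage.Client()          # uses Application Default Credentials
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(dest_blob)
    blob.upload_from_filename(local_path)
    return f"gs://{bucket_name}/{dest_blob}"


if __name__ == "__main__":
    print(upload_artifact("example-artifacts", "dist/app.tar.gz", "releases/app.tar.gz"))
```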


CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.


Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.


PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
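
As a rough, hypothetical example of "keeping a production database healthy" from a script (the DSN, thresholds, and the choice of psycopg2 as the driver are all assumptions):

```python
# Hypothetical health check: connection count and long-running queries.
import psycopg2  # pip install psycopg2-binary


def check_db_health(dsn: str, max_connections: int = 80, slow_seconds: int = 300) -> dict:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM pg_stat_activity;")
        connections = cur.fetchone()[0]
        cur.execute(
            """
            SELECT count(*) FROM pg_stat_activity
            WHERE state = 'active'
              AND now() - query_start > make_interval(secs => %s);
            """,
            (slow_seconds,),
        )
        slow_queries = cur.fetchone()[0]
    return {
        "connections": connections,
        "slow_queries": slow_queries,
        "ok": connections < max_connections and slow_queries == 0,
    }
```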


Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.


WHAT YOU WILL OWN

Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.


Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.


VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.


Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.


Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.


Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.


WHAT SUCCESS LOOKS LIKE

Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.


ENGINEERING STANDARDS

Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.


Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.


Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.


Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.


CURRENT ENVIRONMENT

GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.


WHAT WE ARE LOOKING FOR

Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.


Calm Under Pressure: When production breaks, you diagnose methodically.


Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.


Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.


EDUCATION

University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.


TO APPLY

Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.

Read more
Remote only
5 - 10 yrs
₹25L - ₹55L / yr
Data engineering
Databases
Python
SQL
PostgreSQL
+4 more

Role: Full-Time, Long-Term. Required: Python, SQL. Preferred: Experience with financial or crypto data.


OVERVIEW

We are seeking a data engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build robust, production-grade data infrastructure and grow with a small, focused team. You will own the data layer that feeds our machine learning pipeline—from ingestion and validation through transformation, storage, and delivery.


The ideal candidate is meticulous about data quality, thinks deeply about failure modes, and builds systems that run reliably without constant attention. You understand that downstream ML models are only as good as the data they consume.


CORE TECHNICAL REQUIREMENTS

Python (Required): Professional-level proficiency. You write clean, maintainable code for data pipelines—not throwaway scripts. Comfortable with Pandas, NumPy, and their performance characteristics. You know when to use Python versus push computation to the database.


SQL (Required): Advanced SQL skills. Complex queries, query optimization, schema design, execution plans. PostgreSQL experience strongly preferred. You think about indexing, partitioning, and query performance as second nature.


Data Pipeline Design (Required): You build pipelines that handle real-world messiness gracefully. You understand idempotency, exactly-once semantics, backfill strategies, and incremental versus full recomputation tradeoffs. You design for failure—what happens when an upstream source is late, returns malformed data, or goes down entirely. Experience with workflow orchestration required: Airflow, Prefect, Dagster, or similar.
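
As a minimal sketch of what idempotent loading can look like in practice (table and column names are hypothetical, and a unique constraint on (symbol, ts) is assumed), re-running the same batch or backfill overwrites rather than duplicates:

```python
# Idempotent batch load: replaying the same rows never creates duplicates.
# Assumes a UNIQUE constraint on (symbol, ts); all names are placeholders.
import psycopg2  # pip install psycopg2-binary
from psycopg2.extras import execute_values

UPSERT_SQL = """
    INSERT INTO ohlcv (symbol, ts, open, high, low, close, volume)
    VALUES %s
    ON CONFLICT (symbol, ts) DO UPDATE SET
        open = EXCLUDED.open, high = EXCLUDED.high, low = EXCLUDED.low,
        close = EXCLUDED.close, volume = EXCLUDED.volume;
"""


def load_batch(dsn: str, rows: list[tuple]) -> None:
    """rows: (symbol, ts, open, high, low, close, volume) tuples."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        execute_values(cur, UPSERT_SQL, rows)  # one round trip for the whole batch
```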


Data Quality (Required): You treat data quality as a first-class concern. You implement validation checks, anomaly detection, and monitoring. You know the difference between data that is missing versus data that should not exist. You build systems that catch problems before they propagate downstream.
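
A minimal illustration of the kind of validation gate described here, using pandas; column names, thresholds, and the assumption of a UTC-aware timestamp column are all placeholders:

```python
# Illustrative validation checks run before a batch is allowed downstream.
import pandas as pd


def validate_prices(df: pd.DataFrame, max_staleness: pd.Timedelta = pd.Timedelta("15min")) -> list[str]:
    problems = []
    # Completeness: required columns present, no null keys
    for col in ("symbol", "ts", "close"):
        if col not in df.columns:
            return [f"missing column: {col}"]
    if df[["symbol", "ts"]].isna().any().any():
        problems.append("null symbol/ts values")
    # Range check: prices must be positive
    if (df["close"] <= 0).any():
        problems.append("non-positive close prices")
    # Freshness: latest timestamp should be recent (assumes tz-aware UTC timestamps)
    if pd.Timestamp.now(tz="UTC") - df["ts"].max() > max_staleness:
        problems.append("stale data: latest ts too old")
    # Rows that should not exist
    if df.duplicated(subset=["symbol", "ts"]).any():
        problems.append("duplicate (symbol, ts) rows")
    return problems  # empty list means the batch passed
```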


WHAT YOU WILL BUILD

Data Ingestion: Pipelines pulling from diverse sources—crypto exchanges, traditional market feeds, on-chain data, alternative data. Handling rate limits, API quirks, authentication, and source-specific idiosyncrasies.
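
For illustration, a small sketch of rate-limit-aware fetching with exponential backoff (the endpoint and parameters are placeholders, not a real exchange API):

```python
# Retry transient failures (429/5xx) with exponential backoff; fail loudly otherwise.
import time

import requests


def fetch_with_backoff(url: str, params: dict, max_retries: int = 5) -> dict:
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.get(url, params=params, timeout=10)
        if resp.status_code == 429 or resp.status_code >= 500:
            time.sleep(delay)    # rate limited or upstream hiccup: wait and retry
            delay *= 2
            continue
        resp.raise_for_status()  # other client errors are real bugs, not retries
        return resp.json()
    raise RuntimeError(f"giving up after {max_retries} attempts: {url}")
```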


Data Validation: Checks ensuring completeness, consistency, and correctness. Schema validation, range checks, freshness monitoring, cross-source reconciliation.


Transformation Layer: Converting raw data into clean, analysis-ready formats. Time series alignment, handling different frequencies and timezones, managing gaps.


Storage and Access: Schema design optimized for both write patterns (ingestion) and read patterns (ML training, feature computation). Data lifecycle and retention management.

Monitoring and Alerting: Observability into pipeline health. Knowing when something breaks before it affects downstream systems.


DOMAIN EXPERIENCE

Preference for candidates with experience in financial or crypto data—understanding market data conventions, exchange-specific quirks, and point-in-time correctness. You know why look-ahead bias is dangerous and how to prevent it.
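
One concrete way to keep joins point-in-time correct is a backward as-of join, so each observation only ever sees feature values known at or before its own timestamp. A minimal pandas sketch with made-up data:

```python
# Point-in-time join: direction="backward" is what prevents look-ahead bias.
import pandas as pd

trades = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 10:00", "2024-01-01 10:05"]),
    "symbol": ["BTC", "BTC"],
    "price": [42000.0, 42100.0],
})
features = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 09:59", "2024-01-01 10:04"]),
    "symbol": ["BTC", "BTC"],
    "signal": [0.12, 0.34],
})

# Both frames must be sorted on the join key.
joined = pd.merge_asof(
    trades.sort_values("ts"),
    features.sort_values("ts"),
    on="ts",
    by="symbol",
    direction="backward",   # only feature rows with ts <= trade ts are eligible
)
print(joined)
```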


Time series data at scale—hundreds of symbols with years of history, multiple frequencies, derived features. You understand temporal joins, windowed computations, and time-aligned data challenges.


High-dimensional feature stores—we work with hundreds of thousands of derived features. Experience managing, versioning, and serving large feature sets is valuable.


ENGINEERING STANDARDS

Reliability: Pipelines run unattended. Failures are graceful with clear errors, not silent corruption. Recovery is straightforward.


Reproducibility: Same inputs and code version produce identical outputs. You version schemas, track lineage, and can reconstruct historical states.


Documentation: Schemas, data dictionaries, pipeline dependencies, operational runbooks. Others can understand and maintain your systems.


Testing: You write tests for pipelines—validation logic, transformation correctness, edge cases. Untested pipelines are broken pipelines waiting to happen.


TECHNICAL ENVIRONMENT

PostgreSQL, Python, workflow orchestration (flexible on tool), cloud infrastructure (GCP preferred but flexible), Git.


WHAT WE ARE LOOKING FOR

Attention to Detail: You notice when something is slightly off and investigate rather than ignore.


Defensive Thinking: You assume sources will send bad data, APIs will fail, schemas will change. You build accordingly.


Self-Direction: You identify problems, propose solutions, and execute without waiting to be told.


Long-Term Orientation: You build systems you will maintain for years.


Communication: You document clearly, explain data issues to non-engineers, and surface problems early.


EDUCATION

University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Engineering. Equivalent demonstrated expertise also considered.


TO APPLY

Include: (1) CV/resume, (2) Brief description of a data pipeline you built and maintained, (3) Links to relevant work if available, (4) Availability and timezone.

Read more
AsperAI

Posted by Bisman Gill
BLR
3 - 6 yrs
Up to ₹33L / yr (Varies)
CI/CD
Kubernetes
Docker
Kubeflow
TensorFlow
+7 more

About the Role

We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps— helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.

The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.


Key Responsibilities

  • AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
  • Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
  • Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
  • Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
  • Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
  • Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
  • Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.


Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
  • 4+ years of experience in AI/MLOps, DevOps, SRE, or Data Engineering, with at least 2+ years in AI/ML-focused operations.
  • Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
  • Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
  • Proficiency in Python and/or other scripting languages for automation.
  • Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
  • Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
  • Knowledge of data governance, model drift detection, and compliance in AI systems (a minimal drift-check sketch follows this list).
  • Excellent problem-solving, communication, and collaboration skills.
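
For illustration, a minimal drift check based on the Population Stability Index (PSI); the bin count and the commonly cited 0.2 alerting threshold are conventions, not fixed standards:

```python
# Compare a production feature distribution against its training baseline.
# Assumes a continuous feature; names and thresholds are illustrative.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch outliers in the end bins
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# A PSI above roughly 0.2 is often treated as drift worth alerting on.
```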

Nice-to-Have

  • Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
  • Familiarity with data science concepts and frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc.
  • Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
  • Contributions to open-source MLOps/AI Ops tools or platforms.
  • Exposure to Responsible AI practices, model fairness, and explainability frameworks

Why Join Us

  • Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
  • Work alongside leading data scientists and engineers on cutting-edge AI solutions.
  • Competitive compensation, benefits, and career growth opportunities.
Read more
Codemonk

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
1yr+
Up to ₹10L / yr (Varies)
DevOps
Amazon Web Services (AWS)
CI/CD
Docker
Kubernetes
+3 more

Role Overview

We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.


Responsibilities:

  • Design, implement, and maintain CI/CD pipelines.
  • Containerize applications using Docker and orchestrate deployments
  • Manage and optimize cloud infrastructure on AWS and Azure platforms
  • Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability.
  • Troubleshoot and resolve infrastructure and deployment issues
  • Create and maintain documentation for processes and configurations
  • Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
  • Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.


Requirements:

  • 2+ years of hands-on experience with AWS cloud services
  • Strong proficiency in CI/CD pipeline configuration
  • Expertise in Docker containerisation and container management
  • Proficiency in shell scripting (Bash/Power-Shell)
  • Working knowledge of monitoring and logging tools
  • Knowledge of network security and firewall configuration
  • Strong communication and collaboration skills, with the ability to work effectively within a team environment
  • Understanding of networking concepts and protocols in AWS and/or Azure
Read more
Trential Technologies

Posted by Garima Jangid
Gurugram
3 - 5 yrs
₹20L - ₹35L / yr
JavaScript
NodeJS (Node.js)
Amazon Web Services (AWS)
NoSQL Databases
Google Cloud Platform (GCP)
+7 more

What you'll be doing:

As a Software Developer at Trential, you will be the bridge between technical strategy and hands-on execution. You will be working with our dedicated engineering team designing, building, and deploying our core platforms and APIs. You will ensure our solutions are scalable, secure, interoperable, and aligned with open standards and our core vision. You will build and maintain back-end interfaces using modern frameworks.

  • Design & Implement: Lead the design, implementation and management of Trential’s products.
  • Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.
  • Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.
  • Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.
  • Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.


What we're looking for:

  • 3+ years of experience in backend development.
  • Deep proficiency in JavaScript and Node.js, with experience in building and operating distributed, fault-tolerant systems.
  • Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).
  • Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.

Preferred Qualifications (Nice to Have)

  • Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)
  • Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and designing systems that comply with them.
  • Experience integrating AI/ML models into verification or data extraction workflows.
Read more
prep study
Posted by Pooja Sharma
Mumbai, Navi Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
Python
Microservices
Elastic Search
RabbitMQ
Kubernetes
+2 more


We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.


Key Responsibilities

• Develop, maintain, and optimize backend applications using Python.

• Build and integrate RESTful APIs and microservices.

• Work with relational and NoSQL databases for data storage, retrieval, and optimization.

• Write clean, efficient, and reusable code while following best practices.

• Collaborate with cross-functional teams (frontend, QA, DevOps) to deliver high quality features.

• Participate in code reviews to maintain high coding standards.

• Troubleshoot, debug, and upgrade existing applications.

• Ensure application security, performance, and scalability.


Required Skills & Qualifications:

• 2–4 years of hands-on experience in Python development.

• Strong command over Python frameworks such as Django, Flask, or FastAPI.

• Solid understanding of Object-Oriented Programming (OOP) principles.

• Experience working with databases such as PostgreSQL, MySQL, or MongoDB.

• Proficiency in writing and consuming REST APIs.

• Familiarity with Git and version control workflows.

• Experience with unit testing and frameworks like PyTest or Unittest.

• Knowledge of containerization (Docker) is a plus.

Read more
Financial Services Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 5 yrs
₹10L - ₹20L / yr
Python
CI/CD
SQL
Kubernetes
Stakeholder management
+14 more

Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python

 

Criteria:

Looking for candidates with a notice period of 15 days, up to a maximum of 30 days.

Looking for candidates from the Hyderabad location only.

Looking for candidates from EPAM only.

1. 4+ years of software development experience

2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.

3. Hands-on with NATS for event-driven architecture and streaming.

4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.

5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.

6.  Proficient in Python (Flask) for building scalable applications and APIs.

7. Focus: Java, Python, Kubernetes, Cloud-native development

8. SQL database 

 

Description

Position Overview

We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.


Key Responsibilities

  • Design, develop, and maintain scalable applications using Java and Spring Boot framework
  • Build robust web services and APIs using Python and Flask framework
  • Implement event-driven architectures using NATS messaging server
  • Deploy, manage, and optimize applications in Kubernetes environments
  • Develop microservices following best practices and design patterns
  • Collaborate with cross-functional teams to deliver high-quality software solutions
  • Write clean, maintainable code with comprehensive documentation
  • Participate in code reviews and contribute to technical architecture decisions
  • Troubleshoot and optimize application performance in containerized environments
  • Implement CI/CD pipelines and follow DevOps best practices

Required Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or related field
  • 4+ years of experience in software development
  • Strong proficiency in Java with deep understanding of web technology stack
  • Hands-on experience developing applications with Spring Boot framework
  • Solid understanding of Python programming language with practical Flask framework experience
  • Working knowledge of NATS server for messaging and streaming data
  • Experience deploying and managing applications in Kubernetes
  • Understanding of microservices architecture and RESTful API design
  • Familiarity with containerization technologies (Docker)
  • Experience with version control systems (Git)


Skills & Competencies

  • Skills: Java (Spring Boot, Spring Cloud, Spring Security)
  • Python (Flask, SQLAlchemy, REST APIs)
  • NATS messaging patterns (pub/sub, request/reply, queue groups); a minimal sketch follows this list
  • Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
  • Web technologies (HTTP, REST, WebSocket, gRPC)
  • Container orchestration and management
  • Soft Skills: Problem-solving and analytical thinking
  • Strong communication and collaboration
  • Self-motivated with ability to work independently
  • Attention to detail and code quality
  • Continuous learning mindset
  • Team player with mentoring capabilities
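
A minimal, illustrative sketch of the pub/sub and request/reply patterns named above, using the nats-py client; the server URL and subject names are placeholders:

```python
# Pub/sub and request/reply with nats-py; pass queue="workers" to subscribe()
# to load-balance a subject across a queue group.
import asyncio

import nats  # pip install nats-py


async def main():
    nc = await nats.connect("nats://localhost:4222")

    # Pub/sub: fire-and-forget event
    async def on_order(msg):
        print("received:", msg.subject, msg.data.decode())

    await nc.subscribe("orders.created", cb=on_order)
    await nc.publish("orders.created", b'{"id": 42}')

    # Request/reply: a responder answers on the same subject
    async def on_ping(msg):
        await msg.respond(b"pong")

    await nc.subscribe("svc.ping", cb=on_ping)
    reply = await nc.request("svc.ping", b"ping", timeout=1)
    print("reply:", reply.data.decode())

    await nc.drain()


asyncio.run(main())
```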


Read more
prep study
Posted by Pooja Sharma
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹15L / yr
Go Programming (Golang)
Docker
Kubernetes

We're Hiring: Golang Developer

Location: Bangalore

 

We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.

 

Key Responsibilities

• Develop and maintain backend services using Golang

• Build scalable, secure, and high-performance microservices

• Work with REST APIs, WebSockets, message queues, and distributed systems

• Collaborate with DevOps, frontend, and product teams for smooth project delivery

• Optimize performance, troubleshoot issues, and ensure system stability

 

Skills & Experience Required

• 3–5 years of experience in Golang development

• Strong understanding of data structures, concurrency, and networking

• Hands-on experience with MySQL / Redis / Kafka or similar technologies

• Good understanding of microservices architecture, APIs, and cloud environments

• Experience in fintech/trading systems is an added advantage

• Immediate joiners or candidates with up to 30 days notice period preferred

 

If you are passionate about backend engineering and want to build fast, scalable trading systems, we'd love to hear from you.

Read more
prep study
Posted by Pooja Sharma
Mumbai, Navi Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
Go Programming (Golang)
Docker
Kubernetes

We're Hiring: Golang Developer (3–5 Years Experience)

Location: Mumbai

 

We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.

 

Key Responsibilities

• Develop and maintain backend services using Golang

• Build scalable, secure, and high-performance microservices

• Work with REST APIs, WebSockets, message queues, and distributed systems

• Collaborate with DevOps, frontend, and product teams for smooth project delivery

• Optimize performance, troubleshoot issues, and ensure system stability

 

Skills & Experience Required

• 3–5 years of experience in Golang development

• Strong understanding of data structures, concurrency, and networking

• Hands-on experience with MySQL / Redis / Kafka or similar technologies

• Good understanding of microservices architecture, APIs, and cloud environments

• Experience in fintech/trading systems is an added advantage

• Immediate joiners or candidates with up to 30 days notice period preferred

 

If you are passionate about backend engineering and want to build fast, scalable trading systems, we'd love to hear from you.

Read more
Matchmaking platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
2 - 5 yrs
₹15L - ₹28L / yr
Data Science
Python
Natural Language Processing (NLP)
MySQL
Machine Learning (ML)
+15 more

Review Criteria

  • Strong Data Scientist / Machine Learning / AI Engineer profile
  • 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
  • Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
  • Hands-on experience in a minimum of 2+ use cases out of recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
  • Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
  • Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
  • Preferred (Company) – Must be from product companies

 

Job Specific Criteria

  • CV Attachment is mandatory
  • What's your current company?
  • Which use cases you have hands on experience?
  • Are you ok for Mumbai location (if candidate is from outside Mumbai)?
  • Reason for change (if candidate has been in current company for less than 1 year)?
  • Reason for hike (if greater than 25%)?

 

Role & Responsibilities

  • Partner with Product to spot high-leverage ML opportunities tied to business metrics.
  • Wrangle large structured and unstructured datasets; build reliable features and data contracts.
  • Build and ship models to:
  • Enhance customer experiences and personalization
  • Boost revenue via pricing/discount optimization
  • Power user-to-user discovery and ranking (matchmaking at scale)
  • Detect and block fraud/risk in real time
  • Score conversion/churn/acceptance propensity for targeted actions
  • Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS (a minimal serving sketch follows this list).
  • Design and run A/B tests with guardrails.
  • Build monitoring for model/data drift and business KPIs
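
For illustration, a minimal serving sketch of the kind implied above: a model exposed behind an HTTP API that can then be containerized and wired into CI/CD. The artifact path, feature names, and the scikit-learn-style predict_proba call are all assumptions:

```python
# Hypothetical propensity-scoring API; names and the model artifact are placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("models/propensity.joblib")  # placeholder artifact


class Features(BaseModel):
    age: int
    sessions_7d: int
    profile_completeness: float


@app.post("/score")
def score(features: Features) -> dict:
    x = [[features.age, features.sessions_7d, features.profile_completeness]]
    return {"propensity": float(model.predict_proba(x)[0][1])}

# Run locally with: uvicorn app:app --port 8080
```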


Ideal Candidate

  • 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
  • Proven, hands-on success in at least two (preferably 3–4) of the following:
  • Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
  • Fraud/risk detection (severe class imbalance, PR-AUC)
  • Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
  • Propensity models (payment/churn)
  • Programming: strong Python and SQL; solid git, Docker, CI/CD.
  • Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
  • ML breadth: recommender systems, NLP or user profiling, anomaly detection.
  • Communication: clear storytelling with data; can align stakeholders and drive decisions.



Read more
Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai, Kochi (Cochin), Pune, Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
Jenkins
CI/CD
Docker
Kubernetes
+15 more

Job Description

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.


Key Responsibilities:

  • Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
  • Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
  • Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
  • Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
  • Ensure compliance with security best practices and organizational policies across GCP environments.
  • Document processes, configurations, and architectural decisions to maintain operational transparency.
  • Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.


Mandatory Skills:

  • Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
  • Jenkins – Expertise in Declarative Pipeline creation and optimization.
  • CI/CD – Strong understanding of automated build, test, and deployment workflows.
  • Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
  • Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.


Preferred Skills:

  • Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
  • Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
  • Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
  • GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.


Skills

GCP, Jenkins, CI/CD, AWS



 

******

Notice period: 0 to 15 days only

Location – Pune, Trivandrum, Kochi, Chennai

Read more
Arcitech
Navi Mumbai
5 - 7 yrs
₹12L - ₹14L / yr
Cyber Security
VAPT
Cloud Computing
CI/CD
Jenkins
+4 more

Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI



Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.

Full-time

Navi Mumbai, Maharashtra, India

5+ Years Experience

INR 12,00,000 - 14,00,000

Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)

Location: Vashi, Navi Mumbai (On-site)

Shift: 10:00 AM - 7:00 PM

Experience: 5+ years

Salary : INR 12,00,000 - 14,00,000


Job Summary

Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.


Key Responsibilities

Cloud & Infrastructure

  • Manage deployments on AWS/Azure
  • Maintain Linux servers & cloud environments
  • Ensure uptime, performance, and scalability


CI/CD & Automation

  • Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
  • Automate tasks using Bash/Python
  • Implement IaC (Terraform/CloudFormation)


Containerization

  • Build and run Docker containers
  • Work with basic Kubernetes concepts


Cybersecurity & VAPT

  • Perform Vulnerability Assessment & Penetration Testing
  • Identify, track, and mitigate security vulnerabilities
  • Implement hardening and support DevSecOps practices
  • Assist with firewall/security policy management


Monitoring & Troubleshooting

  • Use ELK, Prometheus, Grafana, CloudWatch
  • Resolve cloud, deployment, and infra issues


Cross-Team Collaboration

  • Work with Dev, QA, and Security for secure releases
  • Maintain documentation and best practices


Required Skills

  • AWS/Azure, Linux, Docker
  • CI/CD tools: Jenkins, GitHub Actions, GitLab
  • Terraform / IaC
  • VAPT experience + understanding of OWASP, cloud security
  • Bash/Python scripting
  • Monitoring tools (ELK, Prometheus, Grafana)
  • Strong troubleshooting & communication
Read more
Meraki Labs
Bengaluru (Bangalore)
3 - 4 yrs
₹30L - ₹50L / yr
Python
NodeJS (Node.js)
React.js
NextJs (Next.js)
RESTful APIs
+4 more

Job Title: Full Stack Developer

Location: Bangalore, India


About Us:


Meraki Labs stands at the forefront of India's deep-tech innovation landscape, operating as a dynamic venture studio established by the visionary entrepreneur Mukesh Bansal. Our core mission revolves around the creation and rapid scaling of AI-first and truly "moonshot" startups, nurturing them from their nascent stages into industry leaders. We achieve this through an intensive, hands-on partnership model, working side-by-side with exceptional founders who possess both groundbreaking ideas and the drive to execute them.


Currently, Meraki Labs is channeling its significant expertise and resources into a particularly ambitious endeavor: a groundbreaking EdTech platform. This initiative is poised to revolutionize the field of education by democratizing access to world-class STEM learning for students globally. Our immediate focus is on fundamentally redefining how physics is taught and experienced, moving beyond traditional methodologies to deliver an immersive, intuitive, and highly effective learning journey that transcends geographical and socioeconomic barriers. Through this platform, we aim to inspire a new generation of scientists, engineers, and innovators, ensuring that cutting-edge educational resources are within reach of every aspiring learner, everywhere.


Role Overview:


As a Full Stack Developer, you will be at the foundation of building this intelligent learning ecosystem by connecting the front-end experience, backend architecture, and AI-driven components that bring the platform to life. You’ll own key systems that power the AI Tutor, Simulation Lab, and learning content delivery, ensuring everything runs smoothly, securely, and at scale. This role is ideal for engineers who love building end-to-end products that blend technology, user experience, and real-time intelligence.

Your Core Impact

  • You will build the spine of the platform, ensuring seamless communication between AI models, user interfaces, and data systems.
  • You’ll translate learning and AI requirements into tangible, performant product features.
  • Your work will directly shape how thousands of students experience physics through our AI Tutor and simulation environment.


Key Responsibilities:


Platform Architecture & Backend Development

  • Design and implement robust, scalable APIs that power user authentication, course delivery, and AI Tutor integration.
  • Build the data pipelines connecting LLM responses, simulation outputs, and learner analytics.
  • Create and maintain backend systems that ensure real-time interaction between the AI layer and the front-end interface.
  • Ensure security, uptime, and performance across all services.

Front-End Development & User Experience

  • Develop responsive, intuitive UIs (React, Next.js or similar) for learning dashboards, course modules, and simulation interfaces.
  • Collaborate with product designers to implement layouts for AI chat, video lessons, and real-time lab interactions.
  • Ensure smooth cross-device functionality for students accessing the platform on mobile or desktop.

AI Integration & Support

  • Work closely with the AI/ML team to integrate the AI Tutor and Simulation Lab outputs within the platform experience.
  • Build APIs that pass context, queries, and results between learners, models, and the backend in real time.
  • Optimize for low latency and high reliability, ensuring students experience immediate and natural interactions with the AI Tutor.

Data, Analytics & Reporting

  • Build dashboards and data views for educators and product teams to derive insights from learner behavior.
  • Implement secure data storage and export pipelines for progress analytics.

Collaboration & Engineering Culture

  • Work closely with AI Engineers, Prompt Engineers, and Product Leads to align backend logic with learning outcomes.
  • Participate in code reviews, architectural discussions, and system design decisions.
  • Help define engineering best practices that balance innovation, maintainability, and performance.


Required Qualifications & Skills

  • 3–5 years of professional experience as a Full Stack Developer or Software Engineer.
  • Strong proficiency in Python or Node.js for backend services.
  • Hands-on experience with React / Next.js or equivalent modern front-end frameworks.
  • Familiarity with databases (SQL/NoSQL), REST APIs, and microservices.
  • Experience with real-time data systems (WebSockets or event-driven architectures).
  • Exposure to AI/ML integrations or data-intensive backends.
  • Knowledge of AWS/GCP/Azure and containerized deployment (Docker, Kubernetes).
  • Strong problem-solving mindset and attention to detail.
Read more
Media and Entertainment Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
5 - 7 yrs
₹15L - ₹25L / yr
DevOps
Amazon Web Services (AWS)
CI/CD
Infrastructure
Scripting
+28 more

Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting


Criteria:

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
  • Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
  • Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
  • Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
  • Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
  • Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
  • Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
  • Strong scripting skills (Bash, Shell, Python) for automation
  • Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
  • Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
  • Strong experience in incident management, root cause analysis & production firefighting

 

Description

Role Overview

Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.

 

 Key Responsibilities

1. Cloud Infrastructure — AWS (Primary Focus)

  • Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
  • Optimize cloud cost, resource utilization, and performance across environments.
  • Design high-availability, fault-tolerant systems for streaming workloads.

 

2. CI/CD Automation

  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
  • Automate deployments for microservices, mobile apps, and backend APIs.
  • Implement blue/green and canary deployments for seamless production rollouts.

 

3. Observability & Monitoring

  • Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
  • Perform proactive performance analysis to minimize downtime and bottlenecks.
  • Set up dashboards for real-time visibility into system health and user traffic spikes.

 

4. Security, Compliance & Risk Highlighting

• Conduct frequent risk assessments and identify vulnerabilities in:

  o Cloud architecture

  o Access policies (IAM)

  o Secrets & key management

  o Data flows & network exposure


• Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.

 

5. Scalability & Reliability Engineering

  • Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
  • Identify scalability gaps and propose solutions across:
  o Microservices
  o Caching layers
  o CDN distribution (CloudFront)
  o Database workloads
  • Perform capacity planning and load testing to ensure readiness for 10x traffic growth.

 

6. Database & Storage Support

  • Administer and optimize MongoDB for high-read/low-latency use cases.
  • Design backup, recovery, and data replication strategies.
  • Work closely with backend teams to tune query performance and indexing.

 

7. Automation & Infrastructure as Code

  • Implement IaC using Terraform, CloudFormation, or Ansible.
  • Automate repetitive infrastructure tasks to ensure consistency across environments.

 

Required Skills & Experience

Technical Must-Haves

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
  • Strong hands-on experience with AWS (core and advanced services).
  • Expertise in Jenkins CI/CD pipelines.
  • Solid background working with MongoDB in production environments.
  • Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
  • Strong scripting experience (Bash, Python, Shell).
  • Experience handling risk identification, root cause analysis, and incident management.

 

Nice to Have

  • Experience with OTT, video streaming, media, or any content-heavy product environments.
  • Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
  • Understanding of CDN, caching, and streaming pipelines.

 

Personality & Mindset

  • Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
  • Proactive problem solver with ability to think about long-term scalability.
  • Comfortable working with cross-functional engineering teams.

 

Why Join the Company?

• Build and operate infrastructure powering millions of monthly users.

• Opportunity to shape DevOps culture and cloud architecture from the ground up.

• High-impact role in a fast-scaling Indian OTT product.

Read more
Tarento Group

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
4yrs+
Best in industry
Java
Spring Boot
Microservices
Windows Azure
RESTful APIs
+5 more

Job Summary:

We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.

Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.

Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.

Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)

Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.

Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
Read more
Upswing

Agency job
via Talentfoxhr by ANMOL SINGH
Pune
2 - 5 yrs
₹5L - ₹7.5L / yr
Python
Google Cloud Platform (GCP)
FastAPI
RabbitMQ
Apache Kafka
+7 more

🚀 We’re Hiring: Python Developer – Pune 🚀


Are you a skilled Python Developer looking to work on high-performance, scalable backend systems?

If you’re passionate about building robust applications and working with modern technologies — this opportunity is for you! 💼✨


📍 Location: Pune

🏢 Role: Python Backend Developer

🕒 Type: Full-Time | Permanent


🔍 What We’re Looking For:

We need a strong backend professional with experience in:

🐍 Python (Advanced)

⚡ FastAPI

🛢️ MongoDB & Postgres

📦 Microservices Architecture

📨 Message Brokers (RabbitMQ / Kafka)

🌩️ Google Cloud Platform (GCP)

🧪 Unit Testing & TDD

🔐 Backend Security Standards

🔧 Git & Project Collaboration


🛠️ Key Responsibilities:

✔ Build and optimize Python backend services using FastAPI

✔ Design scalable microservices

✔ Manage and tune MongoDB & Postgres

✔ Implement message brokers for async workflows

✔ Drive code reviews and uphold coding standards

✔ Mentor team members

✔ Manage cloud deployments on GCP

✔ Ensure top-notch performance, scalability & security

✔ Write robust unit tests and follow TDD
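
To make the stack above concrete, here is a minimal sketch of a FastAPI endpoint with a TestClient-based unit test; the /orders route, the Order model, and the in-memory store are illustrative assumptions, not part of any actual codebase.

```python
# Minimal FastAPI service plus a unit test.
# The /orders route, Order model, and in-memory store are hypothetical examples.
from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()


class Order(BaseModel):
    id: int
    item: str
    quantity: int


_orders: dict[int, Order] = {}  # stand-in for MongoDB/Postgres


@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    if order.id in _orders:
        raise HTTPException(status_code=409, detail="Order already exists")
    _orders[order.id] = order
    return order


@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    if order_id not in _orders:
        raise HTTPException(status_code=404, detail="Order not found")
    return _orders[order_id]


def test_create_and_fetch_order() -> None:
    client = TestClient(app)
    created = client.post("/orders", json={"id": 1, "item": "book", "quantity": 2})
    assert created.status_code == 201
    assert client.get("/orders/1").json()["item"] == "book"
```

In a real service the in-memory dictionary would be replaced by MongoDB or Postgres access, and the test would run under pytest as part of the CI pipeline.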


🎓 Qualifications:

➡ 2–4 years of backend development experience

➡ Strong hands-on Python + FastAPI

➡ Experience with microservices, DB management & cloud tech

➡ Knowledge of Agile/Scrum

➡ Bonus: Docker, Kubernetes, CI/CD


Read more
Meraki Labs
Agency job
via ENTER by Rajkishor Mishra
Bengaluru (Bangalore)
8 - 12 yrs
₹60L - ₹70L / yr
skill iconMachine Learning (ML)
Generative AI
skill iconPython
Artificial Intelligence (AI)
Large Language Models (LLM) tuning
+9 more

Job Overview:


As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use-case development, and product creation, along with strong expertise in cloud-based architectures.


Key Responsibilities:


AI Tutor & Simulation Intelligence

  • Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
  • Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes.
  • Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
  • Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.
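
As a rough sketch of the retrieval-augmented generation flow described above, the snippet below retrieves verified course passages and asks the model to answer only from them; search_course_notes and call_llm are hypothetical stand-ins for a vector-store query and an LLM client, not named components of the platform.

```python
# Sketch of a RAG answer path: retrieve syllabus-aligned passages, then ask the
# model to answer using only those passages. Both helpers below are hypothetical.
from typing import List


def search_course_notes(question: str, k: int = 3) -> List[str]:
    """Hypothetical vector-store lookup over verified textbook/course passages."""
    raise NotImplementedError("wire up Pinecone/Weaviate or another vector DB here")


def call_llm(prompt: str) -> str:
    """Hypothetical LLM client call (hosted API or local model)."""
    raise NotImplementedError("wire up the model endpoint here")


def answer_with_citations(question: str) -> str:
    passages = search_course_notes(question)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the student's question using ONLY the numbered passages below, "
        "and cite the passage numbers you used.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```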


Platform & System Architecture

  • Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
  • Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
  • Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.


Reliability, Security & Analytics

  • Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
  • Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
  • Set up real-time learning analytics to measure comprehension and identify concept gaps.


Leadership & Collaboration

  • Mentor and elevate engineers across backend, ML, and front-end teams.
  • Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
  • Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.


Qualifications & Skills:


  • 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
  • Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
  • Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
  • Experience designing microservices and API ecosystems for high-concurrency platforms.
  • Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
  • Demonstrated ability to work with educational data, content pipelines, and real-time systems.


Bonus Skills (Nice to Have):

  • Experience with multi-modal AI models (text, image, audio, video).
  • Knowledge of AI safety, ethical AI, and explainability techniques.
  • Prior work in AI-powered automation tools or AI-driven SaaS products.
Read more
IndArka Energy Pvt Ltd

at IndArka Energy Pvt Ltd

3 recruiters
Mita Hemant
Posted by Mita Hemant
Bengaluru (Bangalore)
4 - 5 yrs
₹15L - ₹18L / yr
Microsoft Windows Azure
CI/CD
Scripting language
skill iconDocker
skill iconKubernetes
+3 more

About Us

At Arka Energy, we're redefining how renewable energy is experienced and adopted in homes. Our focus is on developing next-generation residential solar energy solutions through a unique combination of custom product design, intuitive simulation software, and high-impact technology. With engineering teams in Bangalore and the Bay Area, we’re committed to building innovative products that transform rooftops into smart energy ecosystems.

Our flagship product is a 3D simulation platform that models rooftops and commercial sites, allowing users to design solar layouts and generate accurate energy estimates — streamlining the residential solar design process like never before.

 

What We're Looking For

We're seeking a Senior DevOps Engineer who will be responsible for managing and automating cloud infrastructure and services, ensuring seamless integration and deployment of applications, and maintaining high availability and reliability. You will work closely with development and operations teams to streamline processes and enhance productivity.

Key Responsibilities

  • Design and implement CI/CD pipelines using Azure DevOps.
  • Automate infrastructure provisioning and configuration in the Azure cloud environment.
  • Monitor and manage system health, performance, and security.
  • Collaborate with development teams to ensure smooth and secure deployment of applications.
  • Troubleshoot and resolve issues related to deployment and operations.
  • Implement best practices for configuration management and infrastructure as code.
  • Maintain documentation of processes and solutions.

 

Requirements

  • Total relevant experience of 4 to 5 years.
  • Proven experience as a DevOps Engineer, specifically with Azure.
  • Experience with CI/CD tools and practices.
  • Strong understanding of infrastructure as code (IaC) using tools like Terraform or ARM templates.
  • Knowledge of scripting languages such as PowerShell or Python.
  • Familiarity with containerization technologies like Docker and Kubernetes.
  • Good to have – knowledge on AWS, Digital Ocean, GCP
  • Excellent troubleshooting and problem-solving skills
  • High ownership, self-starter attitude, and ability to work independently
  • Strong aptitude and reasoning ability with a growth mindset

 

Nice to Have

  • Experience working in a SaaS or product-driven startup
  • Familiarity with the solar industry (preferred but not required)

Read more
Technology, Information and Internet Company

Technology, Information and Internet Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹65L / yr
Data Structures
CI/CD
Microservices
Architecture
Cloud Computing
+19 more

Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems


Criteria:

  • Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
  • Must be strong in one core backend language: Node.js, Go, Java, or Python.
  • Deep understanding of distributed systems, caching, high availability, and microservices architecture.
  • Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
  • Strong command over system design, data structures, performance tuning, and scalable architecture
  • Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.


Description

What This Role Is All About

We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.

 

What You’ll Own

● Architect backend systems that handle India-scale traffic without breaking a sweat.

● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.

● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.

● Partner with Product, Data, and Infra to ship features that are reliable and delightful.

● Set high engineering standards—clean architecture, performance, automation, and testing.

● Lead discussions on system design, performance tuning, and infra choices.

● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.

● Identify gaps proactively and push for improvements instead of waiting for fires.

 

What Makes You a Great Fit

● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.

● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.

● Deep understanding of distributed systems, caching, high-availability, and microservices.

● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.

● You think data structures and system design are not interviews — they’re daily tools.

● You write code that future-you won’t hate.

● Strong communication and a let’s figure this out attitude.
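
Since caching and distributed-systems fundamentals feature heavily above, here is a minimal cache-aside sketch (written in Python, though the role accepts Node.js/Go/Java/Python); the Redis connection details, TTL, and load_user_from_db helper are assumptions for illustration.

```python
# Cache-aside pattern: read from Redis first, fall back to the primary store,
# then populate the cache with a TTL. Connection details are assumed.
import json

import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 300


def load_user_from_db(user_id: int) -> dict:
    """Hypothetical primary-store lookup; replace with a real query."""
    return {"id": user_id, "name": f"user-{user_id}"}


def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit
    user = load_user_from_db(user_id)      # cache miss: hit the database
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(user))
    return user
```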

 

Bonus Points If You Have

● Built or scaled consumer apps with millions of DAUs.

● Experimented with event-driven architecture, streaming systems, or real-time pipelines.

● Love startups and don’t mind wearing multiple hats.

● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.

 

Why company Might Be Your Best Move

● Work on products used by real people every single day.

● Ownership from day one—your decisions will shape our core architecture.

● No unnecessary hierarchy; direct access to founders and senior leadership.

● A team that cares about quality, speed, and impact in equal measure.

● Build for Bharat — complex constraints, huge scale, real impact.


Read more
Payal
Payal Sangoi
Posted by Payal Sangoi
Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
skill iconGo Programming (Golang)
skill iconDocker
skill iconKubernetes

We're Hiring: Golang Developer (3–5 Years Experience)

Location: Mumbai

 

We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.

 

Key Responsibilities

• Develop and maintain backend services using Golang

• Build scalable, secure, and high-performance microservices

• Work with REST APIs, WebSockets, message queues, and distributed systems

• Collaborate with DevOps, frontend, and product teams for smooth project delivery

• Optimize performance, troubleshoot issues, and ensure system stability

 

Skills & Experience Required

• 3–5 years of experience in Golang development

• Strong understanding of data structures, concurrency, and networking

• Hands-on experience with MySQL / Redis / Kafka or similar technologies

• Good understanding of microservices architecture, APIs, and cloud environments

• Experience in fintech/trading systems is an added advantage

• Immediate joiners or candidates with up to 30 days notice period preferred

 

If you are passionate about backend engineering and want to build fast, scalable trading systems, we would love to hear from you.

Read more
IT Industry

IT Industry

Agency job
via Truetech by Nithya A
Remote only
4 - 8 yrs
₹20L - ₹30L / yr
skill iconPython
skill iconGo Programming (Golang)
skill iconDocker
skill iconKubernetes
skill iconAmazon Web Services (AWS)

Key Responsibilities

●     Design and maintain high-performance backend applications and microservices

●     Architect scalable, cloud-native systems and collaborate across engineering teams

●     Write high-quality, performant code and conduct thorough code reviews

●     Build and operate CI/CD pipelines and production systems

●     Work with databases, containerization (Docker/Kubernetes), and cloud platforms

●     Lead agile practices and continuously improve service reliability

Required Qualifications

●     4+ years of professional software development experience

●     2+ years contributing to service design and architecture

●     Strong expertise in modern languages like Golang, Python

●     Deep understanding of scalable, cloud-native architectures and microservices

●     Production experience with distributed systems and database technologies

●     Experience with Docker, software engineering best practices

●     Bachelor's Degree in Computer Science or related technical field

Preferred Qualifications

●     Experience with Golang, AWS, and Kubernetes

●     CI/CD pipeline experience with GitHub Actions

●     Start-up environment experience

Read more
Tradelab Software Private Limited
Pooja Sharma
Posted by Pooja Sharma
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹15L / yr
skill iconGo Programming (Golang)
skill iconDocker
skill iconKubernetes

We're Hiring: Golang Developer (3–5 Years Experience)

Location: Bangalore


We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.


Key Responsibilities

• Develop and maintain backend services using Golang

• Build scalable, secure, and high-performance microservices

• Work with REST APIs, WebSockets, message queues, and distributed systems

• Collaborate with DevOps, frontend, and product teams for smooth project delivery

• Optimize performance, troubleshoot issues, and ensure system stability


Skills & Experience Required

• 3–5 years of experience in Golang development

• Strong understanding of data structures, concurrency, and networking

• Hands-on experience with MySQL / Redis / Kafka or similar technologies

• Good understanding of microservices architecture, APIs, and cloud environments

• Experience in fintech/trading systems is an added advantage

• Immediate joiners or candidates with up to 30 days notice period preferred


If you are passionate about backend engineering and want to build fast, scalable trading systems, share your resume.

Read more
AdTech Industry

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
skill iconAmazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.
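
As one small example of the AWS hardening work listed above, the sketch below flags S3 buckets that do not block all public access; it assumes boto3 with credentials already configured and covers only this single check.

```python
# Flag S3 buckets whose public access block is not fully enabled.
# Assumes AWS credentials/region are already configured for boto3.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False  # no block configured at all
        else:
            raise
    if not fully_blocked:
        print(f"[WARN] bucket {name} does not block all public access")
```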

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Read more
NeoGenCode Technologies Pvt Ltd
Shivank Bhardwaj
Posted by Shivank Bhardwaj
Bengaluru (Bangalore)
1 - 8 yrs
₹5L - ₹30L / yr
skill iconPython
skill iconReact.js
skill iconPostgreSQL
TypeScript
skill iconNextJs (Next.js)
+11 more


Job Summary

We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack—building scalable, high-performance frontends using Typescript & Next.js and developing robust backend services using Python (FastAPI/Django).

This role is crucial in shaping product experiences and driving innovation at scale.


Mandatory Candidate Background

  • Experience working in product-based companies only
  • Strong academic background
  • Stable work history
  • Excellent coding skills and hands-on development experience
  • Strong foundation in Data Structures & Algorithms (DSA)
  • Strong problem-solving mindset
  • Understanding of clean architecture and code quality best practices


Key Responsibilities

  • Design, develop, and maintain scalable full-stack applications
  • Build responsive, performant, user-friendly UIs using Typescript & Next.js
  • Develop APIs and backend services using Python (FastAPI/Django)
  • Collaborate with product, design, and business teams to translate requirements into technical solutions
  • Ensure code quality, security, and performance across the stack
  • Own features end-to-end: architecture, development, deployment, and monitoring
  • Contribute to system design, best practices, and the overall technical roadmap


Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience
  • Strong expertise in Typescript / Next.js OR Python (FastAPI, Django) — must be familiar with both areas
  • Experience building RESTful APIs and microservices
  • Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
  • Strong debugging, optimization, and problem-solving abilities
  • Comfortable working in fast-paced startup environments


Good-to-Have:

  • Experience with containerization (Docker/Kubernetes)
  • Exposure to message queues or event-driven architectures
  • Familiarity with modern DevOps and observability tooling


Read more
NeoGenCode Technologies Pvt Ltd
Shivank Bhardwaj
Posted by Shivank Bhardwaj
Pune
6 - 8 yrs
₹12L - ₹22L / yr
skill iconNodeJS (Node.js)
skill iconReact.js
skill iconJavascript
skill iconGo Programming (Golang)
Elixir
+10 more


Job Description – Full Stack Developer (React + Node.js)

Experience: 5–8 Years

Location: Pune

Work Mode: WFO

Employment Type: Full-time


About the Role

We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.


Key Responsibilities

  • Design, develop, and maintain scalable full-stack applications using React and Node.js.
  • Collaborate with cross-functional teams to define, design, and deliver new features.
  • Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
  • Work with relational databases such as PostgreSQL or MySQL.
  • Deploy and manage applications in cloud environments (preferably GCP or AWS).
  • Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
  • Utilize containerization tools like Docker for efficient development and deployment workflows.
  • Integrate third-party services and APIs, including AI APIs and tools.
  • Contribute to improving development processes, documentation, and best practices.


Required Skills

  • Strong experience with React.js (frontend).
  • Solid hands-on experience with Node.js (backend).
  • Good understanding of relational databases: PostgreSQL / MySQL.
  • Experience working in production environments and debugging live systems.
  • Strong understanding of OOP or Functional Programming, and clean coding standards.
  • Knowledge of Docker or other containerization tools.
  • Experience with cloud platforms (GCP or AWS).
  • Excellent written and verbal communication skills.


Good to Have

  • Experience with Golang or Elixir.
  • Familiarity with Kubernetes, RabbitMQ, Redis, etc.
  • Contributions to open-source projects.
  • Previous experience working with AI APIs or machine learning tools.


Read more
Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Mumbai
2 - 5 yrs
₹7L - ₹18L / yr
skill iconDocker
skill iconKubernetes
CI/CD
skill iconJenkins

Job Title: DevOps Engineer

Location: Mumbai

Experience: 2–4 Years

Department: Technology

About InCred

InCred is a new-age financial services group leveraging technology and data science to make lending quick, simple, and hassle-free. Our mission is to empower individuals and businesses by providing easy access to financial services while upholding integrity, innovation, and customer-centricity. We operate across personal loans, education loans, SME financing, and wealth management, driving financial inclusion and socio-economic progress.

Role Overview

As a DevOps Engineer, you will play a key role in automating, scaling, and maintaining our cloud infrastructure and CI/CD pipelines. You will collaborate with development, QA, and operations teams to ensure high availability, security, and performance of our systems that power millions of transactions.

Key Responsibilities

  • Cloud Infrastructure Management: Deploy, monitor, and optimize infrastructure on AWS (EC2, EKS, S3, VPC, IAM, RDS, Route53) or similar platforms.
  • CI/CD Automation: Build and maintain pipelines using tools like Jenkins, GitLab CI, or similar.
  • Containerization & Orchestration: Manage Docker and Kubernetes clusters for scalable deployments.
  • Infrastructure as Code: Implement and maintain IaC using Terraform or equivalent tools.
  • Monitoring & Logging: Set up and manage tools like Prometheus, Grafana, ELK stack for proactive monitoring.
  • Security & Compliance: Ensure systems adhere to security best practices and regulatory requirements.
  • Performance Optimization: Troubleshoot and optimize system performance, network configurations, and application deployments.
  • Collaboration: Work closely with developers and QA teams to streamline release cycles and improve deployment efficiency.

Required Skills

  • 2–4 years of hands-on experience in DevOps roles.
  • Strong knowledge of Linux administration and shell scripting (Bash/Python).
  • Experience with AWS services and cloud architecture.
  • Proficiency in CI/CD tools (Jenkins, GitLab CI) and version control systems (Git).
  • Familiarity with Docker, Kubernetes, and container orchestration.
  • Knowledge of Terraform or similar IaC tools.
  • Understanding of networking, security, and performance tuning.
  • Exposure to monitoring tools (Prometheus, Grafana) and log management.

Preferred Qualifications

  • Experience in financial services or fintech environments.
  • Knowledge of microservices architecture and enterprise-grade SaaS setups.
  • Familiarity with compliance standards in BFSI (Banking & Financial Services Industry).

Why Join InCred?

  • Culture: High-performance, ownership-driven, and innovation-focused environment.
  • Growth: Opportunities to work on cutting-edge tech and scale systems for millions of users.
  • Rewards: Competitive compensation, ESOPs, and performance-based incentives.
  • Impact: Be part of a mission-driven organization transforming India’s credit landscape.


Read more
Tradelab Software Private Limited
Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
skill iconDocker
skill iconKubernetes
CI/CD

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.


What We Expect:

• You should already be exceptional at Golang. If you need hand-holding, this isn’t the place for you.

• You thrive on challenges, not on perks or financial rewards.

• You measure success by your own growth, not external validation.

• Taking calculated risks excites you—you’re here to build, break, and learn.

• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.

• You understand the stakes—milliseconds can make or break trades, and precision is everything.


What You Will Do:

• Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.

• Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.

• Build event-driven, fault-tolerant systems that can handle massive real-time data streams.

• Own your work—no babysitting, no micromanagement.

• Work alongside equally driven engineers who expect nothing less than brilliance.

• Learn faster than you ever thought possible.


Must-Have Skills:

• Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).

• Deep understanding of concurrency, memory management, and system design.

• Experience with Trading, market data processing, or low-latency systems.

• Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.

• Hands-on with Docker, Kubernetes, and CI/CD pipelines.

• A portfolio of work that speaks louder than a resume.


Nice-to-Have Skills:

• Past experience in fintech.

• Contributions to open-source Golang projects.

• A history of building something impactful from scratch.

• Understanding of FIX protocol, WebSockets, and streaming APIs.

Read more
Virtana

at Virtana

2 candid answers
Krutika Devadiga
Posted by Krutika Devadiga
Pune
4 - 10 yrs
Best in industry
skill iconJava
skill iconKubernetes
skill iconGo Programming (Golang)
skill iconPython
Apache Kafka
+13 more

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 7+ years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more of these high level languages like GO, Python, Java. Deep experience with one of these languages is required. 
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana:  Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

Read more
AdTech Industry

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹0.1L - ₹0.1L / yr
skill iconPython
MLOps
Apache Airflow
Apache Spark
AWS CloudFormation
+23 more

Review Criteria

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, with recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you okay for F2F round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
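
A minimal Airflow DAG sketch for the kind of train-and-deploy orchestration described above; the DAG id, schedule, and task bodies are illustrative placeholders rather than real pipeline steps.

```python
# Minimal Airflow 2.x DAG: extract features, train, then register a model.
# DAG id, schedule, and task bodies are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_features() -> None:
    print("pull training data from S3/Redshift")  # placeholder


def train_model() -> None:
    print("launch a Spark/EMR or SageMaker training job")  # placeholder


def register_model() -> None:
    print("push the validated artifact to the model registry")  # placeholder


with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    register = PythonOperator(task_id="register_model", python_callable=register_model)

    extract >> train >> register
```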

 

Ideal Candidate

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in computer science, Machine Learning, Data Engineering, or related field.

 

Read more
GrowthArc

at GrowthArc

2 candid answers
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote, Bengaluru (Bangalore)
7yrs+
Upto ₹38L / yr (Varies)
skill iconAmazon Web Services (AWS)
skill iconKubernetes
skill iconDocker
Terraform
Amazon EC2
+7 more

Seeking an experienced AWS Migration Engineer with 7+ years of hands-on experience to lead cloud migration projects, assess legacy systems, and ensure seamless transitions to AWS infrastructure. The role focuses on strategy, execution, optimization, and minimizing downtime during migrations.


Key Responsibilities:

  • Conduct assessments of on-premises and legacy systems for AWS migration feasibility.
  • Design and execute migration strategies using AWS Migration Hub, DMS, and Server Migration Service.
  • Plan and implement lift-and-shift, re-platforming, and refactoring approaches.
  • Optimize workloads post-migration for cost, performance, and security.
  • Collaborate with stakeholders to define migration roadmaps and timelines.
  • Perform data migration, application re-architecture, and hybrid cloud setups.
  • Monitor migration progress, troubleshoot issues, and ensure business continuity.
  • Document processes and provide post-migration support and training.
  • Manage and troubleshoot Kubernetes/EKS networking components including VPC CNI, Service Mesh, Ingress controllers, and Network Policies.
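
To illustrate the monitoring side of a DMS-based migration mentioned above, here is a small sketch that reports replication-task status via boto3; credentials, region, and task names are assumed to be configured elsewhere.

```python
# Print the status and full-load progress of AWS DMS replication tasks.
# Assumes boto3 credentials/region are already configured.
import boto3

dms = boto3.client("dms")

for task in dms.describe_replication_tasks()["ReplicationTasks"]:
    identifier = task["ReplicationTaskIdentifier"]
    status = task["Status"]
    progress = task.get("ReplicationTaskStats", {}).get("FullLoadProgressPercent", 0)
    print(f"{identifier}: status={status}, full-load progress={progress}%")
```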


Required Qualifications:

  • 7+ years of IT experience, with minimum 4 years focused on AWS migrations.
  • AWS Certified Solutions Architect or Migration Specialty certification preferred.
  • Expertise in AWS services: EC2, S3, RDS, VPC, Direct Connect, DMS, SMS.
  • Strong knowledge of cloud migration tools and frameworks (AWS MGN, Snowball).
  • Experience with infrastructure as code (CloudFormation, Terraform).
  • Proficiency in scripting (Python, PowerShell) and automation.
  • Familiarity with security best practices (IAM, encryption, compliance).
  • Hands-on experience with Kubernetes/EKS networking components and best practices.


Preferred Skills:

  • Experience with hybrid/multi-cloud environments.
  • Knowledge of DevOps tools (Jenkins, GitLab CI/CD).
  • Excellent problem-solving and communication skills.
Read more
Blurgs AI

at Blurgs AI

2 candid answers
Nikita Sinha
Posted by Nikita Sinha
Hyderabad
4 - 8 yrs
Upto ₹25L / yr (Varies)
skill iconPython
Data engineering
skill iconMongoDB
SQL
skill iconDocker
+1 more

We are seeking a Technical Lead with strong expertise in backend engineering, real-time data streaming, and platform/infrastructure development to lead the architecture and delivery of our on-premise systems.

You will design and build high-throughput streaming pipelines (Apache Pulsar, Apache Flink), backend services (FastAPI), data storage models (MongoDB, ClickHouse), and internal dashboards/tools (Angular).

In this role, you will guide engineers, drive architectural decisions, and ensure reliable systems deployed on Docker + Kubernetes clusters.


Key Responsibilities

1. Technical Leadership & Architecture

  • Own the end-to-end architecture for backend, streaming, and data systems.
  • Drive system design decisions for ingestion, processing, storage, and DevOps.
  • Review code, enforce engineering best practices, and ensure production readiness.
  • Collaborate closely with founders and domain experts to translate requirements into technical deliverables.

2. Data Pipeline & Streaming Systems

  • Architect and implement real-time, high-throughput data pipelines using Apache Pulsar and Apache Flink.
  • Build scalable ingestion, enrichment, and stateful processing workflows.
  • Integrate multi-sensor maritime data into reliable, unified streaming systems.

3. Backend Services & Platform Engineering

  • Lead development of microservices and internal APIs using FastAPI (or equivalent backend frameworks).
  • Build orchestration, ETL, and system-control services.
  • Optimize backend systems for latency, throughput, resilience, and long-term maintainability.

4. Data Storage & Modeling

  • Design scalable, efficient data models using MongoDB, ClickHouse, and other on-prem databases.
  • Implement indexing, partitioning, retention, and lifecycle strategies for large datasets.
  • Ensure high-performance APIs and analytics workflows.

5. Infrastructure, DevOps & Containerization

  • Deploy and manage distributed systems using Docker and Kubernetes.
  • Own observability, monitoring, logging, and alerting for all critical services.
  • Implement CI/CD pipelines tailored for on-prem and hybrid cloud environments.

6. Team Management & Mentorship

  • Provide technical guidance to engineers across backend, data, and DevOps teams.
  • Break down complex tasks, review designs, and ensure high-quality execution.
  • Foster a culture of clarity, ownership, collaboration, and engineering excellence.
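
Tying back to the streaming responsibilities in section 2, here is a minimal Apache Pulsar consumer sketch in Python; the broker URL, topic, and subscription names are placeholders rather than actual platform components.

```python
# Minimal Pulsar consumer loop: receive, process, acknowledge.
# Broker URL, topic, and subscription names are illustrative placeholders.
import pulsar


def process(payload: bytes) -> None:
    print(f"enrich/route sensor message: {payload[:80]!r}")  # placeholder


client = pulsar.Client("pulsar://localhost:6650")
consumer = client.subscribe("sensor-events", subscription_name="enrichment-worker")

try:
    while True:
        msg = consumer.receive()
        try:
            process(msg.data())
            consumer.acknowledge(msg)
        except Exception:
            consumer.negative_acknowledge(msg)  # redeliver on failure
finally:
    client.close()
```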

Required Skills & Experience

  • 5–10+ years of strong software engineering experience.
  • Expertise with streaming platforms like Apache Pulsar, Apache Flink, or similar technologies.
  • Strong backend engineering proficiency — preferably FastAPI, Python, Java, or Scala.
  • Hands-on experience with MongoDB and ClickHouse.
  • Solid experience deploying, scaling, and managing services on Docker + Kubernetes.
  • Strong understanding of distributed systems, high-performance data flows, and system tuning.
  • Experience working with Angular for internal dashboards is a plus.
  • Excellent system-design, debugging, and performance-optimization skills.
  • Prior experience owning critical technical components or leading engineering teams.

Nice to Have

  • Experience with sensor data (AIS, Radar, SAR, EO/IR).
  • Exposure to maritime, defence, or geospatial technology.
  • Experience with bare-metal / on-premise deployments.
Read more
Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹15L - ₹30L / yr
skill iconMachine Learning (ML)
skill iconAmazon Web Services (AWS)
skill iconKubernetes
ECS
Amazon Redshift
+14 more

Core Responsibilities:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
  • Model Development: Algorithms and architectures range from traditional statistical methods to deep learning, including the use of LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.
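
A compact sketch of the data-preparation and model-development loop outlined above, using scikit-learn on a toy dataset; the dataset, model, and metric choices are illustrative only, and production training would typically run through SageMaker pipelines.

```python
# Toy train/evaluate loop: prepare data, fit a model, report a metric.
# Dataset and model choice are illustrative, not the team's actual workflow.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout ROC-AUC: {auc:.3f}")
```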

 

Skills:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other touch points include ScyllaDB (a Bigtable-like store), OpenSearch, and the Neo4j graph database.
  • Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.

 

Required experience:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the Sagemaker pipeline with ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)

 

Skills: AWS, AWS Cloud, Amazon Redshift, EKS

 

Must-Haves

Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker

Notice period - 0 to 15days only

Hybrid work mode- 3 days office, 2 days at home

Read more
Hashone Careers
Bengaluru (Bangalore), Pune, Hyderabad
5 - 10 yrs
₹12L - ₹25L / yr
DevOps
skill iconPython
cicd
skill iconKubernetes
skill iconDocker
+1 more

Job Description

Experience: 5 - 9 years

Location: Bangalore/Pune/Hyderabad

Work Mode: Hybrid(3 Days WFO)


Senior Cloud Infrastructure Engineer for Data Platform 


The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.


Key Responsibilities:


Cloud Infrastructure Design & Management

Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.

Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.

Optimize cloud costs and ensure high availability and disaster recovery for critical systems


Databricks Platform Management

Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.

Automate cluster management, job scheduling, and monitoring within Databricks.

Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.
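
One way to automate the job scheduling and triggering mentioned above is through the Databricks Jobs REST API; the sketch below assumes a workspace URL, a job id, and a personal access token supplied via environment variables.

```python
# Trigger an existing Databricks job via the Jobs 2.1 REST API.
# Workspace URL, job id, and token are assumed to come from the environment.
import os

import requests

host = os.environ["DATABRICKS_HOST"]        # e.g. https://adb-xxxx.azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]      # personal access token
job_id = int(os.environ["DATABRICKS_JOB_ID"])

response = requests.post(
    f"{host}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {token}"},
    json={"job_id": job_id},
    timeout=30,
)
response.raise_for_status()
print("triggered run:", response.json()["run_id"])
```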


CI/CD Pipeline Development

Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.

Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.


Monitoring & Incident Management

Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.

Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.


Security & Compliance

Enforce security best practices, including identity and access management (IAM), encryption, and network security.

Ensure compliance with organizational and regulatory standards for data protection and cloud operations.


Collaboration & Documentation

Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.

Maintain comprehensive documentation for infrastructure, processes, and configurations.


Required Qualifications

Education: Bachelor’s degree in Computer Science, Engineering, or a related field.


Must Have Experience:

6+ years of experience in DevOps or Cloud Engineering roles.

Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.

Hands-on experience with Databricks for data engineering and analytics.


Technical Skills:

Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.

Strong scripting skills in Python, or Bash.

Experience with containerization and orchestration tools like Docker and Kubernetes.

Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).


Soft Skills:

Strong problem-solving and analytical skills.

Excellent communication and collaboration abilities.

Read more
Remote only
5 - 15 yrs
₹10L - ₹15L / yr
FastAPI
skill iconPython
RESTful APIs
SQL
NOSQL Databases
+5 more


Summary:

We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.

Job Type:

Full-Time Contractor (12 months)

Location:

Remote / On-site (Jaipur preferred, as per project needs)

Experience:

5+ years in backend development

Key Responsibilities:

  • Design, develop, and maintain robust backend services using Python and FastAPI.
  •  Implement and manage Prisma ORM for database operations.
  • Build scalable APIs and integrate with SQL databases and third-party services.
  • Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
  • Collaborate with front-end developers and other team members to deliver high-quality web applications.
  • Ensure application performance, security, and reliability.
  • Participate in code reviews, testing, and deployment processes.
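
Because the role pairs FastAPI-style backends with Azure Function Apps, here is a minimal HTTP-triggered Azure Function in Python (v1 programming model); the parameter name and response body are placeholders.

```python
# Minimal HTTP-triggered Azure Function (Python v1 programming model).
# Parameter names and the response body are illustrative placeholders.
import json

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    body = json.dumps({"message": f"hello, {name}"})
    return func.HttpResponse(body, status_code=200, mimetype="application/json")
```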

Required Skills:

  • Expertise in Python backend development with strong experience in FastAPI.
  • Solid understanding of RESTful API design and implementation.
  • Proficiency in SQL databases and ORM tools (preferably Prisma)
  • Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
  • Familiarity with CI/CD pipelines and containerization (Docker).
  • Knowledge of cloud architecture best practices.

Added Advantage:

  • Front-end development knowledge (React, Angular, or similar frameworks).
  • Exposure to AWS/GCP cloud platforms.
  • Experience with NoSQL databases.

Eligibility:

  • Minimum 5 years of professional experience in backend development.
  • Available for full-time engagement.
  • Please apply only if you are not currently engaged in other projects; we require dedicated availability.

 

Read more
Wehyb Online Services LLP
Pawan Choudhary
Posted by Pawan Choudhary
Ahmedabad
3 - 5 yrs
₹8L - ₹10L / yr
skill iconPython
skill iconJavascript
ERP module
frappe
SQL
+7 more

Job Title: Frappe Engineer

Location: Ahmedabad, Gujarat


About Us

We specialize in providing scalable, high-performance IT and software development teams through our Offshore Development Centre (ODC) model. As a part of the DevX Group, we enable global companies to establish dedicated, culturally aligned engineering teams in India combining world-class talent, infrastructure, and operational excellence. Our expertise spans AI led product engineering, data platforms, cloud solutions, and digital transformation initiatives for mid-market and enterprise clients worldwide.


Position Overview

We are looking for an experienced Frappe Engineer to lead the implementation, customization, and integration of ERPNext solutions. The ideal candidate should have strong expertise in ERP module customization, business process automation, and API integrations while ensuring system scalability and performance.


Key Responsibilities

ERPNext Implementation & Customization

  • Design, configure, and deploy ERPNext solutions based on business requirements.
  • Customize and develop new modules, workflows, and reports within ERPNext.
  • Optimize system performance and ensure data integrity.


Integration & Development

  • Develop custom scripts using Python and Frappe Framework.
  • Integrate ERPNext with third-party applications via REST API.
  • Automate workflows, notifications, and business processes.
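
A minimal sketch of a whitelisted Frappe method of the sort described above, callable over REST at /api/method/<app>.api.get_open_orders; the DocType, filters, and field names are assumptions for illustration.

```python
# Hypothetical server script in a custom Frappe app, exposed over REST.
# DocType, filters, and field names are assumed examples.
import frappe


@frappe.whitelist()
def get_open_orders(customer: str):
    """Return open Sales Orders for a customer as a simple list of dicts."""
    return frappe.get_all(
        "Sales Order",
        filters={"customer": customer, "status": "To Deliver and Bill"},
        fields=["name", "transaction_date", "grand_total"],
    )
```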


Technical Support & Maintenance

  • Troubleshoot ERP-related issues and provide ongoing support.
  • Upgrade and maintain ERPNext versions with minimal downtime.
  • Ensure security, scalability, and compliance in ERP implementations.


Collaboration & Documentation

  • Work closely with stakeholders to understand business needs and translate them into ERP solutions.
  • Document ERP configurations, custom scripts, and best practices.


Qualifications & Skills

  • 3+ years of experience working with ERPNext and the Frappe Framework.
  • Strong proficiency in Python, JavaScript, and SQL.
  • Hands-on experience with ERPNext customization, report development, and API integrations.
  • Knowledge of Linux, Docker, and cloud platforms (AWS, GCP, or Azure) is a plus.
  • Experience in business process automation and workflow optimization.
  • Familiarity with version control (Git) and Agile development methodologies.


Benefits:

  • Competitive salary.
  • Opportunity to lead a transformative ERP project for a mid-market client.
  • Professional development opportunities.
  • Fun and inclusive company culture.
  • Five-day workweek.


Read more
Inferigence Quotient

at Inferigence Quotient

1 recruiter
Neeta Trivedi
Posted by Neeta Trivedi
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹15L / yr
skill iconPython
skill iconNodeJS (Node.js)
FastAPI
skill iconDocker
skill iconJavascript
+16 more

3-5 years of experience as a full stack developer, with essential hands-on experience in the following technologies: FastAPI, JavaScript, React.js-Redux, Node.js, Next.js, MongoDB, Python, Microservices, Docker, and MLOps.


Experience in Cloud Architecture using Kubernetes (K8s), Google Kubernetes Engine, Authentication and Authorisation Tools, DevOps Tools and Scalable and Secure Cloud Hosting is a significant plus.


Ability to manage a hosting environment, scale applications to handle load changes, and ensure accessibility and security compliance.

 

Testing of API endpoints.

 

Ability to build functional web applications and optimise them for faster response times and greater efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, and table partitioning.

 

Expert knowledge of Python and corresponding frameworks with their best practices, expert knowledge of relational databases, NoSQL.


Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.

 

Must be conversant with Agile software development methodology, able to write technical documents, and able to coordinate with test teams. Proficient in using Git version control.

Read more
One2n

at One2n

3 candid answers
Reshika Mendiratta
Posted by Reshika Mendiratta
Pune
6yrs+
Up to ₹35L / yr (varies)
skill iconKubernetes
Monitoring
skill iconAmazon Web Services (AWS)
JVM
skill iconDocker
+7 more

About the role:

We are looking for a Senior Site Reliability Engineer who understands the nuances of production systems. If you care about building and running reliable software systems in production, you'll like working at One2N.

You will primarily work with our startup and mid-size clients. We work on one-to-N kinds of problems (hence the name One2N): those where the proof of concept is done and the work revolves around scalability, maintainability, and reliability. In this role, you will be responsible for architecting and optimizing our observability and infrastructure to provide actionable insights into performance and reliability.


Responsibilities:

  • Conceptualise, design, and build platform engineering solutions with a self-serve model to enable product engineering teams.
  • Provide technical guidance and mentorship to junior engineers.
  • Participate in code reviews and contribute to best practices for development and operations.
  • Design and implement comprehensive monitoring, logging, and alerting solutions to collect, analyze, and visualize data (metrics, logs, traces) from diverse sources.
  • Develop custom monitoring metrics, dashboards, and reports to track key performance indicators (KPIs), detect anomalies, and troubleshoot issues proactively.
  • Improve Developer Experience (DX) to help engineers improve their productivity.
  • Design and implement CI/CD solutions to optimize velocity and shorten the delivery time.
  • Help SRE teams set up on-call rosters and coach them for effective on-call management.
  • Automate repetitive manual tasks across CI/CD pipelines and operations, and apply infrastructure as code (IaC) practices.
  • Stay up-to-date with emerging technologies and industry trends in cloud-native, observability, and platform engineering space.
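
To illustrate the "custom monitoring metrics" responsibility above, a minimal sketch using the Prometheus Python client; the metric names and the simulated workload are assumptions:

```python
# Expose a custom counter and latency histogram for Prometheus to scrape.
# Metric names and the simulated workload are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS_TOTAL = Counter("app_requests_total", "Total requests handled")
REQUEST_LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


@REQUEST_LATENCY.time()
def handle_request() -> None:
    REQUESTS_TOTAL.inc()
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    while True:
        handle_request()
```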


Requirements:

  • 6-9 years of professional experience in DevOps practices or software engineering roles, with a focus on Kubernetes on an AWS platform.
  • Expertise in observability and telemetry tools and practices, including hands-on experience with tools such as Datadog, Honeycomb, the ELK stack, Grafana, and Prometheus.
  • Working knowledge of programming using Golang, Python, Java, or equivalent.
  • Skilled in diagnosing and resolving Linux operating system issues.
  • Strong proficiency in scripting and automation to build monitoring and analytics solutions.
  • Solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies.
  • Experience with infrastructure as code (IaC) tools such as Terraform or Pulumi.
  • Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
  • Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
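
For the IaC tooling mentioned above, a minimal Pulumi (Python) sketch is shown below; the resource name and tags are assumptions, and `pulumi up` would provision it against whatever AWS credentials are configured:

```python
# Minimal Pulumi program: declare an S3 bucket as code and export its name.
# Resource name and tags are illustrative assumptions.
import pulumi
import pulumi_aws as aws

artifact_bucket = aws.s3.Bucket(
    "build-artifacts",
    tags={"team": "platform", "managed-by": "pulumi"},
)

pulumi.export("bucket_name", artifact_bucket.id)
```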
Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Bengaluru (Bangalore)
1 - 8 yrs
₹12L - ₹34L / yr
skill iconPython
skill iconReact.js
skill iconDjango
FastAPI
TypeScript
+7 more

Please note that salary will be based on experience.


Job Title: Full Stack Engineer

Location: Bengaluru (Indiranagar) – Work From Office (5 Days)

Job Summary

We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.

Responsibilities

  • Design, develop, and maintain scalable full-stack applications.
  • Build responsive, high-performance UIs using TypeScript & Next.js.
  • Develop backend services and APIs using Python (FastAPI/Django).
  • Work closely with product, design, and business teams to translate requirements into intuitive solutions.
  • Contribute to architecture discussions and drive technical best practices.
  • Own features end-to-end — design, development, testing, deployment, and monitoring.
  • Ensure robust security, code quality, and performance optimization.

Tech Stack

Frontend: TypeScript, Next.js, React, Tailwind CSS

Backend: Python, FastAPI, Django

Databases: PostgreSQL, MongoDB, Redis

Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD

Other Tools: Git, GitHub, Elasticsearch, Observability tools

Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience.
  • Strong expertise in either frontend (TypeScript/Next.js) or backend (Python/FastAPI/Django), with familiarity in both.
  • Experience building RESTful services and microservices.
  • Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
  • Strong debugging, problem-solving, and optimization skills.
  • Ability to thrive in fast-paced, high-ownership startup environments.

Good-to-Have:

  • Exposure to Docker, Kubernetes, and observability tools.
  • Experience with message queues or event-driven architecture.
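
For the event-driven architecture item above, a small consumer sketch with kafka-python; the topic, broker address, group id, and event shape are assumptions:

```python
# Consume JSON events from a Kafka topic and hand them to a processing function.
# Topic name, broker address, group id, and event fields are assumptions.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-events",                          # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="web-backend",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)


def process(event: dict) -> None:
    print(f"handling {event.get('type')} for user {event.get('user_id')}")


for message in consumer:
    process(message.value)  # message.value is the deserialized JSON payload
```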


Perks & Benefits

  • Upskilling support – courses, tools & learning resources.
  • Fun team outings, hackathons, demos & engagement initiatives.
  • Flexible Work-from-Home: 12 WFH days every 6 months.
  • Menstrual WFH: up to 3 days per month.
  • Mobility benefits: relocation support & travel allowance.
  • Parental support: maternity, paternity & adoption leave.
Read more
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
1 - 8 yrs
₹8L - ₹35L / yr
skill iconPython
FastAPI
skill iconDjango
TypeScript
skill iconNextJs (Next.js)
+11 more

Job Title : Full Stack Engineer (Python + React.js/Next.js)

Experience : 1 to 6+ Years

Location : Bengaluru (Indiranagar)

Employment : Full-Time

Working Days : 5 Days WFO

Notice Period : Immediate to 30 Days


Role Overview :

We are seeking Full Stack Engineers to build scalable, high-performance fintech products.

You will work on both the frontend (TypeScript/Next.js) and the backend (Python/FastAPI/Django), owning features end-to-end and contributing to architecture, performance, and product innovation.


Main Tech Stack :

Frontend : TypeScript, Next.js, React

Backend : Python, FastAPI, Django

Database : PostgreSQL, MongoDB, Redis

Cloud : AWS/GCP, Docker, Kubernetes

Tools : Git, GitHub, CI/CD, Elasticsearch


Key Responsibilities :

  • Develop full-stack applications with clean, scalable code.
  • Build fast, responsive UIs using TypeScript, Next.js, and React.
  • Develop backend APIs using Python, FastAPI, Django.
  • Collaborate with product/design to implement solutions.
  • Own development lifecycle: design → build → deploy → monitor.
  • Ensure performance, reliability, and security.


Requirements :

Must-Have :

  • 1–6+ years of full-stack experience.
  • Product-based company background.
  • Strong DSA + problem-solving skills.
  • Proficiency in either frontend or backend with familiarity in both.
  • Hands-on experience with APIs, microservices, Git, CI/CD, cloud.
  • Strong communication & ownership mindset.

Good-to-Have :

  • Experience with containers, system design, observability tools.

Interview Process :

  1. Coding Round : DSA + problem solving
  2. System Design : LLD + HLD, scalability, microservices
  3. CTO Round : Technical deep dive + cultural fit
Read more
Watsoo Express
Gurgaon Udyog vihar phase 5
6 - 10 yrs
₹9L - ₹11L / yr
skill iconDocker
skill iconKubernetes
helm
cicd
skill iconGitHub
+9 more

Profile: Sr. DevOps Engineer

Location: Gurugram

Experience: 05+ Years

Notice Period: Immediate to 1 week

Company: Watsoo

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 5+ years of proven hands-on DevOps experience.
  • Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
  • Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
  • Hands-on experience with cloud platforms (AWS, Azure, or GCP).
  • Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
  • Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
  • Proficiency in scripting languages (Python, Bash, or Shell).
  • Knowledge of networking, security, and system administration.
  • Strong problem-solving skills and ability to work in fast-paced environments.
  • Troubleshoot production issues, perform root cause analysis, and implement preventive measures.

Advocate DevOps best practices, automation, and continuous improvement
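
As a small example of the Python scripting and production troubleshooting expected here, a sketch using the official Kubernetes Python client; it assumes local kubeconfig access (in-cluster code would use `config.load_incluster_config()` instead):

```python
# Flag pods that are not Running or Succeeded across all namespaces.
# Assumes kubectl-style access via ~/.kube/config.
from kubernetes import client, config


def unhealthy_pods() -> list[str]:
    config.load_kube_config()
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            problems.append(
                f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}"
            )
    return problems


if __name__ == "__main__":
    for line in unhealthy_pods():
        print(line)
```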

Read more
Bengaluru (Bangalore)
6 - 10 yrs
₹10L - ₹15L / yr
skill iconAndroid Development
skill iconDocker
skill iconKubernetes
skill iconKotlin
skill iconLeadership
+1 more

Experience: 5-8 years of professional experience in software engineering, with a strong background in developing and deploying scalable applications.

● Technical Skills:

○ Architecture: Demonstrated experience in architecture/system design for scale, preferably as a digital public good.

○ Full Stack: Extensive experience with full-stack development, including mobile app development and backend technologies.

○ App Development: Hands-on experience building and launching mobile applications, preferably for Android.

○ Cloud Infrastructure: Familiarity with cloud platforms and containerization technologies (Docker, Kubernetes).

○ (Bonus) ML Ops: Proven experience with ML Ops practices and tools.

● Soft Skills:

○ Experience in hiring team members.

○ A proactive and independent problem-solver, comfortable working in a fast-paced environment.

○ Excellent communication and leadership skills, with the ability to mentor junior engineers.

○ A strong desire to use technology for social good.


Preferred Qualifications

● Experience working in a startup or smaller team environment.

● Familiarity with the healthcare or public health sector.

● Experience in developing applications for low-resource environments.

● Experience with data management in privacy and security-sensitive applications.

Read more
AI Industry

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 12 yrs
₹20L - ₹46L / yr
skill iconData Science
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Generative AI
skill iconDeep Learning
+14 more

Review Criteria

  • Strong Senior Data Scientist (AI/ML/GenAI) Profile
  • 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
  • Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
  • 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
  • Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows

 

Preferred

  • Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
  • Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual interviews require video to be on; are you okay with that?


Role & Responsibilities

The company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.

 

Responsibilities:

  • Own the full ML lifecycle: model design, training, evaluation, deployment
  • Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
  • Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
  • Build agentic workflows for reasoning, planning, and decision-making
  • Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
  • Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
  • Collaborate with product and engineering teams to integrate AI models into business applications
  • Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
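
To make the LLM fine-tuning responsibility concrete, a hedged sketch of a LoRA setup with Hugging Face transformers and peft; the base model, target modules, and hyperparameters are placeholders rather than a vetted recipe:

```python
# LoRA adapter setup: wrap a causal LM so only low-rank adapters are trained.
# Base model name, target modules, and hyperparameters are illustrative.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "gpt2"  # placeholder; a production LLM would differ
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                 # low-rank adapter dimension
    lora_alpha=16,       # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection module in GPT-2
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only adapter weights are trainable
# A transformers.Trainer (or custom loop) over domain data would follow here.
```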


Ideal Candidate

  • 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
  • Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
  • Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
  • Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
  • Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
  • Strong software engineering background with experience in testing, version control, and APIs
  • Proven ability to balance innovation with scalable deployment
  • B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
  • Bonus: Open-source contributions, GenAI research, or applied systems at scale


Read more
Tops Infosolutions
Zurin Momin
Posted by Zurin Momin
Ahmedabad
5 - 12 yrs
₹9.5L - ₹14L / yr
skill iconLaravel
skill iconJavascript
DevOps
CI/CD
skill iconKubernetes
+2 more

Job Title: DevOps Engineer


Job Description: We are seeking an experienced DevOps Engineer to support our Laravel, JavaScript (Node.js, React, Next.js), and Python development teams. The role involves building and maintaining scalable CI/CD pipelines, automating deployments, and managing cloud infrastructure to ensure seamless delivery across multiple environments.


Responsibilities:

Design, implement, and maintain CI/CD pipelines for Laravel, Node.js, and Python projects.

Automate application deployment and environment provisioning using AWS and containerization tools.

Manage and optimize AWS infrastructure (EC2, ECS, RDS, S3, CloudWatch, IAM, Lambda).

Implement Infrastructure as Code (IaC) using Terraform or AWS CloudFormation. Manage configuration automation using Ansible.

Build and manage containerized environments using Docker (Kubernetes is a plus).

Monitor infrastructure and application performance using CloudWatch, Prometheus, or Grafana.

Ensure system security, data integrity, and high availability across environments.

Collaborate with development teams to streamline builds, testing, and deployments.

Troubleshoot and resolve infrastructure and deployment-related issues.
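
As an illustration of the container automation described above, a short sketch with the Docker SDK for Python; the image tag, build path, port mapping, and environment values are assumptions:

```python
# Build an image from a local Dockerfile and run it with a port mapping.
# Image tag, build path, port, and environment values are assumptions.
import docker

client = docker.from_env()

# Build ./app/Dockerfile into a tagged image.
image, _build_logs = client.images.build(path="./app", tag="laravel-app:ci")

# Run it detached, mapping container port 80 to host port 8080.
container = client.containers.run(
    "laravel-app:ci",
    detach=True,
    ports={"80/tcp": 8080},
    environment={"APP_ENV": "staging"},
)

print(container.logs(tail=20).decode())
container.stop()
```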


Required Skills:

AWS (EC2, ECS, RDS, S3, IAM, Lambda)

CI/CD Tools: Jenkins, GitLab CI/CD, AWS CodePipeline, CodeBuild, CodeDeploy

Infrastructure as Code: Terraform or AWS CloudFormation

Configuration Management: Ansible

Containers: Docker (Kubernetes preferred)

Scripting: Bash, Python

Version Control: Git, GitHub, GitLab

Web Servers: Apache, Nginx (preferred)

Databases: MySQL, MongoDB (preferred)


Qualifications:

3+ years of experience as a DevOps Engineer in a production environment.

Proven experience supporting Laravel, Node.js, and Python-based applications.

Strong understanding of CI/CD, containerization, and automation practices.

Experience with infrastructure monitoring, logging, and performance optimization.

Familiarity with agile and collaborative development processes.

Read more
IAI solution
Anajli Kanojiya
Posted by Anajli Kanojiya
Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹20L / yr
skill iconNextJs (Next.js)
skill iconPython
skill iconReact.js
skill iconDocker
skill iconMongoDB

Job Title: Full-Stack Developer

Experience: 5 to 8+ Years

Start: Immediate (ASAP)


Key Responsibilities

Develop and maintain end-to-end web applications, including frontend interfaces and backend services.

Build responsive and scalable UIs using React.js and Next.js.

Design and implement robust backend APIs using Python, FastAPI, Django, or Node.js.

Work with cloud platforms such as Azure (preferred) or AWS for application deployment and scaling.

Manage DevOps tasks, including containerization with Docker, orchestration with Kubernetes, and infrastructure as code with Terraform.

Set up and maintain CI/CD pipelines using tools like GitHub Actions or Azure DevOps.

Design and optimize database schemas using PostgreSQL, MongoDB, and Redis.

Collaborate with cross-functional teams in an agile environment to deliver high-quality features on time.

Troubleshoot, debug, and improve application performance and security.

Take full ownership of assigned modules/features and contribute to technical planning and architecture discussions.
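
Since the role covers Redis alongside the primary databases, here is a minimal cache-aside sketch with redis-py; `fetch_profile_from_db` and the key layout are hypothetical stand-ins:

```python
# Cache-aside pattern: serve from Redis when possible, fall back to the DB,
# then cache the result with a TTL. The DB fetch is a hypothetical stand-in.
import json

import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)


def fetch_profile_from_db(user_id: int) -> dict:
    # Placeholder for a real PostgreSQL/MongoDB query.
    return {"id": user_id, "name": "example"}


def get_profile(user_id: int, ttl_seconds: int = 300) -> dict:
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit
    profile = fetch_profile_from_db(user_id)
    cache.setex(key, ttl_seconds, json.dumps(profile))  # cache miss: populate
    return profile
```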


Must-Have Qualifications

Strong hands-on experience with Python and at least one backend framework such as FastAPI, Django, or Flask, or with Node.js.

Proficiency in frontend development using React.js and Next.js

Experience in building and consuming RESTful APIs

Solid understanding of database design and queries using PostgreSQL, MongoDB, and Redis

Practical experience with cloud platforms, preferably Azure, or AWS

Familiarity with containerization and orchestration tools like Docker and Kubernetes

Working knowledge of Infrastructure as Code (IaC) using Terraform

Experience with CI/CD pipelines using GitHub Actions or Azure DevOps

Ability to work in an agile development environment with cross-functional teams

Strong problem-solving, debugging, and communication skills

Start-up experience preferred – ability to manage ambiguity, rapid iterations, and hands-on leadership.


Technical Stack

Frontend: React.js, Next.js

Backend: Python, FastAPI, Django, Spring Boot, Node.js

DevOps & Cloud: Azure (preferred), AWS, Docker, Kubernetes, Terraform

CI/CD: GitHub Actions, Azure DevOps

Databases: PostgreSQL, MongoDB, Redis


Read more
Zenius IT Services Pvt Ltd

at Zenius IT Services Pvt Ltd

2 candid answers
Sunita Pradhan
Posted by Sunita Pradhan
Remote, Chennai
10 - 15 yrs
₹15L - ₹25L / yr
skill iconJava
skill iconSpring Boot
Hibernate (Java)
JPA
Microservices
+4 more

Role Summary

We are hiring a Senior Java Developer with strong backend development experience to build scalable, high-performance applications and lead technical initiatives.


Key Responsibilities

  • Develop and maintain applications using Java 8/11/17, Spring Boot, and REST APIs.
  • Design and implement microservices and backend components.
  • Work with SQL/NoSQL databases, API integrations, and performance optimization.
  • Collaborate with cross-functional teams and participate in code reviews.
  • Deploy applications using CI/CD, Docker, Kubernetes, and cloud platforms (AWS/Azure/GCP).


Skills Required

  • Strong in Core Java, OOPS, multithreading, collections.
  • Hands-on with Spring Boot, Hibernate/JPA, Microservices.
  • Experience with REST APIs, Git, and CI/CD pipelines.
  • Knowledge of Docker/Kubernetes and cloud basics.
  • Good understanding of database queries and performance tuning.


Nice to Have

  • Experience with messaging systems (Kafka/RabbitMQ).
  • Basic frontend understanding (React/Angular).
Read more
Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹18L / yr
CI/CD
skill iconJenkins
gitlab
ArgoCD
skill iconAmazon Web Services (AWS)
+8 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.


Key Responsibilities

CI/CD and Infrastructure Automation

  • Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
  • Automate deployments using tools such as Terraform, Helm, and Kubernetes
  • Improve build and release processes to support high-performance and low-latency trading applications
  • Work efficiently with Linux/Unix environments

Cloud and On-Prem Infrastructure Management

  • Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
  • Ensure system reliability, scalability, and high availability
  • Implement Infrastructure as Code (IaC) to standardize and streamline deployments

Performance Monitoring and Optimization

  • Monitor system performance and latency using Prometheus, Grafana, and ELK stack
  • Implement proactive alerting and fault detection to ensure system stability
  • Troubleshoot and optimize system components for maximum efficiency

Security and Compliance

  • Apply DevSecOps principles to ensure secure deployment and access management
  • Maintain compliance with financial industry regulations such as SEBI
  • Conduct vulnerability assessments and maintain logging and audit controls
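
For the audit and vulnerability-assessment work above, a hedged boto3 sketch that flags security group rules open to the world; the region and reporting format are assumptions, and credentials are expected from the environment:

```python
# List security group rules that allow ingress from 0.0.0.0/0.
# Region, output format, and credential source are assumptions.
import boto3


def world_open_rules(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    port = rule.get("FromPort", "all")
                    findings.append(f"{sg['GroupId']} allows port {port} from anywhere")
    return findings


if __name__ == "__main__":
    for finding in world_open_rules():
        print(finding)
```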


Required Skills and Qualifications

  • 2+ years of experience as a DevOps Engineer in a software or trading environment
  • Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
  • Proficiency in cloud platforms such as AWS and GCP
  • Hands-on experience with Docker and Kubernetes
  • Experience with Terraform or CloudFormation for IaC
  • Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
  • Familiarity with Prometheus, Grafana, and ELK stack
  • Proficiency in scripting using Python, Bash, or Go
  • Solid understanding of security best practices including IAM, encryption, and network policies


Good to Have (Optional)

  • Experience with low-latency trading infrastructure or real-time market data systems
  • Knowledge of high-frequency trading environments
  • Exposure to FIX protocol, FPGA, or network optimization techniques
  • Familiarity with Redis or Nginx for real-time data handling


Why Join Us?

  • Work with a team that expects and delivers excellence.
  • A culture where risk-taking is rewarded, and complacency is not.
  • Limitless opportunities for growth—if you can handle the pace.
  • A place where learning is currency, and outperformance is the only metric that matters.
  • The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.


This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.


Read more
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
skill iconGo Programming (Golang)
CI/CD
Apache Kafka
RabbitMQ
skill iconDocker
+1 more

Location: Bengaluru, India; Exp: 3-5 Yrs

Backend Developer (Golang) - Trading & Fintech


About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.


What we expect:

  • You should already be exceptional at Golang. If you need hand-holding, this isn’t the place for you.
  • You thrive on challenges, not on perks or financial rewards.
  • You measure success by your own growth, not external validation.
  • Taking calculated risks excites you—you’re here to build, break, and learn.
  • You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
  • You understand the stakes—milliseconds can make or break trades, and precision is everything.



What you will do:

  • Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.
  • Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.
  • Build event-driven, fault-tolerant systems that can handle massive real-time data streams.
  • Own your work—no babysitting, no micromanagement.
  • Work alongside equally driven engineers who expect nothing less than brilliance.


Must have skills:

  • Learn faster than you ever thought possible.
  • Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).
  • Deep understanding of concurrency, memory management, and system design.
  • Experience with trading, market data processing, or low-latency systems.
  • Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.
  • Hands-on with Docker, Kubernetes, and CI/CD pipelines.
  • A portfolio of work that speaks louder than a resume.


Nice-to-Have Skills:

  • Past experience in fintech, trading systems, or algorithmic trading.
  • Contributions to open-source Golang projects.
  • A history of building something impactful from scratch.
  • Understanding of FIX protocol, WebSockets, and streaming APIs.


Why Join Us?

  • Work with a team that expects and delivers excellence.
  • A culture where risk-taking is rewarded, and complacency is not.
  • Limitless opportunities for growth—if you can handle the pace.
  • A place where learning is currency, and outperformance is the only metric that matters.
  • The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.

This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.



Read more