50+ Docker Jobs in India
Job Overview:
We are looking for a full-time Infrastructure & DevOps Engineer to support and enhance our cloud, server, and network operations. The role involves managing virtualization platforms, container environments, automation tools, and CI/CD workflows while ensuring smooth, secure, and reliable infrastructure performance. The ideal candidate should be proactive, technically strong, and capable of working collaboratively across teams.
Qualifications and Requirements
- Bachelor’s/Master’s degree in Computer Science, Engineering, or related field (B.E/B.Tech/BCA/MCA/M.Tech).
- Strong understanding of cloud platforms (AWS, Azure, GCP), including core services and IT infrastructure concepts.
- Hands-on experience with virtualization tools and concepts, including vCenter, hypervisors, nested virtualization, and bare-metal servers.
- Practical knowledge of Linux and Windows servers, including cron jobs and essential Linux commands.
- Experience working with Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of Terraform and Ansible for infrastructure automation.
- Scripting proficiency in Python and Bash (PowerShell optional).
- Networking fundamentals (IP, routing, subnetting, LAN/WAN/WLAN).
- Experience with firewalls, basic security concepts, and tools like pfSense.
- Familiarity with Git/GitHub for version control and team collaboration.
- Ability to perform API testing using cURL and Postman.
- Strong understanding of the application deployment lifecycle and basic application deployment processes.
- Good problem-solving, analytical thinking, and documentation skills.
Roles and Responsibilities
- Manage and maintain Linux/Windows servers, virtualization environments, and cloud infrastructure across AWS/Azure/GCP.
- Use Terraform and Ansible to provision, automate, and manage infrastructure components.
- Support application deployment lifecycle—from build and testing to release and rollout.
- Deploy and maintain Kubernetes clusters and containerized workloads using Docker.
- Develop, enhance, and troubleshoot CI/CD pipelines and integrate DevSecOps practices.
- Write automation scripts using Python/Bash to optimize recurring tasks.
- Conduct API testing using curl and Postman to validate integrations and service functionality (see the sketch after this list).
- Configure and monitor firewalls including pfSense for secure access control.
- Troubleshoot network, server, and application issues using tools like Wireshark, ping, traceroute, and SNMP.
- Maintain Git/GitHub repos, manage branching strategies, and participate in code reviews.
- Prepare clear, detailed documentation including infrastructure diagrams, workflows, SOPs, and configuration records.
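As a rough illustration of the curl/Postman-style API check mentioned above, here is a minimal sketch using Python's requests library; the endpoint URL and the expected payload field are hypothetical, not part of any real system described here.

```python
import sys

import requests  # pip install requests

BASE_URL = "https://api.example.internal"  # hypothetical service endpoint


def check_health(timeout: float = 5.0) -> bool:
    """Validate status code and payload shape, like `curl -fsS $BASE_URL/health`."""
    resp = requests.get(f"{BASE_URL}/health", timeout=timeout)
    if resp.status_code != 200:
        print(f"unexpected status: {resp.status_code}", file=sys.stderr)
        return False
    body = resp.json()
    if body.get("status") != "ok":  # "status" is an assumed field name
        print(f"unexpected payload: {body}", file=sys.stderr)
        return False
    return True


if __name__ == "__main__":
    sys.exit(0 if check_health() else 1)  # nonzero exit flags the failure
```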
SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)
Key Skills: Software Development Life Cycle (SDLC), CI/CD
About Company: Consumer Internet / E-Commerce
Company Size: Mid-Sized
Experience Required: 6 - 10 years
Working Days: 5 days/week
Office Location: Bengaluru [Karnataka]
Review Criteria:
Mandatory:
- Strong DevSecOps profile
- Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
- Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca, along with SIEM / IDS / IPS platforms
- Must have proven experience in securing Kubernetes and containerized environments including image security, runtime protection, RBAC, and network policies.
- Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
- Must have a solid understanding of core security domains including network security, encryption, identity and access management, key management, and security governance, as well as cloud-native security services like GuardDuty, Azure Security Center, etc.
- Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
- Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
- Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
- Must be from a B2B SaaS product company
- Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments
Preferred:
- Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
- Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
- Demonstrates a strong ownership mindset, proactive security-first thinking, and the ability to communicate risks in clear business language
Roles & Responsibilities:
We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.
This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.
If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.
What You’ll Do:
Cloud & Infrastructure Security:
- Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
- Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
- Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
Application & DevSecOps Security:
- Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
- Integrate SAST, DAST, and SCA tools into CI/CD pipelines (see the gate sketch after this list).
- Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
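As one hedged example of what a CI/CD security gate can look like in practice, the sketch below fails a build when scan findings exceed severity thresholds. The JSON report shape is a simplified assumption for illustration, not the actual schema of Snyk, GitHub Advanced Security, or any particular scanner.

```python
import json
import sys

# Assumed, simplified report shape: [{"id": "...", "severity": "critical"}, ...]
MAX_ALLOWED = {"critical": 0, "high": 0, "medium": 5}


def gate(report_path: str) -> int:
    with open(report_path) as f:
        findings = json.load(f)
    counts: dict[str, int] = {}
    for finding in findings:
        severity = finding.get("severity", "unknown").lower()
        counts[severity] = counts.get(severity, 0) + 1
    exit_code = 0
    for severity, limit in MAX_ALLOWED.items():
        if counts.get(severity, 0) > limit:
            print(f"GATE FAIL: {counts[severity]} {severity} findings (limit {limit})")
            exit_code = 1  # nonzero exit fails the pipeline stage
    return exit_code


if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```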
Security Monitoring & Incident Response:
- Monitor security alerts and investigate potential threats across cloud and application layers.
- Lead or support incident response efforts, root-cause analysis, and corrective actions.
- Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
- Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
- Continuously improve detection, response, and testing maturity.
Security Tools & Platforms:
- Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
- Ensure tools are well-integrated, actionable, and aligned with operational needs.
Compliance, Governance & Awareness:
- Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
- Promote secure engineering practices through training, documentation, and ongoing awareness programs.
- Act as a trusted security advisor to engineering and product teams.
Continuous Improvement:
- Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
- Continuously raise the bar on the company's security posture through automation and process improvement.
Endpoint Security (Secondary Scope):
- Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.
Ideal Candidate:
- Strong hands-on experience in cloud security across AWS and Azure.
- Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
- Experience securing containerized and Kubernetes-based environments.
- Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
- Solid understanding of network security, encryption, identity, and access management.
- Experience with application security testing tools (SAST, DAST, SCA).
- Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
- Strong analytical, troubleshooting, and problem-solving skills.
Nice to Have:
- Experience with DevSecOps automation and security-as-code practices.
- Exposure to threat intelligence and cloud security monitoring solutions.
- Familiarity with incident response frameworks and forensic analysis.
- Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.
Perks, Benefits and Work Culture:
A wholesome opportunity in a fast-paced environment that will let you juggle multiple concepts while maintaining quality, share your ideas, and keep learning at work. Work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.
Experience: 3+ years
Responsibilities:
- Build, train, and fine-tune ML models
- Develop features to improve model accuracy and outcomes.
- Deploy models into production using Docker, Kubernetes, and cloud services.
- Proficiency in Python and MLOps, with expertise in data processing and large-scale datasets.
- Hands-on experience with cloud AI/ML services.
- Exposure to RAG architecture.
Job Title: AI/ML Engineer (LLMs, RAG & Agent Systems)
Location: Bangalore, India (On-site)
Type: Full-time
About the Role
As an AI/ML Engineer, you’ll be part of a small, fast-moving team focused on developing LLM-powered agentic systems that drive our next generation of AI products.
You’ll work on designing, implementing, and optimizing pipelines involving retrieval-augmented generation (RAG), multi-agent coordination, and tool-using AI systems.
Responsibilities
- Design and implement components for LLM-based systems (retrievers, planners, memory, evaluators).
- Build and maintain RAG pipelines using vector databases and embedding models (see the retrieval sketch after this list).
- Experiment with reasoning frameworks like ReAct, Tree of Thoughts, and Reflexion.
- Collaborate with backend and infra teams to deploy and optimize agentic applications.
- Research and experiment with open-source LLM frameworks to identify best-fit architectures.
- Contribute to internal tools for evaluation, benchmarking, and scaling AI agents.
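For a sense of what the retrieval step of such a pipeline involves, here is a minimal sketch. The embedding function is a stand-in stub; a real pipeline would call an embedding model and query a vector database instead.

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Stub embedding (hashed bag-of-words); a real system would call an
    embedding model via an API or a library such as sentence-transformers."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    doc_matrix = np.stack([embed(doc) for doc in corpus])
    scores = doc_matrix @ embed(query)  # unit vectors, so dot product = cosine
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]


docs = [
    "ReAct interleaves reasoning traces with tool calls.",
    "Vector databases index embeddings for fast similarity search.",
    "Kubernetes schedules containers across a cluster.",
]
context = retrieve("how does similarity search over embeddings work?", docs)
# The retrieved context would then be packed into the LLM prompt.
print("Answer using only this context:\n" + "\n".join(context))
```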
Required Skills
- Strong foundation in ML/DL theory and implementation (PyTorch preferred).
- Understanding of transformer architectures, embeddings, and LLM mechanics.
- Practical exposure to prompt engineering, tool calling, and structured output design.
- Experience in Python, Git/GitHub, and data processing pipelines.
- Familiarity with RAG systems, vector databases, and API-based model inference.
- Ability to write clean, modular, and reproducible code.
Preferred Skills
- Experience with LangChain, LangGraph, Autogen, or CrewAI.
- Hands-on with Hugging Face ecosystem (transformers, datasets, etc.).
- Working knowledge of Redis, PostgreSQL, or MongoDB.
- Experience with Docker and deployment workflows.
- Familiarity with OpenAI, Anthropic, vLLM, or Ollama inference APIs.
- Exposure to MLOps concepts like CI/CD, model versioning, or cloud (AWS/GCP/Azure).
What We Value
- Deep understanding of core principles over surface-level familiarity with tools.
- Ability to think like a researcher and execute like an engineer.
- Collaborative mindset, building together, learning together.
As a Software Development Engineer II at Hiver, you will play a critical role in building and scaling our product to thousands of users globally. We are growing very fast and process over 5 million emails daily for thousands of active users on the Hiver platform. You will get the chance to work with and mentor a group of smart engineers, and to learn and grow yourself alongside very good mentors. You will work on complex technical challenges such as making the architecture scalable to handle growing traffic, building frameworks to monitor and improve the performance of our systems, and improving reliability and performance. You will code, design, develop, and maintain new product features, and improve existing services for performance and scalability.
What will you be working on?
- Build a new API for our users, or iterate on existing APIs in monolith applications.
- Build event-driven architectures using highly scalable message brokers like Kafka, RabbitMQ, etc. (see the producer sketch after this list).
- Build microservices based on the performance and efficiency needs.
- Build frameworks to monitor and improve the performance of our systems.
- Build and upgrade systems to securely store sensitive data.
- Design, build and maintain APIs, services, and systems across Hiver's engineering teams.
- Debug production issues across services and multiple levels of the stack.
- Work with engineers across the company to build new features at large-scale.
- Improve engineering standards, tooling, and processes.
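As a small, hedged illustration of the event-driven pattern above, here is a producer sketch using the kafka-python client; the broker address, topic, and event shape are placeholders.

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",  # wait for the full in-sync replica set to acknowledge
)

event = {"type": "email.processed", "email_id": 12345}
# Keying by a stable id keeps all events for one entity in one partition,
# preserving per-entity ordering for downstream consumers.
producer.send("email-events", key=b"12345", value=event)
producer.flush()  # block until the event is actually delivered
```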
What are we looking for?
- Have at least 3 years of experience scaling backend systems.
- Knowledge of Ruby on Rails (RoR) or Python is good to have, with hands-on experience in at least one project.
- Have worked on microservice and event-driven architectures.
- Have worked on technologies like Kafka in building data pipelines with a high volume of data.
- Enjoy and have experience building Lean APIs and amazing backend services.
- Think about systems and services and write high-quality code. We care much more about your general engineering skill than knowledge of a particular language or framework.
- Have worked extensively with SQL Databases and understand NoSQL databases and Caches.
- Have experience deploying applications on the cloud. We are on AWS, but experience with any cloud provider (GCP, Azure) would be great.
- Hold yourself and others to a high bar when working with production systems.
- Take pride in working on projects to successful completion involving a wide variety of technologies and systems.
- Thrive in a collaborative environment involving different stakeholders.
- Enjoy working with a diverse group of people with different expertise.
About the role
We’re looking for a hands-on Junior System/Cloud Engineer to help keep our cloud infrastructure and internal IT humming. You’ll work closely with a senior IT/Cloud Engineer across Azure, AWS, and Google Cloud—provisioning VMs/EC2/Compute Engine instances, basic database setup, and day-to-day maintenance. You’ll also install and maintain team tools (e.g., Elasticsearch, Tableau, MSSQL) and pitch in with end-user support for laptops and any other system issues when needed.
What you’ll do
Cloud provisioning & operations (multi-cloud)
· Create, configure, and maintain virtual machines and related resources in Azure, AWS (EC2), and Google Compute Engine (networks, security groups/firewalls, storage, OS patching and routine maintenance/backups).
· Assist with database setup (managed services or VM-hosted), backups, and access controls under guidance from senior engineers.
· Implement tagging, least-privilege IAM, and routine patching for compliance and cost hygiene (a provisioning sketch follows below).
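As a hedged sketch of provisioning with tags applied at creation time (so nothing escapes cost and ownership reporting), here is what this could look like with AWS's boto3; the AMI ID, region, instance type, and tag values are placeholders, and credentials are assumed to be configured already.

```python
import boto3  # pip install boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "owner", "Value": "it-ops"},
            {"Key": "environment", "Value": "dev"},
        ],
    }],
)
print(response["Instances"][0]["InstanceId"])
```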
Tooling installation & maintenance
· Install, configure, upgrade, and monitor required tools such as Elasticsearch and Tableau Server; manage their service lifecycle (systemd/Windows services), security basics, and health checks.
· Document installation steps, configurations, and runbooks.
Monitoring & incident support
· Set up basic monitoring/alerts using cloud-native tools and logs.
End-user & endpoint support (as needed)
· Provide first-line support for laptops/desktops (Windows/macOS), peripherals, conferencing, VPN, and common apps; escalate when appropriate.
· Assist with device setup, imaging, patching, and inventory; keep tickets and resolutions well documented.
What you’ll bring
· Experience: 1–3 years in DevOps/System Admin/IT Ops with exposure to at least one major cloud (primarily Azure; AWS or GCP skills are good to have); eagerness to work across all three.
Core skills:
· Linux and Windows server admin basics (users, services, networking, storage).
· VM provisioning and troubleshooting in Azure, AWS EC2, or GCE; understanding of security groups/firewalls, SSH/RDP, and snapshots/images.
· Installation/maintenance of team tools (e.g., Elasticsearch, Tableau Server).
· Scripting (Bash and/or PowerShell); Git fundamentals; comfort with ticketing systems.
· Clear documentation and communication habits.
Nice to have:
· Terraform or ARM/CloudFormation basics; container fundamentals (Docker).
· Monitoring/logging familiarity (Elastic/Azure Monitor).
· Basic networking (DNS, HTTP, TLS, VPN, Nginx).
· Azure certifications (AZ-104, AZ-204).
To process your resume for the next process, please fill out the Google form with your updated resume.
About Tarento:
Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.
We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.
Job Summary:
We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment
Hiring for SRE Lead
Exp: 7 - 12 yrs
Work Location: Mumbai (Kurla West)
WFO
Skills :
Proficient in cloud platforms (AWS, Azure, or GCP), containerization (Kubernetes/Docker), and Infrastructure as Code (Terraform, Ansible, or Puppet).
Coding/Scripting: Strong programming or scripting skills in at least one language (e.g., Python, Go, Java) for automation and tooling development.
System Knowledge: Deep understanding of Linux/Unix fundamentals, networking concepts, and distributed systems.
Key Responsibilities:
- Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
- Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS CodeBuild/CodeDeploy, Azure DevOps, etc.
- Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, and Azure AKS.
- Manage and monitor production deployments to ensure high availability and performance
- Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
- Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
- Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
- Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
- Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
- Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
- Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
- Collaborate with development and QA teams to align infrastructure with application needs
- Troubleshoot infrastructure and deployment issues efficiently and proactively
- Ensure cloud cost optimization and usage tracking
Required Skills & Experience:
- 3–4 years of hands-on experience in a DevOps role
- Strong expertise with both AWS and Azure cloud platforms
- Proficient in Git, branching strategies, and pull request workflows
- Deep understanding of CI/CD concepts and experience with pipeline tools
- Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
- Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
- Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
- Experience with Infrastructure as Code tools (Terraform preferred)
- Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
- Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
- Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
- Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
- Working knowledge of monitoring and logging tools
- Strong troubleshooting and problem-solving skills
Good to Have (Bonus Points):
- Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
- Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
- Experience with compliance/security best practices (SOC2, ISO, etc.)
- Familiarity with Service Mesh (Istio, Linkerd) and API gateways
- Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)
About the role:
We are seeking an experienced DevOps Engineer with deep expertise in Jenkins, Docker, Ansible, and Kubernetes to architect and maintain secure, scalable infrastructure and CI/CD pipelines. This role emphasizes security-first DevOps practices, on-premises Kubernetes operations, and integration with data engineering workflows.
🛠 Required Skills & Experience
Technical Expertise
- Jenkins (Expert): Advanced pipeline development, DSL scripting, security integration, troubleshooting
- Docker (Expert): Secure multi-stage builds, vulnerability management, optimisation for Java/Scala/Python
- Ansible (Expert): Complex playbook development, configuration management, automation at scale
- Kubernetes (Expert - Primary Focus): On-premises cluster operations, security hardening, networking, storage management
- SonarQube/Code Quality (Strong): Integration, quality gate enforcement, threshold management
- DevSecOps (Strong): Security scanning, compliance automation, vulnerability remediation, workload governance
- Spark ETL/ETA (Moderate): Understanding of distributed data processing, job configuration, runtime behavior
Core Competencies
- Deep understanding of DevSecOps principles and security-first automation
- Strong troubleshooting and problem-solving abilities across complex distributed systems
- Experience with infrastructure-as-code and GitOps methodologies
- Knowledge of compliance frameworks and security standards
- Ability to mentor teams and drive best practice adoption
🎓Qualifications
- 6–10 years of hands-on DevOps experience
- Proven track record with Jenkins, Docker, Kubernetes, and Ansible in production environments
- Experience managing on-premises Kubernetes clusters (bare-metal preferred)
- Strong background in security hardening and compliance automation
- Familiarity with data engineering platforms and big data technologies
- Excellent communication and collaboration skills
🚀 Key Responsibilities
1. CI/CD Pipeline Architecture & Security
- Design, implement, and maintain enterprise-grade CI/CD pipelines in Jenkins with embedded security controls:
- Build greenfield pipelines and enhance/stabilize existing pipeline infrastructure
- Diagnose and resolve build, test, and deployment failures across multi-service environments
- Integrate security gates, compliance checks, and automated quality controls at every pipeline stage
- Manage and optimize SonarQube and static code analysis tooling:
- Enforce code quality and security scanning standards across all services
- Maintain organizational coding standards, vulnerability thresholds, and remediation workflows
- Automate quality gates as integral components of CI/CD processes
- Engineer optimized Docker images for Java, Scala, and Python applications:
- Implement multi-stage builds, layer optimization, and minimal base images
- Conduct image vulnerability scanning and enforce compliance policies
- Apply containerization best practices for security and performance
- Develop comprehensive Ansible automation:
- Create modular, reusable, and secure playbooks for configuration management
- Automate environment provisioning and application lifecycle operations
- Maintain infrastructure-as-code standards and version control
2. Kubernetes Platform Operations & Security
- Lead complete lifecycle management of on-premises/bare-metal Kubernetes clusters:
- Cluster provisioning, version upgrades, node maintenance, and capacity planning
- Configure and manage networking (CNI), persistent storage solutions, and ingress controllers
- Troubleshoot workload performance, resource constraints, and reliability issues
- Implement and enforce Kubernetes security best practices:
- Design and manage RBAC policies, service account isolation, and least-privilege access models
- Apply Pod Security Standards, network policies, secrets encryption, and certificate lifecycle management
- Conduct cluster hardening, security audits, monitoring, and policy governance
- Provide technical leadership to development teams:
- Guide secure deployment patterns and containerized application best practices
- Establish workload governance frameworks for distributed systems
- Drive adoption of security-first mindsets across engineering teams
3. Data Engineering Support
- Collaborate with data engineering teams on Spark-based workloads:
- Support deployment and operational tuning of Spark ETL/ETA jobs
- Understand cluster integration, job orchestration, and performance optimization
- Debug and troubleshoot Spark workflow issues in production environments
Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines
OVERVIEW
We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.
The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.
CORE TECHNICAL REQUIREMENTS
Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.
Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.
CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.
Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.
PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
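As a rough sketch of the kind of routine health check that paragraph implies (not a prescribed tool), here is a minimal example with psycopg2; the DSN is a placeholder and would normally come from a secrets manager.

```python
import psycopg2  # pip install psycopg2-binary

DSN = "host=localhost dbname=appdb user=monitor password=CHANGEME"  # placeholder


def check_postgres() -> dict:
    """Liveness plus two numbers worth graphing and alerting on."""
    with psycopg2.connect(DSN, connect_timeout=5) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")  # basic liveness
            cur.execute("SELECT count(*) FROM pg_stat_activity")
            connections = cur.fetchone()[0]  # connection pressure
            cur.execute("SELECT pg_database_size(current_database())")
            size_bytes = cur.fetchone()[0]  # growth trend input
    return {"connections": connections, "size_bytes": size_bytes}


if __name__ == "__main__":
    print(check_postgres())
```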
Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.
WHAT YOU WILL OWN
Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.
Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.
VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.
Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.
Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.
Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.
WHAT SUCCESS LOOKS LIKE
Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.
ENGINEERING STANDARDS
Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.
Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.
Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.
Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.
CURRENT ENVIRONMENT
GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.
WHAT WE ARE LOOKING FOR
Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.
Calm Under Pressure: When production breaks, you diagnose methodically.
Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.
Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.
EDUCATION
University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.
Role: Full-Time, Long-Term
Required: Python, SQL
Preferred: Experience with financial or crypto data
OVERVIEW
We are seeking a data engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build robust, production-grade data infrastructure and grow with a small, focused team. You will own the data layer that feeds our machine learning pipeline—from ingestion and validation through transformation, storage, and delivery.
The ideal candidate is meticulous about data quality, thinks deeply about failure modes, and builds systems that run reliably without constant attention. You understand that downstream ML models are only as good as the data they consume.
CORE TECHNICAL REQUIREMENTS
Python (Required): Professional-level proficiency. You write clean, maintainable code for data pipelines—not throwaway scripts. Comfortable with Pandas, NumPy, and their performance characteristics. You know when to use Python versus push computation to the database.
SQL (Required): Advanced SQL skills. Complex queries, query optimization, schema design, execution plans. PostgreSQL experience strongly preferred. You think about indexing, partitioning, and query performance as second nature.
Data Pipeline Design (Required): You build pipelines that handle real-world messiness gracefully. You understand idempotency, exactly-once semantics, backfill strategies, and incremental versus full recomputation tradeoffs. You design for failure—what happens when an upstream source is late, returns malformed data, or goes down entirely. Experience with workflow orchestration required: Airflow, Prefect, Dagster, or similar.
Data Quality (Required): You treat data quality as a first-class concern. You implement validation checks, anomaly detection, and monitoring. You know the difference between data that is missing versus data that should not exist. You build systems that catch problems before they propagate downstream.
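To make that validation-first mindset concrete, here is a minimal sketch of batch checks in pandas; the column names, the expected schema, and the thresholds are illustrative assumptions, not part of the actual stack.

```python
import pandas as pd


def validate_ticks(df: pd.DataFrame, max_staleness: pd.Timedelta) -> list[str]:
    """Return a list of failures; an empty list means the batch passes.
    Assumes `ts` is a timezone-aware UTC timestamp column."""
    required = {"symbol", "ts", "price", "volume"}
    missing = required - set(df.columns)
    if missing:
        return [f"missing columns: {sorted(missing)}"]
    problems = []
    if (df["price"] <= 0).any():
        problems.append("non-positive prices")          # range check
    if (df["volume"] < 0).any():
        problems.append("negative volumes")             # range check
    staleness = pd.Timestamp.now(tz="UTC") - df["ts"].max()
    if staleness > max_staleness:
        problems.append(f"stale data: newest row is {staleness} old")
    if df.duplicated(["symbol", "ts"]).any():
        problems.append("duplicate (symbol, ts) rows")  # one row per key
    return problems


batch = pd.DataFrame({
    "symbol": ["BTC-USD", "BTC-USD"],
    "ts": pd.to_datetime(["2024-01-01T00:00:00Z", "2024-01-01T00:01:00Z"]),
    "price": [42000.0, 42010.5],
    "volume": [1.2, 0.8],
})
print(validate_ticks(batch, pd.Timedelta(minutes=5)))  # demo data is stale
```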
WHAT YOU WILL BUILD
Data Ingestion: Pipelines pulling from diverse sources—crypto exchanges, traditional market feeds, on-chain data, alternative data. Handling rate limits, API quirks, authentication, and source-specific idiosyncrasies.
Data Validation: Checks ensuring completeness, consistency, and correctness. Schema validation, range checks, freshness monitoring, cross-source reconciliation.
Transformation Layer: Converting raw data into clean, analysis-ready formats. Time series alignment, handling different frequencies and timezones, managing gaps.
Storage and Access: Schema design optimized for both write patterns (ingestion) and read patterns (ML training, feature computation). Data lifecycle and retention management.
Monitoring and Alerting: Observability into pipeline health. Knowing when something breaks before it affects downstream systems.
DOMAIN EXPERIENCE
Preference for candidates with experience in financial or crypto data—understanding market data conventions, exchange-specific quirks, and point-in-time correctness. You know why look-ahead bias is dangerous and how to prevent it.
Time series data at scale—hundreds of symbols with years of history, multiple frequencies, derived features. You understand temporal joins, windowed computations, and time-aligned data challenges.
High-dimensional feature stores—we work with hundreds of thousands of derived features. Experience managing, versioning, and serving large feature sets is valuable.
ENGINEERING STANDARDS
Reliability: Pipelines run unattended. Failures are graceful with clear errors, not silent corruption. Recovery is straightforward.
Reproducibility: Same inputs and code version produce identical outputs. You version schemas, track lineage, and can reconstruct historical states.
Documentation: Schemas, data dictionaries, pipeline dependencies, operational runbooks. Others can understand and maintain your systems.
Testing: You write tests for pipelines—validation logic, transformation correctness, edge cases. Untested pipelines are broken pipelines waiting to happen.
TECHNICAL ENVIRONMENT
PostgreSQL, Python, workflow orchestration (flexible on tool), cloud infrastructure (GCP preferred but flexible), Git.
WHAT WE ARE LOOKING FOR
Attention to Detail: You notice when something is slightly off and investigate rather than ignore.
Defensive Thinking: You assume sources will send bad data, APIs will fail, schemas will change. You build accordingly.
Self-Direction: You identify problems, propose solutions, and execute without waiting to be told.
Long-Term Orientation: You build systems you will maintain for years.
Communication: You document clearly, explain data issues to non-engineers, and surface problems early.
EDUCATION
University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Engineering. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of a data pipeline you built and maintained, (3) Links to relevant work if available, (4) Availability and timezone.
About the Role
We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps— helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.
The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.
Key Responsibilities
- AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
- Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
- Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
- Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
- Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
- Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
- Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- 4+ years of experience in AI/MLOps, DevOps, SRE, or Data Engineering, with at least 2 years in AI/ML-focused operations.
- Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
- Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
- Proficiency in Python and/or other scripting languages for automation.
- Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
- Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
- Knowledge of data governance, model drift detection, and compliance in AI systems (see the drift-check sketch after this list).
- Excellent problem-solving, communication, and collaboration skills.
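For illustration of the drift-detection point above, here is a minimal univariate drift check using a two-sample Kolmogorov-Smirnov test from SciPy; the feature distributions and p-value threshold are synthetic and illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp  # pip install scipy


def drift_alert(train_feature: np.ndarray, live_feature: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """A small p-value suggests the live distribution shifted from training."""
    stat, p_value = ks_2samp(train_feature, live_feature)
    if p_value < p_threshold:
        print(f"drift suspected: KS={stat:.3f}, p={p_value:.2e}")
        return True
    return False


rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)  # training-time feature distribution
live = rng.normal(0.4, 1.0, 5000)   # shifted production traffic
assert drift_alert(train, live)
```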
Nice-to-Have
- Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
- Familiarity with data science concepts and frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc.
- Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
- Contributions to open-source MLOps/AI Ops tools or platforms.
- Exposure to Responsible AI practices, model fairness, and explainability frameworks
Why Join Us
- Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
- Work alongside leading data scientists and engineers on cutting-edge AI solutions.
- Competitive compensation, benefits, and career growth opportunities.
Role Overview
We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.
Responsibilities:
- Design, implement, and maintain CI/CD pipelines.
- Containerize applications using Docker and orchestrate deployments
- Manage and optimize cloud infrastructure on AWS and Azure platforms
- Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability.
- Troubleshoot and resolve infrastructure and deployment issues
- Create and maintain documentation for processes and configurations
- Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
- Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.
Requirements:
- 2+ years of hands-on experience with AWS cloud services
- Strong proficiency in CI/CD pipeline configuration
- Expertise in Docker containerisation and container management
- Proficiency in shell scripting (Bash/PowerShell)
- Working knowledge of monitoring and logging tools
- Knowledge of network security and firewall configuration
- Strong communication and collaboration skills, with the ability to work effectively within a team environment
- Understanding of networking concepts and protocols in AWS and/or Azure
What you'll be doing:
As a Software Developer at Trential, you will be the bridge between technical strategy and hands-on execution. You will work with our dedicated engineering team designing, building, and deploying our core platforms and APIs, and build and maintain back-end interfaces using modern frameworks. You will ensure our solutions are scalable, secure, interoperable, and aligned with open standards and our core vision.
- Design & Implement: Lead the design, implementation and management of Trential’s products.
- Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.
- Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.
- Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.
- Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.
What we're looking for:
- 3+ years of experience in backend development.
- Deep proficiency in JavaScript and Node.js, with experience building and operating distributed, fault-tolerant systems.
- Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).
- Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.
Preferred Qualifications (Nice to Have)
- Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)
- Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and experience designing systems that comply with them.
- Experience integrating AI/ML models into verification or data extraction workflows.
We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
Key Responsibilities
• Develop, maintain, and optimize backend applications using Python.
• Build and integrate RESTful APIs and microservices (see the sketch after this list).
• Work with relational and NoSQL databases for data storage, retrieval, and optimization.
• Write clean, efficient, and reusable code while following best practices.
• Collaborate with cross-functional teams (frontend, QA, DevOps) to deliver high quality features.
• Participate in code reviews to maintain high coding standards.
• Troubleshoot, debug, and upgrade existing applications.
• Ensure application security, performance, and scalability.
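As a minimal sketch of the REST-service shape this listing describes, using Flask (one of the frameworks it names), here is an in-memory example; routes, fields, and the port are illustrative.

```python
from flask import Flask, jsonify, request  # pip install flask

app = Flask(__name__)
ITEMS: dict[int, dict] = {}  # in-memory store; a real service would use a DB


@app.post("/items")
def create_item():
    payload = request.get_json(force=True)
    item_id = len(ITEMS) + 1
    ITEMS[item_id] = {"id": item_id, "name": payload.get("name", "")}
    return jsonify(ITEMS[item_id]), 201  # 201 Created


@app.get("/items/<int:item_id>")
def get_item(item_id: int):
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)


if __name__ == "__main__":
    app.run(port=8000, debug=True)
```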
Required Skills & Qualifications:
• 2–4 years of hands-on experience in Python development.
• Strong command over Python frameworks such as Django, Flask, or FastAPI.
• Solid understanding of Object-Oriented Programming (OOP) principles.
• Experience working with databases such as PostgreSQL, MySQL, or MongoDB.
• Proficiency in writing and consuming REST APIs.
• Familiarity with Git and version control workflows.
• Experience with unit testing and frameworks like PyTest or Unittest.
• Knowledge of containerization (Docker) is a plus.
Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python
Criteria:
Looking for candidates with a 15-day to a maximum 30-day notice period.
Candidates from the Hyderabad location only.
Candidates currently at EPAM only.
1. 4+ years of software development experience
2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.
3. Hands-on with NATS for event-driven architecture and streaming.
4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.
5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.
6. Proficient in Python (Flask) for building scalable applications and APIs.
7. Focus: Java, Python, Kubernetes, Cloud-native development
8. SQL database
Description
Position Overview
We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.
Key Responsibilities
- Design, develop, and maintain scalable applications using Java and Spring Boot framework
- Build robust web services and APIs using Python and Flask framework
- Implement event-driven architectures using NATS messaging server
- Deploy, manage, and optimize applications in Kubernetes environments
- Develop microservices following best practices and design patterns
- Collaborate with cross-functional teams to deliver high-quality software solutions
- Write clean, maintainable code with comprehensive documentation
- Participate in code reviews and contribute to technical architecture decisions
- Troubleshoot and optimize application performance in containerized environments
- Implement CI/CD pipelines and follow DevOps best practices
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field
- 4+ years of experience in software development
- Strong proficiency in Java with deep understanding of web technology stack
- Hands-on experience developing applications with Spring Boot framework
- Solid understanding of Python programming language with practical Flask framework experience
- Working knowledge of NATS server for messaging and streaming data
- Experience deploying and managing applications in Kubernetes
- Understanding of microservices architecture and RESTful API design
- Familiarity with containerization technologies (Docker)
- Experience with version control systems (Git)
Skills & Competencies
- Technical Skills: Java (Spring Boot, Spring Cloud, Spring Security)
- Python (Flask, SQLAlchemy, REST APIs)
- NATS messaging patterns (pub/sub, request/reply, queue groups; see the sketch after this list)
- Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
- Web technologies (HTTP, REST, WebSocket, gRPC)
- Container orchestration and management
- Soft Skills: Problem-solving and analytical thinking
- Strong communication and collaboration
- Self-motivated with ability to work independently
- Attention to detail and code quality
- Continuous learning mindset
- Team player with mentoring capabilities
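As a hedged illustration of those NATS patterns, here is a minimal pub/sub sketch with the nats-py client; the server URL, subject, and queue-group name are placeholders.

```python
import asyncio

import nats  # pip install nats-py


async def main():
    nc = await nats.connect("nats://localhost:4222")  # placeholder server

    async def handler(msg):
        print(f"received on {msg.subject}: {msg.data.decode()}")

    # Subscribers sharing a queue group split the work: each message is
    # delivered to exactly one member, giving load-balanced consumers.
    await nc.subscribe("orders.created", queue="workers", cb=handler)
    await nc.publish("orders.created", b'{"order_id": 42}')
    await nc.flush()          # ensure the publish reached the server
    await asyncio.sleep(0.1)  # give the callback a moment to fire
    await nc.drain()          # graceful shutdown: finish pending, then close


asyncio.run(main())
```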
We're Hiring: Golang Developer
Location: Bangalore
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, we'd love to hear from you.
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Mumbai
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, we'd love to hear from you.
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc. (see the sketch after this list)
- Hands-on experience in at least two of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working in AWS or GCP environments
- Preferred (Company) – Must be from product companies
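For concreteness, here is a minimal sketch of the classical-ML baseline work these criteria describe, using scikit-learn on synthetic imbalanced data (all dataset parameters and numbers are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a fraud/propensity-style problem: ~95/5 imbalance.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    # PR-AUC (average precision) suits imbalanced problems better than accuracy.
    print(type(model).__name__, round(average_precision_score(y_te, scores), 3))
```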
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases you have hands on experience?
- Are you ok for Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.

Global digital transformation solutions provider.
Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.
Key Responsibilities:
- Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
- Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
- Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
- Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
- Ensure compliance with security best practices and organizational policies across GCP environments.
- Document processes, configurations, and architectural decisions to maintain operational transparency.
- Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
Mandatory Skills:
- Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
- Jenkins – Expertise in Declarative Pipeline creation and optimization.
- CI/CD – Strong understanding of automated build, test, and deployment workflows.
- Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
- Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
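As one hedged example of scripting for deployment automation, here is a thin Python wrapper over the gcloud CLI for a Cloud Run deploy; the service name, image, and region are placeholders supplied by the pipeline.

```python
import subprocess
import sys


def deploy_cloud_run(service: str, image: str, region: str) -> None:
    """Run `gcloud run deploy` non-interactively and fail loudly on error."""
    cmd = [
        "gcloud", "run", "deploy", service,
        "--image", image,
        "--region", region,
        "--quiet",  # suppress interactive prompts inside CI
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        raise SystemExit(result.returncode)
    print(f"deployed {service} from {image}")


if __name__ == "__main__":
    deploy_cloud_run(sys.argv[1], sys.argv[2], sys.argv[3])
```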
Preferred Skills:
- Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
- Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
- Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
- GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
Skills
GCP, Jenkins, CI/CD, AWS
Notice period - 0 to 15 days only
Location – Pune, Trivandrum, Kochi, Chennai
Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI
Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.
Full-time
Navi Mumbai, Maharashtra, India
5+ Years Experience
₹12,00,000 - 14,00,000
Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)
Location: Vashi, Navi Mumbai (On-site)
Shift: 10:00 AM - 7:00 PM
Experience: 5+ years
Salary: INR 12,00,000 - 14,00,000
Job Summary
Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.
Key Responsibilities
Cloud & Infrastructure
- Manage deployments on AWS/Azure
- Maintain Linux servers & cloud environments
- Ensure uptime, performance, and scalability
CI/CD & Automation
- Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
- Automate tasks using Bash/Python (a small sketch follows this section)
- Implement IaC (Terraform/CloudFormation)
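As a flavor of the Bash/Python automation mentioned above, a small, illustrative health-gate script that a pipeline stage might run before promoting a release; the URL and endpoint are placeholders:

```python
# Hypothetical pipeline gate: fail the stage if the service is unhealthy.
import sys
import requests

URL = "https://service.internal/healthz"  # placeholder endpoint

def main() -> int:
    try:
        resp = requests.get(URL, timeout=5)
    except requests.RequestException as exc:
        print(f"health check failed: {exc}")
        return 1
    if resp.status_code != 200:
        print(f"unhealthy: HTTP {resp.status_code}")
        return 1
    print("healthy")
    return 0

if __name__ == "__main__":
    sys.exit(main())  # non-zero exit blocks the deployment step
```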
Containerization
- Build and run Docker containers
- Work with basic Kubernetes concepts
Cybersecurity & VAPT
- Perform Vulnerability Assessment & Penetration Testing
- Identify, track, and mitigate security vulnerabilities
- Implement hardening and support DevSecOps practices
- Assist with firewall/security policy management
Monitoring & Troubleshooting
- Use ELK, Prometheus, Grafana, CloudWatch
- Resolve cloud, deployment, and infra issues
Cross-Team Collaboration
- Work with Dev, QA, and Security for secure releases
- Maintain documentation and best practices
Required Skills
- AWS/Azure, Linux, Docker
- CI/CD tools: Jenkins, GitHub Actions, GitLab
- Terraform / IaC
- VAPT experience + understanding of OWASP, cloud security
- Bash/Python scripting
- Monitoring tools (ELK, Prometheus, Grafana)
- Strong troubleshooting & communication
Job Title: Full Stack Developer
Location: Bangalore, India
About Us:
Meraki Labs stands at the forefront of India's deep-tech innovation landscape, operating as a dynamic venture studio established by the visionary entrepreneur Mukesh Bansal. Our core mission revolves around the creation and rapid scaling of AI-first and truly "moonshot" startups, nurturing them from their nascent stages into industry leaders. We achieve this through an intensive, hands-on partnership model, working side-by-side with exceptional founders who possess both groundbreaking ideas and the drive to execute them.
Currently, Meraki Labs is channeling its significant expertise and resources into a particularly ambitious endeavor: a groundbreaking EdTech platform. This initiative is poised to revolutionize the field of education by democratizing access to world-class STEM learning for students globally. Our immediate focus is on fundamentally redefining how physics is taught and experienced, moving beyond traditional methodologies to deliver an immersive, intuitive, and highly effective learning journey that transcends geographical and socioeconomic barriers. Through this platform, we aim to inspire a new generation of scientists, engineers, and innovators, ensuring that cutting-edge educational resources are within reach of every aspiring learner, everywhere.
Role Overview:
As a Full Stack Developer, you will be at the foundation of building this intelligent learning ecosystem by connecting the front-end experience, backend architecture, and AI-driven components that bring the platform to life. You’ll own key systems that power the AI Tutor, Simulation Lab, and learning content delivery, ensuring everything runs smoothly, securely, and at scale. This role is ideal for engineers who love building end-to-end products that blend technology, user experience, and real-time intelligence.
Your Core Impact
- You will build the spine of the platform, ensuring seamless communication between AI models, user interfaces, and data systems.
- You’ll translate learning and AI requirements into tangible, performant product features.
- Your work will directly shape how thousands of students experience physics through our AI Tutor and simulation environment.
Key Responsibilities:
Platform Architecture & Backend Development
- Design and implement robust, scalable APIs that power user authentication, course delivery, and AI Tutor integration.
- Build the data pipelines connecting LLM responses, simulation outputs, and learner analytics.
- Create and maintain backend systems that ensure real-time interaction between the AI layer and the front-end interface.
- Ensure security, uptime, and performance across all services.
Front-End Development & User Experience
- Develop responsive, intuitive UIs (React, Next.js or similar) for learning dashboards, course modules, and simulation interfaces.
- Collaborate with product designers to implement layouts for AI chat, video lessons, and real-time lab interactions.
- Ensure smooth cross-device functionality for students accessing the platform on mobile or desktop.
AI Integration & Support
- Work closely with the AI/ML team to integrate the AI Tutor and Simulation Lab outputs within the platform experience.
- Build APIs that pass context, queries, and results between learners, models, and the backend in real time (see the sketch after this section).
- Optimize for low latency and high reliability, ensuring students experience immediate and natural interactions with the AI Tutor.
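To illustrate the real-time relay described above, a minimal FastAPI WebSocket sketch; `answer_query` is a hypothetical stand-in for the actual AI Tutor inference call, not the platform's real API:

```python
import asyncio
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

async def answer_query(question: str) -> str:
    # Placeholder for the real model/LLM call.
    await asyncio.sleep(0.05)
    return f"Tutor answer for: {question}"

@app.websocket("/ws/tutor")
async def tutor_socket(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            question = await ws.receive_text()    # learner -> backend
            reply = await answer_query(question)  # backend -> model
            await ws.send_text(reply)             # model -> learner
    except WebSocketDisconnect:
        pass  # client disconnected; nothing to clean up in this sketch
```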
Data, Analytics & Reporting
- Build dashboards and data views for educators and product teams to derive insights from learner behavior.
- Implement secure data storage and export pipelines for progress analytics.
Collaboration & Engineering Culture
- Work closely with AI Engineers, Prompt Engineers, and Product Leads to align backend logic with learning outcomes.
- Participate in code reviews, architectural discussions, and system design decisions.
- Help define engineering best practices that balance innovation, maintainability, and performance.
Required Qualifications & Skills
- 3–5 years of professional experience as a Full Stack Developer or Software Engineer.
- Strong proficiency in Python or Node.js for backend services.
- Hands-on experience with React / Next.js or equivalent modern front-end frameworks.
- Familiarity with databases (SQL/NoSQL), REST APIs, and microservices.
- Experience with real-time data systems (WebSockets or event-driven architectures).
- Exposure to AI/ML integrations or data-intensive backends.
- Knowledge of AWS/GCP/Azure and containerized deployment (Docker, Kubernetes).
- Strong problem-solving mindset and attention to detail.
Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
4. Security, Compliance & Risk Highlighting
- Conduct frequent risk assessments and identify vulnerabilities in:
  - Cloud architecture
  - Access policies (IAM)
  - Secrets & key management
  - Data flows & network exposure
- Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
  - Microservices
  - Caching layers
  - CDN distribution (CloudFront)
  - Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing, as sketched below.
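A minimal pymongo sketch of the indexing and replica-aware read tuning described in this section; the connection string, database, and collection names are illustrative only:

```python
from pymongo import ASCENDING, DESCENDING, MongoClient, ReadPreference

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
db = client.get_database("app")

# Compound index so "recent activity per user" queries are served from the index.
db.users.create_index([("user_id", ASCENDING), ("last_seen", DESCENDING)])

# Route heavy analytical reads to secondaries for high-read/low-latency workloads.
analytics = client.get_database(
    "app", read_preference=ReadPreference.SECONDARY_PREFERRED
)
print(analytics.users.estimated_document_count())
```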
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why Join company?
• Build and operate infrastructure powering millions of monthly users.
• Opportunity to shape DevOps culture and cloud architecture from the ground up.
• High-impact role in a fast-scaling Indian OTT product.
Job Summary:
We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment
Job Overview:
As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use case development and product creation along with strong expertise in cloud-based architectures.
Key Responsibilities:
AI Tutor & Simulation Intelligence
- Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
- Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes (a minimal retrieval sketch follows this section).
- Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
- Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.
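A hedged sketch of the retrieval step in such a RAG pipeline, assuming sentence-transformers embeddings and an unspecified LLM; the corpus, model name, and `llm_call` hand-off are illustrative, not the platform's actual stack:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Newton's second law: F = ma.",
    "Ohm's law relates voltage, current, and resistance: V = IR.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(corpus, normalize_embeddings=True)

def retrieve(question: str, k: int = 1) -> list[str]:
    q = model.encode([question], normalize_embeddings=True)[0]
    sims = doc_vecs @ q  # cosine similarity, since vectors are normalized
    return [corpus[i] for i in np.argsort(-sims)[:k]]

question = "Why does a larger mass accelerate less under the same force?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
# llm_call(prompt)  # hypothetical hand-off to the generation model
```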
Platform & System Architecture
- Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
- Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
- Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.
Reliability, Security & Analytics
- Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
- Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
- Set up real-time learning analytics to measure comprehension and identify concept gaps.
Leadership & Collaboration
- Mentor and elevate engineers across backend, ML, and front-end teams.
- Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
- Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.
Qualifications & Skills:
- 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
- Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
- Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
- Experience designing microservices and API ecosystems for high-concurrency platforms.
- Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
- Demonstrated ability to work with educational data, content pipelines, and real-time systems.
Bonus Skills (Nice to Have):
- Experience with multi-modal AI models (text, image, audio, video).
- Knowledge of AI safety, ethical AI, and explainability techniques.
- Prior work in AI-powered automation tools or AI-driven SaaS products.
Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems
Criteria:
- Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
- Must be strong in one core backend language: Node.js, Go, Java, or Python.
- Deep understanding of distributed systems, caching, high availability, and microservices architecture.
- Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
- Strong command over system design, data structures, performance tuning, and scalable architecture
- Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
Description
What This Role Is All About
We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.
What You’ll Own
● Architect backend systems that handle India-scale traffic without breaking a sweat.
● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.
● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.
● Partner with Product, Data, and Infra to ship features that are reliable and delightful.
● Set high engineering standards—clean architecture, performance, automation, and testing.
● Lead discussions on system design, performance tuning, and infra choices.
● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.
● Identify gaps proactively and push for improvements instead of waiting for fires.
What Makes You a Great Fit
● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.
● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.
● Deep understanding of distributed systems, caching, high-availability, and microservices.
● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.
● You think data structures and system design are not interviews — they’re daily tools.
● You write code that future-you won’t hate.
● Strong communication and a let’s figure this out attitude.
Bonus Points If You Have
● Built or scaled consumer apps with millions of DAUs.
● Experimented with event-driven architecture, streaming systems, or real-time pipelines.
● Love startups and don’t mind wearing multiple hats.
● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.
Why company Might Be Your Best Move
● Work on products used by real people every single day.
● Ownership from day one—your decisions will shape our core architecture.
● No unnecessary hierarchy; direct access to founders and senior leadership.
● A team that cares about quality, speed, and impact in equal measure.
● Build for Bharat — complex constraints, huge scale, real impact.
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Mumbai
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, share your resume.
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Bangalore
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, share your resume.
ROLE & RESPONSIBILITIES:
We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
KEY RESPONSIBILITIES:
1. Cloud Security (AWS)-
- Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
- Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance (see the policy sketch after this list).
- Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
- Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
- Ensure encryption of data at rest/in transit across all cloud services.
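For flavor, a hedged boto3 sketch of codifying one least-privilege policy; the policy name, bucket, and scoping are illustrative assumptions, not a recommended production policy:

```python
import json
import boto3

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],                 # only the action the job needs
        "Resource": "arn:aws:s3:::ml-artifacts/*",  # only the prefix it needs
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="mlops-read-artifacts",
    PolicyDocument=json.dumps(policy_doc),
)
```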
2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)-
Infrastructure as Code & Automation Security:
- Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
- Enforce misconfiguration scanning and automated remediation.
CI/CD Security:
- Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
- Implement secure build, artifact signing, and deployment workflows.
Containers & Kubernetes:
- Harden Docker images, private registries, runtime policies.
- Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
- Apply CIS Benchmarks for Kubernetes and Linux.
Monitoring & Reliability:
- Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
- Ensure audit logging across cloud/platform layers.
3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-
Pipeline & Workflow Security:
- Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
- Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
ML Platform Security:
- Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
- Control model access, artifact protection, model registry security, and ML metadata integrity.
Data Security:
- Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
- Enforce data versioning security, lineage tracking, PII protection, and access governance.
ML Observability:
- Implement drift detection (data drift/model drift), feature monitoring, audit logging (see the sketch after this section).
- Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
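One minimal way to implement such a data-drift check (a sketch under stated assumptions: a single numeric feature, synthetic data, an arbitrary p-value threshold) is a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)  # training-time feature values
live = rng.normal(0.3, 1.0, 5_000)      # recent production values (shifted)

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:
    # In production this branch would emit a CloudWatch/Grafana alert.
    print(f"drift detected: KS={stat:.3f}, p={p_value:.2e}")
```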
4. Network & Endpoint Security-
- Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
- Conduct vulnerability assessments, penetration test coordination, and network segmentation.
- Secure remote workforce connectivity and internal office networks.
5. Threat Detection, Incident Response & Compliance-
- Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
- Build security alerts, automated threat detection, and incident workflows.
- Lead incident containment, forensics, RCA, and remediation.
- Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
- Maintain security policies, procedures, RRPs (Runbooks), and audits.
IDEAL CANDIDATE:
- 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
- Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
- Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
- Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
- Strong Linux security (CIS hardening, auditing, intrusion detection).
- Proficiency in Python, Bash, and automation/scripting.
- Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
- Understanding of microservices, API security, serverless security.
- Strong understanding of vulnerability management, penetration testing practices, and remediation plans.
EDUCATION:
- Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
- Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Job Summary
We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack, building scalable, high-performance frontends using TypeScript & Next.js and developing robust backend services using Python (FastAPI/Django).
This role is crucial in shaping product experiences and driving innovation at scale.
Mandatory Candidate Background
- Experience working in product-based companies only
- Strong academic background
- Stable work history
- Excellent coding skills and hands-on development experience
- Strong foundation in Data Structures & Algorithms (DSA)
- Strong problem-solving mindset
- Understanding of clean architecture and code quality best practices
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications
- Build responsive, performant, user-friendly UIs using TypeScript & Next.js
- Develop APIs and backend services using Python (FastAPI/Django)
- Collaborate with product, design, and business teams to translate requirements into technical solutions
- Ensure code quality, security, and performance across the stack
- Own features end-to-end: architecture, development, deployment, and monitoring
- Contribute to system design, best practices, and the overall technical roadmap
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience
- Strong expertise in TypeScript/Next.js or Python (FastAPI/Django), with working familiarity across both areas
- Experience building RESTful APIs and microservices
- Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
- Strong debugging, optimization, and problem-solving abilities
- Comfortable working in fast-paced startup environments
Good-to-Have:
- Experience with containerization (Docker/Kubernetes)
- Exposure to message queues or event-driven architectures
- Familiarity with modern DevOps and observability tooling
Job Description – Full Stack Developer (React + Node.js)
Experience: 5–8 Years
Location: Pune
Work Mode: WFO
Employment Type: Full-time
About the Role
We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications using React and Node.js.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
- Work with relational databases such as PostgreSQL or MySQL.
- Deploy and manage applications in cloud environments (preferably GCP or AWS).
- Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
- Utilize containerization tools like Docker for efficient development and deployment workflows.
- Integrate third-party services and APIs, including AI APIs and tools.
- Contribute to improving development processes, documentation, and best practices.
Required Skills
- Strong experience with React.js (frontend).
- Solid hands-on experience with Node.js (backend).
- Good understanding of relational databases: PostgreSQL / MySQL.
- Experience working in production environments and debugging live systems.
- Strong understanding of OOP or Functional Programming, and clean coding standards.
- Knowledge of Docker or other containerization tools.
- Experience with cloud platforms (GCP or AWS).
- Excellent written and verbal communication skills.
Good to Have
- Experience with Golang or Elixir.
- Familiarity with Kubernetes, RabbitMQ, Redis, etc.
- Contributions to open-source projects.
- Previous experience working with AI APIs or machine learning tools.
Job Title: DevOps Engineer
Location: Mumbai
Experience: 2–4 Years
Department: Technology
About InCred
InCred is a new-age financial services group leveraging technology and data science to make lending quick, simple, and hassle-free. Our mission is to empower individuals and businesses by providing easy access to financial services while upholding integrity, innovation, and customer-centricity. We operate across personal loans, education loans, SME financing, and wealth management, driving financial inclusion and socio-economic progress.
Role Overview
As a DevOps Engineer, you will play a key role in automating, scaling, and maintaining our cloud infrastructure and CI/CD pipelines. You will collaborate with development, QA, and operations teams to ensure high availability, security, and performance of our systems that power millions of transactions.
Key Responsibilities
- Cloud Infrastructure Management: Deploy, monitor, and optimize infrastructure on AWS (EC2, EKS, S3, VPC, IAM, RDS, Route53) or similar platforms.
- CI/CD Automation: Build and maintain pipelines using tools like Jenkins, GitLab CI, or similar.
- Containerization & Orchestration: Manage Docker and Kubernetes clusters for scalable deployments.
- Infrastructure as Code: Implement and maintain IaC using Terraform or equivalent tools.
- Monitoring & Logging: Set up and manage tools like Prometheus, Grafana, ELK stack for proactive monitoring.
- Security & Compliance: Ensure systems adhere to security best practices and regulatory requirements.
- Performance Optimization: Troubleshoot and optimize system performance, network configurations, and application deployments.
- Collaboration: Work closely with developers and QA teams to streamline release cycles and improve deployment efficiency.
Required Skills
- 2–4 years of hands-on experience in DevOps roles.
- Strong knowledge of Linux administration and shell scripting (Bash/Python).
- Experience with AWS services and cloud architecture.
- Proficiency in CI/CD tools (Jenkins, GitLab CI) and version control systems (Git).
- Familiarity with Docker, Kubernetes, and container orchestration.
- Knowledge of Terraform or similar IaC tools.
- Understanding of networking, security, and performance tuning.
- Exposure to monitoring tools (Prometheus, Grafana) and log management.
Preferred Qualifications
- Experience in financial services or fintech environments.
- Knowledge of microservices architecture and enterprise-grade SaaS setups.
- Familiarity with compliance standards in BFSI (Banking & Financial Services Industry).
Why Join InCred?
- Culture: High-performance, ownership-driven, and innovation-focused environment.
- Growth: Opportunities to work on cutting-edge tech and scale systems for millions of users.
- Rewards: Competitive compensation, ESOPs, and performance-based incentives.
- Impact: Be part of a mission-driven organization transforming India’s credit landscape.
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.
What We Expect:
• You should already be exceptional at Golang. If you need hand-holding, this isn’t the place for you.
• You thrive on challenges, not on perks or financial rewards.
• You measure success by your own growth, not external validation.
• Taking calculated risks excites you—you’re here to build, break, and learn.
• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
• You understand the stakes—milliseconds can make or break trades, and precision is everything.
What You Will Do:
• Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.
• Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.
• Build event-driven, fault-tolerant systems that can handle massive real-time data streams.
• Own your work—no babysitting, no micromanagement.
• Work alongside equally driven engineers who expect nothing less than brilliance.
• Learn faster than you ever thought possible.
Must-Have Skills:
• Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).
• Deep understanding of concurrency, memory management, and system design.
• Experience with Trading, market data processing, or low-latency systems.
• Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.
• Hands-on with Docker, Kubernetes, and CI/CD pipelines.
• A portfolio of work that speaks louder than a resume.
Nice-to-Have Skills:
• Past experience in fintech.
• Contributions to open-source Golang projects.
• A history of building something impactful from scratch.
• Understanding of FIX protocol, WebSockets, and streaming APIs.
Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- Minimum of 7+ years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software.
- Deep experience in public cloud environments using Kubernetes and other distributed managed services such as Kafka (Google Cloud and/or AWS)
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java. Deep experience with at least one of these languages is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
Review Criteria
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including in recent roles
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you comfortable with a face-to-face (F2F) interview round?
- Has the candidate filled out the Google form?
Role & Responsibilities
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); a minimal DAG sketch follows this list.
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
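A minimal Airflow 2.x DAG sketch of the orchestration pattern above; the task bodies, schedule, and IDs are placeholders rather than the actual pipeline:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pull features from the warehouse")  # placeholder

def train_model():
    print("launch the training job")  # placeholder

def validate_and_register():
    print("validate metrics, register the model")  # placeholder

with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    validate = PythonOperator(task_id="validate_and_register", python_callable=validate_and_register)
    extract >> train >> validate  # linear dependency: extract, then train, then validate
```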
Ideal Candidate
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or related field.
We are seeking a Technical Lead with strong expertise in backend engineering, real-time data streaming, and platform/infrastructure development to lead the architecture and delivery of our on-premise systems.
You will design and build high-throughput streaming pipelines (Apache Pulsar, Apache Flink), backend services (FastAPI), data storage models (MongoDB, ClickHouse), and internal dashboards/tools (Angular).
In this role, you will guide engineers, drive architectural decisions, and ensure reliable systems deployed on Docker + Kubernetes clusters.
Key Responsibilities
1. Technical Leadership & Architecture
- Own the end-to-end architecture for backend, streaming, and data systems.
- Drive system design decisions for ingestion, processing, storage, and DevOps.
- Review code, enforce engineering best practices, and ensure production readiness.
- Collaborate closely with founders and domain experts to translate requirements into technical deliverables.
2. Data Pipeline & Streaming Systems
- Architect and implement real-time, high-throughput data pipelines using Apache Pulsar and Apache Flink (see the sketch after this section).
- Build scalable ingestion, enrichment, and stateful processing workflows.
- Integrate multi-sensor maritime data into reliable, unified streaming systems.
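A minimal pulsar-client sketch of the produce/consume pattern behind such a pipeline; the broker URL, topic, and payload are placeholders (Flink would sit downstream of the consumer side):

```python
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

producer = client.create_producer("persistent://public/default/ais-positions")
producer.send(b'{"mmsi": 123456789, "lat": 18.94, "lon": 72.84}')

consumer = client.subscribe(
    "persistent://public/default/ais-positions",
    subscription_name="enricher",
)
msg = consumer.receive()
print(msg.data())          # enrichment / stateful processing happens here
consumer.acknowledge(msg)  # acknowledge for at-least-once semantics
client.close()
```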
3. Backend Services & Platform Engineering
- Lead development of microservices and internal APIs using FastAPI (or equivalent backend frameworks).
- Build orchestration, ETL, and system-control services.
- Optimize backend systems for latency, throughput, resilience, and long-term maintainability.
4. Data Storage & Modeling
- Design scalable, efficient data models using MongoDB, ClickHouse, and other on-prem databases.
- Implement indexing, partitioning, retention, and lifecycle strategies for large datasets.
- Ensure high-performance APIs and analytics workflows.
5. Infrastructure, DevOps & Containerization
- Deploy and manage distributed systems using Docker and Kubernetes.
- Own observability, monitoring, logging, and alerting for all critical services.
- Implement CI/CD pipelines tailored for on-prem and hybrid cloud environments.
6. Team Management & Mentorship
- Provide technical guidance to engineers across backend, data, and DevOps teams.
- Break down complex tasks, review designs, and ensure high-quality execution.
- Foster a culture of clarity, ownership, collaboration, and engineering excellence.
Required Skills & Experience
- 5–10+ years of strong software engineering experience.
- Expertise with streaming platforms like Apache Pulsar, Apache Flink, or similar technologies.
- Strong backend engineering proficiency — preferably FastAPI, Python, Java, or Scala.
- Hands-on experience with MongoDB and ClickHouse.
- Solid experience deploying, scaling, and managing services on Docker + Kubernetes.
- Strong understanding of distributed systems, high-performance data flows, and system tuning.
- Experience working with Angular for internal dashboards is a plus.
- Excellent system-design, debugging, and performance-optimization skills.
- Prior experience owning critical technical components or leading engineering teams.
Nice to Have
- Experience with sensor data (AIS, Radar, SAR, EO/IR).
- Exposure to maritime, defence, or geospatial technology.
- Experience with bare-metal / on-premise deployments.
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods to deep learning along with employing LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touchpoints are ScyllaDB (BigTable-like), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with ability to analyze gaps and recommend/implement improvements (see the hedged sketch after this list)
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
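A hedged sketch of the SageMaker train-and-deploy flow referenced above, using the v2 Python SDK; the image URI, role ARN, and S3 paths are placeholders:

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
estimator = Estimator(
    image_uri="<training-image-uri>",  # e.g. a built-in algorithm image
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",
    sagemaker_session=session,
)
estimator.fit({"train": "s3://my-bucket/data/train/"})

# Stand up a real-time endpoint for inference.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```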
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
Notice period - 0 to 15 days only
Hybrid work mode - 3 days office, 2 days at home
Job Description
Experience: 5 - 9 years
Location: Bangalore/Pune/Hyderabad
Work Mode: Hybrid (3 Days WFO)
Senior Cloud Infrastructure Engineer for Data Platform
The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.
Key Responsibilities:
Cloud Infrastructure Design & Management
Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.
Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.
Optimize cloud costs and ensure high availability and disaster recovery for critical systems
Databricks Platform Management
Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.
Automate cluster management, job scheduling, and monitoring within Databricks.
Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.
CI/CD Pipeline Development
Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.
Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.
Monitoring & Incident Management
Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.
Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.
Security & Compliance
Enforce security best practices, including identity and access management (IAM), encryption, and network security.
Ensure compliance with organizational and regulatory standards for data protection and cloud operations.
Collaboration & Documentation
Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.
Maintain comprehensive documentation for infrastructure, processes, and configurations.
Required Qualifications
Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
Must Have Experience:
6+ years of experience in DevOps or Cloud Engineering roles.
Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.
Hands-on experience with Databricks for data engineering and analytics.
Technical Skills:
Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.
Strong scripting skills in Python, or Bash.
Experience with containerization and orchestration tools like Docker and Kubernetes.
Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).
Soft Skills:
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote / On-site (Jaipur preferred, as per project needs)
Experience:
5+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI (a minimal sketch follows this list).
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
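A minimal FastAPI sketch of the API style involved; the in-memory dictionary is an illustrative stand-in for the real Prisma-backed persistence layer:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    id: int
    name: str

_items: dict[int, Item] = {}  # placeholder store; Prisma/SQL in the real service

@app.post("/items", status_code=201)
def create_item(item: Item) -> Item:
    _items[item.id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="not found")
    return _items[item_id]

# Run locally with: uvicorn main:app --reload
```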
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma)
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 5 years of professional experience in backend development.
- Available for full-time engagement.
- Please apply only if you are not currently engaged in other projects; we require dedicated availability.
Job Title: Frappe Engineer
Location: Ahmedabad, Gujarat
About Us
We specialize in providing scalable, high-performance IT and software development teams through our Offshore Development Centre (ODC) model. As a part of the DevX Group, we enable global companies to establish dedicated, culturally aligned engineering teams in India combining world-class talent, infrastructure, and operational excellence. Our expertise spans AI led product engineering, data platforms, cloud solutions, and digital transformation initiatives for mid-market and enterprise clients worldwide.
Position Overview
We are looking for an experienced Frappe Engineer to lead the implementation, customization, and integration of ERPNext solutions. The ideal candidate should have strong expertise in ERP module customization, business process automation, and API integrations while ensuring system scalability and performance.
Key Responsibilities
ERPNext Implementation & Customization
- Design, configure, and deploy ERPNext solutions based on business requirements.
- Customize and develop new modules, workflows, and reports within ERPNext.
- Optimize system performance and ensure data integrity.
Integration & Development
- Develop custom scripts using Python and Frappe Framework.
- Integrate ERPNext with third-party applications via REST API (see the sketch after this list).
- Automate workflows, notifications, and business processes.
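As a hedged sketch of such an integration (the method name, doctype fields, and flow are assumptions, not a prescribed implementation), a whitelisted Frappe method can expose custom logic over REST:

    import frappe

    @frappe.whitelist()
    def sync_customer(customer_name: str, email: str):
        """Callable by external systems via /api/method/<app>.<module>.sync_customer."""
        name = frappe.db.get_value("Customer", {"customer_name": customer_name})
        if name:
            doc = frappe.get_doc("Customer", name)   # update the existing record
        else:
            doc = frappe.new_doc("Customer")         # or create a new one
            doc.customer_name = customer_name
        doc.email_id = email  # assumed field; adjust to the site's schema
        doc.save()
        frappe.db.commit()
        return {"name": doc.name}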
Technical Support & Maintenance
- Troubleshoot ERP-related issues and provide ongoing support.
- Upgrade and maintain ERPNext versions with minimal downtime.
- Ensure security, scalability, and compliance in ERP implementations.
Collaboration & Documentation
- Work closely with stakeholders to understand business needs and translate them into ERP solutions.
- Document ERP configurations, custom scripts, and best practices.
Qualifications & Skills
- 3+ years of experience working with ERPNext and the Frappe Framework.
- Strong proficiency in Python, JavaScript, and SQL.
- Hands-on experience with ERPNext customization, report development, and API integrations.
- Knowledge of Linux, Docker, and cloud platforms (AWS, GCP, or Azure) is a plus.
- Experience in business process automation and workflow optimization.
- Familiarity with version control (Git) and Agile development methodologies.
Benefits:
- Competitive salary.
- Opportunity to lead a transformative ERP project for a mid-market client.
- Professional development opportunities.
- Fun and inclusive company culture.
- Five-day workweek.
3-5 years of experience as a full stack developer, with essential experience in the following technologies: FastAPI, JavaScript, React.js/Redux, Node.js, Next.js, MongoDB, Python, Microservices, Docker, and MLOps.
Experience in cloud architecture using Kubernetes (K8s), Google Kubernetes Engine, authentication and authorisation tools, DevOps tools, and scalable and secure cloud hosting is a significant plus.
Ability to manage a hosting environment and scale applications to handle load changes; knowledge of accessibility and security compliance.
Testing of API endpoints.
Ability to code and create functional web applications and optimise them for faster response times and efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, and table partitioning (see the sketch after this list).
Expert knowledge of Python and its frameworks along with their best practices; expert knowledge of relational and NoSQL databases.
Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.
Must be conversant with Agile software development methodology, able to write technical documents, and coordinate with test teams. Proficiency with Git version control.
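A minimal sketch of the query-plan analysis mentioned above, assuming a PostgreSQL database and the psycopg2 driver (the connection string and table are hypothetical):

    import psycopg2

    conn = psycopg2.connect("dbname=app user=app password=secret host=localhost")
    with conn, conn.cursor() as cur:
        # EXPLAIN ANALYZE runs the query and reports the actual plan and timings,
        # the starting point for indexing and partitioning decisions.
        cur.execute(
            "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,)
        )
        for (line,) in cur.fetchall():
            print(line)
    conn.close()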
About the role:
We are looking for a Senior Site Reliability Engineer who understands the nuances of production systems. If you care about building and running reliable software systems in production, you'll like working at One2N.
You will primarily work with our startup and mid-size clients. We work on one-to-N kinds of problems (hence the name One2N): those where the proof of concept is done and the work revolves around scalability, maintainability, and reliability. In this role, you will be responsible for architecting and optimizing our observability and infrastructure to provide actionable insights into performance and reliability.
Responsibilities:
- Conceptualise, think, and build platform engineering solutions with a self-serve model to enable product engineering teams.
- Provide technical guidance and mentorship to junior engineers.
- Participate in code reviews and contribute to best practices for development and operations.
- Design and implement comprehensive monitoring, logging, and alerting solutions to collect, analyze, and visualize data (metrics, logs, traces) from diverse sources.
- Develop custom monitoring metrics, dashboards, and reports to track key performance indicators (KPIs), detect anomalies, and troubleshoot issues proactively (see the sketch after this list).
- Improve Developer Experience (DX) to help engineers improve their productivity.
- Design and implement CI/CD solutions to optimize velocity and shorten the delivery time.
- Help SRE teams set up on-call rosters and coach them for effective on-call management.
- Automate repetitive manual tasks across CI/CD pipelines and operations, and extend infrastructure as code (IaC) practices.
- Stay up-to-date with emerging technologies and industry trends in cloud-native, observability, and platform engineering space.
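For the custom-metrics responsibility above, a minimal sketch using the official prometheus_client library (the metric names and port are illustrative assumptions):

    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("app_requests_total", "Requests handled", ["status"])
    LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

    def handle_request() -> None:
        with LATENCY.time():                       # records duration automatically
            time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
        REQUESTS.labels(status="200").inc()

    if __name__ == "__main__":
        start_http_server(8000)  # exposes /metrics for Prometheus to scrape
        while True:
            handle_request()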
Requirements:
- 6-9 years of professional experience in DevOps practices or software engineering roles, with a focus on Kubernetes on an AWS platform.
- Expertise in observability and telemetry tools and practices, including hands-on experience with some of the following: Datadog, Honeycomb, ELK, Grafana, and Prometheus.
- Working knowledge of programming using Golang, Python, Java, or equivalent.
- Skilled in diagnosing and resolving Linux operating system issues.
- Strong proficiency in scripting and automation to build monitoring and analytics solutions.
- Solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies.
- Experience with infrastructure as code (IaC) tools such as Terraform, Pulumi.
- Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
- Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
Please note that salary will be based on experience.
Job Title: Full Stack Engineer
Location: Bengaluru (Indiranagar) – Work From Office (5 Days)
Job Summary
We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.
Responsibilities
- Design, develop, and maintain scalable full-stack applications.
- Build responsive, high-performance UIs using Typescript & Next.js.
- Develop backend services and APIs using Python (FastAPI/Django).
- Work closely with product, design, and business teams to translate requirements into intuitive solutions.
- Contribute to architecture discussions and drive technical best practices.
- Own features end-to-end — design, development, testing, deployment, and monitoring.
- Ensure robust security, code quality, and performance optimization.
Tech Stack
Frontend: Typescript, Next.js, React, Tailwind CSS
Backend: Python, FastAPI, Django
Databases: PostgreSQL, MongoDB, Redis (see the caching sketch after this stack list)
Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD
Other Tools: Git, GitHub, Elasticsearch, Observability tools
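As one concrete (and purely illustrative) use of this stack, a read-through cache with redis-py in front of a slower database; the key names, TTL, and fetch_user_from_db helper are assumptions:

    import json

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    def fetch_user_from_db(user_id: int) -> dict:
        # Stand-in for a real PostgreSQL/MongoDB query.
        return {"id": user_id, "name": "example"}

    def get_user(user_id: int) -> dict:
        key = f"user:{user_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)        # cache hit
        user = fetch_user_from_db(user_id)   # cache miss: query the database
        r.setex(key, 300, json.dumps(user))  # cache for five minutes
        return user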
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience.
- Strong expertise in either frontend (Typescript/Next.js) or backend (Python/FastAPI/Django) with familiarity in both.
- Experience building RESTful services and microservices.
- Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
- Strong debugging, problem-solving, and optimization skills.
- Ability to thrive in fast-paced, high-ownership startup environments.
Good-to-Have:
- Exposure to Docker, Kubernetes, and observability tools.
- Experience with message queues or event-driven architecture.
Perks & Benefits
- Upskilling support – courses, tools & learning resources.
- Fun team outings, hackathons, demos & engagement initiatives.
- Flexible Work-from-Home: 12 WFH days every 6 months.
- Menstrual WFH: up to 3 days per month.
- Mobility benefits: relocation support & travel allowance.
- Parental support: maternity, paternity & adoption leave.
Job Title : Full Stack Engineer (Python + React.js/Next.js)
Experience : 1 to 6+ Years
Location : Bengaluru (Indiranagar)
Employment : Full-Time
Working Days : 5 Days WFO
Notice Period : Immediate to 30 Days
Role Overview :
We are seeking Full Stack Engineers to build scalable, high-performance fintech products.
You will work on both frontend (Typescript/Next.js) and backend (Python/FastAPI/Django), owning features end-to-end and contributing to architecture, performance, and product innovation.
Main Tech Stack :
Frontend : Typescript, Next.js, React
Backend : Python, FastAPI, Django
Database : PostgreSQL, MongoDB, Redis
Cloud : AWS/GCP, Docker, Kubernetes
Tools : Git, GitHub, CI/CD, Elasticsearch
Key Responsibilities :
- Develop full-stack applications with clean, scalable code.
- Build fast, responsive UIs using Typescript, Next.js, React.
- Develop backend APIs using Python, FastAPI, Django.
- Collaborate with product/design to implement solutions.
- Own development lifecycle: design → build → deploy → monitor.
- Ensure performance, reliability, and security.
Requirements :
Must-Have :
- 1–6+ years of full-stack experience.
- Product-based company background.
- Strong DSA + problem-solving skills.
- Proficiency in either frontend or backend with familiarity in both.
- Hands-on experience with APIs, microservices, Git, CI/CD, cloud.
- Strong communication & ownership mindset.
Good-to-Have :
- Experience with containers, system design, observability tools.
Interview Process :
- Coding Round : DSA + problem solving
- System Design : LLD + HLD, scalability, microservices
- CTO Round : Technical deep dive + cultural fit
Profile: Sr. DevOps Engineer
Location: Gurugram
Experience: 05+ Years
Notice Period: Immediate to 1 week
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell); see the sketch after this list.
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate DevOps best practices, automation, and continuous improvement.
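A small sketch of the scripting referenced above: probe a set of service health endpoints and exit non-zero on failure (the service names and URLs are hypothetical):

    import sys
    import urllib.request

    SERVICES = {
        "api": "http://localhost:8080/healthz",
        "worker": "http://localhost:8081/healthz",
    }

    def is_healthy(url: str) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except OSError:  # connection refused, timeout, HTTP error, etc.
            return False

    if __name__ == "__main__":
        failed = [name for name, url in SERVICES.items() if not is_healthy(url)]
        for name in failed:
            print(f"ALERT: {name} failed its health check", file=sys.stderr)
        sys.exit(1 if failed else 0)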
Experience: 5-8 years of professional experience in software engineering, with a strong background in developing and deploying scalable applications.
Technical Skills:
- Architecture: Demonstrated experience in architecture/system design for scale, preferably as a digital public good.
- Full Stack: Extensive experience with full-stack development, including mobile app development and backend technologies.
- App Development: Hands-on experience building and launching mobile applications, preferably for Android.
- Cloud Infrastructure: Familiarity with cloud platforms and containerization technologies (Docker, Kubernetes).
- (Bonus) ML Ops: Proven experience with ML Ops practices and tools.
Soft Skills:
- Experience in hiring team members.
- A proactive and independent problem-solver, comfortable working in a fast-paced environment.
- Excellent communication and leadership skills, with the ability to mentor junior engineers.
- A strong desire to use technology for social good.
Preferred Qualifications
- Experience working in a startup or smaller team environment.
- Familiarity with the healthcare or public health sector.
- Experience in developing applications for low-resource environments.
- Experience with data management in privacy- and security-sensitive applications.
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
- Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
- Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines (see the sketch after this list)
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
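As a hedged sketch of the RAG responsibility above (embed() is a stand-in for a real embedding model, and the corpus is toy data), retrieval plus prompt assembly looks like:

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Hypothetical stand-in: swap in a real embedding model in practice.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(384)
        return v / np.linalg.norm(v)

    CORPUS = [
        "Docker isolates applications in containers.",
        "Kubernetes schedules containers across nodes.",
        "Spark processes data in distributed batches.",
    ]
    DOC_VECS = np.stack([embed(d) for d in CORPUS])

    def retrieve(query: str, k: int = 2) -> list[str]:
        scores = DOC_VECS @ embed(query)  # cosine similarity on unit vectors
        return [CORPUS[i] for i in np.argsort(scores)[::-1][:k]]

    query = "How are containers scheduled?"
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    # The assembled prompt would then be sent to the LLM.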
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
Job Title: DevOps Engineer
Job Description: We are seeking an experienced DevOps Engineer to support our Laravel, JavaScript (Node.js, React, Next.js), and Python development teams. The role involves building and maintaining scalable CI/CD pipelines, automating deployments, and managing cloud infrastructure to ensure seamless delivery across multiple environments.
Responsibilities:
Design, implement, and maintain CI/CD pipelines for Laravel, Node.js, and Python projects.
Automate application deployment and environment provisioning using AWS and containerization tools (see the sketch after this list).
Manage and optimize AWS infrastructure (EC2, ECS, RDS, S3, CloudWatch, IAM, Lambda).
Implement Infrastructure as Code (IaC) using Terraform or AWS CloudFormation. Manage configuration automation using Ansible.
Build and manage containerized environments using Docker (Kubernetes is a plus).
Monitor infrastructure and application performance using CloudWatch, Prometheus, or Grafana.
Ensure system security, data integrity, and high availability across environments.
Collaborate with development teams to streamline builds, testing, and deployments.
Troubleshoot and resolve infrastructure and deployment-related issues.
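A hedged sketch of the AWS automation above using boto3; the bucket, key, and metric names are assumptions:

    import boto3

    s3 = boto3.client("s3")
    cloudwatch = boto3.client("cloudwatch")

    def publish_artifact(path: str, bucket: str, key: str) -> None:
        # Upload a build artifact, then emit a custom CloudWatch metric.
        s3.upload_file(path, bucket, key)
        cloudwatch.put_metric_data(
            Namespace="Deployments",
            MetricData=[{"MetricName": "ArtifactPublished", "Value": 1.0}],
        )

    if __name__ == "__main__":
        publish_artifact("build/app.tar.gz", "my-deploy-bucket", "releases/app.tar.gz")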
Required Skills:
AWS (EC2, ECS, RDS, S3, IAM, Lambda)
CI/CD Tools: Jenkins, GitLab CI/CD, AWS CodePipeline, CodeBuild, CodeDeploy
Infrastructure as Code: Terraform or AWS CloudFormation
Configuration Management: Ansible
Containers: Docker (Kubernetes preferred)
Scripting: Bash, Python
Version Control: Git, GitHub, GitLab
Web Servers: Apache, Nginx (preferred)
Databases: MySQL, MongoDB (preferred)
Qualifications:
3+ years of experience as a DevOps Engineer in a production environment.
Proven experience supporting Laravel, Node.js, and Python-based applications.
Strong understanding of CI/CD, containerization, and automation practices.
Experience with infrastructure monitoring, logging, and performance optimization.
Familiarity with agile and collaborative development processes.
Job Title: Full-Stack Developer
Experience: 5 to 8+ Years
Start: Immediate (ASAP)
Key Responsibilities
Develop and maintain end-to-end web applications, including frontend interfaces and backend services.
Build responsive and scalable UIs using React.js and Next.js.
Design and implement robust backend APIs using Python, FastAPI, Django, or Node.js.
Work with cloud platforms such as Azure (preferred) or AWS for application deployment and scaling.
Manage DevOps tasks, including containerization with Docker (see the sketch after this list), orchestration with Kubernetes, and infrastructure as code with Terraform.
Set up and maintain CI/CD pipelines using tools like GitHub Actions or Azure DevOps.
Design and optimize database schemas using PostgreSQL, MongoDB, and Redis.
Collaborate with cross-functional teams in an agile environment to deliver high-quality features on time.
Troubleshoot, debug, and improve application performance and security.
Take full ownership of assigned modules/features and contribute to technical planning and architecture discussions.
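For the containerization task above, a short sketch with the Docker SDK for Python (docker-py); the image tag and port mapping are assumptions, and a Dockerfile is expected in the working directory:

    import docker

    client = docker.from_env()

    # Build an image from the current directory's Dockerfile.
    image, _logs = client.images.build(path=".", tag="webapp:dev")

    # Run it detached, mapping container port 8000 to host port 8000.
    container = client.containers.run(
        "webapp:dev", detach=True, ports={"8000/tcp": 8000}
    )
    print(container.short_id, container.status)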
Must-Have Qualifications
Strong hands-on experience with Python and at least one backend framework such as FastAPI, Django, or Flask; Node.js experience is also valued.
Proficiency in frontend development using React.js and Next.js
Experience in building and consuming RESTful APIs
Solid understanding of database design and queries using PostgreSQL, MongoDB, and Redis
Practical experience with cloud platforms, preferably Azure, or AWS
Familiarity with containerization and orchestration tools like Docker and Kubernetes
Working knowledge of Infrastructure as Code (IaC) using Terraform
Experience with CI/CD pipelines using GitHub Actions or Azure DevOps
Ability to work in an agile development environment with cross-functional teams
Strong problem-solving, debugging, and communication skills
Start-up experience preferred – ability to manage ambiguity, rapid iterations, and hands-on leadership.
Technical Stack
Frontend: React.js, Next.js
Backend: Python, FastAPI, Django, Spring Boot, Node.js
DevOps & Cloud: Azure (preferred), AWS, Docker, Kubernetes, Terraform
CI/CD: GitHub Actions, Azure DevOps
Databases: PostgreSQL, MongoDB, Redis