50+ CI/CD Jobs in India
Exp: 7-10 Years
CTC: up to 35 LPA
Skills:
- 6–10 years DevOps / SRE / Cloud Infrastructure experience
- Expert-level Kubernetes (networking, security, scaling, controllers)
- Terraform Infrastructure-as-Code mastery
- Hands-on Kafka production experience
- AWS cloud architecture and networking expertise
- Strong scripting in Python, Go, or Bash
- GitOps and CI/CD tooling experience
Key Responsibilities:
- Design highly available, secure cloud infrastructure supporting distributed microservices at scale
- Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
- Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
- Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
- Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
- Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
- Ensure production-ready disaster recovery and business continuity testing
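To make the p95/p99 monitoring item above concrete, here is a minimal sketch of computing percentile-based SLIs from a window of latency samples; the sample values and the 300 ms threshold are illustrative assumptions, not figures from this posting.

```python
# Minimal sketch: p95/p99 latency and an SLI from one monitoring window.
import numpy as np

def latency_report(samples_ms: list[float], slo_ms: float = 300.0) -> dict:
    """Summarize a window of request latencies against an assumed SLO threshold."""
    arr = np.asarray(samples_ms)
    p95, p99 = np.percentile(arr, [95, 99])
    return {
        "p95_ms": float(p95),
        "p99_ms": float(p99),
        # Fraction of requests meeting the threshold (the SLI).
        "sli": float((arr <= slo_ms).mean()),
    }

print(latency_report([120, 180, 240, 310, 95, 410, 150, 205]))
```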
If interested, kindly share your updated resume at 82008 31681.
Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python
Criteria:
Looking for candidates with a notice period of 15 days (maximum 30 days).
Looking for candidates from the Hyderabad location only.
Looking for candidates from EPAM only.
1. 4+ years of software development experience
2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.
3. Hands-on with NATS for event-driven architecture and streaming.
4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.
5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.
6. Proficient in Python (Flask) for building scalable applications and APIs.
7. Focus: Java, Python, Kubernetes, Cloud-native development
8. SQL database
Description
Position Overview
We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.
Key Responsibilities
- Design, develop, and maintain scalable applications using Java and Spring Boot framework
- Build robust web services and APIs using Python and Flask framework
- Implement event-driven architectures using NATS messaging server (see the pub/sub sketch after this list)
- Deploy, manage, and optimize applications in Kubernetes environments
- Develop microservices following best practices and design patterns
- Collaborate with cross-functional teams to deliver high-quality software solutions
- Write clean, maintainable code with comprehensive documentation
- Participate in code reviews and contribute to technical architecture decisions
- Troubleshoot and optimize application performance in containerized environments
- Implement CI/CD pipelines and follow DevOps best practices
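As a concrete illustration of the NATS item above, a minimal publish/subscribe sketch using the nats-py client; the server URL, subject name, and queue group are illustrative assumptions.

```python
# Minimal NATS pub/sub sketch with a queue group (nats-py client).
import asyncio
import nats

async def main() -> None:
    nc = await nats.connect("nats://localhost:4222")  # assumed local server

    async def handle(msg):
        print(f"received {msg.data.decode()} on {msg.subject}")

    # Queue group: subscribers share the subscription, so each message
    # is delivered to exactly one worker in the group.
    await nc.subscribe("orders.created", queue="order-workers", cb=handle)
    await nc.publish("orders.created", b'{"order_id": 1}')
    await nc.flush()
    await nc.drain()

asyncio.run(main())
```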
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field
- 4+ years of experience in software development
- Strong proficiency in Java with deep understanding of web technology stack
- Hands-on experience developing applications with Spring Boot framework
- Solid understanding of Python programming language with practical Flask framework experience
- Working knowledge of NATS server for messaging and streaming data
- Experience deploying and managing applications in Kubernetes
- Understanding of microservices architecture and RESTful API design
- Familiarity with containerization technologies (Docker)
- Experience with version control systems (Git)
Skills & Competencies
Skills:
- Java (Spring Boot, Spring Cloud, Spring Security)
- Python (Flask, SQLAlchemy, REST APIs)
- NATS messaging patterns (pub/sub, request/reply, queue groups)
- Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
- Web technologies (HTTP, REST, WebSocket, gRPC)
- Container orchestration and management
Soft Skills:
- Problem-solving and analytical thinking
- Strong communication and collaboration
- Self-motivated with ability to work independently
- Attention to detail and code quality
- Continuous learning mindset
- Team player with mentoring capabilities
Senior DevOps Engineer (8–10 years)
Location: Mumbai
Role Summary
As a Senior DevOps Engineer, you will own end-to-end platform reliability and delivery automation for mission-critical lending systems. You’ll architect cloud infrastructure, standardize CI/CD, enforce DevSecOps controls, and drive observability at scale—ensuring high availability, performance, and compliance consistent with BFSI standards.
Key Responsibilities
Platform & Cloud Infrastructure
- Design, implement, and scale multi-account, multi-VPC cloud architectures on AWS and/or Azure (compute, networking, storage, IAM, RDS, EKS/AKS, Load Balancers, CDN).
- Champion Infrastructure as Code (IaC) using Terraform (and optionally Pulumi/Crossplane) with GitOps workflows for repeatable, auditable deployments.
- Lead capacity planning, cost optimization, and performance tuning across environments.
CI/CD & Release Engineering
- Build and standardize CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps, ArgoCD) for microservices, data services, and frontends; enable blue‑green/canary releases and feature flags.
- Drive artifact management, environment promotion, and release governance with compliance-friendly controls.
Containers, Kubernetes & Runtime
- Operate production-grade Kubernetes (EKS/AKS), including cluster lifecycle, autoscaling, ingress, service mesh, and workload security; manage Docker/containerd images and registries.
Reliability, Observability & Incident Management
- Implement end-to-end monitoring, logging, and tracing (Prometheus, Grafana, ELK/EFK, CloudWatch/Log Analytics, Datadog/New Relic) with SLO/SLI error budgets.
- Establish on-call rotations, run postmortems, and continuously improve MTTR and change failure rate.
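A quick worked example of the SLO/error-budget idea above; the 99.9% target, 30-day window, and observed downtime are illustrative numbers only.

```python
# Hedged sketch: error-budget arithmetic for an assumed 99.9% availability SLO.
SLO = 0.999
WINDOW_MIN = 30 * 24 * 60             # 43,200 minutes in a 30-day window

budget_min = (1 - SLO) * WINDOW_MIN   # 43.2 minutes of allowed downtime
bad_minutes = 12.0                    # observed downtime so far (example value)

remaining = budget_min - bad_minutes
burn = bad_minutes / budget_min       # fraction of the budget consumed

print(f"budget={budget_min:.1f}m remaining={remaining:.1f}m burned={burn:.0%}")
```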
Security & Compliance (DevSecOps)
- Enforce cloud and container hardening, secrets management (AWS Secrets Manager / HashiCorp Vault), vulnerability scanning (Snyk/SonarQube), and policy-as-code (OPA/Conftest).
- Partner with infosec/risk to meet BFSI regulatory expectations for DR/BCP, audits, and data protection.
Data, Networking & Edge
- Optimize networking (DNS, TCP/IP, routing, OSI layers) and edge delivery (CloudFront/Fastly), including WAF rules and caching strategies.
- Support persistence layers (MySQL, Elasticsearch, DynamoDB) for performance and reliability.
Ways of Working & Leadership
- Lead cross-functional squads (Product, Engineering, Data, Risk) and mentor junior DevOps/SREs.
- Document runbooks, architecture diagrams, and operating procedures; drive automation-first culture.
Must‑Have Qualifications
- 8–10 years of total experience with 5+ years hands-on in DevOps/SRE roles.
- Strong expertise in AWS and/or Azure, Linux administration, Kubernetes, Docker, and Terraform.
- Proven track record building CI/CD with Jenkins/GitHub Actions/Azure DevOps/ArgoCD.
- Solid grasp of networking fundamentals (DNS, TLS, TCP/IP, routing, load balancing).
- Experience implementing observability stacks and responding to production incidents.
- Scripting in Bash/Python; ability to automate ops workflows and platform tasks.
Good‑to‑Have / Preferred
- Exposure to BFSI/fintech systems and compliance standards; DR/BCP planning.
- Secrets management (Vault), policy-as-code (OPA), and security scanning (Snyk/SonarQube).
- Experience with GitOps patterns, service tiering, and SLO/SLI design.
- Knowledge of CDNs (CloudFront/Fastly) and edge caching/WAF rule authoring.
Education
- Bachelor’s/Master’s in Computer Science, Information Technology, or related field (or equivalent experience).
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc. (a short sketch follows this list)
- Hands-on experience in a minimum of 2 of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
- Strong exposure to NLP, including text generation and text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
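To ground the classical-ML expectation above, a minimal scikit-learn gradient-boosting sketch on synthetic data; the dataset and hyperparameters are illustrative choices, not requirements from the posting.

```python
# Minimal gradient-boosting baseline on a synthetic binary-classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
model.fit(X_train, y_train)

# ROC-AUC on held-out data; scores come from the positive-class probability.
print("ROC-AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```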
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you open to the Mumbai location (if you are from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
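One common way to implement the model/data-drift monitoring named above is the Population Stability Index (PSI); this is a hedged sketch, with the bin count and the 0.2 alert threshold being conventional rules of thumb rather than anything specified here.

```python
# Hedged sketch: data-drift check via the Population Stability Index.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training (expected) and a live (actual) distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the bin fractions to avoid log(0).
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline, live = rng.normal(0, 1, 10_000), rng.normal(0.3, 1, 10_000)
score = psi(baseline, live)
print(f"PSI={score:.3f}", "drift!" if score > 0.2 else "stable")  # 0.2: common rule of thumb
```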
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.

Global digital transformation solutions provider.
Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.
Key Responsibilities:
- Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
- Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
- Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
- Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
- Ensure compliance with security best practices and organizational policies across GCP environments.
- Document processes, configurations, and architectural decisions to maintain operational transparency.
- Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
Mandatory Skills:
- Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
- Jenkins – Expertise in Declarative Pipeline creation and optimization.
- CI/CD – Strong understanding of automated build, test, and deployment workflows.
- Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
- Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
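Tying the Jenkins and scripting items above together, a hedged Python sketch that kicks off a parameterized Jenkins build through Jenkins' standard remote-build endpoint; the host, job name, credentials, and the ENVIRONMENT parameter are placeholders, not values from this posting.

```python
# Hedged sketch: trigger a parameterized Jenkins job via its REST API.
import requests

JENKINS = "https://jenkins.example.com"                      # placeholder host
JOB, USER, TOKEN = "deploy-service", "ci-bot", "api-token"   # placeholders

resp = requests.post(
    f"{JENKINS}/job/{JOB}/buildWithParameters",
    params={"ENVIRONMENT": "staging"},   # assumed pipeline parameter
    auth=(USER, TOKEN),                  # API-token basic auth
    timeout=30,
)
resp.raise_for_status()
# Jenkins answers 201 with a Location header pointing at the queue item.
print("queued:", resp.headers.get("Location"))
```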
Preferred Skills:
- Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
- Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
- Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
- GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
Skills
GCP, Jenkins, CI/CD, AWS
******
Notice period - 0 to 15 days only
Location – Pune, Trivandrum, Kochi, Chennai
Key Responsibilities & Skills
Strong hands-on experience in React.js, Node.js, Express.js, MongoDB
Ability to lead and mentor a development team
Project ownership – sprint planning, code reviews, task allocation
Excellent communication skills for client interactions
Strong decision-making & problem-solving abilities
Nice-to-Have (Bonus Skills)
Experience in system architecture design
Deployment knowledge – AWS / DigitalOcean / Cloud
Understanding of CI/CD pipelines & best coding practices
Why Join InfoSparkles?
Lead from Day One
Work on modern & challenging tech projects
Excellent career growth in a leadership position
Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI
Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.
Full-time
Navi Mumbai, Maharashtra, India
5+ Years Experience
₹12,00,000 - 14,00,000
Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)
Location: Vashi, Navi Mumbai (On-site)
Shift: 10:00 AM - 7:00 PM
Experience: 5+ years
Salary: INR 12,00,000 - 14,00,000
Job Summary
Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.
Key Responsibilities
Cloud & Infrastructure
- Manage deployments on AWS/Azure
- Maintain Linux servers & cloud environments
- Ensure uptime, performance, and scalability
CI/CD & Automation
- Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
- Automate tasks using Bash/Python
- Implement IaC (Terraform/CloudFormation)
Containerization
- Build and run Docker containers
- Work with basic Kubernetes concepts
Cybersecurity & VAPT
- Perform Vulnerability Assessment & Penetration Testing
- Identify, track, and mitigate security vulnerabilities
- Implement hardening and support DevSecOps practices
- Assist with firewall/security policy management
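In the spirit of the VAPT and hardening duties above, a minimal boto3 sketch that flags AWS security groups open to the world; it assumes configured AWS credentials, and the region is a placeholder.

```python
# Minimal audit: flag security groups allowing inbound traffic from 0.0.0.0/0.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg.get("IpPermissions", []):
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                # Protocol "-1" rules carry no port fields, hence the defaults.
                print(f"OPEN: {sg['GroupId']} ports "
                      f"{perm.get('FromPort', 'all')}-{perm.get('ToPort', 'all')}")
```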
Monitoring & Troubleshooting
- Use ELK, Prometheus, Grafana, CloudWatch
- Resolve cloud, deployment, and infra issues
Cross-Team Collaboration
- Work with Dev, QA, and Security for secure releases
- Maintain documentation and best practices
Required Skills
- AWS/Azure, Linux, Docker
- CI/CD tools: Jenkins, GitHub Actions, GitLab
- Terraform / IaC
- VAPT experience + understanding of OWASP, cloud security
- Bash/Python scripting
- Monitoring tools (ELK, Prometheus, Grafana)
- Strong troubleshooting & communication
Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
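One way to realize the canary rollout mentioned above is DNS-weighted traffic shifting; this hedged sketch uses Route 53 weighted records via boto3, with the zone ID, record name, and targets as placeholders. A production rollout would also gate each weight increase on health checks and metrics.

```python
# Hedged sketch: shift a share of traffic to a canary via Route 53 weights.
import boto3

r53 = boto3.client("route53")

def set_weight(identifier: str, target: str, weight: int) -> None:
    r53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",                  # placeholder zone ID
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com.",          # placeholder record
                "Type": "CNAME",
                "SetIdentifier": identifier,         # distinguishes the two records
                "Weight": weight,                    # relative traffic share
                "TTL": 60,
                "ResourceRecords": [{"Value": target}],
            },
        }]},
    )

set_weight("stable", "stable-lb.example.com", 90)
set_weight("canary", "canary-lb.example.com", 10)    # ~10% of traffic to canary
```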
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
4. Security, Compliance & Risk Highlighting
- Conduct frequent risk assessments and identify vulnerabilities in:
  - Cloud architecture
  - Access policies (IAM)
  - Secrets & key management
  - Data flows & network exposure
- Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
  - Microservices
  - Caching layers
  - CDN distribution (CloudFront)
  - Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing.
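A minimal PyMongo sketch of the indexing and query-tuning work described above; the connection string, database, and field names are assumptions.

```python
# Minimal sketch: compound index plus explain() to verify the query plan.
from pymongo import ASCENDING, DESCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed deployment
history = client["media"]["watch_history"]          # placeholder collection

# Compound index matching the query's filter + sort shape.
history.create_index([("user_id", ASCENDING), ("watched_at", DESCENDING)])

# explain() shows whether the winning plan is an IXSCAN or a full COLLSCAN.
plan = history.find({"user_id": 42}).sort("watched_at", DESCENDING).explain()
print(plan["queryPlanner"]["winningPlan"])
```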
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why Join company?
- Build and operate infrastructure powering millions of monthly users.
- Opportunity to shape DevOps culture and cloud architecture from the ground up.
- High-impact role in a fast-scaling Indian OTT product.
🚀 We’re Hiring: Python Developer – Pune 🚀
Are you a skilled Python Developer looking to work on high-performance, scalable backend systems?
If you’re passionate about building robust applications and working with modern technologies — this opportunity is for you! 💼✨
📍 Location: Pune
🏢 Role: Python Backend Developer
🕒 Type: Full-Time | Permanent
🔍 What We’re Looking For:
We need a strong backend professional with experience in:
🐍 Python (Advanced)
⚡ FastAPI
🛢️ MongoDB & Postgres
📦 Microservices Architecture
📨 Message Brokers (RabbitMQ / Kafka)
🌩️ Google Cloud Platform (GCP)
🧪 Unit Testing & TDD
🔐 Backend Security Standards
🔧 Git & Project Collaboration
🛠️ Key Responsibilities:
✔ Build and optimize Python backend services using FastAPI (see the sketch after this list)
✔ Design scalable microservices
✔ Manage and tune MongoDB & Postgres
✔ Implement message brokers for async workflows
✔ Drive code reviews and uphold coding standards
✔ Mentor team members
✔ Manage cloud deployments on GCP
✔ Ensure top-notch performance, scalability & security
✔ Write robust unit tests and follow TDD
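A tiny sketch combining the FastAPI and TDD items above: one endpoint plus a test that pins its contract; the route and payload are illustrative.

```python
# Minimal FastAPI endpoint with a test-first style check via TestClient.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

client = TestClient(app)

def test_health() -> None:
    # The test pins the endpoint's contract before features grow around it.
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json() == {"status": "ok"}

test_health()
```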
🎓 Qualifications:
➡ 2–4 years of backend development experience
➡ Strong hands-on Python + FastAPI
➡ Experience with microservices, DB management & cloud tech
➡ Knowledge of Agile/Scrum
➡ Bonus: Docker, Kubernetes, CI/CD

Global digital transformation solutions provider.
Role Proficiency:
Leverage expertise in a technology area (e.g. Java, Microsoft technologies, or Mainframe/legacy) to design system architecture.
Knowledge Examples:
- Domain/Industry Knowledge: Basic knowledge of standard business processes within the relevant industry vertical and customer business domain
- Technology Knowledge: Demonstrates working knowledge of more than one technology area related to own area of work (e.g. Java/JEE 5+, Microsoft technologies, or Mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability. Considers low-level details such as data structures, algorithms, APIs, and libraries; best practices for one technology stack; configuration parameters for successful deployment; and configuration parameters for high performance within one technology stack
- Technology Trends: Demonstrates working knowledge of technology trends related to one technology stack and awareness of technology trends related to at least two technologies
- Architecture Concepts and Principles: Demonstrates working knowledge of standard architectural principles, models, patterns (e.g. SOA, N-Tier, EDA) and perspectives (e.g. TOGAF, Zachman); integration architecture, including input and output components, existing integration methodologies and topologies, and source and external systems; non-functional requirements; data architecture; deployment architecture; and architecture governance
- Design Patterns, Tools and Principles: Applies specialized knowledge of design patterns, design principles, practices, and design tools. Knowledge of documenting designs using tools like EA
- Software Development Process, Tools & Techniques: Demonstrates thorough knowledge of the end-to-end SDLC process (Agile and traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.)
- Project Management Tools and Techniques: Demonstrates working knowledge of project management processes (such as project scoping, requirements management, change management, risk management, quality assurance, disaster management, etc.) and tools (MS Excel, MPP, client-specific time sheets, capacity planning tools, etc.)
- Project Management: Demonstrates working knowledge of the project governance framework and RACI matrix, and basic knowledge of project metrics such as utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics
- Estimation and Resource Planning: Working knowledge of estimation and resource planning techniques (e.g. the TCP estimation model) and company-specific estimation templates
- Working knowledge of industry knowledge management tools (such as portals and wikis) and company and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT)
- Technical Standards, Documentation & Templates: Demonstrates working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications)
- Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for functional and non-functional requirements; requirement analysis tools (such as functional flow diagrams, activity diagrams, blueprints, and storyboards), techniques (business analysis, process mapping, etc.), and requirements management tools (e.g. MS Excel); and basic knowledge of functional requirements gathering. Specifically, identifies architectural concerns and documents them as part of IT requirements, including NFRs
- Solution Structuring: Demonstrates working knowledge of service offerings and products
Additional Comments:
Looking for a Senior Java Architect with 12+ years of experience. Key responsibilities include:
• Excellent technical background and end-to-end architecture experience to design and implement scalable, maintainable, and high-performing systems integrating front-end technologies with back-end services.
• Collaborate with front-end teams to architect React-based user interfaces that are robust, responsive, and aligned with the overall technical architecture.
• Expertise in cloud-based applications on Azure, leveraging key Azure services.
• Lead the adoption of DevOps practices, including CI/CD pipelines, automation, monitoring and logging to ensure reliable and efficient deployment cycles.
• Provide technical leadership to development teams, guiding them in building solutions that adhere to best practices, industry standards and customer requirements.
• Conduct code reviews to maintain high quality code and collaborate with team to ensure code is optimized for performance, scalability and security.
• Collaborate with stakeholders to define requirements and deliver technical solutions aligned with business goals.
• Excellent communication skills
• Mentor team members providing guidance on technical challenges and helping them grow their skill set.
• Good to have experience in GCP and retail domain.
Skills: Devops, Azure, Java
Must-Haves
Java (12+ years), React, Azure, DevOps, Cloud Architecture
Strong Java architecture and design experience.
Expertise in Azure cloud services.
Hands-on experience with React and front-end integration.
Proven track record in DevOps practices (CI/CD, automation).
Notice period - 0 to 15 days only
Location: Hyderabad, Chennai, Kochi, Bangalore, Trivandrum
Excellent communication and leadership skills.

Global digital transformation solutions provider.
Role Proficiency:
This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage retrieval and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.
Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.
Additional Comments:
# of Resources: 22
Role(s): Technical Role
Location(s): India
Planned Start Date: 1/1/2026
Planned End Date: 6/30/2026
Project Overview:
Role Scope / Deliverables: We are seeking highly skilled Data Engineers with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.
The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.
Design, build, and maintain scalable data pipelines using Databricks and PySpark.
Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.
Ensure data quality, performance, and reliability across data workflows.
Participate in code reviews, data architecture discussions, and performance optimization initiatives.
Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
Key Skills:
Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions); see the sketch after this list.
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).
Excellent problem-solving, communication, and collaboration skills.
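To make the window-function skill above concrete, a small PySpark sketch that keeps the latest record per key; the schema and column names are assumptions made for illustration.

```python
# Illustrative PySpark window function: latest record per customer.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedupe-demo").getOrCreate()
df = spark.createDataFrame(
    [(1, "2025-01-01", 10.0), (1, "2025-01-02", 12.5), (2, "2025-01-01", 7.0)],
    ["customer_id", "updated_at", "amount"],
)

# Rank rows within each customer by recency, then keep only the newest.
w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
latest = (df.withColumn("rn", F.row_number().over(w))
            .filter(F.col("rn") == 1)
            .drop("rn"))
latest.show()
```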
Skills: Databricks, Pyspark & Python, Sql, Aws Services
Must-Haves
Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)
Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).
******
Notice period - Immediate to 15 days
Location: Bangalore
Key Responsibilities
- Design, implement, and maintain CI/CD pipelines for backend, frontend, and mobile applications.
- Manage cloud infrastructure using AWS (EC2, Lambda, S3, VPC, RDS, CloudWatch, ECS/EKS).
- Configure and maintain Docker containers and/or Kubernetes clusters.
- Implement and maintain Infrastructure as Code (IaC) using Terraform / CloudFormation.
- Automate build, deployment, and monitoring processes.
- Manage code repositories using Git/GitHub/GitLab, enforce branching strategies.
- Implement monitoring and alerting using tools like Prometheus, Grafana, CloudWatch, ELK, Splunk (see the sketch after this list).
- Ensure system scalability, reliability, and security.
- Troubleshoot production issues and perform root-cause analysis.
- Collaborate with engineering teams to improve deployment and development workflows.
- Optimize infrastructure costs and improve performance.
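As an illustration of the monitoring/alerting item above, a hedged sketch that exports application metrics with the official Prometheus Python client; the port and metric names are illustrative, and Prometheus/Grafana would scrape and chart them.

```python
# Hedged sketch: expose app metrics for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

@LATENCY.time()                 # records each call's duration
def handle_request() -> None:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # simulated work

if __name__ == "__main__":
    start_http_server(8000)     # serves /metrics on port 8000
    while True:                 # runs until interrupted
        handle_request()
```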
Required Skills & Qualifications
- 3+ years of experience in DevOps, SRE, or Cloud Engineering.
- Strong hands-on knowledge of AWS cloud services.
- Experience with Docker, containers, and orchestrators (ECS, EKS, Kubernetes).
- Strong understanding of CI/CD tools: GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline.
- Experience with Linux administration and shell scripting.
- Strong understanding of Networking, VPC, DNS, Load Balancers, Security Groups.
- Experience with monitoring/logging tools: CloudWatch, ELK, Prometheus, Grafana.
- Experience with Terraform or CloudFormation (IaC).
- Good understanding of Node.js or similar application deployments.
- Knowledge of NGINX/Apache and load balancing concepts.
- Strong problem-solving and communication skills.
Preferred/Good to Have
- Experience with Kubernetes (EKS).
- Experience with Serverless architectures (Lambda).
- Experience with Redis, MongoDB, RDS.
- Certification in AWS Solutions Architect / DevOps Engineer.
- Experience with security best practices, IAM policies, and DevSecOps.
- Understanding of cost optimization and cloud cost management.
About Us
At Arka Energy, we're redefining how renewable energy is experienced and adopted in homes. Our focus is on developing next-generation residential solar energy solutions through a unique combination of custom product design, intuitive simulation software, and high-impact technology. With engineering teams in Bangalore and the Bay Area, we’re committed to building innovative products that transform rooftops into smart energy ecosystems.
Our flagship product is a 3D simulation platform that models rooftops and commercial sites, allowing users to design solar layouts and generate accurate energy estimates — streamlining the residential solar design process like never before.
What We're Looking For
We're seeking a Senior DevOps Engineer who will be responsible for managing and automating cloud infrastructure and services, ensuring seamless integration and deployment of applications, and maintaining high availability and reliability. You will work closely with development and operations teams to streamline processes and enhance productivity.
Key Responsibilities
- Design and implement CI/CD pipelines using Azure DevOps.
- Automate infrastructure provisioning and configuration in the Azure cloud environment.
- Monitor and manage system health, performance, and security.
- Collaborate with development teams to ensure smooth and secure deployment of applications.
- Troubleshoot and resolve issues related to deployment and operations.
- Implement best practices for configuration management and infrastructure as code.
- Maintain documentation of processes and solutions.
Requirements
- Total relevant experience of 4 to 5 years.
- Proven experience as a DevOps Engineer, specifically with Azure.
- Experience with CI/CD tools and practices.
- Strong understanding of infrastructure as code (IaC) using tools like Terraform or ARM templates.
- Knowledge of scripting languages such as PowerShell or Python.
- Familiarity with containerization technologies like Docker and Kubernetes.
- Good to have: knowledge of AWS, DigitalOcean, and GCP
- Excellent troubleshooting and problem-solving skills
- High ownership, self-starter attitude, and ability to work independently
- Strong aptitude and reasoning ability with a growth mindset
Nice to Have
- Experience working in a SaaS or product-driven startup
- Familiarity with the solar industry (preferred but not required)
Senior Python Django Developer
Experience: Back-end development: 6 years (Required)
Location: Bangalore/ Bhopal
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the start-up environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
Requirements:
- 6 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
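A hedged sketch of the TDD style listed above applied to a payments-flavored helper; split_settlement is a hypothetical function written for illustration, not an API from any library named in this posting.

```python
# Hedged TDD sketch: paise-exact settlement split with Decimal arithmetic.
from decimal import Decimal, ROUND_HALF_UP

def split_settlement(amount: Decimal, fee_pct: Decimal) -> tuple[Decimal, Decimal]:
    """Hypothetical helper: split a gross amount into (merchant_net, platform_fee)."""
    fee = (amount * fee_pct / 100).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    return amount - fee, fee

def test_split_settlement_is_exact() -> None:
    net, fee = split_settlement(Decimal("999.99"), Decimal("2.5"))
    assert fee == Decimal("25.00")
    assert net + fee == Decimal("999.99")   # no paisa lost to rounding

test_split_settlement_is_exact()
```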
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
Job Types: Full-time, Permanent
Schedule:
- Day shift
Supplemental Pay:
- Performance bonus
- Yearly bonus
Ability to commute/relocate:
- JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)
Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems
Criteria:
- Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
- Must be strong in one core backend language: Node.js, Go, Java, or Python.
- Deep understanding of distributed systems, caching, high availability, and microservices architecture.
- Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
- Strong command over system design, data structures, performance tuning, and scalable architecture
- Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
Description
What This Role Is All About
We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.
What You’ll Own
● Architect backend systems that handle India-scale traffic without breaking a sweat.
● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.
● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.
● Partner with Product, Data, and Infra to ship features that are reliable and delightful.
● Set high engineering standards—clean architecture, performance, automation, and testing.
● Lead discussions on system design, performance tuning, and infra choices.
● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.
● Identify gaps proactively and push for improvements instead of waiting for fires.
What Makes You a Great Fit
● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.
● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.
● Deep understanding of distributed systems, caching, high-availability, and microservices.
● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.
● You think data structures and system design are not interviews — they’re daily tools.
● You write code that future-you won’t hate.
● Strong communication and a let’s figure this out attitude.
Bonus Points If You Have
● Built or scaled consumer apps with millions of DAUs.
● Experimented with event-driven architecture, streaming systems, or real-time pipelines.
● Love startups and don’t mind wearing multiple hats.
● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.
Why company Might Be Your Best Move
● Work on products used by real people every single day.
● Ownership from day one—your decisions will shape our core architecture.
● No unnecessary hierarchy; direct access to founders and senior leadership.
● A team that cares about quality, speed, and impact in equal measure.
● Build for Bharat — complex constraints, huge scale, real impact.
About the Role
We’re looking for a Founding Full-Stack Engineer who can own backend, web, and mobile development end-to-end. You’ll work directly with the founder to build the first version of the product, make key architecture decisions, and help shape the engineering culture from day one.
What You’ll Do
- Build and own backend systems: APIs, auth, data models, messaging, matching.
- Lead development across web (React) and mobile (React Native + Swift/Kotlin).
- Convert ideas from Figma/Notion → shipped features quickly.
- Iterate fast based on real user feedback from the queer Desi community.
- Set up CI/CD, testing, monitoring, and engineering best practices.
- Contribute to early hiring and team building.
What We’re Looking For
- 5–8+ years experience across backend + frontend.
- Strong in React, React Native, and either Swift (iOS) or Kotlin (Android).
- Proven experience shipping consumer apps (App Store / Play Store).
- Hands-on with backend architecture, CI/CD, and early-stage infra setup.
- Excellent product sense and ability to build fast in 0→1 environments.
- Comfort with ambiguity and a founder-mindset level of ownership.
You’ll Be a Great Fit If
- You care about building for LGBTQ+ and South Asian communities.
- You’re motivated by impact, not just code.
- You thrive in fast, ambiguous, early-stage environments.
- You want to help define the technology and culture of a mission-driven startup.
Key Responsibilities
- Design, develop, and maintain scalable applications using the OutSystems platform.
- Build modern Reactive Web and Mobile applications aligned with business and technical requirements.
- Implement integrations with REST APIs, databases, and external systems.
- Collaborate with architects, tech leads, and cross-functional teams for smooth deployments.
- Create reusable, maintainable components following OutSystems best practices.
- Participate in code reviews, unit testing, debugging, and performance optimization.
- Ensure adherence to scalability, security, and deployment automation guidelines.
- Stay updated on new OutSystems capabilities and contribute to continuous improvement.
About Phi Commerce
Founded in 2015, Phi Commerce has created PayPhi, a ground-breaking omni-channel payment processing platform which processes digital payments at the doorstep, online, and in-store across a variety of form factors such as cards, net-banking, UPI, Aadhaar, BharatQR, wallets, NEFT, RTGS, and NACH. The company was established with the objective of digitizing white spaces in payments and going beyond routine payment processing.
Phi Commerce's PayPhi Digital Enablement suite has been developed with the mission of empowering very large untapped blue-ocean sectors dominated by offline payment modes such as cash & cheque to accept digital payments.
The core team comprises industry veterans with complementary skill sets and nearly 100 years of global experience with noteworthy players such as Mastercard, Euronet, ICICI Bank, Opus Software, and Electra Card Services.
Awards & Recognitions:
The company's innovative work has been recognized at prestigious forums within a short span of its existence:
- Certification of Recognition as a Startup by the Department of Industrial Policy and Promotion.
- Winner of the "Best Payment Gateway" of the year award at Payments & Cards Awards 2018
- Winner at Payments & Cards Awards 2017 in 3 categories: Best Startup Of The Year, Best Online Payment Solution Of The Year - Consumer, and Best Online Payment Solution Of The Year - Merchant.
- Winner of NPCI IDEATHON on Blockchain in Payments
- Shortlisted by the Govt. of Maharashtra among the top 100 start-ups pan-India across 8 sectors
About the role:
As an SDET, you will work closely with the development, product, and QA teams to ensure the delivery of high-quality, reliable, and scalable software. You will be responsible for creating and maintaining automated test suites, designing testing frameworks, and identifying and resolving software defects. The role will also involve continuous improvement of the test process and promoting best practices in software development and testing.
Key Responsibilities:
- Develop, implement, and maintain automated test scripts for validating software functionality and performance.
- Design and develop testing frameworks and tools to improve the efficiency and effectiveness of automated testing.
- Collaborate with developers, product managers, and QA engineers to identify test requirements and create effective test plans.
- Write and execute unit, integration, regression, and performance tests to ensure high-quality code.
- Troubleshoot and debug issues identified during testing, working with developers to resolve them in a timely manner.
- Conduct code reviews to ensure code quality, maintainability, and testability.
- Work with CI/CD pipelines to integrate automated testing into the development process.
- Continuously evaluate and improve testing strategies, identifying areas for automation and optimization.
- Monitor the quality of releases by tracking test coverage, defect trends, and other quality metrics.
- Ensure that all tests are documented, maintainable, and reusable for future software releases.
- Stay up-to-date with the latest trends, tools, and technologies in the testing and automation space.
Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience as an SDET, software engineer, or quality engineer with a focus on test automation.
- Strong experience in automated testing frameworks and tools (e.g., Selenium, Appium, JUnit, TestNG, Cucumber).
- Proficiency in programming languages, particularly Java.
- Experience in designing and implementing test automation for web applications, APIs, and mobile applications.
- Strong understanding of software testing methodologies and processes (e.g., Agile, Scrum).
- Excellent problem-solving skills and attention to detail.
- Good communication and collaboration skills, with the ability to work effectively in a team.
- Knowledge of performance testing and load testing tools is a plus (e.g., JMeter, LoadRunner)
- Experience with test management tools (e.g., TestRail, Jira).
- Knowledge of databases and ability to write SQL queries to validate test data.
- Experience in API testing and knowledge of RESTful web services.
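To illustrate the API-testing skill above, a minimal Python test against a public placeholder API; in practice the target URL and assertions would come from the service under test.

```python
# Minimal REST API test sketch using requests (public demo endpoint).
import requests

def test_get_todo() -> None:
    resp = requests.get("https://jsonplaceholder.typicode.com/todos/1", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert body["id"] == 1      # resource identity round-trips
    assert "title" in body      # expected field is present

test_get_todo()
```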
The Production Infrastructure Manager is responsible for overseeing and maintaining the infrastructure that powers our payment gateway systems in a high-availability production environment. This role requires deep technical expertise in cloud platforms, networking, and security, along with strong leadership capability to guide a team of infrastructure engineers. You will ensure the system’s reliability, performance, and compliance with regulatory standards while driving continuous improvement.
Key Responsibilities:
Infrastructure Management
- Manage and optimize infrastructure for payment gateway systems to ensure high availability, reliability, and scalability.
- Oversee daily operations of production environments, including AWS cloud services, load balancers, databases, and monitoring systems.
- Implement and maintain infrastructure automation, provisioning, configuration management, and disaster recovery strategies.
- Develop and maintain capacity planning, monitoring, and backup mechanisms to support peak transaction periods.
- Oversee regular patching, updates, and version control to minimize vulnerabilities.
Team Leadership
- Lead and mentor a team of infrastructure engineers and administrators.
- Provide technical direction to ensure efficient and effective implementation of infrastructure solutions.
Cross-Functional Collaboration
- Work closely with development, security, and product teams to ensure infrastructure aligns with business needs and regulatory requirements (PCI-DSS, GDPR).
- Ensure infrastructure practices meet industry standards and security requirements (PCI-DSS, ISO 27001).
Monitoring & Incident Management
- Monitor infrastructure performance using tools like Prometheus, Grafana, Datadog, etc.
- Conduct incident response, root cause analysis, and post-mortems to prevent recurring issues.
- Manage and execute on-call duties, ensuring timely resolution of infrastructure-related issues.
Documentation
- Maintain comprehensive documentation, including architecture diagrams, processes, and disaster recovery plans.
Skills and Qualifications
Required
- Bachelor’s degree in Computer Science, IT, or equivalent experience.
- 8+ years of experience managing production infrastructure in high-availability, mission-critical environments (fintech or payment gateways preferred).
- Expertise in AWS cloud environments.
- Strong experience with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
- Deep understanding of:
  - Networking (load balancers, firewalls, VPNs, distributed systems)
  - Database systems (SQL/NoSQL), HA & DR strategies
  - Automation tools (Ansible, Chef, Puppet) and containerization/orchestration (Docker, Kubernetes)
  - Security best practices, encryption, vulnerability management, PCI-DSS compliance
- Experience with monitoring tools (Prometheus, Grafana, Datadog).
- Strong analytical and problem-solving skills.
- Excellent communication and leadership capabilities.
Preferred
- Experience in fintech/payment industry with regulatory exposure.
- Ability to operate effectively under pressure and ensure service continuity.

Global digital transformation solutions provider.
Role Proficiency:
Performs tests in strict compliance, independently guides other testers, and assists test leads
Additional Comments:
Position Title: Automation + Manual Tester
Primary Skills: Playwright, xUnit, Allure Report, Page Object Model, .NET, C#, Database Queries
Secondary Skills: Git, JIRA, Manual Testing
Experience: 4 to 5 years
ESSENTIAL FUNCTIONS AND BASIC DUTIES
1. Leadership in Automation Strategy:
  - Assess the feasibility and scope of automation efforts to ensure they align with project timelines and requirements.
  - Identify opportunities for process improvements and automation within the software development life cycle (SDLC).
2. Automation Test Framework Development:
  - Design, develop, and implement reusable test automation frameworks for various testing phases (unit, integration, functional, performance, etc.).
  - Ensure the automation frameworks integrate well with CI/CD pipelines and other development tools.
  - Maintain and optimize test automation scripts and frameworks for continuous improvements.
3. Team Management:
  - Lead and mentor a team of automation engineers, ensuring they follow best practices, write efficient test scripts, and develop scalable automation solutions.
  - Conduct regular performance evaluations and provide constructive feedback.
  - Facilitate knowledge-sharing sessions within the team.
4. Collaboration with Cross-functional Teams:
  - Work closely with development, QA, and operations teams to ensure proper implementation of automated testing and automation practices.
  - Collaborate with business analysts, product owners, and project managers to understand business requirements and translate them into automated test cases.
5. Continuous Integration & Delivery (CI/CD):
  - Ensure that automated tests are integrated into the CI/CD pipelines to facilitate continuous testing.
  - Identify and resolve issues related to the automation processes within the CI/CD pipeline.
6. Test Planning and Estimation:
  - Contribute to the test planning phase by identifying key automation opportunities.
  - Estimate the effort and time required for automating test cases and other automation tasks.
7. Test Reporting and Metrics:
  - Monitor automation test results and generate detailed reports on test coverage, defects, and progress.
  - Analyze test results to identify trends, bottlenecks, or issues in the automation process and make necessary improvements.
8. Automation Tools Management:
  - Evaluate, select, and manage automation tools and technologies that best meet the needs of the project.
  - Ensure that the automation tools used align with the overall project requirements and help to achieve optimal efficiency.
9. Test Environment and Data Management:
  - Work on setting up and maintaining the test environments needed for automation.
  - Ensure automation scripts work across multiple environments, including staging, testing, and production.
10. Risk Management & Issue Resolution:
  - Proactively identify risks associated with the automation efforts and provide solutions or mitigation strategies.
  - Troubleshoot issues in the automation scripts, framework, and infrastructure to ensure minimal downtime and quick issue resolution.
11. Develop and Maintain Automated Tests: Write and maintain automated scripts for different testing levels, including regression, functional, and integration tests.
12. Bug Identification and Tracking: Report, track, and manage defects identified through automation testing to ensure quick resolution.
13. Improve Test Coverage: Identify gaps in test coverage and develop additional test scripts to improve test comprehensiveness.
14. Automation Documentation: Create and maintain detailed documentation for test automation processes, scripts, and frameworks.
15. Quality Assurance: Ensure that all automated testing activities meet the quality standards, contributing to delivering a high-quality software product.
16. Stakeholder Communication: Regularly update project stakeholders about automation progress, risks, and areas for improvement.
REQUIRED KNOWLEDGE
1. Automation Tools Expertise: Proficiency in tools like Playwright and Allure reports, and their integration with CI/CD pipelines.
2. Programming Languages: Strong knowledge of platforms such as .NET and test frameworks like xUnit.
3. Version Control: Experience using Git for script management and collaboration.
4. Test Automation Frameworks: Ability to design scalable, reusable frameworks for different types of tests (functional, integration, etc.).
5. Leadership and Mentoring: Lead and mentor automation teams, ensuring adherence to best practices and continuous improvement.
6. Problem-Solving: Strong troubleshooting and analytical skills to identify and resolve automation issues quickly.
7. Collaboration and Communication: Excellent communication skills for working with cross-functional teams and presenting test results.
8. Time Management: Ability to estimate, prioritize, and manage automation tasks to meet project deadlines.
9. Quality Focus: Strong commitment to improving software quality, test coverage, and automation efficiency.
Skills: xUnit, Allure reports, Playwright, C#
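For illustration, a minimal browser smoke test of the kind such a framework automates. The posting's stack is C#/xUnit/Playwright; this sketch uses Playwright's Python bindings for brevity, and the URL and title check are hypothetical placeholders:

```python
# Minimal UI smoke test with Playwright's sync API, suitable for a CI runner.
from playwright.sync_api import sync_playwright

def test_login_page_loads():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)  # headless mode suits CI agents
        page = browser.new_page()
        page.goto("https://example.com/login")      # hypothetical URL
        # Fail fast if the page title is wrong; the pipeline marks the build red
        assert "Login" in page.title()
        browser.close()
```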
A DevSecOps Staff Engineer integrates security into DevOps practices: designing secure CI/CD pipelines, building and automating secure cloud infrastructure, and ensuring compliance across development, operations, and security teams.
Responsibilities
• Design, build, and maintain secure CI/CD pipelines using DevSecOps principles and practices to increase automation and reduce human involvement in the process
• Integrate SAST, DAST, SCA, and similar tools within pipelines to enable automated application building, testing, securing, and deployment
• Implement security controls for cloud platforms (AWS, GCP), including IAM, container security (EKS/ECS), and data encryption for services like S3 or BigQuery (see the sketch after this list)
• Automate vulnerability scanning, monitoring, and compliance processes by collaborating with DevOps and Development teams to minimize risks in deployment pipelines
• Suggest architecture and process improvements
• Review cloud deployment architectures and implement the required security controls
• Mentor other engineers on security practices and processes
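As a hedged illustration of the automation this role describes, a minimal boto3 sketch that audits S3 buckets for default server-side encryption; it assumes AWS credentials in the environment, and remediation is left out:

```python
# Flag S3 buckets that lack a default server-side encryption configuration.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"UNENCRYPTED: {name}")  # candidate for automated remediation
        else:
            raise
```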
Requirements
• Bachelor's degree, preferably in CS or a related field, or equivalent experience
• 10+ years of overall industry experience, with the AWS Certified Security - Specialty certification
• Must have implementation experience using security tools and processes related to SAST, DAST, and pen testing
• AWS-specific: 5+ years' experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, S3, VPC, CloudWatch) to develop and maintain an Amazon AWS based cloud solution, with an emphasis on best-practice cloud security
• Experienced with CI/CD tool chain (GitHub Actions, Packages, Jenkins, etc.)
• Passionate about solving security challenges and staying informed about available and emerging security threats and security technologies
• Must be familiar with the OWASP Top 10 Security Risks and Controls
• Good skills in at least one or more scripting languages: Python, Bash
• Good knowledge in Kubernetes, Docker Swarm or other cluster management software.
• Willing to work in shifts as required
Good to Have
• AWS Certified DevOps Engineer
• Observability: Experience with system monitoring tools (e.g. CloudWatch, New Relic, etc.)
• Experience with Terraform/Ansible/Chef/Puppet
• Operating Systems: Windows and Linux system administration.
Perks:
● Day off on the 3rd Friday of every month (one long weekend each month)
● Monthly Wellness Reimbursement Program to promote health and well-being
● Monthly Office Commutation Reimbursement Program
● Paid paternity and maternity leaves
ROLE & RESPONSIBILITIES:
We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
KEY RESPONSIBILITIES:
1. Cloud Security (AWS)-
- Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
- Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
- Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
- Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
- Ensure encryption of data at rest/in transit across all cloud services.
2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)-
Infrastructure as Code & Automation Security:
- Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
- Enforce misconfiguration scanning and automated remediation.
CI/CD Security:
- Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
- Implement secure build, artifact signing, and deployment workflows.
Containers & Kubernetes:
- Harden Docker images, private registries, runtime policies.
- Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
- Apply CIS Benchmarks for Kubernetes and Linux.
Monitoring & Reliability:
- Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
- Ensure audit logging across cloud/platform layers.
3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-
Pipeline & Workflow Security:
- Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
- Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
ML Platform Security:
- Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
- Control model access, artifact protection, model registry security, and ML metadata integrity.
Data Security:
- Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
- Enforce data versioning security, lineage tracking, PII protection, and access governance.
ML Observability:
- Implement drift detection (data drift/model drift), feature monitoring, audit logging.
- Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
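A minimal sketch of the drift detection mentioned above, using a two-sample KS test; the synthetic arrays and the 0.05 threshold are illustrative choices, not a prescribed method:

```python
# Compare a feature's live distribution against its training baseline.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature values
live = rng.normal(loc=0.3, scale=1.0, size=5_000)      # recent production values

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.05:
    print(f"Drift suspected (KS={stat:.3f}, p={p_value:.4f}) -> raise alert")
```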
4. Network & Endpoint Security-
- Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
- Conduct vulnerability assessments, penetration test coordination, and network segmentation.
- Secure remote workforce connectivity and internal office networks.
5. Threat Detection, Incident Response & Compliance-
- Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
- Build security alerts, automated threat detection, and incident workflows.
- Lead incident containment, forensics, RCA, and remediation.
- Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
- Maintain security policies, procedures, RRPs (Runbooks), and audits.
IDEAL CANDIDATE:
- 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
- Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
- Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
- Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
- Strong Linux security (CIS hardening, auditing, intrusion detection).
- Proficiency in Python, Bash, and automation/scripting.
- Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
- Understanding of microservices, API security, serverless security.
- Strong understanding of vulnerability management, penetration testing practices, and remediation plans.
EDUCATION:
- Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
- Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Job Title: DevOps Engineer
Location: Mumbai
Experience: 2–4 Years
Department: Technology
About InCred
InCred is a new-age financial services group leveraging technology and data science to make lending quick, simple, and hassle-free. Our mission is to empower individuals and businesses by providing easy access to financial services while upholding integrity, innovation, and customer-centricity. We operate across personal loans, education loans, SME financing, and wealth management, driving financial inclusion and socio-economic progress.
Role Overview
As a DevOps Engineer, you will play a key role in automating, scaling, and maintaining our cloud infrastructure and CI/CD pipelines. You will collaborate with development, QA, and operations teams to ensure high availability, security, and performance of our systems that power millions of transactions.
Key Responsibilities
- Cloud Infrastructure Management: Deploy, monitor, and optimize infrastructure on AWS (EC2, EKS, S3, VPC, IAM, RDS, Route53) or similar platforms.
- CI/CD Automation: Build and maintain pipelines using tools like Jenkins, GitLab CI, or similar.
- Containerization & Orchestration: Manage Docker and Kubernetes clusters for scalable deployments.
- Infrastructure as Code: Implement and maintain IaC using Terraform or equivalent tools.
- Monitoring & Logging: Set up and manage tools like Prometheus, Grafana, and the ELK stack for proactive monitoring (see the sketch after this list).
- Security & Compliance: Ensure systems adhere to security best practices and regulatory requirements.
- Performance Optimization: Troubleshoot and optimize system performance, network configurations, and application deployments.
- Collaboration: Work closely with developers and QA teams to streamline release cycles and improve deployment efficiency.
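As a small illustration of the monitoring work above, a sketch that exposes custom application metrics for Prometheus to scrape; the metric names and port are illustrative:

```python
# Expose a request counter and latency histogram at :8000/metrics.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

def handle_request():
    with LATENCY.time():              # records the duration into the histogram
        time.sleep(random.random() / 10)
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)           # Prometheus scrapes this endpoint
    while True:
        handle_request()
```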
Required Skills
- 2–4 years of hands-on experience in DevOps roles.
- Strong knowledge of Linux administration and shell scripting (Bash/Python).
- Experience with AWS services and cloud architecture.
- Proficiency in CI/CD tools (Jenkins, GitLab CI) and version control systems (Git).
- Familiarity with Docker, Kubernetes, and container orchestration.
- Knowledge of Terraform or similar IaC tools.
- Understanding of networking, security, and performance tuning.
- Exposure to monitoring tools (Prometheus, Grafana) and log management.
Preferred Qualifications
- Experience in financial services or fintech environments.
- Knowledge of microservices architecture and enterprise-grade SaaS setups.
- Familiarity with compliance standards in BFSI (Banking & Financial Services Industry).
Why Join InCred?
- Culture: High-performance, ownership-driven, and innovation-focused environment.
- Growth: Opportunities to work on cutting-edge tech and scale systems for millions of users.
- Rewards: Competitive compensation, ESOPs, and performance-based incentives.
- Impact: Be part of a mission-driven organization transforming India’s credit landscape.
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn't it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.
What We Expect:
• You should already be exceptional at Golang. If you need hand-holding, this isn’t the place for you.
• You thrive on challenges, not on perks or financial rewards.
• You measure success by your own growth, not external validation.
• Taking calculated risks excites you—you’re here to build, break, and learn.
• You don't clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
• You understand the stakes—milliseconds can make or break trades, and precision is everything.
What You Will Do:
• Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.
• Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.
• Build event-driven, fault-tolerant systems that can handle massive real-time data streams.
• Own your work—no babysitting, no micromanagement.
• Work alongside equally driven engineers who expect nothing less than brilliance.
• Learn faster than you ever thought possible.
Must-Have Skills:
• Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).
• Deep understanding of concurrency, memory management, and system design.
• Experience with trading systems, market data processing, or low-latency systems.
• Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.
• Hands-on with Docker, Kubernetes, and CI/CD pipelines.
• A portfolio of work that speaks louder than a resume.
Nice-to-Have Skills:
• Past experience in fintech.
• Contributions to open-source Golang projects.
• A history of building something impactful from scratch.
• Understanding of FIX protocol, WebSockets, and streaming APIs.
Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across departments and the R&D organization, adapting to audiences with differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 7+ years of progressive experience with back-end development in a client-server application development environment, focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of these languages is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry's broadest and deepest observability platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) software market is ripe for disruption, and Virtana is uniquely positioned for success.
Review Criteria
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including in recent roles
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you okay for F2F round?
- Has the candidate filled in the Google form?
Role & Responsibilities
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); a DAG sketch follows this list.
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
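A minimal Airflow DAG skeleton of the kind this role builds; the task bodies are stubs, and the dag_id and schedule are placeholder choices:

```python
# Daily retraining pipeline: extract features, then train, in strict order.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pull features from the lake")

def train_model():
    print("launch a training job (e.g., on EMR/EKS)")

with DAG(
    dag_id="ml_retrain_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    extract >> train  # train runs only after extraction succeeds
```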
Ideal Candidate
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master's degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Required Skills: TypeScript, MVC, Cloud experience (Azure, AWS, etc.), mongodb, Express.js, Nest.js
Criteria:
Need candidates from growing startups or product-based companies only
1. 4–8 years’ experience in backend engineering
2. Minimum 2+ years hands-on experience with:
- TypeScript
- Express.js / Nest.js
3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)
4. Strong understanding of system design & scalable architecture
5. Hands-on experience in:
- Event-driven architecture / Domain-driven design
- MVC / Microservices
6. Strong in automated testing (especially integration tests)
7. Experience with CI/CD pipelines (GitHub Actions or similar)
8. Experience managing production systems
9. Solid understanding of performance, reliability, observability
10. Cloud experience (AWS preferred; GCP/Azure acceptable)
11. Strong coding standards — Clean Code, code reviews, refactoring
Description
About the opportunity
We are looking for an exceptional Senior Software Engineer to join our Backend team. This is a unique opportunity to join a fast-growing company where you will get to solve real customer and business problems, shape the future of a product built for Bharat and build the engineering culture of the team. You will have immense responsibility and autonomy to push the boundaries of engineering to deliver scalable and resilient systems.
As a Senior Software Engineer, you will be responsible for shipping innovative features at breakneck speed, designing the architecture, mentoring other engineers on the team and pushing for a high bar of engineering standards like code quality, automated testing, performance, CI/CD, etc. If you are someone who loves solving problems for customers, technology, the craft of software engineering, and the thrill of building startups, we would like to talk to you.
What you will be doing
- Build and ship features in our Node.js codebase (now migrating to TypeScript) that directly impact user experience and help move the top and bottom line of the business.
- Collaborate closely with our product, design and data team to build innovative features to deliver a world class product to our customers. At company, product managers don’t “tell” what to build. In fact, we all collaborate on how to solve a problem for our customers and the business. Engineering plays a big part in it.
- Design scalable platforms that empower our product and marketing teams to rapidly experiment.
- Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
- Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
- Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
- Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.
The role could be ideal for you if you
- Experience of 4-8 years of working in backend engineering with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js) and MongoDB (or any popular database like MySQL, PostgreSQL, DynamoDB, etc.).
- Well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
- Experienced in writing automated tests (especially integration tests) and Continuous Integration. At company, engineers own quality and hence, writing automated tests is crucial to the role.
- Experience with managing production infrastructure using technologies like public cloud providers (AWS, GCP, Azure, etc.). Bonus: if you have experience in using Kubernetes.
- Experience in observability techniques like code instrumentation for metrics, tracing and logging.
- Care deeply about code quality, code reviews, software architecture (think about Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: if you have experience in this from your past roles.
- Understand the importance of shipping fast in a startup environment and constantly try to find ingenious ways to achieve the same.
- Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other members on the team sooner than later.
- Can take ownership of goals and deliver them with high accountability.
Don't hesitate to try out new technologies. At company, nobody is limited to a role. Every engineer in our team is an expert in at least one technology but often ventures out into adjacent technologies like React.js, Flutter, Data Platforms, AWS, and Kubernetes. If you are not excited by this, you will not like working at company. Bonus: if you have experience in adjacent technologies like AWS (or any public cloud provider), GitHub Actions (or CircleCI), Kubernetes, or Infrastructure as Code (Terraform, Pulumi, etc.).
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods to deep learning, including the use of LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation (see the sketch after this list).
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
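As an illustration of the data-preparation step above, a small pandas sketch; the file paths and column names are hypothetical:

```python
# Cleanse and transform a raw extract before model training.
import pandas as pd

df = pd.read_csv("raw_events.csv")                        # hypothetical extract
df = df.drop_duplicates()
df["amount"] = df["amount"].fillna(df["amount"].median()) # impute missing values
df["event_time"] = pd.to_datetime(df["event_time"], errors="coerce")
df = df.dropna(subset=["event_time"])
# Simple feature engineering: hour-of-day often helps risk/behavior models
df["event_hour"] = df["event_time"].dt.hour
df.to_parquet("training_set.parquet", index=False)
```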
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other tech touch points are ScyllaDB (similar to BigTable), OpenSearch, and the Neo4j graph database
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
Notice period - 0 to 15 days only
Hybrid work mode- 3 days office, 2 days at home
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote / On-site (Jaipur preferred, as per project needs)
Experience:
5+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI (see the sketch after this list).
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
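A minimal FastAPI sketch of the kind of service this role maintains; the Item model and routes are illustrative, and the app would be served with, e.g., `uvicorn main:app`:

```python
# Tiny in-memory CRUD slice showing FastAPI routing and validation.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

ITEMS: dict[int, Item] = {}

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item):
    if item_id in ITEMS:
        raise HTTPException(status_code=409, detail="Item already exists")
    ITEMS[item_id] = item
    return {"id": item_id, **item.model_dump()}

@app.get("/items/{item_id}")
def read_item(item_id: int):
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="Item not found")
    return ITEMS[item_id]
```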
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma)
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 5 years of professional experience in backend development.
- Available for full-time engagement.
- Please refrain from applying if you are currently engaged in other projects; we require dedicated availability.
We are looking for a Senior AI / ML Engineer to join our fast-growing team and help build AI-driven data platforms and intelligent solutions. If you are passionate about AI, data engineering, and building real-world GenAI systems, this role is for you!
🔧 Key Responsibilities
• Develop and deploy AI/ML models for real-world applications
• Build scalable pipelines for data processing, training, and evaluation
• Work on LLMs, RAG, embeddings, and agent workflows (see the retrieval sketch after this list)
• Collaborate with data engineers, product teams, and software developers
• Write clean, efficient Python code and ensure high-quality engineering practices
• Handle model monitoring, performance tuning, and documentation
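As a toy illustration of the RAG retrieval step, a sketch that embeds a corpus and ranks it against a query with sentence-transformers; the model name and corpus are illustrative choices:

```python
# Rank documents by cosine similarity to a query; the top hit becomes LLM context.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Invoices are processed within 3 business days.",
    "Refunds require manager approval.",
    "Passwords must be rotated every 90 days.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode(["how long does invoice processing take"], normalize_embeddings=True)

scores = doc_vecs @ query_vec.T        # cosine similarity, since vectors are normalized
best = int(np.argmax(scores))
print(docs[best])                      # context passed into the LLM prompt
```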
Required Skills
• 2–5 years of experience in AI/ML engineering
• Strong knowledge of Python, TensorFlow/PyTorch
• Experience with LLMs, GenAI, RAG, or NLP
• Knowledge of Databricks, MLOps or cloud platforms (AWS/Azure/GCP)
• Good understanding of APIs, distributed systems, and data pipelines
🎯 Good to Have
• Experience in healthcare, SaaS, or big data
• Exposure to Databricks Mosaic AI
• Experience building AI agents
Review Criteria
- Strong Oracle Integration Cloud (OIC) Implementation profile
- 5+ years in enterprise integration / middleware roles, with minimum 3+ years of hands-on Oracle Integration Cloud (OIC) implementation experience
- Strong experience designing and delivering integrations using OIC Integrations, Adapters (File, FTP, DB, SOAP/REST, Oracle ERP), Orchestrations, Mappings, Process Automation, Visual Builder (VBCS), and OIC Insight/Monitoring
- Proven experience building integrations across Oracle Fusion/ERP/HCM, Salesforce, on-prem systems (AS/400, JDE), APIs, file feeds (FBDI/HDL), databases, and third-party SaaS.
- Strong expertise in REST/JSON, SOAP/XML, WSDL, XSD, XPath, XSLT, JSON Schema, and web-service–based integrations
- Good working knowledge of OCI components (API Gateway, Vault, Autonomous DB) and hybrid integration patterns
- Strong SQL & PL/SQL skills for debugging, data manipulation, and integration troubleshooting
- Hands-on experience owning end-to-end integration delivery including architecture reviews, deployments, versioning, CI/CD of OIC artifacts, automated testing, environment migrations (Dev→Test→Prod), integration governance, reusable patterns, error-handling frameworks, and observability using OIC/OCI monitoring & logging tools
- Experience providing technical leadership, reviewing integration designs/code, and mentoring integration developers; must be comfortable driving RCA, performance tuning, and production issue resolution
- Strong stakeholder management, communication (written + verbal), problem-solving, and ability to collaborate with business/product/architect teams
Preferred
- Preferred (Certification) – Oracle OIC or Oracle Cloud certification
- Preferred (Domain Exposure) – Experience with Oracle Fusion functional modules (Finance, SCM, HCM), business events/REST APIs, SOA/OSB background, or multi-tenant/API-governed integration environments
Job Specific Criteria
- CV Attachment is mandatory
- How many years of experience you have with Oracle Integration Cloud (OIC)?
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
Company is seeking an experienced OIC Lead to own the design, development, and deployment of enterprise integrations. The ideal candidate will have at least 6+ years of prior experience in various integration technologies, with good experience implementing OIC integration capabilities. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Responsibilities:
- Lead the design and delivery of integration solutions using Oracle Integration Cloud (Integration, Process Automation, Visual Builder, Insight) and related Oracle PaaS components.
- Build and maintain integrations between Oracle Fusion/ERP/HCM, Salesforce, on-prem applications (e.g., AS/400, JDE), APIs, file feeds (FBDI/HDL), databases and third-party SaaS.
- Own end-to-end integration delivery - from architecture/design reviews through deployment, monitoring, and post-production support.
- Create reusable integration patterns, error-handling frameworks, security patterns (OAuth2, client credentials; see the sketch after this list), and governance for APIs and integrations.
- Own CI/CD, versioning and migration of OIC artifacts across environments (Dev → Test → Prod); implement automated tests and promotion pipelines.
- Define integration architecture standards and reference patterns for hybrid (cloud/on-prem) deployments.
- Ensure security, scalability, and fault tolerance are built into all integration designs.
- Drive performance tuning, monitoring and incident response for integrations; implement observability using OIC/OCI monitoring and logging tools.
- Provide technical leadership and mentorship to a team of integration developers; review designs and code; run hands-on troubleshooting and production support rotations.
- Work with business stakeholders, product owners, and solution architects to translate requirements into integration designs, data mappings, and runbooks.
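For illustration, the OAuth2 client-credentials pattern named above, sketched with Python requests; every URL, scope, and credential here is a placeholder, not an OIC specification:

```python
# Obtain a bearer token via client credentials, then call a protected endpoint.
import requests

TOKEN_URL = "https://idcs.example.com/oauth2/v1/token"     # hypothetical
API_URL = "https://oic.example.com/ic/api/integration/v1"  # hypothetical

resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials", "scope": "placeholder.scope"},
    auth=("client_id_here", "client_secret_here"),
    timeout=30,
)
resp.raise_for_status()
token = resp.json()["access_token"]

r = requests.get(
    f"{API_URL}/integrations",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
r.raise_for_status()
print(r.json())
```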
Ideal Candidate
- 5+ years in integration/enterprise middleware roles with at least 3+ years hands-on OIC (Oracle Integration Cloud) implementations.
- Strong experience with OIC components: Integrations, Adapters (File, FTP, Database, SOAP, REST, Oracle ERP), Orchestrations/Maps, OIC Insight/Monitoring, Visual Builder (VBCS) or similar
- Expert in web services and message formats: REST/JSON, SOAP/XML, WSDL, XSD, XPath, XSLT, JSON Schema
- Good knowledge of Oracle Cloud stack / OCI (API Gateway, Vault, Autonomous DB) and on-prem integration patterns
- SQL & PL/SQL skills for data manipulation and troubleshooting; exposure to FBDI/HDL (for bulk loads) is desirable
- Strong problem-solving, stakeholder management, written/verbal communication and team mentoring experience
Nice-to-have / Preferred:
- Oracle OIC certification(s) or Oracle Cloud certifications
- Exposure to OCI services (API Gateway, Vault, Monitoring) and Autonomous Database
- Experience with Oracle Fusion functional areas (Finance, Supply Chain, HCM) and business events/REST APIs preferred.
- Background with SOA Suite/Oracle Service Bus (useful if migrating legacy SOA to OIC)
- Experience designing multi-tenant integrations, rate limiting/throttling and API monetization strategies.
Job Title: Senior DevOps Engineer (Cybersecurity & VAPT)
Location: Vashi (On-site)
Shift: 10:00 AM – 7:00 PM
Experience: 5+ years
Job Summary
Hiring a Senior DevOps Engineer with strong cloud, CI/CD, automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.
Key Responsibilities
Cloud & Infrastructure
Manage deployments on AWS/Azure
Maintain Linux servers & cloud environments
Ensure uptime, performance, and scalability
CI/CD & Automation
Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
Automate tasks using Bash/Python
Implement IaC (Terraform/CloudFormation)
Containerization
Build and run Docker containers
Work with basic Kubernetes concepts
Cybersecurity & VAPT
Perform Vulnerability Assessment & Penetration Testing
Identify, track, and mitigate security vulnerabilities
Implement hardening and support DevSecOps practices
Assist with firewall/security policy management
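As a tiny illustration of the assessment work above, a port-reconnaissance sketch in Python; the host and port list are placeholders, and such checks should only run against systems you are authorized to test:

```python
# Check which common TCP ports answer on a target host.
import socket

HOST = "192.0.2.10"                  # TEST-NET address, hypothetical target
PORTS = [22, 80, 443, 3306, 8080]

for port in PORTS:
    try:
        with socket.create_connection((HOST, port), timeout=1):
            print(f"{HOST}:{port} open")
    except OSError:
        pass                         # closed or filtered
```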
Monitoring & Troubleshooting
Use ELK, Prometheus, Grafana, CloudWatch
Resolve cloud, deployment, and infra issues
Cross-Team Collaboration
Work with Dev, QA, and Security for secure releases
Maintain documentation and best practices
Required Skills
AWS/Azure, Linux, Docker
CI/CD tools: Jenkins, GitHub Actions, GitLab
Terraform / IaC
VAPT experience + understanding of OWASP, cloud security
Bash/Python scripting
Monitoring tools (ELK, Prometheus, Grafana)
Strong troubleshooting & communication
AI/LLM Test Automation Engineer (SDET)
Location: Bangalore (Hybrid preferred)
Experience: 5-8 Years
Job Summary
We are seeking a Senior Test Automation Engineer (SDET) with expertise in AI/LLM testing and Azure DevOps CI/CD to build robust automation frameworks for cutting-edge AI applications. The role combines deep programming skills (Java/Python), modern DevOps practices, and specialized LLM testing to ensure high-quality AI product delivery.
Key Responsibilities
- Design, develop, and maintain automation frameworks using Java/Python for web, mobile, API, and backend testing.
- Create and manage YAML-based CI/CD pipelines in Azure DevOps for end-to-end testing workflows.
- Perform AI/LLM testing including prompt validation, content generation evaluation, model behavior analysis, and bias detection (see the sketch after this list).
- Write and maintain BDD Cucumber feature files integrated with automation suites.
- Execute manual + automated testing across diverse application layers with focus on edge cases.
- Implement Git branching strategies, code reviews, and repository best practices.
- Track defects and manage test lifecycle using ServiceNow or similar tools.
- Conduct root-cause analysis, troubleshoot complex issues, and drive continuous quality improvements.
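A hedged sketch of what automated prompt validation can look like with pytest; `generate` is a hypothetical stand-in for whatever client wraps the model under test, and the guardrail checks are illustrative:

```python
# Property-style checks over model output: non-empty, bounded length, style rules.
import pytest

def generate(prompt: str) -> str:
    # Placeholder: call your model endpoint here; a canned reply keeps the sketch runnable.
    return "A short summary of the input text."

@pytest.mark.parametrize("prompt", [
    "Summarize: The quick brown fox jumps over the lazy dog.",
    "Summarize: CI/CD shortens release cycles.",
])
def test_summary_is_short_and_well_behaved(prompt):
    answer = generate(prompt)
    assert answer, "model returned an empty response"
    assert len(answer.split()) <= 50           # length guardrail
    assert "as an ai" not in answer.lower()    # style/behavior check
```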
Mandatory Skills & Experience
✅ 5+ years SDET/Automation experience
✅ Java/Python scripting for test frameworks (Selenium, REST Assured, Playwright)
✅ Azure DevOps YAML pipelines (CI/CD end-to-end)
✅ AI/LLM testing (prompt engineering, model validation, RAG testing)
✅ Cucumber BDD (Gherkin feature files + step definitions)
✅ Git (branching, PRs, GitFlow)
✅ ServiceNow/Jira defect tracking
✅ Manual + Automation testing (web/mobile/API/backend)
Technical Stack
Programming: Java, Python, JavaScript
CI/CD: Azure DevOps, YAML Pipelines
Testing: Selenium, Playwright, REST Assured, Postman
BDD: Cucumber (Gherkin), JBehave
AI/ML: Prompt validation, LLM APIs (OpenAI, LangChain)
Version Control: Git, GitHub/GitLab
Defect Tracking: ServiceNow, Jira, Azure Boards
Preferred Qualifications
- Exposure to AI testing frameworks (LangSmith, Promptfoo)
- Experience with containerization (Docker) and Kubernetes
- Knowledge of performance testing for AI workloads
- AWS/GCP cloud testing experience
- ISTQB or relevant QA certifications
What We Offer
- Work on next-gen AI/LLM products with global impact
- Modern tech stack with Azure-native DevOps
- Flexible hybrid/remote work model
- Continuous learning opportunities in AI testing
Role: Senior Data Engineer (Azure)
Experience: 5+ Years
Location: Anywhere in India
Work Mode: Remote
Notice Period: Immediate joiners or candidates serving notice period
Key Responsibilities:
- Data processing on Azure using ADF, Streaming Analytics, Event Hubs, Azure Databricks, Data Migration Services, and Data Pipelines
- Provisioning, configuring, and developing Azure solutions (ADB, ADF, ADW, etc.)
- Designing and implementing scalable data models and migration strategies
- Working on distributed big data batch or streaming pipelines (Kafka or similar); a PySpark sketch follows this list
- Developing data integration & transformation solutions for structured and unstructured data
- Collaborating with cross-functional teams for performance tuning and optimization
- Monitoring data workflows and ensuring compliance with governance and quality standards
- Driving continuous improvement through automation and DevOps practices
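As an illustration of the pipeline work above, a minimal PySpark batch transformation; the paths and columns are hypothetical:

```python
# Read a raw CSV extract, clean it, aggregate daily revenue, write Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily").getOrCreate()

orders = spark.read.option("header", True).csv("/mnt/raw/orders.csv")
daily = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)                     # drop refunds/bad rows
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)
daily.write.mode("overwrite").parquet("/mnt/curated/orders_daily")
```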
Mandatory Skills & Experience:
- 5–10 years of experience as a Data Engineer
- Strong proficiency in Azure Databricks, PySpark, Python, SQL, and Azure Data Factory
- Experience in Data Modelling, Data Migration, and Data Warehousing
- Good understanding of database structure principles and schema design
- Hands-on experience with MS SQL Server, Oracle, or similar RDBMS platforms
- Experience with DevOps tools (Azure DevOps, Jenkins, Airflow, Azure Monitor) — good to have
- Knowledge of distributed data processing and real-time streaming (Kafka/Event Hub)
- Familiarity with visualization tools like Power BI or Tableau
- Strong analytical, problem-solving, and debugging skills
- Self-motivated, detail-oriented, and capable of managing priorities effectively
About the role:
We are looking for a Senior Site Reliability Engineer who understands the nuances of production systems. If you care about building and running reliable software systems in production, you'll like working at One2N.
You will primarily work with our startups and mid-size clients. We work on one-to-N kinds of problems (hence the name One2N): those where the proof of concept is done and the work revolves around scalability, maintainability, and reliability. In this role, you will be responsible for architecting and optimizing our observability and infrastructure to provide actionable insights into performance and reliability.
Responsibilities:
- Conceptualise, think, and build platform engineering solutions with a self-serve model to enable product engineering teams.
- Provide technical guidance and mentorship to young engineers.
- Participate in code reviews and contribute to best practices for development and operations.
- Design and implement comprehensive monitoring, logging, and alerting solutions to collect, analyze, and visualize data (metrics, logs, traces) from diverse sources.
- Develop custom monitoring metrics, dashboards, and reports to track key performance indicators (KPIs), detect anomalies, and troubleshoot issues proactively.
- Improve Developer Experience (DX) to help engineers improve their productivity.
- Design and implement CI/CD solutions to optimize velocity and shorten the delivery time.
- Help SRE teams set up on-call rosters and coach them for effective on-call management (see the error-budget sketch after this list).
- Automate repetitive manual tasks across CI/CD pipelines and operations, and apply infrastructure-as-code (IaC) practices.
- Stay up-to-date with emerging technologies and industry trends in cloud-native, observability, and platform engineering space.
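For context, the error-budget arithmetic behind much of this reliability work, as a small worked example; the SLO and downtime figures are illustrative:

```python
# A 99.9% availability SLO leaves ~43 minutes of error budget per 30-day month.
SLO = 0.999
MONTH_MINUTES = 30 * 24 * 60

error_budget_minutes = (1 - SLO) * MONTH_MINUTES
print(f"Monthly error budget: {error_budget_minutes:.1f} minutes")  # 43.2

downtime_so_far = 12.0  # minutes consumed this month (illustrative)
print(f"Budget remaining: {error_budget_minutes - downtime_so_far:.1f} minutes")
```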
Requirements:
- 6-9 years of professional experience in DevOps practices or software engineering roles, with a focus on Kubernetes on an AWS platform.
- Expertise in observability and telemetry tools and practices, including hands-on experience with tools such as Datadog, Honeycomb, ELK, Grafana, and Prometheus.
- Working knowledge of programming using Golang, Python, Java, or equivalent.
- Skilled in diagnosing and resolving Linux operating system issues.
- Strong proficiency in scripting and automation to build monitoring and analytics solutions.
- Solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies.
- Experience with infrastructure as code (IaC) tools such as Terraform, Pulumi.
- Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
- Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
Please note that salary will be based on experience.
Job Title: Full Stack Engineer
Location: Bengaluru (Indiranagar) – Work From Office (5 Days)
Job Summary
We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.
Responsibilities
- Design, develop, and maintain scalable full-stack applications.
- Build responsive, high-performance UIs using TypeScript & Next.js.
- Develop backend services and APIs using Python (FastAPI/Django).
- Work closely with product, design, and business teams to translate requirements into intuitive solutions.
- Contribute to architecture discussions and drive technical best practices.
- Own features end-to-end — design, development, testing, deployment, and monitoring.
- Ensure robust security, code quality, and performance optimization.
Tech Stack
Frontend: TypeScript, Next.js, React, Tailwind CSS
Backend: Python, FastAPI, Django
Databases: PostgreSQL, MongoDB, Redis
Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD
Other Tools: Git, GitHub, Elasticsearch, Observability tools
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience.
- Strong expertise in either frontend (TypeScript/Next.js) or backend (Python/FastAPI/Django) with familiarity in both.
- Experience building RESTful services and microservices.
- Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
- Strong debugging, problem-solving, and optimization skills.
- Ability to thrive in fast-paced, high-ownership startup environments.
Good-to-Have:
- Exposure to Docker, Kubernetes, and observability tools.
- Experience with message queues or event-driven architecture.
Perks & Benefits
- Upskilling support – courses, tools & learning resources.
- Fun team outings, hackathons, demos & engagement initiatives.
- Flexible Work-from-Home: 12 WFH days every 6 months.
- Menstrual WFH: up to 3 days per month.
- Mobility benefits: relocation support & travel allowance.
- Parental support: maternity, paternity & adoption leave.
Job Title : Full Stack Engineer (Python + React.js/Next.js)
Experience : 1 to 6+ Years
Location : Bengaluru (Indiranagar)
Employment : Full-Time
Working Days : 5 Days WFO
Notice Period : Immediate to 30 Days
Role Overview :
We are seeking Full Stack Engineers to build scalable, high-performance fintech products.
You will work on both frontend (TypeScript/Next.js) and backend (Python/FastAPI/Django), owning features end-to-end and contributing to architecture, performance, and product innovation.
Main Tech Stack :
Frontend : TypeScript, Next.js, React
Backend : Python, FastAPI, Django
Database : PostgreSQL, MongoDB, Redis
Cloud : AWS/GCP, Docker, Kubernetes
Tools : Git, GitHub, CI/CD, Elasticsearch
Key Responsibilities :
- Develop full-stack applications with clean, scalable code.
- Build fast, responsive UIs using TypeScript, Next.js, React.
- Develop backend APIs using Python, FastAPI, Django.
- Collaborate with product/design to implement solutions.
- Own development lifecycle: design → build → deploy → monitor.
- Ensure performance, reliability, and security.
Requirements :
Must-Have :
- 1–6+ years of full-stack experience.
- Product-based company background.
- Strong DSA + problem-solving skills.
- Proficiency in either frontend or backend with familiarity in both.
- Hands-on experience with APIs, microservices, Git, CI/CD, cloud.
- Strong communication & ownership mindset.
Good-to-Have :
- Experience with containers, system design, observability tools.
Interview Process :
- Coding Round : DSA + problem solving
- System Design : LLD + HLD, scalability, microservices
- CTO Round : Technical deep dive + cultural fit
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
- Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
- Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines (see the LoRA sketch after this list)
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
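A minimal sketch of the LoRA fine-tuning setup mentioned above, using Hugging Face peft; the base model and target modules are illustrative and depend on the architecture:

```python
# Wrap a small base model with LoRA adapters so only a tiny fraction of weights train.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in model

config = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # confirms only adapter weights are trainable
```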
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
Job Title: DevOps Engineer
Job Description: We are seeking an experienced DevOps Engineer to support our Laravel, JavaScript (Node.js, React, Next.js), and Python development teams. The role involves building and maintaining scalable CI/CD pipelines, automating deployments, and managing cloud infrastructure to ensure seamless delivery across multiple environments.
Responsibilities:
Design, implement, and maintain CI/CD pipelines for Laravel, Node.js, and Python projects.
Automate application deployment and environment provisioning using AWS and containerization tools.
Manage and optimize AWS infrastructure (EC2, ECS, RDS, S3, CloudWatch, IAM, Lambda).
Implement Infrastructure as Code (IaC) using Terraform or AWS CloudFormation.
Manage configuration automation using Ansible.
Build and manage containerized environments using Docker (Kubernetes is a plus).
Monitor infrastructure and application performance using CloudWatch, Prometheus, or Grafana.
Ensure system security, data integrity, and high availability across environments.
Collaborate with development teams to streamline builds, testing, and deployments.
Troubleshoot and resolve infrastructure and deployment-related issues.
Required Skills:
AWS (EC2, ECS, RDS, S3, IAM, Lambda)
CI/CD Tools: Jenkins, GitLab CI/CD, AWS CodePipeline, CodeBuild, CodeDeploy
Infrastructure as Code: Terraform or AWS CloudFormation
Configuration Management: Ansible
Containers: Docker (Kubernetes preferred)
Scripting: Bash, Python
Version Control: Git, GitHub, GitLab
Web Servers: Apache, Nginx (preferred)
Databases: MySQL, MongoDB (preferred)
Qualifications:
3+ years of experience as a DevOps Engineer in a production environment.
Proven experience supporting Laravel, Node.js, and Python-based applications.
Strong understanding of CI/CD, containerization, and automation practices.
Experience with infrastructure monitoring, logging, and performance optimization.
Familiarity with agile and collaborative development processes.
ROLES AND RESPONSIBILITIES:
- Plan, schedule, and manage all releases across product and customer projects.
- Define and maintain the release calendar, identifying dependencies and managing risks proactively.
- Partner with engineering, QA, DevOps, and product management to ensure release readiness.
- Create release documentation (notes, guides, videos) for both internal stakeholders and customers.
- Run a release review process with product leads before publishing.
- Publish releases and updates to the company website release section.
- Drive communication of release details to internal teams and customers in a clear, concise way.
- Manage post-release validation and rollback procedures when required.
- Continuously improve release management through automation, tooling, and process refinement.
IDEAL CANDIDATE:
- 3+ years of experience in Release Management, DevOps, or related roles.
- Strong knowledge of CI/CD pipelines, source control (Git), and build/deployment practices.
- Experience creating release documentation and customer-facing content (videos, notes, FAQs).
- Excellent communication and stakeholder management skills; able to translate technical changes into business impact.
- Familiarity with SaaS, iPaaS, or enterprise software environments is a strong plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive salary package.
- Opportunity to learn from and work with senior leadership & founders.
- Build solutions for large enterprises that move from concept to real-world impact.
- Exceptional career growth pathways in a highly innovative and rapidly scaling environment.
Role Summary
We are hiring a Senior Java Developer with strong backend development experience to build scalable, high-performance applications and lead technical initiatives.
Key Responsibilities
- Develop and maintain applications using Java 8/11/17, Spring Boot, and REST APIs.
- Design and implement microservices and backend components.
- Work with SQL/NoSQL databases, API integrations, and performance optimization.
- Collaborate with cross-functional teams and participate in code reviews.
- Deploy applications using CI/CD, Docker, Kubernetes, and cloud platforms (AWS/Azure/GCP).
Skills Required
- Strong in Core Java, OOPS, multithreading, collections.
- Hands-on with Spring Boot, Hibernate/JPA, Microservices.
- Experience with REST APIs, Git, and CI/CD pipelines.
- Knowledge of Docker/Kubernetes and cloud basics.
- Good understanding of database queries and performance tuning.
Nice to Have
- Experience with messaging systems (Kafka/RabbitMQ).
- Basic frontend understanding (React/Angular).
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.
Key Responsibilities
CI/CD and Infrastructure Automation
- Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
- Automate deployments using tools such as Terraform, Helm, and Kubernetes
- Improve build and release processes to support high-performance and low-latency trading applications
- Work efficiently with Linux/Unix environments
Cloud and On-Prem Infrastructure Management
- Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
- Ensure system reliability, scalability, and high availability
- Implement Infrastructure as Code (IaC) to standardize and streamline deployments
Performance Monitoring and Optimization
- Monitor system performance and latency using Prometheus, Grafana, and the ELK stack (a minimal example follows this section)
- Implement proactive alerting and fault detection to ensure system stability
- Troubleshoot and optimize system components for maximum efficiency
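As a minimal illustration of the Prometheus-based monitoring above, the sketch below exposes a latency histogram from a Python process using the prometheus_client library. The metric name, port, and simulated workload are all assumptions, not part of any real system here.

```python
# Minimal latency-monitoring sketch with prometheus_client: exposes a
# histogram that Prometheus can scrape and Grafana can graph.
import random
import time

from prometheus_client import Histogram, start_http_server

ORDER_LATENCY = Histogram(
    "order_processing_seconds",          # hypothetical metric name
    "Time spent processing one order",
)

@ORDER_LATENCY.time()                    # records each call's duration
def process_order() -> None:
    time.sleep(random.uniform(0.001, 0.01))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)              # serves /metrics on :8000
    while True:                          # Ctrl-C to stop the demo
        process_order()
```

Prometheus can then derive p95/p99 latencies from this histogram with the histogram_quantile() function, which is where the alerting mentioned above would hook in.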
Security and Compliance
- Apply DevSecOps principles to ensure secure deployment and access management
- Maintain compliance with financial industry regulatory requirements, such as SEBI guidelines
- Conduct vulnerability assessments and maintain logging and audit controls
Required Skills and Qualifications
- 2+ years of experience as a DevOps Engineer in a software or trading environment
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
- Proficiency in cloud platforms such as AWS and GCP
- Hands-on experience with Docker and Kubernetes
- Experience with Terraform or CloudFormation for IaC
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
- Familiarity with Prometheus, Grafana, and the ELK stack
- Proficiency in scripting using Python, Bash, or Go
- Solid understanding of security best practices including IAM, encryption, and network policies
Good to Have (Optional)
- Experience with low-latency trading infrastructure or real-time market data systems
- Knowledge of high-frequency trading environments
- Exposure to FIX protocol, FPGA, or network optimization techniques
- Familiarity with Redis or Nginx for real-time data handling
Why Join Us?
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
Job Summary:
We are looking for a highly skilled MERN Stack Developer with at least 3 years of experience to join our team. As a Team Lead, you will be responsible for designing, developing, and optimizing web applications while mentoring junior developers. Strong communication skills and leadership abilities are essential for this role.
Key Responsibilities:
Full-Stack Development
- Develop, optimize, and maintain scalable applications using MongoDB, Express.js, React.js, and Node.js.
- Implement best coding practices, reusable components, and maintain high-performance applications.
Technical Leadership
- Lead a team of developers, conduct code reviews, and provide technical mentorship.
- Ensure the team follows best practices, development standards, and project timelines.
Backend & Database Management
- Design and implement RESTful APIs, GraphQL, and backend logic.
- Optimize MongoDB queries, indexing, and schema design for performance (see the sketch after this list).
Frontend Development
- Develop interactive and responsive UI components using React.js, Redux, Next.js, and TypeScript.
- Ensure seamless user experience and cross-browser compatibility.
Collaboration & Deployment
- Work closely with UI/UX designers, product managers, and DevOps for smooth project execution.
- Manage CI/CD pipelines, cloud deployments (AWS, Firebase, Heroku), and ensure system security.
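The stack here is Node.js, but to keep a single example language across this page, the MongoDB indexing idea from the list above is sketched in Python with pymongo; the connection string, database, and field names are hypothetical, and the equivalent Node driver calls map one-to-one.

```python
# Hypothetical indexing sketch: a compound index supporting
# "recent orders for a user" queries, plus a plan check.
from pymongo import ASCENDING, DESCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed local instance
orders = client["shop"]["orders"]

# Compound index: equality on user_id, then sort by created_at.
orders.create_index([("user_id", ASCENDING), ("created_at", DESCENDING)])

# explain() shows whether the planner uses the index (IXSCAN)
# or falls back to a full collection scan (COLLSCAN).
plan = orders.find({"user_id": 42}).sort("created_at", -1).explain()
print(plan["queryPlanner"]["winningPlan"])
```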
Requirements:
✔ 3 years of experience in MERN Stack development.
✔ Strong expertise in JavaScript, TypeScript (preferred), and modern frameworks.
✔ Experience leading teams, conducting code reviews, and mentoring junior developers.
✔ Knowledge of Docker, Kubernetes, CI/CD pipelines, and cloud platforms.
✔ Strong problem-solving skills and ability to troubleshoot performance issues.
✔ Excellent communication skills and ability to work in an Agile team environment.
Location: Bengaluru, India; Exp: 3–5 yrs
Backend Developer (Golang) - Trading & Fintech
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance driven backend systems, we want you.
What we expect:
You should already be exceptional at Golang. If you need hand-holding, this isn’t the place for you.
You thrive on challenges, not on perks or financial rewards.
You measure success by your own growth, not external validation.
Taking calculated risks excites you—you’re here to build, break, and learn.
You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
You understand the stakes—milliseconds can make or break trades, and precision is everything.
What you will do:
Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.
Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.
Build event-driven, fault-tolerant systems that can handle massive real-time data streams.
Own your work—no babysitting, no micromanagement.
Work alongside equally driven engineers who expect nothing less than brilliance.
Must have skills:
The ability to learn faster than you ever thought possible.
Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).
Deep understanding of concurrency, memory management, and system design.
Experience with trading systems, market data processing, or low-latency systems.
Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing (see the sketch after this list).
Hands-on with Docker, Kubernetes, and CI/CD pipelines.
A portfolio of work that speaks louder than a resume.
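Purely to illustrate the message-queue item above (the role itself is Golang, where a client such as segmentio/kafka-go would be the natural fit), here is a minimal Kafka consumer loop in Python using confluent-kafka; the broker address, topic, and group id are placeholders.

```python
# Hypothetical consumer loop: reads market-data messages from Kafka.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "ticks-demo",                # placeholder group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["market-ticks"])         # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)             # block up to 1s for a message
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(msg.topic(), msg.key(), msg.value())
finally:
    consumer.close()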
Nice-to-Have Skills:
Past experience in fintech, trading systems, or algorithmic trading.
Contributions to open-source Golang projects.
A history of building something impactful from scratch.
Understanding of FIX protocol, WebSockets, and streaming APIs.
Why Join Us?
Work with a team that expects and delivers excellence.
A culture where risk-taking is rewarded, and complacency is not.
Limitless opportunities for growth—if you can handle the pace.
A place where learning is currency, and outperformance is the only metric that matters.
The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech. This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
About the Role
We’re looking for an Elixir Developer who is passionate about building scalable, high-performance backend systems. You’ll work closely with our engineering team to design, develop, and maintain reliable applications that power mission-critical systems.
Key Responsibilities
• Develop and maintain backend services using Elixir and Phoenix framework.
• Build scalable, fault-tolerant, and distributed systems.
• Integrate APIs, databases, and message queues for real-time applications.
• Optimize system performance and ensure low latency and high throughput.
• Collaborate with frontend, DevOps, and product teams to deliver seamless solutions.
• Write clean, maintainable, and testable code with proper documentation.
• Participate in code reviews, architectural discussions, and deployment automation.
Required Skills & Experience
• 2–4 years of hands-on experience in Elixir (or strong functional programming background).
• Experience with Phoenix, Ecto, and RESTful API development.
• Solid understanding of OTP (Open Telecom Platform) concepts like GenServer, Supervisors, etc.
• Proficiency in PostgreSQL, Redis, or similar databases.
• Familiarity with Docker, Kubernetes, or cloud platforms (AWS/GCP/Azure).
• Understanding of CI/CD pipelines, version control (Git), and agile development.
Good to Have
• Experience with microservices architecture or real-time data systems.
• Knowledge of GraphQL, LiveView, or PubSub.
• Exposure to performance profiling, observability, or monitoring tools.
Job Description:
Technical Lead – Full Stack
Experience: 8–12 years (strong candidates split roughly 50% Java and 50% React)
Location – Bangalore/Hyderabad
Interview Levels – 3 Rounds
Tech Stack: Java, Spring Boot, Microservices, React, SQL
Focus: Hands-on coding, solution design, team leadership, delivery ownership
Must-Have Skills (Depth)
Java (8+): Streams, concurrency, collections, JVM internals (GC), exception handling.
Spring Boot: Security, Actuator, Data/JPA, Feign/RestTemplate, validation, profiles, configuration management.
Microservices: API design, service discovery, resilience patterns (Hystrix/Resilience4j); messaging (Kafka/RabbitMQ) is optional.
React: Hooks, component lifecycle, state management, error boundaries, testing (Jest/RTL).
SQL: Joins, aggregations, indexing, query optimization, transaction isolation, schema design.
Testing: JUnit/Mockito for backend; Jest/RTL/Cypress for frontend.
DevOps: Git, CI/CD, containers (Docker), familiarity with deployment environments.
We are seeking a highly skilled Power Platform Developer with deep expertise in designing, developing, and deploying solutions using Microsoft Power Platform. The ideal candidate will have strong knowledge of Power Apps, Power Automate, Power BI, Power Pages, and Dataverse, along with integration capabilities across Microsoft 365, Azure, and third-party systems.
Key Responsibilities
- Solution Development:
- Design and build custom applications using Power Apps (Canvas & Model-Driven).
- Develop automated workflows using Power Automate for business process optimization.
- Create interactive dashboards and reports using Power BI for data visualization and analytics.
- Configure and manage Dataverse for secure data storage and modeling.
- Develop and maintain Power Pages for external-facing portals.
- Integration & Customization:
- Integrate Power Platform solutions with Microsoft 365, Dynamics 365, Azure services, and external APIs.
- Implement custom connectors and leverage Power Platform SDK for advanced scenarios.
- Utilize Azure Functions, Logic Apps, and REST APIs for extended functionality.
- Governance & Security:
- Apply best practices for environment management, ALM (Application Lifecycle Management), and solution deployment.
- Ensure compliance with security, data governance, and licensing guidelines.
- Implement role-based access control and manage user permissions.
- Performance & Optimization:
- Monitor and optimize app performance, workflow efficiency, and data refresh strategies.
- Troubleshoot and resolve technical issues promptly.
- Collaboration & Documentation:
- Work closely with business stakeholders to gather requirements and translate them into technical solutions.
- Document architecture, workflows, and processes for maintainability.
Required Skills & Qualifications
- Technical Expertise:
- Strong proficiency in Power Apps (Canvas & Model-Driven), Power Automate, Power BI, Power Pages, and Dataverse.
- Experience with Microsoft 365, Dynamics 365, and Azure services.
- Knowledge of JavaScript, TypeScript, C#, .NET, and Power Fx for custom development.
- Familiarity with SQL, DAX, and data modeling.
- Additional Skills:
- Understanding of ALM practices, solution packaging, and deployment pipelines.
- Experience with Git, Azure DevOps, or similar tools for version control and CI/CD.
- Strong problem-solving and analytical skills.
- Certifications (Preferred):
- Microsoft Certified: Power Platform Developer Associate.
- Microsoft Certified: Power Platform Solution Architect Expert.
Soft Skills
- Excellent communication and collaboration skills.
- Ability to work in agile environments and manage multiple priorities.
- Strong documentation and presentation abilities.
Job Description: Python Engineer
Role Summary
We are looking for a talented Python Engineer to design, develop, and maintain high-quality backend applications and automation solutions. The ideal candidate should have strong programming skills, familiarity with modern development practices, and the ability to work in a fast-paced, collaborative environment.
Key Responsibilities:
Python Development & Automation
- Design, develop, and maintain Python scripts, tools, and automation frameworks.
- Build automation for operational tasks such as deployment, monitoring, system checks, and maintenance.
- Write clean, modular, and well-documented Python code following best practices.
- Develop APIs, CLI tools, or microservices when required (a small CLI sketch follows this list).
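A minimal sketch of the kind of CLI automation tool this list describes: a disk-usage check whose exit code a pipeline or cron job can act on. The default path and threshold are illustrative.

```python
# Hypothetical CLI check: fail (exit 1) when a mount point is too full.
import argparse
import shutil
import sys

def check_disk(path: str, max_used_pct: float) -> bool:
    usage = shutil.disk_usage(path)
    used_pct = 100 * usage.used / usage.total
    print(f"{path}: {used_pct:.1f}% used")
    return used_pct <= max_used_pct

def main() -> None:
    parser = argparse.ArgumentParser(description="Disk usage check")
    parser.add_argument("--path", default="/", help="mount point to check")
    parser.add_argument("--max-used", type=float, default=85.0,
                        help="fail above this percentage")
    args = parser.parse_args()
    # A non-zero exit code signals failure to the calling pipeline.
    sys.exit(0 if check_disk(args.path, args.max_used) else 1)

if __name__ == "__main__":
    main()
```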
Linux Systems Engineering
- Manage, configure, and troubleshoot Linux environments (RHEL, CentOS, Ubuntu).
- Perform system performance tuning, log analysis, and root-cause diagnostics (see the log-analysis sketch below).
- Work with system services, processes, networking, file systems, and security controls.
- Implement shell scripting (bash) alongside Python for system-level automation.
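And a similarly small sketch of the log analysis mentioned above: counting ERROR lines per originating process in a syslog-style file. The log path and line format are assumptions and vary by distribution.

```python
# Hypothetical log triage: tally ERROR lines per process in a
# syslog-style file ("Mon DD HH:MM:SS host process[pid]: message").
from collections import Counter

LOG_PATH = "/var/log/syslog"  # assumed location; varies by distro

counts: Counter[str] = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if "ERROR" not in line:
            continue
        parts = line.split()
        # Field 5 is the process name in the assumed syslog layout.
        process = parts[4].rstrip(":") if len(parts) > 4 else "unknown"
        counts[process] += 1

for process, n in counts.most_common(10):
    print(f"{n:6d}  {process}")
```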
CI/CD & Infrastructure Support
- Support integration of Python automation into CI/CD pipelines (Jenkins).
- Participate in build and release processes for infrastructure components.
- Ensure automation aligns with established infrastructure standards and governance.
- Use Bash scripting together with Python to improve automation efficiency.
Cloud & DevOps Collaboration (if applicable)
- Collaborate with Cloud/DevOps engineers on automation for AWS or other cloud platforms.
- Integrate Python tools with configuration management tools such as Chef or Ansible, or with Terraform modules.
- Contribute to containerization efforts (Docker, Kubernetes) leveraging Python automation.
Role: Mid–Senior Cloud Infrastructure Engineer
Type: Individual Contributor
Location: Baner, Pune
Experience: 4–9 years
About the Role
We are establishing a Mid–Senior Cloud Infrastructure Engineer CoE with a focus on cloud-native automation. As a Cloud Infrastructure Engineer, you will design, automate, and maintain AWS infrastructure using Terraform following IaC and platform engineering best practices.
Key Responsibilities
- Build, automate, and maintain AWS infrastructure using Terraform (remote state, modules, pipelines).
- Design secure, scalable AWS environments (VPC, EC2, IAM, ALB/NLB, S3, RDS, Lambda, ECS/EKS preferred).
- Create reusable Terraform modules for platform-wide standardization.
- Automate provisioning workflows for Linux and Windows environments.
- Integrate IaC into CI/CD (Jenkins/GitHub Actions/GitLab).
- Manage environment lifecycle: dev → test → stage → prod.
- Implement cost optimization, tagging strategies, and operational guardrails (a tagging-audit sketch follows this list).
- Maintain infrastructure documentation and reusable patterns.
- Troubleshoot cloud deployments, networking, IAM policies, and automation issues.
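As a hedged illustration of the tagging guardrail in the list above, this Python/boto3 sketch lists running instances missing required cost-allocation tags. The tag policy and region are assumptions; in practice the same rule would usually also be enforced at provisioning time in Terraform or via AWS Config.

```python
# Hypothetical tag audit: report running EC2 instances that are
# missing required cost-allocation tags. Credentials assumed configured.
import boto3

REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}   # assumed policy

def untagged_instances(region: str = "ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    filters = [{"Name": "instance-state-name", "Values": ["running"]}]
    for page in paginator.paginate(Filters=filters):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - tags
                if missing:
                    yield instance["InstanceId"], sorted(missing)

if __name__ == "__main__":
    for instance_id, missing in untagged_instances():
        print(f"{instance_id} missing tags: {', '.join(missing)}")
```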
Required Skills
- Strong hands-on experience with AWS services.
- Solid expertise with Terraform (workspaces, state management, modules, CI/CD integration).
- Understanding of multi-tier architectures, networking, security groups, IAM.
- Familiarity with Linux/Windows system administration.
- Knowledge of scripting (Python/Bash/PowerShell).
Good to Have
- AWS Certified (SA / SysOps / DevOps — not mandatory).
- Experience with container orchestration (ECS/EKS).
- Experience with monitoring tools (CloudWatch, Grafana, Prometheus).
Soft Skills
- Strong analytical and troubleshooting capability.
- Ability to collaborate closely with DevOps/Platform teams.
NOTE: The second technical round is conducted face-to-face at the Baner office in Pune.
Role Summary
Our CloudOps/DevOps teams are distributed across India, Canada, and Israel.
As a Manager, you will lead teams of Engineers and champion configuration management, cloud technologies, and continuous improvement. The role involves close collaboration with global leaders to ensure our applications, infrastructure, and processes remain scalable, secure, and supportable. You will work closely with Engineers across Dev, DevOps, and DBOps to design and implement solutions that improve customer value, reduce costs, and eliminate toil.
Key Responsibilities
- Guide the professional development of Engineers and support teams in meeting business objectives
- Collaborate with leaders in Israel on priorities, architecture, delivery, and product management
- Build secure, scalable, and self-healing systems
- Manage and optimize deployment pipelines
- Triage and remediate production issues
- Participate in on-call escalations
Key Qualifications
- Bachelor’s in CS or equivalent experience
- 3+ years managing Engineering teams
- 8+ years as a Site Reliability or Platform Engineer
- 5+ years administering Linux and Windows environments
- 3+ years programming/scripting (Python, JavaScript, PowerShell)
- Strong experience with OS internals, virtualization, storage, networking, and firewalls
- Experience maintaining On-Prem (90%) and Cloud (10%) environments (AWS, GCP, Azure)