
50+ Jenkins Jobs in India

Apply to 50+ Jenkins Jobs on CutShort.io. Find your next job, effortlessly. Browse Jenkins Jobs and apply today!

Bengaluru (Bangalore)
2 - 7 yrs
₹10L - ₹25L / yr
C
Python
Go (Golang)
Jenkins
Java


Description


We are seeking a skilled and detail-oriented Software Developer to automate our internal workflows and build tools used by our development team.


We follow these practices: unit testing, continuous integration (CI), continuous deployment (CD), and DevOps.


We have codebases in Go, Java, Python, Vue.js, and Bash, and we support the development team that develops C code.


You should enjoy challenges, exploring new fields, and finding solutions to problems.


You will be responsible for coordinating, automating, and validating internal workflows and for ensuring operational stability and system reliability.



Requirements

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 2+ years in professional software development
  • Solid understanding of software design principles and patterns such as SOLID and the GoF patterns.
  • Experience automating deployments for different kinds of applications.
  • Strong understanding of Git version control, merge/rebase strategies, tagging.
  • Familiarity with containerization (Docker) and deployment orchestration (e.g., docker compose).
  • Solid scripting experience (bash, or similar).
  • Understanding of observability, monitoring, and probing tooling (e.g., Prometheus, Grafana, blackbox exporter).
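Since Git tagging and release versioning appear in the requirements, here is a minimal illustrative sketch of a release-tag bump under semantic versioning. The helper name and the `vMAJOR.MINOR.PATCH` tag scheme are assumptions for illustration, not taken from the posting:

```python
# Bump a Git release tag like "v1.4.2" according to semantic versioning.
# Hypothetical helper for illustration only.
def bump_tag(tag: str, part: str = "patch") -> str:
    major, minor, patch = map(int, tag.lstrip("v").split("."))
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    else:
        patch += 1
    return f"v{major}.{minor}.{patch}"

print(bump_tag("v1.4.2"))           # v1.4.3
print(bump_tag("v1.4.2", "minor"))  # v1.5.0
```

In a release pipeline, the bumped tag would typically be pushed with `git tag` and used to version the build artifact.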


Preferred Skills

  • Experience in SRE
  • Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
  • Familiarity with build tools like Make, CMake, or similar.
  • Exposure to artifact management systems (e.g., aptly, Artifactory, Nexus).
  • Experience deploying to Linux production systems with service uptime guarantees.


Responsibilities

  • Develop new services needed by the SRE, Field, or Development teams, adopting unit-testing, agile, and clean-code practices.
  • Drive the CI/CD pipeline and maintain the workflows, using tools such as GitLab and Jenkins.
  • Deploy the services and implement and refine the automation for different environments.
  • Operate the services that the SRE team has developed.
  • Automate release pipelines: Build and maintain CI/CD workflows using tools such as Jenkins and GitLab.
  • Version control: Manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
  • Collaboration: Work closely with developers, QA, and product teams to align on release timelines and feature readiness.

Success Metrics

  • Achieve >99% service uptime with minimal rollbacks.
  • Deliver on time and hold timelines.
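The >99% uptime metric above translates into a concrete downtime budget. A small sketch of the arithmetic, assuming a 30-day measurement window (the window is an assumption for illustration):

```python
# Convert an uptime target into an allowed-downtime budget.
def allowed_downtime_hours(uptime_target: float, window_hours: float) -> float:
    return (1.0 - uptime_target) * window_hours

# A 99% target over a 30-day month leaves at most ~7.2 hours of downtime.
print(round(allowed_downtime_hours(0.99, 30 * 24), 1))  # 7.2
```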


Benefits

Enjoy a great environment, great people, and a great package

  • Stock Appreciation Rights - Generous pre-Series B stock options
  • Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements 
  • Health Insurance - Premium health insurance for employee, spouse and children 
  • Working Hours - Flexible working hours with sole focus on enabling a great work environment 
  • Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills 
  • Make a Difference - We're here because we want to make an impact on the world - we hope you do too!


Why Join RtBrick

Enjoy the excitement of a start-up without the risk!


We're revolutionizing the Internet's backbone by using cutting-edge software development techniques. The internet, and more specifically broadband networks, are among the world's most critical technologies, relied on by billions of people every day. RtBrick is revolutionizing the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure and distributed edge network functions. This shift mirrors transformations seen in computing and cloud technologies, marking the most profound change in networking since the inception of IP technology.


We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a DevOps philosophy, and warehouse-scale tools to drive innovation.


And although RtBrick is a young, innovative company, it stands on solid financial ground: we are already cash-flow positive, backed by major telco investors such as Swisscom Ventures and T-Capital, and our solutions are actively deployed by Tier-1 telcos including Deutsche Telekom (Europe's largest carrier) as well as regional and city ISPs, with expanding operations across Europe, North America, and Asia.


Joining RtBrick offers you the unique thrill of a startup environment, coupled with the security that comes from working in a business with substantial market presence and significant revenue streams. 


We'd love you to come and join us, so embrace the opportunity to be part of a team that's not just participating in the market but actively shaping the future of telecommunications worldwide.

Inflectionio
Posted by Renu Philip
Bengaluru (Bangalore)
3 - 5 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
Kubernetes
Jenkins
Chef
CI/CD
+6 more

We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.


Responsibilities: 

* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.

* Deploy and orchestrate containerized applications using Kubernetes.

* Implement and maintain infrastructure as code (IaC) using Terraform.

* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.

* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.

* Collaborate with cross-functional teams to define technical requirements and deliver solutions.

* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
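The nice-to-have queueing systems above (SQS, Kafka, RabbitMQ) share a common consumer pattern: bounded retries plus a dead-letter destination. A sketch in plain Python, where an in-memory deque stands in for the broker and all names are illustrative:

```python
from collections import deque

def consume(queue, handler, max_retries=3):
    """Drain the queue, requeueing failures up to max_retries, then dead-lettering."""
    dead_letters = []
    while queue:
        msg, attempts = queue.popleft()
        try:
            handler(msg)
        except Exception:
            if attempts + 1 >= max_retries:
                dead_letters.append(msg)           # exhausted: park for inspection
            else:
                queue.append((msg, attempts + 1))  # requeue for another attempt
    return dead_letters

def handler(msg):
    if msg == "bad":
        raise ValueError("simulated transient failure")

q = deque([("ok", 0), ("bad", 0)])
dead = consume(q, handler)
print(dead)  # ['bad'] -- dead-lettered after three failed attempts
```

Real brokers provide the requeue and dead-letter mechanics natively (e.g., SQS redrive policies); the loop above only shows the shape of the logic.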



Requirements: 

* 4+ years of experience with AWS, including practical exposure to its services in production environments.

* Demonstrated expertise in Kubernetes for container orchestration.

* Proficiency in using Terraform for managing infrastructure as code.

* Exposure to at least one CI/CD tool, such as Jenkins or Chef.

* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
5 - 7 yrs
Best in industry
Java
Selenium
Selenium WebDriver
CI/CD
Appium
+11 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are seeking a highly skilled QA Automation Engineer with strong expertise in Java and Selenium to join our growing engineering team. The ideal candidate will play a key role in designing, developing, and maintaining scalable test automation frameworks while ensuring high product quality across releases.


Roles and Responsibilities:

● Design, develop, and maintain robust automation frameworks using Java and Selenium

● Build automated test scripts for web applications and integrate them into CI/CD pipelines

● Collaborate closely with developers, product managers, and business analysts to understand requirements and define effective test strategies

● Participate in sprint planning, requirement reviews, and technical discussions

● Perform root cause analysis for defects and work with engineering teams for resolution

● Improve automation coverage and reduce manual regression effort

● Ensure test environments, test data, and execution reports are maintained and documented

● Mentor junior QA engineers and promote best practices in automation

● Develop, execute, and maintain comprehensive test plans and test cases for manual and automated testing

● Perform functional, regression, performance, and security testing to ensure software quality

● Design and develop automated test scripts using tools such as Selenium, Appium, or similar frameworks

● Identify, document, and track software defects, working closely with development teams for resolution

● Ensure test coverage by working closely with developers, product managers, and other stakeholders

● Establish and maintain continuous integration (CI) and continuous deployment (CD) pipelines for test automation

● Conduct API testing using tools like Postman or RestAssured

● Collaborate with cross-functional teams to enhance the overall quality of the product

● Stay up to date with the latest industry trends and best practices in QA methodologies and automation frameworks
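The API-testing responsibility above names Postman and RestAssured; the same status-and-schema assertions can be sketched in plain Python against a canned response. The payload and field names here are invented for illustration:

```python
import json

# A canned response standing in for a real HTTP call.
canned_response = {"status": 200, "body": json.dumps({"id": 42, "state": "active"})}

def check_response(resp):
    """Assert on status code, required fields, and allowed values."""
    assert resp["status"] == 200, "unexpected HTTP status"
    body = json.loads(resp["body"])
    assert {"id", "state"} <= body.keys(), "missing required fields"
    assert body["state"] in {"active", "inactive"}, "invalid state value"
    return body

print(check_response(canned_response))  # {'id': 42, 'state': 'active'}
```

In RestAssured the same checks would be chained as `given().when().get(...).then().statusCode(200).body(...)`; the structure of the assertions is what carries over.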


Requirements:

● 5 to 7 years of experience in QA automation

● Strong hands-on experience with Java and Selenium WebDriver

● Experience in building or enhancing automation frameworks from scratch

● Good understanding of TestNG or JUnit

● Experience with Maven or Gradle

● Familiarity with CI/CD tools such as Jenkins, GitHub Actions, or similar

● Strong understanding of Agile/Scrum methodology

● Experience with API testing tools such as Rest Assured or Postman is a plus

● Knowledge of version control systems like Git

● Strong analytical and problem-solving skills

● Strong understanding of software testing life cycle (STLC) and defect lifecycle management

● Relevant certifications in software testing (e.g., ISTQB) are desirable but not required

● Solid understanding of software testing principles, methodologies, and techniques

● Strong attention to detail and a commitment to delivering high-quality software

● Good communication and collaboration skills, with the ability to work effectively in a team environment


Good to Have:

● Experience with performance testing tools

● Exposure to cloud platforms such as AWS or Azure

● Knowledge of containerization tools like Docker

● Experience in BDD frameworks such as Cucumber.


Why Join Us?

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
4 - 7 yrs
Best in industry
DevOps
Amazon Web Services (AWS)
Terraform
Windows Azure
Google Cloud Platform (GCP)
+9 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.


Roles and Responsibilities:

● Design, implement, and manage CI/CD pipelines for multiple environments

● Automate infrastructure provisioning using Infrastructure as Code tools

● Manage and optimize cloud infrastructure on AWS, Azure, or GCP

● Monitor system performance, availability, and security

● Implement logging, monitoring, and alerting solutions

● Collaborate with development teams to streamline release processes

● Troubleshoot production issues and ensure high availability

● Implement containerization and orchestration solutions such as Docker and Kubernetes

● Enforce DevOps best practices across the engineering lifecycle

● Ensure security compliance and data protection standards are maintained
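The monitoring and alerting responsibilities above boil down to rules like "fire when a metric stays above a threshold for N consecutive samples" (the shape of a Prometheus `for` clause). A minimal sketch; the metric values and thresholds are invented:

```python
def should_alert(samples, threshold, for_n):
    """Fire only when the metric exceeds the threshold for for_n consecutive samples."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= for_n:
            return True
    return False

cpu_percent = [55, 92, 95, 91, 60]
print(should_alert(cpu_percent, threshold=90, for_n=3))  # True
```

Requiring a sustained streak rather than a single spike is what keeps transient blips from paging anyone.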


Requirements:

● 4 to 7 years of experience in DevOps or Site Reliability Engineering

● Strong experience with cloud platforms such as AWS, Azure, or GCP; relevant certifications are a strong advantage

● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps

● Experience working in microservices architecture

● Exposure to DevSecOps practices

● Experience in cost optimization and performance tuning in cloud environments

● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM

● Strong knowledge of containerization using Docker

● Experience with Kubernetes in production environments

● Good understanding of Linux systems and shell scripting

● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog

● Strong troubleshooting and debugging skills

● Understanding of networking concepts and security best practices


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

SAAS Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹25L / yr
Amazon Web Services (AWS)
Node.js
RESTful APIs
NOSQL Databases
Systems design
+39 more

Job Details

Job Title: Senior Backend Engineer

Industry: SAAS

Function: Information Technology

Experience Required: 5-8 years

Working Days: 6 days a week (5 days in office, Saturdays WFH)

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL

 

Criteria

· Minimum 5+ years in backend engineering with strong system design expertise

· Experience building scalable systems from scratch

· Expert-level proficiency in Node.js

· Deep understanding of distributed systems

· Strong NoSQL design skills

· Hands-on AWS cloud experience

· Proven leadership and mentoring capability

· Preferred candidates from SAAS/Software/IT Services based startups or scaleup companies

 

Job Description

The Role:

What You’ll Build:

1. System Architecture & Design

● Architect highly scalable backend systems from the ground up

● Define technology choices: frameworks, databases, queues, caching layers

● Evaluate microservices vs monoliths based on product stage

● Design REST, GraphQL, and real-time WebSocket APIs

● Build event-driven systems for asynchronous processing

● Architect multi-tenant systems with strict data isolation

● Maintain architectural documentation and technical specs

2. Core Backend Services

● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions

● Create 3D asset processing pipelines for uploads, conversions, and optimization

● Develop distributed job workers for CPU/GPU-intensive tasks

● Build authentication/authorization systems (RBAC)

● Implement billing, subscription, and usage metering

● Build secure webhook systems and third-party integration APIs

● Create real-time collaboration features via WebSockets/SSE

3. Data Architecture & Databases

● Design scalable schemas for 3D metadata, XR sessions, and analytics

● Model complex product catalogs with variants and hierarchies

● Implement Redis-based caching strategies

● Build search and indexing systems (Elasticsearch/Algolia)

● Architect ETL pipelines and data warehouses

● Implement sharding, partitioning, and replication strategies

● Design backup, restore, and disaster recovery workflows
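The Redis-based caching strategies above commonly follow the cache-aside pattern: check the cache, fall back to the database on a miss, and store the result with a TTL. A sketch with a plain dict standing in for Redis; keys, TTLs, and the loader are illustrative:

```python
import time

cache = {}  # key -> (expires_at, value); a dict standing in for Redis

def get_with_cache(key, loader, ttl=60.0):
    entry = cache.get(key)
    if entry and entry[0] > time.monotonic():
        return entry[1]                          # fresh hit: skip the database
    value = loader(key)                          # miss or expired: load and store
    cache[key] = (time.monotonic() + ttl, value)
    return value

calls = []
def loader(key):
    calls.append(key)         # record each simulated database round-trip
    return key.upper()

get_with_cache("product:1", loader)
get_with_cache("product:1", loader)  # served from cache
print(calls)  # ['product:1'] -- the loader ran only once
```

With real Redis the same shape appears as `GET`, then `SETEX` on a miss; the TTL bounds how stale a cached product record can get.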

4. Scalability & Performance

● Build systems designed for 10x–100x traffic growth

● Implement load balancing, autoscaling, and distributed processing

● Optimize API response times and database performance

● Implement global CDN delivery for heavy 3D assets

● Build rate limiting, throttling, and backpressure mechanisms

● Optimize storage and retrieval of large 3D files

● Profile and improve CPU, memory, and network performance
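The rate-limiting and throttling mechanisms listed above are commonly implemented with a token bucket: tokens refill at a steady rate up to a burst capacity, and each request spends one. A minimal single-threaded sketch (the rate and burst figures are invented for illustration):

```python
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)   # 5 tokens/s, burst of 2
print([bucket.allow() for _ in range(3)])  # [True, True, False]
```

A production limiter would typically keep the bucket state in Redis keyed per tenant or API key, but the refill-and-spend logic is the same.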

5. Infrastructure & DevOps

● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)

● Build CI/CD pipelines for automated deployments and rollbacks

● Use IaC tools (Terraform/CloudFormation) for infra provisioning

● Set up monitoring, logging, and alerting systems

● Use Docker + Kubernetes for container orchestration

● Implement security best practices for data, networks, and secrets

● Define disaster recovery and business continuity plans

6. Integration & APIs

● Build integrations with Shopify, WooCommerce, Magento

● Design webhook systems for real-time events

● Build SDKs, client libraries, and developer tools

● Integrate payment gateways (Stripe, Razorpay)

● Implement SSO and OAuth for enterprise customers

● Define API versioning and lifecycle/deprecation strategies

7. Data Processing & Analytics

● Build analytics pipelines for engagement, conversions, and XR performance

● Process high-volume event streams at scale

● Build data warehouses for BI and reporting

● Develop real-time dashboards and insights systems

● Implement analytics export pipelines and platform integrations

● Enable A/B testing and experimentation frameworks

● Build personalization and recommendation systems

 

Technical Stack:

1. Backend Languages & Frameworks 

●  Primary: Node.js (Express, NestJS), Python (FastAPI, Django)

●  Secondary: Go, Java/Kotlin (Spring)

●  APIs: REST, GraphQL, gRPC


2. Databases & Storage

● SQL: PostgreSQL, MySQL

● NoSQL: MongoDB, DynamoDB

● Caching: Redis, Memcached

● Search: Elasticsearch, Algolia

● Storage/CDN: AWS S3, CloudFront

● Queues: Kafka, RabbitMQ, AWS SQS

 

3. Cloud & Infrastructure: 

● Cloud: AWS (primary), GCP/Azure (nice to have)

● Compute: EC2, Lambda, ECS, EKS

● Infrastructure: Terraform, CloudFormation

● CI/CD: GitHub Actions, Jenkins, CircleCI

● Containers: Docker, Kubernetes

 

4. Monitoring & Operations 

● Monitoring: Datadog, New Relic, CloudWatch

● Logging: ELK Stack, CloudWatch Logs

● Error Tracking: Sentry, Rollbar

● APM tools

 

5. Security & Auth

● Auth: JWT, OAuth 2.0, SAML

● Secrets: AWS Secrets Manager, Vault

● Security: Encryption (at rest/in transit), TLS/SSL, IAM

 


What We’re Looking For:

1. Must-Haves

● 5+ years in backend engineering with strong system design expertise

● Experience building scalable systems from scratch

● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)

● Deep understanding of distributed systems and microservices

● Strong SQL/NoSQL design skills with performance optimization

● Hands-on AWS cloud experience

● Ability to write high-quality production code daily

● Experience building and scaling RESTful APIs

● Strong understanding of caching, sharding, horizontal scaling

● Solid security and best-practice implementation experience

● Proven leadership and mentoring capability


2. Highly Desirable

● Experience with large file processing (3D, video, images)

● Background in SaaS, multi-tenancy, or e-commerce

● Experience with real-time systems (WebSockets, streams)

● Knowledge of ML/AI infrastructure

● Experience with HA systems, DR planning

● Familiarity with GraphQL, gRPC, event-driven systems

● DevOps/infrastructure engineering background

● Experience with XR/AR/VR backend systems

● Open-source contributions or technical writing

● Prior senior technical leadership experience

 

Technical Challenges You’ll Solve:

● Designing large-scale 3D asset processing pipelines

● Serving XR content globally with ultra-low latency

● Scaling from thousands to millions of daily requests

● Efficiently handling CPU/GPU-heavy workloads

● Architecting multi-tenancy with complete data isolation

● Managing billions of analytics events at scale

● Building future-proof APIs with backward compatibility

 

Why company:

● Architectural Ownership: Build foundational systems from scratch

● Deep Technical Work: Solve distributed systems and scaling challenges

● Hands-On Impact: Design and code mission-critical infrastructure

● Diverse Problems: APIs, infra, data, ML, XR, asset processing

● Massive Scale Opportunity: Build systems for exponential growth

● Modern Stack and best practices

● Product Impact: Your architecture directly powers millions of users

● Leadership Opportunity: Shape engineering culture and direction

● Learning Environment: Stay at the forefront of backend engineering

● Backed by AWS, Microsoft, Google

 

Location & Work Culture:

● Location: Bengaluru

● Schedule: 6 days a week (5 days in office, Saturdays WFH)

● Culture: Builder mindset, strong ownership, technical excellence

● Team: Small, highly skilled backend and infra team

● Resources: AWS credits, latest tooling, learning budget

 

Strategic Pathfinder
Posted by Keerthana Rao
Indiranagar, Bangalore
2 - 5 yrs
₹6L - ₹9L / yr
React.js
Next.js
JavaScript
TypeScript
HTML/CSS
+4 more

Function: Product

Reports to: Founders

Location: Bangalore

Job type: 6 days WFO


Your Role at Pathfinder


As a Full Stack Developer, you'll be primarily responsible for architecting and developing the platform that will be our interface with the outside world. You'll develop the platform end-to-end so that it reflects who we are, our core capabilities, and what we have to offer early-stage startups.


What You’ll do

  • Own the entire platform end-to-end. Develop and enhance our website and client dashboards.
  • Convert UX/UI designs into modular, reusable and scalable components using Next.js and React.js 
  • Ensure applications are fully responsive and the visuals adapt to all resolutions, cross-browser and cross-device compatible. 
  • Optimize performance (lazy-loading, code-splitting) to reduce load times. 
  • Build backend systems, database that support CRUD operations. 
  • Develop database schemas, stored procedures, and queries using Notion or any other database systems. 
  • Maintain version-control workflows using Git, branch strategies and pull requests.
  • Stay abreast of emerging trends and propose improvements to our stack.
  • Collaborate with UI/UX designers to translate design mockups and wireframes into responsive, pixel-perfect web applications.


You’ll thrive here if you

  • 2-5 years’ hands-on experience in full stack development. 
  • Strong proficiency in React.js and Next.js frameworks. 
  • Solid understanding of JavaScript (ES6+), TypeScript, HTML5 and CSS3.
  • Ability to design the UI from scratch without using pre-existing CSS libraries.
  • Proven ability to debug complex issues and optimize performance.
  • Excellent communication skills in English, both verbal and written.
  • Strong problem-solving aptitude and attention to detail.


Good to Have

  • Experience with server-side rendering and static/dynamic site generation.
  • Familiarity with CI/CD pipelines (e.g. GitHub Actions, Jenkins) 
  • Basic understanding of backend technologies (Node.js, Express) 
  • Experience in performance profiling (Lighthouse, Chrome DevTools)


Working Style


We value speed, precision, and reliability in execution. To thrive here, you’ll bring:

  • Operator Mindset – Ability to think from multiple perspectives and execute diligently. 
  • Detail Discipline – Exhibit attention to detail, coherence and rigour in work.
  • Adaptive Creativity – Look beyond the obvious and bring your individual flavour in your work.
  • Builder's Accountability - When something breaks in production, you own it through resolution. You don't pass the bug.
  • Documentation Habit – You write code others can read, and you leave context behind you (Example - comments).
Aryush Infotech India Pvt Ltd
Posted by Nitin Gupta
Bengaluru (Bangalore), Bhopal
1 - 4 yrs
₹3L - ₹4L / yr
CI/CD
Selenium
Jenkins
Postman
JIRA
+14 more

Job Title: QA Tester – FinTech (Manual + Automation Testing)

Location: Bangalore, India

Job Type: Full-Time

Experience Required: 3 Years

Industry: FinTech / Financial Services

Function: Quality Assurance / Software Testing

 

About the Role:

We are looking for a skilled QA Tester with 3 years of experience in both manual and automation testing, ideally in the FinTech domain. The candidate will work closely with development and product teams to ensure that our financial applications meet the highest standards of quality, performance, and security.

 

Key Responsibilities:

  • Analyze business and functional requirements for financial products and translate them into test scenarios.
  • Design, write, and execute manual test cases for new features, enhancements, and bug fixes.
  • Develop and maintain automated test scripts using tools such as Selenium, TestNG, or similar frameworks.
  • Conduct API testing using Postman, Rest Assured, or similar tools.
  • Perform functional, regression, integration, and system testing across web and mobile platforms.
  • Work in an Agile/Scrum environment and actively participate in sprint planning, stand-ups, and retrospectives.
  • Log and track defects using JIRA or a similar defect management tool.
  • Collaborate with developers, BAs, and DevOps teams to improve quality across the SDLC.
  • Ensure test coverage for critical fintech workflows like transactions, KYC, lending, payments, and compliance.
  • Assist in setting up CI/CD pipelines for automated test execution using tools like Jenkins, GitLab CI, etc.

 

Required Skills and Experience:

  • 3+ years of hands-on experience in manual and automation testing.
  • Solid understanding of QA methodologies, STLC, and SDLC.
  • Experience in testing FinTech applications such as digital wallets, online banking, investment platforms, etc.
  • Strong experience with Selenium WebDriver, TestNG, Postman, and JIRA.
  • Knowledge of API testing, including RESTful services.
  • Familiarity with SQL to validate data in databases.
  • Understanding of CI/CD processes and basic scripting for automation integration.
  • Good problem-solving skills and attention to detail.
  • Excellent communication and documentation skills.
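The SQL data-validation skill listed above can be illustrated with an in-memory SQLite table standing in for the real transaction store. The table, columns, and validation rule are hypothetical examples of a fintech consistency check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(1, 100.0, "settled"), (2, -50.0, "settled"), (3, 75.0, "pending")],
)

# Validation rule: no settled transaction may carry a non-positive amount.
bad_rows = conn.execute(
    "SELECT id FROM transactions WHERE status = 'settled' AND amount <= 0"
).fetchall()
print(bad_rows)  # [(2,)] -- a defect to log and track
```

In practice the same query would run against the application's staging database after a test scenario, and any rows returned would be raised as defects in JIRA.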

 

Preferred Qualifications:

  • Exposure to financial compliance and regulatory testing (e.g., PCI DSS, AML/KYC).
  • Experience with mobile app testing (iOS/Android).
  • Working knowledge of test management tools like TestRail, Zephyr, or Xray.
  • Performance testing experience (e.g., JMeter, LoadRunner) is a plus.
  • Basic knowledge of version control systems (e.g., Git).


Timble Technologies
Posted by Preeti Bisht
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
1 - 4 yrs
₹2L - ₹5L / yr
Advanced Linux Admin
Ansible
Terraform
Docker
Jenkins
+7 more

Job Title: DevOps Engineer

Location: Delhi, Arjan Garh

Job Type: Full-Time

IMMEDIATE JOINERS REQUIRED

 

About Us:

Timble is a forward-thinking organization dedicated to leveraging cutting-edge technology to solve real-world problems. Our mission is to drive innovation and create impactful solutions through artificial intelligence and machine learning.


About the Role

We are looking for a high-ownership Senior DevOps Engineer to architect and maintain the mission-critical infrastructure supporting our global algorithmic trading operations. You will be the bridge between development and live trading, ensuring zero-latency performance and 100% system availability.

Key Responsibilities

  • Infrastructure Architecture: Design scalable, fault-tolerant systems for high-frequency trading environments.
  • Performance Optimization: Tune Linux servers and Python environments for maximum speed and efficiency.
  • Incident Management: Lead real-time response for live trading systems, performing RCA and preventive fixes.
  • Automation & CI/CD: Build and enhance robust pipelines using Docker, Jenkins, and Ansible.
  • Proactive Monitoring: Implement advanced logging and alerting (Prometheus/Grafana) to ensure high uptime.
  • Database Admin: Manage relational databases and write optimized SQL for operational reporting.
  • Mentorship: Guide junior DevOps members and maintain rigorous system documentation.

Technical Requirements

  • OS/Scripting: Advanced Linux Admin and expert-level Python scripting.
  • IaC & Tools: Hands-on experience with Ansible, Terraform, and Docker.
  • CI/CD: Proficiency in Jenkins or GitLab CI.
  • Data: Strong SQL skills with experience in performance tuning.
  • Education: B.Tech/M.Tech in Computer Science or related engineering field.
The client is an AI-powered Customer Data Platform (CDP) company

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹25L / yr
Test Automation (QA)
CI/CD
API
Programming
Java
+11 more

5+ years of experience in automation-focused QA roles

Strong programming skills in Java / Python / JavaScript

Experience in building automation testing frameworks

Strong knowledge of API testing and backend validation

Experience writing integration and concurrency tests

Hands-on experience with CI/CD tools (GitHub Actions, Jenkins, GitLab CI)

Good knowledge of SQL and data validation

Understanding of race conditions, retries, idempotency, and asynchronous systems
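The last point, idempotency under retries, is exactly the behaviour such tests assert: replaying a request with the same idempotency key must not apply its side effect twice. A sketch with invented names and an in-memory store:

```python
processed = {}          # idempotency key -> recorded result
balance = {"value": 0}  # the side effect under test

def credit(key, amount):
    if key in processed:       # duplicate delivery or client retry
        return processed[key]  # replay the recorded result, no second apply
    balance["value"] += amount
    processed[key] = f"credited {amount}"
    return processed[key]

credit("req-1", 10)
credit("req-1", 10)      # retried with the same idempotency key
print(balance["value"])  # 10, not 20
```

A concurrency test would hammer `credit` from multiple workers with the same key and assert the balance moved exactly once; real systems back the key store with a database unique constraint rather than a dict.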

NeoGenCode Technologies Pvt Ltd
Mumbai
5 - 10 yrs
₹12L - ₹24L / yr
DevOps
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Kubernetes
+12 more

Job Title: Senior DevOps Engineer (Only Mumbai Candidates)

Experience: 5+ Years

Location: Mumbai (On-site)

Notice Period: Immediate to 15 Days

Interview Process: 1 Internal Round + 1 Client Round


Mandatory Skills :

Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.


Role Overview :

We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.

The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.


Key Responsibilities :

  • Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
  • Deploy and manage microservices on Kubernetes clusters.
  • Build and maintain Infrastructure as Code using Terraform and Helm.
  • Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
  • Implement GitOps workflows using ArgoCD or FluxCD.
  • Ensure secure, scalable, and reliable DevOps architecture.
  • Implement monitoring and logging using Prometheus, Grafana, or ELK.

Good to Have :

  • Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
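The GitOps workflows mentioned above (ArgoCD/FluxCD) boil down to one loop: compare the desired state stored in Git with the live cluster state and act on the difference. A toy Python sketch of that reconciliation step, with hypothetical resource names:

```python
def reconcile(desired, live):
    """Return the actions a GitOps controller would take to make the
    live state match the desired state (both are name -> spec dicts)."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))
        elif live[name] != spec:
            actions.append(("update", name))
    for name in live:
        if name not in desired:
            actions.append(("delete", name))  # prune drifted resources
    return sorted(actions)

desired = {"api": {"replicas": 3}, "worker": {"replicas": 1}}
live = {"api": {"replicas": 2}, "legacy": {"replicas": 1}}
plan = reconcile(desired, live)
```

ArgoCD and FluxCD implement this loop against the Kubernetes API with pruning, health checks, and sync waves; the sketch only illustrates the diffing idea.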
Read more
Remote only
2 - 7 yrs
₹5L - ₹15L / yr
DevOps
CI/CD
skill iconDocker
skill iconKubernetes
skill iconAmazon Web Services (AWS)
+8 more

BluePMS Software Solutions Pvt Ltd is hiring a talented DevOps Engineer to join our growing engineering team. In this role, you will be responsible for building and maintaining scalable infrastructure, automating deployment processes, and improving the reliability of our software delivery pipelines.


Key Responsibilities:

 1: Design, build, and maintain CI/CD pipelines for faster and reliable deployments.

 2: Manage and monitor cloud infrastructure and servers.

 3: Automate build, testing, and deployment processes.

 4: Collaborate with development and QA teams to improve release cycles.

 5: Monitor system performance and ensure high availability and reliability.

 6: Troubleshoot infrastructure and deployment issues.

 7: Implement security best practices in DevOps workflows.


Required Skills:

 1: Strong understanding of DevOps principles and CI/CD pipelines.

 2: Experience with Docker, Kubernetes, or containerization technologies.

 3: Familiarity with cloud platforms such as AWS, Azure, or GCP.

 4: Experience with Git, Jenkins, GitHub Actions, or similar tools.

 5: Basic scripting knowledge (Bash, Python, or Shell).

 6: Good understanding of Linux systems and networking concepts.
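As one example of the scripting expected here (see responsibility 5 on monitoring and reliability), a small stdlib-only Python check that flags mounts above a disk-usage threshold; paths and thresholds are illustrative:

```python
import shutil
from collections import namedtuple

Alert = namedtuple("Alert", "mount used_pct threshold")

def check_disk(mounts, threshold_pct=80.0):
    """Return an Alert for every mount whose usage meets the threshold."""
    alerts = []
    for mount in mounts:
        usage = shutil.disk_usage(mount)
        used_pct = 100.0 * usage.used / usage.total
        if used_pct >= threshold_pct:
            alerts.append(Alert(mount, round(used_pct, 1), threshold_pct))
    return alerts

alerts = check_disk(["/"], threshold_pct=0.0)  # a 0% threshold flags every mount
```

In practice a cron job or node exporter would feed such numbers into the team's alerting stack rather than a standalone script.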


Eligibility:

 1: Experience: 2 – 7 years

 2: Qualification: Bachelor's degree in Computer Science, IT, or related field

 3: Strong analytical and problem-solving skills.


Location: Chennai / Remote


Apply here: https://connectsblue.com/jobs/753/devops-engineer-at-bluepms-software-solutions-pvt-ltd

Read more
MyOperator - VoiceTree Technologies

at MyOperator - VoiceTree Technologies

1 video
2 recruiters
Vijay Muthu
Posted by Vijay Muthu
Remote only
2 - 5 yrs
₹5L - ₹7L / yr
skill iconJava
Selenium
RestAssured
API
API Testing
+9 more

About Us: 

MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.


Job Overview:

We are looking for a skilled Quality Analyst with 3-5 years of experience in software quality assurance. The ideal candidate should have a strong understanding of testing methodologies, automation tools, and defect tracking to ensure high-quality software products. This is a fully remote role.


Key Responsibilities:

  • Develop and execute test plans, test cases, and test scripts for software products.
  • Conduct manual and automated testing to ensure reliability and performance.
  • Identify, document, and collaborate with developers to resolve defects and issues.
  • Report testing progress and results to stakeholders and management.
  • Improve automation testing processes for efficiency and accuracy.
  • Stay updated with the latest QA trends, tools, and best practices.
  • Apply good knowledge of mobile testing, working with BrowserStack tools.
  • Participate in Agile ceremonies such as sprint planning and daily stand-ups.


Requirements:

  • 2-5 years of experience in software quality assurance.
  • Strong understanding of testing methodologies and automated testing.
  • Proficiency in Selenium, Rest Assured, Java, and API Testing (mandatory).
  • Proficiency with Appium, TestNG, defect tracking, and version control tools (mandatory).
  • Knowledge of XPath locators (mandatory).
  • Familiarity with GitLab, Jenkins, CI/CD tools, defect tracking, and version control tools.
  • Strong problem-solving, analytical, and debugging skills.
  • Excellent communication and collaboration abilities.
  • Detail-oriented with a commitment to delivering high-quality results.
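The stack asked for here is Java-based (Selenium, Rest Assured), but the backend-validation idea is language-neutral. A hypothetical Python sketch of checking an API response against an expected field schema:

```python
def validate_response(resp, required):
    """Return validation errors for an API response dict.
    `required` maps field name -> expected Python type."""
    errors = []
    for field, ftype in required.items():
        if field not in resp:
            errors.append(f"missing field: {field}")
        elif not isinstance(resp[field], ftype):
            errors.append(f"wrong type for {field}: {type(resp[field]).__name__}")
    return errors

schema = {"id": int, "status": str, "items": list}
good_errors = validate_response({"id": 7, "status": "ok", "items": []}, schema)
bad_errors = validate_response({"id": "7", "status": "ok"}, schema)
```

In a Rest Assured suite the same checks would be expressed as matchers on the JSON body; the point is that backend validation asserts structure and types, not just HTTP status codes.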

Why Join Us?

  • Fully remote work with flexible hours.
  • Exposure to industry-leading technologies and practices.
  • Collaborative team culture with growth opportunities.
  • Work with top brands and innovative projects.
Read more
Thinqor
sai patel
Posted by sai patel
Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹20L / yr
MLOps
Windows Azure
skill iconKubernetes
aks
aro
+3 more

 Hiring: Cloud Engineer – MLOps Platform 🚨

📍 Location: Bangalore

🧠 Experience: 5–8 Years

We are looking for an experienced Cloud Engineer to support ML teams and drive end-to-end automation for model deployment across modern cloud platforms.

🔹 Tech Stack:

Azure | Databricks | AKS | ARO | Terraform | MLflow | CI/CD

🔹 Key Responsibilities:

• Build and maintain CI/CD and Continuous Training (CT) pipelines using Azure DevOps, GitHub Actions, or Jenkins.

• Deploy Databricks jobs, MLflow models, and microservices on AKS / ARO environments.

• Automate infrastructure using Terraform and GitOps practices.

• Manage Databricks workspaces, AKS clusters, and networking configurations.

• Implement monitoring, logging, and alerting systems for ML workloads.

• Ensure cloud security, governance, and cost optimization best practices.
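Monitoring for ML workloads usually includes model-health checks alongside infrastructure metrics. A toy drift check (numbers hypothetical; production setups use PSI, KL divergence, or similar) that flags when a feature's mean moves away from its training baseline:

```python
from statistics import mean

def mean_shift(baseline, current, tolerance=0.1):
    """Flag drift when the current mean moves more than `tolerance`
    (as a fraction) away from the training baseline mean."""
    b, c = mean(baseline), mean(current)
    shift = abs(c - b) / abs(b) if b else abs(c)
    return {"baseline": b, "current": c,
            "shift": round(shift, 3), "drifted": shift > tolerance}

stable = mean_shift([10, 11, 9, 10], [10, 10, 11, 9])
moved = mean_shift([10, 11, 9, 10], [14, 15, 16, 13])
```

A Continuous Training pipeline would typically route a `drifted` result into a retraining trigger or an alert rather than acting on it directly.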

🔹 Required Skills:

✔ Strong hands-on experience with Azure, AKS, ARO, and Databricks

✔ Experience with MLflow and Kubernetes-based deployments

✔ Proficiency in Python and Bash / PowerShell scripting

✔ Strong understanding of cloud security, infrastructure automation, and distributed systems

Read more
LogIQ Labs Pvt.Ltd.

at LogIQ Labs Pvt.Ltd.

2 recruiters
HR eShipz
Posted by HR eShipz
Remote only
7 - 9 yrs
₹8L - ₹16L / yr
skill iconAmazon Web Services (AWS)
Terraform
skill iconJenkins

Key Responsibilities

  • Design, implement, and maintain highly available infrastructure on AWS.
  • Automate infrastructure provisioning using Terraform (Infrastructure as Code).
  • Define and monitor SLIs, SLOs, and error budgets to improve service reliability.
  • Build and manage CI/CD pipelines to enable safe and frequent deployments.
  • Implement robust monitoring, alerting, and logging solutions.
  • Perform incident response, root cause analysis (RCA), and postmortems.
  • Improve system resilience through automation and self-healing mechanisms.
  • Optimize cloud resource utilization and cost (FinOps awareness).
  • Collaborate with development teams to improve application reliability.
  • Manage containerized workloads using Docker and Kubernetes (EKS preferred).
  • Implement security and compliance best practices across infrastructure.
  • Maintain operational runbooks and documentation.
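To illustrate the SLI/SLO/error-budget responsibility with hypothetical numbers: for an availability SLO, the error budget is simply the fraction of requests the target allows to fail. A minimal Python calculation:

```python
def error_budget(slo_target, total_requests, failed_requests):
    """Remaining error budget for an availability SLO (e.g. 0.999 = 99.9%)."""
    allowed = total_requests * (1.0 - slo_target)
    remaining = allowed - failed_requests
    return {
        "allowed_failures": allowed,
        "consumed_pct": round(100.0 * failed_requests / allowed, 1) if allowed else None,
        "remaining": remaining,
        "exhausted": remaining < 0,
    }

# 1M requests under a 99.9% SLO allow ~1,000 failures; 600 have failed so far.
budget = error_budget(slo_target=0.999, total_requests=1_000_000, failed_requests=600)
```

Teams commonly gate risky deployments on the remaining budget: once it is exhausted, release velocity slows until reliability recovers.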

Required Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 7–8 years of experience in SRE, DevOps, or Production Engineering.
  • Strong hands-on experience with AWS services.
  • Proven experience with Terraform for infrastructure automation.
  • Experience building CI/CD pipelines (GitHub Actions, Jenkins, or similar).
  • Strong scripting skills (Python, Bash, or Shell).
  • Experience with Linux system administration.
  • Hands-on experience with monitoring and observability tools.
  • Good understanding of networking and cloud security fundamentals.
  • Experience with Git and branching strategies


Read more
WITS Innovation Lab
Prabhnoor Kaur
Posted by Prabhnoor Kaur
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 5 yrs
₹3L - ₹7L / yr
Terraform
skill iconKubernetes
skill iconJenkins
Ansible
skill iconAmazon Web Services (AWS)
+8 more

We are looking for a skilled DevOps Engineer with hands-on experience in cloud platforms, CI/CD pipelines, container orchestration, and infrastructure automation. The ideal candidate is someone who loves solving reliability challenges, automating everything, and ensuring seamless delivery across environments.

Key Responsibilities

  • Design, implement, and maintain CI/CD pipelines using GitHub Actions, Jenkins, and GitHub.
  • Manage and optimize infrastructure on AWS/GCP, ensuring scalability, security, and high availability.
  • Deploy and manage containerized applications using Docker and Kubernetes.
  • Build, automate, and manage infrastructure as code using Terraform.
  • Configure and manage automation tools and workflows using Ansible.
  • Monitor system performance, troubleshoot production issues, and ensure smooth operations.
  • Implement best practices for code management, release processes, and DevOps standards.
  • Collaborate closely with development teams to improve build pipelines and deployment workflows.
  • Write scripts in Python/Bash to automate operational tasks.

Required Skills & Experience

  • 2+ years of hands-on experience as a DevOps Engineer or in a similar role.
  • Strong expertise in AWS or GCP cloud services.
  • Solid understanding of Kubernetes (deployment, scaling, service mesh, packaging).
  • Proficiency with Terraform for infrastructure automation.
  • Experience with Git, GitHub, and GitHub Actions for source control and CI/CD.
  • Good knowledge of Jenkins pipelines and automation.
  • Hands-on experience with Ansible for configuration management.
  • Strong scripting skills using Python or Bash.
  • Understanding of monitoring, logging, and security best practices.
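One concrete instance of the "scripts in Python/Bash to automate operational tasks" responsibility above is release automation. A small, hypothetical helper that computes the next semantic-version tag for a release pipeline:

```python
def next_version(tag, bump):
    """Compute the next semantic version tag. `bump` is 'major',
    'minor', or 'patch'; tags look like 'v1.4.2'."""
    major, minor, patch = (int(p) for p in tag.lstrip("v").split("."))
    if bump == "major":
        return f"v{major + 1}.0.0"
    if bump == "minor":
        return f"v{major}.{minor + 1}.0"
    if bump == "patch":
        return f"v{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump: {bump}")

tag = next_version("v1.4.2", "minor")  # → "v1.5.0"
```

A CI job would read the latest Git tag, call such a helper, and push the new tag to trigger the deployment workflow.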


Read more
BestQ
Sudha S
Posted by Sudha S
Jaipur, Faridabad, West Bengal, Odisha, Rajasthan, Chandigarh, Nashik, Pune
1 - 5 yrs
₹1L - ₹8L / yr
skill iconJava
skill iconJavascript
skill iconPython
TestNG
Selenium
+18 more

Job Title: QA Engineer – Manual & Automation Testing

We are looking for a detail-oriented QA Engineer (Manual & Automation) to join a fast-paced product team. In this role, you will be responsible for ensuring the quality, reliability, and performance of a modern SaaS platform through structured manual and automation testing. You will work closely with developers, product managers, and stakeholders to deliver high-quality software releases on time.

What You’ll Be Doing

● Design, review, and execute detailed manual and automated test cases for web-based applications

● Perform functional, regression, smoke, sanity, and end-to-end testing across multiple modules

● Identify, log, and track defects clearly, ensuring proper follow-up and closure

● Validate bug fixes and feature enhancements before production releases

● Collaborate closely with developers to understand requirements and resolve issues efficiently

● Participate in requirement and design reviews to provide early QA feedback

● Maintain and update test cases, test scenarios, and automation scripts based on product changes

● Contribute to the continuous improvement of QA processes, test coverage, and release quality

Read more
Technology Industry

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
7 - 10 yrs
₹20L - ₹40L / yr
DevOps
skill iconAmazon Web Services (AWS)
CI/CD
Linux/Unix
skill iconGitHub
+19 more

Description

SRE Engineer


Role Overview 

As a Site Reliability Engineer, you will play a critical role in ensuring the availability and performance of our customer-facing platform. You will work closely with DevOps, DBA, and Development teams to provision and maintain infrastructure, deploy and monitor our applications, and automate workflows. Your contributions will have a direct impact on customer satisfaction and overall experience.


Responsibilities and Deliverables

• Manage, monitor, and maintain highly available systems (Windows and Linux)

• Analyze metrics and trends to ensure rapid scalability.

• Address routine service requests while identifying ways to automate and simplify.

• Create infrastructure as code using Terraform, ARM Templates, and CloudFormation.

• Maintain data backups and disaster recovery plans.

• Design and deploy CI/CD pipelines using GitHub Actions, Octopus, Ansible, Jenkins, Azure DevOps.

• Adhere to security best practices through all stages of the software development lifecycle

• Follow and champion ITIL best practices and standards.

• Become a resource for emerging and existing cloud technologies with a focus on AWS.


Organizational Alignment

• Reports to the Senior SRE Manager

• This role involves close collaboration with DevOps, DBA, and security teams.


Technical Proficiencies

• Hands-on experience with AWS is a must-have.

• Proficiency in analyzing application, IIS, system, and security logs, as well as CloudTrail events

• Practical experience with CI/CD tools such as GitHub Actions, Jenkins, Octopus

• Experience with observability tools such as New Relic, Application Insights, AppDynamics, or DataDog.

• Experience maintaining and administering Windows, Linux, and Kubernetes.

• Experience in automation using scripting languages such as Bash, PowerShell, or Python.

• Configuration management experience using Ansible, Terraform, Azure Automation Run book or similar.

• Experience with SQL Server database maintenance and administration is preferred.

• Good Understanding of networking (VNET, subnet, private link, VNET peering).

• Familiarity with cloud concepts including certificates, OAuth, Azure AD, ASE, ASP, AKS, Azure Apps, Load Balancers, Application Gateway, Firewall, API Management, SQL Server, and databases on Azure


Experience

• 7+ years of experience in SRE or System Administration role

• Demonstrated ability to build and support high-availability Windows/Linux servers, with emphasis on the WISA stack (Windows/IIS/SQL Server/ASP.NET)

• 3+ years of experience working with cloud technologies including AWS, Azure.

• 1+ years of experience working with container technology including Docker and Kubernetes.

• Comfortable using Scrum, Kanban, or Lean methodologies.


Education

• Bachelor’s Degree or College Diploma in Computer Science, Information Systems, or equivalent experience.


Additional Job Details:

• Working hours: 2:00 PM / 3:00 PM to 11:30 PM IST

• Interview process: 3 technical rounds

• Work model: 3 days per week from office


Read more
Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Thiruvananthapuram
9 - 12 yrs
₹21L - ₹27L / yr
skill iconJava
Spring
Apache Kafka
SQL
skill iconPostgreSQL
+16 more

JOB DETAILS:

Job Title: Java Lead-Java, MS, Kafka-TVM - Java (Core & Enterprise), Spring/Micronaut, Kafka

Industry: Global Digital Transformation Solutions Provider

Salary: Best in Industry

Experience: 9 to 12 years

Location: Trivandrum, Thiruvananthapuram

 

Job Description

Experience

  • 9+ years of experience in Java-based backend application development
  • Proven experience building and maintaining enterprise-grade, scalable applications
  • Hands-on experience working with microservices and event-driven architectures
  • Experience working in Agile and DevOps-driven development environments

 

Mandatory Skills

  • Advanced proficiency in core Java and enterprise Java concepts
  • Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
  • Strong expertise in SQL, including database design, query optimization, and performance tuning
  • Hands-on experience with PostgreSQL or other relational database management systems
  • Strong experience with Kafka or similar event-driven messaging and streaming platforms
  • Practical knowledge of CI/CD pipelines using GitLab
  • Experience with Jenkins for build automation and deployment processes
  • Strong understanding of GitLab for source code management and DevOps workflows

 

Responsibilities

  • Design, develop, and maintain robust, scalable, and high-performance backend solutions
  • Develop and deploy microservices using Spring or Micronaut frameworks
  • Implement and integrate event-driven systems using Kafka
  • Optimize SQL queries and manage PostgreSQL databases for performance and reliability
  • Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
  • Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
  • Ensure code quality through best practices, reviews, and automated testing

 

Good-to-Have Skills

  • Strong problem-solving and analytical abilities
  • Experience working with Agile development methodologies such as Scrum or Kanban
  • Exposure to cloud platforms such as AWS, Azure, or GCP
  • Familiarity with containerization and orchestration tools such as Docker or Kubernetes

 

Skills: Java, Spring Boot, Kafka development, CI/CD, PostgreSQL, GitLab

 

Must-Haves

Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)

Advanced proficiency in core Java and enterprise Java concepts

Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications

Strong expertise in SQL, including database design, query optimization, and performance tuning

Hands-on experience with PostgreSQL or other relational database management systems

Strong experience with Kafka or similar event-driven messaging and streaming platforms

Practical knowledge of CI/CD pipelines using GitLab

Experience with Jenkins for build automation and deployment processes

Strong understanding of GitLab for source code management and DevOps workflows

 

 

*******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: only Trivandrum

F2F Interview on 21st Feb 2026

 

Read more
Global digital transformation solutions provider.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
5 - 8 yrs
₹11L - ₹20L / yr
PySpark
Apache Kafka
Data architecture
skill iconAmazon Web Services (AWS)
EMR
+32 more

JOB DETAILS:

* Job Title: Lead II - Software Engineering - AWS, Apache Spark (PySpark/Scala), Apache Kafka

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 5-8 years

* Location: Hyderabad

 

Job Summary

We are seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and cloud-based data platforms. The role involves working with large-scale batch and real-time data processing systems, collaborating with cross-functional teams, and ensuring data reliability, security, and performance across the data lifecycle.


Key Responsibilities

ETL Pipeline Development & Optimization

  • Design, develop, and maintain complex end-to-end ETL pipelines for large-scale data ingestion and processing.
  • Optimize data pipelines for performance, scalability, fault tolerance, and reliability.

Big Data Processing

  • Develop and optimize batch and real-time data processing solutions using Apache Spark (PySpark/Scala) and Apache Kafka.
  • Ensure fault-tolerant, scalable, and high-performance data processing systems.

Cloud Infrastructure Development

  • Build and manage scalable, cloud-native data infrastructure on AWS.
  • Design resilient and cost-efficient data pipelines adaptable to varying data volume and formats.

Real-Time & Batch Data Integration

  • Enable seamless ingestion and processing of real-time streaming and batch data sources (e.g., AWS MSK).
  • Ensure consistency, data quality, and a unified view across multiple data sources and formats.

Data Analysis & Insights

  • Partner with business teams and data scientists to understand data requirements.
  • Perform in-depth data analysis to identify trends, patterns, and anomalies.
  • Deliver high-quality datasets and present actionable insights to stakeholders.

CI/CD & Automation

  • Implement and maintain CI/CD pipelines using Jenkins or similar tools.
  • Automate testing, deployment, and monitoring to ensure smooth production releases.

Data Security & Compliance

  • Collaborate with security teams to ensure compliance with organizational and regulatory standards (e.g., GDPR, HIPAA).
  • Implement data governance practices ensuring data integrity, security, and traceability.

Troubleshooting & Performance Tuning

  • Identify and resolve performance bottlenecks in data pipelines.
  • Apply best practices for monitoring, tuning, and optimizing data ingestion and storage.

Collaboration & Cross-Functional Work

  • Work closely with engineers, data scientists, product managers, and business stakeholders.
  • Participate in agile ceremonies, sprint planning, and architectural discussions.


Skills & Qualifications

Mandatory (Must-Have) Skills

  1. AWS Expertise
  • Hands-on experience with AWS Big Data services such as EMR, Managed Apache Airflow, Glue, S3, DMS, MSK, and EC2.
  • Strong understanding of cloud-native data architectures.
  2. Big Data Technologies
  • Proficiency in PySpark or Scala Spark and SQL for large-scale data transformation and analysis.
  • Experience with Apache Spark and Apache Kafka in production environments.
  3. Data Frameworks
  • Strong knowledge of Spark DataFrames and Datasets.
  4. ETL Pipeline Development
  • Proven experience in building scalable and reliable ETL pipelines for both batch and real-time data processing.
  5. Database Modeling & Data Warehousing
  • Expertise in designing scalable data models for OLAP and OLTP systems.
  6. Data Analysis & Insights
  • Ability to perform complex data analysis and extract actionable business insights.
  • Strong analytical and problem-solving skills with a data-driven mindset.
  7. CI/CD & Automation
  • Basic to intermediate experience with CI/CD pipelines using Jenkins or similar tools.
  • Familiarity with automated testing and deployment workflows.
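A recurring detail behind the real-time/batch integration requirements above: Kafka-style sources deliver at-least-once, so ingest pipelines deduplicate replayed records. In Spark this is typically a window over the event key; the idea in a pure-Python sketch (field names hypothetical):

```python
def dedupe_events(events):
    """Keep the latest record per event id (by 'ts'), as an ingest
    pipeline would when a source replays messages."""
    latest = {}
    for ev in events:
        eid = ev["id"]
        if eid not in latest or ev["ts"] > latest[eid]["ts"]:
            latest[eid] = ev
    return sorted(latest.values(), key=lambda e: e["id"])

raw = [
    {"id": 1, "ts": 10, "value": "a"},
    {"id": 2, "ts": 11, "value": "b"},
    {"id": 1, "ts": 12, "value": "a2"},  # replayed; the newer record wins
]
clean = dedupe_events(raw)
```

The same last-write-wins rule gives downstream consumers a consistent, unified view across duplicate or out-of-order deliveries.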

 

Good-to-Have (Preferred) Skills

  • Knowledge of Java for data processing applications.
  • Experience with NoSQL databases (e.g., DynamoDB, Cassandra, MongoDB).
  • Familiarity with data governance frameworks and compliance tooling.
  • Experience with monitoring and observability tools such as AWS CloudWatch, Splunk, or Dynatrace.
  • Exposure to cost optimization strategies for large-scale cloud data platforms.

 

Skills: Big Data, Scala Spark, Apache Spark, ETL pipeline development

 

******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: Hyderabad

Note: If a candidate is a short joiner, based in Hyderabad, and fits within the approved budget, we will proceed with an offer

F2F Interview: 14th Feb 2026

3 days in office, Hybrid model.

 


Read more
MindInventory

at MindInventory

1 video
Uzer Khan
Posted by Uzer Khan
Ahmedabad
3 - 6 yrs
₹3L - ₹8L / yr
Windows Azure
skill iconKubernetes
skill iconDocker
skill icongrafana
Terraform
+1 more
  • 3+ years hands-on Azure cloud & automation experience.
  • Experience managing high-availability enterprise systems.
  • Microsoft Azure (AKS, VNets, App Gateway, Load Balancers).
  • Kubernetes (AKS) & Docker.
  • Networking (VPN, DNS, routing, firewalls, NSGs).
  • Infra-as-Code (Terraform / Bicep optional).
  • Monitoring tools: Azure Monitor, Grafana, Prometheus.
  • CI/CD: Azure DevOps, GitLab/Jenkins (added advantage).
  • Security: Key Vault, certificates, encryption, RBAC.
  • Understanding of PostgreSQL/PostGIS networking.
  • Design and manage Azure infrastructure (VMs, VNets, NSGs, Load Balancers, AKS, Storage).
  • Deploy and maintain AKS workloads for NiFi, PostGIS, and microservices.
  • Architect secure network topology including VNet peering, VPNs, Private Endpoints, DNS & Zero Trust policies.
  • Implement monitoring and alerting using Azure Monitor, Log Analytics, Grafana & Prometheus.
  • Ensure high uptime, DR planning, backup and failover strategies.
  • Automate deployments with Azure DevOps, Helm, ArgoCD & GitOps principles.
  • Enforce security, RBAC, compliance, and audit standards across environments.
  • Good to have: knowledge/experience in Linux administration (Ubuntu/Debian).


Read more
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
4 - 7 yrs
Best in industry
skill iconHTML/CSS
skill iconBootstrap
skill iconJavascript
skill iconReact.js
SaaS
+7 more

Job Location: Kharadi, Pune

Job Type: Full-Time

About Us:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in product development and have 10 years of experience building web and mobile applications across various domains. NonStop io follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the "givers gain" philosophy and strive to provide value in order to seek value. We are committed to delivering top-notch solutions to our clients and are looking for a talented Web UI Developer to join our dynamic team.


Qualifications:

  • Strong Experience in JavaScript and React
  • Experience in building multi-tier SaaS applications with exposure to micro-services, caching, pub-sub, and messaging technologies
  • Experience with design patterns
  • Familiarity with UI components library (such as material-UI or Bootstrap) and RESTful APIs
  • Experience with web frontend technologies such as HTML5, CSS3, LESS, Bootstrap
  • A strong foundation in computer science, with competencies in data structures, algorithms, and software design
  • Bachelor's / Master's Degree in CS
  • Experience in Git is mandatory
  • Exposure to AWS, Docker, and CI/CD systems like Jenkins is a plus
Read more
Tradelab Technologies

at Tradelab Technologies

1 candid answer
Aakanksha Yadav
Posted by Aakanksha Yadav
Bengaluru (Bangalore)
3 - 8 yrs
₹7L - ₹25L / yr
CI/CD
skill iconJenkins
skill icongrafana
Terraform

Job Location: Bangalore/Mumbai

Exp: 3-10+ Yrs

Job Title: DevOps Engineer


About TradeLab:

TradeLab is a leading fintech technology provider, delivering cutting-edge solutions to brokers, banks, and fintech platforms. Our portfolio includes high-performance Order & Risk Management Systems (ORMS), seamless MetaTrader integrations, AI-driven customer engagement platforms such as PULSE LLaVA, and compliance-grade risk management solutions.


Key Responsibilities

  • DevOps Strategy & Leadership: Contribute to defining and executing the DevOps strategy for high-frequency trading and fintech platforms. Mentor junior engineers and collaborate with cross-functional teams to foster a culture of automation, scalability, and performance. Work closely with engineering and product teams to align infrastructure initiatives with business objectives.
  • CI/CD & Infrastructure Automation: Design and optimize CI/CD pipelines for ultra-low-latency trading systems. Implement Infrastructure as Code (IaC) practices using Terraform, Helm, Kubernetes, and automation frameworks. Establish best practices for release management and deployment in mission-critical environments.
  • Cloud & On-Prem Infrastructure Management: Manage hybrid infrastructure across AWS, GCP, and on-prem data centers, ensuring high availability and fault tolerance. Implement networking strategies for low-latency trading, including routing and performance tuning. Drive cost optimization and scalability initiatives across multi-cloud environments.
  • Performance Monitoring & Optimization: Set up and maintain system performance monitoring using Prometheus, Grafana, and the ELK stack. Implement alerting and automated remediation strategies for zero-downtime operations. Conduct root-cause analysis and performance tuning for systems handling millions of transactions per second.
  • Security & Compliance: Apply DevSecOps principles across all environments. Ensure compliance with financial regulations (SEBI and global standards) and maintain audit trails. Drive security automation, vulnerability management, and IAM policies.


Required Skills & Qualifications

  • 3–8 years of experience in DevOps, with exposure to leadership or team mentoring.
  • Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD). Hands-on experience with cloud platforms (AWS, GCP) and hybrid infrastructure.
  • Proficiency in Kubernetes, Docker, and container orchestration. Solid experience with Terraform, Helm, and IaC principles.
  • Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls). Experience with monitoring tools (Prometheus, Grafana, ELK).
  • Proficiency in scripting languages (Python, Bash, Go) for automation. Understanding of security best practices, IAM, and compliance frameworks.


Good to Have

  • Exposure to ultra-low-latency trading infrastructure or high-frequency trading systems.
  • Knowledge of FIX protocol, FPGA acceleration, or network optimization techniques.
  • Familiarity with Redis, Nginx, or other real-time data handling technologies.
  • Experience in advanced performance tuning for microsecond-level execution.


Why Join Us?

Work with a team that expects and delivers excellence. A culture where innovation and speed are rewarded. Limitless opportunities for growth—if you can handle the pace. Build systems that move markets and redefine fintech

Read more
Global digital transformation solutions provider

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Kochi (Cochin), Chennai, Thiruvananthapuram
5 - 7 yrs
₹19L - ₹28L / yr
skill iconJava
skill iconSpring Boot
Microservices
Architecture
Google Cloud Platform (GCP)
+22 more

Job Details

- Job Title: Lead I - Software Engineering - Java, Spring Boot, Microservices

- Industry: Global digital transformation solutions provider

- Domain - Information technology (IT)

- Experience Required: 5-7 years

- Employment Type: Full Time

- Job Location: Trivandrum, Chennai, Kochi, Thiruvananthapuram

- CTC Range: Best in Industry

 

Job Description

Job Title: Senior Java Developer

Experience: 5+ years

Job Summary:

We are looking for a Senior Java Developer with strong experience in Spring Boot and Microservices to work on high-performance applications for a leading financial services client. The ideal candidate will have deep expertise in Java backend development, cloud (preferably GCP), and strong problem-solving abilities.

 

Key Responsibilities:

• Develop and maintain Java-based microservices using Spring Boot

• Collaborate with Product Owners and teams to gather and review requirements

• Participate in design reviews, code reviews, and unit testing

• Ensure application performance, scalability, and security

• Contribute to solution architecture and design documentation

• Support Agile development processes including daily stand-ups and sprint planning

• Mentor junior developers and lead small modules or features

 

Required Skills:

• Java, Spring Boot, Microservices architecture

• GCP (or other cloud platforms like AWS)

• REST/SOAP APIs, Hibernate, SQL, Tomcat

• CI/CD tools: Jenkins, Bitbucket

• Agile methodologies (Scrum/Kanban)

• Unit testing (JUnit), debugging and troubleshooting

• Good communication and team leadership skills

 

Preferred Skills:

• Frontend familiarity (Angular, AJAX)

• Experience with API documentation tools (Swagger)

• Understanding of design patterns and UML

• Exposure to Confluence, Jira

 

Mandatory Skills Required:

Strong proficiency in Java, Spring Boot, Microservices, and GCP/AWS.

Experience Required: Minimum 5+ years of relevant experience

Java/J2EE (5+ years), Spring/Spring Boot (5+ years), Microservices (5+ years), AWS/GCP/Azure (mandatory), CI/CD (Jenkins, SonarQube, Git)


 

******

Notice period - 0 to 15 days only (immediate joiners or candidates able to join by February)

Job stability is mandatory

Location: Trivandrum, Kochi, Chennai

Virtual Interview - 14th Feb 2026

AdTech Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
7 - 12 yrs
₹40L - ₹80L / yr
Machine Learning (ML)
Apache Spark
Apache Airflow
Python
Amazon Web Services (AWS)
+23 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, with recent companies
  • Company background: product companies preferred; exceptions considered for service-company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you okay for F2F round?
  • Has the candidate filled the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.


MyOperator - VoiceTree Technologies


Posted by Vijay Muthu
Remote only
3.5 - 5 yrs
₹14L - ₹20L / yr
Python
Django
MySQL
PostgreSQL
FastAPI
+22 more

About Us:

MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.

Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.


Role Overview:

We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.


Key Responsibilities:

  • Develop robust backend services using Python, Django, and FastAPI
  • Design and maintain a scalable microservices architecture
  • Integrate LangChain/LLMs into AI-powered features
  • Write clean, tested, and maintainable code with pytest
  • Manage and optimize databases (MySQL/Postgres)
  • Deploy and monitor services on AWS
  • Collaborate across teams to define APIs, data flows, and system architecture

Must-Have Skills:

  • Python and Django
  • MySQL or Postgres
  • Microservices architecture
  • AWS (EC2, RDS, Lambda, etc.)
  • Unit testing using pytest
  • LangChain or Large Language Models (LLM)
  • Strong grasp of Data Structures & Algorithms
  • AI coding assistant tools (e.g., ChatGPT and Gemini)

Good to Have:

  • MongoDB or ElasticSearch
  • Go or PHP
  • FastAPI
  • React, Bootstrap (basic frontend support)
  • ETL pipelines, Jenkins, Terraform

Why Join Us?

  • 100% Remote role with a collaborative team
  • Work on AI-first, high-scale SaaS products
  • Drive real impact in a fast-growing tech company
  • Ownership and growth from day one


Global digital transformation solutions provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
6 - 9 yrs
₹13L - ₹22L / yr
Web API
C#
.NET
Amazon Web Services (AWS)
Agile/Scrum
+19 more

JOB DETAILS:

* Job Title: Lead I - Web API, C# .NET, .NET Core, AWS (Mandatory)

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 6-9 years

* Location: Hyderabad

Job Description

Role Overview

We are looking for a highly skilled Senior .NET Developer who has strong experience in building scalable, high‑performance backend services using .NET Core and C#, with hands‑on expertise in AWS cloud services. The ideal candidate should be capable of working in an Agile environment, collaborating with cross‑functional teams, and contributing to both design and development. Experience with React and Datadog monitoring tools will be an added advantage.

 

Key Responsibilities

  • Design, develop, and maintain backend services and APIs using .NET Core and C#.
  • Work with AWS services (Lambda, S3, ECS/EKS, API Gateway, RDS, etc.) to build cloud‑native applications.
  • Collaborate with architects and senior engineers on solution design and implementation.
  • Write clean, scalable, and well‑documented code.
  • Use Postman to build and test RESTful APIs.
  • Participate in code reviews and provide technical guidance to junior developers.
  • Troubleshoot and optimize application performance.
  • Work closely with QA, DevOps, and Product teams in an Agile setup.
  • (Optional) Contribute to frontend development using React.
  • (Optional) Use Datadog for monitoring, logging, and performance metrics.

 

Required Skills & Experience

  • 6+ years of experience in backend development.
  • Strong proficiency in C# and .NET Core.
  • Experience building RESTful services and microservices.
  • Hands‑on experience with AWS cloud platform.
  • Solid understanding of API testing using Postman.
  • Knowledge of relational databases (SQL Server, PostgreSQL, etc.).
  • Strong problem‑solving and debugging skills.
  • Experience working in Agile/Scrum teams.

 

Good to Have

  • Experience with React for frontend development.
  • Exposure to Datadog for monitoring and logging.
  • Knowledge of CI/CD tools (GitHub Actions, Jenkins, AWS CodePipeline, etc.).
  • Containerization experience (Docker, Kubernetes).

 

Soft Skills

  • Strong communication and collaboration abilities.
  • Ability to work in a fast‑paced environment.
  • Ownership mindset with a focus on delivering high‑quality solutions.

 

Skills

.NET Core, C#, AWS, Postman

 

Notice period - 0 to 15 days only

Location: Hyderabad

Virtual Interview: 7th Feb 2026

First round will be Virtual

2nd round will be F2F

Global digital transformation solutions provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 10 yrs
₹8L - ₹20L / yr
Automated testing
Software Testing (QA)
Mobile App Testing (QA)
Web applications
JavaScript
+17 more

JOB DETAILS:

* Job Title: Tester III - Software Testing- Playwright + API testing

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 4-10 years

* Location: Hyderabad

Job Description

Responsibilities:

  • Design, develop, and maintain automated test scripts for web applications using Playwright.
  • Perform API testing using industry-standard tools and frameworks.
  • Collaborate with developers, product owners, and QA teams to ensure high-quality releases.
  • Analyze test results, identify defects, and track them to closure.
  • Participate in requirement reviews, test planning, and test strategy discussions.
  • Ensure automation coverage, maintain reusable test frameworks, and optimize execution pipelines.

 

Required Experience:

  • Strong hands-on experience in Automation Testing for web-based applications.
  • Proven expertise in Playwright (JavaScript, TypeScript, or Python-based scripting).
  • Solid experience in API testing (Postman, REST Assured, or similar tools).
  • Good understanding of software QA methodologies, tools, and processes.
  • Ability to write clear, concise test cases and automation scripts.
  • Experience with CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) is an added advantage.

 

Good to Have:

  • Knowledge of cloud environments (AWS/Azure)
  • Experience with version control tools like Git
  • Familiarity with Agile/Scrum methodologies

 

Skills: Automation Testing, SQL, API Testing, SoapUI Testing, Playwright

Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Kochi (Cochin), Thiruvananthapuram
5 - 7 yrs
₹14L - ₹28L / yr
Spring Boot
Microservices
Java
J2EE
Spring
+26 more

JOB DETAILS:

Job Role: Lead I - Software Engineering - Java, Spring Boot, Microservices

Industry: Global digital transformation solutions provider

Work Mode: 3 days in office, Hybrid model. 

Salary: Best in Industry

Experience: 5-7 years

Location: Trivandrum, Kochi, Thiruvananthapuram


Job Description

Job Title: Senior Java Developer

Experience: 5+ years

Job Summary: We are looking for a Senior Java Developer with strong experience in Spring Boot and Microservices to work on high-performance applications for a leading financial services client. The ideal candidate will have deep expertise in Java backend development, cloud (preferably GCP), and strong problem-solving abilities.

 

Key Responsibilities:

• Develop and maintain Java-based microservices using Spring Boot

• Collaborate with Product Owners and teams to gather and review requirements

• Participate in design reviews, code reviews, and unit testing

• Ensure application performance, scalability, and security

• Contribute to solution architecture and design documentation

• Support Agile development processes including daily stand-ups and sprint planning

• Mentor junior developers and lead small modules or features

 

Required Skills:

• Java, Spring Boot, Microservices architecture

• GCP (or other cloud platforms like AWS)

• REST/SOAP APIs, Hibernate, SQL, Tomcat

• CI/CD tools: Jenkins, Bitbucket

• Agile methodologies (Scrum/Kanban)

• Unit testing (JUnit), debugging and troubleshooting

• Good communication and team leadership skills

 

Preferred Skills:

• Frontend familiarity (Angular, AJAX)

• Experience with API documentation tools (Swagger)

• Understanding of design patterns and UML

• Exposure to Confluence, Jira


Must-Haves

Java/J2EE (5+ years), Spring/Spring Boot (5+ years), Microservices (5+ years), AWS/GCP/Azure (mandatory), CI/CD (Jenkins, SonarQube, Git)

Mandatory Skills Required: Strong proficiency in Java, Spring Boot, Microservices, and GCP/AWS.

Experience Required: Minimum 5+ years of relevant experience



Notice period - 0 to 15 days only (immediate joiners or candidates serving notice who can join by February)

Job stability is mandatory

Location: Trivandrum, Kochi

Virtual Interview: 31st Jan (Saturday)


Nice to Haves

Frontend familiarity (Angular, AJAX)

Experience with API documentation tools (Swagger)

Understanding of design patterns and UML

Exposure to Confluence, Jira

Impacto Digifin Technologies


Posted by Navitha Reddy
Bengaluru (Bangalore)
1 - 4 yrs
₹5L - ₹7L / yr
Python
Automation
Test Automation (QA)
Object Oriented Programming (OOPs)
RESTful APIs
+10 more

Job Description: Python Automation Engineer

Location: Bangalore (Office-based)

Experience: 1–2 Years

Joining: Immediate to 30 Days


Role Overview

We are looking for a Python Automation Engineer who combines strong programming skills with hands-on automation expertise. This role involves developing automation scripts, designing automation frameworks, and contributing independently to automation solutions, with leads delegating tasks and solution directions. The ideal candidate is not a novice: they have solid real-world Python experience and are comfortable working across API automation, automation tooling, and CI/CD-driven environments.


Key Responsibilities

  • Design, develop, and maintain automation scripts and reusable automation frameworks using Python
  • Build and enhance API automation for REST-based services and common backend frameworks
  • Independently own automation tasks and deliver solutions with minimal supervision
  • Collaborate with leads and engineering teams to understand automation requirements
  • Maintain clean, modular, and scalable automation code
  • Occasionally review automation code written by other team members
  • Integrate automation suites with CI/CD pipelines
  • Package and ship automation tools/frameworks using containerization


Required Skills & Qualifications

Python (Core Requirement): strong, in-depth hands-on experience in Python, including:

  • Object-Oriented Programming (OOP) and modular design
  • Writing reusable libraries and frameworks
  • Exception handling, logging, and debugging
  • Asynchronous concepts and performance-aware coding
  • Unit testing and test automation practices
  • Code quality, readability, and maintainability

API Automation:

  • Strong experience automating REST APIs
  • Hands-on with common Python API libraries (e.g., requests, httpx, or equivalent)
  • Understanding of API request/response handling, validations, and workflows
  • Familiarity with different backend frameworks, such as FastAPI

DevOps & Engineering Practices (Must-Have):

  • Strong knowledge of Git
  • Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab, or similar)
  • Ability to integrate automation suites into pipelines
  • Hands-on experience with Docker for shipping automation tools/frameworks

Good-to-Have Skills

  • UI automation using Selenium (Page Object Model, cross-browser testing, headless execution)
  • Exposure to Playwright for UI automation
  • Basic working knowledge of Java and/or JavaScript (reading, writing small scripts, debugging)
  • Understanding of API authentication, retries, mocking, and related best practices

Domain Exposure

  • Experience or interest in SaaS platforms
  • Exposure to AI/ML-based platforms is a plus

What We’re Looking For

  • A strong engineering mindset, not just tool usage
  • Someone who can build automation systems, not only execute test cases
  • Comfortable working independently while aligning with technical leads
  • Passion for clean code, scalable automation, and continuous improvement

Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 7 yrs
₹14L - ₹20L / yr
Python
Mainframe
C#
SDET
Test Automation (QA)
+37 more

Job Details

Job Title: Java Full Stack Developer 

Industry: Global digital transformation solutions provider

- Domain: Information technology (IT)

Experience Required: 5-7 years

Working Mode: 3 days in office, Hybrid model.

Job Location: Bangalore

CTC Range: Best in Industry


Job Description:

SDET (Software Development Engineer in Test)


Job Responsibilities:

• Test Automation: Develop, maintain, and execute automated test scripts using test automation frameworks; design and implement testing tools and frameworks to support automated testing.

• Software Development: Participate in the design and development of software components to improve testability; write code actively, contribute to the development of tools, and work closely with developers to debug complex issues.

• Quality Assurance: Collaborate with the development team to understand software features and technical implementations; develop quality assurance standards and ensure adherence to best testing practices.

• Integration Testing: Conduct integration and functional testing to ensure that components work as expected individually and when combined.

• Performance and Scalability Testing: Perform performance and scalability testing to identify bottlenecks and optimize application performance.

• Test Planning and Execution: Create detailed, comprehensive, and well-structured test plans and test cases; execute manual and/or automated tests and analyze results to ensure product quality.

• Bug Tracking and Resolution: Identify, document, and track software defects using bug tracking tools; verify fixes and work closely with developers to resolve issues.

• Continuous Improvement: Stay updated on emerging tools and technologies relevant to the SDET role; constantly look for ways to improve testing processes and frameworks.


Skills and Qualifications:

• Strong programming skills, particularly in languages such as COBOL, JCL, Java, C#, Python, or JavaScript.
• Strong experience in Mainframe environments.
• Experience with test automation tools and frameworks like Selenium, JUnit, TestNG, or Cucumber.
• Excellent problem-solving skills and attention to detail.
• Familiarity with CI/CD tools and practices, such as Jenkins, Git, Docker, etc.
• Good understanding of web technologies and databases is often beneficial.
• Strong communication skills for interfacing with cross-functional teams.


Qualifications:

• 5+ years of experience as a software developer, QA Engineer, or SDET.
• 5+ years of hands-on experience with Java or Selenium.
• 5+ years of hands-on experience with Mainframe environments.
• 4+ years designing, implementing, and running test cases.
• 4+ years working with test processes, methodologies, tools, and technology.
• 4+ years performing functional and UI testing, quality reporting.
• 3+ years of technical QA management experience leading on- and offshore resources.
• Passion around driving best practices in the testing space.
• Thorough understanding of functional, stress, performance, various forms of regression testing, and mobile testing.
• Knowledge of software engineering practices and agile approaches.
• Experience building or improving test automation frameworks.
• Proficiency in CI/CD integration and pipeline development in Jenkins, Spinnaker, or other similar tools.
• Proficiency in UI automation (Serenity/Selenium, Robot, Watir).
• Experience in Gherkin (BDD/TDD).
• Ability to quickly tackle and diagnose issues within the quality assurance environment and communicate that knowledge to a varied audience of technical and non-technical partners.
• Strong desire for establishing and improving product quality.
• Willingness to take challenges head on while being part of a team.
• Ability to work under tight deadlines and within a team environment.
• Experience in test automation using UFT and Selenium.
• UFT/Selenium experience in building object repositories, standard & custom checkpoints, parameterization, reusable functions, recovery scenarios, descriptive programming, and API testing.
• Knowledge of VBScript, C#, Java, HTML, and SQL.
• Experience using Git or other version control systems.
• Experience developing, supporting, and/or testing web applications.
• Understanding of the need for testing of security requirements.
• Ability to understand API JSON and XML formats, with experience using API testing tools like Postman, Swagger, or SoapUI.
• Excellent communication, collaboration, reporting, analytical, and problem-solving skills.
• Solid understanding of release cycles and QA/testing methodologies.
• ISTQB certification is a plus.


Skills: Python, Mainframe, C#

Notice period - 0 to 15 days only

AdTech Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
Apache Airflow
Apache Spark
AWS CloudFormation
DevOps
MLOps
+19 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, with recent companies
  • Company background: product companies preferred; exceptions considered for service-company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you okay for F2F round?
  • Has the candidate filled the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Marble X
Posted by Manpreet Kaur
Mumbai
3 - 10 yrs
₹3L - ₹22L / yr
Shell Scripting
Python
MLOps
Jenkins
Git
+4 more


Skills  - MLOps Pipeline Development | CI/CD (Jenkins) | Automation Scripting | Model Deployment & Monitoring | ML Lifecycle Management | Version Control & Governance | Docker & Kubernetes | Performance Optimization | Troubleshooting | Security & Compliance


Responsibilities:

1. Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models

2. Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes

3. Automate the training, testing, and deployment processes for machine learning models

4. Continuously monitor and maintain models in production, ensuring optimal performance, accuracy, and reliability

5. Implement best practices for version control, model reproducibility, and governance

6. Optimize machine learning pipelines for scalability, efficiency, and cost-effectiveness

7. Troubleshoot and resolve issues related to model deployment and performance

8. Ensure compliance with security and data privacy standards in all MLOps activities

9. Keep up to date with the latest MLOps tools, technologies, and trends

10. Provide support and guidance to other team members on MLOps practices


Required skills and experience:

• 3-10 years of experience in MLOps, DevOps or a related field

• Bachelor’s degree in Computer Science, Data Science, or a related field

• Strong understanding of machine learning principles and model lifecycle management

• Experience in Jenkins pipeline development

• Experience in automation scripting



US based large Biotech company with WW operations.


Agency job
Remote only
5 - 10 yrs
₹20L - ₹25L / yr
Amazon Web Services (AWS)
CI/CD
DevSecOps
Terraform
Ansible
+9 more

Senior Cloud Engineer Job Description

Position Title: Senior Cloud Engineer -- AWS [LONG TERM-CONTRACT POSITION]

Location: Remote [REQUIRES WORKING IN CST TIME ZONE]


Position Overview

The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering.


Key Responsibilities

  • Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)
  • Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration
  • Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes
  • Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements
  • Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools
  • Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management
  • Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation
  • Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues
  • Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence
  • Stay current with emerging cloud technologies, trends, and best practices

Required Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
  • 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
  • Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
  • Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
  • Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
  • Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
  • Experience with cloud security, governance, and compliance frameworks
  • Excellent analytical, troubleshooting, and root cause analysis skills
  • Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
  • Ability to work independently, manage multiple priorities, and lead complex projects to completion


Preferred Qualifications

  • Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
  • Experience with cloud cost optimization and FinOps practices
  • Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
  • Exposure to cloud database technologies (SQL, NoSQL, managed database services)
  • Knowledge of cloud migration strategies and hybrid cloud architectures


IT Services & Staffing Solutions Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
12 - 14 yrs
₹29L - ₹38L / yr
skill iconAmazon Web Services (AWS)
DevOps
Terraform
Troubleshooting
Amazon VPC
+16 more

REVIEW CRITERIA:

MANDATORY:

  • Strong Hands-On AWS Cloud Engineering / DevOps Profile
  • Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
  • Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
  • Mandatory (Infrastructure as Code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
  • Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
  • Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
  • Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
  • Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills


ROLE & RESPONSIBILITIES:

We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.


KEY RESPONSIBILITIES:

  • Operate and support AWS production environments across multiple accounts
  • Manage infrastructure using Terraform and support CI/CD pipelines
  • Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
  • Build and manage Docker images and push to Amazon ECR
  • Monitor systems using CloudWatch and third-party tools; respond to incidents
  • Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
  • Assist with cost optimization, tagging, and governance standards
  • Automate operational tasks using Python, Lambda, and Systems Manager
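As an illustration of the tagging and governance automation this role describes, here is a minimal Python sketch. The resource inventory and the required tag set are invented for the example; a real script would pull resource metadata via the AWS APIs (e.g., boto3) rather than use a hardcoded list.

```python
# Hypothetical sketch: enforce a tagging/governance standard over resource
# metadata of the shape the AWS APIs return. Tag names and resources are
# illustrative assumptions, not a specific organization's policy.
REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}

def find_untagged(resources):
    """Return a map of resource ID -> sorted list of missing required tags."""
    violations = {}
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("Tags", {}))
        if missing:
            violations[res["ResourceId"]] = sorted(missing)
    return violations

if __name__ == "__main__":
    inventory = [
        {"ResourceId": "i-0abc", "Tags": {"Owner": "platform", "Environment": "prod", "CostCenter": "42"}},
        {"ResourceId": "i-0def", "Tags": {"Owner": "data"}},
    ]
    print(find_untagged(inventory))  # only i-0def is non-compliant
```

A report like this is typically scheduled via Lambda or Systems Manager and fed into the cost/tagging review mentioned above.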


IDEAL CANDIDATE:

  • Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
  • Experience with Terraform and Git-based workflows
  • Hands-on experience with Kubernetes / EKS
  • Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
  • Scripting experience in Python or Bash
  • Understanding of monitoring, incident management, and cloud security basics


NICE TO HAVE:

  • AWS Associate-level certifications
  • Experience with Karpenter, Prometheus, New Relic
  • Exposure to FinOps and cost optimization practices
NeoGenCode Technologies Pvt Ltd
Pune
3 - 8 yrs
₹10L - ₹25L / yr
skill iconJava
skill iconSpring Boot
RESTful APIs
Hibernate (Java)
JPA
+10 more

Job Title : Java Backend Developer

Experience : 3 – 8 Years

Location : Pune (Onsite) (Pune candidates Only)

Notice Period : Immediate to 15 Days (or serving NP whose LWD is near)


About the Role :

We are seeking an experienced Java Backend Developer with strong hands-on skills in backend microservices development, API design, cloud platforms, observability, and CI/CD.

The ideal candidate will contribute to building scalable, secure, and reliable applications while working closely with cross-functional teams.


Mandatory Skills : Java 8 / Java 17, Spring Boot 3.x, REST APIs, Hibernate / JPA, MySQL, MongoDB, Prometheus / Grafana / Spring Actuators, AWS, Docker, Jenkins / GitHub Actions, GitHub, Windows 7 / Linux.


Key Responsibilities :

  • Design, develop, and maintain backend microservices and REST APIs
  • Implement data persistence using relational and NoSQL databases
  • Ensure performance, scalability, and security of backend systems
  • Integrate observability and monitoring tools for production environments
  • Work within CI/CD pipelines and containerized deployments
  • Collaborate with DevOps, QA, and product teams for feature delivery
  • Troubleshoot, optimize, and improve existing modules and services

Mandatory Skills :

  • Languages & Frameworks : Java 8, Java 17, Spring Boot 3.x, REST APIs, Hibernate, JPA
  • Databases : MySQL, MongoDB
  • Observability : Prometheus, Grafana, Spring Actuators
  • Cloud Technologies : AWS
  • Containerization Tools : Docker
  • CI/CD Tools : Jenkins, GitHub Actions
  • Version Control : GitHub
  • Operating Systems : Windows 7, Linux

Nice to Have :

  • Strong analytical and debugging abilities
  • Experience working in Agile/Scrum environments
  • Good communication and collaborative skills
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
6 - 8 yrs
₹16L - ₹22L / yr
skill iconC#
skill icon.NET
ASP.NET
SQL
SQL server
+17 more

JOB DETAILS:

Job Role: Lead I - .Net Developer - .NET, Azure, Software Engineering

Industry: Global digital transformation solutions provider

Work Mode: Hybrid

Salary: Best in Industry

Experience: 6-8 years

Location: Hyderabad 


Job Description:

• Experience in Microsoft Web development technologies such as Web API and SOAP/XML

• C#/.NET/.NET Core and ASP.NET web application experience; cloud-based development experience in AWS or Azure

• Knowledge of cloud architecture and technologies

• Support/Incident management experience in a 24/7 environment

• SQL Server and SSIS experience

• DevOps experience of Github and Jenkins CI/CD pipelines or similar

• Windows Server 2016/2019+ and SQL Server 2019+ experience

• Experience of the full software development lifecycle

• You will write clean, scalable code, with a view towards design patterns and security best practices

• Understanding of Agile methodologies and working within the Scrum framework; AWS knowledge is a plus


Must-Haves

C#/.NET/.NET Core (experienced), ASP.NET Web application (experienced), SQL Server/SSIS (experienced), DevOps (Github/Jenkins CI/CD), Cloud architecture (AWS or Azure)

.NET (Senior level), Azure (Very good knowledge), Stakeholder Management (Good)

Mandatory skills: .NET Core with Azure or AWS experience

Notice period - 0 to 15 days only

Location: Hyderabad

Virtual Drive - 17th Jan

Watsoo Express
Gurugram
8 - 11 yrs
₹18L - ₹25L / yr
skill iconDocker
skill iconKubernetes
helm
cicd
skill iconGitHub
+12 more

Profile: DevOps Lead

Location: Gurugram

Experience: 08+ Years

Notice Period: can join Immediate to 1 week

Company: Watsoo

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 5+ years of proven hands-on DevOps experience.
  • Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
  • Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
  • Hands-on experience with cloud platforms (AWS, Azure, or GCP).
  • Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
  • Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
  • Proficiency in scripting languages (Python, Bash, or Shell).
  • Knowledge of networking, security, and system administration.
  • Strong problem-solving skills and ability to work in fast-paced environments.
  • Troubleshoot production issues, perform root cause analysis, and implement preventive measures.

  • Advocate DevOps best practices, automation, and continuous improvement.

MyOperator - VoiceTree Technologies

Posted by Vijay Muthu
Remote only
3 - 5 yrs
₹8L - ₹12L / yr
skill iconKubernetes
skill iconAmazon Web Services (AWS)
Amazon EC2
AWS RDS
AWS opensearch
+22 more

About MyOperator

MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.


Job Summary

We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.


Key Responsibilities

  • Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
  • Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
  • Containerize applications using Docker and manage deployments with Helm charts
  • Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
  • Provision and manage infrastructure using Terraform (Infrastructure as Code)
  • Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
  • Write and maintain Python scripts for automation, monitoring, and operational tasks
  • Ensure high availability, scalability, performance, and cost optimization of cloud resources
  • Implement and follow security best practices across AWS and Kubernetes environments
  • Troubleshoot production issues, perform root cause analysis, and support incident resolution
  • Collaborate closely with development and QA teams to streamline deployment and release processes
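The monitoring and automation duties above often come down to small glue scripts. A minimal, self-contained sketch (metric names and thresholds are invented for illustration) that parses a fragment of Prometheus text exposition output and flags threshold breaches:

```python
# Illustrative only: a tiny parser for unlabeled 'name value' lines of the
# Prometheus text format, plus a threshold check of the kind that feeds
# Grafana/alerting glue scripts. Real exposition output also carries labels.
def parse_metrics(text):
    """Parse 'name value' lines, skipping comments; returns {name: float}."""
    metrics = {}
    for line in text.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition(" ")
        metrics[name] = float(value)
    return metrics

def breaches(metrics, limits):
    """Return sorted metric names whose value exceeds the configured limit."""
    return sorted(m for m, v in metrics.items() if m in limits and v > limits[m])

sample = """
# HELP node_cpu_utilization Fraction of CPU in use
node_cpu_utilization 0.93
node_memory_utilization 0.41
"""
print(breaches(parse_metrics(sample), {"node_cpu_utilization": 0.85,
                                       "node_memory_utilization": 0.9}))
```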

Required Skills & Qualifications

  • 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
  • Strong experience with AWS services, including:
  • EC2, RDS, OpenSearch, VPC, S3
  • Application Load Balancer (ALB), API Gateway, Lambda
  • SNS and SQS.
  • Hands-on experience with AWS EKS (Kubernetes)
  • Strong knowledge of Docker and Helm charts
  • Experience with Terraform for infrastructure provisioning and management
  • Solid experience building and managing CI/CD pipelines using Jenkins
  • Practical experience with Prometheus and Grafana for monitoring and alerting
  • Proficiency in Python scripting for automation and operational tasks
  • Good understanding of Linux systems, networking concepts, and cloud security
  • Strong problem-solving and troubleshooting skills

Good to Have (Preferred Skills)

  • Exposure to GitOps practices
  • Experience managing multi-environment setups (Dev, QA, UAT, Production)
  • Knowledge of cloud cost optimization techniques
  • Understanding of Kubernetes security best practices
  • Experience with log aggregation tools (e.g., ELK/OpenSearch stack)

Language Preference

  • Fluency in English is mandatory.
  • Fluency in Hindi is preferred.
Deqode

Posted by Samiksha Agrawal
Mumbai
3 - 6 yrs
₹5L - ₹15L / yr
DevOps
Google Cloud Platform (GCP)
Terraform
skill iconJenkins
CI/CD
+2 more

Role: Senior Platform Engineer (GCP Cloud)

Experience Level: 3 to 6 Years

Work location: Mumbai

Mode : Hybrid


Role & Responsibilities:

  • Build automation software for cloud platforms and applications
  • Drive Infrastructure as Code (IaC) adoption
  • Design self-service, self-healing monitoring and alerting tools
  • Automate CI/CD pipelines (Git, Jenkins, SonarQube, Docker)
  • Build Kubernetes container platforms
  • Introduce new cloud technologies for business innovation

Requirements:

  • Hands-on experience with GCP Cloud
  • Knowledge of cloud services (compute, storage, network, messaging)
  • IaC tools experience (Terraform/CloudFormation)
  • SQL & NoSQL databases (Postgres, Cassandra)
  • Automation tools (Puppet/Chef/Ansible)
  • Strong Linux administration skills
  • Programming: Bash/Python/Java/Scala
  • CI/CD pipeline expertise (Jenkins, Git, Maven)
  • Multi-region deployment experience
  • Agile/Scrum/DevOps methodology


Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai, Kochi (Cochin), Pune, Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
skill iconJenkins
CI/CD
skill iconDocker
skill iconKubernetes
+15 more

Job Description

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.


Key Responsibilities:

  • Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
  • Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
  • Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
  • Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
  • Ensure compliance with security best practices and organizational policies across GCP environments.
  • Document processes, configurations, and architectural decisions to maintain operational transparency.
  • Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.


Mandatory Skills:

  • Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
  • Jenkins – Expertise in Declarative Pipeline creation and optimization.
  • CI/CD – Strong understanding of automated build, test, and deployment workflows.
  • Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
  • Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
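Automation scripts of the kind listed above usually need to tolerate transient failures. A minimal retry-with-exponential-backoff helper in Python; the attempt count and delays are illustrative defaults, not taken from any specific pipeline:

```python
# Generic retry helper for flaky automation calls (API requests, deploy
# steps). The sleep function is injectable so the behavior is testable.
import time

def retry(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on exception, wait base_delay * 2**n, then retry.

    Re-raises the last exception once attempts are exhausted.
    """
    for n in range(attempts):
        try:
            return fn()
        except Exception:
            if n == attempts - 1:
                raise
            sleep(base_delay * (2 ** n))
```

Wrapping an idempotent call in `retry` is a common first step before reaching for heavier orchestration.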


Preferred Skills:

  • Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
  • Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
  • Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
  • GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.


Skills

GCP, Jenkins, CI/CD, AWS




Notice period - 0 to 15 days only

Location – Pune, Trivandrum, Kochi, Chennai

Media and Entertainment Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
5 - 7 yrs
₹15L - ₹25L / yr
DevOps
skill iconAmazon Web Services (AWS)
CI/CD
Infrastructure
Scripting
+28 more

Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting


Criteria:

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
  • Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
  • Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
  • Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
  • Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
  • Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
  • Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
  • Strong scripting skills (Bash, Shell, Python) for automation
  • Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
  • Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
  • Strong experience in incident management, root cause analysis & production firefighting

 

Description

Role Overview

Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.

 

 Key Responsibilities

1. Cloud Infrastructure — AWS (Primary Focus)

  • Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
  • Optimize cloud cost, resource utilization, and performance across environments.
  • Design high-availability, fault-tolerant systems for streaming workloads.

 

2. CI/CD Automation

  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
  • Automate deployments for microservices, mobile apps, and backend APIs.
  • Implement blue/green and canary deployments for seamless production rollouts.
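To make the canary strategy concrete, here is a small sketch of the traffic-shift schedule a rollout might step through. The step percentages and the forced final cutover are illustrative choices, not the behavior of any particular tool:

```python
# Hedged sketch: compute a canary traffic-shift schedule of the kind a
# rollout controller (or weighted ALB/Route53 records) would step through.
def canary_schedule(steps):
    """Given canary percentages, return (canary %, stable %) pairs,
    clamped to 0-100 and always ending at full cutover."""
    schedule = []
    for pct in steps:
        pct = min(max(pct, 0), 100)
        schedule.append((pct, 100 - pct))
    if not schedule or schedule[-1][0] != 100:
        schedule.append((100, 0))
    return schedule

print(canary_schedule([5, 25, 50]))
# → [(5, 95), (25, 75), (50, 50), (100, 0)]
```

Each step would normally be gated on error-rate and latency checks before the next weight is applied.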

 

3. Observability & Monitoring

  • Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
  • Perform proactive performance analysis to minimize downtime and bottlenecks.
  • Set up dashboards for real-time visibility into system health and user traffic spikes.

 

4. Security, Compliance & Risk Highlighting

  • Conduct frequent risk assessments and identify vulnerabilities in:
    o Cloud architecture
    o Access policies (IAM)
    o Secrets & key management
    o Data flows & network exposure
  • Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.

 

5. Scalability & Reliability Engineering

  • Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
  • Identify scalability gaps and propose solutions across:
    o Microservices
    o Caching layers
    o CDN distribution (CloudFront)
    o Database workloads
  • Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
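Capacity planning along these lines reduces to simple arithmetic once per-replica throughput has been measured under load. A hedged sketch; the throughput figures and the 30% headroom are assumptions for illustration:

```python
# Back-of-envelope sizing: how many replicas are needed to absorb a target
# request rate, keeping a safety headroom for spikes. Numbers are invented.
import math

def replicas_for(target_rps, per_replica_rps, headroom=0.3):
    """Replicas needed so each serves at most (1 - headroom) of its capacity."""
    usable = per_replica_rps * (1 - headroom)
    return math.ceil(target_rps / usable)

# If one pod sustains ~250 RPS in load tests, 10x growth to 10,000 RPS needs:
print(replicas_for(10_000, 250))  # → 58
```

Running this against measured load-test numbers is a quick sanity check before committing to autoscaler limits.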

 

6. Database & Storage Support

  • Administer and optimize MongoDB for high-read/low-latency use cases.
  • Design backup, recovery, and data replication strategies.
  • Work closely with backend teams to tune query performance and indexing.

 

7. Automation & Infrastructure as Code

  • Implement IaC using Terraform, CloudFormation, or Ansible.
  • Automate repetitive infrastructure tasks to ensure consistency across environments.

 

Required Skills & Experience

Technical Must-Haves

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
  • Strong hands-on experience with AWS (core and advanced services).
  • Expertise in Jenkins CI/CD pipelines.
  • Solid background working with MongoDB in production environments.
  • Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
  • Strong scripting experience (Bash, Python, Shell).
  • Experience handling risk identification, root cause analysis, and incident management.

 

Nice to Have

  • Experience with OTT, video streaming, media, or any content-heavy product environments.
  • Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
  • Understanding of CDN, caching, and streaming pipelines.

 

Personality & Mindset

  • Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
  • Proactive problem solver with ability to think about long-term scalability.
  • Comfortable working with cross-functional engineering teams.

 

Why Join company?

• Build and operate infrastructure powering millions of monthly users.

• Opportunity to shape DevOps culture and cloud architecture from the ground up.

• High-impact role in a fast-scaling Indian OTT product.

Phi Commerce

Posted by Ariba Khan
Pune
3 - 9 yrs
Up to ₹22L / yr (varies)
skill iconJava
CI/CD
skill iconJenkins
Linux/Unix
Selenium
+1 more

About Phi Commerce

Founded in 2015, Phi Commerce has created PayPhi, a ground-breaking omni-channel payment processing platform which processes digital payments at doorstep, online & in-store across a variety of form factors such as cards, net-banking, UPI, Aadhaar, BharatQR, wallets, NEFT, RTGS, and NACH. The company was established with the objective to digitize white spaces in payments & go beyond routine payment processing.


Phi Commerce's PayPhi Digital Enablement suite has been developed with the mission of empowering very large untapped blue-ocean sectors dominated by offline payment modes such as cash & cheque to accept digital payments.


Core team comprises of industry veterans with complementary skill sets and nearly 100 years of global experience with noteworthy players such as Mastercard, Euronet, ICICI Bank, Opus Software and Electra Card Services.


Awards & Recognitions:

The company's innovative work has been recognized at prestigious forums in the short span of its existence:


  • Certification of Recognition as StartUp by Department of Industrial Policy and Promotion.
  • Winner of the "Best Payment Gateway" of the year award at Payments & Cards Awards 2018
  • Winner at Payments & Cards Awards 2017 in 3 categories: Best Startup of the Year, Best Online Payment Solution of the Year (Consumer), and Best Online Payment Solution of the Year (Merchant)
  • Winner of NPCI IDEATHON on Blockchain in Payments
  • Shortlisted by Govt. of Maharashtra as top 100 start-ups pan-India across 8 sectors


About the role:

As an SDET, you will work closely with the development, product, and QA teams to ensure the delivery of high-quality, reliable, and scalable software. You will be responsible for creating and maintaining automated test suites, designing testing frameworks, and identifying and resolving software defects. The role will also involve continuous improvement of the test process and promoting best practices in software development and testing.


Key Responsibilities:


  • Develop, implement, and maintain automated test scripts for validating software functionality and performance.
  • Design and develop testing frameworks and tools to improve the efficiency and effectiveness of automated testing.
  • Collaborate with developers, product managers, and QA engineers to identify test requirements and create effective test plans.
  • Write and execute unit, integration, regression, and performance tests to ensure high-quality code.
  • Troubleshoot and debug issues identified during testing, working with developers to resolve them in a timely manner.
  • Conduct code reviews to ensure code quality, maintainability, and testability.
  • Work with CI/CD pipelines to integrate automated testing into the development process.
  • Continuously evaluate and improve testing strategies, identifying areas for automation and optimization.
  • Monitor the quality of releases by tracking test coverage, defect trends, and other quality metrics.
  • Ensure that all tests are documented, maintainable, and reusable for future software releases.
  • Stay up-to-date with the latest trends, tools, and technologies in the testing and automation space.


Skills and Qualifications:

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 6+ years of experience as an SDET, software engineer, or quality engineer with a focus on test automation.
  • Strong experience in automated testing frameworks and tools (e.g., Selenium, Appium, JUnit, TestNG, Cucumber).
  • Proficiency in programming languages, particularly Java
  • Experience in designing and implementing test automation for web applications, APIs, and mobile applications.
  • Strong understanding of software testing methodologies and processes (e.g., Agile, Scrum).
  • Excellent problem-solving skills and attention to detail.
  • Good communication and collaboration skills, with the ability to work effectively in a team.
  • Knowledge of performance testing and load testing tools is a plus (e.g., JMeter, LoadRunner)
  • Experience with test management tools (e.g., TestRail, Jira).
  • Knowledge of databases and ability to write SQL queries to validate test data.
  • Experience in API testing and knowledge of RESTful web services.
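As an example of the SQL-based test-data validation mentioned above, here is a self-contained check against an in-memory SQLite database. The schema and the integrity rule are invented for illustration; the same query pattern applies to any relational test database:

```python
# Illustrative data-integrity check a test suite might run after seeding
# test data: find orders that reference a user that does not exist.
import sqlite3

def orphan_order_count(conn):
    """Count orders whose user_id has no matching row in users."""
    row = conn.execute(
        """SELECT COUNT(*) FROM orders o
           LEFT JOIN users u ON u.id = o.user_id
           WHERE u.id IS NULL"""
    ).fetchone()
    return row[0]

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    INSERT INTO users VALUES (1), (2);
    INSERT INTO orders VALUES (10, 1), (11, 99);
""")
print(orphan_order_count(conn))  # → 1 (order 11 references a missing user)
```

Checks like this typically run as assertions in the automated suite so bad fixtures fail fast rather than producing misleading test results.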
AdTech Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
skill iconAmazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.
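A toy policy-as-code check in the spirit of the image-hardening rules above. Real pipelines would use tools such as hadolint or Checkov; the two rules and the sample Dockerfiles here are illustrative only:

```python
# Minimal Dockerfile audit: flag images that run as root or pin ':latest'.
# Rules are deliberately simplistic sketches of real CIS-style checks.
RULES = {
    "no-root-user": lambda lines: any(
        l.startswith("USER ") and not l.endswith(" root") for l in lines
    ),
    "no-latest-tag": lambda lines: not any(
        l.startswith("FROM ") and l.endswith(":latest") for l in lines
    ),
}

def audit_dockerfile(text):
    """Return the sorted names of rules the Dockerfile violates."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    return sorted(name for name, ok in RULES.items() if not ok(lines))

bad = "FROM python:latest\nCOPY . /app\n"
good = "FROM python:3.12-slim\nCOPY . /app\nUSER appuser\n"
print(audit_dockerfile(bad))   # both rules violated
print(audit_dockerfile(good))  # clean
```

In a CI pipeline, a non-empty result would fail the build before the image reaches the registry.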

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Tradelab Technologies

Posted by Aakanksha Yadav
Mumbai
2 - 5 yrs
₹7L - ₹18L / yr
skill iconDocker
skill iconKubernetes
CI/CD
skill iconJenkins

Job Title: DevOps Engineer

Location: Mumbai

Experience: 2–4 Years

Department: Technology

About InCred

InCred is a new-age financial services group leveraging technology and data science to make lending quick, simple, and hassle-free. Our mission is to empower individuals and businesses by providing easy access to financial services while upholding integrity, innovation, and customer-centricity. We operate across personal loans, education loans, SME financing, and wealth management, driving financial inclusion and socio-economic progress.

Role Overview

As a DevOps Engineer, you will play a key role in automating, scaling, and maintaining our cloud infrastructure and CI/CD pipelines. You will collaborate with development, QA, and operations teams to ensure high availability, security, and performance of our systems that power millions of transactions.

Key Responsibilities

  • Cloud Infrastructure Management: Deploy, monitor, and optimize infrastructure on AWS (EC2, EKS, S3, VPC, IAM, RDS, Route53) or similar platforms.
  • CI/CD Automation: Build and maintain pipelines using tools like Jenkins, GitLab CI, or similar.
  • Containerization & Orchestration: Manage Docker and Kubernetes clusters for scalable deployments.
  • Infrastructure as Code: Implement and maintain IaC using Terraform or equivalent tools.
  • Monitoring & Logging: Set up and manage tools like Prometheus, Grafana, ELK stack for proactive monitoring.
  • Security & Compliance: Ensure systems adhere to security best practices and regulatory requirements.
  • Performance Optimization: Troubleshoot and optimize system performance, network configurations, and application deployments.
  • Collaboration: Work closely with developers and QA teams to streamline release cycles and improve deployment efficiency.
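The proactive monitoring and alerting work described above often reduces to rules of the form "alert when a metric breaches a threshold for N consecutive samples". A minimal, library-free Python sketch of that pattern (threshold, sample values, and function names are illustrative, not from any specific tool):

```python
from collections import deque

def make_alert_checker(threshold: float, consecutive: int):
    """Return a checker that fires once a metric exceeds `threshold`
    for `consecutive` samples in a row (a common alerting pattern)."""
    window = deque(maxlen=consecutive)

    def check(sample: float) -> bool:
        window.append(sample)
        return len(window) == consecutive and all(s > threshold for s in window)

    return check

# Example: alert when CPU utilization > 90% for 3 consecutive scrapes.
check = make_alert_checker(threshold=90.0, consecutive=3)
for cpu in [85, 92, 95, 97]:
    fired = check(cpu)
print(fired)  # True: the last three samples all exceed 90
```

Real deployments would express the same rule declaratively (e.g., as a Prometheus alerting rule with a `for:` duration) rather than in application code; the sketch only shows the underlying logic.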

Required Skills

  • 2–4 years of hands-on experience in DevOps roles.
  • Strong knowledge of Linux administration and shell scripting (Bash/Python).
  • Experience with AWS services and cloud architecture.
  • Proficiency in CI/CD tools (Jenkins, GitLab CI) and version control systems (Git).
  • Familiarity with Docker, Kubernetes, and container orchestration.
  • Knowledge of Terraform or similar IaC tools.
  • Understanding of networking, security, and performance tuning.
  • Exposure to monitoring tools (Prometheus, Grafana) and log management.

Preferred Qualifications

  • Experience in financial services or fintech environments.
  • Knowledge of microservices architecture and enterprise-grade SaaS setups.
  • Familiarity with compliance standards in BFSI (Banking & Financial Services Industry).

Why Join InCred?

  • Culture: High-performance, ownership-driven, and innovation-focused environment.
  • Growth: Opportunities to work on cutting-edge tech and scale systems for millions of users.
  • Rewards: Competitive compensation, ESOPs, and performance-based incentives.
  • Impact: Be part of a mission-driven organization transforming India’s credit landscape.


Read more
AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
Apache Airflow
Apache Spark
AWS CloudFormation
MLOps
DevOps
+23 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python experience for pipeline and automation development
  • 4+ years of experience with AWS cloud, including in recent roles
  • Company background: product companies preferred; exceptions for service-company candidates with strong MLOps and AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
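Data-drift monitoring of the kind listed above is often implemented as a simple statistic comparing a baseline distribution against live traffic. A hedged sketch of one common choice, the population stability index (PSI), in plain Python (bucket edges, sample data, and the smoothing constant are illustrative):

```python
import math

def psi(baseline: list, live: list, edges: list) -> float:
    """Population Stability Index between two samples over fixed bucket edges.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant."""
    def fractions(xs):
        counts = [0] * (len(edges) + 1)
        for x in xs:
            i = sum(1 for e in edges if x >= e)  # bucket index for this value
            counts[i] += 1
        total = len(xs)
        # Smooth empty buckets so the logarithm stays defined.
        return [max(c / total, 1e-6) for c in counts]

    p, q = fractions(baseline), fractions(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # e.g., training-time feature values
live     = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]   # shifted serving-time distribution
print(psi(baseline, live, edges=[0.35, 0.65]))  # well above 0.25: drift alarm
```

In production this computation would typically run on a schedule (e.g., an Airflow task) and feed a metric into CloudWatch or Prometheus for alerting.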

 

Job Specific Criteria:

  • CV attachment is mandatory
  • Please provide your CTC breakup (fixed + variable)
  • Are you open to a face-to-face (F2F) round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
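Orchestrators like Airflow execute pipeline tasks in dependency order; the scheduling core can be sketched without Airflow itself as a topological sort over a task graph, using Python's standard-library `graphlib` (task names are illustrative):

```python
from graphlib import TopologicalSorter

# Illustrative ML pipeline: task -> set of upstream tasks it depends on.
# In Airflow this would be a DAG with e.g. extract >> validate >> train >> ...
pipeline = {
    "extract":  set(),
    "validate": {"extract"},
    "train":    {"validate"},
    "evaluate": {"train"},
    "deploy":   {"evaluate"},
}

# static_order() yields tasks so every upstream task precedes its dependents.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # ['extract', 'validate', 'train', 'evaluate', 'deploy']
```

Airflow adds scheduling, retries, and observability on top of this ordering, but the dependency-resolution idea is the same.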

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Read more
Tradelab Technologies

Posted by Aakanksha Yadav
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹18L / yr
CI/CD
Jenkins
GitLab
ArgoCD
Amazon Web Services (AWS)
+8 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.


Key Responsibilities

CI/CD and Infrastructure Automation

  • Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
  • Automate deployments using tools such as Terraform, Helm, and Kubernetes
  • Improve build and release processes to support high-performance and low-latency trading applications
  • Work efficiently with Linux/Unix environments

Cloud and On-Prem Infrastructure Management

  • Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
  • Ensure system reliability, scalability, and high availability
  • Implement Infrastructure as Code (IaC) to standardize and streamline deployments

Performance Monitoring and Optimization

  • Monitor system performance and latency using Prometheus, Grafana, and ELK stack
  • Implement proactive alerting and fault detection to ensure system stability
  • Troubleshoot and optimize system components for maximum efficiency
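Latency monitoring for low-latency trading systems usually reports percentiles (p50/p99) rather than averages, since tail latency is what hurts. A minimal sketch of the nearest-rank percentile computation (sample latencies are illustrative):

```python
import math

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile: the smallest sample with at least p% of
    samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[max(rank, 1) - 1]

# Illustrative request latencies in microseconds; note the long tail.
latencies = [120, 130, 125, 900, 118, 122, 127, 121, 119, 3500]
print(percentile(latencies, 50), percentile(latencies, 99))  # 122 3500
```

The mean of this sample (~618 µs) hides the 3500 µs outlier that p99 surfaces, which is why dashboards built on Prometheus/Grafana typically chart percentile series.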

Security and Compliance

  • Apply DevSecOps principles to ensure secure deployment and access management
  • Maintain compliance with financial industry regulations, such as those mandated by SEBI
  • Conduct vulnerability assessments and maintain logging and audit controls


Required Skills and Qualifications

  • 2+ years of experience as a DevOps Engineer in a software or trading environment
  • Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
  • Proficiency in cloud platforms such as AWS and GCP
  • Hands-on experience with Docker and Kubernetes
  • Experience with Terraform or CloudFormation for IaC
  • Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
  • Familiarity with Prometheus, Grafana, and ELK stack
  • Proficiency in scripting using Python, Bash, or Go
  • Solid understanding of security best practices including IAM, encryption, and network policies


Good to Have (Optional)

  • Experience with low-latency trading infrastructure or real-time market data systems
  • Knowledge of high-frequency trading environments
  • Exposure to FIX protocol, FPGA, or network optimization techniques
  • Familiarity with Redis or Nginx for real-time data handling


Why Join Us?

  • Work with a team that expects and delivers excellence.
  • A culture where risk-taking is rewarded, and complacency is not.
  • Limitless opportunities for growth—if you can handle the pace.
  • A place where learning is currency, and outperformance is the only metric that matters.
  • The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.


This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.


Read more
venanalytics

Posted by Rincy jain
Remote, Mumbai
3 - 4 yrs
₹7L - ₹9L / yr
Java
Selenium
STLC
Jenkins
Test Automation (QA)
+2 more

Job Summary:


We are seeking a highly skilled QA Automation Engineer with 4+ years of proven experience in software testing and test automation. The ideal candidate will have strong expertise in leading the Software Testing Life Cycle (STLC) across multiple projects, functional testing of web and backend systems, and hands-on experience with API automation. This role requires someone who can build robust test strategies, design automation frameworks, and collaborate with cross-functional teams to ensure product quality at scale.


Key Responsibilities:


  • Lead and manage the complete STLC (planning, test case design, execution, defect management, and closure) across concurrent projects.


  • Design, develop, and maintain automated test scripts for web applications, APIs, and backend systems.


  • Perform functional, regression, smoke, and system integration testing, ensuring full coverage and efficient execution.


  • Conduct API testing using tools like Postman, RestAssured, or similar, validating request/response structures, data integrity, performance, and security.


  • Collaborate closely with developers, product managers, and stakeholders to identify gaps, track defects, and enhance test coverage.


  • Integrate automated test suites with CI/CD pipelines to enable continuous testing and faster releases.


  • Execute performance, load, and stress testing to validate system scalability and reliability.


  • Document detailed test plans, strategies, and reports, ensuring clear communication of testing progress and quality metrics.


  • Continuously improve test automation frameworks, strategies, and best practices.
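API tests like those described above ultimately assert on a response's status, schema, and data. A tool-agnostic Python sketch of the validation step, operating on an already-decoded JSON payload (the endpoint shape, field names, and rules are hypothetical, not from any real API):

```python
def validate_order_response(status_code: int, body: dict) -> list:
    """Return a list of human-readable failures; an empty list means pass."""
    failures = []
    if status_code != 200:
        failures.append(f"expected HTTP 200, got {status_code}")
    # Schema check: required fields and their expected types.
    for field, typ in [("order_id", str), ("amount", (int, float)), ("items", list)]:
        if field not in body:
            failures.append(f"missing field: {field}")
        elif not isinstance(body[field], typ):
            failures.append(f"wrong type for {field}")
    # Data-integrity check on values that are present and well-typed.
    if isinstance(body.get("amount"), (int, float)) and body["amount"] < 0:
        failures.append("amount must be non-negative")
    return failures

good = {"order_id": "A-1", "amount": 99.5, "items": ["widget"]}
bad  = {"order_id": 17, "items": "oops"}       # wrong types, missing amount
print(validate_order_response(200, good))      # []
print(validate_order_response(500, bad))       # four distinct failures
```

Frameworks like RestAssured or Postman test scripts bundle the HTTP call and these assertions together; the sketch isolates just the validation logic that CI pipelines run on every build.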


Required Skills & Qualifications:


  • Minimum 3 years of experience in QA automation and software testing.
  • Strong experience managing and executing STLC in multi-project environments.
  • Hands-on experience with functional testing (web, mobile, backend) and solid test case design skills.
  • Proficiency in automation tools such as Selenium, TestNG, JUnit, or equivalent.
  • Expertise in API testing using Postman, RestAssured, or similar frameworks.
  • Experience working with CI/CD tools like Jenkins, GitLab CI/CD, or similar.
  • Strong knowledge of scripting/programming languages (Python, Java, or JavaScript).
  • Good understanding of Agile methodologies (Scrum, Kanban) with hands-on exposure.
  • Solid problem-solving, debugging, and analytical skills.
  • Excellent communication and collaboration abilities.


Nice to Have:


  • Experience with cloud-based test environments or Docker/Kubernetes.
  • Exposure to performance and load testing tools such as JMeter or Gatling.
  • Familiarity with security testing practices.


Why Join Us?


  • Opportunity to own the QA function and contribute to building scalable testing systems.
  • Work in a fast-paced, tech-driven team with real impact on product quality.
  • 5-day work week, flexible working hours, and a collaborative culture.
  • Health benefits and support for professional learning & upskilling.


Read more
Ekloud INC
Posted by Kratika Agarwal
Remote only
8 - 12 yrs
₹10L - ₹18L / yr
TestNG
MuleSoft
API
Jenkins
GitHub
+3 more

  • Automation and MuleSoft experience is a must.

  • Automation tools: TestNG + REST Assured.

  • Testing scope: integration and API automation testing (EAPI/PAPI/SAPI); Salesforce integration experience is required.

  • MuleSoft experience: knowledge of API-led architecture and API specifications; Anypoint Platform experience (Runtime Manager, Exchange, API Manager, Anypoint Monitoring, Visualizer) and experience with Anypoint Studio is required for the QA Lead.

  • DevOps automation: integrating automated tests into DevOps pipelines (Jenkins/GitHub Actions). GitHub Actions experience is a plus.

  • Programming languages: strong proficiency in Java, Python, JavaScript, or C# for writing automation scripts.

  • API testing tools: expertise in Postman, REST Assured, Karate, and similar tools for REST API validation and automation.

  • Automation frameworks: hands-on experience with Selenium, Cypress, Playwright, TestNG, JUnit, and BDD frameworks like Cucumber or SpecFlow.

  • CI/CD integration: ability to integrate automated tests into pipelines using Jenkins, GitHub Actions, or Azure DevOps.

  • Version control: familiarity with Git, Bitbucket, and branching strategies for collaborative development.

  • Database knowledge: SQL for data validation and backend testing.


Testing Expertise

  • API automation: design and execute automated API tests, validate endpoints, and ensure reliability across environments.

  • Functional and regression testing: strong understanding of test types including smoke, sanity, and integration testing.

  • Defect management: experience with tools like Jira and TestRail for tracking and reporting.

Read more
AdTech Industry

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹30L - ₹40L / yr
DevOps
Docker
CI/CD
Amazon Web Services (AWS)
AWS CloudFormation
+43 more

REVIEW CRITERIA:

MANDATORY:

  • Strong Senior/Lead DevOps Engineer Profile
  • Must have 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
  • Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
  • Must have solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
  • Must have hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
  • Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
  • Must have experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
  • Must have good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
  • Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
  • It is an individual contributor (IC) role


PREFERRED:

  • Proficiency in scripting languages (Bash, Python) for automation and operational tasks.
  • Strong understanding of security best practices, IAM, WAF, and GuardDuty configurations.
  • Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
  • Candidates from NCR region only (No outstation candidates).


ROLES AND RESPONSIBILITIES:

We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.


KEY RESPONSIBILITIES:

Cloud & Infrastructure as Code (IaC)-

  • Architect and manage AWS environments ensuring scalability, security, and high availability.
  • Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
  • Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.


CI/CD & Automation:

  • Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
  • Automate deployments, provisioning, and monitoring across environments.


Containers & Orchestration:

  • Deploy and operate workloads on Docker and Kubernetes (EKS).
  • Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
  • Optimize performance of containerized and microservices applications.


Monitoring & Reliability:

  • Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Establish logging, alerting, and proactive monitoring for high availability.


Security & Compliance:

  • Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
  • Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
  • Configure VPNs, firewalls, secure access policies, and AWS Organizations.


Databases & Analytics:

  • Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Manage data reliability, performance tuning, and cloud-native integrations.
  • Experience with Apache Airflow and Spark.


IDEAL CANDIDATE:

  • 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
  • Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening etc).
  • Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
  • Proven ability with CI/CD pipeline automation and DevSecOps practices.
  • Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
  • Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Strong scripting skills (Shell/bash, Python, or similar) for automation.
  • Bachelor’s or Master’s degree
  • Effective communication skills


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Read more
Spark Eighteen
Posted by Rishabh Jain
Delhi
5 - 10 yrs
₹23L - ₹30L / yr
CI/CD
Amazon Web Services (AWS)
Docker
Kubernetes
Jenkins
+2 more

About the Job

This is a full-time role for a Lead DevOps Engineer at Spark Eighteen. We are seeking an experienced DevOps professional to lead our infrastructure strategy, design resilient systems, and drive continuous improvement in our deployment processes. In this role, you will architect scalable solutions, mentor junior engineers, and ensure the highest standards of reliability and security across our cloud infrastructure. The job location is flexible with preference for the Delhi NCR region.


Responsibilities

  • Lead and mentor the DevOps/SRE team
  • Define and drive DevOps strategy and roadmaps
  • Oversee infrastructure automation and CI/CD at scale
  • Collaborate with architects, developers, and QA teams to integrate DevOps practices
  • Ensure security, compliance, and high availability of platforms
  • Own incident response, postmortems, and root cause analysis
  • Manage budgeting, team hiring, and performance evaluation


Requirements

Technical Skills

  • Bachelor's or Master's degree in Computer Science, Engineering, or related field.
  • 7+ years of professional DevOps experience with demonstrated progression.
  • Strong architecture and leadership background
  • Deep hands-on knowledge of infrastructure as code, CI/CD, and cloud
  • Proven experience with monitoring, security, and governance
  • Effective stakeholder and project management
  • Experience with tools like Jenkins, ArgoCD, Terraform, Vault, ELK, etc.
  • Strong understanding of business continuity and disaster recovery


Soft Skills

  • Cross-functional communication excellence with ability to lead technical discussions.
  • Strong mentorship capabilities for junior and mid-level team members.
  • Advanced strategic thinking and ability to propose innovative solutions.
  • Excellent knowledge transfer skills through documentation and training.
  • Ability to understand and align technical solutions with broader business strategy.
  • Proactive problem-solving approach with focus on continuous improvement.
  • Strong leadership skills in guiding team performance and technical direction.
  • Effective collaboration across development, QA, and business teams.
  • Ability to make complex technical decisions with minimal supervision.
  • Strategic approach to risk management and mitigation.


What We Offer

  • Professional Growth: Continuous learning opportunities through diverse projects and mentorship from experienced leaders
  • Global Exposure: Work with clients from 20+ countries, gaining insights into different markets and business cultures
  • Impactful Work: Contribute to projects that make a real difference, with solutions generating over $1B in revenue
  • Work-Life Balance: Flexible arrangements that respect personal wellbeing while fostering productivity
  • Career Advancement: Clear progression pathways as you develop skills within our growing organization
  • Competitive Compensation: Attractive salary packages that recognize your contributions and expertise


Our Culture

At Spark Eighteen, our culture centers on innovation, excellence, and growth. We believe in:

  • Quality-First: Delivering excellence rather than just quick solutions
  • True Partnership: Building relationships based on trust and mutual respect
  • Communication: Prioritizing clear, effective communication across teams
  • Innovation: Encouraging curiosity and creative approaches to problem-solving
  • Continuous Learning: Supporting professional development at all levels
  • Collaboration: Combining diverse perspectives to achieve shared goals
  • Impact: Measuring success by the value we create for clients and users


Apply Here - https://tinyurl.com/t6x23p9b

Read more