50+ DevOps Jobs in Bangalore (Bengaluru) | DevOps Job openings in Bangalore (Bengaluru)
Apply to 50+ DevOps Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest DevOps Job opportunities across top companies like Google, Amazon & Adobe.



- 5+ years of experience
- Flask and REST API development experience
- Proficiency in Python programming
- Basic knowledge of front-end development
- Basic knowledge of data manipulation and analysis libraries
- Code versioning and collaboration using Git
- Knowledge of libraries for extracting data from websites (web scraping)
- Knowledge of SQL and NoSQL databases
- Familiarity with RESTful APIs
- Familiarity with cloud (Azure/AWS) technologies
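The SQL-database bullet above can be illustrated with a minimal, self-contained sketch using Python's built-in sqlite3 driver. The table and column names here are purely illustrative, not taken from the job description.

```python
import sqlite3

def fetch_active_users(conn: sqlite3.Connection) -> list:
    """Return names of active users, oldest first."""
    rows = conn.execute(
        "SELECT name FROM users WHERE active = 1 ORDER BY id"
    ).fetchall()
    return [name for (name,) in rows]

# In-memory database so the example is fully self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
conn.executemany(
    "INSERT INTO users (name, active) VALUES (?, ?)",
    [("asha", 1), ("ravi", 0), ("meena", 1)],
)
print(fetch_active_users(conn))  # → ['asha', 'meena']
```

The same parameterized-query habit (placeholders instead of string formatting) carries over directly to PostgreSQL and MySQL drivers.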
About Eazeebox
Eazeebox is India’s first B2B Quick Commerce platform for home electrical goods. We empower electrical retailers with access to 100+ brands, flexible credit options, and 4-hour delivery—making supply chains faster, smarter, and more efficient. Our tech-driven approach enables sub-3 hour inventory-aware fulfilment across micro-markets, with a goal of scaling to 50+ orders/day per store.
About the Role
We’re looking for a DevOps Engineer to help scale and stabilize the cloud-native backbone that powers Eazeebox. You’ll play a critical role in ensuring our microservices architecture remains reliable, responsive, and performant—especially during peak retailer ordering windows.
What We’re Looking For
- 2+ years in a DevOps or SRE role in production-grade, cloud-native environments (AWS-focused)
- Solid hands-on experience with Docker, Kubernetes/EKS, and container networking
- Proficiency with CI/CD tools, especially GitHub Actions
- Experience with staged rollout strategies for microservices
- Familiarity with event-driven architectures using SNS, SQS, and Step Functions
- Strong ability to optimize cloud costs without compromising uptime or performance
- Scripting/automation skills in Python, Go, or Bash
- Good understanding of observability, on-call readiness, and incident response workflows
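The scripting/automation bullet above often translates into small reliability helpers; a hedged sketch of one common pattern is retrying a flaky cloud API call with exponential backoff. The `flaky` call and `TransientError` type are hypothetical stand-ins for a real SDK call and its throttling exception.

```python
import time

class TransientError(Exception):
    """Stand-in for a throttling/timeout error from a cloud SDK."""

def with_backoff(call, max_attempts=4, base_delay=0.01):
    """Invoke `call`, retrying transient failures with doubling delays."""
    for attempt in range(max_attempts):
        try:
            return call()
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated call that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("simulated throttle")
    return "ok"

print(with_backoff(flaky))  # → ok
```

In production you would typically add jitter to the delay so that many retrying clients do not synchronize.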
Nice to Have
- Experience in B2B commerce, delivery/logistics networks, or on-demand operations
- Exposure to real-time inventory systems or marketplaces
- Worked on high-concurrency, low-latency backend systems

Mode of Hire: Permanent
Required Skills Set (Mandatory): Linux, Shell Scripting, Python, AWS, Security best practices, Git
Desired Skills (Good if you have): Ansible, Terraform
Job Responsibilities
- Design, develop, and maintain deployment pipelines and automation tooling to improve platform efficiency, scalability, and reliability.
- Manage infrastructure and services in production AWS environments.
- Drive platform improvements with a focus on security, scalability, and operational excellence.
- Collaborate with engineering teams to enhance development tooling, streamline access workflows, and improve platform usability through feedback.
- Mentor junior engineers and help foster a culture of high-quality engineering and knowledge sharing.
Job Requirements
- Strong foundational understanding of Linux systems.
- Cloud experience (e.g., AWS) with strong problem-solving in cloud-native environments.
- Proven track record of delivering robust, well-documented, and secure automation solutions.
- Comfortable owning end-to-end delivery of infrastructure components and tooling.
Preferred Qualifications
- Advanced system and cloud optimization skills.
- Prior experience in platform teams or DevOps roles at product-focused startups.
- Demonstrated contributions to internal tooling, open-source, or automation projects.
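In the spirit of the "Linux, Shell Scripting, Python, security best practices" skill set above, here is a small illustrative automation sketch: flagging world-writable files under a directory. The directory and file names are invented for the demo.

```python
import os
import stat
import tempfile

def world_writable(root: str) -> list:
    """Return sorted paths of files under `root` with the world-writable bit set."""
    flagged = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_mode & stat.S_IWOTH:
                flagged.append(path)
    return sorted(flagged)

# Demo in a throwaway directory.
tmp = tempfile.mkdtemp()
safe = os.path.join(tmp, "safe.conf")
risky = os.path.join(tmp, "risky.sh")
for p in (safe, risky):
    open(p, "w").close()
os.chmod(safe, 0o640)   # owner rw, group r
os.chmod(risky, 0o666)  # world-writable: should be flagged
print(world_writable(tmp) == [risky])
```

A real audit script would also check setuid bits and ownership, but the `os.stat` + mode-mask pattern is the core of it.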
- Create and manage Jenkins pipelines using Linux, Groovy scripting, and Python
- Analyze and fix issues in Jenkins, GitHub, Nexus, SonarQube, and AWS Cloud
- Perform Jenkins, GitHub, SonarQube, and Nexus administration
- Create resources in the AWS environment using infrastructure-as-code; analyze and fix issues in AWS Cloud
Good-to-Have
- AWS Cloud certification
- Terraform certification
- Kubernetes/Docker experience
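Infrastructure-as-code, mentioned above, ultimately means producing declarative documents; a minimal sketch below renders a CloudFormation-style template from Python data. The bucket name and tag values are hypothetical.

```python
import json

def s3_bucket_template(bucket_name: str, env: str) -> str:
    """Render a minimal CloudFormation-style JSON template for one S3 bucket."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "Tags": [{"Key": "env", "Value": env}],
                },
            }
        },
    }
    return json.dumps(template, indent=2)

doc = s3_bucket_template("demo-artifacts", "staging")
parsed = json.loads(doc)
print(parsed["Resources"]["AppBucket"]["Type"])  # → AWS::S3::Bucket
```

Generating templates from data like this is also how tools such as Troposphere and the AWS CDK work under the hood.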
About Us:
At Vahan, we are building the first AI-powered recruitment marketplace for India’s 300 million-strong blue-collar workforce, opening doors to economic opportunities and brighter futures.
Already India’s largest recruitment platform, Vahan is supported by marquee investors like Khosla Ventures, Y Combinator, Airtel, Vijay Shekhar Sharma (CEO, Paytm), and leading executives from Google and Facebook.
Our customers include names like Swiggy, Zomato, Rapido, Zepto, and many more. We leverage cutting-edge technology and AI to recruit for the workforces of some of the most recognized companies in the country.
Our vision is ambitious: to become the go-to platform for blue-collar professionals worldwide, empowering them with not just earning opportunities but also the tools, benefits, and support they need to thrive. We aim to impact over a billion lives worldwide, creating a future where everyone has access to economic prosperity.
If our vision excites you, Vahan might just be your next adventure. We’re on the hunt for driven individuals who love tackling big challenges. If this sounds like your kind of journey, dive into the details and see where you can make your mark.
What You Will Be Doing:
- Build & Automate Cloud Infrastructure – Design, deploy, and optimize cloud environments, ensuring scalability, reliability, and cost efficiency.
- Set Up CI/CD & Deployment Pipelines – Develop automated workflows to streamline code integration, testing, and deployment for faster releases.
- Monitor & Improve System Performance – Implement robust monitoring, logging, and alerting mechanisms to proactively identify and resolve issues.
- Manage Containers & Scalability – Deploy and maintain containerized applications, ensuring efficient resource utilization and high availability.
- Ensure Security & Reliability – Enforce access controls, backup strategies, and disaster recovery plans to safeguard infrastructure and data.
- Adapt & Scale with the Startup – Take on dynamic responsibilities, quickly learn new technologies, and evolve processes to meet growing business needs.
You Will Thrive in This Role If You:
Must Haves:
- Experience: 3+ years in DevOps or related roles, focusing on cloud environments, automation, CI/CD, and Linux system administration. Strong expertise in debugging and infrastructure performance improvements.
- Cloud Expertise: In-depth experience with one or more cloud platforms (AWS, GCP), including services like EC2, RDS, S3, VPC, etc.
- IaC Tools: Proficiency in Terraform, Ansible, CloudFormation, or similar tools.
- Scripting Skills: Strong scripting abilities in Python, Bash, or PowerShell.
- Containerization: Experience with Docker, including managing containers in production.
- Monitoring Tools: Hands-on experience with tools like ELK, Prometheus, Grafana, CloudWatch, New Relic, and Datadog.
- Version Control: Proficiency with Git and code repository management.
- Soft Skills: Excellent problem-solving skills, attention to detail, and effective communication with both technical and non-technical team members.
- Database Management: Experience with managing and tuning databases like MySQL and PostgreSQL.
- Deployment Pipelines: Experience with Jenkins and similar CI/CD tools.
- Message Queues: Experience with RabbitMQ, SQS, or Kafka.
Nice to Have:
- Certifications: AWS Certified DevOps Engineer, Certified Kubernetes Administrator (CKA), or similar.
- SRE Practices: Familiarity with Site Reliability Engineering (SRE) principles, including error budgeting and service level objectives (SLOs).
- Serverless Computing: Knowledge of AWS Lambda, Azure Functions, or similar architectures.
- Containerization: Experience with Docker and Kubernetes, including managing production clusters.
- Security: Awareness of security best practices and implementations.
- Cloud Cost Optimization: Experience with cost-saving initiatives in cloud environments.
- Data Pipelines & ETL: Experience in setting up and managing data pipelines and ETL workflows.
- Familiarity with Modern Tech Stacks: Exposure to Python, Node.js, React.js, and Kotlin for app deployment CI/CD pipelines.
- MLOps Pipelines: Understanding of ML model deployment and operationalization.
- Data Retrieval & Snapshots: Experience with point-in-time recovery (PITR) and EC2/RDS snapshots.
- System Resiliency & Recovery: Strategies for ensuring system reliability and recovery in case of downtime.
At Vahan, you’ll have the opportunity to make a real impact in a sector that touches millions of lives. We’re committed not only to advancing the livelihoods of our workforce but also to taking care of the people who make this mission possible. Here’s what we offer:
- Unlimited PTO: Trust and flexibility to manage your time in the way that works best for you.
- Comprehensive Medical Insurance: We’ve got you covered with plans designed to support you and your loved ones.
- Monthly Wellness Leaves: Regular time off to recharge and focus on what matters most.
- Competitive Pay: Your contributions are recognized and rewarded with a compensation package that reflects your impact.
Join us, and be part of something bigger—where your work drives real, positive change in the world.


About Vahan
At Vahan.ai, we are building India’s first AI-powered recruitment marketplace for the 300 million-strong blue-collar workforce — opening doors to economic opportunities and brighter futures.
Already India’s largest recruitment platform, Vahan.ai is backed by marquee investors like Khosla Ventures, Bharti Airtel, Vijay Shekhar Sharma (CEO, Paytm), and leading executives from Google and Facebook.
Our customers include Swiggy, Zomato, Rapido, Zepto, and many more. We leverage cutting-edge technology and AI to recruit for the workforces of some of the most recognized companies in the country.
Our vision is ambitious:
To become the go-to platform for blue-collar professionals worldwide — empowering them with not just earning opportunities but also the tools, benefits, and support they need to thrive. We aim to impact over a billion lives globally, creating a future where everyone has access to economic prosperity.
If our vision excites you, Vahan.ai might just be your next adventure. We’re on the hunt for driven individuals who love tackling big challenges. Dive into the details below and see where you can make your mark.
Role Overview:
We're seeking an experienced Senior Engineering Manager to lead multiple engineering pods/projects (2–3) and drive technical excellence across our product portfolio. This role combines hands-on technical leadership with people management, requiring someone who can scale teams, deliver complex projects, and maintain high engineering standards in a fast-paced startup environment.
Key Responsibilities
Team Leadership & Management
- Lead and manage multiple engineering pods (15–20 engineers in total)
- Hire, onboard, and develop engineering talent across different experience levels
- Conduct performance reviews, provide career guidance, and manage team growth
- Foster a culture of technical excellence, collaboration, and continuous learning
- Build and implement engineering processes that scale with company growth
Technical Leadership
- Guide technical architecture decisions across full-stack applications
- Provide technical mentorship and code review guidance to engineers
- Lead both frontend (React.js, React Native) and backend (Node.js) development initiatives
- Oversee web and mobile application development strategies
- Drive technical problem-solving and debugging for complex production issues
Infrastructure & Operations
- Manage AWS infrastructure, ensuring scalability, reliability, and cost-effectiveness
- Build and maintain production support processes and incident response procedures
- Implement monitoring, alerting, and observability practices
- Lead post-mortem processes and drive continuous improvement initiatives
Cross-Functional Collaboration
- Partner closely with Product, QA, and Business teams to deliver on company objectives
- Translate business requirements into technical roadmaps and delivery plans
- Manage stakeholder expectations and communicate technical decisions effectively
- Balance feature delivery with technical debt management and system scalability
Strategic Planning
- Develop engineering roadmaps aligned with business goals and product strategy
- Make data-driven decisions about technology choices and team structure
- Identify and mitigate technical and operational risks
- Drive engineering metrics and KPIs to measure team performance and product quality
Required Qualifications
Experience
- 7–10 years of software engineering experience
- 3+ years of engineering management experience, preferably in startup environments
- Proven track record of leading teams of 10+ engineers across multiple projects
- Experience hiring and scaling engineering teams in high-growth companies
Technical Skills
- Strong expertise in Node.js, React.js, and React Native (at least one frontend and backend)
- Experience with full-stack web and mobile application development
- Solid understanding of AWS services and cloud infrastructure management
- Knowledge of software architecture, system design, and scalability principles
- Experience with CI/CD pipelines, monitoring tools, and DevOps practices
Leadership & Management
- Demonstrated ability to mentor and develop engineering talent
- Experience conducting performance reviews and managing career development
- Strong stakeholder management and cross-functional collaboration skills
- Proven ability to balance technical decisions with business objectives
- Experience managing production systems and incident response processes
Soft Skills
- Excellent communication and presentation skills
- Strong problem-solving and analytical thinking abilities
- Ability to work effectively in fast-paced, ambiguous environments
- Experience making difficult prioritization decisions under resource constraints
- Cultural fit for startup environment with hands-on, results-oriented approach
Preferred Qualifications
- Experience in Series B+ stage startups or high-growth technology companies
- Background in tech-first / AI-first startups
- AI/ML product knowledge and experience (good to have)
- Experience with microservices architecture and distributed systems
- Contributions to open-source projects or technical community involvement
What We Offer
- Unlimited PTO – Trust and flexibility to manage your time in the way that works best for you
- Comprehensive Medical Insurance – Plans designed to support you and your loved ones
- Monthly Wellness Leaves – Regular time off to recharge and focus on what matters most
- Competitive Pay – Your contributions are recognized and rewarded with a compensation package that reflects your impact
Join us, and be part of something bigger — where your work drives real, positive change in the world.

Job Title: Full Stack Developer (MERN + Python)
Location: Bangalore
Job Type: Full-time
Experience: 4–8 years
About Miror
Miror is a pioneering FemTech company transforming how midlife women navigate perimenopause and menopause. We offer medically-backed solutions, expert consultations, community engagement, and wellness products to empower women in their health journey. Join us to make a meaningful difference through technology.
Role Overview
We are seeking a passionate and experienced Full Stack Developer skilled in the MERN stack and Python (Django/Flask) to build and scale high-impact features across our web and mobile platforms. You will collaborate with cross-functional teams to deliver seamless user experiences and robust backend systems.
Key Responsibilities
· Design, develop, and maintain scalable web applications using MySQL/Postgres, MongoDB, Express.js, React.js, and Node.js
· Build and manage RESTful APIs and microservices using Python (Django/Flask/FastAPI)
· Integrate with third-party platforms like OpenAI, WhatsApp APIs (Whapi), Interakt, and Zoho
· Optimize performance across the frontend and backend
· Collaborate with product managers, designers, and other developers to deliver high-quality features
· Ensure security, scalability, and maintainability of code
· Write clean, reusable, and well-documented code
· Contribute to DevOps, CI/CD, and server deployment workflows (AWS/Lightsail)
· Participate in code reviews and mentor junior developers if needed
Required Skills
· Strong experience with MERN Stack: MongoDB, Express.js, React.js, Node.js
· Proficiency in Python and web frameworks like Django, Flask, or FastAPI
· Experience working with REST APIs, JWT/Auth, and WebSockets
· Good understanding of frontend design systems, state management (Redux/Context), and responsive UI
· Familiarity with database design and queries (MongoDB, PostgreSQL/MySQL)
· Experience with Git, Docker, and deployment pipelines
· Comfortable working in Linux-based environments (e.g., Ubuntu on AWS)
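The JWT/Auth requirement above rests on HMAC signing; here is a stdlib-only sketch of the HS256 scheme behind JWTs. In practice you would use a maintained library such as PyJWT rather than rolling your own, and the secret here is a placeholder.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token: str, secret: bytes) -> bool:
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(secret, signing_input.encode(), hashlib.sha256).digest())
    return hmac.compare_digest(expected.decode(), sig)

token = sign_jwt({"sub": "user-42"}, b"s3cret")
print(verify_jwt(token, b"s3cret"), verify_jwt(token, b"wrong"))  # → True False
```

Note the constant-time comparison (`hmac.compare_digest`): comparing signatures with `==` leaks timing information.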
Bonus Skills
· Experience with AI integrations (e.g., OpenAI, LangChain)
· Familiarity with WooCommerce, WordPress APIs
· Experience in chatbot development or WhatsApp API integration
Who You Are
· You are a problem-solver with a product-first mindset
· You care about user experience and performance
· You enjoy working in a fast-paced, collaborative environment
· You have a growth mindset and are open to learning new technologies
Why Join Us?
· Work at the intersection of healthcare, community, and technology
· Directly impact the lives of women across India and beyond
· Flexible work environment and collaborative team
· Opportunity to grow with a purpose-driven startup
If you are interested, please apply here and drop me a message on CutShort.
1. Software Development Engineer - Salesforce
What we ask for
We are looking for strong engineers to build best-in-class systems for commercial and wholesale banking at the Bank, using Salesforce Service Cloud. We seek experienced developers who bring a deep understanding of Salesforce development practices, patterns, anti-patterns, governor limits, and the sharing and security model, allowing us to architect and develop robust applications.
You will work closely with business and product teams to build applications that give end users an intuitive, clean, minimalist, easy-to-navigate experience.
You will develop systems that are scalable, secure, highly resilient, and low-latency by applying software development principles and clean-code practices.
You should be open to working in a start-up environment and have the confidence to deal with complex issues, keeping focus on solutions and project objectives as your guiding North Star.
Technical Skills:
● Strong hands-on frontend development using JavaScript and LWC
● Expertise in backend development using Apex, Flows, Async Apex
● Understanding of Database concepts: SOQL, SOSL and SQL
● Hands-on experience in API integration using SOAP, REST APIs, and GraphQL
● Experience with ETL tools, data migration, and data governance
● Experience with Apex design patterns, integration patterns, and the Apex testing framework
● Follow an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, or Bitbucket
● Should have worked with at least one programming language (Java, Python, C++) and have a good understanding of data structures
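Governor limits, mentioned above, cap how much work a single Salesforce transaction may do; a common mitigation is chunking records into batches that stay under the limit. A language-agnostic sketch in Python (the batch size here is illustrative, not an actual Salesforce limit):

```python
def batched(records, batch_size):
    """Yield successive fixed-size chunks of `records`."""
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

# Hypothetical record IDs, processed 4 at a time.
ids = [f"rec-{n}" for n in range(10)]
chunks = list(batched(ids, 4))
print([len(c) for c in chunks])  # → [4, 4, 2]
```

The same bulkification idea is what Apex patterns like batchable jobs and `Database.insert` on collections implement natively.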
Preferred qualifications
● Graduate degree in engineering
● Experience developing with India stack
● Experience in fintech or banking domain


Must Have Skills:
● .NET Expertise: 8+ years of hands-on experience with C# and .NET Framework (preferably 4.8+), with deep knowledge of MVC5 architecture.
● Front-End Mastery: Advanced skills in Razor views, Bootstrap 5, CSS/LESS, JavaScript, and jQuery for building modern, responsive UIs.
● Database Proficiency: Strong experience with MSSQL, including schema design, performance tuning, and writing complex queries and stored procedures.
● DevOps Knowledge: Proficient in IIS configuration, deployment automation, and CI/CD best practices.
● Team Leadership: Proven ability to lead and mentor development teams, manage project timelines, and deliver high-quality software.
● Problem Solving: Exceptional debugging, analytical, and troubleshooting abilities across the stack.
● Collaboration: Experience working in Agile/Scrum teams and participating in technical discussions and code reviews.
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines, to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, NewRelic or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
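The monitoring-and-alerting responsibility described above can be sketched in miniature: evaluate service health from a probe function and decide what to page on. The probe is injected so nothing here touches the network, and the service names are hypothetical.

```python
def check_services(services, probe, failure_threshold=1, samples=3):
    """Probe each service `samples` times; return sorted names whose
    failure count meets `failure_threshold`."""
    failing = []
    for name in services:
        failures = sum(1 for _ in range(samples) if not probe(name))
        if failures >= failure_threshold:
            failing.append(name)
    return sorted(failing)

def fake_probe(name):
    # Simulate one persistently unhealthy service.
    return name != "payments"

alerts = check_services(["api", "payments", "search"], fake_probe, failure_threshold=2)
print(alerts)  # → ['payments']
```

Requiring multiple failed samples before alerting is the same debouncing idea behind Prometheus's `for:` clause on alert rules.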
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two
- Minimum 5 years of experience in a customer-facing role such as pre-sales, solutions engineering, or technical architecture.
- Exceptional communication and presentation skills.
- Proven ability in technical integrations and conducting POCs.
- Proficiency in coding with high-level programming languages (Java, Go, Python).
- Solid understanding of Monitoring, Observability, Log Management, SIEM.
- Background in Engineering/DevOps will be considered an advantage.
- Previous experience in Technical Sales of Log Analytics, Monitoring, APM, RUM, SIEM is desirable.
Technical Expertise :
- In-depth knowledge of Kubernetes, AWS, Azure, GCP, Docker, Prometheus, OpenTelemetry.
- Candidates should have hands-on experience and the ability to integrate these technologies into customer environments, providing tailored solutions that meet diverse operational requirements.
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources via terraform/other automation.
· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
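CI/CD pipelines like the ones described above are dependency graphs of stages; a compact way to reason about a valid execution order is a topological sort. A sketch using the Python standard library (stage names are illustrative):

```python
from graphlib import TopologicalSorter

# Each stage maps to the set of stages it depends on.
stages = {
    "build": set(),
    "test": {"build"},
    "scan": {"build"},
    "deploy": {"test", "scan"},
}

order = list(TopologicalSorter(stages).static_order())
print(order[0], order[-1])  # → build deploy
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is exactly the misconfiguration a pipeline validator wants to catch before anything runs.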
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
Looking for Fresher developers
Responsibilities:
- Implement integrations requested by customers
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Requirements and skills:
- Experience in a DevOps Engineer or similar software engineering role
- Good knowledge of Terraform and Kubernetes
- Working knowledge of AWS and Google Cloud
You can directly contact me on nine three one six one two zero one three two
Job Description
Role: Sr. DevOps – Architect
Location: Bangalore
Who are we looking for?
A senior-level DevOps consultant with deep DevOps expertise. The individual should be passionate about technology and demonstrate depth and breadth of expertise in similar roles and in enterprise systems/enterprise architecture frameworks.
Technical Skills:
• 8+ years of relevant DevOps /Operations/Development experience working under Agile DevOps culture on large scale distributed systems.
• Experience in building a DevOps platform in integrating DevOps tool chain using REST/SOAP/ESB technologies.
• Hands-on programming skills developing automation modules in one of these scripting languages: Python, Perl, Ruby, or Bash.
• Hands-on experience with a public cloud such as AWS, Azure, OpenStack, or Pivotal Cloud Foundry is required; Azure experience is a must.
• Experience with more than one configuration management tool (Chef, Puppet, Ansible) and building your own cookbooks/manifests is required.
• Experience with Docker and Kubernetes.
• Experience in Building CI/CD pipelines using any of the continuous integration tools like Jenkins, Bamboo etc.
• Experience with planning tools like Jira, Rally etc.
• Hands-on experience with continuous integration and build tools (Jenkins, Bamboo, CruiseControl, etc.) along with version control systems (Git, SVN, GitHub, TFS, etc.), build automation tools like Maven/Gradle/Ant, and dependency management tools like Artifactory/Nexus.
• Experience with more than one deployment automation tool, such as IBM UrbanCode, CA Automic, or XL Deploy.
• Experience setting up and managing DevOps tooling for repositories, monitoring, and log analysis (New Relic, Splunk, AppDynamics, etc.).
• Understanding of Applications, Networking and Open source tools.
• Experience with the security side of DevOps, i.e., DevSecOps.
• Good to have an understanding of microservices architecture.
• Experience working with remote/offshore teams is a huge plus
• Experience in building dashboards based on modern JS technologies like Node.js
• Experience with NoSQL database like MongoDB
• Experience in working with REST APIs
• Experience with tools like NPM, Gulp
Process Skills:
• Ability to perform rapid assessments of clients’ internal technology landscapes, targeting use cases and deployment targets
• Develop and create program blueprints, case studies, and supporting technical documentation for DevOps to be commercialized and replicated across different business customers
• Compile, deliver, and evangelize roadmaps that guide the evolution of services
• Grasp and communicate big-picture enterprise-wide issues to team
• Experience working in an Agile / Scrum / SAFe environment preferred
Behavioral Skills :
• Should have directly worked on creating enterprise level operating models, architecture options
• Model as-is and to-be architectures based on business requirements
• Good communication & presentation skills
• Self-driven + Disciplined + Organized + Result Oriented + Focused & Passionate about work
• Flexible for short term travel
Primary Duties / Responsibilities:
• Build Automations and modules for DevOps platform
• Build integrations between various DevOps tools
• Interface with other teams to provide support and understand the overall vision of the transformation platform.
• Understand customer deployment scenarios, and continuously improve and update the platform based on agile requirements.
• Preparing HLDs and LLDs.
• Presenting status to leadership and key stakeholders at regular intervals
Qualification:
• At least 12 years of work experience in software development.
• 5+ years industry experience in DevOps architecture related to Continuous Integration/Delivery solutions, Platform Automation including technology consulting experience
• Education qualification: B.Tech, BE, BCA, MCA, M. Tech or equivalent technical degree from a reputed college
Job Description
What does a successful Senior DevOps Engineer do at Fiserv?
This role will focus on contributing to and enhancing our DevOps environment within the Issuer Solutions group, where our cross-functional Scrum teams deliver solutions built on cutting-edge mobile technology and products. You will be expected to provide support across the wider business unit, leading DevOps practices and initiatives.
What will you do:
• Build, manage, and deploy CI/CD pipelines.
• Work with Helm Charts, Rundeck, and OpenShift.
• Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines.
• Implementing various development, testing, automation tools, and IT infrastructure
• Optimize and automate release/development cycles and processes.
• Be part of and help promote our DevOps culture.
• Identify and implement continuous improvements to the development practice
What you must have:
• 3+ years of experience in DevOps with hands-on experience in the following:
- Writing automation scripts for deployments and housekeeping using shell scripts (Bash) and Ansible playbooks
- Building Docker images and running/managing Docker instances
- Building Jenkins pipelines using Groovy scripts
- Working knowledge of Kubernetes, including application deployments, managing application configurations, and persistent volumes
• Good understanding of infrastructure as code
• Ability to write and update documentation
• Demonstrates a logical, process-oriented approach to problems and troubleshooting
• Ability to collaborate with multiple development teams
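As an illustration of the deployment-housekeeping automation bullet above, here is a minimal sketch (written in Python rather than Bash for readability, and assuming a hypothetical `release-YYYYMMDD-HHMM` directory naming convention) of a retention helper that decides which old release directories to prune:

```python
from datetime import datetime

def releases_to_prune(release_dirs, keep=5):
    """Given release directory names like 'release-20240131-1200',
    return the ones to delete, keeping only the `keep` newest."""
    def stamp(name):
        # 'release-YYYYMMDD-HHMM' -> datetime, used as the sort key
        _, date, time = name.split("-")
        return datetime.strptime(date + time, "%Y%m%d%H%M")
    newest_first = sorted(release_dirs, key=stamp, reverse=True)
    return newest_first[keep:]

old = releases_to_prune(
    ["release-20240101-0900", "release-20240102-0900",
     "release-20240103-0900", "release-20240104-0900"],
    keep=2,
)
print(old)  # the two oldest releases
```

In a real housekeeping script the returned names would be fed to a deletion step; keeping the decision logic pure like this makes it easy to dry-run and unit test.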
What you are preferred to have:
• 8+ years of development experience
• Jenkins administration experience
• Hands-on experience in building and deploying helm charts
Process Skills:
• Should have worked in Agile Project
Behavioral Skills :
• Good Communication skills
Skills
PRIMARY COMPETENCY : Cloud Infra | PRIMARY SKILL : DevOps | PRIMARY SKILL PERCENTAGE : 100
Job Title : Oracle Analytics Cloud (OAC) / Fusion Data Intelligence (FDI) Specialist
Experience : 3 to 8 years
Location : All USI locations – Hyderabad, Bengaluru, Mumbai, Gurugram (preferred) and Pune, Chennai, Kolkata
Work Mode : Hybrid Only (2-3 days from office or all 5 days from office)
Mandatory Skills : Oracle Analytics Cloud (OAC), Fusion Data Intelligence (FDI), RPD, OAC Reports, Data Visualizations, SQL, PL/SQL, Oracle Databases, ODI, Oracle Cloud Infrastructure (OCI), DevOps tools, Agile methodology.
Key Responsibilities :
- Design, develop, and maintain solutions using Oracle Analytics Cloud (OAC).
- Build and optimize complex RPD models, OAC reports, and data visualizations.
- Utilize SQL and PL/SQL for data querying and performance optimization.
- Develop and manage applications hosted on Oracle Cloud Infrastructure (OCI).
- Support Oracle Cloud migrations, OBIEE upgrades, and integration projects.
- Collaborate with teams using the ODI (Oracle Data Integrator) tool for ETL processes.
- Implement cloud scripting using cURL for Oracle Cloud automation.
- Contribute to the design and implementation of Business Continuity and Disaster Recovery strategies for cloud applications.
- Participate in Agile development processes and DevOps practices including CI/CD and deployment orchestration.
Required Skills :
- Strong hands-on expertise in Oracle Analytics Cloud (OAC) and/or Fusion Data Intelligence (FDI).
- Deep understanding of data modeling, reporting, and visualization techniques.
- Proficiency in SQL, PL/SQL, and relational databases on Oracle.
- Familiarity with DevOps tools, version control, and deployment automation.
- Working knowledge of Oracle Cloud services, scripting, and monitoring.
Good to Have :
- Prior experience in OBIEE to OAC migrations.
- Exposure to data security models and cloud performance tuning.
- Certification in Oracle Cloud-related technologies.
Job Summary:
We are seeking a skilled DevOps Engineer to design, implement, and manage CI/CD pipelines, containerized environments, and infrastructure automation. The ideal candidate should have hands-on experience with ArgoCD, Kubernetes, and Docker, along with a deep understanding of cloud platforms and deployment strategies.
Key Responsibilities:
- CI/CD Implementation: Develop, maintain, and optimize CI/CD pipelines using ArgoCD, GitOps, and other automation tools.
- Container Orchestration: Deploy, manage, and troubleshoot containerized applications using Kubernetes and Docker.
- Infrastructure as Code (IaC): Automate infrastructure provisioning with Terraform, Helm, or Ansible.
- Monitoring & Logging: Implement and maintain observability tools like Prometheus, Grafana, ELK, or Loki.
- Security & Compliance: Ensure best security practices in containerized and cloud-native environments.
- Cloud & Automation: Manage cloud infrastructure on AWS, Azure, or GCP with automated deployments.
- Collaboration: Work closely with development teams to optimize deployments and performance.
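The GitOps model behind ArgoCD, mentioned in the responsibilities above, can be pictured as a reconciliation loop: compare the desired state stored in Git with the live cluster state and compute the actions needed to converge. A toy illustration of that idea (hypothetical dict-based state, not the actual ArgoCD API):

```python
def reconcile(desired, live):
    """Toy GitOps reconciliation: both args map resource name -> spec dict.
    Returns the actions a controller such as ArgoCD would need to apply."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))
        elif live[name] != spec:
            actions.append(("update", name))   # live state drifted from Git
    for name in live:
        if name not in desired:
            actions.append(("delete", name))   # prune resources removed from Git
    return sorted(actions)

plan = reconcile(
    desired={"web": {"replicas": 2}, "worker": {"replicas": 1}},
    live={"web": {"replicas": 3}, "legacy": {}},
)
print(plan)
```

A real controller watches the Git repository and applies this diff continuously, which is what makes drift self-correcting.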
Required Skills & Qualifications:
- Experience: 5+ years in DevOps, Site Reliability Engineering (SRE), or Infrastructure Engineering.
- Tools & Tech: Strong knowledge of ArgoCD, Kubernetes, Docker, Helm, Terraform, and CI/CD pipelines.
- Cloud Platforms: Experience with AWS, GCP, or Azure.
- Programming & Scripting: Proficiency in Python, Bash, or Go.
- Version Control: Hands-on with Git and GitOps workflows.
- Networking & Security: Knowledge of ingress controllers, service mesh (Istio/Linkerd), and container security best practices.
Nice to Have:
- Experience with Kubernetes Operators, Kustomize, or FluxCD.
- Exposure to serverless architectures and multi-cloud deployments.
- Certifications in CKA, AWS DevOps, or similar.
Job Title : Java Backend Developer
Experience : 3 to 6 Years
Locations : Bangalore / Gurgaon (Hybrid – 3 Days Work From Office)
Shift Timings : 11:00 AM – 8:00 PM IST
Notice Period : Immediate to 15 Days Only
Job Description :
We are looking for experienced Java Backend Developers with strong expertise in building scalable microservices-based architectures. The ideal candidate should have hands-on experience with Spring Boot, containerized deployments, and DevOps tools.
✅ Must-Have Skills :
- Java – Strong programming skills in core Java.
- Spring Boot (2.x / 3.x) – Deep understanding of microservices architecture and patterns.
- Microservices – Design and implementation experience.
- Kubernetes – Experience deploying and managing microservices.
- Jenkins & Maven – Build and CI/CD pipeline experience.
- PostgreSQL – Experience with relational database management.
✨ Good-to-Have Skills :
- Git – Source control management
- CI/CD Pipeline Tools – End-to-end pipeline automation
- Cloud & DevOps Knowledge – Experience with cloud-based deployments


Role Summary :
We are seeking a skilled and detail-oriented SRE Release Engineer to lead and streamline the CI/CD pipeline for our C and Python codebase. You will be responsible for coordinating, automating, and validating biweekly production releases, ensuring operational stability, high deployment velocity, and system reliability.
Key Responsibilities :
● Own the release process: Plan, coordinate, and execute biweekly software releases across multiple services.
● Automate release pipelines: Build and maintain CI/CD workflows using tools such as GitHub Actions, Jenkins, or GitLab CI.
● Version control: Manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
● Integrate testing frameworks: Ensure automated test coverage (unit, integration, regression) is enforced pre-release.
● Release validation: Develop pre-release verification tools/scripts to validate build integrity and backward compatibility.
● Deployment strategy: Implement and refine blue/green, rolling, or canary deployments in staging and production environments.
● Incident readiness: Partner with SREs to ensure rollback strategies, monitoring, and alerting are release-aware.
● Collaboration: Work closely with developers, QA, and product teams to align on release timelines and feature readiness.
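Release tagging of the kind described above (Git Flow with version tags) typically follows semantic versioning. A small illustrative helper, assuming tags of the hypothetical form `vMAJOR.MINOR.PATCH`:

```python
import re

def bump_tag(tag, part="minor"):
    """Compute the next release tag from the latest Git tag (e.g. 'v1.4.2').
    `part` selects which component to bump: 'major', 'minor', or 'patch'."""
    m = re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", tag)
    if not m:
        raise ValueError(f"not a release tag: {tag}")
    major, minor, patch = map(int, m.groups())
    if part == "major":
        return f"v{major + 1}.0.0"     # breaking change: reset minor and patch
    if part == "minor":
        return f"v{major}.{minor + 1}.0"
    return f"v{major}.{minor}.{patch + 1}"

print(bump_tag("v1.4.2"))           # next biweekly minor release
print(bump_tag("v1.4.2", "patch"))  # hotfix
```

In a release pipeline this would run against `git describe --tags` output to mint the next tag automatically instead of relying on manual numbering.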
Required Qualifications
● Bachelor’s degree in Computer Science, Engineering, or related field.
● 3+ years in SRE, DevOps, or release engineering roles.
● Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
● Experience automating deployments for C and Python applications.
● Strong understanding of Git version control, merge/rebase strategies, tagging, and submodules (if used).
● Familiarity with containerization (Docker) and deployment orchestration (e.g., Kubernetes, Ansible, or Terraform).
● Solid scripting experience (Python, Bash, or similar).
● Understanding of observability, monitoring, and incident response tooling (e.g., Prometheus, Grafana, ELK, Sentry).
Preferred Skills
● Experience with release coordination in data networking environments.
● Familiarity with build tools like Make, CMake, or Bazel.
● Exposure to artifact management systems (e.g., Artifactory, Nexus).
● Experience deploying to Linux production systems with service uptime guarantees.



Key Responsibilities
🖥️ Frontend (Angular)
- Develop responsive, accessible web UI for assessment tools, educator dashboards, and parent reports.
- Implement dynamic form builders and scoring interfaces for multiple domains (reading, writing, math).
- Build PDF report previews, filtering tools, and real-time updates.
🔙 Backend (Python/FastAPI)
- Design and implement APIs for user management, assessments, intervention logs, and reports.
- Integrate scoring logic and data aggregation for AI insights.
- Implement secure role-based access (admin, educator, parent).
🛠️ DevOps / Systems
- Maintain scalable architecture with PostgreSQL / MongoDB, Docker, and deployment pipelines.
- Work with AI engineer to connect ML models to backend for prediction and recommendation layers.
- Optimize performance and ensure data security (GDPR-aligned).
Key Responsibilities:
Cloud Management:
- Manage and troubleshoot Linux environments.
- Create and manage Linux users on EC2 instances.
- Handle AWS services, including ECR, EKS, EC2, SNS, SES, S3, RDS, Lambda, DocumentDB, IAM, ECS, EventBridge, ALB, and SageMaker.
- Perform start/stop operations for SageMaker and EC2 instances.
- Solve IAM permission issues.
Containerization and Deployment:
- Create and manage ECS services.
- Implement IP whitelisting for enhanced security.
- Configure target mapping for load balancers and manage Glue jobs.
- Create load balancers (as needed).
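The IP-whitelisting responsibility above amounts to CIDR membership checks against an allow-list. A minimal sketch using Python's standard `ipaddress` module (the CIDR blocks below are placeholders, not real production ranges):

```python
import ipaddress

# Hypothetical allow-list: an internal range plus one documentation block
WHITELIST = [ipaddress.ip_network(c) for c in ("10.0.0.0/8", "203.0.113.0/24")]

def is_allowed(client_ip, whitelist=WHITELIST):
    """Return True if client_ip falls inside any whitelisted CIDR block."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in whitelist)

print(is_allowed("10.1.2.3"))    # inside 10.0.0.0/8
print(is_allowed("8.8.8.8"))     # not whitelisted
```

In practice the same ranges would be expressed as security-group or ALB listener rules; the point of the sketch is that whitelisting is set membership over networks, not string matching on addresses.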
CI/CD Setup:
- Set up and maintain CI/CD pipelines using AWS CodeCommit, CodeBuild, CodeDeploy, and CodePipeline.
Database Management:
- Manage PostgreSQL RDS instances, ensuring optimal performance and security.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum 3.5 years of experience in AWS and DevOps.
- Strong experience with Linux administration.
- Proficient in AWS services, particularly ECR, EKS, EC2, SNS, SES, S3, RDS, DocumentDB, IAM, ECS, EventBridge, ALB, and SageMaker.
- Experience with CI/CD tools (AWS CodeCommit, CodeBuild, CodeDeploy, CodePipeline).
- Familiarity with PostgreSQL and database management.
Required Skills:
- Experience in systems administration, SRE or DevOps focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in leveraging automation/DevOps principles, experience with operational tools, and the ability to apply best practices for infrastructure and software deployment (Ansible).
- HAProxy, Nginx, SSH, MySQL configuration and operation experience
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
📍 Position : Java Architect
📅 Experience : 10 to 15 Years
🧑💼 Open Positions : 3+
📍 Work Location : Bangalore, Pune, Chennai
💼 Work Mode : Hybrid
📅 Notice Period : Immediate joiners preferred; up to 1 month maximum
🔧 Core Responsibilities :
- Lead architecture design and development for scalable enterprise-level applications.
- Own and manage all aspects of technical development and delivery.
- Define and enforce best coding practices, architectural guidelines, and development standards.
- Plan and estimate the end-to-end technical scope of projects.
- Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
- Mentor and lead individual contributors and small development teams.
- Collaborate with cross-functional teams, including DevOps, Product, and QA.
- Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.
🛠️ Required Technical Skills :
- Strong hands-on expertise in Java, Spring Boot, Microservices architecture
- Experience with Kafka or similar messaging/event streaming platforms
- Proficiency in cloud platforms – AWS and Azure (must-have)
- Exposure to frontend technologies (nice-to-have)
- Solid understanding of HLD, system architecture, and design patterns
- Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
- Agile/Lean development, Pair Programming, and Continuous Integration practices
- Polyglot mindset is a plus (Scala, Golang, Python, etc.)
🚀 Ideal Candidate Profile :
- Currently working in a product-based environment
- Already functioning as an Architect or Principal Engineer
- Proven track record as an Individual Contributor (IC)
- Strong engineering fundamentals with a passion for scalable software systems
- No compromise on code quality, craftsmanship, and best practices
🧪 Interview Process :
- Round 1: Technical pairing round
- Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
- Final Round: HR and offer discussion
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
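A canary release, as listed in the zero-downtime strategies above, ramps traffic to the new version in stages and rolls everything back to the stable version on a failed health check. A toy model of that decision logic (the step percentages and health-check callback are assumptions for illustration):

```python
def canary_schedule(steps=(5, 25, 50, 100), healthy=lambda pct: True):
    """Ramp canary traffic through the given percentages, rolling back to 0
    (all traffic to the stable version) as soon as a health check fails."""
    served = []
    for pct in steps:
        if not healthy(pct):
            return served, 0          # rollback: abandon the canary
        served.append(pct)            # health check passed at this weight
    return served, steps[-1]          # full cutover

# Happy path: every step is healthy, canary reaches 100% of traffic
print(canary_schedule())

# Failure at 50%: ramp stops and traffic snaps back to stable
print(canary_schedule(healthy=lambda pct: pct < 50))
```

Real implementations drive the weights through a service mesh or load balancer and gate each step on error-rate and latency metrics rather than a single boolean, but the control flow is the same.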
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform, Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-10 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
About SAP Fioneer
Innovation is at the core of SAP Fioneer. We were spun out of SAP to drive agility, innovation, and delivery in financial services. With a foundation in cutting-edge technology and deep industry expertise, we elevate financial services through digital business innovation and cloud technology.
A rapidly growing global company with a lean and innovative team, SAP Fioneer offers an environment where you can accelerate your future.
Product Technology Stack
- Languages & Tools: PowerShell, Microsoft Graph PowerShell (MgGraph), Git
- Storage & Databases: Azure Storage, Azure Databases
Role Overview
As a Senior Cloud Solutions Architect / DevOps Engineer, you will be part of our cross-functional IT team in Bangalore, designing, implementing, and managing sophisticated cloud solutions on Microsoft Azure.
Key Responsibilities
Architecture & Design
- Design and document architecture blueprints and solution patterns for Azure-based applications.
- Implement hierarchical organizational governance using Azure Management Groups.
- Architect modern authentication frameworks using Azure AD/EntraID, SAML, OpenID Connect, and Azure AD B2C.
Development & Implementation
- Build closed-loop, data-driven DevOps architectures using Azure Insights.
- Apply code-driven administration practices with PowerShell, MgGraph, and Git.
- Deliver solutions using Infrastructure as Code (IaC), CI/CD pipelines, GitHub Actions, and Azure DevOps.
- Develop IAM standards with RBAC and EntraID.
Leadership & Collaboration
- Provide technical guidance and mentorship to a cross-functional Scrum team operating in sprints with a managed backlog.
- Support the delivery of SaaS solutions on Azure.
Required Qualifications & Skills
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in cloud solutions architecture and DevOps engineering.
- Extensive expertise in Azure services, core web technologies, and security best practices.
- Hands-on experience with IaC, CI/CD, Git, and pipeline automation tools.
- Strong understanding of IAM, security best practices, and governance models in Azure.
- Experience working in Scrum-based environments with backlog management.
- Bonus: Experience with Jenkins, Terraform, Docker, or Kubernetes.
Benefits
- Work with some of the brightest minds in the industry on innovative projects shaping the financial sector.
- Flexible work environment encouraging creativity and innovation.
- Pension plans, private medical insurance, wellness cover, and additional perks like celebration rewards and a meal program.
Diversity & Inclusion
At SAP Fioneer, we believe in the power of innovation that every employee brings and are committed to fostering diversity in the workplace.

Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG
Mandatory Skills:
- AZ-104 (Azure Administrator) experience
- CI/CD migration expertise
- Proficiency in Windows deployment and support
- Infrastructure as Code (IaC) in Terraform
- Automation using PowerShell
- Understanding of SDLC for C# applications (build/ship/run strategy)
- Apache Kafka experience
- Azure web app
Good to Have Skills:
- AZ-400 (Azure DevOps Engineer Expert)
- AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
- Apache Pulsar
- Windows containers
- Active Directory and DNS
- SAST and DAST tool understanding
- MSSQL database
- Postgres database
- Azure security
Dear Candidate,
We are urgently hiring an AWS Cloud Engineer for the Bangalore location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: AWS Cloud
Salary: Best in Industry (20-25% Hike on the current ctc)
Note:
Only immediate to 15-day joiners will be preferred.
Only candidates from Tier 1 companies will be shortlisted and selected.
Candidates with a notice period of more than 30 days will be rejected during screening.
Offer shoppers will be rejected.
Job description:
Description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings: 24x7 (work in shifts on a rotational basis)
Total Experience: 8+ years, with at least 5 years of relevant experience.
Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge (Redshift and Snowflake preferred)
Working with IaC – Terraform and CloudFormation
Working understanding of scripting languages, including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, including Splunk and Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL-based solutions
Experience working with streaming technologies, including Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for the technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a Shared Managed Service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technical hands on
Provide Incident and Problem management on the AWS IaaS and PaaS Platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor the AWS platform for technical issues
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third party suppliers and AWS to jointly resolve incidents
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN
- Development/technical support experience, preferably in DevOps.
- Looking for an engineer to be part of GitHub Actions support. Experience with CI/CD tools like Bamboo, Harness, Ansible, and Salt scripting.
- Hands-on expertise with GitHub Actions and CI/CD tools like Bamboo and Harness; CI/CD pipeline stages, build tools, SonarQube, Artifactory, NuGet, ProGet, Veracode, LaunchDarkly, GitHub/Bitbucket repos, and monitoring tools.
- Handling xMatters, Techlines, and Incidents.
- Strong scripting skills (PowerShell, Python, Bash/shell scripting) for implementing automation scripts and tools to streamline administrative tasks and improve efficiency.
- An Atlassian Tools Administrator is responsible for managing and maintaining Atlassian products such as Jira, Confluence, Bitbucket, and Bamboo.
- Expertise in Bitbucket and GitHub for version control and collaboration at a global level.
- Good experience with Linux/Windows systems activities and databases.
- Aware of SLA and error concepts and their implementations; provide support and participate in incident management & Jira stories. Continuously monitor system performance and availability, and respond to incidents promptly to minimize downtime.
- Well-versed with observability tools such as Splunk for monitoring, alerting, and logging solutions to identify and address potential issues, especially in infrastructure.
- Expert at troubleshooting production issues and bugs; identifying and resolving issues in production environments.
- Experience in providing 24x5 support.
- GitHub Actions
- Atlassian Tools (Bamboo, Bitbucket, Jira, Confluence)
- Build Tools (Maven, Gradle, MS Build, NodeJS)
- SonarQube, Veracode.
- Nexus, JFrog, Nuget, Proget
- Harness
- Salt Services, Ansible
- PowerShell, Shell scripting
- Splunk
- Linux, Windows
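The SLA awareness called out above can be sketched in a few lines. The `sla_breaches` helper and the per-priority minute values are illustrative assumptions, not values from any specific contract:

```python
from datetime import datetime, timedelta

# Per-priority resolution SLAs in minutes (illustrative values only)
SLA_MINUTES = {"P1": 60, "P2": 240, "P3": 1440}

def sla_breaches(incidents):
    """Return IDs of incidents whose resolution time exceeded their priority SLA."""
    breached = []
    for inc in incidents:
        allowed = timedelta(minutes=SLA_MINUTES[inc["priority"]])
        if inc["resolved_at"] - inc["opened_at"] > allowed:
            breached.append(inc["id"])
    return breached

incidents = [
    {"id": "INC-1", "priority": "P1",
     "opened_at": datetime(2024, 1, 1, 9, 0), "resolved_at": datetime(2024, 1, 1, 9, 45)},
    {"id": "INC-2", "priority": "P1",
     "opened_at": datetime(2024, 1, 1, 9, 0), "resolved_at": datetime(2024, 1, 1, 11, 0)},
]
print(sla_breaches(incidents))  # ['INC-2'], resolved in 2 h against a 60-minute P1 SLA
```

In a real setup the incident records would come from Jira or xMatters exports rather than inline literals.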
Job Summary:
We are seeking a skilled DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for managing, maintaining, and troubleshooting Rancher clusters, with a strong emphasis on Kubernetes operations. This role requires expertise in automation through shell scripting and proficiency in configuration management tools like Puppet and Ansible. Candidates should be highly self-motivated, capable of working on a rotating schedule, and committed to owning tasks through to delivery.
Key Responsibilities:
- Set up, operate, and maintain Rancher and Kubernetes (K8s) clusters, including on bare-metal environments.
- Perform upgrades and manage the lifecycle of Rancher clusters.
- Troubleshoot and resolve Rancher cluster issues efficiently.
- Write, maintain, and optimize shell scripts to automate Kubernetes-related tasks.
- Work collaboratively with the team to implement best practices for system automation and orchestration.
- Utilize configuration management tools like Puppet and Ansible (preferred but not mandatory).
- Participate in a rotating schedule, with the ability to work until 1 AM as required.
- Take ownership of tasks, ensuring timely delivery with high-quality standards.
Key Requirements:
- Strong expertise in Rancher and Kubernetes operations and maintenance.
- Experience in setting up and managing Kubernetes clusters on bare-metal systems is highly desirable.
- Proficiency in shell scripting for task automation.
- Familiarity with configuration management tools like Puppet and Ansible (good to have).
- Strong troubleshooting skills for Kubernetes and Rancher environments.
- Ability to work effectively in a rotating schedule and flexible hours.
- Strong ownership mindset and accountability for deliverables.
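Much of the Kubernetes task automation described above boils down to scripting around `kubectl` output. A minimal sketch, assuming `kubectl get pods -o json` output (a canned sample is used here so the example is self-contained; in practice it would come from a subprocess call):

```python
import json

def not_ready_pods(kubectl_json: str):
    """Given `kubectl get pods -o json` output, return names of pods that are not Ready."""
    pods = json.loads(kubectl_json)["items"]
    bad = []
    for pod in pods:
        # Map condition type -> status, e.g. {"Ready": "True"}
        conds = {c["type"]: c["status"] for c in pod["status"].get("conditions", [])}
        if conds.get("Ready") != "True":
            bad.append(pod["metadata"]["name"])
    return bad

# Canned stand-in for real kubectl output
sample = json.dumps({"items": [
    {"metadata": {"name": "api-0"}, "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"metadata": {"name": "api-1"}, "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
]})
print(not_ready_pods(sample))  # ['api-1']
```

The same pattern (parse structured CLI output, act on unhealthy resources) applies to Rancher's CLI and API as well.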


We are currently seeking skilled and motivated Senior Java Developers to join our dynamic and innovative development team. As a Senior Java Developer, you will be responsible for designing, developing, and maintaining high-performance, scalable Java applications.
Join DataCaliper and step into the vanguard of technological advancement, where your proficiency will shape the landscape of data management and drive businesses toward unparalleled success.
Please find below our job description, if interested apply / reply sharing your profile to connect and discuss.
Company: Data caliper
Work location: Coimbatore
Experience: 3+ years
Joining time: Immediate – 4 weeks
Required skills:
- Good experience in Java/J2EE programming frameworks like Spring (Spring MVC, Spring Security, Spring JPA, Spring Boot, Spring Batch, Spring AOP).
- Deep knowledge of developing enterprise web applications using Java Spring
- Good experience in REST web services.
- Understanding of DevOps processes like CI/CD
- Exposure to Maven, Jenkins, Git, data formats (JSON/XML), Quartz, log4j, logback
- Good experience in database technologies / SQL / PL/SQL or any database experience
- The candidate should have excellent communication skills with an ability to interact with non-technical stakeholders as well.
Thank you

We are looking for multiple hands-on software engineers to handle CI/CD build and packaging engineering to facilitate RtBrick Full Stack (RBFS) software packages for deployment on various hardware platforms. You will be part of a high-performance team responsible for platform and infrastructure.
Requirements
1. About 2-6 years of industry experience in Linux administration with an emphasis on automation
2. Experience with CI/CD tooling framework and cloud deployments
3. Experience with software development tools like Git, GitLab, Jenkins, CMake, GNU build tools & Ansible
4. Proficient in Python and shell scripting; experience with Go is excellent to have
5. Experience with Linux APT package management, web servers, optionally Open Network Linux (ONL), and infrastructure such as boot, PXE, IPMI, and APC
6. Experience with Open Network Linux (ONL) is highly desirable; SONiC build experience will be a plus
Responsibilities
CI/CD- Packaging
Knowledge of compilation, packaging and repository usage in various flavors of Linux.
Expertise in Linux system administration and internals is essential, along with the ability to build custom images for container and virtual-machine environments, modify bootloaders, reduce image size, and optimize containers for low power consumption.
Linux Administration
Install and configure Linux systems, including back-end databases and scripts; perform system maintenance by reviewing error logs; create system backups; and build Linux modules and packages for software deployment. In the near future, build packages for Open Network Linux and SONiC distributions.
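The package/repository side of this work can be illustrated with a small sketch that computes the size and SHA256 that an APT `Packages` index records for a .deb. The `packages_stanza` helper and its field set are simplified assumptions (real indexes carry more fields):

```python
import hashlib
import os
import tempfile

def packages_stanza(pkg_path, name, version, arch="amd64"):
    """Emit a minimal APT Packages-index stanza for one .deb (illustrative fields only)."""
    with open(pkg_path, "rb") as fh:
        data = fh.read()
    return "\n".join([
        f"Package: {name}",
        f"Version: {version}",
        f"Architecture: {arch}",
        f"Filename: pool/main/{os.path.basename(pkg_path)}",
        f"Size: {len(data)}",
        f"SHA256: {hashlib.sha256(data).hexdigest()}",
    ])

# Stand-in file for a real .deb so the sketch runs anywhere
with tempfile.NamedTemporaryFile(suffix=".deb", delete=False) as fh:
    fh.write(b"fake package contents")
stanza = packages_stanza(fh.name, "rbfs-core", "1.2.3")
print(stanza)
```

In a real repository these stanzas are concatenated into `Packages`, compressed, and referenced from a signed `Release` file.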
1. RedHat OpenShift (L2/L3 Expertise)
1. Setup OpenShift Ingress Controller (And Deploy Multiple Ingress)
2. Setup OpenShift Image Registry
3. Very good knowledge of the OpenShift Management Console to help application teams manage their pods and troubleshoot.
4. Expertise in deployment of artifacts to OpenShift cluster and configure customized scaling capabilities
5. Knowledge of pod logging in an OpenShift cluster for troubleshooting.
2. Architect:
- Suggestions on architecture setup
- Validate the architecture and advise on pros, cons, and feasibility.
- Management of a multi-location sharded architecture
- Multi-region sharding setup
3. Application DBA:
- Validate and help with Sharding decisions at collection level
- Providing deep analysis on performance by looking at execution plans
- Index Suggestions
- Archival Suggestions and Options
4. Collaboration
Ability to plan and delegate work by providing specific instructions.
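One of the sharding decisions weighed above is hashed sharding, which keeps monotonically increasing keys (like order IDs) from piling onto one shard. MongoDB uses its own internal hash, so the `md5`-based `shard_for` helper below is purely illustrative of the idea:

```python
import hashlib

def shard_for(shard_key: str, n_shards: int) -> int:
    """Hashed-sharding sketch: map a shard-key value to a shard by hashing it.
    MongoDB computes its own hash internally; md5 here only illustrates the idea."""
    digest = hashlib.md5(shard_key.encode()).hexdigest()
    return int(digest, 16) % n_shards

# Sequential keys still spread across shards instead of all landing on the last one
placement = {k: shard_for(k, 4) for k in ("order-1001", "order-1002", "order-1003")}
print(placement)
```

The trade-off, which an application DBA would flag, is that hashed keys sacrifice efficient range queries on the shard key.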


Scoutflo is a platform that automates complex Kubernetes infrastructure requirements.
Job Description:
- In-depth knowledge of full-stack development principles and best practices.
- Expertise in building web applications with strong proficiency in Node.js, React, and Go.
- Experience developing and consuming RESTful & gRPC API Protocols.
- Familiarity with CI/CD workflows and DevOps processes.
- Solid understanding of cloud platforms and container orchestration technologies.
- Experience with Kubernetes pipelines and workflows using tools like Argo CD.
- Experience with designing and building user-friendly interfaces.
- Excellent understanding of distributed systems, databases, and APIs.
- A passion for writing clean, maintainable, and well-documented code.
- Strong problem-solving skills and the ability to work independently as well as collaboratively.
- Excellent communication and interpersonal skills.
- Experience with building self-serve platforms or user onboarding experiences.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform.
- A strong understanding of security best practices for Kubernetes deployments.
- Grasp on setting up Network Architecture for distributed systems.
Must have:
1) Experience with managing Infrastructure on AWS/GCP or Azure
2) Managed Infrastructure on Kubernetes
Job Title: DevOps + Java Engineer
Location: Bangalore
Mode of work: Hybrid (3 days work from office)
Job Summary: We are looking for a skilled Java + DevOps Engineer to help enhance and maintain our infrastructure and applications. The ideal candidate will have a strong background in Java development combined with expertise in DevOps practices, ensuring seamless integration and deployment of software solutions. You will collaborate with cross-functional teams to design, develop, and deploy robust and scalable solutions.
Key Responsibilities:
- Develop and maintain Java-based applications and microservices.
- Implement CI/CD pipelines to automate the deployment process.
- Design and deploy monitoring, logging, and alerting systems.
- Manage cloud infrastructure using tools such as AWS, Azure, or GCP.
- Ensure security best practices are followed throughout all stages of development and deployment.
- Troubleshoot and resolve issues in development, test, and production environments.
- Collaborate with software engineers, QA analysts, and product teams to deliver high-quality solutions.
- Stay current with industry trends and best practices in Java development and DevOps.
Required Skills and Experience:
- Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent work experience).
- Proficient in Java programming language and frameworks (Spring, Hibernate, etc.).
- Strong understanding of DevOps principles and experience with DevOps tools (e.g., Jenkins, Git, Docker, Kubernetes).
- Knowledge of containerization and orchestration technologies (Docker, Kubernetes).
- Familiarity with monitoring and logging tools (ELK stack, Prometheus, Grafana).
- Solid understanding of CI/CD pipelines and automated testing frameworks.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.



Responsibilities:
- Design, develop, and implement robust and efficient backend services using microservices architecture principles.
- Write clean, maintainable, and well-documented code using C# and the .NET framework.
- Develop and implement data access layers using Entity Framework.
- Utilize Azure DevOps for version control, continuous integration, and continuous delivery (CI/CD) pipelines.
- Design and manage databases on Azure SQL.
- Perform code reviews and participate in pair programming to ensure code quality.
- Troubleshoot and debug complex backend issues.
- Optimize backend performance and scalability to ensure a smooth user experience.
- Stay up-to-date with the latest advancements in backend technologies and cloud platforms.
- Collaborate effectively with frontend developers, product managers, and other stakeholders.
- Clearly communicate technical concepts to both technical and non-technical audiences.
Qualifications:
- Strong understanding of microservices architecture principles and best practices.
- In-depth knowledge of C# programming language and the .NET framework (ASP.NET MVC/Core, Web API).
- Experience working with Entity Framework for data access.
- Proficiency with Azure DevOps for CI/CD pipelines and version control (Git).
- Experience with Azure SQL for database design and management.
- Experience with unit testing and integration testing methodologies.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong written and verbal communication skills.
- A passion for building high-quality, scalable, and secure software applications.
Position: SRE/ DevOps
Experience: 6-10 Years
Location: Bengaluru/Mangalore
CodeCraft Technologies is a multi-award-winning creative engineering company offering design and technology solutions on mobile, web and cloud platforms.
We are seeking a highly skilled and motivated Site Reliability Engineer (SRE) to join our dynamic team. As an SRE, you will play a crucial role in ensuring the reliability, availability, and performance of our systems and applications. You will work closely with the development team to build and maintain scalable infrastructure, implement best practices in CI/CD, and contribute to the overall stability of our technology stack.
Roles and Responsibilities:
· CI/CD and DevOps:
o Implement and maintain robust Continuous Integration/Continuous Deployment (CI/CD) pipelines to ensure efficient and reliable software delivery.
o Collaborate with development teams to integrate DevOps principles into the software development lifecycle.
o Experience with pipelines such as GitHub Actions, GitLab, Azure DevOps, and CircleCI is a plus.
· Test Automation:
o Develop and maintain automated testing frameworks to validate system functionality, performance, and reliability.
o Collaborate with QA teams to enhance test coverage and improve overall testing efficiency.
· Logging/Monitoring:
o Design, implement, and manage logging and monitoring solutions to proactively identify and address potential issues.
o Respond to incidents and alerts to ensure system uptime and performance.
· Infrastructure as Code (IaC):
o Utilize Terraform (or other tools) to define and manage infrastructure as code, ensuring scalability, security, and consistency across environments.
· Elastic Stack:
o Implement and manage Elastic Stack (ELK) for log and data analysis to gain insights into system performance and troubleshoot issues effectively.
· Cloud Platforms:
o Work with cloud platforms such as AWS, GCP, and Azure to deploy and manage scalable and resilient infrastructure.
o Optimize cloud resources for cost efficiency and performance.
· Vulnerability Management:
o Conduct regular vulnerability assessments and implement measures to address and remediate identified vulnerabilities.
o Collaborate with security teams to ensure a robust security posture.
· Security Assessment:
o Perform security assessments and audits to identify and address potential security risks.
o Implement security best practices and stay current with industry trends and emerging threats.
o Experience with tools such as GCP Security Command Center and AWS Security Hub is a plus.
· Third-Party Hardware Providers:
o Collaborate with third-party hardware providers to integrate and support hardware components within the infrastructure.
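The logging/monitoring duties above ultimately reduce to aggregating signals out of log streams before shipping them to a stack like ELK. A minimal sketch; the `error_counts` helper and the `service level message` record layout are assumptions for illustration:

```python
from collections import Counter

def error_counts(log_lines, level="ERROR"):
    """Count matching-level lines per service from 'service LEVEL message' records."""
    counts = Counter()
    for line in log_lines:
        service, lvl, _message = line.split(" ", 2)
        if lvl == level:
            counts[service] += 1
    return counts

logs = [
    "checkout ERROR payment gateway timeout",
    "checkout INFO order placed",
    "search ERROR index unavailable",
    "checkout ERROR payment gateway timeout",
]
print(error_counts(logs))  # Counter({'checkout': 2, 'search': 1})
```

An alerting rule would then fire when a service's count over a window crosses a threshold, which is the same shape of logic an ELK watcher or Prometheus rule encodes.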
Desired Profile:
· The candidate should be willing to work in the EST time zone, i.e. from 6 PM to 2 AM.
· Excellent communication and interpersonal skills
· Bachelor’s Degree
· Certifications related to this field shall be an added advantage.
You will be responsible for:
- Managing all DevOps and infrastructure for Sizzle
- We have both cloud and on-premise servers
- Work closely with all AI and backend engineers on processing requirements and managing both development and production requirements
- Optimize the pipeline to ensure ultra fast processing
- Work closely with management team on infrastructure upgrades
You should have the following qualities:
- 3+ years of experience in DevOps, and CI/CD
- Deep experience in: Gitlab, Gitops, Ansible, Docker, Grafana, Prometheus
- Strong background in Linux system administration
- Deep expertise with AI/ML pipeline processing, especially GPU processing. This doesn't need to include model training, data gathering, etc.; we're looking more for experience with model deployment and inference tasks at scale
- Deep expertise in Python including multiprocessing / multithreaded applications
- Performance profiling including memory, CPU, GPU profiling
- Error handling and building robust scripts that will be expected to run for weeks to months at a time
- Deploying to production servers and monitoring and maintaining the scripts
- DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
- Expertise in Docker-based virtualization including - creating & maintaining custom Docker images, deployment of Docker images on cloud and on-premise services, monitoring of production Docker images with robust error handling
- Expertise in AWS infrastructure, networking, availability
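Scripts expected to run for weeks need deliberate error handling; one common building block is a retry wrapper with exponential backoff around flaky calls (DB writes, GPU job submission). A minimal sketch, with `with_retries` as a hypothetical helper name:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying with exponential backoff; re-raise after the final attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}
def flaky():
    # Simulates a transient failure that succeeds on the third try
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky))  # ok (succeeds on the third attempt)
```

A production version would narrow the caught exception types and log each failed attempt rather than retrying silently.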
Optional but beneficial to have:
- Experience with running Nvidia GPU / CUDA-based tasks
- Experience with image processing in Python (e.g., OpenCV, Pillow)
- Experience with PostgreSQL and MongoDB (Or SQL familiarity)
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor's or Master's degree in computer science or a related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
DevOps, Ansible, CI/CD, GitLab, GitOps, Docker, Python, AWS, GCP, Grafana, Prometheus, SQLAlchemy, Linux/Ubuntu system administration
Seniority: We are looking for a mid to senior level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 3 years to 6 years

Opportunity to work on Product Development
The Technical Project Manager is responsible for managing projects to make sure the proposed plan adheres to the timeline, budget, and scope. Their duties include planning projects in detail, setting schedules for all stakeholders, and executing each step of the project for our proprietary product, with some of the World’s biggest brands across the BFSI domain. The role is cross-functional and requires the individual to own and push through projects that touch upon business, operations, technology, marketing, and client experience.
• 5-7 years of experience in technical project management.
• Professional Project Management Certification from an accredited institution is mandatory.
• Proven experience overseeing all elements of the project/product lifecycle.
• Working knowledge of Agile and Waterfall methodologies.
• Prior experience in Fintech, Blockchain, and/or BFSI domain will be an added advantage.
• Demonstrated understanding of Project Management processes, strategies, and methods.
• Strong sense of personal accountability regarding decision-making and supervising department team.
• Collaborate with cross-functional teams and stakeholders to define project requirements and scope.
Key Responsibilities:
- Rewrite existing APIs in NodeJS.
- Remodel the APIs into Micro services-based architecture.
- Implement a caching layer wherever possible.
- Optimize the API for high performance and scalability.
- Write unit tests for API Testing.
- Automate the code testing and deployment process.
Skills Required:
- At least 2 years of experience developing Backends using NodeJS — should be well versed with its asynchronous nature & event loop, and know its quirks and workarounds.
- Excellent hands-on experience using MySQL or any other SQL Database.
- Good knowledge of MongoDB or any other NoSQL Database.
- Good knowledge of Redis, its data types, and their use cases.
- Experience with graph databases like Neo4j; familiarity with GraphQL is a plus.
- Experience developing and deploying REST APIs.
- Good knowledge of Unit Testing and available Test Frameworks.
- Good understanding of advanced JS libraries and frameworks.
- Experience with WebSockets, Service Workers, and Web Push Notifications.
- Familiar with NodeJS profiling tools.
- Proficient understanding of code versioning tools such as Git.
- Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms.
- Should be a fast learner and a go-getter, without any fear of trying out new things.
Preferences:
- Experience building a large-scale social or location-based app.
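The caching layer called for above would typically be Redis in a NodeJS stack; the core semantics can be sketched language-agnostically with a tiny in-process TTL cache (shown in Python for brevity; the `TTLCache` class is an illustrative stand-in, not a real library API):

```python
import time

class TTLCache:
    """Minimal in-process TTL cache sketch; a production API layer would
    typically use Redis with the same set-with-expiry / get semantics."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        # Store the value together with its absolute expiry time
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]   # lazy eviction on read
            return None
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:42", {"name": "Asha"})
print(cache.get("user:42"))   # {'name': 'Asha'}
time.sleep(0.06)
print(cache.get("user:42"))   # None (entry expired)
```

The API handler then checks the cache before hitting MySQL/MongoDB and writes the result back on a miss, which is exactly the read-through pattern the responsibilities describe.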
Job description
Position: Data Engineer
Experience: 6+ years
Work Mode: Work from Office
Location: Bangalore
Please note: This position is focused on development rather than migration. Experience in NiFi or TIBCO is mandatory.
Mandatory Skills: ETL, DevOps platform, NiFi or TIBCO
We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will play a crucial role in developing and maintaining our data infrastructure and ensuring the smooth operation of our data platforms. The ideal candidate should have a strong background in advanced data engineering, scripting languages, cloud and big data technologies, ETL tools, and database structures.
Responsibilities:
• Utilize advanced data engineering techniques, including ETL (Extract, Transform, Load), SQL, and other advanced data manipulation techniques.
• Develop and maintain data-oriented scripting using languages such as Python.
• Create and manage data structures to ensure efficient and accurate data storage and retrieval.
• Work with cloud and big data technologies, specifically the AWS and Azure stacks, to process and analyze large volumes of data.
• Utilize ETL tools such as NiFi and TIBCO to extract, transform, and load data into various systems.
• Have hands-on experience with database structures, particularly MSSQL and Vertica, to optimize data storage and retrieval.
• Manage and maintain the operations of data platforms, ensuring data availability, reliability, and security.
• Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
• Stay up-to-date with the latest industry trends and advancements in data engineering and suggest improvements to enhance our data infrastructure.
Requirements:
• A minimum of 6 years of relevant experience as a Data Engineer.
• Proficiency in ETL, SQL, and other advanced data engineering techniques.
• Strong programming skills in scripting languages such as Python.
• Experience in creating and maintaining data structures for efficient data storage and retrieval.
• Familiarity with cloud and big data technologies, specifically the AWS and Azure stacks.
• Hands-on experience with ETL tools, particularly NiFi and TIBCO.
• In-depth knowledge of database structures, including MSSQL and Vertica.
• Proven experience in managing and operating data platforms.
• Strong problem-solving and analytical skills with the ability to handle complex data challenges.
• Excellent communication and collaboration skills to work effectively in a team environment.
• Self-motivated with a strong drive for learning and keeping up-to-date with the latest industry trends.
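The ETL work described above can be sketched as three tiny stages; the function names and the list-backed "warehouse" are illustrative stand-ins for NiFi/TIBCO flows and MSSQL/Vertica targets:

```python
def extract(rows):
    """Extract: in reality rows would come from NiFi/TIBCO flows or a source DB."""
    return list(rows)

def transform(rows):
    """Transform: drop incomplete rows and normalize the amount field to float."""
    return [
        {"id": r["id"], "amount": float(r["amount"])}
        for r in rows
        if r.get("id") and r.get("amount") not in (None, "")
    ]

def load(rows, sink):
    """Load: append to the sink (a plain list here; a warehouse table in practice)."""
    sink.extend(rows)
    return len(rows)

source = [
    {"id": "a1", "amount": "19.99"},
    {"id": None, "amount": "5"},     # rejected: missing id
    {"id": "a2", "amount": ""},      # rejected: empty amount
]
warehouse = []
loaded = load(transform(extract(source)), warehouse)
print(loaded, warehouse)  # 1 [{'id': 'a1', 'amount': 19.99}]
```

Real pipelines add the pieces the listing emphasizes around this skeleton: scheduling, rejected-row auditing, and idempotent loads into MSSQL/Vertica.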


About The Company
The client is a 17-year-old multinational company headquartered in Whitefield, Bangalore, with another delivery center in Hinjewadi, Pune. It also has offices in the US and Germany, works with several OEMs and product companies in about 12 countries, and has a 200+ strong team worldwide.
Join us as a Senior Software Engineer within our Web Application Development team, based out of Pune to deliver end-to-end customized application development.
We expect you to participate in and contribute to every stage of the project, right from interacting with internal customers/stakeholders, understanding their requirements, and proposing the solutions that best fit their expectations. As part of the local team you will have the chance to contribute to global project delivery, with the possibility of working on-site (Belgium) if required. You will be an important member of a highly motivated application development team leading the Microsoft technology stack, enabling the team members to deliver "first time right" applications.
Principal Duties and Responsibilities
• You will be responsible for the technical analysis of requirements and lead the project from Technical perspective
• You should be a problem solver and provide scalable and efficient technical solutions
• You guarantee an excellent and scalable application development in an estimated timeline
• You will interact with the customers/stakeholders and understand their requirements and propose the solutions
• You will work closely with the ‘Application Owner’ and carry the entire responsibility of end-to-end processes/development
• You will make technical & functional application documentation and release notes that will facilitate the aftercare of the application
Knowledge, Skills and Qualifications
• Education: Master’s degree in computer science or equivalent
• Experience: Minimum 5-10 years
Required Skills
• Strong working knowledge of C#, Angular 2+, SQL Server, ASP.Net Web API
• Good understanding of OOP, SOLID principles, and development practices
• Good understanding of DevOps, Git, CI/CD
• Experience with development of client and server-side applications
• Excellent English communication skills (written, oral), with good listening capabilities
• Exceptionally good technical, analytical, debugging, and problem-solving skills
• Has a reasonable balance between getting the job done vs technical debt
• Enjoys producing top quality code in a fast-moving environment
• Effective team player working in a team; willingness to put the needs of the team over their own
Preferred Skills
• Experience with product development for the Microsoft Azure platform
• Experience with product development life cycle would be a plus
• Experience with agile development methodology (Scrum)
• Functional analysis skills and experience (Use cases, UML) is an asset

About Apexon:
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. For over 17 years, Apexon has been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving our clients’ toughest technology problems, and a commitment to continuous improvement. We focus on three broad areas of digital services: User Experience (UI/UX, Commerce); Engineering (QE/Automation, Cloud, Product/Platform); and Data (Foundation, Analytics, and AI/ML), and have deep expertise in BFSI, healthcare, and life sciences.
Apexon is backed by Goldman Sachs Asset Management and Everstone Capital.
To know more about us please visit: https://www.apexon.com/
Responsibilities:
- C# automation engineer with 4-6 years of experience to join our engineering team and help us develop and maintain various software/utility products.
- Good object-oriented programming concepts and practical knowledge.
- Strong programming skills in C# are required.
- Good knowledge of C# Automation is preferred.
- Good to have experience with the Robot framework.
- Must have knowledge of APIs (REST) and databases (SQL), with the ability to write efficient queries.
- Good to have knowledge of Azure cloud.
- Take end-to-end ownership of test automation development, execution and delivery.
Good to have:
- Experience in tools like SharePoint, Azure DevOps
Other skills:
- Strong analytical & logical thinking skills. Ability to think and act rationally when faced with challenges.
Job Purpose :
Working with the Tech Data Sales Team, the Presales Consultant is responsible for providing presales technical support to the Sales team and presenting tailored demonstrations or qualification discussions to customers and/or prospects. The Presales Consultant also assists the Sales Team with qualifying opportunities in or out, and helps expand existing opportunities through solid questioning. The Presales Consultant will be responsible for conducting technical proofs of concept, demonstrations, and presentations on the supported products & solutions.
Responsibilities :
- Subject Matter Expert (SME) in the development of Microsoft Cloud Solutions (Compute, Storage, Containers, Automation, DevOps, Web applications, Power Apps etc.)
- Collaborate and align with business leads to understand their business requirement and growth initiatives to propose the required solutions for Cloud and Hybrid Cloud
- Work with other technology vendors, ISVs to build solutions use cases in the Center of Excellence based on sales demand (opportunities, emerging trends)
- Manage the APJ COE environment and Click-to-Run Solutions
- Provide solution proposal and pre-sales technical support for sales opportunities by identifying the requirements and design Hybrid Cloud solutions
- Create Solutions Play and blueprint to effectively explain and articulate solution use cases to internal TD Sales, Pre-sales and partners community
- Support in-country (APJ countries) Presales Team for any technical related enquiries
- Support Country's Product / Channel Sales Team in prospecting new opportunities in Cloud & Hybrid Cloud
- Provide technical and sales trainings to TD sales, pre-sales and partners.
- Lead & Conduct solution presentations and demonstrations
- Deliver presentations at Tech Data, Partner or Vendor led solutions events.
- Achieve relevant product certifications
- Conduct customer workshops that help accelerate sales opportunities
Knowledge, Skills and Experience :
- Bachelor's degree in information technology/Computer Science or equivalent experience certifications preferred
- Minimum of 7 years relevant working experience, ideally in IT multinational environment
- Track record on the assigned line cards experience is an added advantage
- IT Distributor and/or SI experience would also be an added advantage
- Good communication and problem-solving skills
- Proven ability to work independently, effectively in an off-site environment and under high pressure
What's In It For You?
- Elective Benefits: Our programs are tailored to your country to best accommodate your lifestyle.
- Grow Your Career: Accelerate your path to success (and keep up with the future) with formal programs on leadership and professional development, and many more on-demand courses.
- Elevate Your Personal Well-Being: Boost your financial, physical, and mental well-being through seminars, events, and our global Life Empowerment Assistance Program.
- Diversity, Equity & Inclusion: It's not just a phrase to us; valuing every voice is how we succeed. Join us in celebrating our global diversity through inclusive education, meaningful peer-to-peer conversations, and equitable growth and development opportunities.
- Make the Most of our Global Organization: Network with other new co-workers within your first 30 days through our onboarding program.
- Connect with Your Community: Participate in internal, peer-led inclusive communities and activities, including business resource groups, local volunteering events, and more environmental and social initiatives.
Don't meet every single requirement? Apply anyway.
At Tech Data, a TD SYNNEX Company, we're proud to be recognized as a great place to work and a leader in the promotion and practice of diversity, equity and inclusion. If you're excited about working for our company and believe you're a good fit for this role, we encourage you to apply. You may be exactly the person we're looking for!
The Key Responsibilities Include But Not Limited to:
Help identify and drive speed, performance, scalability, and reliability-related optimizations based on experience and learnings from production incidents.
Work in an agile DevSecOps environment in creating, maintaining, monitoring, and automation of the overall solution-deployment.
Understand and explain the effect of product architecture decisions on systems.
Identify issues and/or opportunities for improvements that are common across multiple services/teams.
This role will require weekend deployments.
Skills and Qualifications:
1. 3+ years of experience in a DevOps end-to-end development process with heavy focus on service monitoring and site reliability engineering work.
2. Advanced knowledge of programming/scripting languages (Bash, Perl, Python, Node.js).
3. Experience in Agile/Scrum enterprise-scale software development, including working with Git, JIRA, Confluence, etc.
4. Advanced experience with core microservice technologies (RESTful development).
5. Working knowledge of advanced AI/ML tools is a plus.
6. Working knowledge of one or more cloud services: Amazon AWS, Microsoft Azure.
7. Bachelor's or Master's degree in Computer Science, or equivalent related field experience.
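The service-monitoring and SRE focus above can be illustrated with a toy error-budget calculation, a staple of reliability work. This is a minimal sketch; the 99.9% SLO target and request counts are made-up numbers, not anything from the posting.

```python
# Illustrative SRE-style error-budget check (all numbers hypothetical).
def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Return the fraction of the error budget still unspent for a given SLO."""
    allowed_failures = (1.0 - slo) * total_requests  # budget expressed in requests
    if allowed_failures == 0:
        return 0.0  # a 100% SLO leaves no budget at all
    return max(0.0, 1.0 - failed_requests / allowed_failures)

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 250 failures leaves 75% of the budget unspent.
remaining = error_budget_remaining(0.999, 1_000_000, 250)
print(f"{remaining:.0%}")  # 75%
```

Teams typically gate deployments or page on-call engineers when the remaining budget drops below a threshold.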
Key Behaviours / Attitudes:
Professional curiosity and a desire to develop a deep understanding of services and technologies.
Experience building and running systems to drive high availability, performance, and operational improvements.
Excellent written and oral communication skills: the ability to ask pertinent questions and to assess, aggregate, and report the responses.
Ability to quickly grasp and analyze complex and rapidly changing systems.
Soft skills:
1. Self-motivated and self-managing.
2. Excellent communication, follow-up, and time management skills.
3. Ability to fulfill role/duties independently within defined policies and procedures.
4. Ability to multi-task and balance multiple priorities while maintaining a high level of customer satisfaction.
5. Be able to work in an interrupt-driven environment.

Work with Dori Ai's world-class technology to develop, implement, and support Dori's global infrastructure.
As a member of the IT organization, assist with the analysis of existing complex programs and formulate logic for new complex internal systems. Prepare flowcharts, perform coding, and test/debug programs. Develop conversion and system implementation plans. Recommend changes to development, maintenance, and system standards.
A leading contributor individually and as a team member, providing direction and mentoring to others. Work is non-routine and very complex, involving the application of advanced technical/business skills in a specialized area. BS or equivalent experience in programming on enterprise or department servers or systems.

at CodeCraft Technologies Private Limited

Roles and Responsibilities:
• Gather and analyse cloud infrastructure requirements
• Automate system tasks and infrastructure using a scripting language (Shell/Python/Ruby preferred), with configuration management tools (Ansible/Puppet/Chef), service registry and discovery tools (Consul, Vault, etc.), infrastructure orchestration tools (Terraform, CloudFormation), and automated imaging tools (Packer)
• Support existing infrastructure, analyse problem areas, and come up with solutions
• An eye for monitoring: the ability to look at complex infrastructure and figure out what to monitor and how
• Work with the Engineering team on infrastructure/network automation needs
• Deploy infrastructure as code and automate as much as possible
• Manage a team of DevOps engineers
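As a flavour of the scripting-for-automation work described above, here is a minimal sketch of an Ansible-style dynamic inventory script, the glue that lets configuration management discover hosts programmatically. The group names and hostnames are entirely hypothetical; a real script would query a cloud API or Consul rather than return a hard-coded dict.

```python
#!/usr/bin/env python3
"""Minimal Ansible-style dynamic inventory sketch (hosts are hypothetical)."""
import json

def build_inventory() -> dict:
    # In a real setup these hosts would come from a cloud API or a service
    # registry such as Consul, not a hard-coded dictionary.
    return {
        "web": {"hosts": ["web-01.example.internal", "web-02.example.internal"]},
        "db": {"hosts": ["db-01.example.internal"]},
        "_meta": {"hostvars": {"db-01.example.internal": {"backup_window": "02:00"}}},
    }

if __name__ == "__main__":
    # Ansible invokes inventory scripts with --list and reads JSON from stdout.
    print(json.dumps(build_inventory(), indent=2))
```

The `_meta.hostvars` section lets Ansible attach per-host variables without a second round-trip per host.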
Desired Profile:
• Understanding of provisioning of Bare Metal and Virtual Machines
• Working knowledge of configuration management tools like Ansible/Chef/Puppet and Redfish
• Experience in scripting languages like Ruby, Python, and shell scripting
• Working knowledge of IP networking, VPNs, DNS, load balancing, firewalling, and IPS concepts
• Strong Linux/Unix administration skills
• Self-starter who can implement with minimal guidance
• Hands-on experience setting up CI/CD pipelines from scratch in Jenkins
• Experience managing Kubernetes (K8s) infrastructure
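To illustrate the kind of CI/CD and Kubernetes work listed above, here is a sketch of a manifest sanity check that a Jenkins pipeline stage might run before `kubectl apply`. The field names follow the Kubernetes Deployment schema, but the check itself and the example manifest are made up for illustration.

```python
"""Sketch of a pre-deploy manifest sanity check for a CI pipeline stage.
The validation rules and example manifest are illustrative, not a real policy."""

REQUIRED_TOP_LEVEL = ("apiVersion", "kind", "metadata", "spec")

def validate_deployment(manifest: dict) -> list[str]:
    """Return a list of problems; an empty list means the manifest passed."""
    problems = [f"missing field: {f}" for f in REQUIRED_TOP_LEVEL if f not in manifest]
    if manifest.get("kind") == "Deployment":
        # Catch a common mistake before it reaches the cluster.
        if manifest.get("spec", {}).get("replicas", 0) < 1:
            problems.append("spec.replicas must be >= 1")
    return problems

example = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-api"},
    "spec": {"replicas": 2},
}
print(validate_deployment(example))  # []
```

A pipeline would fail the build when the returned list is non-empty, keeping obviously broken manifests out of the cluster.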
- Public clouds, such as AWS, Azure, or Google Cloud Platform
- Automation technologies, such as Kubernetes or Jenkins
- Configuration management tools, such as Puppet or Chef
- Scripting languages, such as Python or Ruby