50+ DevOps Jobs in Bangalore (Bengaluru) | DevOps Job openings in Bangalore (Bengaluru)
Apply to 50+ DevOps Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest DevOps Job opportunities across top companies like Google, Amazon & Adobe.

We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls, and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams, including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 3-6 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.


Role Summary :
We are seeking a skilled and detail-oriented SRE Release Engineer to lead and streamline the CI/CD pipeline for our C and Python codebase. You will be responsible for coordinating, automating, and validating biweekly production releases, ensuring operational stability, high deployment velocity, and system reliability.
Key Responsibilities :
● Own the release process: Plan, coordinate, and execute biweekly software releases across multiple services.
● Automate release pipelines: Build and maintain CI/CD workflows using tools such as GitHub Actions, Jenkins, or GitLab CI.
● Version control: Manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
● Integrate testing frameworks: Ensure automated test coverage (unit, integration, regression) is enforced pre-release.
● Release validation: Develop pre-release verification tools/scripts to validate build integrity and backward compatibility (see the sketch after this list).
● Deployment strategy: Implement and refine blue/green, rolling, or canary deployments in staging and production environments.
● Incident readiness: Partner with SREs to ensure rollback strategies, monitoring, and alerting are release-aware.
● Collaboration: Work closely with developers, QA, and product teams to align on release timelines and feature readiness.
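As an illustration, here is a minimal sketch of the kind of pre-release verification scripting described above. It assumes a hypothetical artifact layout (an archive plus a recorded SHA-256 checksum file) and vMAJOR.MINOR.PATCH release tags; it is not tied to any particular release process.

```python
# Minimal pre-release validation sketch: check the tag format and the artifact checksum.
# The artifact/checksum paths and the tag convention are hypothetical assumptions.
import hashlib
import pathlib
import re
import sys

TAG_PATTERN = re.compile(r"^v\d+\.\d+\.\d+$")

def verify_tag(tag: str) -> bool:
    """Reject release tags that do not follow the vMAJOR.MINOR.PATCH convention."""
    return bool(TAG_PATTERN.match(tag))

def verify_checksum(artifact: pathlib.Path, checksum_file: pathlib.Path) -> bool:
    """Compare the artifact's SHA-256 digest against the recorded checksum."""
    expected = checksum_file.read_text().split()[0]
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
    return expected == actual

if __name__ == "__main__":
    tag, artifact, checksum = sys.argv[1:4]
    ok = verify_tag(tag) and verify_checksum(pathlib.Path(artifact), pathlib.Path(checksum))
    sys.exit(0 if ok else 1)  # non-zero exit blocks the release step in the pipeline
```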
Required Qualifications
● Bachelor’s degree in Computer Science, Engineering, or a related field.
● 3+ years in SRE, DevOps, or release engineering roles.
● Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
● Experience automating deployments for C and Python applications.
● Strong understanding of Git version control, merge/rebase strategies, tagging, and submodules (if used).
● Familiarity with containerization (Docker) and deployment orchestration (e.g., Kubernetes, Ansible, or Terraform).
● Solid scripting experience (Python, Bash, or similar).
● Understanding of observability, monitoring, and incident response tooling (e.g., Prometheus, Grafana, ELK, Sentry).
Preferred Skills
● Experience with release coordination in data networking environments.
● Familiarity with build tools like Make, CMake, or Bazel.
● Exposure to artifact management systems (e.g., Artifactory, Nexus).
● Experience deploying to Linux production systems with service uptime guarantees.



Key Responsibilities
🖥️ Frontend (Angular)
- Develop responsive, accessible web UI for assessment tools, educator dashboards, and parent reports.
- Implement dynamic form builders and scoring interfaces for multiple domains (reading, writing, math).
- Build PDF report previews, filtering tools, and real-time updates.
🔙 Backend (Python/FastAPI)
- Design and implement APIs for user management, assessments, intervention logs, and reports.
- Integrate scoring logic and data aggregation for AI insights.
- Implement secure role-based access (admin, educator, parent).
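A short, hedged sketch of the role-based access noted above, using FastAPI dependencies. The get_current_user stub and the admin/educator/parent roles are hypothetical placeholders for a real authentication layer.

```python
# Minimal role-based access sketch with FastAPI; the user-resolution logic is a stub.
from fastapi import Depends, FastAPI, HTTPException

app = FastAPI()

def get_current_user() -> dict:
    # Placeholder: a real system would decode a JWT or session token here.
    return {"id": 1, "role": "educator"}

def require_role(*allowed: str):
    """Build a dependency that rejects users whose role is not in `allowed`."""
    def checker(user: dict = Depends(get_current_user)) -> dict:
        if user["role"] not in allowed:
            raise HTTPException(status_code=403, detail="Insufficient role")
        return user
    return checker

@app.get("/reports/{student_id}")
def read_report(student_id: int, user: dict = Depends(require_role("admin", "educator"))):
    # Parents would be granted a separate, narrower endpoint in this sketch.
    return {"student_id": student_id, "requested_by": user["id"]}
```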
🛠️ DevOps / Systems
- Maintain scalable architecture with PostgreSQL / MongoDB, Docker, and deployment pipelines.
- Work with AI engineer to connect ML models to backend for prediction and recommendation layers.
- Optimize performance and ensure data security (GDPR-aligned).
Key Responsibilities:
Cloud Management:
- Manage and troubleshoot Linux environments.
- Create and manage Linux users on EC2 instances.
- Handle AWS services, including ECR, EKS, EC2, SNS, SES, S3, RDS, Lambda, DocumentDB, IAM, ECS, EventBridge, ALB, and SageMaker.
- Perform start/stop operations for SageMaker and EC2 instances (see the sketch after this list).
- Solve IAM permission issues.
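By way of example, a small boto3 sketch of the start/stop operations listed above. The instance ID and notebook name are hypothetical; region and credentials are assumed to come from the environment.

```python
# Minimal sketch of routine stop automation with boto3 (e.g. to save cost out of hours).
import boto3

ec2 = boto3.client("ec2")
sagemaker = boto3.client("sagemaker")

def stop_ec2(instance_ids: list[str]) -> None:
    """Stop the given EC2 instances."""
    ec2.stop_instances(InstanceIds=instance_ids)

def stop_notebook(notebook_name: str) -> None:
    """Stop a SageMaker notebook instance by name."""
    sagemaker.stop_notebook_instance(NotebookInstanceName=notebook_name)

if __name__ == "__main__":
    stop_ec2(["i-0123456789abcdef0"])   # hypothetical instance ID
    stop_notebook("analytics-notebook")  # hypothetical notebook name
```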
Containerization and Deployment:
- Create and manage ECS services.
- Implement IP whitelisting for enhanced security.
- Configure target mapping for load balancers and manage Glue jobs.
- Create load balancers (as needed).
CI/CD Setup:
- Set up and maintain CI/CD pipelines using AWS CodeCommit, CodeBuild, CodeDeploy, and CodePipeline.
Database Management:
- Manage PostgreSQL RDS instances, ensuring optimal performance and security.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum 3.5 years of experience in AWS and DevOps.
- Strong experience with Linux administration.
- Proficient in AWS services, particularly ECR, EKS, EC2, SNS, SES, S3, RDS, DocumentDB, IAM, ECS, EventBridge, ALB, and SageMaker.
- Experience with CI/CD tools (AWS CodeCommit, CodeBuild, CodeDeploy, CodePipeline).
- Familiarity with PostgreSQL and database management.
Required Skills:
- Experience in a systems administration, SRE, or DevOps-focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in leveraging automation/DevOps principles, experience with operational tools, and the ability to apply best practices for infrastructure and software deployment (Ansible).
- HAProxy, Nginx, SSH, MySQL configuration and operation experience
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation


Full Stack Developer – SaaS Platforms (Ruby on Rails, React.js, Next.js, AWS)
Location: Office
Experience Level: 5+ Years
Employment Type: Full-Time
✨ About the Role
We are looking for a skilled and passionate Full Stack Developer to join our SaaS Product Engineering team. You will work across backend and frontend technologies to build, optimize, and scale multiple SaaS products in a dynamic environment.
If you are excited by clean code, modern cloud-native practices, and the chance to contribute to impactful products from the ground up, we would love to meet you!
🔥 Key Responsibilities
- Develop and maintain scalable SaaS-based platforms using Ruby on Rails (backend) and React.js / Next.js (frontend).
- Build RESTful APIs and integrate third-party services as needed.
- Collaborate with Product Managers, Designers, and QA teams to deliver high-quality product features for multiple projects.
- Write clean, secure, maintainable, and efficient code following best practices.
- Optimize applications for performance, scalability, maintainability, and security.
- Participate actively in code reviews, sprint planning, and team discussions.
- Support DevOps practices including CI/CD pipelines and cloud deployments on AWS.
- Make technical and architectural decisions for the products.
- Continuously research and learn new technologies to enhance product performance.
🛠️ Required Skills and Experience
- 3–6 years of hands-on software engineering experience, preferably with SaaS platforms.
- Strong Full Stack Development Skills:
- Backend: Ruby on Rails (6+ preferred)
- Frontend: React.js, Next.js (static generation and server-side rendering)
- Database: PostgreSQL, MongoDB, Redis
- Experience deploying applications to AWS cloud environment.
- Good understanding of APIs (RESTful and/or GraphQL) and third-party integrations.
- Familiarity with Docker and CI/CD pipelines (GitHub Actions, GitLab CI, etc.).
- Knowledge of security principles (OAuth2, API security best practices).
- Familiarity with Agile development methodologies (Scrum, Kanban).
- Experience leading or managing a team.
- Basic understanding of test-driven development (RSpec, Jest or similar frameworks).
🎯 Preferred (Nice-to-Have)
- Exposure to AWS Lightsail, EC2, or Lambda.
- Experience with SaaS multi-tenant system design.
- Experience with third-party integrations such as payment applications.
- Previous work experience in startups or high-growth product companies.
- Basic knowledge of performance tuning and system optimization.
👤 Who You Are
- A problem solver with strong technical fundamentals.
- A self-motivated learner who enjoys working in collaborative environments.
- Someone who takes ownership and accountability for deliverables.
- A team player willing to mentor junior developers and contribute to team goals.
📈 What We Offer
- Opportunity to work on innovative, impactful SaaS products.
- A collaborative and transparent work culture.
- Growth and learning opportunities across technologies and domains.
- Competitive compensation and benefits.
We're on the hunt for a Backend Developer who not only writes clean, efficient code but also thinks in systems and structures. If you enjoy crafting microservices, solving real-world problems using solid design principles, and love optimizing performance — this one’s for you!
🧠 Responsibilities:
- Design, develop, and maintain scalable and high-performance backend services
- Build and manage RESTful APIs and microservices
- Architect and implement Low-Level Design (LLD) for core backend features
- Apply Data Structures and Algorithms (DSA) to write optimal, scalable solutions
- Collaborate with frontend and product teams to integrate user-facing elements
- Ensure code quality through reviews, unit tests, and automation
- Optimize applications for speed, performance, and scalability
- Troubleshoot, debug, and upgrade existing systems
🛠️ Required Skills:
- 2+ years of experience in Java / Python / Node.js / GoLang
- Strong knowledge of Object-Oriented Programming (OOP) and Design Patterns
- Good grasp of Low-Level Design (LLD) and System Design fundamentals
- Proficient in Data Structures and Algorithms (DSA) — must know how to use them, not just define them 😎 (see the sketch after this list)
- Experience with REST APIs and Microservices Architecture
- Good understanding of SQL and/or NoSQL Databases (e.g., MySQL, MongoDB, PostgreSQL)
- Familiarity with version control systems like Git
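A small example of applying a data structure in practice rather than just defining it: using a heap to pull the K slowest endpoints from latency samples. The endpoint names and numbers are made up for illustration.

```python
# Top-K selection with a heap: O(n log k) instead of sorting everything.
import heapq

def slowest_endpoints(latencies_ms: dict[str, float], k: int) -> list[tuple[str, float]]:
    """Return the k endpoints with the highest latency, sorted descending."""
    return heapq.nlargest(k, latencies_ms.items(), key=lambda item: item[1])

if __name__ == "__main__":
    samples = {"/login": 120.5, "/search": 340.2, "/checkout": 95.1, "/profile": 210.0}
    print(slowest_endpoints(samples, 2))  # [('/search', 340.2), ('/profile', 210.0)]
```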
⭐ Nice-to-Haves:
- Experience with cloud platforms (AWS, GCP, Azure)
- Familiarity with Docker, Kubernetes, or container orchestration
- Exposure to CI/CD pipelines
Location: Bangalore Indiranagar
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
Key Responsibilities:
- Lead the development and delivery of high-quality, scalable web/mobile applications.
- Build, manage, and mentor a team of developers across backend, frontend, and DevOps.
- Collaborate with cross-functional teams including Product, Design, and QA to ship fast and effectively.
- Integrate third-party APIs and financial systems (e.g., payment gateways, fraud detection, etc.).
- Troubleshoot production issues, optimize performance, and implement robust logging & monitoring.
- Define and enforce best practices in planning, coding, architecture, and agile development.
- Identify and implement the right tools, frameworks, and technologies.
- Own the system architecture and make key decisions on performance, security, and scalability.
- Continuously monitor tech performance and drive improvements.
Requirements:
- 8+ years of software development experience, with at least 2+ years in a leadership role.
- Proven track record in managing Development teams and delivering consumer-facing products.
- Strong knowledge of backend technologies (Node.js, Java, Python, etc.) and frontend frameworks (React, Angular, etc.).
- Experience in cloud infrastructure (AWS/GCP), CI/CD pipelines, and containerization (Docker, Kubernetes).
- Deep understanding of system design, REST APIs, microservices architecture, and database management.
- Excellent communication and stakeholder management skills.
- Experience with e-commerce applications (app or web) or payment transaction environments is a plus.
📍 Position : Java Architect
📅 Experience : 10 to 15 Years
🧑💼 Open Positions : 3+
📍 Work Location : Bangalore, Pune, Chennai
💼 Work Mode : Hybrid
📅 Notice Period : Immediate joiners preferred; up to 1 month maximum
🔧 Core Responsibilities :
- Lead architecture design and development for scalable enterprise-level applications.
- Own and manage all aspects of technical development and delivery.
- Define and enforce best coding practices, architectural guidelines, and development standards.
- Plan and estimate the end-to-end technical scope of projects.
- Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
- Mentor and lead individual contributors and small development teams.
- Collaborate with cross-functional teams, including DevOps, Product, and QA.
- Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.
🛠️ Required Technical Skills :
- Strong hands-on expertise in Java, Spring Boot, Microservices architecture
- Experience with Kafka or similar messaging/event streaming platforms
- Proficiency in cloud platforms – AWS and Azure (must-have)
- Exposure to frontend technologies (nice-to-have)
- Solid understanding of HLD, system architecture, and design patterns
- Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
- Agile/Lean development, Pair Programming, and Continuous Integration practices
- Polyglot mindset is a plus (Scala, Golang, Python, etc.)
🚀 Ideal Candidate Profile :
- Currently working in a product-based environment
- Already functioning as an Architect or Principal Engineer
- Proven track record as an Individual Contributor (IC)
- Strong engineering fundamentals with a passion for scalable software systems
- No compromise on code quality, craftsmanship, and best practices
🧪 Interview Process :
- Round 1: Technical pairing round
- Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
- Final Round: HR and offer discussion
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags (see the sketch after this list).
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
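One possible shape for the canary checks implied by the deployment strategies above, sketched minimally. The Prometheus endpoint, metric name, and error budget are hypothetical; a non-zero exit is assumed to trigger a rollback step in the pipeline.

```python
# Minimal canary health gate: query Prometheus for the canary's error rate and
# fail the pipeline step if it exceeds the budget. Endpoint and metric are assumptions.
import sys
import requests

PROMETHEUS_URL = "http://prometheus.internal:9090"  # hypothetical endpoint
QUERY = (
    'sum(rate(http_requests_total{deployment="canary",status=~"5.."}[5m])) / '
    'sum(rate(http_requests_total{deployment="canary"}[5m]))'
)
ERROR_BUDGET = 0.01  # allow at most 1% errors before rolling back

def canary_error_rate() -> float:
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return float(results[0]["value"][1]) if results else 0.0

if __name__ == "__main__":
    rate = canary_error_rate()
    print(f"canary error rate: {rate:.4%}")
    sys.exit(0 if rate <= ERROR_BUDGET else 1)  # non-zero exit triggers rollback upstream
```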
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform and Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (cloud cost optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.

Location: Malleshwaram/MG Road
Work: Initially Onsite and later Hybrid
We are committed to becoming a true DevOps house and want your help. The role will require close liaison with development and test teams to increase the effectiveness of current dev processes. Participation in an out-of-hours emergency support rota will be required. You will be shaping the way that we use our DevOps tools and innovating to deliver business value and improve the cadence of the entire dev team.
Required Skills:
• Good knowledge of the Amazon Web Services suite (EC2, ECS, Load Balancing, VPC, S3, RDS, Lambda, CloudWatch, IAM, etc.)
• Hands-on knowledge of container orchestration tools – must have: AWS ECS; good to have: AWS EKS
• Good knowledge of creating and maintaining infrastructure as code using Terraform
• Solid experience with CI/CD tools like Jenkins, Git, and Ansible
• Working experience supporting microservices (deploying, maintaining, and monitoring Java web-based production applications using Docker containers)
• Strong knowledge of debugging production issues across services and the technology stack, and of application monitoring (we use Splunk & CloudWatch)
• Experience with software build tools (Maven and Node)
• Experience with scripting and automation languages (Bash, Groovy, JavaScript, Python)
• Experience with Linux administration and CVE scanning (Amazon Linux, Ubuntu)
• 4+ years as an AWS DevOps Engineer
Optional skills:
• Oracle/SQL database maintenance experience
• Elastic Search
• Serverless/container-based approaches
• Automated testing of infrastructure deployments
• Experience of performance testing & JVM tuning
• Experience of a high-volume distributed eCommerce environment
• Experience working closely with Agile development teams
• Familiarity with load testing tools & process
• Experience with nginx, tomcat and apache
• Experience with Cloudflare
Personal attributes
The successful candidate will be comfortable working autonomously and independently. They will be keen to bring the entire team to the next level of delivering business value. A proactive approach to problem-solving.
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-10 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
About SAP Fioneer
Innovation is at the core of SAP Fioneer. We were spun out of SAP to drive agility, innovation, and delivery in financial services. With a foundation in cutting-edge technology and deep industry expertise, we elevate financial services through digital business innovation and cloud technology.
A rapidly growing global company with a lean and innovative team, SAP Fioneer offers an environment where you can accelerate your future.
Product Technology Stack
- Languages & tooling: PowerShell, Microsoft Graph PowerShell (MgGraph), Git
- Storage & Databases: Azure Storage, Azure Databases
Role Overview
As a Senior Cloud Solutions Architect / DevOps Engineer, you will be part of our cross-functional IT team in Bangalore, designing, implementing, and managing sophisticated cloud solutions on Microsoft Azure.
Key Responsibilities
Architecture & Design
- Design and document architecture blueprints and solution patterns for Azure-based applications.
- Implement hierarchical organizational governance using Azure Management Groups.
- Architect modern authentication frameworks using Azure AD/EntraID, SAML, OpenID Connect, and Azure AD B2C.
Development & Implementation
- Build closed-loop, data-driven DevOps architectures using Azure Insights.
- Apply code-driven administration practices with PowerShell, MgGraph, and Git.
- Deliver solutions using Infrastructure as Code (IaC), CI/CD pipelines, GitHub Actions, and Azure DevOps.
- Develop IAM standards with RBAC and EntraID.
Leadership & Collaboration
- Provide technical guidance and mentorship to a cross-functional Scrum team operating in sprints with a managed backlog.
- Support the delivery of SaaS solutions on Azure.
Required Qualifications & Skills
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in cloud solutions architecture and DevOps engineering.
- Extensive expertise in Azure services, core web technologies, and security best practices.
- Hands-on experience with IaC, CI/CD, Git, and pipeline automation tools.
- Strong understanding of IAM, security best practices, and governance models in Azure.
- Experience working in Scrum-based environments with backlog management.
- Bonus: Experience with Jenkins, Terraform, Docker, or Kubernetes.
Benefits
- Work with some of the brightest minds in the industry on innovative projects shaping the financial sector.
- Flexible work environment encouraging creativity and innovation.
- Pension plans, private medical insurance, wellness cover, and additional perks like celebration rewards and a meal program.
Diversity & Inclusion
At SAP Fioneer, we believe in the power of innovation that every employee brings and are committed to fostering diversity in the workplace.

Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG
Mandatory Skills:
- AZ-104 (Azure Administrator) experience
- CI/CD migration expertise
- Proficiency in Windows deployment and support
- Infrastructure as Code (IaC) in Terraform
- Automation using PowerShell
- Understanding of SDLC for C# applications (build/ship/run strategy)
- Apache Kafka experience
- Azure web app
Good to Have Skills:
- AZ-400 (Azure DevOps Engineer Expert)
- AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
- Apache Pulsar
- Windows containers
- Active Directory and DNS
- SAST and DAST tool understanding
- MSSQL database
- Postgres database
- Azure security

Key Responsibilities:
- Build and Automation: Utilize Gradle for building and automating software projects. Ensure efficient and reliable build processes.
- Scripting: Develop and maintain scripts using Python and Shell scripting to automate tasks and improve workflow efficiency.
- CI/CD Tools: Implement and manage Continuous Integration and Continuous Deployment (CI/CD) pipelines using tools such as Harness, GitHub Actions, Jenkins, and other relevant technologies. Ensure seamless integration and delivery of code changes.
- Cloud Platforms: Demonstrate proficiency in working with cloud platforms including OpenShift, Azure, and Google Cloud Platform (GCP). Deploy, manage, and monitor applications in cloud environments.
Share CV to:
Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three
Dear Candidate,
We are urgently hiring an AWS Cloud Engineer for the Bangalore location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: Aws Cloud
Salary: Best in industry (20-25% hike on current CTC)
Note:
Only immediate joiners or those with up to 15 days' notice will be preferred.
Only candidates from Tier 1 companies will be shortlisted and selected.
Candidates with a notice period of more than 30 days will be rejected during screening.
Offer shoppers will be rejected.
Job description:
Description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings: 24x7 (work in shifts on a rotational basis)
Total Experience: 8+ years, with at least 5 years of relevant experience
Must have: AWS platform, Terraform, Redshift/Snowflake, Python/Shell scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge (Redshift and Snowflake preferred)
Working with IaC – Terraform and CloudFormation
Working understanding of scripting languages, including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, including Splunk and Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL-based solutions
Experience working with streaming technologies, including Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for technical delivery of managed services across NTT Data customer account base. Working as part of a team providing a Shared Managed Service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technically hands-on
Provide Incident and Problem management on the AWS IaaS and PaaS platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor the AWS platform for technical issues (see the sketch after this list)
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third party suppliers and AWS to jointly resolve incidents
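A small sketch of the proactive platform monitoring described above, using boto3 to list CloudWatch alarms currently firing so they can be triaged as incidents. Region and credentials are assumed to come from the environment.

```python
# Minimal incident-triage helper: print every CloudWatch alarm in the ALARM state.
import boto3

cloudwatch = boto3.client("cloudwatch")

def alarms_firing() -> list[str]:
    """Return the names of all CloudWatch metric alarms currently in the ALARM state."""
    names = []
    paginator = cloudwatch.get_paginator("describe_alarms")
    for page in paginator.paginate(StateValue="ALARM"):
        names.extend(alarm["AlarmName"] for alarm in page["MetricAlarms"])
    return names

if __name__ == "__main__":
    for name in alarms_firing():
        print("ALARM:", name)
```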
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN
Job Summary:
We are seeking a skilled DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for managing, maintaining, and troubleshooting Rancher clusters, with a strong emphasis on Kubernetes operations. This role requires expertise in automation through shell scripting and proficiency in configuration management tools like Puppet and Ansible. Candidates should be highly self-motivated, capable of working on a rotating schedule, and committed to owning tasks through to delivery.
Key Responsibilities:
- Set up, operate, and maintain Rancher and Kubernetes (K8s) clusters, including on bare-metal environments.
- Perform upgrades and manage the lifecycle of Rancher clusters.
- Troubleshoot and resolve Rancher cluster issues efficiently.
- Write, maintain, and optimize shell scripts to automate Kubernetes-related tasks (see the sketch after this list).
- Work collaboratively with the team to implement best practices for system automation and orchestration.
- Utilize configuration management tools like Puppet and Ansible (preferred but not mandatory).
- Participate in a rotating schedule, with the ability to work until 1 AM as required.
- Take ownership of tasks, ensuring timely delivery with high-quality standards.
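A brief sketch of the kind of cluster-health automation described above. The posting asks for shell scripts; the official Kubernetes Python client is used here only to keep all examples in one language, and it assumes a kubeconfig with access to the Rancher-managed cluster.

```python
# Minimal cluster check: list pods across all namespaces that are not Running/Succeeded.
from kubernetes import client, config

def pods_not_running() -> list[str]:
    """Return namespace/name/phase for every pod not in the Running or Succeeded phase."""
    config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()
    unhealthy = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            unhealthy.append(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
    return unhealthy

if __name__ == "__main__":
    for line in pods_not_running():
        print(line)
```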
Key Requirements:
- Strong expertise in Rancher and Kubernetes operations and maintenance.
- Experience in setting up and managing Kubernetes clusters on bare-metal systems is highly desirable.
- Proficiency in shell scripting for task automation.
- Familiarity with configuration management tools like Puppet and Ansible (good to have).
- Strong troubleshooting skills for Kubernetes and Rancher environments.
- Ability to work effectively in a rotating schedule and flexible hours.
- Strong ownership mindset and accountability for deliverables.


We are currently seeking skilled and motivated Senior Java Developers to join our dynamic and innovative development team. As a Senior Java Developer, you will be responsible for designing, developing, and maintaining high-performance, scalable Java applications.
Join DataCaliper and step into the vanguard of technological advancement, where your proficiency will shape the landscape of data management and drive businesses toward unparalleled success.
Please find our job description below; if interested, apply or reply with your profile to connect and discuss.
Company: Data caliper
Work location: Coimbatore
Experience: 3+ years
Joining time: Immediate – 4 weeks
Required skills:
- Good experience in Java/J2EE programming frameworks like Spring (Spring MVC, Spring Security, Spring JPA, Spring Boot, Spring Batch, Spring AOP).
- Deep knowledge of developing enterprise web applications using Java Spring.
- Good experience in REST web services.
- Understanding of DevOps processes like CI/CD.
- Exposure to Maven, Jenkins, Git, data formats (JSON/XML), Quartz, log4j, and Logback.
- Good experience in database technologies / SQL / PL/SQL, or any database experience.
- Excellent communication skills, with the ability to interact with non-technical stakeholders as well.
Thank you

We are looking for multiple hands-on software engineers to handle CI/CD build and packaging engineering to facilitate RtBrick Full Stack (RBFS) software packages for deployment on various hardware platforms. You will be part of a high-performance team responsible for platform and infrastructure.
Requirements
1. About 2-6 years of industry experience in Linux administration with an emphasis on automation
2. Experience with CI/CD tooling framework and cloud deployments
3. Experience with software development tools like Git, GitLab, Jenkins, CMake, GNU build tools, and Ansible
4. Proficient in Python and shell scripting; experience with Golang is excellent to have
5. Experience with Linux APT package management, web servers, optional Open Network Linux (ONL), and infrastructure like network boot, PXE, IPMI, and APC
6. Experience with Open Networking Linux (ONL) is highly desirable. SONIC build experience will be a plus
Responsibilities
CI/CD- Packaging
Knowledge of compilation, packaging, and repository usage in various flavors of Linux.
Expertise in Linux system administration and internals is essential. Ability to build custom images for container and virtual-machine environments, modify the bootloader, reduce image size, and optimize containers for low power consumption.
Linux Administration
Install and configure Linux systems, including back-end database and scripts, perform system maintenance by reviewing error logs, create systems backup, and build Linux modules and packages for software deployment. Build packages in Open Network Linux and SONIC distributions in the near future.
1. Red Hat OpenShift (L2/L3 Expertise):
- Set up the OpenShift Ingress Controller (and deploy multiple Ingresses)
- Set up the OpenShift Image Registry
- Very good knowledge of the OpenShift Management Console to help application teams manage their pods and troubleshoot issues
- Expertise in deployment of artifacts to an OpenShift cluster and configuring customized scaling capabilities
- Knowledge of pod logging in an OpenShift cluster for troubleshooting
2. Architect:
- Suggestions on architecture setup
- Validate architecture and let us know pros and cons and feasibility.
- Management of a multi-location sharded architecture
- Multi-region sharding setup
3. Application DBA:
- Validate and help with Sharding decisions at collection level
- Providing deep analysis on performance by looking at execution plans (see the sketch after this list)
- Index Suggestions
- Archival Suggestions and Options
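A short PyMongo sketch of pulling a query's execution plan to support the performance analysis described above. The connection URI, database, and collection names are hypothetical.

```python
# Minimal execution-plan inspection with PyMongo; a COLLSCAN winning stage usually
# hints at a missing index. Names below are placeholders, not real systems.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # hypothetical URI
orders = client["shop"]["orders"]                   # hypothetical collection

plan = orders.find({"customer_id": 12345}).explain()
winning = plan["queryPlanner"]["winningPlan"]
print("winning plan stage:", winning.get("stage"))
```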
4. Collaboration
Ability to plan and delegate work by providing specific instructions.


Scoutflo is a platform that automates complex infrastructure requirements for Kubernetes Infrastructure.
Job Description:
- In-depth knowledge of full-stack development principles and best practices.
- Expertise in building web applications, with strong proficiency in languages like Node.js, React, and Go.
- Experience developing and consuming RESTful & gRPC API protocols.
- Familiarity with CI/CD workflows and DevOps processes.
- Solid understanding of cloud platforms and container orchestration technologies.
- Experience with Kubernetes pipelines and workflows using tools like Argo CD.
- Experience with designing and building user-friendly interfaces.
- Excellent understanding of distributed systems, databases, and APIs.
- A passion for writing clean, maintainable, and well-documented code.
- Strong problem-solving skills and the ability to work independently as well as collaboratively.
- Excellent communication and interpersonal skills.
- Experience with building self-serve platforms or user onboarding experiences.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform.
- A strong understanding of security best practices for Kubernetes deployments.
- Grasp of setting up network architecture for distributed systems.
Must have:
1) Experience with managing Infrastructure on AWS/GCP or Azure
2) Managed Infrastructure on Kubernetes
Job Title: DevOps + Java Engineer
Location: Bangalore
Mode of work- Hybrid (3 days work from office)
Job Summary: We are looking for a skilled Java+ DevOps Engineer to help enhance and maintain our infrastructure and applications. The ideal candidate will have a strong background in Java development combined with expertise in DevOps practices, ensuring seamless integration and deployment of software solutions. You will collaborate with cross-functional teams to design, develop, and deploy robust and scalable solutions.
Key Responsibilities:
- Develop and maintain Java-based applications and microservices.
- Implement CI/CD pipelines to automate the deployment process.
- Design and deploy monitoring, logging, and alerting systems.
- Manage cloud infrastructure using tools such as AWS, Azure, or GCP.
- Ensure security best practices are followed throughout all stages of development and deployment.
- Troubleshoot and resolve issues in development, test, and production environments.
- Collaborate with software engineers, QA analysts, and product teams to deliver high-quality solutions.
- Stay current with industry trends and best practices in Java development and DevOps.
Required Skills and Experience:
- Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent work experience).
- Proficient in Java programming language and frameworks (Spring, Hibernate, etc.).
- Strong understanding of DevOps principles and experience with DevOps tools (e.g., Jenkins, Git, Docker, Kubernetes).
- Knowledge of containerization and orchestration technologies (Docker, Kubernetes).
- Familiarity with monitoring and logging tools (ELK stack, Prometheus, Grafana).
- Solid understanding of CI/CD pipelines and automated testing frameworks.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.



Responsibilities:
- Design, develop, and implement robust and efficient backend services using microservices architecture principles.
- Write clean, maintainable, and well-documented code using C# and the .NET framework.
- Develop and implement data access layers using Entity Framework.
- Utilize Azure DevOps for version control, continuous integration, and continuous delivery (CI/CD) pipelines.
- Design and manage databases on Azure SQL.
- Perform code reviews and participate in pair programming to ensure code quality.
- Troubleshoot and debug complex backend issues.
- Optimize backend performance and scalability to ensure a smooth user experience.
- Stay up-to-date with the latest advancements in backend technologies and cloud platforms.
- Collaborate effectively with frontend developers, product managers, and other stakeholders.
- Clearly communicate technical concepts to both technical and non-technical audiences.
Qualifications:
- Strong understanding of microservices architecture principles and best practices.
- In-depth knowledge of C# programming language and the .NET framework (ASP.NET MVC/Core, Web API).
- Experience working with Entity Framework for data access.
- Proficiency with Azure DevOps for CI/CD pipelines and version control (Git).
- Experience with Azure SQL for database design and management.
- Experience with unit testing and integration testing methodologies.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong written and verbal communication skills.
- A passion for building high-quality, scalable, and secure software applications.
Position: SRE/ DevOps
Experience: 6-10 Years
Location: Bengaluru/Mangalore
CodeCraft Technologies is a multi-award-winning creative engineering company offering design and technology solutions on mobile, web and cloud platforms.
We are seeking a highly skilled and motivated Site Reliability Engineer (SRE) to join our dynamic team. As an SRE, you will play a crucial role in ensuring the reliability, availability, and performance of our systems and applications. You will work closely with the development team to build and maintain scalable infrastructure, implement best practices in CI/CD, and contribute to the overall stability of our technology stack.
Roles and Responsibilities:
· CI/CD and DevOps:
o Implement and maintain robust Continuous Integration/Continuous Deployment (CI/CD) pipelines to ensure efficient and reliable software delivery.
o Collaborate with development teams to integrate DevOps principles into the software development lifecycle.
o Experience with pipelines such as GitHub Actions, GitLab, Azure DevOps, and CircleCI is a plus.
· Test Automation:
o Develop and maintain automated testing frameworks to validate system functionality, performance, and reliability.
o Collaborate with QA teams to enhance test coverage and improve overall testing efficiency.
· Logging/Monitoring:
o Design, implement, and manage logging and monitoring solutions to proactively identify and address potential issues.
o Respond to incidents and alerts to ensure system uptime and performance.
· Infrastructure as Code (IaC):
o Utilize Terraform (or other tools) to define and manage infrastructure as code, ensuring scalability, security, and consistency across environments.
· Elastic Stack:
o Implement and manage Elastic Stack (ELK) for log and data analysis to gain insights into system performance and troubleshoot issues effectively.
· Cloud Platforms:
o Work with cloud platforms such as AWS, GCP, and Azure to deploy and manage scalable and resilient infrastructure.
o Optimize cloud resources for cost efficiency and performance.
· Vulnerability Management:
o Conduct regular vulnerability assessments and implement measures to address and remediate identified vulnerabilities.
o Collaborate with security teams to ensure a robust security posture.
· Security Assessment:
o Perform security assessments and audits to identify and address potential security risks.
o Implement security best practices and stay current with industry trends and emerging threats.
o Experience with tools such as GCP Security Command Center and AWS Security Hub is a plus.
· Third-Party Hardware Providers:
o Collaborate with third-party hardware providers to integrate and support hardware components within the infrastructure.
Desired Profile:
· The candidate should be willing to work in the EST time zone, i.e. from 6 PM to 2 AM.
· Excellent communication and interpersonal skills
· Bachelor’s Degree
· Certifications related to this field shall be an added advantage.
You will be responsible for:
- Managing all DevOps and infrastructure for Sizzle
- We have both cloud and on-premise servers
- Work closely with all AI and backend engineers on processing requirements and managing both development and production requirements
- Optimize the pipeline to ensure ultra fast processing
- Work closely with management team on infrastructure upgrades
You should have the following qualities:
- 3+ years of experience in DevOps, and CI/CD
- Deep experience in: Gitlab, Gitops, Ansible, Docker, Grafana, Prometheus
- Strong background in Linux system administration
- Deep expertise with AI/ML pipeline processing, especially with GPU processing. This doesn’t need to include model training, data gathering, etc. We’re looking more for experience with model deployment and inferencing tasks at scale
- Deep expertise in Python including multiprocessing / multithreaded applications
- Performance profiling including memory, CPU, GPU profiling
- Error handling and building robust scripts that will be expected to run for weeks to months at a time (see the sketch after this list)
- Deploying to production servers and monitoring and maintaining the scripts
- DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
- Expertise in Docker-based virtualization, including creating & maintaining custom Docker images, deploying Docker images to cloud and on-premise services, and monitoring production Docker images with robust error handling
- Expertise in AWS infrastructure, networking, availability
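A compact sketch of a long-running worker with the kind of robust error handling and multiprocessing described above. The job source and handler are hypothetical placeholders for real pipeline tasks.

```python
# Minimal long-running worker sketch: process batches in a pool, log failures, and keep going.
import logging
import time
from multiprocessing import Pool

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

def process_job(job_id: int) -> int:
    """Placeholder for a CPU/GPU-heavy task, e.g. running inference on one video segment."""
    return job_id * 2

def fetch_jobs() -> list[int]:
    """Placeholder for pulling pending work from a queue or database."""
    return [1, 2, 3]

def main() -> None:
    with Pool(processes=4) as pool:
        while True:
            try:
                results = pool.map(process_job, fetch_jobs())
                logging.info("processed batch: %s", results)
            except Exception:
                # Log and continue: the worker is expected to run for weeks at a time.
                logging.exception("batch failed; retrying after backoff")
                time.sleep(30)
            time.sleep(5)

if __name__ == "__main__":
    main()
```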
Optional but beneficial to have:
- Experience with running Nvidia GPU / CUDA-based tasks
- Experience with image processing in Python (e.g., OpenCV, Pillow)
- Experience with PostgreSQL and MongoDB (or general SQL familiarity)
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor's or Master's degree in computer science or a related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
DevOps, Ansible, CI/CD, GitLab, GitOps, Docker, Python, SQLAlchemy, AWS, GCP, Grafana, Prometheus, Linux/Ubuntu system administration
Seniority: We are looking for a mid to senior level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 3 years to 6 years

Opportunity to work on Product Development
The Technical Project Manager is responsible for managing projects to make sure the proposed plan adheres to the timeline, budget, and scope. Their duties include planning projects in detail, setting schedules for all stakeholders, and executing each step of the project for our proprietary product, with some of the World’s biggest brands across the BFSI domain. The role is cross-functional and requires the individual to own and push through projects that touch upon business, operations, technology, marketing, and client experience.
• 5-7 years of experience in technical project management.
• Professional Project Management Certification from an accredited institution is mandatory.
• Proven experience overseeing all elements of the project/product lifecycle.
• Working knowledge of Agile and Waterfall methodologies.
• Prior experience in Fintech, Blockchain, and/or BFSI domain will be an added advantage.
• Demonstrated understanding of Project Management processes, strategies, and methods.
• Strong sense of personal accountability regarding decision-making and supervising department team.
• Collaborate with cross-functional teams and stakeholders to define project requirements and scope.
Key Responsibilities:
- Rewrite existing APIs in NodeJS.
- Remodel the APIs into Micro services-based architecture.
- Implement a caching layer wherever possible.
- Optimize the API for high performance and scalability.
- Write unit tests for API Testing.
- Automate the code testing and deployment process.
Skills Required:
- At least 2 years of experience developing Backends using NodeJS — should be well versed with its asynchronous nature & event loop, and know its quirks and workarounds.
- Excellent hands-on experience using MySQL or any other SQL Database.
- Good knowledge of MongoDB or any other NoSQL Database.
- Good knowledge of Redis, its data types, and their use cases.
- Experience with graph databases such as Neo4j, and with GraphQL.
- Experience developing and deploying REST APIs.
- Good knowledge of Unit Testing and available Test Frameworks.
- Good understanding of advanced JS libraries and frameworks.
- Experience with Web sockets, Service Workers, and Web Push Notifications.
- Familiar with NodeJS profiling tools.
- Proficient understanding of code versioning tools such as Git.
- Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms.
- Should be a fast learner and a go-getter — without any fear of trying out new things.
Preferences:
- Experience building a large-scale social or location-based app.
Job description Position: Data Engineer Experience: 6+ years Work Mode: Work from Office Location: Bangalore Please note: This position is focused on development rather than migration. Experience in Nifi or Tibco is mandatory.Mandatory Skills: ETL, DevOps platform, Nifi or Tibco We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will play a crucial role in developing and maintaining our data infrastructure and ensuring the smooth operation of our data platforms. The ideal candidate should have a strong background in advanced data engineering, scripting languages, cloud and big data technologies, ETL tools, and database structures.
Responsibilities: • Utilize advanced data engineering techniques, including ETL (Extract, Transform, Load), SQL, and other advanced data manipulation techniques. • Develop and maintain data-oriented scripting using languages such as Python. • Create and manage data structures to ensure efficient and accurate data storage and retrieval. • Work with cloud and big data technologies, specifically AWS and Azure stack, to process and analyze large volumes of data. • Utilize ETL tools such as Nifi and Tibco to extract, transform, and load data into various systems. • Have hands-on experience with database structures, particularly MSSQL and Vertica, to optimize data storage and retrieval. • Manage and maintain the operations of data platforms, ensuring data availability, reliability, and security. • Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions. • Stay up-to-date with the latest industry trends and advancements in data engineering and suggest improvements to enhance our data infrastructure.
Requirements:
• A minimum of 6 years of relevant experience as a Data Engineer.
• Proficiency in ETL, SQL, and other advanced data engineering techniques.
• Strong programming skills in scripting languages such as Python.
• Experience in creating and maintaining data structures for efficient data storage and retrieval.
• Familiarity with cloud and big data technologies, specifically the AWS and Azure stacks.
• Hands-on experience with ETL tools, particularly NiFi and Tibco.
• In-depth knowledge of database structures, including MSSQL and Vertica.
• Proven experience in managing and operating data platforms.
• Strong problem-solving and analytical skills with the ability to handle complex data challenges.
• Excellent communication and collaboration skills to work effectively in a team environment.
• Self-motivated with a strong drive for learning and keeping up-to-date with the latest industry trends.


About The Company
The client is a 17-year-old multinational company headquartered in Whitefield, Bangalore, with another delivery center in Hinjewadi, Pune. It also has offices in the US and Germany, works with several OEMs and product companies in about 12 countries, and has a 200+ strong team worldwide.
Join us as a Senior Software Engineer within our Web Application Development team, based out of Pune to deliver end-to-end customized application development.
We expect you to participate in and contribute to every stage of the project, from interacting with internal customers/stakeholders and understanding their requirements to proposing solutions that best fit their expectations. You will be part of a local team and will have the chance to be part of global project delivery, with the possibility of working on-site (Belgium) if required. You will be a key member of a highly motivated application development team leading the Microsoft technology stack, enabling the team members to deliver “first time right” applications.
Principal Duties and Responsibilities
• You will be responsible for the technical analysis of requirements and lead the project from Technical perspective
• You should be a problem solver and provide scalable and efficient technical solutions
• You guarantee an excellent and scalable application development in an estimated timeline
• You will interact with the customers/stakeholders and understand their requirements and propose the solutions
• You will work closely with the ‘Application Owner’ and carry the entire responsibility of end-to-end processes/development
• You will produce technical & functional application documentation and release notes that facilitate the aftercare of the application
Knowledge, Skills and Qualifications
• Education: Master’s degree in computer science or equivalent
• Experience: Minimum 5-10 years
Required Skills
• Strong working knowledge of C#, Angular 2+, SQL Server, ASP.Net Web API
• Good understanding of OOP concepts, SOLID principles, and development practices
• Good understanding of DevOps, Git, CI/CD
• Experience with development of client and server-side applications
• Excellent English communication skills (written, oral), with good listening capabilities
• Excellent technical, analytical, debugging, and problem-solving skills
• Has a reasonable balance between getting the job done vs technical debt
• Enjoys producing top quality code in a fast-moving environment
• Effective team player, willing to put the needs of the team over their own
Preferred Skills
• Experience with product development for the Microsoft Azure platform
• Experience with product development life cycle would be a plus
• Experience with agile development methodology (Scrum)
• Functional analysis skills and experience (Use cases, UML) is an asset

About Apexon:
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. For over 17 years, Apexon has been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving our clients’ toughest technology problems, and a commitment to continuous improvement. We focus on three broad areas of digital services: User Experience (UI/UX, Commerce); Engineering (QE/Automation, Cloud, Product/Platform); and Data (Foundation, Analytics, and AI/ML), and have deep expertise in BFSI, healthcare, and life sciences.
Apexon is backed by Goldman Sachs Asset Management and Everstone Capital.
To know more about us, please visit: https://www.apexon.com/
Responsibilities:
- We are looking for a C# Automation Engineer with 4-6 years of experience to join our engineering team and help us develop and maintain various software/utility products.
- Good object-oriented programming concepts and practical knowledge.
- Strong programming skills in C# are required.
- Good knowledge of C# Automation is preferred.
- Good to have experience with the Robot framework.
- Must have knowledge of API (REST APIs), and database (SQL) with the ability to write efficient queries.
- Good to have knowledge of Azure cloud.
- Take end-to-end ownership of test automation development, execution and delivery.
Good to have:
- Experience with tools like SharePoint and Azure DevOps.
Other skills:
- Strong analytical & logical thinking skills. Ability to think and act rationally when faced with challenges.
Job Purpose :
Working with the Tech Data Sales Team, the Presales Consultant is responsible for providing presales technical support to the Sales team and presenting tailored demonstrations or qualification discussions to customers and/or prospects. The Presales Consultant also assists the Sales Team with qualifying opportunities in or out and helping expand existing opportunities through solid questioning. The Presales Consultant will be responsible for conducting technical proofs of concept, demonstrations, and presentations on the supported products and solutions.
Responsibilities :
- Subject Matter Expert (SME) in the development of Microsoft Cloud Solutions (Compute, Storage, Containers, Automation, DevOps, Web applications, Power Apps etc.)
- Collaborate and align with business leads to understand their business requirement and growth initiatives to propose the required solutions for Cloud and Hybrid Cloud
- Work with other technology vendors, ISVs to build solutions use cases in the Center of Excellence based on sales demand (opportunities, emerging trends)
- Manage the APJ COE environment and Click-to-Run Solutions
- Provide solution proposal and pre-sales technical support for sales opportunities by identifying the requirements and design Hybrid Cloud solutions
- Create Solutions Play and blueprint to effectively explain and articulate solution use cases to internal TD Sales, Pre-sales and partners community
- Support in-country (APJ countries) Presales Team for any technical related enquiries
- Support Country's Product / Channel Sales Team in prospecting new opportunities in Cloud & Hybrid Cloud
- Provide technical and sales trainings to TD sales, pre-sales and partners.
- Lead & Conduct solution presentations and demonstrations
- Deliver presentations at Tech Data, Partner or Vendor led solutions events.
- Achieve relevant product certifications
- Conduct customer workshops that help accelerate sales opportunities
Knowledge, Skills and Experience :
- Bachelor's degree in Information Technology/Computer Science or equivalent experience; certifications preferred
- Minimum of 7 years of relevant working experience, ideally in an IT multinational environment
- A track record with the assigned line cards is an added advantage
- IT Distributor and/or SI experience would also be an added advantage
- Good communication and problem-solving skills
- Proven ability to work independently, effectively in an off-site environment and under high pressure
What's In It For You?
- Elective Benefits: Our programs are tailored to your country to best accommodate your lifestyle.
- Grow Your Career: Accelerate your path to success (and keep up with the future) with formal programs on leadership and professional development, and many more on-demand courses.
- Elevate Your Personal Well-Being: Boost your financial, physical, and mental well-being through seminars, events, and our global Life Empowerment Assistance Program.
- Diversity, Equity & Inclusion: It's not just a phrase to us; valuing every voice is how we succeed. Join us in celebrating our global diversity through inclusive education, meaningful peer-to-peer conversations, and equitable growth and development opportunities.
- Make the Most of our Global Organization: Network with other new co-workers within your first 30 days through our onboarding program.
- Connect with Your Community: Participate in internal, peer-led inclusive communities and activities, including business resource groups, local volunteering events, and more environmental and social initiatives.
Don't meet every single requirement? Apply anyway.
At Tech Data, a TD SYNNEX Company, we're proud to be recognized as a great place to work and a leader in the promotion and practice of diversity, equity and inclusion. If you're excited about working for our company and believe you're a good fit for this role, we encourage you to apply. You may be exactly the person we're looking for!
The Key Responsibilities Include But Not Limited to:
Help identify and drive Speed, Performance, Scalability, and Reliability related optimization based on experience and learnings from the production incidents.
Work in an agile DevSecOps environment in creating, maintaining, monitoring, and automation of the overall solution-deployment.
Understand and explain the effect of product architecture decisions on systems.
Identify issues and/or opportunities for improvements that are common across multiple services/teams.
This role will require weekend deployments
Skills and Qualifications:
1. 3+ years of experience in a DevOps end-to-end development process with a heavy focus on service monitoring and site reliability engineering work (a minimal probe sketch follows this list).
2. Advanced knowledge of programming/scripting languages (Bash, Perl, Python, Node.js).
3. Experience in Agile/Scrum enterprise-scale software development, including working with Git, JIRA, Confluence, etc.
4. Advanced experience with core microservice technologies (RESTful development).
5. Working knowledge of advanced AI/ML tools is a plus.
6. Working knowledge of one or more cloud services: Amazon AWS, Microsoft Azure.
7. Bachelor's or Master's degree in Computer Science or equivalent related field experience.
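Given the service-monitoring focus called out in item 1 above, here is a tiny illustrative health probe (the endpoint and latency budget are hypothetical; a real setup would report into a monitoring system rather than stdout):

```python
import sys
import time

import requests

ENDPOINT = "https://example.internal/healthz"  # hypothetical health endpoint
LATENCY_BUDGET_SECONDS = 0.5                   # hypothetical SLO threshold

def probe() -> int:
    """Return 0 if healthy, 1 if slow, 2 if down or unhealthy."""
    start = time.monotonic()
    try:
        response = requests.get(ENDPOINT, timeout=5)
    except requests.RequestException as exc:
        print(f"DOWN: {exc}")
        return 2
    latency = time.monotonic() - start
    if response.status_code != 200:
        print(f"UNHEALTHY: HTTP {response.status_code}")
        return 2
    if latency > LATENCY_BUDGET_SECONDS:
        print(f"SLOW: {latency:.3f}s exceeds {LATENCY_BUDGET_SECONDS}s budget")
        return 1
    print(f"OK: {latency:.3f}s")
    return 0

if __name__ == "__main__":
    sys.exit(probe())
```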
Key Behaviours / Attitudes:
Professional curiosity and a desire to develop a deep understanding of services and technologies.
Experience building & running systems to drive high availability, performance and operational improvements
Excellent written & oral communication skills; to ask pertinent questions, and to assess/aggregate/report the responses.
Ability to quickly grasp and analyze complex and rapidly changing systems
Soft skills:
1. Self-motivated and self-managing.
2. Excellent communication / follow-up / time management skills.
3. Ability to fulfill role/duties independently within defined policies and procedures.
4. Ability to balance multi-task and multiple priorities while maintaining a high level of customer satisfaction is key.
5. Be able to work in an interrupt-driven environment.
Work with Dori AI's world-class technology to develop, implement, and support Dori's global infrastructure.
As a member of the IT organization, assist with the analysis of existing complex programs and formulate logic for new complex internal systems. Prepare flowcharts, perform coding, and test/debug programs. Develop conversion and system implementation plans. Recommend changes to development, maintenance, and system standards.
Leading contributor individually and as a team member, providing direction and mentoring to others. Work is non-routine and very complex, involving the application of advanced technical/business skills in a specialized area. BS or equivalent experience in programming on enterprise or department servers or systems.

at CodeCraft Technologies Private Limited

Roles and Responsibilities:
• Gather and analyse cloud infrastructure requirements
• Automating system tasks and infrastructure using a scripting language (Shell/Python/Ruby
preferred), with configuration management tools (Ansible/ Puppet/Chef), service registry and
discovery tools (Consul and Vault, etc), infrastructure orchestration tools (Terraform,
CloudFormation), and automated imaging tools (Packer)
• Support existing infrastructure, analyse problem areas and come up with solutions
• An eye for monitoring – the candidate should be able to look at complex infrastructure and be
able to figure out what to monitor and how.
• Work along with the Engineering team to help out with Infrastructure / Network automation needs.
• Deploy infrastructure as code and automate as much as possible
• Manage a team of DevOps engineers
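As a small sketch of the infrastructure-automation responsibilities above (the working directory is a placeholder; this simply wraps the standard Terraform CLI and assumes Terraform 0.14+ is on PATH):

```python
import subprocess

def terraform_apply(working_dir: str = "infra/") -> None:
    """Run an init/plan/apply cycle via the Terraform CLI."""
    base = ["terraform", f"-chdir={working_dir}"]
    subprocess.run(base + ["init", "-input=false"], check=True)
    subprocess.run(base + ["plan", "-input=false", "-out=tfplan"], check=True)
    subprocess.run(base + ["apply", "-input=false", "tfplan"], check=True)

if __name__ == "__main__":
    terraform_apply()
```

In practice, the same wrapper would run inside a CI job (Jenkins, GitHub Actions, etc.) rather than from a laptop.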
Desired Profile:
• Understanding of provisioning of Bare Metal and Virtual Machines
• Working knowledge of Configuration management tools like Ansible/ Chef/ Puppet, Redfish.
• Experience in scripting languages like Ruby/ Python/ Shell Scripting
• Working knowledge of IP networking, VPN's, DNS, load balancing, firewalling & IPS concepts
• Strong Linux/Unix administration skills.
• Self-starter who can implement with minimal guidance
• Hands-on experience setting up CI/CD from scratch in Jenkins
• Experience managing Kubernetes (K8s) infrastructure
- Public clouds, such as AWS, Azure, or Google Cloud Platform
- Automation technologies, such as Kubernetes or Jenkins
- Configuration management tools, such as Puppet or Chef
- Scripting languages, such as Python or Ruby
- Recommend a migration and consolidation strategy for DevOps tools
- Design and implement an Agile work management approach
- Make a quality strategy
- Design a secure development process
- Create a tool integration strategy

What is the role?
Expected to manage the product plan, engineering, and delivery of Xoxoday Plum. Plum is a rewards and incentives infrastructure for businesses. It's a unified, integrated suite of products to handle various rewarding use cases for consumers, sales, channel partners, and employees. 31% of the total tech team is aligned towards this product and comprises 32 members across Plum Tech, Quality, Design, and Product Management. The annual FY 2019-20 revenue for Plum was $40MN, and it is showing high growth potential this year as well. The product has a good mix of both domestic and international clientele and is expanding. The role will be based out of our head office in Bangalore, Karnataka; however, we are open to discussing the option of remote working with 25-50% travel.
Key Responsibilities
- Scope and lead technology with the right product and business metrics.
- Directly contribute to product development by writing code if required.
- Architect systems for scale and stability.
- Serve as a role model for our high engineering standards and bring consistency to the many codebases and processes you will encounter.
- Collaborate with stakeholders across disciplines like sales, customers, product, design, and customer success.
- Code reviews and feedback.
- Build simple solutions and designs over complex ones, and have a good intuition for what is lasting and scalable.
- Define a process for maintaining a healthy engineering culture ( Cadence for one-on-ones, meeting structures, HLDs, Best Practices In development, etc).
What are we looking for?
- Manage a senior tech team of more than 5 direct and 25 indirect developers.
- Should have experience in handling e-commerce applications at scale.
- Should have at least 7 years of experience in software development and agile processes for international e-commerce businesses.
- Should be extremely hands-on, full-stack developer with modern architecture.
- Should exhibit skills to build a good engineering team and culture.
- Should be able to handle the chaos of product planning and prioritization with a customer-first approach.
- Technical proficiency
- JavaScript, SQL, NoSQL, PHP
- Frameworks like React, ReactNative, Node.js, GraphQL
- Database technologies like Elasticsearch, Redis, MySQL, Cassandra, MongoDB, Kafka
- DevOps to manage and architect infra - AWS, CI/CD (Jenkins)
- System Architecture w.r.t. Microservices, Cloud Development, DB Administration, Data Modeling
- Understanding of security principles and possible attacks, and how to mitigate them.
Whom will you work with?
You will lead the Plum Engineering team and work in close conjunction with the Tech leads of Plum with some cross-functional stake with other products. You'll report to the co-founder directly.
What can you look for?
A wholesome opportunity in a fast-paced environment with scale, international flavour, backend, and frontend. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. Xoxoday works with over 1000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
Focussed on delivering scalable performant database platforms that underpin our customer data services in a dynamic and fast-moving agile engineering environment.
· Experience with different types of enterprise application databases (PostgreSQL a must)
· Familiar with developing in a Cloud environment (AWS RDS, DMS & DevOps highly desirable).
· Proficient in using SQL to interrogate, analyze and report on customer data and interactions on live systems and in testing environments.
· Proficient in using PostgreSQL PL/pgSQL
· Experienced in delivering deployments and infrastructure as code with automation tools such as Jenkins, Terraform, Ansible, etc.
· Comfortable using code hosting platforms for version control and collaboration. (git, github, etc)
· Exposure to automation, with the opportunity to master it and learn to use technologies and tools like Oracle, PostgreSQL, AWS, Terraform, GitHub, Nexus, Jenkins, Packer, Bash Scripting, Python, Groovy, and Ansible
· Comfortable leading complex investigations into service failures and data abnormalities that touch your applications.
· Experience with Batch and ETL methodologies.
· Confident in making technical decisions and acting on them (within reason) when under pressure.
· Calm dealing with stakeholders and easily be able to translate complex technical scenarios to non-technical individuals.
· Managing incidents, problems, and change in line with best practice
· Expected to lead and inspire others in your team and department, drive engineering best practice and compliance, strategic direction, and encourage collaboration and transparency.
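To make the SQL reporting expectation above concrete (the DSN, table, and column names are placeholders):

```python
import psycopg2

# Placeholder DSN; real credentials would come from the environment or a vault.
DSN = "host=localhost dbname=customers user=report_ro password=secret"

def daily_interaction_counts() -> list:
    """Return (day, interaction count) rows for the last 7 days."""
    query = """
        SELECT interaction_date::date AS day, COUNT(*) AS interactions
        FROM customer_interactions
        WHERE interaction_date >= CURRENT_DATE - INTERVAL '7 days'
        GROUP BY 1
        ORDER BY 1;
    """
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(query)
            return cur.fetchall()
```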
The Merck Data Engineering Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on Merck’s data management and global analytics platform (Palantir Foundry, Hadoop, AWS and other components).
The Foundry platform comprises multiple different technology stacks, which are hosted on Amazon Web Services (AWS) infrastructure or on-premise in Merck's own data centers. Developing pipelines and applications on Foundry requires:
• Proficiency in SQL / Java / Python (Python required; all 3 not necessary)
• Proficiency in PySpark for distributed computation
• Familiarity with Postgres and ElasticSearch
• Familiarity with HTML, CSS, and JavaScript and basic design/visual competency
• Familiarity with common databases and access technologies (e.g. JDBC, MySQL, Microsoft SQL Server); not all types required
This position will be project based and may work across multiple smaller projects or a single large project utilizing an agile project methodology.
Roles & Responsibilities:
• Develop data pipelines by ingesting various data sources – structured and un-structured – into Palantir Foundry
• Participate in end to end project lifecycle, from requirements analysis to go-live and operations of an application
• Act as a business analyst for developing requirements for Foundry pipelines
• Review code developed by other data engineers and check against platform-specific standards, cross-cutting concerns, coding and configuration standards and functional specification of the pipeline
• Document technical work in a professional and transparent way. Create high quality technical documentation
• Work out the best possible balance between technical feasibility and business requirements (the latter can be quite strict)
• Deploy applications on Foundry platform infrastructure with clearly defined checks
• Implementation of changes and bug fixes via Merck's change management framework and according to system engineering practices (additional training will be provided)
• DevOps project setup following Agile principles (e.g. Scrum)
• Besides working on projects, act as third-level support for critical applications; analyze and resolve complex incidents/problems. Debug problems across a full stack of Foundry and code based on Python, PySpark, and Java
• Work closely with business users, data scientists/analysts to design physical data models
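As a generic illustration of the pipeline-development responsibilities above (paths and column names are placeholders, and Foundry wraps transforms in its own APIs, so this is plain PySpark rather than Foundry-specific code):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_example").getOrCreate()

# Ingest a structured source; the path is a placeholder.
raw = spark.read.option("header", True).csv("s3://bucket/raw/events.csv")

# Basic cleansing and a derived partition column.
cleaned = (
    raw.dropna(subset=["event_id"])
       .withColumn("event_date", F.to_date("event_timestamp"))
)

# Write to the curated zone; the path is a placeholder.
cleaned.write.mode("overwrite").parquet("s3://bucket/curated/events/")
```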

at Altimetrik


Senior .NET Cloud (Azure) Practitioner
Job Description
Experience: 5-12 years (approx.)
Education: B-Tech/MCA
Mandatory Skills
- Strong RESTful API and microservices development experience using ASP.NET Core Web APIs (C#);
- Must have exceptionally good software design and programming skills on the .NET Core (.NET 3.X, .NET 6) platform: C#, ASP.NET MVC, ASP.NET Web API (RESTful), Entity Framework & LINQ
- Good working knowledge on Azure Functions, Docker, and containers
- Expertise in Microsoft Azure Platform - Azure Functions, Application Gateway, API Management, Redis Cache, App Services, Azure Kubernetes, CosmosDB, Azure Search, Azure Service Bus, Function Apps, Azure Storage Accounts, Azure KeyVault, Azure Log Analytics, Azure Active Directory, Application Insights, Azure SQL Database, Azure IoT, Azure Event Hubs, Azure Data Factory, Virtual Networks and networking.
- Strong SQL Server expertise and familiarity with Azure Cosmos DB, Azure (Blob, Table, queue) storage, Azure SQL etc
- Experienced in Test-Driven Development, unit testing libraries, testing frameworks.
- Good knowledge of Object Oriented programming, including Design Patterns
- Cloud Architecture - Technical knowledge and implementation experience using common cloud architecture, enabling components, and deployment platforms.
- Excellent written and oral communication skills, along with the proven ability to work as a team with other disciplines outside of engineering are a must
- Solid analytical, problem-solving and troubleshooting skills
Desirable Skills:
- Certified Azure Solution Architect Expert
- Microsoft Certified: Azure Fundamentals (Exam AZ-900)
- Microsoft Certified: Azure Administrator Associate (Exam AZ-104)
- Microsoft Certified: Azure Developer Associate (Exam AZ-204)
- Microsoft Certified: DevOps Engineer Expert (AZ-400)
- Microsoft Certified: Azure Solutions Architect Expert (AZ-305)
- Good understanding of software architecture, scalability, resilience, performance;
- Working knowledge of automation tools such as Azure DevOps, Azure Pipeline or Jenkins or similar
Roles & Responsibilities
- Defining best practices & standards for usage of libraries, frameworks and other tools being used;
- Architecture, design, and implementation of software from development, delivery, and releases.
- Breakdown complex requirements into independent architectural components, modules, tasks and strategies and collaborate with peer leadership through the full software development lifecycle to deliver top quality, on time and within budget.
- Demonstrate excellent communications with stakeholders regarding delivery goals, objectives, deliverables, plans and status throughout the software development lifecycle.
- Should be able to work with various stakeholders (Architects/Product Owners/Leadership) as well as the team, as a Lead/Principal/Individual Contributor for Web UI/Front-End Development;
- Should be able to work in an agile, dynamic team environment;
Key Responsibilities:
- Rewrite existing APIs in NodeJS.
- Remodel the APIs into Micro services-based architecture.
- Implement a caching layer wherever possible.
- Optimize the API for high performance and scalability.
- Write unit tests for API Testing (see the sketch after this list).
- Automate the code testing and deployment process.
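The unit-testing item above is illustrated here in Python with requests-based tests run under pytest, against a hypothetical local endpoint; the same shape carries over to NodeJS frameworks such as Jest or Mocha:

```python
import requests

BASE_URL = "http://localhost:3000"  # hypothetical API under test

def test_get_user_returns_expected_shape():
    response = requests.get(f"{BASE_URL}/api/users/1", timeout=5)
    assert response.status_code == 200
    body = response.json()
    assert "id" in body and "name" in body

def test_unknown_user_returns_404():
    response = requests.get(f"{BASE_URL}/api/users/999999", timeout=5)
    assert response.status_code == 404
```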
Skills Required:
- At least 3 years of experience developing Backends using NodeJS — should be well versed with its asynchronous nature & event loop, and know its quirks and workarounds.
- Excellent hands-on experience using MySQL or any other SQL Database.
- Good knowledge of MongoDB or any other NoSQL Database.
- Good knowledge of Redis, its data types, and their use cases.
- Experience with graph databases such as Neo4j; familiarity with GraphQL APIs is a plus.
- Experience developing and deploying REST APIs.
- Good knowledge of Unit Testing and available Test Frameworks.
- Good understanding of advanced JS libraries and frameworks.
- Experience with Web sockets, Service Workers, and Web Push Notifications.
- Familiar with NodeJS profiling tools.
- Proficient understanding of code versioning tools such as Git.
- Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms.
- Should be a fast learner and a go-getter, without any fear of trying out new things.
Preferences:
- Experience building a large-scale social or location-based app.
• Problem Solving: Resolving production issues to fix P1-P4 service issues, problems relating to introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: Understands and is experienced with the use of a wide range of
programming concepts and is also aware of and has applied a range of algorithms.
• Commercial & Risk Awareness: Able to understand & evaluate both obvious and subtle commercial
risks, especially in relation to a programme.
Experience you would be expected to have
• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: Experience preferred, but willingness to learn is essential.
• Big Data: Experience with Big Data methodology and technologies
• Programming: Python or Java, with experience working with data (ETL)
• DevOps: Understand how to work in a DevOps and agile way / Versioning / Automation / Defect Management – Mandatory
• Agile methodology - knowledge of Jira


YOptima is a well capitalized digital startup pioneering full funnel marketing via programmatic media. YOptima is trusted by leading marketers and agencies in India and is expanding its footprint globally.
We are expanding our tech team and looking for a prolific Staff Engineer to lead our tech team as a leader (without necessarily being a people manager). Our tech is hosted on Google cloud and the stack includes React, Node.js, AirFlow, Python, Cloud SQL, BigQuery, TensorFlow.
If you have hands-on experience and passion for building and running scalable cloud-based platforms that change the lives of the customers globally and drive industry leadership, please read on.
- You have 6+ years of quality experience in building scalable digital products/platforms, with experience in full-stack development, big data analytics, and DevOps.
- You are great at identifying risks and opportunities, and have the depth that comes with willingness and capability to be hands-on. Do you still code? Do you love to code? Do you love to roll up your sleeves and debug things?
- Do you enjoy going deep into that part of the 'full stack' that you are not an expert of?
Responsibilities:
- You will help build a platform that supports large scale data, with multi-tenancy and near real-time analytics.
- You will lead and mentor a team of data engineers and full stack engineers to build the next generation data-driven marketing platform and solutions.
- You will lead exploring and building new tech and solutions that solve business problems of today and tomorrow.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science or equivalent discipline.
- Excellent computer systems fundamentals, DS/Algorithms and problem solving skills.
- Experience in conceiving, designing, architecting, developing and operating full stack, data-driven platforms using Big data and cloud tech in GCP/AWS environments.
What you get: Opportunity to build a global company. Amazing learning experience. Transparent work culture. Meaningful equity in the business.
At YOptima, we value people who are driven by a higher sense of responsibility, bias for action, transparency, persistence with adaptability, curiosity and humility. We believe that successful people have more failures than average people have attempts. And that success needs the creative mindset to deal with ambiguities when you start, the courage to handle rejections and failure and rise up, and the persistence and humility to iterate and course correct.
- We look for people who are initiative driven, and not interruption driven. The ones who challenge the status quo with humility and candor.
- We believe startup managers and leaders are great individual contributors too, and that there is no place for context free leadership.
- We believe that the curiosity and persistence to learn new skills and nuances, and to apply the smartness in different contexts matter more than just academic knowledge.
Location:
- Brookefield, Bangalore
- Jui Nagar, Navi Mumbai

- Job Title - DevOps Engineer
- Reports Into - Lead DevOps Engineer
- Location - India
A Little Bit about Kwalee….
Kwalee is one of the world’s leading multiplatform game developers and publishers, with well over 900 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Airport Security and Makeover Studio 3D. We also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope, Die by the Blade and Scathe.
What’s In It For You?
- Hybrid working - 3 days in the office, 2 days remote/WFH is the norm
- Flexible working hours - we trust you to choose how and when you work best
- Profit sharing scheme - we win, you win
- Private medical cover - delivered through BUPA
- Life Assurance - for long term peace of mind
- On site gym - take care of yourself
- Relocation support - available
- Quarterly Team Building days - we’ve done Paintballing, Go Karting & even Robot Wars
- Pitch and make your own games on Creative Wednesdays! (https://www.kwalee.com/blog/inside-kwalee/what-are-creative-wednesdays/)
Are You Up To The Challenge?
As a DevOps Engineer, you have a passion for automation, security, and building reliable, expandable systems. You develop scripts and tools to automate deployment tasks and monitor critical aspects of the operation, and you resolve engineering problems and incidents. You collaborate with architects and developers to help create platforms for the future.
Your Team Mates
The DevOps team works closely with game developers, front-end and back-end server developers, making, updating, and monitoring application stacks in the cloud. Each team member has specific responsibilities, with their own projects to manage, and brings their own ideas about how the projects should work. Everyone strives for the most efficient, secure, and automated delivery of application code and supporting infrastructure.
What Does The Job Actually Involve?
- Find ways to automate tasks and monitoring systems to continuously improve our systems.
- Develop scripts and tools to make our infrastructure resilient and efficient.
- Understand our applications and services and keep them running smoothly.
Your Hard Skills
- Minimum 1 year of experience in a DevOps engineering role
- Deep experience with Linux and Unix systems
- Networking basics knowledge (named, nginx, etc.)
- Some coding experience (Python, Ruby, Perl, etc.)
- Experience with common automation tools (e.g. Chef, Terraform, etc.)
- AWS experience is a plus
- A creative mindset motivated by challenges and constantly striving for the best
Your Soft Skills
Kwalee has grown fast in recent years but we’re very much a family of colleagues. We welcome people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances, and all we ask is that you collaborate, work hard, ask questions and have fun with your team and colleagues.
We don’t like egos or arrogance and we love playing games and celebrating success together. If that sounds like you, then please apply.
A Little More About Kwalee
Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts.
Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle.
We have an amazing team of experts collaborating daily between our studios in Leamington Spa, Lisbon, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, Cyprus, the Philippines and many more places around the world. We’ve recently acquired our first external studio, TicTales, which is based in France.
We have a truly global team making games for a global audience, and it’s paying off: - Kwalee has been voted the Best Large Studio and Best Leadership Team at the TIGA Awards (Independent Game Developers’ Association) and our games have been downloaded in every country on earth - including Antarctica!
Do you love leading a team of engineers, coding up new products, and making sure that they work well together? If so, this is the job for you.
As an Engineering Manager in Unscript, you'll be responsible for managing a team of engineers who are focused on developing new products. You'll be able to apply your strong engineering background as well as your experience with large-scale development projects in the past.
You'll also be able to act as Product Owner (we know it's not your job but you'll have to do this :) ) and make sure that the team is working towards the right goals.
Being the Engineering Manager at Unscript means owning up to all things—from technical issues to product decisions—and being comfortable with taking responsibility for everything from hiring and training new hires, to making sure you get the best out of every individual.
About Us:
UnScript uses AI to create videos that were never shot. Our technology saves brands thousands of dollars otherwise spent on hiring influencers/actors and shooting videos with them. UnScript was founded by distinguished alums from IIT, with exemplary backgrounds in business and technology. UnScript has raised two rounds of funding from global VCs, with Peter Thiel (Co-founder, PayPal) and Reid Hoffman (Co-founder, LinkedIn) as investors.
Required Qualifications:
- B.Tech or higher in Computer Science from a premier institute. (We are willing to waive this requirement if you are an exceptional programmer).
- Experience building scalable & performant web systems with a clear focus on reusable modules.
- You are comfortable in a fast-paced environment and can respond to urgent (and at times ambiguous) requests.
- Ability to translate fuzzy business problems into technical problems & come up with design, estimates, planning, execution & deliver the solution independently.
- Knowledge of AWS or other cloud infrastructure.
The Team:
Unscript was started by Ritwika Chowdhury. Our team brings experience from other foremost institutions like IIT Kharagpur, Microsoft Research, IIT Bombay, IIIT, BCG etc. We are thrilled to be backed by some of the world's largest VC firms and angel investors.


Job Description:
• Contribute to customer discussions in collecting the requirement
• Engage in internal and customer POC’s to realize the potential solutions envisaged for the customers.
• Design/develop/migrate vRA blueprints and vRO workflows; strong hands-on knowledge of vROps and integrations with applications and VMware solutions.
• Develop automation scripts to support the design and implementation of VMware projects.
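As a hedged sketch of the automation-scripting bullet above (the base URL, token handling, and payload fields are entirely hypothetical placeholders, not actual vRealize Automation API routes):

```python
import requests

API_BASE = "https://automation.example.local"  # hypothetical endpoint
TOKEN = "REPLACE_WITH_BEARER_TOKEN"            # placeholder credential

def request_deployment(blueprint_id: str, project_id: str) -> dict:
    """Submit a hypothetical deployment request and return the JSON response."""
    payload = {"blueprintId": blueprint_id, "projectId": project_id}
    response = requests.post(
        f"{API_BASE}/deployments",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
        verify=False,  # lab-only convenience; verify certificates in production
    )
    response.raise_for_status()
    return response.json()
```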
Qualification:
• Maintain current, high-level technical knowledge of the entire VMware product portfolio and future product direction, along with in-depth knowledge in relevant areas
• Maintain deep technical and business knowledge of cloud computing and networking applications, industry directions, and trends.
• Experience with REST API and/or Python programming. TypeScript/NodeJS backend experience
• Experience with Kubernetes
• Familiarity with DevOps tools like Ansible, Puppet, Terraform
• End to end experience in Architecture, Design and Development of VMware Cloud Automation suite with good exposure to VMware products and/or Solutions.
• Hands-on experience in automation, coding, debugging and release.
• Sound process knowledge from requirement gathering, implementation, deployment and Support.
• Experience in working with global teams, customers and partners with solid communication skills.
• VMware CMA certification would be a plus
• Academic background in MS/BE/B-Tech/ IT/CS/ECE/EE would be preferred.
Relevant Experience: 3-7 Years
Location: Bangalore
Client: IBM
Exposure to Connect Direct on multiple operating systems
Job Scheduler
Analysis, Build and troubleshooting skills
Nice-to-have skills:
GitHub (or any source control)
CodeFresh (or any DevOps tool)
Exposure to Cloud
Client: Born Group
Contratual (Codersbrain Payroll)
Hybrid
Location: Bangalore
Title: Cloud and Automation Engineer
- Budget: 6 L - 3 yrs
- Budget: 10 L - 5 yrs
JD: Position Responsibilities:
- Estimate user stories/features (story point estimation) and tasks in hours with the required level of
accuracy and commit them as part of Sprint Planning.
- Contributes to the backlog grooming meetings by promptly asking relevant questions to ensure
requirements achieve the right level of DOR.
- Raise any impediments/risks (technical/operational/personal) they come across and approaches
Scrum Master/Technical Architect/PO
- Creates and maintains the product test strategy, documents it.
- Creates formal test plans and test reports and ensures they have the correct approvals.
- Coaches and mentors test team members on the importance of testing.
- Responsible for test planning
- Organizes and facilitates Test Readiness Review
- Works with the product management team to create the approved user guide ready for the release.
- Provides test coverage reports/automates test cases percentage.
- Ensures high quality deliverable is passed on to UAT phase for stakeholders testing.
- Provides test evaluation summary report (test metrics) for the release.
- Estimates user stories/features (story point estimation) from their point of view and tasks in hours
with the required level of accuracy and commit them as part of Sprint Planning.
- Contributes to the backlog refinement meetings by promptly asking relevant questions to ensure
requirements achieve the right level of DOR.
- Works with the Product Owner to confirm that the acceptance tests reflect the desired functionality.
- Raises any impediments/risks (technical/operational/personal) they come across and approaches
Scrum Masters/Technical Architect/PO accordingly to arrive at a decision.
- Collaborate with other team members on various aspects of development/integration testing etc. to ensure the feature being worked on is delivered with quality and on time.
- Tests features developed by developers throughout sprints to ensure working software with high
quality as per acceptance criteria defined is released as per committed team sprint objectives.
- Have a good understanding of the product features and customer expectations, and ensure all aspects of the functionality are tested before the product is tested in UAT.
- Plan how the features will be tested and should manage the test environments and be ready with the
test data.
- Understands requirements, they create automated test cases thereby ensuring regression testing is
performed on a daily basis. Checks in code into the shared source code repository regularly without
build errors.
- Ensure the defects are reported accurately in Azure DevOps with the relevant details of
severity/description etc and work with the developers to ensure the defects are verified and closed
before the release.
- Update the status and the remaining efforts for their tasks on a daily basis
- Ensures change requests are treated correctly and tracked in the system, impact analysis done and
risks/timelines are appropriately communicated.
Performance testing Roles and responsibilities
- Design, Implement, and Support performance testing systems and strategies
- Designing workload models
- Executing performance tests
- Using consistent metrics for monitoring
- Identifying bottlenecks, and where they occur
- Interpret results and graphs.
- Understand and describe the relationship between queues and sub-systems
- Identify suggestions for performance tuning
- Preparing the test report
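As one way to express the workload-model idea above in code, here is a minimal Locust script (Locust is an open-source Python load-testing tool, named here as an alternative to the commercial tools listed later; endpoints, weights, and wait times are placeholders):

```python
from locust import HttpUser, task, between

class BrowsingUser(HttpUser):
    # Think time between requests models real user pacing.
    wait_time = between(1, 3)

    @task(3)  # weighted: browsing happens three times as often as search
    def view_catalog(self):
        self.client.get("/catalog")

    @task(1)
    def search(self):
        self.client.get("/search", params={"q": "shoes"})
```

Run with `locust -f workload.py --host=https://target.example` and ramp users to the desired concurrency.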
Employer will not sponsor applicants for employment visa status.
Basic Qualifications (Required Skills/Experience):
- Technical Bachelor's degree or Master's degree
- Azure Cloud fundamentals.
- Programming languages: Java
- Development methodologies: Agile (BDD with JUnit and code repositories in Git, with feature-branch-based development and CI/CD)
- Architectural Paradigms: Microservices
- Multi-deployment model support: Container based approach with Docker
- Relevant work experience in ETL/BI testing.
- Write ANSI SQL to compare data (a minimal reconciliation sketch follows this list)
- Knowledge on SQLServer Database Analysis service
- Knowledge on Azure Data lake
- Data comparison, Data Verification and Validation using Excel
- Relevant work experience in manual testing.
- Technical background and an understanding of the Aviation industry
- Should have worked with test management tools - like ALM(application lifecycle management) or
other equivalent test management tools
- Good documenting/scripting knowledge
- Excellent verbal and written communication skills
- Good understanding of SDLC and STLC
- Proven ability to manage and prioritize multiple, diverse projects simultaneously
- Must be flexible, independent, and self-motivated, and punctual, with regular and consistent attendance
- Automation testing tools like Selenium or QTP (Quick Test Professional), and experience with HP ALM or DevOps, ETL testing, SQL
- Experience in load testing, stress testing, stability testing, and reliability testing
- Hands-on experience with performance testing tools like HP Performance Tester (LoadRunner) and WebLOAD
- A minimum of 5 years of work in test automation
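For the data-comparison qualification referenced above (the key column is a placeholder, and in practice the two frames would come from ANSI SQL extracts of the source and target systems), a minimal reconciliation might look like:

```python
import pandas as pd

def reconcile(source: pd.DataFrame, target: pd.DataFrame, key: str = "order_id") -> dict:
    """Compare row counts and list keys missing on either side."""
    merged = source.merge(target, on=key, how="outer", indicator=True)
    return {
        "source_rows": len(source),
        "target_rows": len(target),
        "missing_in_target": merged.loc[merged["_merge"] == "left_only", key].tolist(),
        "missing_in_source": merged.loc[merged["_merge"] == "right_only", key].tolist(),
    }
```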
Preferred Qualifications (Desired Skills/Experience):
- BE/B.Tech/M.E/M.Tech/M.Sc/MCA degree in IT/CSE/ECE with 6 to 8 years of relevant IT Software
Testing experience
- This position would require the person to work in the Flight domain; the below experience would be preferred:
- Past experience related to Aeronautical data / Aerospace / Aviation domain.
- Past experience related to Aircraft Performance computation/optimization, Tail Specific
Performance computations using big data analytics, ML and modeling.
- Past experience related to EFB applications, Flight planning, Data link, Flight Management Computer
and Airline Operations.
- Good understanding of weather, air traffic constraints, ACARS, NOTAMs, routes, flight profiles and flight progress, and a demonstrated ability to lead technology projects and team management in one or more technology areas.
- Knowledge of aviation industry is preferred.
Typical Education & Experience:
Education/experience typically acquired through advanced education (e.g. a Bachelor's degree) and typically 5 or more years of related work experience, or an equivalent combination of education and experience (e.g. a Master's degree plus 4 years of related work experience).
Relocation:
This position does offer relocation within INDIA.