HAProxy Jobs in Bangalore (Bengaluru)


Apply to 11+ HAProxy Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest HAProxy Job opportunities across top companies like Google, Amazon & Adobe.

Remote, Bengaluru (Bangalore)
3 - 8 yrs
₹5L - ₹15L / yr
DevOps
Linux administration
HAProxy
Nginx
NoSQL Databases
+5 more

 

KaiOS is a mobile operating system for smart feature phones that stormed the scene to become the 3rd largest mobile OS globally (2nd in India ahead of iOS). We are on 100M+ devices in 100+ countries. We recently closed a Series B round with Cathay Innovation, Google and TCL.

 

What we are looking for:

  • BE/B-Tech in Computer Science or related discipline
  • 3+ years of commercial DevOps/infrastructure experience in a cross-functional, delivery-focused agile environment; previous experience as a software engineer and the ability to see into and reason about the internals of an application are a major plus
  • Extensive experience designing, delivering and operating highly scalable resilient mission-critical backend services serving millions of users; experience operating and evolving multi-regional services a major plus
  • Strong engineering skills; proficiency in programming languages
  • Extensive experience with infrastructure-as-code and tooling, for example Terraform or CloudFormation
  • Proven ability to execute on a technical and product roadmap
  • Extensive experience with iterative development and delivery
  • Extensive experience with observability for operating robust distributed systems – instrumentation, log shipping, alerting, etc. (a minimal sketch follows this list)
  • Extensive experience with test and deployment automation, CI / CD
  • Extensive experience with AWS, including services such as ECS, Lambda, DynamoDB, Kinesis, EMR, Redshift, Elasticsearch, RDS, CloudWatch, CloudFront, IAM, etc.
  • Strong knowledge of backend and web technology; infrastructure experience with at-scale modern data technology a major plus
  • Deep expertise in server-side security concerns and best practices
  • Fluency in at least one programming language; we mainly use Golang, Python and JavaScript so far
  • Outstanding analytical thinking, problem solving skills and attention to detail
  • Excellent verbal and written communication skills
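
By way of illustration only (not part of the original posting), here is a minimal sketch of the observability and AWS plumbing this role touches: publishing a custom CloudWatch metric and wiring an alarm to it with boto3. The namespace, metric, service name and SNS topic ARN are placeholders, not real KaiOS resources.

```python
# Hypothetical sketch: publish a custom metric and create an alarm on it.
# Assumes AWS credentials are configured in the environment; all names are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Emit a custom metric data point (e.g., queue depth sampled by a worker).
cloudwatch.put_metric_data(
    Namespace="Example/Backend",
    MetricData=[{
        "MetricName": "PendingJobs",
        "Dimensions": [{"Name": "Service", "Value": "notifications"}],
        "Value": 42,
        "Unit": "Count",
    }],
)

# Alarm when the metric stays high, notifying a (placeholder) SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="notifications-pending-jobs-high",
    Namespace="Example/Backend",
    MetricName="PendingJobs",
    Dimensions=[{"Name": "Service", "Value": "notifications"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:oncall-alerts"],
)
```

In practice a script like this would sit next to log shipping and dashboards as part of the instrumentation toolkit the bullet above describes.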

Requirements

Designation: DevOps Engineer

Location: Bengaluru or Hong Kong

Experience: 4 to 7 Years

Notice period: 30 days or less

VyTCDC
Posted by Gobinath Sundaram
Chennai, Coimbatore, Bengaluru (Bangalore)
5 - 8 yrs
₹4.5L - ₹22L / yr
DevOps
Ansible
Kubernetes
Jenkins
Bash
+3 more

Job Description

What does a successful Senior DevOps Engineer do at Fiserv?

This role’s focus will be on contributing to and enhancing our DevOps environment within the Issuer Solutions group, where our cross-functional Scrum teams deliver solutions built on cutting-edge mobile technology and products. You will be expected to provide support across the wider business unit, leading DevOps practices and initiatives.

 

What will you do:

• Build, manage, and deploy CI/CD pipelines.

• DevOps tooling: Helm charts, Rundeck, OpenShift.

• Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines.

• Implement various development, testing and automation tools, and IT infrastructure.

• Optimize and automate release/development cycles and processes.

• Be part of and help promote our DevOps culture.

• Identify and implement continuous improvements to the development practice.

 

What you must have:

• 3+ years of experience in DevOps, with hands-on experience in the following:

- Writing automation scripts for deployments and housekeeping using shell scripts (Bash) and Ansible playbooks

- Building Docker images and running/managing Docker containers (a minimal sketch follows this section)

- Building Jenkins pipelines using Groovy scripts

- Working knowledge of Kubernetes, including application deployments, managing application configurations and persistent volumes

• Good understanding of infrastructure as code

• Ability to write and update documentation

• Demonstrates a logical, process-oriented approach to problems and troubleshooting

• Ability to collaborate with multiple development teams
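
To make the Docker bullets above concrete, here is a hedged sketch (not Fiserv code) that builds an image and runs a container with the Docker SDK for Python; the image tag, build path and port mapping are invented.

```python
# Hypothetical sketch: build an image from a local Dockerfile, run it, and tail its logs.
# Requires the `docker` Python package and a running Docker daemon; names are placeholders.
import docker

client = docker.from_env()

# Build an image from ./app (expects ./app/Dockerfile) and tag it.
image, build_logs = client.images.build(path="./app", tag="example-service:dev")
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Run the freshly built image detached, mapping container port 8080 to host port 8080.
container = client.containers.run(
    "example-service:dev",
    detach=True,
    ports={"8080/tcp": 8080},
    name="example-service-dev",
)

print("started:", container.short_id)
print(container.logs(tail=20).decode())

# Housekeeping: stop and remove the container when done.
container.stop()
container.remove()
```

In a pipeline, the same build-and-run steps would typically live inside a Jenkins stage written in Groovy, as the bullets above suggest.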

 

What we would prefer you to have:

• 8+ years of development experience

• Jenkins administration experience

• Hands-on experience in building and deploying Helm charts

Process Skills:

• Should have worked on Agile projects

 

Behavioral Skills :

• Good Communication skills

 

Skills

Primary competency: Cloud Infra
Primary skill: DevOps
Primary skill percentage: 100

Thinqor
Posted by Ravikanth Dangeti
Bengaluru (Bangalore)
5 - 20 yrs
₹20L - ₹22L / yr
Amazon Web Services (AWS)
EKS
Terraform
Splunk

General Description: Owns all technical aspects of software development for assigned applications.

Participates in the design and development of systems & application programs

Functions as a senior member of an agile team and helps drive consistent development practices – tools, common components, and documentation



Required Skills:

In-depth experience configuring and administering EKS clusters in AWS.

In-depth experience configuring Splunk SaaS in AWS environments, especially on EKS.

In-depth understanding of OpenTelemetry and configuration of OpenTelemetry Collectors (a minimal sketch follows this list).

In-depth knowledge of observability concepts and strong troubleshooting experience.

Experience implementing comprehensive monitoring and logging solutions in AWS using CloudWatch.

Experience with Terraform and infrastructure as code.

Experience with Helm.

Strong scripting skills in shell and/or Python.

Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.

Must have a good understanding of cloud concepts (storage/compute/network).

Experience collaborating with several cross-functional teams to architect observability pipelines for various GCP services such as GKE, Cloud Run, BigQuery, etc.

Experience with Git and GitHub. Experience with code build and deployment using GitHub Actions and Artifact Registry.

Proficient in developing and maintaining technical documentation, ADRs, and runbooks.
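
Purely as an illustrative sketch (not taken from the posting), the snippet below shows a Python service exporting traces to an OpenTelemetry Collector over OTLP/gRPC; the collector endpoint and service name are assumptions.

```python
# Hypothetical sketch: send spans to an OpenTelemetry Collector over OTLP/gRPC.
# Requires opentelemetry-sdk and opentelemetry-exporter-otlp; endpoint and service name are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Identify the service; a collector (for example running as a sidecar or DaemonSet in EKS)
# would forward these spans to Splunk or another observability backend.
resource = Resource.create({"service.name": "example-payments-api"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("order.id", "12345")
    # ... business logic would go here ...

# Flush any pending spans before the process exits.
provider.shutdown()
```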


Someshwara Software
Bengaluru (Bangalore)
2 - 3 yrs
₹4L - ₹7L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more

Position Overview: We are seeking a talented and experienced Cloud Engineer specialized in AWS cloud services to join our dynamic team. The ideal candidate will have a strong background in AWS infrastructure and services, including EC2, Elastic Load Balancing (ELB), Auto Scaling, S3, VPC, RDS, CloudFormation, CloudFront, Route 53, AWS Certificate Manager (ACM), and Terraform for Infrastructure as Code (IaC). Experience with other AWS services is a plus.


Responsibilities:

• Design, deploy, and maintain AWS infrastructure solutions, ensuring scalability, reliability, and security.

• Configure and manage EC2 instances to meet application requirements.

• Implement and manage Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances.

• Set up and manage AWS Auto Scaling to dynamically adjust resources based on demand.

• Configure and maintain VPCs, including subnets, route tables, and security groups, to control network traffic.

• Deploy and manage AWS CloudFormation and Terraform templates to automate infrastructure provisioning using Infrastructure as Code (IaC) principles.

• Implement and monitor S3 storage solutions for secure and scalable data storage.

• Set up and manage CloudFront distributions for content delivery with low latency and high transfer speeds (a minimal sketch follows this list).

• Configure Route 53 for domain management, DNS routing, and failover configurations.

• Manage AWS Certificate Manager (ACM) for provisioning, managing, and deploying SSL/TLS certificates.

• Collaborate with cross-functional teams to understand business requirements and provide effective cloud solutions.

• Stay updated with the latest AWS technologies and best practices to drive continuous improvement.
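
As a hedged illustration of the S3 and CloudFront responsibilities above, and not Someshwara's actual tooling: uploading an asset to S3 and invalidating its cached copy in CloudFront with boto3. The bucket name, file paths and distribution ID are placeholders.

```python
# Hypothetical sketch: push a new asset to S3 and invalidate its cached copy in CloudFront.
# Assumes AWS credentials in the environment; bucket and distribution ID are placeholders.
import time
import boto3

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

BUCKET = "example-static-assets"
KEY = "css/site.css"

# Upload the new version of the asset.
s3.upload_file("build/site.css", BUCKET, KEY, ExtraArgs={"ContentType": "text/css"})

# Invalidate the cached object so edge locations fetch the new version.
cloudfront.create_invalidation(
    DistributionId="E1EXAMPLE123",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": [f"/{KEY}"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
```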



Qualifications:

• Bachelor's degree in Computer Science, Information Technology, or a related field.

• Minimum of 2 years of relevant experience in designing, deploying, and managing AWS cloud solutions.

• Strong proficiency in AWS services such as EC2, ELB, Auto Scaling, VPC, S3, RDS, and CloudFormation.

• Experience with other AWS services such as Lambda, ECS, EKS, and DynamoDB is a plus.

• Solid understanding of cloud computing principles, including IaaS, PaaS, and SaaS.

• Excellent problem-solving skills and the ability to troubleshoot complex issues in a cloud environment.

• Strong communication skills with the ability to collaborate effectively with cross-functional teams.

• Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are highly desirable.


Additional Information:

• We value creativity, innovation, and a proactive approach to problem-solving.

• We offer a collaborative and supportive work environment where your ideas and contributions are valued.

• Opportunities for professional growth and development.

Someshwara Software Pvt Ltd is an equal opportunity employer. We celebrate diversity and are dedicated to creating an inclusive environment for all employees.

 

"A healthcare Product Company"

"A healthcare Product Company"

Agency job
Bengaluru (Bangalore)
4 - 6 yrs
₹4L - ₹20L / yr
DevOps
CI/CD
Jenkins
Docker
Release Management
+2 more
  1. Develop and Deploy Software:
    1. Architect and create an effective build and release process using industry best practices and tools
    2. Create and manage build scripts to deploy software in a multi-cloud environment
    3. Look for opportunities to automate as much of the deployment process as possible to provide for repeatability, auditability and scalability, and to build in process enforcement
  2. Manage Release Schedule:
    1. Act as a “gatekeeper” for all releases into production (a minimal release-gate sketch follows this list)
    2. Work closely with business stakeholders, development managers and developers to prepare a release schedule
    3. Help prioritize deployment requests for version upgrades, patches and hot-fixes
  3. Continuous Delivery of Software:
    1. Implement Continuous Integration (CI) practices to drive development teams to implement smaller changes and commit code to the version control repo frequently
    2. Implement Continuous Deployment (CD) practices that automate deployment of the application to several environments – Dev, Test and Production
    3. Implement Continuous Testing (functional and non-functional) to execute tests in the CI/CD pipeline
  4. Manage Version Control:
    1. Define and implement branching policies to efficiently manage source-code
    2. Implement business rules as a part of source control standards
  5. Resolve Software Issues:
    1. Assist technical support and development teams to troubleshoot issues and identify areas that need improvement
    2. Address deployment related issues
  6. Maintain Release Documentation:
    1. Maintain release notes (features available in stable versions and known issues) and other documents for both internal and external end users
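
To illustrate the gate-keeping and repeatable-deployment ideas above, here is a minimal sketch under assumed tooling (pytest, Git and Helm on PATH), not the company's actual process: the build is promoted only when the test suite passes, then tagged and rolled out with Helm. The release, chart and namespace names are invented.

```python
# Hypothetical release gate: run tests, and only if they pass, tag the commit and deploy via Helm.
# Assumes pytest, git and helm are installed; release/chart/namespace names are placeholders.
import subprocess
import sys
from datetime import datetime, timezone


def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure


def main() -> int:
    # Gate: the release does not proceed unless the test suite is green.
    if subprocess.run(["pytest", "-q"]).returncode != 0:
        print("tests failed - release blocked")
        return 1

    tag = datetime.now(timezone.utc).strftime("release-%Y%m%d-%H%M%S")
    run(["git", "tag", tag])
    run(["git", "push", "origin", tag])

    # Repeatable deploy: the same command works for Dev, Test and Production values.
    run([
        "helm", "upgrade", "--install", "example-app", "./charts/example-app",
        "--namespace", "production",
        "--set", f"image.tag={tag}",
    ])
    print("released", tag)
    return 0


if __name__ == "__main__":
    sys.exit(main())
```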
Reputed client of People First

Agency job
Chennai, Bengaluru (Bangalore), Gurugram
2 - 9 yrs
₹10L - ₹30L / yr
DevOps
Kubernetes
Docker
Jenkins
CI/CD
+4 more
Job Description:

As an MLOps Engineer at QuantumBlack you will:

Develop and deploy technology that enables data scientists and data engineers to build, productionize and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.

Choose and use the right cloud services, DevOps tooling and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.

Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).

Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use-case development, including MLOps.

Our Tech Stack:

We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React and TypeScript, amongst others, in our projects.

Key Skills:

• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security

• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)

• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)

• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform) and Kubernetes artifacts (e.g., deployments, operators, Helm charts)

• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store) – a minimal experiment-tracking sketch follows this list

• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure

• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
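
To make the experiment-tracking bullet concrete, a minimal MLflow example follows; it is an assumed illustration, not QuantumBlack's internal stack, and the tracking URI and experiment name are placeholders.

```python
# Hypothetical sketch: track a training run with MLflow (params, metric, model artifact).
# Requires mlflow and scikit-learn; the tracking server URI and experiment name are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")
mlflow.set_experiment("churn-model")

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

with mlflow.start_run(run_name="baseline-logreg"):
    params = {"C": 0.5, "max_iter": 200}
    model = LogisticRegression(**params).fit(X, y)

    mlflow.log_params(params)                                        # hyperparameters
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))
    mlflow.sklearn.log_model(model, artifact_path="model")           # versioned artifact for later deployment
```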

BetterPlace

Posted by Sikha Dash
Bengaluru (Bangalore)
1 - 3 yrs
₹10L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+6 more

We are looking for a DevOps Engineer to manage the interchange of data between the server and the users. Your primary responsibility will be the development of all server-side logic, definition and maintenance of the central database, and ensuring high performance and responsiveness to requests from the frontend. You will also be responsible for integrating the front-end elements built by your co-workers into the application. Therefore, a basic understanding of frontend technologies is necessary as well.

What we are looking for

  • Must have strong knowledge of Kubernetes and Helm 3
  • Should have previous experience in Dockerizing applications
  • Should be able to automate manual tasks using shell or Python
  • Should have good working knowledge of the AWS and GCP clouds
  • Should have previous experience working with Bitbucket, GitHub, or any other VCS
  • Must be able to write Jenkins pipelines and have working knowledge of GitOps and Argo CD
  • Hands-on experience in proactive monitoring using tools like New Relic, Prometheus, Grafana, Fluent Bit, etc. (a minimal instrumentation sketch follows this list)
  • Should have a good understanding of the ELK stack
  • Exposure to Jira, Confluence, and sprints
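
As a hedged sketch of the proactive-monitoring bullet above (generic Prometheus instrumentation, not BetterPlace code), the snippet exposes a request counter and a latency histogram from a small Python service for Prometheus to scrape and Grafana to chart; metric names and the port are assumptions.

```python
# Hypothetical sketch: expose Prometheus metrics from a small worker/service.
# Requires the prometheus_client package; metric names and port are placeholders.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_duration_seconds", "Request latency in seconds")


@LATENCY.time()  # observe how long each call takes
def handle_request() -> None:
    time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work
    status = "ok" if random.random() > 0.05 else "error"
    REQUESTS.labels(status=status).inc()


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```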

What you will do:

  • Mentor junior DevOps engineers and raise the team’s bar
  • Act as primary owner of tech best practices, tech processes, DevOps initiatives, and timelines
  • Provide oversight of all server environments, from Dev through Production
  • Be responsible for automation and configuration management
  • Provide stable environments for quality delivery
  • Assist with day-to-day issue management
  • Take the lead in containerising microservices
  • Develop deployment strategies that allow DevOps engineers to successfully deploy code in any environment
  • Enable the automation of CI/CD
  • Implement dashboards to monitor various systems and applications

What we expect:

  • 1-3 years of experience in DevOps
  • Experience in setting up front-end best practices
  • Experience working in high-growth startups
  • Ownership and a proactive attitude
  • A mentorship and upskilling mindset

What you’ll get:

  • Health benefits
  • Innovation-driven culture
  • Smart and fun team to work with
  • Friends for life
Pixuate

Posted by Ramya Kukkaje
Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Oracle Cloud
+5 more
  • Pixuate is a deep-tech AI start-up enabling businesses to make smarter decisions with our edge-based video analytics platform. We offer innovative solutions across traffic management, industrial digital transformation, and smart surveillance, and we aim to serve enterprises globally as a preferred partner for the digitization of visual information.

    Job Description

    We at Pixuate are looking for highly motivated and talented Senior DevOps Engineers to support building the next generation of innovative, deep-tech AI based products. If you have a passion for building great software, an analytical mindset, enjoy solving complex problems, thrive in a challenging environment, are self-driven and constantly exploring and learning new technologies, and have the ability to succeed on your own merits and fast-track your career growth, we would love to talk!

    What do we expect from this role?

    • This role’s key area of focus is to coordinate and manage the product from development through deployment, working with the rest of the engineering team to ensure smooth functioning.
    • Work closely with the Head of Engineering in building out the infrastructure required to deploy, monitor and scale the services and systems.
    • Act as the technical expert, innovator, and strategic thought leader within the Cloud Native Development, DevOps and CI/CD pipeline technology engineering discipline.
    • Should be able to understand how technology works and how various structures fall in place, with a high-level understanding of working with various operating systems and their implications.
    • Troubleshoots basic software or DevOps stack issues

    You would be great at this job, if you have below mentioned competencies

    • B.Tech/M.Tech/MCA/BSc/MSc/BCA, preferably in Computer Science
    • 5+ years of relevant work experience
    • Knowledge of various DevOps tools and technologies
    • Should have worked on tools like Docker, Kubernetes and Ansible in a production environment for data-intensive systems (a minimal sketch follows this list)
    • Experience in developing Continuous Integration / Continuous Delivery (CI/CD) pipelines, preferably using Jenkins, scripting (shell/Python), and Git and Git workflows
    • Experience implementing role-based security, including AD integration, security policies, and auditing, in a Linux/Hadoop/AWS environment.
    • Experience with the design and implementation of big data backup/recovery solutions.
    • Strong Linux fundamentals and scripting; experience as Linux Admin is good to have.
    • Working knowledge in Python is a plus
    • Working knowledge of TCP/IP networking, SMTP, HTTP, load-balancers (ELB, HAProxy) and high availability architecture is a plus
    • Strong interpersonal and communication skills
    • Proven ability to complete projects according to outlined scope and timeline
    • Willingness to travel within India and internationally whenever required
    • Demonstrated leadership qualities in past roles
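
As an illustrative sketch of the Kubernetes-in-production item above, and not Pixuate's tooling: checking a Deployment's rollout state with the official Kubernetes Python client. The deployment name and namespace are placeholders.

```python
# Hypothetical sketch: verify that a Deployment's replicas are up to date and ready.
# Requires the `kubernetes` package and a working kubeconfig; names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment(name="video-analytics", namespace="production")

desired = dep.spec.replicas or 0
ready = dep.status.ready_replicas or 0
updated = dep.status.updated_replicas or 0

print(f"{dep.metadata.name}: desired={desired} updated={updated} ready={ready}")

if ready == desired and updated == desired:
    print("rollout complete")
else:
    print("rollout still in progress (or degraded)")
```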

    More about Pixuate:

    Pixuate, owned by Cocoslabs Innovative Solutions Pvt. Ltd., is a leading AI startup building the most advanced Edge-based video analytics products. We are recognized for our cutting-edge R&D in deep learning, computer vision and AI and we are solving some of the most challenging problems faced by enterprises. Pixuate’s plug-and-play platform revolutionizes monitoring, compliance to safety, and efficiency improvement for Industries, Banks & Enterprises by providing actionable real-time insights leveraging CCTV cameras.

    We have enabled customers such as Hindustan Unilever, Godrej, Secuira, L&T, Bigbasket, Microlabs and Karnataka Bank, and we are rapidly expanding our business to cater to the needs of the Manufacturing & Logistics and Oil and Gas sectors.

    Rewards & Recognitions:

    Why join us?

    You will get an opportunity to work with the founders and be part of 0 to 1 journey& get coached and guided. You will also get an opportunity to excel your skills by being innovative and contributing to the area of your personal interest. Our culture encourages innovation, freedom and rewards high performers with faster growth and recognition.   

    Where to find us?

    Website: http://pixuate.com/

    LinkedIn: https://www.linkedin.com/company/pixuate-ai

     

    Place of Work:

    Work from Office – Bengaluru
Agency job
via Acadecraft by Srashti Jain
Remote, Bengaluru (Bangalore)
5 - 12 yrs
₹9L - ₹21L / yr
DevOps
Docker
Kubernetes
Terraform
Amazon Web Services (AWS)
+3 more

Job Description

Please connect with me on LinkedIn or share your resume with Shrashti Jain.
• 8+ years of overall experience, with at least 4+ years of relevant experience. (DevOps experience should make up the greater part of the overall experience.)

• Experience with Kubernetes and other container management solutions

• Should have hands-on experience with, and a good understanding of, DevOps tools and automation frameworks

• Demonstrated hands-on experience with DevOps techniques building continuous integration solutions using Jenkins, Docker, Git, Maven

• Experience with n-tier web application development and experience in J2EE / .Net based frameworks

• Look for ways to improve: Security, Reliability, Diagnostics, and costs

• Knowledge of security, networking, DNS, firewalls, WAF etc

• Familiarity with Helm and Terraform for provisioning GKE; Bash/shell scripting

• Must be proficient in one or more scripting languages: Unix Shell, Perl, Python

• Knowledge and experience with Linux OS

• Should have working experience with monitoring tools like Datadog, ELK, and/or Splunk, or any other monitoring tools/processes

• Experience working in Agile environments

• Ability to handle multiple competing priorities in a fast-paced environment

• Strong Automation and Problem-solving skills and ability

• Experience implementing and supporting AWS-based instances and services (e.g. EC2, S3, EBS, ELB, RDS, IAM, Route 53, CloudFront, ElastiCache)

• Very strong hands-on experience with automation tools such as Terraform

• Good experience with provisioning tools such as Ansible and Chef (a minimal playbook-driving sketch follows this list)

• Experience with CI/CD tools such as Jenkins

• Experience managing production environments

• Good understanding of security in IT and the cloud

• Good knowledge of TCP/IP

• Good Experience with Linux, networking and generic system operations tools

• Experience with Clojure and/or the JVM

• Understanding of security concepts

• Familiarity with blockchain technology, in particular Tendermint
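
To ground the Ansible/provisioning bullet above, here is a hedged sketch that drives a playbook from Python with the ansible-runner library; the directory layout, playbook name and extra vars are invented, and this is illustrative rather than the client's actual setup.

```python
# Hypothetical sketch: run an Ansible playbook from Python and inspect the result.
# Requires the ansible-runner package and an existing playbook/inventory; paths are placeholders.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="./ansible",       # expects ./ansible/project/site.yml and ./ansible/inventory/
    playbook="site.yml",
    extravars={"app_version": "1.4.2"},
)

print("status:", result.status)         # e.g. "successful" or "failed"
print("return code:", result.rc)

# Per-host count of tasks that completed OK.
for host, ok_count in (result.stats or {}).get("ok", {}).items():
    print(f"{host}: ok tasks = {ok_count}")
```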

Semiconductor based industry
Bengaluru (Bangalore)
8 - 14 yrs
₹10L - ₹50L / yr
DevOps
Red Hat Linux
EDA
Reporting
Your challenge
The mission of R&D IT Design Infrastructure is to offer a state-of-the-art design environment
for the chip hardware designers. The R&D IT design environment is a complex landscape of EDA Applications, High Performance Compute, and Storage environments - consolidated in five regional datacenters. Over 7,000 chip hardware designers, spread across 40+ locations around the world, use this state-of-the-art design environment to design new chips and drive the innovation of company. The following figures give an idea about the scale: the landscape has more 75,000+ cores, 30+ PBytes of data, and serves 2,000+ CAD applications and versions. The operational service teams are globally organized to cover 24/7 support to the chip hardware design and software design projects.
Since the landscape is really too complex to manage the traditional way, it is our strategy to transform our R&D IT design infrastructure into “software-defined datacenters”. This transformation entails a different way of work and a different mind-set (DevOps, Site Reliability Engineering) to ensure that our IT services are reliable. That’s why we are looking for a DevOps Linux Engineer to strengthen the team that is building a new on-premise software defined virtualization and containerization platform (PaaS) for our IT landscape, so that we can manage it with best practices from software engineering and offer the IT service reliability which is required by our chip hardware design community.
It will be your role to develop and maintain the base Linux OS images that are offered via automation to the customers of the internal (on-premise) cloud platforms.

Your responsibilities as DevOps Linux Engineer:
• Develop and maintain the base RedHat Linux operating system images
• Develop and maintain code to configure and test the base OS image
• Provide input to support the team in designing, developing and maintaining automation products with playbooks (YAML) and modules (Python/PowerShell) in tools like Ansible Tower and ServiceNow (a minimal module sketch follows this list)
• Test and verify the code produced by the team (including your own) to continuously improve and refactor
• Troubleshoot and solve incidents on the RedHat Linux operating system
• Work actively with other teams to align on the architecture of the PaaS solution
• Keep the base OS image up to date via patches, or make sure patches are available to the virtual machine owners
• Train team members and others with your extensive automation knowledge
• Work together with ServiceNow developers in your team to provide the best and most intuitive end-user experience possible for the virtual machine OS deployments
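
As a minimal, hypothetical illustration of the “modules (Python)” responsibility above (not the company's actual code), here is the skeleton of a custom Ansible module a playbook could call to check a setting in the base OS image; the parameter names and the file it reads are placeholders.

```python
#!/usr/bin/python
# Hypothetical custom Ansible module: report whether a kernel parameter file has the expected value.
# All names (module arguments, default path) are placeholders for illustration only.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            path=dict(type="str", default="/proc/sys/vm/swappiness"),
            expected=dict(type="str", required=True),
        ),
        supports_check_mode=True,
    )

    path = module.params["path"]
    expected = module.params["expected"]

    try:
        with open(path) as handle:
            actual = handle.read().strip()
    except OSError as exc:
        module.fail_json(msg=f"could not read {path}: {exc}")

    # This module only reports; it never changes the system, so changed is always False.
    module.exit_json(changed=False, path=path, actual=actual, matches=(actual == expected))


if __name__ == "__main__":
    main()
```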

We are looking for a DevOps engineer/consultant with the following characteristics:
• Master or Bachelor degree
• You are a technical, creative, analytical and open-minded engineer who is eager to learn and not afraid to take initiative.
• Your favorite t-shirt has “Linux” or “RedHat” printed on it at least once.
• Linux guru: you have great knowledge of Linux servers (RedHat), RedHat Satellite 6 and other RedHat products
• Experience in infrastructure services, e.g., networking, DNS, LDAP, SMTP
• DevOps mindset: you are a team player who is eager to develop and maintain cool products to automate/optimize processes in a complex IT infrastructure, and are able to build and maintain productive working relationships
• You have great English communication skills, both verbal and written.
• No issue working outside business hours to support the platform for critical R&D applications

Other competencies we value, but which are not strictly mandatory:
• Experience with agile development methods, like Scrum, and conviction of their power to deliver products with immense (business) value.
• “Security” is your middle name, and you are always challenging yourself and your colleagues to design and develop new solutions as security-tight as possible.
• You are a master of automation and orchestration with tools like Ansible Tower (or comparable) and feel comfortable developing new modules in Python or PowerShell.
• It would be awesome if you are already a true Yoda when it comes to code version control and branching strategies with Git, and preferably have worked with GitLab before.
• Experience with automated testing in a CI/CD pipeline with Ansible, Python and tools like Selenium.
• An enthusiast of cloud platforms like Azure & AWS.
• Background in and affinity with R&D
Directi

Posted by Shilpa L
Bengaluru (Bangalore), Mumbai
3 - 10 yrs
₹15L - ₹35L / yr
DevOps
Amazon Web Services (AWS)
automation tools
The Enterprise Solutions team is responsible for end-to-end development, enhancement, customization and integration of Zeta products and solutions based on business requirements.

What is the job like?
• Architecting solutions and driving the technical aspects of projects
• Breaking down complex requirements into simpler stories
• Working with various stakeholders and helping convert requirements into code
• Managing, mentoring and reviewing engineers for their technical contribution
• Participating actively in hiring and nurturing of talent

Who should apply?
• Bachelor’s/Master’s degree in engineering (computer science, information systems)
• 10+ years of experience building enterprise systems, including at least 2 years of direct people management experience
• Worked on large-scale Java/JSP applications with a good understanding of the web stack
• Good understanding of the nuances of distributed systems
• Good understanding of relational databases (preferred: MySQL/PostgreSQL)
• Good understanding of reporting/BI systems (preferred: Crystal, Jasper)
• Worked with IaaS like AWS/GCP/Azure etc.
• Worked with message brokers and application containers
• Can analyse, design, architect, develop and maintain software solutions across multiple projects
• Can direct and provide ongoing leadership for a team of individual contributors: set objectives, review performance, define growth plans and nurture
• Drives best practices, and is a pro with agile methodologies/practices – SCRUM, Test Driven Development (TDD)
• Can manage headcount, deliverables and schedules across ongoing projects, ensuring that resources are appropriately allocated and timelines are met in accordance with the project roadmaps

Zeta is a revolutionary fintech start-up making strides in the world of employee benefits & rewards, cafeteria management and digital payments. Zeta (a bootstrapped fintech startup) is part of the Directi group, a prestigious tech conglomerate with a 17-year-long history and 25 software products in the market. The group has churned out successful mass-market businesses like Media.net, Flock, Ringo, Radix, Skenzo and Codechef, without any external funding.

Here’s what we’ve built so far:
1. Zeta Optima: a fully digitized employee tax-benefits programme that helps employees save over 80K in taxes and helps organizations save up to 90% of their time and resources
2. Zeta Express: a corporate cafeteria solution that makes cafeterias automated and completely cashless
3. Zeta Super Card: an advanced card-based payment solution that is 10x more secure than bank-issued cards
4. Zeta Spotlight: a digitized rewards, recognition and gifting solution that is easy to distribute and easy to spend