- 3-6 years of software development and operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Experience managing a distributed NoSQL or streaming system (Kafka, Cassandra, etc.)
- Experience with containers, microservices, and deployment and service orchestration using Kubernetes: EKS (preferred), AKS, or GKE.
- Strong scripting knowledge in languages such as Python or Shell
- Experience with and a deep understanding of Kubernetes.
- Experience in Continuous Integration and Delivery.
- Work collaboratively with software engineers to define infrastructure and deployment requirements
- Provision, configure and maintain AWS cloud infrastructure
- Ensure configuration consistency and compliance using configuration management tools
- Administer and troubleshoot Linux-based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization (see the cost-reporting sketch after this list)
- AWS
- Docker
- Kubernetes
- Envoy
- Istio
- Jenkins
- Cloud Security & SIEM stacks
- Terraform
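As a rough illustration of the cost analysis and operational tooling responsibilities above, a minimal Python sketch using boto3 and the Cost Explorer API might look like the following; the date range, required permissions (ce:GetCostAndUsage), and output format are assumptions for illustration only, not part of the role description:

```python
# Minimal sketch: summarize last month's AWS spend by service with Cost Explorer.
# Assumes boto3 is installed and credentials with ce:GetCostAndUsage are configured.
from datetime import date, timedelta

import boto3

def monthly_cost_by_service() -> dict:
    ce = boto3.client("ce")
    end = date.today().replace(day=1)                  # first day of current month
    start = (end - timedelta(days=1)).replace(day=1)   # first day of previous month
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    groups = resp["ResultsByTime"][0]["Groups"]
    return {g["Keys"][0]: float(g["Metrics"]["UnblendedCost"]["Amount"]) for g in groups}

if __name__ == "__main__":
    for service, cost in sorted(monthly_cost_by_service().items(), key=lambda kv: -kv[1]):
        print(f"{service:40s} ${cost:,.2f}")
```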

About Jar
Hey! We are building an exciting product in the micro wealth management space for Gen Z and millennials.
400+ million people are waiting to become financially literate, learn to save, and make smart investments one step at a time. That's India right now, and we are doing everything to make it happen. A financially literate young population is potent enough to change the world.
We are here to help Indians rediscover savings in times of instant gratification, inflation, and a never-ending pull toward the Western philosophy of consumerism.
You ask how we are going to do it? We are building a micro wealth management platform one small step at a time, starting with micro savings and financial literacy, and then moving on to the glory of micro investments.
We are backed by serial entrepreneurs and marquee operators 💰


About GradRight
Our vision is to be the world’s leading Ed-Fin Tech company dedicated to making higher education accessible and affordable to all. Our mission is to drive transparency and accountability in the global higher education sector and create significant impact using the power of technology, data science and collaboration.
GradRight is the world’s first SaaS ecosystem that brings together students, universities and financial institutions in an integrated manner. It enables students to find and fund high return college education, universities to engage and select the best-fit students and banks to lend in an effective and efficient manner.
In the last three years, we have enabled students to get the best deals on $2.8+ billion in loan requests and facilitated disbursements of more than $350 million in loans. GradRight won the HSBC Fintech Innovation Challenge supported by the Ministry of Electronics & IT, Government of India, and was among the top 7 global finalists in the PIEoneer Awards, UK.
GradRight’s team possesses extensive domestic and international experience in the launch and scale-up of premier higher education institutions. It is led by alumni of IIT Delhi, BITS Pilani, IIT Roorkee, ISB Hyderabad and University of Pennsylvania. GradRight is a Delaware, USA registered company with a wholly owned subsidiary in India.
About the Role
We are looking for a passionate DevOps Engineer with hands-on experience in AWS cloud infrastructure, containerization, and orchestration. The ideal candidate will be responsible for building, automating, and maintaining scalable cloud solutions, ensuring smooth CI/CD pipelines, and supporting development and operations teams.
Core Responsibilities
Design, implement, and manage scalable, secure, and highly available infrastructure on AWS.
Build and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or GitHub Actions.
Containerize applications using Docker and manage deployments with Kubernetes (EKS, self-managed, or other distributions).
Monitor system performance, availability, and security using tools like CloudWatch, Prometheus, Grafana, and the ELK/EFK stack (see the metric-publishing sketch after this list).
Collaborate with development teams to optimize application performance and deployment processes.
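To make the monitoring item above a little more concrete, here is a minimal, illustrative Python sketch that publishes a custom health metric to CloudWatch, which a dashboard or alarm could then consume; the namespace, dimension, and service name are placeholders, not part of the role:

```python
# Minimal sketch: publish a custom health metric to CloudWatch that a dashboard
# or alarm can consume. Namespace, dimension, and service name are illustrative.
import boto3

def publish_health(service: str, healthy: bool) -> None:
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="Custom/AppHealth",          # assumed namespace, not an AWS default
        MetricData=[{
            "MetricName": "Healthy",
            "Dimensions": [{"Name": "Service", "Value": service}],
            "Value": 1.0 if healthy else 0.0,
            "Unit": "Count",
        }],
    )

if __name__ == "__main__":
    publish_health("checkout-api", healthy=True)
```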
Required Skills & Experience
3–4 years of professional experience as a DevOps Engineer or similar role.
Strong expertise in AWS services (EC2, S3, RDS, Lambda, VPC, IAM, CloudWatch, EKS, etc.).
Hands-on experience with Docker and Kubernetes (EKS or self-hosted clusters).
Proficiency in CI/CD pipeline design and automation.
Experience with Infrastructure as Code (Terraform / AWS CloudFormation).
Solid understanding of Linux/Unix systems and shell scripting.
Knowledge of monitoring, logging, and alerting tools.
Familiarity with networking concepts (DNS, Load Balancing, Security Groups, Firewalls).
Basic programming/scripting experience in Python, Bash, or Go.
Nice to Have
Exposure to microservices architecture and service mesh (Istio/Linkerd).
Knowledge of serverless (AWS Lambda, API Gateway); a minimal handler sketch follows below.
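For illustration only, a bare-bones Python Lambda handler behind an API Gateway proxy integration could look like this; the route and payload fields are assumptions:

```python
# Minimal sketch of an AWS Lambda handler behind an API Gateway proxy integration.
# The event/response shapes follow the Lambda proxy convention; the query parameter
# and response payload are illustrative.
import json

def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```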
Position Summary
A Cloud Engineer helps to solution, enable, migrate, and onboard clients onto a secure cloud platform, offloading the heavy lifting so that clients can focus on their own business value creation.
Job Description
- Assessing existing customer systems and/or cloud environment to determine the best migration approach and supporting tools used
- Build a secure and compliant cloud environment, with a proven enterprise operating model, on-going cost optimisation, and day-to-day infrastructure management
- Provide and implement cloud solutions to reduce operational overhead and risk, automate common activities such as change requests, monitoring, patch management, security, and backup services, and provide full-lifecycle services to provision, run, and support client infrastructure
- Collaborate with internal service teams to meet the clients’ needs for their infrastructure and application deployments
- Troubleshoot complex infrastructure deployments, recreate customer issues, and build proof-of-concept environments that abide by cloud best practices and well-architected frameworks
- Apply advanced troubleshooting techniques to provide unique solutions to our customers’ individual needs
- Work on critical, highly complex customer problems that will span across multiple cloud platforms and services
- Identify and drive improvements on process and technical issues; act as an escalation point of contact for clients
- Drive client meetings and communication during reviews
Requirements:
- Degree in computer science or a similar field.
- At least 2 years of experience in the field of cloud computing.
- Experience with CI/CD systems.
- Strong grounding in cloud services
- Exposure to AWS/GCP and other cloud-based infrastructure platforms
- Experience with AWS configuration and management: EC2, S3, EBS, ELB, IAM, VPC, RDS, CloudFront, etc.
- Exposure to architecting, designing, developing, and implementing cloud solutions on AWS or other cloud platforms such as Azure and Google Cloud
- Proficient in the use and administration of all versions of MS Windows Server
- Experience with Linux and Windows system administration and web server configuration and monitoring
- Solid programming skills in Python, Java, or Perl
- Good understanding of software design principles and best practices
- Good knowledge of REST APIs
- Should have hands-on experience with a deployment orchestration tool (Jenkins, UrbanCode, Bamboo, etc.)
- Experience with Docker, Kubernetes, and Helm charts (see the rollout-status sketch after this list)
- Hands-on experience with Ansible and Git repositories
- Knowledge of Maven/Gradle
- Azure, AWS, and GCP certifications are preferred.
- Troubleshooting and analytical skills.
- Good communication and collaboration skills.
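As a small illustration of the Docker/Kubernetes requirement above, a sketch using the official Kubernetes Python client to report Deployment rollout status might look like this; it assumes the `kubernetes` package is installed and a working kubeconfig is available, and the namespace is a placeholder:

```python
# Minimal sketch: report rollout status of Deployments in a namespace using the
# official Kubernetes Python client. Assumes a valid kubeconfig on the machine.
from kubernetes import client, config

def deployment_status(namespace: str = "default") -> None:
    config.load_kube_config()                 # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        state = "OK" if ready == desired else "DEGRADED"
        print(f"{dep.metadata.name:30s} {ready}/{desired} ready  [{state}]")

if __name__ == "__main__":
    deployment_status("default")
```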
As a DevOps Engineer with experience in Kubernetes, you will be responsible for leading and managing a team of DevOps engineers in the design, implementation, and maintenance of the organization's infrastructure. You will work closely with software developers, system administrators, and other IT professionals to ensure that the organization's systems are efficient, reliable, and scalable.
Specific responsibilities will include:
- Leading the team in the development and implementation of automation and continuous delivery pipelines using tools such as Jenkins, Terraform, and Ansible.
- Managing the organization's infrastructure using Kubernetes, including deployment, scaling, and monitoring of applications.
- Ensuring that the organization's systems are secure and compliant with industry standards.
- Collaborating with software developers to design and implement infrastructure as code.
- Providing mentorship and technical guidance to team members.
- Troubleshooting and resolving technical issues in collaboration with other IT professionals.
- Participating in the development and maintenance of the organization's disaster recovery and incident response plans.
To be successful in this role, you should have strong leadership skills and experience with a variety of DevOps and infrastructure tools and technologies. You should also have excellent communication and problem-solving skills, and be able to work effectively in a fast-paced, dynamic environment.
• DevOps/Build and Release Engineer with the maturity to help define and automate processes.
• Configure, install, and manage source control tools like AWS CodeCommit, GitHub, or Bitbucket.
• Automate implementation/deployment of code in the cloud-based infrastructure (AWS Preferred).
• Set up monitoring of infrastructure and applications with alerting frameworks; a minimal alarm sketch follows below.
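Purely as an illustration of the alerting item above, a minimal boto3 sketch that creates a CPU alarm for an EC2 instance and notifies an SNS topic might look like this; the instance ID and topic ARN are placeholders:

```python
# Minimal sketch: create a CPU alarm for an EC2 instance that notifies an SNS topic.
# Instance ID and topic ARN are placeholders; assumes cloudwatch:PutMetricAlarm rights.
import boto3

def create_cpu_alarm(instance_id: str, topic_arn: str) -> None:
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,                     # 5-minute datapoints
        EvaluationPeriods=2,            # alarm after two consecutive breaches
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],
    )

if __name__ == "__main__":
    create_cpu_alarm("i-0123456789abcdef0", "arn:aws:sns:us-east-1:111122223333:ops-alerts")
```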
Requirements:
• Able to code in Python.
• Extensive experience with building and supporting Docker and Kubernetes in production.
• Understand AWS (Amazon Web Services) and be able to jump right into our environment.
• Security Clearance will be required.
• Lambda used in conjunction with S3, CloudTrail and EC2.
• CloudFormation (Infrastructure as code)
• CloudWatch and CloudTrail (see the audit sketch after this list)
• Version Control (SVN, Git, Artifactory, Bitbucket)
• CI/CD (Jenkins or similar)
• Docker Compose or other orchestration tools
• Rest API
• DB (Postgres/Oracle/SQL Server, NoSQL, or a graph DB)
• Bachelor’s Degree in Computer Science, Computer Engineering or a closely related field.
• Server orchestration using tools like Puppet, Chef, Ansible, etc.
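To ground the CloudTrail item above, a small, illustrative boto3 sketch that pulls recent console sign-in events for a quick audit might look like this; the lookup attribute and time window are assumptions:

```python
# Minimal sketch: pull recent console sign-in events from CloudTrail for a quick audit.
# The lookup attribute and time window are illustrative.
from datetime import datetime, timedelta, timezone

import boto3

def recent_console_logins(hours: int = 24) -> None:
    ct = boto3.client("cloudtrail")
    resp = ct.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
        StartTime=datetime.now(timezone.utc) - timedelta(hours=hours),
        MaxResults=50,
    )
    for event in resp["Events"]:
        print(event["EventTime"], event.get("Username", "unknown"))

if __name__ == "__main__":
    recent_console_logins()
```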
Please send your CV to priyanka.sharma@neotas.com
Neotas.com
We are a self-organized engineering team with a passion for programming and solving business problems for our customers. We are looking to expand our team's capabilities on the DevOps front and are on the lookout for 4 DevOps professionals with relevant hands-on technical experience of 4-8 years.
We encourage our team to continuously learn new technologies and apply those learnings in day-to-day work, even if the new technologies are not adopted. We strive to continuously improve our DevOps practices and expertise to form a solid backbone for the product, customer relationship, and sales teams, enabling them to add new customers every week to our financing network.
As a DevOps Engineer, you :
- Will work collaboratively with the engineering and customer support teams to deploy and operate our systems.
- Build and maintain tools for deployment, monitoring and operations.
- Help automate and streamline our operations and processes.
- Troubleshoot and resolve issues in our test and production environments.
- Take control of various mandates and change management processes to ensure compliance for various certifications (PCI and ISO 27001 in particular; see the encryption-audit sketch after this list)
- Monitor and optimize the usage of various cloud services.
- Set up and enforce CI/CD processes and practices
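As one small, hedged example of the compliance work mentioned above, the boto3 sketch below flags S3 buckets without default encryption, a control commonly reviewed in PCI / ISO 27001 style audits; it assumes s3:ListAllMyBuckets and s3:GetEncryptionConfiguration permissions in the target account:

```python
# Minimal sketch: flag S3 buckets without default encryption, a common control
# checked in PCI / ISO 27001 style audits.
import boto3
from botocore.exceptions import ClientError

def unencrypted_buckets() -> list:
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            # Raised when the bucket has no default encryption configuration.
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in unencrypted_buckets():
        print("No default encryption:", name)
```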
Skills required :
- Strong experience with AWS services (EC2, ECS, ELB, S3, SES, to name a few)
- Strong background in Linux/Unix administration and hardening
- Experience with automation using Ansible, Terraform or equivalent
- Experience with continuous integration and continuous deployment tools (Jenkins)
- Experience with container-related technologies (Docker, LXC, rkt, Docker Swarm, Kubernetes)
- Working understanding of code and script (Python, Perl, Ruby, Java)
- Working understanding of SQL and databases
- Working understanding of version control systems (Git is preferred)
- Managing IT operations, setting up best practices, and tuning them from time to time.
- Ensuring that process overheads do not reduce the productivity and effectiveness of a small team.
- Willingness to explore and learn new technologies and continuously refactor the tools and processes.
- 7-10 years of experience with secure SDLC/DevSecOps practices, such as automating security processes within the CI/CD pipeline (see the security-gate sketch after this list).
- At least 4 years of experience designing and securing data lake and web applications deployed to AWS and Azure; scripting/automation skills in Python, Shell, YAML, and JSON.
- At least 4 years of hands-on experience with software development lifecycle, Agile project management (e.g. Jira, Confluence), source code management (e.g. Git), build automation (e.g. Jenkins), code linting and code quality (e.g. SonarQube), test automation (e.g. Selenium)
- Hand-on & Solid understanding of Amazon Web Services & Azure-based Infra & applications
- Experience writing cloud formation templates, Jenkins, Kubernetes, Docker, and microservice application architecture and deployment.
- Strong know-how on VA/PT integration in CI/CD pipeline.
- Experience in handling financial solutions & customer-facing applications
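To illustrate the kind of pipeline automation described above, a minimal, generic security-gate sketch in Python is shown below; it fails a CI job when a scanner report contains findings at or above a chosen severity. The report path and its JSON shape are assumptions for illustration, not the output format of any particular scanner:

```python
# Minimal sketch of a CI/CD security gate: fail the pipeline when a scanner report
# contains findings at or above a chosen severity. The report shape
# ({"findings": [{"id": ..., "severity": "HIGH"}, ...]}) is an assumed example format.
import json
import sys

SEVERITY_ORDER = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def gate(report_path: str, fail_at: str = "HIGH") -> int:
    with open(report_path) as fh:
        findings = json.load(fh).get("findings", [])
    blocking = [f for f in findings
                if SEVERITY_ORDER.get(f.get("severity", "LOW"), 0) >= SEVERITY_ORDER[fail_at]]
    for finding in blocking:
        print(f"BLOCKING: {finding.get('id', 'unknown')} ({finding['severity']})")
    return 1 if blocking else 0   # non-zero exit code fails the CI stage

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```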
Roles
- Accelerate enterprise cloud adoption while enabling rapid and stable delivery of capabilities using continuous integration and continuous deployment principles, methodologies, and technologies
- Manage & deliver diverse cloud [AWS, Azure, GCP] DevSecOps journeys
- Identify, prototype, engineer, and deploy emerging software engineering methodologies and tools
- Maximize automation and enhance DevSecOps pipelines and other tasks
- Define and promote enterprise software engineering and DevSecOps standards, practices, and behaviors
- Operate and support a suite of enterprise DevSecOps services
- Implement security automation to shorten the loop between the development and deployment processes.
- Support project teams to adopt & integrate the DevSecOps environment
- Manage application vulnerabilities, data security, encryption, tokenization, access management, secure SDLC, and SAST/DAST
- Coordinate with development and operations teams for practical automation solutions and custom flows.
- Own DevSecOps initiatives by providing objective, practical and relevant ideas, insights, and advice.
- Act as release gatekeeper with an understanding of the OWASP Top 10 vulnerabilities, NIST SP 800-xx, NVD, CVSS scoring, and related concepts
- Build workflows to ensure a successful DevSecOps journey for various enterprise applications.
- Understand the strategic direction to reach business goals across multiple projects & teams
- Collaborate with development teams to understand project deliverables and promote DevSecOps culture
- Formulate & deploy cloud automation strategies and tools
Skills
- Knowledge of the DevSecOps culture and principles.
- An understanding of cloud technologies & components
- A flair for programming languages such as Shell, Python, and JavaScript
- Strong teamwork and communication skills.
- Knowledge of threat modeling and risk assessment techniques.
- Up-to-date knowledge of cybersecurity threats, current best practices, and the latest software.
- An understanding of programs such as Puppet, Chef, ThreatModeler, Checkmarx, Immunio, and Aqua.
- Strong know-how of Kubernetes, Docker, and AWS/Azure-based deployments
- On-the-job learning of new programming languages, automation tools, and deployment architectures
Job Description:
○ Develop best practices for the team and take responsibility for architecture solutions and documentation, in order to meet the engineering department's quality standards
○ Participate in production outages, handle complex issues, and work towards resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Good knowledge of Terraform; has worked on large TF code bases.
○ Deep understanding of Terraform best practices and writing TF modules (see the plan-summary sketch after this list).
○ Hands-on experience with GCP and AWS, and knowledge of AWS services like VPC and related services (route tables, VPC endpoints, PrivateLink), as well as EKS, S3, and IAM; a cost-aware mindset towards cloud services.
○ Deep understanding of kernel, networking, and OS fundamentals
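As a small, hedged illustration of working with large Terraform code bases, the Python sketch below summarizes a saved plan by action; it assumes `terraform` is on the PATH and that the plan was written with `terraform plan -out=tfplan`:

```python
# Minimal sketch: summarize a saved Terraform plan by action, useful when reviewing
# changes to a large TF code base. Assumes `terraform` is on PATH and a plan file
# was saved with `terraform plan -out=tfplan`.
import json
import subprocess
from collections import Counter

def plan_summary(plan_file: str = "tfplan") -> Counter:
    raw = subprocess.run(
        ["terraform", "show", "-json", plan_file],
        check=True, capture_output=True, text=True,
    ).stdout
    counts = Counter()
    for change in json.loads(raw).get("resource_changes", []):
        for action in change["change"]["actions"]:
            if action != "no-op":
                counts[action] += 1
    return counts

if __name__ == "__main__":
    for action, count in plan_summary().items():
        print(f"{action}: {count}")
```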
Notice period: maximum 30 days
Are you the one? Quick self-discovery test:
- Love for the cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
- Passion: When was the last time you went to a remote gas station while on vacation and ended up helping the gas station owner saasify his 7 gas stations across other geographies.
- Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
- Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on the cloud?’
Your bucket of undertakings:
This position will be responsible for consulting with clients and proposing architectural solutions to help move and improve infrastructure from on-premises to the cloud, or to help optimize cloud spend when moving from one public cloud to another.
- Be the first to experiment with new-age cloud offerings, help define best practices as a thought leader for cloud, automation & DevOps, and be a solution visionary and technology expert across multiple channels.
- Continually augment skills and learn new tech as the technology and client needs evolve
- Use your experience in Google Cloud Platform, AWS, or Microsoft Azure to build hybrid-cloud solutions for customers.
- Provide leadership to project teams, and facilitate the definition of project deliverables around core Cloud-based technology and methods.
- Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
- Participate in technical reviews of requirements, designs, code, and other artifacts
- Identify and keep abreast of new technical concepts in Google Cloud Platform
- Security, Risk, and Compliance - Advise customers on best practices around access management, network setup, regulatory compliance, and related areas.
Accomplishment Set
- Passionate, persuasive, articulate Cloud professional capable of quickly establishing interest and credibility
- Good business judgment, a comfortable, open communication style, and a willingness and ability to work with customers and teams.
- Strong service attitude and a commitment to quality.
- Highly organised and efficient.
- Confident working with others to inspire a high-quality standard.
Experience :
- 4-8 years experience in Cloud Infrastructure and Operations domains
- Experience with Linux systems and/or Windows servers
- Specialize in one or two cloud deployment platforms: AWS, GCP
- Hands-on experience with AWS and GCP services (EKS, ECS, EC2, VPC, RDS, Lambda, GKE, Compute Engine, API Gateway, AppSync, and service mesh)
- Experience in one or more scripting language-Python, Bash
- Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
- Logging and Monitoring tools (ELK, Stackdriver, CloudWatch)
- DevOps Technologies (AWS DevOps, Jenkins, Git, Maven)
- Knowledge of configuration management and provisioning tools such as Ansible, Terraform, Puppet, Chef, and Packer
- Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
Education :
- Is education overrated? Yes, we believe so. However, there is no other way to locate you, so unfortunately we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since the age of 12 (the latter is better, and we will find you faster if you mention it). Not just degrees; we are not too thrilled by tech certifications either... :)
- To reiterate: passion for tech, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude, and a strong ‘desire to deliver’ outlive those fancy degrees!
- 3-8 years of hands-on experience in cloud computing (AWS/GCP) and IT operations in a global enterprise environment.
- Good analytical, communication, problem solving, and learning skills.
- Knowledge of programming against cloud platforms such as Google Cloud Platform, and of lean development methodologies.
Job role
Anaxee is India's REACH Engine! To provide access across India, we need to build highly scalable technology, which in turn needs scalable cloud infrastructure. We’re seeking an experienced cloud engineer with expertise in AWS (Amazon Web Services), GCP (Google Cloud Platform), networking, security, and database management, who will manage, maintain, monitor, and secure our cloud platforms.
You will be surrounded by people who are smart and passionate about the work they are doing.
Every day will bring new and exciting challenges to the job.
Job Location: Indore | Full Time | Experience: 1 year and Above | Salary ∝ Expertise | Rs. 1.8 LPA to Rs. 2.64 LPA
About the company:
Anaxee Digital Runners is building India's largest last-mile Outreach & data collection network of Digital Runners (shared feet-on-street, tech-enabled) to help Businesses & Consumers reach the remotest parts of India, on-demand.
We want to make REACH across India (remotest places), as easy as ordering pizza, on-demand. Already serving 11000 pin codes (57% of India) | Anaxee is one of the very few venture-funded startups in Central India | Website: www.anaxee.com
Important: Check out our company pitch (6 min video) to understand this goal - https://www.youtube.com/watch?v=7QnyJsKedz8
Responsibilities (You will enjoy the process):
#Triage and troubleshoot issues on AWS and GCP, participate in a rotating on-call schedule, and address urgent issues quickly
#Develop and leverage expert-level knowledge of supported applications and platforms in support of project teams (architecture guidance, implementation support) or business units (analysis).
#Monitoring the process on production runs, communicating the information to the advisory team, and raising production support issues to the project team.
#Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
#Developing and implementing technical efforts to design, build, and deploy AWS and GCP applications at the direction of lead architects, including large-scale data processing and advanced analytics
#Participate in all aspects of the SDLC for AWS and GCP solutions, including planning, requirements, development, testing, and quality assurance
#Troubleshoot incidents, identify root cause, fix, and document problems, and implement preventive measures
#Educate teams on the implementation of new cloud-based initiatives, providing associated training as required
#Build and maintain operational tools for deployment, monitoring, and analysis of AWS and GCP infrastructure and systems; Design, deploy, maintain, automate & troubleshoot virtual servers and storage systems, firewalls, and Load Balancers in our hybrid cloud environment (AWS and GCP)
What makes a great DevOps Engineer (Cloud) for Anaxee:
#Candidate must have sound knowledge, and hands-on experience, in GCP (Google Cloud Platform) and AWS (Amazon Web Services)
#Good hands-on experience with the Linux operating system and similar distributions, viz. Ubuntu, CentOS, RHEL/Red Hat, etc.
#1+ years of experience in the industry
#Bachelor's degree preferred with Science/Maths background (B.Sc/BCA/B.E./B.Tech)
#Enthusiasm to learn new software, take ownership, and a latent desire and curiosity in related domains such as cloud, hosting, programming, software development, and security.
#Demonstrable skills in troubleshooting a wide range of technical problems at the application and system level, with strong organizational skills and an eye for detail.
#Prior knowledge of risk-chain is an added advantage
#AWS/GCP certifications are a plus
#Previous startup experience would be a huge plus.
The ideal candidate must be experienced in cloud-based tech, with a firm grasp of emerging technologies, platforms, and applications, and the ability to customize them to help our business become more secure and efficient. From day one, you’ll have an immediate impact on the day-to-day efficiency of our IT operations, and an ongoing impact on our overall growth.
What we offer
#Startup Flexibility
#Exciting challenges to learn, grow, and implement new ideas
#ESOPs (Employee Stock Ownership Plans)
#A great working atmosphere in a comfortable office
#And an opportunity to get associated with a fast-growing VC-funded startup.
What happens after you apply?
You will receive an acknowledgment email with company details.
If you get shortlisted, our HR team will get in touch with you (call, email, WhatsApp) within a couple of days.
All remaining information will be communicated to you then via our AMS.
Our expectations before/after you click “Apply Now”
Read about Anaxee: http://www.anaxee.com/
Watch this six mins pitch to get a better understanding of what we are into https://www.youtube.com/watch?v=7QnyJsKedz8
Let's dive into detail (Company Presentation): https://bit.ly/anaxee-deck-brands
Required Skills and Experience
- 4+ years of relevant experience with DevOps tools such as Jenkins, Ansible, Chef, etc.
- 4+ years of experience in continuous integration/deployment and software tooling development with Python, shell scripts, etc.
- Building and running Docker images and deploying them on Amazon ECS (see the redeployment sketch after this list)
- Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
- Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge of and experience with cloud orchestration tools such as AWS CloudFormation, Terraform, etc.
- Experience with implementing "infrastructure as code", “pipeline as code” and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
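To make the ECS deployment item above a bit more concrete, a minimal boto3 sketch that forces a service redeployment (for example, after pushing a new image to ECR) and waits for it to stabilize might look like this; the cluster and service names are placeholders:

```python
# Minimal sketch: force an Amazon ECS service to redeploy (e.g., after pushing a new
# image to ECR) and wait until it stabilizes. Cluster and service names are placeholders.
import boto3

def redeploy(cluster: str, service: str) -> None:
    ecs = boto3.client("ecs")
    ecs.update_service(cluster=cluster, service=service, forceNewDeployment=True)
    waiter = ecs.get_waiter("services_stable")   # polls until the service is stable
    waiter.wait(cluster=cluster, services=[service])
    print(f"{service} on {cluster} is stable")

if __name__ == "__main__":
    redeploy("prod-cluster", "web-service")
```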
Good to have:
- Strong understanding of security concepts and methodologies, and the ability to apply them: SSH, public-key encryption, access credentials, certificates, etc.
- Knowledge of database administration such as MongoDB.
- Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
- Work with Leads and Architects in designing and implementation of technical infrastructure, platform, and tools to support modern best practices and facilitate the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.










