
We are looking for a Senior Platform Engineer to manage our GCP/AWS clouds. The candidate will be responsible for automating the deployment of cloud infrastructure and services to support application development and hosting (architecting, engineering, deploying, and operationally managing the underlying logical and physical cloud computing infrastructure).
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless, microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Deploy highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning, with a knack for benchmarking and optimization.
● Hire, develop, and cultivate a high-performing, reliable cloud support team
● Build and operate complex CI/CD pipelines at scale
● Work with GCP services including Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, and Networking
● Collaborate with Product Management and Product Engineering teams to drive excellence in Google Cloud products and features.
● Ensure efficient data storage and processing in line with company security policies and best practices in cloud security.
● Ensure scaled database setup and monitoring with near-zero downtime

About Carrotclub
Defining a new and inclusive shopping ecosystem: a new world where consumers don't just buy from brands but are also rewarded for being part of the brand's growth 🚀
Carrot is a deep tech platform that is reimagining Rewards.
We’re talking real skin in the game with your favourite D2C brands — tangible rewards for engaging with them, for being their fan, for showing them love. Beyond cashbacks and discounts.
DevOps Engineer
AiSensy
Gurugram, Haryana, India (On-site)
About AiSensy
AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp.
- Enabling 100,000+ Businesses with WhatsApp Engagement & Marketing
- 400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year
- Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
- High impact: businesses drive 25-80% of their revenues using the AiSensy platform
- Mission-Driven and Growth Stage Startup backed by Marsshot.vc, Bluelotus.vc & 50+ Angel Investors
Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀
What You’ll Do (Key Responsibilities)
🔹 CI/CD & Automation:
- Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins.
- Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management:
- Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code.
- Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security:
- Work extensively with AWS (Preferred) or other cloud platforms to build and maintain cloud infrastructure.
- Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting:
- Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime.
- Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation:
- Use Python or Bash scripting to automate repetitive DevOps tasks.
- Build internal tools for system health monitoring, logging, and debugging.
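As a rough illustration of the kind of internal health-monitoring scripting this section describes, here is a minimal Python sketch using only the standard library; the path and alert threshold are hypothetical examples, not part of the posting:

```python
import shutil


def check_disk_usage(path: str = "/", threshold_pct: float = 80.0) -> dict:
    """Report disk usage for `path` and flag it when usage exceeds
    `threshold_pct` (a hypothetical alerting threshold)."""
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    return {
        "path": path,
        "used_pct": round(used_pct, 1),
        "alert": used_pct > threshold_pct,
    }


if __name__ == "__main__":
    report = check_disk_usage("/", threshold_pct=80.0)
    status = "ALERT" if report["alert"] else "OK"
    print(f"{report['path']}: {report['used_pct']}% used [{status}]")
```

In practice a script like this would feed an alerting channel (e.g. CloudWatch or Prometheus, per the tools listed above) rather than print to stdout.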
What We’re Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (Preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation
Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 Technologies
➕ Hands-on knowledge of Database servers
Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
Role: Principal DevOps Engineer
About the Client
The client is a product-based company building a platform that uses AI and ML technology for transportation and logistics. It also has a presence in the global market.
Responsibilities and Requirements
• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
• Knowledge in Linux/Unix Administration and Python/Shell Scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
• Knowledge of deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch Monitoring, Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
• Experience in enterprise application development, maintenance and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication skills, judgment and decision-making skill
- 5+ years of experience in DevOps including automated system configuration, application deployment, and infrastructure-as-code.
- Advanced Linux system administration abilities.
- Real-world experience managing large-scale AWS or GCP environments. Multi-account management a plus.
- Experience with managing production environments on AWS or GCP.
- Solid understanding of CI/CD pipelines using GitHub, CircleCI/Jenkins, JFrog Artifactory/Nexus.
- Experience with any configuration management tool like Ansible, Puppet, or Chef is a must.
- Experience in any one of the scripting languages: Shell, Python, etc.
- Experience in containerization using Docker and orchestration using Kubernetes/EKS/GKE is a must.
- Solid understanding of SSL and DNS.
- Experience deploying and running any open-source monitoring/graphing solution like Prometheus, Grafana, etc.
- Basic understanding of networking concepts.
- Always adhere to security best practices.
- Knowledge of Big Data (Hadoop/Druid) systems administration will be a plus.
- Knowledge of managing and running DBs (MySQL/MariaDB/Postgres) will be an added advantage.
What you get to do
- Work with development teams to build and maintain cloud environments to specifications developed closely with multiple teams. Support and automate the deployment of applications into those environments
- Diagnose and resolve recurring, latent, and systemic reliability issues across the entire stack: hardware, software, application, and network. Work closely with development teams to troubleshoot and resolve application and service issues
- Continuously improve Conviva SaaS services and infrastructure for availability, performance and security
- Implement security best practices – primarily patching of operating systems and applications
- Automate everything. Build proactive monitoring and alerting tools. Provide standards, documentation, and coaching to developers.
- Participate in 12x7 on-call rotations
- Work with third party service/support providers for installations, support related calls, problem resolutions etc.
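As a hedged sketch of the "build proactive monitoring and alerting tools" bullet above, here is a minimal sliding-window latency monitor in plain Python; the window size and threshold are illustrative assumptions, not values from the posting:

```python
from collections import deque


class LatencyMonitor:
    """Sliding-window monitor: alerts when the average of the last
    `window` latency samples exceeds `threshold_ms` (hypothetical values)."""

    def __init__(self, window: int = 5, threshold_ms: float = 200.0):
        self.samples = deque(maxlen=window)  # old samples age out automatically
        self.threshold_ms = threshold_ms

    def record(self, latency_ms: float) -> bool:
        """Record a sample; return True when the window average breaches."""
        self.samples.append(latency_ms)
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold_ms
```

Averaging over a window rather than alerting on single samples is a common way to avoid paging on one-off spikes; a production version would export this signal to Prometheus or a similar system rather than return a bare boolean.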
Responsibilities:
- Writing and maintaining the automation for deployments across various cloud (AWS/Azure/GCP)
- Bring a passion to stay on top of DevOps trends, experiment, and learn new CI/CD technologies.
- Creating the Architecture Diagrams and documentation for various pieces
- Build tools and automation to improve the system's observability, availability, reliability, performance/latency, monitoring, emergency response
Requirements:
- 3 - 5 years of professional experience as a DevOps / System Engineer.
- Strong knowledge in Systems Administration & troubleshooting skills with Linux.
- Experience with CI/CD best practices and tooling, preferably Jenkins, Circle CI.
- Hands-on experience with Cloud platforms such as AWS/Azure/GCP or private cloud environments.
- Experience and understanding of modern container orchestration; well-versed with containerized applications (Docker, Docker Compose, Docker Swarm, Kubernetes).
- Experience in Infrastructure as code development using Terraform.
- Basic networking knowledge (VLANs, subnets, VPCs) and web servers such as Nginx and Apache.
- Experience in handling different SQL and NoSQL databases (PostgreSQL, MySQL, Mongo).
- Experience with GIT Version Control Software.
- Proficiency in any programming or scripting language such as Shell Script, Python, Golang.
- Strong interpersonal and communication skills; ability to work in a team environment.
- AWS / Kubernetes Certifications: AWS Certified Solutions Architect / CKA.
- Setup and management of a Kubernetes cluster, including writing Docker files.
- Experience working in and advocating for agile environments.
- Knowledge in Microservice architecture.
Location: Bengaluru
Department: DevOps
We are looking for extraordinary infrastructure engineers to build a world-class cloud platform that scales to millions of users. You must have experience building key portions of a highly scalable infrastructure using Amazon AWS and should know EC2, S3, and EMR like the back of your hand. You must enjoy working in a fast-paced startup and enjoy wearing multiple hats to get the job done.
Responsibilities
● Manage the AWS server farm; own AWS infrastructure automation and support.
● Own production deployments in multiple AWS environments
● Own the end-to-end backend engineering infrastructure charter, including DevOps, global deployment, and security and compliance according to the latest practices.
● Guide the team in debugging production issues and write best-of-breed code.
● Drive “engineering excellence” (defects, productivity through automation, performance of products, etc.) through clearly defined metrics.
● Stay current with the latest tools, technology ideas, and methodologies; share knowledge by clearly articulating results and ideas to key decision makers.
● Hiring, mentoring and retaining a very talented team.
Requirements
● B.S. or M.S. in Computer Science or a related field (math, physics, engineering)
● 5-8 years of experience in maintaining infrastructure systems/DevOps
● Enjoy playing with tech like Nginx, HAProxy, Postgres, AWS, Ansible, Docker, Nagios, or Graphite
● Deployment automation experience with Puppet/Chef/Ansible/SaltStack
● Work with small, tightly knit product teams that function cohesively to move as quickly as possible.
● Determination to provide reliable and fault-tolerant systems to the application developers that consume them
● Experience in developing Java/C++ backend systems is a huge plus
● Be a strong team player.
Preferred
Deep working knowledge of Linux servers and networked environments. Thorough understanding of distributed systems and the protocols they use, including TCP/IP, RESTful APIs, SQL, NoSQL. Experience in managing a NoSQL database (Cassandra) is a huge plus.
Position Summary:
Technology Lead provides technical leadership with in-depth DevOps experience and is responsible for enabling delivery of high-quality projects to Saviant clients through highly effective DevOps process. This is a highly technical role, with a focus on analysing, designing, documenting, and implementing a complete DevOps process for enterprise applications using the most advanced technology stacks, methodologies, and best practices within the agreed timelines.
Individuals in this role will need to have good technical and communication skills and strive to be on the cutting edge, innovate, and explore to deliver quality solutions to Saviant Clients.
Your Role & Responsibilities at Saviant:
• Design, analyze, document, and develop the technical architecture for on-premise as well as cloud-based DevOps solutions around customers’ business problems.
• Lead the end-to-end process and setup/implementation of configuration management, CI, CD, and monitoring platforms.
• Conduct reviews of the design and implementation of DevOps processes while establishing and maintaining best practices
• Setup new processes to improve the quality of development, delivery and deployment processes
• Provide technical support and guidance to project team members.
• Upgrade your skills by learning technologies beyond your traditional area of expertise
• Contribute to pre-sales, proposal creation, POCs, technology incubation from technical and architecture perspective
• Participate in recruitment and people development initiatives.
Job Requirements/Qualifications:
• Educational Qualification: BE, BTech, MTech, or MCA from a reputed institute
• 6 to 8 years of hands-on experience with the DevOps process using technologies like .NET Core, Python, C#, MVC, ReactJS, Android, iOS, Linux, Windows
• Strong hands-on experience of the full DevOps life cycle: DevOps orchestration, configuration, security, CI/CD, release management, and environment management
• Solid hands-on knowledge of DevOps technologies and tools such as Jenkins, Spinnaker, Azure DevOps, Chef, Puppet, JIRA, TFS, Git, SVN, and various scripting tools
• Solid hands-on knowledge of containerization technologies and tools such as Docker, Kubernetes, Cloud Foundry
• In-depth understanding of various development and deployment architectures from a DevOps perspective
• Expertise in Grounds-up DevOps projects involving multiple agile teams spread across geographies.
• Experience with various Agile project management software, techniques, and tools
• Strong analytical and problem solving skills
• Excellent written and oral communication skills
• Enjoys working as part of agile software teams in a startup environment.
Who Should Apply?
• You have independently managed end-to-end DevOps projects over the last 2 years, including understanding requirements, designing solutions, implementing them, and setting up best practices across different business domains.
• You are well versed with Agile development methodologies and have successfully implemented them across at least 2-3 projects
• You have led a development team of 5 to 8 developers with technology responsibility
• You have served as “Single Point of Contact” for managing technical escalations and decisions

What you do :
- Developing automation for the various deployments core to our business
- Documenting run books for various processes / improving knowledge bases
- Identifying technical issues, communicating and recommending solutions
- Miscellaneous support (user account, VPN, network, etc)
- Develop continuous integration / deployment strategies
- Production systems deployment/monitoring/optimization
- Management of staging/development environments
What you know :
- Ability to work with a wide variety of open source technologies and tools
- Ability to code/script (Python, Ruby, Bash)
- Experience with systems and IT operations
- Comfortable with frequent incremental code testing and deployment
- Strong grasp of automation tools (Chef, Packer, Ansible, or others)
- Experience with cloud infrastructure and bare-metal systems
- Experience optimizing infrastructure for high availability and low latencies
- Experience with instrumenting systems for monitoring and reporting purposes
- Well versed in software configuration management systems (git, others)
- Experience with cloud providers (AWS or other) and tailoring apps for cloud deployment
- Data management skills
Education :
- Degree in Computer Engineering or Computer Science
- 1-3 years of equivalent experience in DevOps roles.
- Work conducted is focused on business outcomes
- Can work in an environment with a high level of autonomy (at the individual and team level)
- Comfortable working in an open, collaborative environment, reaching across functions.
Our Offering :
- True start-up experience - no bureaucracy and a ton of tough decisions that have a real impact on the business from day one.
- The camaraderie of an amazingly talented team that is working tirelessly to build a great OS for India and surrounding markets.
Perks :
- Awesome benefits, social gatherings, etc.
- Work with intelligent, fun and interesting people in a dynamic start-up environment.
1. Developing a video player website where students can learn various courses, view e-books, solve tests, etc.
2. Building the product to reach higher scalability
3. Developing software to integrate with internal back-end systems
4. Working on AWS cloud platform
5. Working on Amazon Ec2, Amazon S3 bucket, and Git
6. Working on the implementation of continuous integration and deployment pipelines using Jenkins (mandatory)
7. Monitoring, troubleshooting, and diagnosing infrastructure systems (excellent knowledge required for the same)
8. Building tools to reduce the occurrences of errors and improve customer experience
9. Experience with the MERN stack is also required.

