

Role Overview:
As a DevOps Engineer (L2), you will play a key role in designing, implementing, and optimizing infrastructure. You will take ownership of automating processes, improving system reliability, and supporting the development lifecycle.
Key Responsibilities:
- Design and manage scalable, secure, and highly available cloud infrastructure.
- Lead efforts in implementing and optimizing CI/CD pipelines.
- Automate repetitive tasks and develop robust monitoring solutions.
- Ensure the security and compliance of systems, including IAM, VPCs, and network configurations.
- Troubleshoot complex issues across development, staging, and production environments.
- Mentor and guide L1 engineers on best practices.
- Stay updated on emerging DevOps tools and technologies.
- Manage cloud resources efficiently using Infrastructure as Code (IaC) tools like Terraform and AWS CloudFormation.
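By way of illustration of the automation and IaC-adjacent scripting in these responsibilities, here is a minimal Python/boto3 sketch that tags EC2 instances missing a cost-tracking tag. The region, tag key, and default value are illustrative assumptions, not part of this role's actual tooling.

```python
# Hypothetical sketch: tag EC2 instances that are missing a "CostCenter" tag.
# Region and tag names are illustrative assumptions.
import boto3

def tag_untagged_instances(region="ap-south-1", tag_key="CostCenter", tag_value="unassigned"):
    ec2 = boto3.client("ec2", region_name=region)
    untagged = []
    # Paginate through all instances in the region.
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if tag_key not in tags:
                    untagged.append(instance["InstanceId"])
    if untagged:
        # Apply a default tag so the instances show up in cost reports.
        ec2.create_tags(Resources=untagged, Tags=[{"Key": tag_key, "Value": tag_value}])
    return untagged

if __name__ == "__main__":
    print("Tagged:", tag_untagged_instances())
```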
Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field.
- Proven experience with CI/CD pipelines and tools like Jenkins, GitLab, or Azure DevOps.
- Advanced knowledge of cloud platforms (AWS, Azure, or GCP) with hands-on experience in deployments, migrations, and optimizations.
- Strong expertise in containerization (Docker) and orchestration tools (Kubernetes).
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Deep understanding of system security, networking, and load balancing.
- Strong analytical skills and problem-solving mindset.
- Certifications (e.g., AWS Certified Solutions Architect, Kubernetes Administrator) are a plus.
What We Offer:
- Opportunity to work with a cutting-edge tech stack in a product-first company.
- Collaborative and growth-oriented environment.
- Competitive salary and benefits.
- Freedom to innovate and contribute to impactful projects.


We are seeking an experienced Operations Lead to drive operational excellence and lead a dynamic team in our fast-paced environment. The ideal candidate will combine strong technical expertise in Python with proven leadership capabilities to optimize processes, ensure system reliability, and deliver results.
Key Responsibilities
- Team & stakeholder leadership - Lead 3-4 operations professionals and work cross-functionally with developers, system administrators, quants, and traders
- DevOps automation & deployment - Develop deployment pipelines, automate configuration management, and build Python-based tools for operational processes and system optimization
- Technical excellence & standards - Drive code reviews, establish development standards, ensure regional consistency with DevOps practices, and maintain technical documentation
- System operations & performance - Monitor and optimize system performance for high availability, scalability, and security while managing day-to-day operations
- Incident management & troubleshooting - Coordinate incident response, resolve infrastructure and deployment issues, and implement automated solutions to prevent recurring problems
- Strategic technical leadership - Make infrastructure decisions, identify operational requirements, design scalable architecture, and stay current with industry best practices
- Reporting & continuous improvement - Report on operational metrics and KPIs to senior leadership while actively contributing to DevOps process improvements
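As a flavour of the Python-based operational tooling and KPI reporting described above, here is a minimal sketch that rolls incident records up into a few headline metrics. The record shape and SLA threshold are illustrative assumptions only.

```python
# Hypothetical sketch: summarise incident records into headline KPIs for a weekly report.
# The record format and SLA threshold are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Incident:
    severity: str            # e.g. "SEV1", "SEV2"
    minutes_to_resolve: float
    recurred: bool           # True if the same root cause was seen before

def kpi_report(incidents, sla_minutes=60):
    resolved_within_sla = [i for i in incidents if i.minutes_to_resolve <= sla_minutes]
    return {
        "total_incidents": len(incidents),
        "sla_compliance_pct": round(100 * len(resolved_within_sla) / len(incidents), 1) if incidents else 100.0,
        "mean_time_to_resolve_min": round(mean(i.minutes_to_resolve for i in incidents), 1) if incidents else 0.0,
        "recurring_pct": round(100 * sum(i.recurred for i in incidents) / len(incidents), 1) if incidents else 0.0,
    }

if __name__ == "__main__":
    sample = [Incident("SEV2", 45, False), Incident("SEV1", 120, True)]
    print(kpi_report(sample))
```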
Qualifications and Experience
- Bachelor's degree in Computer Science, Engineering, or related technical field
- At least 5 years of proven experience as a Software Engineer, including at least 2 years as a DevOps Engineer or in a similar role, working with complex software projects and environments.
- Excellent knowledge of cloud technologies, containers, and orchestration.
- Proficiency in scripting and programming languages such as Python and Bash.
- Experience with Linux operating systems and command-line tools.
- Proficient in using Git for version control.
Good to Have
- Experience with Nagios or similar monitoring and alerting systems
- Backend and/or frontend development experience for operational tooling
- Previous experience working in a trading firm or financial services environment
- Knowledge of database management and SQL
- Familiarity with cloud platforms (AWS, Azure, GCP)
- Experience with DevOps practices and CI/CD pipelines
- Understanding of network protocols and system administration
Why You’ll Love Working Here
We’re a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity.
Here’s what we offer:
💰 Competitive salary – Get paid what you’re worth.
🌴 Generous paid time off – Recharge and come back sharper.
🌍 Work with the best – Collaborate with top-tier global talent.
✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings.
🎯 Performance rewards – Multiple bonuses for those who go above and beyond.
🏥 Health covered – Comprehensive insurance so you’re always protected.
⚡ Fun, not just work – On-site sports, games, and a lively workspace.
🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers.
📚 Annual Education Stipend – Take any external course, bootcamp, or certification that makes you better at your craft.
🏋️ Stay fit – Gym memberships with equal employer contribution to keep you at your best.
🚚 Relocation support – Smooth move? We’ve got your back.
🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting.
We work hard, play hard, and grow together. Join us.
(P.S. We hire for talent, not pedigree—but if you’ve worked at a top tech co or fintech startup, we’d love to hear how you’ve shipped great products.)
• Must have experience working in cloud environments: AWS / Azure / GCP.
• Must have strong work experience (2+ years) developing IaC (e.g. Terraform).
• Must have strong work experience in Ansible development and deployment.
• Bachelor’s degree with a background in math will be a PLUS.
• Must have 8+ years of experience with a mix of Linux and Windows systems in a medium to large business environment.
• Must have command-level fluency and shell scripting experience in a mix of Linux and Windows environments.
• Must enjoy the experience of working in small, fast-paced teams.
• Identify opportunities for improvement in existing processes and automate them using Ansible flows (an illustrative sketch follows this list).
• Fine-tune performance and operational issues that arise with automation flows.
• Experience administering container management systems like Kubernetes would be a plus.
• Certification with Red Hat or any other Linux variant will be a BIG PLUS.
• Fluent in the use of Microsoft Office applications (Outlook / Word / Excel).
• Possess a strong aptitude for automating and completing standard/routine tasks in a timely manner.
• Experience with automation and configuration control systems like Puppet or Chef is a plus.
• Experience with Docker, Kubernetes (or an equivalent container orchestration tool) is nice to have.
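To illustrate the Ansible-flow automation mentioned above, here is a minimal Python sketch that wraps an ansible-playbook run in check mode before applying it. The playbook and inventory paths are hypothetical.

```python
# Hypothetical sketch: run an Ansible playbook in check (dry-run) mode first,
# and only apply it if the dry run succeeds. Paths are illustrative assumptions.
import subprocess
import sys

def run_playbook(playbook="site.yml", inventory="inventory/hosts.ini"):
    base_cmd = ["ansible-playbook", "-i", inventory, playbook]

    # Dry run: --check reports what would change without changing anything.
    check = subprocess.run(base_cmd + ["--check"])
    if check.returncode != 0:
        print("Dry run failed; not applying changes.", file=sys.stderr)
        return check.returncode

    # Apply for real.
    apply_run = subprocess.run(base_cmd)
    return apply_run.returncode

if __name__ == "__main__":
    sys.exit(run_playbook())
```

In practice a wrapper like this would typically run from a CI job rather than by hand, so the dry-run output can be reviewed before the apply step.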
Role Introduction
• This role involves guiding the DevOps team towards successful delivery of Governance and toolchain initiatives by removing manual tasks.
• Operate toolchain applications to empower engineering teams by providing reliable, governed self-service tools and supporting their adoption.
• Drive good practice for consumption and utilisation of the engineering toolchain, with a focus on DevOps practices.
• Drive good governance for cloud service consumption.
• Work in a collaborative environment, leading the team and providing technical leadership to team members.
• Set up processes and improvements for teams supporting and governing various DevOps tooling.
• Coordinate with multiple teams within the organization.
• Lead on handovers from architecture teams to support major project rollouts which require the Toolchain Governance DevOps team to operationally support tooling.
What you will do
• Identify and implement best practices, process improvements and automation initiatives that enable quicker delivery by removing manual tasks.
• Ensure best practices and processes are documented for reusability, and keep up to date on good practices and standards.
• Build re-usable automation and compliance services, tools and processes.
• Support and manage the toolchain, including toolchain changes and selection.
• Identify and implement risk mitigation plans, avoid escalations, and resolve blockers for teams. Toolchain governance will involve operating and responding to alerts, enforcing good tooling governance by driving automation, remediating technical debt and ensuring the latest tools are utilised and on the latest versions.
• Triage product pipelines, performance issues, SLA/SLO breaches and service unavailability, along with ancillary actions such as providing access to logs, tools and environments (an illustrative SLO-check sketch follows this list).
• Be involved in initial and detailed estimates during roadmap planning or feature estimation/planning of any automation identified for a given toolset.
• Develop, refine, and tune integrations between various tools.
• Discuss with the Product Owner/team any challenges from an implementation or deployment perspective, assist in arriving at probable solutions, and escalate any risks related to the DevOps toolchain so they get resolved.
• In consultation with the Head of DevOps and other stakeholders, prioritise items and break items into tasks; be accountable for squad deliverables for the sprint.
• Review current components, plan for upgrades and ensure they are communicated to a wider audience within the organization.
• Review access and roles, and enhance and automate provisioning.
• Identify and encourage areas for growth and improvement within the team, e.g. conduct regular 1-2-1s with squad members to provide support, mentoring and goal setting.
• Be involved in performance management, rewards and recognition of team members, and in the hiring process.
• Plan for upskilling the team on tools and tasks, and ensure quicker onboarding of new joiners/freshers so they become productive.
• Review ticket metrics to measure the health of the project, including SLAs, and plan for improvement.
• Be on call for critical incidents that happen out of hours, based on tooling SLAs. This may include planning the standby schedule for the squad, carrying out a retrospective for every callout and reviewing SLIs/SLOs.
• Own the tech/repair debt, risk and compliance for the tooling with respect to infrastructure, pipelines, access, etc.
• Track optimum utilisation of resources and monitor/track the delivery schedule.
• Review solution designs with the Architects / Principal DevOps Engineers as required.
• Provide monthly reporting aligned to DevOps Tooling KPIs.
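To make the SLO-breach triage item above concrete, here is a minimal Python sketch that flags pipelines whose success rate has fallen below an SLO target. The metrics shape and target are illustrative assumptions, not any specific tool's API.

```python
# Hypothetical sketch: flag pipelines whose success rate breaches an SLO target.
# The metrics dictionary is an illustrative stand-in for a real metrics export.
def find_slo_breaches(pipeline_metrics, slo_target=0.95):
    """pipeline_metrics maps pipeline name -> (successful_runs, total_runs)."""
    breaches = []
    for name, (successes, total) in pipeline_metrics.items():
        if total == 0:
            continue  # no runs yet, nothing to report
        success_rate = successes / total
        if success_rate < slo_target:
            breaches.append((name, success_rate))
    return sorted(breaches, key=lambda item: item[1])

if __name__ == "__main__":
    metrics = {"build-service-a": (180, 200), "deploy-service-b": (99, 100)}
    for name, rate in find_slo_breaches(metrics):
        print(f"SLO breach: {name} success rate {rate:.1%} below target")
```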
What you will have
• 8+ years of experience, including hands-on DevOps experience and experience in team management.
• Strong communication and interpersonal skills; a team player.
• Good working experience with CI/CD tools like Jenkins, SonarQube, FOSSA, Harness, Jira, JSM, ServiceNow, etc.
• Good hands-on knowledge of AWS services like EC2, ECS, S3, IAM, SNS, SQS, VPC, Lambda, API Gateway, CloudWatch, CloudFormation, etc.
• Experience in operating and governing a DevOps toolchain.
• Experience in operational monitoring and alerting, and in identifying and delivering on both repair and technical debt.
• Experience and background in ITIL/ITSM processes. The candidate will ensure development of the appropriate ITSM model and processes, based on the ITIL Service Management framework. This includes the strategy, design, transition and operation of services, and continual service improvement.
• ITSM leadership and process coaching experience.
• Experience with tools such as Jenkins, Harness and FOSSA.
• Experience hosting and managing applications on AWS/Azure.
• Experience with CI/CD pipelines (Jenkins build pipelines).
• Experience with containerization (Docker/Kubernetes).
• Experience in at least one programming language (Node.js or Python preferred).
• Experience architecting and supporting cloud-based products is a plus.
• Experience with PowerShell and Bash is a plus.
• Able to self-manage multiple concurrent small projects, including managing priorities between projects.
• Able to quickly learn new tools.
• Able to mentor and drive junior team members to achieve the desired outcomes of the roadmap.
• Ability to analyse information to identify problems and issues, and make effective decisions within a short span.
• Excellent problem solving and critical thinking.
• Experience integrating various components, including unit testing and CI/CD configuration.
• Experience reviewing the current toolset and planning for upgrades.
• Experience with the Agile framework and Jira/JSM tooling.
• Good communication skills and the ability to communicate and work independently with external teams.
• Highly motivated, able to work proficiently both independently and in a team environment.
• Good knowledge of and experience with security constructs.
- Good knowledge of at least one language (C#, Java, Python, Go, PHP, Node.js)
- Sufficient experience with application and infrastructure architectures
- Design and plan cloud solution architecture
- Design for security, networking, and compliance
- Analyze and optimize technical and business processes
- Ensure solution and operational reliability
- Manage and provision cloud infrastructure
- Manage IaaS, PaaS, and SaaS solutions
- Design strategies around cloud governance, migration, cloud operations, and DevOps
- Design highly scalable, available, and reliable cloud applications
- Build and test applications
- Deploy applications on the cloud
- Integrate with cloud services
Certification:
- Architect-level certification in any cloud (AWS, GCP, Azure)
About RaRa Delivery
Not just a delivery company…
RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data driven logistics.
RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on ‘one-to-one’ deliveries, the company has developed proprietary, real-time batching tech to do ‘many-to-many’ deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan and many more.
We are a distributed team with the company headquartered in Singapore 🇸🇬 , core operations in Indonesia 🇮🇩 and technology team based out of India 🇮🇳
Future of eCommerce Logistics.
- Data-driven logistics company that is bringing a same-day delivery revolution to Indonesia 🇮🇩
- Revolutionising delivery as an experience
- Empowering D2C Sellers with logistics as the core technology
About the Role
- Build and maintain CI/CD tools and pipelines.
- Designing and managing highly scalable, reliable, and fault-tolerant infrastructure & networking that forms the backbone of distributed systems at RaRa Delivery.
- Continuously improve code quality, product execution, and customer delight.
- Communicate, collaborate and work effectively across distributed teams in a global environment.
- Work to strengthen teams across the product by sharing your knowledge base.
- Contribute to improving team relatedness, and help build a culture of camaraderie.
- Continuously refactor applications to ensure high-quality design
- Pair with team members on functional and non-functional requirements and spread design philosophy and goals across the team
- Excellent Bash and scripting fundamentals, with hands-on scripting experience in programming languages such as Python, Ruby, Golang, etc.
- Good understanding of distributed system fundamentals and ability to troubleshoot issues in a larger distributed infrastructure
- Working knowledge of the TCP/IP stack, internet routing, and load balancing
- Basic understanding of cluster orchestrators and schedulers (Kubernetes)
- Deep knowledge of Linux as a production environment, container technologies (e.g. Docker), Infrastructure as Code such as Terraform, and K8s administration at large scale (see the Kubernetes sketch after this list).
- Have worked on production distributed systems and have an understanding of microservices architecture, RESTful services, CI/CD.
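As a small illustration of the Kubernetes troubleshooting referenced in this list, here is a minimal Python sketch using the official kubernetes client to list pods that are not Running. It assumes a local kubeconfig and is a sketch only, not RaRa's actual tooling.

```python
# Hypothetical sketch: list pods that are not Running, as a starting point for
# troubleshooting a cluster. Assumes a kubeconfig is available locally.
from kubernetes import client, config

def unhealthy_pods():
    config.load_kube_config()   # or config.load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            problems.append((pod.metadata.namespace, pod.metadata.name, phase))
    return problems

if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
```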
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, Containers, etc. As part of our core development crew, you’ll be figuring out how to deploy applications with high availability and fault tolerance, along with a monitoring solution that alerts across multiple microservices and pipelines. Come save the planet with us!
Your Role
- Ensure applications built at scale can be scaled up and down on command.
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available and scalable PSQL database cluster.
- Maintain an alerting and monitoring system using Prometheus, Grafana, and Elasticsearch.
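As a flavour of the Prometheus-based monitoring this role maintains, here is a minimal Python sketch that exposes a custom metric for Prometheus to scrape. The metric name, label, and port are illustrative assumptions, not part of the actual stack described here.

```python
# Hypothetical sketch: expose a custom counter for Prometheus to scrape.
# Metric name, label and port are illustrative assumptions.
import random
import time

from prometheus_client import Counter, start_http_server

INGESTED_RECORDS = Counter(
    "pipeline_records_ingested_total",
    "Records ingested by the data pipeline",
    ["dataset"],
)

def fake_ingest_loop():
    # Stand-in for real pipeline work; increments the counter per batch.
    while True:
        batch_size = random.randint(1, 100)
        INGESTED_RECORDS.labels(dataset="water").inc(batch_size)
        time.sleep(5)

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    fake_ingest_loop()
```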
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code - Cloudformation, Terraform, Ansible.
- CI/CD concepts and implementation using Codepipeline, Github Actions.
- Strong command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced containerization - Docker, Kubernetes, ECS.
- Experience with managed services like database clusters and distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community where you can use, contribute to, and collaborate on our work.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's all the non-work and fun aspect, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by availing your paid leave.
About the company:
Tathastu, the next-generation innovation labs, is Future Group’s initiative to provide a new-age retail experience - combining the physical with the digital and enhancing it with data. We are creating next-generation consumer interactions by combining AI/ML, Data Science, and emerging technologies with consumer platforms.
The E-Commerce vertical under Tathastu has developed online consumer platforms for Future Group’s portfolio of retail brands - Easy day, Big Bazaar, Central, Brand Factory, aLL, Clarks, Coverstory. Backed by our network of offline stores, we have built a new retail platform that merges our online and offline retail streams. We use data to power all our decisions across our products and build internal tools to help us scale our impact with a small, closely-knit team.
Our widespread store network, robust logistics, and technology capabilities have made it possible to launch a ‘2-Hour Delivery Promise’ on every product across fashion, food, FMCG, and home products for orders placed online through the Big Bazaar mobile app and portal. This makes Big Bazaar the first retailer in the country to offer instant home delivery on almost every consumer product ordered online.
Job Responsibilities:
- You’ll streamline and automate the software development and infrastructure management processes and play a crucial role in executing high-impact initiatives and continuously improving processes to increase the effectiveness of our platforms.
- You’ll translate complex use cases into discrete technical solutions in platform architecture, design and coding, functionality, usability, and optimization.
- You will drive automation in repetitive tasks, configuration management, and deliver comprehensive automated tests to debug/troubleshoot Cloud AWS-based systems and BigData applications.
- You’ll continuously discover, evaluate, and implement new technologies to maximize the development and operational efficiency of the platforms.
- You’ll determine the metrics that will define technical and operational success and constantly track such metrics to fine-tune the technology stack of the organization.
Experience: 4 to 8 Yrs
Qualification: B.Tech / MCA
Required Skills:
- Experience with Linux/UNIX systems administration and Amazon Web Services (AWS).
- Infrastructure as Code (Terraform), Kubernetes and container orchestration, web servers (Nginx, Apache), application servers (Tomcat, Node.js, etc.), document stores and relational databases (AWS RDS-MySQL).
- Site Reliability Engineering patterns and visibility/performance/availability monitoring (CloudWatch, Prometheus); see the CloudWatch sketch after this list.
- Background in technical troubleshooting and performance tuning, and happy to work hands-on with both.
- Supportive and collaborative personality - ability to influence and drive progress with your peers
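To illustrate the CloudWatch-style availability monitoring mentioned in the SRE bullet above, here is a minimal Python/boto3 sketch that creates a CPU alarm on an EC2 instance. The instance ID, threshold, and SNS topic are illustrative assumptions, not this team's actual configuration.

```python
# Hypothetical sketch: create a CloudWatch alarm for high CPU on an EC2 instance.
# Instance ID, threshold and SNS topic ARN are illustrative assumptions.
import boto3

def create_cpu_alarm(instance_id="i-0123456789abcdef0",
                     topic_arn="arn:aws:sns:ap-south-1:123456789012:ops-alerts"):
    cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,                 # 5-minute windows
        EvaluationPeriods=3,        # sustained for 15 minutes
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],   # notify the on-call SNS topic
    )

if __name__ == "__main__":
    create_cpu_alarm()
```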
Our Technology Stack:
- Docker/Kubernetes
- Cloud (AWS)
- Python/GoLang Programming
- Microservices
- Automation Tools

- Degree in Computer Science or related discipline.
- AWS Certified Solutions Architect certification required
- 5+ years of architecture, design, implementation, and support of highly complex solutions (i.e. having an architectural sense for ensuring security and compliance, availability, reliability, etc.)
- Deep technical experience in serverless AWS infrastructure
- Understanding of cloud automation and orchestration tools and techniques, including Git, Terraform, ARM, or equivalent
- Create technical design documents, understand technical designs, and translate them into application requirements.
- Exercise independent judgment in evaluating alternative technical solutions
- Participate in code and design review process
- Write unit test cases for quality check of the deliverables
- Ability to work closely with others in a team environment as well as independently
- Proven ability to problem solve and troubleshoot
- Excellent verbal and written communication skills and the ability to interact professionally with a diverse group, executives, managers, and subject matter experts
- Excellent English communication skills are required
We are looking for a Solution Architect with at least 5 years’ experience working on the following to join our growing team:
- AWS
- PostgreSQL
- EC2 on AWS
- Cognito
- and most importantly Serverless
You will need a strong technical AWS background focused on architecting on serverless (e.g. Lambda) AWS infrastructure.
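As a small flavour of the serverless focus described here, below is a minimal Python sketch of a Lambda handler behind an API Gateway proxy integration. The route, payload shape, and response format are illustrative assumptions, not this team's actual API.

```python
# Hypothetical sketch: a minimal AWS Lambda handler behind API Gateway (proxy integration).
# Payload shape and response format are illustrative assumptions.
import json

def handler(event, context):
    # API Gateway proxy integration passes the HTTP body as a JSON string.
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

A handler like this would typically be deployed and wired to Cognito-authenticated API routes via IaC rather than by hand.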

