
We are looking for very hands-on DevOps engineers with 3 to 6 years of experience. The person will be part of a team responsible for designing & implementing automation from scratch for medium- to large-scale cloud infrastructure and for providing 24x7 services to our North American / European customers. This also includes ensuring ~100% uptime for 50+ internal sites. The person is expected to deliver with both high speed and high quality, and to work 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
This person MUST have:
- B.E. in Computer Science or equivalent
- 2+ years of hands-on experience troubleshooting and setting up Linux environments, with the ability to write shell scripts for any given requirement (see the sketch after the Experience section below).
- 1+ years of hands-on experience setting up/configuring AWS or GCP services from SCRATCH and maintaining them.
- 1+ years of hands-on experience setting up/configuring Kubernetes & EKS and ensuring high availability of container orchestration.
- 1+ years of hands-on experience setting up CI/CD from SCRATCH in Jenkins & GitLab.
- Experience configuring/maintaining at least one monitoring tool.
- Excellent verbal & written communication skills.
- Candidates with certifications (AWS, GCP, CKA, etc.) will be preferred.
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).
Experience:
- Minimum 3 years of experience as a DevOps automation engineer building, running, and maintaining production sites.
- Not looking for candidates whose experience is limited to L1/L2 support or Build & Deploy roles.
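For illustration, here is a minimal sketch of the kind of availability check implied by the uptime and scripting requirements above, written in Python. The site URLs are hypothetical placeholders, and a real setup would typically feed results into a monitoring tool rather than print them.

```python
# Illustrative only: a minimal availability check for a list of internal sites.
# The URLs below are hypothetical placeholders.
import urllib.error
import urllib.request

SITES = [
    "https://intranet.example.com/health",
    "https://wiki.example.com/health",
]

def check_site(url: str, timeout: int = 5) -> bool:
    """Return True if the site answers with an HTTP 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    for site in SITES:
        state = "UP" if check_site(site) else "DOWN"
        print(f"{state}\t{site}")
```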
Location: Remote, anywhere in India.
Timings:
- 40 hours per week (~6.5 hours per day, 6 days per week), in shifts that rotate every month.
Position:
- Full-time / Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, a Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.

Similar jobs
Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
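The responsibilities above centre on PySpark/Glue ETL pipelines. Below is a minimal, illustrative PySpark sketch; the table name, JDBC connection details and S3 path are hypothetical placeholders, and an actual AWS Glue job would usually wrap similar logic in a GlueContext.

```python
# A minimal PySpark ETL sketch (illustrative only). Table, path and connection
# details are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read a large MySQL table over JDBC (credentials are placeholders).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://db.example.com:3306/shop")
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "REDACTED")
    .load()
)

# Transform: keep completed orders and aggregate daily revenue.
daily_revenue = (
    orders.filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Load: write partitioned Parquet for downstream consumers.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)

spark.stop()
```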
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
Responsibilities
Provisioning and de-provisioning AWS accounts for internal customers
Work alongside systems and development teams to support the transition and operation of client websites/applications in and out of AWS.
Deploying, managing, and operating AWS environments
Identifying appropriate use of AWS operational best practices
Estimating AWS costs and identifying operational cost control mechanisms
Keep technical documentation up to date
Proactively keep up to date on AWS services and developments
Create (where appropriate) automation to streamline provisioning and de-provisioning processes (see the sketch below this list)
Lead certain data/service migration projects
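As a flavour of the provisioning automation mentioned above, here is a hedged boto3 sketch that requests a new member account through AWS Organizations and waits for it to be created. The account name and email are hypothetical, and production code would need proper error handling, tagging and baseline configuration.

```python
# Illustrative sketch of automating AWS account provisioning with boto3.
# Requires AWS Organizations permissions; values below are placeholders.
import time

import boto3

org = boto3.client("organizations")

def provision_account(name: str, email: str) -> str:
    """Request a new member account and wait for creation to finish."""
    request = org.create_account(Email=email, AccountName=name)
    request_id = request["CreateAccountStatus"]["Id"]

    while True:
        status = org.describe_create_account_status(
            CreateAccountRequestId=request_id
        )["CreateAccountStatus"]
        if status["State"] == "SUCCEEDED":
            return status["AccountId"]
        if status["State"] == "FAILED":
            raise RuntimeError(f"Account creation failed: {status.get('FailureReason')}")
        time.sleep(10)

if __name__ == "__main__":
    account_id = provision_account("sandbox-team-a", "aws+sandbox-team-a@example.com")
    print(f"Provisioned account {account_id}")
```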
Job Requirements
Experience provisioning, operating, and maintaining systems running on AWS
Experience with Azure/AWS.
Capabilities to provide AWS operations and deployment guidance and best practices throughout the lifecycle of a project
Experience with application/data migration to/from AWS
Experience with NGINX and the HTTP protocol.
Experience with configuration and management software such as Git
Strong analytical and problem-solving skills
Deployment experience using common AWS technologies such as VPC, regionally distributed EC2 instances, Docker, and more.
Ability to work in a collaborative environment
Detail-oriented, strong work ethic and high standard of excellence
A fast learner and achiever who sets high personal goals
Must be able to work on multiple projects and consistently meet project deadlines
About Quantela
We are a technology company that offers outcomes business models. We empower our customers with the right digital infrastructure to deliver greater economic, social, and environmental outcomes for their constituents.
When the company was founded in 2015, we specialized in smart cities technology alone. Today, working with cities and towns, utilities, and public venues, our team of 280+ experts offers a vast array of outcomes business models through technologies like digital advertising, smart lighting, smart traffic, and digitized citizen services.
We pride ourselves on our agility, innovation, and passion to use technology for a higher purpose. Unlike other technology companies, we tailor our offerings (what we can digitize) and the business model (how we partner with our customers to deliver that digitization) to drive measurable impact where our customers need it most. Over the last several months alone, we have helped customers deliver outcomes like faster medical response times to save lives, reduced traffic congestion to keep cities moving, and new revenue streams to tackle societal issues like homelessness.
We are headquartered in Billerica, Massachusetts in the United States with offices across Europe, and Asia.
The company has been recognized with the World Economic Forum’s ‘Technology Pioneers’ award in 2019 and CRN’s IoT Innovation Award in 2020.
For the latest news and updates, please visit us at www.quantela.com
Overview of the Role
The ideal candidate should be able to automate infrastructure and microservices deployment using automation tools, and should have handled Kubernetes clusters in production, in both cloud and on-premises environments.
Key Responsibilities
- Overall 6+ years of experience, including handling production Kubernetes clusters in both cloud and on-premises environments.
- Build monitoring that alerts on symptoms rather than on outages (see the sketch after this list).
- Migrate VM-based applications to Kubernetes clusters.
- Automate provisioning of infrastructure components.
- Document every task so that findings turn into repeatable actions and then into automation.
- Follow the Agile process and plan the work accordingly.
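As an illustration of alerting on symptoms rather than outages (referenced in the list above), here is a minimal sketch using the official Kubernetes Python client to flag a deployment running below its desired replica count; the deployment and namespace names are hypothetical.

```python
# Illustrative symptom-oriented check: warn when a deployment has fewer ready
# replicas than desired, instead of waiting for a full outage.
from kubernetes import client, config

def under_replicated(name: str, namespace: str) -> bool:
    """Return True if the deployment has fewer ready replicas than desired."""
    config.load_kube_config()  # use config.load_incluster_config() inside a cluster
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment(name=name, namespace=namespace)
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    return ready < desired

if __name__ == "__main__":
    # Hypothetical deployment/namespace names.
    if under_replicated("api-gateway", "production"):
        print("ALERT: api-gateway is running below its desired replica count")
```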
Must have Skills
- Knowledge of container solutions such as Docker and Kubernetes, and an understanding of virtualization concepts.
- Experience with configuration management tools such as Ansible and Chef.
- Experience with Terraform, CloudFormation, or other infrastructure-as-code tools.
- Experience with CI/CD in Jenkins.
- Good knowledge of AWS/Azure cloud environments.
- Good hands-on experience configuring and administering web servers such as Nginx, Apache, and Tomcat.
- Experience working with and maintaining package management systems (e.g., Artifactory, APT).
- Knowledge of scripting in PowerShell/Python/Bash.
- Experience building automation and pipelines (integration, testing, deployment)
- Experience with Docker containers (images and registry management).
- Understanding of metrics collectors such as Metricbeat, Heartbeat or Prometheus is good to have.
- Ability to work collaboratively on a cross-functional team with a wide range of experience levels
- Ability to analyze existing services and identify technical debt to work on.
Desired Background
Bachelor's/Master's degree in Computer Science or Computer Applications
Job description
Problem Statement-Solution
Only 10% of India speaks English and 90% speak over 25 languages and 1000s of dialects. The internet has largely been in English. A good part of India is now getting internet connectivity thanks to cheap smartphones and Jio. The non-English speaking internet users will balloon to about 600 million users out of the total 750 million internet users in India by 2020. This will make the vernacular segment one of the largest segments in the world - almost 2x the size of the US population. The vernacular segment has very few products that they can use on the internet.
One large human need is that of sharing thoughts and connecting with people of the same community on the basis of language and common interests. Twitter serves this need globally but the experience is mostly in English. There’s a large unaddressed need for these vernacular users to express themselves in their mother tongue and connect with others from their community. Koo is a solution to this problem.
About Koo
Koo was founded in March 2020, as a micro-blogging platform in both Indian languages and English, which gives a voice to the millions of Indians who communicate in Indian languages.
Currently available in Assamese, Bengali, English, Hindi, Kannada, Marathi, Tamil and Telugu, Koo enables people from across India to express themselves online in their mother tongues. In a country where under 10% of the population speaks English as a native language, Koo meets the need for a social media platform that can deliver an immersive language experience to an Indian user, thereby enabling them to connect and interact with each other. The recently introduced ‘Talk to Type’ enables users to leverage the voice assistant to share their thoughts without having to type. In August 2021, Koo crossed 10 million downloads, in just 16 months of launch.
Since June 2021, Koo has also been available in Nigeria.
Founding Team
Koo is founded by veteran internet entrepreneurs - Aprameya Radhakrishna (CEO, Taxiforsure) and Mayank Bidawatka (Co-founder, Goodbox & Coreteam, redBus).
Technology Team & Culture
The technology team comprises sharp coders, technology geeks and folks who have been entrepreneurs or are entrepreneurial and extremely passionate about technology. Talent comes from the likes of Google, Walmart, Redbus, Dailyhunt. Anyone who is part of the technology team will have a lot to learn from their peers and mentors. Download our Android app and take a look at what we’ve built. The technology stack comprises a wide variety of cutting-edge technologies like Kotlin, Java 15, Reactive Programming, MongoDB, Cassandra, Kubernetes, AWS, NodeJS, Python, ReactJS, Redis, Aerospike, ML, Deep Learning etc. We believe in giving a lot of independence and autonomy to ownership-driven individuals.
Technology skill sets required for a matching profile
- 3 to 7 years of experience in a DevOps role, with at least one stint at a fast-paced startup (mandatory).
- Mandatory experience with containers, Kubernetes (EKS, set up from scratch), Istio and microservices.
- Sound knowledge of technologies like Terraform, automation scripts, cron jobs etc. Must have worked towards infra as code, e.g. setting up new environments entirely through code.
- Knowledge of industry standards around monitoring, alerting, self-healing, high availability, auto-scaling etc.
- Extensive experience with various cloud technologies (especially on AWS) like SQS, SNS, Elasticsearch, ElastiCache, Elastic Transcoder, VPC, subnets, security groups etc. (see the sketch after this list).
- Must have set up stable CI/CD pipelines capable of zero-downtime deployments using rolling updates, blue-green or canary modes.
- Experience with VPN and LDAP solutions for secure login to infrastructure and providing SSO.
- Mastery of deploying and troubleshooting all layers of an application, from network and frontend to backend and databases (MongoDB, Redis, Postgres, Cassandra, Elasticsearch, Aerospike etc).
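A minimal, illustrative boto3 sketch of working with two of the AWS services named above (SNS and SQS); the topic ARN, queue URL and region are hypothetical placeholders.

```python
# Illustrative only: publish an event to SNS and drain a subscribed SQS queue.
import boto3

sns = boto3.client("sns", region_name="ap-south-1")
sqs = boto3.client("sqs", region_name="ap-south-1")

# Hypothetical resources; the queue is assumed to be subscribed to the topic.
TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:post-events"
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/post-events-worker"

# Publish an event to the topic.
sns.publish(TopicArn=TOPIC_ARN, Message='{"post_id": 42, "action": "created"}')

# Poll the queue and delete messages once they are handled.
response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5)
for message in response.get("Messages", []):
    print("processing:", message["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```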
About RaRa Delivery
Not just a delivery company…
RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data driven logistics.
RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on ‘one-to-one’ deliveries, the company has developed proprietary, real-time batching tech to do ‘many-to-many’ deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan and many more.
We are a distributed team with the company headquartered in Singapore 🇸🇬 , core operations in Indonesia 🇮🇩 and technology team based out of India 🇮🇳
Future of eCommerce Logistics.
- Data driven logistics company that is bringing in same day delivery revolution in Indonesia 🇮🇩
- Revolutionising delivery as an experience
- Empowering D2C Sellers with logistics as the core technology
- Build and maintain CI/CD tools and pipelines.
- Designing and managing highly scalable, reliable, and fault-tolerant infrastructure & networking that forms the backbone of distributed systems at RaRa Delivery.
- Continuously improve code quality, product execution, and customer delight.
- Communicate, collaborate and work effectively across distributed teams in a global environment.
- Strengthen teams across the product by sharing your knowledge base
- Contribute to improving team relatedness, and help build a culture of camaraderie.
- Continuously refactor applications to ensure high-quality design
- Pair with team members on functional and non-functional requirements and spread design philosophy and goals across the team
- Excellent Bash and scripting fundamentals, and hands-on scripting experience in programming languages such as Python, Ruby, Golang, etc.
- Good understanding of distributed system fundamentals and ability to troubleshoot issues in a larger distributed infrastructure
- Working knowledge of the TCP/IP stack, internet routing, and load balancing
- Basic understanding of cluster orchestrators and schedulers (Kubernetes)
- Deep knowledge of Linux as a production environment and container technologies (e.g. Docker), Infrastructure as Code such as Terraform, and K8s administration at large scale.
- Have worked on production distributed systems and have an understanding of microservices architecture, RESTful services, CI/CD.
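Relating to the TCP/IP and load-balancing point above, here is a small illustrative Python sketch of a basic TCP reachability check of the kind used when troubleshooting backends behind a load balancer; the hostnames and port are hypothetical.

```python
# Illustrative only: a basic TCP reachability check for backend hosts.
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical backend hosts behind a load balancer.
    for host, port in [("backend-1.internal", 8080), ("backend-2.internal", 8080)]:
        state = "reachable" if tcp_reachable(host, port) else "UNREACHABLE"
        print(f"{host}:{port} -> {state}")
```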
About BootLabs
https://www.bootlabs.in/
- We are a Boutique Tech Consulting partner, specializing in Cloud Native Solutions.
- We are obsessed with anything “CLOUD”. Our goal is to seamlessly automate the development lifecycle, and modernize infrastructure and its associated applications.
- With a product mindset, we enable start-ups and enterprises on cloud transformation, cloud migration, end-to-end automation and managed cloud services.
- We are eager to research, discover, automate, adapt, empower and deliver quality solutions on time.
- We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.
Technical Skills:
• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data and workload management.
- AWS
Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
Data: RDS, DynamoDB, Elastic Search
Workload: EC2, EKS, Lambda, etc.
- Azure
Data: Azure MySQL, Azure MSSQL, etc.
Workload: AKS, Virtual Machines, Azure Functions
- GCP
Data: Cloud Storage, DataFlow, Cloud SQL, Firestore, BigTable, BigQuery
Workload: GKE, Instances, App Engine, Batch, etc.
• Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating and configuration.
• Kubernetes experience (EKS/AKS/GKE) or Ansible experience; basics like pods, deployments, networking and service mesh. Has used a package manager like Helm.
• Scripting experience (Bash/Python), automation in pipelines when required, system services (see the sketch after this list).
• Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines and versioning the code.
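As an example of the pipeline scripting referenced above, here is a minimal Python sketch of a pre-deployment gate that a CI/CD job could run; the variable names are hypothetical placeholders.

```python
# Illustrative only: a small pipeline step that fails fast if required
# configuration is missing before a deploy. Variable names are placeholders.
import os
import sys

REQUIRED_VARS = ["ENVIRONMENT", "IMAGE_TAG", "KUBE_CONTEXT"]

def main() -> int:
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        print(f"Missing required pipeline variables: {', '.join(missing)}", file=sys.stderr)
        return 1
    print(f"Deploying {os.environ['IMAGE_TAG']} to {os.environ['ENVIRONMENT']}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```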
Optional:
• Experience in any programming language is not required but is appreciated.
• Good experience in GIT, SVN or any other code management tool is required.
• DevSecops tools like (Qualys/SonarQube/BlackDuck) for security scanning of artifacts, infrastructure and code.
• Observability tools (Opensource: Prometheus, Elasticsearch, Open Telemetry; Paid: Datadog, 24/7, etc.)
- Responsible for the entire infrastructure including Production (both bare metal and AWS).
- Manage and maintain the production systems and operations including SysAdmin, DB activities.
- Improve tools and processes, automate manual efforts, and maintain the health of the system.
- Champion best practices, CI-CD, Metrics Driven Development
- Optimise the company's computing architecture
- Conduct systems tests for security, performance, and availability
- Maintain security of the system
- Develop and maintain design and troubleshooting documentation
- 7+ years of experience in DevOps/Technical Operations
- Extensive experience with scripting languages such as shell, Python, etc.
- Experience in developing and maintaining CI/CD processes for SaaS applications using tools such as Jenkins
- Hands-on experience using configuration management tools such as Puppet, SaltStack, Ansible, etc.
- Hands-on experience building and handling VMs and containers using tools such as Kubernetes, Docker, etc.
- Hands-on experience building, designing and maintaining cloud-based applications on AWS, Azure, GCP, etc.
- Knowledge of Databases (MySQL, NoSQL)
- Knowledge of security/ethical hacking
- Have experience with ElasticSearch, Kibana, LogStash
- Have experience with Cassandra, Hadoop, or Spark
- Have experience with Mongo, Hive
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement services are built on standard DevOps practices and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer that address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to Infrastructure Services, Software as a Service, and Cloud Services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience implementing continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc.
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages including Shell, Python, and Perl
· Hands-on exposure to modern IT infrastructure (e.g. Docker Swarm/Mesos/Kubernetes/OpenStack)
· Exposure to any relational database technology (MySQL/Postgres/Oracle) or any NoSQL database
· Worked on open-source tools for logging, monitoring, search engine, caching, etc.
· Professional certification in AWS or any other cloud is preferable
· Excellent problem solving and troubleshooting skills
· Must have good written and verbal communication skills
Key Responsibilities
Ambitious individuals who can work under their own direction towards agreed targets/goals.
Must be flexible with office timings to accommodate multinational clients' time zones.
Will be involved in solution design from the conceptual stages through the development cycle and deployment.
Be involved in development operations & support for internal teams
Improve infrastructure uptime, performance, resilience, and reliability through automation
Willing to learn new technologies and work on research-orientated projects
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
Independent thinking, ability to work in a fast-paced environment with creativity and brainstorming
http://www.banyandata.com" target="_blank">www.banyandata.com
Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our Clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced Cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.
What we believe?
1. Best practices are overrated
○ Implementing best practices can only make one ‘average’.
2. Honesty and Transparency
○ We believe in naked truth. We do what we tell and tell what we do.
3. Client Partnership
○ Client - Vendor relationship: No. We partner with clients instead.
○ And our sales team comprises 100% of our clients.
How we work?
It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.
1. Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
2. Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
3. Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
4. Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
5. Innovative: Innovate or Die. We love to challenge the status quo.
6. Experimental: We encourage curiosity & making mistakes.
7. Responsible: Driven. Self motivated. Self governing teams. We own it.
Are you the one? Quick self-discovery test:
1. Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing?
2. Passion for sales: When was the last time you went to a remote gas station while on vacation, and ended up helping the gas station owner saasify his 7 gas stations across other geographies?
3. Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
4. Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on cloud?’
Your bucket of undertakings:
This position will be responsible for consulting with clients and proposing architectural solutions to help move & improve infra from on-premise to cloud, or to help optimize cloud spend when moving from one public cloud to the other.
1. Be the first one to experiment with new-age cloud offerings, help define best practice as a thought leader for cloud, automation & DevOps, and be a solution visionary and technology expert across multiple channels.
2. Continually augment skills and learn new tech as the technology and client needs evolve.
3. Use your experience in Google Cloud Platform, AWS or Microsoft Azure to build hybrid-cloud solutions for customers.
4. Provide leadership to project teams, and facilitate the definition of project deliverables around core Cloud based technology and methods.
5. Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
6. Participate in technical reviews of requirements, designs, code and other artifacts.
7. Identify and keep abreast of new technical concepts in Google Cloud Platform.
8. Security, Risk and Compliance - Advise customers on best practices around access management, network setup, regulatory compliance and related areas.
Accomplishment Set
● Passionate, persuasive, articulate Cloud professional capable of quickly establishing interest and credibility
● Good business judgment, a comfortable, open communication style, and a willingness and ability to work with customers and teams.
● Strong service attitude and a commitment to quality.
● Highly organised and efficient.
● Confident working with others to inspire a high-quality standard.
Education, Experience, etc.
1. Is education overrated? Yes, we believe so. However, there is no way to locate you otherwise. So unfortunately we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since the age of 12. And the latter is better. We will find you faster if you specify the latter in some manner. Not just degrees: we are not too thrilled by tech certifications either ... :)
2. To reiterate: a passion for all things tech, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude and a strong ‘desire to deliver’ outlive those fancy degrees!
3. 1 - 5 years of experience, with at least 2 - 3 years of hands-on experience in Cloud Computing (AWS/GCP/Azure) and IT operational experience in a global enterprise environment.
4. Good analytical, communication, problem solving, and learning skills.
5. Knowledge of programming against cloud platforms such as Google Cloud Platform.
- Experience working on Linux-based infrastructure
- Strong hands-on knowledge of setting up production, staging, and dev environments on AWS/GCP/Azure
- Strong hands-on knowledge of technologies like Terraform, Docker, Kubernetes (see the sketch after this list)
- Strong understanding of continuous testing environments such as Travis CI, CircleCI, Jenkins, etc.
- Configuring and managing databases such as MySQL, MongoDB
- Excellent troubleshooting skills
- Working knowledge of various tools, open-source technologies, and cloud services
- Awareness of critical concepts in DevOps and Agile principles
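As a small illustration of the Terraform-based environment setup mentioned above, here is a hedged Python sketch that drives the Terraform CLI to plan a change for a given environment; the directory and var-file paths are hypothetical placeholders.

```python
# Illustrative sketch: driving Terraform from Python via the CLI to plan an
# environment-specific change. Paths and file names are placeholders.
import subprocess

def terraform_plan(working_dir: str, var_file: str) -> None:
    """Run `terraform init` and `terraform plan` in the given directory."""
    subprocess.run(["terraform", "init", "-input=false"], cwd=working_dir, check=True)
    subprocess.run(
        ["terraform", "plan", "-input=false", f"-var-file={var_file}", "-out=tfplan"],
        cwd=working_dir,
        check=True,
    )

if __name__ == "__main__":
    terraform_plan("infra/aws", "environments/staging.tfvars")
```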
