DevOps Engineer
Posted by Simran Bhullar
1 - 3 yrs
₹5L - ₹7.5L / yr
Bengaluru (Bangalore)
Skills
Docker
Kubernetes
DevOps
Google Cloud Platform (GCP)
Elasticsearch
MongoDB
Node.js
Cassandra

Designation: DevOps Engineer

Location: HSR, Bangalore


About the Company


Making impact, driven by data.

Vumonic Datalabs is a data-driven startup providing business insights to e-commerce and e-tail companies, helping them make informed decisions to scale their business and better understand their competition. As one of the EU's fastest growing (and coolest) data companies, we believe in revolutionizing the way businesses make their most important decisions by providing first-hand, transaction-based insights in real time.



About the Role

 

We are looking for an experienced and ambitious DevOps engineer who will be responsible for deploying product updates, identifying production issues and implementing integrations that meet our customers' needs. As a DevOps engineer at Vumonic Datalabs, you will have the opportunity to work with a thriving global team to help us build functional systems that improve the customer experience. If you have a strong background in software engineering, are hungry to learn, passionate about your work and familiar with the technical skills mentioned below, we’d love to speak with you.



What you’ll do


  • Optimize and engineer the DevOps infrastructure for high availability, scalability and reliability
  • Monitor logs on servers and manage cloud resources
  • Build and set up new development tools and infrastructure to reduce occurrences of errors
  • Understand the needs of stakeholders and convey them to developers
  • Design scripts to automate and improve development and release processes
  • Test and examine code written by others and analyze results
  • Ensure that systems are safe and secure against cybersecurity threats
  • Identify technical problems, perform root-cause analysis for production errors and develop software updates and fixes
  • Work with software developers and engineers to ensure that development follows established processes, and actively communicate with the operations team
  • Design procedures for system troubleshooting and maintenance
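
Responsibilities like log monitoring and root-cause analysis often start with a small script. Below is a minimal sketch of scanning log lines for error records and flagging noisy components; the log format, threshold, and logger names are illustrative assumptions, not Vumonic's actual setup.

```python
import re
from collections import Counter

ERROR_RE = re.compile(r"\b(ERROR|CRITICAL)\b")

def error_summary(lines, threshold=5):
    """Count ERROR/CRITICAL lines per logger and flag the noisy ones."""
    counts = Counter()
    for line in lines:
        if ERROR_RE.search(line):
            # assume "LEVEL logger-name: message" style records (hypothetical format)
            parts = line.split()
            logger = parts[1] if len(parts) > 1 else "unknown"
            counts[logger] += 1
    return {name: n for name, n in counts.items() if n >= threshold}

sample = ["ERROR api: timeout"] * 6 + ["INFO api: ok", "ERROR db: deadlock"]
print(error_summary(sample))  # only the 'api:' logger crosses the threshold
```

In practice this logic would sit behind a proper monitoring stack, but the core idea — aggregate, threshold, alert — is the same.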


What you need to have


TECHNICAL SKILLS

  • Experience working with the following tools: Google Cloud Platform, Kubernetes, Docker, Elasticsearch, Terraform, Redis
  • Experience working with the following tools preferred: Python, Node.js, MongoDB, Rancher, Cassandra
  • Experience with real-time monitoring of cloud infrastructure using publicly available tools and servers
  • 2 or more years of experience as a DevOps engineer (startup/technical experience preferred)

You are

  • Excited to learn, a hustler and a “do-er”
  • Passionate about building products that create impact
  • Up to date with the latest technological developments, and enjoy upskilling yourself with market trends
  • Willing to experiment with novel ideas and take calculated risks
  • Someone with a problem-solving attitude and the ability to handle multiple tasks while meeting deadlines
  • Interested in working as part of a supportive, highly motivated and fun team

About VUMONIC

Founded: 2016
Type: Services
Size: 20-100
Stage: Profitable

About

Vumonic offers an online subscription-based competitive intelligence platform providing information on market share and transaction share for the e-commerce industry. It claims to collect purchase information directly from customers' email inboxes and is device-independent. It gathers data through in-house data-sourcing techniques and third-party partners, and claims to have 30k+ data sets dating back to 2013.

Connect with the team

Simran Bhullar
Isha Shetty


Similar jobs

Molecular Connections
Posted by Molecular Connections
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹10L / yr
DevOps
Kubernetes
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
+3 more

About the Role:

We are seeking a talented and passionate DevOps Engineer to join our dynamic team. You will be responsible for designing, implementing, and managing scalable and secure infrastructure across multiple cloud platforms. The ideal candidate will have a deep understanding of DevOps best practices and a proven track record in automating and optimizing complex workflows.


Key Responsibilities:


Cloud Management:

  • Design, implement, and manage cloud infrastructure on AWS, Azure, and GCP.
  • Ensure high availability, scalability, and security of cloud resources.

Containerization & Orchestration:

  • Develop and manage containerized applications using Docker.
  • Deploy, scale, and manage Kubernetes clusters.

CI/CD Pipelines:

  • Build and maintain robust CI/CD pipelines to automate the software delivery process.
  • Implement monitoring and alerting to ensure pipeline efficiency.
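
A CI/CD pipeline is, at its core, an ordered list of stages that halts on the first failure. The toy runner below sketches that idea; the stage names and result records are illustrative and not tied to any particular CI product.

```python
def run_pipeline(steps):
    """Run (name, func) stages in order; stop at the first failure.

    Returns a list of (name, status) records, mirroring the
    stage-by-stage reporting a CI server would surface.
    """
    results = []
    for name, step in steps:
        try:
            step()
            results.append((name, "passed"))
        except Exception as exc:
            results.append((name, f"failed: {exc}"))
            break  # later stages never run after a failure
    return results

demo = [
    ("build", lambda: None),
    ("test", lambda: (_ for _ in ()).throw(RuntimeError("2 tests failed"))),
    ("deploy", lambda: None),
]
print(run_pipeline(demo))  # 'deploy' never runs because 'test' failed
```

Real pipelines add parallelism, artifacts, and retries, but the fail-fast ordering shown here is the invariant monitoring and alerting are built around.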

Version Control & Collaboration:

  • Manage code repositories and workflows using Git.
  • Collaborate with development teams to optimize branching strategies and code reviews.

Automation & Scripting:

  • Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or similar.
  • Write scripts to optimize and maintain workflows.
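
Provisioning automation frequently boils down to rendering a source of truth into the format a tool expects. A minimal sketch, assuming Ansible-style INI inventories; the group and host names are made up for illustration.

```python
def render_inventory(groups):
    """Render {group: [hosts]} as an Ansible-style INI inventory."""
    sections = []
    for group, hosts in sorted(groups.items()):
        sections.append(f"[{group}]")
        sections.extend(sorted(hosts))
        sections.append("")  # blank line between groups
    return "\n".join(sections).rstrip() + "\n"

groups = {"web": ["web1.example.com", "web2.example.com"],
          "db": ["db1.example.com"]}
print(render_inventory(groups))
```

Generating such files from one canonical host list keeps Terraform, Ansible, and monitoring configs from drifting apart.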

Monitoring & Logging:

  • Implement and maintain monitoring solutions to ensure system health and performance.
  • Analyze logs and metrics to troubleshoot and resolve issues.


Required Skills & Qualifications:

  • 3-5 years of experience with AWS, Azure, and Google Cloud Platform (GCP).
  • Proficiency in containerization tools like Docker and orchestration tools like Kubernetes.
  • Hands-on experience building and managing CI/CD pipelines.
  • Proficient in using Git for version control.
  • Experience with scripting languages such as Bash, Python, or PowerShell.
  • Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
  • Solid understanding of networking, security, and system administration.
  • Excellent problem-solving and troubleshooting skills.
  • Strong communication and teamwork skills.


Preferred Qualifications:

  • Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, or Google Professional DevOps Engineer.
  • Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
  • Familiarity with serverless architectures and microservices.


Nagarro Software
Posted by Prabhu Singh
Remote, Gurugram
5.5 - 8.5 yrs
₹12L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+7 more
  • Good knowledge of at least one language (C#, Java, Python, Go, PHP, Node.js)
  • Solid experience with application and infrastructure architectures
  • Design and plan cloud solution architecture
  • Design for security, networking, and compliance
  • Analyze and optimize technical and business processes
  • Ensure solution and operational reliability
  • Manage and provision cloud infrastructure
  • Manage IaaS, PaaS, and SaaS solutions
  • Design strategies around cloud governance, migration, cloud operations and DevOps
  • Design highly scalable, available, and reliable cloud applications
  • Build and test applications
  • Deploy applications to the cloud
  • Integrate with cloud services

Certification:

  • Architect level certificate of any cloud (AWS, GCP, Azure)
HappyFox
Posted by Lindsey A
Chennai, Bengaluru (Bangalore)
7 - 15 yrs
₹15L - ₹20L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+9 more

About us:

HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.

 

We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.

 

To know more, visit https://www.happyfox.com/

 

Responsibilities

  • Build and scale production infrastructure in AWS for the HappyFox platform and its products.
  • Research and build/implement systems, services and tooling to improve the uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
  • Implement consistent observability, deployment and IaC setups
  • Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
  • Hire/Mentor other Infrastructure engineers and review their work to continuously ship improvements to production infrastructure and its tooling.
  • Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
  • Lead infrastructure security audits

 

Requirements

  • At least 7 years of experience in handling/building Production environments in AWS.
  • At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
  • Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
  • Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
  • Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
  • Experience in security hardening of infrastructure, systems and services.
  • Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
  • Experience in setting up and managing test/staging environments, and CI/CD pipelines.
  • Experience in IaC tools such as Terraform or AWS CDK
  • Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
  • Passion for making systems reliable, maintainable, scalable and secure.
  • Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
  • Bonus points – Hands-on experience with Nginx, Postgres, Postfix, Redis or Mongo systems.
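
Security hardening of infrastructure, mentioned above, is often enforced with small audit scripts. A sketch that checks an sshd_config text against a couple of common hardening rules; the rule set here is a tiny illustrative sample, not a complete baseline.

```python
# Two widely recommended sshd_config settings (sample only, not exhaustive).
HARDENING_RULES = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
}

def audit_sshd_config(text):
    """Return the hardening rules the given sshd_config text violates."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if line:
            key, _, value = line.partition(" ")
            settings[key] = value.strip()
    # a missing directive counts as a violation, since the default may be unsafe
    return {k: want for k, want in HARDENING_RULES.items()
            if settings.get(k) != want}

cfg = "PermitRootLogin yes\nPasswordAuthentication no\n"
print(audit_sshd_config(cfg))  # → {'PermitRootLogin': 'no'}
```

Tools like Qualys automate this at scale, but the check-expected-versus-actual pattern is the same.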

 

 

Dhwani Rural Information Systems
Posted by Sunandan Madan
Gurgaon
2 - 6 yrs
₹4L - ₹10L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+10 more
Job Overview

We are looking for an excellent, experienced DevOps professional. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to deploy and release their code seamlessly.

Responsibilities

The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.

  • Understand accessibility and security compliance (depending on the specific project)
  • Handle user authentication and authorization between multiple systems, servers, and environments
  • Integrate multiple data sources and databases into one system
  • Understand the fundamental design principles behind a scalable application
  • Work with configuration management tools (Ansible/Chef/Puppet) and cloud service providers (AWS/DigitalOcean); experience with the Docker + Kubernetes ecosystem is a plus
  • Make key decisions for our infrastructure, networking and security
  • Manipulate shell scripts during migration and DB connection
  • Monitor production server health across parameters (CPU load, physical memory, swap memory) and set up monitoring tools (e.g. Nagios) to track production server health
  • Create alerts and configure monitoring of specified metrics to manage the cloud infrastructure efficiently
  • Set up and manage VPCs and subnets, connect different zones, and block suspicious IPs/subnets via ACLs
  • Create and manage AMIs, snapshots and volumes; upgrade/downgrade AWS resources (CPU, memory, EBS)
  • Manage microservices at scale and maintain the compute and storage infrastructure for various product teams

Requirements

  • Strong knowledge of configuration management tools such as Ansible, Chef and Puppet
  • Experience with change-tracking tools like JIRA, log analysis, and maintaining documentation of production server error-log reports
  • Experience in troubleshooting, backup, and recovery
  • Excellent knowledge of cloud service providers such as AWS and DigitalOcean
  • Good knowledge of the Docker and Kubernetes ecosystem
  • Proficient understanding of code versioning tools, such as Git
  • Must have experience working in an automated environment
  • Good knowledge of AWS services such as Amazon EC2, Amazon S3 (Amazon Glacier), Amazon VPC and Amazon CloudWatch
  • Scheduling jobs using crontab and creating swap memory
  • Proficient knowledge of access management (IAM)
  • Expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
  • Good knowledge of GCP

Educational Qualifications

B.Tech (IT)/M.Tech/MBA (IT)/BCA/MCA or any degree in the relevant field

Experience: 2-6 years
IntelliFlow Solutions Pvt Ltd
Posted by Divyashree Abhilash
Remote, Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹12L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+3 more
IntelliFlow.ai is a next-gen technology SaaS Platform company providing tools for companies to design, build and deploy enterprise applications with speed and scale. It innovates and simplifies the application development process through its flagship product, IntelliFlow. It allows business engineers and developers to build enterprise-grade applications to run frictionless operations through rapid development and process automation. IntelliFlow is a low-code platform to make business better with faster time-to-market and succeed sooner.

Looking for an experienced candidate with strong development and programming experience. Preferred knowledge:

  • Cloud computing (i.e. Kubernetes, AWS, Google Cloud, Azure)
  • Coming from a strong development background and has programming experience with Java and/or NodeJS (other programming languages such as Groovy/python are a big bonus)
  • Proficient with Unix systems and bash
  • Proficient with git/GitHub/GitLab/bitbucket

 

Desired skills-

  • Docker
  • Kubernetes
  • Jenkins
  • Experience in any scripting language (Python, Shell Scripting, JavaScript)
  • NGINX / Load Balancer
  • Splunk / ETL tools
Concentric AI
Posted by Gopal Agarwal
Pune
4 - 10 yrs
₹10L - ₹45L / yr
Python
Shell Scripting
DevOps
Amazon Web Services (AWS)
Infrastructure architecture
+7 more
About us:

Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.

There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators but securing it is incredibly difficult. It’s the data challenge of our generation.

Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.

That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.

Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/

Title: Cloud DevOps Engineer 

Role: Individual Contributor (4-8 yrs)  

      

Requirements: 

  • Energetic self-starter, a fast learner, with a desire to work in a startup environment  
  • Experience working with Public Clouds like AWS 
  • Operating and Monitoring cloud infrastructure on AWS. 
  • Primary focus on building, implementing and managing operational support 
  • Design, Develop and Troubleshoot Automation scripts (Configuration/Infrastructure as code or others) for Managing Infrastructure. 
  • Expert at one of the scripting languages – Python, shell, etc  
  • Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc 
  • Handling load monitoring, capacity planning, and services monitoring. 
  • Proven experience With CICD Pipelines and Handling Database Upgrade Related Issues. 
  • Good Understanding and experience in working with Containerized environments like Kubernetes and Datastores like Cassandra, Elasticsearch, MongoDB, etc
RaRa Now
Posted by N SHUBHANGINI
Remote only
2 - 8 yrs
₹7L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

About RaRa Delivery

Not just a delivery company…

RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data driven logistics.

RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on ‘one-to-one’ deliveries, the company has developed proprietary, real-time batching tech to do ‘many-to-many’ deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia, like Blibli, Sayurbox, Kopi Kenangan and many more.

We are a distributed team with the company headquartered in Singapore 🇸🇬 , core operations in Indonesia 🇮🇩 and technology team based out of India 🇮🇳

Future of eCommerce Logistics.

  • Data-driven logistics company bringing a same-day delivery revolution to Indonesia 🇮🇩
  • Revolutionising delivery as an experience
  • Empowering D2C Sellers with logistics as the core technology

About the Role

  • Build and maintain CI/CD tools and pipelines.
  • Designing and managing highly scalable, reliable, and fault-tolerant infrastructure & networking that forms the backbone of distributed systems at RaRa Delivery.
  • Continuously improve code quality, product execution, and customer delight.
  • Communicate, collaborate and work effectively across distributed teams in a global environment.
  • Operate to strengthen teams across their product with their knowledge base
  • Contribute to improving team relatedness, and help build a culture of camaraderie.
  • Continuously refactor applications to ensure high-quality design
  • Pair with team members on functional and non-functional requirements and spread design philosophy and goals across the team

Requirements

  • Excellent bash and scripting fundamentals, and hands-on scripting experience in programming languages such as Python, Ruby, Golang, etc.
  • Good understanding of distributed system fundamentals and ability to troubleshoot issues in a larger distributed infrastructure
  • Working knowledge of the TCP/IP stack, internet routing, and load balancing
  • Basic understanding of cluster orchestrators and schedulers (Kubernetes)
  • Deep knowledge of Linux as a production environment, container technologies. e.g. Docker, Infrastructure As Code such as Terraform, K8s administration at large scale.
  • Have worked on production distributed systems and have an understanding of microservices architecture, RESTful services, CI/CD.
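
The load-balancing fundamentals mentioned above can be illustrated with the simplest scheduling policy, round-robin; the backend addresses below are made up for the example.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute requests across backends in strict rotation."""

    def __init__(self, backends):
        if not backends:
            raise ValueError("need at least one backend")
        self._pool = cycle(backends)  # endless iterator over the backend list

    def pick(self):
        """Return the backend that should serve the next request."""
        return next(self._pool)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.pick() for _ in range(5)])
# → ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2']
```

Production balancers layer health checks, weighting, and connection awareness on top, but rotation like this is the baseline they extend.
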
A Reputed IT Services Company
Agency job via Confidenthire by Aprajeeta Sinha
Navi Mumbai, Pune, Mumbai
4 - 7 yrs
₹10L - ₹20L / yr
DevOps
Terraform
Docker
Kubernetes
Google Cloud Platform (GCP)
+1 more
  1. GCP Cloud experience mandatory
  2. CI/CD - Azure DevOps
  3. IaC tools – Terraform
  4. Experience with IAM / Access Management within cloud
  5. Networking / Firewalls
  6. Kubernetes / Helm / Istio
A USA-based product engineering company (medical industry)
Agency job via Sagar Enterprises by Rupali Khamkar
Remote, Pune
6 - 13 yrs
₹30L - ₹37L / yr
DevOps
Amazon Web Services (AWS)
Docker
Ansible
CI/CD
+4 more

Total Experience: 6 – 12 Years

 

Required Skills and Experience 

 

  • 3+ years of relevant experience with DevOps tools such as Jenkins, Ansible, Chef, etc.
  • 3+ years of experience in continuous integration/deployment, and software-tools development experience with Python, shell scripts, etc.
  • Building and running Docker images and deploying them on Amazon ECS
  • Working with AWS services (EC2, S3, ELB, VPC, RDS, CloudWatch, ECS, ECR, EKS)
  • Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
  • Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
  • Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
  • Knowledge of and experience with cloud orchestration tools such as AWS CloudFormation/Terraform, etc.
  • Experience implementing "infrastructure as code", "pipeline as code" and "security as code" to enable continuous integration and delivery
  • Understanding of IAM, RBAC, NACLs, and KMS
  • Good communication skills

 

Good to have:

 

  • Strong understanding of security concepts and methodologies, such as SSH, public key encryption, access credentials, certificates, etc., and the ability to apply them.
  • Knowledge of database administration such as MongoDB.
  • Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.

 

Responsibilities

 

  • Work with Leads and Architects in designing and implementation of technical infrastructure, platform, and tools to support modern best practices and facilitate the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
  • Establish and promote DevOps thinking, guidelines, best practices, and standards.
  • Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.
Radware
Posted by Vinoth Kumar
Bengaluru (Bangalore)
6 - 10 yrs
₹12L - ₹20L / yr
DevOps
Kubernetes
Docker
Elasticsearch
Python
+3 more
Job Responsibilities:

  • Managing the cloud deployment, with thousands of VMs and containers, at 100% uptime
  • Budgeting the infra costs and planning for continued cost optimisation
  • Managing and motivating the team members
  • Designing the architecture to scale the back-end to meet the business requirements

Requirements:

  • Strong background in Linux fundamentals and system administration
  • Good command of coding with scripting languages like Python and shell scripting
  • Experience with Docker and any one of the container management systems: Kubernetes, Docker Swarm or Apache Mesos
  • Ability to use a wide variety of open-source technologies and cloud services like AWS, GCP or Azure
  • Experience with automation/configuration management using Puppet, Chef or an equivalent
  • Experience with Elasticsearch, MongoDB, Redis, Memcached, Kafka, RabbitMQ or ActiveMQ
  • Knowledge of best practices and IT operations in an always-up, always-available service
  • Good team management and communication skills
  • Good experience with monitoring and alerting systems like Nagios, Zabbix
  • Experience with CI and CD tools

Educational Qualification: BE/B.Tech/MCA