Designation: DevOps Engineer
Location: HSR, Bangalore
About the Company
Making impact driven by Data.
Vumonic Datalabs is a data-driven startup providing business insights to e-commerce & e-tail companies to help them make data-driven decisions, scale up their business, and understand their competition better. As one of the EU's fastest-growing (and coolest) data companies, we believe in revolutionizing the way businesses make their most important decisions by providing first-hand, transaction-based insights in real time.
About the Role
We are looking for an experienced and ambitious DevOps engineer who will be responsible for deploying product updates, identifying production issues and implementing integrations that meet our customers' needs. As a DevOps engineer at Vumonic Datalabs, you will have the opportunity to work with a thriving global team to help us build functional systems that improve customer experience. If you have a strong background in software engineering, are hungry to learn, are passionate about your work and are familiar with the technical skills mentioned below, we’d love to speak with you.
What you’ll do
- Optimize and engineer the DevOps infrastructure for high availability, scalability and reliability.
- Monitor server logs and manage cloud infrastructure
- Build and set up new development tools and infrastructure to reduce occurrences of errors
- Understand the needs of stakeholders and convey this to developers
- Design scripts to automate and improve development and release processes
- Test and examine code written by others and analyze results
- Ensure that systems are safe and secure against cybersecurity threats
- Identify technical problems, perform root cause analysis for production errors and develop software updates and ‘fixes’
- Work with software developers and engineers to ensure that development follows established processes and that the team actively communicates with the operations team.
- Design procedures for system troubleshooting and maintenance.
What you need to have
TECHNICAL SKILLS
- Experience working with the following tools: Google Cloud Platform, Kubernetes, Docker, Elasticsearch, Terraform, Redis
- Experience with the following tools preferred: Python, Node.js, MongoDB, Rancher, Cassandra
- Experience with real-time monitoring of cloud infrastructure using publicly available tools and servers (see the monitoring sketch after this list)
- 2 or more years of experience as a DevOps engineer (startup/technical experience preferred)
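To make the monitoring expectations above concrete, here is a minimal sketch (not from the posting) that lists unhealthy pods using the official Kubernetes Python client; the kubeconfig-based authentication and the "not Running/Succeeded" health criterion are illustrative assumptions.

    # Minimal sketch: report pods that are not Running or Succeeded.
    # Assumes a kubeconfig is reachable locally (e.g. for a GKE cluster);
    # requires the `kubernetes` package (pip install kubernetes).
    from kubernetes import client, config

    def list_unhealthy_pods():
        config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
        v1 = client.CoreV1Api()
        unhealthy = []
        for pod in v1.list_pod_for_all_namespaces(watch=False).items:
            if pod.status.phase not in ("Running", "Succeeded"):
                unhealthy.append((pod.metadata.namespace, pod.metadata.name, pod.status.phase))
        return unhealthy

    if __name__ == "__main__":
        for namespace, name, phase in list_unhealthy_pods():
            print(f"{namespace}/{name}: {phase}")

A script like this is a common starting point before the same check is wired into an alerting pipeline.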
You are
- Excited to learn, a hustler and a “do-er”
- Passionate about building products that create impact.
- Updated with the latest technological developments & enjoy upskilling yourself with market trends.
- Willing to experiment with novel ideas & take calculated risks.
- Someone with a problem-solving attitude and the ability to handle multiple tasks while meeting expected deadlines.
- Interested to work as part of a supportive, highly motivated and fun team.

Similar jobs
LogiNext is looking for a technically savvy and passionate DevOps Engineer to support the development and operations efforts on our product. You will choose and deploy tools and technologies to build and support a robust and scalable infrastructure.
You have hands-on experience in building secure, high-performing and scalable infrastructure, experience automating and streamlining development operations and processes, and a strong track record of troubleshooting and resolving issues in non-production and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
- Scale and optimise a variety of SQL and NoSQL databases, web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Support several Linux servers running our SaaS platform stack on AWS, Azure, GCP
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations
Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- 2 to 4 years of experience in designing and maintaining high-volume, scalable microservices architecture on cloud infrastructure
- Strong background in Linux/Unix administration and Python/shell scripting
- Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto Scaling, VPC, Lambda), GCP, Azure
- Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios (see the alarm sketch after this list)
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
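As a hedged illustration of the CloudWatch monitoring item above (not part of the job description itself), the sketch below creates a CPU-utilization alarm for a single EC2 instance with boto3; the region, instance ID and SNS topic ARN are placeholders.

    # Sketch: alarm when average EC2 CPU utilization stays above 80%
    # for two consecutive 5-minute periods. Requires boto3 and AWS
    # credentials; all identifiers below are placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-i-0123456789abcdef0",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )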
Description
Do you dream about code every night? If so, we’d love to talk to you about a new product that we’re making to enable delightful testing experiences at scale for development teams who build modern software solutions.
What You'll Do
Troubleshooting and analyzing technical issues raised by internal and external users.
Working with monitoring tools like Prometheus / Nagios / Zabbix (see the exporter sketch after this list).
Developing automation in one or more technologies such as Terraform, Ansible, CloudFormation, Puppet, or Chef is preferred.
Monitor infrastructure alerts and take proactive action to avoid downtime and customer impacts.
Working closely with the cross-functional teams to resolve issues.
Test, build, design, deploy, and maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
Work in close coordination with the development and operations teams to ensure the application performs in line with customer expectations.
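As a rough sketch of the Prometheus work referenced above (an assumption about how it might look, not part of the posting), a custom exporter built with the prometheus_client library can expose an application metric for scraping; the metric name and the "queue depth" being measured are invented for illustration.

    # Sketch: expose a custom gauge on :8000/metrics for Prometheus to scrape.
    # Requires the prometheus_client package; the "queue depth" metric is a
    # made-up example.
    import random
    import time

    from prometheus_client import Gauge, start_http_server

    queue_depth = Gauge("demo_queue_depth", "Depth of a hypothetical work queue")

    if __name__ == "__main__":
        start_http_server(8000)  # serves /metrics on port 8000
        while True:
            queue_depth.set(random.randint(0, 100))  # stand-in for a real measurement
            time.sleep(15)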
What you should have
Bachelor’s or Master’s degree in computer science or any related field.
3-6 years of experience in Linux / Unix and cloud computing.
Familiarity with cloud and datacenter environments for enterprise customers.
Hands-on experience with Linux / Windows / macOS and Batch/AppleScript/Bash scripting.
Experience with various databases such as MongoDB, PostgreSQL, MySQL, MSSQL.
Familiar with AWS technologies like EC2, S3, Lambda, IAM, etc.
Must know how to choose the tools and technologies that best fit the business needs.
Experience in developing and maintaining CI/CD processes using tools like Git, GitHub, Jenkins, etc.
Excellent organizational skills to adapt to a constantly changing technical environment.
The candidate must have 2-3 years of experience in the domain. The responsibilities include:
● Deploying systems in a Linux-based environment using Docker
● Manage & maintain the production environment
● Deploy updates and fixes
● Provide Level 1 technical support
● Build tools to reduce occurrences of errors and improve customer experience
● Develop software to integrate with internal back-end systems
● Perform root cause analysis for production errors
● Investigate and resolve technical issues
● Develop scripts to automate visualization
● Design procedures for system troubleshooting and maintenance
Requirements:
● Experience working on Linux-based infrastructure
● Excellent understanding of the MERN stack, Docker & Nginx (Node.js is good to have)
● Configuring and managing databases such as MongoDB
● Excellent troubleshooting skills (see the log-triage sketch after this list)
● Experience of working with AWS/Azure/GCP
● Working knowledge of various tools, open-source technologies, and cloud services
● Awareness of critical concepts in DevOps and Agile principles
● Experience with CI/CD pipelines
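To illustrate the troubleshooting and root-cause-analysis items above, here is a small sketch (not from the posting) that tallies 5xx responses per request in an Nginx access log; the log path and the assumption of the default "combined" log format are illustrative.

    # Sketch: count 5xx responses per request line in an Nginx access log
    # written in the default "combined" format. The path is a placeholder.
    import re
    from collections import Counter

    LOG_PATH = "/var/log/nginx/access.log"  # placeholder path
    LINE_RE = re.compile(r'"(?P<request>[^"]*)" (?P<status>\d{3}) ')

    def count_5xx(path=LOG_PATH):
        errors = Counter()
        with open(path) as fh:
            for line in fh:
                match = LINE_RE.search(line)
                if match and match.group("status").startswith("5"):
                    errors[match.group("request")] += 1
        return errors

    if __name__ == "__main__":
        for request, count in count_5xx().most_common(10):
            print(f"{count:6d}  {request}")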
- 5+ years of experience in DevOps including automated system configuration, application deployment, and infrastructure-as-code.
- Advanced Linux system administration abilities.
- Real-world experience managing large-scale AWS or GCP environments. Multi-account management a plus.
- Experience with managing production environments on AWS or GCP.
- Solid understanding of CI/CD pipelines using GitHub, CircleCI/Jenkins, JFrog Artifactory/Nexus.
- Experience with any configuration management tool such as Ansible, Puppet or Chef is a must.
- Experience in any one of the scripting languages: Shell, Python, etc.
- Experience in containerization using Docker and orchestration using Kubernetes/EKS/GKE is a must.
- Solid understanding of SSL and DNS (see the certificate-expiry sketch after this list).
- Experience deploying and running any open-source monitoring/graphing solution like Prometheus, Grafana, etc.
- Basic understanding of networking concepts.
- Always adhere to security best practices.
- Knowledge of big data (Hadoop/Druid) systems administration is a plus.
- Knowledge of managing and running databases (MySQL/MariaDB/Postgres) is an added advantage.
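Because the list above calls out SSL and DNS, here is a standard-library sketch (an illustration, not a requirement of the role) that reports how many days remain before a host's TLS certificate expires; the hostname is a placeholder.

    # Sketch: days until a host's TLS certificate expires, standard library only.
    # The hostname is a placeholder.
    import socket
    import ssl
    import time

    def days_until_expiry(hostname: str, port: int = 443) -> int:
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires_at - time.time()) // 86400)

    if __name__ == "__main__":
        print(days_until_expiry("example.com"), "days left")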
What you get to do
- Work with development teams to build and maintain cloud environments to specifications developed closely with multiple teams. Support and automate the deployment of applications into those environments
- Diagnose and resolve occurring, latent and systemic reliability issues across entire stack: hardware, software, application and network. Work closely with development teams to troubleshoot and resolve application and service issues
- Continuously improve Conviva SaaS services and infrastructure for availability, performance and security
- Implement security best practices – primarily patching of operating systems and applications
- Automate everything. Build proactive monitoring and alerting tools. Provide standards, documentation, and coaching to developers.
- Participate in 12x7 on-call rotations
- Work with third party service/support providers for installations, support related calls, problem resolutions etc.
Job Description
- Implement IAM policies and configure VPCs to create a scalable and secure network for the application workloads (see the sketch after this list)
- Will be the client's point of contact for high-priority technical issues and new requirements
- Should act as tech lead, guiding and mentoring the junior members of the team
- Work with client application developers to build, deploy and run both monolithic and microservices based applications on AWS Cloud
- Analyze workload requirements and work with IT stakeholders to define proper sizing for cloud workloads on AWS
- Build, Deploy and Manage production workloads including applications on EC2 instance, APIs on Lambda Functions and more
- Work with IT stakeholders to monitor system performance and proactively improve the environment for scale and security
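As a rough sketch of the VPC and IAM work described in the first bullet above (the CIDR block, tag values and policy contents are placeholders, not values from this posting), the setup might start like this with boto3:

    # Sketch: create a VPC and a narrowly scoped IAM policy with boto3.
    # CIDR block, tag values and the bucket name are placeholders.
    import json

    import boto3

    ec2 = boto3.client("ec2")
    iam = boto3.client("iam")

    # A VPC for application workloads (placeholder CIDR).
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]
    ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "app-vpc"}])

    # A least-privilege policy allowing read-only access to one bucket.
    policy_doc = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-bucket",
                         "arn:aws:s3:::example-bucket/*"],
        }],
    }
    iam.create_policy(PolicyName="example-bucket-read-only",
                      PolicyDocument=json.dumps(policy_doc))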
Qualifications
- Preferably 5+ years of IT experience implementing enterprise applications
- Should be AWS Solution Architect Associate Certified
- Must have 3+ years of experience working as a Cloud Engineer focused on AWS services such as EC2, CloudFront, VPC, CloudWatch, RDS, DynamoDB, Systems Manager, Route53, WAF, API Gateway, Elastic Beanstalk, ECS, ECR, Lambda, SQS, SNS, S3, Elasticsearch, DocumentDB, IAM, etc.
- Must have a strong understanding of EC2 instances, types and deploying applications to the cloud
- Must have a strong understanding of IAM policies, VPC creation, and other security/networking principles
- Must have thorough experience with on-prem to AWS cloud workload migration
- Should be comfortable using AWS and other migration tools
- Should have experience working on AWS performance, cost and security optimisation
- Should have experience implementing automated patching and hardening of systems
- Should be involved in P1 tickets and guide the team wherever needed
- Experience creating backups and managing disaster recovery (see the snapshot sketch after this list)
- Experience with infrastructure-as-code automation using scripts and tools like CloudFormation and Terraform
- Any exposure towards creating CI/CD pipelines on AWS using CodeBuild, CodeDeploy, etc. is an advantage
- Experience with Docker, Bitbucket, ELK and deploying applications on AWS
- Good understanding of Containerisation technologies like Docker, Kubernetes etc.
- Should have experience using and configuring cloud monitoring tools and ITSM ticketing tools
- Good exposure to logging & monitoring tools like Dynatrace, Prometheus, Grafana, ELK/EFK
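For the backup and disaster-recovery item above, snapshot automation with boto3 could look roughly like the following sketch; the tag convention used to select volumes is an assumption for illustration.

    # Sketch: snapshot every EBS volume tagged Backup=true.
    # The tag name/value convention is a placeholder.
    import datetime

    import boto3

    ec2 = boto3.client("ec2")

    def snapshot_tagged_volumes():
        volumes = ec2.describe_volumes(
            Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
        )["Volumes"]
        for volume in volumes:
            description = f"automated-{volume['VolumeId']}-{datetime.date.today().isoformat()}"
            snapshot = ec2.create_snapshot(VolumeId=volume["VolumeId"],
                                           Description=description)
            print("created", snapshot["SnapshotId"], "for", volume["VolumeId"])

    if __name__ == "__main__":
        snapshot_tagged_volumes()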
• Expert troubleshooting skills.
• Expertise in designing highly secure cloud services and cloud infrastructure using AWS (EC2, RDS, S3, ECS, Route53).
• Experience with DevOps tools including Docker, Ansible, Terraform.
• Experience with monitoring tools such as DataDog, Splunk.
• Experience building and maintaining large-scale infrastructure in AWS, including experience leveraging one or more coding languages for automation.
• Experience providing 24x7 on-call production support.
• Understanding of best practices, industry standards and repeatable, supportable processes.
• Knowledge and working experience of container-based deployments such as Docker, Terraform, AWS ECS.
• Knowledge and working experience of TCP/IP, DNS, certs & networking concepts (see the connectivity-check sketch after this list).
• Knowledge and working experience of the CI/CD development pipeline and the CI/CD maturity model (Jenkins).
• Strong core Linux OS skills, shell scripting, Python scripting.
• Working experience of modern engineering operations duties, including providing the necessary tools and infrastructure to support high-performance Dev and QA teams.
• Database/MySQL administration skills are a plus.
• Prior work in high-load and high-traffic infrastructure is a plus.
• Clear vision of and commitment to providing outstanding customer service.
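To make the TCP/IP and DNS item above concrete, here is a standard-library sketch (an illustration, not part of the listing) that resolves a hostname and checks whether a TCP port accepts connections; the host/port pairs are placeholders.

    # Sketch: resolve a hostname and test whether a TCP port is reachable.
    # The host/port pairs are placeholders.
    import socket

    def check_endpoint(host: str, port: int, timeout: float = 3.0) -> str:
        try:
            ip = socket.gethostbyname(host)  # DNS resolution
        except socket.gaierror as exc:
            return f"{host}: DNS failure ({exc})"
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return f"{host} ({ip}):{port} reachable"
        except OSError as exc:
            return f"{host} ({ip}):{port} unreachable ({exc})"

    if __name__ == "__main__":
        for host, port in [("example.com", 443), ("db.internal", 5432)]:
            print(check_endpoint(host, port))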
DevOps Engineer
Job Description:
The position requires a broad set of technical and interpersonal skills, spanning deployment technologies, monitoring and scripting, from networking to infrastructure. The candidate should be well versed in troubleshooting production issues and able to drive them through to root cause analysis (RCA).
Skills:
- Manage VMs across multiple datacenters and AWS to support dev/test and production workloads.
- Strong hands-on experience with Ansible is preferred
- Strong knowledge and hands-on experience in Kubernetes Architecture and administration.
- Should have core knowledge in Linux and System operations.
- Proactively and reactively resolve incidents as escalated from monitoring solutions and end users.
- Conduct and automate audits for network and systems infrastructure.
- Do software deployments, per documented processes, with no impact to customers.
- Follow existing DevOps processes while having the flexibility to create and tweak processes to gain efficiency.
- Troubleshoot connectivity problems across network, systems or applications.
- Follow security guidelines, both policy and technical to protect our customers.
- Ability to automate recurring tasks to increase velocity and quality.
- Should have worked with at least one database (Postgres/Mongo/Cockroach/Cassandra)
- Should have knowledge and hands-on experience in managing ELK clusters (see the cluster-health sketch after this list).
- Scripting knowledge in Shell/Python is an added advantage.
- Hands-on experience with K8s-based microservice architecture is an added advantage.
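Related to the ELK item above, a minimal sketch (assuming a locally reachable node; the URL is a placeholder) that checks Elasticsearch cluster health over its REST API could look like this:

    # Sketch: poll the Elasticsearch _cluster/health endpoint and flag anything
    # that is not "green". The node URL is a placeholder.
    import requests

    ES_URL = "http://localhost:9200"  # placeholder node address

    def cluster_health(base_url: str = ES_URL) -> dict:
        response = requests.get(f"{base_url}/_cluster/health", timeout=5)
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        health = cluster_health()
        if health.get("status") != "green":
            print(f"WARNING: cluster status is {health.get('status')}, "
                  f"{health.get('unassigned_shards')} unassigned shards")
        else:
            print("cluster is green")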
Should be open to embracing new technologies, keeping up with emerging tech.
Strong troubleshooting and problem-solving skills.
Willing to be part of a high-performance team, build mature products.
Should be able to take ownership and work under minimal supervision.
Strong Linux system administration background (minimum 2 years of experience); responsible for handling/defining the organization's infrastructure (hybrid).
Working knowledge of MySQL databases, Nginx, and HAProxy load balancer.
Experience in CI/CD pipelines, configuration management (Ansible/Saltstack) & cloud technologies (AWS/Azure/GCP).
Hands-on experience in GitHub, Jenkins, Prometheus, Grafana, Nagios, and open-source tools.
Strong Shell & Python scripting would be a plus.
Job role
Anaxee is India's REACH Engine! To provide access across India, we need to build highly scalable technology, which in turn needs scalable cloud infrastructure. We’re seeking an experienced cloud engineer with expertise in AWS (Amazon Web Services), GCP (Google Cloud Platform), networking, security, and database management, who will manage, maintain, and monitor our cloud platforms and ensure their security.
You will be surrounded by people who are smart and passionate about the work they are doing.
Every day will bring new and exciting challenges to the job.
Job Location: Indore | Full Time | Experience: 1 year and Above | Salary ∝ Expertise | Rs. 1.8 LPA to Rs. 2.64 LPA
About the company:
Anaxee Digital Runners is building India's largest last-mile Outreach & data collection network of Digital Runners (shared feet-on-street, tech-enabled) to help Businesses & Consumers reach the remotest parts of India, on-demand.
We want to make REACH across India (remotest places), as easy as ordering pizza, on-demand. Already serving 11000 pin codes (57% of India) | Anaxee is one of the very few venture-funded startups in Central India | Website: www.anaxee.com
Important: Check out our company pitch (6 min video) to understand this goal - https://www.youtube.com/watch?v=7QnyJsKedz8
Responsibilities (You will enjoy the process):
#Triage and troubleshoot issues on AWS and GCP, participate in a rotating on-call schedule, and address urgent issues quickly
#Develop and leverage expert-level knowledge of supported applications and platforms in support of project teams (architecture guidance, implementation support) or business units (analysis).
#Monitoring the process on production runs, communicating the information to the advisory team, and raising production support issues to the project team.
#Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management (see the security-group audit sketch after this list)
#Developing and implementing technical efforts to design, build, and deploy AWS and GCP applications at the direction of lead architects, including large-scale data processing and advanced analytics
#Participate in all aspects of the SDLC for AWS and GCP solutions, including planning, requirements, development, testing, and quality assurance
#Troubleshoot incidents, identify root cause, fix, and document problems, and implement preventive measures
#Educate teams on the implementation of new cloud-based initiatives, providing associated training as required
#Build and maintain operational tools for deployment, monitoring, and analysis of AWS and GCP infrastructure and systems; Design, deploy, maintain, automate & troubleshoot virtual servers and storage systems, firewalls, and Load Balancers in our hybrid cloud environment (AWS and GCP)
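As one hedged example of the continuous vulnerability assessment mentioned above (the list of "sensitive" ports is an assumption, not company policy), a small boto3 audit could flag AWS security groups that expose those ports to the world:

    # Sketch: flag security-group rules that expose sensitive ports to 0.0.0.0/0.
    # The set of "sensitive" ports is a placeholder policy choice.
    import boto3

    SENSITIVE_PORTS = {22, 3306, 5432, 6379, 27017}  # SSH and common databases

    def find_open_groups():
        ec2 = boto3.client("ec2")
        findings = []
        for sg in ec2.describe_security_groups()["SecurityGroups"]:
            for permission in sg.get("IpPermissions", []):
                port = permission.get("FromPort")
                open_to_world = any(ip_range.get("CidrIp") == "0.0.0.0/0"
                                    for ip_range in permission.get("IpRanges", []))
                if open_to_world and port in SENSITIVE_PORTS:
                    findings.append((sg["GroupId"], port))
        return findings

    if __name__ == "__main__":
        for group_id, port in find_open_groups():
            print(f"{group_id} exposes port {port} to 0.0.0.0/0")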
What makes a great DevOps Engineer (Cloud) for Anaxee:
#Candidate must have sound knowledge, and hands-on experience, in GCP (Google Cloud Platform) and AWS (Amazon Web Services)
#Good hands-on experience with the Linux operating system or other similar distributions, viz. Ubuntu, CentOS, RHEL/Red Hat, etc.
#1+ years of experience in the industry
#Bachelor's degree preferred with Science/Maths background (B.Sc/BCA/B.E./B.Tech)
#Enthusiasm to learn new software, take ownership, and a latent desire and curiosity in related domains like cloud, hosting, programming, software development, and security.
#Demonstrable skill in troubleshooting a wide range of technical problems at the application and system level, and strong organizational skills with an eye for detail.
#Prior knowledge of risk-chain is an added advantage
#AWS/GCP certifications are a plus
#Previous startup experience would be a huge plus.
The ideal candidate must be experienced in cloud-based tech, with a firm grasp of emerging technologies, platforms, and applications, and the ability to customize them to help our business become more secure and efficient. From day one, you’ll have an immediate impact on the day-to-day efficiency of our IT operations, and an ongoing impact on our overall growth.
What we offer
#Startup Flexibility
#Exciting challenges to learn, grow and implement new ideas
#ESOPs (Employee Stock Ownership Plans)
#Great working atmosphere in a comfortable office,
#And an opportunity to get associated with a fast-growing VC-funded startup.
What happens after you apply?
You will receive an acknowledgment email with company details.
If you get shortlisted, our HR team will get in touch with you (call, email, WhatsApp) in a couple of days.
All remaining information will then be communicated to you via our AMS.
Our expectations before/after you click “Apply Now”
Read about Anaxee: http://www.anaxee.com/
Watch this six-minute pitch to get a better understanding of what we are into: https://www.youtube.com/watch?v=7QnyJsKedz8
Let's dive into detail (Company Presentation): https://bit.ly/anaxee-deck-brands


