DevOps Engineer
Posted by Simran Bhullar
1 - 3 yrs
₹5L - ₹7.5L / yr
Bengaluru (Bangalore)
Skills
Docker
Kubernetes
DevOps
Google Cloud Platform (GCP)
Elastic Search
MongoDB
NodeJS (Node.js)
Cassandra

Designation: DevOps Engineer

Location: HSR, Bangalore


About the Company


Making impact driven by Data. 


Vumonic Datalabs is a data-driven startup providing business insights to e-commerce and e-tail companies, helping them make data-driven decisions to scale up their business and understand their competition better. As one of the EU's fastest-growing (and coolest) data companies, we believe in revolutionizing the way businesses make their most important decisions by providing first-hand, transaction-based insights in real time.



About the Role

 

We are looking for an experienced and ambitious DevOps engineer who will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. As a DevOps engineer at Vumonic Datalabs, you will have the opportunity to work with a thriving global team to help us build functional systems that improve customer experience. If you have a strong background in software engineering, are hungry to learn, are passionate about your work, and are familiar with the technical skills listed below, we'd love to speak with you.



What you’ll do


  • Optimize and engineer the DevOps infrastructure for high availability, scalability, and reliability.
  • Monitor server logs and manage cloud resources.
  • Build and set up new development tools and infrastructure to reduce the occurrence of errors.
  • Understand the needs of stakeholders and convey them to developers.
  • Design scripts to automate and improve development and release processes (see the sketch after this list).
  • Test and examine code written by others and analyze the results.
  • Ensure that systems are safe and secure against cybersecurity threats.
  • Identify technical problems, perform root-cause analysis for production errors, and develop software updates and fixes.
  • Work with software developers and engineers to ensure that development follows established processes and communicates actively with the operations team.
  • Design procedures for system troubleshooting and maintenance.
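
By way of illustration only (not part of the role description), here is a minimal Python sketch of the kind of release-automation script described above. The registry, image, and deployment names are hypothetical placeholders, and it assumes docker and kubectl are installed and already authenticated against your registry and cluster.

#!/usr/bin/env python3
"""Minimal release-automation sketch: build and push a Docker image, then
roll it out to a Kubernetes deployment. All names (registry, deployment,
container) are hypothetical placeholders."""
import subprocess
import sys

REGISTRY = "gcr.io/example-project"   # hypothetical registry
APP = "example-api"                   # hypothetical app/deployment/container name

def run(cmd):
    """Run a command and abort the release if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def release(version: str) -> None:
    image = f"{REGISTRY}/{APP}:{version}"
    run(["docker", "build", "-t", image, "."])          # build the image
    run(["docker", "push", image])                      # push to the registry
    # Update the running deployment and wait for the rollout to finish.
    run(["kubectl", "set", "image", f"deployment/{APP}", f"{APP}={image}"])
    run(["kubectl", "rollout", "status", f"deployment/{APP}", "--timeout=300s"])

if __name__ == "__main__":
    release(sys.argv[1] if len(sys.argv) > 1 else "latest")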


What you need to have


TECHNICAL SKILLS

  • Experience working with the following tools: Google Cloud Platform, Kubernetes, Docker, Elasticsearch, Terraform, Redis (a connectivity-check sketch follows this list)
  • Experience with the following tools preferred: Python, Node.js, MongoDB, Rancher, Cassandra
  • Experience with real-time monitoring of cloud infrastructure using publicly available tools and servers
  • 2 or more years of experience as a DevOps engineer (startup/technical experience preferred)
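
As a quick, hedged illustration of the listed tooling, the sketch below checks connectivity to an Elasticsearch cluster and a Redis instance. The hosts and ports are assumed local defaults, and it assumes the elasticsearch (version 8+) and redis Python client packages are installed.

"""Connectivity smoke test for two of the listed tools (Elasticsearch and
Redis). Hosts/ports are hypothetical defaults."""
from elasticsearch import Elasticsearch
import redis

def check_elasticsearch(url: str = "http://localhost:9200") -> str:
    es = Elasticsearch(url)
    health = es.cluster.health()          # reports green / yellow / red
    return health["status"]

def check_redis(host: str = "localhost", port: int = 6379) -> bool:
    client = redis.Redis(host=host, port=port)
    return client.ping()                  # True if the server responds

if __name__ == "__main__":
    print("elasticsearch status:", check_elasticsearch())
    print("redis reachable:", check_redis())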

You are

  • Excited to learn; a hustler and a doer.
  • Passionate about building products that create impact.
  • Up to date with the latest technological developments and enjoy upskilling yourself with market trends.
  • Willing to experiment with novel ideas and take calculated risks.
  • Someone with a problem-solving attitude and the ability to handle multiple tasks while meeting deadlines.
  • Interested in working as part of a supportive, highly motivated, and fun team.

About VUMONIC

Founded: 2016
Type: Services
Size: 20-100
Stage: Profitable

About

Vumonic offers an online, subscription-based competitive intelligence platform providing information on market share and transaction share for the e-commerce industry. It collects purchase information directly from customer email inboxes, is device-independent, and sources data through in-house techniques and third-party partners. It claims to have built 30k+ data sets since 2013.

Connect with the team

Simran Bhullar
Isha Shetty


Similar jobs

Adesso India
Agency job via HashRoot, posted by Deepak S
Remote only
5 - 12 yrs
₹10L - ₹25L / yr
Elastic Search
Ansible
Amazon Web Services (AWS)
DevOps
AWS CloudFormation

Overview

adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.

Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.


Job Description

The client's DPS (Digital People Solutions) department offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which comprises the 40 largest and most liquid companies on the Frankfurt Stock Exchange.

We are seeking talented DevOps engineers with a focus on the Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack (ELK). You will independently handle tasks related to architecture, setup, technical migration, and documentation.

The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for the customer's employees and customers alike.


Responsibilities:

Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, the APM server, APM agents, and interface configuration.

Create and develop regular "Default Dashboards" for visualizing metrics from various sources like Apache Webserver, application servers and databases.

Improve and fix bugs in installation and automation routines.

Monitor CPU usage, security findings, and AWS alerts.

Develop and extend "Default Alerting" for issues like OOM errors, datasource issues, and LDAP errors.

Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).

Implement machine learning, uptime monitoring including SLA, JIRA integration, security analysis, anomaly detection, and other useful ELK Stack features.

Integrate data from AWS CloudWatch (see the sketch after these responsibilities).

Document all relevant information and train involved personnel in the used technologies.
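
Purely as an illustrative sketch of the CloudWatch integration item above (not the team's actual implementation): pull an EC2 CPU metric with boto3 and index the datapoints into Elasticsearch so they can feed a Kibana dashboard. The region, instance ID, endpoint, and index name are assumptions, and the document= keyword assumes an Elasticsearch Python client of version 8 or later.

"""Sketch of one possible CloudWatch-to-Elasticsearch integration."""
from datetime import datetime, timedelta, timezone

import boto3
from elasticsearch import Elasticsearch

cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")     # assumed region
es = Elasticsearch("https://elastic.example.internal:9200")             # hypothetical endpoint

def ship_cpu_metrics(instance_id: str = "i-0123456789abcdef0") -> int:
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(hours=1),
        EndTime=end,
        Period=300,                 # 5-minute datapoints
        Statistics=["Average"],
    )
    for point in stats["Datapoints"]:
        es.index(
            index="cloudwatch-ec2-cpu",
            document={
                "instance_id": instance_id,
                "timestamp": point["Timestamp"].isoformat(),
                "cpu_avg": point["Average"],
            },
        )
    return len(stats["Datapoints"])

if __name__ == "__main__":
    print("indexed datapoints:", ship_cpu_metrics())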


Requirements:

Experience with Elastic Stack (ELK) components and related technologies.

Proficiency in automation tools like Ansible and CloudFormation.

Strong knowledge of AWS Cloud services.

Experience in creating and managing dashboards and alerts.

Familiarity with IAM roles and rights management.

Ability to document processes and train team members.

Excellent problem-solving skills and attention to detail.

 

Skills & Requirements

Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.

LogiNext
Posted by Rakhi Daga
Mumbai
11 - 15 yrs
₹1L - ₹15L / yr
Microservices
Linux/Unix
Python
Shell Scripting
Amazon Web Services (AWS)

Only apply via this link: https://loginext.hire.trakstar.com/jobs/fk025uh?source=

LogiNext is looking for a technically savvy and passionate Associate Vice President - Product Engineering - DevOps or Senior Database Administrator to drive the development and operations efforts for the product. You will choose and deploy tools and technologies to build and support a robust infrastructure.

You have hands-on experience in building secure, high-performing, and scalable infrastructure. You have experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging, and production environments.

 

Responsibilities:

  • Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
  • Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
  • Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
  • Plan, implement, and maintain robust backup and restoration policies ensuring low RTO and RPO (see the sketch after this list)
  • Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
  • Define and build processes to identify performance bottlenecks and scaling pitfalls
  • Manage robust monitoring and alerting infrastructure 
  • Explore new tools to improve development operations to automate daily tasks
  • Ensure high availability and auto-failover with minimal or no manual intervention
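
For illustration only (not LogiNext's actual tooling), here is a minimal Python sketch of the backup item above: snapshot every EBS volume that carries a Backup=true tag. The tag key and region are assumptions, and retention/restoration handling is left out.

"""Illustrative backup sketch: snapshot EBS volumes tagged Backup=true."""
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # assumed region

def snapshot_tagged_volumes() -> list[str]:
    """Create a snapshot for each tagged volume; return the snapshot IDs."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Volumes"]
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M")
    snapshot_ids = []
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"automated backup {vol['VolumeId']} {stamp}",
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids

if __name__ == "__main__":
    for snap_id in snapshot_tagged_volumes():
        print("created", snap_id)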


Requirements:

  • Bachelor’s degree in Computer Science, Information Technology or a related field
  • 11 to 14 years of experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
  • Strong background in Linux/Unix Administration and Python/Shell Scripting
  • Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
  • Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
  • Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
  • Experience in query analysis, performance tuning, and database redesign
  • Experience in enterprise application development, maintenance and operations
  • Knowledge of best practices and IT operations in an always-up, always-available service
  • Excellent written and oral communication skills, judgment and decision-making skills.
  • Excellent leadership skills.
Bito Inc
Posted by Amrit Dash
Remote only
5 - 8 yrs
Best in industry
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Windows Azure
Ansible
Chef

Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.

 

Our founders have previously started, built, and taken a company public (NASDAQ: PUBM) worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!

 

We are building this company with a fully remote approach, with our main teams in the US and India for time-zone coverage. The founders happen to be in Silicon Valley and India.

 

We are hiring a DevOps Engineer to join our team.

 

Responsibilities:

  • Collaborate with the development team to design, develop, and implement Java-based applications
  • Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
  • Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
  • Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
  • Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
  • Evaluate and define/modify configuration management strategies and processes using Ansible (see the sketch after this list)
  • Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
  • Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort

Requirements:

  • At least 4 years of relevant work experience in a DevOps role
  • At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
  • Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
  • Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
  • Mastery in configuration automation tool sets such as Ansible, Chef, etc
  • Proficiency with Jira, Confluence, and Git toolset
  • Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
  • Proven ability to manage and prioritize multiple diverse projects simultaneously

What do we offer: 

At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology. 

  • Work from anywhere
  • Flexible work timings
  • Competitive compensation, including stock options
  • A chance to work in the exciting generative AI space
  • Quarterly team offsite events

HappyFox
Posted by Lindsey A
Chennai, Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure

About us:

HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.

 

We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.

 

To know more, visit https://www.happyfox.com/

 

Responsibilities:

  • Build and scale production infrastructure in AWS for the HappyFox platform and its products.
  • Research, build, and implement systems, services, and tooling to improve the uptime, reliability, and maintainability of our backend infrastructure and to meet our internal SLOs and customer-facing SLAs.
  • Manage and patch servers running Unix-based operating systems such as Ubuntu Linux.
  • Write automation scripts and build infrastructure tools using Python/Ruby/Bash/Golang.
  • Implement consistent observability, deployment, and IaC setups.
  • Patch production systems to fix security/performance issues.
  • Actively respond to escalations/incidents in the production environment from customers or the support team.
  • Mentor other infrastructure engineers, review their work, and continuously ship improvements to production infrastructure.
  • Build and manage development infrastructure and CI/CD pipelines for our teams to ship and test code faster.
  • Participate in infrastructure security audits (see the sketch after this list).
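
To illustrate the security-audit item above (a sketch under assumptions, not HappyFox's actual process): a read-only boto3 check that flags security groups with rules open to 0.0.0.0/0. The region is an assumption.

"""Flag security groups exposing ports to the whole internet."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

def open_to_world():
    """Yield (group_id, group_name, port) for ingress rules open to 0.0.0.0/0."""
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group["IpPermissions"]:
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                yield group["GroupId"], group["GroupName"], rule.get("FromPort", "all")

if __name__ == "__main__":
    for group_id, name, port in open_to_world():
        print(f"{group_id} ({name}) exposes port {port} to the internet")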

 

Requirements:

  • At least 5 years of experience in handling/building Production environments in AWS.
  • At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
  • Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
  • Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
  • Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
  • Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
  • Proficient in writing automation scripts using any scripting language such as Python, Ruby, Bash etc.,
  • Experience in setting up and managing test/staging environments, and CI/CD pipelines.
  • Experience in IaC tools such as Terraform or AWS CDK
  • Passion for making systems reliable, maintainable, scalable and secure.
  • Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
  • Bonus points if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.

 

SuperProcure
Posted by Jyothsna Samanth
Remote only
3 - 9 yrs
₹9L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

Job Summary: We are looking for a senior DevOps engineer to help us build functional systems that improve customer experience. You will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs.


Key Responsibilities


  • Utilise various open-source technologies and build independent web-based tools, microservices, and solutions
  • Write deployment scripts
  • Configure and manage data sources like MySQL, Mongo, Elasticsearch, etc.
  • Configure and deploy pipelines for various microservices using CI/CD tools
  • Set up automated server monitoring and ensure HA adherence (see the sketch after this list)
  • Define and set development, test, release, update, and support processes for the DevOps operation
  • Coordinate and communicate within the team and with customers where integrations are required
  • Work with company personnel to define technical problems and requirements, determine solutions, and implement those solutions
  • Work with the product team to design automated pipelines to support SaaS delivery and operations on cloud platforms
  • Review and act on the service requests, infrastructure requests, and incidents logged by our implementation teams and clients; identify, analyse, and resolve infrastructure vulnerabilities and application deployment issues
  • Modify and improve existing systems; suggest and implement process improvements
  • Collaborate with software engineers to help them deploy and operate different systems, and help automate and streamline the company's operations and processes
  • Develop interface simulators and design automated module deployments
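
A minimal sketch of the kind of automated server-monitoring check mentioned above, using only the Python standard library. The mount point, thresholds, and alerting action are assumptions; a real setup would feed a monitoring system rather than print.

"""Basic host check: warn when disk usage or load average cross thresholds."""
import os
import shutil
import sys

DISK_PATH = "/"          # mount point to watch (assumption)
DISK_LIMIT = 0.85        # alert above 85% used
LOAD_LIMIT = 4.0         # alert above this 5-minute load average

def check() -> list[str]:
    problems = []
    usage = shutil.disk_usage(DISK_PATH)
    used_ratio = usage.used / usage.total
    if used_ratio > DISK_LIMIT:
        problems.append(f"disk {DISK_PATH} at {used_ratio:.0%}")
    load_5min = os.getloadavg()[1]        # Unix/Linux hosts only
    if load_5min > LOAD_LIMIT:
        problems.append(f"5-minute load average {load_5min:.2f}")
    return problems

if __name__ == "__main__":
    issues = check()
    for issue in issues:
        print("ALERT:", issue)   # in practice this would page or post to chat
    sys.exit(1 if issues else 0)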

 

Key Skills

  • Bachelor's degree in software engineering, computer science, information technology, or information systems.
  • 3+ years of experience in managing Linux based cloud microservices infrastructure (AWS, GCP or Azure)
  • Hands-on experience with databases including MySQL. 
  • Experience with OS tuning and optimization for running databases and other scalable microservice solutions
  • Proficient working with git repositories and git workflows
  • Able to setup and manage CI/CD pipelines
  • Excellent troubleshooting, working knowledge of various tools, open-source technologies, and cloud services
  • Awareness of critical concepts in DevOps and Agile principles
  • Sense of ownership and pride in your performance and its impact on the company's success
  • Critical thinking and problem-solving skills
  • Extensive experience in DevOps engineering, team management, and collaboration.
  • Ability to install and configure software, gather test-stage data, and perform debugging.
  • Ability to ensure smooth software deployment by writing script updates and running diagnostics.
  • Proficiency in documenting processes and monitoring various metrics.
  • Advanced knowledge of best practices related to data encryption and cybersecurity.
  • Ability to keep up with software development trends and innovation.
  • Exceptional interpersonal and communication skills

Experience:

  • Must have 4+ years of experience as a DevOps engineer in a SaaS product-based company


About SuperProcure


SuperProcure is a leading logistics and supply chain management solutions provider that aims to bring efficiency, transparency, and process optimization across the globe with the help of technology and data. SuperProcure started its journey in 2017 to help companies digitize their logistics operations. We created industry-recognized products which are now being used by 150+ companies such as Tata Consumer Products, ITC, Flipkart, Tata Chemicals, PepsiCo, L&T Constructions, GMM Pfaudler, Havells, and others. It helps achieve real-time visibility, 100% audit adherence and transparency, a 300% improvement in team productivity, up to 9% savings in freight costs, and many more benefits. SuperProcure is determined to make the lives of logistics teams easier, add value, and help establish a fair and beneficial process for the company.



SuperProcure is backed by IndiaMart and incubated under IIMCIP & Lumis Supply Chain Labs. SuperProcure was also recognized among the Top 50 Emerging Start-ups of India at the NASSCOM Product Conclave organized in Bengaluru and was part of the recently launched National Logistics Policy announced by the Prime Minister of India. More details about our journey can be found here.


Life @ SuperProcure 


SuperProcure operates in an extremely innovative, entrepreneurial, analytical, and problem-solving work culture. Every team member is fully motivated and committed to the company's vision and believes in getting things done. In our organization, every employee is the CEO of what he/she does; from conception to execution, the work needs to be thought through.


Our people are the core of our organization, and we believe in empowering them and making them a part of the daily decision-making, which impacts the business and shapes the company's overall strategy. They are constantly provided with resources, mentorship, and support from our highly energetic teams and leadership. SuperProcure is extremely inclusive and believes in collective success.


Looking for a bland, routine 9-6 job? PLEASE DO NOT APPLY. Looking for a job where you wake up and add significant value to a $180 billion logistics industry every day? DO APPLY.



OTHER DETAILS

  • Engagement: Full Time
  • No. of openings: 1
  • CTC: 12 - 20 LPA


MyNextDeveloper
Posted by Neha Gandhi
Remote only
2 - 5 yrs
₹15L - ₹30L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

Company Introduction :


My Next Developer is a global network of top talent in business, design, and technology that enables companies to scale their teams, on-demand.


We take the best elements of virtual teams and combine them with a support structure that encourages innovation, social interaction, and fun. We see no borders, move at a fast pace, and are never afraid to break the mold.


Job Responsibilities


- Ensure security in the development activities.


- Implement risk management techniques and threat modeling.


- Implement infrastructure automation, monitoring and alerts as part of ISO 27001 and SOC 2 certifications.


- Collaborate with internal teams to produce the best security solutions


Minimum requirements :


- Bachelor's/Master's degree in computer science, cybersecurity, engineering, or an equivalent field.


- 3+ years of experience as a DevSecOps engineer.


- Proficiency in back-end technologies such as NodeJs or Python.


- Expertise in using DevOps tools like GitHub, dependency management, and CI/CD.


- Profound knowledge of AWS cloud, DevOps culture and automation tools.


- Fluent in both spoken and written English communication.


- Ability to work full-time (40 hours/week) with a 2-3 hour overlap with European time zone.


- Stay up-to-date on cybersecurity threats and follow the best practices.


- Previous project experience related to ISO 27001 and SOC 2 certifications.

ekzero corporation
Posted by Rina Rathod
Vadodara, Gujarat
0 - 7 yrs
₹1.8L - ₹8L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
Key Responsibilities
  • Demonstrated experience with AWS
  • Knowledge of servers, networks, storage, client-server systems, and firewalls
  • Strong expertise in Windows and/or Linux operating systems, including system architecture and design, as well as experience supporting and troubleshooting stability and performance issues
  • Thorough understanding of and experience with virtualization technologies (e.g., VMWare/Hyper-V)
  • Knowledge of core network services such as DHCP, DNS, IP routing, VLANs, layer 2/3 routing, and load balancing is required
  • Experience in reading, writing, or modifying PowerShell, Bash scripts, and Python code. Experience using Git.
  • Working know-how of software-defined lifecycles, product packaging, and deployments
  • PostgreSQL or Oracle database administration (backup, restore, tuning, monitoring, management)
  • At least two AWS Associate-level certifications from: Solutions Architect, DevOps, or SysOps
  • At least one AWS Professional-level certification from: Solutions Architect or DevOps
Primary Skills:
  • AWS: S3, Redshift, DynamoDB, EC2, VPC, Lambda, CloudWatch etc.
  • Big data: Databricks, Cloudera, Glue, and Athena
  • DevOps: Jenkins, Bitbucket
  • Automation: Terraform, CloudFormation, Python, shell scripting; experience in automating AWS infrastructure with Terraform.
  • Experience in database technologies is a plus.
  • Knowledge in all aspects of DevOps (source control, continuous integration, deployments, etc.)
  • Proficiency in security implementation best practices for IAM policies, KMS encryption, secrets management, network security groups, etc. (see the sketch after this list)
  • Experience working in the SCRUM Environment
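
As an illustrative sketch of the security best practices listed above (an assumption-laden example, not ekzero's tooling): a read-only boto3 audit that flags S3 buckets without a default server-side encryption configuration. The error code checked below is the one S3 normally returns when no encryption configuration exists.

"""Flag S3 buckets with no default server-side encryption."""
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def unencrypted_buckets() -> list[str]:
    missing = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                missing.append(name)      # no default SSE configured
            else:
                raise
    return missing

if __name__ == "__main__":
    for name in unencrypted_buckets():
        print("bucket without default encryption:", name)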
 
LDT Technology
Remote only
1 - 5 yrs
₹2.1L - ₹15L / yr
Java
Android Development
React.js
NodeJS (Node.js)
AngularJS (1.x)
We're hiring for Java, Android, React, and Node.js roles.
1. Profile- Java Developers
Technology- Spring Boot, Hibernate, REST API, Microservices
Experience- 3 to 7 years

2. Profile- NodeJS Developers

Technology- NodeJS, ExpressJS, MongoDB, MySQL, PostgreSQL
Experience- 4 to 8 years

3. Profile- ReactJs Developers
Technology- Reactjs, Redux, Html, CSS, Javascript
Experience- 3 to 7 years

4. Profile- Android Developers 

Technology- Android, Java, Kotlin, Flutter
Experience- 3 to 6 years

Qualification- BCA, MCA, B.Sc. IT, M.Sc. IT, B.Tech IT, M.Tech IT, or relevant experience in the same field.
Notice Period- Immediate joiners or a maximum of 15 days.

Salary- Good Hike on Current CTC.
Organization- LDT Technology
Company's website link- http://www.ldttechnology.com/
Location- Zirakpur [Mohali] near Chandigarh
Banyan Data Services
Posted by Sathish Kumar
Bengaluru (Bangalore)
2 - 10 yrs
₹5L - ₹15L / yr
Java
Python
Spark
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Cloud Software Engineer

Notice Period: 45 days / Immediate Joining

 

Banyan Data Services (BDS) is a US-based infrastructure services company headquartered in San Jose, California. It provides full-stack managed services to support business applications and data infrastructure, and delivers data solutions and services on bare metal, on-premises, and all cloud platforms. Our engagement service is built on standard DevOps practices and the SRE model.

 

We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, software as a service, and cloud services to create a niche in the market.

 

Roles and Responsibilities

· A wide variety of engineering projects, including data visualization, web services, data engineering, web portals, SDKs, and integrations in numerous languages, frameworks, and cloud platforms

· Apply continuous delivery practices to deliver high-quality software and value as early as possible.

· Work in collaborative teams to build new experiences

· Participate in the entire cycle of software consulting and delivery from ideation to deployment

· Integrating multiple software products across cloud and hybrid environments

· Developing processes and procedures for software applications migration to the cloud, as well as managed services in the cloud

· Migrating existing on-premises software applications to cloud leveraging a structured method and best practices

 

Desired Candidate Profile : *** freshers can also apply ***

 

· 2+ years of experience with one or more development languages such as Java, Python, or Spark.

· 1+ year of experience with private/public/hybrid cloud model design, implementation, orchestration, and support.

· Certification in, or completed training for, any one of the cloud environments such as AWS, GCP, Azure, Oracle Cloud, or DigitalOcean.

· Strong problem-solvers who are comfortable in unfamiliar situations, and can view challenges through multiple perspectives

· Driven to develop technical skills for oneself and team-mates

· Hands-on experience with cloud computing and/or traditional enterprise datacentre technologies, i.e., network, compute, storage, and virtualization.

· Possess at least one cloud-related certification from AWS, Azure, or equivalent

· Ability to write high-quality, well-tested code and comfort with Object-Oriented or functional programming patterns

· Past experience quickly learning new languages and frameworks

· Ability to work with a high degree of autonomy and self-direction

http://www.banyandata.com

A Reputed IT Services Company
Agency job via Confidenthire, posted by Aprajeeta Sinha
Navi Mumbai, Pune, Mumbai
4 - 7 yrs
₹10L - ₹20L / yr
DevOps
Terraform
Docker
Kubernetes
Google Cloud Platform (GCP)
  1. GCP Cloud experience mandatory
  2. CI/CD - Azure DevOps
  3. IaC tools – Terraform
  4. Experience with IAM / Access Management within cloud
  5. Networking / Firewalls
  6. Kubernetes / Helm / Istio