Cloud and Infrastructure Automation Consultant
at Ashnik

Agency job
8 - 15 yrs
₹10L - ₹20L / yr
Remote only
Skills
Infrastructure
Automation
Python
Amazon Web Services (AWS)
Windows Azure
Ruby
PowerShell
RHEL

Position: Cloud and Infrastructure Automation Consultant

Location: India (Pan India), Work from Home

The position:

This exciting role in Ashnik’s consulting team offers a great opportunity to design and deploy automation solutions for Ashnik’s enterprise customers across Southeast Asia and India. You will take the lead in advising customers on automating their cloud and datacentre resources, work hands-on with your team on infrastructure solutions, and automate infrastructure deployments that are secure and compliant. You will also provide implementation oversight of solutions that overcome technology and business challenges.

Responsibilities:

· Lead consultative discussions to identify customer challenges and suggest the right-fit open source tools

· Independently determine the needs of the customer and create solution frameworks

· Design and develop moderately complex software solutions to meet those needs

· Use a process-driven approach in designing and developing solutions

· Create consulting work packages and detailed SOWs, and assist the sales team in positioning them to enterprise customers

· Implement automation recipes (Ansible/Chef) and scripts (Ruby, PowerShell, Python) as part of an automated installation/deployment process; a minimal sketch of such a script follows
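
As an illustration of the scripted deployment work described in the last point, here is a minimal, hypothetical Python wrapper that drives an Ansible playbook run; the playbook, inventory, and variable names are placeholders, not Ashnik deliverables.

```python
# Hypothetical sketch: driving an Ansible playbook from Python as one step of an
# automated deployment. Playbook, inventory, and variable names are illustrative.
import subprocess
import sys

def run_playbook(playbook: str, inventory: str, extra_vars: dict | None = None) -> int:
    cmd = ["ansible-playbook", "-i", inventory, playbook]
    for key, value in (extra_vars or {}).items():
        cmd += ["-e", f"{key}={value}"]      # pass deployment variables to Ansible
    result = subprocess.run(cmd)             # stream task output to the deployment log
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_playbook("site.yml", "inventories/prod", {"app_version": "1.4.2"}))
```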

Experience and skills required:

· 8 to 10 years of experience in IT infrastructure

· Proven technical skill in designing and delivering enterprise-level solutions involving integration of complex technologies

· 6+ years of experience with RHEL/Windows system automation

· 4+ years of experience using Python and/or Bash scripting to solve and automate common system tasks (see the sketch after this list)

· Strong understanding and knowledge of networking architecture

· Experience with Sentinel Policy as Code

· Strong understanding of AWS and Azure infrastructure

· Experience deploying and using automation tools such as Terraform, CloudFormation, CI/CD pipelines, Jenkins and GitHub Actions

· Experience with HashiCorp Configuration Language (HCL) for module and policy development

· Knowledge of cloud tools including CloudFormation, CloudWatch, Control Tower, CloudTrail and IAM is desirable
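
As a concrete example of the Python scripting and AWS knowledge listed above, the sketch below (assuming boto3 and configured AWS credentials; the region and tag name are placeholders) flags EC2 instances that are missing a mandatory tag, a common compliance automation task.

```python
# Illustrative compliance check (assumes boto3 and AWS credentials are configured):
# list EC2 instances that are missing a required "Owner" tag.
import boto3

def untagged_instances(region: str = "ap-south-1", required_tag: str = "Owner") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"Missing required tag: {instance_id}")
```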

This role requires a high degree of self-initiative, the ability to work with diverse teams, and engagement with customers spread across Southeast Asia and India. You are expected to be proactive in communicating with customers and internal teams about industry trends and technology developments, and in creating thought leadership.

 

About Us

Ashnik is a leading enterprise open-source solutions company in Southeast Asia and India, enabling organizations to adopt open source for their digital transformation goals. Founded in 2009, it offers a full-fledged Open-Source Marketplace, Solutions, and Services – Consulting, Managed, Technical, and Training. More than 200 leading enterprises have leveraged Ashnik’s offerings in the areas of Database platforms, DevOps & Microservices, Kubernetes, Cloud, and Analytics.

As a team culture, Ashnik is a family for its team members. Each member brings in a different perspective, new ideas and diverse background. Yet we all together strive for one goal – to deliver the best solutions to our customers using open-source software. We passionately believe in the power of collaboration. Through an open platform of idea exchange, we create a vibrant environment for growth and excellence.


Package: up to ₹20L

Experience: 8 yrs


Similar jobs

Flytbase
3 recruiters
Posted by Arshia Verma
Pune
0 - 2 yrs
₹8L - ₹15L / yr
CI/CD
Amazon Web Services (AWS)
Docker
Artificial Intelligence (AI)

We’re Hiring: AI-Native DevSecOps Engineers at FlytBase

📍 Location: Pune (Onsite)


This will be one of the most challenging and rewarding roles of your career.

At FlytBase, prompting is thinking. We don’t care if you’ve memorized APIs or collected DevOps certifications.

We care how you think, how you debug under pressure, and how you solve infra problems no one else can.

We don’t hire engineers to maintain systems.

We hire infra architects to design infrastructure that runs—and evolves—on its own.

If you need step-by-step tickets or constant direction—don’t apply.

We’re looking for engineers who can design, secure, and scale real-world systems with speed, clarity, and zero excuses.

If that’s what you’ve been waiting for—read on.


The Mission

FlytBase is building the autonomous backbone for aerial intelligence—drone fleets that operate 24/7 at industrial scale.

Our platform flies missions, detects anomalies, and delivers insights—with no human in the loop.

The infra behind this? That’s what you build.


Your Loop

• Design AI-secured CI/CD pipelines (GitHub, Docker, Terraform, SAST/DAST)

• Architect AWS infra (EC2, S3, IAM, VPCs, EKS) with Infrastructure-as-Code

• Build intelligent observability systems (Grafana, Dynatrace, LLM-based detection)

• Define fallback, rollback & recovery loops that don’t need escalation (a minimal sketch follows this list)

• Enforce compliance (SOC2, ISO27001, GDPR) without killing velocity

• Own SLAs from definition → delivery → defense

• Automate what used to need a team. Then automate again.
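
To make the rollback-loop idea above concrete, here is a hedged Python sketch of a post-deploy health gate; the health endpoint and the kubectl rollback command are placeholders, not FlytBase internals.

```python
# Hypothetical post-deploy health gate: if the service's health endpoint keeps
# failing, trigger a rollback. URL and rollback command are placeholders.
import subprocess
import time
import requests

HEALTH_URL = "https://api.example.internal/healthz"                 # placeholder
ROLLBACK_CMD = ["kubectl", "rollout", "undo", "deployment/api"]     # illustrative

def healthy(url: str, timeout: float = 3.0) -> bool:
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

def health_gate(retries: int = 5, delay: float = 10.0) -> None:
    for attempt in range(1, retries + 1):
        if healthy(HEALTH_URL):
            print(f"Attempt {attempt}: healthy, deployment stands.")
            return
        print(f"Attempt {attempt}: unhealthy, retrying in {delay}s...")
        time.sleep(delay)
    print("Health gate failed; rolling back.")
    subprocess.run(ROLLBACK_CMD, check=True)   # escalation-free recovery step

if __name__ == "__main__":
    health_gate()
```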


Who We’re Looking For

✅ You treat infrastructure like a product

✅ You already use AI Tools to move 5x faster

✅ You can go from “zero” to “live infra” in 48 hours


Bonus points if you’ve:

• Built custom DevOps bots or AI agents

• Got an OSCP or hacked together your own SOC2 framework

• Shipped production infra solo

• Open-sourced your infra tools or scripts


Join Us

If you’re still reading—good. That’s a signal.


Apply here 👉 https://lnkd.in/gsqjaJSP

Flytbase
3 recruiters
Posted by Shilpa Kumari
Pune
0 - 4 yrs
₹6L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Position: SDE-1 DevSecOps

Location: Pune, India

Experience Required: 0+ Years


We are looking for a DevSecOps engineer to contribute to product development, mentor team members, and devise creative solutions for customer needs. We value effective communication in person, in documentation, and in code. Ideal candidates thrive in small, collaborative teams, love making an impact, and take pride in their work with a product-focused, self-driven approach. If you're passionate about integrating security and deployment seamlessly into the development process, we want you on our team.


About FlytBase


FlytBase is a global leader in enterprise drone software automation. The FlytBase platform enables drone-in-a-box deployments across the globe and has the largest network of partners in 50+ countries.


The team comprises young engineers and designers from top-tier universities such as IIT-B, IIT-KGP, the University of Maryland, Georgia Tech, COEP, SRM and KIIT, with deep expertise in drone technology, computer science, electronics, aerospace, and robotics.

 

The company is headquartered in Silicon Valley, California, USA, and has R&D offices in Pune, India. Widely recognized as a pioneer in the commercial drone ecosystem, FlytBase continues to win awards globally - FlytBase was the Global Grand Champion at the ‘NTT Data Open Innovation Contest’ held in Tokyo, Japan, and was the recipient of the ‘TiE50 Award’ at TiE Silicon Valley.


Role and Responsibilities:


  • Participate in the creation and maintenance of CI/CD solutions and pipelines.
  • Leverage Linux and shell scripting for automating security and system updates, and design secure architectures using AWS services (VPC, EC2, S3, IAM, EKS/Kubernetes) to enhance application deployment and management.
  • Build and maintain secure Docker containers, manage orchestration using Kubernetes, and automate configuration management with tools like Ansible and Chef, ensuring compliance with security standards.
  • Implement and manage infrastructure using Terraform, aligning with security and compliance requirements, and set up Dynatrace for advanced monitoring, alerting, and visualization of security metrics. Develop Terraform scripts to automate and optimize infrastructure provisioning and management tasks.
  • Utilize Git for secure source code management and integrate continuous security practices into CI/CD pipelines, applying vulnerability scanning and automated security testing tools.
  • Contribute to security assessments, including vulnerability and penetration testing, NIST, CIS AWS, NIS2 etc.
  • Implement and oversee compliance processes for SOC II, ISO27001, and GDPR.
  • Stay updated on cybersecurity trends and best practices, including knowledge of SAST and DAST tools and the OWASP Top 10.
  • Automate routine tasks and create tools to improve team efficiency and system robustness.
  • Contribute to disaster recovery plans and ensure robust backup systems are in place.
  • Develop and enforce security policies and respond effectively to security incidents.
  • Manage incident response protocols, including on-call rotations and strategic planning.
  • Conduct post-incident reviews to prevent recurrence and refine the system reliability framework.
  • Implement Service Level Indicators (SLIs) and maintain Service Level Objectives (SLOs) and Service Level Agreements (SLAs) to ensure high standards of service delivery and reliability (a small sketch of an SLI calculation follows).
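
As a minimal illustration of the SLI/SLO work mentioned in the last point, the Python sketch below computes an availability SLI and the remaining error budget; the request counts and the 99.9% SLO target are purely illustrative.

```python
# Minimal sketch: availability SLI and remaining error budget from request counts.
# The numbers and the 99.9% SLO target are illustrative only.
def availability_sli(total_requests: int, failed_requests: int) -> float:
    if total_requests == 0:
        return 1.0
    return (total_requests - failed_requests) / total_requests

def error_budget_remaining(sli: float, slo: float = 0.999) -> float:
    allowed_failure = 1.0 - slo               # e.g. 0.1% of requests may fail
    actual_failure = 1.0 - sli
    return 1.0 - (actual_failure / allowed_failure)

if __name__ == "__main__":
    sli = availability_sli(total_requests=1_200_000, failed_requests=840)
    print(f"SLI: {sli:.5f}")                                             # 0.99930
    print(f"Error budget remaining: {error_budget_remaining(sli):.0%}")  # 30%
```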


Best suited for candidates who: (Skills/Experience)


  • Up to 4 years of experience in a related field, with a strong emphasis on learning and execution.
  • Background in IT or computer science.
  • Familiarity with CI/CD tools, cloud platforms (AWS, Azure, or GCP), and programming languages like Python, JavaScript, or Ruby.
  • Solid understanding of network layers and TCP/IP protocols.
  • In-depth understanding of operating systems, networking, and cloud services.
  • Strong problem-solving skills with a 'hacker' mindset.
  • Knowledge of security principles, threat modeling, risk assessment, and vulnerability management is a plus. 
  • Relevant certifications (e.g., CISSP, GWAPT, OSCP) are a plus.


Compensation: 


This role comes with an annual CTC that is market competitive and depends on the quality of your work experience, degree of professionalism, culture fit, and alignment with FlytBase’s long-term business strategy.


Perks:


  • Fast-paced Startup culture
  • Hacker mode environment
  • Enthusiastic and approachable team
  • Professional autonomy
  • Company-wide sense of purpose
  • Flexible work hours
  • Informal dress code


Dhwani Rural Information Systems
Posted by Sunandan Madan
Gurgaon
2 - 6 yrs
₹4L - ₹10L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+10 more
Job Overview
We are looking for an experienced DevOps professional. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to deploy and release their code seamlessly.

Responsibilities
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
• Understanding accessibility and security compliance (depending on the specific project)
• User authentication and authorization between multiple systems, servers, and environments
• Integration of multiple data sources and databases into one system
• Understanding the fundamental design principles behind a scalable application
• Configuration management tools (Ansible/Chef/Puppet), cloud service providers (AWS/DigitalOcean); Docker + Kubernetes ecosystem experience is a plus
• Make key decisions for our infrastructure, networking and security
• Work with shell scripts during migration and DB connection
• Monitor production server health across parameters (CPU load, physical memory, swap memory) and set up a monitoring tool such as Nagios (see the sketch after this section)
• Create alerts and configure monitoring of specified metrics to manage cloud infrastructure efficiently
• Set up and manage VPCs and subnets, connect different zones, and block suspicious IPs/subnets via ACLs
• Create and manage AMIs, snapshots and volumes; upgrade/downgrade AWS resources (CPU, memory, EBS)
• Manage microservices at scale and maintain the compute and storage infrastructure for various product teams
• Strong knowledge of configuration management tools such as Ansible, Chef and Puppet
• Extensive work with change-tracking tools like JIRA, log analysis, and maintaining documentation of production server error-log reports
• Experienced in troubleshooting, backup, and recovery
• Excellent knowledge of cloud service providers such as AWS and DigitalOcean
• Good knowledge of the Docker and Kubernetes ecosystem
• Proficient understanding of code versioning tools, such as Git
• Must have experience working in an automated environment
• Good knowledge of Amazon Web Services architecture, including Amazon EC2, Amazon S3 (Amazon Glacier), Amazon VPC and Amazon CloudWatch
• Scheduling jobs using crontab and creating swap memory
• Proficient knowledge of access management (IAM)
• Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
• Good knowledge of GCP
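
To illustrate the server-health monitoring items above, here is a hedged Python sketch (assuming the psutil package) that samples CPU, memory and swap usage and flags anything over a threshold; in practice the result would feed a tool such as Nagios rather than stdout, and the threshold is arbitrary.

```python
# Illustrative host health check (assumes the psutil package): sample CPU, memory
# and swap usage and report anything above a threshold. Threshold is arbitrary.
import psutil

THRESHOLD_PERCENT = 85.0

def check_host() -> list[str]:
    cpu = psutil.cpu_percent(interval=1)          # average CPU load over 1 second
    mem = psutil.virtual_memory().percent         # physical memory usage
    swap = psutil.swap_memory().percent           # swap usage
    return [
        f"{name} usage high: {value:.1f}%"
        for name, value in (("CPU", cpu), ("Memory", mem), ("Swap", swap))
        if value > THRESHOLD_PERCENT
    ]

if __name__ == "__main__":
    for line in check_host() or ["All metrics within limits."]:
        print(line)
```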

Educational Qualifications
B.Tech (IT) / M.Tech / MBA (IT) / BCA / MCA or any degree in the relevant field
Experience: 2-6 yrs
Acceldata
5 recruiters
Posted by Richa Kukar
Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹30L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+2 more

Acceldata is creating the Data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate Data platforms at Petabyte scale. Our customers are Fortune 500 companies including Asia's largest telecom company, a unicorn fintech startup of India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.

 

We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data observability platform that focuses on ensuring data reliability from every spectrum (compute, data and pipeline) of a cloud or on-premise data platform.

 

Position Summary-

 

This role will support customer implementations of a data quality and reliability product. The candidate is expected to install the product in the client environment, manage proofs of concept with prospects, become a product expert, and troubleshoot post-installation and production issues. The role involves significant interaction with the client's data engineering team, so good communication skills are expected.

 

Required experience

  1. 6-7 years of experience providing engineering support to data domains/pipelines/data engineers.
  2. Experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues (a small data-freshness sketch follows this list).
  3. Experience setting up enterprise security solutions, including Active Directory, firewalls, SSL certificates, Kerberos KDC servers, etc.
  4. Basic understanding of SQL.
  5. Experience working with technologies like S3; Kubernetes experience preferred.
  6. Databricks/Hadoop/Kafka experience preferred but not required.
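
As an example of the pipeline troubleshooting mentioned in point 2, the hedged Python sketch below (assuming boto3 and access to the bucket; bucket and prefix names are placeholders) checks whether a pipeline's S3 output has received new objects recently.

```python
# Hedged data-freshness check (assumes boto3 and bucket access): report the age of
# the newest object under a pipeline's S3 output prefix. Names are placeholders.
from datetime import datetime, timezone
import boto3

def latest_object_age_hours(bucket: str, prefix: str) -> float | None:
    s3 = boto3.client("s3")
    newest = None
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if newest is None or obj["LastModified"] > newest:
                newest = obj["LastModified"]
    if newest is None:
        return None                               # no output at all
    return (datetime.now(timezone.utc) - newest).total_seconds() / 3600

if __name__ == "__main__":
    age = latest_object_age_hours("example-data-lake", "pipelines/orders/")
    if age is None or age > 6:
        print(f"Stale or missing output (age: {age}); investigate the pipeline.")
    else:
        print(f"Output is fresh ({age:.1f} hours old).")
```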

 

 

 

 

 

 

 

Bengaluru (Bangalore)
4 - 6 yrs
₹6L - ₹10L / yr
RESTful APIs
Python
TypeScript
NodeJS (Node.js)
Docker
+7 more
Role: Cloud Automation Engineer
Job Description:
• Contribute to customer discussions and requirements gathering
• Engage in internal and customer POCs to realize the potential solutions envisaged for customers
• Design, develop and migrate vRA blueprints and vRO workflows; strong hands-on knowledge of vROps and its integrations with applications and VMware solutions
• Develop automation scripts to support the design and implementation of VMware projects
Qualifications:
• Maintain current, high-level technical knowledge of the entire VMware product portfolio and future product direction, along with in-depth knowledge where required
• Maintain deep technical and business knowledge of cloud computing and networking applications, industry directions, and trends
• Experience with REST APIs and/or Python programming; TypeScript/Node.js backend experience (a minimal REST-client sketch follows this list)
• Experience with Kubernetes
• Familiarity with DevOps tools like Ansible, Puppet and Terraform
• End-to-end experience in architecture, design and development of the VMware Cloud Automation suite, with good exposure to VMware products and/or solutions
• Hands-on experience in automation, coding, debugging and release
• Sound process knowledge from requirements gathering through implementation, deployment and support
• Experience working with global teams, customers and partners, with solid communication skills
• VMware CMA certification would be a plus
• Academic background in MS/BE/B.Tech in IT/CS/ECE/EE would be preferred
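
As a small illustration of the REST API and Python experience asked for above, here is a hypothetical client sketch using the requests library; the endpoint URL, token handling and response shape are placeholders, not the actual VMware Aria/vRA API.

```python
# Hypothetical REST client sketch (requests library): fetch a list of deployments
# with bearer-token auth. URL and response shape are placeholders, not the vRA API.
import requests

BASE_URL = "https://automation.example.com/api"   # placeholder endpoint

def get_deployments(token: str) -> list[dict]:
    headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}
    response = requests.get(f"{BASE_URL}/deployments", headers=headers, timeout=10)
    response.raise_for_status()                   # fail loudly on HTTP errors
    return response.json().get("deployments", [])

if __name__ == "__main__":
    for deployment in get_deployments(token="<api-token>"):
        print(deployment.get("name"), deployment.get("status"))
```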
SaaS Based Tech Company
Agency job
via Merito by Gaurav Bhosle
Mumbai
4 - 10 yrs
₹12L - ₹15L / yr
DevOps
CI/CD
Amazon Web Services (AWS)
Jenkins
Gradle
Role Description
We are looking for a DevOps Engineer responsible for managing cloud technologies, deployment automation, and CI/CD.

Key Responsibilities
  • Building and setting up new development tools and infrastructure
  • Understanding the needs of stakeholders and conveying this to developers
  • Working on ways to automate and improve development and release processes
  • Testing and examining code written by others and analyzing results
  • Ensuring that systems are safe and secure against cybersecurity threats
  • Identifying technical problems and developing software updates and ‘fixes’
  • Working with software developers and software engineers to ensure that development follows established processes and works as intended
  • Planning out projects and being involved in project management decisions

Required Skills and Qualifications
  • BE / MCA / B.Sc-IT / B.Tech in Computer Science or a related field.
  • 4+ years of overall development experience.
  • Strong understanding of cloud deployment and setup
  • Hands-on experience with tools like Jenkins, Gradle etc.
  • Deploy updates and fixes
  • Provide Level 2 technical support
  • Build tools to reduce occurrences of errors and improve customer experience
  • Perform root cause analysis for production errors
  • Investigate and resolve technical issues
  • Develop scripts to automate deployment (a minimal sketch follows this list)
  • Design procedures for system troubleshooting and maintenance
  • Proficient with git and git workflows
  • Working knowledge of databases and SQL
  • Problem-solving attitude
  • Collaborative team spirit
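
As an example of the deployment scripting mentioned above, here is an illustrative Python helper under assumed conventions: a Gradle-built artifact, a placeholder target host, and a systemd-managed service.

```python
# Illustrative deployment helper. The artifact path, host, and service name are
# assumptions for the sketch, not details from this job description.
import subprocess

ARTIFACT = "build/libs/app.jar"            # typical Gradle output path (assumed)
TARGET = "deploy@app-server.example.com"   # placeholder host

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)         # stop immediately if a step fails

def deploy() -> None:
    run(["./gradlew", "clean", "build"])                          # build and test
    run(["scp", ARTIFACT, f"{TARGET}:/opt/app/app.jar"])          # ship the artifact
    run(["ssh", TARGET, "sudo", "systemctl", "restart", "app"])   # restart the service

if __name__ == "__main__":
    deploy()
```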
B2B Cloud Telephony Co. | REMOTE
Agency job
via Unnati by Astha Bharadwaj
Remote only
1.5 - 4 yrs
₹7L - ₹10L / yr
DevOps
Docker
Jenkins
Kibana
Amazon Web Services (AWS)
+3 more
Are you looking to join a fast-growing cloud telephony company where you can use your experience and leverage the growth? Then please read on.

Our client is a call management solutions company, which helps small to mid-sized businesses use its virtual call center to manage customer calls and queries. It is an AI and cloud-based call operating facility that is affordable as well as feature-optimized. Advanced features such as call recording, IVR, toll-free numbers and call tracking are based on automation and enhance call-handling quality and processes for each client, as per their requirements. They service over 6,000 business clients, including large accounts like Flipkart and Uber.
 
Currently operating in major cities, the startup is focused on increasing its reach in tier 2 and 3 cities and towns, which are largely untapped markets. They also have their operations set up across South Asia, Middle East, and Latin America. Led by a visionary founder, whose experience covers business, technology, sales, and operations, the team is built on trust, motivation, learning, and improvement.
 
As a DevOps Engineer, you will design development pipelines from the ground up and design and operate highly available systems in AWS cloud environments.
 
What you will do:
  • Being involved in configuration management, web services architectures, DevOps implementation, build and release management, database management, backups, and monitoring
  • Ensuring reliable operation of CI/CD pipelines
  • Orchestrating the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
  • Managing logging, metrics and alerting
  • Creating Dockerfiles
  • Creating Bash/Python scripts for automation
  • Performing root cause analysis for production errors (a small log-triage sketch follows this list)
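
As a small example of the scripting and root-cause-analysis work above, the sketch below scans an application log for common error signatures and summarizes them; the log path and patterns are illustrative assumptions.

```python
# Illustrative log-triage script: count common error signatures in an application
# log to speed up root cause analysis. Log path and patterns are placeholders.
import re
from collections import Counter

LOG_PATH = "/var/log/app/application.log"     # placeholder path
PATTERNS = {
    "timeout": re.compile(r"timed? ?out", re.IGNORECASE),
    "db_error": re.compile(r"(sql|database) error", re.IGNORECASE),
    "http_5xx": re.compile(r"\b5\d{2}\b"),
}

def summarize_errors(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, errors="ignore") as log:
        for line in log:
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    counts[name] += 1
    return counts

if __name__ == "__main__":
    for name, count in summarize_errors(LOG_PATH).most_common():
        print(f"{name}: {count}")
```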

 

What you need to have:
  • Proficient in Linux Commands line and troubleshooting.
  • Proficient in AWS Services. Deployment, Monitoring and troubleshooting applications in AWS.
  • Hands-on experience with CI tooling preferably with Jenkins.
  • Proficient in deployment using Ansible.
  • Knowledge of infrastructure management tools (Infrastructure as cloud) such as terraform, AWS cloudformation etc.
  • Proficient in deployment of applications behind load balancers and proxy servers such as nginx, apache.
  • Scripting languages: Bash, Python, Groovy.
  • Experience with Logging, Monitoring, and Alerting tools like ELK(Elastic-search, Logstash, Kibana), Nagios. Graylog, splunk Prometheus, Grafana is a plus.
Magicflare Software Services
Posted by Akshata Bhosle
Pune
3 - 6 yrs
₹3L - ₹15L / yr
DevOps
Windows Azure
MongoDB

Profile: DevOps Engineer
Experience: 5-8 Yrs
Notice Period: Immediate to 30 Days

Job Description:

Technical Experience (Must Have):
Cloud: Azure
DevOps Tools: Terraform, Ansible, GitHub, CI/CD pipelines, Docker, Kubernetes
Network: Cloud Networking
Scripting Language: Any/All - Shell Script, PowerShell, Python
OS: Linux (Ubuntu, RHEL etc)
Database: MongoDB

Professional Attributes: Excellent communication, written, presentation,
and problem-solving skills.

Experience: Minimum of 5-8 years of experience in Cloud Automation and Application

Additional Information (Good to have):
Microsoft Azure Fundamentals AZ-900
Terraform Associate
Docker
Certified Kubernetes Administrator

Role:
• Building and maintaining tools to automate application and infrastructure deployment, and to monitor operations
• Designing and implementing cloud solutions that are secure, scalable, resilient, monitored, auditable and cost-optimized
• Implementing the transformation from the as-is state to the future state
• Coordinating with other members of the DevOps team, Development, Test, and other teams to enhance and optimize existing processes
• Providing systems support and implementing monitoring, logging and alerting solutions that enable production systems to be monitored (see the sketch after this list)
• Writing Infrastructure as Code (IaC) using industry-standard tools and services
• Writing application deployment automation using industry-standard deployment and configuration tools
• Designing and implementing continuous delivery pipelines that serve the purpose of provisioning and operating client test as well as production environments
• Implementing and staying abreast of Cloud and DevOps industry best practices and tooling
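
As one small illustration of the monitoring responsibility above, here is a hedged Python probe (assuming the pymongo driver and a reachable instance; the connection string is a placeholder) that pings the MongoDB deployment named in the skills list.

```python
# Hedged monitoring probe (assumes pymongo and a reachable MongoDB instance):
# ping the server and report its health. The URI is a placeholder.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

MONGO_URI = "mongodb://localhost:27017"    # placeholder connection string

def mongo_healthy(uri: str) -> bool:
    try:
        client = MongoClient(uri, serverSelectionTimeoutMS=3000)
        client.admin.command("ping")        # cheap round-trip health check
        return True
    except PyMongoError:
        return False

if __name__ == "__main__":
    status = "OK" if mongo_healthy(MONGO_URI) else "UNREACHABLE"
    print(f"MongoDB health: {status}")
```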

Banyan Data Services
1 recruiter
Posted by Sathish Kumar
Bengaluru (Bangalore)
2 - 10 yrs
₹5L - ₹15L / yr
Java
Python
Spark
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+3 more

Cloud Software Engineer

Notice Period: 45 days / Immediate Joining

 

Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-premises, and all cloud platforms. Our engagement services are built on standard DevOps practices and the SRE model.

 

We offer you an opportunity to join our rocket ship startup, run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, software as a service, and cloud services to create a niche in the market.

 

Roles and Responsibilities

· A wide variety of engineering projects including data visualization, web services, data engineering, web-portals, SDKs, and integrations in numerous languages, frameworks, and clouds platforms

· Apply continuous delivery practices to deliver high-quality software and value as early as possible.

· Work in collaborative teams to build new experiences

· Participate in the entire cycle of software consulting and delivery from ideation to deployment

· Integrating multiple software products across cloud and hybrid environments

· Developing processes and procedures for software applications migration to the cloud, as well as managed services in the cloud

· Migrating existing on-premises software applications to cloud leveraging a structured method and best practices

 

Desired Candidate Profile : *** freshers can also apply ***

 

· 2+ years of experience with one or more development languages such as Java, Python, or Spark (a small PySpark sketch follows this list).

· 1+ year of experience with private/public/hybrid cloud model design, implementation, orchestration, and support.

· Certification in, or completed training for, any one of the cloud environments such as AWS, GCP, Azure, Oracle Cloud, or DigitalOcean.

· Strong problem-solvers who are comfortable in unfamiliar situations, and can view challenges through multiple perspectives

· Driven to develop technical skills for oneself and team-mates

· Hands-on experience with cloud computing and/or traditional enterprise datacentre technologies, i.e., network, compute, storage, and virtualization.

· Possess at least one cloud-related certification from AWS, Azure, or equivalent

· Ability to write high-quality, well-tested code and comfort with Object-Oriented or functional programming patterns

· Past experience quickly learning new languages and frameworks

· Ability to work with a high degree of autonomy and self-direction
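
To illustrate the Java/Python/Spark development experience mentioned in this profile, here is a minimal PySpark sketch (assuming pyspark is installed; the input path is a placeholder) that loads a CSV dataset and reports its row count.

```python
# Minimal PySpark sketch (assumes pyspark is installed): load a CSV dataset and
# report its row count. The input path is a placeholder.
from pyspark.sql import SparkSession

def row_count(path: str) -> int:
    spark = SparkSession.builder.appName("row-count-example").getOrCreate()
    count = spark.read.option("header", "true").csv(path).count()
    spark.stop()
    return count

if __name__ == "__main__":
    print(row_count("s3a://example-bucket/events/*.csv"))
```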

www.banyandata.com

Searce Inc
64 recruiters
Posted by Yashodatta Deshapnde
Pune, Noida, Bengaluru (Bangalore), Mumbai, Chennai
3 - 10 yrs
₹5L - ₹20L / yr
DevOps
Kubernetes
Google Cloud Platform (GCP)
Terraform
Jenkins
+2 more
Role & Responsibilities:
• At least 4 years of hands-on experience with cloud infrastructure on GCP
• Hands-on experience with Kubernetes is a must (see the sketch after this list)
• Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
• Knowledge of and hands-on experience with DevOps tools (e.g. Jenkins, Groovy, and Gradle)
• Knowledge of and hands-on experience with various platforms (e.g. GitLab, CircleCI and Spinnaker)
• Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team
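
As a concrete example of the Kubernetes hands-on work required above, here is an illustrative sketch using the official kubernetes Python client (assuming a working kubeconfig) that lists pods stuck outside the Running or Succeeded phases.

```python
# Illustrative cluster-health report (assumes the official kubernetes Python
# client and a working kubeconfig): list pods not in Running/Succeeded phases.
from kubernetes import client, config

def problem_pods() -> list[tuple[str, str, str]]:
    config.load_kube_config()                 # use load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    issues = []
    for pod in v1.list_pod_for_all_namespaces().items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            issues.append((pod.metadata.namespace, pod.metadata.name, phase))
    return issues

if __name__ == "__main__":
    for namespace, name, phase in problem_pods():
        print(f"{namespace}/{name}: {phase}")
```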

Preferable Skills:
• Familiarity with standard IT security practices such as encryption, credentials and key management
• Proven experience with various coding languages (Java, Python) to support DevOps operations and cloud transformation
• Familiarity and knowledge of web standards (e.g. REST APIs, web security mechanisms)
• Hands-on experience with GCP
• Experience in performance tuning, service outage management and troubleshooting

Attributes:
• Good verbal and written communication skills
• Exceptional leadership, time management and organizational skills; ability to operate independently and make decisions with little direct supervision