Kubernetes Jobs in Jaipur


Apply to 9+ Kubernetes Jobs in Jaipur on CutShort.io. Explore the latest Kubernetes Job opportunities across top companies like Google, Amazon & Adobe.

Fullness Web Solutions
Posted by Vidhu Bajaj
Jaipur
2 - 4 yrs
₹6L - ₹8L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead and CTO to identify and roll out DevOps practices across the company.


You will establish configuration management, automate our infrastructure, implement continuous integration, and train the team in DevOps best practices to achieve a continuously deployable system.

 

You will be part of a continually growing team. You will have the chance to be creative and think of new ideas. You will also be able to work on open-source projects, ranging from small to large.

 

What you’ll be doing – your role

  • Improve CI/CD tooling
  • Implement and improve monitoring and alerting
  • Help support daily operations through automation, and help build a DevOps culture with our engineers for a better all-around software development and deployment experience
  • Handle varied and complex duties that may require independent judgment; you should be fully competent in your own area of expertise
  • Develop and maintain solutions for highly resilient services and infrastructure
  • Implement automation to help deploy our services and maintain their operational health
  • Contribute to the understanding of how our services are being used and help plan the capacity needs for future growth
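The monitoring, alerting, and automation duties above usually start as small, reliable scripts. As an illustrative sketch only — the function and service names below are hypothetical, not from this posting — a retry-with-backoff wrapper is a common building block for health checks and operational automation:

```python
import functools
import time

def retry(attempts=3, base_delay=0.01):
    """Retry a flaky operation with exponential backoff between attempts."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of retries: surface the failure to the caller
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky_health_check():
    """Stand-in for an HTTP health probe; fails twice, then recovers."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service not ready")
    return "healthy"

print(flaky_health_check())  # "healthy" after two retried failures
```

In a real alerting setup the probe would hit a service endpoint and page only after retries are exhausted, which keeps transient blips from waking anyone up.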

 

 What do we expect – experience and skills:

  • Bachelor’s degree in Computer Science or related technical field, involving coding
  • 3+ years of experience running large-scale customer-facing services
  • 2-3 years of DevOps experience
  • A strong desire and aptitude for system automation defines success in this role
  • Linux experience, including expertise in system installation, configuration, administration, troubleshooting
  • Experience with cloud-based providers such as AWS
  • Experience with Kubernetes
  • Experience with web-based APIs / RESTful services
  • Experience with Configuration Management and infrastructure as code platforms (Ansible/Terraform)
  • Experience with at least one scripting language (Python, Bash, JavaScript)
  • Methodical approach to troubleshooting and documenting issues
  • Experience in Docker orchestration and management
  • Experience with implementing and maintaining CI/CD pipelines

Note: This vacancy is for CIMET, our client, for their Jaipur office.

Celebal Technologies Pvt Ltd
Posted by Neelam Agarwal
Noida, Gurugram, Jaipur, Delhi, Ghaziabad, Faridabad
5 - 10 yrs
₹5L - ₹20L / yr
Docker
Kubernetes
DevOps
Windows Azure
Jenkins
+3 more

Key Responsibilities

• As a part of the DevOps team, you will be responsible for the configuration, optimization, documentation, and support of the CI/CD components.

• Creating and managing build and release pipelines with Azure DevOps and Jenkins.

• Assist in planning and reviewing application architecture and design to promote an efficient deployment process.

• Troubleshoot server performance issues & handle the continuous integration system.

• Automate infrastructure provisioning using ARM Templates and Terraform.

• Monitor and Support deployment, Cloud-based and On-premises Infrastructure.

• Diagnose and develop root-cause solutions for failures and performance issues in the production environment.

• Deploy and manage Infrastructure for production applications

• Configure security best practices for application and infrastructure

 

Essential Requirements

• Good hands-on experience with cloud platforms like Azure, AWS & GCP. (Preferably Azure)

• Strong knowledge of CI/CD principles.

• Strong work experience with CI/CD implementation tools like Azure DevOps, Team City, Octopus Deploy, AWS Code Deploy, and Jenkins.

• Experience writing automation scripts with PowerShell, Bash, Python, etc.

• Experience with GitHub, JIRA, Confluence, and continuous integration (CI) systems.

• Understanding of secure DevOps practices

Good to Have -

• Knowledge of scripting languages such as PowerShell, Bash

• Experience with Agile workflows and project management tools such as Jira (Scrum/Kanban, etc.)

• Experience with Build technologies and cloud services. (Jenkins, TeamCity, Azure DevOps, Bamboo, AWS Code Deploy)

• Strong communication skills and the ability to explain protocols and processes to the team and management.

• Must be able to handle multiple tasks and adapt to a constantly changing environment.

• Must have a good understanding of SDLC.

• Knowledge of Linux, Windows server, Monitoring tools, and Shell scripting.

• Self-motivated; demonstrating the ability to achieve in technologies with minimal supervision.

• Organized and flexible, with the analytical ability to solve problems creatively
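One responsibility above is diagnosing failures and developing root-cause solutions in production. A minimal sketch of that workflow, assuming a hypothetical log format (the log lines and component names below are invented for illustration), is to group error lines and count them so the dominant failure surfaces first:

```python
import re
from collections import Counter

# Hypothetical log excerpt; a real pipeline would read from files or a log aggregator.
LOG = """\
2024-05-01 10:00:01 ERROR deploy: connection refused to db:5432
2024-05-01 10:00:05 INFO  deploy: retrying
2024-05-01 10:00:09 ERROR deploy: connection refused to db:5432
2024-05-01 10:01:12 ERROR build: out of disk space on /var
"""

ERROR_RE = re.compile(r"ERROR\s+(\w+): (.+)$")

def summarize_errors(log_text):
    """Count distinct (component, message) error pairs to surface the likeliest root cause."""
    counts = Counter()
    for line in log_text.splitlines():
        m = ERROR_RE.search(line)
        if m:
            counts[(m.group(1), m.group(2))] += 1
    return counts.most_common()

for (component, message), n in summarize_errors(LOG):
    print(f"{n}x [{component}] {message}")
```

The repeated database connection failure rises to the top, pointing investigation at the database before the disk-space noise.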

Bengaluru (Bangalore), Hyderabad, Pune, Chennai, Jaipur
10 - 14 yrs
₹1L - ₹15L / yr
Ant
Maven
CI/CD
Jenkins
GitHub
+16 more

DevOps Architect 

Experience: 10 - 12+ years of relevant DevOps experience
Locations : Bangalore, Chennai, Pune, Hyderabad, Jaipur.

Qualification:
• Bachelor's or advanced degree in Computer Science, Software Engineering, or equivalent is required.
• Certifications in specific areas are desired

Technical Skillset (skill - proficiency level):

  • Build tools (Ant or Maven) - Expert
  • CI/CD tools (Jenkins or GitHub CI/CD) - Expert
  • Cloud DevOps (AWS CodeBuild, CodeDeploy, CodePipeline, etc.) or Azure DevOps - Expert
  • Infrastructure as Code (Terraform, Helm charts, etc.) - Expert
  • Containerization (Docker, Docker Registry) - Expert
  • Scripting (Linux) - Expert
  • Cluster deployment (Kubernetes) & maintenance - Expert
  • Programming (Java) - Intermediate
  • Application types for DevOps (streaming like Spark, Kafka; big data like Hadoop, etc.) - Expert
  • Artifactory (JFrog) - Expert
  • Monitoring & Reporting (Prometheus, Grafana, PagerDuty, etc.) - Expert
  • Ansible, MySQL, PostgreSQL - Intermediate


• Source Control (like Git, Bitbucket, Svn, VSTS etc)
• Continuous Integration (like Jenkins, Bamboo, VSTS )
• Infrastructure Automation (like Puppet, Chef, Ansible)
• Deployment Automation & Orchestration (like Jenkins, VSTS, Octopus Deploy)
• Container Concepts (Docker)
• Orchestration (Kubernetes, Mesos, Swarm)
• Cloud (like AWS, Azure, GoogleCloud, Openstack)

Roles and Responsibilities

• As a DevOps architect, automate processes with the proper tools.
• Developing appropriate DevOps channels throughout the organization.
• Evaluating, implementing, and streamlining DevOps practices.
• Establishing a continuous build environment to accelerate software deployment and development processes.
• Engineering general and effective processes.
• Helping operations and development teams solve their problems.
• Supervising, examining, and handling technical operations.
• Defining DevOps processes and operations.
• Capacity to handle teams with a leadership attitude.
• Must possess excellent automation skills and the ability to drive initiatives to automate processes.
• Building strong cross-functional leadership skills and working together with the operations and engineering teams to make sure that systems are scalable and secure.
• Excellent knowledge of software development and software testing methodologies along with configuration management practices in Unix and Linux-based environment.
• Possess sound knowledge of cloud-based environments.
• Experience in handling automated deployment CI/CD tools.
• Must possess excellent knowledge of infrastructure automation tools (Ansible, Chef, and Puppet).
• Hands-on experience working with Amazon Web Services (AWS).
• Must have strong expertise in operating Linux/Unix environments and scripting languages like Python, Perl, and Shell.
• Ability to review deployment and delivery pipelines i.e., implement initiatives to minimize chances of failure, identify bottlenecks and troubleshoot issues.
• Previous experience in implementing continuous delivery and DevOps solutions.
• Experience in designing and building solutions to move data and process it.
• Must possess expertise in any of the coding languages depending on the nature of the job.
• Experience with containers and container orchestration tools (AKS, EKS, OpenShift, Kubernetes, etc)
• Experience with version control systems a must (GIT an advantage)
• Belief in "Infrastructure as Code" (IaC), including experience with open-source tools such as Terraform
• Treats best practices for security as a requirement, not an afterthought
• Extensive experience with version control systems like GitLab and their use in release management, branching, merging, and integration strategies
• Experience working with Agile software development methodologies
• Proven ability to work on cross-functional Agile teams
• Mentor other engineers in best practices to improve their skills
• Creating suitable DevOps channels across the organization.
• Designing efficient practices.
• Delivering comprehensive best practices.
• Managing and reviewing technical operations.
• Ability to work independently and as part of a team.
• Exceptional communication skills; knowledgeable about the latest industry trends and highly innovative
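Several bullets above concern establishing continuous build environments and reviewing delivery pipelines to minimize failure. The control flow such tools implement can be sketched in a few lines — the stage names and the failing test stage below are hypothetical, purely for illustration:

```python
def run_pipeline(stages):
    """Run named stages in order and stop at the first failure,
    the way a minimal CI pipeline (Jenkins, GitHub Actions) behaves."""
    results = []
    for name, step in stages:
        try:
            step()
            results.append((name, "ok"))
        except Exception as exc:
            results.append((name, f"failed: {exc}"))
            break  # later stages are skipped once a stage fails
    return results

def failing_tests():
    """Hypothetical test stage that fails."""
    raise AssertionError("2 tests failed")

# Hypothetical stages standing in for checkout/build/test/deploy.
stages = [
    ("build", lambda: None),
    ("test", failing_tests),
    ("deploy", lambda: None),
]
for name, status in run_pipeline(stages):
    print(name, status)
```

Because the test stage fails, the deploy stage never runs — the fail-fast property a pipeline reviewer checks for when hunting bottlenecks and failure modes.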
Matellio India Private Limited
Posted by Harshit Sharma
Remote, Jaipur
5 - 12 yrs
₹5L - ₹13L / yr
DevOps
Windows Azure
Amazon Web Services (AWS)
Docker
Kubernetes
+3 more
  • At least 5 years of experience with cloud technologies (AWS and Azure) and development.
  • Experience implementing DevOps practices and tools in areas like CI/CD (Jenkins), environment automation, release automation, virtualization, infrastructure as code, and metrics tracking.
  • Hands on experience in DevOps tools configuration in different environments.
  • Strong knowledge of working with DevOps design patterns, processes and best practices
  • Hands-on experience setting up build pipelines.
  • Prior working experience in system administration or architecture in Windows or Linux.
  • Must have experience with Git (Bitbucket, GitHub, GitLab)
  • Hands-on experience on Jenkins pipeline scripting.
  • Hands-on knowledge in one scripting language (Nant, Perl, Python, Shell or PowerShell)
  • Configuration level skills in tools like SonarQube (or similar tools) and Artifactory.
  • Expertise on Virtual Infrastructure (VMWare or VirtualBox or QEMU or KVM or Vagrant) and environment automation/provisioning using SaltStack/Ansible/Puppet/Chef
  • Deploying, automating, maintaining and managing Azure cloud based production systems including monitoring capacity.
  • Good to have experience in migrating code repositories from one source control to another.
  • Hands-on experience in Docker container and orchestration based deployments like Kubernetes, Service Fabric, Docker swarm.
  • Must have good communication and problem-solving skills
Claristaio
Posted by Poonam Aggarwal
Pune, Jaipur
2 - 5 yrs
₹5L - ₹8L / yr
Python
Spark
Kubernetes
Docker
SQL
+4 more
Position title – Data Engineer
Years of Experience – 2-3 years
Location – Flexible (Pune/Jaipur Preferred), India
Position Summary
At Clarista.io, we are driven to create a connected data world for enterprises, empowering their employees with the information they need to compete in the digital economy. Information is power, but only if it can be harnessed by people.
Clarista turns current enterprise data silos into a ‘Live Data Network’: easy to use, always available, with the flexibility to create any analytics, and with controls to ensure the quality and security of the information.
Clarista is designed with business teams in mind; hence, ensuring performance with large datasets and a superior user experience is critical to the success of the product.

What You'll Do
You will be part of our data platform & data engineering team. As part of this agile team, you will work in our cloud-native environment and perform the following activities to support core product development and client-specific projects:
• You will develop the core engineering frameworks for an advanced self-service data analytics product.
• You will work with multiple types of data storage technologies such as relational, blobs, key-value stores, document databases and streaming data sources.
• You will work with the latest technologies for data federation with MPP (Massively Parallel Processing) capabilities
• Your work will entail backend architecture to enable product capabilities, data modeling, data queries for UI functionality, data processing for client specific needs and API development for both back-end and front-end data interfaces.
• You will build real-time monitoring dashboards and alerting systems.
• You will integrate our product with other data products through APIs
• You will partner with other team members in understanding the functional / non-functional business requirements, and translate them into software development tasks
• You will follow software development best practices, ensuring that the architecture and quality of the code you write is of the high standard expected of enterprise software
• You will be a proactive contributor to team and project discussions

Who you are
• Strong education track record - Bachelors or an advanced degree in Computer Science or a related engineering discipline from Indian Institute of Technology or equivalent premium institute.
• 2-3 years of experience in Big Data and Data Engineering.
• Strong knowledge of advanced SQL, data federation and distributed architectures
• Excellent Python programming skills. Familiarity with Scala and Java is highly preferred
• Strong knowledge of and experience with modern, distributed data stack components such as Spark, Hive, Airflow, Kubernetes, Docker, etc.
• Experience with cloud environments (AWS, Azure) and native cloud technologies for data storage and data processing
• Experience with relational SQL and NoSQL databases, including Postgres, Blobs, MongoDB etc.
• Experience with data pipeline and workflow management tools: Airflow, Dataflow, Dataproc etc.
• Experience with Big Data processing and performance optimization
• Should know how to write modular and optimized code.
• Should have good knowledge around error handling.
• Fair understanding of responsive design and cross-browser compatibility issues.
• Experience with version control systems such as Git
• Strong problem solving and communication skills.
• Self-starter, continuous learner.
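The role lists workflow managers such as Airflow and Dataflow. The core idea behind these tools — running pipeline tasks in dependency order — can be sketched with Kahn's topological sort; the ETL task names below are hypothetical, chosen only to illustrate the pattern:

```python
from collections import deque

def schedule(tasks):
    """Return a run order for a DAG of tasks given {task: [dependencies]} (Kahn's algorithm)."""
    indegree = {t: len(deps) for t, deps in tasks.items()}
    dependents = {t: [] for t in tasks}
    for t, deps in tasks.items():
        for d in deps:
            dependents[d].append(t)
    ready = deque(sorted(t for t, n in indegree.items() if n == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("cycle detected in pipeline")
    return order

# Hypothetical ETL pipeline: extract feeds both a transform and a quality check,
# and the load step waits for both.
pipeline = {
    "extract": [],
    "transform": ["extract"],
    "quality_check": ["extract"],
    "load": ["transform", "quality_check"],
}
print(schedule(pipeline))
```

Real schedulers add retries, backfills, and parallel execution of independent tasks, but the dependency ordering is the same.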

Good to have some exposure to
• Start-up experience is highly preferred
• Exposure to any Business Intelligence (BI) tools like Tableau, Dundas, Power BI etc.
• Agile software development methodologies.
• Working in multi-functional, multi-location teams

What You'll Love About Us – Do ask us about these!
• Be an integral part of the founding team. You will work directly with the founder
• Work Life Balance. You can't do a good job if your job is all you do!
• Prepare for the Future. Academy – we are all learners; we are all teachers!
• Diversity & Inclusion. HeForShe!
• Internal Mobility. Grow with us!
• Business knowledge of multiple sectors
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Pune, Ranchi, Patna, Kolkata, Gandhinagar, Ahmedabad, Indore, Lucknow, Bhopal, Jaipur, Jodhpur
2 - 6 yrs
₹3L - ₹10L / yr
DevOps
CI/CD
Docker
Kubernetes
Amazon Web Services (AWS)
+2 more
We are looking for a proficient, passionate, and skilled DevOps Specialist. You will have the opportunity to build an in-depth understanding of our client's business problems and then implement organizational strategies to resolve the same.

Skills required:
Strong knowledge and experience of cloud infrastructure (AWS, Azure or GCP), systems, network design, and cloud migration projects.
Strong knowledge and understanding of CI/CD processes and tools (Jenkins/Azure DevOps) is a must.
Strong knowledge and understanding of Docker & Kubernetes is a must.
Strong knowledge of Python, along with one more language (Shell, Groovy, or Java).
Strong prior experience using automation tools like Ansible, Terraform.
Architect systems, infrastructure & platforms using Cloud Services.
Strong communication skills. Should have demonstrated the ability to collaborate across teams and organizations.

Benefits of working with OpsTree Solutions:

Opportunity to work on the latest cutting-edge tools and technologies in DevOps
Knowledge focused work culture
Collaboration with very enthusiastic DevOps experts
High growth trajectory
Opportunity to work with big shots in the IT industry
Ahmedabad, Bengaluru (Bangalore), Pune, Noida, Indore, Jaipur, Nagpur, Nashik
4 - 10 yrs
₹1L - ₹15L / yr
Kubernetes
Docker
Amazon Web Services (AWS)
AWS CloudFormation
Linux administration
+6 more

Roles & Responsibilities:

 

  • Design, implement and maintain all AWS infrastructure and services within a managed service environment
  • Should be able to work 24x7 shifts to support the infrastructure.
  • Design, Deploy and maintain enterprise class security, network and systems management applications within an AWS environment
  • Design and implement availability, scalability, and performance plans for the AWS managed service environment
  • Continual re-evaluation of existing stack and infrastructure to maintain optimal performance, availability and security
  • Manage the production deployment and deployment automation
  • Implement process and quality improvements through task automation
  • Institute infrastructure as code, security automation, and automation of routine maintenance tasks
  • Experience with containerization and orchestration tools like Docker and Kubernetes
  • Build, Deploy and Manage Kubernetes clusters thru automation
  • Create and deliver knowledge sharing presentations and documentation for support teams
  • Learning on the job and explore new technologies with little supervision
  • Work effectively with onsite/offshore teams

 

Qualifications:

  • Must have Bachelor's degree in Computer Science or related field and 4+ years of experience in IT
  • Experience in designing, implementing, and maintaining all AWS infrastructure and services
  • Design and implement availability, scalability, and performance plans for the AWS managed service environment
  • Continual re-evaluation of existing stack and infrastructure to maintain optimal performance, availability, and security
  • Hands-on technical expertise in Security Architecture, automation, integration, and deployment
  • Familiarity with compliance & security standards across the enterprise IT landscape
  • Extensive experience with Kubernetes and AWS (IAM, Route53, SSM, S3, EFS, EBS, ELB, Lambda, CloudWatch, CloudTrail, SQS, SNS, RDS, CloudFormation, DynamoDB)
  • Solid understanding of AWS IAM Roles and Policies
  • Solid Linux experience with a focus on web (Apache Tomcat/Nginx)
  • Experience with automation/configuration management using Terraform, Chef, Ansible, or similar.
  • Understanding of protocols/technologies like Microservices, HTTP/HTTPS, SSL/TLS, LDAP, JDBC, SQL, HTML
  • Experience in managing and working with the offshore teams
  • Familiarity with CI/CD systems such as Jenkins, GitLab CI
  • Scripting experience (Python, Bash, etc.)
  • AWS, Kubernetes Certification is preferred
  • Ability to work with and influence Engineering teams
MTX
Posted by Sinchita S
Bengaluru (Bangalore), Jaipur, Hyderabad
4 - 7 yrs
₹24L - ₹35L / yr
DevOps
CI/CD
Kubernetes
Jenkins
Docker

MTX Group Inc. is seeking a motivated DevOps Engineer to join our team. MTX Group Inc is a global cloud implementation partner that enables organizations to become a fit enterprise through digital transformation and strategy. MTX is powered by the Maverick.io Artificial Intelligence platform and has a strong presence in the Public Sector providing proprietary designs and innovative concept accelerators around licensing and permitting, inspections, grants management, case management, and program management. MTX is a strategic partner with Salesforce with specialty expertise in Einstein Analytics, Mulesoft, Customer Community, Commerce Cloud, and Marketing Cloud. MTX is a Google Cloud partner helping accelerate digital transformation programs across federal, state, and local government agencies.


The DevOps role is responsible for maintaining infrastructure and both development and operational deployments in multiple cloud environments for MTX Group, Inc. and their clients. This role adheres to and promotes MTX Group, Inc.'s values by performing respective duties in a manner that supports and contributes to the achievement of MTX Group, Inc.'s goals.


Responsibilities:


  • Develop and manage tools and services to be used by the organization and by external users of the platform
  • Automate all operational and repetitive tasks to improve efficiency and productivity of all development teams
  • Research and propose new solutions to improve the mavQ platform in terms of speed, scalability, and security
  • Automate and manage the organization's cloud infrastructure, distributed across the globe and across multiple cloud providers such as Google Cloud and AWS
  • Ensure thorough logging, monitoring, and alerting for all services and code running in the organization
  • Work with development teams to define communications and protocols for distributed microservices
  • Help development teams debug DevOps-related issues
  • Manage CI/CD, Source Control and IAM for the organization

 


What you will bring:


  • Bachelor’s Degree or equivalent 
  • 4+ years of experience as a DevOps Engineer OR
  • 2+ years of experience as backend developer and 2+ years of experience as DevOps or Systems engineer
  • Hands on experience with Docker and Kubernetes
  • Thorough understanding of operating systems and networking
  • Theoretical and practical understanding of Infrastructure-as-code and Platform-as-a-service concepts
  • Ability to understand and work with any service, tool or API as needed
  • Ability to understand implementation of open source products and modify them if necessary
  • Ability to visualize large scale distributed systems and debug issues or make changes to said systems
  • Understanding and practical experience in managing CI/CD

What we offer:

  • A competitive salary on par with top market standards
  • Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
    • Sum Insured: INR 5,00,000/- 
    • Maternity cover up to two children
    • Inclusive of COVID-19 Coverage
    • Cashless & Reimbursement facility
    • Access to free online doctor consultation

  • Personal Accident Policy (Disability Insurance) -
  • Sum Insured: INR. 25,00,000/- Per Employee
  • Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
  • Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
  • Temporary Total Disability is covered

  • An option of Paytm Food Wallet (up to Rs. 2,500) as a tax-saver benefit
  • Monthly internet reimbursement of up to Rs. 1,000
  • Opportunity to pursue Executive Programs/ courses at top universities globally
  • Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others


 

YourHRfolks
Posted by Pranit Visiyait
Remote, Jaipur
3 - 8 yrs
₹6L - ₹16L / yr
DevOps
Docker
Jenkins
Kubernetes
Terraform
+6 more

Job Location: Jaipur

Experience Required: Minimum 3 years

About the role:

As a DevOps Engineer for Punchh, you will be working with our developers, SRE, and DevOps teams implementing our next-generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies, and solve engineering problems, all while delivering business objectives. The DevOps culture here is one of immense trust and responsibility. You will have the opportunity to make an impact, as there are no silos here.

Responsibilities:

  • Deliver SLA and business objectives through whole-lifecycle design of services, from inception to implementation.
  • Ensuring availability, performance, security, and scalability of AWS production systems
  • Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment. 
  • Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health.
  • Write and maintain software that runs the infrastructure that powers the Loyalty and Data platform for some of the world’s largest brands.
  • 24x7 on-call in shifts for Level 2 and higher escalations
  • Respond to incidents and write blameless RCAs/postmortems
  • Implement and practice proper security controls and processes
  • Providing recommendations for architecture and process improvements.
  • Definition and deployment of systems for metrics, logging, and monitoring on the platform.

Must have:  

  • Minimum 3 Years of Experience in DevOps.  
  • BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
  • Strong interpersonal skills.
  • Must have experience in CI/CD tooling such as Jenkins, CircleCI, TravisCI
  • Must have experience in Docker, Kubernetes, Amazon ECS or  Mesos
  • Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy
  • Proficient in shell scripting, and most importantly, knowing when to stop scripting and start developing.
  • Experience creating highly automated infrastructures with configuration management tools such as Terraform, CloudFormation, or Ansible.
  • In-depth knowledge of the Linux operating system and administration. 
  • Production experience with a major cloud provider such as Amazon AWS.
  • Knowledge of web server technologies such as Nginx or Apache. 
  • Knowledge of Redis, Memcache, or one of the many in-memory data stores.
  • Experience with various load balancing technologies such as Amazon ALB/ELB, HA Proxy, F5. 
  • Comfortable with large-scale, highly-available distributed systems.
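The load-balancing requirement above (ALB/ELB, HAProxy, F5) comes down to a simple idea: rotate requests across backends and skip the unhealthy ones. A toy sketch under that assumption — backend names are hypothetical, and real balancers add health probes, weights, and connection draining:

```python
import itertools

class RoundRobinBalancer:
    """Toy round-robin load balancer that skips backends marked unhealthy."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._cycle = itertools.cycle(self.backends)

    def mark_down(self, backend):
        self.healthy.discard(backend)

    def mark_up(self, backend):
        self.healthy.add(backend)

    def next_backend(self):
        # Try each backend at most once per request before giving up.
        for _ in range(len(self.backends)):
            b = next(self._cycle)
            if b in self.healthy:
                return b
        raise RuntimeError("no healthy backends")

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
lb.mark_down("app-2")
print([lb.next_backend() for _ in range(4)])  # app-2 is skipped
```

Raising an error when every backend is down mirrors the 502/503 a real balancer returns when its target group is empty.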

Good to have:  

  • Understanding of Web Standards (REST, SOAP APIs, OWASP, HTTP, TLS)
  • Production experience with Hashicorp products such as Vault or Consul
  • Expertise in designing, analyzing, and troubleshooting large-scale distributed systems.
  • Experience in a PCI environment
  • Experience with Big Data distributions from Cloudera, MapR, or Hortonworks
  • Experience maintaining and scaling database applications
  • Knowledge of fundamental systems engineering principles such as CAP Theorem, Concurrency Control, etc. 
  • Understanding of network fundamentals: OSI, TCP/IP, topologies, etc.
  • Understanding of infrastructure auditing and helping the organization control infrastructure costs.
  • Experience in Kafka, RabbitMQ or any messaging bus.