DevOps Engineer (ELK / AWS)

at Adesso India

Agency job
5 - 12 yrs
₹10L - ₹25L / yr
Remote only
Skills
Elasticsearch
Ansible
Amazon Web Services (AWS)
DevOps
AWS CloudFormation
Kibana

Overview

adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.

Composed of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.


Job Description

The client's department, DPS (Digital People Solutions), offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which comprises the 40 largest and most liquid companies on the Frankfurt Stock Exchange.

We are seeking talented DevOps Engineers with a focus on the Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack. You will independently handle tasks related to architecture, setup, technical migration, and documentation.

The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for employees and customers alike.


Responsibilities:

Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, APM Server, APM agents, and interface configuration.
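A stage-aware rollout of this kind is commonly expressed as a small Ansible playbook; the sketch below is purely illustrative (role names, host groups, and the pinned version are hypothetical, not taken from this posting):

```yaml
# deploy-elastic.yml - illustrative playbook for rolling out Elastic components.
# The stage is selected via the inventory, e.g.:
#   ansible-playbook -i inventories/qa deploy-elastic.yml
- name: Roll out Elasticsearch and Kibana
  hosts: elastic_nodes
  become: true
  vars:
    es_version: "8.13.4"        # hypothetical pinned version
  roles:
    - role: elasticsearch       # installs and configures the ES service
    - role: kibana              # dashboard/UI layer
    - role: metricbeat          # ships host metrics into the cluster

- name: Roll out APM Server
  hosts: apm_servers
  become: true
  roles:
    - role: apm_server
```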

Create and maintain standard "Default Dashboards" for visualizing metrics from various sources such as Apache web servers, application servers, and databases.

Improve and fix bugs in installation and automation routines.

Monitor CPU usage, security findings, and AWS alerts.

Develop and extend "Default Alerting" for issues like OOM errors, datasource issues, and LDAP errors.
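As a sketch of what such a "Default Alerting" rule can look like, the following Python builds an Elasticsearch Watcher body for OOM errors as a plain dict; the index pattern, schedule, and recipient address are placeholder assumptions, and the body would be registered via the Watcher API (PUT _watcher/watch/<id>):

```python
# Sketch: building an Elasticsearch Watcher definition for OOM alerting.
# All names (index pattern, email address) are hypothetical placeholders.
import json

def oom_watch(index_pattern="app-logs-*", interval="5m", threshold=0):
    """Return a Watcher body that fires when OOM errors appear in logs."""
    return {
        "trigger": {"schedule": {"interval": interval}},
        "input": {
            "search": {
                "request": {
                    "indices": [index_pattern],
                    "body": {
                        # count log lines containing the OOM marker
                        "query": {"match_phrase": {"message": "java.lang.OutOfMemoryError"}},
                        "size": 0,
                    },
                }
            }
        },
        # fire when more than `threshold` matching documents were found
        "condition": {"compare": {"ctx.payload.hits.total": {"gt": threshold}}},
        "actions": {
            "notify_ops": {
                "email": {
                    "to": "ops@example.com",   # placeholder address
                    "subject": "OOM errors detected by {{ctx.watch_id}}",
                }
            }
        },
    }

if __name__ == "__main__":
    print(json.dumps(oom_watch(), indent=2))
```

The same pattern extends to datasource and LDAP error alerts by swapping the query phrase and action.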

Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).

Implement machine learning, uptime monitoring including SLA, JIRA integration, security analysis, anomaly detection, and other useful ELK Stack features.

Integrate data from AWS CloudWatch.
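One common way to pull CloudWatch data into the Elastic landscape is Metricbeat's aws module; the fragment below is a hedged sketch, with the region, period, and namespaces as placeholder assumptions (credentials would come from the usual AWS credential chain):

```yaml
# modules.d/aws.yml - ship CloudWatch metrics into Elasticsearch via Metricbeat
- module: aws
  period: 300s                # CloudWatch publishes most metrics at 5-minute resolution
  metricsets:
    - cloudwatch
  regions:
    - eu-central-1            # hypothetical region
  metrics:
    - namespace: AWS/EC2      # e.g. CPUUtilization, NetworkIn/Out
    - namespace: AWS/RDS
```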

Document all relevant information and train involved personnel in the used technologies.


Requirements:

Experience with Elastic Stack (ELK) components and related technologies.

Proficiency in automation tools like Ansible and CloudFormation.

Strong knowledge of AWS Cloud services.

Experience in creating and managing dashboards and alerts.

Familiarity with IAM roles and rights management.

Ability to document processes and train team members.

Excellent problem-solving skills and attention to detail.

 

Skills & Requirements

Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.


Similar jobs

Hashone Careers
Bengaluru (Bangalore), Pune, Hyderabad
5 - 10 yrs
₹12L - ₹25L / yr
DevOps
Python
CI/CD
Kubernetes
Docker

Job Description

Experience: 5 - 9 years

Location: Bangalore/Pune/Hyderabad

Work Mode: Hybrid (3 days WFO)


Senior Cloud Infrastructure Engineer for Data Platform 


The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.


Key Responsibilities:


Cloud Infrastructure Design & Management

Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.

Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.

Optimize cloud costs and ensure high availability and disaster recovery for critical systems.


Databricks Platform Management

Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.

Automate cluster management, job scheduling, and monitoring within Databricks.

Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.


CI/CD Pipeline Development

Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.

Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.
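As an illustration of such a pipeline, here is a minimal Azure Pipelines sketch with a plan/apply split; stage names and commands are assumptions, and a real pipeline would publish the Terraform plan as an artifact between stages rather than relying on the same agent:

```yaml
# azure-pipelines.yml - illustrative two-stage infrastructure pipeline
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Validate
    jobs:
      - job: TerraformPlan
        steps:
          - script: terraform init && terraform plan -out=tfplan
            displayName: Plan infrastructure changes
  - stage: Deploy
    dependsOn: Validate
    jobs:
      - job: TerraformApply
        steps:
          - script: terraform apply -auto-approve tfplan
            displayName: Apply approved plan
```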


Monitoring & Incident Management

Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.

Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.


Security & Compliance

Enforce security best practices, including identity and access management (IAM), encryption, and network security.

Ensure compliance with organizational and regulatory standards for data protection and cloud operations.


Collaboration & Documentation

Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.

Maintain comprehensive documentation for infrastructure, processes, and configurations.


Required Qualifications

Education: Bachelor’s degree in Computer Science, Engineering, or a related field.


Must Have Experience:

6+ years of experience in DevOps or Cloud Engineering roles.

Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.

Hands-on experience with Databricks for data engineering and analytics.


Technical Skills:

Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.

Strong scripting skills in Python or Bash.

Experience with containerization and orchestration tools like Docker and Kubernetes.

Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).
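A minimal Terraform sketch of the IaC skills listed above, assuming the azurerm provider; resource names and the location are placeholders:

```hcl
# Illustrative Terraform: a resource group and storage account on Azure.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "data_platform" {
  name     = "rg-data-platform-dev"
  location = "westeurope"
}

resource "azurerm_storage_account" "lake" {
  name                     = "stdataplatformdev"   # must be globally unique
  resource_group_name      = azurerm_resource_group.data_platform.name
  location                 = azurerm_resource_group.data_platform.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```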


Soft Skills:

Strong problem-solving and analytical skills.

Excellent communication and collaboration abilities.

CoffeeBeans
Posted by Ariba Khan
Mumbai, Hyderabad
8 - 11 yrs
Up to ₹35L / yr (varies)
Kubernetes
Jenkins
Docker
Ansible
SonarQube

About the role:

We are seeking an experienced DevOps Engineer with deep expertise in Jenkins, Docker, Ansible, and Kubernetes to architect and maintain secure, scalable infrastructure and CI/CD pipelines. This role emphasizes security-first DevOps practices, on-premises Kubernetes operations, and integration with data engineering workflows.


🛠 Required Skills & Experience

Technical Expertise

  • Jenkins (Expert): Advanced pipeline development, DSL scripting, security integration, troubleshooting
  • Docker (Expert): Secure multi-stage builds, vulnerability management, optimisation for Java/Scala/Python
  • Ansible (Expert): Complex playbook development, configuration management, automation at scale
  • Kubernetes (Expert - Primary Focus): On-premises cluster operations, security hardening, networking, storage management
  • SonarQube/Code Quality (Strong): Integration, quality gate enforcement, threshold management
  • DevSecOps (Strong): Security scanning, compliance automation, vulnerability remediation, workload governance
  • Spark ETL/ETA (Moderate): Understanding of distributed data processing, job configuration, runtime behavior
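As an example of the secure multi-stage Docker builds mentioned above, a Java service image might be structured like this sketch (base images and the jar name are placeholders):

```dockerfile
# Stage 1: build with the full JDK and Maven
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline          # cache dependencies in their own layer
COPY src ./src
RUN mvn package -DskipTests

# Stage 2: run on a minimal JRE image as a non-root user
FROM eclipse-temurin:17-jre-alpine
RUN addgroup -S app && adduser -S app -G app
USER app
COPY --from=build /app/target/service.jar /opt/service.jar
ENTRYPOINT ["java", "-jar", "/opt/service.jar"]
```

The runtime stage carries no compiler or build tooling, which shrinks both the image size and the vulnerability surface.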


Core Competencies

  • Deep understanding of DevSecOps principles and security-first automation
  • Strong troubleshooting and problem-solving abilities across complex distributed systems
  • Experience with infrastructure-as-code and GitOps methodologies
  • Knowledge of compliance frameworks and security standards
  • Ability to mentor teams and drive best practice adoption


🎓Qualifications

  • 6-10 years of hands-on DevOps experience
  • Proven track record with Jenkins, Docker, Kubernetes, and Ansible in production environments
  • Experience managing on-premises Kubernetes clusters (bare-metal preferred)
  • Strong background in security hardening and compliance automation
  • Familiarity with data engineering platforms and big data technologies
  • Excellent communication and collaboration skills


🚀 Key Responsibilities

1. CI/CD Pipeline Architecture & Security

  • Design, implement, and maintain enterprise-grade CI/CD pipelines in Jenkins with embedded security controls:
  • Build greenfield pipelines and enhance/stabilize existing pipeline infrastructure
  • Diagnose and resolve build, test, and deployment failures across multi-service environments
  • Integrate security gates, compliance checks, and automated quality controls at every pipeline stage
  • Manage and optimize SonarQube and static code analysis tooling:
  • Enforce code quality and security scanning standards across all services
  • Maintain organizational coding standards, vulnerability thresholds, and remediation workflows
  • Automate quality gates as integral components of CI/CD processes
  • Engineer optimized Docker images for Java, Scala, and Python applications:
  • Implement multi-stage builds, layer optimization, and minimal base images
  • Conduct image vulnerability scanning and enforce compliance policies
  • Apply containerization best practices for security and performance
  • Develop comprehensive Ansible automation:
  • Create modular, reusable, and secure playbooks for configuration management
  • Automate environment provisioning and application lifecycle operations
  • Maintain infrastructure-as-code standards and version control
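The pipeline responsibilities above can be sketched as a declarative Jenkinsfile; the stage layout and the SonarQube server name ('sonar') are assumptions, and withSonarQubeEnv/waitForQualityGate come from the Jenkins SonarQube Scanner plugin:

```groovy
// Jenkinsfile - illustrative pipeline with an enforced quality gate
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B package -DskipTests' }
        }
        stage('Static Analysis') {
            steps {
                withSonarQubeEnv('sonar') {      // server configured in Jenkins
                    sh 'mvn -B sonar:sonar'
                }
            }
        }
        stage('Quality Gate') {
            steps {
                timeout(time: 10, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true   // fail fast on gate breach
                }
            }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps { sh 'ansible-playbook -i inventories/prod deploy.yml' }
        }
    }
}
```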

2. Kubernetes Platform Operations & Security

  • Lead complete lifecycle management of on-premises/bare-metal Kubernetes clusters:
  • Cluster provisioning, version upgrades, node maintenance, and capacity planning
  • Configure and manage networking (CNI), persistent storage solutions, and ingress controllers
  • Troubleshoot workload performance, resource constraints, and reliability issues
  • Implement and enforce Kubernetes security best practices:
  • Design and manage RBAC policies, service account isolation, and least-privilege access models
  • Apply Pod Security Standards, network policies, secrets encryption, and certificate lifecycle management
  • Conduct cluster hardening, security audits, monitoring, and policy governance
  • Provide technical leadership to development teams:
  • Guide secure deployment patterns and containerized application best practices
  • Establish workload governance frameworks for distributed systems
  • Drive adoption of security-first mindsets across engineering teams
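A least-privilege RBAC setup of the kind described above might look like this sketch; the namespace, role, and service-account names are placeholders:

```yaml
# Illustrative namespaced RBAC: a CI service account that may only
# update Deployments in the 'apps' namespace, nothing cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: apps
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-binding
  namespace: apps
subjects:
  - kind: ServiceAccount
    name: jenkins-ci
    namespace: apps
roleRef:
  kind: Role
  name: ci-deployer
  apiGroup: rbac.authorization.k8s.io
```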

3. Data Engineering Support

  • Collaborate with data engineering teams on Spark-based workloads:
  • Support deployment and operational tuning of Spark ETL/ETA jobs
  • Understand cluster integration, job orchestration, and performance optimization
  • Debug and troubleshoot Spark workflow issues in production environments
Biofourmis
Posted by Roopa Ramalingamurthy
Remote only
5 - 10 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

Job Summary:

We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.


Responsibilities:

  • Deploy, configure, and troubleshoot various infrastructure and application environments
  • Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
  • Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
  • Collaborate with application teams on infrastructure design and issues
  • Architect solutions that optimally meet business needs
  • Implement CI/CD pipelines and automate deployment processes
  • Disaster recovery and infrastructure restoration
  • Restore/Recovery operations from backups
  • Automate routine tasks
  • Execute company initiatives in the infrastructure space
  • Expertise with observability tools like ELK, Prometheus, Grafana, Loki
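With a Prometheus/Grafana stack like the one listed above, alerting is typically defined as rules; this is an illustrative sketch with placeholder thresholds and labels:

```yaml
# Illustrative Prometheus alerting rule: flag sustained high CPU on any node.
groups:
  - name: node-health
    rules:
      - alert: HighCPUUsage
        # CPU busy % = 100 minus the idle rate, averaged per instance
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 85
        for: 10m                      # only fire if sustained for 10 minutes
        labels:
          severity: warning
        annotations:
          summary: "CPU above 85% for 10m on {{ $labels.instance }}"
```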


Qualifications:

  • Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
  • Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
  • Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
  • Experience in architecting solutions that optimally meet business needs
  • Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
  • Strong understanding of system concepts like high availability, scalability, and redundancy
  • Ability to work with application teams on infrastructure design and issues
  • Excellent problem-solving and troubleshooting skills
  • Experience with automation of routine tasks
  • Good communication and interpersonal skills


Education and Experience:

  • Bachelor's degree in Computer Science or a related field
  • 5 to 10 years of experience as a DevOps Engineer or in a related role
  • Experience with observability tools like ELK, Prometheus, Grafana


Working Conditions:

The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.


Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.

Ekloud INC
Remote only
5 - 8 yrs
₹12L - ₹20L / yr
Salesforce
DevOps
Gearset
Copado
Flosum

Salesforce DevOps/Release Engineer

 

Resource type - Salesforce DevOps/Release Engineer

Experience - 5 to 8 years

Norms - PF & UAN mandatory

Resource Availability - Immediate or Joining time in less than 15 days

 

Job - Remote

Shift timings - UK timing (1pm to 10 pm or 2pm to 11pm)

 

Required Experience:

  • 5–6 years of hands-on experience in Salesforce DevOps, release engineering, or deployment management.
  • Strong expertise in Salesforce deployment processes, including CI/CD pipelines.
  • Significant hands-on experience with at least two of the following tools: Gearset, Copado, Flosum.
  • Solid understanding of Salesforce architecture, metadata, and development lifecycle.
  • Familiarity with version control systems (e.g., Git) and agile methodologies

Key Responsibilities:



  • Design, implement, and manage CI/CD pipelines for Salesforce deployments using Gearset, Copado, or Flosum.
  • Automate and optimize deployment processes to ensure efficient, reliable, and repeatable releases across Salesforce environments.
  • Collaborate with development, QA, and operations teams to gather requirements and ensure alignment of deployment strategies.
  • Monitor, troubleshoot, and resolve deployment and release issues.
  • Maintain documentation for deployment processes and provide training on best practices.
  • Stay updated on the latest Salesforce DevOps tools, features, and best practices.

Technical Skills:

  • Deployment Tools: Hands-on with Gearset, Copado, Flosum for Salesforce deployments
  • CI/CD: Building and maintaining pipelines, automation, and release management
  • Version Control: Proficiency with Git and related workflows
  • Salesforce Platform: Understanding of metadata, SFDX, and environment management
  • Scripting: Familiarity with scripting (e.g., Shell, Python) for automation (preferred)
  • Communication: Strong written and verbal communication skills

Preferred Qualifications:

 

Bachelor’s degree in Computer Science, Information Technology, or related field.

 

Certifications:

 

Salesforce certifications (e.g., Salesforce Administrator, Platform Developer I/II) are a plus.

Experience with additional DevOps tools (Jenkins, GitLab, Azure DevOps) is beneficial.

Experience with Salesforce DX and deployment strategies for large-scale orgs. 

Building video-calling as a service
Agency job
via Qrata by Blessy Fernandes
Remote only
3 - 5 yrs
₹30L - ₹40L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
Location - Bangalore (Remote)
Experience - 2+ Years
Requirements:

● Should have at least 2+ years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Having experience with NodeJS/TypeScript/AWS is a bonus
● Having experience with WebRTC is a bonus
TruScholar
Amravati
3 - 5 yrs
₹6L - ₹10L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
Job Title: Kubernetes and Cloud Engineer
Location: Amravati, Maharashtra 444605, INDIA
We are looking for a Kubernetes Cloud Engineer with experience in deployment and administration of Hyperledger Fabric blockchain applications. You will work on the Truscholar blockchain-based platform (both Hyperledger Fabric and Indy versions), but if you combine rich Kubernetes experience with strong DevOps skills, we will still be keen on talking to you.
Responsibilities
● Deploy Hyperledger Fabric (BEVEL SETUP) applications on Kubernetes
● Monitor the Kubernetes system
● Implement and improve monitoring and alerting
● Build and maintain highly available blockchain systems on Kubernetes
● Implement an auto-scaling system for our Kubernetes nodes
● Design and develop the SSI & ZKP solution in detail
● Act as a liaison between the Infra, Security, Business & QA teams for end-to-end integration and DevOps pipeline adoption
Technical Skills
● Experience with AWS EKS Kubernetes Service, Container Instances, Container Registry and microservices (or similar experience on AZURE)
● Hands on with automation tools like Terraform, Ansible
● Ability to deploy Hyperledger Fabric in Kubernetes environment is highly desirable
● Hyperledger Fabric/INDY (or other blockchain) development, architecture, integration, application experience
● Distributed consensus systems such as Raft
● Continuous Integration and automation skills including GitLab Actions
● Microservices architectures, Cloud-Native architectures, Event-driven architectures, APIs, Domain Driven Design
● Being a Certified Hyperledger Fabric Administrator would be an added advantage
Skill Set
● Understanding of Blockchain Networks
● Docker Products
● Amazon Web Services (AWS)
● Go (Programming Language)
● Hyperledger Fabric/INDY
● Gitlab
● Kubernetes
● Smart Contracts
Who We are:
Truscholar is a state-of-the-art digital credential issuance and verification platform running as blockchain infrastructure, as an instance of the Hyperledger Indy framework. Our solution helps universities, institutes, edtech and e-learning platforms, professional training academies, corporate employee training and certification programmes, and event management organisations running exhibitions, trade fairs, sporting events, seminars and webinars issue credentials to their learners, employees or participants. The digital certificates, badges, or transcripts generated are immutable, shareable and verifiable, thereby building an individual's Knowledge Passport. Our platform has been architected to function as a single Self-Sovereign Identity Wallet for the next decade, keeping personal data privacy guidelines in mind.
Why Now?
The Startup venture, which was conceived as an idea while two founders were pursuing a Blockchain Technology Management Course, has received tremendous applause and appreciation from mentors and investors, and has been able to roll out the product within a year and comfortably complete the product market fit stage. Truscholar has entered a growth stage, and is searching for young, creative, and bright individuals to join the team and make Truscholar a preferred global product within the next 36 months.
Our Work Culture:
With our innovation, open communication, agile thought process, and will to achieve, we are a very passionate group of individuals driving the company's growth. As a result of their commitment to the company's development narrative, we believe in offering a work environment with clear metrics to support workers' individual progress and networking within the fraternity.
Our Vision:
To become the intel inside the education world by powering all academic credentials across the globe and assisting students in charting their digital academic passports.
Advantage Location Amravati, Maharashtra, INDIA
Amid businesses in India realising the advantages of the work-from-home (WFH) concept in the backdrop of the Coronavirus pandemic, there has been a major shift of the workforce towards tier-2 cities.
Amravati, also called Ambanagri, is a city of immense cultural and religious importance and a beautiful tier-2 city of Maharashtra; it is also called the cultural capital of the Vidarbha region. The cost of living is lower, the work-life balance is better, the air is more breathable, there are fewer traffic bottlenecks, and housing remains affordable compared to the congested metro cities of India. We firmly believe that tier-2 cities are the future talent hubs and job-creation centres. Our conviction has been borne out by the fact that tier-2 cities have made great strides in salary levels thanks to substantial investments in building excellent physical and social infrastructure.
Samco Securities Limited
Posted by Careers Samco
Mumbai
1 - 6 yrs
₹2L - ₹4L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
  • Installation, configuration management, performance tuning, and monitoring of web, app, and database servers.
  • Installation, setup, and management of Java, PHP, and NodeJS stacks with software load balancers.
  • Install, set up, and administer MySQL, Mongo, Elasticsearch & PostgreSQL DBs.
  • Install, set up, and maintain monitoring solutions like Nagios and Zabbix.
  • Design and implement DevOps processes for new projects following the department's objectives of automation.
  • Collaborate on projects with development teams to provide recommendations, support and guidance.
  • Work towards full automation, monitoring, virtualization and containerization.
  • Create and maintain tools for deployment, monitoring and operations.
  • Automation of processes in a scalable and easy to understand way that can be detailed and understood through documentation.
  • Develop and deploy software that will help drive improvements towards the availability, performance, efficiency, and security of services.
  • Maintain 24/7 availability for responsible systems and be open to on-call rotation.
HOP Financial Services
Posted by Shreya Dubey
Bengaluru (Bangalore)
3 - 4 yrs
₹8L - ₹11L / yr
Kubernetes
Docker
DevOps
Jenkins
Chef

About Hop:


We are a London, UK based FinTech startup with a subsidiary in India. Hop is working towards building the next generation digital banking platform for seamless and economical currency exchange, with technology at the crux of it. In a technology driven era, many financial services platforms still lack the customer experience and are cumbersome to use. Hop aims at building a ‘state of the art’ tech-centric, customer focused solution.


moneyHOP is India’s first cross-border neo-bank providing millennials the ability to ‘Send’ & ‘Spend’ conveniently and economically across the globe using HOPRemit (An online remittance portal) and HOP app + Card (A multi-currency bank account).


This position is a crucially important position in the firm and the person hired will have the liberty to drive the product and provide direction in line with business needs.


Website: https://moneyhop.co/



About Individual

 

Looking for an enthusiastic individual who is passionate about technology and has worked with either a start-up or a blue-chip firm in the past.

 

The candidate needs to be a multi-tasker, highly self-motivated, self-starter and have the ability to work in a high stress environment. He/she should be tech savvy and willing to embrace new technology comfortably.

 

Ideally, the candidate should have experience working with the technology stack in the scalable and high growth mobile application software.

 

General Skills

 

  • 3-4 years of experience in DevOps.
  • Bachelor's degree in Computer Science, Information Science, or equivalent practical experience.
  • Exposure to Behaviour Driven Development and experience in programming and testing.
  • Excellent verbal and written communication skills.
  • Good time management and organizational skills.
  • Dependability
  • Accountability and Ownership
  • Right attitude and growth mindset
  • Trustworthiness
  • Ability to embrace new technologies
  • Ability to get work done
  • Should have excellent analytical and troubleshooting skills.

Technical Skills

 

  • Work with developer teams with a focus on automating build and deployment using tools such as Jenkins.
  • Implement CI/CD in projects (GitLabCI preferred).
  • Enable software build and deploy.
  • Provisioning for both day-to-day operations and automation using tools, e.g., Ansible, Bash.
  • Plan and write infrastructure as code using Terraform.
  • Monitoring and ITSM automation: incident creation from alerts using licensed and open-source tools.
  • Manage credentials for AWS cloud servers, GitHub repos, Atlassian Cloud services, Jenkins, OpenVPN, and the developer environment.
  • Building environments for unit tests, integration tests, system tests, and acceptance tests using Jenkins.
  • Create and spin off resource instances.
  • Experience implementing CI/CD.
  • Experience with infrastructure automation solutions (Ansible, Chef, Puppet, etc.).
  • Experience with AWS.
  • Should have expert Linux and Network administration skills to troubleshoot and trace symptoms back to the root cause.
  • Knowledge of application clustering / load balancing concepts and technologies.
  • Demonstrated ability to think strategically about developing solution strategies, and deliver results.
  • Good understanding of cloud-native application design patterns and practices in AWS.

Day-to-Day requirements


  • Work with the developer team to enhance the existing CI/CD pipeline.
  • Adopt industry best practices to set up a UAT and prod environment for scalability.
  • Manage the AWS resources including IAM users, access control, billing etc.
  • Work with the test automation engineer to establish a CI/CD pipeline.
  • Make replication of environments easy to implement.
  • Enable efficient software deployment.
A Health Tech Company headquartered in Toronto
Agency job
via Multi Recruit by Manjunath Multirecruit
Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹25L / yr
Solution architecture
AWS Lambda
Amazon Web Services (AWS)
Amazon EC2
  • Degree in Computer Science or related discipline.
  • AWS Certified Solutions Architect certification required
  • 5+ years of architecture, design, implementation, and support of highly complex solutions (i.e. having an architectural sense for ensuring security and compliance, availability, reliability, etc.)
  • Deep technical experience in serverless AWS infrastructure
  • Understanding of cloud automation and orchestration tools and techniques including git, terraform, ARM or equivalent
  • Create Technical Design documents, understand technical designs and translate into the application requirements.
  • Exercise independent judgment in evaluating alternative technical solutions
  • Participate in code and design review process
  • Write unit test cases for quality check of the deliverables
  • Ability to work closely with others in a team environment as well as independently
  • Proven ability to problem solve and troubleshoot
  • Excellent verbal and written communication skills and the ability to interact professionally with a diverse group, executives, managers, and subject matter experts
  • Excellent English communication skills are required

We are looking for a Solution Architect with at least 5 years’ experience working on the following to join our growing team:

 

  • AWS
PostgreSQL
  • EC2 on AWS
  • Cognito
  • and most importantly Serverless

 

You will need a strong technical AWS background focused on architecting serverless (e.g., Lambda) AWS infrastructure.

PRAXINFO
Posted by Umang Kathiyara
Ahmedabad
1 - 3 yrs
₹4L - ₹6L / yr
DevOps
Amazon Web Services (AWS)
Docker
Kubernetes
ECS

PRAXINFO Hiring DevOps Engineer.

 

Position : DevOps Engineer

Job Location : C.G.Road, Ahmedabad

EXP : 1-3 Years

Salary : 40K - 50K

 

Required skills:

⦿ Good understanding of cloud infrastructure (AWS, GCP etc)

⦿ Hands on with Docker, Kubernetes or ECS

⦿ Ideally strong Linux background (RHCSA , RHCE)

⦿ Good understanding of monitoring systems (Nagios, etc.), logging solutions (Elasticsearch, etc.)

⦿ Microservice architectures

⦿ Experience with distributed systems and highly scalable systems

⦿ Demonstrated history in automating operations processes via services and tools (Puppet, Ansible, etc.)

⦿ Systematic problem-solving approach coupled with a strong sense of ownership and drive.

 

If anyone is interested, please share your resume at hiring at praxinfo dot com!

 

#linux #devops #engineer #kubernetes #docker #containerization #python #shellscripting #git #jenkins #maven #ant #aws #RHCE #puppet #ansible
