
11+ OPA Jobs in India

Apply to 11+ OPA Jobs on CutShort.io. Find your next job, effortlessly. Browse OPA Jobs and apply today!

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.
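To make the policy-as-code idea concrete, here is a minimal Python sketch. It is illustrative only: a real setup would run OPA/Rego, Checkov, or tfsec against actual Terraform plan JSON, and the simplified resource schema and the two policies below are assumptions for the example.

```python
# Minimal policy-as-code sketch: flag misconfigured resources in a parsed
# Terraform plan (already loaded into a list of dicts). The schema and
# policies are simplified illustrations, not any real tool's format.

def find_violations(resources):
    """Return names of resources that break the example policies."""
    violations = []
    for res in resources:
        # Policy 1: S3 buckets must not use a public ACL.
        if res["type"] == "aws_s3_bucket" and res.get("acl") == "public-read":
            violations.append(res["name"])
        # Policy 2: security groups must not open SSH to the world.
        if res["type"] == "aws_security_group":
            for rule in res.get("ingress", []):
                if rule.get("port") == 22 and "0.0.0.0/0" in rule.get("cidr_blocks", []):
                    violations.append(res["name"])
    return violations

plan = [
    {"type": "aws_s3_bucket", "name": "logs", "acl": "private"},
    {"type": "aws_s3_bucket", "name": "assets", "acl": "public-read"},
    {"type": "aws_security_group", "name": "bastion",
     "ingress": [{"port": 22, "cidr_blocks": ["0.0.0.0/0"]}]},
]

print(find_violations(plan))  # -> ['assets', 'bastion']
```

In a CI pipeline, a non-empty violations list would fail the build before the plan is ever applied, which is the "automated remediation" gate the bullet describes.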

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.
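As a toy illustration of pipeline secrets scanning: real scanners such as gitleaks ship hundreds of rules plus entropy analysis, so the two patterns below are deliberately simplified assumptions.

```python
import re

# Toy secrets scanner in the spirit of CI/CD secrets scanning.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text):
    """Return the sorted names of rules that match anywhere in the text."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

diff = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\n'
print(scan(diff))  # -> ['aws_access_key_id']
```

A pipeline step would run a scan like this over each commit diff and fail the build on any match, keeping credentials out of the repository history.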

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.
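A hedged sketch of what a CIS-flavoured pod-spec audit looks for: the three rules below are a small illustrative subset, not the actual benchmark, and the pod-spec dict stands in for parsed Kubernetes YAML.

```python
# Illustrative pod-spec audit: inspect a pod spec (as a dict, e.g. parsed
# from YAML) for a few hardening settings. A simplified sketch only.

def audit_pod(spec):
    """Return human-readable findings for insecure container settings."""
    findings = []
    for c in spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            findings.append(f"{c['name']}: privileged container")
        if not sc.get("runAsNonRoot"):
            findings.append(f"{c['name']}: may run as root")
        if sc.get("allowPrivilegeEscalation", True):
            findings.append(f"{c['name']}: privilege escalation allowed")
    return findings

pod = {"containers": [{
    "name": "web",
    "securityContext": {"runAsNonRoot": True, "allowPrivilegeEscalation": False},
}]}
print(audit_pod(pod))  # -> [] : this spec passes the sketch's checks
```

In practice the same intent is enforced cluster-side with Pod Security Standards or an admission controller rather than an ad-hoc script.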

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
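The drift-detection bullet can be sketched with a toy standardised mean-shift check. Real drift detectors use PSI, KS tests, or similar; this sketch, with made-up numbers and a made-up threshold, only illustrates the idea of comparing live data against a training baseline.

```python
from statistics import mean, stdev

def mean_shift_drift(baseline, window, threshold=3.0):
    """Flag drift when the window mean is more than `threshold`
    baseline standard errors away from the baseline mean."""
    se = stdev(baseline) / len(baseline) ** 0.5
    z = abs(mean(window) - mean(baseline)) / se
    return z > threshold

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8, 10.1, 10.4]
print(mean_shift_drift(baseline, [10.1, 9.9, 10.3]))   # stable window
print(mean_shift_drift(baseline, [14.8, 15.2, 15.0]))  # shifted window
```

A monitoring job would run a check like this per feature on a schedule and push the boolean (or the underlying score) to Grafana/Prometheus/CloudWatch for alerting.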


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, runbooks, and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Webnyay AI
Posted by Ishita Jindal
Noida
3 - 6 yrs
₹4L - ₹10L / yr
CI/CD
Google Cloud Platform (GCP)

Role Overview

We are looking for a hands-on DevOps Engineer who will own infrastructure, deployment, reliability, and cloud cost optimisation. You will work closely with backend, AI, and product teams to ensure the platform is secure, scalable, and always available.


This is a high-ownership role with real impact on uptime, performance, and developer velocity.


Key Responsibilities


Infrastructure & Cloud Management

  • Design, deploy, and manage cloud infrastructure (GCP preferred; AWS acceptable)
  • Manage Compute Engine, Cloud Run, Kubernetes (GKE), Cloud SQL, storage, and networking
  • Ensure high availability, fault tolerance, and scalability


CI/CD & Deployment

  • Build and maintain CI/CD pipelines for backend and AI services
  • Automate deployments, rollbacks, and environment management (dev, staging, prod)
  • Improve release reliability and deployment speed


Monitoring, Reliability & Security

  • Set up monitoring, alerting, and logging (uptime, CPU, memory, errors, latency)
  • Proactively identify and resolve performance bottlenecks and incidents
  • Implement security best practices: IAM, secrets management, backups, and access controls


Cost Optimisation & Performance

  • Monitor and optimise cloud costs (compute, databases, storage)
  • Implement autoscaling, right-sizing, and resource optimisation
  • Work with engineering teams to balance performance with cost efficiency
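The right-sizing idea behind these bullets can be sketched in a few lines. The machine-type names and the 30% utilisation threshold below are made-up assumptions for illustration, not a recommendation engine.

```python
# Sketch of right-sizing: given observed CPU utilisation for an instance,
# recommend one size down when sustained usage is low.

SIZES = ["e2-small", "e2-medium", "e2-standard-2"]  # ascending capacity

def rightsize(current, p95_cpu, low=0.30):
    """Recommend one size down when 95th-percentile CPU stays under `low`."""
    i = SIZES.index(current)
    if p95_cpu < low and i > 0:
        return SIZES[i - 1]
    return current

print(rightsize("e2-standard-2", p95_cpu=0.22))  # downsize candidate
print(rightsize("e2-standard-2", p95_cpu=0.71))  # keep as-is
```

Real cost tooling adds memory, network, and burst behaviour to this picture, but the core loop is the same: measure sustained utilisation, then step capacity down until headroom is just sufficient.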


Required Qualifications & Skills

  • 3–6 years of hands-on DevOps / Cloud Engineering experience
  • Strong experience with GCP (or AWS with willingness to transition)
  • Experience with Docker, Kubernetes, and containerised workloads
  • Experience with CI/CD tools (GitHub Actions, GitLab CI, or similar)
  • Ability to troubleshoot production issues under pressure
  • Experience with AI/ML workloads and GPU-based deployments


NetSquare Solutions
Posted by Aishwarya M
Remote only
3 - 15 yrs
Best in industry
Ansible
CI/CD
GitLab
Jenkins
Bash
+1 more

We are seeking a skilled DevOps Engineer with 3+ years of experience to join our team on a permanent work-from-home basis.


Responsibilities:

  • Develop and maintain infrastructure using Ansible.
  • Write Ansible playbooks.
  • Implement CI/CD pipelines.
  • Manage GitLab repositories.
  • Monitor and troubleshoot infrastructure issues.
  • Ensure security and compliance.
  • Document best practices.


Qualifications:

  • Proven DevOps experience.
  • Expertise with Ansible and CI/CD pipelines.
  • Proficient with GitLab.
  • Strong scripting skills.
  • Excellent problem-solving and communication skills.


Regards,

Aishwarya M

Associate HR

Adesso India

Agency job
via HashRoot by Deepak S
Remote only
5 - 12 yrs
₹10L - ₹25L / yr
Elasticsearch
Ansible
Amazon Web Services (AWS)
DevOps
AWS CloudFormation
+1 more

Overview

adesso India specialises in the optimisation of core business processes for organisations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.

Our team of industry experts and experienced technology professionals ensures that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.


Job Description

The client’s department, Digital People Solutions (DPS), offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which comprises the 40 largest and most liquid companies on the Frankfurt Stock Exchange.

We are seeking talented DevOps Engineers with a focus on the Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack (ELK). You will independently handle tasks related to architecture, setup, technical migration, and documentation.

The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for the customer’s employees and customers alike.


Responsibilities:

Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, APM server, APM agents, and interface configuration.

Create and develop regular "Default Dashboards" for visualizing metrics from various sources such as the Apache web server, application servers, and databases.

Improve and fix bugs in installation and automation routines.

Monitor CPU usage, security findings, and AWS alerts.

Develop and extend "Default Alerting" for issues like OOM errors, datasource issues, and LDAP errors.

Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).

Implement ELK Stack features such as machine learning, uptime monitoring (including SLAs), JIRA integration, security analysis, anomaly detection, and other useful capabilities.

Integrate data from AWS CloudWatch.

Document all relevant information and train involved personnel in the used technologies.
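The "Default Alerting" responsibility above (OOM errors, datasource issues, LDAP errors) can be sketched as a simple log-pattern classifier. The patterns are illustrative assumptions; a production setup would use Elastic's alerting rules on indexed data instead.

```python
import re

# Classify raw log lines into alert categories before/alongside shipping
# them to Elasticsearch/Kibana. Patterns are a simplified sketch.
ALERT_RULES = {
    "oom": re.compile(r"OutOfMemoryError|oom-killer", re.IGNORECASE),
    "datasource": re.compile(r"datasource|connection pool exhausted", re.IGNORECASE),
    "ldap": re.compile(r"LDAP(?: bind)? (?:error|failure)", re.IGNORECASE),
}

def classify(line):
    """Return the list of alert categories a log line triggers."""
    return [name for name, rx in ALERT_RULES.items() if rx.search(line)]

print(classify("java.lang.OutOfMemoryError: Java heap space"))  # -> ['oom']
print(classify("WARN LDAP bind error for cn=admin"))            # -> ['ldap']
```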


Requirements:

Experience with Elastic Stack (ELK) components and related technologies.

Proficiency in automation tools like Ansible and CloudFormation.

Strong knowledge of AWS Cloud services.

Experience in creating and managing dashboards and alerts.

Familiarity with IAM roles and rights management.

Ability to document processes and train team members.

Excellent problem-solving skills and attention to detail.

 

Skills & Requirements

Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.

Eclat Engineering Pvt. Ltd.
Posted by Amrita Panigrahy
Remote only
3 - 5 yrs
₹18L - ₹25L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more

About The Role:

The products and services of Eclat Engineering Pvt. Ltd. are used by some of the leading institutions in India and abroad, and demand for them is growing rapidly. We are looking for a capable and dynamic Senior DevOps Engineer to help set up, maintain, and scale our infrastructure operations. This individual will have the challenging responsibility of running our IT infrastructure and delivering customer services at stringent international standards of service quality, leveraging the latest IT tools to automate and streamline the delivery of our services while implementing industry-standard processes and knowledge management.


Roles & Responsibilities:

- Infrastructure and Deployment Automation: Design, implement, and maintain automation for infrastructure provisioning and application deployment. Own the CI/CD pipelines and ensure they are efficient, reliable, and scalable.

- System Monitoring and Performance: Take ownership of monitoring systems and ensure the health and performance of the infrastructure. Proactively identify and address performance bottlenecks and system issues.

- Cloud Infrastructure Management: Manage cloud infrastructure (e.g., AWS, Azure, GCP) and optimize resource usage. Implement cost-saving measures while maintaining scalability and reliability.

- Configuration Management: Manage configuration management tools (e.g., Ansible, Puppet, Chef) to ensure consistency across environments. Automate configuration changes and updates.

- Security and Compliance: Own security policies, implement best practices, and ensure compliance with industry standards. Lead efforts to secure infrastructure and applications, including patch management and access controls.

- Collaboration with Development and Operations Teams: Foster collaboration between development and operations teams, promoting a DevOps culture. Be the go-to person for resolving cross-functional infrastructure issues and improving the development process.

- Disaster Recovery and Business Continuity: Develop and maintain disaster recovery plans and procedures. Ensure business continuity in the event of system failures or other disruptions.

- Documentation and Knowledge Sharing: Create and maintain comprehensive documentation for configurations, processes, and best practices. Share knowledge and mentor junior team members.

- Technical Leadership and Innovation: Stay up-to-date with industry trends and emerging technologies. Lead efforts to introduce new tools and technologies that enhance DevOps practices.

- Problem Resolution and Troubleshooting: Be responsible for diagnosing and resolving complex issues related to infrastructure and deployments. Implement preventive measures to reduce recurring problems.


Requirements:

● B.E / B.Tech / M.E / M.Tech / MCA / M.Sc. IT (if not, the candidate should be able to demonstrate the required skills)

● Overall 3+ years of experience in DevOps and Cloud operations specifically in AWS.

● Experience with Linux administration

● Experience with microservice architecture, containers, Kubernetes, and Helm is a must

● Experience in Configuration Management preferably Ansible

● Experience in Shell Scripting is a must

● Experience in developing and maintaining CI/CD processes using tools like Gitlab, Jenkins

● Experience in logging, monitoring and analytics

● An understanding of writing Infrastructure as Code using tools like Terraform

● Preferences - AWS, Kubernetes, Ansible


Must Have:

● Knowledge of AWS Cloud Platform.

● Good experience with microservice architecture, Kubernetes, helm and container-based technologies

● Hands-on experience with Ansible.

● Should have experience in working and maintaining CI/CD Processes.

● Hands-on experience in version control tools like GIT.

● Experience with monitoring tools such as Cloudwatch/Sysdig etc.

● Sound experience in administering Linux servers and Shell Scripting.

● Should have a good understanding of IT security and have the knowledge to secure production environments (OS and server software).

Wheelseye Technology
5 recruiters
Posted by Mohit Sharma
Gurugram
4 - 8 yrs
₹15L - ₹40L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Terraform
+3 more

Requirements

Core skills:

● Strong background in Linux/Unix administration and troubleshooting

● Experience with AWS (ideally including some of the following: VPC, Lambda, EC2, ElastiCache, Route53, SNS, CloudWatch, CloudFront, Redshift, OpenSearch, ELK, etc.)

● Experience with infra automation and orchestration tools including Terraform, Packer, Helm, Ansible

● Hands-on experience with container technologies like Docker and Kubernetes/EKS, and with GitLab and Jenkins as pipelines

● Experience in one or more of Groovy, Perl, Python, Go, or scripting experience in Shell

● Good understanding of Continuous Integration (CI) and Continuous Deployment (CD) pipelines using tools like Jenkins, FlexCD, ArgoCD, Spinnaker, etc.

● Working knowledge of key-value stores and database technologies (SQL and NoSQL), e.g. MongoDB, MySQL

● Experience with application monitoring tools like Prometheus and Grafana, and APM tools like New Relic, Datadog, Pinpoint

● Good exposure to middleware components like ELK, Redis, Kafka, and IoT-based systems including Redis, New Relic, Akamai, Apache/Nginx, ELK, Grafana, Prometheus, etc.


Good to have:

● Prior experience in logistics, payments, and IoT-based applications

● Experience in unmanaged MongoDB clusters, automation & operations, analytics

● Writing procedures for backup and disaster recovery

Core Experience

● 3-5 years of hands-on DevOps experience

● 2+ years of hands-on Kubernetes experience

● 3+ years of cloud platform experience with special focus on Lambda, R53, SNS, CloudFront, CloudWatch, Elastic Beanstalk, RDS, OpenSearch, EC2, and security tools

● 2+ years of scripting experience in Python/Go, shell

● 2+ years of familiarity with CI/CD, Git, IaC, monitoring, and logging tools

RaRa Now
3 recruiters
Posted by Puneeta Mishra
Remote only
4 - 8 yrs
₹7L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+4 more

About RaRa Delivery

Not just a delivery company…

RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data-driven logistics.

RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on ‘one-to-one’ deliveries, the company has developed proprietary, real-time batching tech to do ‘many-to-many’ deliveries within a few hours. RaRa is already in partnership with some of the top e-commerce players in Indonesia, like Blibli, Sayurbox, Kopi Kenangan, and many more.

We are a distributed team with the company headquartered in Singapore 🇸🇬 , core operations in Indonesia 🇮🇩 and technology team based out of India 🇮🇳

Future of eCommerce Logistics.

  • A data-driven logistics company bringing a same-day delivery revolution to Indonesia 🇮🇩
  • Revolutionising delivery as an experience
  • Empowering D2C Sellers with logistics as the core technology
About the Role
  • Build and maintain CI/CD tools and pipelines.
  • Designing and managing highly scalable, reliable, and fault-tolerant infrastructure & networking that forms the backbone of distributed systems at RaRa Delivery.
  • Continuously improve code quality, product execution, and customer delight.
  • Communicate, collaborate and work effectively across distributed teams in a global environment.
  • Strengthen teams across the product by sharing your knowledge.
  • Contribute to improving team relatedness, and help build a culture of camaraderie.
  • Continuously refactor applications to ensure high-quality design
  • Pair with team members on functional and non-functional requirements and spread design philosophy and goals across the team
  • Excellent Bash and scripting fundamentals, with hands-on scripting in programming languages such as Python, Ruby, Golang, etc.
  • Good understanding of distributed system fundamentals and ability to troubleshoot issues in a larger distributed infrastructure
  • Working knowledge of the TCP/IP stack, internet routing, and load balancing
  • Basic understanding of cluster orchestrators and schedulers (Kubernetes)
  • Deep knowledge of Linux as a production environment and container technologies (e.g., Docker), Infrastructure as Code such as Terraform, and K8s administration at large scale.
  • Have worked on production distributed systems and have an understanding of microservices architecture, RESTful services, CI/CD.
SquareShiftco
6 recruiters
Posted by Gowtham M
Remote, Chennai
7 - 15 yrs
₹15L - ₹30L / yr
DevOps
Docker
Kubernetes
CI/CD
Ansible
+4 more

Requirements

You will make an ideal candidate if you have: 

  • Experience of building a range of Services in a Cloud Service provider

  • Expert understanding of DevOps principles and Infrastructure as a Code concepts and techniques

  • Strong understanding of CI/CD tools (Jenkins, Ansible, GitHub)

  • Managed infrastructure involving 50+ hosts/networks

  • 3+ years of Kubernetes experience & 5+ years of experience in Native services such as Compute (virtual machines), Containers (AKS), Databases, DevOps, Identity, Storage & Security

  • Experience in engineering solutions on cloud foundation platform using Infrastructure As Code methods (eg. Terraform)

  • Security and Compliance, e.g. IAM and cloud compliance/auditing/monitoring tools

  • Customer/stakeholder focus. Ability to build strong relationships with Application teams, cross functional IT and global/local IT teams

  • Good leadership and teamwork skills - Works collaboratively in an agile environment

  • Operational effectiveness - delivers solutions that align to approved design patterns and security standards

  • Excellent skills in at least one of the following: Python, Ruby, Java, JavaScript, Go, Node.js

  • Experienced in full automation and configuration management

  • A track record of constantly looking for ways to do things better and an excellent understanding of the mechanism necessary to successfully implement change

  • Set and achieved challenging short, medium and long term goals which exceeded the standards in their field

  • Excellent written and spoken communication skills; an ability to communicate with impact, ensuring complex information is articulated in a meaningful way to wide and varied audiences

  • Built effective networks across business areas, developing relationships based on mutual trust and encouraging others to do the same

  • A successful track record of delivering complex projects and/or programmes, utilizing appropriate techniques and tools to ensure and measure success

  • A comprehensive understanding of risk management and proven experience of ensuring own/others' compliance with relevant regulatory processes

 

Essential Skills :

  • Demonstrable Cloud service provider experience - infrastructure build and configurations of a variety of services including compute, devops, databases, storage & security

  • Demonstrable experience of Linux administration and scripting preferably Red Hat

  • Experience of working with Continuous Integration (CI), Continuous Delivery (CD) and continuous testing tools

  • Experience working within an Agile environment

  • Programming experience in one or more of the following languages: Python, Ruby, Java, JavaScript, Go, Node.js

  • Server administration (either Linux or Windows)

  • Automation scripting (using tools such as Terraform, Ansible, etc.)

  • Ability to quickly acquire new skills and tools

Required Skills :

  • Linux & Windows Server Certification

Banyan Data Services
1 recruiter
Posted by Sathish Kumar
Bengaluru (Bangalore)
4 - 10 yrs
₹6L - ₹20L / yr
DevOps
Jenkins
Puppet
Terraform
Docker
+10 more

DevOps Engineer 

Notice Period: 45 days / Immediate Joining

 

Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on standard DevOps practice and the SRE model.

We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer, addressing next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, software as a service, and cloud services to create a niche in the market.

 

Key Qualifications

· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.

· Experience in implementation of continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc

· Strong experience in Linux/Unix administration.

· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.

· Expertise in multiple coding and scripting languages including Shell, Python, and Perl

· Hands-on exposure to modern IT infrastructure (e.g., Docker Swarm, Mesos, Kubernetes, OpenStack)

· Exposure to relational database technologies (MySQL/Postgres/Oracle) or any NoSQL database

· Worked on open-source tools for logging, monitoring, search engine, caching, etc.

· Professional certification in AWS or any other cloud is preferable

· Excellent problem solving and troubleshooting skills

· Must have good written and verbal communication skills

Key Responsibilities

· Ambitious individuals who can work under their own direction towards agreed targets/goals.

· Must be flexible with office timings to accommodate multi-national client time zones.

· Will be involved in solution design from the conceptual stages through the development cycle and deployments.

· Involved in development operations and supporting internal teams.

· Improve infrastructure uptime, performance, resilience, and reliability through automation.

· Willing to learn new technologies and work on research-oriented projects.

· Proven interpersonal skills, contributing to team effort by accomplishing related results as needed.

· Scope and deliver solutions, with the ability to design solutions independently based on high-level architecture.

· Independent thinking and the ability to work in a fast-paced environment with creativity and brainstorming.

www.banyandata.com

Mosaic Wellness
1 recruiter
Posted by Sneha Mali
Mumbai
4 - 7 yrs
₹18L - ₹25L / yr
DevOps
Jenkins
Docker
Amazon Web Services (AWS)
Nginx
+4 more

Role 

We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead to identify and establish DevOps practices in the company.

You will also help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks. Lastly, you’ll build our CI pipeline, and train and guide the team in DevOps practices.

This would be a hybrid role, and the person would be expected to also do some application-level programming in their downtime.

Responsibilities

  • Deployment, automation, management, and maintenance of production systems.
  • Ensuring availability, performance, security, and scalability of production systems.
  • Evaluation of new technology alternatives and vendor products.
  • System troubleshooting and problem resolution across various application domains and platforms.
  • Providing recommendations for architecture and process improvements.
  • Definition and deployment of systems for metrics, logging, and monitoring on AWS platform.
  • Manage the establishment and configuration of SaaS infrastructure in an agile way by storing infrastructure as code and employing automated configuration management tools with a goal to be able to re-provision environments at any point in time.
  • Be accountable for proper backup and disaster recovery procedures.
  • Drive operational cost reductions through service optimizations and demand based auto scaling.
  • Have on call responsibilities.
  • Perform root cause analysis for production errors
  • Use open-source technologies and tools to accomplish specific use cases encountered within the project.
  • Use coding languages or scripting methodologies to solve problems with custom workflows.
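The infrastructure-as-code responsibility above (being able to re-provision environments at any point in time) rests on declarative, idempotent state. A toy reconciliation loop illustrates the model; the resource names are hypothetical.

```python
# Toy illustration of the declarative model behind storing infrastructure
# as code: diff desired state against actual state and compute only the
# changes needed, so re-running always converges and is idempotent.

def reconcile(desired, actual):
    """Return the create/delete actions that converge actual to desired."""
    return {
        "create": sorted(set(desired) - set(actual)),
        "delete": sorted(set(actual) - set(desired)),
    }

desired = {"vpc-main", "db-primary", "cache"}
actual = {"vpc-main", "db-primary", "legacy-queue"}
print(reconcile(desired, actual))
# Once applied, a second run produces no actions: the plan is empty.
```

This is, in miniature, what `terraform plan`/`apply` or an automated configuration-management run does: the code is the source of truth, and the tool computes the minimal change set.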

Requirements

  • Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
  • Prior experience as a software developer in a couple of high-level programming languages.
  • Extensive experience in any JavaScript-based framework, since we will be deploying services to Node.js on AWS Lambda (serverless)
  • Extensive experience with web servers such as Nginx/Apache
  • Strong Linux system administration background.
  • Ability to present and communicate the architecture in a visual form.
  • Strong knowledge of AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, NAT gateway, DynamoDB)
  • Experience maintaining and deploying highly-available, fault-tolerant systems at scale (~ 1 Lakh users a day)
  • A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)
  • Expertise with Git
  • Experience implementing CI/CD (e.g. Jenkins, TravisCI)
  • Strong experience with databases such as MySQL, NoSQL, Elasticsearch, Redis and/or Mongo.
  • Stellar troubleshooting skills with the ability to spot issues before they become problems.
  • Current with industry trends, IT ops and industry best practices, and able to identify the ones we should implement.
  • Time and project management skills, with the capability to prioritize and multitask as needed.
Consulting and Product Engineering Company

Agency job
via Exploro Solutions by Sapna Prabhudesai
Hyderabad, Bengaluru (Bangalore), Pune, Chennai
8 - 12 yrs
₹7L - ₹30L / yr
DevOps
Terraform
Docker
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+1 more

Job Description:

 

○ Develop best practices for the team; own the architecture, solutions, and documentation of operations in order to meet the engineering department's quality standards.

○ Participate in production outages, handle complex issues, and work towards resolution.

○ Develop custom tools and integrations with existing tools to increase engineering productivity.

 

 

Required Experience and Expertise

 

○ Good knowledge of Terraform; someone who has worked on large TF code bases.

○ Deep understanding of Terraform best practices and writing TF modules.

○ Hands-on experience with GCP and AWS, and knowledge of AWS services such as VPC and VPC-related services (route tables, VPC endpoints, PrivateLink), EKS, S3, and IAM. A cost-aware mindset towards cloud services.

○ Deep understanding of kernel, networking, and OS fundamentals.

NOTICE PERIOD: max 30 days
