Profectus Analytics Pvt. Ltd.

DevOps Engineer

Posted by Ankit Agarwal
5 - 10 yrs
₹12L - ₹20L / yr
Bengaluru (Bangalore)
Skills
DevOps
Jenkins
Continuous Integration
Linux/Unix
MongoDB
Elasticsearch
Kubernetes
Cloud Computing
Requirement:

  • Hands-on experience with the complete design and implementation of CI/CD processes with Git
  • Hands-on, in-depth knowledge of GCP: VPNs, VPCs, storage, tagging, and monitoring/alerting of resources
  • Experience in the development of access control lists
  • Hands-on experience in containerization (Docker, Kubernetes)
  • Experience with automation using any scripting language
  • Hands-on experience configuring and managing data sources like MySQL, MongoDB, Elasticsearch, Redis, Cassandra, Hadoop, etc.
  • Experience monitoring services via tools such as Graylog, Syslog, New Relic, Prometheus, Nagios, etc.

Roles & Responsibilities:

  • Architecting, designing, implementing, and supporting projects using cloud technologies
  • Production support responsibilities for software and infrastructure fixes
  • Day-to-day operational support of continuous integration, release, and source control tooling
  • Working with software, infrastructure, and network engineers and DBAs to solve productivity challenges, drive efficiency, and automate and streamline environment builds
  • Co-owning monitoring and alert configuration to detect, triage, and resolve issues quickly
  • Building tools to enhance production triage and improve time to detect issues

You must be:

  • A team player who likes to work hard and play harder, with excellent interpersonal, organizational, and time-management skills
  • Able to think strategically and analytically to complete assigned work within given timelines
  • Someone with excellent written and oral communication skills and strong attention to detail
  • Able to multi-task across multiple projects and tasks at the same time
  • Highly organized
  • Positive and upbeat, with the ability to learn quickly
  • Able to laugh: at others and, most importantly, at yourself. A sense of humor is a must.

You can expect:

  • A fast-paced, high-growth startup environment where you will gain a career and not just a job
  • The company to invest in your personal and professional development; we support your ongoing education and training by reimbursing you for relevant courses
  • An open office culture with no cabins or cubicles, and a place that is looking for your input to help us grow
  • The support of your teammates to always do better. Own it and win together!
  • Exposure to the international retail market; learn about a high-growth industry and build critical skill sets
  • An excellent employee referral program: refer your friends, work with your friends, and be rewarded for it
  • Working alongside a smart, creative, and energetic team who truly believe in 'working hard and partying harder!'

Educational Requirement:

  • UG: B.Tech/B.E. in Computer Science/IT or equivalent
  • PG: M.Tech in Computer Science/IT, MCA, or equivalent
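The automation and monitoring requirements above usually translate into small scripts in practice. Below is a minimal Python sketch of a threshold-based health check; the service names, metrics, and thresholds are all hypothetical, not part of this role's actual stack.

```python
# Minimal sketch of a threshold-based health check, the kind of scripting
# automation this role calls for. Names and limits are made up for illustration.

def evaluate_health(metrics, thresholds):
    """Return a list of alert strings for metrics that breach thresholds.

    metrics:    {"service": {"cpu": 0.92, "error_rate": 0.01}, ...}
    thresholds: {"cpu": 0.85, "error_rate": 0.05}
    """
    alerts = []
    for service, values in sorted(metrics.items()):
        for name, limit in thresholds.items():
            value = values.get(name)
            if value is not None and value > limit:
                alerts.append(f"{service}: {name}={value} exceeds {limit}")
    return alerts

metrics = {
    "api": {"cpu": 0.92, "error_rate": 0.01},
    "worker": {"cpu": 0.40, "error_rate": 0.09},
}
thresholds = {"cpu": 0.85, "error_rate": 0.05}
print(evaluate_health(metrics, thresholds))
```

In a real setup the metrics dict would be fed by a collector (Prometheus, New Relic, etc.) and the alerts routed to a pager rather than printed.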
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Shubham Vishwakarma

Full Stack Developer - Averlon
I had an amazing experience. It was a delight getting interviewed via Cutshort. The entire end to end process was amazing. I would like to mention Reshika, she was just amazing wrt guiding me through the process. Thank you team.

About Profectus Analytics Pvt. Ltd.

Founded :
2017
Type :
Product
Size :
20-100
Stage :
Raised funding

About

Profectus Solutions is a retail pricing product company. It provides business-focused, technology-enabled solutions that help retailers solve challenging, high-value problems in the pricing domain, improve business results, and exceed their financial goals.

 

Built for retailers by retail experts, our solutions are designed not only to understand and appreciate the business challenges and nuances of the retail world, but also to instill a collaborative, workflow-driven approach to pricing, enabled by cutting-edge techniques in big data optimization, Machine Learning, and Artificial Intelligence.


Connect with the team

Ankit Agarwal
Neha Phull
Urmita Das

Company social profiles

N/A

Similar jobs

AdTech Industry
Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1. Cloud Security (AWS)

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.
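A least-privilege review like the one described above can be partially automated. The following is an illustrative Python sketch that flags wildcard actions or resources in the Allow statements of an AWS-style policy document; the policy shape follows the standard policy JSON, but the example policy itself and the lint logic are made up, not a real tool's behavior.

```python
# Illustrative least-privilege lint pass over an IAM-style policy document,
# flagging Allow statements that grant "*" actions or resources.

def find_wildcards(policy):
    """Return (sid, field) pairs for Allow statements using wildcards."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        sid = stmt.get("Sid", "<no-sid>")
        for field in ("Action", "Resource"):
            values = stmt.get(field, [])
            if isinstance(values, str):
                values = [values]
            if any(v == "*" or v.endswith(":*") for v in values):
                findings.append((sid, field))
    return findings

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "ReadLogs", "Effect": "Allow",
         "Action": ["logs:GetLogEvents"], "Resource": "*"},
        {"Sid": "Admin", "Effect": "Allow", "Action": "s3:*",
         "Resource": "arn:aws:s3:::my-bucket/*"},
    ],
}
print(find_wildcards(policy))
```

A production check would use IAM Access Analyzer or a policy-as-code engine; this only shows the shape of the review.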

 

2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.
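Secrets scanning in a pipeline ultimately boils down to pattern matching over commits. A toy Python sketch follows; real scanners ship far larger rule sets, and the two patterns here (the well-known AWS access key ID format and a generic password assignment) are only examples.

```python
import re

# Toy secrets-scanning pass of the kind a CI secret-scanning stage runs on
# each commit. Two illustrative rules; real tools maintain hundreds.

PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded-password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]",
                                     re.IGNORECASE),
}

def scan(text):
    """Return a sorted list of rule names that matched the given text."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

diff = '''
+ aws_key = "AKIAIOSFODNN7EXAMPLE"
+ PASSWORD = "hunter2"
'''
print(scan(diff))
```

Wired into a pre-receive hook or pipeline stage, a non-empty result would fail the build before the secret lands in history.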

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.
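Policy engines like OPA/Gatekeeper enforce hardening rules of this kind at admission time. Below is a simplified Python stand-in that checks a container spec against a few securityContext expectations; the field names mirror the Kubernetes securityContext schema, but the required settings shown are common hardening defaults, not an official baseline.

```python
# Simplified stand-in for an admission check enforcing pod hardening rules:
# run as non-root, read-only root filesystem, no privilege escalation.

REQUIRED = {
    "runAsNonRoot": True,
    "readOnlyRootFilesystem": True,
    "allowPrivilegeEscalation": False,
}

def violations(container):
    """Return the sorted list of hardening settings the spec fails to meet."""
    sc = container.get("securityContext", {})
    return sorted(
        key for key, expected in REQUIRED.items() if sc.get(key) != expected
    )

container = {
    "name": "app",
    "image": "registry.internal/app:1.4.2",
    "securityContext": {"runAsNonRoot": True},
}
print(violations(container))
```

A real deployment would express the same rules as Gatekeeper constraints or Kyverno policies rather than ad-hoc code.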

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
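One common way to implement the data-drift check mentioned above is the Population Stability Index (PSI): compare a feature's binned production distribution against a training-time baseline. The sketch below uses illustrative bin proportions, and the 0.2 alert threshold is a widely used convention rather than a fixed rule.

```python
import math

# Population Stability Index: sum of (actual - expected) * ln(actual/expected)
# over bins. Larger values mean the production distribution has drifted
# further from the training baseline.

def psi(expected, actual, eps=1e-6):
    """expected/actual: lists of per-bin proportions that each sum to 1."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)  # clamp to avoid log(0) on empty bins
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # training distribution per bin
drifted  = [0.10, 0.20, 0.30, 0.40]   # observed production distribution

score = psi(baseline, drifted)
print(round(score, 3), "drift" if score > 0.2 else "stable")
```

Emitting the score as a Prometheus gauge per feature makes the Grafana/CloudWatch integration mentioned above a plain threshold alert.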


4. Network & Endpoint Security

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5. Threat Detection, Incident Response & Compliance

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.
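Automated threat detection at its simplest is aggregation over centralized logs. Here is a minimal Python sketch that flags source IPs with repeated failed logins; the event format and the threshold of 5 are hypothetical, standing in for a SIEM correlation rule.

```python
from collections import Counter

# Minimal log-driven alert: count failed logins per source IP over a window
# and flag anything at or above the threshold. Stand-in for a SIEM rule.

def brute_force_suspects(events, threshold=5):
    """events: iterable of (ip, outcome) tuples; returns IPs to alert on."""
    failures = Counter(ip for ip, outcome in events if outcome == "FAIL")
    return sorted(ip for ip, n in failures.items() if n >= threshold)

events = (
    [("10.0.0.7", "FAIL")] * 6
    + [("10.0.0.8", "FAIL")] * 2
    + [("10.0.0.7", "OK")]
)
print(brute_force_suspects(events))
```

The same counting logic, expressed as an OpenSearch/ELK query or SIEM correlation rule, is what drives the automated alerts described above.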


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Alyke
Riya Salgotra
Posted by Riya Salgotra
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 6 yrs
₹3L - ₹14L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+4 more

Role Overview:

As a DevOps Engineer (L2), you will play a key role in designing, implementing, and optimizing infrastructure. You will take ownership of automating processes, improving system reliability, and supporting the development lifecycle.


Key Responsibilities:

  • Design and manage scalable, secure, and highly available cloud infrastructure.
  • Lead efforts in implementing and optimizing CI/CD pipelines.
  • Automate repetitive tasks and develop robust monitoring solutions.
  • Ensure the security and compliance of systems, including IAM, VPCs, and network configurations.
  • Troubleshoot complex issues across development, staging, and production environments.
  • Mentor and guide L1 engineers on best practices.
  • Stay updated on emerging DevOps tools and technologies.
  • Manage cloud resources efficiently using Infrastructure as Code (IaC) tools like Terraform and AWS CloudFormation.
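A recurring automation idiom behind responsibilities like these is retrying transient cloud-API failures with exponential backoff instead of failing a whole pipeline on the first error. A small Python sketch, with delays shortened for demonstration:

```python
import time

# Retry a flaky call with exponentially growing delays. The flaky() function
# below is a stand-in for a transient cloud-API failure.

def with_retries(fn, attempts=4, base_delay=0.01):
    """Call fn(); on exception, retry with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky), "after", calls["n"], "calls")
```

Libraries like tenacity or boto3's built-in retry config do the same thing with jitter and per-error-code policies; this only shows the core loop.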


Qualifications:

  • Bachelor’s degree in Computer Science, IT, or a related field.
  • Proven experience with CI/CD pipelines and tools like Jenkins, GitLab, or Azure DevOps.
  • Advanced knowledge of cloud platforms (AWS, Azure, or GCP) with hands-on experience in deployments, migrations, and optimizations.
  • Strong expertise in containerization (Docker) and orchestration tools (Kubernetes).
  • Proficiency in scripting languages like Python, Bash, or PowerShell.
  • Deep understanding of system security, networking, and load balancing.
  • Strong analytical skills and problem-solving mindset.
  • Certifications (e.g., AWS Certified Solutions Architect, Kubernetes Administrator) are a plus.


What We Offer:

  • Opportunity to work with a cutting-edge tech stack in a product-first company.
  • Collaborative and growth-oriented environment.
  • Competitive salary and benefits.
  • Freedom to innovate and contribute to impactful projects.
Molecular Connections
at Molecular Connections
4 recruiters
Molecular Connections
Posted by Molecular Connections
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹10L / yr
DevOps
Kubernetes
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
+3 more

About the Role:

We are seeking a talented and passionate DevOps Engineer to join our dynamic team. You will be responsible for designing, implementing, and managing scalable and secure infrastructure across multiple cloud platforms. The ideal candidate will have a deep understanding of DevOps best practices and a proven track record in automating and optimizing complex workflows.


Key Responsibilities:


Cloud Management:

  • Design, implement, and manage cloud infrastructure on AWS, Azure, and GCP.
  • Ensure high availability, scalability, and security of cloud resources.

Containerization & Orchestration:

  • Develop and manage containerized applications using Docker.
  • Deploy, scale, and manage Kubernetes clusters.

CI/CD Pipelines:

  • Build and maintain robust CI/CD pipelines to automate the software delivery process.
  • Implement monitoring and alerting to ensure pipeline efficiency.

Version Control & Collaboration:

  • Manage code repositories and workflows using Git.
  • Collaborate with development teams to optimize branching strategies and code reviews.

Automation & Scripting:

  • Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or similar.
  • Write scripts to optimize and maintain workflows.

Monitoring & Logging:

  • Implement and maintain monitoring solutions to ensure system health and performance.
  • Analyze logs and metrics to troubleshoot and resolve issues.
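The log-analysis half of this work often reduces to extracting a metric from log lines and computing a percentile to alert on. Below is a Python sketch computing p95 latency from a made-up access-log format; note that percentile definitions vary slightly between monitoring tools.

```python
import re

# Pull request latencies out of access-log lines and compute a p95,
# the kind of number a dashboard would alert on. Log format is invented.

LINE = re.compile(r"status=(\d{3}) latency_ms=(\d+)")

def p95_latency(lines):
    """Return the 95th-percentile latency in ms, or None if nothing matched."""
    latencies = sorted(
        int(m.group(2)) for line in lines if (m := LINE.search(line))
    )
    if not latencies:
        return None
    index = max(0, int(len(latencies) * 0.95) - 1)
    return latencies[index]

logs = [f"GET /api status=200 latency_ms={ms}" for ms in range(10, 210, 10)]
print(p95_latency(logs))
```

In practice this computation lives in the monitoring backend (Prometheus histograms, CloudWatch percentile statistics) rather than a script, but the idea is the same.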


Required Skills & Qualifications:

  • 3-5 years of experience with AWS, Azure, and Google Cloud Platform (GCP).
  • Proficiency in containerization tools like Docker and orchestration tools like Kubernetes.
  • Hands-on experience building and managing CI/CD pipelines.
  • Proficient in using Git for version control.
  • Experience with scripting languages such as Bash, Python, or PowerShell.
  • Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
  • Solid understanding of networking, security, and system administration.
  • Excellent problem-solving and troubleshooting skills.
  • Strong communication and teamwork skills.


Preferred Qualifications:

  • Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, or Google Professional DevOps Engineer.
  • Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
  • Familiarity with serverless architectures and microservices.


Novo
at Novo
2 recruiters
Viraj Bhavsar
Posted by Viraj Bhavsar
Ahmedabad
4 - 8 yrs
₹1L - ₹25L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+2 more

What you’ll be doing at Novo:

● Systems thinking
● Creating best practices, templates, and automation for build, test, integration, and deployment pipelines on multiple projects
● Designing and developing tools for easily creating and managing dev/test infrastructure and services in the AWS cloud
● Providing expertise and guidance on CI/CD, GitHub, and other development tools via containerization
● Monitoring and supporting systems in Dev, UAT, and production environments
● Building mock services and production-like data sources for use in development and testing
● Managing GitHub integrations, feature flag systems, code coverage tools, and other development and monitoring tools
● Participating in support rotations to help troubleshoot infrastructure issues

Stacks you eat every day (for DevOps Engineer)

● Creating and working with containers, as well as using container orchestration tools (Kubernetes/Docker)
● AWS: S3, EKS, EC2, RDS, Route53, VPC, etc.
● Fair understanding of Linux
● Good knowledge of CI/CD: Jenkins/CircleCI/GitHub Actions
● Basic level of monitoring
● Support for deployment along with various web servers and Linux environments, both backend and frontend

Classplus
at Classplus
1 video
4 recruiters
Peoples Office
Posted by Peoples Office
Noida
5 - 8 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+11 more
About us

Classplus is India's largest B2B ed-tech start-up, enabling 1 Lac+ educators and content creators to create their digital identity with their own branded apps. Starting in 2018, we have grown more than 10x in the last year, into India's fastest-growing video learning platform.
Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured Series-D funding.

Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!

What will you do?

Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective

Create standardized tooling and templates for development teams to create CI/CD pipelines

Ensure infrastructure is created and maintained using terraform

Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.

Maintain transparency and clear visibility of costs associated with various product verticals, environments and work with stakeholders to plan for optimization and implementation

Spearhead continuous experimenting and innovating initiatives to optimize the infrastructure in terms of uptime, availability, latency and costs

You should apply, if you

1. Are a seasoned veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript (Node.js), Go, Python, Java, Erlang, Elixir, C++, or Ruby (experience in any one of them is enough)

2. Are a Mr. Perfectionist: You have a strong bias for automation and taking the time to think about the right way to solve a problem versus quick fixes or band-aids.

3. Bring your A-Game: Have hands-on experience and ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, Logging and experience in setting up backups, patching and DR planning

4. Are up with the times: Have expertise in one or more cloud platforms (Amazon WebServices or Google Cloud Platform or Microsoft Azure), and have experience in creating and managing infrastructure completely through Terraform kind of tool

5. Have it all on your fingertips: Have experience building CI/CD pipeline using Jenkins, Docker for applications majorly running on Kubernetes. Hands-on experience in managing and troubleshooting applications running on K8s

6. Have nailed the data storage game: Good knowledge of Relational and NoSQL databases (MySQL,Mongo, BigQuery, Cassandra…)

7. Bring that extra zing: Have the ability to program/script and strong fundamentals in Linux and networking.

8. Know your toys: Have a good understanding of microservices architecture and Big Data technologies; experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self-hosted environments is a plus

Being Part of the Clan

At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!

It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️

Are you a go-getter with the chops to nail what you do? Then this is the place for you.
Kutumb
at Kutumb
3 recruiters
Dimpy Mehra
Posted by Dimpy Mehra
Bengaluru (Bangalore)
2 - 4 yrs
₹15L - ₹30L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Kutumb is the first and largest communities platform for Bharat, growing along an exponential trajectory: more than 1 Crore users use Kutumb to connect with their community. We are backed by world-class VCs and angel investors, and we are looking for exceptional Infrastructure Engineers to join our Engineering team.

 

More on this here - https://kutumbapp.com/why-join-us.html

 

We’re excited if you have:

  • Recent experience designing and building unified observability platforms that enable companies to use the sometimes-overwhelming amount of available data (metrics, logs, and traces) to determine quickly if their application or service is operating as desired
  • Expertise in deploying and using open-source observability tools in large-scale environments, including Prometheus, Grafana, ELK (ElasticSearch + Logstash + Kibana), Jaeger, Kiali, and/or Loki
  • Familiarity with open standards like OpenTelemetry, OpenTracing, and OpenMetrics
  • Familiarity with Kubernetes and Istio as the architecture on which the observability platform runs, and how they integrate and scale. Additionally, the ability to contribute improvements back to the joint platform for the benefit of all teams
  • Demonstrated customer engagement and collaboration skills to curate custom dashboards and views, and identify and deploy new tools, to meet their requirements
  • The drive and self-motivation to understand the intricate details of a complex infrastructure environment 
  • Using CI/CD tools to automatically perform canary analysis and roll out changes after passing automated gates (think Argo & Keptn)
  • Hands-on experience working with AWS 
  • Bonus points for knowledge of ETL pipelines and Big data architecture
  • Great problem-solving skills & takes pride in your work
  • Enjoys building scalable and resilient systems, with a focus on systems that are robust by design and suitably monitored
  • Abstracting all of the above into as simple of an interface as possible (like Knative) so developers don't need to know about it unless they choose to open the escape hatch
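A canary-analysis gate of the kind mentioned above (think Argo Rollouts or Keptn quality gates) ultimately compares canary metrics against the baseline and decides whether to promote. A simplified Python sketch follows; the 1.2x error-rate tolerance is an illustrative setting, not a recommended value, and real gates usually add statistical significance checks.

```python
# Simplified canary gate: promote only if the canary's error rate is not
# meaningfully worse than the baseline's. Tolerance is illustrative.

def canary_gate(baseline_errors, baseline_total, canary_errors, canary_total,
                tolerance=1.2):
    """Return (promote, baseline_rate, canary_rate)."""
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    promote = canary_rate <= baseline_rate * tolerance
    return promote, round(baseline_rate, 4), round(canary_rate, 4)

# Baseline: 40 errors in 10,000 requests; canary: 9 errors in 1,000 requests.
print(canary_gate(40, 10_000, 9, 1_000))
```

In an automated rollout, a failing gate triggers rollback and a passing one shifts more traffic to the canary.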

 

What you’ll be doing:

  • Design and build automation around the chosen tools to make onboarding new services easy for developers (dashboards, alerts, traces, etc)
  • Demonstrate great communication skills in working with technical and non-technical audiences
  • Contribute new open-source tools and/or improvements to existing open-source tools back to the CNCF ecosystem

 

Tools we use:

Kops, Argo, Prometheus/ Loki/ Grafana, Kubernetes, AWS, MySQL/ PostgreSQL, Apache Druid, Cassandra, Fluentd, Redis, OpenVPN, MongoDB, ELK

 

What we offer:

  • High pace of learning
  • Opportunity to build the product from scratch
  • High autonomy and ownership
  • A great and ambitious team to work with
  • Opportunity to work on something that really matters
  • Top of the class market salary and meaningful ESOP ownership
A funded fintech startup based out of Bangalore, India
Agency job
via Qrata by Revathi Satish
Bengaluru (Bangalore)
5 - 11 yrs
₹15L - ₹30L / yr
Hadoop
Big Data
Ansible
DevOps

Hiring for a funded fintech startup based out of Bangalore!

Our Ideal Candidate

We are looking for a Senior DevOps engineer to join the engineering team and help us automate the build, release, packaging and infrastructure provisioning and support processes. The candidate is expected to own the full life-cycle of provisioning, configuration management, monitoring, maintenance and support for cloud as well as on-premise deployments.

Requirements

  • 5+ years of DevOps experience managing the Big Data application stack, including HDFS, YARN, Spark, Hive, and HBase
  • Deeper understanding of all the configurations required for installing and maintaining the infrastructure in the long run
  • Experience setting up high availability, configuring resource allocation, setting up capacity schedulers, handling data recovery tasks
  • Experience with middle-layer technologies including web servers (httpd, nginx), application servers (JBoss, Tomcat), and database systems (PostgreSQL, MySQL)
  • Experience setting up enterprise security solutions including setting up active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
  • Experience maintaining and hardening the infrastructure by regularly applying required security packages and patches
  • Experience supporting on-premise solutions as well as on AWS cloud
  • Experience working with and supporting Spark-based applications on YARN
  • Experience with one or more automation tools such as Ansible, Terraform, etc.
  • Experience working with CI/CD tools like Jenkins and various test report and coverage plugins
  • Experience defining and automating the build, versioning and release processes for complex enterprise products
  • Experience supporting clients remotely and on-site
  • Experience working with and supporting Java- and Python-based tech stacks would be a plus

Desired Non-technical Requirements

  • Very strong communication skills both written and verbal
  • Strong desire to work with start-ups
  • Must be a team player

Job Perks

  • Attractive variable compensation package
  • Flexible working hours – everything is results-oriented
  • Opportunity to work with an award-winning organization in the hottest space in tech – artificial intelligence and advanced machine learning
Agency job
via Anetcorp Ind Pvt Ltd by Jyoti Yadav
Mohali
4 - 15 yrs
₹10L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+5 more

       

Sr. DevOps Engineer - L3 Support

Hybrid: 2-3 days from office in Mohali
Shift: PST
Experience: 6+ years
 
 

Hands on experience in:

  • Deploying, managing, securing, and patching enterprise applications at large scale in the cloud, preferably AWS
  • Leading end-to-end DevOps projects with modern tools, encompassing both applications and infrastructure
  • AWS CodeDeploy, CodeBuild, Jenkins, SonarQube
  • Incident management and root cause analysis
  • Strong understanding of immutable infrastructure and infrastructure-as-code concepts; participating in capacity planning and provisioning of new resources; importing already-deployed infrastructure into IaC
  • Utilizing AWS cloud services such as EC2, S3, IAM, Route53, RDS, VPC, NAT/Internet Gateway, Lambda, Load Balancers, CloudWatch, and API Gateway
  • AWS ECS: managing multi-cluster container environments (ECS with EC2, and Fargate with service discovery using Route53)
  • Monitoring/analytics tools like Nagios/Datadog and logging tools like Logstash/Sumo Logic
  • Simple Notification Service (SNS)
  • Version control systems: Git, GitLab, Bitbucket
  • Participating in security audits of cloud infrastructure
  • Exceptional documentation and communication skills
  • Readiness to work in shifts
  • Knowledge of Akamai is a plus
  • Microsoft Azure is a plus
  • Adobe AEM is a plus
  • AWS Certified DevOps Professional is a plus
SaaS-Based Tech Company
Agency job
via Merito by Gaurav Bhosle
Mumbai
4 - 10 yrs
₹12L - ₹15L / yr
DevOps
CI/CD
Amazon Web Services (AWS)
Jenkins
Gradle
Role Description
We are looking for a DevOps Engineer responsible for managing cloud technologies, deployment automation, and CI/CD.

Key Responsibilities
  • Building and setting up new development tools and infrastructure
  • Understanding the needs of stakeholders and conveying this to developers
  • Working on ways to automate and improve development and release
    processes
  • Testing and examining code written by others and analyzing results
  • Ensuring that systems are safe and secure against cybersecurity
    threats
  • Identifying technical problems and developing software updates and ‘fixes’
  • Working with software developers and software engineers to ensure that development follows established processes and works as intended
  • Planning out projects and being involved in project management decisions

Required Skills and Qualifications
  • BE / MCA / B.Sc-IT / B.Tech in Computer Science or a related field.
  • 4+ years of overall development experience.
  • Strong understanding of cloud deployment and setup
  • Hands-on experience with tools like Jenkins, Gradle etc.
  • Deploy updates and fixes
  • Provide Level 2 technical support
  • Build tools to reduce occurrences of errors and improve customer experience
  • Perform root cause analysis for production errors
  • Investigate and resolve technical issues
  • Develop scripts to automate deployment
  • Design procedures for system troubleshooting and maintenance
  • Proficient with git and git workflows
  • Working knowledge of databases and SQL
  • Problem-solving attitude
  • Collaborative team spirit
Aviso Inc
at Aviso Inc
1 video
11 recruiters
Chaitanya Penugonda
Posted by Chaitanya Penugonda
Bengaluru (Bangalore), Hyderabad
5 - 12 yrs
Best in industry
Docker
Amazon Web Services (AWS)
DevOps
MongoDB
AWS Lambda
+2 more
What you will be doing:

● Responsible for the design, development, and implementation of cloud solutions
● Responsible for achieving automation and orchestration of tools (Puppet/Chef)
● Monitoring the product's security and health (Datadog/New Relic)
● Managing and maintaining databases (MongoDB and PostgreSQL)
● Automating infrastructure using AWS services like CloudFormation
● Participating in infrastructure security audits
● Migrating to container technologies (Docker/Kubernetes)
● Should be able to work with serverless concepts (AWS Lambda)
● Should be able to work with AWS services like EC2, S3, CloudFormation, EKS, IAM, RDS, etc.
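The serverless item above typically means writing Lambda-style handlers. Below is a hedged Python sketch of a handler that validates a JSON body and returns an API Gateway proxy-shaped response; the event fields follow the API Gateway proxy integration format, but the business logic is a placeholder.

```python
import json

# Lambda-style handler: parse and validate a JSON body from an API Gateway
# proxy event, return a proxy-shaped response. Business logic is a placeholder.

def handler(event, context=None):
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400,
                "body": json.dumps({"error": "invalid JSON"})}
    name = body.get("name")
    if not name:
        return {"statusCode": 400,
                "body": json.dumps({"error": "name required"})}
    return {"statusCode": 200,
            "body": json.dumps({"greeting": f"Hello, {name}"})}

print(handler({"body": json.dumps({"name": "Aviso"})}))
```

Deployed behind API Gateway, the same function signature receives the real proxy event and an actual Lambda context object.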

What you bring:

● Problem-solving skills that enable you to identify the best solutions
● Team collaboration and flexibility at work
● Strong verbal and written communication skills that will help in presenting complex ideas in an accessible and engaging way
● Ability to choose the tools and technologies that best fit the business needs

Aviso offers:

● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval and a 3 month paid sabbatical after 3 years of service
● CEO moonshots projects with cash awards every quarter
● Upskilling and learning support including via paid conferences, online courses, and certifications
● Every month, Rupees 2,500 will be credited to your Sodexo meal card