Peoplegrove

DevOps Engineer
Posted by Madhumita Banerjee
3 - 6 yrs
₹10L - ₹15L / yr
Remote only
Skills
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
This posting is for an India-based engineer. We are not looking for part-time contractors, so please only apply if you're looking for a full-time position.
 
What does a DevOps Engineer do at PeopleGrove?
 
PeopleGrove has built a dynamic cloud-based software platform for universities, companies, and nonprofits to power mentoring and networking initiatives. As a DevOps engineer, you'll become an integral member of a small team of engineers working on our Kubernetes cluster. Some of the practices we employ are infrastructure automation, continuous integration/deployment, configuration management, microservices, serverless computing, monitoring, and database management.

Projects you'll be working on:

    • We're focused on enhancing our product for our clients and their users, as well as streamlining operations and improving our technical foundation.
    • Writing scripts for procurement, configuration, and deployment of instances (infrastructure automation) on GCP
    • Managing the Kubernetes cluster
    • Managing products and services like VPC, Elasticsearch, Cloud Functions, RabbitMQ, Redis servers, Postgres infrastructure, App Engine, etc.
    • Supporting developers in setting up infrastructure for services
    • Managing and improving the microservices infrastructure
    • Managing high-availability, low-latency applications
    • Focusing on security best practices and assisting in security and compliance activities
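As a rough illustration of the Kubernetes-management work above (a generic sketch only, not PeopleGrove's actual configuration; all names, images, and values are hypothetical), a minimal Deployment manifest for one service might look like this:

```yaml
# Hypothetical example - service name, image, and resource values are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
  labels:
    app: example-service
spec:
  replicas: 3                  # multiple pods for high availability
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: app
          image: gcr.io/example-project/example-service:1.0.0
          ports:
            - containerPort: 8080
          resources:           # requests/limits keep the cluster schedulable
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          readinessProbe:      # only route traffic to pods that report healthy
            httpGet:
              path: /healthz
              port: 8080
```

Applied with `kubectl apply -f deployment.yaml`, a manifest like this is the basic unit that the infrastructure-automation and high-availability bullets above revolve around.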

Requirements

    • Minimum 3 years' experience in a DevOps role
    • Minimum 1 year's experience with Kubernetes clusters (infrastructure as code, maintenance, and scalability)
    • Bash expertise, plus professional programming experience in Node or Python
    • Experience setting up, configuring, and using Jenkins or other CI tools, and building CI/CD pipelines
    • Experience setting up microservices architecture
    • Experience with package management and deployments
    • Thorough understanding of networking
    • Understanding of all common services and protocols
    • Experience in web server configuration, monitoring, network design, and high availability
    • Thorough understanding of DNS, VPN, and SSL

Technologies you'll work with:

    • GKE, Prometheus, Grafana, Stackdriver
    • ArgoCD and GitHub Actions
    • NodeJS Backend
    • Postgres, ElasticSearch, Redis, RabbitMQ
    • Whatever else you decide - we're constantly re-evaluating our stack and tools
    • Having prior experience with the technologies is a plus, but not mandatory for skilled candidates.
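To illustrate how the GitHub Actions and ArgoCD pieces above typically fit together in a GitOps flow (a generic sketch, not the team's actual pipeline; repository and image names are made up), CI builds and pushes an image, and ArgoCD syncs the updated manifests to GKE:

```yaml
# .github/workflows/build.yml - hypothetical sketch of a build-and-push workflow
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t gcr.io/example-project/api:${{ github.sha }} .
      - name: Push image
        run: docker push gcr.io/example-project/api:${{ github.sha }}
      # A follow-up step (or separate job) would bump the image tag in a
      # GitOps config repo; ArgoCD watches that repo and syncs the change
      # to the GKE cluster, so deployment state always matches Git.
```

The design point of this split is that CI only produces artifacts, while ArgoCD owns cluster state, which keeps deployments auditable and easy to roll back.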

Benefits

      • Remote option - you can work from a location of your choice :)
      • Reimbursement of Home Office Setup
      • Competitive Salary
      • Friendly atmosphere
      • Flexible paid vacation policy
 
At PeopleGrove, we don’t just accept difference — we celebrate it, we support it, and we thrive on it for the benefit of our employees, our products, and our community. PeopleGrove is proud to be an equal opportunity workplace.
 

About Peoplegrove

Founded: 2015
Type: Product
Size: 20-100
Stage: Raised funding

About

PeopleGrove is on an audacious mission: to give every student the connections, advisers and mentors needed to achieve their career, academic and personal goals. You'll be joining a team fully committed to realizing this mission.

We've come a long way already, attracting partnerships with the world's best universities including Stanford, U Michigan, USC and more.

We're looking for go-getters, doers and people passionate about making a major impact on Day 1.


Connect with the team

Nitin Hayaran
Shivani Chaurasia
Madhumita Banerjee

Company social profiles

Blog · Twitter · Facebook

Similar jobs

AdTech Industry
Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1. Cloud Security (AWS)

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.

 

2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.
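As one concrete example of the Kubernetes network-policy hardening listed above (a generic sketch; the namespace name is hypothetical), a default-deny ingress policy is a common baseline:

```yaml
# Hypothetical default-deny ingress policy for one namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ml-workloads     # illustrative namespace
spec:
  podSelector: {}             # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                 # no ingress rules listed => all ingress is denied
```

Workload-specific allow rules are then layered on top, so only explicitly permitted traffic reaches each pod.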

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.


4. Network & Endpoint Security

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5. Threat Detection, Incident Response & Compliance

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Top 3 Fintech Startup
Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
3 - 4 yrs
₹4L - ₹12L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+4 more
Section 1
- Responsible for building, managing, and maintaining deployment pipelines and developing self-service tooling for managing Git, Linux, Kubernetes, Docker, CI/CD & pipelining, etc., in cloud infrastructure
- Responsible for building and managing the DevOps agile toolchain
- Responsible for working as an integrator between developer teams and various cloud infrastructures.

Section 2

- Responsibilities include helping the development team with best practices, provisioning monitoring, troubleshooting, optimizing and tuning, automating and improving deployment and release processes.

Section 3

- Responsible for maintaining application security with periodic tracking and upgrading of package dependencies in coordination with the respective developer teams.
- Responsible for packaging and containerization of deploy units and strategizing it in coordination with the developer team
Section 4
- Setting up tools and required infrastructure. Defining and setting development, test, release, update, and support processes for DevOps operation
- Responsible for documentation of the process.
- Responsible for leading projects with end-to-end execution

Qualification: Bachelor of Engineering / MCA, preferably with AWS Cloud certification

Ideal Candidate -
- has 2-4 years of DevOps experience, with AWS certification.
- is less than 30 years of age, self-motivated and enthusiastic.
- is interested in building a sustainable DevOps platform with maximum automation.
- is interested in learning and being challenged on a day-to-day basis.
- can take ownership of tasks and is willing to take the necessary action to get them done.
- can solve complex problems.
- is honest about the quality of their work and is comfortable taking ownership of both their successes and failures.
India’s leading peer-to-peer commerce platform.
Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
4 - 8 yrs
₹45L - ₹60L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

What you will do


We are looking for an exceptional engineering lead to join our team. You will be responsible for building and owning the systems that would have critical impact for the business and the experience of our community from day one. 

  • Build and lead an agile engineering team
  • Work closely with Founder on product development
  • Collaborate with operations team to understand customer pain points and solve interesting problems
  • Code, test, ship - manage the entire application cycle
  • Build libraries and documentation for future references
  • Research and develop best practices and tools to enable delivery of features
  • Set up capabilities to track and report business and user metrics
  • Design and improve architecture to ensure scalability

Requirements

  • Proven experience at scaling tech companies, preferably in commerce or social networks
  • Keen to innovate, open-minded, and collaborative
  • Able to interpret product needs and suggest appropriate solutions
  • Has led a team, and is also able to code hands-on
  • Strong communication skills
  • Strong work ethic: responsible, responsive, and detail-oriented

Technologies we use
Go, Flutter, AWS, Google Cloud

Bito Inc
Posted by Amrit Dash
Remote only
5 - 8 yrs
Best in industry
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Windows Azure
Ansible
Chef
+7 more

Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.

 

Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!

 

We are building this company with a fully remote approach, with our main teams for time zone management in the US and in India. The founders happen to be in Silicon Valley and India.

 

We are hiring a DevOps Engineer to join our team.

 

Responsibilities:

  • Collaborate with the development team to design, develop, and implement Java-based applications
  • Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
  • Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
  • Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
  • Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
  • Evaluate and define/modify configuration management strategies and processes using Ansible
  • Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
  • Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort

Requirements:

  • Minimum 4+ years of relevant work experience in a DevOps role
  • At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
  • Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
  • Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
  • Mastery in configuration automation tool sets such as Ansible, Chef, etc
  • Proficiency with Jira, Confluence, and Git toolset
  • Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
  • Proven ability to manage and prioritize multiple diverse projects simultaneously

What do we offer: 

At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology. 

  • Work from anywhere
  • Flexible work timings
  • Competitive compensation, including stock options
  • A chance to work in the exciting generative AI space
  • Quarterly team offsite events

EnterpriseMinds
Posted by Rani Galipalli
Remote only
4 - 8 yrs
₹6L - ₹25L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+2 more
  • Understanding of maintenance of existing systems (virtual machines) and the Linux stack
  • Experience running, operating, and maintaining Kubernetes pods
  • Strong scripting skills
  • Experience in AWS
  • Knowledge of configuring/optimizing open-source tools like Kafka, etc.
  • Strong automation maintenance - ability to identify opportunities to speed up the build and deploy process with strong validation and automation
  • Optimizing and standardizing monitoring and alerting
  • Experience in Google Cloud Platform
  • Experience/knowledge of Python will be an added advantage
  • Experience with tools like Jenkins, Kubernetes, Terraform, etc.
SuperProcure
Posted by Jyothsna Samanth
Remote only
3 - 9 yrs
₹9L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+2 more

Job Summary: We are looking for a senior DevOps engineer to help us build functional systems that improve customer experience. They will be responsible for deploying product updates, identifying production issues and implementing integrations that meet our customers' needs. 


Key Responsibilities


  • Utilise various open-source technologies to build independent web-based tools, microservices, and solutions
  • Write deployment scripts 
  • Configure and manage data sources like MySQL, Mongo, ElasticSearch, etc
  • Configure and deploy pipelines for various microservices using CI/CD tools
  • Automated server monitoring setup & HA adherence
  • Defining and setting development, test, release, update, and support processes for DevOps operation
  • Coordination and communication within the team and with customers where integrations are required
  • Work with company personnel to define technical problems and requirements, determine solutions, and implement those solutions.
  • Work with product team  to design automated pipelines to support SaaS delivery and operations in cloud platforms.
  • Review and act on service requests, infrastructure requests, and incidents logged by our implementation teams and clients; identify, analyse, and resolve infrastructure vulnerabilities and application deployment issues
  • Modifying and improving existing systems. Suggest process improvements and implement them.
  • Collaborate  with Software Engineers to help them deploy and operate different systems, also help to automate and streamline company's operations and processes.
  • Developing interface simulators and designing automated module deployments.

 

Key Skills

  • Bachelor's degree in software engineering, computer science, information technology, or information systems
  • 3+ years of experience in managing Linux-based cloud microservices infrastructure (AWS, GCP, or Azure)
  • Hands-on experience with databases, including MySQL
  • Experience with OS tuning and optimization for running databases and other scalable microservice solutions
  • Proficient working with git repositories and git workflows
  • Able to setup and manage CI/CD pipelines
  • Excellent troubleshooting, working knowledge of various tools, open-source technologies, and cloud services
  • Awareness of critical concepts in DevOps and Agile principles
  • Sense of ownership and pride in your performance and its impact on company’s success
  • Critical thinker and problem-solving skills
  • Extensive experience in DevOps engineering, team management, and collaboration.
  • Ability to install and configure software, gather test-stage data, and perform debugging.
  • Ability to ensure smooth software deployment by writing script updates and running diagnostics.
  • Proficiency in documenting processes and monitoring various metrics.
  • Advanced knowledge of best practices related to data encryption and cybersecurity.
  • Ability to keep up with software development trends and innovation.
  • Exceptional interpersonal and communication skills

Experience:

  • Must have 4+ years of experience as a DevOps Engineer in a SaaS product-based company


About SuperProcure


SuperProcure is a leading logistics and supply chain management solutions provider that aims to bring efficiency, transparency, and process optimization across the globe with the help of technology and data. SuperProcure started its journey in 2017 to help companies digitize their logistics operations. We created industry-recognized products which are now being used by 150+ companies like Tata Consumer Products, ITC, Flipkart, Tata Chemicals, PepsiCo, L&T Constructions, GMM Pfaudler, Havells, and others. The platform helps achieve real-time visibility, 100% audit adherence & transparency, a 300% improvement in team productivity, up to 9% savings in freight costs, and many more benefits. SuperProcure is determined to make the lives of logistics teams easier, add value, and help establish a fair and beneficial process for the company.



SuperProcure is backed by IndiaMART and incubated under IIMCIP & Lumis Supply Chain Labs. SuperProcure was also recognized as one of the Top 50 Emerging Start-ups of India at the NASSCOM Product Conclave organized in Bengaluru, and was a part of the recently launched National Logistics Policy by the Prime Minister of India. More details about our journey can be found here


Life @ SuperProcure 


SuperProcure operates in an extremely innovative, entrepreneurial, analytical, and problem-solving work culture. Every team member is fully motivated and committed to the company's vision and believes in getting things done. In our organization, every employee is the CEO of what he/she does; from conception to execution, the work needs to be thought through.


Our people are the core of our organization, and we believe in empowering them and making them a part of the daily decision-making, which impacts the business and shapes the company's overall strategy. They are constantly provided with resources, mentorship, and support from our highly energetic teams and leadership. SuperProcure is extremely inclusive and believes in collective success.


Looking for a bland, routine 9-6 job? PLEASE DO NOT APPLY. Looking for a job where you wake up and add significant value to a $180 Billion logistics industry everyday? DO APPLY.



OTHER DETAILS

  • Engagement: Full Time
  • No. of openings: 1
  • CTC: 12-20 LPA


Wipro
Agency job
via Skillathon by Abhijit Choudhary
Hyderabad
4 - 12 yrs
₹5L - ₹20L / yr
Windows Azure
Microsoft Windows Azure
Web application security
VD Support
Deployment tools
+1 more
  • Extensive experience in designing & supporting Azure Managed Services operations
  • Maintaining Azure Active Directory and Azure AD authentication
  • Azure update management - handling updates/patching
  • Good understanding of Azure services (Azure App Service, Azure SQL, Azure Storage Accounts, etc.)
  • Understanding of load balancers, DNS, virtual networks, NSGs, and firewalls in a cloud environment
  • Writing ARM templates, setting up automation for resource provisioning
  • Knowledge of Azure Automation and Automation Desired State Configuration
  • Good understanding of high availability and auto-scaling
  • Azure Backup and ASR (Azure Site Recovery)
  • Azure monitoring and configuration monitoring (performance metrics, OMS)
  • Cloud migration experience (on-premise to cloud)
  • PowerShell scripting for custom task automation
  • Strong experience in configuring, maintaining, and troubleshooting Microsoft-based production systems

    Certification: 

    Azure Administrator (AZ-103) & Azure Architect (AZ-300 & AZ-301) 
RaRa Now
Posted by Puneeta Mishra
Remote only
2 - 4 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more
• Minimum 2 years' experience working as a DevOps Engineer.
• Hands-on experience in Azure.
• Build and maintain CI/CD tools and pipelines.
• Designing and managing highly scalable, reliable, and fault-tolerant infrastructure & networking that forms the backbone of distributed systems at RARA Now.
• Continuously improve code quality, product execution, and customer delight.
• Communicate, collaborate and work effectively across distributed teams in a global environment.
• Operate to strengthen teams across their product with their knowledge base
• Contribute to improving team relatedness, and help build a culture of camaraderie.
• Continuously refactor applications to ensure high-quality design
• Pair with team members on functional and non-functional requirements and spread design philosophy and goals across the team
• Excellent bash, and scripting fundamentals and hands-on with scripting in programming languages such as Python, Ruby, Golang, etc.
• Good understanding of distributed system fundamentals and ability to troubleshoot issues in a larger distributed infrastructure
• Working knowledge of the TCP/IP stack, internet routing, and load balancing
• Basic understanding of cluster orchestrators and schedulers (Kubernetes)
• Deep knowledge of Linux as a production environment, and container technologies. e.g., Docker, Infrastructure as Code such as Terraform, and K8s administration at large scale.
• Have worked on production distributed systems and have an understanding of microservices architecture, RESTful services, and CI/CD.
FinTech NBFC dedicated to driving Finance sector
Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
2 - 4 yrs
₹8L - ₹10L / yr
CI/CD
Amazon Web Services (AWS)
Kubernetes
Git
YAML
+2 more
Technical Skills:
- Knowledge of infrastructure and cloud (preferably AWS); experience with infrastructure-as-code (preferably Terraform)
- Experience with one or more scripting languages: YAML, Python, Ruby, Bash, and/or NodeJS
- Experience with web services standards and related technologies
- Experience working with Git or other source control and CI/CD technologies, following Agile development methodology and related Agile practices, with exposure to Agile tools
- Preferred: development experience with Kafka or big data technologies; understanding of essential Kafka components like ZooKeeper and brokers, and optimization of Kafka client applications (producers & consumers)
- Experience with automation of infrastructure, testing, DB deployment automation, and logging/monitoring/alerting
- AWS services experience with CloudFormation, ECS, Elastic Container Registry, Pipelines, CloudWatch, Glue, and other related services
- AWS Elastic Kubernetes Service (EKS) - managing and auto-scaling Kubernetes and containers
- Good hands-on experience with various AWS services like EC2, RDS, EKS, S3, Lambda, API, CloudWatch, etc.
- Good and quick with log analysis to perform root cause analysis (RCA) on production deployments and container errors in CloudWatch
- Working on ways to automate and improve deployment and release processes
- High understanding of the serverless architecture concept
- Good with deployment automation tools and investigating to resolve technical issues
- Sound knowledge of APIs, databases, and container-based ETL jobs
- Planning out projects and being involved in project management decisions

Soft Skills:
- Adaptability
- Collaboration with different teams
- Good communication skills
- Team player attitude










APT Portfolio
Posted by Ankita Pachauri
Delhi, Gurugram, Bengaluru (Bangalore)
10 - 15 yrs
₹50L - ₹70L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+13 more

A.P.T Portfolio is a high-frequency trading firm that specialises in quantitative trading & investment strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.


As a manager, you would be in charge of managing the DevOps team, and your remit shall include the following:

  • Private Cloud - Design & maintain a high-performance and reliable network architecture to support HPC applications
  • Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor, or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
  • Security - Implement best security practices and a data isolation policy between different divisions internally.
  • Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
  • Storage Solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
  • NFS - Implement and optimize the latest version of NFS for our use case.
  • Public Cloud - Drive AWS/Google Cloud utilization in the firm for increasing efficiency, improving collaboration, and reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm.
  • Backups - Identify and automate backup of all crucial data/binaries/code etc. in a secured manner, at such duration as warranted by the use case. Ensure that recovery from backup is tested and seamless.
  • Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
  • Operating System - Plan, test, and roll out new operating systems for all production, simulation, and desktop environments. Work closely with developers to highlight new performance-enhancement capabilities of new versions.
  • Configuration Management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for the same.
  • Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
  • Audit access logs on devices. Use third-party tools to put in place a monitoring mechanism for early detection of any suspicious activity.
  • Maintaining all third-party tools used for development and collaboration - this shall include maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo, etc.


Qualifications 

  • Bachelor's or Master's degree, preferably in CSE/IT
  • 10+ years of relevant experience in a sysadmin function
  • Must have strong knowledge of IT infrastructure, Linux, networking, and grid computing
  • Must have a strong grasp of automation & data management tools
  • Proficient in scripting languages and Python


Desirables

  • Professional attitude; co-operative and mature approach to work; must be focused, structured, and well-considered, with strong troubleshooting skills
  • Exhibits a high level of individual initiative and ownership; effectively collaborates with other team members

 

APT Portfolio is an equal opportunity employer
