SigTuple

Site Reliability Engineer - DevOps
Posted by Rupali Borole
3 - 10 yrs
₹5L - ₹31L / yr
Bengaluru (Bangalore)
Skills
Cloud Computing
Docker
Python
Ansible
Chef
Perl
Design patterns
Amazon Web Services (AWS)
Software Development
Site Reliability Engineer

Job Description:
You will be administering the infrastructure of an indigenous, one-of-a-kind artificial intelligence cloud platform. You will work with the dev teams to deploy, monitor and scale the distributed platform to handle real-time AI analysis and loads and loads of visual data (images and videos in various formats). We're looking for people with extensive DevOps experience and strong system programming skills.

Responsibilities:
1. You will be responsible for the uptime and reliability of SigTuple's infrastructure and help backend teams achieve it by writing reliable software and automation.
2. Work with other development teams to automate deployment of modules and manage the build and release pipeline.
3. Extensive process-level and node-level monitoring and auto-healing of the entire cluster.
4. Managing, provisioning and servicing cloud servers.
5. Contributing to back-end services and their infrastructure system design.

Requirements:
1. BTech/MTech in any engineering discipline.
2. 3-6 years of experience in a DevOps/Software Engineering role.
3. Experience in management of cloud computing services. Extensive knowledge of any one cloud platform (Kubernetes, AWS, GCP, Azure, etc.).
4. Proficiency with any major monitoring framework (Sensu, Nagios, etc.).
5. Comfortable with any one scripting language (Python, Perl) and a configuration management or orchestration tool (Ansible, Chef, etc.).
6. Proficiency with OS and network fundamentals and strong Linux administration skills.
7. Experience with container tools (Docker ecosystem) will be a plus.
8. Experience working with issues of scale in a system.
9. Experience working in a startup is a plus.
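To make responsibility 3 concrete, here is a minimal, purely illustrative sketch of a process-level watchdog in Python. It is not SigTuple's actual tooling: the service name, check command and restart command are assumptions, and in practice this job is usually delegated to a framework such as Sensu, Nagios event handlers or Kubernetes liveness probes.

```python
#!/usr/bin/env python3
"""Minimal process-level monitor/auto-heal sketch (illustrative only)."""
import subprocess
import time

SERVICE = "inference-worker"   # hypothetical systemd unit name
CHECK_INTERVAL_SECONDS = 30

def is_active(service: str) -> bool:
    """Return True if systemd reports the unit as active."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", service])
    return result.returncode == 0

def restart(service: str) -> None:
    """Attempt to auto-heal by restarting the unit."""
    subprocess.run(["systemctl", "restart", service], check=False)

if __name__ == "__main__":
    while True:
        if not is_active(SERVICE):
            print(f"{SERVICE} is down; restarting")
            restart(SERVICE)
        time.sleep(CHECK_INTERVAL_SECONDS)
```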

About SigTuple

Founded: 2015
Type: Services
Size: 100-1000
Stage: Raised funding

About

SigTuple is a health-tech organisation that is transforming quality healthcare delivery in India by spearheading a diagnostic revolution with multidisciplinary innovation. We build intelligent screening solutions to aid diagnosis through AI-powered analysis of visual medical data, with a robust EN/ISO 13485:2016 certified Quality Management System in place. Our sustained efforts towards ushering in standardisation, scalability and reliability in diagnostics are the base of our contribution to the Indian healthcare system. Every milestone marked represents how we took on each challenge in healthcare to make our innovation truly multidimensional.

Connect with the team

Mohammed Matheen
Mansi M Sindhvad
Rupali Borole
Sneha Chakravorty
Ravindra S

Company social profiles

Blog | LinkedIn | Twitter | Facebook

Similar jobs

Indian private sector bank
Agency job
via Pluginlive by Harsha Saggi
Chennai
5 - 10 yrs
₹10L - ₹30L / yr
Terraform
DevOps
Amazon Web Services (AWS)
Docker
Kubernetes

Roles and Responsibilities:

  • AWS Cloud Management: Design, deploy, and manage AWS cloud infrastructure. Optimize and maintain cloud resources for performance and cost efficiency. Monitor and ensure the security of cloud-based systems.
  • Automated Provisioning: Develop and implement automated provisioning processes for infrastructure deployment. Utilize tools like Terraform and Packer to automate and streamline the provisioning of resources.
  • Infrastructure as Code (IaC): Champion the use of Infrastructure as Code principles. Collaborate with development and operations teams to define and maintain IaC scripts for infrastructure deployment and configuration.
  • Collaboration and Communication: Work closely with cross-functional teams to understand project requirements and provide DevOps expertise. Communicate effectively with team members and stakeholders regarding infrastructure changes, updates, and improvements.
  • Continuous Integration/Continuous Deployment (CI/CD): Implement and maintain CI/CD pipelines to automate software delivery processes. Ensure reliable and efficient deployment of applications through the development lifecycle.
  • Performance Monitoring and Optimization: Implement monitoring solutions to track system performance, troubleshoot issues, and optimize resource utilization. Proactively identify opportunities for system and process improvements. 


Mandatory Skills:

  • Proven experience as a DevOps Engineer or similar role, with a focus on AWS.
  • Strong proficiency in automated provisioning and cloud management.
  • Experience with Infrastructure as Code tools, particularly Terraform and Packer.
  • Solid understanding of CI/CD pipelines and version control systems.
  • Strong scripting skills (e.g., Python, Bash) for automation tasks.
  • Excellent problem-solving and troubleshooting skills.
  • Good interpersonal and communication skills for effective collaboration.
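The "strong scripting skills (e.g., Python, Bash) for automation tasks" item above, combined with the cost-efficiency responsibility, points at small housekeeping scripts. A minimal, purely illustrative boto3 sketch follows; the AutoStop tag, the region and the use-case are assumptions, not part of the listing, and it requires configured AWS credentials.

```python
"""Illustrative automation sketch: stop running EC2 instances tagged AutoStop=true."""
import boto3

def stop_tagged_instances(region: str = "ap-south-1") -> list:
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:AutoStop", "Values": ["true"]},          # assumed tag
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)  # cost-saving shutdown
    return instance_ids

if __name__ == "__main__":
    print("Stopped:", stop_tagged_instances())
```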


Secondary Skills:

  • AWS certifications (e.g., AWS Certified DevOps Engineer, AWS Certified Solutions Architect).
  • Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
  • Knowledge of microservices architecture and serverless computing.
  • Familiarity with monitoring and logging tools (e.g., CloudWatch, ELK stack).


AuditCue
Anand Srinivasan
Posted by Anand Srinivasan
Chennai
3 - 6 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Bachelor's degree in Computer Science or a related field, or equivalent work experience

Strong understanding of cloud infrastructure and services, such as AWS, Azure, or Google Cloud Platform

Experience with infrastructure as code tools such as Terraform or CloudFormation

Proficiency in scripting languages such as Python, Bash, or PowerShell

Familiarity with DevOps methodologies and tools such as Git, Jenkins, or Ansible

Strong problem-solving and analytical skills

Excellent communication and collaboration skills

Ability to work independently and as part of a team

Willingness to learn new technologies and tools as required

Hive (thehive.ai)
Ashish Kapoor
Posted by Ashish Kapoor
Gurugram
5 - 20 yrs
₹10L - ₹40L / yr
DevOps
Kubernetes
Docker
RabbitMQ
PostgreSQL
DevOps Engineer, Gurgaon

About Hive
Hive is the leading provider of cloud-based AI solutions for content understanding, trusted by the world’s largest, fastest growing, and most innovative organizations. The company empowers developers with a portfolio of best-in-class, pre-trained AI models, serving billions of customer API requests every month. Hive also offers turnkey software applications powered by proprietary AI models and datasets, enabling breakthrough use cases across industries. Together, Hive’s solutions are transforming content moderation, brand protection, sponsorship measurement, context-based ad targeting, and more.
Hive has raised over $120M in capital from leading investors, including General Catalyst, 8VC, Glynn Capital, Bain & Company, Visa Ventures, and others. We have over 250 employees globally in our San Francisco, Seattle, and Delhi offices. Please reach out if you are interested in joining the future of AI!

About Role
Our unique machine learning needs led us to open our own data centers, with an emphasis on distributed high performance computing integrating GPUs. Even with these data centers, we maintain a hybrid infrastructure, using public clouds when they are the right fit. As we continue to commercialize our machine learning models, we also need to grow our DevOps and Site Reliability team to maintain the reliability of our enterprise SaaS offering for our customers. Our ideal candidate is someone who is able to thrive in an unstructured environment and takes automation seriously. You believe there is no task that can’t be automated and no server scale too large. You take pride in optimizing performance at scale in every part of the stack and never manually performing the same task twice.

Responsibilities
● Create tools and processes for deploying and managing hardware for Private Cloud Infrastructure.
● Improve workflows of developer, data, and machine learning teams
● Manage integration and deployment tooling
● Create and maintain monitoring and alerting tools and dashboards for various services, and audit infrastructure
● Manage a diverse array of technology platforms, following best practices and procedures
● Participate in on-call rotation and root cause analysis
Requirements
● Minimum 5 - 10 years of previous experience working directly with Software Engineering teams as a developer, DevOps Engineer, or Site Reliability Engineer.
● Experience with infrastructure as a service, distributed systems, and software design at a high level.
● Comfortable working on Linux infrastructures (Debian) via the CLI.
● Able to learn quickly in a fast-paced environment.
● Able to debug, optimize, and automate routine tasks
● Able to multitask, prioritize, and manage time efficiently independently
● Can communicate effectively across teams and management levels
● Degree in computer science, or similar, is an added plus!
Technology Stack
● Operating Systems - Linux/Debian Family/Ubuntu
● Configuration Management - Chef
● Containerization - Docker
● Container Orchestrators - Mesosphere/Kubernetes
● Scripting Languages - Python/Ruby/Node/Bash
● CI/CD Tools - Jenkins
● Network hardware - Arista/Cisco/Fortinet
● Hardware - HP/SuperMicro
● Storage - Ceph, S3
● Database - Scylla, Postgres, Pivotal GreenPlum
● Message Brokers: RabbitMQ
● Logging/Search - ELK Stack
● AWS: VPC/EC2/IAM/S3
● Networking: TCP/IP, ICMP, SSH, DNS, HTTP, SSL/TLS
● Storage systems: RAID, distributed file systems, NFS/iSCSI/CIFS
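RabbitMQ appears both in the skills list and in the stack above. As a minimal, purely illustrative sketch of working with that broker from Python, here is a basic publisher using the pika client; the host and queue name are assumptions and not Hive's actual configuration.

```python
"""Minimal RabbitMQ publisher sketch (illustrative only)."""
import pika

QUEUE = "task-events"  # hypothetical queue name

# Connect to a local broker and make sure the queue exists and survives restarts.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue=QUEUE, durable=True)

# Publish a persistent message to the default exchange, routed by queue name.
channel.basic_publish(
    exchange="",
    routing_key=QUEUE,
    body=b"model-run-completed",
    properties=pika.BasicProperties(delivery_mode=2),
)
print("published")
connection.close()
```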
Who we are
We are a group of ambitious individuals who are passionate about creating a revolutionary AI company. At Hive, you will have a steep learning curve and an opportunity to contribute to one of the fastest growing AI start-ups in San Francisco. The work you do here will have a noticeable and direct impact on the development of the company.
Thank you for your interest in Hive, and we hope to meet you soon.
CloudAngle
Bogam Surender
Posted by Bogam Surender
Hyderabad
7 - 15 yrs
₹10L - ₹15L / yr
DevOps
Kubernetes
Docker
Java
NodeJS (Node.js)
+2 more
  1. Bachelor’s and/or master’s degree in Computer Science, Computer Engineering or related technical discipline
  2. About 5 years of professional experience supporting AWS cloud environments
  3. AWS Certified Solutions Architect (Associate or Professional)
  4. Experience serving as lead (shift management, reporting) will be a plus
  5. AWS Certified Solutions Architect - Professional (must have)
  6. Minimum 4 years' and maximum 8 years' experience.
  1. 100% work from office in Hyderabad
  2. Very fluent in English
Bootlabs Technologies Private Limited
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more
Required

• Expertise in any one hyper-scale (AWS/AZURE/GCP), including basic services like networking, data and workload management.
o AWS
Networking: VPC, VPC Peering, Transit Gateway, RouteTables, SecurityGroups, etc.
Data: RDS, DynamoDB, ElasticSearch
Workload: EC2, EKS, Lambda, etc.

o Azure
Networking: VNET, VNET Peering,
Data: Azure MySQL, Azure MSSQL, etc.
Workload: AKS, VirtualMachines, AzureFunctions

o GCP
Networking: VPC, VPC Peering, Firewall, Flowlogs, Routes, Static and External IP Addresses
Data: Cloud Storage, DataFlow, Cloud SQL, Firestore, BigTable, BigQuery
Workload: GKE, Instances, App Engine, Batch, etc.

• Experience in any one of the CI/CD tools (Gitlab/Github/Jenkins) including runner setup, templating and configuration.

• Kubernetes experience (EKS/AKS/GKE) or Ansible experience; basics like pods, deployments, networking and service mesh. Experience using a package manager like Helm.

• Scripting experience (Bash/Python): automation in pipelines when required, system services.

• Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines and versioning the code.
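Pulumi is one of the infrastructure-automation options the listing names above and has a Python SDK, so a minimal sketch can stay in the same language as the scripting requirement. This is illustrative only: the resource name is a placeholder, and it assumes the pulumi and pulumi-aws packages plus configured AWS credentials; a real module would be parameterised and versioned in Git.

```python
"""Minimal Pulumi (Python) infrastructure-as-code sketch (illustrative only)."""
import pulumi
import pulumi_aws as aws

# A versioned S3 bucket as a tiny stand-in for "write modules, version the code".
artifacts = aws.s3.Bucket(
    "pipeline-artifacts",  # placeholder resource name
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

pulumi.export("bucket_name", artifacts.id)
```

Run with `pulumi up` inside a Pulumi project; the exported bucket name can then feed a CI/CD pipeline.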

Optional
• Experience in any programming language is not required but is appreciated.
• Good experience in GIT, SVN or any other code management tool is required.
• DevSecOps tools (Qualys/SonarQube/Black Duck) for security scanning of artifacts, infrastructure and code.
• Observability tools (Opensource: Prometheus, Elasticsearch, OpenTelemetry; Paid: Datadog, 24/7, etc)
Censiusai
at Censiusai
1 recruiter
Censius Team
Posted by Censius Team
Remote only
3 - 5 yrs
₹10L - ₹20L / yr
DevOps
Kubernetes
Docker
Django
Flask
+3 more

About the job

Our goal

We are reinventing the future of MLOps. The Censius Observability platform enables businesses to gain greater visibility into how their AI makes decisions, to understand it better. We enable explanations of predictions, continuous monitoring of drift, and assessing fairness in the real world. (TL;DR: build the best ML monitoring tool)

 

The culture

We believe in constantly iterating and improving our team culture, just like our product. We have found a good balance between async and sync work: the default is still Notion docs over meetings, but at the same time, we recognize that as an early-stage startup, brainstorming together over calls leads to results faster. If you enjoy taking ownership, moving quickly, and writing docs, you will fit right in.

 

The role:

Our engineering team is growing and we are looking to bring on board a senior software engineer who can help us transition to the next phase of the company. As we roll out our platform to customers, you will be pivotal in refining our system architecture, ensuring the various tech stacks play well with each other, and smoothening the DevOps process.

On the platform, we use Python (ML-related jobs), Golang (core infrastructure), and NodeJS (user-facing). The platform is 100% cloud-native and we use Envoy as a proxy (eventually will lead to service-mesh architecture).

By joining our team, you will get exposure to working across a swath of modern technologies while building an enterprise-grade ML platform in the most promising area.

 

Responsibilities

  • Be the bridge between engineering and product teams. Understand long-term product roadmap and architect a system design that will scale with our plans.
  • Take ownership of converting product insights into detailed engineering requirements. Break these down into smaller tasks and work with the team to plan and execute sprints.
  • Author high-quality, high-performance, and unit-tested code running on a distributed environment using containers.
  • Continually evaluate and improve DevOps processes for a cloud-native codebase.
  • Review PRs, mentor others and proactively take initiatives to improve our team's shipping velocity.
  • Leverage your industry experience to champion engineering best practices within the organization.

 

Qualifications

Work Experience

  • 3+ years of industry experience (2+ years in a senior engineering role) preferably with some exposure in leading remote development teams in the past.
  • Proven track record building large-scale, high-throughput, low-latency production systems with at least 3+ years working with customers, architecting solutions, and delivering end-to-end products.
  • Fluency in writing production-grade Go or Python in a microservice architecture with containers/VMs for over 3+ years.
  • 3+ years of DevOps experience (Kubernetes, Docker, Helm and public cloud APIs)
  • Worked with relational (SQL) as well as non-relational databases (Mongo or Couch) in a production environment.
  • (Bonus: worked with big data in data lakes/warehouses).
  • (Bonus: built an end-to-end ML pipeline)

Skills

  • Strong documentation skills. As a remote team, we heavily rely on elaborate documentation for everything we are working on.
  • Ability to motivate, mentor, and lead others (we have a flat team structure, but the team would rely upon you to make important decisions)
  • Strong independent contributor as well as a team player.
  • Working knowledge of ML and familiarity with concepts of MLOps

Benefits

  • Competitive Salary
  • Work Remotely
  • Health insurance
  • Unlimited Time Off
  • Support for continual learning (free books and online courses)
  • Reimbursement for streaming services (think Netflix)
  • Reimbursement for gym or physical activity of your choice
  • Flex hours
  • Leveling Up Opportunities

 

You will excel in this role if

  • You have a product mindset. You understand, care about, and can relate to our customers.
  • You take ownership, collaborate, and follow through to the very end.
  • You love solving difficult problems, stand your ground, and get what you want from engineers.
  • Resonate with our core values of innovation, curiosity, accountability, trust, fun, and social good.
Blue Sky Analytics
at Blue Sky Analytics
3 recruiters
Balahun Khonglanoh
Posted by Balahun Khonglanoh
Remote only
1 - 4 yrs
Best in industry
Amazon Web Services (AWS)
DevOps
Amazon EC2
AWS Lambda
ECS
+1 more

About the Company

Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!


We are looking for DevOps Engineer who can help us build the infrastructure required to handle huge datasets on a scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, Containers, etc. As part of our core development crew, you’ll be figuring out how to deploy applications ensuring high availability and fault tolerance along with a monitoring solution that has alerts for multiple microservices and pipelines. Come save the planet with us!


Your Role

  • Applications built at scale to go up and down on command.
  • Manage a cluster of microservices talking to each other.
  • Build pipelines for huge data ingestion, processing, and dissemination.
  • Optimize services for low cost and high efficiency.
  • Maintain high availability and scalable PSQL database cluster.
  • Maintain alert and monitoring system using Prometheus, Grafana, and Elastic Search.
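For the last item above (alerting and monitoring with Prometheus and Grafana), here is a minimal, purely illustrative exporter in Python using the prometheus_client library. The metric name and the queue-depth function are assumptions for illustration; a real exporter would read from the actual ingestion pipeline, and Prometheus would scrape the exposed endpoint.

```python
"""Illustrative Prometheus exporter sketch for a custom pipeline metric."""
import random
import time

from prometheus_client import Gauge, start_http_server

INGESTION_QUEUE_DEPTH = Gauge(
    "ingestion_queue_depth",
    "Items waiting in the data ingestion queue (hypothetical metric)",
)

def read_queue_depth() -> int:
    # Placeholder; a real exporter would query the actual queue or pipeline.
    return random.randint(0, 100)

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at :9100/metrics for Prometheus to scrape
    while True:
        INGESTION_QUEUE_DEPTH.set(read_queue_depth())
        time.sleep(15)
```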

Requirements

  • 1-4 years of work experience.
  • Strong emphasis on Infrastructure as Code - Cloudformation, Terraform, Ansible.
  • CI/CD concepts and implementation using CodePipeline, GitHub Actions.
  • Advanced hold on AWS services like IAM, EC2, ECS, Lambda, S3, etc.
  • Advanced Containerization - Docker, Kubernetes, ECS.
  • Experience with managed services like database cluster, distributed services on EC2.
  • Self-starters and curious folks who don't need to be micromanaged.
  • Passionate about Blue Sky Climate Action and working with data at scale.

Benefits

  • Work from anywhere: Work by the beach or from the mountains.
  • Open source at heart: We are building a community that you can use, contribute to, and collaborate on.
  • Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
  • Flexible timings: Fit your work around your lifestyle.
  • Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
  • Work Machine of choice: Buy a device and own it after completing a year at BSA.
  • Quarterly Retreats: Yes, there's work, but then there's all the non-work + fun aspect, aka the retreat!
  • Yearly vacations: Take time off to rest and get ready for the next big assignment by availing the paid leaves.
Horizontal Integration
Remote, Bengaluru (Bangalore), Hyderabad, Vadodara, Pune, Jaipur, Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 15 yrs
₹10L - ₹25L / yr
Amazon Web Services (AWS)
Windows Azure
Microsoft Windows Azure
Google Cloud Platform (GCP)
Docker
+2 more

Position Summary

DevOps is a Department of Horizontal Digital, within which we have 3 different practices.

  1. Cloud Engineering
  2. Build and Release
  3. Managed Services

This opportunity is for a Cloud Engineering role with some experience in infrastructure migrations. It is a completely hands-on job focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead. You are also expected to work on different projects building out Sitecore infrastructure from scratch.

We are a Sitecore Platinum Partner, and the majority of the infrastructure work that we are doing is for Sitecore.

Sitecore is a .NET-based, enterprise-level web CMS, which can be deployed on-prem or on IaaS, PaaS and containers.

So, most of our DevOps work is currently planning, architecting and deploying infrastructure for Sitecore.
 

Key Responsibilities:

  • This role includes ownership of technical, commercial and service elements related to cloud migration and Infrastructure deployments.
  • Person who will be selected for this position will ensure high customer satisfaction delivering Infra and migration projects.
  • The candidate must expect to work in parallel across multiple projects, and must also have a fully flexible approach to working hours.
  • The candidate should keep up to date with the rapid technological advancements and developments taking place in the industry.
  • The candidate should also have know-how of Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps and CI/CD pipelines.

Requirements:

  • Bachelor’s degree in computer science or equivalent qualification.
  • Total work experience of 6 to 8 Years.
  • Total migration experience of 4 to 6 Years.
  • Multiple Cloud Background (Azure/AWS/GCP)
  • Implementation knowledge of VMs and VNets
  • Know-how of Cloud Readiness and Assessment
  • Good Understanding of 6 R's of Migration.
  • Detailed understanding of the cloud offerings
  • Ability to Assess and perform discovery independently for any cloud migration.
  • Working Exp. on Containers and Kubernetes.
  • Good Knowledge of Azure Site Recovery/Azure Migrate/Cloud Endure
  • Understanding of vSphere and Hyper-V virtualization.
  • Working experience with Active Directory.
  • Working experience with AWS Cloud formation/Terraform templates.
  • Working Experience of VPN/Express route/peering/Network Security Groups/Route Table/NAT Gateway, etc.
  • Experience of working with CI/CD tools like Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps, GitHub Actions.
  • High Availability and Disaster Recovery Implementations, taking into the consideration of RTO and RPO aspects.
  • Candidates with AWS/Azure/GCP Certifications will be preferred.
Goodera
Bengaluru (Bangalore)
3 - 9 yrs
₹10L - ₹30L / yr
Amazon Web Services (AWS)
Ansible
DevOps
Kubernetes
Docker
+5 more
AWS DevOps Engineer
Goodera is looking for an experienced and motivated DevOps professional to be an integral part of its core infrastructure team. As a DevOps Engineer, you must be able to troubleshoot production issues; design, implement, and deploy monitoring tools; collaborate with team members to improve existing engineering tools and develop new ones; optimize the company's computing architecture; and design and conduct security, performance, and availability tests.


Responsibilities:

This is a highly accountable role and the candidate must meet the following professional expectations:
• Owning and improving the scalability and reliability of our products.
• Working directly with product engineering and infrastructure teams.
• Designing and developing various monitoring system tools.
• Accountable for developing deployment strategies and build configuration management.
• Deploying and updating system and application software.
• Ensure regular, effective communication with team members and cross-functional resources.
• Maintaining a positive and supportive work culture.
• First point of contact for handling customer (may be internal stakeholders) issues, providing guidance and recommendations to increase efficiency and reduce customer incidents.
• Develop tooling and processes to drive and improve customer experience, create playbooks.
• Eliminate manual tasks via configuration management.
• Intelligently migrate services from one AWS region to other AWS regions.
• Create, implement and maintain security policies to ensure ISO/ GDPR / SOC / PCI compliance.
• Verify infrastructure Automation meets compliance goals and is current with disaster recovery plan.
• Evangelize configuration management and automation to other product developers.
• Keep up to date with upcoming technologies to maintain state-of-the-art infrastructure.

Required Candidate profile : 
• 3+ years of proven experience working in a DevOps environment.
• 3+ years of proven experience working in AWS Cloud environments.
• Solid understanding of networking and security best practices.
• Experience with infrastructure-as-code frameworks such as Ansible, Terraform, Chef, Puppet, CFEngine, etc.
• Experience in scripting or programming languages (Bash, Python, PHP, Node.js, Perl, etc.)
• Experience designing and building web application environments on AWS, including services such as ECS, ECR, Fargate, Lambda, SNS/SQS, CloudFront, CodeBuild, CodePipeline, configuring CloudWatch, WAF, Active Directory, Kubernetes (EKS), EC2, S3, ELB, RDS, Redshift, etc.
• Hands on Experience in Docker is a big plus.
• Experience working in an Agile, fast paced, DevOps environment.
• Strong Knowledge in DB such as MongoDB / MySQL / DynamoDB / Redis / Cassandra.
• Experience with open source tools such as HAProxy, Apache, Nginx, Nagios, etc.
• Fluency with version control systems, with a preference for Git.
• Strong Linux-based infrastructure and Linux administration skills.
• Experience with installing and configuring application servers such as WebLogic, JBoss and Tomcat.
• Hands-on in logging, monitoring and alerting tools like ELK, Grafana, Metabase, Monit, Zabbix, etc.
• A team player capable of high performance and flexibility in a dynamic working environment, with the ability to lead and train others on technical and procedural topics.
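The "configuring CloudWatch" item in the requirements above can be illustrated with a short boto3 sketch. This is not Goodera's configuration: the instance ID, SNS topic ARN, region and thresholds are placeholders chosen purely for illustration, and it requires configured AWS credentials.

```python
"""Illustrative CloudWatch alarm sketch: alert on sustained high EC2 CPU."""
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,               # 5-minute datapoints
    EvaluationPeriods=2,      # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
print("alarm created")
```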
AppLift
at AppLift
1 video
1 recruiter
Divya Pushpa
Posted by Divya Pushpa
Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹25L / yr
Python
DevOps
Our requirements:
● Looking for candidates who are flexible to work in rotational shifts.
● You have a Bachelor (4-year) degree with a technical major, such as engineering or computer science.
● You have good knowledge of scripting, such as Shell/Python.
● You have 2+ years of system administration/debugging experience, scripting and related tools.
● A Systems Administration/System Engineer certification in Unix will be an added advantage.
● Knowledge of Nagios/Puppet/Chef is mandatory.

Your Responsibilities:

Engineering and Provisioning:
● You install new / rebuild existing servers and configure hardware, peripherals, services, settings, directories, storage, etc. in accordance with standards and project/operational requirements.
● You develop and maintain installation and configuration procedures.
● You contribute to and maintain system standards.
● You research and recommend innovative and, where possible, automated approaches for system administration tasks. Identify approaches that leverage our resources and provide economies of scale.

Operations and Support:
● You perform daily system monitoring, verifying the integrity and availability of all hardware, server resources, systems and key processes, reviewing system and application logs, and verifying completion of scheduled jobs such as backups.
● You perform regular security monitoring to identify any possible intrusions.
● You perform regular file archival and purge as necessary.
● You create, change, and delete user accounts per request.
● You repair and recover from hardware or software failures. Coordinate and communicate with impacted teams.

Maintenance:
● You apply OS patches and upgrades on a regular basis, and upgrade administrative tools and utilities. Configure / add new services as necessary.
● You maintain operational, configuration, or other procedures.
● You perform ongoing performance tuning, hardware upgrades, and resource optimization as required. Configure CPU, memory, and disk partitions as required.
● You maintain data center environmental and monitoring equipment.

What do we offer?
● You get valuable insights into mobile marketing/entrepreneurship and have a high impact on shaping the expansion and success of AppLift across India.
● Profit from working with European serial entrepreneurs who co-founded over 10 successful companies within the last 8 years; get access to a well-established network and build your own top-tier network and reputation.
● Learn and grow in an environment characterized by flat hierarchy, entrepreneurial drive and fun.
● You experience an excellent learning culture.
● Competitive remuneration package and much more!
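The daily system monitoring duty described above often starts as a small script before being folded into a tool like Nagios. The sketch below is purely illustrative and uses the Shell/Python scripting skill the listing asks for; the mount points and threshold are assumptions, and a real setup would report to the monitoring system rather than print.

```python
"""Illustrative daily disk-usage check (not AppLift's actual tooling)."""
import os
import shutil

MOUNT_POINTS = ["/", "/var", "/home"]  # assumed paths; adjust per host
THRESHOLD_PERCENT = 85

def check_disks() -> list:
    """Return a warning string for each monitored partition over the threshold."""
    alerts = []
    for path in MOUNT_POINTS:
        if not os.path.exists(path):
            continue  # skip mount points that don't exist on this host
        usage = shutil.disk_usage(path)
        percent_used = usage.used / usage.total * 100
        if percent_used >= THRESHOLD_PERCENT:
            alerts.append(f"{path} at {percent_used:.1f}% used")
    return alerts

if __name__ == "__main__":
    problems = check_disks()
    if problems:
        print("DISK WARNING:", "; ".join(problems))
    else:
        print("all monitored partitions below threshold")
```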