Backend Developer
Posted by Censius Team
3 - 5 yrs
₹10L - ₹20L / yr
Remote only
Skills
DevOps
Kubernetes
Docker
Django
Flask
Python
SQL
Machine Learning (ML)

About the job

Our goal

We are reinventing the future of MLOps. The Censius Observability platform gives businesses greater visibility into how their AI makes decisions. We enable explanation of predictions, continuous monitoring of drift, and assessment of fairness in the real world. (TL;DR: we are building the best ML monitoring tool.)
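As a rough illustration of the kind of monitoring the platform automates (a minimal sketch, not the Censius implementation or API), drift between a model's training data and live traffic can be flagged with a simple two-sample test per feature; the feature values, sample sizes, and threshold below are assumptions made for the example.

    # Minimal drift-check sketch (illustrative only, not the Censius API).
    # Compares a training feature distribution against production traffic
    # with a two-sample Kolmogorov-Smirnov test and flags likely drift.
    import numpy as np
    from scipy.stats import ks_2samp

    def detect_drift(train: np.ndarray, prod: np.ndarray, alpha: float = 0.01) -> bool:
        """Return True if the production sample has likely drifted from training."""
        stat, p_value = ks_2samp(train, prod)
        return p_value < alpha

    rng = np.random.default_rng(0)
    train_latency = rng.normal(loc=100, scale=10, size=5_000)  # training distribution
    prod_latency = rng.normal(loc=115, scale=12, size=5_000)   # shifted production traffic

    print("drift detected:", detect_drift(train_latency, prod_latency))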

 

The culture

We believe in constantly iterating and improving our team culture, just like our product. We have found a good balance between async and sync work: the default is still Notion docs over meetings, but at the same time we recognize that, as an early-stage startup, brainstorming together over calls gets to results faster. If you enjoy taking ownership, moving quickly, and writing docs, you will fit right in.

 

The role:

Our engineering team is growing, and we are looking to bring on board a senior software engineer who can help us transition to the next phase of the company. As we roll out our platform to customers, you will be pivotal in refining our system architecture, ensuring the various tech stacks play well with each other, and streamlining the DevOps process.

On the platform, we use Python (ML-related jobs), Golang (core infrastructure), and NodeJS (user-facing services). The platform is 100% cloud-native, and we use Envoy as a proxy (which will eventually lead to a service-mesh architecture).
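For a flavour of the Python side of that stack, here is a minimal sketch (assumptions only: the route names and the dummy "model" are placeholders, not our actual services) of an ML-related job exposed as a containerized Flask microservice of the kind that might sit behind the proxy.

    # Illustrative sketch only: a tiny Flask microservice behind a proxy.
    # Endpoint names and the "model" below are placeholder assumptions.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/healthz")
    def healthz():
        # Liveness/readiness probe target for the container orchestrator.
        return jsonify(status="ok")

    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json(force=True)
        features = payload.get("features", [])
        # Placeholder "model": return a dummy score for the received features.
        score = sum(features) / max(len(features), 1)
        return jsonify(prediction=score)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)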

By joining our team, you will gain exposure to a wide range of modern technologies while building an enterprise-grade ML platform in one of the most promising areas of the field.

 

Responsibilities

  • Be the bridge between engineering and product teams. Understand long-term product roadmap and architect a system design that will scale with our plans.
  • Take ownership of converting product insights into detailed engineering requirements. Break these down into smaller tasks and work with the team to plan and execute sprints.
  • Author high-quality, high-performance, unit-tested code running in a distributed environment using containers.
  • Continually evaluate and improve DevOps processes for a cloud-native codebase.
  • Review PRs, mentor others, and proactively take initiative to improve our team's shipping velocity.
  • Leverage your industry experience to champion engineering best practices within the organization.

 

Qualifications

Work Experience

  • 3+ years of industry experience (2+ years in a senior engineering role), preferably with some experience leading remote development teams.
  • Proven track record of building large-scale, high-throughput, low-latency production systems, with at least 3 years working with customers, architecting solutions, and delivering end-to-end products.
  • Fluency in writing production-grade Go or Python in a microservice architecture with containers/VMs for 3+ years.
  • 3+ years of DevOps experience (Kubernetes, Docker, Helm and public cloud APIs)
  • Worked with relational (SQL) as well as non-relational databases (Mongo or Couch) in a production environment.
  • (Bonus: worked with big data in data lakes/warehouses).
  • (Bonus: built an end-to-end ML pipeline)

Skills

  • Strong documentation skills. As a remote team, we heavily rely on elaborate documentation for everything we are working on.
  • Ability to motivate, mentor, and lead others (we have a flat team structure, but the team would rely upon you to make important decisions)
  • Strong independent contributor as well as a team player.
  • Working knowledge of ML and familiarity with concepts of MLOps

Benefits

  • Competitive Salary
  • Work Remotely
  • Health insurance
  • Unlimited Time Off
  • Support for continual learning (free books and online courses)
  • Reimbursement for streaming services (think Netflix)
  • Reimbursement for gym or physical activity of your choice
  • Flex hours
  • Leveling Up Opportunities

 

You will excel in this role if

  • You have a product mindset. You understand, care about, and can relate to our customers.
  • You take ownership, collaborate, and follow through to the very end.
  • You love solving difficult problems, stand your ground, and get what you want from engineers.
  • You resonate with our core values of innovation, curiosity, accountability, trust, fun, and social good.

About Censiusai

Founded: 2020
Type: Products & Services
Size: 20-100
Stage: Bootstrapped

About

Based out of Dallas, TX, Censius makes AI observable by giving teams visibility into the real-world performance of ML models. Join us to work with some of the most passionate folks and to make an impact.

Similar jobs

Gruve
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
8+ yrs
Up to ₹50L / yr (varies)
DevOps
CI/CD
Git
Kubernetes
Ansible
+7 more

About the Company:

Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their data life cycle. We specialize in cyber security, customer experience, infrastructure, and advanced technologies such as machine learning and artificial intelligence. Our mission is to help our customers make more intelligent decisions by putting their data to work in support of their business strategies. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance. 



Key Roles & Responsibilities:

  • Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
  • Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
  • Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
  • Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
  • Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
  • Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
  • Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
  • Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability. 


Basic Qualifications:

  • A bachelor’s or master’s degree in computer science, electronics engineering or a related field
  • 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
  • Strong expertise in CI/CD pipelines, version control (Git), and release automation.
  •  Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
  • Proficiency in Terraform and Ansible for infrastructure automation.
  • Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
  • Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Strong scripting and automation skills in Python, Bash, or Go.


Preferred Qualifications 

  • Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
  • Exposure to serverless architectures and event-driven workflows.
  • Contributions to open-source DevOps projects. 
Infra360 Solutions Pvt Ltd
Posted by HR Infra360
Gurugram
2 - 4 yrs
₹7L - ₹14L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Please Apply - https://zrec.in/GzLLD?source=CareerSite


About Us

Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap. We identify the technical and cultural issues in the journey of successfully implementing DevOps practices in an organization and work with the respective teams to fix them and increase overall productivity. We also run training sessions for developers to help them understand the importance of DevOps. Our services include DevOps, DevSecOps, FinOps, cost optimization, CI/CD, observability, cloud security, containerization, cloud migration, site reliability, performance optimization, SIEM and SecOps, serverless automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance. We assess a technology company's architecture, security, governance, compliance, and DevOps maturity model, and help them optimize their cloud cost, streamline their technology architecture, and set up processes that improve the availability and reliability of their websites and applications. We set up tools for monitoring, logging, and observability, and we focus on bringing a DevOps culture to the organization to improve its efficiency and delivery.


Job Description

Job Title: DevOps Engineer (AWS)
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediate



Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals in deploying, automating, and managing the software infrastructure. As a DevOps Engineer, you will also be responsible for setting up CI/CD pipelines, monitoring programs, and cloud infrastructure.

Below is a detailed description of the roles, responsibilities, and expectations for the position.


Tech Stack:


  • Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and its architecture.
  • Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources.
  • ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
  • Helm: Expertise in Helm for managing Kubernetes applications.
  • Cloud Platforms: Expertise in AWS, GCP or Azure will be an added advantage.
  • Debugging and Troubleshooting: The DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components.
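As a small, hedged illustration of the debugging and troubleshooting work described in the last item above (a sketch under assumptions, not part of Infra360's stated toolchain), the official Kubernetes Python client can surface pods that are not running cleanly; the filtering criteria below are assumptions.

    # Illustrative sketch: list pods that are not Running or have restarted,
    # using the official Kubernetes Python client. Assumes a valid kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()

    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
        if pod.status.phase != "Running" or restarts > 0:
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
                  f"phase={pod.status.phase}, restarts={restarts}")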


Key Responsibilities:

  • CI/CD and configuration management
  • Doing RCA of production issues and providing resolution
  • Setting up failover, DR, backups, logging, monitoring, and alerting
  • Containerizing different applications on the Kubernetes platform
  • Capacity planning of different environment's infrastructure
  • Ensuring zero outages of critical services
  • Database administration of SQL and NoSQL databases
  • Infrastructure as Code (IaC)
  • Keeping the cost of the infrastructure to the minimum
  • Setting up the right set of security measures


Ideal Candidate Profile:

  • A graduate/postgraduate degree in Computer Science or a related field
  • 2-4 years of strong DevOps experience in a Linux environment
  • Strong interest in working with our tech stack
  • Excellent communication skills
  • Works with minimal supervision and loves to operate as a self-starter
  • Hands-on experience with at least one scripting language - Bash, Python, Go, etc.
  • Experience with version control systems like Git
  • Strong experience with Amazon Web Services (EC2, RDS, VPC, S3, Route53, IAM, etc.)
  • Strong experience managing production systems day in and day out
  • Experience finding and fixing issues across the different layers of an architecture in a production environment
  • Knowledge of SQL and NoSQL databases, ElasticSearch, Solr, etc.
  • Knowledge of networking, firewalls, load balancers, Nginx, Apache, etc.
  • Experience with automation tools like Ansible/SaltStack and Jenkins
  • Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
  • Experience with HashiCorp tools (Vault, Vagrant, Terraform, Consul) and VirtualBox (desirable)
  • Experience managing/mentoring a small team of 2-3 people (desirable)
  • Experience with monitoring tools like Prometheus/Grafana/Elastic APM
  • Experience with logging tools like ELK/Loki
A video-calling as a service platform in Bangalore
Agency job
via Qrata by Prajakta Kulkarni
Remote only
2 - 7 yrs
₹25L - ₹50L / yr
DevOps
Kubernetes
Terraform
Web Realtime Communication (WebRTC)
Python
We're building video-calling as a service.

We are a managed video-calling solution built on top of WebRTC, which allows its customers to integrate live video calls within existing solutions in less than 10 lines of code. We provide a completely managed SDK that solves the problem of battling the endless edge cases of video-calling APIs.

Location - Bangalore (Remote)

Experience - 3+ Years

Requirements:

● Should have at least 2 years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Having experience with NodeJS/TypeScript/AWS is a bonus
● Having experience with WebRTC is a bonus
NetSquare Solutions
Posted by Aishwarya M
Remote only
3 - 15 yrs
Best in industry
Ansible
CI/CD
GitLab
Jenkins
Bash

We are seeking a skilled DevOps Engineer with 3+ years of experience to join our team on a permanent work-from-home basis.


Responsibilities:

  • Develop and maintain infrastructure using Ansible.
  • Write Ansible playbooks.
  • Implement CI/CD pipelines.
  • Manage GitLab repositories.
  • Monitor and troubleshoot infrastructure issues.
  • Ensure security and compliance.
  • Document best practices.


Qualifications:

  • Proven DevOps experience.
  • Expertise with Ansible and CI/CD pipelines.
  • Proficient with GitLab.
  • Strong scripting skills.
  • Excellent problem-solving and communication skills.


Regards,

Aishwarya M

Associate HR

Hive (www.thehive.ai)
Posted by Ashish Kapoor
Gurugram
5 - 20 yrs
₹10L - ₹40L / yr
DevOps
Kubernetes
Docker
RabbitMQ
PostgreSQL
DevOps Engineer, Gurgaon

About Hive
Hive is the leading provider of cloud-based AI solutions for content understanding, trusted by the world's largest, fastest growing, and most innovative organizations. The company empowers developers with a portfolio of best-in-class, pre-trained AI models, serving billions of customer API requests every month. Hive also offers turnkey software applications powered by proprietary AI models and datasets, enabling breakthrough use cases across industries. Together, Hive's solutions are transforming content moderation, brand protection, sponsorship measurement, context-based ad targeting, and more.

Hive has raised over $120M in capital from leading investors, including General Catalyst, 8VC, Glynn Capital, Bain & Company, Visa Ventures, and others. We have over 250 employees globally in our San Francisco, Seattle, and Delhi offices. Please reach out if you are interested in joining the future of AI!

About Role
Our unique machine learning needs led us to open our own data centers, with an emphasis on distributed high-performance computing integrating GPUs. Even with these data centers, we maintain a hybrid infrastructure with public clouds when they are the right fit. As we continue to commercialize our machine learning models, we also need to grow our DevOps and Site Reliability team to maintain the reliability of our enterprise SaaS offering for our customers. Our ideal candidate is someone who is able to thrive in an unstructured environment and takes automation seriously. You believe there is no task that can't be automated and no server scale too large. You take pride in optimizing performance at scale in every part of the stack and never manually perform the same task twice.

Responsibilities
● Create tools and processes for deploying and managing hardware for Private Cloud Infrastructure.
● Improve workflows of developer, data, and machine learning teams
● Manage integration and deployment tooling
● Create and maintain monitoring and alerting tools and dashboards for various services, and audit infrastructure
● Manage a diverse array of technology platforms, following best practices and procedures
● Participate in on-call rotation and root cause analysis
Requirements
● Minimum 5 - 10 years of previous experience working directly with Software Engineering teams as a developer, DevOps Engineer, or Site Reliability Engineer
● Experience with infrastructure as a service, distributed systems, and software design at a high level
● Comfortable working on Linux infrastructures (Debian) via the CLI
● Able to learn quickly in a fast-paced environment
● Able to debug, optimize, and automate routine tasks
● Able to multitask, prioritize, and manage time efficiently and independently
● Can communicate effectively across teams and management levels
● Degree in computer science, or similar, is an added plus!
Technology Stack
● Operating Systems - Linux/Debian Family/Ubuntu
● Configuration Management - Chef
● Containerization - Docker
● Container Orchestrators - Mesosphere/Kubernetes
● Scripting Languages - Python/Ruby/Node/Bash
● CI/CD Tools - Jenkins
● Network hardware - Arista/Cisco/Fortinet
● Hardware - HP/SuperMicro
● Storage - Ceph, S3
● Database - Scylla, Postgres, Pivotal GreenPlum
● Message Brokers: RabbitMQ
● Logging/Search - ELK Stack
● AWS: VPC/EC2/IAM/S3
● Networking: TCP/IP, ICMP, SSH, DNS, HTTP, SSL/TLS; storage systems: RAID, distributed file systems, NFS/iSCSI/CIFS
Who we are
We are a group of ambitious individuals who are passionate about creating a revolutionary AI company. At Hive, you will have a steep learning curve and an opportunity to contribute to one of the fastest growing AI start-ups in San Francisco. The work you do here will have a noticeable and direct impact on the development of the company.
Thank you for your interest in Hive, and we hope to meet you soon!
Kutumb
Posted by Dimpy Mehra
Bengaluru (Bangalore)
2 - 4 yrs
₹15L - ₹30L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Kutumb is the first and largest communities platform for Bharat. We are growing at an exponential trajectory. More than 1 Crore users use Kutumb to connect with their community. We are backed by world-class VCs and angel investors. We are growing and looking for exceptional Infrastructure Engineers to join our Engineering team.

 

More on this here - https://kutumbapp.com/why-join-us.html

 

We’re excited if you have:

  • Recent experience designing and building unified observability platforms that enable companies to use the sometimes-overwhelming amount of available data (metrics, logs, and traces) to determine quickly if their application or service is operating as desired
  • Expertise in deploying and using open-source observability tools in large-scale environments, including Prometheus, Grafana, ELK (ElasticSearch + Logstash + Kibana), Jaeger, Kiali, and/or Loki
  • Familiarity with open standards like OpenTelemetry, OpenTracing, and OpenMetrics (see the instrumentation sketch after this list)
  • Familiarity with Kubernetes and Istio as the architecture on which the observability platform runs, and how they integrate and scale. Additionally, the ability to contribute improvements back to the joint platform for the benefit of all teams
  • Demonstrated customer engagement and collaboration skills to curate custom dashboards and views, and identify and deploy new tools, to meet their requirements
  • The drive and self-motivation to understand the intricate details of a complex infrastructure environment 
  • Using CI/CD tools to automatically perform canary analysis and roll out changes after passing automated gates (think Argo & Keptn)
  • Hands-on experience working with AWS 
  • Bonus points for knowledge of ETL pipelines and Big data architecture
  • Great problem-solving skills & takes pride in your work
  • Enjoys building scalable and resilient systems, with a focus on systems that are robust by design and suitably monitored
  • Abstracting all of the above into as simple an interface as possible (like Knative) so developers don't need to know about it unless they choose to open the escape hatch
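As a hedged illustration of the OpenTelemetry familiarity mentioned above (a minimal sketch, not our actual setup; the exporter, service name, span name, and attribute are placeholder assumptions), tracing a request in Python takes only a few lines.

    # Minimal OpenTelemetry tracing sketch (illustrative; exporter and all
    # names are placeholders, not a production configuration).
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("example-service")

    with tracer.start_as_current_span("handle-request") as span:
        span.set_attribute("user.id", "demo-user")  # attach request metadata to the trace
        pass  # the actual request handling would go here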

 

What you’ll be doing:

  • Design and build automation around the chosen tools to make onboarding new services easy for developers (dashboards, alerts, traces, etc)
  • Demonstrate great communication skills in working with technical and non-technical audiences
  • Contribute new open-source tools and/or improvements to existing open-source tools back to the CNCF ecosystem

 

Tools we use:

Kops, Argo, Prometheus/ Loki/ Grafana, Kubernetes, AWS, MySQL/ PostgreSQL, Apache Druid, Cassandra, Fluentd, Redis, OpenVPN, MongoDB, ELK

 

What we offer:

  • High pace of learning
  • Opportunity to build the product from scratch
  • High autonomy and ownership
  • A great and ambitious team to work with
  • Opportunity to work on something that really matters
  • Top of the class market salary and meaningful ESOP ownership
Second generation of the Internet
Agency job
via MNR Solutions by Neeraj Shukla
Remote, Trivandrum
3 - 7 yrs
₹6L - ₹12L / yr
DevOps
Kubernetes
CI/CD
JavaScript
Docker

Role – DevOps

Experience: 3 – 6 years

 

Roles & Responsibilities –

  • 3-6 years of experience in deploying and managing highly scalable, fault-resilient systems
  • Strong experience with container orchestration and server automation tools such as Kubernetes, Google Container Engine, Docker Swarm, Ansible, and Terraform
  • Strong experience with Linux-based infrastructures, Linux/Unix administration, AWS, Google Cloud, and Azure
  • Strong experience with databases such as MySQL, Hadoop, Elasticsearch, Redis, Cassandra, and MongoDB
  • Knowledge of programming and scripting languages such as Java, JavaScript, Python, PHP, Groovy, and Bash
  • Experience in configuring CI/CD pipelines using Jenkins, GitLab CI, or Travis
  • Proficiency with technologies such as Docker, Kafka, Raft, and Vagrant
  • Experience in implementing queueing services such as RabbitMQ, Beanstalkd, or Amazon SQS; knowledge of the Elastic Stack is a plus
Banyan Data Services
Posted by Sathish Kumar
Bengaluru (Bangalore)
4 - 10 yrs
₹6L - ₹20L / yr
DevOps
Jenkins
Puppet
Terraform
Docker
+10 more

DevOps Engineer 

Notice Period: 45 days / Immediate Joining

 

Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement model is built on standard DevOps practices and the SRE model.

We are looking for a DevOps Engineer to help us build functional systems that improve the customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer, which address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, Software as a Service, and cloud services to create a niche in the market.

 

Key Qualifications

· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.

· Experience in implementation of continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc

· Strong experience in Linux/Unix administration.

· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.

· Expertise in multiple coding and scripting languages including Shell, Python, and Perl

· Hands-on exposure to modern IT infrastructure (e.g., Docker Swarm/Mesos/Kubernetes/OpenStack)

· Exposure to any relational database technology (MySQL/Postgres/Oracle) or any NoSQL database

· Worked on open-source tools for logging, monitoring, search engine, caching, etc.

· Professional certification in AWS or any other cloud is preferable

· Excellent problem solving and troubleshooting skills

· Must have good written and verbal communication skills

Key Responsibilities

· Ambitious individuals who can work under their own direction towards agreed targets/goals
· Must be flexible to work office timings that accommodate multi-national client timings
· Will be involved in solution design from the conceptual stages through the development cycle and deployments
· Involved in development, operations & support for internal teams
· Improve infrastructure uptime, performance, resilience, and reliability through automation
· Willing to learn new technologies and work on research-oriented projects
· Proven interpersonal skills while contributing to team effort by accomplishing related results as needed
· Scope and deliver solutions, with the ability to design solutions independently based on high-level architecture
· Independent thinking, with the ability to work in a fast-paced environment with creativity and brainstorming

www.banyandata.com

One of our MNC clients
Agency job
via CETPA InfoTech by priya Gautam
Noida, Delhi, Gurugram, Ghaziabad, Faridabad
1 - 10 yrs
₹5L - ₹30L / yr
Docker
Kubernetes
DevOps
Linux/Unix
SQL Azure
+9 more

Mandatory:
● A minimum of 1 year of development, system design, or engineering experience
● Excellent social, communication, and technical skills
● In-depth knowledge of Linux systems
● Development experience in at least two of the following languages: PHP, Go, Python, JavaScript, C/C++, Bash
● In-depth knowledge of web servers (Apache; Nginx preferred)
● Strong in using DevOps tools - Ansible, Jenkins, Docker, ELK
● Knowledge of APM tools; New Relic preferred
● Ability to learn quickly, master our existing systems, and identify areas of improvement
● Self-starter who enjoys and takes pride in the engineering work of their team
● Tried and tested real-world cloud computing experience - AWS/GCP/Azure
● Strong understanding of resilient systems design
● Experience in network design and management
Aviso Inc
Posted by Chaitanya Penugonda
Bengaluru (Bangalore), Hyderabad
2 - 4 yrs
₹5L - ₹8L / yr
DevOps
MongoDB
Docker
Amazon Web Services (AWS)
Puppet
What you will be doing:

● Responsible for the development and implementation of cloud solutions
● Responsible for achieving automation & orchestration of tools (Puppet/Chef)
● Monitoring the product's security & health (Datadog/New Relic)
● Managing and maintaining databases (Mongo & Postgres)
● Automating infrastructure using AWS services like CloudFormation
● Providing evidence in infrastructure security audits
● Migrating to container technologies (Docker/Kubernetes)
● Should have knowledge of serverless concepts (AWS Lambda); a minimal sketch follows this list
● Should be able to work with AWS services like EC2, S3, CloudFormation, EKS, IAM, RDS, etc.
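Purely as a hedged illustration of the serverless concepts mentioned in the list above (not Aviso's actual code; the event field used below is an assumption), an AWS Lambda handler in the Python runtime is just a function that receives an event and a context.

    # Illustrative AWS Lambda handler sketch (Python runtime).
    # The "name" field in the event payload is a placeholder assumption.
    import json

    def handler(event, context):
        # Lambda passes the triggering event as a dict and runtime info in context.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }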

What you bring:

● Problem-solving skills that enable you to identify the best solutions.
● Team collaboration and flexibility at work.
● Strong verbal and written communication skills that will help in presenting complex ideas in an accessible and engaging way.
● Ability to choose the tools and technologies that best fit the business needs.

Aviso offers:

● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval and a 3 month paid sabbatical after 3 years of service
● CEO moonshots projects with cash awards every quarter
● Upskilling and learning support including via paid conferences, online courses, and certifications
● Every month, Rupees 2,500 will be credited to your Sodexo meal card