AI Native Operations Expert
Redfoxa Careerlink Pvt Ltd
Posted by Likhitha S
10 - 12 yrs
₹24L - ₹36L / yr
Koramangala
Skills
Machine Learning (ML)
Artificial Intelligence (AI)
Robotics
Process automation
Data Analytics
KPI management

Job Title: AI Native Operations Expert – Director / AVP / VP

Company: EOSGlobe

CTC: ₹24 – ₹36 LPA

Open Positions: 3

Experience: 12 – 18 Years

Joining: Immediate Joiners Preferred


Role Overview

EOSGlobe is transforming into an AI-First organization and is looking for an AI Native Operations Expert to lead this transformation. The role focuses on driving automation, process re-engineering, and AI adoption across BPM operations to improve efficiency, scalability, and business impact.


Key Responsibilities

Lead AI-driven transformation initiatives across BPM operations.

Re-engineer processes using Artificial Intelligence, Machine Learning, and automation tools.

Collaborate with leadership and strategy teams to implement AI-first operational models.

Define and track KPIs, productivity metrics, and financial impact of transformation initiatives.

Partner with internal teams and clients to demonstrate AI-driven efficiency and revenue growth.

Identify opportunities for process automation and digital adoption across operations.
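The KPI and financial-impact responsibility above can be made concrete with a small back-of-the-envelope model. A minimal, hypothetical sketch (all figures and field names are illustrative, not EOSGlobe data):

```python
# Hypothetical sketch: quantifying the financial impact of a process-
# automation initiative. All inputs and names are illustrative assumptions.

def automation_impact(minutes_saved_per_txn: float,
                      txns_per_month: int,
                      loaded_cost_per_hour: float,
                      build_cost: float) -> dict:
    """Return simple ROI metrics for a process-automation initiative."""
    hours_saved_per_month = minutes_saved_per_txn * txns_per_month / 60
    monthly_savings = hours_saved_per_month * loaded_cost_per_hour
    payback_months = build_cost / monthly_savings if monthly_savings else float("inf")
    return {
        "hours_saved_per_month": round(hours_saved_per_month, 1),
        "monthly_savings": round(monthly_savings, 2),
        "payback_months": round(payback_months, 1),
    }

# Example: a bot that saves 4 minutes on each of 50,000 monthly transactions
print(automation_impact(4, 50_000, 12.0, 80_000))
```

Tracking a handful of such metrics per initiative is usually enough to demonstrate efficiency and revenue impact to leadership and clients.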

Required Skills

Strong expertise in Artificial Intelligence (AI), Machine Learning (ML), and RPA.

Experience in process transformation and digital automation initiatives.

Deep understanding of BPM operations and service delivery models.

Strong leadership and stakeholder management skills.

  • Analytical mindset with the ability to measure financial impact and operational KPIs.


Preferred Qualifications

Experience leading large-scale automation or AI transformation projects.

Exposure to BPM, consulting, or operations leadership roles.

Excellent communication and strategic thinking skills.

About Redfoxa Careerlink Pvt Ltd

Founded: 2025
Type: Services
Stage: Bootstrapped

Similar jobs

AdTech Industry
Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1. Cloud Security (AWS)

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.
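As a minimal illustration of the IAM least-privilege point above, a policy lint might flag statements that grant wildcard actions or resources. The policy document here is made up for the example:

```python
# Illustrative sketch of a least-privilege lint: flag IAM policy statements
# that allow wildcard actions or resources. The policy JSON below is a
# made-up example, not any real account's policy.
import json

def overly_permissive(policy: dict) -> list:
    """Return Allow statements whose Action or Resource contains '*'."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings

policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::app-logs/*"},
    {"Effect": "Allow", "Action": "*", "Resource": "*"}
  ]
}""")

for stmt in overly_permissive(policy):
    print("FLAG:", stmt["Action"], stmt["Resource"])
```

A check like this can run in CI against policies managed as code, alongside AWS-native controls such as IAM Access Analyzer.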

 

2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.
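The misconfiguration-scanning bullet can be sketched as a tiny policy-as-code check in the spirit of Checkov/tfsec: walk a `terraform show -json` plan and fail on S3 buckets without server-side encryption. The plan structure is simplified for illustration:

```python
# Minimal policy-as-code style check: scan a simplified Terraform plan
# (as produced by `terraform show -json`) for S3 buckets lacking
# server-side encryption. The plan dict below is an illustrative stub.

def unencrypted_buckets(plan: dict) -> list:
    """Return addresses of aws_s3_bucket resources lacking SSE config."""
    failures = []
    for res in plan.get("planned_values", {}).get("root_module", {}).get("resources", []):
        if res.get("type") != "aws_s3_bucket":
            continue
        sse = res.get("values", {}).get("server_side_encryption_configuration")
        if not sse:
            failures.append(res["address"])
    return failures

plan = {
    "planned_values": {"root_module": {"resources": [
        {"type": "aws_s3_bucket", "address": "aws_s3_bucket.logs",
         "values": {"server_side_encryption_configuration": [{"rule": []}]}},
        {"type": "aws_s3_bucket", "address": "aws_s3_bucket.raw",
         "values": {}},
    ]}}
}

print(unencrypted_buckets(plan))  # → ['aws_s3_bucket.raw']
```

In a pipeline, a non-empty result would fail the build; production setups would delegate to a full rule engine (OPA, Checkov, tfsec) rather than hand-rolled checks.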

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.
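A minimal sketch of the secrets-scanning step mentioned above, assuming a simple regex rule set (real scanners such as gitleaks or truffleHog use far larger rule sets plus entropy checks):

```python
# Simplified secrets-scanning sketch: grep text staged for commit/build
# for common credential patterns. Patterns and the sample diff are
# illustrative; do not treat this as a complete rule set.
import re

SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def scan(text: str) -> list:
    """Return (rule_name, match) pairs for suspected secrets."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        hits += [(name, m.group(0)) for m in pattern.finditer(text)]
    return hits

diff = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\napi_key: "x9f3k2j8d7s6a5q4w3e2r1t0y9u8i7o6"\n'
for rule, match in scan(diff):
    print(f"blocked by {rule}: {match[:12]}...")
```

Wired into a pre-commit hook or pipeline stage, any hit would block the build before a secret reaches the repository or an image.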

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
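One common drift metric behind the bullet above is the Population Stability Index (PSI). A minimal sketch, with illustrative data and the commonly cited 0.1/0.25 thresholds:

```python
# Hedged sketch of data-drift detection via Population Stability Index
# (PSI). Data, bin count, and thresholds are illustrative assumptions.
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """PSI between a baseline and a live feature distribution."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def frac(xs):
        counts = [0] * bins
        for x in xs:
            i = max(min(int((x - lo) / width), bins - 1), 0)
            counts[i] += 1
        return [max(c / len(xs), 1e-6) for c in counts]  # avoid log(0)
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(1000)]       # training distribution
live_ok  = [i / 100 for i in range(1000)]       # unchanged in production
live_bad = [5 + i / 100 for i in range(1000)]   # shifted feature

assert psi(baseline, live_ok) < 0.1             # common "no drift" threshold
print("drifted:", psi(baseline, live_bad) > 0.25)  # → drifted: True
```

In practice the PSI per feature would be exported as a metric and alerted on via Grafana/Prometheus/CloudWatch, per the integration bullet above.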


4. Network & Endpoint Security

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5. Threat Detection, Incident Response & Compliance

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, runbooks (RRPs), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Agentic AI Platform
Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
DevOps
Python
Google Cloud Platform (GCP)
Linux/Unix
CI/CD

Review Criteria

  • Strong DevOps / Cloud Engineer profiles
  • Must have 3+ years of experience as a DevOps / Cloud Engineer
  • Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
  • Must have strong hands-on experience in Linux administration and system management
  • Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
  • Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
  • Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
  • Must be proficient in scripting languages such as Python or Bash for automation
  • Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
  • Top-tier product-based company (B2B Enterprise SaaS preferred)


Preferred

  • Experience in multi-tenant SaaS infrastructure scaling.
  • Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.


Role & Responsibilities

We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.


Key Responsibilities:

  • Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
  • Build and optimize CI/CD pipelines to support rapid release cycles.
  • Manage containerization & orchestration (Docker, Kubernetes).
  • Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
  • Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
  • Drive cloud security automation (IAM, SSL, secrets management).
  • Partner with engineering teams to embed DevOps into SDLC.
  • Troubleshoot production issues and drive incident response.
  • Support multi-tenant SaaS scaling strategies.


Ideal Candidate

  • 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
  • Strong expertise in AWS, Azure, or GCP.
  • Strong expertise in Linux administration.
  • Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
  • Proficient in Terraform/Ansible/CloudFormation.
  • Strong scripting skills (Python, Bash).
  • Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
  • Strong grasp of cloud security best practices.



Zolvit (formerly Vakilsearch)
Posted by Lakshmi J
Chennai
2 - 4 yrs
₹10L - ₹16L / yr
DevOps
Linux administration
Unix administration
Shell Scripting
CI/CD

We are looking for a passionate DevOps Engineer who can support deployments and monitor the performance of our Production, QE, and Staging environments. Applicants should have a strong understanding of UNIX internals and be able to articulate clearly how they work. Knowledge of shell scripting and security is a must, and any experience with infrastructure as code is a big plus. The key responsibility of the role is to manage deployment, security, and support of business solutions. Experience with technologies such as Postgres, ELK, NodeJS, NextJS, and Ruby on Rails is a huge plus. At VakilSearch, experience doesn't matter; the passion to produce change does.



Responsibilities and Accountabilities:

  • As part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the infrastructure components of VakilSearch’s product, which are hosted in cloud services and an on-prem facility
  • Design and build tools and frameworks that support deploying and managing our platform, and explore new tools, technologies, and processes to improve speed, efficiency, and scalability
  • Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restore across environments
  • Manage resources in a cost-effective, innovative manner, including assisting subordinates in effective use of resources and tools
  • Resolve incidents escalated from monitoring tools and the Business Development Team
  • Implement and follow security guidelines, both policy and technology, to protect our data
  • Identify root causes of issues, develop long-term solutions to fix recurring issues, and document them
  • Willingness to perform production operations activities, including at night when required
  • Automate recurring tasks with scripts to increase velocity and quality
  • Ability to manage and deliver multiple project phases at the same time
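The script-automation responsibility above can be as simple as a cron-able sanity check. A minimal sketch, with assumed thresholds and paths:

```python
# Illustrative recurring-task automation: a disk-usage sanity check that
# could run from cron and alert before an environment fills up.
# The path and 80% threshold are assumptions for the example.
import shutil

def check_disk(path: str = "/", warn_pct: float = 80.0) -> tuple[bool, float]:
    """Return (ok, used_pct) for the filesystem containing `path`."""
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    return used_pct < warn_pct, used_pct

ok, used = check_disk("/")
print(f"{'OK' if ok else 'ALERT'}: {used:.1f}% used")
```

The same pattern (measure, compare to threshold, alert) extends to certificate expiry, backup freshness, and the weekly sanity checks listed below.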

I Qualification(s): 

  • Experience in working with Linux Server, DevOps tools, and Orchestration tools 
  • Linux, AWS, GCP, Azure, CompTIA+, and any other certification are a value-add 

II Experience Required in DevOps Aspects:

  • Length of Experience: Minimum 1-4 years of experience
  • Nature of Experience: 
  • Experience in Cloud deployments, Linux administration[ Kernel Tuning is a value add ], Linux clustering, AWS, virtualization, and networking concepts [ Azure, GCP value add ]
  • Experience in deployment solutions CI/CD like Jenkins, GitHub Actions [ Release Management is a value add ]
  • Hands-on experience in any of the configuration management IaC tools like Chef, Terraform, and CloudFormation [ Ansible & Puppet is a value add ]
  • Administration, Configuring and utilizing Monitoring and Alerting tools like Prometheus, Grafana, Loki, ELK, Zabbix, Datadog, etc
  • Experience with containerization and orchestration tools like Docker and Kubernetes [ Docker Swarm is a value add ]
  • Good scripting skills in at least one interpreted language: Shell/Bash scripting or Ruby/Python/Perl
  • Experience in Database applications like PostgreSQL, MongoDB & MySQL [DataOps]
  • Good at Version Control & source code management systems like GitHub, GIT
  • Experience in Serverless [ Lambda/GCP cloud function/Azure function ]
  • Experience in Web Server Nginx, and Apache
  • Knowledge in Redis, RabbitMQ, ELK, REST API [ MLOps Tools is a value add ]
  • Knowledge in Puma, Unicorn, Gunicorn & Yarn
  • Hands-on VMWare ESXi/Xencenter deployments is a value add
  • Experience in Implementing and troubleshooting TCP/IP networks, VPN, Load Balancing & Web application firewalls
  • Deploying, Configuring, and Maintaining Linux server systems ON premises and off-premises
  • Code Quality like SonarQube is a value-add
  • Test Automation like Selenium, JMeter, and JUnit is a value-add
  • Experience in Heroku and OpenStack is a value-add 
  • Experience in identifying inbound and outbound threats and resolving them
  • Knowledge of CVE & applying the patches for OS, Ruby gems, Node, and Python packages  
  • Documenting the Security fix for future use
  • Establish cross-team collaboration with security built into the software development lifecycle 
  • Forensics and Root Cause Analysis skills are mandatory 
  • Weekly Sanity Checks of the on-prem and off-prem environment 

 

III Skill Set & Personality Traits required:

  • An understanding of programming languages such as Ruby, NodeJS, ReactJS, Perl, Java, Python, and PHP
  • Good written and verbal communication skills to facilitate efficient and effective interaction with peers, partners, vendors, and customers


IV Age Group: 21 – 36 Years


V Cost to the Company: As per industry standards


AdElement
Posted by Ritisha Nigam
Pune
2 - 7 yrs
₹5L - ₹15L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
AI/ML

Job Title: Senior AIML Engineer – Immediate Joiner (AdTech)

Location: Pune – Onsite

About Us:

We are a cutting-edge technology company at the forefront of digital transformation, building innovative AI and machine learning solutions for the digital advertising industry. Join us in shaping the future of AdTech!

Role Overview:

We are looking for a highly skilled Senior AIML Engineer with AdTech experience to develop intelligent algorithms and predictive models that optimize digital advertising performance. Immediate joiners preferred.

Key Responsibilities:

  • Design and implement AIML models for real-time ad optimization, audience targeting, and campaign performance analysis.
  • Collaborate with data scientists and engineers to build scalable AI-driven solutions.
  • Analyze large volumes of data to extract meaningful insights and improve ad performance.
  • Develop and deploy machine learning pipelines for automated decision-making.
  • Stay updated on the latest AI/ML trends and technologies to drive continuous innovation.
  • Optimize existing models for speed, scalability, and accuracy.
  • Work closely with product managers to align AI solutions with business goals.

Requirements:

  • Minimum 4-6 years of experience in AIML, with a focus on AdTech (Mandatory).
  • Strong programming skills in Python, R, or similar languages.
  • Hands-on experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn.
  • Expertise in data processing and real-time analytics.
  • Strong understanding of digital advertising, programmatic platforms, and ad server technology.
  • Excellent problem-solving and analytical skills.
  • Immediate joiners preferred.

Preferred Skills:

  • Knowledge of big data technologies like Spark, Hadoop, or Kafka.
  • Experience with cloud platforms like AWS, GCP, or Azure.
  • Familiarity with MLOps practices and tools.

How to Apply:

If you are a passionate AIML engineer with AdTech experience and can join immediately, we want to hear from you. Share your resume and a brief note on your relevant experience.

Join us in building the future of AI-driven digital advertising!

FlytBase
Posted by Alice Philip
Pune
2 - 4 yrs
₹8L - ₹15L / yr
CI/CD
Amazon Web Services (AWS)
Docker
Artificial Intelligence (AI)

Lead DevSecOps Engineer


Location: Pune, India (In-office) | Experience: 3–5 years | Type: Full-time


Apply here → https://lnk.ink/CLqe2


About FlytBase:

FlytBase is a Physical AI platform powering autonomous drones and robots across industrial sites. Our software enables 24/7 operations in critical infrastructure like solar farms, ports, oil refineries, and more.

We're building intelligent autonomy — not just automation — and security is core to that vision.


What You’ll Own

You’ll be leading and building the backbone of our AI-native drone orchestration platform — used by global industrial giants for autonomous operations.

Expect to:

  • Design and manage multi-region, multi-cloud infrastructure (AWS, Kubernetes, Terraform, Docker)
  • Own infrastructure provisioning through GitOps, Ansible, Helm, and IaC
  • Set up observability stacks (Prometheus, Grafana) and write custom alerting rules
  • Build for Zero Trust security — logs, secrets, audits, access policies
  • Lead incident response, postmortems, and playbooks to reduce MTTR
  • Automate and secure CI/CD pipelines with SAST, DAST, image hardening
  • Script your way out of toil using Python, Bash, or LLM-based agents
  • Work alongside dev, platform, and product teams to ship secure, scalable systems
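The custom-alerting-rules point above can be sketched as a Prometheus-style `for:` hold-down, reimplemented in plain Python for illustration (metric names and thresholds are assumptions):

```python
# Hedged sketch of a custom alerting rule: fire only if the metric stays
# above a threshold for N consecutive samples, mimicking the `for:`
# hold-down in a Prometheus alert rule to suppress momentary blips.

def fires(samples: list[float], threshold: float, for_samples: int) -> bool:
    """True if the last `for_samples` readings all exceed `threshold`."""
    if len(samples) < for_samples:
        return False
    return all(s > threshold for s in samples[-for_samples:])

cpu = [42.0, 55.0, 91.0, 93.0, 95.0]               # hypothetical CPU % readings
print(fires(cpu, threshold=90.0, for_samples=3))   # → True
print(fires(cpu, threshold=90.0, for_samples=4))   # → False (blip guard)
```

In a real stack this logic lives in the Prometheus rule file; tuning the threshold and hold-down window is what keeps alerting actionable and MTTR low.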


What We’re Looking For:

You’ve probably done a lot of this already:

  • 3–5+ years in DevOps / DevSecOps for high-availability SaaS or product infra
  • Hands-on with Kubernetes, Terraform, Docker, and cloud-native tooling
  • Strong in Linux internals, OS hardening, and network security
  • Built and owned CI/CD pipelines, IaC, and automated releases
  • Written scripts (Python/Bash) that saved your team hours
  • Familiar with SOC 2, ISO 27001, threat detection, and compliance work

Bonus if you’ve:

  • Played with LLMs or AI agents to streamline ops, or built bots that monitor, patch, or auto-deploy.


What It Means to Be a Flyter

  • AI-native instincts: You don’t just use AI — you think in it. Your terminal window has a co-pilot.
  • Ownership without oversight: You own outcomes, not tasks. No one micromanages you here.
  • Joy in complexity: Security + infra + scale = your happy place.
  • Radical candor: You give and receive sharp feedback early — and grow faster because of it.
  • Loops over lines: we prioritize continuous feedback, iteration, and learning over one-way execution or rigid, linear planning.
  • H3: Happy. Healthy. High-Performing. We believe long-term performance stems from an environment where you feel emotionally fulfilled, physically well, and deeply motivated.
  • Systems > Heroics: We value well-designed, repeatable systems over last-minute firefighting or one-off effort.


Perks:

▪ Unlimited leave & flexible hours

▪ Top-tier health coverage

▪ Budget for AI tools, courses

▪ International deployments

▪ ESOPs and high-agency team culture


Apply Here- https://lnk.ink/CLqe2

PGP Glass Pvt Ltd
Posted by Animesh Srivastava
Vadodara
2 - 4 yrs
₹6L - ₹12L / yr
IT infrastructure
Artificial Intelligence (AI)
DevOps

Key Responsibilities:


• Collaborate with Data Scientists to test and scale new algorithms through pilots and later industrialize the solutions at scale to the comprehensive fashion network of the Group

• Influence, build and maintain the large-scale data infrastructure required for the AI projects, and integrate with external IT infrastructure/service to provide an e2e solution

• Leverage an understanding of software architecture and software design patterns to write scalable, maintainable, well-designed and future-proof code

• Design, develop and maintain the framework for the analytical pipeline

• Develop common components to address pain points in machine learning projects, like model lifecycle management, feature store and data quality evaluation

• Provide input and help implement framework and tools to improve data quality

• Work in cross-functional agile teams of highly skilled software/machine learning engineers, data scientists, designers, product managers and others to build the AI ecosystem within the Group

• Deliver on time, demonstrating a strong commitment to deliver on the team mission and agreed backlog

Censius
Posted by Censius Team
Remote only
3 - 5 yrs
₹10L - ₹20L / yr
DevOps
Kubernetes
Docker
Django
Flask

About the job

Our goal

We are reinventing the future of MLOps. The Censius Observability platform enables businesses to gain greater visibility into how their AI makes decisions so they can understand it better. We enable explanation of predictions, continuous monitoring of drift, and assessment of fairness in the real world. (TL;DR: build the best ML monitoring tool.)

 

The culture

We believe in constantly iterating and improving our team culture, just like our product. We have found a good balance between async and sync work: the default is still Notion docs over meetings, but at the same time, we recognize that in an early-stage startup, brainstorming together over calls leads to results faster. If you enjoy taking ownership, moving quickly, and writing docs, you will fit right in.

 

The role:

Our engineering team is growing and we are looking to bring on board a senior software engineer who can help us transition to the next phase of the company. As we roll out our platform to customers, you will be pivotal in refining our system architecture, ensuring the various tech stacks play well with each other, and smoothening the DevOps process.

On the platform, we use Python (ML-related jobs), Golang (core infrastructure), and NodeJS (user-facing). The platform is 100% cloud-native and we use Envoy as a proxy (eventually will lead to service-mesh architecture).

By joining our team, you will get exposure to working across a swath of modern technologies while building an enterprise-grade ML platform in one of the most promising areas.

 

Responsibilities

  • Be the bridge between engineering and product teams. Understand long-term product roadmap and architect a system design that will scale with our plans.
  • Take ownership of converting product insights into detailed engineering requirements. Break these down into smaller tasks and work with the team to plan and execute sprints.
  • Author high-quality, high-performance, and unit-tested code running in a distributed environment using containers.
  • Continually evaluate and improve DevOps processes for a cloud-native codebase.
  • Review PRs, mentor others and proactively take initiatives to improve our team's shipping velocity.
  • Leverage your industry experience to champion engineering best practices within the organization.

 

Qualifications

Work Experience

  • 3+ years of industry experience (2+ years in a senior engineering role), preferably with some exposure to leading remote development teams.
  • Proven track record of building large-scale, high-throughput, low-latency production systems, with at least 3 years working with customers, architecting solutions, and delivering end-to-end products.
  • Fluency in writing production-grade Go or Python in a microservice architecture with containers/VMs for at least 3 years.
  • 3+ years of DevOps experience (Kubernetes, Docker, Helm, and public cloud APIs).
  • Worked with relational (SQL) as well as non-relational databases (Mongo or Couch) in a production environment.
  • (Bonus: worked with big data in data lakes/warehouses).
  • (Bonus: built an end-to-end ML pipeline)

Skills

  • Strong documentation skills. As a remote team, we heavily rely on elaborate documentation for everything we are working on.
  • Ability to motivate, mentor, and lead others (we have a flat team structure, but the team would rely upon you to make important decisions)
  • Strong independent contributor as well as a team player.
  • Working knowledge of ML and familiarity with concepts of MLOps

Benefits

  • Competitive Salary
  • Work Remotely
  • Health insurance
  • Unlimited Time Off
  • Support for continual learning (free books and online courses)
  • Reimbursement for streaming services (think Netflix)
  • Reimbursement for gym or physical activity of your choice
  • Flex hours
  • Leveling Up Opportunities

 

You will excel in this role if

  • You have a product mindset. You understand, care about, and can relate to our customers.
  • You take ownership, collaborate, and follow through to the very end.
  • You love solving difficult problems, stand your ground, and get what you want from engineers.
  • Resonate with our core values of innovation, curiosity, accountability, trust, fun, and social good.
Searce Inc
Posted by Mishita Juneja
Pune
3 - 6 yrs
₹8L - ₹14L / yr
DevOps
Kubernetes
Docker
Terraform
Cloud Computing

Senior Devops Engineer



Who are we?

Searce is a niche cloud consulting business with a futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.

What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one ‘average’.
  • Honesty and Transparency
      • We believe in naked truth. We do what we tell and tell what we do.
  • Client Partnership
      • Client - Vendor relationship: No. We partner with clients instead.
      • And our sales team comprises 100% of our clients.

How do we work?

It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

Are you the one? Quick self-discovery test:

  1. Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
  2. Passion for sales: When was the last time you stopped at a remote gas station while on vacation, and ended up helping the gas station owner SaaSify his 7 gas stations across other geographies?
  3. Compassion for customers: You listen more than you speak.  When you do speak, people feel the need to listen.
  4. Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on cloud?’

Introduction

When was the last time you thought about rebuilding your smart phone charger using solar panels on your backpack OR changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful OR pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger, while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let’s talk.

We are quite keen to meet you if:

  • You eat, dream, sleep and play with Cloud Data Store & engineering your processes on cloud architecture
  • You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people.
  • You like experimenting, taking risks and thinking big.

3 things this position is NOT about:

  1. This is NOT just a job; this is a passionate hobby for the right kind.
  2. This is NOT a boxed position. You will code, clean, test, build and recruit & energize.
  3. This is NOT a position for someone who likes to be told what needs to be done.

3 things this position IS about:

  1. Attention to detail matters.
  2. Roles, titles, and egos do not matter; getting things done matters; getting things done quicker & better matters the most.
  3. Are you passionate about learning new domains & architecting solutions that could save a company millions of dollars?

Roles and Responsibilities

This is an entrepreneurial Cloud/DevOps Lead position that evolves into Director - Cloud Engineering. This position requires fanatic iterative-improvement ability: architect a solution, code, research, understand customer needs, research more, rebuild and re-architect; you get the drift. We are seeking hard-core-geeks-turned-successful-techies who are interested in seeing their work used by millions of users the world over.


Responsibilities:

  • Consistently strive to acquire new skills on Cloud, DevOps, Big Data, AI and ML technologies
  • Design, deploy and maintain Cloud infrastructure for Clients – Domestic & International
  • Develop tools and automation to make platform operations more efficient, reliable and reproducible
  • Create Container Orchestration (Kubernetes, Docker), strive for full automated solutions, ensure the up-time and security of all cloud platform systems and infrastructure
  • Stay up to date on relevant technologies, plug into user groups, and ensure our client are using the best techniques and tools
  • Providing business, application, and technology consulting in feasibility discussions with technology team members, customers and business partners
  • Take initiatives to lead, drive and solve during challenging scenarios

Requirements:

  • 3+ years of experience in the Cloud Infrastructure and Operations domains
  • Experience with Linux systems, RHEL/CentOS preferred
  • Specialize in one or two cloud deployment platforms: AWS, GCP, Azure
  • Hands on experience with AWS services (EC2, VPC, RDS, DynamoDB, Lambda)
  • Experience with one or more programming languages (Python, JavaScript, Ruby, Java, .Net)
  • Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
  • Knowledge on Configuration Management tools such as Ansible, Terraform, Puppet, Chef
  • Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
  • Deep experience in customer-facing roles with a proven track record of effective verbal and written communication
  • Dependable and good team player
  • Desire to learn and work with new technologies

Key Success Factors

  • Are you
    • Likely to forget to eat, drink, or pee when you are coding?
    • Willing to learn, re-learn, research, break, fix, build, re-build, and deliver awesome code to solve real business/consumer needs?
    • An open source enthusiast?
    • Absolutely technology agnostic, believing that business processes define and dictate which technology to use?
  • Ability to think on your feet and follow up with multiple stakeholders to get things done
  • Excellent interpersonal communication skills
  • Superior project management and organizational skills
  • Logical thought process; ability to grasp customer requirements rapidly and translate the same into technical as well as layperson terms
  • Ability to anticipate potential problems, determine and implement solutions
  • Energetic, disciplined, with a results-oriented approach
  • Strong ethics and transparency in dealings with clients, vendors, colleagues and partners
  • Attitude of ‘give me 5 sharp freshers and 6 months and I will rebuild the way people communicate over the internet.’
  • You are customer-centric and feel strongly about building scalable, secure, quality software. You thrive and succeed in delivering high-quality technology products in a growth environment where priorities shift fast.
Vamstar
Manasi Rokade
Posted by Manasi Rokade
Remote only
3 - 6 yrs
₹6L - ₹10L / yr
DevOps
skill iconDocker
skill iconAmazon Web Services (AWS)
CI/CD
skill iconNodeJS (Node.js)

 

 

We are looking for a full-time remote DevOps Engineer who has worked with CI/CD automation, big data pipelines, and cloud infrastructure to solve complex technical challenges at scale that will reshape the healthcare industry for generations. You will get the opportunity to be involved in the latest tech in big data engineering, novel machine learning pipelines, and highly scalable backend development. The successful candidate will work in a team of highly skilled and experienced developers, data scientists, and the CTO.

 

Job Requirements

 

  • Experience deploying, automating, maintaining, and improving complex services and pipelines
  • Strong understanding of DevOps tools, processes, and methodologies
  • Experience with AWS CloudFormation and the AWS CLI is essential
  • The ability to work to project deadlines efficiently and with minimum guidance
  • A positive attitude; enjoys working within a globally distributed team

 

Skills

 

  • Highly proficient working with CI/CD and automating infrastructure provisioning
  • Deep understanding of the AWS Cloud platform and hands-on experience setting up and maintaining large-scale implementations
  • Experience with JavaScript/TypeScript, Node, Python and Bash/Shell Scripting
  • Hands-on experience with Docker and container orchestration
  • Experience setting up and maintaining big data pipelines, Serverless stacks and containers infrastructure
  • An interest in healthcare and medical sectors
  • Technical degree with 4+ years of infrastructure and automation experience
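Since the role calls for AWS CloudFormation experience, a tiny sketch of programmatic template generation may help illustrate the idea. The bucket and resource names here are hypothetical; a real workflow would feed the resulting JSON to `aws cloudformation deploy` or an equivalent pipeline step.

```python
import json

def make_bucket_template(bucket_name):
    """Return a minimal CloudFormation template (as a dict) that
    declares a single S3 bucket. Resource names are illustrative."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }

# Serialize the template so a CLI or CI step can pick it up
template = make_bucket_template("example-pipeline-artifacts")
print(json.dumps(template, indent=2))
```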

 

MoreYeahs
Lovely Sharma
Posted by Lovely Sharma
Remote, Indore
2 - 6 yrs
₹12L - ₹16L / yr
skill iconMachine Learning (ML)
DevOps
skill iconKubernetes
Terraform
skill iconPython
Machine Learning DevOps Engineer

Skills

  • Building scalable and highly available infrastructure for data science
  • Knows data science project workflows
  • Hands-on with deployment patterns for online/offline predictions (server/serverless)
  • Experience with either Terraform or Kubernetes
  • Experience with ML deployment frameworks such as Kubeflow, MLflow, and SageMaker
  • Working knowledge of Jenkins or a similar tool

Responsibilities

  • Own all the ML cloud infrastructure (AWS)
  • Help build out an entire CI/CD ecosystem with auto-scaling
  • Work with a testing engineer to design testing methodologies for ML APIs
  • Research and implement new technologies
  • Help with cost optimization of infrastructure
  • Knowledge sharing

Nice to Have

  • Develop APIs for machine learning
  • Can write Python servers for ML systems with API frameworks
  • Understanding of task queue frameworks like Celery
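The task-queue idea mentioned above can be sketched in-process with the standard library. This is a toy stand-in for what a framework like Celery does across machines, with hypothetical task names standing in for real ML jobs.

```python
import queue
import threading

def worker(tasks, results):
    """Pull (name, fn, arg) tuples off the queue and record results."""
    while True:
        item = tasks.get()
        if item is None:          # sentinel: shut the worker down
            break
        name, fn, arg = item
        results[name] = fn(arg)
        tasks.task_done()

tasks = queue.Queue()
results = {}
t = threading.Thread(target=worker, args=(tasks, results))
t.start()

# Enqueue two illustrative "ML" tasks, then stop the worker
tasks.put(("square", lambda x: x * x, 7))
tasks.put(("negate", lambda x: -x, 3))
tasks.put(None)
t.join()
print(results)  # {'square': 49, 'negate': -3}
```

A real deployment would replace the in-memory queue with a broker (e.g. Redis or RabbitMQ) so producers and workers can live on different machines.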