Data Annotator (Automotive)

0 - 3 yrs
₹2L - ₹3L / yr
Remote, Bengaluru (Bangalore)
Skills
Machine Learning (ML)
Data Structures
Artificial Intelligence (AI)
As a member of the data annotation team, you will help us annotate data for use within a larger machine learning and AI framework. You'll be part of an energetic team of highly motivated professionals working to bring autonomous driving to the auto industry. We are looking for detail-oriented individuals who will collaborate with our team, label images, and provide feedback to our engineers to improve the user interface tools used in the process. Joining our team means playing an integral role in the future of automotive safety standards.

Similar jobs

Agentic AI Platform
Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
DevOps
Python
Google Cloud Platform (GCP)
Linux/Unix
CI/CD
+21 more

Review Criteria

  • Strong DevOps / Cloud Engineer profiles
  • Must have 3+ years of experience as a DevOps / Cloud Engineer
  • Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
  • Must have strong hands-on experience in Linux administration and system management
  • Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
  • Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
  • Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
  • Must be proficient in scripting languages such as Python or Bash for automation
  • Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
  • Background in a top-tier product-based company (B2B enterprise SaaS preferred)


Preferred

  • Experience in multi-tenant SaaS infrastructure scaling.
  • Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.


Role & Responsibilities

We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.


Key Responsibilities:

  • Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
  • Build and optimize CI/CD pipelines to support rapid release cycles.
  • Manage containerization & orchestration (Docker, Kubernetes).
  • Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
  • Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.); a small illustrative sketch follows this list.
  • Drive cloud security automation (IAM, SSL, secrets management).
  • Partner with engineering teams to embed DevOps into SDLC.
  • Troubleshoot production issues and drive incident response.
  • Support multi-tenant SaaS scaling strategies.
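
By way of illustration only (not part of the role description), here is a minimal Python sketch of the kind of monitoring automation this role involves: it queries a Prometheus server's standard /api/v1/query endpoint for scrape targets reporting up == 0 and exits non-zero if any are down. The server URL and the idea of wiring this into a cron job or CI check are assumptions for the example.

```python
#!/usr/bin/env python3
"""Minimal sketch: list Prometheus targets that are currently down.

Assumes a Prometheus server is reachable at PROM_URL; adapt the query
and the alerting hook to your own stack.
"""
import sys
import requests

PROM_URL = "http://prometheus.internal:9090"  # hypothetical endpoint

def down_targets(prom_url: str) -> list[dict]:
    # /api/v1/query is Prometheus's standard instant-query endpoint.
    resp = requests.get(f"{prom_url}/api/v1/query",
                        params={"query": "up == 0"}, timeout=10)
    resp.raise_for_status()
    payload = resp.json()
    if payload.get("status") != "success":
        raise RuntimeError(f"Query failed: {payload}")
    return payload["data"]["result"]

if __name__ == "__main__":
    targets = down_targets(PROM_URL)
    for t in targets:
        labels = t["metric"]
        print(f"DOWN: job={labels.get('job')} instance={labels.get('instance')}")
    # Non-zero exit so a cron wrapper or CI job can page on failure.
    sys.exit(1 if targets else 0)
```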


Ideal Candidate

  • 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
  • Strong expertise in AWS, Azure, or GCP.
  • Strong expertise in Linux administration.
  • Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
  • Proficient in Terraform/Ansible/CloudFormation.
  • Strong scripting skills (Python, Bash).
  • Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
  • Strong grasp of cloud security best practices.



AdTech Industry
Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1. Cloud Security (AWS)

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services (a small illustrative audit sketch follows this list).
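
As a hedged illustration of the encryption and security-automation work listed above (not a prescribed tool), the following Python sketch uses boto3 to flag S3 buckets that report no default server-side encryption configuration. It assumes AWS credentials and s3:ListAllMyBuckets / s3:GetEncryptionConfiguration permissions are in place.

```python
"""Minimal sketch: flag S3 buckets with no default encryption configured.

Illustrative only; a real audit would feed findings into Security Hub,
ticketing, or automated remediation rather than printing them.
"""
import boto3
from botocore.exceptions import ClientError

def unencrypted_buckets() -> list[str]:
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            # This error code means no default SSE configuration exists.
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in unencrypted_buckets():
        print(f"No default encryption: {name}")
```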

 

2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, and audit logging; a minimal illustrative sketch follows this list.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
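
As a minimal, illustrative sketch of the data-drift idea above (assumptions: numeric features, a two-sample Kolmogorov-Smirnov test via SciPy, and an arbitrary p-value threshold), the snippet below compares a reference window against a live window and reports drifted features; in practice the results would be exported to Grafana/Prometheus/CloudWatch rather than printed.

```python
"""Minimal sketch: flag feature drift between a reference and a live window.

Thresholds, data sources, and the response to drift are left open.
"""
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(reference: dict[str, np.ndarray],
                     current: dict[str, np.ndarray],
                     p_threshold: float = 0.01) -> list[str]:
    drifted = []
    for name, ref_values in reference.items():
        stat, p_value = ks_2samp(ref_values, current[name])
        # A small p-value means the windows are unlikely to share a distribution.
        if p_value < p_threshold:
            drifted.append(name)
    return drifted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = {"ctr": rng.normal(0.05, 0.01, 5_000), "bid": rng.normal(1.0, 0.2, 5_000)}
    cur = {"ctr": rng.normal(0.08, 0.01, 5_000), "bid": rng.normal(1.0, 0.2, 5_000)}
    print("Drifted features:", drifted_features(ref, cur))  # expect ['ctr']
```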


4. Network & Endpoint Security

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5. Threat Detection, Incident Response & Compliance

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, runbooks, and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
AdElement
Posted by Ritisha Nigam
Pune
2 - 7 yrs
₹5L - ₹15L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
AI/ML

Job Title: Senior AIML Engineer – Immediate Joiner (AdTech)

Location: Pune – Onsite

About Us:

We are a cutting-edge technology company at the forefront of digital transformation, building innovative AI and machine learning solutions for the digital advertising industry. Join us in shaping the future of AdTech!

Role Overview:

We are looking for a highly skilled Senior AIML Engineer with AdTech experience to develop intelligent algorithms and predictive models that optimize digital advertising performance. Immediate joiners preferred.

Key Responsibilities:

  • Design and implement AIML models for real-time ad optimization, audience targeting, and campaign performance analysis (a brief illustrative sketch follows this list).
  • Collaborate with data scientists and engineers to build scalable AI-driven solutions.
  • Analyze large volumes of data to extract meaningful insights and improve ad performance.
  • Develop and deploy machine learning pipelines for automated decision-making.
  • Stay updated on the latest AI/ML trends and technologies to drive continuous innovation.
  • Optimize existing models for speed, scalability, and accuracy.
  • Work closely with product managers to align AI solutions with business goals.
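
Purely as a hedged illustration of the modelling work above, and not the company's actual stack, here is a minimal scikit-learn sketch that trains a click-prediction-style classifier on synthetic, imbalanced data and reports ROC-AUC; a real ad-optimization model would use campaign, user, and context features behind a production feature pipeline.

```python
"""Minimal sketch: a click-prediction-style classifier on synthetic data."""
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for ad-impression features and (rare) click labels.
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# ROC-AUC is a common offline metric for imbalanced click data.
scores = model.predict_proba(X_test)[:, 1]
print(f"Test ROC-AUC: {roc_auc_score(y_test, scores):.3f}")
```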

Requirements:

  • Minimum 4-6 years of experience in AIML, with a focus on AdTech (Mandatory).
  • Strong programming skills in Python, R, or similar languages.
  • Hands-on experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn.
  • Expertise in data processing and real-time analytics.
  • Strong understanding of digital advertising, programmatic platforms, and ad server technology.
  • Excellent problem-solving and analytical skills.
  • Immediate joiners preferred.

Preferred Skills:

  • Knowledge of big data technologies like Spark, Hadoop, or Kafka.
  • Experience with cloud platforms like AWS, GCP, or Azure.
  • Familiarity with MLOps practices and tools.

How to Apply:

If you are a passionate AIML engineer with AdTech experience and can join immediately, we want to hear from you. Share your resume and a brief note on your relevant experience.

Join us in building the future of AI-driven digital advertising!

Hiret Consulting
Posted by Sanikha M
Pune
6 - 10 yrs
₹10L - ₹15L / yr
Windows Azure
Data Structures
Finance
Insurance

Roles and Responsibilities:

  • Data Pipeline Development: Build, deploy, and maintain efficient ETL/ELT pipelines using Azure Data Factory and Azure Synapse Analytics.
  • Data Modelling & Warehousing: Design and optimize data models, warehouses, and lakes for structured/unstructured data.
  • SQL & Query Optimization: Write complex SQL queries, optimize performance, and manage databases.
  • Python Automation: Develop scripts for data processing, automation, and integration using Python (Pandas, NumPy).

Note: we are only considering senior candidates with over 5 years of relevant experience and ample client-facing exposure. Finance/Insurance experience is also a must.

Technical Skills:

  • Cloud Technologies: Azure Synapse Analytics, Azure Fabric, Azure Databricks, and AWS (good to have).
  • Knowledge of Python, PySpark, SQL, and ETL concepts (a small illustrative Pandas sketch follows).
  • Good understanding of Insurance Operations and KPI reporting is an advantage.
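
As a small, hedged illustration of the Python/Pandas automation mentioned above (the file paths and column names are invented for the example, and a real pipeline would read from Azure storage or Synapse), here is a sketch that cleans raw policy records and writes a premium summary suitable for KPI reporting.

```python
"""Minimal sketch: a Pandas cleaning/aggregation step for KPI reporting.

Paths and columns (policy_id, premium, region, status) are hypothetical.
"""
import pandas as pd

def summarize_premiums(src_csv: str, dest_csv: str) -> pd.DataFrame:
    df = pd.read_csv(src_csv)
    # Basic cleaning: drop records with no premium and normalize text fields.
    df = df.dropna(subset=["premium"])
    df["region"] = df["region"].str.strip().str.title()
    # KPI-style aggregate: written premium by region and policy status.
    summary = (
        df.groupby(["region", "status"], as_index=False)["premium"]
          .sum()
          .rename(columns={"premium": "total_premium"})
    )
    summary.to_csv(dest_csv, index=False)
    return summary

if __name__ == "__main__":
    print(summarize_premiums("policies_raw.csv", "premium_summary.csv"))
```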

Flytbase
Posted by Alice Philip
Pune
2 - 4 yrs
₹8L - ₹15L / yr
CI/CD
Amazon Web Services (AWS)
Docker
Artificial Intelligence (AI)

Lead DevSecOps Engineer


Location: Pune, India (In-office) | Experience: 3–5 years | Type: Full-time


Apply here → https://lnk.ink/CLqe2


About FlytBase:

FlytBase is a Physical AI platform powering autonomous drones and robots across industrial sites. Our software enables 24/7 operations in critical infrastructure like solar farms, ports, oil refineries, and more.

We're building intelligent autonomy — not just automation — and security is core to that vision.


What You’ll Own

You’ll be leading and building the backbone of our AI-native drone orchestration platform — used by global industrial giants for autonomous operations.

Expect to:

  • Design and manage multi-region, multi-cloud infrastructure (AWS, Kubernetes, Terraform, Docker)
  • Own infrastructure provisioning through GitOps, Ansible, Helm, and IaC
  • Set up observability stacks (Prometheus, Grafana) and write custom alerting rules
  • Build for Zero Trust security — logs, secrets, audits, access policies
  • Lead incident response, postmortems, and playbooks to reduce MTTR
  • Automate and secure CI/CD pipelines with SAST, DAST, image hardening
  • Script your way out of toil using Python, Bash, or LLM-based agents
  • Work alongside dev, platform, and product teams to ship secure, scalable systems


What We’re Looking For:

You’ve probably done a lot of this already:

  • 3–5+ years in DevOps / DevSecOps for high-availability SaaS or product infra
  • Hands-on with Kubernetes, Terraform, Docker, and cloud-native tooling
  • Strong in Linux internals, OS hardening, and network security
  • Built and owned CI/CD pipelines, IaC, and automated releases
  • Written scripts (Python/Bash) that saved your team hours
  • Familiar with SOC 2, ISO 27001, threat detection, and compliance work

Bonus if you’ve:

  • Played with LLMs or AI agents to streamline ops, and built bots that monitor, patch, or auto-deploy.


What It Means to Be a Flyter

  • AI-native instincts: You don’t just use AI — you think in it. Your terminal window has a co-pilot.
  • Ownership without oversight: You own outcomes, not tasks. No one micromanages you here.
  • Joy in complexity: Security + infra + scale = your happy place.
  • Radical candor: You give and receive sharp feedback early — and grow faster because of it.
  • Loops over lines: we prioritize continuous feedback, iteration, and learning over one-way execution or rigid, linear planning.
  • H3: Happy. Healthy. High-Performing. We believe long-term performance stems from an environment where you feel emotionally fulfilled, physically well, and deeply motivated.
  • Systems > Heroics: We value well-designed, repeatable systems over last-minute firefighting or one-off effort.


Perks:

  • Unlimited leave & flexible hours
  • Top-tier health coverage
  • Budget for AI tools and courses
  • International deployments
  • ESOPs and a high-agency team culture


Apply here: https://lnk.ink/CLqe2

PGP Glass Pvt Ltd
Posted by Animesh Srivastava
Vadodara
2 - 4 yrs
₹6L - ₹12L / yr
IT infrastructure
Artificial Intelligence (AI)
DevOps

Key Responsibilities:


• Collaborate with Data Scientists to test and scale new algorithms through pilots and later industrialize the solutions at scale to the comprehensive fashion network of the Group

• Influence, build and maintain the large-scale data infrastructure required for the AI projects, and integrate with external IT infrastructure/service to provide an e2e solution

• Leverage an understanding of software architecture and software design patterns to write scalable, maintainable, well-designed and future-proof code

• Design, develop and maintain the framework for the analytical pipeline

• Develop common components to address pain points in machine learning projects, like model lifecycle management, feature store and data quality evaluation

• Provide input and help implement framework and tools to improve data quality

• Work in cross-functional agile teams of highly skilled software/machine learning engineers, data scientists, designers, product managers and others to build the AI ecosystem within the Group

• Deliver on time, demonstrating a strong commitment to deliver on the team mission and agreed backlog

Kritter
Posted by Tenzin Kalsang
Bengaluru (Bangalore)
1 - 4 yrs
₹4L - ₹8L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+6 more

Objectives :

  • Building and setting up new development tools and infrastructure
  • Working on ways to automate and improve development and release processes
  • Testing code written by others and analyzing results
  • Ensuring that systems are safe and secure against cybersecurity threats
  • Identifying technical problems and developing software updates and ‘fixes’
  • Working with software developers and software engineers to ensure that development follows established processes and works as intended
  • Planning out projects and being involved in project management decisions


Daily and Monthly Responsibilities :


  • Deploy updates and fixes
  • Build tools to reduce occurrences of errors and improve customer experience
  • Develop software to integrate with internal back-end systems
  • Perform root cause analysis for production errors
  • Investigate and resolve technical issues
  • Develop scripts to automate visualization
  • Design procedures for system troubleshooting and maintenance


Skills and Qualifications :

  • Degree in Computer Science or Software Engineering or BSc in Computer Science, Engineering or relevant field
  • 3+ years of experience as a DevOps Engineer or similar software engineering role
  • Proficient with git and git workflows
  • Good logical skills and knowledge of programming concepts (OOP, data structures)
  • Working knowledge of databases and SQL
  • Problem-solving attitude
  • Collaborative team spirit
Censiusai
Posted by Censius Team
Remote only
3 - 5 yrs
₹10L - ₹20L / yr
DevOps
Kubernetes
Docker
Django
Flask
+3 more

About the job

Our goal

We are reinventing the future of MLOps. The Censius Observability platform gives businesses greater visibility into how their AI makes decisions so they can understand it better. We enable explanations of predictions, continuous monitoring of drift, and assessment of fairness in the real world. (TL;DR: build the best ML monitoring tool.)

 

The culture

We believe in constantly iterating and improving our team culture, just like our product. We have found a good balance between async and sync work: the default is still Notion docs over meetings, but at the same time we recognize that, as an early-stage startup, brainstorming together over calls leads to results faster. If you enjoy taking ownership, moving quickly, and writing docs, you will fit right in.

 

The role:

Our engineering team is growing and we are looking to bring on board a senior software engineer who can help us transition to the next phase of the company. As we roll out our platform to customers, you will be pivotal in refining our system architecture, ensuring the various tech stacks play well with each other, and smoothing the DevOps process.

On the platform, we use Python (ML-related jobs), Golang (core infrastructure), and NodeJS (user-facing). The platform is 100% cloud-native and we use Envoy as a proxy (eventually will lead to service-mesh architecture).

By joining our team, you will get exposure to a swath of modern technologies while building an enterprise-grade ML platform in a highly promising area.

 

Responsibilities

  • Be the bridge between engineering and product teams. Understand long-term product roadmap and architect a system design that will scale with our plans.
  • Take ownership of converting product insights into detailed engineering requirements. Break these down into smaller tasks and work with the team to plan and execute sprints.
  • Author high-quality, high-performance, and unit-tested code running in a distributed environment using containers.
  • Continually evaluate and improve DevOps processes for a cloud-native codebase.
  • Review PRs, mentor others and proactively take initiatives to improve our team's shipping velocity.
  • Leverage your industry experience to champion engineering best practices within the organization.

 

Qualifications

Work Experience

  • 3+ years of industry experience (2+ years in a senior engineering role), preferably with some exposure to leading remote development teams.
  • Proven track record building large-scale, high-throughput, low-latency production systems with at least 3+ years working with customers, architecting solutions, and delivering end-to-end products.
  • 3+ years of fluency writing production-grade Go or Python in a microservice architecture with containers/VMs.
  • 3+ years of DevOps experience (Kubernetes, Docker, Helm and public cloud APIs)
  • Worked with relational (SQL) as well as non-relational databases (Mongo or Couch) in a production environment.
  • (Bonus: worked with big data in data lakes/warehouses).
  • (Bonus: built an end-to-end ML pipeline)

Skills

  • Strong documentation skills. As a remote team, we heavily rely on elaborate documentation for everything we are working on.
  • Ability to motivate, mentor, and lead others (we have a flat team structure, but the team would rely upon you to make important decisions)
  • Strong independent contributor as well as a team player.
  • Working knowledge of ML and familiarity with concepts of MLOps

Benefits

  • Competitive Salary
  • Work Remotely
  • Health insurance
  • Unlimited Time Off
  • Support for continual learning (free books and online courses)
  • Reimbursement for streaming services (think Netflix)
  • Reimbursement for gym or physical activity of your choice
  • Flex hours
  • Leveling Up Opportunities

 

You will excel in this role if

  • You have a product mindset. You understand, care about, and can relate to our customers.
  • You take ownership, collaborate, and follow through to the very end.
  • You love solving difficult problems, stand your ground, and get what you want from engineers.
  • You resonate with our core values of innovation, curiosity, accountability, trust, fun, and social good.
Karza
Posted by Priyanka Asher
Mumbai
1 - 5 yrs
₹8L - ₹15L / yr
DevOps
Hadoop
Data Structures
Terraform
Ansible

At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding & due-diligence platforms by profiling millions of entities and trillions of associations amongst them, using data collated from more than 700 publicly available government sources. Primarily in the B2B Fintech Enterprise space, we are headquartered in Lower Parel, Mumbai, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes, and mitigating risks seamlessly, in real time and at a fraction of the current cost.

A few recognitions:

  • Recognized by LinkedIn as one of the Top 25 startups in India to work for in 2019
  • Winner of HDFC Bank's Digital Innovation Summit 2020
  • Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
  • Winner of Amazon AI Award 2019 for Fintech
  • Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
  • Winner of FinShare 2018 challenge held by ShareKhan
  • Only startup in Yes Bank Global Fintech Accelerator to win the account during the Cohort
  • 2nd place Citi India FinTech Challenge 2018 by Citibank
  • Top 3 in Viacom18's Startup Engagement Programme VStEP

 

What your average day would look like:

  • Deploy and maintain mission-critical information extraction, analysis, and management systems
  • Manage low cost, scalable streaming data pipelines
  • Provide direct and responsive support for urgent production issues
  • Contribute ideas towards secure and reliable Cloud architecture
  • Use open source technologies and tools to accomplish specific use cases encountered within the project
  • Use coding languages or scripting methodologies to solve automation problems
  • Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
  • Identify processes and practices to streamline development & deployment, minimizing downtime and turnaround time

 

What you need to work with us:

  • Proficiency in at least one of the general-purpose programming languages like Python, Java, etc.
  • Experience in managing the IAAS and PAAS components on popular public Cloud Service Providers like AWS, Azure, GCP etc.
  • Proficiency in Unix Operating systems and comfortable with Networking concepts
  • Experience with developing/deploying a scalable system
  • Experience with the Distributed Database & Message Queues (like Cassandra, ElasticSearch, MongoDB, Kafka, etc.)
  • Experience in managing Hadoop clusters
  • Understanding of containers, with experience managing them in production using container orchestration services.
  • Solid understanding of data structures and algorithms.
  • Applied exposure to continuous delivery pipelines (CI/CD).
  • Keen interest and proven track record in automation and cost optimization.

 

Experience:

  • 1-4 years of relevant experience
  • BE in Computer Science / Information Technology 

 

MoreYeahs
Posted by Lovely Sharma
Remote, Indore
2 - 6 yrs
₹12L - ₹16L / yr
Machine Learning (ML)
DevOps
Kubernetes
Terraform
Python
+9 more
Machine Learning DevOps Engineer

Skills:

  • Building a scalable and highly available infrastructure for data science
  • Knows data science project workflows
  • Hands-on with deployment patterns for online/offline predictions (server/serverless)
  • Experience with either Terraform or Kubernetes
  • Experience with ML deployment frameworks like Kubeflow, MLflow, SageMaker
  • Working knowledge of Jenkins or a similar tool

Responsibilities:

  • Own all the ML cloud infrastructure (AWS)
  • Help build out an entire CI/CD ecosystem with auto-scaling
  • Work with a testing engineer to design testing methodologies for ML APIs
  • Research and implement new technologies
  • Help with cost optimization of infrastructure
  • Knowledge sharing

Nice to Have:

  • Develop APIs for machine learning
  • Can write Python servers for ML systems with API frameworks (a small illustrative sketch follows)
  • Understanding of task queue frameworks like Celery
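
A minimal sketch, assuming Flask and scikit-learn, of the "Python server for ML systems" idea mentioned above; the route, feature shape, and in-memory model are placeholders, and a production service would load a versioned model artifact, add validation and logging, and push slow work onto a task queue such as Celery.

```python
"""Minimal sketch: a Python API server wrapping an ML model."""
from flask import Flask, jsonify, request
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

# Stand-in model trained on synthetic data at startup; a real service would
# load a versioned artifact from a registry (MLflow, SageMaker, ...).
X, y = make_classification(n_samples=1_000, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload.get("features")
    if not isinstance(features, list) or len(features) != 4:
        return jsonify({"error": "expected 'features': list of 4 numbers"}), 400
    proba = model.predict_proba([features])[0, 1]
    return jsonify({"probability": float(proba)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```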