AdElement

Senior AI/ML Engineer

Posted by Ritisha Nigam
2 - 7 yrs
₹5L - ₹15L / yr
Pune
Skills
Machine Learning (ML)
Artificial Intelligence (AI)
AI/ML

Job Title: Senior AI/ML Engineer – Immediate Joiner (AdTech)

Location: Pune – Onsite

About Us:

We are a cutting-edge technology company at the forefront of digital transformation, building innovative AI and machine learning solutions for the digital advertising industry. Join us in shaping the future of AdTech!

Role Overview:

We are looking for a highly skilled Senior AI/ML Engineer with AdTech experience to develop intelligent algorithms and predictive models that optimize digital advertising performance. Immediate joiners preferred.

Key Responsibilities:

  • Design and implement AIML models for real-time ad optimization, audience targeting, and campaign performance analysis.
  • Collaborate with data scientists and engineers to build scalable AI-driven solutions.
  • Analyze large volumes of data to extract meaningful insights and improve ad performance.
  • Develop and deploy machine learning pipelines for automated decision-making.
  • Stay updated on the latest AI/ML trends and technologies to drive continuous innovation.
  • Optimize existing models for speed, scalability, and accuracy.
  • Work closely with product managers to align AI solutions with business goals.

Requirements:

  • 4–6 years of experience in AI/ML, with a focus on AdTech (mandatory).
  • Strong programming skills in Python, R, or similar languages.
  • Hands-on experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn.
  • Expertise in data processing and real-time analytics.
  • Strong understanding of digital advertising, programmatic platforms, and ad server technology.
  • Excellent problem-solving and analytical skills.
  • Immediate joiners preferred.

Preferred Skills:

  • Knowledge of big data technologies like Spark, Hadoop, or Kafka.
  • Experience with cloud platforms like AWS, GCP, or Azure.
  • Familiarity with MLOps practices and tools.

How to Apply:

If you are a passionate AI/ML engineer with AdTech experience and can join immediately, we want to hear from you. Share your resume and a brief note on your relevant experience.

Join us in building the future of AI-driven digital advertising!


About AdElement

Founded: 2010
Type: Product
Size: 20-100
Stage: Profitable

About

AdElement is an online advertising startup based in Pune. We do AI driven ad personalization for video and display ads. Audiences are targeted algorithmically across biddable sources of ad inventory through real time bidding. We are looking to grow our teams to meet the rapidly expanding market opportunity.


Connect with the team

Sachin Bhatevara
Ravi Tijare

Company social profiles

Blog | LinkedIn | Twitter | Facebook

Similar jobs

Agentic AI Platform
Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
DevOps
Python
Google Cloud Platform (GCP)
Linux/Unix
CI/CD
+21 more

Review Criteria

  • Strong DevOps/Cloud Engineer profiles
  • Must have 3+ years of experience as a DevOps / Cloud Engineer
  • Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
  • Must have strong hands-on experience in Linux administration and system management
  • Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
  • Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
  • Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
  • Must be proficient in scripting languages such as Python or Bash for automation
  • Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
  • Top-tier product-based company (B2B Enterprise SaaS preferred)


Preferred

  • Experience in multi-tenant SaaS infrastructure scaling.
  • Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.


Role & Responsibilities

We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.


Key Responsibilities:

  • Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
  • Build and optimize CI/CD pipelines to support rapid release cycles.
  • Manage containerization & orchestration (Docker, Kubernetes).
  • Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
  • Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
  • Drive cloud security automation (IAM, SSL, secrets management).
  • Partner with engineering teams to embed DevOps into SDLC.
  • Troubleshoot production issues and drive incident response.
  • Support multi-tenant SaaS scaling strategies.


Ideal Candidate

  • 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
  • Strong expertise in AWS, Azure, or GCP.
  • Strong expertise in Linux administration.
  • Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
  • Proficient in Terraform/Ansible/CloudFormation.
  • Strong scripting skills (Python, Bash).
  • Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
  • Strong grasp of cloud security best practices.



AdTech Industry
Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1. Cloud Security (AWS)

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.

 

2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.


4. Network & Endpoint Security

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5. Threat Detection, Incident Response & Compliance

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Bell Techlogix
Posted by Pemmraju VenkatVandita
Hyderabad
5 - 10 yrs
₹15L - ₹20L / yr
CI/CD
Terraform
MLOps
Machine Learning (ML)
Powershell

The DevOps Engineer will play a critical role in operationalizing artificial intelligence across Bell Techlogix client environments. This role focuses on building and supporting cloud infrastructure, CI/CD pipelines, and automation frameworks that power AI and machine learning workloads. The ideal candidate has experience supporting AI platforms such as Azure AI, Azure Machine Learning, Azure OpenAI, and ServiceNow or conversational AI platforms, and understands the operational requirements of production AI systems, including reliability, scalability, and security. 

 

Key Responsibilities 

  • Design, build, and operate cloud infrastructure and platform services that support AI and machine learning workloads in production, SLA-driven managed services environments
  • Implement CI/CD and MLOps pipelines to enable automated training, testing, deployment, and rollback of AI and ML models
  • Develop and maintain Infrastructure as Code to provision AI-ready environments consistently across dev/test/prod
  • Support AI platform operations, including monitoring model health, pipeline execution, compute utilization, and data dependencies
  • Partner with Machine Learning Engineers and Data Engineers to standardize deployment patterns for AI services and LLM-based solutions
  • Enable secure and scalable AI integrations using APIs, messaging, and event-driven architectures
  • Implement observability solutions for AI platforms, including logging, metrics, alerting, and drift detection integrations
  • Troubleshoot AI platform incidents, perform root cause analysis, and implement remediation to improve reliability and automation coverage
  • Apply security best practices for AI environments, including secrets management, identity and access controls, network isolation, and policy enforcement
  • Support AI-driven automation use cases across platforms such as Microsoft Copilot, ServiceNow, and conversational AI tools
  • Collaborate with service desk, security, and architecture teams to continuously improve AI service delivery and operational maturity

 

Required Qualifications 

  • Bachelor's degree in Computer Science, Engineering, or equivalent practical experience
  • 5+ years of experience in DevOps, cloud engineering, or platform operations, with exposure to AI or data workloads
  • Hands-on experience with Microsoft Azure, including compute, networking, storage, and monitoring services
  • Experience building CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools
  • Working knowledge of Infrastructure as Code (Terraform and/or Bicep/ARM)
  • Scripting experience using PowerShell and/or Python
  • Experience supporting production platforms with incident management, change control, and root cause analysis
  • Understanding of cloud security fundamentals and enterprise governance requirements

 

Preferred Qualifications 

  • Experience with Azure Machine Learning, Azure AI Services, Azure OpenAI, or MLOps frameworks
  • Exposure to containerization and orchestration technologies (Docker, Kubernetes, AKS)
  • Experience supporting data pipelines or feature stores used by machine learning systems
  • Familiarity with ServiceNow, AI-driven ITSM workflows, or automation platforms
  • Experience with observability tools
  • Knowledge of Responsible AI, data governance, and compliance considerations for AI systems
  • Relevant certifications (Microsoft Azure Administrator, Azure DevOps Engineer, Azure AI Engineer)

Flytbase
Posted by Alice Philip
Pune
2 - 4 yrs
₹8L - ₹15L / yr
CI/CD
Amazon Web Services (AWS)
Docker
Artificial Intelligence (AI)

Lead DevSecOps Engineer


Location: Pune, India (In-office) | Experience: 3–5 years | Type: Full-time


Apply here → https://lnk.ink/CLqe2


About FlytBase:

FlytBase is a Physical AI platform powering autonomous drones and robots across industrial sites. Our software enables 24/7 operations in critical infrastructure like solar farms, ports, oil refineries, and more.

We're building intelligent autonomy — not just automation — and security is core to that vision.


What You’ll Own

You’ll be leading and building the backbone of our AI-native drone orchestration platform — used by global industrial giants for autonomous operations.

Expect to:

  • Design and manage multi-region, multi-cloud infrastructure (AWS, Kubernetes, Terraform, Docker)
  • Own infrastructure provisioning through GitOps, Ansible, Helm, and IaC
  • Set up observability stacks (Prometheus, Grafana) and write custom alerting rules
  • Build for Zero Trust security — logs, secrets, audits, access policies
  • Lead incident response, postmortems, and playbooks to reduce MTTR
  • Automate and secure CI/CD pipelines with SAST, DAST, image hardening
  • Script your way out of toil using Python, Bash, or LLM-based agents
  • Work alongside dev, platform, and product teams to ship secure, scalable systems


What We’re Looking For:

You’ve probably done a lot of this already:

  • 3–5+ years in DevOps / DevSecOps for high-availability SaaS or product infra
  • Hands-on with Kubernetes, Terraform, Docker, and cloud-native tooling
  • Strong in Linux internals, OS hardening, and network security
  • Built and owned CI/CD pipelines, IaC, and automated releases
  • Written scripts (Python/Bash) that saved your team hours
  • Familiar with SOC 2, ISO 27001, threat detection, and compliance work

Bonus if you’ve:

  • Played with LLMs or AI agents to streamline ops, and built bots that monitor, patch, or auto-deploy.


What It Means to Be a Flyter

  • AI-native instincts: You don’t just use AI — you think in it. Your terminal window has a co-pilot.
  • Ownership without oversight: You own outcomes, not tasks. No one micromanages you here.
  • Joy in complexity: Security + infra + scale = your happy place.
  • Radical candor: You give and receive sharp feedback early — and grow faster because of it.
  • Loops over lines: we prioritize continuous feedback, iteration, and learning over one-way execution or rigid, linear planning.
  • H3: Happy. Healthy. High-Performing. We believe long-term performance stems from an environment where you feel emotionally fulfilled, physically well, and deeply motivated.
  • Systems > Heroics: We value well-designed, repeatable systems over last-minute firefighting or one-off effort.


Perks:

  • Unlimited leave & flexible hours
  • Top-tier health coverage
  • Budget for AI tools, courses
  • International deployments
  • ESOPs and high-agency team culture


Apply Here- https://lnk.ink/CLqe2

PGP Glass Pvt Ltd
Posted by Animesh Srivastava
Vadodara
2 - 4 yrs
₹6L - ₹12L / yr
IT infrastructure
Artificial Intelligence (AI)
DevOps

Key Responsibilities:


• Collaborate with Data Scientists to test and scale new algorithms through pilots and later industrialize the solutions at scale to the comprehensive fashion network of the Group

• Influence, build and maintain the large-scale data infrastructure required for the AI projects, and integrate with external IT infrastructure/service to provide an e2e solution

• Leverage an understanding of software architecture and software design patterns to write scalable, maintainable, well-designed and future-proof code

• Design, develop and maintain the framework for the analytical pipeline

• Develop common components to address pain points in machine learning projects, like model lifecycle management, feature store and data quality evaluation

• Provide input and help implement framework and tools to improve data quality

• Work in cross-functional agile teams of highly skilled software/machine learning engineers, data scientists, designers, product managers and others to build the AI ecosystem within the Group

• Deliver on time, demonstrating a strong commitment to deliver on the team mission and agreed backlog

Censius
Posted by Censius Team
Remote only
3 - 5 yrs
₹10L - ₹20L / yr
DevOps
Kubernetes
Docker
Django
Flask
+3 more

About the job

Our goal

We are reinventing the future of MLOps. The Censius Observability platform gives businesses greater visibility into how their AI makes decisions so they can understand it better. We enable explanation of predictions, continuous monitoring of drift, and assessment of fairness in the real world. (TL;DR: we are building the best ML monitoring tool.)

 

The culture

We believe in constantly iterating and improving our team culture, just like our product. We have found a good balance between async and sync work: the default is still Notion docs over meetings, but at the same time, we recognize that as an early-stage startup, brainstorming together over calls leads to results faster. If you enjoy taking ownership, moving quickly, and writing docs, you will fit right in.

 

The role:

Our engineering team is growing and we are looking to bring on board a senior software engineer who can help us transition to the next phase of the company. As we roll out our platform to customers, you will be pivotal in refining our system architecture, ensuring the various tech stacks play well with each other, and smoothening the DevOps process.

On the platform, we use Python (ML-related jobs), Golang (core infrastructure), and NodeJS (user-facing). The platform is 100% cloud-native and we use Envoy as a proxy (eventually will lead to service-mesh architecture).

By joining our team, you will get exposure to working across a swath of modern technologies while building an enterprise-grade ML platform in a highly promising area.

 

Responsibilities

  • Be the bridge between engineering and product teams. Understand long-term product roadmap and architect a system design that will scale with our plans.
  • Take ownership of converting product insights into detailed engineering requirements. Break these down into smaller tasks and work with the team to plan and execute sprints.
  • Author high-quality, high-performance, and unit-tested code running in a distributed environment using containers.
  • Continually evaluate and improve DevOps processes for a cloud-native codebase.
  • Review PRs, mentor others and proactively take initiatives to improve our team's shipping velocity.
  • Leverage your industry experience to champion engineering best practices within the organization.

 

Qualifications

Work Experience

  • 3+ years of industry experience (2+ years in a senior engineering role), preferably with some exposure to leading remote development teams.
  • Proven track record of building large-scale, high-throughput, low-latency production systems, with at least 3 years spent working with customers, architecting solutions, and delivering end-to-end products.
  • Fluency in writing production-grade Go or Python in a microservice architecture with containers/VMs for 3+ years.
  • 3+ years of DevOps experience (Kubernetes, Docker, Helm and public cloud APIs)
  • Worked with relational (SQL) as well as non-relational databases (Mongo or Couch) in a production environment.
  • (Bonus: worked with big data in data lakes/warehouses).
  • (Bonus: built an end-to-end ML pipeline)

Skills

  • Strong documentation skills. As a remote team, we heavily rely on elaborate documentation for everything we are working on.
  • Ability to motivate, mentor, and lead others (we have a flat team structure, but the team would rely upon you to make important decisions)
  • Strong independent contributor as well as a team player.
  • Working knowledge of ML and familiarity with concepts of MLOps

Benefits

  • Competitive Salary
  • Work Remotely
  • Health insurance
  • Unlimited Time Off
  • Support for continual learning (free books and online courses)
  • Reimbursement for streaming services (think Netflix)
  • Reimbursement for gym or physical activity of your choice
  • Flex hours
  • Leveling Up Opportunities

 

You will excel in this role if

  • You have a product mindset. You understand, care about, and can relate to our customers.
  • You take ownership, collaborate, and follow through to the very end.
  • You love solving difficult problems, stand your ground, and get what you want from engineers.
  • Resonate with our core values of innovation, curiosity, accountability, trust, fun, and social good.
MoreYeahs
Posted by Lovely Sharma
Remote, Indore
2 - 6 yrs
₹12L - ₹16L / yr
Machine Learning (ML)
DevOps
Kubernetes
Terraform
Python
+9 more
Machine Learning DevOps Engineer

Skills

  • Building a scalable and highly available infrastructure for data science
  • Knowledge of data science project workflows
  • Hands-on with deployment patterns for online/offline predictions (server/serverless)
  • Experience with either Terraform or Kubernetes
  • Experience with ML deployment frameworks like Kubeflow, MLflow, SageMaker
  • Working knowledge of Jenkins or a similar tool

Responsibilities

  • Own all the ML cloud infrastructure (AWS)
  • Help build out an entire CI/CD ecosystem with auto-scaling
  • Work with a testing engineer to design testing methodologies for ML APIs
  • Research & implement new technologies
  • Help with cost optimization of infrastructure
  • Knowledge sharing

Nice to Have

  • Develop APIs for machine learning
  • Can write Python servers for ML systems with API frameworks
  • Understanding of task queue frameworks like Celery