2 - 9 yrs
₹9L - ₹27L / yr
Gurugram, Bengaluru (Bangalore), Chennai
Skills
DevOps
Microsoft Azure
GitLab
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Docker
Kubernetes
Jenkins
GitHub
Git
Python
MySQL
PostgreSQL
SQL Server
Oracle
Terraform
Argo
Airflow
Kubeflow
Machine Learning (ML)
Greetings!

We are looking for a technically driven MLOps Engineer for one of our premium clients.

COMPANY DESCRIPTION:
This company is a global management consulting firm and a trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public, and social sectors. Our scale, scope, and knowledge allow us to address problems that no one else can.


Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
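As a minimal, hypothetical sketch of the pytest-style automated testing the list above calls for: the helper `validate_payload` and its field names are illustrative assumptions, not part of the job description.

```python
# Hypothetical API helper; field names are assumptions for illustration.
def validate_payload(payload):
    """Return True if an inference payload has the required fields."""
    required = {"model_id", "features"}
    return required.issubset(payload) and isinstance(payload.get("features"), list)

# pytest discovers test_* functions automatically; plain asserts are the idiom.
def test_accepts_well_formed_payload():
    assert validate_payload({"model_id": "m1", "features": [0.1, 0.2]})

def test_rejects_missing_or_malformed_fields():
    assert not validate_payload({"features": [0.1]})
    assert not validate_payload({"model_id": "m1", "features": "oops"})

# Called directly here for illustration; under pytest these run automatically.
test_accepts_well_formed_payload()
test_rejects_missing_or_malformed_fields()
```

Saved as a `test_*.py` file, the same module would run unchanged under `pytest`.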
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.

About Top Management Consulting Company


Similar jobs

Gurugram, Pune, Mumbai, Bengaluru (Bangalore), Chennai, Nashik
4 - 12 yrs
₹12L - ₹15L / yr
Data engineering
Data modeling
data pipeline
Data integration
Data Warehouse (DWH)
+12 more

 

 

Designation – Deputy Manager - TS


Job Description

  1. Total of 8-9 years of development experience in Data Engineering (B1/BII role).
  2. Minimum of 4-5 years in AWS data integrations, with very good data modelling skills.
  3. Should be very proficient in end-to-end AWS data solution design, including strong data ingestion and integration skills (both data at rest and data in motion) as well as complete DevOps knowledge.
  4. Should have delivered at least 4 data warehouse or data lake solutions on AWS.
  5. Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc.
  6. Strong Python skills.
  7. Should be an expert in cloud design principles, performance tuning, and cost modelling; AWS certifications will be an added advantage.
  8. Should be a team player with excellent communication, able to manage their work independently with minimal or no supervision.
  9. A Life Science & Healthcare domain background will be a plus.

Qualifications

BE/BTech/ME/MTech

 

Databook
Posted by Nikhil Mohite
Mumbai
1 - 3 yrs
Up to ₹20L / yr (varies)
Data engineering
skill iconPython
Apache Kafka
Spark
skill iconAmazon Web Services (AWS)
+1 more

Lightning Job By Cutshort ⚡

 

As part of this feature, you can expect status updates about your application and replies within 72 hours (once the screening questions are answered).

 

 

About Databook:-

- Great salespeople let their customers’ strategies do the talking.

 

Databook’s award-winning Strategic Relationship Management (SRM) platform uses advanced AI and NLP to empower the world’s largest B2B sales teams to create, manage, and maintain strategic relationships at scale. The platform ingests and interprets billions of financial and market data signals to generate actionable sales strategies that connect the seller’s solutions to a buyer’s financial pain and urgency.

 

The Opportunity

We're seeking Junior Engineers to support and develop Databook's capabilities. Working closely with our seasoned engineers, you'll contribute to crafting new features and ensuring our platform's reliability. If you're eager to play a part in building the future of customer intelligence, with a keen eye for quality, we'd love to meet you!

 

Specifically, you'll

- Participate in various stages of the engineering lifecycle alongside our experienced engineers.

- Assist in maintaining and enhancing features of the Databook platform.

- Collaborate with various teams to comprehend requirements and aid in implementing technology solutions.

 

Please note: As you progress and grow with us, you might be introduced to on-call rotations to handle any platform challenges.

 

Working Arrangements:

- This position offers a hybrid work mode, allowing employees to work both remotely and in-office as mutually agreed upon.

 

What we're looking for

- 1-2+ years of experience as a Data Engineer

- Bachelor's degree in Engineering

- Willingness to work across different time zones

- Ability to work independently

- Knowledge of cloud (AWS or Azure)

- Exposure to distributed systems such as Spark, Flink or Kafka

- Fundamental knowledge of data modeling and optimizations

- Minimum of one year of experience using Python working as a Software Engineer

- Knowledge of SQL (Postgres) databases would be beneficial

- Experience with building analytics dashboards

- Familiarity with RESTful APIs and/or GraphQL is welcomed

- Hands-on experience with NumPy, pandas, and spaCy would be a plus

- Exposure to or working experience with GenAI (LLMs in general) and LLMOps would be a plus

- Highly fluent in both spoken and written English

 

Ideal candidates will also have:

- Self-motivation and great organizational skills.

- An ability to focus on small and subtle details.

- A willingness to learn and adapt in a rapidly changing environment.

- Excellent written and oral communication skills.

 

Join us and enjoy these perks!

- Competitive salary with bonus

- Medical insurance coverage

- 5 weeks leave plus public holidays

- Employee referral bonus program

- Annual learning stipend to spend on books, courses or other training materials that help you develop skills relevant to your role or professional development

- Complimentary subscription to Masterclass

Fxbytes technologies
Posted by Shweta Bharti
Vijay Nagar, Indore
3 - 6 yrs
₹2L - ₹8L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+4 more

Seeking Data Analytics Trainer with Power BI and Tableau Expertise

Experience Required: Minimum 3 Years

Location: Indore

Part-Time / Full-Time Availability


We are actively seeking a qualified candidate to join our team as a Data Analytics Trainer, with a strong focus on Power BI and Tableau expertise. The ideal candidate should possess the following qualifications:


  A track record of 3 to 6 years in delivering technical training and mentoring.

  Profound understanding of Data Analytics concepts.

  Strong proficiency in Excel and Advanced Excel.

  Demonstrated hands-on experience and effective training skills in Python, data visualization, and R programming, and an in-depth understanding of both Power BI and Tableau.


Follow me on LinkedIn to get more job updates 👇

https://www.linkedin.com/in/shweta-bharti-a105ab197/




Dori AI
Posted by Nitin Gupta
Bengaluru (Bangalore)
2 - 8 yrs
₹8L - ₹20L / yr
skill iconPython
skill iconData Science
skill iconMachine Learning (ML)
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
+5 more

Dori AI enables enterprises with AI-powered video analytics to significantly increase human productivity and improve process compliance. We leverage a proprietary full-stack end-to-end computer vision and deep learning platform to rapidly build and deploy AI solutions for enterprises. The platform was built with enterprise considerations including time-to-value, time-to-market, security, and scalability across a range of use cases. Capture visual data across multiple sites, leverage AI + Computer Vision to gather key insights, and make decisions with actionable visual insights. Launch CV applications in a matter of weeks that are optimized for both cloud and edge deployments.

 


Job brief: Sr. Software Engineer/Software Engineer


All of our team members are expected to learn, learn, and learn! We are working on cutting-edge technologies and areas of artificial intelligence that have never been explored before. We are looking for motivated software engineers with strong coding skills who want to work on problems and challenges they have never worked on before. All of our team members wear multiple hats, so you will be expected to simultaneously work on multiple aspects of the products we ship.


Responsibilities

  • Participate heavily in the brainstorming of system architecture and feature design
  • Interface with external customers and key stakeholders to understand and document design requirements
  • Work cross-functionally with Engineering, Data Science, Product, UX, and Infrastructure teams
  • Drive best coding practices across the company (e.g., documentation, code reviews, coding standards, etc.)
  • Perform security, legal, and license reviews of committed code
  • Complete projects with little or no supervision from senior leadership


Required Qualifications

  • Built and deployed customer-facing services and products at scale
  • Developed unit and integration tests
  • Worked on products where experimentation and data science are core to the development
  • Experience with large-scale distributed systems that have thousands of microservices and manage millions of transactions per day
  • Solid instruction-level understanding of Object Oriented design, data structures, and software engineering principles
  • Must have 4+ years of experience in back-end web development with the following tools: Python, Flask, FastAPI, AWS or Azure, GCP, Java or C/C++, ORM, Mongo, Postgres, TimescaleDB, CI/CD


Desired Experience/Skills

  • You have a strong background in software development 
  • Experience with the following tools: Google Cloud Platform, Objective C/Swift, Github, Docker
  • Experience with open-source projects in a startup environment
  • BS, MS, or Ph.D. in Computer Science, Software Engineering, Math, Electrical Engineering, or other STEM degree


Amagi Media Labs
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
skill iconData Science
skill iconMachine Learning (ML)
ETL
Data Warehouse (DWH)
skill iconAmazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
Data Architect is responsible for
• Demonstrated technology and personal leadership experience in architecting,
designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration,
data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration
frameworks, highly scalable distributed systems using open source and emerging data
architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is
highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor
technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory
databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies
is desirable.
• This role requires 15+ years of data solution architecture, design and development
delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases in the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence, and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage, etc.)/ELT, and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions; data lake knowledge is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS, etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of Agile frameworks and delivery.

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure in Workflow Orchestration like Airflow is a plus
● Exposure in any one of the NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a
plus
● Understanding of Digital web events, ad streams, context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and
social platforms, generating a staggering amount of user data along the way. Condé Nast
made the right move to invest heavily in understanding this data and formed a whole new
Data team entirely dedicated to data processing, engineering, analytics, and visualization.
This team helps drive engagement, fuel process innovation, further content enrichment, and
increase market revenue. The Data team aimed to create a company culture where data was
the common language and to facilitate an environment where insights shared in real time
could improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
Bengaluru (Bangalore)
1 - 8 yrs
₹8L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more
In this role, you will be part of a growing, global team of data engineers who collaborate in DevOps mode to enable Merck's business with state-of-the-art technology to leverage data as an asset and make better-informed decisions.

The Merck Data Engineering Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on Merck’s data management and global analytics platform (Palantir Foundry, Hadoop, AWS and other components).

The Foundry platform comprises multiple different technology stacks, which are hosted on Amazon Web Services (AWS) infrastructure or on-premise Merck’s own data centers. Developing pipelines and applications on Foundry requires:

• Proficiency in SQL / Java / Python (Python required; all 3 not necessary)
• Proficiency in PySpark for distributed computation
• Familiarity with Postgres and ElasticSearch
• Familiarity with HTML, CSS, and JavaScript and basic design/visual competency
• Familiarity with common databases and access layers (e.g., JDBC, MySQL, Microsoft SQL Server). Not all types required

This position will be project based and may work across multiple smaller projects or a single large project utilizing an agile project methodology.

Roles & Responsibilities:
• Develop data pipelines by ingesting various data sources – structured and unstructured – into Palantir Foundry
• Participate in the end-to-end project lifecycle, from requirements analysis to go-live and operations of an application
• Act as a business analyst when developing requirements for Foundry pipelines
• Review code developed by other data engineers and check against platform-specific standards, cross-cutting concerns, coding and configuration standards and functional specification of the pipeline
• Document technical work in a professional and transparent way. Create high quality technical documentation
• Work out the best possible balance between technical feasibility and business requirements (the latter can be quite strict)
• Deploy applications on Foundry platform infrastructure with clearly defined checks
• Implementation of changes and bug fixes via Merck's change management framework and according to system engineering practices (additional training will be provided)
• DevOps project setup following Agile principles (e.g. Scrum)
• Besides working on projects, act as third-level support for critical applications; analyze and resolve complex incidents/problems. Debug problems across the full Foundry stack and code based on Python, PySpark, and Java
• Work closely with business users, data scientists/analysts to design physical data models
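The ingestion responsibilities above can be sketched in miniature. This is illustrative only and does not use the Palantir Foundry API; it shows the general pattern of normalizing a structured (CSV) and an unstructured (free-text) source using only the Python standard library, with made-up sample inputs.

```python
import csv
import io

def ingest_csv(text):
    """Parse structured rows, normalizing column names to lowercase."""
    return [{k.lower(): v for k, v in row.items()}
            for row in csv.DictReader(io.StringIO(text))]

def ingest_text(doc, source):
    """Wrap an unstructured document with minimal metadata."""
    return {"source": source, "chars": len(doc), "body": doc.strip()}

# Hypothetical sample inputs standing in for real data sources.
structured = ingest_csv("ID,Value\n1,42\n2,7\n")
unstructured = ingest_text("  batch report: all checks passed  ", "report.txt")
```

In a real pipeline these normalized records would then be written to the platform's datasets rather than held in memory.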
Indium Software
Posted by Swaathipriya P
Bengaluru (Bangalore), Hyderabad
2 - 5 yrs
₹1L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more
2+ years of analytics experience, predominantly in SQL, SAS, statistics, R, Python, and visualization
Experienced in writing complex SQL SELECT queries (window functions and CTEs), with advanced SQL experience
Should be an individual contributor for the initial few months; a team will be aligned later based on project movement
Strong in querying logic and data interpretation
Solid communication and articulation skills
Able to handle stakeholders independently, with minimal intervention from the reporting manager
Develop strategies to solve problems in logical yet creative ways
Create custom reports and presentations accompanied by strong data visualization and storytelling
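The window-function and CTE requirement mentioned above can be illustrated with a small, self-contained query. The table and data here are made up for illustration, run against an in-memory SQLite database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 10), ("north", 30), ("south", 20)])

rows = conn.execute("""
    WITH regional AS (                              -- CTE: pre-aggregate
        SELECT region, SUM(amount) AS total
        FROM sales GROUP BY region
    )
    SELECT region, total,
           RANK() OVER (ORDER BY total DESC) AS rnk -- window function
    FROM regional
    ORDER BY rnk
""").fetchall()
# rows is [('north', 40, 1), ('south', 20, 2)]
```

The same CTE-plus-window shape carries over to any RDBMS that supports SQL window functions.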
Wellness Forever Medicare Private Limited
Mumbai
3 - 5 yrs
₹7L - ₹11L / yr
Data Warehouse (DWH)
Informatica
ETL
SQL server
Microsoft Windows Azure
+4 more
  • Minimum 3-4 years of experience with ETL tools, SQL, SSAS & SSIS
  • Good understanding of Data Governance, including Master Data Management (MDM) and Data Quality tools and processes
  • Knowledge of programming/data languages, e.g., JSON, Python, R
  • Hands on experience of SQL database design
  • Experience working with REST API
  • Influencing and supporting project delivery through involvement in project/sprint planning and QA
  • Working experience with Azure
  • Stakeholder management
  • Good communication skills
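The ETL and REST API requirements above can be combined into one small sketch. To stay self-contained, a canned JSON string stands in for a live API response; the field names and table are hypothetical.

```python
import json
import sqlite3

# Canned response body standing in for a live REST API call.
response_body = '[{"sku": "A1", "qty": 5}, {"sku": "B2", "qty": 3}]'

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (sku TEXT PRIMARY KEY, qty INTEGER)")

records = json.loads(response_body)                        # extract
rows = [(r["sku"], r["qty"]) for r in records]             # transform
conn.executemany("INSERT INTO stock VALUES (?, ?)", rows)  # load

total = conn.execute("SELECT SUM(qty) FROM stock").fetchone()[0]  # total is 8
```

In practice the extract step would issue an HTTP request and the load step would target the warehouse, but the extract/transform/load shape is the same.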
El Corte Inglés
Saradhi Reddy
Posted by Saradhi Reddy
Hyderabad
3 - 7 yrs
₹10L - ₹25L / yr
skill iconData Science
skill iconR Programming
skill iconPython
Bengaluru (Bangalore)
2 - 4 yrs
₹12L - ₹16L / yr
skill iconPython
Bash
MySQL
skill iconElastic Search
skill iconAmazon Web Services (AWS)

What are we looking for:

 

  1. Strong experience in MySQL and writing advanced queries
  2. Strong experience in Bash and Python
  3. Familiarity with ElasticSearch, Redis, Java, NodeJS, ClickHouse, S3
  4. Exposure to cloud services such as AWS, Azure, or GCP
  5. 2+ years of experience in production support
  6. Strong experience in log management and performance monitoring tools such as ELK, Prometheus + Grafana, and logging services on various cloud platforms
  7. Strong understanding of Linux OSes such as Ubuntu and CentOS/Red Hat Linux
  8. Interest in learning new languages/frameworks as needed
  9. Good written and oral communication skills
  10. A growth mindset and a passion for building things from the ground up; most importantly, you should be fun to work with

 

As a product solutions engineer, you will:

 

  1. Analyze recorded runtime issues, diagnose them, and make occasional code fixes of low to medium complexity
  2. Work with developers to find and correct more complex issues
  3. Address urgent issues quickly; work within and measure against customer SLAs
  4. Write shell and Python scripts to automate manual/repetitive activities
  5. Build anomaly detectors wherever applicable
  6. Pass articulated feedback from customers to the development and product teams
  7. Maintain an ongoing record of problem analysis and resolution in an on-call monitoring system
  8. Offer technical support needed in development
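The scripting and anomaly-detection responsibilities above can be sketched with the standard library. The log format and the alerting threshold are assumptions made purely for illustration.

```python
import re
from collections import Counter

# Hypothetical log excerpt; real formats will differ.
LOG = """\
2024-01-01 10:00:01 api ERROR timeout
2024-01-01 10:00:02 api ERROR timeout
2024-01-01 10:00:03 web INFO ok
2024-01-01 10:00:04 api ERROR refused
"""

# Capture the service name from lines whose level is ERROR.
PATTERN = re.compile(r"^\S+ \S+ (\w+) ERROR", re.MULTILINE)

errors = Counter(PATTERN.findall(LOG))  # error count per service
THRESHOLD = 2                           # assumed alerting threshold
anomalies = [svc for svc, n in errors.items() if n > THRESHOLD]
# errors counts 3 'api' errors; 'api' exceeds the threshold and is flagged
```

A production version would tail live logs and feed flagged services into the on-call monitoring system rather than a list.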

 
