3 - 5 yrs
₹12L - ₹14L / yr
Bengaluru (Bangalore)
Skills
Data Engineer
Big Data
Python
Amazon Web Services (AWS)
SQL
skill iconJava
ETL
  • We are looking for a Data Engineer with 3-5 years of experience in Python, SQL, AWS (EC2, S3, Elastic Beanstalk, API Gateway), and Java.
  • The applicant must be able to perform data mapping (data type conversion, schema harmonization) using Python, SQL, and Java (an illustrative sketch follows below).
  • The applicant must be familiar with, and have programmed, ETL interfaces (OAuth, REST API, ODBC) using the same languages.
  • The company is looking for someone who shows an eagerness to learn and who asks concise questions when communicating with teammates.
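For illustration only, a minimal sketch of the data-mapping task described above (data type conversion plus schema harmonization) using pandas. The source columns, target schema and dtypes are invented for the example and are not taken from the posting.

```python
# Hypothetical sketch: harmonize a source extract onto a target schema.
# All column names and dtypes below are invented for illustration.
import pandas as pd

TARGET_SCHEMA = {              # target column -> desired dtype
    "customer_id": "int64",
    "signup_date": "datetime64[ns]",
    "monthly_spend": "float64",
}
SOURCE_TO_TARGET = {           # source column -> target column
    "CustID": "customer_id",
    "signup_dt": "signup_date",
    "spend_mo": "monthly_spend",
}

def harmonize(df: pd.DataFrame) -> pd.DataFrame:
    """Rename source columns and coerce each one to the target dtype."""
    out = df.rename(columns=SOURCE_TO_TARGET)[list(TARGET_SCHEMA)].copy()
    for col, dtype in TARGET_SCHEMA.items():
        if dtype.startswith("datetime"):
            out[col] = pd.to_datetime(out[col], errors="coerce")
        else:
            out[col] = out[col].astype(dtype)
    return out

if __name__ == "__main__":
    raw = pd.DataFrame({"CustID": ["101"], "signup_dt": ["2023-01-05"], "spend_mo": ["42.5"]})
    print(harmonize(raw).dtypes)
```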
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.

About: Our client company is into Analytics. (RF1)
Company social profiles: N/A

Similar jobs

Bengaluru (Bangalore), Gurugram
2 - 6 yrs
₹12L - ₹14L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more

Role - Senior Executive Analytics (BI)


Overview of job -


Our client is the world’s largest media investment company and a part of WPP. In fact, they are responsible for one in every three ads you see globally. We are currently looking for a Senior Executive Analytics to join us. In this role, you will have a massive opportunity to build, and be a part of, the largest performance marketing setup in APAC. The company is committed to fostering a culture of diversity and inclusion. Our people are our strength, so we respect and nurture their individual talent and potential.


Reporting of the role - This role reports to the Director - Analytics.


3 best things about the job:


1. Responsible for data & analytics projects and for developing data strategies: diving into the data, extrapolating insights and providing guidance to clients


2. Build and be a part of a dynamic team


3. Being part of a global organisation with rapid growth opportunities


Responsibilities of the role:


• Design and build visualizations, dashboards and reports for both internal and external clients using Tableau, Power BI, Datorama or R Shiny/Python, together with SQL and Alteryx.

• Work with large data sets via hands-on data processing to produce structured data sets for analysis.

• Good to have - experience building marketing-mix and multi-touch attribution models using a range of tools, both free and paid (a toy illustration follows below).
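For illustration only, a toy marketing-mix style regression in Python with scikit-learn; the channel names, spend data and response are synthetic, and a real marketing-mix model would add adstock/saturation transforms, seasonality and other controls.

```python
# Toy marketing-mix style regression on synthetic weekly data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_weeks = 104
spend = rng.uniform(0, 100, size=(n_weeks, 3))                    # tv, search, social spend
sales = 50 + spend @ np.array([0.8, 1.5, 0.4]) + rng.normal(0, 5, n_weeks)

model = LinearRegression().fit(spend, sales)
for channel, coef in zip(["tv", "search", "social"], model.coef_):
    print(f"{channel}: estimated incremental sales per unit spend = {coef:.2f}")
```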


What you will need:


• Degree in Mathematics, Statistics, Economics, Engineering, Data Science, Computer Science, or another quantitative field.

• Proficiency in one or more coding languages – preferred languages: Python, R

• Proficiency in one or more Visualization Tools – Tableau, Datorama, Power BI

• Proficiency in using SQL.

• Minimum 3 years of experience in marketing/data analytics or a related field; hands-on experience in building marketing-mix and attribution models is a plus.

Bidgely
at Bidgely
1 recruiter
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹40L / yr
Data Science
Deep Learning
Machine Learning (ML)
Natural Language Processing (NLP)
Artificial Neural Network (ANN)
+6 more
Roles and Responsibilities

● Research and develop advanced statistical and machine learning models for
analysis of large-scale, high-dimensional data.

● Dig deeper into data, understand characteristics of data, evaluate alternate
models and validate hypotheses through theoretical and empirical approaches.

● Productize proven or working models into production-quality code (a minimal sketch follows after this list).

● Collaborate with product management, marketing, and engineering teams in
Business Units to elicit & understand their requirements & challenges and
develop potential solutions

● Stay current with the latest research and technology ideas; share knowledge by
clearly articulating results and ideas to key decision-makers.

● File patents for innovative solutions that add to the company's IP portfolio
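As a hedged illustration of what productizing a working model can look like, the sketch below trains a scikit-learn pipeline, applies a simple validation gate, and persists the artifact. The dataset, model choice and threshold are placeholders, not Bidgely specifics.

```python
# Train -> validate -> persist: a minimal "productization" path for a model.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)                  # stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pipeline = make_pipeline(StandardScaler(), GradientBoostingClassifier(random_state=42))
pipeline.fit(X_train, y_train)

auc = roc_auc_score(y_test, pipeline.predict_proba(X_test)[:, 1])
assert auc > 0.9, "validation gate before the model is allowed to ship"
joblib.dump(pipeline, "model.joblib")                        # artifact handed to serving
print(f"holdout AUC = {auc:.3f}")
```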

Requirements

● 4 to 6 years of strong experience in data mining, machine learning and
statistical analysis.

● BS/MS/Ph.D. in Computer Science, Statistics, Applied Math, or related areas from premier institutes (only candidates from IITs / IISc / BITS / top NITs or a top US university should apply)

● Experience in productizing models to code in a fast-paced start-up
environment.

● Fluency in analytical tools such as MATLAB, R, Weka, etc.

● Strong intuition for data and a keen aptitude for large-scale data analysis

● Strong communication and collaboration skills.
Molecular Connections
at Molecular Connections
4 recruiters
Molecular Connections
Posted by Molecular Connections
Bengaluru (Bangalore)
8 - 10 yrs
₹15L - ₹20L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more
  1. Big data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis and data integration, and in implementing enterprise-level systems spanning big data.
  2. A skilled developer with strong problem solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
  3. Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala and Oozie.
  4. Hands-on experience in creating real-time data streaming solutions using Apache Spark core, Spark SQL & DataFrames, Kafka, Spark Streaming and Apache Storm.
  5. Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include NameNode, DataNode, ResourceManager, NodeManager and the Job History Server.
  6. Worked with both the Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
  7. Well versed in the installation, configuration and management of big data and the underlying infrastructure of a Hadoop cluster.
  8. Hands-on experience in coding MapReduce/YARN programs using Java, Scala and Python for analyzing big data.
  9. Exposure to the Cloudera development environment and management using Cloudera Manager.
  10. Extensively worked on Spark using Scala on a cluster for computational analytics; installed it on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
  11. Implemented Spark jobs in Python, utilizing the DataFrame and Spark SQL APIs for faster data processing; handled importing data from different data sources into HDFS using Sqoop, performing transformations using Hive and MapReduce, and then loading the data into HDFS (see the sketch after this list).
  12. Used Spark Data Frames API over Cloudera platform to perform analytics on Hive data.
  13. Hands-on experience with Spark MLlib, which is used for predictive intelligence, customer segmentation and smooth maintenance in Spark Streaming.
  14. Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
  15. Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
  16. Worked on creating data pipelines for different ingestion and aggregation events, loading consumer response data into Hive external tables in HDFS to serve as a feed for Tableau dashboards.
  17. Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
  18. In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
  19. Hands on expertise in real time analytics with Apache Spark.
  20. Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
  21. Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
  22. Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 & S3, including automating the setup and extension of clusters in the AWS cloud.
  23. Extensively worked on Spark using Python on a cluster for computational analytics; installed it on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL.
  24. Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
  25. Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
  26. Experienced in writing Ad Hoc queries using Cloudera Impala, also used Impala analytical functions.
  27. Experience in creating Data frames using PySpark and performing operation on the Data frames using Python.
  28. In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS and MapReduce Programming Paradigm, High Availability and YARN architecture.
  29. Established multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
  30. Generated various knowledge reports using Power BI based on business specifications.
  31. Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
  32. Well experienced in projects using JIRA, testing, and the Maven and Jenkins build tools.
  33. Experienced in designing, building, deploying and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance and auto-scaling.
  34. Good experience with use-case development and with software methodologies like Agile and Waterfall.
  35. Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
  36. Good working experience importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, mainframes, Oracle and Netezza into HDFS, and performing transformations on it using Hive, Pig and Spark.
  37. Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
  38. Proficient in NoSQL databases including HBase, Cassandra and MongoDB, and their integration with a Hadoop cluster.
  39. Hands-on experience in Hadoop big data technology, working with MapReduce, Pig and Hive as analysis tools, and Sqoop and Flume as data import/export tools.
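For items 11, 12 and 27 above, a small PySpark sketch: the same daily aggregation written once with the DataFrame API and once with Spark SQL over a temporary view (standing in for a Hive external table). The table and column names are invented for illustration.

```python
# Minimal PySpark DataFrame vs. Spark SQL example on invented event data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example").getOrCreate()

events = spark.createDataFrame(
    [("2024-01-01", "click", 3), ("2024-01-01", "view", 10), ("2024-01-02", "click", 5)],
    ["event_date", "event_type", "n_events"],
)

# DataFrame API: total events per day
daily_df = events.groupBy("event_date").agg(F.sum("n_events").alias("total_events"))

# Equivalent Spark SQL over a temporary view (stand-in for a Hive external table)
events.createOrReplaceTempView("consumer_events")
daily_sql = spark.sql(
    "SELECT event_date, SUM(n_events) AS total_events FROM consumer_events GROUP BY event_date"
)

daily_df.show()
daily_sql.show()
```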
Fintech Company
Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
1 - 3 yrs
₹9L - ₹12L / yr
Python
SQL
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
+3 more
Job Responsibilities:

1+ years of proven experience in ML/AI with Python

Work with the manager through the entire analytical and machine learning model life cycle (a compressed sketch follows after these steps):

⮚ Define the problem statement

⮚ Build and clean datasets

⮚ Exploratory data analysis

⮚ Feature engineering

⮚ Apply ML algorithms and assess the performance

⮚ Codify for deployment

⮚ Test and troubleshoot the code

⮚ Communicate analysis to stakeholders
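A compressed, purely illustrative pass through the life cycle above (build/clean a dataset, engineer a feature, fit a model, assess it, codify for deployment), using scikit-learn on a tiny synthetic dataset; every column and value is invented.

```python
# Synthetic, end-to-end mini version of the model life cycle listed above.
import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# build / clean a (synthetic) dataset
df = pd.DataFrame({
    "amount":    [120, 45, 300, 80, 500, 25, 410, 60],
    "n_txn_30d": [3, 1, 9, 2, 12, 1, 10, 2],
    "defaulted": [0, 0, 1, 0, 1, 0, 1, 0],
}).dropna()

# feature engineering: one toy derived feature
df["amount_per_txn"] = df["amount"] / df["n_txn_30d"]

X, y = df[["amount", "n_txn_30d", "amount_per_txn"]], df["defaulted"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# apply an ML algorithm and assess its performance
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# codify for deployment: persist the fitted model as an artifact
joblib.dump(clf, "credit_model.joblib")
```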

Technical Skills

⮚ Proven experience using Python and SQL

⮚ Excellent programming and statistics skills

⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark

Aureus Tech Systems
at Aureus Tech Systems
3 recruiters
Naveen Yelleti
Posted by Naveen Yelleti
Kolkata, Hyderabad, Chennai, Bengaluru (Bangalore), Bhubaneswar, Visakhapatnam, Vijayawada, Trichur, Thiruvananthapuram, Mysore, Delhi, Noida, Gurugram, Nagpur
1 - 7 yrs
₹4L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Skills and requirements

  • Experience analyzing complex and varied data in a commercial or academic setting.
  • Desire to solve new and complex problems every day.
  • Excellent ability to communicate scientific results to both technical and non-technical team members.


Desirable

  • A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering or Biological Sciences.
  • Hands-on experience with Python, PySpark and SQL.
  • Hands-on experience building end-to-end data pipelines (a generic sketch follows below).
  • Hands-on experience with Azure Data Factory, Azure Databricks and Data Lake is an added advantage.
  • Experience with big data tools: Hadoop, Hive, Sqoop, Spark and Spark SQL.
  • Experience with SQL or NoSQL databases for the purposes of data retrieval and management.
  • Experience in data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions.
  • BS degree in math, statistics, computer science or equivalent technical field.
  • Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets.
  • Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
  • Willing to learn and work on Data Science, ML, AI.
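A generic, minimal end-to-end pipeline sketch (extract, transform, load) in plain Python. It uses an in-memory SQLite table with made-up columns purely for illustration; the pipelines described above would more likely target PySpark, Azure Data Factory or Databricks.

```python
# Extract -> transform -> load, kept deliberately small and generic.
import sqlite3
import pandas as pd

def extract(conn: sqlite3.Connection) -> pd.DataFrame:
    return pd.read_sql_query("SELECT order_id, amount, country FROM orders", conn)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["amount"])
    return df.groupby("country", as_index=False)["amount"].sum()

def load(df: pd.DataFrame, path: str) -> None:
    df.to_csv(path, index=False)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")                       # stand-in source system
    conn.execute("CREATE TABLE orders (order_id INT, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(1, 10.0, "IN"), (2, 20.5, "IN"), (3, 5.0, "US")])
    load(transform(extract(conn)), "revenue_by_country.csv")
```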
Amagi Media Labs
at Amagi Media Labs
3 recruiters
Rajesh C
Posted by Rajesh C
Chennai
10 - 12 yrs
Best in industry
Data Science
Machine Learning (ML)
Python
SQL
Artificial Intelligence (AI)
Job Title: Data Science Manager
Job Location: India
Job Summary
We at Condé Nast are looking for a Data Science Manager, primarily for the content intelligence
workstream, although there might be some overlap with other workstreams. The
position is based in Chennai and reports to the head of the data science team, Chennai.
Responsibilities:
1. Ideate new opportunities within the content intelligence workstream where data science can
be applied to increase user engagement
2. Partner with business and translate business and analytics strategies into multiple short-term
and long-term projects
3. Lead data science teams to build quick prototypes to check feasibility and value to business
and present to business
4. Formulate the business problem into a machine learning/AI problem
5. Review & validate models & help improve model accuracy
6. Socialize & present the model insights in a manner that the business can understand
7. Lead & own the entire value chain of a project/initiative life cycle - interface with the business,
understand the requirements/specifications, gather data, prepare it, train, validate and test the
model, create business presentations to communicate insights, monitor/track the performance
of the solution and suggest improvements
8. Work closely with ML engineering teams to deploy models to production
9. Work closely with data engineering/services/BI teams to help develop data stores, intuitive
visualizations for the products
10. Set up career paths & learning goals for reportees & mentor them
Required Skills:
1. 5+ years of experience in leading Data Science & Advanced analytics projects with a focus on
building recommender systems and 10-12 years of overall experience
2. Experience in leading data science teams to implement recommender systems using content-based
filtering, collaborative filtering and embedding techniques (a toy sketch follows after this list)
3. Experience in building propensity models, churn prediction, NLP - language models,
embeddings, recommendation engine etc
4. Master’s degree with an emphasis in a quantitative discipline such as statistics, engineering,
economics or mathematics/ Degree programs in data science/ machine learning/ artificial
intelligence
5. Exceptional Communication Skills - verbal and written
6. Moderate level proficiency in SQL, Python
7. Needs to have demonstrated continuous learning through external certifications, degree
programs in machine learning & artificial intelligence
8. Knowledge of Machine learning algorithms & understanding of how they work
9. Knowledge of Reinforcement Learning
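A toy illustration of the collaborative-filtering idea mentioned in point 2 above: item-item cosine similarity on a small, invented ratings matrix. Production recommenders would use richer approaches (embeddings, implicit feedback, hybrid content signals).

```python
# Item-based collaborative filtering on an invented user-item ratings matrix.
import numpy as np

# rows = users, columns = items; 0 means "not rated" (values are made up)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# cosine similarity between item columns
norms = np.linalg.norm(ratings, axis=0)
item_sim = (ratings.T @ ratings) / np.outer(norms, norms)

# score items for user 0 as a similarity-weighted average of that user's ratings
user = ratings[0]
scores = (item_sim @ user) / (item_sim @ (user > 0))
scores[user > 0] = -np.inf            # do not re-recommend already-rated items
print("recommend item", int(np.argmax(scores)), "to user 0")
```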
Preferred Qualifications
1. Expertise in libraries for data science - PySpark (Databricks), scikit-learn, pandas, NumPy,
matplotlib, PyTorch/TensorFlow/Keras, etc.
2. Working Knowledge of deep learning models
3. Experience in ETL/ data engineering
4. Prior experience in e-commerce, media & publishing domain is a plus
5. Experience in digital advertising is a plus
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - generating, in the process, a staggering amount of user data. Condé Nast made the right move
to invest heavily in understanding this data and formed a whole new Data team entirely
dedicated to data processing, engineering, analytics, and visualization. This team helps drive
engagement, fuel process innovation, further content enrichment, and increase market
revenue. The Data team aimed to create a company culture where data was the common
language and facilitate an environment where insights shared in real-time could improve
performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team
at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and
boost online revenue. We are broadly divided into four groups, Data Intelligence, Data
Engineering, Data Science, and Operations (including Product and Marketing Ops, Client
Services) along with Data Strategy and monetization. The teams built capabilities and products
to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are Condé
Nast, and It Starts Here.
Material Depot
at Material Depot
1 recruiter
Sarthak Agrawal
Posted by Sarthak Agrawal
Hyderabad, Bengaluru (Bangalore)
2 - 6 yrs
₹12L - ₹20L / yr
Data Science
Machine Learning (ML)
Deep Learning
R Programming
Python
+1 more

Job Description

Do you have a passion for computer vision and deep learning problems? We are looking for someone who thrives on collaboration and wants to push the boundaries of what is possible today! Material Depot (materialdepot.in) is on a mission to be India’s largest tech company in the Architecture, Engineering and Construction space by democratizing the construction ecosystem and bringing stakeholders onto a common digital platform. Our engineering team is responsible for developing Computer Vision and Machine Learning tools to enable digitization across the construction ecosystem. The founding team includes people from top management consulting firms and top colleges in India (like BCG, IITB), has worked extensively in the construction space globally, and the company is funded by top Indian VCs.

Our team empowers Architectural and Design Businesses to effectively manage their day to day operations. We are seeking an experienced, talented Data Scientist to join our team. You’ll be bringing your talents and expertise to continue building and evolving our highly available and distributed platform.

Our solutions need complex problem solving in computer vision that require robust, efficient, well tested, and clean solutions. The ideal candidate will possess the self-motivation, curiosity, and initiative to achieve those goals. Analogously, the candidate is a lifelong learner who passionately seeks to improve themselves and the quality of their work. You will work together with similar minds in a unique team where your skills and expertise can be used to influence future user experiences that will be used by millions.

In this role, you will bring the following skills and take on the following responsibilities:

  • Extensive knowledge in machine learning and deep learning techniques
  • Solid background in image processing/computer vision
  • Experience in building datasets for computer vision tasks
  • Experience working with and creating data structures / architectures
  • Proficiency in at least one major machine learning framework
  • Experience visualizing data to stakeholders
  • Ability to analyze and debug complex algorithms
  • Good understanding and applied experience in classic 2D image processing and segmentation (a toy sketch follows after this list)
  • Robust semantic object detection under different lighting conditions
  • Segmentation of non-rigid contours in challenging/low contrast scenarios
  • Sub-pixel accurate refinement of contours and features
  • Experience in image quality assessment
  • Experience with in depth failure analysis of algorithms
  • Highly skilled in at least one scripting language such as Python or Matlab and solid experience in C++
  • Creativity and curiosity for solving highly complex problems
  • Excellent communication and collaboration skills
  • Mentor and support other technical team members in the organization
  • Create, improve, and refine workflows and processes for delivering quality software on time and with carefully calculated debt
  • Work closely with product managers, customer support representatives, and account executives to help the business move fast and efficiently through relentless automation.
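As a hedged sketch of the classic 2D image processing and segmentation point above: Otsu thresholding plus contour extraction with OpenCV on a synthetic image. Nothing here reflects Material Depot's actual pipeline; it only illustrates the kind of technique named.

```python
# Classic segmentation sketch: blur -> Otsu threshold -> external contours.
import cv2
import numpy as np

# synthetic grayscale image: a bright filled circle on a dark, noisy background
img = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(img, (100, 100), 40, 255, thickness=-1)
noise = np.random.default_rng(0).integers(0, 30, img.shape, dtype=np.uint8)
img = cv2.add(img, noise)

blurred = cv2.GaussianBlur(img, (5, 5), 0)
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x

largest = max(contours, key=cv2.contourArea)
print("segmented blob area (px):", cv2.contourArea(largest))
```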

How you will do this:

  • You’re part of an agile, multidisciplinary team.
  • You bring your own unique skill set to the table and collaborate with others to accomplish your team’s goals.
  • You prioritize your work with the team and its product owner, weighing both the business and technical value of each task.
  • You experiment, test, try, fail, and learn continuously.
  • You don’t do things just because they were always done that way; you bring your experience and expertise with you and help the team make the best decisions.

For this role, you must have:

  • Strong knowledge of and experience with the functional programming paradigm.
  • Experience conducting code reviews, providing feedback to other engineers.
  • Great communication skills and a proven ability to work as part of a tight-knit team.
Quantiphi Inc.
at Quantiphi Inc.
1 video
10 recruiters
Anwar Shaikh
Posted by Anwar Shaikh
Mumbai
1 - 5 yrs
₹4L - ₹15L / yr
Python
Machine Learning (ML)
Deep Learning
TensorFlow
Keras
+1 more
1. The candidate should be passionate about machine learning and deep learning.
2. Should understand the importance of, and know how to take, a machine-learning-based solution to the consumer.
3. Hands-on experience with statistical and machine-learning tools and techniques.
4. Good exposure to deep learning libraries like TensorFlow and PyTorch (a minimal sketch follows after this list).
5. Experience in implementing Deep Learning techniques, Computer Vision and NLP. The candidate should be able to develop solutions from scratch, with GitHub code to show for it.
6. Should be able to read research papers and pick ideas to quickly reproduce research in the most comfortable Deep Learning library.
7. Should be strong in data structures and algorithms. Should be able to do code complexity analysis/optimization for smooth delivery to production.
8. Expert level coding experience in Python.
9. Technologies: Backend - Python (Programming Language)
10. Should have the ability to think about long-term solutions, modularity, and reusability of the components.
11. Should be able to work in a collaborative way. Should be open to learning from peers as well as constantly bring new ideas to the table.
12. Self-driven. Open to peer criticism and feedback, and able to take it positively. Ready to be held accountable for the responsibilities undertaken.
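For illustration of the deep learning tooling named above, a minimal Keras (TensorFlow) classifier on synthetic data; the architecture, data and hyperparameters are placeholders rather than anything the team prescribes.

```python
# Minimal Keras binary classifier on synthetic features.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("int32")        # synthetic binary target

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("train accuracy:", model.evaluate(X, y, verbose=0)[1])
```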
PayU
at PayU
1 video
6 recruiters
Deeksha Srivastava
Posted by Deeksha Srivastava
Gurgaon, NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹7L - ₹15L / yr
Python
R Programming
Data Analytics
R

What you will be doing:

As a part of the Global Credit Risk and Data Analytics team, this person will be responsible for carrying out the following analytical initiatives:

  • Dive into the data and identify patterns
  • Development of end-to-end Credit models and credit policy for our existing credit products
  • Leverage alternate data to develop best-in-class underwriting models
  • Working on Big Data to develop risk analytical solutions
  • Development of Fraud models and fraud rule engine
  • Collaborate with various stakeholders (e.g. tech, product) to understand and design best solutions which can be implemented
  • Working on cutting-edge techniques e.g. machine learning and deep learning models

Example of projects done in past:

  • LazyPay credit risk model using the CatBoost modelling technique; end-to-end pipeline for feature engineering and model deployment in production using Python (a toy sketch follows below)
  • Fraud model development, deployment and rules for EMEA region
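A toy sketch in the spirit of the CatBoost project example above; the features, labels and parameters are invented and do not reflect the actual LazyPay model or its data.

```python
# Tiny CatBoost classifier with a native categorical feature (all data invented).
import pandas as pd
from catboost import CatBoostClassifier

train = pd.DataFrame({
    "monthly_income":  [30000, 52000, 41000, 18000, 75000, 22000],
    "employment_type": ["salaried", "self_employed", "salaried",
                        "salaried", "self_employed", "salaried"],
    "defaulted":       [0, 0, 0, 1, 0, 1],
})
X, y = train[["monthly_income", "employment_type"]], train["defaulted"]

model = CatBoostClassifier(iterations=50, depth=3, verbose=False)
model.fit(X, y, cat_features=["employment_type"])    # categoricals handled natively
print(model.predict_proba(X)[:, 1])                  # predicted probability of default
```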

 

Basic Requirements:

  • 1-3 years of work experience as a Data Scientist (in the credit domain)
  • 2016 or 2017 batch from a premier college (e.g. B.Tech. from IITs/NITs, Economics from DSE/ISI, etc.)
  • Strong problem solving; able to understand and execute complex analyses
  • Experience in at least one of the languages - R/Python/SAS - and SQL
  • Experience in the credit industry (fintech/bank)
  • Familiarity with the best practices of Data Science

 

Add-on Skills : 

  • Experience in working with big data
  • Solid coding practices
  • Passion for building new tools/algorithms
  • Experience in developing Machine Learning models
WyngCommerce
at WyngCommerce
3 recruiters
Ankit Jain
Posted by Ankit Jain
Bengaluru (Bangalore)
1 - 3 yrs
₹6L - ₹8L / yr
Data Analytics
Predictive analytics
Business Analysis
Data Science
Python
WyngCommerce is building a Global Enterprise AI Platform for top-tier brands and retailers to drive profitability for our clients. Our vision is to develop a self-learning retail backend that enables our clients to become more agile and responsive to demand and supply volatilities. We are looking for a Business Analyst to join our team. As a BA, you will take end-to-end ownership of on-boarding new clients, running proof-of-concepts and pilots with them on different AI product applications, and ensuring timely product roll-out and customer success for the clients. You will also be expected to drive significant inputs to the sales, engineering, data science and product teams to help us build for scale. An eye for detail, the ability to process and analyze data quickly, and communicating effectively with different teams (client, sales and engineering) are the qualities we are looking for. There will be opportunities to grow within the same job family (lead a team of analysts) or to move to other areas of the business like customer success, data science or product management.

KEY RESPONSIBILITIES:
  • Understand the client deliverables from the sales team and come back to them with the timelines for solution/product delivery
  • Coordinate with relevant stakeholders within the client team to configure the WyngCommerce platform to their business, and set up processes for regular data sharing
  • Drive relevant anomaly detection analyses and work on data pre-processing to prepare the data for the WyngCommerce analytics engine (a simple illustration follows below)
  • Drive rigorous testing of results from the WyngCommerce engine and apply manual overrides, wherever required, before pushing the results to the client
  • Evaluate business outcomes of different engagements with the client and prepare the analysis of the business benefits to the clients

KEY REQUIREMENTS:
  • 0-2 years of experience in an analytics role (preferably client facing, requiring you to interface with multiple stakeholders)
  • Hands-on experience in data analysis (descriptive), visualization and data pre-processing
  • Hands-on experience in Python, especially data processing and visualization libraries like pandas, numpy, matplotlib and seaborn
  • Good understanding of statistical and predictive modeling concepts (you need not be completely hands-on)
  • Excellent analytical thinking and problem-solving skills
  • Experience in project management and handling client communications
  • Excellent communication (written/verbal) skills, including logically structuring and delivering presentations
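A simple illustration of the kind of anomaly detection analysis mentioned in the responsibilities above: flagging points more than two standard deviations from the mean of a daily series with pandas. The numbers are invented; the WyngCommerce engine itself is not described in the posting.

```python
# Flag daily values whose z-score exceeds 2 (toy data, toy threshold).
import pandas as pd

daily_sales = pd.Series([100, 98, 103, 101, 99, 250, 102, 97, 100, 5],
                        index=pd.date_range("2024-01-01", periods=10))
z = (daily_sales - daily_sales.mean()) / daily_sales.std()
anomalies = daily_sales[z.abs() > 2]       # candidates for review / manual override
print(anomalies)
```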
Why apply to jobs via Cutshort
Personalized job matches
Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.
Verified hiring teams
See actual hiring teams, find common social connections or connect with them directly. No 3rd party agencies here.
Move faster with AI
We use AI to get you faster responses, recommendations and unmatched user experience.
21,01,133 matches delivered · 37,12,187 network size · 15,000 companies hiring
Did not find a job you were looking for?
Search for relevant jobs from 10000+ companies such as Google, Amazon & Uber actively hiring on Cutshort.
Get to hear about interesting companies hiring right now
Follow Cutshort on LinkedIn