GroundTruth

Senior Software Engineer

Posted by Priti Singh
7 - 12 yrs
₹15L - ₹32L / yr
Remote only
Skills
Spark
Hadoop
Big Data
Data engineering
PySpark
Amazon Web Services (AWS)
Python
Data Structures

You will:

  • Create highly scalable AWS microservices utilizing cutting-edge cloud technologies.
  • Design and develop Big Data pipelines handling huge geospatial data.
  • Bring clarity to large complex technical challenges.
  • Collaborate with Engineering leadership to help drive technical strategy.
  • Scope, plan, and estimate projects.
  • Mentor and coach team members at different levels of experience.
  • Participate in peer code reviews and technical meetings.
  • Cultivate a culture of engineering excellence.
  • Seek, implement and adhere to standards, frameworks and best practices in the industry.
  • Participate in on-call rotation.
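
The geospatial Big Data work described above ultimately rests on spatial primitives. As a minimal, hedged illustration, here is the standard haversine great-circle distance in plain Python; the coordinates are made-up examples, not GroundTruth data:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~= 6371 km

# New York City to Los Angeles: roughly 3,900-4,000 km.
print(round(haversine_km(40.71, -74.01, 34.05, -118.24)))
```

At pipeline scale, the same kind of formula would typically run inside a Spark job over large batches of visit records rather than point by point.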

You have:

  • Bachelor’s/Master’s degree in computer science, computer engineering or relevant field.
  • 5+ years of experience in software design, architecture and development.
  • 5+ years of experience using object-oriented languages (Java, Python).
  • Strong experience with Big Data technologies like Hadoop, Spark, MapReduce, Kafka, etc.
  • Strong experience in working with different AWS technologies.
  • Excellent competencies in data structures & algorithms.

Nice to have:

  • Proven track record of delivering large-scale projects, and an ability to break down large tasks into smaller deliverable chunks.
  • Experience in developing high-throughput, low-latency backend services.
  • Affinity for spatial data structures and algorithms.
  • Familiarity with Postgres DB, Google Places, or Mapbox APIs.

What we offer

At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.

  • Unlimited Paid Time Off
  • In Office Daily Catered Lunch
  • Fully stocked snacks/beverages
  • 401(k) employer match
  • Health coverage including medical, dental, vision and option for HSA or FSA
  • Generous parental leave
  • Company-wide DEIB Committee
  • Inclusion Academy Seminars
  • Wellness/Gym Reimbursement
  • Pet Expense Reimbursement
  • Company-wide Volunteer Day
  • Education reimbursement program
  • Cell phone reimbursement
  • Equity Analysis to ensure fair pay

About GroundTruth

Founded: 2009
Stage: Profitable
GroundTruth is the leading location-based marketing and advertising technology company. Brands, agencies, small businesses, and non-profits trust our performance-driven solutions to help them reach consumers during moments of intent that generate important business outcomes. GroundTruth’s suite of geo-contextual omnichannel products and services are available at scale through our self-serve advertising platform, managed services, and industry reseller partnerships. GroundTruth’s marketing platform is powered by a unique data set called "visitation data" accredited by the Media Rating Council (MRC). Our proprietary cleansing processes combine contextual mapping technology (Blueprints™), owned and operated properties, and third-party mobile location data, together yielding over 30 billion visits annually.
Connect with the team
Priti Singh
Sumit Paul
Company social profiles
Instagram · LinkedIn · Twitter · Facebook

Similar jobs

CodeCraft Technologies Private Limited
Posted by Chandana B
Bengaluru (Bangalore), Mangalore
6 - 15 yrs
Best in industry
Data Science
Machine Learning (ML)
Python
Artificial Intelligence (AI)

CodeCraft Technologies is an award-winning creative engineering company where highly skilled designers and engineers work closely together to bring user-focused solutions to life.


We leverage proven design and development methodologies and explore the latest technologies to deliver best-in-class mobile and web solutions. Our success is built on a team of talented and motivated individuals who drive excellence in everything they do. We are seeking a highly skilled and experienced Lead Data Scientist to join our growing team.


Responsibilities:

● Work with stakeholders across the organization to identify opportunities for leveraging company data to drive business solutions.

● Develop custom data models and algorithms to apply to data sets.

● Use predictive modeling to optimize business processes and solutions.

● Research and development of AI algorithms and their applicability in business-related problems to build intelligent systems.

● Build a Solid Data Science Team: Provide strategic direction for the data science team. Lead, mentor, and inspire a team of data scientists, fostering a culture of collaboration and continuous learning.

● Explore the latest technologies in the Data science domain and develop POCs.

● Establish a Technology Partnership with the leading technology providers in the AI/ML space.

● MLOps – Deploy ML solutions to the cloud.

● Collaborate with the content team to produce tech blogs, case studies, etc.


Required Skill Set:

● Strong foundational knowledge of data science concepts, machine learning algorithms, and programming skills in Python (and/or R).

● Expertise in Generative AI (GenAI), Large Language Models (LLM), Natural Language Processing (NLP), image processing and/or video analytics

● Proven track record of supporting global clients or internal stakeholders in data science projects.

● Experience in data analytics, descriptive analytics and predictive analytics

● Experience using AI/ML tools available from cloud service providers like AWS/AZURE/GCP including TensorFlow, SageMaker, and Azure ML

● Experience in deploying solutions to the cloud [AWS/Azure/GCP]

● Experience with Data Visualization tools like PowerBI, Tableau

● Proficient in SQL and other database technologies.

● Good understanding of the latest research and technologies in AI.

● Experience working across multiple geographic borders and time zones

● Outstanding communication and presentation skills


Education:

● Graduation/Post-graduation in Computers/Engineering/Statistics from a reputed institute

Service-based company
Agency job via Vmultiply solutions by Mounica Buddharaju
Ahmedabad, Rajkot
2 - 4 yrs
₹3L - ₹6L / yr
Python
Amazon Web Services (AWS)
SQL
ETL


Qualifications:

  • Minimum 2 years of .NET development experience (ASP.Net 3.5 or greater and C# 4 or greater).
  • Good knowledge of MVC, Entity Framework, and Web API/WCF.
  • ASP.NET Core knowledge is preferred.
  • Creating APIs / Using third-party APIs
  • Working knowledge of Angular is preferred.
  • Knowledge of Stored Procedures and experience with a relational database (MSSQL 2012 or higher).
  • Solid understanding of object-oriented development principles
  • Working knowledge of web, HTML, CSS, JavaScript, and the Bootstrap framework
  • Strong understanding of object-oriented programming
  • Ability to create reusable C# libraries
  • Must write clean comments and readable C# code, and be able to self-learn.
  • Working knowledge of GIT

Qualities required:

Over and above the tech skills, we prefer candidates to have:

  • Good communication and time management skills.
  • Good team player with the ability to contribute on an individual basis.

  • We provide the best learning and growth environment for candidates.


Skills:

  • .NET Core
  • .NET Framework
  • ASP.NET Core
  • ASP.NET MVC
  • ASP.NET Web API
  • C#
  • HTML

OJCommerce
Posted by Rajalakshmi N
Chennai
2 - 5 yrs
₹7L - ₹12L / yr
Beautiful Soup
Web Scraping
Python
Selenium

Role : Web Scraping Engineer

Experience : 2 to 3 Years

Job Location : Chennai

About OJ Commerce: 


OJ Commerce (OJC), a rapidly expanding and profitable online retailer, is headquartered in Florida, USA, with a fully-functional office in Chennai, India. We deliver exceptional value to our customers by harnessing cutting-edge technology, fostering innovation, and establishing strategic brand partnerships to enable a seamless, enjoyable shopping experience featuring high-quality products at unbeatable prices. Our advanced, data-driven system streamlines operations with minimal human intervention.

Our extensive product portfolio encompasses over a million SKUs and more than 2,500 brands across eight primary categories. With a robust presence on major platforms such as Amazon, Walmart, Wayfair, Home Depot, and eBay, we directly serve consumers in the United States.

As we continue to forge new partner relationships, our flagship website, www.ojcommerce.com, has rapidly emerged as a top-performing e-commerce channel, catering to millions of customers annually.

Job Summary:

We are seeking a Web Scraping Engineer and Data Extraction Specialist who will play a crucial role in our data acquisition and management processes. The ideal candidate will be proficient in developing and maintaining efficient web crawlers capable of extracting data from large websites and storing it in a database. Strong expertise in Python, web crawling, and data extraction, along with familiarity with popular crawling tools and modules, is essential. Additionally, the candidate should demonstrate the ability to effectively utilize API tools for testing and retrieving data from various sources. Join our team and contribute to our data-driven success!


Responsibilities:


  • Develop and maintain web crawlers in Python.
  • Crawl large websites and extract data.
  • Store data in a database.
  • Analyze and report on data.
  • Work with other engineers to develop and improve our web crawling infrastructure.
  • Stay up to date on the latest crawling tools and techniques.
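
As a hedged sketch of the crawl-extract-store loop described above, using only the Python standard library (a production crawler would typically use Requests or Scrapy with Beautiful Soup; the HTML snippet and table schema here are invented for illustration):

```python
import sqlite3
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    # Collects href attributes from anchor tags as the parser streams HTML.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkParser()
    parser.feed(html)
    return parser.links

def store_links(conn, page_url, links):
    # Persist crawl results so later stages can analyze and report on them.
    conn.execute("CREATE TABLE IF NOT EXISTS links (page TEXT, href TEXT)")
    conn.executemany("INSERT INTO links VALUES (?, ?)",
                     [(page_url, h) for h in links])

html = '<a href="/products">Products</a> <a href="/brands">Brands</a>'
links = extract_links(html)
conn = sqlite3.connect(":memory:")
store_links(conn, "https://example.com", links)
print(links)  # → ['/products', '/brands']
```

The stored rows can then feed the analysis and reporting responsibilities listed above.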



Required Skills and Qualifications:


  • Bachelor's degree in computer science or a related field.
  • 2-3 years of experience with Python and web crawling.
  • Familiarity with tools/modules such as Scrapy, Selenium, Requests, Beautiful Soup, etc.
  • API tools such as Postman or equivalent.
  • Working knowledge of SQL.
  • Experience with web crawling and data extraction.
  • Strong problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Excellent communication and documentation skills.


What we offer

• Competitive salary

• Medical Benefits/Accident Cover

• Flexi Office Working Hours

• Fast-paced start-up

Mumbai, Navi Mumbai
6 - 14 yrs
₹16L - ₹37L / yr
Python
PySpark
Data engineering
Big Data
Hadoop

Role: Principal Software Engineer


We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. The role involves managing, processing, and analyzing large amounts of raw information in scalable databases, as well as developing unique data structures and writing algorithms for an entirely new set of products. The candidate must have critical thinking and problem-solving skills, experience in software development with advanced algorithms, and the ability to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus, as is exposure to cloud environments, continuous integration, and agile Scrum processes.



Responsibilities:


• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule.

• Develop software that creates data-driven intelligence in products backed by Big Data.

• Perform exploratory analysis of the data to come up with efficient data structures and algorithms for the given requirements.

• The system may or may not involve machine learning models and pipelines, but will require advanced algorithm development.

• Manage data in large-scale data stores (such as NoSQL DBs, time series DBs, geospatial DBs, etc.).

• Create metrics and evaluate algorithms for better accuracy and recall.

• Ensure efficient access and usage of data through indexing, clustering, etc.

• Collaborate with engineering and product development teams.


Requirements:


• Master’s or Bachelor’s degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or a related field - from a top-tier school,

• OR Master’s degree or higher in Statistics or Mathematics, with a hands-on background in software development.

• 8 to 10 years of experience with product development, having done algorithmic work.

• 5+ years of experience working with large data sets or doing large-scale quantitative analysis.

• Understanding of SaaS-based products and services.

• Strong algorithmic problem-solving skills.

• Able to mentor and manage a team and take responsibility for team deadlines.


Skill set required:


• In-depth knowledge of the Python programming language

• Understanding of software architecture and software design

• Must have fully managed a project with a team

• Experience with Agile project management practices

• Experience with data processing, analytics, and visualization tools in Python (such as pandas, matplotlib, SciPy, etc.)

• Strong understanding of SQL and querying NoSQL databases (e.g., MongoDB, Cassandra, Redis)

Mobile Programming LLC
Posted by Sukhdeep Singh
Chennai
4 - 7 yrs
₹13L - ₹15L / yr
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview

Title: Platform Engineer
Location: Chennai
Work Mode: Hybrid (Remote and Chennai Office)
Experience: 4+ years
Budget: 16 - 18 LPA

Responsibilities:

  • Parse data using Python, create dashboards in Tableau.
  • Utilize Jenkins for Airflow pipeline creation and CI/CD maintenance.
  • Migrate Datastage jobs to Snowflake, optimize performance.
  • Work with HDFS, Hive, Kafka, and basic Spark.
  • Develop Python scripts for data parsing, quality checks, and visualization.
  • Conduct unit testing and web application testing.
  • Implement Apache Airflow and handle production migration.
  • Apply data warehousing techniques for data cleansing and dimension modeling.
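
The "parse data" and "quality checks" responsibilities above might look like this in miniature; the schema and values are invented for illustration (real input would come from HDFS, Hive, or Kafka):

```python
import csv
import io

# Invented sample feed with one row missing its amount.
raw = "id,amount\n1,10.5\n2,\n3,7.0\n"

rows = list(csv.DictReader(io.StringIO(raw)))

# Quality check: flag rows with a missing amount before loading downstream.
missing = [r["id"] for r in rows if not r["amount"]]
total = sum(float(r["amount"]) for r in rows if r["amount"])

print(missing)  # → ['2']
print(total)    # → 17.5
```

In an Airflow pipeline, a check like this would typically run as its own task so a bad batch fails fast before reaching Snowflake or a dashboard.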

Requirements:

  • 4+ years of experience as a Platform Engineer.
  • Strong Python skills, knowledge of Tableau.
  • Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
  • Proficient in Unix Shell Scripting and SQL.
  • Familiarity with ETL tools like DataStage and DMExpress.
  • Understanding of Apache Airflow.
  • Strong problem-solving and communication skills.

Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.

Simplilearn Solutions
Posted by STEVEN JOHN
Anywhere, United States, Canada
3 - 10 yrs
₹2L - ₹10L / yr
Java
Amazon Web Services (AWS)
Big Data
Corporate Training
Data Science
To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About my company: SIMPLILEARN has transformed 500,000+ careers across 150+ countries with 400+ courses, and we are a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud.

Our official website: www.simplilearn.com

If you're interested in teaching, interacting, sharing real-life experiences, and have a passion to transform careers, please join hands with us.

Onboarding process:
• Your updated CV needs to be sent to my email id, with relevant certificate copies.
• Sample e-learning access will be shared with a 15-day trial after your registration on our website.
• My Subject Matter Expert will evaluate you on your areas of expertise over a telephonic conversation (duration 15 to 20 minutes).
• Commercial discussion.
• We will register you to our ongoing online sessions to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and Internet connectivity.
• Freelancer Master Service Agreement.

Payment process:
• Once a workshop, or the last day of training for the batch, is completed, you have to share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days as per policy, counted from the date the invoice is received.

Please share your updated CV to proceed to the next step of the onboarding process.
BitClass
Posted by Utsav Tiwary
Bengaluru (Bangalore)
1 - 4 yrs
₹8L - ₹16L / yr
Data Analytics
Statistical Analysis
data analyst
SQL
Python
BitClass, a VC-funded startup in the domain of edtech (backed by the investors of Unacademy, Doubtnut, ShareChat, and Meesho, among others), is looking for a data analyst to help make better business decisions using information from the available data. Your responsibility will be to gather and prepare data from multiple sources, run statistical analyses, and communicate your findings in a clear and objective way.

Responsibilities

- Understanding the business requirements so as to formulate the problems to solve and restrict the slice of data to be explored.
- Collecting data from various sources.
- Performing cleansing, processing, and validation on the data to be analyzed, in order to ensure its quality.
- Exploring and visualizing data.
- Performing statistical analysis and experiments to derive business insights.
- Clearly communicating the findings from the analysis to turn information into something actionable through reports, dashboards, and/or presentations.
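
In miniature, the cleanse-then-analyze loop described above could look like this; the session durations are invented, not BitClass data:

```python
import statistics

# Hypothetical session durations (in minutes) from two variants of a flow.
flow_a = [12.0, 15.0, None, 14.0, 13.0]
flow_b = [11.0, 10.0, 13.0, None, 12.0]

def cleanse(xs):
    # Drop missing values before analysis so the statistics aren't skewed.
    return [x for x in xs if x is not None]

a, b = cleanse(flow_a), cleanse(flow_b)
print(statistics.mean(a))  # → 13.5
print(statistics.mean(b))  # → 11.5
```

The comparison of the two means is the kind of finding that would then go into a report or dashboard.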

Skills

- Experience solving problems in the project’s business domain.
- Experience with data integration from multiple sources 
- Proficiency in at least one query language, especially SQL.
- Working experience with NoSQL databases, such as MongoDB and Elasticsearch.
- Working experience with popular statistical and machine learning techniques, such as clustering, linear regression, KNN, decision trees, etc.
- Good scripting skills using Python, R or any other relevant language
- Proficiency in at least one data visualization tool, such as Matplotlib, Plotly, D3.js, ggplot, etc.
- Great communication skills.
Credit Saison Finance Pvt Ltd
Najma Khanum
Posted by Najma Khanum
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹30L / yr
Data Science
R Programming
Python
Role & Responsibilities:
1) Understand the business objectives, formulate hypotheses, and collect the relevant data using SQL/R/Python. Analyse bureau, customer, and lending performance data on a periodic basis to generate insights. Present complex information and data in an uncomplicated, easy-to-understand way to drive action.
2) Independently build and refit robust models for achieving game-changing growth while managing risk.
3) Identify and implement new analytical/modelling techniques to improve model performance across the customer lifecycle (acquisitions, management, fraud, collections, etc.).
4) Help define the data infrastructure strategy for the Indian subsidiary.
a. Monitor data quality and quantity.
b. Define a strategy for acquisition, storage, retention, and retrieval of data elements, e.g., identify new data types and collaborate with technology teams to capture them.
c. Build a culture of strong automation and monitoring.
d. Stay connected to Analytics industry trends - data, techniques, technology, etc. - and leverage them to continuously evolve data science standards at Credit Saison.

Required Skills & Qualifications:
1) 3+ years working in data science domains, with experience in building risk models. Fintech/financial analysis experience is required.
2) Expert-level proficiency in analytical tools and languages such as SQL, Python, R/SAS, VBA, etc.
3) Experience with building models using common modelling techniques (logistic and linear regressions, decision trees, etc.)
4) Strong familiarity with Tableau/Power BI/Qlik Sense or other data visualization tools
5) Tier 1 college graduate (IIT/IIM/NIT/BITs preferred).
6) Demonstrated autonomy, thought leadership, and learning agility.
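
As a hedged sketch of one technique named in point 3, here is ordinary least-squares linear regression in pure Python on made-up data (real work would use R, SAS, or Python libraries):

```python
# Made-up observations, roughly following y = 2x + 0.1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.1, 6.1, 8.1]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Closed-form least-squares estimates for slope and intercept.
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(round(slope, 3), round(intercept, 3))  # → 2.0 0.1
```

A risk model would of course use many features and a technique like logistic regression, but the fit-then-inspect workflow is the same shape.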
Opscruise
Posted by sharmila M
Remote, Chennai
9 - 25 yrs
₹8L - ₹25L / yr
Data Science
Python
Machine Learning (ML)
DA
Unsupervised learning

Responsibilities

  • Research and test novel machine learning approaches for analysing large-scale distributed computing applications.
  • Develop production-ready implementations of proposed solutions across different AI models and ML algorithms, including testing on live customer data to improve accuracy, efficacy, and robustness
  • Work closely with other functional teams to integrate implemented systems into the SaaS platform
  • Suggest innovative and creative concepts and ideas that would improve the overall platform

Qualifications

The ideal candidate must have the following qualifications:

  • 5+ years of experience in practical implementation and deployment of large customer-facing ML-based systems.
  • MS or M.Tech (preferred) in applied mathematics/statistics; CS or Engineering disciplines are acceptable but must come with strong quantitative and applied mathematical skills.
  • In-depth working familiarity, beyond coursework, with classical and current ML techniques, both supervised and unsupervised learning techniques and algorithms.
  • Implementation experience and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming, and Optimization.
  • Experience in modeling graph structures related to spatiotemporal systems.
  • Programming skills in Python are a must.
  • Experience in developing and deploying on cloud (AWS, Google, or Azure).
  • Good verbal and written communication skills.
  • Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow.

 

Most importantly, you should be someone who is passionate about building new and innovative products that solve tough real-world problems.

Location

Chennai, India

Dataweave Pvt Ltd
Posted by Sanket Patil
Bengaluru (Bangalore)
6 - 10 yrs
Best in industry
Machine Learning (ML)
Python
Data Science
Natural Language Processing (NLP)
Deep Learning
About us
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.

Data Science@DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.

How do we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale!

What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not the least, competitive salary packages and fast paced growth opportunities.

Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production grade data science applications at scale. Such a candidate has keen interest in liaising with the business and product teams to understand a business problem, and translate that into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities.


We are looking for someone with 6+ years of relevant experience working on problems in NLP or Computer Vision with a Master's degree (PhD preferred).


Key problem areas
- Preprocessing and feature extraction of noisy and unstructured data -- both text as well as images.
- Keyphrase extraction, sequence labeling, entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
- Image based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all the above problems using multiple text and image based techniques.
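
A toy bag-of-words cosine similarity, hinting at the document clustering and attribute tagging problems above; real systems would use TF-IDF or embeddings, and the product titles here are invented:

```python
from collections import Counter
from math import sqrt

def bow(text):
    # Bag-of-words representation: word -> count.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

d1 = bow("red cotton shirt")
d2 = bow("cotton shirt red")
d3 = bow("stainless steel bottle")
print(round(cosine(d1, d2), 2))  # → 1.0
print(round(cosine(d1, d3), 2))  # → 0.0
```

Clustering near-duplicate product listings by thresholding a similarity like this is one simple way the problems above get bootstrapped.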

Relevant set of skills
- Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills in multiple programming languages, with experience building production-grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc). Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, Tensorflow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of your projects you have hosted on GitHub.

Role and responsibilities
- Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
- Conduct research. Do experiments. Quickly build throw away prototypes to solve problems pertaining to the Retail domain.
- Build robust clustering and classification models in an iterative manner that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end to end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of latest research in deep learning, NLP, Computer Vision, and other relevant areas.