Data Science Engineer

Posted by Prabhat Shobha
2 - 4 yrs
₹9L - ₹15L / yr
Bengaluru (Bangalore)
Skills
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Python
Big Data
Amazon Web Services (AWS)
AWS Lambda
recommendation algorithm

As a Data Science Engineer, your key deliverables will be to help us discover the information hidden in vast amounts of data and to help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.

What will you do?

  • You will build and deploy ML models to solve specific business problems related to NLP, computer vision, and fraud detection.
  • You will constantly assess and improve the models using techniques like transfer learning.
  • You will identify valuable data sources, automate collection processes, and undertake pre-processing of structured and unstructured data.
  • You will own the complete ML pipeline - data gathering/labeling, cleaning, storage, modeling, training/testing, and deployment.
  • You will assess the effectiveness and accuracy of new data sources and data-gathering techniques.
  • You will build predictive models and machine-learning algorithms to apply to data sets.
  • You will coordinate with different functional teams to implement models and monitor outcomes.
  • You will present information using data visualization techniques and propose solutions and strategies to business challenges.


We would love to hear from you if:

  • You have 2+ years of experience as a software engineer at a SaaS or technology company
  • Demonstrable hands-on programming experience with the Python/R data science stack
  • Ability to design and implement workflows of linear and logistic regression and ensemble models (random forest, boosting) using R/Python
  • Familiarity with big data platforms (Databricks, Hadoop, Hive) and AWS services (SageMaker, IAM, S3, Lambda, Redshift, Elasticsearch)
  • Experience in probability and statistics, with the ability to use ideas of data distributions, hypothesis testing, and other statistical tests
  • Demonstrable competency in data visualisation using the Python/R data science stack
  • Experience in web crawling and data scraping is preferable
  • Strong experience in NLP, having worked with libraries such as NLTK, spaCy, Pattern, Gensim, etc.
  • Experience with text mining, pattern matching, and fuzzy matching
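To make the fuzzy-matching requirement above concrete, here is a small self-contained sketch (our illustration, not part of the posting) using Python's standard-library difflib to match a noisy employer name against a canonical list. The function name, sample names, and threshold are all hypothetical.

```python
# Hedged illustration of fuzzy matching: best_fuzzy_match and the sample
# names below are hypothetical, not from the job posting.
from difflib import SequenceMatcher

def best_fuzzy_match(query, candidates, threshold=0.6):
    """Return (candidate, score) for the most similar candidate,
    or (None, best_score) when nothing clears the threshold."""
    best, best_score = None, 0.0
    for cand in candidates:
        score = SequenceMatcher(None, query.lower(), cand.lower()).ratio()
        if score > best_score:
            best, best_score = cand, score
    return (best, best_score) if best_score >= threshold else (None, best_score)

canonical = ["TartanHQ Solutions", "Acme Corp", "Globex Industries"]
match, score = best_fuzzy_match("tartan hq solutions pvt ltd", canonical)
# match == "TartanHQ Solutions" for these inputs
```

Production pipelines would typically add normalization (stripping suffixes like "Pvt Ltd") and token-based scoring, but the ratio-and-threshold shape is the core of the fuzzy-matching work described.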

Why Tartan?
  • Brand new MacBook
  • Stock Options
  • Health Insurance
  • Unlimited Sick Leaves
  • Passion Fund (Invest in yourself or your passion project)
  • Wind Down
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.

About TartanHQ Solutions Private Limited

Founded: 2021
Stage: Raised funding
About

Tartan is a payroll connectivity API company enabling consent-driven employment and income verification in real-time 🎯


If you have ever applied for a home loan, car loan, or credit card, you were asked during the process to share proof of your income through payslips, IT returns, or a bank statement. You probably had to manually find the necessary documents and upload them to a platform, or send them via e-mail. In some cases, you might even have needed to submit a request to your HR department for help.


Tartan helps precisely here – by offering a digitally embedded income verification solution to securely transmit your income data, including payslips, without a pause in your application process.


Our technology allows businesses to access payroll data directly from the payroll systems to streamline income and employment verification, improve underwriting models, and help employees and independent freelancers avail custom benefits through their employers.✨


You can read about us here - bit.ly/3wkvO5V


Why should you work with us?

  • We are growing exponentially [going from an on-paper idea to revenues in less than 12 weeks] - we have big plans for 2022 and are super excited to build a top-tier product in India, for the world
  • Crazy ambitious mission: have a founder mindset and lots of room to experiment, integrate, and expand further. Have a figure-it-out mentality.
  • Transparent, flexible benefits: we have a robust benefits framework; we evaluate candidates based on performance every six months and put our best foot forward on appraisals, with some flexibility between cash/crypto mutual fund/stock options. Read about our flexible benefits in detail in the next section.👇
  • So much transparency: we try hard to do it the right way. We strongly believe that we can be much more open and move faster when we have nothing to hide.


Flexible Benefits

  • Salary 💸 We offer competitive salaries with performance-based bonuses and raises at will. We believe that a raise doesn’t need to be tied to an annual review.
  • Stock Options 📈 We believe that all employees deserve to own a part of Tartan. Everyone should be rewarded for a successful company outcome.
  • Health Insurance 💪 We believe you and your family deserve robust health coverage because we care about them too. A health policy of 5 lakhs, along with other benefits, is provided to you and your family.
  • Brand new MacBook 💻 You get a MacBook you can use for working anywhere, irrespective of your role at Tartan.
  • Unlimited Sick Leaves ❤ Times are tough, and we are all in this together. Sick leaves are not meant just for yourself but also to take care of your loved ones when they need you the most.
  • Passion Fund 🏀 Invest in yourself or your passion project. Take a course, do gardening, or start a newsletter. At Tartan, we want to invest in your professional and personal growth.
  • Wind Down 🎥 Whether it’s Netflix, Prime, Hotstar with the family, or cozying up with an audio-book, we provide a monthly stipend to spend on the streaming, entertainment, or news sources of your choice.


We are not rigid. If you are a hard worker and are willing to learn, we'd love to hear from you.


If there isn’t an open position for you, don’t worry!


You can e-mail us at [email protected] and let us know how you can help grow and build Tartan in your way, and why you would be a great addition to our team.

Connect with the team: Prabhat Shobha, Ashish Goyal

Similar jobs

PiaKo Store
Posted by PiaKo Store
Kolkata
4 - 8 yrs
₹12L - ₹24L / yr
Python
Amazon Web Services (AWS)
ETL

We are a rapidly expanding global technology partner looking for a highly skilled Senior (Python) Data Engineer to join our client's exceptional Technology and Development team. The role is based in Kolkata. If you are passionate about demonstrating your expertise and thrive on collaborating with a group of talented engineers, then this role was made for you!

At the heart of technology innovation, our client specializes in delivering cutting-edge solutions to clients across a wide array of sectors. With a strategic focus on finance, banking, and corporate verticals, they have earned a stellar reputation for their commitment to excellence in every project they undertake.

We are searching for a senior engineer to strengthen the client's global projects team: an experienced Senior Data Engineer with a strong background in building Extract, Transform, Load (ETL) processes and a deep understanding of AWS serverless cloud environments.

As a vital member of the data engineering team, you will play a critical role in designing, developing, and maintaining data pipelines that facilitate data ingestion, transformation, and storage for our organization.

Your expertise will contribute to the foundation of our data infrastructure, enabling data-driven decision-making and analytics.

Key Responsibilities:

  • ETL Pipeline Development: Design, develop, and maintain ETL processes using Python, AWS Glue, or other serverless technologies to ingest data from various sources (databases, APIs, files), transform it into a usable format, and load it into data warehouses or data lakes.
  • AWS Serverless Expertise: Leverage AWS services such as AWS Lambda, AWS Step Functions, AWS Glue, AWS S3, and AWS Redshift to build serverless data pipelines that are scalable, reliable, and cost-effective.
  • Data Modeling: Collaborate with data scientists and analysts to understand data requirements and design appropriate data models, ensuring data is structured optimally for analytical purposes.
  • Data Quality Assurance: Implement data validation and quality checks within ETL pipelines to ensure data accuracy, completeness, and consistency.
  • Performance Optimization: Continuously optimize ETL processes for efficiency, performance, and scalability, monitoring and troubleshooting any bottlenecks or issues that may arise.
  • Documentation: Maintain comprehensive documentation of ETL processes, data lineage, and system architecture to ensure knowledge sharing and compliance with best practices.
  • Security and Compliance: Implement data security measures, encryption, and compliance standards (e.g., GDPR, HIPAA) as required for sensitive data handling.
  • Monitoring and Logging: Set up monitoring, alerting, and logging systems to proactively identify and resolve data pipeline issues.
  • Collaboration: Work closely with cross-functional teams, including data scientists, data analysts, software engineers, and business stakeholders, to understand data requirements and deliver solutions.
  • Continuous Learning: Stay current with industry trends, emerging technologies, and best practices in data engineering and cloud computing and apply them to enhance existing processes.
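As a rough sketch of the extract-transform-load pattern these responsibilities describe, the standard-library example below ingests raw JSON lines, applies the kind of completeness and format checks mentioned under Data Quality Assurance, and loads clean rows into an in-memory store. The record shape and names are invented for illustration; a real pipeline here would target AWS Glue, S3, and Redshift.

```python
# Illustrative ETL sketch; field names and the in-memory "warehouse" are
# stand-ins, not the actual AWS Glue/Redshift pipeline.
import json

def extract(raw_lines):
    """Extract: parse one JSON record per line, skipping unparseable rows."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # a real pipeline would quarantine these for review
    return records

def transform(records):
    """Transform: enforce required fields and normalize types (quality checks)."""
    clean = []
    for rec in records:
        if not {"id", "amount"} <= rec.keys():
            continue  # completeness check: drop rows missing required fields
        clean.append({"id": int(rec["id"]), "amount": round(float(rec["amount"]), 2)})
    return clean

def load(rows, warehouse):
    """Load: upsert validated rows into the target store, keyed by id."""
    for row in rows:
        warehouse[row["id"]] = row
    return warehouse

raw = ['{"id": 1, "amount": "19.999"}', 'not json', '{"id": 2}']
warehouse = load(transform(extract(raw)), {})
# Only the complete, parseable record survives.
```

The same extract/transform/load boundaries map naturally onto serverless stages (e.g. Lambda or Glue jobs per step), which is what makes the validation hooks easy to monitor per stage.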

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • Proven experience as a Data Engineer with a focus on ETL pipeline development.
  • Strong proficiency in Python programming.
  • In-depth knowledge of AWS serverless technologies and services.
  • Familiarity with data warehousing concepts and tools (e.g., Redshift, Snowflake).
  • Experience with version control systems (e.g., Git).
  • Strong SQL skills for data extraction and transformation.
  • Excellent problem-solving and troubleshooting abilities.
  • Ability to work independently and collaboratively in a team environment.
  • Effective communication skills for articulating technical concepts to non-technical stakeholders.
  • Certifications such as AWS Certified Data Analytics - Specialty or AWS Certified DevOps Engineer are a plus.

Preferred Experience:

  • Knowledge of data orchestration and workflow management tools.
  • Familiarity with data visualization tools (e.g., Tableau, Power BI).
  • Previous experience in industries with strict data compliance requirements (e.g., insurance, finance) is beneficial.

What You Can Expect:

- Innovation Abounds: Join a company that constantly pushes the boundaries of technology and encourages creative thinking. Your ideas and expertise will be valued and put to work in pioneering solutions.

- Collaborative Excellence: Be part of a team of engineers who are as passionate and skilled as you are. Together, you'll tackle challenging projects, learn from each other, and achieve remarkable results.

- Global Impact: Contribute to projects with a global reach and make a tangible difference. Your work will shape the future of technology in finance, banking, and corporate sectors.

They offer an exciting and professional environment with great career and growth opportunities. Their office is located in the heart of Salt Lake Sector V, offering a workspace that is both accessible and inspiring. Team members enjoy regular team outings, and joining means becoming part of a vibrant, dynamic team where your skills will be valued, your creativity nurtured, and your contributions will make a difference. In this role, you can work alongside some of the brightest minds in the industry.

If you're ready to take your career to the next level and be part of a dynamic team that's driving innovation on a global scale, we want to hear from you.

Apply today for more information about this exciting opportunity.

Onsite Location: Kolkata, India (Salt Lake Sector V)


An analytics consulting start-up
Remote only
7 - 12 yrs
₹10L - ₹15L / yr
Machine Learning (ML)
Data Science
MS-Office
Artificial Intelligence (AI)
Python
+2 more

 

  • A Data and MLOps Engineering lead who has a good understanding of modern data engineering frameworks, with a focus on Microsoft Azure and Azure Machine Learning, its development lifecycle, and DevOps.
  • Aims to solve the problems encountered when turning data transformations and data science code into production machine learning systems. Some of these challenges include:
    • ML orchestration - how can I automate my ML workflows across multiple environments?
    • Scalability - how can I take advantage of the huge computational power available in the cloud?
    • Serving - how can I make my ML models available to make predictions reliably when needed?
    • Monitoring - how can I effectively monitor my ML system in production to ensure reliability? Not just system metrics, but also insight into how my models are performing over time.
    • Reuse - how can I promote reuse of the artefacts built and establish templates and patterns?
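To make the ML-orchestration challenge concrete, here is a deliberately tiny sketch (ours, not an Azure ML feature) that resolves step dependencies with the standard library's graphlib and runs a toy ingest → featurize → train workflow in order. Real orchestrators such as Azure ML pipelines add scheduling, retries, and environment management on top of this core idea.

```python
# Toy workflow orchestration: the step names and the "train" computation
# below are invented purely for illustration.
from graphlib import TopologicalSorter

def run_pipeline(steps, deps):
    """Execute each named step once, in dependency order,
    passing earlier results to later steps."""
    order = list(TopologicalSorter(deps).static_order())
    results = {}
    for name in order:
        results[name] = steps[name](results)
    return order, results

steps = {
    "ingest": lambda r: [1.0, 2.0, 3.0],                  # pretend data pull
    "featurize": lambda r: [x * 10 for x in r["ingest"]],  # pretend features
    "train": lambda r: sum(r["featurize"]) / len(r["featurize"]),  # "model" = mean
}
deps = {"featurize": {"ingest"}, "train": {"featurize"}}
order, results = run_pipeline(steps, deps)
# order == ["ingest", "featurize", "train"]
```

The value of a platform team here is packaging this dependency-resolution idea with reusable templates, so individual use cases do not rebuild it.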


The MLOps team works closely with the ML Engineering and DevOps teams. Rather than focusing on individual use cases, the emphasis is on building the platforms and tools that help adoption of MLOps across the organisation, and on developing best practices and ways of working towards a state-of-the-art MLOps capability.

A good understanding of AI/Machine Learning and software engineering best practices such as Cloud Engineering, Infrastructure-as-Code, and CI/CD.

Have excellent communication and consulting skills, while delivering innovative AI solutions on Azure.

Responsibilities will include:

  • Building state-of-the-art MLOps platforms and tooling to help adoption of MLOps across the organization
  • Designing cloud ML architectures and providing a roadmap for flexible patterns
  • Optimizing solutions for performance and scalability
  • Leading and driving the evolving best practices for MLOps
  • Helping to showcase expertise and leadership in this field

 

Tech stack

These are some of the tools and technologies that we use day to day. Key to success will be attitude and aptitude, with a vision to build the next big thing in the AI/ML field.

  • Python - including poetry for dependency management, pytest for automated testing and fastapi for building APIs
  • Microsoft Azure Platform - primarily focused on Databricks, Azure ML
  • Containers
  • CI/CD – Azure DevOps
  • Strong programming skills in Python
  • Solid understanding of cloud concepts
  • Demonstrable interest in Machine Learning
  • Understanding of IaC and CI/CD concepts
  • Strong communication and presentation skills.


Remuneration: Best in the industry


Connect: https://www.linkedin.com/in/shweta-gupta-a361511

People Impact
Agency job
via People Impact by Pruthvi K
Remote only
4 - 10 yrs
₹10L - ₹20L / yr
Amazon Redshift
Data Warehousing
Amazon Web Services (AWS)
Snowflake schema
Data Warehouse (DWH)

Job Title: Data Warehouse/Redshift Admin

Location: Remote

Job Description

AWS Redshift Cluster Planning

AWS Redshift Cluster Maintenance

AWS Redshift Cluster Security

AWS Redshift Cluster Monitoring

Experience managing day-to-day operations of provisioning, maintaining backups, DR, and monitoring of AWS Redshift/RDS clusters

Hands-on experience with Query Tuning in high concurrency environment

Expertise setting up and managing AWS Redshift

AWS certifications preferred (AWS Certified SysOps Administrator)

Remote only
2 - 8 yrs
₹8L - ₹18L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm

This person MUST have:

  • B.E Computer Science or equivalent.
  • In-depth knowledge of machine learning algorithms and their applications including practical experience with and theoretical understanding of algorithms for classification, regression and clustering.
  • Hands-on experience in computer vision and deep learning projects to solve real world problems involving vision tasks such as object detection, Object tracking, instance segmentation, activity detection, depth estimation, optical flow, multi-view geometry, domain adaptation etc.
  • Strong understanding of modern and traditional Computer Vision Algorithms.
  • Experience in one of the Deep Learning frameworks / networks: PyTorch, TensorFlow, Darknet (YOLO v4/v5), U-Net, Mask R-CNN, EfficientDet, BERT, etc.
  • Proficiency with CNN architectures such as ResNet, VGG, UNet, MobileNet, pix2pix, and CycleGAN.
  • Experienced user of libraries such as OpenCV, scikit-learn, matplotlib and pandas.
  • Ability to transform research articles into working solutions to solve real-world problems.
  • High proficiency in Python programming.
  • Familiar with software development practices/pipelines (DevOps- Kubernetes, docker containers, CI/CD tools).
  • Strong communication skills.
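One concrete building block behind the object detection work listed above is intersection over union (IoU), the standard overlap metric for comparing a predicted box with ground truth. The sketch below is a generic illustration (boxes as (x1, y1, x2, y2) corner tuples), not code from any particular framework named in the requirements.

```python
# IoU between two axis-aligned boxes; the coordinates used are illustrative.
def iou(box_a, box_b):
    """Return the intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty when the boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Detectors commonly count a prediction correct when IoU >= 0.5.
score = iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175
```

The same metric underlies evaluation (mAP) and non-maximum suppression in the detection frameworks mentioned, such as YOLO and Mask R-CNN.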


Experience:

  • Min 2 year experience
  • Startup experience is a must. 

Location:

  • Remote developer

Timings:

  • 40 hours a week, with 4 hours a day overlapping with the client timezone. Typically clients are in the California (PST) timezone.

Position:

  • Full time/Direct
  • We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO leaves per year, annual increments, Diwali bonus, spot bonuses and other incentives etc.
  • We don't believe in locking in people with large notice periods. You will stay here because you love the company. We have only a 15-day notice period.
Srijan Technologies
Posted by PriyaSaini
Remote only
3 - 8 yrs
₹5L - ₹12L / yr
Data Analytics
Data modeling
Python
PySpark
ETL
+3 more

Role Description:

  • You will be part of the data delivery team and will have the opportunity to develop a deep understanding of the domain/function.
  • You will design and drive the work plan for the optimization/automation and standardization of the processes incorporating best practices to achieve efficiency gains.
  • You will run data engineering pipelines, link raw client data with data model, conduct data assessment, perform data quality checks, and transform data using ETL tools.
  • You will perform data transformations, modeling, and validation activities, as well as configure applications to the client context. You will also develop scripts to validate, transform, and load raw data using programming languages such as Python and / or PySpark.
  • In this role, you will determine database structural requirements by analyzing client operations, applications, and programming.
  • You will develop cross-site relationships to enhance idea generation, and manage stakeholders.
  • Lastly, you will collaborate with the team to support ongoing business processes by delivering high-quality end products on time and performing quality checks wherever required.
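The SQL-side quality checks and transformations this role describes follow a common shape, sketched below with the standard library's sqlite3 as a stand-in database. The table and column names are invented for illustration; real work here would run against client warehouses via ETL tools or PySpark.

```python
# Hypothetical schema; sqlite3 stands in for the client's warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?)",
    [(1, 120.5), (2, None), (3, -4.0), (4, 88.0)],
)

# Quality check: how many rows have a missing or negative amount?
bad = conn.execute(
    "SELECT COUNT(*) FROM raw_orders WHERE amount IS NULL OR amount < 0"
).fetchone()[0]

# Transformation: materialize only the valid rows into a curated table.
conn.execute(
    "CREATE TABLE clean_orders AS "
    "SELECT id, amount FROM raw_orders "
    "WHERE amount IS NOT NULL AND amount >= 0"
)
clean = conn.execute("SELECT COUNT(*) FROM clean_orders").fetchone()[0]
```

Surfacing the `bad` count to monitoring, rather than silently dropping rows, is what turns a transformation script into an auditable data-quality check.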

Job Requirement:

  • Bachelor’s degree in Engineering or Computer Science; Master’s degree is a plus
  • 3+ years of professional work experience with a reputed analytics firm
  • Expertise in handling large amounts of data through Python or PySpark
  • Conduct data assessment, perform data quality checks and transform data using SQL and ETL tools
  • Experience of deploying ETL / data pipelines and workflows in cloud technologies and architecture such as Azure and Amazon Web Services will be valued
  • Comfort with data modelling principles (e.g. database structure, entity relationships, UID etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
  • A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
  • Strong problem-solving, requirement-gathering, and leadership skills.
  • Track record of completing projects successfully on time, within budget and as per scope

Credit Saison Finance Pvt Ltd
Najma Khanum
Posted by Najma Khanum
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹30L / yr
Data Science
R Programming
Python
Role & Responsibilities:
1) Understand the business objectives, formulate hypotheses, and collect the relevant data using SQL/R/Python. Analyse bureau, customer, and lending performance data on a periodic basis to generate insights. Present complex information and data in an uncomplicated, easy-to-understand way to drive action.
2) Independently Build and refit robust models for achieving game-changing growth while managing risk.
3) Identify and implement new analytical/modelling techniques to improve model performance across the customer lifecycle (acquisitions, management, fraud, collections, etc.).
4) Help define the data infrastructure strategy for the Indian subsidiary.
a. Monitor data quality and quantity.
b. Define a strategy for acquisition, storage, retention, and retrieval of data elements. e.g.: Identify new data types and collaborate with technology teams to capture them.
c. Build a culture of strong automation and monitoring.
d. Staying connected to the Analytics industry trends - data, techniques, technology, etc. and leveraging them to continuously evolve data science standards at Credit Saison.

Required Skills & Qualifications:
1) 3+ years working in data science domains with experience in building risk models. Fintech/Financial analysis experience is required.
2) Expert level proficiency in Analytical tools and languages such as SQL, Python, R/SAS, VBA etc.
3) Experience with building models using common modelling techniques (Logistic and linear regressions, decision trees, etc.)
4) Strong familiarity with Tableau//Power BI/Qlik Sense or other data visualization tools
5) Tier 1 college graduate (IIT/IIM/NIT/BITs preferred).
6) Demonstrated autonomy, thought leadership, and learning agility.
Catalyst IQ
Posted by Sidharth Maholia
Mumbai, Bengaluru (Bangalore)
1 - 5 yrs
₹15L - ₹25L / yr
Tableau
SQL
MS-Excel
Python
Data Analytics
+2 more
Responsibilities:
● Ability to do exploratory analysis: fetch data from systems and analyze trends.
● Developing customer segmentation models to improve the efficiency of marketing and product campaigns.
● Establishing mechanisms for cross-functional teams to consume customer insights to improve engagement along the customer life cycle.
● Gather requirements for dashboards from business, marketing, and operations stakeholders.
● Preparing internal reports for executive leadership and supporting their decision making.
● Analyse data, derive insights, and embed them into business actions.
● Work with cross-functional teams.
Skills Required
• Data Analytics Visionary.
• Strong in SQL & Excel and good to have experience in Tableau.
• Experience in the field of Data Analysis, Data Visualization.
• Strong in analysing the Data and creating dashboards.
• Strong in communication, presentation and business intelligence.
• Multi-Dimensional, "Growth Hacker" Skill Set with strong sense of ownership for work.
• Aggressive “Take no prisoners” approach.
Product Based MNC
Remote, Bengaluru (Bangalore)
5 - 9 yrs
₹5L - ₹20L / yr
Apache Spark
Python
Amazon Web Services (AWS)
SQL

 

Job Description

The role requires experience in AWS along with programming experience in Python and Spark.

Roles & Responsibilities

You Will:

  • Translate functional requirements into technical design
  • Interact with clients and internal stakeholders to understand the data and platform requirements in detail and determine core cloud services needed to fulfil the technical design
  • Design, Develop and Deliver data integration interfaces in the AWS
  • Design, Develop and Deliver data provisioning interfaces to fulfil consumption needs
  • Deliver data models on the cloud platform; this could be AWS Redshift or SQL databases
  • Design, Develop and Deliver data integration interfaces at scale using Python / Spark 
  • Automate core activities to minimize the delivery lead times and improve the overall quality
  • Optimize platform cost by selecting right platform services and architecting the solution in a cost-effective manner
  • Manage code and deployments using DevOps and CI/CD processes
  • Deploy logging and monitoring across the different integration points for critical alerts

You Have:

  • Minimum 5 years of software development experience
  • Bachelor's and/or Master’s degree in computer science
  • Strong Consulting skills in data management including data governance, data quality, security, data integration, processing and provisioning
  • Delivered data management projects on AWS
  • Translated complex analytical requirements into technical design including data models, ETLs and Dashboards / Reports
  • Experience deploying dashboards and self-service analytics solutions on both relational and non-relational databases
  • Experience with different computing paradigms in databases such as In-Memory, Distributed, Massively Parallel Processing
  • Successfully delivered large scale data management initiatives covering Plan, Design, Build and Deploy phases leveraging different delivery methodologies including Agile
  • Strong knowledge of continuous integration, static code analysis and test-driven development
  • Experience in delivering projects in a highly collaborative delivery model with teams at onsite and offshore
  • Must have excellent analytical and problem-solving skills
  • Delivered change management initiatives focused on driving data platforms adoption across the enterprise
  • Strong verbal and written communications skills are a must, as well as the ability to work effectively across internal and external organizations

 

Quantiphi Inc.
Posted by Anwar Shaikh
Mumbai
1 - 5 yrs
₹4L - ₹15L / yr
Python
Machine Learning (ML)
Deep Learning
TensorFlow
Keras
+1 more
1. The candidate should be passionate about machine learning and deep learning.
2. Should understand the importance and know-how of taking the machine-learning-based solution to the consumer.
3. Hands-on experience with statistical, machine-learning tools and techniques
4. Good exposure to Deep learning libraries like Tensorflow, PyTorch.
5. Experience in implementing Deep Learning techniques, Computer Vision and NLP. The candidate should be able to develop the solution from scratch, with GitHub code available.
6. Should be able to read research papers and pick ideas to quickly reproduce research in the most comfortable Deep Learning library.
7. Should be strong in data structures and algorithms. Should be able to do code complexity analysis/optimization for smooth delivery to production.
8. Expert level coding experience in Python.
9. Technologies: Backend - Python (Programming Language)
10. Should have the ability to think about long-term solutions, modularity, and reusability of components.
11. Should be able to work in a collaborative way. Should be open to learning from peers as well as constantly bring new ideas to the table.
12. Self-driven missile. Open to peer criticism, feedback and should be able to take it positively. Ready to be held accountable for the responsibilities undertaken.
LatentView Analytics
Posted by Kannikanti madhuri
Chennai
3 - 5 yrs
₹0L / yr
SAS
SQL server
Python
SOFA Statistics
Analytics
+11 more
Looking for Immediate Joiners

At LatentView, we would expect you to:
- Independently handle delivery of analytics assignments
- Mentor a team of 3 - 10 people and deliver to exceed client expectations
- Co-ordinate with onsite LatentView consultants to ensure high quality, on-time delivery
- Take responsibility for technical skill-building within the organization (training, process definition, research of new tools and techniques etc.)

You'll be a valuable addition to our team if you have:
- 3 - 5 years of hands-on experience in delivering analytics solutions
- Great analytical skills and a detail-oriented approach
- Strong experience in R, SAS, Python, SQL, SPSS, Statistica, MATLAB or similar analytic tools
- Working knowledge of MS Excel, PowerPoint and data visualization tools like Tableau
- Ability to adapt and thrive in the fast-paced environment that young companies operate in
- A background in Statistics / Econometrics / Applied Math / Operations Research / MBA, or alternatively an engineering degree from a premier institution