SQL Engineers

Posted by Sayali Kachi
4 - 10 yrs
₹5L - ₹20L / yr
Pune, Hyderabad
Skills
ETL
SQL
Data engineering
Analytics
PL/SQL
Shell Scripting
Linux/Unix
Data Warehousing

We at Datametica Solutions Private Limited are looking for SQL Engineers who have a passion for the cloud and knowledge of on-premise and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.

Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description

Experience: 4-10 years

Location: Pune


Mandatory Skills -

  • Strong in ETL/SQL development (a purely illustrative sketch of this kind of work follows below)
  • Strong Data Warehousing skills
  • Hands-on experience working with Unix/Linux
  • Development experience in Enterprise Data Warehouse projects
  • Good to have: experience working with Python and shell scripting
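For a flavor of the ETL/SQL development this role involves, here is a purely illustrative sketch of an idempotent staging-to-dimension upsert. The table names are made up, and SQLite stands in for the warehouse (in practice this would be Teradata, Oracle, BigQuery, and the like):

```python
import sqlite3

# Hypothetical schema: a staging table feeding a warehouse dimension table
# via an idempotent upsert.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_customer (customer_id INTEGER, name TEXT, city TEXT);
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    INSERT INTO stg_customer VALUES (1, 'Asha', 'Pune'), (2, 'Ravi', 'Hyderabad');
""")

# Upsert: insert new keys, update changed attributes for existing keys.
# (The "WHERE true" is a SQLite quirk: INSERT ... SELECT needs a WHERE
# clause before ON CONFLICT to keep the parser unambiguous.)
conn.execute("""
    INSERT INTO dim_customer (customer_id, name, city)
    SELECT customer_id, name, city FROM stg_customer
    WHERE true
    ON CONFLICT(customer_id) DO UPDATE SET
        name = excluded.name,
        city = excluded.city;
""")
conn.commit()
print(conn.execute("SELECT * FROM dim_customer").fetchall())
```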

Opportunities -

  • Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
  • Would get a chance to be part of enterprise-grade implementations of Cloud and Big Data systems
  • Will play an active role in setting up modern data platforms based on Cloud and Big Data
  • Would be part of teams with rich experience in various aspects of distributed systems and computing


 

About Us!

A global leader in Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud, leveraging Automation.

 

We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum systems, along with ETL tools like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.

 

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.

 

We have our own products!

Eagle – Data warehouse Assessment & Migration Planning Product

Raven – Automated Workload Conversion Product

Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.

 

Why join us!

Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years have been the key factors in our success.

 

 

Benefits we Provide!

Working with highly technical, passionate, mission-driven people

Subsidized Meals & Snacks

Flexible Schedule

Approachable leadership

Access to various learning tools and programs

Pet Friendly

Certification Reimbursement Policy

 

Check out more about us on our website below!

www.datametica.com


About DataMetica

Founded: 2013
Size: 100-1000
Stage: Profitable
About
As a global leader in Data Warehouse Migration, Data Modernization, and Data Analytics, we empower businesses through automation and help them attain excellence. We believe in empowering companies to master their businesses and achieve their full potential, nurturing clients with our innovative frameworks. Our embedded values help us strengthen the bond with our clients, ensuring growth for all. Datametica is a preferred partner of leading cloud vendors. We offer solutions for migrating from current Enterprise Data Warehouses to the cloud, determining which option is best suited to your needs. We are giving Data Wings.
Connect with the team: Sumangali Desai, Shivani Mahale, Nitish Saxena, Nikita Aher, Pooja Gaikwad, Sayali Kachi, Syed Raza

Similar jobs

hopscotch
Bengaluru (Bangalore)
5 - 8 yrs
₹6L - ₹15L / yr
Python
Amazon Redshift
Amazon Web Services (AWS)
PySpark
Data engineering

About the role:

Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science, and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture.


Here’s what will be expected out of you:

➢ Ability to work with a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer, and load activities.

➢ Develop data pipelines that make data available across platforms.

➢ Should be comfortable executing ETL (Extract, Transform and Load) processes, including data ingestion, data cleaning, and curation into a data warehouse, database, or data platform (a minimal sketch follows this list).

➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.

➢ Work closely with Devops and senior Architect to come up with scalable system and model architectures for enabling real-time and batch services.
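A minimal sketch of the kind of ETL step described above, using PySpark (which this role lists); the S3 paths and column names are hypothetical, and the Parquet sink stands in for a warehouse table:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal illustration of an extract-clean-load step.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV landed by an upstream ingestion job (path is made up).
raw = spark.read.option("header", True).csv("s3://bucket/raw/orders/")

# Transform: basic cleaning and curation.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
)

# Load: write partitioned output for downstream analytics.
clean.write.mode("overwrite").partitionBy("order_date").parquet("s3://bucket/curated/orders/")
```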


What we want:

➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.

➢ Well versed in data warehousing, data modelling, and/or data analysis concepts.

➢ Experience using and building pipelines and performing ETL with industry-standard best practices on Redshift (more than 2 years).

➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing, and query execution on Redshift.

➢ Good understanding of orchestration tools like Airflow.

➢ Strong Python and SQL coding skills.

➢ Strong experience in distributed systems like Spark.

➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).

➢ Solid hands-on experience with various data extraction techniques, both CDC and time/batch based, and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction (the watermark pattern is sketched below).
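To illustrate the time/batch-based half of that last point, here is a minimal watermark-style incremental extraction. SQLite keeps the sketch self-contained; in practice the query would run against the OLTP source, and log-based CDC tools such as Debezium or AWS DMS would be used where deletes and low latency matter. Table and column names are hypothetical:

```python
import sqlite3

# Hypothetical source table with an updated_at column.
src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE orders (id INTEGER, amount REAL, updated_at TEXT);
    INSERT INTO orders VALUES (1, 10.0, '2024-01-01T00:00:00'),
                              (2, 25.5, '2024-01-02T12:30:00');
""")

last_watermark = "2024-01-01T23:59:59"  # persisted from the previous run

# Pull only rows changed since the last run.
rows = src.execute(
    "SELECT id, amount, updated_at FROM orders WHERE updated_at > ? ORDER BY updated_at",
    (last_watermark,),
).fetchall()

if rows:
    new_watermark = rows[-1][2]  # advance the watermark to the max seen
    print(f"extracted {len(rows)} changed rows; next watermark = {new_watermark}")
```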


Note:

Experience with product-based or e-commerce companies is an added advantage.

Digi Upaay Solutions Pvt Ltd
Posted by Sridhar Chakkravarthy
Remote only
8 - 11 yrs
₹11L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL
PL/SQL

Required Skill Set:

• Project experience in any of the following: Data Management, Database Development, Data Migration, or Data Warehousing.

• Expertise in SQL, PL/SQL.


Role and Responsibilities:

• Work on a complex data management program for a multi-billion-dollar customer

• Work on customer projects related to data migration and data integration

• No troubleshooting

• Execution of data pipelines, QA, and project documentation for project deliverables

• Perform data profiling, data cleansing, and data analysis for migration data

• Participate and contribute in project meetings

• Experience in data manipulation using Python preferred

• Proficient in using Excel and PowerPoint

• Perform other tasks as per project requirements.

Amagi Media Labs
Posted by Rajesh C
Chennai
15 - 18 yrs
Best in industry
Data architecture
Architecture
Data Architect
Architect
Java
Job Title: Data Architect
Job Location: Chennai
Job Summary

The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a Data Architecture strategy across various Data Lake platforms. You will help develop reference architecture and roadmaps to build highly available, scalable, and distributed data platforms using cloud-based solutions to process high-volume, high-velocity, and a wide variety of structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities

The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management, and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype, and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration, and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design, and development delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence, and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, Datastage, etc.) / ELT, and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data Lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS, etc.
• Understanding of Hadoop architecture and Hive SQL
• Knowledge of at least one workflow orchestration tool
• Understanding of Agile framework and delivery

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one of the NoSQL databases would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, and context models
About Condé Nast

CONDÉ NAST INDIA (DATA)

Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms, generating a staggering amount of user data along the way. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aims to create a company culture where data is the common language and to facilitate an environment where insights shared in real time can improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups - Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops and Client Services) - along with Data Strategy and Monetization. The teams build capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:

We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Deep-Rooted.co (formerly Clover)
Posted by Likhithaa D
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹15L / yr
Java
Python
SQL
AWS Lambda
HTTP

Deep-Rooted.Co is on a mission to get Fresh, Clean, Community (local farmer) produce from harvest to your home, with a promise of quality first! Our values are rooted in trust, convenience, and dependability, with a bunch of learning & fun thrown in.


Founded out of Bangalore by Arvind, Avinash, Guru, and Santosh, we have raised $7.5 million to date in seed, Series A, and debt funding from investors including Accel, Omnivore, and Mayfield. Our brand Deep-Rooted.Co, launched in August 2020, was the first of its kind in India's fruits & vegetables (F&V) space; it is present in Bangalore and Hyderabad and on a journey of expansion to newer cities, managed seamlessly through a tech platform designed and built to transform the agri-tech sector.


Deep-Rooted.Co is committed to building a diverse and inclusive workplace and is an equal-opportunity employer.  

How is this possible? It's because we work with smart people. We are looking for Engineers in Bangalore to work with the Product Leader (Founder) (https://www.linkedin.com/in/gururajsrao/) and the CTO (https://www.linkedin.com/in/sriki77/). This is a meaningful project for us, and we are sure you will love it, as it touches everyday life and is fun. This will be a virtual consultation.


We want to start the conversation about the project we have for you, but before that, we want to connect with you to know what’s on your mind. Do drop a note sharing your mobile number and letting us know when we can catch up.

Purpose of the role:

* As a startup, we have data distributed across various sources like Excel, Google Sheets, and databases. As we grow, we need swift decision-making based on all the data that exists. You will help us bring this data together and put it in a data model that can be used in business decision-making.
* Handle the nuances of the Excel and Google Sheets APIs (a minimal pull is sketched after this list).
* Pull data in and manage its growth, freshness, and correctness.
* Transform data into a format that aids easy decision-making for Product, Marketing, and Business Heads.
* Understand the business problem, solve it using the technology, and take it to production - no hand-offs - the full path to production is yours.
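A minimal sketch of the Google Sheets pull described above, assuming the gspread library and a Google service account shared on the sheet; the sheet name, credential path, and column names are hypothetical:

```python
import sqlite3
import gspread

# get_all_records() returns one dict per row, keyed by the header row.
gc = gspread.service_account(filename="service_account.json")
rows = gc.open("Daily Harvest Log").sheet1.get_all_records()

# Land the rows in a queryable model so freshness and correctness can be
# checked with plain SQL.
db = sqlite3.connect("ops.db")
db.execute("CREATE TABLE IF NOT EXISTS harvest (farmer TEXT, item TEXT, qty_kg REAL)")
db.executemany(
    "INSERT INTO harvest VALUES (:farmer, :item, :qty_kg)",
    rows,  # assumes the sheet's header row is exactly: farmer, item, qty_kg
)
db.commit()
```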

Technical expertise:
* Good knowledge of and experience with programming languages - Java, SQL, Python.
* Good knowledge of Data Warehousing and Data Architecture.
* Experience with data transformations and ETL.
* Experience with API tools and more closed systems like Excel, Google Sheets, etc.
* Experience with the AWS cloud platform and Lambda.
* Experience with distributed data processing tools.
* Experience with container-based deployments on the cloud.

Skills:
Java, SQL, Python, Data Build Tool, Lambda, HTTP, REST API, Extract Transform Load.
Ushur Technologies Pvt Ltd
Posted by Priyanka N
Bengaluru (Bangalore)
6 - 12 yrs
Best in industry
MongoDB
Spark
Hadoop
Big Data
Data engineering
What You'll Do:
● Our Infrastructure team is looking for an excellent Big Data Engineer to join a core group that designs the industry's leading Micro-Engagement Platform. This role involves the design and implementation of big data architectures and frameworks for the industry's leading intelligent workflow automation platform. As a specialist on the Ushur Engineering team, your responsibilities will be to:
● Use your in-depth understanding to architect and optimize databases and data ingestion pipelines
● Develop HA strategies, including replica sets and sharding, for highly available clusters (a minimal connection sketch follows this list)
● Recommend and implement solutions to improve performance, resource consumption, and resiliency
● On an ongoing basis, identify bottlenecks in databases in development and production environments and propose solutions
● Help the DevOps team with your deep knowledge in the areas of database performance, scaling, tuning, migration & version upgrades
● Provide verifiable technical solutions to support operations at scale and with high availability
● Recommend appropriate data processing toolsets and big data ecosystems to adopt
● Design and scale databases and pipelines across multiple physical locations on the cloud
● Conduct root-cause analysis of data issues
● Be self-driven; constantly research and suggest the latest technologies
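As a minimal illustration of the replica-set point above, here is a sketch of connecting to a hypothetical three-node MongoDB replica set with pymongo; the host names, database, and collection are made up:

```python
from pymongo import MongoClient

# The driver discovers the primary via the seed list; readPreference lets
# reads fall back to secondaries, and w=majority makes writes survive a
# primary failover.
client = MongoClient(
    "mongodb://node1:27017,node2:27017,node3:27017/"
    "?replicaSet=rs0&readPreference=secondaryPreferred&w=majority"
)

db = client["engagement"]
db.events.insert_one({"type": "workflow_started", "tenant": "acme"})
print(client.primary)  # the node currently acting as primary
```

Because the seed list and replicaSet name are in the URI, the client keeps working through primary failovers, which is the heart of the HA strategy.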

The experience you need:
● Engineering degree in Computer Science or a related field
● 10+ years of experience working with databases, most of which should have been around NoSQL technologies
● Expertise in implementing and maintaining distributed big data pipelines and ETL processes
● Solid experience in one of the following cloud-native data platforms (AWS Redshift / Google BigQuery / Snowflake)
● Exposure to real-time processing techniques like Apache Kafka and CDC tools (Debezium, Qlik Replicate)
● Strong experience with the Linux operating system
● Solid knowledge of database concepts, MongoDB, SQL, and NoSQL internals
● Experience with backup and recovery for production and non-production environments
● Experience in security principles and their implementation
● Exceptionally passionate about always keeping the product quality bar at an extremely high level
Nice-to-haves
● Proficient with one or more of Python/Node.js/Java or similar languages

Why you want to Work with Us:
● Great Company Culture. We pride ourselves on having a values-based culture that is welcoming, intentional, and respectful. Our internal NPS of over 65 speaks for itself - employees recommend Ushur as a great place to work!
● Bring your whole self to work. We are focused on building a diverse culture, with innovative ideas where you and your ideas are valued. We are a start-up and know that every person has a significant impact!
● Rest and Relaxation. 13 paid leaves, wellness Friday offs (aka a day off to care for yourself - every last Friday of the month), 12 paid sick leaves, and more!
● Health Benefits. Preventive health checkups, medical insurance covering dependents, wellness sessions, and health talks at the office
● Keep learning. One of our core values is Growth Mindset - we believe in lifelong learning. Certification courses are reimbursed. The Ushur Community offers wide resources for our employees to learn and grow.
● Flexible Work. In-office or hybrid working model, depending on position and location. We seek to create an environment for all our employees where they can thrive in both their profession and personal life.
Product based company
Bengaluru (Bangalore)
3 - 12 yrs
₹5L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

Responsibilities:

  • Should act as a technical resource for the Data Science team and be involved in creating and implementing current and future Analytics projects, like data lake design, data warehouse design, etc.
  • Analysis and design of ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems, etc.
  • Developing and maintaining data pipelines for real-time analytics as well as batch analytics use cases (a minimal streaming sketch follows this list)
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phases of model building
  • Collaborate with product development and DevOps teams in implementing the data collection and aggregation solutions
  • Ensure quality and consistency of the data in the data warehouse and follow best data governance practices
  • Analyse large amounts of information to discover trends and patterns
  • Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies.
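A minimal sketch of the real-time leg of such a pipeline, using the kafka-python client; the topic, broker address, and event fields are hypothetical:

```python
import json
from kafka import KafkaConsumer  # kafka-python

# Real-time leg: consume click events and keep a running count per product.
# A batch leg would read the same events from object storage on a schedule.
consumer = KafkaConsumer(
    "click_events",
    bootstrap_servers=["localhost:9092"],
    group_id="analytics",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

counts = {}
for msg in consumer:
    event = msg.value
    counts[event["product_id"]] = counts.get(event["product_id"], 0) + 1
    # In a real pipeline these aggregates would be flushed to a store
    # (e.g. Redis or the warehouse) rather than kept in memory.
```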

Requirements

  • Bachelor's or Master's in a highly numerate discipline such as Engineering, Science, or Economics
  • 2-6 years of proven experience working as a Data Engineer, preferably in an e-commerce, web-based, or consumer technology company
  • Hands-on experience working with different big data tools like Hadoop, Spark, Flink, Kafka, and so on
  • Good understanding of the AWS ecosystem for big data analytics
  • Hands-on experience in creating data pipelines, either using tools or by independently writing scripts
  • Hands-on experience in scripting languages like Python, Scala, Unix shell scripting, and so on
  • Strong problem-solving skills with an emphasis on product development
  • Experience using business intelligence tools (e.g. Tableau, Power BI) would be an added advantage (not mandatory)
MNC
Agency job
via Fragma Data Systems by Priyanka U
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹12L - ₹20L / yr
PySpark
SQL
Data Warehouse (DWH)
ETL
SQL Developer with 7 years of relevant experience and strong communication skills.

Key responsibilities:

  • Creating, designing, and developing data models
  • Prepare plans for all ETL (Extract/Transformation/Load) procedures and architectures
  • Validating results and creating business reports
  • Monitoring and tuning data loads and queries (one common tuning pattern is sketched below)
  • Develop and prepare a schedule for a new data warehouse
  • Analyze large databases and recommend appropriate optimizations
  • Administer all requirements and design various functional specifications for data
  • Provide support to the Software Development Life Cycle
  • Prepare various code designs and ensure their efficient implementation
  • Evaluate all code and ensure the quality of all project deliverables
  • Monitor data warehouse work and provide subject matter expertise
  • Hands-on BI practices, data structures, data modeling, SQL skills
  • Minimum 1 year of experience in PySpark
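One common tuning pattern for the loads and queries mentioned above, sketched in PySpark: broadcasting a small dimension table so a fact-to-dimension join avoids a full shuffle. The paths and column names are made up:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("load_tuning").getOrCreate()

fact = spark.read.parquet("/warehouse/fact_sales")  # large table
dim = spark.read.parquet("/warehouse/dim_store")    # small table

# Broadcasting ships the small table to every executor instead of
# shuffling both sides of the join.
joined = fact.join(broadcast(dim), on="store_id", how="left")
joined.explain()  # plan should show BroadcastHashJoin instead of SortMergeJoin
```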
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹14L / yr
Data Engineer
Big Data
Python
Amazon Web Services (AWS)
SQL
  • We are looking for a Data Engineer with 3-5 years of experience in Python, SQL, AWS (EC2, S3, Elastic Beanstalk, API Gateway), and Java.
  • The applicant must be able to perform data mapping (data type conversion, schema harmonization) using Python, SQL, and Java.
  • The applicant must be familiar with and have programmed ETL interfaces (OAUTH, REST API, ODBC) using the same languages (a minimal sketch follows this list).
  • The company is looking for someone who shows an eagerness to learn and who asks concise questions when communicating with teammates.
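A minimal sketch of such an ETL interface: pulling from a hypothetical REST endpoint with a bearer token and applying simple data-type mapping before load. The URL, token placeholder, and field names are assumptions:

```python
import requests

# Extract: authenticated REST pull (endpoint and token are made up).
resp = requests.get(
    "https://api.example.com/v1/orders",
    headers={"Authorization": "Bearer <access-token>"},
    params={"updated_since": "2024-01-01"},
    timeout=30,
)
resp.raise_for_status()

def harmonize(rec: dict) -> dict:
    """Map source types onto the target schema (schema harmonization)."""
    return {
        "order_id": int(rec["id"]),
        "amount": float(rec.get("amount", 0)),
        "placed_at": rec["created"][:10],  # keep just the date part
    }

rows = [harmonize(r) for r in resp.json()]
```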
TechChefs Software
Posted by Shilpa Yadav
Remote, anywhere from India
5 - 10 yrs
₹1L - ₹15L / yr
ETL
Informatica
Python
SQL

Responsibilities

  • Installing and configuring Informatica components, including high availability; managing server activations and de-activations for all environments; ensuring that all systems and procedures adhere to organizational best practices
  • Day-to-day administration of the Informatica suite of services (PowerCenter, IDS, Metadata, Glossary, and Analyst)
  • Informatica capacity planning and ongoing monitoring (e.g. CPU, memory, etc.) to proactively increase capacity as needed (an illustrative host-level check is sketched after this list)
  • Manage backup and security of the Data Integration infrastructure
  • Design, develop, and maintain all data warehouse, data mart, and ETL functions for the organization as part of an infrastructure team
  • Consult with users, management, vendors, and technicians to assess computing needs and system requirements
  • Develop and interpret organizational goals, policies, and procedures
  • Evaluate the organization's technology use and needs and recommend improvements, such as software upgrades
  • Prepare and review operational reports or project progress reports
  • Assist in the daily operations of the Architecture team, analyzing workflow, establishing priorities, developing standards, and setting deadlines
  • Work with vendors to manage support SLAs and influence vendor product roadmaps
  • Provide leadership and guidance in technical meetings, define standards, and assist with/provide status updates
  • Work with cross-functional operations teams such as systems, storage, and network to design technology stacks
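The capacity-monitoring bullet above could look something like the following standalone host check. This is not Informatica's own tooling, just an illustrative sketch with made-up thresholds, using the psutil library:

```python
import psutil

# Sample the host metrics that capacity planning typically watches.
cpu = psutil.cpu_percent(interval=1)
mem = psutil.virtual_memory().percent
disk = psutil.disk_usage("/").percent

# Thresholds here are arbitrary examples, not Informatica guidance.
for name, value, limit in [("cpu", cpu, 80), ("memory", mem, 85), ("disk", disk, 90)]:
    status = "OK" if value < limit else "ALERT: consider adding capacity"
    print(f"{name}: {value:.0f}% ({status})")
```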

 

Preferred Qualifications

  • Minimum of 6 years' experience in an Informatica engineer/developer role
  • Minimum of 5 years' experience in an ETL environment as a developer
  • Minimum of 5 years of experience in SQL coding and understanding of databases
  • Proficiency in Python
  • Proficiency in command line troubleshooting
  • Proficiency in writing code in Perl/Shell scripting languages
  • Understanding of Java and concepts of Object-oriented programming
  • Good understanding of systems, networking, and storage
  • Strong knowledge of scalability and high availability
Helical IT Solutions
Posted by Niyotee Gupta
Hyderabad
1 - 5 yrs
₹3L - ₹8L / yr
ETL
Big Data
TAC
PL/SQL
Relational Database (RDBMS)

ETL Developer – Talend

Job Duties:

  • The ETL Developer is responsible for the design and development of ETL jobs which follow standards and best practices and are maintainable, modular, and reusable.
  • Proficiency with Talend or Pentaho Data Integration / Kettle.
  • The ETL Developer will analyze and review complex object and data models and the metadata repository in order to structure the processes and data for better management and efficient access.
  • Working on multiple projects, and delegating work to junior analysts to deliver projects on time.
  • Training and mentoring junior analysts and building their proficiency in the ETL process.
  • Preparing mapping documents to extract, transform, and load data, ensuring compatibility with all tables and requirement specifications.
  • Experience in ETL system design and development with Talend / Pentaho PDI is essential.
  • Create quality rules in Talend.
  • Tune Talend / Pentaho jobs for performance optimization.
  • Write relational (SQL) and multidimensional (MDX) database queries.
  • Functional knowledge of Talend Administration Center / Pentaho Data Integrator, job servers & load-balancing setup, and all their administrative functions.
  • Develop, maintain, and enhance unit test suites to verify the accuracy of ETL processes, dimensional data, OLAP cubes, and various forms of BI content including reports, dashboards, and analytical models (a minimal validation sketch follows this list).
  • Exposure to the MapReduce components of Talend / Pentaho PDI.
  • Comprehensive understanding and working knowledge of data warehouse loading, tuning, and maintenance.
  • Working knowledge of relational database theory and dimensional database models.
  • Creating and deploying Talend / Pentaho custom components is an add-on advantage.
  • Java knowledge is nice to have.
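A minimal sketch of the ETL-accuracy unit test mentioned in the list above: reconciling row counts and a column checksum between source and target. In a Talend/Pentaho shop this logic would live in a test job; the in-memory SQLite tables here are stand-ins:

```python
import sqlite3

# Stand-in source and target tables; the INSERT ... SELECT plays the role
# of the ETL job under test.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE src_sales (id INTEGER, amount REAL);
    CREATE TABLE tgt_sales (id INTEGER, amount REAL);
    INSERT INTO src_sales VALUES (1, 10.0), (2, 20.0);
    INSERT INTO tgt_sales SELECT * FROM src_sales;
""")

def profile(table):
    # Row count plus a simple column checksum for the reconciliation.
    count, total = db.execute(f"SELECT COUNT(*), SUM(amount) FROM {table}").fetchone()
    return count, round(total or 0, 2)

assert profile("src_sales") == profile("tgt_sales"), "ETL output drifted from source"
print("counts and checksums match")
```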

Skills and Qualification:

  • BE, B.Tech / MS Degree in Computer Science, Engineering or a related subject.
  • 3+ years of experience.
  • Proficiency with Talend or Pentaho Data Integration / Kettle.
  • Ability to work independently.
  • Ability to handle a team.
  • Good written and oral communication skills.