ODI Developer

Posted by Dushyant Waghmare
5 - 8 yrs
₹12.5L - ₹24L / yr
Hyderabad
Skills
Data Warehouse (DWH)
Informatica
ETL

Role: ODI Developer

Location: Hyderabad (Initially remote)

Experience: 5-8 Years

 

Technovert is not your typical IT services firm. To our credit, we have two successful products generating $2M+ in licensing/SaaS revenues, which is rare in the industry.

We are obsessed with our love for technology and the infinite possibilities it can create for making this world a better place. Our clients find us at our best when we are challenged with their toughest problems, and we love chasing them. It thrills us and motivates us to deliver more. Our global delivery model has earned us the trust and reputation of being a partner of choice.

We have a strong heritage built on great people who put customers first and deliver exceptional results with no surprises, every time. We partner with you to understand the interconnection of user experience, business goals, and information technology. It is the optimal fusing of these three drivers that delivers results.

 

Must have:

  • Experience implementing DWH solutions, including developing ETL processes: ETL control tables, error logging, auditing, data quality, etc.
  • Responsible for creating ELT maps, migrating them into different environments, maintaining and monitoring the infrastructure, working with DBAs, and creating new reports that help executive and managerial levels analyze business needs and target customers.
  • Should be able to implement reusability, parameterization, workflow design, etc.
  • Expertise in the Oracle ODI toolset and OAC; knowledge of the ODI master and work repositories, data modeling, and ETL design.
  • Experience using ODI Topology Manager to create connections to various technologies such as Oracle, SQL Server, flat files, XML, etc.
  • Experience with ODI mappings, error handling, automation using ODI, load plans, and migration of objects.
  • Ability to design ETL unit test cases and debug ETL mappings; expertise in developing load plans and scheduling jobs.
  • Ability to integrate ODI with multiple sources/targets.
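The control-table, error-logging, and auditing pattern named above can be sketched in miniature. This is illustrative only: the table names, the data-quality rule, and the use of SQLite in place of Oracle are all assumptions made for the example, not Technovert's actual design.

```python
import sqlite3
from datetime import datetime, timezone

# Each batch run is registered in a control table, rows failing a quality
# check are routed to an error table, and the control row records the outcome.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE etl_control (batch_id INTEGER PRIMARY KEY, started_at TEXT, status TEXT);
CREATE TABLE dwh_orders  (order_id INTEGER, amount REAL, batch_id INTEGER);
CREATE TABLE etl_errors  (batch_id INTEGER, order_id INTEGER, reason TEXT);
""")

def run_batch(rows):
    """Load (order_id, amount) rows, diverting bad rows to the error table."""
    cur = conn.execute(
        "INSERT INTO etl_control (started_at, status) VALUES (?, 'RUNNING')",
        (datetime.now(timezone.utc).isoformat(),))
    batch_id = cur.lastrowid
    loaded = errors = 0
    for order_id, amount in rows:
        if amount is None or amount < 0:  # hypothetical data-quality rule
            conn.execute("INSERT INTO etl_errors VALUES (?, ?, ?)",
                         (batch_id, order_id, "invalid amount"))
            errors += 1
        else:
            conn.execute("INSERT INTO dwh_orders VALUES (?, ?, ?)",
                         (order_id, amount, batch_id))
            loaded += 1
    conn.execute("UPDATE etl_control SET status = ? WHERE batch_id = ?",
                 ("SUCCESS" if errors == 0 else "PARTIAL", batch_id))
    conn.commit()
    return batch_id, loaded, errors

batch_id, loaded, errors = run_batch([(1, 99.5), (2, -4.0), (3, 10.0)])
```

The same idea scales up in ODI itself, where Check Knowledge Modules divert rows that violate constraints into error tables automatically.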

 

Nice to have:

  • Exposure to Oracle Cloud Infrastructure (OCI) is preferable.
  • Knowledge of Oracle Analytics Cloud for exploring data through visualizations and for loading and modeling data.
  • Hands-on experience with ODI 12c would be an added advantage.

 

Qualification:

  • Overall 3+ years of experience in Oracle Data Integrator (ODI) and Oracle Data Integrator Cloud Service (ODICS).
  • Experience in designing and implementing the E-LT architecture required to build a data warehouse, including source-to-staging, staging-to-target, data transformations, and E-LT process flows.
  • Must be well versed and hands-on in using and customizing Knowledge Modules (KMs), with experience in performance tuning of mappings.
  • Must be self-starting, have strong attention to detail and accuracy, and be able to fill multiple roles within the Oracle environment.
  • Should be good with Oracle/SQL and have a good understanding of DDL deployments.

 


About Technovert

Founded: 2012
Type
Size: 100-1000
Stage: Profitable
About

We are a team of problem solvers passionate about design and technology, delivering Digital transformation and increasing productivity.

Connect with the team
Sashi Pagadala, Tallapragada Sameera, Vijay Yalamanchili, Ravi Krishna Teja Kuchibhotla, Kanna Uppalapati, Dushyant Waghmare, Navya Pasupuleti, Vinod Prasad Devkota, Sachin Kumar, Satyadipa Sarangi, Abhilipsa Rath, Souren Pradhan

Similar jobs

Bengaluru (Bangalore), Gurugram
1 - 7 yrs
₹4L - ₹10L / yr
Python
R Programming
SAS
Surveying
Data Analytics
+2 more

Desired Skills & Mindset:


We are looking for candidates who have demonstrated both a strong business sense and a deep understanding of the quantitative foundations of modelling.


• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions

• Statistical programming experience in SPSS; comfortable working with large data sets.

• R, Python, SAS & SQL are preferred but not mandatory

• Excellent time management skills

• Good written and verbal communication skills; understanding of both written and spoken English

• Strong interpersonal skills

• Ability to act autonomously, bringing structure and organization to work

• Creative and action-oriented mindset

• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly, and methodologies are regularly challenged

• Ability to work under pressure and deliver on tight deadlines


Qualifications and Experience:


• Graduate degree in Statistics/Economics/Econometrics/Computer Science/Engineering/Mathematics/MBA (with a strong quantitative background) or equivalent

• Strong track record of work experience in the field of business intelligence, market research, and/or advanced analytics

• Knowledge of data collection methods (focus groups, surveys, etc.)

• Knowledge of statistical packages (SPSS, SAS, R, Python, or similar), databases, and MS Office (Excel, PowerPoint, Word)

• Strong analytical and critical thinking skills

• Industry experience in Consumer Experience/Healthcare a plus

Conviva
Posted by Anusha Bondada
Bengaluru (Bangalore)
3 - 6 yrs
₹20L - ₹40L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

As Conviva is expanding, we are building products providing deep insights into end-user experience for our customers.

 

Platform and TLB Team

The vision for the TLB team is to build data processing software that works on terabytes of streaming data in real-time. Engineer the next-gen Spark-like system for in-memory computation of large time-series datasets – both Spark-like backend infra and library-based programming model. Build a horizontally and vertically scalable system that analyses trillions of events per day within sub-second latencies. Utilize the latest and greatest big data technologies to build solutions for use cases across multiple verticals. Lead technology innovation and advancement that will have a big business impact for years to come. Be part of a worldwide team building software using the latest technologies and the best of software development tools and processes.
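
As a toy, single-process illustration of the windowed time-series computation described above (the real system is distributed and handles trillions of events within sub-second latencies), here is a fixed-duration sliding window over a stream of (timestamp, value) events. The window length and events are invented for the example.

```python
from collections import deque

WINDOW_SECONDS = 10  # assumed window length for the sketch

def sliding_averages(events):
    """Yield (timestamp, average over the trailing window) per event."""
    window = deque()   # (ts, value) pairs currently inside the window
    total = 0.0
    for ts, value in events:
        window.append((ts, value))
        total += value
        # Evict events that have fallen out of the trailing window.
        while window and window[0][0] <= ts - WINDOW_SECONDS:
            _, old_val = window.popleft()
            total -= old_val
        yield ts, total / len(window)

events = [(0, 4.0), (5, 8.0), (12, 6.0)]
out = list(sliding_averages(events))
```

A production stream processor would shard this state across workers and handle out-of-order events, but the core window bookkeeping has this shape.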

 

What You’ll Do

This is an individual contributor position. Expectations will be on the below lines:

  • Design, build and maintain the stream processing, and time-series analysis system which is at the heart of Conviva’s products
  • Responsible for the architecture of the Conviva platform
  • Build features, enhancements, new services, and bug fixing in Scala and Java on a Jenkins-based pipeline to be deployed as Docker containers on Kubernetes
  • Own the entire lifecycle of your microservice including early specs, design, technology choice, development, unit-testing, integration-testing, documentation, deployment, troubleshooting, enhancements, etc.
  • Lead a team to develop a feature or parts of a product
  • Adhere to the Agile model of software development to plan, estimate, and ship per business priority

 

What you need to succeed

  • 5+ years of work experience in software development of data processing products.
  • Engineering degree in software or equivalent from a premier institute.
  • Excellent knowledge of fundamentals of Computer Science like algorithms and data structures. Hands-on with functional programming and know-how of its concepts
  • Excellent programming and debugging skills on the JVM. Proficient in writing code in Scala/Java/Rust/Haskell/Erlang that is reliable, maintainable, secure, and performant
  • Experience with big data technologies like Spark, Flink, Kafka, Druid, HDFS, etc.
  • Deep understanding of distributed systems concepts and scalability challenges including multi-threading, concurrency, sharding, partitioning, etc.
  • Experience/knowledge of Akka/Lagom framework and/or stream processing technologies like RxJava or Project Reactor will be a big plus. Knowledge of design patterns like event-streaming, CQRS and DDD to build large microservice architectures will be a big plus
  • Excellent communication skills. Willingness to work under pressure. Hunger to learn and succeed. Comfortable with ambiguity. Comfortable with complexity

 

Underpinning the Conviva platform is a rich history of innovation. More than 60 patents represent award-winning technologies and standards, including first-of-its-kind innovations like time-state analytics and AI-automated data modeling, that surface actionable insights. By understanding real-world human experiences and having the ability to act within seconds of observation, our customers can solve business-critical issues and focus on growing their business ahead of the competition. Examples of the brands Conviva has helped fuel streaming growth for include: DAZN, Disney+, HBO, Hulu, NBCUniversal, Paramount+, Peacock, Sky, Sling TV, Univision and Warner Bros Discovery.

Privately held, Conviva is headquartered in Silicon Valley, California with offices and people around the globe. For more information, visit us at www.conviva.com. Join us to help extend our leadership position in big data streaming analytics to new audiences and markets! 


EnterpriseMinds
Posted by Komal S
Remote only
4 - 10 yrs
₹10L - ₹35L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+2 more

Enterprise Minds, with a core focus on engineering products, automation and intelligence, partners with customers on the trajectory toward increasing outcomes, relevance, and growth.

Harnessing the power of data and the forces that define AI, Machine Learning and Data Science, we believe in institutionalizing go-to-market models, not just exploring possibilities.

We believe in a customer-centric ethic without and a people-centric paradigm within. With a strong sense of community, ownership, and collaboration, our people work in a spirit of co-creation, co-innovation, and co-development to engineer next-generation software products with the help of accelerators.

Through Communities we connect and attract talent that shares skills and expertise. Through Innovation Labs and global design studios we deliver creative solutions.
We create vertically isolated pods which have a narrow but deep focus. We also create horizontal pods to collaborate and deliver sustainable outcomes.

We follow Agile methodologies to fail fast and deliver scalable, modular solutions. We constantly self-assess and realign to work with each customer in the most impactful manner.

Pre-requisites for the Role 

 

  1. Job ID: EMBD0120PS
  2. Primary skills: GCP Data Engineer, BigQuery, ETL
  3. Secondary skills: Hadoop, Python, Spark
  4. Years of experience: 5-8 years
  5. Location: Remote

 

Budget: Open

Notice period: Immediate

 

 

GCP DATA ENGINEER 

Position description 

  • Designing and implementing software systems 
  • Creating systems for collecting data and for processing that data 
  • Using Extract Transform Load operations (the ETL process) 
  • Creating data architectures that meet the requirements of the business 
  • Researching new methods of obtaining valuable data and improving its quality 
  • Creating structured data solutions using various programming languages and tools 
  • Mining data from multiple areas to construct efficient business models 
  • Collaborating with data analysts, data scientists, and other teams. 
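The Extract-Transform-Load steps listed above can be sketched end to end. This is a minimal stand-in using only the standard library in place of GCP services such as BigQuery; the feed, schema, and normalization rule are invented for the example.

```python
import csv
import io
import sqlite3

# Extract: parse a raw CSV feed (invented sample data).
raw = "user_id,country,amount\n1,pt,10.0\n2,PT,5.5\n3,es,7.25\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: normalize country codes and cast amounts to numbers.
for r in rows:
    r["country"] = r["country"].upper()
    r["amount"] = float(r["amount"])

# Load: write into the target store and aggregate per country.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (user_id INT, country TEXT, amount REAL)")
db.executemany("INSERT INTO sales VALUES (:user_id, :country, :amount)", rows)
totals = dict(db.execute(
    "SELECT country, SUM(amount) FROM sales GROUP BY country").fetchall())
```

On GCP the load and aggregate steps would be the same SQL running against BigQuery; only the engine and scale change.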

Candidate profile 

  • Bachelor’s or master’s degree in information systems/engineering, computer science and management or related. 
  • 5-8 years professional experience as Big Data Engineer 
  • Proficiency in modelling and maintaining Data Lakes, preferably with PySpark. 
  • Experience with Big Data technologies (e.g., Databricks) 
  • Ability to model and optimize workflows in GCP. 
  • Experience with Streaming Analytics services (e.g., Kafka, Grafana) 
  • Analytical, innovative and solution-oriented mindset 
  • Teamwork, strong communication and interpersonal skills 
  • Rigor and organizational skills 
  • Fluency in English (spoken and written). 
Docsumo
Posted by Vaidehi Tipnis
Remote only
4 - 6 yrs
₹15L - ₹30L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
OCR

About Us :

Docsumo is Document AI software that helps enterprises capture data from and analyze customer documents. We convert documents such as invoices, ID cards, and bank statements into actionable data. We work with clients such as PayU, Arbor, and Hitachi, and are backed by Sequoia, Barclays, Techstars, and Better Capital.

 

As a Senior Machine Learning Engineer, you will work directly with the CTO to develop end-to-end API products for the US market in the information extraction domain.

 

Responsibilities :

  • You will be designing and building systems that help Docsumo process visual data, i.e., PDFs and images of documents.
  • You'll work in our Machine Intelligence team, a close-knit group of scientists and engineers who incubate new capabilities from whiteboard sketches all the way to finished apps.
  • You will get to learn the ins and outs of building core capabilities & API products that can scale globally.
  • Should have hands-on experience applying advanced statistical learning techniques to different types of data.
  • Should be able to design, build and work with RESTful Web Services in JSON and XML formats. (Flask preferred)
  • Should follow Agile principles and processes including (but not limited to) standup meetings, sprints and retrospectives.
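As an illustration of the information-extraction domain, here is a rule-based baseline that pulls key fields from document text and serializes them as JSON. The field names and regex patterns are invented for the example; production document-AI systems such as the one described here rely on ML models rather than fixed rules.

```python
import json
import re

# Hypothetical field patterns for a toy invoice format.
FIELD_PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\S+)", re.I),
    # \b keeps "Subtotal" from matching the Total rule.
    "total": re.compile(r"\bTotal\s*[:\-]?\s*\$?([\d,]+\.\d{2})", re.I),
}

def extract_fields(text):
    """Return the first match for each configured field as a JSON-ready dict."""
    out = {}
    for name, pattern in FIELD_PATTERNS.items():
        m = pattern.search(text)
        out[name] = m.group(1) if m else None
    return out

doc = "ACME Corp\nInvoice No: INV-2041\nSubtotal: $90.00\nTotal: $108.00\n"
result = extract_fields(doc)
payload = json.dumps(result)  # what a JSON REST endpoint would return
```

A baseline like this is useful for evaluating an ML extractor: the model should beat the rule set on documents the rules cannot generalize to.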

 

Skills / Requirements :

  • Minimum 3+ years experience working in machine learning, text processing, data science, information retrieval, deep learning, natural language processing, text mining, regression, classification, etc.
  • Must have a full-time degree in Computer Science or similar (Statistics/Mathematics)
  • Working with OpenCV, TensorFlow and Keras
  • Working with Python: NumPy, scikit-learn, Matplotlib, Pandas
  • Familiarity with Version Control tools such as Git
  • Theoretical and practical knowledge of SQL / NoSQL databases with hands-on experience in at least one database system.
  • Must be self-motivated, flexible, collaborative, with an eagerness to learn
 
MNC
Bengaluru (Bangalore)
3 - 8 yrs
₹15L - ₹18L / yr
Data Analytics
SQL server
SQL
Data Analyst

1. Ability to work independently and to set priorities while managing several projects simultaneously; strong attention to detail is essential.
2. Collaborates with Business Systems Analysts and/or directly with key business users to ensure business requirements and report specifications are documented accurately and completely.
3. Develop data field mapping documentation.
4. Document data sources and processing flow.
5. Ability to design, refine and enhance existing reports from source systems or data warehouse.
6. Ability to analyze and optimize data, including data deduplication required for reports.
7. Analysis and rationalization of reports.
8. Support QA and UAT teams in defining test scenarios and clarifying requirements.
9. Effectively communicate results of the data analysis to internal and external customers to support decision making.
10. Follows established SDLC, change control, release management and incident management processes.
11. Perform source data analysis and assessment.
12. Perform data profiling to capture business and technical rules.
13. Track and help to remediate issues and defects due to data quality exceptions.


Read more
Fragma Data Systems
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore)
3.5 - 8 yrs
₹5L - ₹18L / yr
PySpark
Data engineering
Data Warehouse (DWH)
SQL
Spark
+1 more
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business rules processing and data extraction from a Data Lake into data streams for business consumption
• Good customer communication
• Good analytical skills
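The DataFrame operations called out above (filter, groupBy, agg) can be mirrored in plain Python to make the shape of a typical business-rule pipeline visible without a Spark cluster. The records and rule are invented for the example; in PySpark the equivalent would read roughly df.filter(col("amount") > 0).groupBy("category").agg(sum("amount")).

```python
from collections import defaultdict

# Invented sample records; the business rule drops non-positive amounts
# (e.g., reversals) before aggregation.
records = [
    {"category": "books", "amount": 12.0},
    {"category": "books", "amount": -3.0},
    {"category": "games", "amount": 30.0},
]

# filter: apply the business rule.
valid = [r for r in records if r["amount"] > 0]

# groupBy + agg: sum amounts per category.
totals = defaultdict(float)
for r in valid:
    totals[r["category"]] += r["amount"]
totals = dict(totals)
```

Spark distributes exactly this computation across partitions, which is why expressing rules as filter/aggregate stages keeps them scalable.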
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large scale enterprise data solutions and applications using one or more of AZURE data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsights, Databricks, CosmosDB, EventHub/IOTHub.
  • Experience in migrating on-premise data warehouses to data platforms on AZURE cloud. 
  • Designing and implementing data engineering, ingestion, and transformation functions
  • Azure Synapse or Azure SQL data warehouse
  • Spark on Azure (available in HDInsight and Databricks)
 
Good to Have: 
  • Experience with Azure Analysis Services
  • Experience in Power BI
  • Experience with third-party solutions like Attunity/StreamSets, Informatica
  • Experience with PreSales activities (Responding to RFPs, Executing Quick POCs)
  • Capacity Planning and Performance Tuning on Azure Stack and Spark.
Streetmark
Agency job
via STREETMARK Info Solutions by Mohan Guttula
Remote, Bengaluru (Bangalore), Chennai
3 - 9 yrs
₹3L - ₹20L / yr
SCCM
PL/SQL
APPV
Stani's Python Editor
AWS Simple Notification Service (SNS)
+3 more

Hi All,

We are hiring a Data Engineer for one of our clients for the Bangalore & Chennai locations.


Strong knowledge of SCCM, App-V, and Intune infrastructure.

Powershell/VBScript/Python,

Windows Installer

Knowledge of Windows 10 registry

Application Repackaging

Application Sequencing with App-v

Deploying and troubleshooting applications, packages, and Task Sequences.

Security patch deployment and remediation

Windows operating system patching and defender updates

 

Thanks,
Mohan.G

Rivet Systems Pvt Ltd.
Posted by Shobha B K
Bengaluru (Bangalore)
5 - 19 yrs
₹10L - ₹30L / yr
ETL
Hadoop
Big Data
Pig
Spark
+2 more
Strong exposure in ETL / Big Data / Talend / Hadoop / Spark / Hive / Pig

To be considered for a Senior Data Engineer position, a candidate must have a proven track record of architecting data solutions on current and advanced technical platforms, and the leadership ability to lead a team providing data-centric solutions with best practices and modern technologies in mind. They build collaborative relationships across all levels of the business and the IT organization. They possess analytical and problem-solving skills and the ability to research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality, work effectively with business and customers to obtain business value for the requested work, and can communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations. They have demonstrated the ability to perform high-quality work with innovation, both independently and collaboratively.

Dataweave Pvt Ltd
Posted by Pramod Shivalingappa S
Bengaluru (Bangalore)
5 - 7 yrs
Best in industry
Python
Data Science
R Programming
(Senior) Data Scientist Job Description

About us
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.

Data Science@DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.

How we work
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web. At serious scale!

What do we offer?
● Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
● Ability to see the impact of your work and the value you're adding to our customers almost immediately.
● Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
● A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
● Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
● Last but not the least, competitive salary packages and fast-paced growth opportunities.

Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production grade data science applications at scale. Such a candidate has keen interest in liaising with the business and product teams to understand a business problem, and translate that into a data science problem.

You are also expected to develop capabilities that open up new business productization opportunities.

We are looking for someone with a Master's degree and 1+ years of experience working on problems in NLP or Computer Vision.

If you have 4+ years of relevant experience with a Master's degree (PhD preferred), you will be considered for a senior role.

Key problem areas
● Preprocessing and feature extraction on noisy and unstructured data -- both text and images.
● Keyphrase extraction, sequence labeling, and entity relationship mining from texts in different domains.
● Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
● Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
● Ensemble approaches for all the above problems using multiple text and image based techniques.

Relevant set of skills
● Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
● Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
● Excellent coding skills in multiple programming languages, with experience building production grade systems. Prior experience with Python is a bonus.
● Experience building and shipping machine learning models that solve real world engineering problems. Prior experience with deep learning is a bonus.
● Experience building robust clustering and classification models on unstructured data (text, images, etc.). Experience working with Retail domain data is a bonus.
● Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
● Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, and Tensorflow.
● Use the command line like a pro. Be proficient in Git and other essential software development tools.
● Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
● Be a self-starter: someone who thrives in fast paced environments with minimal 'management'.
● It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of the projects you have hosted on GitHub.
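As a minimal sketch of the text clustering/classification work described above, here is a bag-of-words nearest-centroid classifier using cosine similarity. The corpus and categories are invented for the example; in practice this team would use scikit-learn, Keras, or similar libraries rather than hand-rolled vectors.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for a document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * b[term] for term, count in a.items() if term in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented centroids standing in for learned cluster/category representatives.
centroids = {
    "electronics": vectorize("usb cable hdmi adapter charger"),
    "apparel": vectorize("cotton shirt sleeve collar fit"),
}

def classify(text):
    """Assign a document to the nearest centroid by cosine similarity."""
    vec = vectorize(text)
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))

category = classify("braided usb charger cable")
```

Swapping raw counts for TF-IDF weights and centroids for learned models is the usual next step, but the assignment logic keeps this shape.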

Role and responsibilities
● Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
● Conduct research. Do experiments. Quickly build throw away prototypes to solve problems pertaining to the Retail domain.
● Build robust clustering and classification models in an iterative manner that can be used in production.
● Constantly think scale, think automation. Measure everything. Optimize proactively.
● Take end to end ownership of the projects you are working on. Work with minimal supervision.
● Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
● Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
● Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
● Stay on top of latest research in deep learning, NLP, Computer Vision, and other relevant areas.
Fortunefootprints.com
Posted by Rupanjali Sen Gupta
NCR (Delhi | Gurgaon | Noida)
3 - 5 yrs
₹12L - ₹20L / yr
Business Intelligence (BI)
Data Science
Predictive modelling
As a Senior Consultant, Analytics, you shall gather, analyze, and identify actionable data insights to solve real, complex challenges. Your work will directly influence decisions taken to build the most successful web/mobile gaming platform, with games played by millions across the world.

Responsibilities
• Leverage complex data algorithms and numerical methods for forecasting, simulation, and predictive analysis, designed to optimize key decisions and metrics such as retention, engagement, and operations.
• Work with all teams in the company to understand, plan and drive data-based actionable decisions, championing the Data Science culture.
• Analyze consumer data to drive insight on how to target effectively, retain customers at low cost, and optimize engagement metrics.
• Develop a reporting process for all KPIs.
• Recruit, develop and coach talent.
• Champion awareness of data indicators within the organization and teach everyone to proactively identify data patterns, growth trends, and areas for improvement.

Qualifications
• B.Tech or Bachelor's degree in Data Science; an MBA or advanced degree in a quantitative discipline is preferred.
• 3+ years in Data Science and Analytics execution, with specific experience in all stages (gathering, hygiene, and analysis of data) resulting in actionable insights for end-to-end product, marketing and business operations.
• Knowledge of advanced analytics, including but not limited to regression, GLMs, survival modeling, predictive analytics, forecasting, machine learning, decision trees, etc.
• Experience in at least one statistical and analytic tool or language: R, Python, SAS, etc.
• Expertise in analysis algorithms, numerical methods and data tactics used to drive operational excellence, user engagement and retention.
• Expertise in at least one visualization platform such as Tableau, QlikView, Spotfire or Excel; Tableau skills preferred. Advanced SQL is a mandatory skill.
• Ability to creatively solve business problems through innovative approaches.
• Ability to work with various teams in a complex environment, ensuring timely delivery of multiple projects.
• Highly analytical, with the ability to collate, analyze and present data and drive clear insights into decisions that improve KPIs.
• Ability to effectively communicate and manage relationships with senior management, company divisions and partners.