Data Engineer
A Product Based Client, Chennai

Agency job
4 - 8 yrs
₹10L - ₹15L / yr
Chennai
Skills
ETL
Informatica
Data Warehouse (DWH)
Spark
PySpark
Python
SQL

Analytics Job Description

We are hiring an Analytics Engineer to help drive our Business Intelligence efforts. You will partner closely with leaders across the organization, working together to understand the how and why of people, team and company challenges, workflows and culture. The team is responsible for delivering data and insights that drive decision-making, execution, and investments for our product initiatives.

You will work cross-functionally with product, marketing, sales, engineering, finance, and our customer-facing teams, enabling them with data and narratives about the customer journey. You’ll also work closely with other data teams, such as data engineering and product analytics, to ensure we are creating a strong data culture at Blend that enables our cross-functional partners to be more data-informed.


Role: Data Engineer

Please find the JD for the Data Engineer role below.

Location: Guindy, Chennai

How you’ll contribute:

• Develop objectives and metrics, ensure priorities are data-driven, and balance short-term and long-term goals
• Develop deep analytical insights to inform and influence product roadmaps and business decisions and help improve the consumer experience
• Work closely with GTM and supporting operations teams to author and develop core data sets that empower analyses
• Deeply understand the business and proactively spot risks and opportunities
• Develop dashboards and define metrics that drive key business decisions
• Build and maintain scalable ETL pipelines via solutions such as Fivetran, Hightouch, and Workato
• Design our Analytics and Business Intelligence architecture, assessing and implementing new technologies that fit
• Work with our engineering teams to continually make our data pipelines and tooling more resilient


Who you are:

• Bachelor’s degree or equivalent from an accredited institution with a quantitative focus (such as Economics, Operations Research, Statistics, or Computer Science), OR 1-3 years of experience as a Data Analyst, Data Engineer, or Data Scientist
• Must have strong SQL and data modeling skills, with experience applying those skills to thoughtfully create data models in a warehouse environment
• A proven track record of using analysis to drive key decisions and influence change
• A strong storyteller with the ability to communicate effectively with managers and executives
• Demonstrated ability to define metrics for product areas, understand the right questions to ask, push back on stakeholders in the face of ambiguous, complex problems, and work with diverse teams with different goals
• A passion for documentation
• A solution-oriented growth mindset; you’ll need to be a self-starter and thrive in a dynamic environment
• A bias towards communication and collaboration with business and technical stakeholders
• Quantitative rigor and systems thinking
• Prior startup experience is preferred, but not required
• Interest or experience in machine learning techniques (such as clustering, decision trees, and segmentation)
• Familiarity with a scientific computing language, such as Python, for data wrangling and statistical analysis
• Experience with a SQL-focused data transformation framework such as dbt
• Experience with a Business Intelligence tool such as Mode or Tableau


Mandatory Skillset:

- Very strong SQL
- Spark or PySpark or Python
- Shell scripting
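
To make the skillset above concrete, here is a small, hedged sketch (an illustration added for clarity, not part of the client's JD) of a PySpark job that runs a Spark SQL aggregation and is written to be launched from a shell script via spark-submit. All paths and column names (events-path, event_date, user_id, event_type) are hypothetical.

    # daily_event_rollup.py -- illustrative sketch only; paths and columns are hypothetical
    import argparse
    from pyspark.sql import SparkSession

    def main():
        parser = argparse.ArgumentParser(description="Daily event rollup")
        parser.add_argument("--events-path", required=True)   # e.g. s3a://bucket/events/
        parser.add_argument("--output-path", required=True)
        parser.add_argument("--run-date", required=True)      # e.g. 2024-01-31
        args = parser.parse_args()

        spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

        # Read one day's worth of raw events (assumes Parquet with an event_date column)
        events = spark.read.parquet(args.events_path).where(f"event_date = '{args.run_date}'")
        events.createOrReplaceTempView("events")

        # The same aggregation could be written with DataFrame functions; Spark SQL shown here
        daily = spark.sql("""
            SELECT event_date,
                   event_type,
                   COUNT(*)                AS event_count,
                   COUNT(DISTINCT user_id) AS unique_users
            FROM events
            GROUP BY event_date, event_type
        """)

        daily.write.mode("overwrite").parquet(args.output_path)
        spark.stop()

    if __name__ == "__main__":
        main()

A wrapper shell script would then be little more than a spark-submit call that fills in the run date and the input/output paths.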




Similar jobs

Bengaluru (Bangalore)
5 - 10 yrs
Best in industry
ETL
Informatica
Data Warehouse (DWH)
Power BI
Databricks
+4 more

About The Company


The client is a 17-year-old multinational company headquartered in Whitefield, Bangalore, with another delivery center in Hinjewadi, Pune. It also has offices in the US and Germany, works with several OEMs and product companies in about 12 countries, and has a 200+ strong team worldwide.


The Role


Power BI front-end developer in the Data Domain (Manufacturing, Sales & Marketing, Purchasing, Logistics, …). You will be responsible for the Power BI front-end design, development, and delivery of highly visible data-driven applications in the Compressor Technique. You always take a quality-first approach, ensuring the data is visualized in a clear, accurate, and user-friendly manner. You always ensure standards and best practices are followed and that documentation is created and maintained. Where needed, you take initiative and make recommendations to drive improvements. In this role you will also be involved in the tracking, monitoring, and performance analysis of production issues and the implementation of bug fixes and enhancements.


Skills & Experience


• The ideal candidate has a degree in Computer Science, Information Technology, or equivalent experience.
• Strong knowledge of BI development principles, time intelligence, functions, dimensional modeling, and data visualization is required.
• Advanced knowledge and 5-10 years of experience with professional BI development and data visualization is preferred.
• You are familiar with data warehouse concepts.
• Knowledge of MS Azure (Data Lake, Databricks, SQL) is considered a plus.
• Experience with scripting languages such as PowerShell and Python to set up and automate Power BI platform related activities is an asset.
• Good knowledge (oral and written) of English is required.

Carsome
Posted by Piyush Palkar
Remote, Kuala Lumpur
2 - 5 yrs
₹20L - ₹30L / yr
Python
Amazon Web Services (AWS)
Django
Flask
TensorFlow
+2 more
Carsome is a growing startup that is utilising data to improve the experience of second-hand car shoppers. This involves developing, deploying and maintaining machine learning models that are used to improve our customers' experience. We are looking for candidates who are aware of the machine learning project lifecycle and can help manage ML deployments.

Responsibilities:
• Write and maintain production-level code in Python for deploying machine learning models
• Create and maintain deployment pipelines through CI/CD tools (preferably GitLab CI)
• Implement alerts and monitoring for prediction accuracy and data drift detection
• Implement automated pipelines for training and replacing models
• Work closely with the data science team to deploy new models to production

Required Qualifications:
• Degree in Computer Science, Data Science, IT or a related discipline
• 2+ years of experience in software engineering or data engineering
• Programming experience in Python
• Experience in data profiling, ETL development, testing and implementation
• Experience in deploying machine learning models

Good to have:
• Experience in AWS resources for ML and data engineering (SageMaker, Glue, Athena, Redshift, S3)
• Experience in deploying TensorFlow models
• Experience in deploying and managing MLflow
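
As a hedged illustration of the model-deployment work described above (not Carsome's actual service), here is a minimal Flask endpoint wrapping a pre-trained scikit-learn model. The model file name, feature list, and port are assumptions made for the sketch.

    # serve.py -- minimal illustrative sketch of a prediction endpoint
    from flask import Flask, jsonify, request
    import joblib

    app = Flask(__name__)

    # Hypothetical artifact produced by a training pipeline
    model = joblib.load("model.joblib")
    FEATURES = ["mileage_km", "age_years", "engine_cc"]   # assumed feature order

    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json(force=True)
        # Build the feature row in the order the model was trained on
        row = [[payload[name] for name in FEATURES]]
        prediction = model.predict(row)[0]
        return jsonify({"prediction": float(prediction)})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

In a CI/CD setup of the kind listed above, a service like this would typically be containerised and rolled out by the pipeline, with monitoring and drift checks layered on top.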
Arting Digital
Posted by Pragati Bhardwaj
Navi Mumbai
6 - 10 yrs
₹15L - ₹18L / yr
Data Science
Machine Learning (ML)
Python
SQL
AWS
+3 more

Title: Data Scientist

Experience: 6 years

Work Mode: Onsite

Primary Skills: Data Science, SQL, Python, Data Modelling, Azure, AWS, Banking Domain (BFSI/NBFC)

Qualification: Any

Roles & Responsibilities:

 

1. Acquiring, cleaning, and preprocessing raw data for analysis.
2. Utilizing statistical methods and tools for analyzing and interpreting complex datasets.
3. Developing and implementing machine learning models for predictive analysis.
4. Creating visualizations to effectively communicate insights to both technical and non-technical stakeholders.
5. Collaborating with cross-functional teams, including data engineers, business analysts, and domain experts.
6. Evaluating and optimizing the performance of machine learning models for accuracy and efficiency.
7. Identifying patterns and trends within data to inform business decision-making.
8. Staying updated on the latest advancements in data science, machine learning, and relevant technologies.

 

Requirements:

1. Experience with modeling techniques such as Linear Regression, clustering, and classification techniques.
2. Must have a passion for data, structured or unstructured. 0.6 – 5 years of hands-on experience with Python and SQL is a must.
3. Should have sound experience in data mining, data analysis and machine learning techniques.
4. Excellent critical thinking, verbal and written communication skills.
5. Ability and desire to work in a proactive, highly engaging, high-pressure, client service environment.
6. Good presentation skills.
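
A minimal, hedged scikit-learn sketch of the modelling loop implied by the responsibilities and requirements above (prepare data, fit a classifier, evaluate). The synthetic columns stand in for real BFSI data and are purely illustrative.

    # Illustrative end-to-end sketch; synthetic data stands in for real BFSI data
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(42)
    df = pd.DataFrame({
        "income": rng.normal(50_000, 15_000, 1_000),
        "utilisation": rng.uniform(0, 1, 1_000),
        "tenure_months": rng.integers(1, 120, 1_000),
    })
    # Hypothetical target: a default flag loosely driven by utilisation
    df["default"] = (df["utilisation"] + rng.normal(0, 0.2, 1_000) > 0.8).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        df.drop(columns="default"), df["default"], test_size=0.2, random_state=42
    )

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))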


Fragma Data Systems
Posted by Vamsikrishna G
Bengaluru (Bangalore)
2 - 10 yrs
₹5L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+1 more
Job Description:

Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good understanding of ELT architecture: business rules processing and data extraction from the Data Lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
Srijan Technologies
Posted by PriyaSaini
Remote only
2 - 6 yrs
₹8L - ₹13L / yr
PySpark
SQL
Data modeling
Data Warehouse (DWH)
Informatica
+2 more
• 3+ years of professional work experience with a reputed analytics firm
• Expertise in handling large amounts of data through Python or PySpark
• Conduct data assessment, perform data quality checks and transform data using SQL and ETL tools
• Experience deploying ETL / data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued
• Comfort with data modelling principles (e.g. database structure, entity relationships, UID, etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
• A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
• Track record of strong problem-solving, requirement gathering, and leading by example
• Ability to thrive in a flexible and collaborative environment
• Track record of completing projects successfully on time, within budget and as per scope
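As a hedged illustration of the "data assessment and data quality checks" bullet above, here is a small PySpark sketch that profiles nulls and duplicate keys; the source path and the customer_id key are invented for the example.

    # Illustrative data quality checks in PySpark; table/column names are hypothetical
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq_checks").getOrCreate()
    df = spark.read.parquet("s3a://example-bucket/customers/")   # assumed source

    total_rows = df.count()

    # Null counts per column
    null_counts = df.select([
        F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns
    ])
    null_counts.show()

    # Duplicate primary keys (customer_id is an assumed key)
    duplicates = df.groupBy("customer_id").count().filter(F.col("count") > 1)
    print(f"rows: {total_rows}, duplicate keys: {duplicates.count()}")
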
Bengaluru (Bangalore)
2 - 4 yrs
₹12L - ₹16L / yr
Python
Bash
MySQL
Elasticsearch
Amazon Web Services (AWS)

What are we looking for:

 

  1. Strong experience in MySQL and writing advanced queries
  2. Strong experience in Bash and Python
  3. Familiarity with ElasticSearch, Redis, Java, NodeJS, ClickHouse, S3
  4. Exposure to cloud services such as AWS, Azure, or GCP
  5. 2+ years of experience in production support
  6. Strong experience in log management and performance monitoring with tools like ELK, Prometheus + Grafana, and logging services on various cloud platforms
  7. Strong understanding of Linux OSes like Ubuntu, CentOS / Red Hat Linux
  8. Interest in learning new languages / frameworks as needed
  9. Good written and oral communication skills
  10. A growth mindset and a passion for building things from the ground up, and most importantly, you should be fun to work with

 

As a product solutions engineer, you will:

 

  1. Analyze recorded runtime issues, diagnose them, and make occasional code fixes of low to medium complexity
  2. Work with developers to find and correct more complex issues
  3. Address urgent issues quickly; work within and measure against customer SLAs
  4. Use shell and Python scripts to actively automate manual / repetitive activities
  5. Build anomaly detectors wherever applicable
  6. Pass articulated feedback from customers to the development and product team
  7. Maintain an ongoing record of problem analysis and resolution in an on-call monitoring system
  8. Offer technical support needed in development
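
A hedged, stdlib-only sketch of the scripting-based automation in point 4 above: scan application logs for error lines and print a summary that a cron job could turn into an alert. The log directory and message format are assumptions.

    # summarise_errors.py -- illustrative stdlib-only automation sketch
    import re
    from collections import Counter
    from pathlib import Path

    LOG_DIR = Path("/var/log/myapp")            # hypothetical log location
    ERROR_RE = re.compile(r"ERROR\s+(\S+)")     # assumes "ERROR <component> ..." lines

    counts = Counter()
    for log_file in LOG_DIR.glob("*.log"):
        for line in log_file.read_text(errors="ignore").splitlines():
            match = ERROR_RE.search(line)
            if match:
                counts[match.group(1)] += 1

    # Print the noisiest components; a cron job could mail or post this summary
    for component, count in counts.most_common(10):
        print(f"{component}: {count} errors")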

 

Mobile Programming LLC
Posted by Apurva Kalsotra
Mohali, Gurugram, Pune, Bengaluru (Bangalore), Hyderabad, Chennai
3 - 8 yrs
₹2L - ₹9L / yr
Data engineering
Data engineer
Spark
Apache Spark
Apache Kafka
+13 more

Responsibilities for Data Engineer

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer

  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:

  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
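
To ground the workflow-management tooling listed above, here is a minimal Apache Airflow 2.x DAG sketch; the DAG id, schedule, and task commands are placeholders rather than a known pipeline.

    # example_dag.py -- minimal Airflow 2.x sketch; task commands are placeholders
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_events_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",          # use schedule_interval on Airflow < 2.4
        catchup=False,
    ) as dag:
        extract = BashOperator(task_id="extract",
                               bash_command="echo 'pull raw files from the source system'")
        transform = BashOperator(task_id="transform",
                                 bash_command="echo 'spark-submit transform_job.py'")
        load = BashOperator(task_id="load",
                            bash_command="echo 'COPY the output into the warehouse'")

        # Linear dependency: extract -> transform -> load
        extract >> transform >> load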
FogHorn Systems
Posted by Abhishek Vijayvargia
Pune
0 - 7 yrs
₹15L - ₹50L / yr
R Programming
Python
Data Science

Role and Responsibilities

  • Execute data mining projects, training and deploying models over a typical duration of 2-12 months.
  • The ideal candidate should be able to innovate, analyze the customer requirement, develop a solution within the time box of the project plan, and execute and deploy the solution.
  • Integrate the data mining projects as embedded data mining applications in the FogHorn platform (on Docker or Android).

Core Qualifications
Candidates must meet ALL of the following qualifications:

  • Have analyzed, trained and deployed at least three data mining models in the past. If the candidate did not directly deploy their own models, they will have worked with others who have put their models into production. The models should have been validated as robust over at least an initial time period.
  • Three years of industry work experience, developing data mining models which were deployed and used.
  • Core programming experience in Python, using data mining related libraries like Scikit-Learn. Other relevant Python mining libraries include NumPy, SciPy and Pandas.
  • Data mining algorithm experience in at least 3 algorithms across: prediction (statistical regression, neural nets, deep learning, decision trees, SVM, ensembles), clustering (k-means, DBSCAN or other) or Bayesian networks
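
As a small, hedged illustration of the clustering experience called out above (k-means on synthetic sensor-like data; not a FogHorn deliverable):

    # Illustrative k-means clustering on synthetic "sensor" data
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Two hypothetical operating regimes of a machine: normal vs. high-load
    normal = rng.normal(loc=[50.0, 0.2], scale=[5.0, 0.05], size=(500, 2))
    high_load = rng.normal(loc=[80.0, 0.6], scale=[8.0, 0.10], size=(100, 2))
    readings = np.vstack([normal, high_load])   # columns: temperature, vibration

    scaled = StandardScaler().fit_transform(readings)
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaled)

    # Cluster sizes give a quick sanity check on the regime split
    print(np.bincount(kmeans.labels_))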

Bonus Qualifications
Any of the following extra qualifications will make a candidate more competitive:

  • Soft Skills
    • Sets expectations, develops project plans and meets expectations.
    • Experience adapting technical dialogue to the right level for the audience (i.e. executives) or specific jargon for a given vertical market and job function.
  • Technical skills
    • Commonly, candidates have a MS or Ph.D. in Computer Science, Math, Statistics or an engineering technical discipline. BS candidates with experience are considered.
    • Have managed past models in production over their full life cycle until model replacement is needed. Have developed automated model refreshing on newer data. Have developed frameworks for model automation as a prototype for product.
    • Training or experience in Deep Learning, such as TensorFlow, Keras, convolutional neural networks (CNN) or Long Short Term Memory (LSTM) neural network architectures. If you don’t have deep learning experience, we will train you on the job.
    • Shrinking deep learning models, optimizing to speed up execution time of scoring or inference.
    • OpenCV or other image processing tools or libraries
    • Cloud computing: Google Cloud, Amazon AWS or Microsoft Azure. We have integration with Google Cloud and are working on other integrations.
    • Experience with decision tree ensembles like XGBoost or Random Forests is helpful.
    • Complex Event Processing (CEP) or other streaming data as a data source for data mining analysis
    • Time series algorithms from ARIMA to LSTM to Digital Signal Processing (DSP).
    • Bayesian Networks (BN), a.k.a. Bayesian Belief Networks (BBN) or Graphical Belief Networks (GBN)
    • Experience with PMML is of interest (see www.DMG.org).
  • Vertical experience in Industrial Internet of Things (IoT) applications:
    • Energy: Oil and Gas, Wind Turbines
    • Manufacturing: Motors, chemical processes, tools, automotive
    • Smart Cities: Elevators, cameras on population or cars, power grid
    • Transportation: Cars, truck fleets, trains

 

About FogHorn Systems
FogHorn is a leading developer of “edge intelligence” software for industrial and commercial IoT application solutions. FogHorn’s Lightning software platform brings the power of advanced analytics and machine learning to the on-premise edge environment enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance and operational intelligence use cases. FogHorn’s technology is ideally suited for OEMs, systems integrators and end customers in manufacturing, power and water, oil and gas, renewable energy, mining, transportation, healthcare, retail, as well as Smart Grid, Smart City, Smart Building and connected vehicle applications.

Press: https://www.foghorn.io/press-room/

Awards: https://www.foghorn.io/awards-and-recognition/

  • 2019 Edge Computing Company of the Year – Compass Intelligence
  • 2019 Internet of Things 50: 10 Coolest Industrial IoT Companies – CRN
  • 2018 IoT Platforms Leadership Award & Edge Computing Excellence – IoT Evolution World Magazine
  • 2018 10 Hot IoT Startups to Watch – Network World. (Gartner estimated 20 billion connected things in use worldwide by 2020)
  • 2018 Winner in Artificial Intelligence and Machine Learning – Globe Awards
  • 2018 Ten Edge Computing Vendors to Watch – ZDNet & 451 Research
  • 2018 The 10 Most Innovative AI Solution Providers – Insights Success
  • 2018 The AI 100 – CB Insights
  • 2017 Cool Vendor in IoT Edge Computing – Gartner
  • 2017 20 Most Promising AI Service Providers – CIO Review

Our Series A round was for $15 million. Our Series B round was for $30 million in October 2017. Investors include Saudi Aramco Energy Ventures, Intel Capital, GE, Dell, Bosch, Honeywell and The Hive.

About the Data Science Solutions team
In 2018, our Data Science Solutions team grew from 4 to 9. We are growing again from 11. We work on revenue-generating projects for clients, such as predictive maintenance, time to failure, and manufacturing defects. About half of our projects have been related to vision recognition or deep learning. We are not only working on consulting projects but also developing vertical solution applications that run on our Lightning platform, with embedded data mining.

Our data scientists like our team because:

  • We care about “best practices”
  • We have a direct impact on the company’s revenue
  • We give or receive mentoring as part of the collaborative process
  • Questioning and challenging the status quo with data is safe
  • Intellectual curiosity is balanced with humility
  • We present papers or projects in our “Thought Leadership” meeting series, to support continuous learning

 

Data ToBiz
Posted by Ankush Sharma
Chandigarh
2 - 5 yrs
₹4L - ₹6L / yr
Algorithms
ETL
Python
Machine Learning (ML)
Deep Learning
+3 more
Job Summary

DataToBiz is an AI and Data Analytics Services startup. We are a team of young and dynamic professionals looking for an exceptional data scientist to join our team in Chandigarh. We are trying to solve some very exciting business challenges by applying cutting-edge Machine Learning and Deep Learning technology. Being a consulting and services startup, we are looking for quick learners who can work in a cross-functional team of consultants, SMEs from various domains, UX architects, and application development experts to deliver compelling solutions through the application of Data Science and Machine Learning. The desired candidate will have a passion for finding patterns in large datasets, an ability to quickly understand the underlying domain, and the expertise to apply Machine Learning tools and techniques to create insights from the data.

Responsibilities and Duties

As a Data Scientist on our team, you will be responsible for solving complex big-data problems for various clients (on-site and off-site) using data mining, statistical analysis, machine learning, and deep learning.
• Understand the business need and translate it into an actionable analytical plan in consultation with the team; ensure that the analytical plan aligns with the customer’s overall strategic need.
• Understand and identify appropriate data sources required for solving the business problem at hand.
• Explore, diagnose and resolve any data discrepancies, including but not limited to any ETL that may be required, and missing value and extreme value/outlier treatment using appropriate methods.
• Execute the project plan to meet requirements and timelines; identify success metrics and monitor them to ensure high-quality output for the client.
• Deliver production-ready models that can be deployed in the production system.
• Create relevant output documents as required, such as PowerPoint decks, Excel files, and data frames.
• Overall project management: create a project plan and timelines for the project and obtain sign-off; monitor project progress in conjunction with the project plan and report risks, scope creep, etc. in a timely manner.
• Identify and evangelize new and upcoming analytical trends in the market within the organization.
• Implement the applications of these algorithms/methods/techniques in R/Python.

Required Experience, Skills and Qualifications
• 3+ years of experience working on Data Mining and Statistical Modeling for predictive and prescriptive enterprise analytics.
• 2+ years of working with Python and machine learning, with exposure to one or more ML/DL frameworks like TensorFlow, Caffe, Scikit-Learn, MXNet, CNTK.
• Exposure to ML techniques and algorithms to work with different data formats, including structured data, unstructured data, and natural language.
• Experience working with data retrieval and manipulation tools for various data sources like REST/SOAP APIs, relational (MySQL) and NoSQL databases (MongoDB), IoT data streams, cloud-based storage, and HDFS.
• Strong foundation in algorithms and Data Science theory.
• Strong verbal and written communication skills with other developers and business clients.
• Knowledge of the Telecom and/or FinTech domain is a plus.
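
A hedged sketch of the missing-value and outlier treatment step mentioned above, using pandas; the column names and the median/mode/winsorising choices are illustrative, not a prescribed method.

    # Illustrative missing-value and outlier treatment with pandas; column names are invented
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "revenue": [120.0, np.nan, 135.0, 150.0, 5000.0, 128.0],
        "region": ["N", "S", None, "S", "N", "E"],
    })

    # Missing values: median for numeric columns, mode for categoricals
    df["revenue"] = df["revenue"].fillna(df["revenue"].median())
    df["region"] = df["region"].fillna(df["region"].mode().iloc[0])

    # Outliers: cap numeric values outside the 1st-99th percentile (winsorising)
    low, high = df["revenue"].quantile([0.01, 0.99])
    df["revenue"] = df["revenue"].clip(lower=low, upper=high)

    print(df)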
The Smart Cube
Posted by Jasmine Batra
Remote, Noida, NCR (Delhi | Gurgaon | Noida)
2 - 5 yrs
₹2L - ₹5L / yr
R Programming
Advanced analytics
Python
Marketing analytics
• Act as a lead analyst on various data analytics projects aiding strategic decision making for Fortune 500 / FTSE 100 companies, blue chip consulting firms and global financial services companies
• Understand the client objectives, and work with the PL to design the analytical solution/framework; be able to translate the client objectives / analytical plan into clear deliverables with associated priorities and constraints
• Collect, organize, prepare and manage data for the analysis and conduct quality checks
• Use and implement basic and advanced statistical techniques like frequencies, cross-tabs, correlation, Regression, Decision Trees, Cluster Analysis, etc. to identify key actionable insights from the data
• Develop complete sections of the final client report in PowerPoint; identify trends and evaluate insights in terms of logic and reasoning, and be able to succinctly present them as an executive summary/taglines
• Conduct sanity checks of the analysis output based on reasoning and common sense, and be able to do a rigorous self-QC, as well as of the work assigned to analysts, to ensure an error-free output
• Aid in decision making related to client management, and be able to take client calls relatively independently
• Support the project leads in managing small teams of 2-3 analysts, independently set targets and communicate them to team members
• Discuss queries/certain sections of deliverable reports over client calls or video conferences

Technical Skills:
• Hands-on experience with one or more statistical tools such as SAS, R and Python
• Working knowledge or experience in using SQL Server (or other RDBMS tools) would be an advantage

Work Experience:
• 2-4 years of relevant experience in Marketing Analytics / MR
• Experience in managing, cleaning and analysis of large datasets using statistical packages like SAS, R, Python, etc.
• Experience in data management using SQL queries on tools like Access / SQL Server
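
A brief, hedged illustration of the basic techniques named above (frequencies, cross-tabs, correlation) in Python; the survey-style data is synthetic and the column names are invented.

    # Illustrative frequencies, cross-tab and correlation on synthetic survey-style data
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)
    df = pd.DataFrame({
        "segment": rng.choice(["Premium", "Standard"], size=200),
        "channel": rng.choice(["Online", "Retail"], size=200),
        "spend": rng.gamma(shape=2.0, scale=50.0, size=200),
        "visits": rng.poisson(lam=4, size=200),
    })

    print(df["segment"].value_counts())                  # frequencies
    print(pd.crosstab(df["segment"], df["channel"]))     # cross-tab
    print(df[["spend", "visits"]].corr())                # correlation matrix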