BigData Developer (Spark+Python)
Posted by Priyanka Malani
2 - 15 yrs
₹10L - ₹30L / yr
Pune
Skills
Spark
Big Data
Apache Spark
Python
PySpark
Hadoop

We are looking for a skilled Senior/Lead Big Data Engineer to join our team. The role sits within the research and development team, where your enthusiasm and knowledge will make you our technical evangelist for the development of our inspection technology and products.

 

At Elop we are developing product lines for sustainable infrastructure management using our own patented ultrasound scanner technology, combining it with other data sources to build a holistic overview of a concrete structure. At Elop we will provide you with world-class colleagues who are highly motivated to establish the company as an international standard for structural health monitoring. With the right character you will be professionally challenged and developed.

This position requires travel to Norway.

 

Elop is a sister company of Simplifai, and the two are co-located in all geographic locations.

https://elop.no/

https://www.simplifai.ai/en/


Roles and Responsibilities

  • Define technical scope and objectives through research and participation in requirements gathering and definition of processes
  • Ingest and process data from data sources (Elop Scanner) in raw format into the Big Data ecosystem
  • Process real-time data feeds using the Big Data ecosystem
  • Design, review, implement and optimize data transformation processes in Big Data ecosystem
  • Test and prototype new data integration/processing tools, techniques and methodologies
  • Conversion of MATLAB code into Python/C/C++.
  • Participate in overall test planning for the application integrations, functional areas and projects.
  • Work with cross functional teams in an Agile/Scrum environment to ensure a quality product is delivered.
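The ingest-and-process flow in the responsibilities above can be sketched in miniature. This is a stdlib-only illustration (at scale, PySpark would do this work); the record fields and the millivolt-to-volt conversion are hypothetical, not Elop's actual scanner format:

```python
import json
from statistics import mean

def parse_raw(record):
    """Parse one raw scanner reading (JSON here for illustration)."""
    rec = json.loads(record)
    # Normalize units: amplitude arrives in millivolts, store volts.
    rec["amplitude_v"] = rec.pop("amplitude_mv") / 1000.0
    return rec

def summarize(records):
    """Aggregate cleaned readings into a per-batch summary."""
    amps = [r["amplitude_v"] for r in records]
    return {"count": len(amps), "mean_amplitude_v": mean(amps)}

raw = ['{"sensor": 1, "amplitude_mv": 1200}',
       '{"sensor": 1, "amplitude_mv": 800}']
summary = summarize([parse_raw(r) for r in raw])
print(summary)  # {'count': 2, 'mean_amplitude_v': 1.0}
```

In a Spark pipeline the same parse/summarize steps would become a map over an RDD or DataFrame followed by an aggregation.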

Desired Candidate Profile

  • Bachelor's degree in Statistics, Computer Science, or equivalent
  • 7+ years of experience in the Big Data ecosystem, especially Spark, Kafka, Hadoop, and HBase
  • 7+ years of hands-on experience in Python/Scala is a must
  • Experience architecting big data applications is needed
  • Excellent analytical and problem-solving skills
  • Strong understanding of data analytics and data visualization; must be able to help the development team with visualization of data
  • Experience with signal processing is a plus
  • Experience working on client-server architecture is a plus
  • Knowledge of database technologies such as RDBMS, graph DBs, document DBs, Apache Cassandra, and OpenTSDB
  • Good communication skills, written and oral, in English

We can Offer

  • An everyday life with exciting and challenging tasks, developing socially beneficial solutions
  • Being part of the company's Research and Development team, creating unique and innovative products
  • Colleagues with world-class expertise, in an organization that is ambitious and highly motivated to position the company as an international player in maintenance support and monitoring of critical infrastructure
  • A good working environment with skilled and committed colleagues, in an organization with short decision paths
  • Professional challenges and development

About Simplifai Cognitive Solutions Pvt Ltd

Founded: 2017
Stage: Bootstrapped
About

The growth of artificial intelligence (https://www.simplifai.ai/en/artificial-intelligence/) accelerated these thoughts. Machine learning made it possible for the projects to get smaller, the solutions smarter, and the automation more efficient. Bård and Erik wanted to bring AI to the people, and they wanted to do it simply.

Simplifai was founded in 2017 and has grown considerably since then. Today we work globally and have offices in Norway, India, and Ukraine. We have built a global, diverse organization that is well prepared for further growth.

Connect with the team: Varun Pawar, Priyanka Malani, Vipul Tiwari

Similar jobs

Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹8L - ₹13L / yr
Tableau
SQL
Python
Microsoft Excel
Data Analytics

Job Title

Data Analyst

 

Job Brief

The successful candidate will turn data into information, information into insight and insight into business decisions.

 

Data Analyst Job Duties

Data analyst responsibilities include conducting full-lifecycle analysis covering requirements, activities, and design. Data analysts will develop analysis and reporting capabilities. They will also monitor performance and quality-control plans to identify improvements.

 

Responsibilities

● Interpret data, analyze results using statistical techniques, and provide ongoing reports.

● Develop and implement databases, data collection systems, data analytics, and other strategies that optimize statistical efficiency and quality.

● Acquire data from primary or secondary data sources and maintain databases/data systems.

● Identify, analyze, and interpret trends or patterns in complex data sets.

● Filter and “clean” data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems.

● Work with management to prioritize business and information needs.

● Locate and define new process improvement opportunities.
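The filter-and-clean duty above can be sketched in a few lines of plain Python; the miscoded labels and correction rules here are hypothetical:

```python
# Hypothetical cleaning pass: correct known miscoded category labels,
# then drop rows that still contain missing values before analysis.
CODE_FIXES = {"N/A": None, "maintanance": "maintenance"}

def clean(rows):
    cleaned = []
    for row in rows:
        row = {k: CODE_FIXES.get(v, v) for k, v in row.items()}
        if all(v is not None for v in row.values()):
            cleaned.append(row)
    return cleaned

rows = [{"id": 1, "category": "maintanance"},
        {"id": 2, "category": "N/A"}]
print(clean(rows))  # [{'id': 1, 'category': 'maintenance'}]
```

Real pipelines would keep the correction table in version control so the same fixes apply on every run.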

 

Requirements

● Proven working experience as a Data Analyst or Business Data Analyst.

● Technical expertise regarding data models, database design and development, data mining, and segmentation techniques.

● Strong knowledge of and experience with reporting packages (Business Objects etc.), databases (SQL etc.), and programming (XML, JavaScript, or ETL frameworks).

● Knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS etc.).

● Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.

● Adept at queries, report writing, and presenting findings.

 

Job Location: South Delhi, New Delhi

Carsome
Posted by Piyush Palkar
Remote, Kuala Lumpur
1 - 6 yrs
₹10L - ₹30L / yr
Data Science
Machine Learning (ML)
Python
SQL
Problem solving

Carsome’s Data Department is on the lookout for a Data Scientist/Senior Data Scientist who has a strong passion in building data powered products.

 

The Data Science function within the Data Department is responsible for standardizing methods (including code libraries and documentation), mentoring data science team members and interns, assuring the quality of outputs, and applying modeling techniques and statistics, leveraging a variety of technologies, open-source languages, and cloud computing platforms.

 

You will get to lead and implement projects such as price optimization/prediction, enabling personalization experiences for our customers, inventory optimization, etc.

 

Job Description

 

  • Identifying and integrating datasets that can be leveraged through our product and work closely with data engineering team to develop data products.
  • Execute analytical experiments methodically to help solve various problems and make a true impact across functions such as operations, finance, logistics, marketing. 
  • Identify, prioritize, and design testing opportunities that will inform algorithm enhancements. 
  • Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve models and clean and validate data for uniformity and accuracy.
  • Unlock insights by analyzing large amounts of complex website traffic and transactional data. 
  • Implement analytical models into production by collaborating with data analytics engineers.

 

Technical Requirements

 

  • Expertise in model design, training, evaluation, and implementation
  • ML algorithm expertise: k-nearest neighbors, random forests, naive Bayes, regression models
  • Deep learning expertise: PyTorch, TensorFlow, Keras; t-SNE; gradient boosting; regression implementation
  • Tooling: Python, PySpark, SQL, R, AWS SageMaker / Personalize, etc.
  • Machine Learning / Data Science certification

 

Experience & Education 

 

  • Bachelor’s in Engineering / Master’s in Data Science  / Postgraduate Certificate in Data Science. 
[x]cube LABS
Posted by Krishna kandregula
Hyderabad
2 - 6 yrs
₹8L - ₹20L / yr
ETL
Informatica
Data Warehouse (DWH)
PowerBI
DAX
  • Create and manage ETL/ELT pipelines based on requirements
  • Build PowerBI dashboards and manage the datasets they need
  • Work with stakeholders to identify data structures needed for the future and perform any transformations, including aggregations
  • Build data cubes for real-time visualisation needs and CXO dashboards


Required Tech Skills


  • Microsoft PowerBI & DAX
  • Python, Pandas, PyArrow, Jupyter Notebooks, Apache Spark
  • Azure Synapse, Azure DataBricks, Azure HDInsight, Azure Data Factory



Compile
Posted by Sarumathi NH
Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
Data Warehouse (DWH)
Informatica
ETL
Spark

You will be responsible for designing, building, and maintaining data pipelines that handle real-world data (RWD) at Compile. You will handle both inbound and outbound data deliveries at Compile for datasets including Claims, Remittances, EHR, SDOH, etc.

You will

  • Work on building and maintaining data pipelines (specifically RWD).
  • Build, enhance, and maintain existing pipelines in PySpark and Python, and help build analytical insights and datasets.
  • Schedule and maintain pipeline jobs for RWD.
  • Develop, test, and implement data solutions based on the design.
  • Design and implement quality checks on existing and new data pipelines.
  • Ensure adherence to the security and compliance required for the products.
  • Maintain relationships with various data vendors and track changes and issues across vendors and deliveries.

You have

  • Hands-on experience with ETL processes (minimum of 5 years).
  • Excellent communication skills and ability to work with multiple vendors.
  • High proficiency with Spark and SQL.
  • Proficiency in data modeling, validation, quality checks, and data engineering concepts.
  • Experience with big-data processing technologies such as Databricks, dbt, S3, Delta Lake, Deequ, Griffin, Snowflake, and BigQuery.
  • Familiarity with version control technologies and CI/CD systems.
  • Understanding of scheduling tools like Airflow/Prefect.
  • Minimum of 3 years of experience managing data warehouses.
  • Familiarity with healthcare datasets is a plus.

Compile embraces diversity and equal opportunity in a serious way. We are committed to building a team of people from many backgrounds, perspectives, and skills. We know the more inclusive we are, the better our work will be.         

Hopscotch
Bengaluru (Bangalore)
5 - 8 yrs
₹6L - ₹15L / yr
skill iconPython
Amazon Redshift
skill iconAmazon Web Services (AWS)
PySpark
Data engineering
+3 more

About the role:

Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science, and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture.


Here’s what will be expected out of you:

➢ Ability to work with a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer, and load activities.

➢ Develop data pipelines that make data available across platforms.

➢ Should be comfortable in executing ETL (Extract, Transform and Load) processes which include data ingestion, data cleaning and curation into a data warehouse, database, or data platform.

➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.

➢ Work closely with DevOps and senior architects to come up with scalable system and model architectures for enabling real-time and batch services.


What we want:

➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.

➢ Well versed with the concept of Data warehousing, Data Modelling and/or Data Analysis.

➢ Experience using and building pipelines and performing ETL with industry-standard best practices on Redshift (2+ years).

➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.

➢ Good understanding of orchestration tools like Airflow.

 ➢ Strong Python and SQL coding skills.

➢ Strong experience with distributed systems like Spark.

➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).

➢ Solid hands-on experience with various data extraction techniques like CDC or time/batch-based, and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction.
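The time/batch-based extraction mentioned above usually keys off a high-water-mark column. A minimal sketch over stdlib sqlite3 (CDC tools like Debezium stream changes instead of polling like this); the table and column names are hypothetical:

```python
import sqlite3

# Incremental pull: fetch only rows changed since the last run's
# high-water mark, then advance the mark to the newest row seen.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "2024-01-01"), (2, "2024-02-01"), (3, "2024-03-01")])

def extract_since(conn, watermark):
    rows = conn.execute(
        "SELECT id, updated_at FROM orders WHERE updated_at > ? "
        "ORDER BY updated_at", (watermark,)).fetchall()
    new_watermark = rows[-1][1] if rows else watermark
    return rows, new_watermark

rows, wm = extract_since(conn, "2024-01-15")
print(rows, wm)  # [(2, '2024-02-01'), (3, '2024-03-01')] 2024-03-01
```

The watermark would be persisted between job runs (e.g. in a metadata table) so each batch resumes where the last one stopped.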


Note:

Experience at product-based or e-commerce companies is an added advantage.

Numerator
Posted by Ketaki Kambale
Remote, Pune
3 - 9 yrs
₹5L - ₹20L / yr
Data Warehouse (DWH)
Informatica
ETL
Python
SQL

We’re hiring a talented Data Engineer and Big Data enthusiast to work on our platform and help ensure that our data quality is flawless. As a company, we take in millions of new data points every day. You will work with a passionate team of engineers to solve challenging problems and ensure that we can deliver the best data to our customers, on time. You will use the latest cloud data warehouse technology to build robust and reliable data pipelines.

Duties/Responsibilities Include:

  • Develop expertise in the different upstream data stores and systems across Numerator.
  • Design, develop, and maintain data integration pipelines for Numerator's growing data sets and product offerings.
  • Build testing and QA plans for data pipelines.
  • Build data validation testing frameworks to ensure high data quality and integrity.
  • Write and maintain documentation on data pipelines and schemas.
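A data-validation framework of the kind described above can start as small as column-level expectations checked on every batch; a minimal sketch with invented columns and rules:

```python
# Minimal data-validation check of the kind a pipeline QA plan codifies:
# assert type, presence, and range expectations on each incoming batch.
EXPECTATIONS = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "amount":  lambda v: isinstance(v, (int, float)) and v >= 0,
}

def validate(batch):
    """Return (row_index, column) pairs for every failed expectation."""
    failures = []
    for i, row in enumerate(batch):
        for col, check in EXPECTATIONS.items():
            if col not in row or not check(row[col]):
                failures.append((i, col))
    return failures

batch = [{"user_id": 7, "amount": 3.5}, {"user_id": -1, "amount": 2.0}]
print(validate(batch))  # [(1, 'user_id')]
```

Libraries such as Great Expectations or Deequ generalize this pattern, but the core idea is the same: declarative checks, run per batch, with failures reported rather than silently dropped.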
 

Requirements:

  • BS or MS in Computer Science or related field of study
  • 3 + years of experience in the data warehouse space
  • Expert in SQL, including advanced analytical queries
  • Proficiency in Python (data structures, algorithms, object-oriented programming, using APIs)
  • Experience working with a cloud data warehouse (Redshift, Snowflake, Vertica)
  • Experience with a data pipeline scheduling framework (Airflow)
  • Experience with schema design and data modeling

Exceptional candidates will have:

  • Amazon Web Services (EC2, DMS, RDS) experience
  • Terraform and/or ansible (or similar) for infrastructure deployment
  • Airflow: experience building and monitoring DAGs, developing custom operators, and using script templating solutions.
  • Experience supporting production systems in an on-call environment
Abu Dhabi, Dubai
6 - 12 yrs
₹18L - ₹25L / yr
PySpark
Big Data
Spark
Data Warehouse (DWH)
SQL
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregation
• Good at ELT architecture: business rules processing and data extraction from a data lake into data streams for business consumption
• Good customer communication
• Good analytical skills
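As an illustration of "queries of fair complexity", here is a latest-row-per-group query using a window function, run on Python's stdlib sqlite3 for portability; the same SQL pattern carries over to Spark SQL. The table and columns are invented:

```python
import sqlite3

# Latest reading per sensor via ROW_NUMBER() over a partition --
# a common interview-grade query shape for analytics work.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, ts INTEGER, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                 [("a", 1, 10.0), ("a", 2, 12.0), ("b", 1, 7.0)])

latest = conn.execute("""
    SELECT sensor, value FROM (
        SELECT sensor, value,
               ROW_NUMBER() OVER (PARTITION BY sensor ORDER BY ts DESC) AS rn
        FROM readings
    ) WHERE rn = 1
    ORDER BY sensor
""").fetchall()
print(latest)  # [('a', 12.0), ('b', 7.0)]
```

In PySpark the inner query would be `F.row_number().over(Window.partitionBy("sensor").orderBy(F.desc("ts")))` followed by a filter on `rn == 1`.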
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
  • Experience migrating on-premise data warehouses to data platforms on the Azure cloud.
  • Designing and implementing data engineering, ingestion, and transformation functions.
  • Azure Synapse or Azure SQL Data Warehouse.
  • Spark on Azure, as available in HDInsight and Databricks.
DataMetica
Posted by Nikita Aher
Pune
2 - 6 yrs
₹3L - ₹15L / yr
SQL
Linux/Unix
Shell Scripting
SQL server
PL/SQL

Datametica is looking for talented SQL engineers who would get training & the opportunity to work on Cloud and Big Data Analytics.

 

Mandatory Skills:

  • Strong in SQL development
  • Hands-on at least one scripting language - preferably shell scripting
  • Development experience in Data warehouse projects

Opportunities:

  • Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
  • Would get a chance to be part of enterprise-grade implementations of Cloud and Big Data systems
  • Will play an active role in setting up the Modern data platform based on Cloud and Big Data
  • Would be part of teams with rich experience in various aspects of distributed systems and computing
Turing
Posted by Misbah Munir
Remote only
5 - 10 yrs
₹10L - ₹20L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Decision trees
Deep Learning

About Turing:

Turing enables U.S. companies to hire the world’s best remote software engineers. 100+ companies including those backed by Sequoia, Andreessen, Google Ventures, Benchmark, Founders Fund, Kleiner, Lightspeed, and Bessemer have hired Turing engineers. For more than 180,000 engineers across 140 countries, we are the preferred platform for finding remote U.S. software engineering roles. We offer a wide range of full-time remote opportunities for full-stack, backend, frontend, DevOps, mobile, and AI/ML engineers.

We are growing fast (our revenue 15x’d in the past 12 months and is accelerating), and we have raised $14M in seed funding, one of the largest in Silicon Valley (https://tcrn.ch/3lNKbM9), from:

  • Facebook’s 1st CTO and Quora’s Co-Founder (Adam D’Angelo)
  • Executives from Google, Facebook, Square, Amazon, and Twitter
  • Foundation Capital (investors in Uber, Netflix, Chegg, Lending Club, etc.)
  • Cyan Banister
  • Founder of Upwork (Beerud Sheth)

We also raised a much larger round of funding in October 2020 that we will be publicly announcing over the coming month. 

Turing is led by successful repeat founders Jonathan Siddharth and Vijay Krishnan, whose last A.I. company leveraged elite remote talent and had a successful acquisition (TechCrunch story: https://techcrunch.com/2017/02/23/revcontent-acquires-rover/). Turing’s leadership team is composed of ex-engineering and sales leadership from Facebook, Google, Uber, and Capgemini.


About the role:

Software developers from all over the world have taken 200,000+ tests and interviews on Turing. Turing has also recommended thousands of developers to its customers and received customer feedback in the form of interview pass/fail data and data on the success of collaborations with U.S. customers. This generates a massive proprietary dataset with a rich feature set comprising resume and test/interview features, with labels in the form of actual customer feedback. Continuing rapid growth in our business creates an ever-increasing data advantage for us.

 

We are looking for a Machine Learning Scientist who can help solve a whole range of exciting and valuable machine learning problems at Turing. Turing collects a lot of valuable heterogeneous signals about software developers including their resume, GitHub profile and associated code and a lot of fine-grained signals from Turing’s own screening tests and interviews (that span various areas including Computer Science fundamentals, project ownership and collaboration, communication skills, proactivity and tech stack skills), their history of successful collaboration with different companies on Turing, etc. 

A machine learning scientist at Turing will help create deep developer profiles that are a good representation of a developer’s strengths and weaknesses as it relates to their probability of getting successfully matched to one of Turing’s partner companies and having a fruitful long-term collaboration. The ML scientist will build models that are able to rank developers for different jobs based on their probability of success at the job. 

     
You will also help make Turing’s tests more efficient by assessing their ability to predict the probability of a successful match between a developer and at least one company. The prior probability of a registered developer getting matched with a customer is about 1%. We want our tests to adaptively reduce perplexity as steeply as possible and move this probability estimate rapidly toward either 0% or 100%; in other words, to maximize expected information gain per unit time.
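The information-gain objective above can be made concrete with a simple Bayesian update on a single pass/fail test outcome. A sketch in pure Python; the test accuracies (90% pass rate for matchable developers, 5% otherwise) are invented numbers for illustration, not Turing's:

```python
from math import log2

def entropy(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def expected_info_gain(prior, p_pass_given_match, p_pass_given_no_match):
    """Expected reduction in uncertainty about 'developer will match'
    after observing one pass/fail test outcome (Bayes' rule)."""
    p_pass = prior * p_pass_given_match + (1 - prior) * p_pass_given_no_match
    post_pass = prior * p_pass_given_match / p_pass
    post_fail = prior * (1 - p_pass_given_match) / (1 - p_pass)
    expected_posterior_entropy = (p_pass * entropy(post_pass)
                                  + (1 - p_pass) * entropy(post_fail))
    return entropy(prior) - expected_posterior_entropy

# 1% prior: uncertainty is H(0.01), roughly 0.08 bits.
gain = expected_info_gain(0.01, 0.9, 0.05)
print(round(entropy(0.01), 4), round(gain, 4))
```

Ranking candidate test questions by this expected gain (per unit of test time) is one standard way to make an adaptive test "reduce perplexity as steeply as possible".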

As an ML Scientist on the team, you will have a unique opportunity to make an impact by advancing ML models and systems, as well as uncovering new opportunities to apply machine learning concepts to Turing product(s).

This role will directly report to Turing’s founder and CTO, Vijay Krishnan (https://www.linkedin.com/in/vijay0/). This is his Google Scholar profile: https://scholar.google.com/citations?user=uCRc7DgAAAAJ&hl=en.


Responsibilities:

  • Enhance our existing machine learning systems using your core coding skills and ML knowledge.
  • Take end-to-end ownership of machine learning systems, from data pipelines, feature engineering, candidate extraction, and model training through to integration into our production systems.
  • Utilize state-of-the-art ML modeling techniques to predict user interactions and the direct impact on the company’s top-line metrics.
  • Design features and build large-scale recommendation systems to improve targeting and engagement.
  • Identify new opportunities to apply machine learning to different parts of our product(s) to drive value for our customers.

 

 

Minimum Requirements:

 

  • BS, MS, or Ph.D. in Computer Science or a relevant technical field (AI/ML preferred).
  • Extensive experience building scalable machine learning systems and data-driven products working with cross-functional teams
  • Expertise in machine learning fundamentals applicable to search: learning to rank, deep learning, tree-based models, recommendation systems, relevance and data mining, and understanding of NLP approaches like word2vec or BERT.
  • 2+ years of experience applying machine learning methods in settings like recommender systems, search, user modeling, graph representation learning, natural language processing.
  • Strong understanding of neural network/deep learning, feature engineering, feature selection, optimization algorithms. Proven ability to dig deep into practical problems and choose the right ML method to solve them.
  • Strong programming skills in Python and fluency in data manipulation (SQL, Spark, Pandas) and machine learning (scikit-learn, XGBoost, Keras/Tensorflow) tools.
  • Good understanding of mathematical foundations of machine learning algorithms.
  • Ability to be available for meetings and communication during Turing's "coordination hours" (Mon - Fri: 8 am to 12 pm PST).

 

Other Nice-to-have Requirements:

 

  • First author publications in ICML, ICLR, NeurIPS, KDD, SIGIR, and related conferences/journals. 
  • Strong performance in Kaggle competitions.
  • 5+ years of industry experience or a Ph.D. with 3+ years of industry experience in applied machine learning in similar problems e.g. ranking, recommendation, ads, etc.
  • Strong communication skills.
  • Experienced in leading large-scale multi-engineering projects.
  • Flexible, and a positive team player with outstanding interpersonal skills.
Octro Inc
Posted by Reshma Suleman
Noida, NCR (Delhi | Gurgaon | Noida)
1 - 7 yrs
₹10L - ₹20L / yr
Data Science
R Programming
Python

Octro Inc. is looking for a Data Scientist who will support the product, leadership and marketing teams with insights gained from analyzing multiple sources of data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action. 

 

They must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. They must have a proven ability to drive business results with their data-based insights. 

 

They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.

Responsibilities :

- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.

- Mine and analyze data from multiple databases to drive optimization and improvement of product development, marketing techniques and business strategies.

- Assess the effectiveness and accuracy of new data sources and data gathering techniques.

- Develop custom data models and algorithms to apply to data sets.

- Use predictive modelling to increase and optimize user experiences, revenue generation, ad targeting and other business outcomes.

- Develop various A/B testing frameworks and test model qualities.

- Coordinate with different functional teams to implement models and monitor outcomes.

- Develop processes and tools to monitor and analyze model performance and data accuracy.
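The A/B testing frameworks mentioned in the responsibilities above ultimately reduce to significance tests on variant metrics. A minimal two-proportion z-test in pure Python; the conversion counts are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate different
    from A's? Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided tail probability.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(200, 2000, 260, 2000)      # 10% vs 13% conversion
print(round(z, 2), p < 0.05)
```

A production framework adds the pieces around this: random assignment, sample-size planning, and guardrail metrics, but the statistical core is the same.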

Qualifications :

- Strong problem solving skills with an emphasis on product development and improvement.

- Advanced knowledge of SQL and its use in data gathering/cleaning.

- Experience using statistical computer languages (R, Python, etc.) to manipulate data and draw insights from large data sets.

- Experience working with and creating data architectures.

- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.

- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.

- Excellent written and verbal communication skills for coordinating across teams.
