Data Scientist
Jiva Adventures
Posted by Bharat Chinhara
1 - 4 yrs
₹5L - ₹15L / yr
Bengaluru (Bangalore)
Skills
Data Science
Python
Machine Learning (ML)
Natural Language Processing (NLP)
  • Experienced in building machine learning pipelines (a minimal sketch follows this list).
  • Proficient in Python and scientific packages such as pandas, NumPy, scikit-learn, and Matplotlib.
  • Experience with techniques such as data mining, distributed computing, applied mathematics and algorithms, and probability and statistics.
  • Strong problem-solving and conceptual-thinking abilities.
  • Hands-on experience in model building.
  • Building highly customized and optimized data pipelines that integrate third-party APIs and in-house data sources.
  • Extracting features from text data using tools like spaCy.
  • Deep learning for NLP using any modern framework.
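
By way of illustration, here is a minimal sketch of the kind of pipeline this role involves: a scikit-learn text-classification pipeline over toy data. The corpus, labels, and parameters are invented for the example; production pipelines at Jiva would differ.

# Minimal ML-pipeline sketch (invented toy data; illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy text corpus with binary sentiment labels.
texts = ["great trip, loved the itinerary", "terrible parking experience",
         "smooth booking and helpful assistant", "app crashed and lost my plan"]
labels = [1, 0, 1, 0]

# Feature extraction + model in one pipeline object, so the same
# transformations are applied consistently at train and inference time.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression()),
])
pipeline.fit(texts, labels)
print(pipeline.predict(["the assistant planned everything perfectly"]))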

About Jiva adventures

Founded: 2017
Stage: Profitable

About Jiva:
Jiva was launched in 2017 by two seasoned Silicon Valley entrepreneurs with one core mission: make lives simple. By bringing together the best in design thinking, data science, and cutting-edge technology, Jiva creates products and services that take the hassle out of everyday life: we do all the work, so you can have all the fun. At the core of Jiva's philosophy is the customer. Every product we create and every action we undertake is guided by one question: does this help the customer?

Jiva is actively working to launch two visionary products: SmartNomad, a revolutionary travel product, and Parkquility, a simplified parking solution.
Why Jiva:
At Jiva, we do not look for employees; we look for futurists. If the limits of your skills chase the boundlessness of your dreams, we want to talk to you. At Jiva, you will work with smart colleagues under the mentorship of a highly experienced leadership team. You will work on tough problems that simultaneously challenge and excite you. You will create products meant to be used by millions of people around the world. Do you have a passion for making a positive impact on people's lives? Do you love data, design, and technology? If yes, we want to hear from you!

About SmartNomad:
Everyone loves traveling, but planning a trip can be quite a hassle for most people. When to arrive, where to stay, what the best things to do are, where to eat, and how to have all the fun while staying within budget: getting this right consumes a lot of time and energy, taking away from the richness of the experience.
Enter SmartNomad, a technology firm that connects the world of travel. With a mantra of "plan, experience and socialize", SmartNomad is a one-stop shop for all travel needs. SmartNomad creates intelligent itineraries for travelers based solely on their preferences, acts as their go-to virtual assistant during their adventure, and connects their travel experiences with their social-media-savvy friends.
SmartNomad was founded by people with extensive experience in entrepreneurship, analytics, and strategy who, as global travelers, have a passion to disrupt and revolutionize the way people currently travel. SmartNomad has offices in the US and Bengaluru and plans to launch in the North American market.
About Parkquility:
Finding parking can be quite a hassle at times: people spend a significant amount of time trying to find a spot, wait in queues to get one, or miss an available space because they couldn't spot it. Parkquility uses cutting-edge technology to make this process simple. Parkquility is a stress-free solution that simplifies the parking process while enhancing the experience for customers as well as parking space owners. Parkquility allows customers to book and pay for a specific parking spot well in advance, skipping the wait and the hassle of finding a comfortable spot. Parking lot owners can manage their spaces effectively online without worrying about setting up infrastructure.
Parkquility was founded by a team with extensive experience in entrepreneurship, analytics, and strategy and a passion for solving complex real-world problems. It has offices in the US and Bengaluru and plans to launch in the North American market.

Connect with the team
Vishnu Subramaniam
Bharat Chinhara

Similar jobs

PiaKo Store
Posted by PiaKo Store
Kolkata
4 - 8 yrs
₹12L - ₹24L / yr
Python
Amazon Web Services (AWS)
ETL

We are a rapidly expanding global technology partner looking for a highly skilled Senior (Python) Data Engineer to join an exceptional Technology and Development team. The role is based in Kolkata. If you are passionate about demonstrating your expertise and thrive on collaborating with talented engineers, this role was made for you!

At the heart of technology innovation, our client specializes in delivering cutting-edge solutions to clients across a wide array of sectors. With a strategic focus on finance, banking, and corporate verticals, they have earned a stellar reputation for their commitment to excellence in every project they undertake.

We are searching for a senior engineer to strengthen the client's global projects team: an experienced Senior Data Engineer with a strong background in building Extract, Transform, Load (ETL) processes and a deep understanding of AWS serverless cloud environments.

As a vital member of the data engineering team, you will play a critical role in designing, developing, and maintaining data pipelines that facilitate data ingestion, transformation, and storage for our organization.

Your expertise will contribute to the foundation of our data infrastructure, enabling data-driven decision-making and analytics.

Key Responsibilities:

  • ETL Pipeline Development: Design, develop, and maintain ETL processes using Python, AWS Glue, or other serverless technologies to ingest data from various sources (databases, APIs, files), transform it into a usable format, and load it into data warehouses or data lakes (a minimal sketch follows this list).
  • AWS Serverless Expertise: Leverage AWS services such as AWS Lambda, AWS Step Functions, AWS Glue, AWS S3, and AWS Redshift to build serverless data pipelines that are scalable, reliable, and cost-effective.
  • Data Modeling: Collaborate with data scientists and analysts to understand data requirements and design appropriate data models, ensuring data is structured optimally for analytical purposes.
  • Data Quality Assurance: Implement data validation and quality checks within ETL pipelines to ensure data accuracy, completeness, and consistency.
  • Performance Optimization: Continuously optimize ETL processes for efficiency, performance, and scalability, monitoring and troubleshooting any bottlenecks or issues that may arise.
  • Documentation: Maintain comprehensive documentation of ETL processes, data lineage, and system architecture to ensure knowledge sharing and compliance with best practices.
  • Security and Compliance: Implement data security measures, encryption, and compliance standards (e.g., GDPR, HIPAA) as required for sensitive data handling.
  • Monitoring and Logging: Set up monitoring, alerting, and logging systems to proactively identify and resolve data pipeline issues.
  • Collaboration: Work closely with cross-functional teams, including data scientists, data analysts, software engineers, and business stakeholders, to understand data requirements and deliver solutions.
  • Continuous Learning: Stay current with industry trends, emerging technologies, and best practices in data engineering and cloud computing and apply them to enhance existing processes.
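
As a hedged illustration of the serverless pattern above, the sketch below shows an AWS Lambda handler that reads a CSV object from S3, applies a trivial transformation, and writes the result back to a curated prefix. The bucket names, keys, event shape assumptions, and the transformation itself are invented for the example; a real pipeline would add validation, logging, and error handling.

# Illustrative serverless ETL step (assumed object layout; not a production pipeline).
import csv
import io

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Assume the triggering event is an S3 put-event notification
    # carrying the raw object's location.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Extract: read the raw CSV from S3.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(body)))
    if not rows:
        return {"rows_processed": 0}

    # Transform: a trivial cleanup step, standing in for real business logic.
    for row in rows:
        row["amount"] = round(float(row.get("amount", 0) or 0), 2)

    # Load: write the transformed data to a curated prefix.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    s3.put_object(Bucket=bucket, Key=f"curated/{key}", Body=out.getvalue())
    return {"rows_processed": len(rows)}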

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • Proven experience as a Data Engineer with a focus on ETL pipeline development.
  • Strong proficiency in Python programming.
  • In-depth knowledge of AWS serverless technologies and services.
  • Familiarity with data warehousing concepts and tools (e.g., Redshift, Snowflake).
  • Experience with version control systems (e.g., Git).
  • Strong SQL skills for data extraction and transformation.
  • Excellent problem-solving and troubleshooting abilities.
  • Ability to work independently and collaboratively in a team environment.
  • Effective communication skills for articulating technical concepts to non-technical stakeholders.
  • Certifications such as AWS Certified Data Analytics - Specialty or AWS Certified DevOps Engineer are a plus.

Preferred Experience:

  • Knowledge of data orchestration and workflow management tools.
  • Familiarity with data visualization tools (e.g., Tableau, Power BI).
  • Previous experience in industries with strict data compliance requirements (e.g., insurance, finance) is beneficial.

What You Can Expect:

- Innovation Abounds: Join a company that constantly pushes the boundaries of technology and encourages creative thinking. Your ideas and expertise will be valued and put to work in pioneering solutions.

- Collaborative Excellence: Be part of a team of engineers who are as passionate and skilled as you are. Together, you'll tackle challenging projects, learn from each other, and achieve remarkable results.

- Global Impact: Contribute to projects with a global reach and make a tangible difference. Your work will shape the future of technology in finance, banking, and corporate sectors.

They offer an exciting and professional environment with great career and growth opportunities. Their office is located in the heart of Salt Lake Sector V, offering a terrific workspace that's both accessible and inspiring. Their team members enjoy a professional work environment with regular team outings. Joining the team means becoming part of a vibrant and dynamic team where your skills will be valued, your creativity will be nurtured, and your contributions will make a difference. In this role, you can work alongside some of the brightest minds in the industry.

If you're ready to take your career to the next level and be part of a dynamic team that's driving innovation on a global scale, we want to hear from you.

Apply today for more information about this exciting opportunity.

Onsite Location: Kolkata, India (Salt Lake Sector V)


Tiger Analytics
Bengaluru (Bangalore), Chennai, Hyderabad
2 - 5 yrs
₹2L - ₹10L / yr
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm

Job brief

We are looking for a Lead Data Scientist to lead a technical team and help us gain useful insights from raw data.

Lead Data Scientist responsibilities include managing the data science team, planning projects, and building analytics models. You should have strong problem-solving ability and a knack for statistical analysis. If you're also able to align our data products with our business goals, we'd like to meet you.

Your ultimate goal will be to help improve our products and business decisions by making the most of our data.

Responsibilities

● Conceive, plan, and prioritize data projects
● Ensure data quality and integrity
● Interpret and analyze data problems
● Build analytic systems and predictive models
● Align data projects with organizational goals
● Lead data mining and collection procedures
● Test performance of data-driven products
● Visualize data and create reports
● Build and manage a team of data scientists and data engineers

Requirements

● Proven experience as a Data Scientist or in a similar role
● Solid understanding of machine learning
● Knowledge of data management and visualization techniques
● A knack for statistical analysis and predictive modeling
● Good knowledge of R, Python, and MATLAB
● Experience with SQL and NoSQL databases
● Strong organizational and leadership skills
● Excellent communication skills
● A business mindset
● Degree in Computer Science, Data Science, Mathematics, or a similar field
● Familiarity with emerging/cutting-edge open-source data science/machine learning libraries and big data platforms
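
To make the "build predictive models and test their performance" responsibilities above concrete, here is a minimal scikit-learn sketch using a bundled toy dataset and cross-validation. The dataset and model choice are illustrative, not a description of Tiger Analytics' actual work.

# Minimal predictive-modeling sketch (bundled toy dataset; illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Cross-validation gives a more honest performance estimate than a single
# train/test split, which is one way to "test performance" before shipping.
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"ROC-AUC: {scores.mean():.3f} +/- {scores.std():.3f}")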

TEKsystems
Priyanka Kanwar
Posted by Priyanka Kanwar
Gurugram
5 - 10 yrs
₹15L - ₹25L / yr
Apache Spark
Amazon Web Services (AWS)
Python
Airflow
Algorithms

TOP 3 SKILLS

Python (language)
Spark framework
Spark Streaming

Also required: Docker / Jenkins / Spinnaker, AWS, Hive queries

The candidate should be a good coder.

Preferred: Airflow

Must-have experience:

Python
Spark framework and Spark Streaming
Exposure to the Machine Learning lifecycle is mandatory.

Project:

This is a project in the search domain. For any search activity happening on the website, this team creates the model: data scientists build sorting/scoring models for each search. The team works primarily on the streaming side of the data, so the candidate will work extensively with Spark Streaming, and there will be a lot of Machine Learning work (a minimal sketch follows).
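
The sketch below is one hedged reading of that setup: a PySpark Structured Streaming job that consumes search events from a Kafka topic and applies a stand-in scoring function. The topic name, event schema, broker address, and scoring logic are all invented; the actual models are built by the data science team.

# Illustrative Structured Streaming job (assumed topic/schema; requires the
# spark-sql-kafka connector on the classpath; not the real pipeline).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("search-scoring-sketch").getOrCreate()

# Assumed schema for incoming search events.
schema = StructType([
    StructField("query", StringType()),
    StructField("result_id", StringType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
          .option("subscribe", "search-events")                 # hypothetical topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Stand-in "score": length of the query. A real system would apply
# a trained ranking model here instead.
scored = events.withColumn("score", F.length("query"))

query = (scored.writeStream
         .format("console")
         .outputMode("append")
         .start())
query.awaitTermination()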


INTERVIEW INFORMATION

3-4 rounds:

1st round: based on data engineering batch-processing experience.
2nd round: based on data engineering streaming experience.
3rd round: based on the ML lifecycle (the 3rd round can be a techno-functional round depending on feedback from the earlier rounds; otherwise a 4th, functional round will be held if required).

Hyderabad
4 - 7 yrs
₹12L - ₹28L / yr
Python
Spark
Big Data
Hadoop
Apache Hive
Must have:

  • At least 4 to 7 years of relevant experience as a Big Data Engineer
  • Hands-on experience in Scala or Python
  • Hands-on experience with major components of the Hadoop ecosystem such as HDFS, MapReduce, Hive, and Impala
  • Strong programming experience building applications/platforms using Scala or Python
  • Experience implementing Spark RDD transformations and actions to deliver business analysis (a minimal sketch follows this list)
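
For illustration, here is a minimal PySpark sketch of RDD transformations (filter, reduceByKey) and an action (collect) over invented data; it is a toy, not an example from this employer's codebase.

# Toy RDD transformations/actions sketch (invented data; illustrative only).
from pyspark import SparkContext

sc = SparkContext(appName="rdd-sketch")

# Invented (product, amount) sales records.
sales = sc.parallelize([
    ("tv", 300), ("phone", 700), ("tv", 450), ("laptop", 1200), ("phone", 650),
])

# Transformations are lazy: nothing runs until an action is called.
big_ticket = sales.filter(lambda kv: kv[1] >= 400)   # keep larger sales
totals = big_ticket.reduceByKey(lambda a, b: a + b)  # sum per product

# collect() is an action; it triggers execution and returns results to the driver.
print(totals.collect())
sc.stop()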


We specialize in productizing solutions built on new technology.
Our vision is to build engineers with entrepreneurial and leadership mindsets who can create highly impactful products and solutions, using technology to deliver immense value to our clients.
We strive to bring innovation and passion to everything we do, whether services, products, or solutions.
Inviz Ai Solutions Private Limited
Shridhar Nayak
Posted by Shridhar Nayak
Bengaluru (Bangalore)
4 - 8 yrs
Best in industry
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more

InViz is a Bangalore-based startup helping enterprises simplify search and discovery experiences for both their end customers and their internal users. We use state-of-the-art technologies in Computer Vision, Natural Language Processing, Text Mining, and other ML techniques to extract information and concepts from data in different formats (text, images, videos) and make them easily discoverable through simple, human-friendly touchpoints.

 

TSDE - Data 

Data Engineer:

 

  • Should have 3-6 years of total experience in Data Engineering.
  • Should have experience coding data pipelines on GCP.
  • Prior experience with Hadoop systems is ideal, as the candidate may not have end-to-end GCP experience.
  • Strong in programming languages like Scala, Python, and Java.
  • Good understanding of various data storage formats and their advantages.
  • Should have exposure to GCP tools to develop end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integrating API-based data sources).
  • Should have a business mindset to understand data and how it will be used for BI and analytics purposes.
  • Data Engineer certification preferred.

 

Experience working with GCP tools such as:

Store: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore
Ingest: Stackdriver, Pub/Sub, App Engine, Kubernetes Engine, Kafka, Dataprep, microservices
Schedule: Cloud Composer
Processing: Cloud Dataproc, Cloud Dataflow, Cloud Dataprep
CI/CD: Bitbucket + Jenkins / GitLab
Atlassian Suite
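
As one hedged example of the GCP ingestion work listed above, this sketch loads a CSV file from Cloud Storage into BigQuery using the google-cloud-bigquery client. The bucket, dataset, and table names are placeholders.

# Illustrative GCS-to-BigQuery load (placeholder bucket/dataset/table names).
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Describe the load: CSV with a header row, schema auto-detected.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/events.csv",  # hypothetical source file
    "example-project.analytics.events",    # hypothetical destination table
    job_config=job_config,
)
load_job.result()  # block until the job finishes
print(f"Loaded {client.get_table('example-project.analytics.events').num_rows} rows")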
Indium Software
Karunya P
Posted by Karunya P
Bengaluru (Bangalore), Hyderabad
1 - 9 yrs
₹1L - ₹15L / yr
SQL
Python
Hadoop
HiveQL
Spark
+1 more

Responsibilities:

* 3+ years of data engineering experience: design, develop, deliver, and maintain data infrastructure.
* SQL specialist: strong knowledge of and seasoned experience with SQL queries.
* Languages: Python.
* Good communicator; shows initiative; works well with stakeholders.
* Experience working closely with data analysts, providing the data they need and guiding them through issues.
* Solid ETL experience with Hadoop/Hive/PySpark/Presto/SparkSQL (a minimal sketch follows this list).
* Solid communication and articulation skills.
* Able to handle stakeholders independently, with minimal intervention from the reporting manager.
* Develops strategies to solve problems in logical yet creative ways.
* Creates custom reports and presentations accompanied by strong data visualization and storytelling.
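
As a small hedged illustration of the Hive/PySpark/SparkSQL combination above, the sketch below registers an invented DataFrame as a temporary view and queries it with SQL; the table and column names are made up.

# Illustrative SparkSQL over an invented dataset (not Indium's data).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql-sketch").getOrCreate()

orders = spark.createDataFrame(
    [("2024-01-01", "IN", 120.0), ("2024-01-01", "US", 80.0), ("2024-01-02", "IN", 45.5)],
    ["order_date", "country", "amount"],
)
orders.createOrReplaceTempView("orders")

# The same query would run against a Hive table in a real deployment.
daily = spark.sql("""
    SELECT order_date, country, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date, country
    ORDER BY order_date
""")
daily.show()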

 

We would be excited if you have:

 

* Excellent communication and interpersonal skills

* Ability to meet deadlines and manage project delivery

* Excellent report-writing and presentation skills

* Critical thinking and problem-solving capabilities

Velocity Services
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
+10 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies such as NodeJS, Ruby on Rails, reactive programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume data from multiple sources (a minimal sketch follows this list)

  • Writing a data transformation layer using DBT to transform millions of rows into the data warehouse.

  • Implement data warehouse entities using common, re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
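
By way of a hedged sketch of the ELT pattern described above: the snippet extracts records from a hypothetical JSON API and loads them raw into Postgres, leaving in-warehouse transformation to a dbt model. The URL, table name, and connection string are placeholders.

# Illustrative extract-and-load step (placeholder URL/DSN; transforms left to dbt).
import json

import requests
from sqlalchemy import create_engine, text

API_URL = "https://api.example.com/v1/transactions"  # hypothetical source
engine = create_engine("postgresql://user:pass@localhost:5432/warehouse")  # placeholder DSN

def extract():
    # Pull one page of records; real pipelines would paginate and checkpoint.
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    return response.json()["results"]

def load(records):
    # Land the payloads raw (ELT): transformation happens later, in SQL/dbt.
    with engine.begin() as conn:
        conn.execute(text(
            "CREATE TABLE IF NOT EXISTS raw_transactions (payload JSONB)"))
        for record in records:
            conn.execute(
                text("INSERT INTO raw_transactions (payload) VALUES (CAST(:p AS JSONB))"),
                {"p": json.dumps(record)},
            )

if __name__ == "__main__":
    load(extract())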

 

What To Bring

  • 3+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced in formulating ideas, building proofs of concept (POCs), and converting them into production-ready projects

  • Experience building and deploying applications on on-premise infrastructure and on AWS or Google Cloud

  • Basic understanding of Kubernetes and Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

 

 

Dataweave Pvt Ltd
Megha M
Posted by Megha M
Bengaluru (Bangalore)
3 - 7 yrs
Best in industry
Python
Data Structures
Algorithms
Web Scraping
Relevant set of skills
● Good communication and collaboration skills with 4-7 years of experience.
● Ability to code and script, with a strong grasp of CS fundamentals and excellent problem-solving abilities.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Good understanding of RDBMS.
● Experience in building data pipelines and processing large datasets.
● Knowledge of web scraping and data mining is a plus (a minimal sketch follows this list).
● Working knowledge of open-source tools such as MySQL, Solr, Elasticsearch, and Cassandra (data stores) would be a plus.
● Expert in Python programming.
Role and responsibilities
● Inclined towards working in a start-up environment.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Design and build robust, scalable data engineering solutions for structured and unstructured data, delivering business insights, reporting, and analytics.
● Expertise in troubleshooting, debugging, data completeness and quality issues, and scaling overall system performance.
● Build robust APIs that power our delivery points (dashboards, visualizations, and other integrations).
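
The following is a minimal, hedged web-scraping sketch using requests and BeautifulSoup against a placeholder URL and CSS selector; real scrapers would add politeness (robots.txt, rate limits) and error handling.

# Minimal scraping sketch (placeholder URL/selector; add rate limiting for real use).
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/products"  # hypothetical listing page

response = requests.get(URL, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# Assume each product title sits in an element with class "product-title".
for tag in soup.select(".product-title"):
    print(tag.get_text(strip=True))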
SmartJoules
Saksham Dutta
Posted by Saksham Dutta
Remote, NCR (Delhi | Gurgaon | Noida)
3 - 5 yrs
₹8L - ₹12L / yr
Machine Learning (ML)
Python
Big Data
Apache Spark
Deep Learning

Responsibilities:

  • Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world.
  • Verifying data quality, and/or ensuring it via data cleaning.
  • Able to adapt and work fast to produce output that improves stakeholders' decision-making using ML.
  • Design and develop Machine Learning systems and schemes.
  • Perform statistical analysis and fine-tune models using test results.
  • Train and retrain ML systems and models as and when necessary.
  • Deploy ML models in production and manage the cost of cloud infrastructure.
  • Develop Machine Learning apps according to client and data scientist requirements.
  • Analyze the problem-solving capabilities and use cases of ML algorithms, and rank them by how successfully they meet the objective.


Technical Knowledge:


  • Has worked on real-time problems, solved them using ML and deep learning models deployed in real time, and has some impressive projects to showcase.
  • Proficiency in Python and experience working with the Jupyter framework, Google Colab, and cloud-hosted notebooks such as AWS SageMaker, Databricks, etc.
  • Proficiency in working with scikit-learn, TensorFlow, OpenCV, PySpark, pandas, NumPy, and related libraries.
  • Expert in visualizing and manipulating complex datasets.
  • Proficiency in working with visualization libraries such as seaborn, Plotly, Matplotlib, etc.
  • Proficiency in the linear algebra, statistics, and probability required for Machine Learning.
  • Proficiency in ML algorithms, for example gradient boosting, stacked machine learning, classification algorithms, and deep learning algorithms. Needs experience in hyperparameter tuning of various models and comparing the results of algorithm performance (a minimal sketch follows this list).
  • Big data technologies such as the Hadoop stack and Spark.
  • Basic use of cloud VMs (e.g., EC2).
  • Brownie points for Kubernetes and task queues.
  • Strong written and verbal communication.
  • Experience working in an Agile environment.
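
As a hedged illustration of the hyperparameter-tuning point above, here is a minimal GridSearchCV sketch over a gradient-boosting classifier on a bundled toy dataset; the grid and scoring choices are arbitrary examples.

# Minimal hyperparameter-tuning sketch (toy data; arbitrary example grid).
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Small grid for illustration; real searches are usually wider.
param_grid = {
    "n_estimators": [50, 100],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
}

search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid,
    cv=5,
    scoring="accuracy",
)
search.fit(X, y)
print("best params:", search.best_params_)
print(f"best CV accuracy: {search.best_score_:.3f}")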
FarmGuide
Anupam Arya
Posted by Anupam Arya
NCR (Delhi | Gurgaon | Noida)
0 - 8 yrs
₹7L - ₹14L / yr
Computer Security
Image processing
OpenCV
Python
Rational ClearCase
+8 more
FarmGuide is a data-driven tech startup aiming to digitize the periodic processes in place and bring information symmetry to the agriculture supply chain through transparent, dynamic, and interactive software solutions. We at FarmGuide (https://angel.co/farmguide) help the Government make relevant and efficient policy by ensuring a seamless flow of information between stakeholders.

Job Description:

We are looking for individuals who want to help us design cutting-edge scalable products to meet our rapidly growing business. We are building out the data science team and looking to hire across levels.
- Solving complex problems in the agri-tech sector, which are long-standing open problems at the national level.
- Applying computer vision techniques to satellite imagery to deduce artefacts of interest.
- Applying various machine learning techniques to digitize the existing physical corpus of knowledge in the sector.

Key Responsibilities:
- Develop computer vision algorithms for production use on satellite and aerial imagery (a minimal sketch follows this posting)
- Implement models and data pipelines to analyse terabytes of data.
- Deploy built models in a production environment.
- Develop tools to assess algorithm accuracy.
- Implement algorithms at scale in the commercial cloud.

Skills Required:
- B.Tech/M.Tech in CS or other related fields such as EE or MCA from IIT/NIT/BITS, but not compulsory.
- Demonstrable interest in Machine Learning and Computer Vision, such as coursework or open-source contributions.
- Experience with digital image processing techniques.
- Familiarity/experience with geospatial, planetary, or astronomical datasets is valuable.
- Experience in writing algorithms to manipulate geospatial data.
- Hands-on knowledge of GDAL or open-source GIS tools is a plus.
- Familiarity with cloud systems (AWS/Google Cloud) and cloud infrastructure is a plus.
- Experience with high-performance or large-scale computing infrastructure might be helpful.
- Coding ability in R or Python.
- Self-directed team player who thrives in a continually changing environment.

What is on offer:
- High-impact role in a young start-up with colleagues from IITs and other Tier 1 colleges.
- Chance to work on the cutting edge of ML (yes, we do train neural nets on GPUs).
- Lots of freedom in terms of the work you do and how you do it.
- Flexible timings.
- Best start-up salary in the industry with additional tax benefits.
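
As a hedged sketch of the satellite-imagery work described above, the snippet below reads red and near-infrared bands from a hypothetical GeoTIFF with GDAL and computes NDVI, a standard vegetation index; the file path and band ordering are assumptions.

# Illustrative NDVI computation (assumed file path; assumed band order: 1=red, 2=NIR).
import numpy as np
from osgeo import gdal

dataset = gdal.Open("scene.tif")  # hypothetical multi-band GeoTIFF

red = dataset.GetRasterBand(1).ReadAsArray().astype(np.float32)
nir = dataset.GetRasterBand(2).ReadAsArray().astype(np.float32)

# NDVI = (NIR - red) / (NIR + red); values near 1 indicate dense vegetation.
# A tiny epsilon avoids division by zero over water/no-data pixels.
ndvi = (nir - red) / (nir + red + 1e-6)

print("NDVI range:", float(ndvi.min()), "to", float(ndvi.max()))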