Data Scientist

at a deep-tech firm

Agency job
Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹20L / yr
Full time
Skills
Data Science
Python
Natural Language Processing (NLP)
Deep Learning
TensorFlow
Long short-term memory (LSTM)
RNN
Your responsibilities:
  • Build, improve, and extend NLP capabilities
  • Research and evaluate different approaches to NLP problems
  • Write code that is well designed and produces deliverable results
  • Write code that scales and can be deployed to production
You must have:
  • A firm grasp of the fundamentals of statistical methods is a must
  • Experience in named entity recognition, POS tagging, lemmatization, vector representations of textual data, and neural networks (RNN, LSTM); see the sketch after this list
  • A solid foundation in Python, data structures, algorithms, and general software development skills.
  • Ability to apply machine learning to problems that deal with language
  • Engineering ability to build robustly scalable pipelines
  • Ability to work in a multi-disciplinary team with a strong product focus
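
For illustration, a minimal sketch of the preprocessing steps named above (NER, POS tagging, lemmatization, vector representations), assuming spaCy and its small English model are installed; this is an example of the workflow, not a stated tool requirement of the role:

```python
# A minimal, illustrative pipeline; spaCy and en_core_web_sm are assumptions,
# not requirements from the posting.
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is opening a new office in Bengaluru next year.")

# POS tagging and lemmatization
for token in doc:
    print(token.text, token.pos_, token.lemma_)

# Named entity recognition
for ent in doc.ents:
    print(ent.text, ent.label_)

# A dense vector representation of the document (averaged token vectors)
print(doc.vector.shape)
```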

Similar jobs

NLP Engineer (Founding Team)

at Zevi

Founded 2021  •  Product  •  0-20 employees  •  Raised funding
Natural Language Processing (NLP)
BERT
Machine Learning (ML)
Data Science
Natural Language Toolkit (NLTK)
TensorFlow
PyTorch
Python
Artificial Intelligence (AI)
Artificial Neural Network (ANN)
recommendation algorithm
sklearn
Remote, Bengaluru (Bangalore)
1 - 5 yrs
₹10L - ₹25L / yr

Job Description

  • Build state-of-the-art language models to understand vernacular languages.
  • Build and push machine learning models to optimise results.
  • Consume real-time data and build layers around it to deepen customer understanding.
  • Our state-of-the-art models ingest data and generate relevant search results for our customers every week. Work directly with our current models to tune them to the abundant inflow of data, and architect new ones to further infuse AI into search workflows.
  • Be an integral part of Zevi, working on the core tech that makes our product what it is today. It doesn't stop there, though: as we collect more and more insights, get ready to shape the future of search.

Skills and Experience expected:

  • Have at least 2 years of experience working with language models: building, fine-tuning, and training them.
  • Have closely read NLP publications and implemented some of them.
  • Have designed and implemented a scalable ML infrastructure that is both secure and modular.
  • Have pushed deep learning models in production.
  • Have been responsible for breaking down and solving complex problems.
  • Have developed engineering principles and designed processes/workflows.
  • Have experience working in Python, sklearn, PyTorch, and TensorFlow, and are an expert in at least one of those technologies (an illustrative sketch follows this list).
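
For illustration only, a tiny end-to-end text-classification sketch in scikit-learn on made-up search queries; the real work described above centres on fine-tuning transformer language models, but the train/predict loop looks the same in spirit:

```python
# Toy data and labels are hypothetical; this is a sketch of the basic
# train/predict workflow, not the production search models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = ["red cotton saree under 2000", "track my order",
           "blue running shoes", "cancel my order",
           "silk kurta for men", "return policy for shoes"]
labels = ["product", "support", "product", "support", "product", "support"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(queries, labels)

print(model.predict(["where is my package", "green linen shirt"]))
```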

What can you expect from Zevi?

  • Closely work with leading enterprise engineering teams.
  • Be a part of a highly motivated core team.
  • Get access to, and contribute to, all strategies being built by Zevi.
  • Full ownership of your product line.
Job posted by
Anshul Basia
Big Data
Scala
Spark
Hadoop
Python
Amazon Web Services (AWS)
Bengaluru (Bangalore), Hyderabad, Pune
9 - 16 yrs
₹7L - ₹32L / yr
Greetings,

We have an urgent requirement for the post of Big Data Architect at a reputed MNC.

Location: Pune/Nagpur, Goa, Hyderabad/Bangalore

Job Requirements:

  • 9+ years of total experience, preferably in the big data space.
  • Experience creating Spark applications using Scala to process data (an illustrative PySpark sketch follows this list).
  • Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
  • Experience in Spark job performance tuning and optimization.
  • Experience processing data using Kafka/Python.
  • Experience configuring Kafka topics to optimize performance.
  • Proficiency in writing SQL queries to process data in a data warehouse.
  • Hands-on experience with Linux commands to troubleshoot/debug issues and with creating shell scripts to automate tasks.
  • Experience with AWS services such as EMR.
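
For illustration, a minimal Spark aggregation of the kind described above; the role expects Scala, but the sketch below uses PySpark, and the paths and column names are hypothetical:

```python
# Hypothetical input/output paths and columns; illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-aggregation").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("events"),
         F.countDistinct("user_id").alias("users"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet("s3://example-bucket/daily/")
```
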
Job posted by
Haina khan

Data Science - Risk

at Rupifi

Founded 2020  •  Product  •  20-100 employees  •  Raised funding
Data Analytics
Risk Management
Risk analysis
Data Science
Machine Learning (ML)
Python
SQL
Data Visualization
Big Data
Tableau
Data Structures
Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹50L / yr

Data Scientist (Risk)/Sr. Data Scientist (Risk)


As a part of the Data Science/Analytics team at Rupifi, you will play a significant role in helping define the business/product vision and deliver it from the ground up by working with passionate high-performing individuals in a very fast-paced working environment.


You will work closely with Data Scientists & Analysts, Engineers, Designers, Product Managers, Ops Managers and Business Leaders, and help the team make informed data-driven decisions and deliver high business impact.


Preferred Skills & Responsibilities: 

  1. Analyze data to better understand potential risks, concerns and outcomes of decisions.
  2. Aggregate data from multiple sources to provide a comprehensive assessment.
  3. Past experience of working with business users to understand and define inputs for risk models.
  4. Ability to design and implement best in class Risk Models in Banking & Fintech domain.
  5. Ability to quickly understand changing market trends and incorporate them into model inputs.
  6. Expertise in statistical analysis and modeling.
  7. Ability to translate complex model outputs into understandable insights for business users.
  8. Collaborate with other team members to effectively analyze and present data.
  9. Conduct research into potential clients and understand the risks of accepting each one.
  10. Monitor internal and external data points that may affect the risk level of a decision.

Tech skills: 

  • Hands-on experience in Python & SQL.
  • Hands-on experience in any visualization tool, preferably Tableau.
  • Hands-on experience in machine learning and deep learning.
  • Experience in handling complex data sources.
  • Experience with modeling techniques in the fintech/banking domain (see the sketch after this list).
  • Experience working on big data and distributed computing.
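
As a rough illustration of the modelling workflow above, a minimal logistic-regression risk scorecard in scikit-learn; the file and column names are hypothetical:

```python
# Hypothetical dataset and feature names; a sketch of the workflow,
# not a production risk model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("loans.csv")  # e.g. an extract pulled via SQL
features = ["utilisation", "dpd_last_6m", "vintage_months", "avg_monthly_gmv"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["defaulted"],
    test_size=0.2, stratify=df["defaulted"], random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```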

Preferred Qualifications: 

  • A BTech/BE/MSc degree in Math, Engineering, Statistics, Economics, ML, Operations Research, or a similar quantitative field.
  • 3 to 10 years of modeling experience in the fintech/banking domain in fields like collections, underwriting, customer management, etc.
  • Strong analytical skills with good problem-solving ability
  • Strong presentation and communication skills
  • Experience in working on advanced machine learning techniques
  • Quantitative and analytical skills with a demonstrated ability to understand new analytical concepts.
Job posted by
Richa Tiwari

Data Engineering Head

at a FinTech NBFC dedicated to driving financial inclusion

Agency job
via Jobdost
Data engineering
Spark
Big Data
Data engineer
Hadoop
Spring
Javascript
NodeJS (Node.js)
Amazon Web Services (AWS)
Python
Flask
Java
Express
MongoDB
SQL
NOSQL Databases
DynamoDB
data savviness
Bengaluru (Bangalore)
8 - 12 yrs
₹20L - ₹25L / yr
  • Play a critical role as a member of the leadership team in shaping and supporting our overall company vision, day-to-day operations, and culture.
  • Set the technical vision and build the technical product roadmap from launch to scale, including defining long-term goals and strategies
  • Define best practices around coding methodologies, software development, and quality assurance
  • Define innovative technical requirements and systems while balancing time, feasibility, cost and customer experience
  • Build and support production products
  • Ensure our internal processes and services comply with privacy and security regulations
  • Establish a high performing, inclusive engineering culture focused on innovation, execution, growth and development
  • Set a high bar for our overall engineering practices in support of our mission and goals
  • Develop goals, roadmaps and delivery dates to help us scale quickly and sustainably
  • Collaborate closely with Product, Business, Marketing and Data Science
  • Experience with financial and transactional systems
  • Experience engineering for large volumes of data at scale
  • Experience with financial audit and compliance is a plus
  • Experience building successful consumer-facing web and mobile apps at scale
Job posted by
Mamatha A

Data Engineer

at SenecaGlobal

Founded 2007  •  Products & Services  •  100-1000 employees  •  Profitable
Python
PySpark
Spark
Scala
Microsoft Azure Data factory
Remote, Hyderabad
4 - 6 yrs
₹15L - ₹20L / yr
  • Should have good experience with Python or Scala, and with PySpark/Spark
  • Experience with advanced SQL
  • Experience with Azure Data Factory and Databricks
  • Experience with Azure IoT, Cosmos DB, and Blob Storage
  • Experience with API management and FHIR API development
  • Proficient with Git and CI/CD best practices
  • Experience working with Snowflake is a plus
Job posted by
Shiva V

Data Analyst

at a modern Ayurvedic nutrition brand

Agency job
via Jobdost
Data Analytics
Data Analyst
MS-Excel
SQL
Python
R Language
Bengaluru (Bangalore)
1.5 - 3 yrs
₹5L - ₹5L / yr
About the role:
We are looking for a motivated data analyst with sound experience in handling web/digital analytics to join us as part of the Kapiva D2C Business Team. This team is primarily responsible for driving sales and customer engagement on our website. This channel has grown 5x in revenue over the last 12 months and is poised to grow another 5x over the next six. It represents a high-growth, important part of our overall e-commerce growth strategy.
The mandate here is to run an end-to-end sustainable e-commerce business, boost sales through marketing campaigns, and build a cutting edge product (website) that optimizes the customer’s journey as well as increases customer lifetime value.
The Data Analyst will support the business heads by providing data-backed insights in order to drive customer growth, retention and engagement. They will be required to set up and manage reports, test various hypotheses, and coordinate with various stakeholders on a day-to-day basis.


Job Responsibilities:
Strategy and planning:
● Work with the D2C functional leads and support analytics planning on a quarterly/ annual basis
● Identify reports and analytics needed to be conducted on a daily/ weekly/ monthly frequency
● Drive planning for hypothesis-led testing of key metrics across the customer funnel
Analytics:
● Interpret data, analyze results using statistical techniques and provide ongoing reports
● Analyze large amounts of information to discover trends and patterns
● Work with business teams to prioritize business and information needs
● Collaborate with engineering and product development teams to set up data infrastructure as needed

Reporting and communication:
● Prepare reports / presentations to present actionable insights that can drive business objectives
● Set up live dashboards reporting key cross-functional metrics
● Coordinate with various stakeholders to collect useful and required data
● Present findings to business stakeholders to drive action across the organization
● Propose solutions and strategies to business challenges

Requirements sought:
Must haves:
● Bachelor's/Master's degree in Mathematics, Economics, Computer Science, Information Management, Statistics, or a related field
● High proficiency in MS Excel and SQL
● Knowledge of one or more programming languages like Python/R; adept at queries, report writing, and presenting findings (see the sketch at the end of this section)
● Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy; working knowledge of statistics and statistical methods
● Ability to work in a highly dynamic environment across cross-functional teams; good at coordinating with different departments and managing timelines
● Exceptional English written/verbal communication
● A penchant for understanding consumer traits and behavior and a keen eye for detail

Good to have:
● Hands-on experience with one or more web analytics tools like Google Analytics, Mixpanel, Kissmetrics, Heap, Adobe Analytics, etc.
● Experience in using business intelligence tools like Metabase, Tableau, Power BI is a plus
● Experience in developing predictive models and machine learning algorithms
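
For illustration, a minimal pandas sketch of the weekly funnel reporting described above; the export file and columns are hypothetical:

```python
# Hypothetical analytics export; illustrative only.
import pandas as pd

sessions = pd.read_csv("sessions.csv", parse_dates=["date"])  # one row per session

weekly = (
    sessions
    .assign(week=sessions["date"].dt.to_period("W"))
    .groupby("week")
    .agg(sessions=("session_id", "nunique"),
         orders=("order_id", "nunique"),
         revenue=("order_value", "sum"))
)
weekly["conversion_rate"] = weekly["orders"] / weekly["sessions"]

print(weekly.tail())
```
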
Job posted by
Sathish Kumar

Data Scientist

at one of the fastest-growing data science companies in Chennai

Agency job
via SourceGrids
Data Science
Machine Learning (ML)
Python
Statistical Modeling
Excel VBA
SQL
R Programming
Chennai, Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹20L / yr
What you will be doing:
- Understand business problems and translate business requirements into technical requirements.
- Conduct complex data analysis to ensure data quality and reliability, i.e., make the data talk by extracting, preparing, and transforming it.
- Identify, develop, and implement statistical techniques and algorithms to address business challenges and add value to the organization.
- Gather requirements and communicate findings in the form of a meaningful story to the stakeholders.
- Build & implement data models using predictive modelling techniques. Interact with clients and provide support for queries and delivery adoption.
- Lead and mentor data analysts.

What we are looking for:
- Apart from your love for data and your ability to code even while sleeping, you would need the following:
- A minimum of 2 years of experience in designing and delivering data science solutions.
- Successful retail/BFSI/FMCG/manufacturing/QSR projects in your kitty to show off.
- Deep understanding of various statistical techniques, mathematical models, and algorithms to start the conversation with the data in hand.
- Ability to choose the right model for the data and translate it into code using R, Python, VBA, SQL, etc.
- Bachelor's/Master's degree in Engineering/Technology, an MBA from a Tier-1 B-school, or an MSc in Statistics or Mathematics.
Job posted by
Farhin Shaikh

Project Engineer Intern

at Helical IT Solution

Founded 2012  •  Products & Services  •  20-100 employees  •  Profitable
PySpark
Data engineering
Big Data
Hadoop
Spark
Hibernate (Java)
Jasmine (Javascript Testing Framework)
SQL
Python
Hyderabad
0 - 0 yrs
₹1.2L - ₹3.5L / yr

Job description

About Company
Helical Insight, an open-source Business Intelligence tool from Helical IT Solutions Pvt. Ltd., based out of Hyderabad, is looking for freshers with strong knowledge of SQL. Helical Insight has more than 50 clients from various sectors and has been awarded the most promising company in the Business Intelligence space. We are looking for a rockstar teammate to join our company.

Job Brief
We are looking for a Business Intelligence (BI) Developer to create and manage BI and analytics solutions that turn data into knowledge.
In this role, you should have a background in data and business analysis. You should be analytical and an excellent communicator. If you also have business acumen and a problem-solving aptitude, we'd like to meet you. Excellent knowledge of SQL queries is required. Basic knowledge of HTML, CSS, and JS is required.
You would be working closely with customers from various domains to understand their data and business requirements, and deliver the required analytics in the form of various reports, dashboards, etc. This is an excellent client-interfacing role with the opportunity to work across various sectors and geographies, as well as various kinds of databases, including NoSQL, RDBMS, graph DB, columnar DB, etc.
Skill set and qualifications required

Responsibilities
  • Attend client calls to gather requirements and show progress
  • Translate business needs to technical specifications
  • Design, build and deploy BI solutions (e.g. reporting tools)
  • Maintain and support data analytics platforms
  • Conduct unit testing and troubleshooting
  • Evaluate and improve existing BI systems
  • Collaborate with teams to integrate systems
  • Develop and execute database queries and conduct analyses
  • Create visualizations and reports for requested projects
  • Develop and update technical documentation
Requirements
  • Excellent expertise in SQL queries (see the sketch at the end of this posting)
  • Proven experience as a BI Developer or Data Scientist
  • Background in data warehouse design (e.g. dimensional modeling) and data mining
  • In-depth understanding of database management systems, online analytical processing (OLAP), and ETL (extract, transform, load) frameworks
  • Familiarity with BI technologies
  • Proven ability to take initiative and be innovative
  • Analytical mind with a problem-solving aptitude
  • BE in Computer Science/IT
Education: BE/BTech/MCA/BCA/MTech/MS or equivalent preferred.
Interested candidates can call us on +91 7569 765 162.
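
For illustration, the kind of aggregation query a BI report in this role is typically built on, run here against an in-memory SQLite database with made-up tables:

```python
# Table and data are made up; the point is the SQL, not the driver.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, region TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'A', 'South', 120.0), (2, 'B', 'North', 80.0),
        (3, 'A', 'South', 200.0), (4, 'C', 'North', 50.0);
""")

query = """
    SELECT region,
           COUNT(DISTINCT customer) AS customers,
           SUM(amount)              AS revenue
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC;
"""
for row in conn.execute(query):
    print(row)
```
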
Job posted by
Bhavani Thanga

AWS Data Engineer

at a company using advanced technology to solve business problems (A1)

Agency job
via Multi Recruit
Python
PySpark
Knowledge in AWS
Hyderabad
2 - 4 yrs
₹10L - ₹15L / yr
  • Desire to explore new technology and break new ground.
  • Are passionate about Open Source technology, continuous learning, and innovation.
  • Have the problem-solving skills, grit, and commitment to complete challenging work assignments and meet deadlines.

Qualifications

  • Engineer enterprise-class, large-scale deployments, and deliver Cloud-based Serverless solutions to our customers.
  • You will work in a fast-paced environment with leading microservice and cloud technologies, and continue to develop your all-around technical skills.
  • Participate in code reviews and provide meaningful feedback to other team members.
  • Create technical documentation.
  • Develop thorough Unit Tests to ensure code quality.

Skills and Experience

  • Advanced skills in troubleshooting and tuning AWS Lambda functions developed with Java and/or Python (see the sketch after this list).
  • Experience with event-driven architecture design patterns and practices
  • Experience in database design and architecture principles and strong SQL abilities
  • Message brokers like Kafka and Kinesis
  • Experience with Hadoop, Hive, and Spark (either PySpark or Scala)
  • Demonstrated experience owning enterprise-class applications and delivering highly available distributed, fault-tolerant, globally accessible services at scale.
  • Good understanding of distributed systems.
  • Candidates will be self-motivated and display initiative, ownership, and flexibility.
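
For illustration, a minimal Python Lambda handler of the kind described above, reacting to an S3 event; the bucket contents and downstream step are hypothetical:

```python
# Illustrative only: assumes each S3 object is a JSON array of records.
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        payload = json.loads(body)
        # ... transform and forward, e.g. to Kinesis or DynamoDB ...
        print(f"processed s3://{bucket}/{key} ({len(payload)} records)")
    return {"status": "ok"}
```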

 

Preferred Qualifications

  • AWS Lambda function development experience with Java and/or Python.
  • Lambda triggers such as SNS, SES, or cron.
  • Databricks
  • Cloud development experience with AWS services, including:
  • IAM
  • S3
  • EC2
  • AWS CLI
  • API Gateway
  • ECR
  • CloudWatch
  • Glue
  • Kinesis
  • DynamoDB
  • Java 8 or higher
  • ETL data pipeline building
  • Data Lake Experience
  • Python
  • Docker
  • MongoDB or similar NoSQL DB.
  • Relational Databases (e.g., MySQL, PostgreSQL, Oracle, etc.).
  • Gradle and/or Maven.
  • JUnit
  • Git
  • Scrum
  • Experience with Unix and/or macOS.
  • Immediate Joiners

Nice to have:

  • AWS / GCP / Azure Certification.
  • Cloud development experience with Google Cloud or Azure

 

Job posted by
Ranjini A R

Head Data Science

at a US-based product development company

Data Science
Natural Language Processing (NLP)
Machine Learning (ML)
Deep Learning
Predictive modelling
Remote, Noida, NCR (Delhi | Gurgaon | Noida)
8 - 15 yrs
₹30L - ₹45L / yr

Responsibilities: 

  • Identify complex business problems and work towards building analytical solutions in order to create large business impact.
  • Demonstrate leadership through innovation in software and data products from ideation/conception through design, development and ongoing enhancement, leveraging user research techniques, traditional data tools, and techniques from the data science toolkit such as predictive modelling, NLP, statistical analysis, vector space modelling, machine learning etc.
  • Collaborate and ideate with cross-functional teams to identify strategic questions for the business that can be solved and champion the effectiveness of utilizing data, analytics, and insights to shape business.
  • Contribute to company growth efforts, increasing revenue and supporting other key business outcomes using analytics techniques.
  • Focus on driving operational efficiencies by use of data and analytics to impact cost and employee efficiency.
  • Baseline current analytics capability, and ensure optimum utilization and continued advancement to stay abreast of industry developments.
  • Establish self as a strategic partner with stakeholders, focused on full innovation system and fully supportive of initiatives from early stages to activation.
  • Review stakeholder objectives and team's recommendations to ensure alignment and understanding.
  • Drive analytics thought leadership and effectively contribute towards transformational initiatives.
  • Ensure the accuracy of data and of reporting employees' deliverables through comprehensive policies and processes.
Job posted by
Samir Jha