Data Scientist (Banking Domain Mandatory)
at MNC
Agency job
4 - 7 yrs
₹10L - ₹20L / yr
Bengaluru (Bangalore)
Skills
Data Science
Python
Machine Learning (ML)
Deep Learning
SQL
Work-days: Sunday through Thursday
Work shift: Day time


• Strong problem-solving skills with an emphasis on product development.
• Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
• Experience in building ML pipelines with Apache Spark and Python.
• Proficiency in implementing the end-to-end data science life cycle.
• Experience in model fine-tuning and advanced grid search techniques.
• Experience working with and creating data architectures.
• Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
• Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
• Excellent written and verbal communication skills for coordinating across teams.
• A drive to learn and master new technologies and techniques.
• Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
• Develop custom data models and algorithms to apply to data sets.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
• Develop the company A/B testing framework and test model quality (a minimal illustrative test appears after this list).
• Coordinate with different functional teams to implement models and monitor outcomes.
• Develop processes and tools to monitor and analyze model performance and data accuracy.
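As context for the A/B testing responsibility above, here is a minimal sketch of the kind of significance test such a framework typically wraps, using a two-proportion z-test from statsmodels. The conversion counts, sample sizes, and 5% threshold are illustrative assumptions, not details from this role.

```python
# Minimal A/B test sketch: compare conversion rates of a control and a variant
# using a two-proportion z-test (statsmodels). All counts are illustrative.
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 530]      # successes in control and variant (hypothetical)
exposures = [10_000, 10_000]  # users exposed to each arm (hypothetical)

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the variant's conversion rate differs from control.")
else:
    print("Insufficient evidence of a difference at the 5% level.")
```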

Key skills:
● Strong knowledge of Data Science pipelines with Python
● Object-oriented programming
● A/B testing frameworks and model fine-tuning (see the sketch after this list)
● Proficiency in using the scikit-learn, NumPy, and pandas packages in Python
Nice to have:
● Ability to work with containerized solutions: Docker/Compose/Swarm/Kubernetes
● Unit testing, test-driven development practice
● DevOps, continuous integration / continuous deployment experience
● Agile development environment experience, familiarity with Scrum
● Deep learning knowledge
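As context for the model fine-tuning and scikit-learn/NumPy/pandas skills listed above, here is a hedged sketch of grid-search fine-tuning over a scikit-learn pipeline. The synthetic dataset, estimator, and parameter grid are illustrative assumptions rather than anything prescribed by the role.

```python
# Hedged sketch: hyper-parameter fine-tuning with an exhaustive grid search
# over a scikit-learn pipeline. The dataset and grid are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(random_state=42)),
])

param_grid = {
    "model__n_estimators": [100, 300],
    "model__max_depth": [None, 10, 20],
}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="roc_auc", n_jobs=-1)
search.fit(X_train, y_train)
print("Best params:", search.best_params_)
print("Held-out AUC:", search.score(X_test, y_test))
```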


Similar jobs

Jio Platforms Limited
Posted by Dixit Nahar
Navi Mumbai, Mumbai
3 - 5 yrs
₹9L - ₹15L / yr
Python
TensorFlow
Keras
Apache Kafka
Spark
+7 more
Role: Data Scientist / Machine Learning Scientist / Deep Learning Engineer, 3 - 5 yrs experience.
Programming (must know): Python, TensorFlow, Keras, Kafka, Spark.
Must have worked in Video Analytics with at least 2 deep learning models such as R-CNN and LSTM, object detection models such as YOLO, and object tracking models such as Deep SORT.
Must have good model training and testing experience with structured (statistical machine learning) and unstructured data.
Must be good with statistics.
Good to have: data visualization experience in Python or any data visualization tool.
Good to have: Kubernetes and multiprocessing experience, and MLOps tooling such as Docker, Hydra, etc.
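For illustration only, a minimal TensorFlow/Keras sketch of a frame-level classifier; it is not one of the models named in the posting (R-CNN, LSTM, YOLO, Deep SORT), and the input size, class count, and training call are placeholder assumptions.

```python
# Hedged sketch: a small Keras CNN for classifying individual video frames.
# Input size, class count, and training data are placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),                # RGB frame
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),      # 5 hypothetical classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_frames, train_labels, epochs=10, validation_split=0.2)
```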

Team: We are a team of 9 data scientists working on Video Analytics and Data Analytics projects for the internal AI requirements of Reliance Industries as well as for external business. At any given time, we make progress on multiple projects (at least 4) in Video Analytics or Data Analytics.
Bengaluru (Bangalore), Gurugram
1 - 6 yrs
₹7L - ₹15L / yr
Market Research
Data Analytics
Python
R Programming
Linear regression
+4 more

Company Profile:

The company is the world's No. 1 global management consulting firm.


Job Qualifications
• Graduate or postgraduate degree in statistics, economics, econometrics, computer science, engineering, or mathematics
• 2-5 years of relevant experience
• Adept in forecasting, regression analysis and segmentation work
• Understanding of modeling techniques, specifically logistic regression, linear regression, cluster analysis, CHAID, etc.
• Statistical programming software experience in R & Python, comfortable working with large data sets; SAS & SQL are also preferred
• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions
• Excellent time management skills
• Good written and verbal communication skills; understanding of both written and spoken English
• Strong interpersonal skills
• Ability to act autonomously, bringing structure and organization to work
• Creative and action-oriented mindset
• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly and methodologies are regularly challenged
• Ability to work under pressure and deliver on tight deadlines
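As a small illustration of the regression work listed above, here is a hedged statsmodels OLS sketch on synthetic data; the predictors and coefficients are made up for demonstration.

```python
# Hedged sketch: ordinary least squares regression with statsmodels on
# synthetic data, the kind of baseline used in segmentation/forecasting work.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three illustrative predictors
y = 2.0 + X @ np.array([1.5, -0.8, 0.3]) + rng.normal(scale=0.5, size=500)

X_design = sm.add_constant(X)                  # add an intercept column
model = sm.OLS(y, X_design).fit()
print(model.summary())                         # coefficients, t-tests, R^2
```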
Hyderabad
3 - 4 yrs
₹10L - ₹15L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
TensorFlow
+5 more

At Livello we are building machine-learning-based demand forecasting tools as well as computer-vision-based multi-camera product recognition solutions that detect people and products in order to track the items inserted into or removed from shelves based on the hand movements of users. We are building models to determine real-time inventory levels and user behaviour, as well as to predict how much of each product needs to be reordered so that the right products are delivered to the right locations at the right time to fulfil customer demand.


Responsibilities

  • Lead the CV and DS Team
  • Work in the area of Computer Vision and Machine Learning, with focus on product (primarily food) and people recognition (position, movement, age, gender, DSGVO compliant).
  • Your work will include formulating and developing machine learning models to solve the underlying problem.
  • You help build our smart supply chain system, keep up to date with the latest algorithmic improvements in forecasting and predictive areas, challenge the status quo
  • Statistical data modelling and machine learning research.
  • Conceptualize, implement and evaluate algorithmic solutions for supply forecasting, inventory optimization, predicting sales, and automating business processes
  • Conduct applied research to model complex dependencies, statistical inference and predictive modelling
  • Technological conception, design and implementation of new features
  • Quality assurance of the software through planning, creation and execution of tests
  • Work with a cross-functional team to define, build, test, and deploy applications


Requirements:

  • Master's/PhD in Mathematics, Statistics, Engineering, Econometrics, Computer Science or any related field.
  • 3-4 years of experience with computer vision and data science.
  • Relevant Data Science experience, deep technical background in applied data science (machine learning algorithms, statistical analysis, predictive modelling, forecasting, Bayesian methods, optimization techniques).
  • Experience building production-quality and well-engineered Computer Vision and Data Science products.
  • Experience in image processing, algorithms and neural networks.
  • Knowledge of the tools, libraries and cloud services for Data Science. Ideally Google Cloud Platform
  • Solid Python engineering skills and experience with Python, TensorFlow, Docker
  • Cooperative and independent work, analytical mindset, and willingness to take responsibility
  • Fluency in English, both written and spoken.
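As one hedged illustration of the demand-forecasting problem described above, here is a univariate ARIMA baseline with statsmodels; the synthetic daily-demand series and the (2, 1, 1) order are assumptions, not Livello's actual models.

```python
# Hedged sketch: a univariate ARIMA baseline for per-product demand
# forecasting, as one starting point for the reorder-quantity problem.
# The series below is synthetic; real input would be daily sales per SKU.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
days = pd.date_range("2023-01-01", periods=180, freq="D")
demand = pd.Series(20 + 5 * np.sin(np.arange(180) / 7) + rng.poisson(3, 180),
                   index=days)

fit = ARIMA(demand, order=(2, 1, 1)).fit()
forecast = fit.forecast(steps=7)               # next week's expected demand
print(forecast.round(1))
```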
Mobile Programming LLC
Posted by Sukhdeep Singh
Bengaluru (Bangalore)
4 - 6 yrs
₹10L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
Snowflake schema
Snowflake
+5 more

Job Title: AWS-Azure Data Engineer with Snowflake

Location: Bangalore, India

Experience: 4+ years

Budget: 15 to 20 LPA

Notice Period: Immediate joiners or less than 15 days

Job Description:

We are seeking an experienced AWS-Azure Data Engineer with expertise in Snowflake to join our team in Bangalore. As a Data Engineer, you will be responsible for designing, implementing, and maintaining data infrastructure and systems using AWS, Azure, and Snowflake. Your primary focus will be on developing scalable and efficient data pipelines, optimizing data storage and processing, and ensuring the availability and reliability of data for analysis and reporting.

Responsibilities:

  1. Design, develop, and maintain data pipelines on AWS and Azure to ingest, process, and transform data from various sources.
  2. Optimize data storage and processing using cloud-native services and technologies such as AWS S3, AWS Glue, Azure Data Lake Storage, Azure Data Factory, etc.
  3. Implement and manage data warehouse solutions using Snowflake, including schema design, query optimization, and performance tuning.
  4. Collaborate with cross-functional teams to understand data requirements and translate them into scalable and efficient technical solutions.
  5. Ensure data quality and integrity by implementing data validation, cleansing, and transformation processes.
  6. Develop and maintain ETL processes for data integration and migration between different data sources and platforms.
  7. Implement and enforce data governance and security practices, including access control, encryption, and compliance with regulations.
  8. Collaborate with data scientists and analysts to support their data needs and enable advanced analytics and machine learning initiatives.
  9. Monitor and troubleshoot data pipelines and systems to identify and resolve performance issues or data inconsistencies.
  10. Stay updated with the latest advancements in cloud technologies, data engineering best practices, and emerging trends in the industry.

Requirements:

  1. Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
  2. Minimum of 4 years of experience as a Data Engineer, with a focus on AWS, Azure, and Snowflake.
  3. Strong proficiency in data modelling, ETL development, and data integration.
  4. Expertise in cloud platforms such as AWS and Azure, including hands-on experience with data storage and processing services.
  5. In-depth knowledge of Snowflake, including schema design, SQL optimization, and performance tuning.
  6. Experience with scripting languages such as Python or Java for data manipulation and automation tasks.
  7. Familiarity with data governance principles and security best practices.
  8. Strong problem-solving skills and ability to work independently in a fast-paced environment.
  9. Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams and stakeholders.
  10. Immediate joiner or notice period less than 15 days preferred.

If you possess the required skills and are passionate about leveraging AWS, Azure, and Snowflake to build scalable data solutions, we invite you to apply. Please submit your resume and a cover letter highlighting your relevant experience and achievements in the AWS, Azure, and Snowflake domains.
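As a hedged illustration of the Snowflake loading step such a pipeline might include, here is a sketch using the snowflake-connector-python package's write_pandas helper; the account, credentials, and object names are placeholders, the target table is assumed to already exist, and secrets should come from a secrets manager in practice.

```python
# Hedged sketch: loading a transformed pandas DataFrame into an existing
# Snowflake staging table with snowflake-connector-python. All names below
# are placeholders.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

df = pd.DataFrame({"ORDER_ID": [1, 2], "AMOUNT": [99.5, 42.0]})

conn = snowflake.connector.connect(
    account="my_account",        # placeholder
    user="etl_user",             # placeholder
    password="***",              # use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    success, _, nrows, _ = write_pandas(conn, df, table_name="ORDERS_STAGE")
    print(f"Loaded {nrows} rows, success={success}")
finally:
    conn.close()
```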

Abu Dhabi, Dubai
8 - 15 yrs
₹35L - ₹50L / yr
Informatica
Big Data
Spark
Hadoop
SQL
Skills- Informatica with Big Data Management
 
1. Minimum 6 to 8 years of experience in Informatica BDM development
 
2. Experience working on Spark/SQL
 
3. Develops Informatica mappings/SQL
 
4. Should have experience in Hadoop, Spark, etc.
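For illustration, a hedged PySpark sketch of the kind of Spark SQL aggregation such mappings push down; the input path, columns, and output location are placeholder assumptions.

```python
# Hedged sketch: a Spark SQL aggregation in PySpark. Paths and column
# names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bdm_style_aggregation").getOrCreate()

orders = spark.read.parquet("/data/raw/orders")      # placeholder path
orders.createOrReplaceTempView("orders")

daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date
""")
daily.write.mode("overwrite").parquet("/data/curated/daily_orders")
spark.stop()
```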

Work Days-
 
Sunday to Thursday- Day shift
 
(Friday and Saturday would be weekly off.)
Gulf client
Remote, Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹20L / yr
PowerBI
Data Warehouse (DWH)
SQL
DAX
Power query
Key Skills:
• Strong knowledge of Power BI (DAX + Power Query + Power BI Service + Power BI Desktop visualisations) and Azure data storage services.
• Should have experience with Power BI mobile dashboards.
• Strong knowledge of SQL.
• Good knowledge of DWH concepts.
• Work as an independent contributor at the client location.
• Implement access control and enforce the required security.
• Candidate must have very good communication skills.
VIMANA
Posted by Loshy Chandran
Remote, Chennai
2 - 5 yrs
₹10L - ₹20L / yr
Data engineering
Data Engineer
Apache Kafka
Big Data
Java
+4 more

We are looking for passionate, talented and super-smart engineers to join our product development team. If you are someone who innovates, loves solving hard problems, and enjoys end-to-end product development, then this job is for you! You will be working with some of the best developers in the industry in a self-organising, agile environment where talent is valued over job title or years of experience.

 

Responsibilities:

  • You will be involved in end-to-end development of VIMANA technology, adhering to our development practices and expected quality standards.
  • You will be part of a highly collaborative Agile team which passionately follows SAFe Agile practices, including pair-programming, PR reviews, TDD, and Continuous Integration/Delivery (CI/CD).
  • You will be working with cutting-edge technologies and tools for stream processing using Java, NodeJS and Python, using frameworks like Spring, RxJS etc.
  • You will be leveraging big data technologies like Kafka, Elasticsearch and Spark, processing more than 10 Billion events per day to build a maintainable system at scale.
  • You will be building Domain Driven APIs as part of a micro-service architecture.
  • You will be part of a DevOps culture where you will get to work with production systems, including operations, deployment, and maintenance.
  • You will have an opportunity to continuously grow and build your capabilities, learning new technologies, languages, and platforms.

 

Requirements:

  • Undergraduate degree in Computer Science or a related field, or equivalent practical experience.
  • 2 to 5 years of product development experience.
  • Experience building applications using Java, NodeJS, or Python.
  • Deep knowledge in Object-Oriented Design Principles, Data Structures, Dependency Management, and Algorithms.
  • Working knowledge of message queuing, stream processing, and highly scalable Big Data technologies.
  • Experience in working with Agile software methodologies (XP, Scrum, Kanban), TDD and Continuous Integration (CI/CD).
  • Experience using NoSQL databases like MongoDB or Elasticsearch.
  • Prior experience with container orchestrators like Kubernetes is a plus.
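As a hedged illustration of the stream-processing stack mentioned above, here is a minimal Kafka consumer using the kafka-python package; the topic, broker address, and consumer group are placeholder assumptions, and the real pipeline described here also involves Elasticsearch and Spark.

```python
# Hedged sketch: a minimal Kafka consumer in Python (kafka-python package).
# Topic name, broker address, and group id are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "device-events",                              # placeholder topic
    bootstrap_servers=["localhost:9092"],
    group_id="analytics-workers",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A real pipeline would enrich the event and index it downstream
    # (e.g. into Elasticsearch); here we just print it.
    print(message.topic, message.partition, message.offset, event)
```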
About VIMANA

We build products and platforms for the Industrial Internet of Things. Our technology is being used around the world in mission-critical applications - from improving the performance of manufacturing plants, to making electric vehicles safer and more efficient, to making industrial equipment smarter.

Please visit https://govimana.com/ to learn more about what we do.

Why Explore a Career at VIMANA
  • We recognize that our dedicated team members make us successful and we offer competitive salaries.
  • We are a workplace that values work-life balance, provides flexible working hours, and full time remote work options.
  • You will be part of a team that is highly motivated to learn and work on cutting edge technologies, tools, and development practices.
  • Bon Appetit! Enjoy catered breakfasts, lunches and free snacks!

VIMANA Interview Process
We usually aim to complete all the interviews within a week and provide prompt feedback to the candidate. As of now, all interviews are conducted online due to the COVID-19 situation.

1. Telephonic screening (30 min)

A 30-minute telephonic interview to understand and evaluate the candidate's fit with the job role and the company, clarify any queries regarding the job/company, and give an overview of further interview rounds.

2. Technical Rounds

This would be a deep technical round to evaluate the candidate's technical capability pertaining to the job role.

3. HR Round

The candidate's team and cultural fit will be evaluated during this round.

We would proceed with releasing the offer if the candidate clears all the above rounds.

Note: In certain cases, we might schedule additional rounds if needed before releasing the offer.
Graphene Services Pte Ltd
Posted by Swetha Seshadri
Bengaluru (Bangalore)
2 - 5 yrs
Best in industry
Python
MySQL
SQL
NOSQL Databases
PowerBI
+2 more

About Graphene

Graphene is a Singapore-headquartered AI company which has been recognized as Singapore's Best Start-Up by Switzerland's Seedstars World, and has also been awarded best AI platform for healthcare at VivaTech Paris. Graphene India is also a member of the exclusive NASSCOM DeepTech Club. We are developing an AI platform which is disrupting and replacing traditional market research with unbiased insights, with a focus on healthcare, consumer goods and financial services.

  

Graphene was founded by Corporate leaders from Microsoft and P&G, and works closely with the Singapore Government & Universities in creating cutting edge technology which is gaining traction with many Fortune 500 companies in India, Asia and USA.  

Graphene’s culture is grounded in delivering customer delight by recruiting high potential talent and providing an intense learning and collaborative atmosphere, with many ex-employees now hired by large companies across the world.  

  

Graphene has a 6-year track record of delivering financially sustainable growth and is one of the rare start-ups which is self-funded yet profitable and debt-free. We have already built a strong bench of Singaporean leaders and are recruiting and grooming more talent with a focus on our US expansion.

  

Job title: Data Analyst

Job Description  

The Data Analyst is responsible for data storage, data enrichment, data transformation, data gathering based on data requests, and testing and maintaining data pipelines.

Responsibilities and Duties  

  • Managing end to end data pipeline from data source to visualization layer 
  • Ensure data integrity; Ability to pre-empt data errors 
  • Organized management and storage of data
  • Provide quality assurance of data, working with quality assurance analysts if necessary. 
  • Commissioning and decommissioning of data sets. 
  • Processing confidential data and information according to guidelines. 
  • Helping develop reports and analysis. 
  • Troubleshooting the reporting database environment and reports. 
  • Managing and designing the reporting environment, including data sources, security, and metadata. 
  • Supporting the data warehouse in identifying and revising reporting requirements. 
  • Supporting initiatives for data integrity and normalization. 
  • Evaluating changes and updates to source production systems. 
  • Training end-users on new reports and dashboards. 
  • Initiate data gathering based on data requirements 
  • Analyse the raw data to check if the requirement is satisfied 

 

Qualifications and Skills   

  

  • Technologies required: Python, SQL / NoSQL databases (Cosmos DB)
  • Experience required: 2 – 5 years, including experience in data analysis using Python
  • Understanding of the software development life cycle
  • Plan, coordinate, develop, test, and support data pipelines; document them; support reporting dashboards (Power BI)
  • Automate the steps needed to transform and enrich data.
  • Communicate issues, risks, and concerns proactively to management. Document the process thoroughly to allow peers to assist with support as needed.   
  • Excellent verbal and written communication skills   
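As a hedged illustration of the transform-and-enrich work described above, here is a small pandas sketch; the file names, columns, and lookup table are placeholder assumptions, not Graphene's actual pipeline.

```python
# Hedged sketch: a small pandas transform-and-enrich step of the kind run
# before data lands in a reporting layer. File names and columns are
# placeholders.
import pandas as pd

raw = pd.read_csv("responses_raw.csv")                 # placeholder source

clean = (
    raw.dropna(subset=["respondent_id"])
       .assign(response_date=lambda d: pd.to_datetime(d["response_date"]))
       .drop_duplicates(subset=["respondent_id", "question_id"])
)

# Enrich with a reference table, then aggregate for the dashboard layer.
segments = pd.read_csv("segments.csv")                 # placeholder lookup
enriched = clean.merge(segments, on="respondent_id", how="left")
summary = enriched.groupby(["segment", "question_id"], as_index=False).agg(
    responses=("respondent_id", "count")
)
summary.to_csv("responses_summary.csv", index=False)   # consumed by Power BI
```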
Ignite Solutions
Posted by Juzar Malubhoy
Pune
3 - 7 yrs
₹7L - ₹15L / yr
Machine Learning (ML)
Python
Data Science
We are looking for a Machine Learning Engineer with 3+ years of experience, with a background in statistics and hands-on experience in the Python ecosystem, using sound software engineering practices.

Skills & Knowledge:
• Formal knowledge of the fundamentals of probability & statistics, along with the ability to apply basic statistical analysis methods like hypothesis testing, t-tests, ANOVA, etc.
• Hands-on knowledge of data formats, data extraction, loading, wrangling, transformation, pre-processing and analysis.
• Thorough understanding of data-modeling and machine-learning concepts.
• Complete understanding and ability to apply, implement and adapt standard implementations of machine learning algorithms.
• Good understanding and ability to apply and adapt neural networks and deep learning, including common high-level deep learning architectures like CNNs and RNNs.
• Fundamentals of computer science & programming, especially data structures (like multi-dimensional arrays, trees, and graphs) and algorithms (like searching, sorting, and dynamic programming).
• Fundamentals of software engineering and system design, such as requirements analysis, REST APIs, database queries, system and library calls, version control, etc.

Languages and Libraries:
• Hands-on experience with Python and Python libraries for data analysis and machine learning, especially scikit-learn, TensorFlow, pandas, NumPy, statsmodels, and SciPy.
• Experience with R and its ecosystem is a plus.
• Knowledge of other open-source machine learning and data modeling frameworks like Spark MLlib, H2O, etc. is a plus.
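As a brief illustration of the hypothesis tests named above (t-tests and ANOVA), here is a SciPy sketch on synthetic samples; the group means and sizes are arbitrary.

```python
# Hedged sketch: a two-sample t-test and a one-way ANOVA with SciPy on
# synthetic samples. Group parameters are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(loc=10.0, scale=2.0, size=200)
group_b = rng.normal(loc=10.5, scale=2.0, size=200)
group_c = rng.normal(loc=11.0, scale=2.0, size=200)

t_stat, t_p = stats.ttest_ind(group_a, group_b)
print(f"t-test: t = {t_stat:.3f}, p = {t_p:.4f}")

f_stat, f_p = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA:  F = {f_stat:.3f}, p = {f_p:.4f}")
```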
Saama Technologies
Posted by Sandeep Chaudhary
Pune
2 - 5 yrs
₹1L - ₹18L / yr
Hadoop
Spark
Apache Hive
Apache Flume
Java
+5 more
Description
• Deep experience and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet and MapReduce.
• Strong understanding of development languages including Java, Python, Scala, and shell scripting.
• Expertise in Apache Spark 2.x framework principles and usage.
• Should be proficient in developing Spark batch and streaming jobs in Python, Scala or Java.
• Should have proven experience in performance tuning of Spark applications, both from an application-code and a configuration perspective.
• Should be proficient in Kafka and its integration with Spark.
• Should be proficient in Spark SQL and data warehousing techniques using Hive.
• Should be very proficient in Unix shell scripting and in operating on Linux.
• Should have knowledge of cloud-based infrastructure.
• Good experience in tuning Spark applications and performance improvements.
• Strong understanding of data profiling concepts and ability to operationalize analyses into design and development activities.
• Experience with best practices of software development: version control systems, automated builds, etc.
• Experienced in and able to lead the following phases of the Software Development Life Cycle on any project: feasibility planning, analysis, development, integration, test and implementation.
• Capable of working within a team or as an individual.
• Experience creating technical documentation.
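As a hedged illustration of the Spark streaming and Kafka integration mentioned above, here is a Spark Structured Streaming sketch that reads from Kafka and writes micro-batches to Parquet; the broker, topic, and storage paths are placeholder assumptions, and running it requires the spark-sql-kafka connector package on the classpath.

```python
# Hedged sketch: a Spark Structured Streaming job reading from Kafka and
# writing micro-batches to Parquet. Broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka_stream_ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "clinical-events")        # placeholder topic
         .load()
         .select(col("key").cast("string"), col("value").cast("string"),
                 col("timestamp"))
)

query = (
    events.writeStream.format("parquet")
          .option("path", "/data/streams/clinical_events")
          .option("checkpointLocation", "/data/checkpoints/clinical_events")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```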