Data Scientist
Fintech lead,
Agency job
3 - 6 yrs
₹10L - ₹25L / yr
Remote only
Skills
BERT
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Python

Responsibilities


• Research, develop, and maintain machine learning and statistical models for business requirements.
• Work across the spectrum of statistical modelling, including supervised, unsupervised, and deep learning techniques, to apply the right level of solution to the right problem.
• Coordinate with different functional teams to monitor outcomes and refine/improve the machine learning models.
• Implement models to uncover patterns and predictions, creating business value and innovation.
• Identify unexplored data opportunities for the business to unlock and maximize the potential of digital data within the organization.
• Develop NLP concepts and algorithms to classify and summarize structured/unstructured text data (a minimal classification sketch follows this list).
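For illustration only, here is a minimal sketch of the kind of BERT-based text classification this role describes. It assumes the Hugging Face transformers library and PyTorch; the checkpoint name and the two-label setup are placeholders, not this employer's actual stack.

    # Hedged sketch: BERT-style text classification with Hugging Face transformers.
    # Assumes `pip install transformers torch`; model and labels are illustrative only.
    import torch
    from transformers import BertTokenizerFast, BertForSequenceClassification

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    # The classification head here is untrained; in practice it would be
    # fine-tuned on labelled business text before use.
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

    texts = ["Payment failed twice this week.", "Great app, instant transfers!"]
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    with torch.no_grad():
        logits = model(**batch).logits
    print(logits.argmax(dim=-1))  # predicted class index per document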


Qualifications


• 3+ years of experience solving complex business problems using machine learning.
• Fluency in Python and hands-on experience with NLP techniques such as BERT is a must.
• Strong analytical and critical thinking skills.
• Experience in building production-quality models using state-of-the-art technologies.
• Familiarity with databases is desirable.
• Ability to collaborate on projects and work independently when required.
• Previous experience in the Fintech/payments domain is a bonus.
• Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or another quantitative field from a top-tier institute.



Similar jobs

FatakPay
Posted by Ajit Kumar
Mumbai
2 - 5 yrs
₹8L - ₹15L / yr
Python
Risk analysis
credit card

Job Title: Credit Risk Analyst

Company: FatakPay FinTech

Location: Mumbai, India

Salary Range: INR 8 - 15 Lakhs per annum

Job Description:

FatakPay, a leading player in the fintech sector, is seeking a dynamic and skilled Credit Risk Analyst to join our team in Mumbai. This position is tailored for professionals who are passionate about leveraging technology to enhance financial services. If you have a strong background in engineering and a keen eye for risk management, we invite you to be a part of our innovative journey.

Key Responsibilities:

  • Conduct thorough risk assessments by analyzing borrowers' financial data, including financial statements, credit scores, and income details.
  • Develop and refine predictive models using advanced statistical methods to forecast loan defaults and assess creditworthiness (a minimal modelling sketch follows this list).
  • Collaborate in the formulation and execution of credit policies and risk management strategies, ensuring compliance with regulatory standards.
  • Monitor and analyze the performance of loan portfolios, identifying trends, risks, and opportunities for improvement.
  • Stay updated with financial regulations and standards, ensuring all risk assessment processes are in compliance.
  • Prepare comprehensive reports on credit risk analyses and present findings to senior management.
  • Work closely with underwriting, finance, and sales teams to provide critical input influencing lending decisions.
  • Analyze market trends and economic conditions, adjusting risk assessment models and strategies accordingly.
  • Utilize cutting-edge financial technologies for more efficient and accurate data analysis.
  • Engage in continual learning to stay abreast of new tools, techniques, and best practices in credit risk management.
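
As an illustration of the loan-default modelling mentioned above, here is a minimal sketch assuming scikit-learn and pandas, a hypothetical loans.csv, and made-up column names; it is not FatakPay's actual model.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical borrower data; file name and columns are placeholders.
    df = pd.read_csv("loans.csv")
    features = ["credit_score", "monthly_income", "debt_to_income", "utilisation"]
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["defaulted"], test_size=0.2, random_state=42
    )

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    pd_scores = model.predict_proba(X_test)[:, 1]  # estimated probability of default
    print("Holdout AUC:", roc_auc_score(y_test, pd_scores))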

Qualifications:

  • Minimum qualification: B.Tech or Engineering degree from a reputed institution.
  • 2-4 years of experience in credit risk analysis, preferably in a fintech environment.
  • Proficiency in data analysis, statistical modeling, and machine learning techniques.
  • Strong analytical and problem-solving skills.
  • Excellent communication skills, with the ability to present complex data insights clearly.
  • A proactive approach to work in a fast-paced, technology-driven environment.
  • Up-to-date knowledge of financial regulations and compliance standards.


We look forward to discovering how your expertise and innovative ideas can contribute to the growth and success of FatakPay. Join us in redefining the future of fintech!





Fintech lead,
Agency job
via The Hub by Sridevi Viswanathan
Chennai
4 - 7 yrs
₹15L - ₹35L / yr
BERT
Machine Learning (ML)
Natural Language Processing (NLP)
Data Science
Python
We are looking for a Natural Language Processing (NLP) expert with strong computer science fundamentals and experience working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.

Roles and Responsibilities

 

  • Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
  • Mentor and coach other team members
  • Evaluate the performance of NLP models and ideate on how they can be improved (a small evaluation sketch follows this list)
  • Support internal and external NLP-facing APIs
  • Keep up to date on current research around NLP, Machine Learning and Deep Learning
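
Purely as an illustration of evaluating an NLP classifier (not this team's actual workflow), a minimal sketch using scikit-learn metrics on hypothetical labels and predictions:

    from sklearn.metrics import classification_report, f1_score

    # Hypothetical gold labels and model predictions for a 3-class NLP task.
    y_true = ["billing", "fraud", "billing", "support", "fraud", "support"]
    y_pred = ["billing", "fraud", "support", "support", "billing", "support"]

    print(classification_report(y_true, y_pred))
    print("Macro F1:", f1_score(y_true, y_pred, average="macro"))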

Mandatory Requirements

 

  • A graduate degree in any discipline, with at least 2 years of demonstrated experience as a Data Scientist.

Behavioral Skills

 

  • Strong analytical and problem-solving capabilities.

  • Proven ability to multi-task and deliver results within tight time frames
  • Must have strong verbal and written communication skills
  • Strong listening skills and eagerness to learn
  • Strong attention to detail and the ability to work efficiently in a team as well as individually

Technical Skills

 

Hands-on experience with

 

  • NLP
  • Deep Learning
  • Machine Learning
  • Python
  • BERT

Preferred Requirements

  • Experience in Computer Vision is preferred
SteelEye
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
1 - 8 yrs
₹10L - ₹40L / yr
Python
ETL
Jenkins
CI/CD
pandas
+6 more
Roles & Responsibilties
Expectations of the role
This role will be reporting into Technical Lead (Support). You will be expected to resolve bugs in the platform that are identified by Customers and Internal Teams. This role will progress towards SDE-2 in 12-15 months where the developer will be working on solving complex problems around scale and building out new features.
 
What will you do?
  • Fix issues with plugins for our Python-based ETL pipelines (a minimal pipeline sketch follows this list)
  • Help with automation of standard workflow
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Responsible for any refactoring of code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
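
A minimal sketch of the kind of Python/pandas ETL step this role works on, assuming pandas and entirely hypothetical file names and columns; it is not SteelEye's actual plugin interface.

    import pandas as pd

    def extract(path: str) -> pd.DataFrame:
        # Read a raw trade file; the path and schema here are placeholders.
        return pd.read_csv(path, parse_dates=["trade_time"])

    def transform(df: pd.DataFrame) -> pd.DataFrame:
        # Normalise column names and drop obviously bad or duplicate rows.
        df.columns = [c.strip().lower() for c in df.columns]
        return df.dropna(subset=["trade_id"]).drop_duplicates("trade_id")

    def load(df: pd.DataFrame, out_path: str) -> None:
        # Write the cleaned batch out; a real pipeline might target S3 or Elasticsearch.
        df.to_csv(out_path, index=False)

    if __name__ == "__main__":
        load(transform(extract("trades_raw.csv")), "trades_clean.csv")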
What are we looking for?
  • First and foremost you are a Python developer, experienced with the Python Data stack
  • You love and care about data
  • Your code is an artistic manifesto reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or you are keen to, your personal projects as well as any kind of contributions to the open-source communities if any)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
Bengaluru (Bangalore)
4 - 7 yrs
₹20L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
Roles & Responsibilties
What will you do?
  • Deliver plugins for our Python-based ETL pipelines
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Implement algorithms to analyse large data sets (a brief PySpark sketch follows this list)
  • Draft design documents that translate requirements into code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
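
As a hedged illustration of analysing large data sets with the Python data stack described here (the S3 path and columns are hypothetical, not this team's data), a minimal PySpark aggregation:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("order-analysis").getOrCreate()

    # Hypothetical event data stored as Parquet.
    orders = spark.read.parquet("s3://example-bucket/orders/")

    daily_totals = (
        orders
        .filter(F.col("status") == "COMPLETED")
        .groupBy(F.to_date("created_at").alias("day"))
        .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
        .orderBy("day")
    )
    daily_totals.show(10)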
What are we looking for?
  • First and foremost you are a Python developer, experienced with the Python Data stack
  • You love and care about data
  • Your code is an artistic manifesto reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or you are keen to, your personal projects as well as any kind of contributions to the open-source communities if any)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
Top startup of India - News App
Noida
2 - 5 yrs
₹20L - ₹35L / yr
Linux/Unix
Python
Hadoop
Apache Spark
MongoDB
+4 more
Responsibilities
● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional/non-functional business requirements.
● Build and optimize ‘big data’ data pipelines, architectures and data sets.
● Maintain, organize & automate data processes for various use cases.
● Identify trends, do follow-up analysis, and prepare visualizations.
● Create daily, weekly and monthly reports of product KPIs (a minimal reporting sketch follows this list).
● Create informative, actionable and repeatable reporting that highlights relevant business trends and opportunities for improvement.
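
A minimal sketch of the kind of daily KPI report mentioned above, assuming pandas and a hypothetical events file; the metric and column names are placeholders, not this product's real KPIs.

    import pandas as pd

    # Hypothetical raw events: one row per user action.
    events = pd.read_csv("events.csv", parse_dates=["timestamp"])

    daily_kpis = (
        events
        .assign(day=events["timestamp"].dt.date)
        .groupby("day")
        .agg(
            daily_active_users=("user_id", "nunique"),
            sessions=("session_id", "nunique"),
            articles_read=("event", lambda s: (s == "article_read").sum()),
        )
    )
    daily_kpis.to_csv("daily_kpis.csv")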

Required Skills And Experience:
● 2-5 years of work experience in data analytics, including analyzing large data sets.
● BTech in Mathematics/Computer Science.
● Strong analytical, quantitative and data interpretation skills.
● Hands-on experience with Python, Apache Spark, Hadoop, NoSQL databases (MongoDB preferred) and Linux is a must.
● Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
● Experience with Google Cloud Data Analytics products such as BigQuery, Dataflow, Dataproc etc. (or similar cloud-based platforms).
● Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks.
● Previous experience working at startups and/or in fast-paced environments.
● Previous experience as a data engineer or in a similar role.
Amagi Media Labs
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a Data Architecture strategy across various Data Lake platforms. You will help develop reference architecture and roadmaps to build highly available, scalable and distributed data platforms using cloud-based solutions to process high volume, high velocity and a wide variety of structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast’s industry-leading journalism.
Primary Responsibilities
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design and development delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM).
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on cloud.
• Proven leadership skills; demonstrated ability to mentor, influence and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, Datastage etc.)/ELT and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and data lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of the Agile framework and delivery.

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration like Airflow is a plus (a minimal DAG sketch follows this list)
● Exposure to any one of the NoSQL databases would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, context models
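
As a hedged illustration of the workflow orchestration mentioned above (not Condé Nast's actual pipelines), a minimal Airflow 2.x DAG with placeholder task logic:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw data from a hypothetical source")

    def load():
        print("write transformed data to a hypothetical warehouse")

    with DAG(
        dag_id="example_daily_etl",          # illustrative name only
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task            # extract runs before load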

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - in other words, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aimed to create a company culture where data was the common language and facilitate an environment where insights shared in real-time could improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups: Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops, Client Services), along with Data Strategy and monetization. The teams built capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
ERUDITUS Executive Education
Posted by Payal Thakker
Remote only
3 - 10 yrs
₹15L - ₹45L / yr
Docker
Kubernetes
Apache Kafka
Apache Beam
Python
+2 more
Emeritus is committed to teaching the skills of the future by making high-quality education accessible and affordable to individuals, companies, and governments around the world. It does this by collaborating with more than 50 top-tier universities across the United States, Europe, Latin America, Southeast Asia, India and China. Emeritus’ short courses, degree programs, professional certificates, and senior executive programs help individuals learn new skills and transform their lives, companies and organizations. Its unique model of state-of-the-art technology, curriculum innovation, and hands-on instruction from senior faculty, mentors and coaches has educated more than 250,000 individuals across 80+ countries. Founded in 2015, Emeritus, part of Eruditus Group, has more than 2,000 employees globally and offices in Mumbai, New Delhi, Shanghai, Singapore, Palo Alto, Mexico City, New York, Boston, London, and Dubai. Following its $650 million Series E funding round in August 2021, the Company is valued at $3.2 billion, and is backed by Accel, SoftBank Vision Fund 2, the Chan Zuckerberg Initiative, Leeds Illuminate, Prosus Ventures, Sequoia Capital India, and Bertelsmann.
 
 
As a data engineer at Emeritus, you'll be working on a wide variety of data problems. At this fast-paced company, you will frequently have to balance achieving an immediate goal with building sustainable and scalable architecture. The ideal candidate gets excited about streaming data, protocol buffers and microservices, and wants to develop and maintain a centralized data platform that provides accurate, comprehensive, and timely data to a growing organization.

Role & responsibilities:

    • Developing ETL pipelines for data replication
    • Analyze, query and manipulate data according to defined business rules and procedures
    • Manage very large-scale data from a multitude of sources into appropriate sets for research and development for data science and analysts across the company
    • Convert prototypes into production data engineering solutions through rigorous software engineering practices and modern deployment pipelines
    • Resolve internal and external data exceptions in timely and accurate manner
    • Improve multi-environment data flow quality, security, and performance 

Skills & qualifications:

    • Must have experience with:
    • virtualization, containers, and orchestration (Docker, Kubernetes)
    • creating log ingestion pipelines (Apache Beam) for both batch and stream processing (Pub/Sub, Kafka); a minimal pipeline sketch follows this list
    • workflow orchestration tools (Argo, Airflow)
    • supporting machine learning models in production
    • Have a desire to continually keep up with advancements in data engineering practices
    • Strong Python programming and exploratory data analysis skills
    • Ability to work independently and with team members from different backgrounds
    • At least a bachelor's degree in an analytical or technical field. This could be applied mathematics, statistics, computer science, operations research, economics, etc. Higher education is welcome and encouraged.
    • 3+ years of work in software/data engineering.
    • Superior interpersonal skills, independent judgment, and complex problem-solving ability
    • Global orientation, experience working across countries, regions and time zones
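
For illustration of the batch/streaming log ingestion noted in the skills above, a minimal Apache Beam sketch with a hypothetical input file and placeholder parsing; it is not Emeritus's actual pipeline.

    import json
    import apache_beam as beam

    def parse_log(line: str) -> dict:
        # Placeholder parser: real logs would follow a defined schema.
        return json.loads(line)

    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "ReadLogs" >> beam.io.ReadFromText("logs.jsonl")  # hypothetical file
            | "Parse" >> beam.Map(parse_log)
            | "KeepErrors" >> beam.Filter(lambda rec: rec.get("level") == "ERROR")
            | "Format" >> beam.Map(json.dumps)
            | "Write" >> beam.io.WriteToText("error_logs")
        )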
Emeritus provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Marj Technologies
Posted by Shyam Verma
Noida
3 - 10 yrs
₹7L - ₹10L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm

Job Description

We are looking for a highly capable machine learning engineer to optimize our deep learning systems. You will be evaluating existing deep learning (DL) processes, performing hyperparameter tuning, performing statistical analysis (logging and evaluating model performance) to resolve data set problems, and enhancing the accuracy of our AI software's predictive automation capabilities.

You will be working with technologies like AWS SageMaker, TensorFlow.js, and TensorFlow/Keras/TensorBoard to create the Deep Learning backends that power our application.
To ensure success as a machine learning engineer, you should demonstrate solid data science knowledge and experience in a Deep Learning role. A first-class machine learning engineer will be someone whose expertise translates into the enhanced performance of predictive automation software. To do this job successfully, you need exceptional skills in DL and programming.


Responsibilities

  • Consulting with managers to determine and refine machine learning objectives.
  • Designing deep learning systems and self-running artificial intelligence (AI) software to automate predictive models.
  • Transforming data science prototypes and applying appropriate ML algorithms and tools.
  • Carrying out data engineering subtasks such as defining data requirements, collecting, labeling, inspecting, cleaning, augmenting, and moving data.
  • Carrying out modeling subtasks such as training deep learning models, defining evaluation metrics, searching hyperparameters, and reading research papers (a minimal training sketch follows this list).
  • Carrying out deployment subtasks such as converting prototyped code into production code, working in depth with AWS services to set up cloud environments for training, improving response times, and saving bandwidth.
  • Ensuring that algorithms generate robust and accurate results.
  • Running tests, performing analysis, and interpreting test results.
  • Documenting machine learning processes.
  • Keeping abreast of developments in machine learning.
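
As a hedged illustration of the model-training and hyperparameter work listed above (the data, architecture, and hyperparameters are placeholders, not this product's models), a minimal Keras sketch:

    import numpy as np
    import tensorflow as tf

    # Placeholder data standing in for a real, labelled dataset.
    x_train = np.random.rand(1000, 20).astype("float32")
    y_train = np.random.randint(0, 2, size=(1000,))

    # Small fully connected network; layer sizes and learning rate are tunable hyperparameters.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(x_train, y_train, epochs=5, batch_size=32, validation_split=0.2)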

Requirements

  • Proven experience as a Machine Learning Engineer or similar role.
  • In-depth knowledge of AWS SageMaker and related services (like S3).
  • Extensive knowledge of ML frameworks, libraries, algorithms, data structures, data modeling, software architecture, and math & statistics.
  • Ability to write robust code in Python & JavaScript (TensorFlow.js).
  • Experience with Git and GitHub.
  • Superb analytical and problem-solving abilities.
  • Excellent troubleshooting skills.
  • Good project management skills.
  • Great communication and collaboration skills.
  • Excellent time management and organizational abilities.
  • Bachelor's degree in computer science, data science, mathematics, or a related field; a Master’s degree is a plus.

Zupee
Posted by Abhishek Anand
Remote only
4 - 8 yrs
₹15L - ₹15L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
recommendation algorithm

About Us

 

Zupee is India’s fastest-growing innovator in real money gaming, with a focus on predominantly skill-focused games. Started by 2 IIT-Kanpur alumni in 2018, we are backed by marquee global investors such as WestCap Group, Tomales Bay Capital, Matrix Partners, Falcon Edge, Orios Ventures, and Smile Group.

 

Know more about our recent funding: https://bit.ly/3AHmSL3

 

Our focus has been on innovating in the board, strategy, and casual games sub-genres. We innovate to ensure our games provide an intersection between skill and entertainment, enabling our users to earn while they play.

 

Location: We are location agnostic & our teams work from anywhere. Physically we are based out of Gurugram

 

Core Responsibilities:

 

  • Responsible for designing, developing, testing, and deploying ML models that can leverage multiple signal sources, to build a personalized user experience
  • Work closely with the business stakeholders to identify the potential Data science applications
  • Contribute by doing opportunity analysis, building project proposals, and designing and implementing ML projects in the areas of Ranking, Embeddings, Recommendation engines, etc.
  • Collaborate with software engineering teams to design experiments, model implementations, and new feature creation
  • Clearly communicate technical details, strategies, and outcomes to a business audience

 

What are we looking for

 

  • 4+ years of hands-on experience in data science using techniques including but not limited to regression, classification, NLP etc.
  • Previous experience in model deployment, model monitoring, optimization and model interpretability.
  • Expertise with Random forests, Gradient boosting, KNN, Regression and unsupervised learning algorithms
  • Experience in using Neural Networks like ANN, RNN, Reinforcement Learning or Deep Learning etc.
  • Solid understanding of Python and common machine learning frameworks such as XGBoost, scikit-learn, PyTorch/TensorFlow (a minimal XGBoost sketch follows this list)
  • Outstanding problem-solving skills, with demonstrated ability to think creatively and strategically
  • Technology-driven mindset, up-to-date with digital and technology literature, trends.
  • Must have knowledge of Experimentation and Basic Statistics
  • Must have experience in Predictive Analytics
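
Purely as an illustration of the gradient-boosting tooling named above (synthetic data and placeholder parameters; not Zupee's models), a minimal XGBoost classification sketch:

    from sklearn.datasets import make_classification
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    # Synthetic stand-in for real user/engagement features.
    X, y = make_classification(n_samples=5000, n_features=20, random_state=7)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)

    model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    model.fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))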

 

Non-Desired Skills:

  1. Experience in Descriptive Analytics – Don’t apply
  2. Experience in NLP Text Analysis, Image, Speech Analysis – Don’t apply
Mobile Programming LLC
Posted by Vandana Chauhan
Remote, Chennai
3 - 7 yrs
₹12L - ₹18L / yr
Big Data
Amazon Web Services (AWS)
Hadoop
SQL
Python
+5 more
Position: Data Engineer  
Location: Chennai- Guindy Industrial Estate
Duration: Full time role
Company: Mobile Programming (https://www.mobileprogramming.com/)
Client Name: Samsung 


We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.

Responsibilities for Data Engineer
• Create and maintain optimal data pipeline architecture.
• Assemble large, complex data sets that meet functional/non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
• Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
• Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
• Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer
• Experience building and optimizing big data ETL pipelines, architectures and data sets.
• Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
• Strong analytic skills related to working with unstructured datasets.
• Build processes supporting data transformation, data structures, metadata, dependency and workload management.
• A successful history of manipulating, processing and extracting value from large disconnected datasets.
• Working knowledge of message queuing, stream processing and highly scalable ‘big data’ data stores.
• Strong project management and organizational skills.
• Experience supporting and working with cross-functional teams in a dynamic environment.

We are looking for a candidate with 3-6 years of experience in a Data Engineer role, who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
• Experience with big data tools: Spark, Kafka, HBase, Hive etc.
• Experience with relational SQL and NoSQL databases
• Experience with AWS cloud services: EC2, EMR, RDS, Redshift
• Experience with stream-processing systems: Storm, Spark Streaming, etc. (a minimal streaming sketch follows below)
• Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.

Skills: Big Data, AWS, Hive, Spark, Python, SQL
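
As a hedged illustration of the stream-processing skills listed above (the broker address, topic, and schema are placeholders, not this client's actual setup), a minimal Spark Structured Streaming read from Kafka:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

    # Hypothetical Kafka source; requires the spark-sql-kafka package on the classpath.
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "clickstream")
        .load()
    )

    # Kafka values arrive as bytes; cast to string and count events per minute.
    counts = (
        events.selectExpr("CAST(value AS STRING) AS value", "timestamp")
        .groupBy(F.window("timestamp", "1 minute"))
        .count()
    )

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()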
 