Sr. Data Engineer (a fintech product company)

at Velocity.in

Posted by chinnapareddy S
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr (ESOP available)
Full time
Skills
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
Data Visualization
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Data-flow analysis
Amazon Web Services (AWS)
PL/SQL
NoSQL Databases
PostgreSQL
ETL
data pipelining

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies such as NodeJS, Ruby on Rails, reactive programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Build data and analytics engineering pipelines using standard ELT patterns, implement data compaction pipelines, own data modelling, and oversee overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Write pipelines to consume data from multiple sources

  • Write a data transformation layer using dbt to transform millions of records into the data warehouse

  • Implement data warehouse entities with common, re-usable data model designs, with automation and data quality capabilities

  • Identify downstream implications of data loads/migrations (e.g., data quality, regulatory)
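The ELT pattern named in the first responsibility can be sketched in a few lines: raw records are loaded as-is into a staging table, and the transformation happens afterwards in SQL inside the warehouse. This is an illustrative stdlib-only sketch, with sqlite3 standing in for the warehouse and invented table/column names; in a real stack the transform step would be a dbt model against BigQuery/Snowflake/Redshift.

```python
import sqlite3

# In-memory SQLite stands in for the warehouse; in production this would be
# BigQuery/Snowflake/Redshift with dbt managing the transformation SQL.
conn = sqlite3.connect(":memory:")

# 1. Extract + Load: land raw records untransformed in a staging table.
conn.execute("CREATE TABLE raw_payments (merchant_id TEXT, amount_paise INTEGER, status TEXT)")
raw_rows = [
    ("m1", 150000, "SUCCESS"),
    ("m1", 99900, "FAILED"),
    ("m2", 250000, "SUCCESS"),
]
conn.executemany("INSERT INTO raw_payments VALUES (?, ?, ?)", raw_rows)

# 2. Transform: build a modelled table inside the warehouse with SQL --
# the step a dbt model would own.
conn.execute("""
    CREATE TABLE merchant_revenue AS
    SELECT merchant_id,
           SUM(amount_paise) / 100.0 AS revenue_rupees,
           COUNT(*)                  AS txn_count
    FROM raw_payments
    WHERE status = 'SUCCESS'
    GROUP BY merchant_id
""")

rows = conn.execute(
    "SELECT merchant_id, revenue_rupees FROM merchant_revenue ORDER BY merchant_id"
).fetchall()
print(dict(rows))
# {'m1': 1500.0, 'm2': 2500.0}
```

The point of the pattern is that raw data lands untouched, so transformations can be re-run, versioned and tested independently of ingestion.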

 

What To Bring

  • 3+ years of software development experience; startup experience is a plus

  • Past experience working with Airflow and dbt is preferred

  • 2+ years of experience working in any backend programming language

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test-Driven Development)

  • Experience formulating ideas, building proofs of concept (POCs) and converting them into production-ready projects

  • Experience building and deploying applications on on-premise and cloud-based (AWS or Google Cloud) infrastructure

  • A basic understanding of Kubernetes and Docker is a must

  • Experience with data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency in verbal and written English

 

 

 

About Velocity.in

Fast, flexible growth capital for D2C eCommerce brands. Get up to Rs. 2 crore in revenue-based financing for marketing & working capital within 1 week. No equity dilution or collateral requirements.
Founded
2019
Type
Product
Size
20-100 employees
Stage
Raised funding

Similar jobs

Data Scientist

at Cloud Transformation products, frameworks and services Org

Agency job
via The Hub
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
R Programming
Python
TensorFlow
Tableau
BI tools
Remote only
3 - 7 yrs
₹15L - ₹24L / yr

  Senior Data Scientist

  • 6+ years' experience building data pipelines and deployment pipelines for machine learning models
  • 4+ years' experience with ML/AI toolkits such as TensorFlow, Keras, AWS SageMaker, MXNet, H2O, etc.
  • 4+ years' experience developing ML/AI models in Python/R
  • Leadership skills to lead and deliver projects: be proactive, take ownership, interface with the business, represent the team and spread knowledge
  • Strong knowledge of statistical data analysis and machine learning techniques (e.g., Bayesian, regression, classification, clustering, time series, deep learning)
  • Able to help deploy various models and tune them for better performance
  • Working knowledge of operationalizing models in production using model repositories, APIs and data pipelines
  • Experience with machine learning and computational statistics packages
  • Experience with Databricks and data lakes
  • Experience with Dremio, Tableau and Power BI
  • Experience with Spark ML and Spark DL using PySpark is a big plus!
  • Working knowledge of relational database systems like SQL Server and Oracle
  • Knowledge of deploying models on platforms like PCF, AWS and Kubernetes
  • Good knowledge of continuous integration suites like Jenkins
  • Good knowledge of web servers (Apache, NGINX)
  • Good knowledge of Git, GitHub and Bitbucket
  • Java, R and Python programming experience
  • Very familiar with MS SQL, Teradata, Oracle and DB2
  • Big Data – Hadoop
  • Expert knowledge of BI tools, e.g. Tableau

 

Job posted by
Sridevi Viswanathan

Data Engineer

at Picture the future

Agency job
via Jobdost
PySpark
Data engineering
Big Data
Hadoop
Spark
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Relational Database (RDBMS)
RESTful APIs
C#
Hyderabad
4 - 7 yrs
₹5L - ₹15L / yr

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS
  • Ingest data from different sources that expose it via different technologies (RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems); implement data ingestion and processing with the help of Big Data technologies
  • Process/transform data using various technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data quality checks to make sure the right data enters the platform, and verify the results of calculations
  • Develop infrastructure to collect, transform, combine and publish/distribute customer data
  • Define process improvement opportunities to optimize data collection, insights and displays
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
  • Identify and interpret trends and patterns in complex data sets
  • Construct a framework utilizing data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
  • Participate in regular Scrum ceremonies with the agile teams
  • Develop queries, write reports and present findings proficiently
  • Mentor junior members and bring best industry practices
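The automated data-quality checks mentioned above usually reduce to a set of rule functions run over each batch before it is published downstream. A minimal stdlib-only sketch; the record fields, rule names and status values are all illustrative, not from the posting:

```python
# Minimal batch-level data quality checks; fields and rules are illustrative.
def check_batch(rows):
    """Return a list of violation messages for a batch of loan records."""
    violations = []
    for i, row in enumerate(rows):
        # Completeness rule: the primary key must be present.
        if row.get("loan_id") in (None, ""):
            violations.append(f"row {i}: missing loan_id")
        # Validity rule: amounts must be positive numbers.
        amount = row.get("amount")
        if not isinstance(amount, (int, float)) or amount <= 0:
            violations.append(f"row {i}: non-positive or missing amount")
        # Domain rule: status must come from a known set.
        if row.get("status") not in {"OPEN", "CLOSED", "DELINQUENT"}:
            violations.append(f"row {i}: unknown status {row.get('status')!r}")
    return violations

batch = [
    {"loan_id": "L1", "amount": 5000.0, "status": "OPEN"},
    {"loan_id": "", "amount": -10, "status": "PAID"},
]
for msg in check_batch(batch):
    print(msg)
# row 1: missing loan_id
# row 1: non-positive or missing amount
# row 1: unknown status 'PAID'
```

In a real pipeline these rules would gate the load step: a batch with violations is quarantined rather than written to the warehouse.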

 

QUALIFICATIONS

  • 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science or a related discipline
  • Advanced knowledge of one of: Java, Scala, Python, C#
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie/Airflow, Amazon Web Services (AWS), Docker/Kubernetes, Snowflake
  • Proficient with:
      • Data mining/programming tools (e.g. SAS, SQL, R, Python)
      • Database technologies (e.g. PostgreSQL, Redshift, Snowflake and Greenplum)
      • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines
  • Good written and oral communication skills and the ability to present results to non-technical audiences
  • Knowledge of business intelligence and analytical tools, technologies and techniques


Mandatory Requirements 

  • Experience in AWS Glue
  • Experience in Apache Parquet 
  • Proficient in AWS S3 and data lake 
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices.
  • Scripting languages – Python and PySpark
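One of the file-based ingestion best practices the list alludes to is idempotency: track which files have already been landed (by name plus content checksum) so a re-run never duplicates data. A stdlib-only sketch under assumed file naming; in a real pipeline the ledger would live in a database or an S3 manifest, and the files would be Parquet objects in S3:

```python
import hashlib

def file_fingerprint(name: str, data: bytes) -> str:
    """Identify a file by name + content hash so re-uploads with changed content are caught."""
    return f"{name}:{hashlib.sha256(data).hexdigest()}"

def ingest(files, ledger, load):
    """Load each file exactly once; `ledger` is the set of already-seen fingerprints."""
    for name, data in files:
        fp = file_fingerprint(name, data)
        if fp in ledger:
            continue  # already ingested: re-runs are no-ops
        load(name, data)
        ledger.add(fp)

loaded = []
ledger = set()
batch = [("orders_2024-01-01.csv", b"id,amount\n1,10\n")]
ingest(batch, ledger, lambda name, data: loaded.append(name))
ingest(batch, ledger, lambda name, data: loaded.append(name))  # second run loads nothing
print(loaded)
# ['orders_2024-01-01.csv']
```

The same structure makes retries after partial failures safe: only the files whose fingerprints never made it into the ledger are loaded again.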

 

Job posted by
Shalaka ZawarRathi

Data Analyst

at A modern ayurvedic nutrition brand

Agency job
via Jobdost
Data Analytics
Data Analyst
MS-Excel
SQL
Python
R Language
Bengaluru (Bangalore)
1.5 - 3 yrs
₹5L - ₹5L / yr
About the role:
We are looking for a motivated data analyst with sound experience in handling web/digital analytics to join us as part of the Kapiva D2C Business Team. This team is primarily responsible for driving sales and customer engagement on our website. This channel has grown 5x in revenue over the last 12 months and is poised to grow another 5x over the next six. It represents a high-growth, important part of our overall e-commerce growth strategy.
The mandate here is to run an end-to-end sustainable e-commerce business, boost sales through marketing campaigns, and build a cutting-edge product (website) that optimizes the customer's journey as well as increases customer lifetime value.
The Data Analyst will support the business heads by providing data-backed insights in order to drive customer growth, retention and engagement. They will be required to set up and manage reports, test various hypotheses and coordinate with various stakeholders on a day-to-day basis.


Job Responsibilities:
Strategy and planning:
● Work with the D2C functional leads and support analytics planning on a quarterly/annual basis
● Identify reports and analyses to be run on a daily/weekly/monthly frequency
● Drive planning for hypothesis-led testing of key metrics across the customer funnel
Analytics:
● Interpret data, analyze results using statistical techniques and provide ongoing reports
● Analyze large amounts of information to discover trends and patterns
● Work with business teams to prioritize business and information needs
● Collaborate with engineering and product development teams to set up data infrastructure as needed

Reporting and communication:
● Prepare reports/presentations that present actionable insights to drive business objectives
● Set up live dashboards reporting key cross-functional metrics
● Coordinate with various stakeholders to collect useful and required data
● Present findings to business stakeholders to drive action across the organization
● Propose solutions and strategies to business challenges

Requirements sought:
Must haves:
● Bachelor's/Master's in Mathematics, Economics, Computer Science, Information Management, Statistics or a related field
● High proficiency in MS Excel and SQL
● Knowledge of one or more programming languages like Python/R; adept at queries, report writing and presenting findings
● Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy; working knowledge of statistics and statistical methods
● Ability to work in a highly dynamic environment across cross-functional teams; good at coordinating with different departments and managing timelines
● Exceptional English written/verbal communication
● A penchant for understanding consumer traits and behavior, and a keen eye for detail

Good to have:
● Hands-on experience with one or more web analytics tools like Google Analytics, Mixpanel, Kissmetrics, Heap, Adobe Analytics, etc.
● Experience in using business intelligence tools like Metabase, Tableau, Power BI is a plus
● Experience in developing predictive models and machine learning algorithms
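Much of the day-to-day analysis described in this role starts with simple funnel and trend metrics. A stdlib-only Python sketch with invented numbers (in practice this would be SQL over the warehouse, a spreadsheet, or a BI tool):

```python
from statistics import mean

# Illustrative daily website funnel numbers: (visits, orders) per day.
daily = [(1000, 30), (1200, 42), (900, 27), (1500, 60)]

# Daily conversion rate: orders / visits.
conversion = [orders / visits for visits, orders in daily]
print([round(c, 3) for c in conversion])
# [0.03, 0.035, 0.03, 0.04]

# 3-day moving average of conversion, a simple trend smoother
# for spotting sustained movement rather than daily noise.
window = 3
trend = [round(mean(conversion[i - window + 1:i + 1]), 4)
         for i in range(window - 1, len(conversion))]
print(trend)
# [0.0317, 0.035]
```

The same shape (a per-day metric plus a smoothed trend) underlies most of the daily/weekly dashboards listed in the responsibilities above.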
Job posted by
Sathish Kumar

Senior Product Analyst

at AYM Marketing Management

Founded 2016  •  Products & Services  •  20-100 employees  •  Profitable
SQL server
PowerBI
Spotfire
Qlikview
Tableau
Data Visualization
Data Analytics
Python
Data architecture
Mobile applications
ETL
Teamwork
Analytical Skills
Problem solving
Corporate Communications
Google Analytics
Remote only
2 - 8 yrs
₹10L - ₹25L / yr

Senior Product Analyst

Pampers Start Up Team

India / Remote Working

 

 

Team Description

Our internal team focuses on app development, with data a growing area within the structure. We have a clear vision and strategy, coupled with App Development, Data, Testing, Solutions and Operations. The data team sits across the UK and India, whilst other teams sit across Dubai, Lebanon, Karachi and various cities in India.

 

Role Description

In this role you will use a range of tools and technologies to work primarily on providing data design, data governance, reporting and analytics on the Pampers App.

 

This is a unique opportunity for an ambitious candidate to join a growing business where they will get exposure to a diverse set of assignments, can contribute fully to the growth of the business and where there are no limits to career progression and reward.

 

Responsibilities

● Be the Data Steward and drive governance, with a full understanding of all the data that flows through the Apps to all systems

● Work with the campaign team to apply data fixes when there are issues with campaigns

● Investigate and troubleshoot issues with product and campaigns, giving clear RCA and impact analysis

● Document data, create data dictionaries and be the "go to" person for understanding what data flows where

● Build dashboards and reports using Amplitude and Power BI, and present them to key stakeholders

● Carry out ad-hoc data investigations into issues with the app, querying data in BigQuery/SQL/CosmosDB, and present findings back

● Translate analytics into a clear PowerPoint deck with actionable insights

● Write up clear documentation on processes

● Innovate with new processes or ways of providing analytics and reporting

● Help the data lead find new ways of adding value

 

 

Requirements

● Bachelor's degree and 4+ years' experience in an analytical role, preferably working in product analytics with consumer app data

● Strong SQL Server and Power BI skills required

● Experience with most or all of these tools: SQL Server, Python, Power BI, BigQuery

● Understanding of mobile app data (events, CTAs, screen views, etc.)

● Knowledge of data architecture and ETL

● Experience analyzing customer behavior and providing insightful recommendations

● Self-starter with a keen interest in technology, highly motivated towards success

● Must be proactive and prepared to contribute in meetings

● Must show initiative and a desire to learn business subjects

● Able to work independently and provide updates to management

● Strong analytical and problem-solving capabilities with meticulous attention to detail

● Proven teamwork and communication skills

● Experience working in a fast-paced, "start-up like" environment

 

Desirable

  • Knowledge of mobile analytical tools (Segment, Amplitude, Adjust, Braze and Google Analytics)
  • Knowledge of loyalty data
Job posted by
Stephen FitzGerald

Business Analyst

at India's largest all-in-one app for teachers, schools and coa

Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
Data Analytics
SQL
Python
MS-Excel
R Programming
Product Lifecycle Management (PLM)
Business Analysis
Bengaluru (Bangalore)
1 - 4 yrs
₹10L - ₹15L / yr

Work closely with Product Managers to drive product improvements through data driven decisions.

Conduct analysis of new project pilot settings, new features, user behaviour, and in-app behaviour.

Present insights and recommendations to leadership using high quality visualizations and concise messaging.

Own the implementation of data collection and tracking, and coordinate with the engineering and product teams.

Create and maintain dashboards for product and business teams.

Requirements

1+ years' experience in analytics; experience as a product analyst is an added advantage.

Technical skills: SQL, Advanced Excel

Good to have: R/Python, Dashboarding experience

Ability to translate structured and unstructured problems into an analytical framework

Excellent analytical skills

Good communication & interpersonal skills

Ability to work in a fast-paced start-up environment, learn on the job and get things done.

Job posted by
Suganya Martin

Machine Learning Engineer

at Zocket

Founded 2021  •  Product  •  0-20 employees  •  Raised funding
Machine Learning (ML)
Data Science
Python
Big Data
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹15L / yr

Machine Learning Engineer at Zocket


We are looking for a curious Machine Learning Engineer to join our extremely fast growing Tech Team at Zocket!


About Zocket:

Zocket helps businesses create digital ads in less than 30 seconds and grow digitally without any expertise.

Currently there are only two options for an SMB owner: either employ a digital marketing agency or stay away from digital ads. True to its mission, Zocket leverages AI to simplify digital marketing for 300 million+ small businesses around the globe.


You are ideal if you have:

  • Interest in working with a fast-growing start-up
  • Strong communication and presentation skills
  • Ability to meet deadlines
  • Critical thinking abilities
  • Interest in working in a fast-paced environment
  • A desire for lots and lots of learning
  • Inclination towards working on diverse projects and making real contributions to the company

Requirements:


  • Bachelor's degree in Computer Science or any quantitative discipline (Statistics, Mathematics, Economics)
  • 3+ years of relevant experience
  • Experience working with languages like Python (mandatory) and R
  • Experience working with visualisation tools like Tableau and Power BI
  • Experience working with frameworks such as OpenCV, PyTorch and TensorFlow
  • Prior experience building and deploying ML systems using AWS (EC2, SageMaker)
  • Understanding of statistical concepts
  • Hands-on computer vision experience
  • Experience in MySQL is required
  • Bonus points if you have expertise in NLP

 

 

Apply Away!

Job posted by
Shraavani Tulshibagwale

Software Engineer

at Netconnect Pvt. Ltd.

Founded 1998  •  Products & Services  •  100-1000 employees  •  Profitable
Perl
Shell Scripting
PL/SQL
Pune
3 - 5 yrs
₹3L - ₹10L / yr
Skills required:

• Experienced developer in shell scripting

• Perl scripting

• PL/SQL knowledge is required

• Advanced communication skills are a must

• Ability to learn new applications and technologies
Job posted by
Ruchika M

Talend Developer

at Product based company

Agency job
via Crewmates
ETL
Talend
Coimbatore
4 - 15 yrs
₹5L - ₹20L / yr
Hi Professionals,
Role: Talend developer
Location: Coimbatore
Experience: 4+ years
Skills: Talend, any DB
Notice period: Immediate to 15 days
Job posted by
Gowtham V

Data Engineer

at Yulu Bikes Pvt Ltd

Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
Big Data
Spark
Scala
Hadoop
Apache Kafka
ETL
Data warehousing
Distributed computing
MongoDB
Amazon Web Services (AWS)
Bengaluru (Bangalore)
2 - 5 yrs
₹15L - ₹28L / yr
Job Description
We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.

Responsibilities
• Work with Big Data tools and frameworks to provide requested capabilities
• Identify development needs in order to improve and streamline operations
• Develop and manage BI solutions
• Implement ETL processes and data warehousing
• Monitor performance and manage infrastructure

Skills 
• Proficient understanding of distributed computing principles
• Proficiency with Hadoop and Spark
• Experience building stream-processing systems using solutions such as Kafka and Spark Streaming
• Good knowledge of data querying tools: SQL and Hive
• Knowledge of various ETL techniques and frameworks
• Experience with Python/Java/Scala (at least one)
• Experience with cloud services such as AWS or GCP
• Experience with NoSQL databases such as DynamoDB and MongoDB is an advantage
• Excellent written and verbal communication skills
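The stream-processing systems this role calls for (Kafka, Spark Streaming) are, at their core, windowed aggregations over an unbounded event stream. A pure-Python sketch of a tumbling window; the event shape and window size are invented for illustration, and late/out-of-order data and watermarks are ignored:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Count events per (window_start, event_type) for a stream of (timestamp, type) pairs.

    A toy stand-in for what Kafka + Spark Streaming would do at scale:
    each event is assigned to the fixed-size window its timestamp falls in.
    """
    counts = defaultdict(int)
    for ts, event_type in events:
        window_start = ts - (ts % window_secs)  # floor to window boundary
        counts[(window_start, event_type)] += 1
    return dict(counts)

# Hypothetical bike-sharing events as (unix-ish timestamp, type).
events = [(5, "ride_start"), (30, "ride_end"), (61, "ride_start"), (62, "ride_start")]
print(tumbling_window_counts(events))
# {(0, 'ride_start'): 1, (0, 'ride_end'): 1, (60, 'ride_start'): 2}
```

Real engines add the hard parts this sketch omits: distribution across partitions, fault tolerance, and handling events that arrive after their window has closed.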
Job posted by
Keerthana k

Data Engineer

at Pluto Seven Business Solutions Pvt Ltd

Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
MySQL
Python
Big Data
Google Cloud Storage
API
SQL Query Analyzer
Relational Database (RDBMS)
Agile/Scrum
Bengaluru (Bangalore)
3 - 9 yrs
₹6L - ₹18L / yr
Data Engineer: Pluto7 is a services and solutions company focused on building ML, AI and analytics solutions to accelerate business transformation. We are a Premier Google Cloud Partner, servicing Retail, Manufacturing, Healthcare, and Hi-Tech industries. We're seeking passionate people to work with us to change the way data is captured, accessed and processed, to make data-driven, insightful decisions.

Must have skills:
• Hands-on experience in database systems (structured and unstructured)
• Programming in Python, R, SAS
• Overall knowledge and exposure on how to architect solutions on cloud platforms like GCP, AWS, Microsoft Azure
• Develop and maintain scalable data pipelines, with a focus on writing clean, fault-tolerant code
• Hands-on experience in data model design and developing BigQuery/SQL (any variant) stored procedures
• Optimize data structures for efficient querying of those systems
• Collaborate with internal and external data sources to ensure integrations are accurate, scalable and maintainable
• Collaborate with business intelligence/analytics teams on data mart optimizations, query tuning and database design
• Execute proof of concepts to assess strategic opportunities and future data extraction and integration capabilities
• At least 2 years of experience building applications, solutions and products based on analytics
• Data extraction, data cleansing and transformation
• Strong knowledge of REST APIs, HTTP servers, MVC architecture
• Knowledge of continuous integration/continuous deployment

Preferred but not required:
• Machine learning and deep learning experience
• Certification on any cloud platform
• Experience of data migration from on-prem to cloud environments
• Exceptional analytical, quantitative, problem-solving, and critical thinking skills
• Excellent verbal and written communication skills

Work location: Bangalore
Job posted by
Sindhu Narayan