Data Engineer 2

at Klubworks

Posted by Anupam Arya
Bengaluru (Bangalore)
3 - 6 yrs
Best in industry
Full time
Data Warehouse (DWH)
Big Data
ETL architecture
About the role
You will be building data pipelines that transform raw, unstructured data into formats data scientists can use for analysis. You will be responsible for creating and maintaining the analytics infrastructure that enables almost every other data function. This includes architectures such as databases, servers, and large-scale processing systems.
  • Set up a scalable data warehouse.
  • Build data pipelines that integrate data from all of Klub's sources.
  • Set up data-as-a-service to expose the needed data through APIs.
  • Develop a good understanding of how finance data works.
  • Standardize and optimize design thinking across the technology team.
  • Collaborate with stakeholders across engineering teams on short- and long-term architecture decisions.
  • Build robust data models that support the reporting requirements of the business, operations, and leadership teams.
  • Participate in peer reviews and provide code/design comments.
  • Own problems end to end and deliver them to completion.

  • Overall 3+ years of industry experience
  • Prior experience with backend and data engineering systems
  • At least 1 year of working experience with distributed systems
  • Deep understanding of the Python stack, including libraries and frameworks such as Flask, SciPy, NumPy, and pytest
  • Good understanding of Apache Airflow or similar orchestration tools
  • Good knowledge of data warehouse technologies such as Apache Hive, and of PySpark or similar processing frameworks
  • Good knowledge of building analytics services on the data for different reporting and BI needs
  • Good knowledge of data pipeline/ETL tools such as Hevo Data, and of query engine technologies such as Trino or GraphQL
  • Deep understanding of dimensional data models; familiarity with RDBMS (MySQL/PostgreSQL) and NoSQL (MongoDB/DynamoDB) databases, and caching (Redis or similar)
  • Proficiency in writing SQL queries
  • Good knowledge of Kafka; able to write clean, maintainable code
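Two of the requirements above, SQL proficiency and dimensional data models, are concrete enough to illustrate. The following is a minimal star-schema sketch, not part of the listing: the table names (`dim_brand`, `fact_revenue`) and data are hypothetical, and sqlite3 stands in for a real warehouse such as Hive or Redshift.

```python
import sqlite3

# Minimal star schema: one fact table keyed to a dimension table.
# Names and data are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_brand (
    brand_id   INTEGER PRIMARY KEY,
    brand_name TEXT NOT NULL
);
CREATE TABLE fact_revenue (
    brand_id INTEGER REFERENCES dim_brand(brand_id),
    month    TEXT NOT NULL,
    revenue  REAL NOT NULL
);
""")

cur.executemany("INSERT INTO dim_brand VALUES (?, ?)",
                [(1, "Acme"), (2, "Globex")])
cur.executemany("INSERT INTO fact_revenue VALUES (?, ?, ?)",
                [(1, "2023-01", 100.0), (1, "2023-02", 150.0),
                 (2, "2023-01", 80.0)])

# A typical reporting query: total revenue per brand, highest first.
rows = cur.execute("""
    SELECT b.brand_name, SUM(f.revenue) AS total_revenue
    FROM fact_revenue f
    JOIN dim_brand b USING (brand_id)
    GROUP BY b.brand_name
    ORDER BY total_revenue DESC
""").fetchall()
print(rows)  # [('Acme', 250.0), ('Globex', 80.0)]
```

The same fact/dimension split scales to the reporting models the role describes: facts hold measures, dimensions hold the attributes reports group by.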
Nice to have
  • Built a data warehouse from scratch and set up a scalable data infrastructure.
  • Prior experience in fintech would be a plus.
  • Prior experience on data modelling.

About Klubworks

Raised funding
Klub is India's largest revenue-based financing marketplace, offering flexible growth financing for loved brands.
Connect with the team
Anupam Arya
Deepak Bansal
Anant Jain
Tausif Raza
Similar jobs

Leading StartUp Focused On Employee Growth
Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
2 - 6 yrs
₹25L - ₹45L / yr
Data engineering
Data Analytics
Big Data
Apache Spark
+8 more
● 2+ years of experience in a Data Engineer role.
● Proficiency in Linux.
● Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
● Must have SQL knowledge and experience working with relational databases and query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
● Must have experience with Python/Scala.
● Must have experience with Big Data technologies like Apache Spark.
● Must have experience with Apache Airflow.
● Experience with data pipelines and ETL tools like AWS Glue.
Cloud transformation products, frameworks and services
Agency job
via The Hub by Sridevi Viswanathan
Remote only
3 - 8 yrs
₹20L - ₹26L / yr
Amazon Redshift
Amazon Web Services (AWS)
+4 more
  • Experience with cloud-native data tools/services such as AWS Athena, AWS Glue, Redshift Spectrum, AWS EMR, AWS Aurora, BigQuery, Bigtable, S3, etc.
  • Strong programming skills in at least one of the following languages: Java, Scala, C++.
  • Familiarity with a scripting language like Python, as well as Unix/Linux shells.
  • Comfortable with multiple AWS components including RDS, AWS Lambda, AWS Glue, AWS Athena, EMR. Equivalent tools in the GCP stack will also suffice.
  • Strong analytical skills and advanced SQL knowledge, including indexing and query optimization techniques.
  • Experience implementing software around data processing, metadata management, and ETL pipeline tools like Airflow.

Experience with the following software/tools is highly desired:

  • Apache Spark, Kafka, Hive, etc.
  • SQL and NoSQL databases like MySQL, Postgres, DynamoDB.
  • Workflow management tools like Airflow.
  • AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR.
  • Familiarity with Spark programming paradigms (batch and stream-processing).
  • RESTful API services.
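The batch vs. stream-processing distinction mentioned above can be sketched without Spark itself. The following plain-Python example is illustrative only (the event data and function names are invented): it computes the same per-user count once over a complete dataset and once incrementally, the way a streaming groupBy/count maintains running state.

```python
from collections import defaultdict

# Hypothetical click events; in Spark these would come from a DataFrame
# (batch) or a streaming source such as Kafka (stream-processing).
events = [("user_a", 1), ("user_b", 1), ("user_a", 1)]

# Batch paradigm: the whole dataset is available, so aggregate in one pass.
def batch_counts(evts):
    counts = defaultdict(int)
    for user, n in evts:
        counts[user] += n
    return dict(counts)

# Stream paradigm: events arrive one at a time; keep running state and
# emit the updated aggregate after each event.
def stream_counts(evts):
    state = defaultdict(int)
    for user, n in evts:
        state[user] += n
        yield dict(state)

final_batch = batch_counts(events)
final_stream = list(stream_counts(events))[-1]
assert final_batch == final_stream  # both converge to the same result
print(final_batch)  # {'user_a': 2, 'user_b': 1}
```

The practical difference is operational, not mathematical: the streaming version answers at any point in time, at the cost of managing state.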
Open Eyes software solutions pvt
Agency job
via Talentmint consultancy by Divya Varshney
2 - 7 yrs
₹2L - ₹7L / yr
Amazon Web Services (AWS)
Machine Learning (ML)
+2 more
Key Skills
o Convert machine learning models into APIs for applications accessibility
o Running machine learning tests and experiments
o Implementing appropriate ML algorithms
o Creating machine learning models and retraining systems
o Study and transform data science prototypes
o Design machine learning systems
o Research and implement appropriate ML algorithms and tools
o Train and retrain systems when necessary
o Test and deploy models
o Use AI to empower the company with novel capabilities
o Designing and developing machine learning and deep learning systems
o Outstanding analytical and problem-solving skills
• Alexa
o Excellent in Python programming
o Experience with AWS Lambda
o Experience with Alexa skills
o Alexa skill directives
• Google
o Excellent in NodeJS programming
o Experience with GCP - Dialog Flow and Actions on Google
o Using built-in intents and developing custom intents
o API integration and Postman knowledge
A Product Company
Agency job
via wrackle by Lokesh M
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹26L / yr
Big Data
Apache Hive
+4 more
Job Title: Senior Data Engineer/Analyst
Location: Bengaluru
Department: - Engineering 

Bidgely is looking for an extraordinary and dynamic Senior Data Analyst to be part of its core team in Bangalore. You must have delivered exceptionally high-quality, robust products dealing with large data volumes. Be part of a highly energetic and innovative team that believes nothing is impossible with some creativity and hard work.

● Design and implement a high-volume data analytics pipeline in Looker for Bidgely's flagship product.
● Implement data pipelines in the Bidgely Data Lake.
● Collaborate with product management and engineering teams to elicit & understand their requirements & challenges and develop potential solutions 
● Stay current with the latest tools, technology ideas and methodologies; share knowledge by clearly articulating results and ideas to key decision makers. 

● 3-5 years of strong experience in data analytics and in developing data pipelines. 
● Very good expertise in Looker 
● Strong in data modeling, developing SQL queries and optimizing queries. 
● Good knowledge of data warehouse (Amazon Redshift, BigQuery, Snowflake, Hive). 
● Good understanding of Big data applications (Hadoop, Spark, Hive, Airflow, S3, Cloudera) 
● Attention to detail. Strong communication and collaboration skills.
● BS/MS in Computer Science or equivalent from premier institutes.
A Pre-series A funded FinTech Company
Agency job
via GoHyre by Avik Majumder
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹30L / yr
Data Visualization
+6 more


  • Ensure and own data integrity across distributed systems.
  • Extract, Transform and Load data from multiple systems for reporting into BI platform.
  • Create Data Sets and Data models to build intelligence upon.
  • Develop and own various integration tools and data points.
  • Hands-on development and/or design within the project in order to maintain timelines.
  • Work closely with the Project Manager to deliver on business requirements OTIF (on time, in full).
  • Understand the cross-functional business data points thoroughly and be SPOC for all data-related queries.
  • Work with both Web Analytics and Backend Data analytics.
  • Support the rest of the BI team in generating reports and analysis
  • Quickly learn and use bespoke and third-party SaaS reporting tools with little documentation.
  • Assist in presenting demos and preparing materials for Leadership.


  • Strong experience in data warehouse modeling techniques and SQL queries
  • A good understanding of designing, developing, deploying, and maintaining Power BI report solutions
  • Ability to create KPIs, visualizations, reports, and dashboards based on business requirements
  • Knowledge and experience in prototyping, designing, and requirement analysis
  • Be able to implement row-level security on data and understand application security layer models in Power BI
  • Proficiency in making DAX queries in Power BI desktop.
  • Expertise in using advanced level calculations on data sets
  • Experience in the Fintech domain and stakeholder management.
UAE Client
Agency job
via Fragma Data Systems by Evelyn Charles
Remote, Bengaluru (Bangalore), Hyderabad
6 - 10 yrs
₹15L - ₹22L / yr
Big Data
Apache Spark
+1 more

Skills: Informatica with Big Data Management (BDM)

1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working with Spark/SQL
3. Develops Informatica mappings/SQL
4. Experience with Hadoop, Spark, etc.
Posted by Nipun Gupta
Bengaluru (Bangalore)
2 - 6 yrs
₹10L - ₹30L / yr
Data engineering
Data Engineer
+5 more
- Experience with Python and data scraping.
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiar with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc.
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.
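As a rough illustration of the data-scraping requirement above (not from the listing itself), here is a standard-library-only sketch. The HTML snippet and the `price` class name are hypothetical; a real scraper would first fetch pages, e.g. with urllib or requests.

```python
from html.parser import HTMLParser

# Hypothetical page fragment; real input would be fetched over HTTP.
HTML = """
<ul>
  <li class="price">10</li>
  <li class="price">25</li>
  <li>ignore me</li>
</ul>
"""

class PriceParser(HTMLParser):
    """Collects integer values from <li class="price"> elements."""

    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # Only <li class="price"> elements are of interest.
        self.in_price = tag == "li" and ("class", "price") in attrs

    def handle_data(self, data):
        if self.in_price and data.strip():
            self.prices.append(int(data.strip()))
            self.in_price = False

parser = PriceParser()
parser.feed(HTML)
print(parser.prices)  # [10, 25]
```

For messier real-world markup, a dedicated library such as BeautifulSoup is usually the better choice; the event-driven pattern shown here is the same idea at a lower level.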

Preference for candidates working in tech product companies
Posted by Rashmi Poovaiah
Bengaluru (Bangalore), Chennai, Pune
4 - 10 yrs
₹8L - ₹15L / yr
Big Data
Apache Kafka
+2 more

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be part of building an advanced analytical platform leveraging Big Data technologies and transforming legacy systems. This role offers an exciting, fast-paced, constantly changing, and challenging work environment, and will play an important part in resolving and influencing high-level decisions.



  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • Overall minimum of 4 to 8 years of software development experience, including 2 years of Data Warehousing domain knowledge
  • Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc.
  • Excellent knowledge of SQL and Linux shell scripting
  • Bachelor's/Master's/Engineering degree from a well-reputed university
  • Strong communication, interpersonal, learning, and organizing skills, matched with the ability to manage stress, time, and people effectively
  • Proven experience in coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
  • Ability to manage a diverse and challenging stakeholder community
  • Diverse knowledge and experience of working on Agile deliveries and Scrum teams



  • Should work as a senior developer/individual contributor depending on the situation
  • Should be part of Scrum discussions and take requirements
  • Adhere to the Scrum timeline and deliver accordingly
  • Participate in a team environment for design, development, and implementation
  • Should take on L3 activities on a need basis
  • Prepare Unit/SIT/UAT test cases and log the results
  • Coordinate SIT and UAT testing; gather feedback and provide necessary remediation/recommendations in time
  • Quality delivery and automation should be top priorities
  • Coordinate changes and deployments in time
  • Should foster healthy harmony within the team
  • Own interaction points with members of the core team (e.g. BA team, testing and business teams) and any other relevant stakeholders
Read more
5 - 15 yrs
₹5L - ₹35L / yr
Machine Learning (ML)
R Programming
Deep Learning
+2 more

Job Description

Want to make every line of code count? Tired of being a small cog in a big machine? Like a fast-paced environment where stuff gets DONE? Want to grow with a fast-growing company (both career and compensation)? Like to wear different hats? Join ThinkDeeply in our mission to create and apply Enterprise-Grade AI for all types of applications.


Seeking an ML Engineer with a high aptitude for development. We will also consider coders with a high aptitude in ML. Years of experience are important, but we are also looking for interest and aptitude. As part of the early engineering team, you will have a chance to make a measurable impact on the future of ThinkDeeply, as well as take on a significant amount of responsibility.



Experience: 10+ Years

Required Skills:

Bachelor's/Master's or PhD in Computer Science, or related industry experience

3+ years of Industry Experience in Deep Learning Frameworks in PyTorch or TensorFlow

7+ years of industry experience in scripting languages such as Python and R.

7+ years in software development doing at least some level of Researching / POCs, Prototyping, Productizing, Process improvement, Large-data processing / performance computing

Familiar with non-neural-network methods such as Bayesian methods, SVMs, AdaBoost, Random Forests, etc.

Some experience in setting up large-scale training data pipelines.

Some experience in using Cloud services such as AWS, GCP, Azure

Desired Skills:

Experience in building deep learning models for Computer Vision and Natural Language Processing domains

Experience in productionizing/serving machine learning in an industry setting

Understand the principles of developing cloud native applications




Responsibilities:

Collect, organize, and process data pipelines for developing ML models

Research and develop novel prototypes for customers

Train, implement and evaluate shippable machine learning models

Deploy and iterate improvements of ML Models through feedback
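The train/evaluate/iterate loop described above can be sketched in miniature. This toy example (data and hyperparameters invented for illustration, no ML framework) fits y = 2x by gradient descent on a mean-squared-error loss, then evaluates on held-out data:

```python
# Toy train/evaluate loop: fit y = w*x by gradient descent.
# Data and hyperparameters are made up for illustration.
train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # samples of y = 2x
test = [(4.0, 8.0)]                             # held-out evaluation data

w = 0.0     # single model parameter
lr = 0.05   # learning rate

# "Train": gradient of mean squared error w.r.t. w is mean(2*(w*x - y)*x).
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in train) / len(train)
    w -= lr * grad

# "Evaluate": mean squared error on the held-out set.
mse = sum((w * x - y) ** 2 for x, y in test) / len(test)
print(round(w, 3), round(mse, 6))  # 2.0 0.0
```

Real systems replace the hand-written gradient with a framework such as PyTorch or TensorFlow, but the shape of the loop (fit on training data, measure on held-out data, iterate) is the same.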

Anywhere, United States, Canada
3 - 10 yrs
₹2L - ₹10L / yr
Amazon Web Services (AWS)
Big Data
Corporate Training
Data Science
+2 more
To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About my company: SIMPLILEARN has transformed 500,000+ careers across 150+ countries with 400+ courses, and is a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website :

If you're interested in teaching, interacting, sharing real-life experiences, and have a passion to transform careers, please join hands with us.

Onboarding process:
• Send your updated CV to my email id, with copies of the relevant certificates.
• Sample e-learning access will be shared with a 15-day trial after your registration on our website.
• My Subject Matter Expert will evaluate your areas of expertise over a telephonic conversation (15 to 20 minutes).
• Commercial discussion.
• We will register you to our ongoing online sessions to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and internet connectivity.
• Freelancer Master Service Agreement.

Payment process:
• Once a workshop (the last day of training for the batch) is completed, share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days, per policy, counted from the date the invoice is received.

Please share your updated CV to proceed to the next step of the onboarding process.