Data Engineer

at CustomerGlu

Posted by Barkha Budhori
Bengaluru (Bangalore)
2 - 3 yrs
₹8L - ₹12L / yr
Full time
Skills
Data engineering
Data Engineer
MongoDB
DynamoDB
Apache
Apache Kafka
Hadoop
pandas
NumPy
Python
Machine Learning (ML)
Big Data
API
Data Structures
AWS Lambda
AWS Glue

CustomerGlu is a low-code interactive user engagement platform. We're backed by Techstars and top-notch VCs from the US like Better Capital and SmartStart.

As we build repeatability into our core product offering at CustomerGlu, high-quality data infrastructure and applications are emerging as a key requirement, both to drive more ROI from our interactive engagement programs and to generate ideas for new campaigns.

Hence we are adding more team members to our existing data team and looking for a Data Engineer.

Responsibilities

  • Design and build a high-performing data platform that is responsible for the extraction, transformation, and loading of data.
  • Develop low-latency real-time data analytics and segmentation applications.
  • Setup infrastructure for easily building data products on top of the data platform.
  • Be responsible for logging, monitoring, and error recovery of data pipelines (a short sketch follows this list).
  • Build workflows for automated scheduling of data transformation processes.
  • Ability to lead a team.
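
For illustration only: a minimal Python sketch of the logging and error-recovery behaviour one pipeline step might need. All names are hypothetical, not CustomerGlu's actual code.

```python
# Hypothetical pipeline step with logging and retry-based error recovery.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_retry(step, max_attempts=3, backoff_seconds=5):
    """Run a pipeline step, logging each attempt and retrying on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            result = step()
            log.info("step %s succeeded on attempt %d", step.__name__, attempt)
            return result
        except Exception:
            log.exception("step %s failed on attempt %d", step.__name__, attempt)
            if attempt == max_attempts:
                raise  # surface the failure to the scheduler/alerting
            time.sleep(backoff_seconds * attempt)

def extract_events():
    # e.g. pull engagement events from MongoDB/DynamoDB; stubbed here
    return [{"user_id": 1, "event": "level_complete"}]

run_with_retry(extract_events)
```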

Requirements

  • 3+ years of experience and ability to manage a team
  • Experience working with databases like MongoDB and DynamoDB.
  • Knowledge of building batch data processing applications using Apache Spark.
  • Understanding of how backend services like HTTP APIs and Queues work.
  • Write good quality, maintainable code in one or more programming languages like Python, Scala, and Java.
  • Working knowledge of version control systems like Git.

Bonus Skills

  • Experience in real-time data processing using Apache Kafka or AWS Kinesis (see the sketch below).
  • Experience with AWS tools like Lambda and Glue.
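
As a rough sketch of the real-time bonus skill, this is what a minimal Kafka consumer for a segmentation job could look like, assuming the kafka-python client, a local broker, and a hypothetical "user-events" topic:

```python
# Minimal Kafka consumer sketch; topic and broker address are assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-events",
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Real segmentation logic would bucket users here; we just print.
    print(event.get("user_id"), event.get("event"))
```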

About CustomerGlu

Create game-like experiences in your application to improve customer engagement and user experience. Easy-to-use gamification for mobile apps with a drag-and-drop builder.
Founded: 2016
Type: Products & Services
Size: 20-100 employees
Stage: Raised funding

Similar jobs

Data Engineer

at Product based company

Agency job
via Zyvka Global Services
Spark
Hadoop
Big Data
Data engineering
PySpark
Python
Scala
Amazon Web Services (AWS)
ETL
CleverTap
Linux/Unix
Bengaluru (Bangalore)
3 - 12 yrs
₹5L - ₹30L / yr

Responsibilities:

  • Act as a technical resource for the Data Science team and be involved in creating and implementing current and future analytics projects, such as data lake and data warehouse design.
  • Analyse and design ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems, etc.
  • Develop and maintain data pipelines for real-time analytics as well as batch analytics use cases (a rough sketch follows this list).
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phases of model building.
  • Collaborate with product development and DevOps teams in implementing data collection and aggregation solutions.
  • Ensure quality and consistency of the data in the data warehouse and follow best data governance practices.
  • Analyse large amounts of information to discover trends and patterns.
  • Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies.
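
A hedged sketch of the kind of batch pipeline this role describes, using PySpark; the paths, columns, and metric are illustrative assumptions, not the company's actual jobs:

```python
# Illustrative PySpark batch ETL job; all paths and fields are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-etl").getOrCreate()

# Extract: raw events landed by an upstream collector
events = spark.read.json("s3a://raw-bucket/events/dt=2022-01-01/")

# Transform: daily active users per acquisition source
daily = (
    events
    .groupBy("source")
    .agg(F.countDistinct("user_id").alias("daily_active_users"))
)

# Load: write to the warehouse layer as Parquet
daily.write.mode("overwrite").parquet("s3a://warehouse-bucket/dau/")
```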

Requirements

  • Bachelor’s or Master’s in a highly numerate discipline such as Engineering, Science, or Economics
  • 2-6 years of proven experience working as a Data Engineer, preferably in an e-commerce, web-based, or consumer technology company
  • Hands-on experience working with big data tools like Hadoop, Spark, Flink, Kafka, and so on
  • Good understanding of the AWS ecosystem for big data analytics
  • Hands-on experience in creating data pipelines, either using tools or by independently writing scripts
  • Hands-on experience in scripting languages like Python, Scala, Unix shell scripting, and so on
  • Strong problem-solving skills with an emphasis on product development
  • Experience using business intelligence tools (e.g. Tableau, Power BI) would be an added advantage (not mandatory)
Job posted by
Ridhima Sharma

Data Engineer_Scala

at Ganit Business Solutions

Founded 2017  •  Products & Services  •  100-1000 employees  •  Bootstrapped
ETL
Informatica
Data Warehouse (DWH)
Big Data
Scala
Hadoop
Apache Hive
PySpark
Spark
Remote only
4 - 7 yrs
₹10L - ₹30L / yr

Job Description:

We are looking for a Big Data Engineer who has worked across the entire ETL stack: someone who has ingested data in batch and live-stream formats, transformed large volumes of data daily, built data warehouses to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.

Responsibilities:

  • Develop, test, and implement data solutions based on functional / non-functional business requirements.
  • You would be required to code in Scala and PySpark daily, on cloud as well as on-prem infrastructure.
  • Build data models to store the data in the most optimized manner.
  • Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Implementing the ETL process and optimal data pipeline architecture
  • Monitoring performance and advising any necessary infrastructure changes.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Proactively identify potential production issues and recommend and implement solutions
  • Must be able to write quality code and build secure, highly available systems.
  • Create design documents that describe the functionality, capacity, architecture, and process.
  • Review peers' code and pipelines before deploying to production, checking for optimization issues and code standards.

Skill Sets:

  • Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
  • Proficient understanding of distributed computing principles.
  • Experience in working with batch-processing and real-time systems using various open-source technologies like NoSQL, Spark, Pig, Hive, and Apache Airflow.
  • Implemented complex projects dealing with considerable data sizes (PB).
  • Optimization techniques (performance, scalability, monitoring, etc.).
  • Experience with integration of data from multiple data sources.
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, etc.
  • Knowledge of various ETL techniques and frameworks, such as Flume.
  • Experience with various messaging systems, such as Kafka or RabbitMQ.
  • Creation of DAGs for data engineering (see the sketch below).
  • Expert at Python/Scala programming, especially for data engineering/ETL purposes.
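
To make the DAG-creation point concrete, here is a minimal Apache Airflow sketch; the DAG id, schedule, and task bodies are placeholders, not this team's actual pipeline:

```python
# Minimal Airflow DAG: extract then transform, scheduled daily.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    pass  # pull from source systems

def transform():
    pass  # in practice this might submit a Scala/PySpark job

with DAG(
    dag_id="etl_daily",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform  # transform runs only after extract succeeds
```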

Job posted by
Vijitha VS

Big Data Engineer

at Propellor.ai

Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
Python
SQL
Spark
Hadoop
Big Data
Data engineering
PySpark
Remote only
1 - 4 yrs
₹5L - ₹15L / yr

Big Data Engineer/Data Engineer


What we are solving
Welcome to today’s business data world, where:
• Unification of all customer data into one platform is a challenge
• Extraction is expensive
• Business users do not have the time/skill to write queries
• There is high dependency on the tech team for written queries

These facts may look scary, but there are solutions with real-time self-serve analytics:
• Fully automated data integration from any kind of data source into a universal schema
• An analytics database that streamlines data indexing, querying, and analysis into a single platform
• Value generation from Day 1 through deep dives, root cause analysis, and micro-segmentation

At Propellor.ai, this is what we do.
• We help our clients reduce effort and increase effectiveness quickly
• By clearly defining the scope of projects
• By using dependable, scalable, future-proof technology solutions like Big Data solutions and cloud platforms
• By engaging with Data Scientists and Data Engineers to provide end-to-end solutions, leading to industrialisation of Data Science model development and deployment

What we have achieved so far
Since we started in 2016,
• We have worked across 9 countries with 25+ global brands and 75+ projects
• We have 50+ clients, 100+ Data Sources and 20TB+ data processed daily

Work culture at Propellor.ai
We are a small, remote team that believes in
• Working with a few, but only the highest-quality, team members who want to become the very best in their fields
• With each member's belief and faith in what we are solving, we collectively see the Big Picture
• No hierarchy, which leads us to believe in reaching the decision maker without any hesitation, so that our actions can have fruitful and aligned outcomes
• Each one is a CEO of their domain. So, every choice we make is geared toward our employees and clients succeeding together!

To read more about us, click here:
https://bit.ly/3idXzs0

About the role
We are building an exceptional team of Data Engineers who are passionate developers and want to push the boundaries to solve complex business problems using the latest tech stack. As a Big Data Engineer, you will work with various Technology and Business teams to deliver our Data Engineering offerings to our clients across the globe.

Role Description

• The role would involve big data pre-processing & reporting workflows, including collecting, parsing, managing, analysing, and visualizing large sets of data to turn information into business insights
• Develop the software and systems needed for end-to-end execution on large projects
• Work across all phases of SDLC, and use Software Engineering principles to build scalable solutions
• Build the knowledge base required to deliver increasingly complex technology projects
• The role would also involve testing various machine learning models on Big Data and deploying learned models for ongoing scoring and prediction.

Education & Experience
• B.Tech. or equivalent degree in CS/CE/IT/ECE/EEE; 3+ years of experience designing technological solutions to complex data problems, developing & testing modular, reusable, efficient and scalable code to implement those solutions.

Must have (hands-on) experience
• Python and SQL expertise (a small sketch follows this list)
• Distributed computing frameworks (Hadoop ecosystem & Spark components)
• Must be proficient in a cloud computing platform (AWS/Azure/GCP)
• Experience in any cloud platform would be preferred - GCP (BigQuery/Bigtable, Pub/Sub, Dataflow, App Engine), AWS, or Azure
• Linux environment, SQL, and shell scripting

Desirable
• Statistical or machine learning DSL like R
• Distributed and low-latency (streaming) application architecture
• Row-store distributed DBMSs such as Cassandra, CouchDB, MongoDB, etc.
• Familiarity with API design
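
A small sketch of the Python + SQL pairing on Spark that the must-haves describe; the table, path, and query are invented for illustration:

```python
# Running SQL over a distributed dataset from Python via Spark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("python-sql-demo").getOrCreate()

orders = spark.read.parquet("s3a://lake/orders/")  # hypothetical path
orders.createOrReplaceTempView("orders")

top_customers = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM orders
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
""")
top_customers.show()
```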

Hiring Process:
1. One phone screening round to gauge your interest and knowledge of fundamentals
2. An assignment to test your skills and ability to come up with solutions in a certain time
3. Interview 1 with our Data Engineer lead
4. Final Interview with our Data Engineer Lead and the Business Teams

Immediate joiners preferred.

Job posted by
Kajal Jain

Data Analyst

at Extramarks Education India Pvt Ltd

Founded 2007  •  Product  •  1000-5000 employees  •  Profitable
MySQL
Python
Noida
1 - 3 yrs
₹7L - ₹10L / yr

A Data Analyst, preferably with SQL, Advanced Excel, and Python experience, and with experience in providing deep insights from live data of various products.

Required Experience

  • 3+ years of relevant technical experience in a data analyst role
  • Intermediate/expert skills with SQL and basic statistics
  • Experience in advanced SQL
  • Good Python programming skills
  • Strong problem-solving and structuring skills
  • Automation in connecting various sources to the data and representing it through various dashboards
  • Excellent with numbers; able to communicate data points through various reports/templates
  • Ability to communicate effectively within and outside the Data Analytics team
  • Proactively take up work responsibilities and ad-hoc requests as and when needed
  • Ability and desire to take ownership of and initiative for analysis, from requirements clarification to deliverable
  • Strong technical communication skills, both written and verbal
  • Ability to understand and articulate the "big picture" and simplify complex ideas
  • Ability to identify and learn applicable new techniques independently as needed
  • Must have worked with various databases (relational and non-relational) and ETL processes
  • Must have experience in handling large volumes of data and adhere to optimization and performance standards
  • Should have the ability to analyse and provide relationship views of the data from different angles
  • Must have excellent communication skills (written and oral)
  • Knowledge of data science is an added advantage

Required Skills:

MySQL, Python, Advanced Excel, Tableau, reporting and dashboards, MS Office, analytical skills
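
As a hedged illustration of the MySQL + Python combination listed above, a pandas sketch that pulls live data into a DataFrame for analysis; the connection string and table are hypothetical:

```python
# Pulling product data from MySQL into pandas; credentials/table are made up.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@localhost/products_db")

df = pd.read_sql(
    "SELECT product_id, DATE(created_at) AS day, revenue FROM sales",
    engine,
)

# Quick insight: daily revenue per product
daily_revenue = df.groupby(["product_id", "day"])["revenue"].sum().reset_index()
print(daily_revenue.describe())
```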

Job posted by
Vidhi Solanki

AGM Data Engineering

at ACT FIBERNET

Founded 2008  •  Services  •  100-1000 employees  •  Profitable
Data engineering
Data Engineer
Hadoop
Informatica
Qlikview
Datapipeline
Bengaluru (Bangalore)
9 - 14 yrs
₹20L - ₹36L / yr

Key Responsibilities:

  • Development of proprietary processes and procedures designed to process various data streams around critical databases in the org
  • Manage technical resources around data technologies, including relational databases, NoSQL DBs, business intelligence databases, scripting languages, big data tools and technologies, and visualization tools.
  • Creation of a project plan including timelines and critical milestones to success in support of the project
  • Identification of the vital skill sets/staff required to complete the project
  • Identification of crucial sources of the data needed to achieve the objective.


Skill Requirements:

  • Experience with data pipeline processes and tools
  • Well versed in the Data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, ETL, ESB)
  • Experience with an existing ETL tool, e.g., Informatica, Ab Initio, etc.
  • Deep understanding of big data systems like Hadoop, Spark, YARN, Hive, Ranger, and Ambari
  • Deep knowledge of the Qlik ecosystem: QlikView, Qlik Sense, and NPrinting
  • Python, or a similar programming language
  • Exposure to data science and machine learning
  • Comfort working in a fast-paced environment

Soft attributes :

  • Independence: Must have the ability to work on his/her own without constant direction or supervision. He/she must be self-motivated and possess a strong work ethic to strive to put forth extra effort continually
  • Creativity: Must be able to generate imaginative, innovative solutions that meet the needs of the organization. You must be a strategic thinker/solution seller and should be able to think of integrated solutions (with field force apps, customer apps, CCT solutions etc.). Hence, it would be best to approach each unique situation/challenge in different ways using the same tools.
  • Resilience: Must remain effective in high-pressure situations, using both positive and negative outcomes as an incentive to move forward toward fulfilling commitments to achieving personal and team goals.
Job posted by
Sumit Sindhwani

Big Data Engineer

at Hiring for one of the MNC for India location

Agency job
via Natalie Consultants
Python
Hadoop
Big Data
Spark
Data engineering
PySpark
Apache Hive
Apache HBase
Gurugram, Pune, Bengaluru (Bangalore), Delhi, Noida, Ghaziabad, Faridabad
2 - 9 yrs
₹8L - ₹20L / yr

Key Responsibilities (Data Developer - Python, Spark)

Experience: 2 to 9 years

  • Development of data platforms, integration frameworks, processes, and code
  • Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages (see the sketch below)
  • Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests, and unit tests
  • Elaborate stories in a collaborative agile environment (Scrum or Kanban)
  • Familiarity with cloud platforms like GCP, AWS, or Azure
  • Experience with large data volumes
  • Familiarity with writing REST-based services
  • Experience with distributed processing and systems
  • Experience with Hadoop/Spark toolsets
  • Experience with relational database management systems (RDBMS)
  • Experience with Data Flow development
  • Knowledge of Agile and associated development techniques
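
A brief sketch of a Python API with an automated test, in the spirit of the API and testing points above; FastAPI is chosen for illustration, and the endpoint and payload are invented:

```python
# Tiny REST service plus a unit test exercising it in-process.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/metrics/{user_id}")
def user_metrics(user_id: int):
    # A real service would query Hive/HBase; stubbed for the sketch.
    return {"user_id": user_id, "sessions_last_7d": 12}

client = TestClient(app)

def test_user_metrics():
    response = client.get("/metrics/42")
    assert response.status_code == 200
    assert response.json()["user_id"] == 42

test_user_metrics()
```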

Job posted by
Rahul Kumar

Lead Technical Trainer

at Commoditize data engineering. (X1)

Agency job
via Multi Recruit
Python
SQL
Lead Technical Trainer
Trainer
IT Trainer
Lead Trainer
Bengaluru (Bangalore)
9 - 20 yrs
₹40L - ₹44L / yr

This is the first senior person we are bringing on for this role. They will start with the training program but will go on to build a team and eventually also be responsible for the entire training program + Bootcamp.

 

We are looking for someone fairly senior who has experience in data + tech. At some level, we have all the technical expertise to teach you the data stack as needed, so it's not super important that you know all the tools. However, basic knowledge of the stack is a requirement. The training program covers 2 parts - Technology (our stack) and Process (How we work with clients). Both are super important.

  • Full-time flexible working schedule and own end-to-end training
  • Self-starter - who can communicate effectively and proactively
  • Function effectively with minimal supervision.
  • You can train and mentor potential 5x engineers on Data Engineering skillsets
  • You can spend time on self-learning and teaching for new technology when needed
  • You are an extremely proactive communicator, who understands the challenges of remote/virtual classroom training and the need to over-communicate to offset those challenges.

Requirements

  • Proven experience as a corporate trainer, or a passion for teaching/providing training
  • Expertise in the data engineering space, with good experience in data collection, data ingestion, data modeling, data transformation, and data visualization technologies and techniques
  • Experience training working professionals on in-demand skills like Snowflake, dbt, Fivetran, Google Data Studio, etc.
  • Training/implementation experience using Fivetran, dbt Cloud, Heap, Segment, Airflow, and Snowflake is a big plus

Job posted by
Savitha Rajesh

SDE1 Data Scientist

at App-based lending platform. ( AF1)

Agency job
via Multi Recruit
Machine Learning (ML)
Data Science
Data Scientist
Python
pandas
Logistic regression
SKLearn
Random Forest Classifier
Gradient Boosting Regressor
Bengaluru (Bangalore)
1 - 2 yrs
₹15L - ₹17L / yr
  • Use data to develop machine learning models that optimize decision making in Credit Risk, Fraud, Marketing, and Operations
  • Implement data pipelines, new features, and algorithms that are critical to our production models
  • Create scalable strategies to deploy and execute your models
  • Write well designed, testable, efficient code
  • Identify valuable data sources and automate collection processes.
  • Undertake preprocessing of structured and unstructured data.
  • Analyze large amounts of information to discover trends and patterns.

Requirements:

  • 1+ years of experience in applied data science or engineering with a focus on machine learning
  • Python expertise with good knowledge of machine learning libraries, tools, techniques, and frameworks (e.g. pandas, sklearn, xgboost, lightgbm, logistic regression, random forest classifier, gradient boosting regressor, etc.); a small sketch follows this list
  • Strong quantitative and programming skills with a product-driven sensibility
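
A toy sketch with the libraries named above (pandas, sklearn, logistic regression); the data is synthetic and the label rule is fabricated purely for illustration:

```python
# Synthetic credit-risk-style example: train and score a classifier.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 1_000),
    "utilization": rng.uniform(0, 1, 1_000),
})
df["default"] = (df["utilization"] > 0.8).astype(int)  # fabricated label

X_train, X_test, y_train, y_test = train_test_split(
    df[["income", "utilization"]], df["default"], test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```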

Job posted by
Ayub Pasha

Python Machine Learning Developer

at SpotDraft

Founded 2017  •  Product  •  0-20 employees  •  Raised funding
Python
TensorFlow
Caffe
Noida, NCR (Delhi | Gurgaon | Noida)
3 - 7 yrs
₹3L - ₹24L / yr
We are building the AI core for a Legal Workflow solution. You will be expected to build and train models to extract relevant information from contracts and other legal documents.

Required Skills/Experience:

  • Python
  • Basics of deep learning
  • Experience with one ML framework (like TensorFlow, Keras, Caffe)

Preferred Skills/Experience:

  • Exposure to ML concepts like LSTM, RNN, and ConvNets
  • Experience with NLP and the Stanford POS tagger
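
One plausible shape for such a model, as a hedged Keras sketch; the vocabulary size, sequence length, and dummy data are placeholders, not SpotDraft's actual architecture:

```python
# LSTM text classifier sketch, e.g. "is clause X present in this document?"
import numpy as np
from tensorflow import keras

vocab_size, seq_len = 10_000, 200

model = keras.Sequential([
    keras.layers.Embedding(vocab_size, 64),
    keras.layers.LSTM(64),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy tokenized documents and labels, just to show the training call.
X = np.random.randint(0, vocab_size, size=(32, seq_len))
y = np.random.randint(0, 2, size=(32,))
model.fit(X, y, epochs=1, batch_size=8)
```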
Job posted by
Madhav Bhagat

Engineering Manager

at Uber

Founded 2012  •  Product  •  500-1000 employees  •  Raised funding
Big Data
Leadership
Engineering Management
Architecture
Bengaluru (Bangalore)
9 - 15 yrs
₹50L - ₹80L / yr
  • Minimum 5+ years of experience as a manager and overall 10+ years of industry experience in a variety of contexts, during which you've built scalable, robust, and fault-tolerant systems.
  • A solid knowledge of the whole web stack: front-end, back-end, databases, cache layer, HTTP protocol, TCP/IP, Linux, CPU architecture, etc. You are comfortable jamming on complex architecture and design principles with senior engineers.
  • Bias for action. You believe that speed and quality aren't mutually exclusive. You've shown good judgement about shipping as fast as possible while still making sure that products are built in a sustainable, responsible way.
  • Mentorship/guidance. You know that the most important part of your job is setting the team up for success. Through mentoring, teaching, and reviewing, you help other engineers make sound architectural decisions, improve their code quality, and get out of their comfort zone.
  • Commitment. You care tremendously about keeping the Uber experience consistent for users and strive to make any issues invisible to riders. You hold yourself personally accountable, jumping in and taking ownership of problems that might not even be in your team's scope.
  • Hiring know-how. You're a thoughtful interviewer who constantly raises the bar for excellence. You believe that what seems amazing one day becomes the norm the next day, and that each new hire should significantly improve the team.
  • Design and business vision. You help your team understand requirements beyond the written word, and you thrive in an environment where you can uncover subtle details. Even in the absence of a PM or a designer, you show great attention to the design and product aspects of anything your team ships.
Job posted by
Swati Singh