Data Engineer

Posted by Puneeta Mishra
2 - 4 yrs
₹12L - ₹15L / yr
Remote only
Skills
Python
SQL
Java

About us


Blitz is an instant-logistics company operating in Southeast Asia. Founded in 2021, it delivers orders using EV bikes, and it also leases those EV bikes to its drivers, generating a second revenue stream from leasing in addition to delivery charges. Blitz is revolutionizing instant logistics with advanced technology-based solutions: it is a product-driven company that uses modern technologies to build products solving problems in EV-based logistics, and it uses IoT data streamed from its bikes, together with smart engines, to make technology-driven decisions and create a delightful experience for consumers.



About the Role


We are seeking an experienced Data Engineer to join our dynamic team. The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and infrastructure to support our data-driven initiatives. The ideal candidate will have a strong background in software engineering, database management, and data architecture, with a passion for building robust and efficient data systems.



What you will do


  1. Design, build, and maintain scalable data pipelines and infrastructure to ingest, process, and analyze large volumes of structured and unstructured data.
  2. Collaborate with cross-functional teams to understand data requirements and develop solutions to meet business needs.
  3. Optimize data processing and storage solutions for performance, reliability, and cost-effectiveness.
  4. Implement data quality and validation processes to ensure accuracy and consistency of data.
  5. Monitor and troubleshoot data pipelines to identify and resolve issues in a timely manner.
  6. Stay updated on emerging technologies and best practices in data engineering and recommend innovations to enhance our data infrastructure.
  7. Document data pipelines, workflows, and infrastructure to facilitate knowledge sharing and ensure maintainability.
  8. Create data dashboards from the datasets to visualize different data requirements.
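Items 4 and 5 above (data quality, validation, and pipeline monitoring) can be illustrated with a minimal sketch in Python. This is not Blitz's actual pipeline; the record fields (ride ID, battery level) are hypothetical examples of the kind of EV/IoT data the role would handle:

```python
# Minimal data-quality check for a batch of (hypothetical) EV telemetry records.
# Each record must have a non-empty ride_id and a battery_pct between 0 and 100.

def validate_batch(records):
    """Split records into valid rows and rejected rows with a reason."""
    valid, rejected = [], []
    for rec in records:
        if not rec.get("ride_id"):
            rejected.append((rec, "missing ride_id"))
        elif not 0 <= rec.get("battery_pct", -1) <= 100:
            rejected.append((rec, "battery_pct out of range"))
        else:
            valid.append(rec)
    return valid, rejected

batch = [
    {"ride_id": "r1", "battery_pct": 87},
    {"ride_id": "",   "battery_pct": 55},   # fails: empty ride_id
    {"ride_id": "r3", "battery_pct": 140},  # fails: impossible battery level
]
valid, rejected = validate_batch(batch)
```

In a real pipeline, the rejected rows would typically be routed to a dead-letter table and surfaced in monitoring rather than silently dropped.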


What we need


  1. Bachelor's degree or higher in Computer Science, Engineering, or a related field.
  2. Proven experience as a Data Engineer or similar role, with expertise in building and maintaining data pipelines and infrastructure.
  3. Proficiency in programming languages such as Python, Java, or Scala.
  4. Strong knowledge of database systems (e.g., SQL, NoSQL, BigQuery) and data warehousing concepts.
  5. Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
  6. Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka).
  7. Excellent problem-solving skills and attention to detail.
  8. Strong communication and collaboration skills.


Preferred Qualifications


  1. Advanced degree in Computer Science, Engineering, or related field.
  2. Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes).
  3. Knowledge of machine learning and data analytics concepts.
  4. Experience with DevOps practices and tools.
  5. Certifications in relevant technologies (e.g., AWS Certified Big Data - Specialty, Google Professional Data Engineer).


Please refer to the company's website: https://rideblitz.com/



About Blitz Electric (IT W24)

Founded: 2021
Stage: Raised funding

Similar jobs

Epik Solutions
Posted by Sakshi Sarraf
Bengaluru (Bangalore), Noida
5 - 10 yrs
₹7L - ₹28L / yr
Python
SQL
Databricks
Scala
Spark

Job Description:


As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:


Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.


Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.


Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.


Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.


Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.


Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.


Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
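The extract, transform, load flow described in the responsibilities above can be sketched in plain Python. This is a toy stand-in for the Azure Databricks/PySpark code the role involves, and every name in it is illustrative:

```python
# Toy ETL: extract raw rows, transform (normalize + derive a field), load into a "sink".

def extract():
    # Stand-in for reading from a source system (e.g. a landing zone in a data lake).
    return [
        {"order_id": 1, "amount": "120.50", "country": "sg"},
        {"order_id": 2, "amount": "75.00",  "country": "ID"},
    ]

def transform(rows):
    # Normalize types and casing; derive a simple flag.
    out = []
    for r in rows:
        amount = float(r["amount"])
        out.append({
            "order_id": r["order_id"],
            "amount": amount,
            "country": r["country"].upper(),
            "is_large": amount > 100,
        })
    return out

def load(rows, sink):
    # Stand-in for writing to a warehouse table.
    sink.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
```

In the actual role, `extract` and `load` would be ADF activities or Spark reads/writes, but the shape of the work (ingestion, transformation, loading) is the same.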


Skills and Qualifications:


Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.

Proficiency in designing and developing data pipelines and ETL processes.

Solid understanding of data modeling concepts and database design principles.

Familiarity with data integration and orchestration using Azure Data Factory.

Knowledge of data quality management and data governance practices.

Experience with performance tuning and optimization of data pipelines.

Strong problem-solving and troubleshooting skills related to data engineering.

Excellent collaboration and communication skills to work effectively in cross-functional teams.

Understanding of cloud computing principles and experience with Azure services.

Marktine
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹25L / yr
Cloud
Google Cloud Platform (GCP)
BigQuery
Python
SQL

Specific Responsibilities

  • Minimum of 2 years of experience in Google BigQuery and Google Cloud Platform.
  • Design and develop the ETL framework using BigQuery.
  • Expertise in BigQuery concepts like nested queries, clustering, partitioning, etc.
  • Working experience with clickstream databases and Google Analytics / Adobe Analytics.
  • Should be able to automate data loads from BigQuery using APIs or a scripting language.
  • Good experience with advanced SQL concepts.
  • Good experience with Adobe Launch web, mobile & e-commerce tag implementation.
  • Identify complex, fuzzy problems, break them down into smaller parts, and implement creative, data-driven solutions.
  • Responsible for defining, analyzing, and communicating key metrics and business trends to management teams.
  • Identify opportunities to improve conversion and user experience through data; influence product and feature roadmaps.
  • Must have a passion for data quality and constantly look to improve the system; drive data-driven decision making with stakeholders and drive change management.
  • Understand requirements to translate business and technical problems into analytics problems.
  • Effective storyboarding and presentation of the solution to clients and leadership.
  • Client engagement and management.
  • Ability to interface effectively with multiple levels of management and functional disciplines.
  • Assist in developing/coaching individuals technically as well as on soft skills during the project and as part of the client project's training program.

 

Work Experience
  • 2 to 3 years of working experience in Google BigQuery & Google Cloud Platform
  • Relevant experience in Consumer Tech/CPG/Retail industries
  • Bachelor's in Engineering, Computer Science, Math, Statistics, or a related discipline
  • Strong problem-solving and web analytics skills; acute attention to detail
  • Experience in analyzing large, complex, multi-dimensional data sets
  • Experience in one or more roles in an online eCommerce or online support environment

Skills
  • Expertise in Google BigQuery & Google Cloud Platform
  • Experience in advanced SQL and a scripting language (Python/R)
  • Hands-on experience with BI tools (Tableau, Power BI)
  • Working experience with and understanding of Adobe Analytics or Google Analytics
  • Experience in creating and debugging website & app tracking (Omnibug, Dataslayer, GA debugger, etc.)
  • Excellent analytical thinking, analysis, and problem-solving skills
  • Knowledge of other GCP services is a plus
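The "automate the data load ... using APIs or scripting language" bullet above might, for example, mean scripting parameterized queries against a partitioned table. The sketch below only builds the SQL string in Python; the table and column names are made up, and actually running the query would require the google-cloud-bigquery client and credentials:

```python
# Build a BigQuery SQL string that filters on the partition column,
# so the engine can prune partitions instead of scanning the whole table.

def daily_events_query(table, day):
    """Return SQL selecting one day's rows from a date-partitioned table.

    `table` and the `event_date`/`event_name` columns are hypothetical.
    """
    return (
        "SELECT event_name, COUNT(*) AS n\n"
        f"FROM `{table}`\n"
        f"WHERE event_date = DATE '{day}'\n"
        "GROUP BY event_name\n"
        "ORDER BY n DESC"
    )

sql = daily_events_query("my_project.analytics.events", "2024-01-15")
```

Filtering on the partitioning column is what makes the clustering/partitioning concepts in the bullets above pay off in query cost and speed.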
 
Leading StartUp Focused On Employee Growth
Agency job via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
4 - 8 yrs
₹25L - ₹45L / yr
Data Analytics
Data Analyst
Tableau
Mixpanel
CleverTap
● 4+ years of experience in data and analytics.
● Knowledge of Excel, SQL, and writing code in Python.
● Experience with reporting and business intelligence tools like Tableau and Metabase.
● Exposure to distributed analytics processing technologies (e.g. Hive, Spark) is desired.
● Experience with CleverTap, Mixpanel, Amplitude, etc.
● Excellent communication skills.
● Background in market research and project management.
● Attention to detail.
● Problem-solving aptitude.
Personal Care Product Manufacturing
Agency job via Qrata by Rayal Rajan
Mumbai
3 - 8 yrs
₹12L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

DATA ENGINEER


Overview

They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by the regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people-proving everything we do. From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the product they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs. Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.

 

Job Description:

We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.


Responsibilities:

Design and implement scalable and robust data pipelines to collect, process, and store data from various sources.

Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.

Optimize and tune the performance of data systems to ensure efficient data processing and analysis.

Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.

Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness.

Implement and maintain data governance and security measures to protect sensitive data.

Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.

Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.


Qualifications:

Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.

Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.

Strong programming skills in languages such as Python, Java, or Scala.

Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.

Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).

Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).

Solid understanding of data modeling, data warehousing, and ETL principles.

Knowledge of data integration techniques and tools (e.g., Apache Nifi, Talend, or Informatica).

Strong problem-solving and analytical skills, with the ability to handle complex data challenges.

Excellent communication and collaboration skills to work effectively in a team environment.


Preferred Qualifications:

Advanced knowledge of distributed computing and parallel processing.

Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).

Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).

Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).

Experience with data visualization and reporting tools (e.g., Tableau, Power BI).

Certification in relevant technologies or data engineering disciplines.



AI-powered cloud-based SaaS solution provider
Agency job via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 6 yrs
₹20L - ₹40L / yr
Data Science
Weka
Data Scientist
Statistical Modeling
Mathematics
Roles and Responsibilities
● Research and develop advanced statistical and machine learning models for analysis of large-scale, high-dimensional data.
● Dig deeper into data, understand characteristics of data, evaluate alternate models, and validate hypotheses through theoretical and empirical approaches.
● Productize proven or working models into production-quality code.
● Collaborate with product management, marketing, and engineering teams in business units to elicit and understand their requirements and challenges and develop potential solutions.
● Stay current with the latest research and technology ideas; share knowledge by clearly articulating results and ideas to key decision makers.
● File patents for innovative solutions that add to the company's IP portfolio.

Requirements
● 4 to 6 years of strong experience in data mining, machine learning, and statistical analysis.
● BS/MS/PhD in Computer Science, Statistics, Applied Math, or related areas from premier institutes (only IITs / IISc / BITS / top NITs or top US universities should apply).
● Experience in productizing models to code in a fast-paced start-up environment.
● Expertise in the Python programming language and fluency in analytical tools such as Matlab, R, Weka, etc.
● Strong intuition for data and keen aptitude for large-scale data analysis.
● Strong communication and collaboration skills.
It's a deep-tech and research company.
Agency job via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹25L / yr
Data Science
Python
Natural Language Processing (NLP)
Deep Learning
Long short-term memory (LSTM)
Job Description: 
 
We are seeking passionate engineers experienced in software development using Machine Learning (ML) and Natural Language Processing (NLP) techniques to join our development team in Bangalore, India. We're a fast-growing startup working on an enterprise product - An intelligent data extraction Platform for various types of documents. 
 
Your responsibilities: 
 
• Build, improve, and extend NLP capabilities
• Research and evaluate different approaches to NLP problems
• Write code that is well designed and produces deliverable results
• Write code that scales and can be deployed to production
 
You must have: 
 
• Fundamentals of statistical methods are a must
• Experience in named entity recognition, POS tagging, lemmatization, vector representations of textual data, and neural networks (RNN, LSTM)
• A solid foundation in Python, data structures, algorithms, and general software development skills
• Ability to apply machine learning to problems that deal with language
• Engineering ability to build robustly scalable pipelines
• Ability to work in a multi-disciplinary team with a strong product focus
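The "vector representations of textual data" requirement above can be illustrated with the simplest possible version: bag-of-words vectors compared by cosine similarity, in plain Python. Real document-extraction systems would use learned embeddings from an NLP library; this sketch just shows the underlying idea:

```python
import math
from collections import Counter

def bow_vector(text):
    # Bag-of-words: map each lowercase token to its count.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine of the angle between two sparse count vectors.
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

v1 = bow_vector("extract data from documents")
v2 = bow_vector("extract fields from documents")
v3 = bow_vector("ride an ev bike")
```

Texts about the same topic end up with nearby vectors; unrelated texts share no tokens and score zero.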
NextGen Invent Corporation
Posted by Deepshikha Gupta
Remote only
0 - 8 yrs
₹3L - ₹20L / yr
Python
Object Oriented Programming (OOPs)
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)

Experience: 1 - 5 years

Job Location: WFH

No. of Position: Multiple

Qualifications: Ph.D. (must have)

Work Timings: 1:30 PM IST to 10:30 PM IST

Functional Area: Data Science

NextGen Invent is currently searching for a Data Scientist. This role will report directly to the VP, Data Science, in the Data Science Practice. The person will work on data science use cases for the enterprise and must have deep expertise in supervised and unsupervised machine learning, modeling, and algorithms, with a strong focus on delivering use cases and solutions at speed and scale to solve business problems.

Job Responsibilities:

  • Leverage AI/ML modeling and algorithms to deliver on use cases
  • Build modeling solutions at speed and scale to solve business problems
  • Develop data science solutions that can be tested and deployed in an Agile delivery model
  • Implement and scale-up high-availability models and algorithms for various business and corporate functions
  • Investigate and create experimental prototypes that work on specific domains and verticals
  • Analyze large, complex data sets to reveal underlying patterns, and trends
  • Support and enhance existing models to ensure better performance
  • Set up and conduct large-scale experiments to test hypotheses and delivery of models

Skills, Knowledge, Experience:

  • Must have Ph.D. in an analytical or technical field (e.g. applied mathematics, computer science)
  • Strong knowledge of statistical and machine learning methods
  • Hands on experience on building models at speed and scale
  • Ability to work in a collaborative, transparent style with cross-functional stakeholders across the organization to lead and deliver results
  • Strong skills in oral and written communication
  • Ability to lead a high-functioning team and develop and train people
  • Must have programming experience in SQL, Python and R
  • Experience conceiving, implementing and continually improving machine learning projects
  • Strong familiarity with higher level trends in artificial intelligence and open-source platforms
  • Experience working with AWS, Azure, or similar cloud platform
  • Familiarity with visualization techniques and software
  • Healthcare experience is a plus
  • Experience with Kafka, chatbots, and blockchain is a plus.


Metadata Technology North America
Agency job via RS Consultants by Biswadeep RS
Remote only
8 - 16 yrs
₹20L - ₹50L / yr
Data Science
Machine Learning (ML)
Python
SageMaker
Go Programming (Golang)
Data Scientist Lead / Manager
Job Description:
We are looking for an exceptional Data Scientist Lead / Manager who is passionate about data and motivated to build large-scale machine learning solutions that make our data products shine. This person will contribute to the analytics of data for insight discovery and to the development of a machine learning pipeline that supports modeling of terabytes of daily data for various use cases.

Location: Pune (Initially remote due to COVID 19)

Note: Looking for someone who can start immediately or within a month. Hands-on experience in Python programming (minimum 5 years) is a must.


About the Organisation :

- It provides a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology and there is never a dull moment at work.

- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, United States, Germany, United Kingdom and India.

- You will gain work experience in a global environment. Our team speaks over 20 different languages, comes from more than 16 different nationalities, and over 42% of our staff are multilingual.


Qualifications:
• 8+ years relevant working experience
• Master's or Bachelor's degree in computer science or engineering
• Working knowledge of Python and SQL
• Experience in time series data, data manipulation, analytics, and visualization
• Experience working with large-scale data
• Proficiency with various ML algorithms for supervised and unsupervised learning
• Experience working in Agile/Lean model
• Experience with Java and Golang is a plus
• Experience with BI toolkit such as Tableau, Superset, Quicksight, etc is a plus
• Exposure to building large-scale ML models using one or more of modern tools and libraries such as AWS Sagemaker, Spark ML-Lib, Dask, Tensorflow, PyTorch, Keras, GCP ML Stack
• Exposure to modern Big Data tech such as Cassandra/Scylla, Kafka, Ceph, Hadoop, Spark
• Exposure to IAAS platforms such as AWS, GCP, Azure

Typical persona: Data Science Manager/Architect
Experience: 8+ years programming/engineering experience (with at least last 4 years in Data science in a Product development company)
Type: Hands-on candidate only

Must:
a. Hands-on Python: pandas, scikit-learn
b. Working knowledge of Kafka
c. Able to carry out own tasks and help the team in resolving problems - logical or technical (25% of job)
d. Good on analytical & debugging skills
e. Strong communication skills

Desired (in order of priorities)
a. Go (strong advantage)
b. Airflow (Strong advantage)
c. Familiarity & working experience on more than one type of database: relational, object, columnar, graph and other unstructured databases
d. Data structures, Algorithms
e. Experience with multi-threaded and thread sync concepts
f. AWS Sagemaker
g. Keras
Simform Solutions
Posted by Dipali Pithava
Ahmedabad
4 - 8 yrs
₹5L - ₹12L / yr
ETL
Informatica
Data Warehouse (DWH)
Relational Database (RDBMS)
DBA
We are looking for a Lead DBA with 4-7 years of experience.

We are a fast-growing digital, cloud, and mobility services provider, with our principal market being North America. We are looking for talented database/SQL experts for the management and analytics of large data in various enterprise projects.

Responsibilities
• Translate business needs into technical specifications
• Manage and maintain various database servers (backups, replicas, shards, jobs)
• Develop and execute database queries and conduct analyses
• Occasionally write scripts for ETL jobs
• Create tools to store data (e.g. OLAP cubes)
• Develop and update technical documentation

Requirements
• Proven experience as a database programmer and administrator
• Background in data warehouse design (e.g. dimensional modeling) and data mining
• Good understanding of SQL and NoSQL databases, online analytical processing (OLAP), and the ETL (extract, transform, load) framework
• Advanced knowledge of SQL queries, SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS)
• Familiarity with BI technologies (strong Tableau hands-on experience) is a plus
• An analytical mind with a problem-solving aptitude
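The "tools to store data (e.g. OLAP cubes)" responsibility above can be illustrated, in miniature, as a group-by roll-up over two dimensions in plain Python. A real cube would live in an OLAP engine such as SSAS with precomputed aggregates; the region/product data here is made up:

```python
from collections import defaultdict

# Toy "cube": aggregate a measure (sales) over dimensions (region, product),
# the core operation an OLAP cube precomputes at multiple grains.

facts = [
    {"region": "NA", "product": "A", "sales": 100},
    {"region": "NA", "product": "B", "sales": 50},
    {"region": "EU", "product": "A", "sales": 70},
    {"region": "NA", "product": "A", "sales": 30},
]

def rollup(rows, dims):
    """Sum sales grouped by the given dimension columns."""
    totals = defaultdict(int)
    for row in rows:
        key = tuple(row[d] for d in dims)
        totals[key] += row["sales"]
    return dict(totals)

by_region = rollup(facts, ["region"])                    # {("NA",): 180, ("EU",): 70}
by_region_product = rollup(facts, ["region", "product"])
```

An OLAP engine stores many such roll-ups at once so that slicing and dicing by any dimension combination is fast.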
Avhan Technologies Pvt Ltd
Posted by Aarti Vohra
Kolkata
7 - 10 yrs
₹8L - ₹20L / yr
MDX
DAX
SQL
SQL Server
Microsoft Analysis Services
Experience: 7 to 8 years
Notice Period: Immediate to 15 days
Job Location : Kolkata
 
Responsibilities:
• Develop and improve solutions spanning data processing activities from the data lake (stage) to star schemas and reporting views/tables, and finally into SSAS.
• Develop and improve Microsoft Analysis Services cubes (tabular and dimensional)
• Collaborate with other teams within the organization and be able to devise the technical solution as it relates to the business & technical requirements
• Mentor team members and be proactive in training and coaching team members to develop their proficiency in Analysis Services
• Maintain documentation for all processes implemented
• Adhere to and suggest improvements to coding standards, applying best practices
 
Skillsets:
• Proficient in MDX and DAX for querying in SSAS