Data Engineer

at Koo

Posted by Neha Gandhi
Bengaluru (Bangalore)
4 - 8 yrs
₹30L - ₹70L / yr
Full time
Skills
Data engineering
Data Engineer
Big Data
Big Data Engineer
Data Warehouse (DWH)
ELT
Amazon Web Services (AWS)

Problem Statement - Solution

Only 10% of India speaks English; the other 90% speak over 25 languages and thousands of dialects. The internet has largely been in English. A good part of India is now getting internet connectivity thanks to cheap smartphones and Jio. Non-English-speaking internet users will balloon to about 600 million of the total 750 million internet users in India by 2020, making the vernacular segment one of the largest in the world - almost 2x the size of the US population. Yet this segment has very few products it can use on the internet.

One large human need is that of sharing thoughts and connecting with people of the same community on the basis of language and common interests. Twitter serves this need globally, but the experience is mostly in English. There’s a large unaddressed need for these vernacular users to express themselves in their mother tongue and connect with others from their community. Koo is a solution to this problem.


About Koo

Koo was founded in March 2020 as a micro-blogging platform in both Indian languages and English, giving a voice to the millions of Indians who communicate in Indian languages.

Currently available in Assamese, Bengali, English, Hindi, Kannada, Marathi, Tamil and Telugu, Koo enables people from across India to express themselves online in their mother tongues. In a country where under 10% of the population speaks English as a native language, Koo meets the need for a social media platform that delivers an immersive language experience to Indian users, enabling them to connect and interact with each other. The recently introduced ‘Talk to Type’ feature lets users share their thoughts by voice, without having to type. In August 2021, Koo crossed 10 million downloads, just 16 months after launch.

Since June 2021, Koo has also been available in Nigeria.

Founding Team

Koo was founded by veteran internet entrepreneurs - Aprameya Radhakrishna (CEO, TaxiForSure) and Mayank Bidawatka (co-founder, Goodbox; core team, redBus).

Technology Team & Culture

The technology team comprises sharp coders, technology geeks and people who have been entrepreneurs or are entrepreneurial and extremely passionate about technology. Talent has come from the likes of Google, Walmart, redBus and Dailyhunt. Anyone joining the technology team will have a lot to learn from their peers and mentors. Download our Android app and take a look at what we’ve built. The technology stack comprises a wide variety of cutting-edge technologies like Kotlin, Java 15, Reactive Programming, MongoDB, Cassandra, Kubernetes, AWS, NodeJS, Python, ReactJS, Redis, Aerospike, ML, deep learning, etc. We believe in giving a lot of independence and autonomy to ownership-driven individuals.


Technology skill sets required for a matching profile

  1. Work experience of 4 to 8 years building large-scale, high-traffic consumer-facing applications, with a desire to work in a fast-paced startup.
  2. Development experience with real-time data analytics backend infrastructure on AWS.
  3. Responsible for building data and analytical engineering solutions with standard end-to-end design & ELT patterns, implementing data compaction pipelines (see the sketch below), data modelling, and overseeing overall data quality.
  4. Responsible for enabling access to data in the AWS S3 storage layer and transformations in the data warehouse.
  5. Implement data warehouse entities with common re-usable data model designs, with automation and data quality capabilities.
  6. Integrate domain data knowledge into the development of data requirements.
  7. Identify downstream implications of data loads/migration (e.g., data quality, regulatory).
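As an illustration of the compaction work mentioned in item 3, here is a minimal PySpark sketch that coalesces many small Parquet files into a few large ones. The bucket names and partition paths are hypothetical placeholders, and it assumes a Spark installation with the S3A connector configured; treat it as a sketch of the pattern, not Koo's actual pipeline.

```python
# Hypothetical compaction job: coalesce many small Parquet files in S3
# into fewer, larger ones. Bucket names and paths are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("s3-parquet-compaction")
    .getOrCreate()
)

SOURCE = "s3a://example-raw-bucket/events/date=2021-08-01/"      # assumption
TARGET = "s3a://example-curated-bucket/events/date=2021-08-01/"  # assumption

df = spark.read.parquet(SOURCE)

# Repartitioning down to a small, fixed number of partitions turns
# thousands of small files into a handful of large ones, which keeps
# S3 listings and downstream warehouse scans fast.
df.repartition(8).write.mode("overwrite").parquet(TARGET)

spark.stop()
```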

About Koo

Koo App enables Indians to express themselves in Indian languages. Express on Koo App using Audio, Video or Text. Speak to India here! Download the Koo App and start Kooing! Koo is India's Aatmanirbhar App.
Founded: 2020
Type: Product
Size: 20-100 employees
Stage: Raised funding

Similar jobs

Data Engineer

at Fintech Company

Agency job
via Jobdost
Python
SQL
Data Warehouse (DWH)
Hadoop
Amazon Web Services (AWS)
DevOps
Git
Selenium
Informatica
ETL
Big Data
Postman
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹12L / yr

Purpose of Job:

Responsible for drawing insights from many sources of data to answer important business questions and help the organization make better use of data in its daily activities.


Job Responsibilities:

We are looking for a smart and experienced Data Engineer 1 who can work with a senior manager to:
⮚ Build DevOps solutions and CI/CD pipelines for code deployment
⮚ Build unit test cases for APIs and code in Python (a pytest sketch follows below)
⮚ Manage AWS resources including EC2, RDS, CloudWatch, Amazon Aurora, etc.
⮚ Build and deliver high-quality data architecture and pipelines to support business and reporting needs
⮚ Deliver on data architecture projects and implementation of next-generation BI solutions
⮚ Interface with other teams to extract, transform, and load data from a wide variety of data sources
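As a hedged illustration of the unit-testing bullet above, here is a minimal pytest sketch. The `normalize_amount` helper is hypothetical, invented for the example; real tests would target the service's own APIs and code.

```python
# test_transforms.py - a minimal pytest sketch.
# The helper under test is hypothetical, not from this posting.
import pytest

def normalize_amount(raw: str) -> float:
    """Parse an amount like '₹1,200.50' into a float (hypothetical helper)."""
    cleaned = raw.replace("₹", "").replace(",", "").strip()
    return float(cleaned)

def test_normalize_amount_strips_currency_and_commas():
    assert normalize_amount("₹1,200.50") == 1200.50

def test_normalize_amount_rejects_garbage():
    # float() raises ValueError on non-numeric input
    with pytest.raises(ValueError):
        normalize_amount("not-a-number")

# Run with: pytest test_transforms.py
```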
Qualifications:
Education: MS/MTech/BTech graduates or equivalent with a focus on data science and quantitative fields (CS, Eng, Math, Eco)
Work Experience: Proven 1+ years of experience in data mining (SQL, ETL, data warehouse, etc.) and using SQL databases

 

Skills
Technical Skills
⮚ Proficient in Python and SQL; familiarity with statistics or analytical techniques
⮚ Data warehousing experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
⮚ Working knowledge of tools and utilities: AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
Soft Skills
⮚ Deep curiosity and humility
⮚ Excellent storyteller and communicator
⮚ Design thinking

Job posted by
Shalaka ZawarRathi

Data Engineer

at AI-powered cloud-based SaaS solution

Agency job
via wrackle
Data engineering
Big Data
Data Engineer
Big Data Engineer
Hibernate (Java)
Data Structures
Agile/Scrum
SaaS
Cassandra
Spark
Python
NOSQL Databases
Hadoop
HDFS
MapReduce
AWS CloudFormation
EMR
Amazon S3
Apache Kafka
Apache ZooKeeper
Systems Development Life Cycle (SDLC)
Java
YARN
Bengaluru (Bangalore)
2 - 10 yrs
₹15L - ₹50L / yr
Responsibilities

● Able to contribute to the gathering of functional requirements, developing technical specifications, and project & test planning
● Demonstrating technical expertise, and solving challenging programming and design problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Work cross-functionally with various Bidgely teams including product management, QA/QE, various product lines, and/or business units to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with Big Data ecosystems: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, ZooKeeper (a minimal Kafka sketch follows this list)
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python
● Strong leadership experience: leading meetings, presenting if required
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience on Cloud or AWS is preferable
● A good understanding of, and the ability to develop, software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements
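Since the stack above leans on Kafka, here is a minimal producer/consumer round trip using the kafka-python client. The broker address and topic name are placeholders; this is a sketch of the pattern, not code from the company.

```python
# Minimal Kafka round trip with the kafka-python client.
# Broker address and topic name are illustrative placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"   # assumption
TOPIC = "user-events"       # assumption

producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"user_id": 42, "action": "login"})
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,   # stop iterating once the topic goes quiet
)
for message in consumer:
    print(message.value)        # {'user_id': 42, 'action': 'login'}
```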
Job posted by
Naveen Taalanki

Principal Data Engineer

at AI-powered cloud-based SaaS solution provider

Agency job
via wrackle
Data engineering
Big Data
Spark
Apache Kafka
Cassandra
Apache ZooKeeper
Data engineer
Hadoop
HDFS
MapReduce
AWS CloudFormation
EMR
Amazon EMR
Amazon S3
Apache Spark
Java
PythonAnywhere
Test driven development (TDD)
Cloud Computing
Google Cloud Platform (GCP)
Agile/Scrum
OOD
Software design
Architecture
YARN
Bengaluru (Bangalore)
8 - 15 yrs
₹25L - ₹60L / yr
Responsibilities

● Able to contribute to the gathering of functional requirements, developing technical specifications, and test case planning
● Demonstrating technical expertise, and solving challenging programming and design problems
● 60% hands-on coding with architecture ownership of one or more products
● Ability to articulate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Mentor and guide team members
● Work cross-functionally with various Bidgely teams including product management, QA/QE, various product lines, and/or business units to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 8-12 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with Big Data ecosystems
● Past experience with Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, ZooKeeper
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python
● Ability to lead and mentor technical team members
● Expertise with the entire Software Development Life Cycle (SDLC)
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Business acumen: strategic thinking & strategy development
● Experience on Cloud or AWS is preferable
● A good understanding of, and the ability to develop, software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements
● Experience with Agile development, Scrum, or Extreme Programming methodologies
Job posted by
Naveen Taalanki

Data Engineer

at Consulting and Services company

Agency job
via Jobdost
Amazon Web Services (AWS)
Apache
Python
PySpark
Hyderabad, Ahmedabad
5 - 10 yrs
₹5L - ₹30L / yr

Data Engineer 

  

Mandatory Requirements

  • Experience in AWS Glue
  • Experience in Apache Parquet
  • Proficiency in AWS S3 and data lakes
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices
  • Scripting languages: Python & PySpark

 

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS
  • Ingest data from sources that expose it through different technologies (RDBMS, REST/HTTP APIs, flat files, streams, and time-series data from various proprietary systems), implementing ingestion and processing with Big Data technologies
  • Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data quality checks to ensure the right data enters the platform and to verify the results of calculations (a sketch follows this list)
  • Develop infrastructure to collect, transform, combine and publish/distribute customer data
  • Define process improvement opportunities to optimize data collection, insights and displays
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
  • Identify and interpret trends and patterns from complex data sets
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders
  • Participate in regular Scrum ceremonies with the agile teams
  • Develop queries, write reports and present findings proficiently
  • Mentor junior members and bring best industry practices
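To make the data quality bullet concrete, here is a hedged PySpark sketch of an automated quality gate. The table path and column names (`loan_id`, `amount`) are assumptions chosen to fit the consumer-finance context, not the client's actual schema.

```python
# A sketch of an automated data quality gate in PySpark.
# Paths, column names and checks are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("s3a://example-bucket/loans/")  # assumption

total = df.count()
checks = {
    "non_empty": total > 0,
    # the primary key must be unique
    "unique_loan_id": df.select("loan_id").distinct().count() == total,
    # mandatory columns must not contain nulls
    "no_null_amount": df.filter(F.col("amount").isNull()).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print(f"All {len(checks)} checks passed on {total} rows")
```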

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science or a related discipline
  • Advanced knowledge of one language: Java, Scala, Python, C#
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie/Airflow, Amazon Web Services (AWS), Docker/Kubernetes, Snowflake
  • Proficiency with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines
  • Good written and oral communication skills and the ability to present results to non-technical audiences
  • Knowledge of business intelligence and analytical tools, technologies and techniques

 

Familiarity and experience in the following is a plus:  

  • AWS certification 
  • Spark Streaming  
  • Kafka Streaming / Kafka Connect  
  • ELK Stack  
  • Cassandra / MongoDB  
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Job posted by
Sathish Kumar

Big Data Lead Architect

at Datametica Solutions Private Limited

Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Spark
Apache Spark
Apache Hive
PySpark
Python
Data engineering
Pune, Hyderabad
7 - 12 yrs
₹12L - ₹33L / yr

Job description

Role: Lead Architect (Spark, Scala, Big Data/Hadoop, Java)

Primary Location: India - Pune, Hyderabad

Experience: 7 - 12 years

Management Level: 7

Joining Time: Immediate joiners are preferred


  • Attend requirements gathering workshops, estimation discussions, design meetings and status review meetings
  • Experience in solution design and solution architecture for the data engineering model, building and implementing Big Data projects on-premises and on the cloud
  • Align architecture with business requirements and stabilize the developed solution
  • Ability to build prototypes to demonstrate the technical feasibility of your vision
  • Professional experience facilitating and leading solution design, architecture and delivery planning activities for data-intensive and high-throughput platforms and applications
  • Ability to benchmark systems, analyse system bottlenecks and propose solutions to eliminate them
  • Able to help programmers and project managers in the design, planning and governance of implementing projects of any kind
  • Develop, construct, test and maintain architectures, and run sprints for the development and rollout of functionalities
  • Data analysis and code development experience, ideally in Big Data: Spark, Hive, Hadoop, Java, Python, PySpark
  • Execute projects of various types, i.e. design, development, implementation and migration of functional analytics models/business logic across architecture approaches
  • Work closely with business analysts to understand the core business problems and deliver efficient IT solutions for the product
  • Deploy sophisticated analytics programs on any cloud platform


Perks and Benefits we Provide!


  • Working with Highly Technical and Passionate, mission-driven people
  • Subsidized Meals & Snacks
  • Flexible Schedule
  • Approachable leadership
  • Access to various learning tools and programs
  • Pet Friendly
  • Certification Reimbursement Policy
  • Check out more about us on our website below!

www.datametica.com

Job posted by
Nikita Aher

Sr. Data Engineer (a Fintech product company)

at Velocity.in

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
Data Visualization
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Data-flow analysis
Amazon Web Services (AWS)
PL/SQL
NOSQL Databases
PostgreSQL
ETL
data pipelining
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into data warehouses (see the orchestration sketch after this list)

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)

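As a sketch of how the pieces above might be orchestrated, here is a minimal Airflow DAG that runs an ingestion step and then a dbt build. The DAG id, dbt project path and task bodies are placeholders, not Velocity's actual pipeline.

```python
# A hedged Airflow sketch: ingest raw data, then run dbt to build
# warehouse models. Ids, paths and commands are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def ingest_sources():
    """Pull data from upstream sources into the raw layer (stubbed)."""
    print("ingesting raw data ...")

with DAG(
    dag_id="elt_daily",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_sources)

    # dbt handles the T of ELT: SQL models turn raw tables into
    # warehouse entities. The project path is an assumption.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/project && dbt run",
    )

    ingest >> transform
```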
 

What To Bring

  • 3+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experience formulating ideas, building proofs of concept (POCs) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise and cloud-based infrastructure (AWS or Google Cloud)

  • A basic understanding of Kubernetes & Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

Job posted by
chinnapareddy S

Data Engineer

at Streetmark

SCCM
PL/SQL
APPV
Stani's Python Editor
AWS Simple Notification Service (SNS)
Amazon Web Services (AWS)
Python
Microsoft App-V
Remote, Bengaluru (Bangalore), Chennai
3 - 9 yrs
₹3L - ₹20L / yr

Hi All,

We are hiring a Data Engineer for one of our clients, for the Bangalore and Chennai locations.


Strong knowledge of SCCM, App-V, and Intune infrastructure.

PowerShell/VBScript/Python

Windows Installer

Knowledge of Windows 10 registry

Application Repackaging

Application Sequencing with App-v

Deploying and troubleshooting applications, packages, and Task Sequences.

Security patch deployment and remediation

Windows operating system patching and Defender updates

 

Thanks,
Mohan.G

Job posted by
Mohan Guttula

Spark
Big Data
Data engineering
Hadoop
Apache Kafka
Amazon Web Services (AWS)
Amazon S3
SQL server
Python
Java
Bengaluru (Bangalore)
3 - 5 yrs
₹6L - ₹12L / yr
Data Engineer

• Drive the data engineering implementation
• Strong experience in building data pipelines
• AWS stack experience is a must
• Deliver conceptual, logical and physical data models for the implementation teams
• Strong SQL is a must: advanced SQL working knowledge and experience working with a variety of relational databases, SQL query authoring
• AWS cloud data pipeline experience is a must: data pipelines and data-centric applications using distributed storage platforms like S3 and distributed processing platforms like Spark, Airflow, Kafka
• Working knowledge of AWS technologies such as S3, EC2, EMR, RDS, Lambda, Elasticsearch (a short boto3 sketch follows this list)
• Ability to use a major programming language (e.g. Python/Java) to process data for modelling
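As a small illustration of the S3-centric work above, here is a hedged boto3 sketch that lists newly landed objects under a prefix and reads one of them. The bucket and key names are placeholders.

```python
# List objects under a data-lake prefix and read one with boto3.
# Bucket and prefix are illustrative placeholders.
import boto3

s3 = boto3.client("s3")

resp = s3.list_objects_v2(Bucket="example-data-lake", Prefix="raw/orders/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

body = s3.get_object(Bucket="example-data-lake", Key="raw/orders/part-000.csv")
first_line = body["Body"].read().split(b"\n", 1)[0]
print(first_line)  # header row of the landed CSV
```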
Job posted by
Geeti Gaurav Mohanty

ETL Engineer - Data Pipeline

at DataToBiz

Founded 2018  •  Services  •  20-100 employees  •  Bootstrapped
ETL
Amazon Web Services (AWS)
Amazon Redshift
Python
Chandigarh, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹7L - ₹15L / yr
Job Responsibilities:
- Develop new data pipelines and ETL jobs for processing millions of records; they should scale with growth (a minimal end-to-end sketch follows this list).
- Optimise pipelines to handle real-time data, batch update data and historical data.
- Establish scalable, efficient, automated processes for complex, large-scale data analysis.
- Write high-quality code to gather and manage large data sets (both real-time and batch data) from multiple sources, perform ETL and store them in a data warehouse.
- Manipulate and analyse complex, high-volume, high-dimensional data from varying sources using a variety of tools and data analysis techniques.
- Participate in data pipeline health monitoring and performance optimisation, as well as quality documentation.
- Interact with end users/clients and translate business language into technical requirements.
- Act independently to expose and resolve problems.
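The sketch below shows the end-to-end shape described above: extract rows from a CSV, apply a small transform, and load them into a warehouse table via psycopg2. The connection details, table and column names are placeholders, and a production job would add batching, retries and monitoring.

```python
# A minimal extract-transform-load sketch. Connection details and
# the orders schema are illustrative placeholders.
import csv
import psycopg2

def extract(path):
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    for row in rows:
        # normalise a couple of fields; real jobs would do much more
        yield (row["order_id"], row["city"].strip().title(), float(row["amount"]))

def load(records):
    conn = psycopg2.connect(
        host="warehouse.example.com",  # assumption
        dbname="analytics", user="etl", password="secret",
    )
    with conn, conn.cursor() as cur:   # commits on success
        cur.executemany(
            "INSERT INTO orders (order_id, city, amount) VALUES (%s, %s, %s)",
            list(records),
        )
    conn.close()

if __name__ == "__main__":
    load(transform(extract("orders.csv")))
```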

Job Requirements:
- 2+ years' experience in software development & data pipeline development for enterprise analytics.
- 2+ years of working with Python, with exposure to various warehousing tools.
- In-depth experience with any commercial tools like AWS Glue, Talend, Informatica, DataStage, etc.
- Experience with various relational databases like MySQL, MSSQL, Oracle etc. is a must.
- Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS).
- Experience in various DevOps practices, helping clients deploy and scale systems as per requirement.
- Strong verbal and written communication skills with other developers and business clients.
- Knowledge of the Logistics and/or Transportation domain is a plus.
- Hands-on experience with traditional databases and ERP systems like Sybase and PeopleSoft.
Job posted by
PS Dhillon

Data Architect

at I Base IT

Founded 2011  •  Product  •  100-500 employees  •  Raised funding
Data Analytics
Data Warehouse (DWH)
Data Structures
Spark
Architecture
cube building
data lake
Hadoop
Java
Hyderabad
9 - 13 yrs
₹10L - ₹23L / yr
A Data Architect who leads a team of 5 members. Required skills: Spark, Scala, Hadoop.
Job posted by
Sravanthi Alamuri