Backend Data Engineer
India's best Short Video App
Agency job
4 - 12 yrs
₹25L - ₹50L / yr
Bengaluru (Bangalore)
Skills
Data engineering
Big Data
Spark
Apache Kafka
Apache Hive
Data engineer
Elastic Search
MongoDB
Python
Apache Storm
Druid Database
Apache HBase
Cassandra
DynamoDB
Memcached
Proxies
HDFS
Pig
Scribe
Apache ZooKeeper
Agile/Scrum
Roadmaps
DevOps
Software Testing (QA)
Data Warehouse (DWH)
Flink
AWS Kinesis
Presto
Airflow
Caches
Data Pipelines
What Makes You a Great Fit for The Role?

You’re awesome at and will be responsible for
 
Extensive programming experience with cross-platform development in one of the following: Java/Spring Boot, JavaScript/Node.js/Express.js, or Python
3-4 years of experience in big data analytics technologies like Storm, Spark/Spark Streaming, Flink, AWS Kinesis, Kafka Streaming, Hive, Druid, Presto, Elasticsearch, Airflow, etc.
3-4 years of experience building high-performance RPC services using different high-performance paradigms: multi-threading, multi-processing, asynchronous programming (non-blocking I/O), reactive programming
3-4 years of experience working with high-throughput, low-latency databases and cache layers like MongoDB, HBase, Cassandra, DynamoDB, ElastiCache (Redis + Memcached)
Experience designing and building high-scale app backends and microservices leveraging cloud-native services on AWS like proxies, caches, CDNs, messaging systems, serverless compute (e.g. Lambda), monitoring and telemetry.
Strong understanding of distributed systems fundamentals around scalability, elasticity, availability, and fault tolerance.
Experience in analysing and improving the efficiency, scalability, and stability of distributed systems and backend microservices.
5-7 years of strong design/development experience in building massively large-scale, high-throughput, low-latency distributed internet systems and products.
Good experience working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Storm, HBase, Scribe, ZooKeeper, and NoSQL systems.
Agile methodologies, sprint management, roadmaps, mentoring, documentation, software architecture.
Liaising with Product Management, DevOps, QA, client and other teams
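For illustration only (not part of the role description): the asynchronous, non-blocking I/O paradigm listed above can be sketched in a few lines of Python with asyncio. The service and function names here are hypothetical; a real RPC service would await network or database calls instead of a sleep.

```python
import asyncio

async def fetch_user(user_id: int) -> dict:
    # Simulates a non-blocking RPC/database call; while this coroutine
    # awaits, the event loop is free to serve other requests.
    await asyncio.sleep(0.01)
    return {"id": user_id, "name": f"user-{user_id}"}

async def handle_batch(user_ids: list[int]) -> list[dict]:
    # Issue all calls concurrently instead of sequentially.
    return await asyncio.gather(*(fetch_user(uid) for uid in user_ids))

results = asyncio.run(handle_batch([1, 2, 3]))
```

The concurrent `gather` is what gives such services high throughput: total latency is roughly that of the slowest call, not the sum of all calls.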
 
Your Experience Across The Years in the Roles You’ve Played
 
Have 5-7 years of total experience, with 2-3 years in a startup.
Have a B.Tech or M.Tech or equivalent academic qualification from a premier institute.
Experience in product companies working on internet-scale applications is preferred
Thorough awareness of cloud computing infrastructure on AWS, leveraging cloud-native services and infrastructure services to design solutions.
Familiarity with the Cloud Native Computing Foundation landscape, leveraging mature open-source projects, including an understanding of containerisation/Kubernetes.
 
You are passionate about learning or growing your expertise in some or all of the following
Data Pipelines
Data Warehousing
Statistics
Metrics Development
 
We Value Engineers Who Are
 
Customer-focused: We believe that doing what’s right for the creator is ultimately what will drive our business forward.
Obsessed with Quality: Your Production code just works & scales linearly
Team players. You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow/improve.
Pragmatic: We do things quickly to learn what our creators desire. You know when it’s appropriate to take shortcuts that don’t sacrifice quality or maintainability.
Owners: Engineers at Chingari know how to positively impact the business.


Similar jobs

Remote only
3 - 7 yrs
₹15L - ₹24L / yr
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
R Programming
+4 more

  Senior Data Scientist

  • 6+ years of experience building data pipelines and deployment pipelines for machine learning models
  • 4+ years of experience with ML/AI toolkits such as TensorFlow, Keras, AWS SageMaker, MXNet, H2O, etc.
  • 4+ years of experience developing ML/AI models in Python/R
  • Must have leadership skills to lead and deliver projects, be proactive, take ownership, interface with business, represent the team, and spread knowledge.
  • Strong knowledge of statistical data analysis and machine learning techniques (e.g., Bayesian, regression, classification, clustering, time series, deep learning).
  • Should be able to help deploy various models and tune them for better performance.
  • Working knowledge of operationalizing models in production using model repositories, APIs, and data pipelines.
  • Experience with machine learning and computational statistics packages.
  • Experience with Databricks, Data Lake.
  • Experience with Dremio, Tableau, Power BI.
  • Experience working with Spark ML and Spark DL with PySpark would be a big plus!
  • Working knowledge of relational database systems like SQL Server, Oracle.
  • Knowledge of deploying models on platforms like PCF, AWS, Kubernetes.
  • Good knowledge of continuous integration suites like Jenkins.
  • Good knowledge of web servers (Apache, NGINX).
  • Good knowledge of Git, GitHub, Bitbucket.
  • Java, R, and Python programming experience.
  • Should be very familiar with MS SQL, Teradata, Oracle, DB2.
  • Big Data – Hadoop.
  • Expert knowledge using BI tools, e.g. Tableau.
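To make the "regression" technique mentioned in the list above concrete, here is a minimal pure-Python sketch of ordinary least squares on a toy dataset. It is illustrative only and not tied to any specific toolkit named in the listing.

```python
def fit_ols(xs, ys):
    """Fit y = a*x + b by ordinary least squares (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    a = cov_xy / var_x
    b = mean_y - a * mean_x
    return a, b

# Toy data lying exactly on y = 2x + 1.
slope, intercept = fit_ols([1, 2, 3, 4], [3, 5, 7, 9])
```

In practice a library such as scikit-learn or statsmodels would be used, but the closed-form arithmetic is the same.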

 

It's a deep-tech and research company.
Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹25L / yr
Data Science
Python
Natural Language Processing (NLP)
Deep Learning
Long short-term memory (LSTM)
+8 more
Job Description: 
 
We are seeking passionate engineers experienced in software development using Machine Learning (ML) and Natural Language Processing (NLP) techniques to join our development team in Bangalore, India. We're a fast-growing startup working on an enterprise product - An intelligent data extraction Platform for various types of documents. 
 
Your responsibilities: 
 
• Build, improve and extend NLP capabilities
• Research and evaluate different approaches to NLP problems
• Must be able to write code that is well designed and produces deliverable results
• Write code that scales and can be deployed to production

You must have:

• A solid grasp of the fundamentals of statistical methods
• Experience in named entity recognition, POS tagging, lemmatization, vector representations of textual data and neural networks - RNN, LSTM
• A solid foundation in Python, data structures, algorithms, and general software development skills.
• Ability to apply machine learning to problems that deal with language
• Engineering ability to build robustly scalable pipelines
• Ability to work in a multi-disciplinary team with a strong product focus
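As a minimal sketch of the "vector representations of textual data" item above: the simplest such representation is a bag-of-words count vector over a fixed vocabulary. The vocabulary and document below are made up for illustration; real systems would use embeddings or TF-IDF.

```python
from collections import Counter

def bag_of_words(doc: str, vocabulary: list[str]) -> list[int]:
    """Represent a document as a term-count vector over a fixed vocabulary."""
    counts = Counter(doc.lower().split())
    # Terms absent from the document get a count of 0.
    return [counts[term] for term in vocabulary]

vocab = ["data", "extraction", "platform", "nlp"]
vec = bag_of_words("NLP platform for data extraction from data", vocab)
```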
Cubera Tech India Pvt Ltd
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)
+10 more

Data Engineer- Senior

Cubera is a data company revolutionizing big data analytics and AdTech through data share value principles, wherein users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards Web3.

What are you going to do?

Design & Develop high performance and scalable solutions that meet the needs of our customers.

Work closely with Product Management, Architects and cross-functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in the Big Data stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.

What are we looking for ( Competencies) :

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java and Python, including object-oriented design.

Data handling frameworks: Should have a working knowledge of one or more data handling frameworks like Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.

Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP, etc.

Data Store: Must have expertise in one or more general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc.

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.

Benefits:

Competitive Salary Packages and benefits.

A collaborative, lively and upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

Chennai, Hyderabad
5 - 10 yrs
₹10L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Big Data with Cloud:

 

Experience : 5-10 years

 

Location : Hyderabad/Chennai

 

Notice period : 15-20 days Max

 

1.  Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight

2.  Experience in developing Lambda functions with AWS Lambda

3.  Expertise with Spark/PySpark – the candidate should be hands-on with PySpark code and able to do transformations with Spark

4.  Should be able to code in Python and Scala.

5.  Snowflake experience will be a plus
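As a rough sketch of item 2 above, a minimal AWS Lambda handler in Python. The event shape and field names are hypothetical; in AWS, this function would be wired to a trigger such as S3 or API Gateway, and a real pipeline step might write its output to S3 for Glue/Athena to pick up.

```python
import json

def lambda_handler(event, context):
    """Toy Lambda: casts 'amount' on each record and reports the row count."""
    records = event.get("records", [])
    # Example transformation: normalize amounts from strings to floats.
    transformed = [{**r, "amount": float(r["amount"])} for r in records]
    return {
        "statusCode": 200,
        "body": json.dumps({"rows": len(transformed)}),
    }

# Local invocation with a fabricated event (no AWS runtime needed).
resp = lambda_handler({"records": [{"id": 1, "amount": "9.99"}]}, None)
```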

Hyderabad, Bengaluru (Bangalore), Delhi
2 - 5 yrs
₹3L - ₹8L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Python
Agile/Scrum
Job Description

Artificial Intelligence (AI) Researchers and Developers

The successful candidate will be part of highly productive teams working on implementing core AI algorithms, cryptography libraries, AI-enabled products and intelligent 3D interfaces. Candidates will work on cutting-edge products and technologies in highly challenging domains and will need the highest level of commitment and interest in learning new technologies and domain-specific subject matter very quickly. Successful completion of projects may require travel and working in remote locations with customers for extended periods.

Education Qualification: Bachelor's, Master's or PhD degree in Computer Science, Mathematics, Electronics or Information Systems from a reputed university, and/or equivalent knowledge and skills.

Location : Hyderabad, Bengaluru, Delhi, Client Location (as needed)

Skillset and Expertise
• Strong software development experience using Python
• Strong background in mathematical, numerical and scientific computing using Python.
• Knowledge in Artificial Intelligence/Machine learning
• Experience working with SCRUM software development methodology
• Strong experience with implementing Web services, Web clients and JSON protocol is required
• Experience with Python Meta programming
• Strong analytical and problem-solving skills
• Design, develop and debug enterprise grade software products and systems
• Software systems testing methodology, including writing and execution of test plans, debugging, and testing scripts and tools
• Excellent written and verbal communication skills; proficiency in English. Verbal communication in Hindi and other local Indian languages
• Ability to effectively communicate product design, functionality and status to management, customers and other stakeholders
• Highest level of integrity and work ethic

Frameworks
1. Scikit-learn
2. Tensorflow
3. Keras
4. OpenCV
5. Django
6. CUDA
7. Apache Kafka

Mathematics
1. Advanced Calculus
2. Numerical Analysis
3. Complex Function Theory
4. Probability

Concepts (One or more of the below)
1. OpenGL based 3D programming
2. Cryptography
3. Artificial Intelligence (AI) Algorithms: a) statistical modelling, b) DNN, c) RNN, d) LSTM, e) GAN, f) CNN
NSEIT
Posted by Akansha Singh
Bengaluru (Bangalore)
3 - 5 yrs
₹6L - ₹20L / yr
ELK
ELK Stack
Elastic Search
Logstash
Kibana
+2 more
• Introduction: The ELK (Elasticsearch, Logstash, and Kibana) stack is an end-to-end stack that delivers actionable insights in real time from almost any type of structured or unstructured data source. The ELK stack is the most popular log management platform.

• Responsibilities:
o Should be able to work with APIs, shards, etc. in Elasticsearch.
o Write parsers in Logstash
o Create Dashboards in Kibana


• Mandatory Experience.
o Must have a very good understanding of log analytics
o Hands-on experience in Elasticsearch, Logstash & Kibana should be at an expert level
o Elasticsearch: Should be able to write Kibana API
o Logstash: Should be able to write parsers.
o Kibana: Create different visualizations and dashboards according to client needs
o Scripts: Should be able to write scripts in Linux.
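Logstash parsing itself is written as pipeline configuration (typically grok patterns), but the idea can be sketched in Python with a regex playing the role of a grok pattern. The log format below is hypothetical.

```python
import re

# Regex standing in for a grok pattern such as:
# %{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}
LOG_LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>[A-Z]+)\s+(?P<msg>.*)"
)

def parse_line(line):
    """Turn one raw log line into a structured event, or None on no match."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None

event = parse_line("2024-05-01T12:00:00 ERROR connection refused")
```

In a real pipeline, the structured event would then be indexed into Elasticsearch and visualized in Kibana.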
Simplilearn Solutions
Posted by Aniket Manhar Nanjee
Bengaluru (Bangalore)
2 - 5 yrs
₹6L - ₹10L / yr
Data Science
R Programming
Python
Scala
Tableau
+1 more
Simplilearn.com is the world's largest professional certifications company and an Onalytica Top 20 influential brand. With a library of 400+ courses, we've helped 500,000+ professionals advance their careers, delivering $5 billion in pay raises. Simplilearn has over 6500 employees worldwide and our customers include Fortune 1000 companies, top universities, leading agencies and hundreds of thousands of working professionals. We are growing over 200% year on year and having fun doing it.

Description
We are looking for candidates with strong technical skills and a proven track record in building predictive solutions for enterprises. This is a very challenging role and provides an opportunity to work on developing insights-based Ed-Tech software products used by a large set of customers across the globe. It provides an exciting opportunity to work on various advanced analytics & data science problem statements using cutting-edge modern technologies, collaborating with product, marketing & sales teams.

Responsibilities
• Work on enterprise-level advanced reporting requirements & data analysis.
• Solve various data science problems: customer engagement, dynamic pricing, lead scoring, NPS improvement, optimization, chatbots, etc.
• Work on data engineering problems utilizing our tech stack - S3 Datalake, Spark, Redshift, Presto, Druid, Airflow, etc.
• Collect relevant data from source systems / use crawling and parsing infrastructure to put together data sets.
• Craft, conduct and analyse A/B experiments to evaluate machine learning models/algorithms.
• Communicate findings and take algorithms/models to production with ownership.

Desired Skills
• BE/BTech/MSc/MS in Computer Science or related technical field.
• 2-5 years of experience in the advanced analytics discipline with solid data engineering & visualization skills.
• Strong SQL skills and BI skills using Tableau & ability to perform various complex analytics on data.
• Ability to propose hypotheses and design experiments in the context of specific problems using statistics & ML algorithms.
• Good overlap with modern data processing frameworks such as AWS Lambda, Spark using Scala or Python.
• Dedication and diligence in understanding the application domain, collecting/cleaning data and conducting various A/B experiments.
• Bachelor's degree in Statistics or prior experience with Ed-Tech is a plus.
Rely
Posted by Hizam Ismail
Bengaluru (Bangalore)
2 - 10 yrs
₹8L - ₹35L / yr
Python
Hadoop
Spark
Amazon Web Services (AWS)
Big Data
+2 more

Intro

Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.


What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.

• Optimize and automate ingestion processes for a variety of data sources such as click stream, transactional and many other sources.
• Create and maintain optimal data pipeline architecture and ETL processes
• Assemble large, complex data sets that meet functional / non-functional business requirements.
• Develop data pipelines and infrastructure to support real-time decisions
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
• Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
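As a toy illustration of the extract-transform-load flow described above, here is a sketch using Python's built-in sqlite3 in place of a real warehouse. The table and column names are made up; a production pipeline would target Redshift or similar via the same SQL pattern.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount TEXT)")

# Extract: pretend these rows arrived from a clickstream/transactional source.
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [(1, "10.50"), (1, "4.50"), (2, "7.00")],
)

# Transform + load: cast, aggregate, and materialize into an analytics table.
conn.execute(
    """CREATE TABLE user_spend AS
       SELECT user_id, SUM(CAST(amount AS REAL)) AS total
       FROM raw_events GROUP BY user_id"""
)

totals = dict(conn.execute("SELECT user_id, total FROM user_spend ORDER BY user_id"))
```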


What will you need
• 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
• Experience dealing with large-scale data

  • Proficiency in writing and debugging complex SQL
  • Experience working with AWS big data tools
  • Ability to lead projects and implement best data practices and technology

Data Pipelining

  • Strong command of building & optimizing data pipelines, architectures and data sets
  • Strong command of relational SQL & NoSQL databases, including Postgres
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

Big Data: Strong experience in big data tools & applications

  • Tools: Hadoop, Spark, HDFS, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark Streaming, Flink, etc.
  • Message queuing: RabbitMQ, Kafka, etc.

Software Development & Debugging

  • Strong experience in object-oriented programming / object function scripting languages: Python, Java, C++, Scala, etc.
  • Strong command of data structures & algorithms

What would be a bonus

  • Prior experience working in a fast-growth startup
  • Prior experience in payments, fraud, lending, or advertising companies dealing with large-scale data
Intelliswift Software
Posted by Pratish Mishra
Chennai
4 - 8 yrs
₹8L - ₹17L / yr
Big Data
Spark
Scala
SQL
Greetings from Intelliswift!

Intelliswift Software Inc. is a premier software solutions and services company headquartered in Silicon Valley, with offices across the United States, India, and Singapore. The company has a proven track record of delivering results through its global delivery centers and flexible engagement models for over 450 brands ranging from Fortune 100 to growing companies. Intelliswift provides a variety of services including Enterprise Applications, Mobility, Big Data / BI, Staffing Services, and Cloud Solutions. Growing at an outstanding rate, it has been recognized as the second-largest private IT company in the East Bay.

Domains: IT, Retail, Pharma, Healthcare, BFSI, and Internet & E-commerce
Website: https://www.intelliswift.com/

Experience: 4-8 Years
Job Location: Chennai

Job Description:
Skills: Spark, Scala, Big Data, Hive
· Strong working experience in Spark, Scala, big data, HBase and Hive.
· Should have good working experience in SQL and Spark SQL.
· Good to have knowledge or experience in Teradata.
· Familiar with general engineering tools: Git, Jenkins, sbt, Maven.
Crisp Analytics
Posted by Sneha Pandey
Noida, NCR (Delhi | Gurgaon | Noida)
3 - 7 yrs
₹5L - ₹12L / yr
Spark
Apache Kafka
Hadoop
Pig
HDFS
Together we will create wonderful solutions which deliver value for us and our customers.