Hadoop Jobs in Bangalore (Bengaluru)


Apply to 50+ Hadoop Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Hadoop Job opportunities across top companies like Google, Amazon & Adobe.

Wissen Technology

4 recruiters
Posted by Sukanya Mohan
Bengaluru (Bangalore)
3 - 5 yrs
Best in industry
Python
Apache Spark
Hadoop
SQL

Responsibilities:


• Build customer-facing solutions for the Data Observability product to monitor data pipelines.

• Work on POCs to build new data pipeline monitoring capabilities.

• Build next-generation scalable, reliable, flexible, high-performance data pipeline capabilities for ingestion of data from multiple sources containing complex datasets.

• Continuously improve the services you own, making them more performant and utilising resources in the most optimised way.

• Collaborate closely with the engineering, data science and product teams to propose an optimal solution for a given problem statement.

• Work closely with the DevOps team on performance monitoring and MLOps.


Required Skills:

• 3+ years of experience with data-related technologies.

• Good understanding of distributed computing principles.

• Experience in Apache Spark.

• Hands-on programming with Python.

• Knowledge of Hadoop v2, MapReduce, HDFS.

• Experience with building stream-processing systems using technologies such as Apache Storm, Spark Streaming or Flink.

• Experience with messaging systems such as Kafka or RabbitMQ.

• Good understanding of Big Data querying tools, such as Hive.

• Experience with integration of data from multiple data sources.

• Good understanding of SQL queries, joins, stored procedures, relational schemas.

• Experience with NoSQL databases, such as HBase, Cassandra/Scylla, MongoDB.

• Knowledge of ETL techniques and frameworks.

• Performance tuning of Spark jobs.

• A general understanding of Data Quality is a plus.

• Experience with Databricks, Snowflake, BigQuery or similar lakehouse platforms would be a big plus.

• Some knowledge of DevOps is nice to have.

Molecular Connections

4 recruiters
Posted by Molecular Connections
Bengaluru (Bangalore)
4 - 9 yrs
₹8L - ₹12L / yr
Data Warehouse (DWH)
Informatica
ETL
Spark
Hadoop
+5 more

Job Description: Data Engineer


Experience: Over 4 years


Responsibilities:

-       Design, develop, and maintain scalable data pipelines for efficient data extraction, transformation, and loading (ETL) processes.

-       Architect and implement data storage solutions, including data warehouses, data lakes, and data marts, aligned with business needs.

-       Implement robust data quality checks and data cleansing techniques to ensure data accuracy and consistency.

-       Optimize data pipelines for performance, scalability, and cost-effectiveness.

-       Collaborate with data analysts and data scientists to understand data requirements and translate them into technical solutions.

-       Develop and maintain data security measures to ensure data privacy and regulatory compliance.

-       Automate data processing tasks using scripting languages (Python, Bash) and big data frameworks (Spark, Hadoop).

-       Monitor data pipelines and infrastructure for performance and troubleshoot any issues.

-       Stay up to date with the latest trends and technologies in data engineering, including cloud platforms (AWS, Azure, GCP).

-        Document data pipelines, processes, and data models for maintainability and knowledge sharing.

-       Contribute to the overall data governance strategy and best practices.

 

Qualifications:

-       Strong understanding of data architectures, data modelling principles, and ETL processes.

-       Proficiency in SQL (e.g., MySQL, PostgreSQL) and experience with big data querying languages (e.g., Hive, Spark SQL).

-       Experience with scripting languages (Python, Bash) for data manipulation and automation.

-       Experience with distributed data processing frameworks (Spark, Hadoop) (preferred).

-       Familiarity with cloud platforms (AWS, Azure, GCP) for data storage and processing (a plus).

-       Experience with data quality tools and techniques.

-       Excellent problem-solving, analytical, and critical thinking skills.

-       Strong communication, collaboration, and teamwork abilities.

Sadup Softech

1 recruiter
Posted by madhuri g
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Must-have skills


3 to 6 years

Data Science

SQL, Excel, BigQuery - mandatory, 3+ years

Python/ML, Hadoop, Spark - 2+ years


Requirements


• 3+ years prior experience as a data analyst

• Detail-oriented, with structured thinking and an analytical mindset.

• Proven analytic skills, including data analysis and data validation.

• Technical writing experience in relevant areas, including queries, reports, and presentations.

• Strong SQL and Excel skills with the ability to learn other analytic tools

• Good communication skills (being precise and clear)

• Good to have: prior knowledge of Python and ML algorithms.

Sigmoid

1 video
4 recruiters
Posted by Jayakumar AS
Bengaluru (Bangalore), Hyderabad
2 - 5 yrs
₹12L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more

Sigmoid works with a variety of clients, from start-ups to Fortune 500 companies. We are looking for a detail-oriented self-starter to assist our engineering and analytics teams in various roles as a Software Development Engineer.


This position will be part of a growing team working towards building world-class, large-scale Big Data architectures. This individual should have a sound understanding of programming principles, experience in programming in Java, Python or similar languages, and can expect to spend a majority of their time coding.


Location - Bengaluru and Hyderabad


Responsibilities:

● Good development practices

○ Hands-on coder with good experience in programming languages like Java or Python.

○ Hands-on experience with the Big Data stack: PySpark, HBase, Hadoop, MapReduce and Elasticsearch.

○ Good understanding of programming principles and development practices like check-in policy, unit testing and code deployment.

○ Self-starter, able to grasp new concepts and technology and translate them into large-scale engineering developments.

○ Excellent experience in application development and support, integration development and data management.

● Align Sigmoid with key client initiatives

○ Interface daily with customers across leading Fortune 500 companies to understand strategic requirements


● Stay up-to-date on the latest technology to ensure the greatest ROI for customers & Sigmoid

○ Hands-on coder with a good understanding of enterprise-level code

○ Design and implement APIs, abstractions and integration patterns to solve challenging distributed computing problems

○ Experience in defining technical requirements, data extraction, data transformation, automating jobs, productionizing jobs, and exploring new big data technologies within a Parallel Processing environment


● Culture

○ Must be a strategic thinker with the ability to think unconventionally / out-of-the-box.

○ Analytical and data-driven orientation.

○ Raw intellect, talent and energy are critical.


○ Entrepreneurial and agile: understands the demands of a private, high-growth company.

○ Ability to be both a leader and a hands-on "doer".


Qualifications:

- A track record of relevant work experience and a degree in Computer Science or a related technical discipline is required.

- Experience with functional and object-oriented programming; Java is a must.

- Hands-on knowledge of MapReduce, Hadoop, PySpark, HBase and Elasticsearch.

- Effective communication skills (both written and verbal)

- Ability to collaborate with a diverse set of engineers, data scientists and product managers

- Comfort in a fast-paced start-up environment


Preferred Qualification:

- Technical knowledge of MapReduce, Hadoop & the GCS stack is a plus.

- Experience in agile methodology

- Experience with database modeling and development, data mining and warehousing.

- Experience in architecture and delivery of enterprise-scale applications, and capable of developing frameworks, design patterns, etc. Should be able to understand and tackle technical challenges, propose comprehensive solutions and guide junior staff

- Experience working with large, complex data sets from a variety of sources

xyz

Agency job
via HR BIZ HUB by Pooja shankla
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹15L / yr
Java
Big Data
Apache Hive
Hadoop
Spark

Job Title: Big Data Developer

Job Description

Bachelor's degree in Engineering or Computer Science or equivalent, OR Master's in Computer Applications or equivalent.

Solid software development experience, including leading teams of engineers and scrum teams.

4+ years of hands-on experience working with MapReduce, Hive, Spark (core, SQL and PySpark).

Solid data warehousing concepts.

Knowledge of the financial reporting ecosystem will be a plus.

4+ years of experience within Data Engineering/Data Warehousing using Big Data technologies will be an add-on.

Expert in the distributed ecosystem.

Hands-on experience with programming using Core Java or Python/Scala.

Expert in Hadoop and Spark architecture and their working principles.

Hands-on experience writing and understanding complex SQL (Hive/PySpark DataFrames) and optimizing joins while processing huge amounts of data.

Experience in UNIX shell scripting.

Roles & Responsibilities

Ability to design and develop optimized data pipelines for batch and real-time data processing

Should have experience in analysis, design, development, testing, and implementation of system applications

Demonstrated ability to develop and document technical and functional specifications and analyze software and system processing flows.

Excellent technical and analytical aptitude

Good communication skills.

Excellent project management skills.

Results-driven approach.

Mandatory Skills: Big Data, PySpark, Hive

Molecular Connections

4 recruiters
Posted by Molecular Connections
Bengaluru (Bangalore)
8 - 10 yrs
₹15L - ₹20L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more
1. Big Data developer with 8+ years of professional IT experience, with expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis and data integration, and in implementing enterprise-level systems spanning Big Data.
2. A skilled developer with strong problem-solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
3. Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala and Oozie.
4. Hands-on experience in creating real-time data streaming solutions using Apache Spark Core, Spark SQL & DataFrames, Kafka, Spark Streaming and Apache Storm.
5. Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include NameNode, DataNode, Resource Manager, Node Manager and Job History Server.
6. Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
7. Well versed in installation, configuration and management of Big Data and the underlying infrastructure of a Hadoop cluster.
8. Hands-on experience in coding MapReduce/YARN programs using Java, Scala and Python for analyzing Big Data.
9. Exposure to the Cloudera development environment and management using Cloudera Manager.
10. Extensively worked on Spark using Scala on clusters for computational analytics; installed it on top of Hadoop and performed advanced analytical applications by making use of Spark with Hive and SQL/Oracle.
11. Implemented Spark using Python, utilizing DataFrames and the Spark SQL API for faster processing of data; handled importing data from different data sources into HDFS using Sqoop, performing transformations using Hive and MapReduce, and then loading data into HDFS.
12. Used the Spark DataFrames API over the Cloudera platform to perform analytics on Hive data.
13. Hands-on experience with MLlib from Spark, used for predictive intelligence, customer segmentation and smooth maintenance in Spark Streaming.
14. Experience in using Flume to load log files into HDFS, and Oozie for workflow design and scheduling.
15. Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
16. Worked on creating data pipelines for different events of ingestion and aggregation, loading consumer response data into Hive external tables in HDFS locations to serve as feeds for Tableau dashboards.
17. Hands-on experience in using Sqoop to import data into HDFS from RDBMS and vice versa.
18. In-depth understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
19. Hands-on expertise in real-time analytics with Apache Spark.
20. Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
21. Extensive experience working with different ETL tool environments like SSIS and Informatica, and reporting tool environments like SQL Server Reporting Services (SSRS).
22. Experience in the Microsoft cloud and setting up clusters in Amazon EC2 & S3, including automation of setting up and extending the clusters in the AWS Amazon cloud.
23. Extensively worked on Spark using Python on clusters for computational analytics; installed it on top of Hadoop and performed advanced analytical applications by making use of Spark with Hive and SQL.
24. Strong experience and knowledge of real-time data analytics using Spark Streaming, Kafka and Flume.
25. Knowledge of installation, configuration, support and management of Hadoop clusters using Apache and Cloudera (CDH3, CDH4) distributions and on Amazon Web Services (AWS).
26. Experienced in writing ad-hoc queries using Cloudera Impala; also used Impala analytical functions.
27. Experience in creating DataFrames using PySpark and performing operations on the DataFrames using Python.
28. In-depth understanding/knowledge of Hadoop architecture and various components such as HDFS, the MapReduce programming paradigm, High Availability and the YARN architecture.
29. Established multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
30. Generated various kinds of knowledge reports using Power BI based on business specifications.
31. Developed interactive Tableau dashboards to provide a clear understanding of industry-specific KPIs, using quick filters and parameters to handle them more efficiently.
32. Well experienced in projects using JIRA, testing, Maven and Jenkins build tools.
33. Experienced in designing, building, deploying and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance and auto-scaling.
34. Good experience with use-case development and with software methodologies like Agile and Waterfall.
35. Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
36. Good working experience importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, Mainframes, Oracle and Netezza into HDFS, and performing transformations on it using Hive, Pig and Spark.
37. Extensive experience in text analytics, developing different statistical machine learning solutions to various business problems and generating data visualizations using Python and R.
38. Proficient in NoSQL databases including HBase, Cassandra and MongoDB, and their integration with Hadoop clusters.
39. Hands-on experience in Hadoop Big Data technology, working on MapReduce, Pig and Hive as analysis tools, and Sqoop and Flume as data import/export tools.
Accolite Digital
Posted by Nitesh Parab
Bengaluru (Bangalore), Hyderabad, Gurugram, Delhi, Noida, Ghaziabad, Faridabad
4 - 8 yrs
₹5L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SSIS
SQL Server Integration Services (SSIS)
+10 more

Job Title: Data Engineer

Job Summary: As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure and tools necessary for data collection, storage, processing, and analysis. You will work closely with data scientists and analysts to ensure that data is available, accessible, and in a format that can be easily consumed for business insights.

Responsibilities:

  • Design, build, and maintain data pipelines to collect, store, and process data from various sources.
  • Create and manage data warehousing and data lake solutions.
  • Develop and maintain data processing and data integration tools.
  • Collaborate with data scientists and analysts to design and implement data models and algorithms for data analysis.
  • Optimize and scale existing data infrastructure to ensure it meets the needs of the business.
  • Ensure data quality and integrity across all data sources.
  • Develop and implement best practices for data governance, security, and privacy.
  • Monitor data pipelines for performance and errors, and troubleshoot issues as needed.
  • Stay up-to-date with emerging data technologies and best practices.

Requirements:

Bachelor's degree in Computer Science, Information Systems, or a related field.

Experience with ETL tools like Matillion, SSIS, and Informatica.

Experience with SQL and relational databases such as SQL Server, MySQL, PostgreSQL, or Oracle.

Experience in writing complex SQL queries

Strong programming skills in languages such as Python, Java, or Scala.

Experience with data modeling, data warehousing, and data integration.

Strong problem-solving skills and ability to work independently.

Excellent communication and collaboration skills.

Familiarity with big data technologies such as Hadoop, Spark, or Kafka.

Familiarity with data warehouse/Data lake technologies like Snowflake or Databricks

Familiarity with cloud computing platforms such as AWS, Azure, or GCP.

Familiarity with Reporting tools

Teamwork / growth contribution

  • Helping the team conduct interviews and identify the right candidates
  • Adhering to timelines
  • Timely status communication and upfront communication of any risks
  • Teach, train, and share knowledge with peers.
  • Good Communication skills
  • Proven abilities to take initiative and be innovative
  • Analytical mind with a problem-solving aptitude

Good to have:

Master's degree in Computer Science, Information Systems, or a related field.

Experience with NoSQL databases such as MongoDB or Cassandra.

Familiarity with data visualization and business intelligence tools such as Tableau or Power BI.

Knowledge of machine learning and statistical modeling techniques.

If you are passionate about data and want to work with a dynamic team of data scientists and analysts, we encourage you to apply for this position.

iLink Systems

1 video
1 recruiter
Posted by Ganesh Sooriyamoorthu
Chennai, Pune, Noida, Bengaluru (Bangalore)
5 - 15 yrs
₹10L - ₹15L / yr
Apache Kafka
Big Data
Java
Spark
Hadoop
+1 more
  • KSQL
  • Data Engineering spectrum (Java/Spark)
  • Spark Scala / Kafka Streaming
  • Confluent Kafka components
  • Basic understanding of Hadoop


Telstra

1 video
1 recruiter
Posted by Mahesh Balappa
Bengaluru (Bangalore), Hyderabad, Pune
3 - 7 yrs
Best in industry
Spark
Hadoop
NOSQL Databases
Apache Kafka

About Telstra

 

Telstra is Australia’s leading telecommunications and technology company, with operations in more than 20 countries, including in India, where we’re building a new Innovation and Capability Centre (ICC) in Bangalore.

 

We’re growing, fast, and for you that means many exciting opportunities to develop your career at Telstra. Join us on this exciting journey, and together, we’ll reimagine the future.

 

Why Telstra?

 

  • We're an iconic Australian company with a rich heritage that's been built over 100 years. Telstra is Australia's leading Telecommunications and Technology Company. We've been operating internationally for more than 70 years.
  • International presence spanning over 20 countries.
  • We are one of the 20 largest telecommunications providers globally
  • At Telstra, the work is complex and stimulating, but with that comes a great sense of achievement. We are shaping tomorrow's modes of communication with our innovation-driven teams.

 

Telstra offers an opportunity to make a difference to lives of millions of people by providing the choice of flexibility in work and a rewarding career that you will be proud of!

 

About the team

Being part of Networks & IT means you'll be part of a team that focuses on extending our network superiority to enable the continued execution of our digital strategy.

With us, you'll be working with world-leading technology and change the way we do IT to ensure business needs drive priorities, accelerating our digitisation programme.

 

Focus of the role

Any new engineer who joins the data chapter will mostly work on developing reusable data processing and storage frameworks that can be used across the data platform.

 

About you

To be successful in the role, you'll bring skills and experience in:-

 

Essential 

  • Hands-on experience in Spark Core, Spark SQL, SQL/Hive/Impala, Git/SVN/any other VCS, and data warehousing
  • Skilled in the Hadoop ecosystem (HDP/Cloudera/MapR/EMR, etc.)
  • Azure Data Factory/Airflow/Control-M/Luigi
  • PL/SQL
  • Exposure to NoSQL (HBase/Cassandra/GraphDB (Neo4j)/MongoDB)
  • File formats (Parquet/ORC/Avro/Delta/Hudi, etc.)
  • Kafka/Kinesis/Event Hubs

 

Highly Desirable

Experience and knowledge of the following:

  • Spark Streaming
  • Cloud exposure (Azure/AWS/GCP)
  • Azure data offerings - ADF, ADLS2, Azure Databricks, Azure Synapse, Eventhubs, CosmosDB etc.
  • Presto/Athena
  • Azure DevOps
  • Jenkins/ Bamboo/Any similar build tools
  • Power BI
  • Prior experience in building, or working in a team that builds, reusable frameworks
  • Data modelling
  • Data architecture and design principles (Delta/Kappa/Lambda architecture)
  • Exposure to CI/CD
  • Code quality - static and dynamic code scans
  • Agile SDLC

 

If you've got a passion to innovate, want to succeed as part of a great team, and are looking for the next step in your career, we'd welcome you to apply!

___________________________

 

We’re committed to building a diverse and inclusive workforce in all its forms. We encourage applicants from diverse gender, cultural and linguistic backgrounds and applicants who may be living with a disability. We also offer flexibility in all our roles, to ensure everyone can participate.

To learn more about how we support our people, including accessibility adjustments we can provide you through the recruitment process, visit tel.st/thrive.

Play Games24x7

2 recruiters
Agency job
via Zyoin Web Private Limited by Vishali Vashnavi
Bengaluru (Bangalore)
8 - 12 yrs
₹40L - ₹50L / yr
Java
J2EE
PostgreSQL
MySQL
MongoDB
+19 more
Requirements:
• B.E./B.Tech. in Computer Science or MCA from a reputed university.
• 3.5+ years of experience in software development, with emphasis on Java/J2EE server-side programming.
• Hands-on experience in core Java, multithreading, RMI, socket programming, JDBC, NIO, web services and design patterns.
• Knowledge of distributed systems, distributed caching, messaging frameworks, ESB, etc.
• Experience with the Linux operating system and PostgreSQL/MySQL/MongoDB/Cassandra databases.
• Additionally, knowledge of HBase, Hadoop and Hive is desirable.
• Familiarity with message queue systems and AMQP and Kafka is desirable.
• Experience as a participant in agile methodologies.
• Excellent written and verbal communication skills and presentation skills.
• This is not a full-stack requirement; we are looking for a purely backend expert.
Tata Digital Pvt Ltd

Agency job
via Seven N Half by Priya Singh
Bengaluru (Bangalore)
8 - 13 yrs
₹10L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

 

Data Engineer

 

- Highly skilled and proficient in Azure data engineering tech stacks (ADF, Databricks).

- Should be well experienced in the design and development of Big Data integration platforms (Kafka, Hadoop).

- Highly skilled and experienced in building medium to complex data integration pipelines for data at rest and streaming data using Spark.

- Strong knowledge of R/Python.

- Advanced proficiency in solution design and implementation through Azure Data Lake, SQL and NoSQL databases.

- Strong in data warehousing concepts.

- Expertise in SQL, SQL tuning, data management (data security), schema design, Python and ETL processes.

- Highly motivated self-starter and quick learner.

- Must have good knowledge of data modelling and an understanding of data analytics.

- Exposure to statistical procedures, experiments and machine learning techniques is an added advantage.

- Experience leading a small team of 6-7 data engineers.

- Excellent written and verbal communication skills.

 

Simpl

3 recruiters
Posted by Elish Ismael
Bengaluru (Bangalore)
3 - 10 yrs
₹10L - ₹50L / yr
Java
Apache Spark
Big Data
Hadoop
Apache Hive
About Simpl
The thrill of working at a start-up that is starting to scale massively is something else. Simpl (FinTech startup of the year - 2020) was formed in 2015 by Nitya Sharma, an investment banker from Wall Street, and Chaitra Chidanand, a tech executive from the Valley, when they teamed up with a very clear mission - to make money simple so that people can live well and do amazing things. Simpl is the payment platform for the mobile-first world, and we’re backed by some of the best names in fintech globally (folks who have invested in Visa, Square and TransferWise), and Joe Saunders, Ex-Chairman and CEO of Visa, is a board member.

Everyone at Simpl is an internal entrepreneur who is given a lot of bandwidth and resources to create the next breakthrough towards the long-term vision of “making money Simpl”. Our first product is a payment platform that lets people buy instantly, anywhere online, and pay later. In the background, Simpl uses big data for credit underwriting, risk and fraud modelling, all without any paperwork, and enables banks and non-bank financial companies to access a whole new consumer market.

In place of traditional forms of identification and authentication, Simpl integrates deeply into merchant apps via SDKs and APIs. This allows for more sophisticated forms of authentication that take full advantage of smartphone data and processing power.

Skillset:
• Workflow managers/schedulers like Airflow, Luigi, Oozie
• Good handle on Python
• ETL experience
• Batch processing frameworks like Spark, MR/Pig
• File formats: Parquet, JSON, XML, Thrift, Avro, Protobuf
• Rule engines (Drools - business rule management system)
• Distributed file systems like HDFS, NFS, AWS S3 and equivalent
• Built/configured dashboards

Nice to have:
• Data platform experience, e.g. building data lakes, working with near-realtime applications/frameworks like Storm, Flink, Spark
• AWS
• File encoding types: Thrift, Avro, Protobuf, Parquet, JSON, XML
• Hive, HBase
xpressbees
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest growing companies of its sector. While we started off rather humbly in the space of ecommerce B2C logistics, the last 5 years have seen us steadily progress towards expanding our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross-border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve as the most trusted logistics partner of India. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last-mile management system. While on this aggressive growth path, we seek to become the one-stop-shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as service providers of choice and leveraging the power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high-quality and agile decision-making in our supply chain and logistics workflows. You will define the way we collect and operationalize data (structured/unstructured), and build production pipelines for our machine learning models and (RT, NRT, batch) reporting & dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making, insights, anomaly detection and prediction.

What You Will Do
• Design and develop the data platform and data pipelines for reporting, dashboarding and machine learning models. These pipelines would productionize machine learning models and integrate with agent review tools.
• Meet the data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support business needs. Come up with logical and physical database designs across platforms (MPP, MR, Hive/Pig) which are optimal physical designs for different use cases (structured/semi-structured). Envision & implement the optimal data modelling, physical design and performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines, and envision and build their successors.

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or a related field with 6 to 9 years of technology experience.
• Knowledge of relational and NoSQL data stores, stream processing and micro-batching to make technology & design choices.
• Strong experience in system integration, application development, ETL and data platform projects. Talented across technologies used in the enterprise space.
• Software development experience using:
• Expertise in relational and dimensional modelling
• Exposure across all the SDLC process
• Experience in cloud architecture (AWS)
• Proven track record in keeping existing technical skills and developing new ones, so that you can make strong contributions to deep architecture discussions around systems and applications in the cloud (AWS).
• Characteristics of a forward thinker and self-starter who flourishes with new challenges and adapts quickly to learning new knowledge.
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• Knack for helping an organization to understand application architectures and integration approaches, to architect advanced cloud-based solutions, and to help launch the build-out of those systems.
• Passion for educating, training, designing and building end-to-end systems.
Ushur Technologies Pvt Ltd

1 video
2 recruiters
Posted by Priyanka N
Bengaluru (Bangalore)
6 - 12 yrs
Best in industry
MongoDB
Spark
Hadoop
Big Data
Data engineering
+5 more
What You'll Do:
● Our Infrastructure team is looking for an excellent Big Data Engineer to join a core group that designs the industry’s leading Micro-Engagement Platform. This role involves design and implementation of big data architectures and frameworks for the industry’s leading intelligent workflow automation platform. As a specialist in the Ushur Engineering team, your responsibilities will be to:
● Use your in-depth understanding to architect and optimize databases and data ingestion pipelines
● Develop HA strategies, including replica sets and sharding, for highly available clusters
● Recommend and implement solutions to improve performance, resource consumption, and resiliency
● On an ongoing basis, identify bottlenecks in databases in development and production environments and propose solutions
● Help the DevOps team with your deep knowledge in the areas of database performance, scaling, tuning, migration & version upgrades
● Provide verifiable technical solutions to support operations at scale and with high availability
● Recommend appropriate data processing toolsets and big data ecosystems to adopt
● Design and scale databases and pipelines across multiple physical locations on the cloud
● Conduct root-cause analysis of data issues
● Be self-driven; constantly research and suggest the latest technologies

The experience you need:
● Engineering degree in Computer Science or a related field
● 10+ years of experience working with databases, most of which should have been around NoSQL technologies
● Expertise in implementing and maintaining distributed Big Data pipelines and ETL processes
● Solid experience in one of the following cloud-native data platforms (AWS Redshift / Google BigQuery / Snowflake)
● Exposure to real-time processing techniques like Apache Kafka and CDC tools (Debezium, Qlik Replicate)
● Strong experience with the Linux operating system
● Solid knowledge of database concepts and of MongoDB, SQL, and NoSQL internals
● Experience with backup and recovery for production and non-production environments
● Experience in security principles and their implementation
● Exceptionally passionate about always keeping the product quality bar at an extremely high level

Nice-to-haves
● Proficient with one or more of Python/Node.js/Java/similar languages

Why you want to Work with Us:
● Great Company Culture. We pride ourselves on having a values-based culture that is welcoming, intentional, and respectful. Our internal NPS of over 65 speaks for itself - employees recommend Ushur as a great place to work!
● Bring your whole self to work. We are focused on building a diverse culture, with innovative ideas where you and your ideas are valued. We are a start-up and know that every person has a significant impact!
● Rest and Relaxation. 13 paid leaves, wellness Friday offs (aka a day off to care for yourself - every last Friday of the month), 12 paid sick leaves, and more!
● Health Benefits. Preventive health checkups, medical insurance covering the dependents, wellness sessions, and health talks at the office.
● Keep learning. One of our core values is Growth Mindset - we believe in lifelong learning. Certification courses are reimbursed. Ushur Community offers wide resources for our employees to learn and grow.
● Flexible Work. In-office or hybrid working model, depending on position and location. We seek to create an environment for all our employees where they can thrive in both their profession and personal life.
Cloudera

2 recruiters
Posted by Sushmitha Rengarajan
Bengaluru (Bangalore)
3 - 20 yrs
₹1L - ₹44L / yr
ETL
Informatica
Data Warehouse (DWH)
Relational Database (RDBMS)
Data Structures
+7 more

 

The Cloudera Data Warehouse Hive team is looking for a passionate senior developer to join our growing engineering team. This group is targeting the biggest enterprises wanting to utilize Cloudera’s services in a private and public cloud environment. Our product is built on open source technologies like Hive, Impala, Hadoop, Kudu, Spark and so many more, providing unlimited learning opportunities.

A Day in the Life

Over the past 10+ years, Cloudera has experienced tremendous growth, making us the leading contributor to Big Data platforms and ecosystems and a leading provider for enterprise solutions based on Apache Hadoop. You will work with some of the best engineers in the industry who are tackling challenges that will continue to shape the Big Data revolution. We foster an engaging, supportive, and productive work environment where you can do your best work. The team culture values engineering excellence, technical depth, grassroots innovation, teamwork, and collaboration.

You will manage product development for our CDP components and develop engineering tools and scalable services to enable efficient development, testing, and release operations. You will be immersed in many exciting, cutting-edge technologies and projects, including collaboration with developers, testers, product, field engineers, and our external partners, both software and hardware vendors.

Opportunity

Cloudera is a leader in the fast-growing big data platforms market. This is a rare chance to make a name for yourself in the industry and in the Open Source world. The candidate will be responsible for Apache Hive and CDW projects. We are looking for a candidate who would like to work on these projects upstream and downstream. If you are curious about the project and code quality, you can check the project and the code at the following link. You can start the development before you join. This is one of the beauties of the OSS world.

Apache Hive

 

Responsibilities:

• Build robust and scalable data infrastructure software.

• Design and create services and system architecture for your projects.

• Improve code quality through writing unit tests, automation, and code reviews.

• Write Java code and/or build several services in the Cloudera Data Warehouse.

• Work with a team of engineers who review each other's code/designs and hold each other to an extremely high bar for the quality of code/designs.

• Understand the basics of Kubernetes.

• Build out the production and test infrastructure.

• Develop automation frameworks to reproduce issues and prevent regressions.

• Work closely with other developers providing services to our system.

• Help to analyze and to understand how customers use the product and improve it where necessary.

Qualifications:

• Deep familiarity with the Java programming language.

• Hands-on experience with distributed systems.

• Knowledge of database concepts and RDBMS internals.

• Knowledge of the Hadoop stack, containers, or Kubernetes is a strong plus.

• Experience working in a distributed team.

• 3+ years of experience in software development.

 

Cloudera

2 recruiters
Posted by Sushmitha Rengarajan
Remote, Bengaluru (Bangalore)
5 - 20 yrs
₹1L - ₹44L / yr
Java
Kubernetes
Docker
Hadoop
Apache Kafka
+3 more

 

Senior Software Engineer - 221254.

 

We (the Software Engineering team) are looking for a motivated, experienced person with a data-driven approach to join our Distribution Team in Budapest or Szeged to help design, execute and improve our test sets and infrastructure for producing high-quality Hadoop software.

 

A Day in the life

 

You will be part of a team that makes sure our releases are predictable and deliver high value to the customer. This team is responsible for automating and maintaining our test harness, and making test results reliable and repeatable.

 

You will…

• work on making our distributed software stack more resilient to high-scale endurance runs and customer simulations

• provide valuable fixes to our product development teams for the issues you’ve found during exhaustive test runs

• work with product and field teams to make sure our customer simulations match expectations and can provide valuable feedback to our customers

• work with amazing people - we are a fun & smart team, including many of the top luminaries in Hadoop and related open source communities. We frequently interact with the research community, collaborate with engineers at other top companies & host cutting-edge researchers for tech talks.

• do innovative work - Cloudera pushes the frontier of big data & distributed computing, as our track record shows. We work on high-profile open source projects, interacting daily with engineers at other exciting companies, speaking at meet-ups, etc.

• be a part of a great culture - transparent and open meritocracy. Everybody is always thinking of better ways to do things, and coming up with ideas that make a difference. We build our culture to be the best workplace in our careers.

 

You have...

• strong knowledge in at least 1 of the following languages: Java / Python / Scala / C++ / C#

• hands-on experience with at least 1 of the following configuration management tools: Ansible, Chef, Puppet, Salt

• confidence with Linux environments

• ability to identify critical weak spots in distributed software systems

• experience in developing automated test cases and test plans

• ability to deal with distributed systems

• solid interpersonal skills conducive to a distributed environment

• ability to work independently on multiple tasks

• self-driven & motivated, with a strong work ethic and a passion for problem solving

• the drive to innovate, automate, and break the code

The right person in this role has an opportunity to make a huge impact at Cloudera and add value to our future decisions. If this position has piqued your interest and you have what we described - we invite you to apply! An adventure in data awaits.

 

Subhanu Consulting

4 recruiters
Posted by Rashmi Anand
Bengaluru (Bangalore)
8 - 15 yrs
₹10L - ₹15L / yr
J2EE
Apache Kafka
API
JMS
Hadoop
+4 more
  • Produce clean code and automated tests
  • Align with enterprise architecture frameworks and standards
  • Be the role-model for all engineers in the team in terms of technical competency
  • Research, assess and adopt new technologies as required
  • Be a guide and mentor to the team members and help in ramping up the overall skill-base of the team.
  • Produce detailed estimates and optimized work plans for requirements and changes
  • Ensure that features are delivered on time and that they meet the business needs
  • Strive for quality of performance, usability, reliability, maintainability, and extensibility
  • Identify opportunities for process and tool improvements
  • Use analytical rigor to produce effective solutions to poorly defined problems
  • Follow the Build to Ship mantra in practice with full DevOps implementation
  • 10+ years of core software development and product creation experience in CPaaS.
  • Working knowledge of VoIP, communication APIs, J2EE, JMS/Kafka, web services, Hadoop, React, Node.js, Golang.
  • Working knowledge of various CPaaS channels - SMS, voice, WhatsApp, RCS, email.
  • Working knowledge of DevOps, automation testing, test-driven development, behavior-driven development, serverless or microservices
  • Experience with AWS/Azure deployments
  • Solid background in large-scale software development.
  • Full-stack understanding of web/mobile/API/database development concepts and patterns
  • Exposure to microservices, IaaS, PaaS, service mesh, SaaS and cloud-native application development.

  • Understanding of Agile Scrum and SDLC principles.
  • Containerization and orchestration: Docker, Kubernetes, OpenShift, Consul, etc.
  • Knowledge of NFV (OpenStack, vSphere, vCloud, etc.)
  • Experience in Data Analytics/AI/ML or the Marketing Tech domain is an added advantage
Tier 1 MNC

Agency job
Chennai, Pune, Bengaluru (Bangalore), Noida, Gurugram, Kochi (Cochin), Coimbatore, Hyderabad, Mumbai, Navi Mumbai
3 - 12 yrs
₹3L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more
Greetings,
We are hiring for a Tier 1 MNC for a software developer role requiring good knowledge of Spark, Hadoop and Scala.
Product based company

Agency job
via Zyvka Global Services by Ridhima Sharma
Bengaluru (Bangalore)
3 - 12 yrs
₹5L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

Responsibilities:

  • Should act as a technical resource for the Data Science team and be involved in creating and implementing current and future Analytics projects like data lake design, data warehouse design, etc.
  • Analysis and design of ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems etc.
  • Developing and maintaining data pipelines for real time analytics as well as batch analytics use cases.
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phase of model building
  • Collaborate with product development and dev ops teams in implementing the data collection and aggregation solutions
  • Ensure quality and consistency of the data in Data warehouse and follow best data governance practices
  • Analyse large amounts of information to discover trends and patterns
  • Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.

Requirements

  • Bachelor’s or Master’s in a highly numerate discipline such as Engineering, Science or Economics
  • 2-6 years of proven experience working as a Data Engineer, preferably in an ecommerce/web-based or consumer technologies company
  • Hands-on experience working with different big data tools like Hadoop, Spark, Flink, Kafka and so on
  • Good understanding of AWS ecosystem for big data analytics
  • Hands on experience in creating data pipelines either using tools or by independently writing scripts
  • Hands on experience in scripting languages like Python, Scala, Unix Shell scripting and so on
  • Strong problem solving skills with an emphasis on product development.
  • Experience using business intelligence tools e.g. Tableau, Power BI would be an added advantage (not mandatory)
Persistent System Ltd

Agency job
via Milestone Hr Consultancy by Haina khan
Bengaluru (Bangalore), Pune, Hyderabad
4 - 6 yrs
₹6L - ₹22L / yr
Apache HBase
Apache Hive
Apache Spark
Go Programming (Golang)
Ruby on Rails (ROR)
+5 more
Urgently required: Hadoop Developer at a reputed MNC.

Location: Bangalore/Pune/Hyderabad/Nagpur

4-5 years of overall experience in software development.
- Experience with Hadoop (Apache/Cloudera/Hortonworks) and/or other MapReduce platforms
- Experience with Hive, Pig, Sqoop, Flume and/or Mahout
- Experience with NoSQL: HBase, Cassandra, MongoDB
- Hands-on experience with Spark development; knowledge of Storm, Kafka, Scala
- Good knowledge of Java
- Good background in configuration management/ticketing systems like Maven/Ant/JIRA etc.
- Knowledge of any data integration and/or EDW tools is a plus
- Good to have knowledge of using Python/Perl/Shell



Please note - HBase, Hive and Spark are a must.

Indium Software

16 recruiters
Posted by Karunya P
Bengaluru (Bangalore), Hyderabad
1 - 9 yrs
₹1L - ₹15L / yr
SQL
Python
Hadoop
HiveQL
Spark
+1 more

Responsibilities:

 

* 3+ years of Data Engineering Experience - Design, develop, deliver and maintain data infrastructures.

* SQL specialist – strong knowledge of and seasoned experience with SQL queries

* Languages: Python

* Good communicator; shows initiative; works well with stakeholders.

* Experience working closely with Data Analysts, providing the data they need and guiding them on issues.

* Solid ETL experience and Hadoop/Hive/PySpark/Presto/Spark SQL

* Solid communication and articulation skills

* Able to handle stakeholders independently with minimal intervention from the reporting manager.

* Develop strategies to solve problems in logical yet creative ways.

* Create custom reports and presentations accompanied by strong data visualization and storytelling

 

We would be excited if you have:

 

* Excellent communication and interpersonal skills

* Ability to meet deadlines and manage project delivery

* Excellent report-writing and presentation skills

* Critical thinking and problem-solving capabilities

US Based Product Organization

Agency job
via e-Hireo by Biswajit Banik
Bengaluru (Bangalore)
10 - 15 yrs
₹25L - ₹45L / yr
Hadoop
HDFS
Apache Hive
Zookeeper
Cloudera
+8 more

Responsibilities:

  • Provide support services to our Gold & Enterprise customers using our flagship product suites. This may include assistance provided during the engineering and operations of distributed systems, as well as responses for mission-critical systems and production customers.
  • Lead end-to-end delivery and customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product
  • Lead and mentor others on concurrency and parallelization to deliver scalability, performance, and resource optimization in a multithreaded and distributed environment
  • Demonstrate the ability to actively listen to customers and show empathy to the customer’s business impact when they experience issues with our products


Required Skills:

  • 10+ years of experience with a highly scalable, distributed, multi-node environment (100+ nodes)
  • Hadoop operations including ZooKeeper, HDFS, YARN, Hive, and related components like the Hive metastore, Cloudera Manager/Ambari, etc.
  • Authentication and security configuration and tuning (KNOX, LDAP, Kerberos, SSL/TLS, second priority: SSO/OAuth/OIDC, Ranger/Sentry)
  • Java troubleshooting, e.g., collection and evaluation of jstacks, heap dumps
  • Linux, NFS, Windows, including application installation, scripting, basic command line
  • Docker and Kubernetes configuration and troubleshooting, including Helm charts, storage options, logging, and basic kubectl CLI
  • Experience working with scripting languages (Bash, PowerShell, Python)
  • Working knowledge of application, server, and network security management concepts
  • Familiarity with virtual machine technologies
  • Knowledge of databases like MySQL and PostgreSQL,
  • Certification on any of the leading Cloud providers (AWS, Azure, GCP ) and/or Kubernetes is a big plus
NA

Agency job
via Talent folks by Rijooshri Saikia
Bengaluru (Bangalore)
7 - 13 yrs
₹10L - ₹12L / yr
Team Management
skill iconJava
Hadoop
Microservices
People Management
+1 more

Senior Team Lead, Software Engineering (96386)

 

Role: Senior Team Lead


Skills: Has to be an expert in these -

  1. Java
  2. Microservices
  3. Hadoop
  4. People Management Skills.

                   
Will be a plus if you have knowledge of -

AWS

Location: Bangalore, India – North Gate.

 

Acceldata

5 recruiters
Posted by Richa Kukar
Bengaluru (Bangalore)
6 - 10 yrs
Best in industry
SRE
Reliability engineering
Site reliability
Hadoop
HDFS
+1 more

Senior SRE - Acceldata (IC3 Level)


About the Job


You will join a team of highly skilled engineers who are responsible for delivering Acceldata’s support services. Our Site Reliability Engineers are trained to be active listeners and demonstrate empathy when customers encounter product issues. In our fun and collaborative environment, Site Reliability Engineers develop strong business, interpersonal and technical skills to deliver high-quality service to our valued customers.


When you arrive for your first day, we’ll want you to have:

  • Solid skills in troubleshooting to repair failed products or processes on a machine or a system, using a logical, systematic search for the source of a problem in order to solve it and make the product or process operational again
  • A strong ability to understand the feelings of our customers as we empathize with them on the issue at hand
  • A strong desire to increase your product and technology skillset, and increase your confidence supporting our products so you can help our customers succeed

In this position you will…

  • Provide support services to our Gold & Enterprise customers using our flagship Acceldata Pulse, Flow & Torch product suites. This may include assistance provided during the engineering and operations of distributed systems, as well as responses for mission-critical systems and production customers.
  • Demonstrate the ability to actively listen to customers and show empathy to the customer’s business impact when they experience issues with our products
  • Participate in the queue management and coordination process by owning customer escalations and managing the unassigned queue.
  • Be involved with and work on other support-related activities - performing POCs & assisting with onboarding deployments of Acceldata & Hadoop distribution products.
  • Triage, diagnose and escalate customer inquiries when applicable during their engineering and operations efforts.
  • Collaborate and share solutions with both customers and the internal team.
  • Investigate product-related issues both for particular customers and for common trends that may arise
  • Study and understand critical system components and large cluster operations
  • Differentiate between issues that arise in operations, user code, or the product
  • Coordinate enhancement and feature requests with product management and the Acceldata engineering team.
  • Flexibility to work in shifts.
  • Participate in a rotational weekend on-call roster for critical support needs.
  • Participate as a designated or dedicated engineer for specific customers. Aspects of this engagement translate to building long-term successful relationships with customers, leading weekly status calls, and occasional visits to customer sites

In this position, you should have…

  • A strong desire and aptitude to become a well-rounded support professional. Acceldata Support considers the service we deliver as our core product.
  • A positive attitude towards feedback and continual improvement
  • A willingness to give direct feedback to and partner with management to improve team operations
  • A tenacity to bring calm and order to the often stressful situations of customer cases
  • A mental capability to multi-task across many customer situations simultaneously
  • Bachelor’s degree in Computer Science or Engineering or equivalent experience. A Master’s degree is a plus
  • At least 2+ years of experience with at least one of the following cloud platforms: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and experience managing and supporting a cloud infrastructure on any of the 3 platforms. Knowledge of Kubernetes and Docker is also a must.
  • Strong troubleshooting skills (for example, TCP/IP, DNS, file systems, load balancing, databases, Java)
  • Excellent communication skills in English (written and verbal)
  • Prior enterprise support experience in a technical environment strongly preferred

Strong Hands-on Experience Working With Or Supporting The Following

  • 8-12 years of experience with a highly scalable, distributed, multi-node environment (50+ nodes)
  • Hadoop operations including ZooKeeper, HDFS, YARN, Hive, and related components like the Hive metastore, Cloudera Manager/Ambari, etc.
  • Authentication and security configuration and tuning (KNOX, LDAP, Kerberos, SSL/TLS, second priority: SSO/OAuth/OIDC, Ranger/Sentry)
  • Java troubleshooting, e.g., collection and evaluation of jstacks, heap dumps

You might also have…

  • Linux, NFS, Windows, including application installation, scripting, basic command line
  • Docker and Kubernetes configuration and troubleshooting, including Helm charts, storage options, logging, and basic kubectl CLI
  • Experience working with scripting languages (Bash, PowerShell, Python)
  • Working knowledge of application, server, and network security management concepts
  • Familiarity with virtual machine technologies
  • Knowledge of databases like MySQL and PostgreSQL,
  • Certification on any of the leading Cloud providers (AWS, Azure, GCP ) and/or Kubernetes is a big plus

The right person in this role has an opportunity to make a huge impact at Acceldata and add value to our future decisions. If this position has piqued your interest and you have what we described - we invite you to apply! An adventure in data awaits.

Learn more at https://www.acceldata.io/about-us



NoBroker

at NoBroker

1 video
26 recruiters
noor aqsa
Posted by noor aqsa
Bengaluru (Bangalore)
1 - 3 yrs
₹6L - ₹8L / yr
Java
Spark
PySpark
Data engineering
Big Data
  • Build, set up, and maintain some of the best data pipelines and MPP frameworks for our datasets
  • Translate complex business requirements into scalable technical solutions meeting data design standards; bring a strong understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency
  • Build dashboards using self-service tools on Kibana and perform data analysis to support business verticals
  • Collaborate with multiple cross-functional teams
Hiring for one of the MNC for India location

Agency job
via Natalie Consultants by Rahul Kumar
Gurugram, Pune, Bengaluru (Bangalore), Delhi, Noida, Ghaziabad, Faridabad
2 - 9 yrs
₹8L - ₹20L / yr
Python
Hadoop
Big Data
Spark
Data engineering

Key Responsibilities (Data Developer - Python, Spark):

Experience: 2 to 9 years

Development of data platforms, integration frameworks, processes, and code.

Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages

Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests and unit tests.

Elaborate stories in a collaborative agile environment (SCRUM or Kanban)

Familiarity with cloud platforms like GCP, AWS or Azure.

Experience with large data volumes.

Familiarity with writing rest-based services.

Experience with distributed processing and systems

Experience with Hadoop / Spark toolsets

Experience with relational database management systems (RDBMS)

Experience with Data Flow development

Knowledge of Agile and associated development techniques including:

Fintech Company

Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹12L / yr
Python
SQL
Data Warehouse (DWH)
Hadoop
Amazon Web Services (AWS)

Purpose of Job:

Responsible for drawing insights from many sources of data to answer important business
questions and help the organization make better use of data in its daily activities.


Job Responsibilities:

We are looking for a smart and experienced Data Engineer 1 who can work with a senior
manager to
⮚ Build DevOps solutions and CICD pipelines for code deployment
⮚ Build unit test cases for APIs and code in Python (see the sketch after this list)
⮚ Manage AWS resources including EC2, RDS, CloudWatch, Amazon Aurora, etc.
⮚ Build and deliver high quality data architecture and pipelines to support business
and reporting needs
⮚ Deliver on data architecture projects and implementation of next generation BI
solutions
⮚ Interface with other teams to extract, transform, and load data from a wide variety
of data sources
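
For a flavour of the Python unit-testing responsibility above, here is a minimal pytest-style sketch; the `get_balance` function is a hypothetical name used purely for illustration, not part of this role's actual codebase.

```python
# test_wallet.py - a minimal pytest sketch (all names are hypothetical)
import pytest

def get_balance(account_id: str) -> float:
    """Stand-in for a real API under test; returns a dummy balance."""
    if not account_id:
        raise ValueError("account_id is required")
    return 100.0

def test_get_balance_returns_positive_amount():
    assert get_balance("acc-123") > 0

def test_get_balance_rejects_empty_account_id():
    with pytest.raises(ValueError):
        get_balance("")
```

Running `pytest` from the project root discovers and executes both tests.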
Qualifications:
Education: MS/MTech/Btech graduates or equivalent with focus on data science and
quantitative fields (CS, Eng, Math, Eco)
Work Experience: Proven 1+ years of experience in data mining (SQL, ETL, data
warehouse, etc.) and using SQL databases

 

Skills
Technical Skills
⮚ Proficient in Python and SQL. Familiarity with statistics or analytical techniques
⮚ Data warehousing experience with big data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
Soft Skills
⮚ Deep Curiosity and Humility
⮚ Excellent storyteller and communicator
⮚ Design Thinking

They platform powered by machine learning. (TE1)

Agency job
via Multi Recruit by Paramesh P
Bengaluru (Bangalore)
1.5 - 4 yrs
₹8L - ₹16L / yr
Scala
Java
Spark
Hadoop
REST API
  • Involvement in the overall application lifecycle
  • Design and develop software applications in Scala and Spark
  • Understand business requirements and convert them to technical solutions
  • Rest API design, implementation, and integration
  • Collaborate with Frontend developers and provide mentorship for Junior engineers in the team
  • An interest and preferably working experience in agile development methodologies
  • A team player, eager to invest in personal and team growth
  • Staying up to date with cutting edge technologies and best practices
  • Advocate for improvements to product quality, security, and performance

 

Desired Skills and Experience

  • Minimum 1.5+ years of development experience in Scala / Java language
  • Strong understanding of the development cycle, programming techniques, and tools.
  • Strong problem solving and verbal and written communication skills.
  • Experience in working with web development using J2EE or similar frameworks
  • Experience in developing REST API’s
  • BE in Computer Science
  • Experience with Akka or microservices is a plus
  • Experience with Big Data technologies like Spark/Hadoop is a plus. The company offers very competitive compensation packages commensurate with your experience, full benefits, continual career & compensation growth, and many other perks.

 

Big revolution in the e-gaming industry. (GK1)

Agency job
via Multi Recruit by Ayub Pasha
Bengaluru (Bangalore)
2 - 3 yrs
₹15L - ₹20L / yr
Python
Scala
Hadoop
Spark
Data Engineer
  • We are looking for a Data Engineer to build the next-generation mobile applications for our world-class fintech product.
  • The candidate will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams.
  • The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up.
  • Looking for a person with a strong ability to analyse and provide valuable insights to the product and business team to solve daily business problems.
  • You should be able to work in a high-volume environment, have outstanding planning and organisational skills.

 

Qualifications for Data Engineer

 

  • Working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimising ‘big data’ data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Looking for a candidate with 2-3 years of experience in a Data Engineer role, who is a CS graduate or has an equivalent experience.

 

What we're looking for?

 

  • Experience with big data tools: Hadoop, Spark, Kafka, and other alternate tools.
  • Experience with relational SQL and NoSQL databases, including MySQL/Postgres and MongoDB.
  • Experience with data pipeline and workflow management tools: Luigi, Airflow (see the sketch below).
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark-Streaming.
  • Experience with object-oriented/object function scripting languages: Python, Java, Scala.
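
As a quick illustration of the workflow tools named above, here is a minimal Airflow DAG sketch (Airflow 2.x syntax; the DAG id and task logic are illustrative placeholders, not taken from the job description):

```python
# A minimal Airflow 2.x DAG: extract, then load, once per day.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")  # placeholder task logic

def load():
    print("write data to the warehouse")       # placeholder task logic

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```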
Amagi Media Labs

at Amagi Media Labs

3 recruiters
Rajesh C
Posted by Rajesh C
Bengaluru (Bangalore), Noida
5 - 9 yrs
₹10L - ₹17L / yr
Data engineering
Spark
Scala
Hadoop
Apache Hadoop
  • We are looking for: Data Engineer
  • Spark
  • Scala
  • Hadoop
Experience: 5 to 9 years
Notice period: 15 to 30 days
Location: Bangalore / Noida
UAE Client

Agency job
via Fragma Data Systems by Harpreet kour
Bengaluru (Bangalore)
3 - 8 yrs
₹15L - ₹20L / yr
Apache Kafka
Spark
Hadoop
Confluent
Experience in Apache Kafka
Good communication skills
Good to have: Hadoop and Spark
Hands-on with Apache Kafka and Confluent Kafka (see the sketch below)
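
To give a flavour of the hands-on Kafka expectation, a minimal consumer sketch using the kafka-python client; the broker address, topic, and group id are placeholders:

```python
# A minimal Kafka consumer using the kafka-python package.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                            # placeholder topic name
    bootstrap_servers="localhost:9092",  # placeholder broker address
    group_id="demo-group",
    auto_offset_reset="earliest",        # start from the oldest retained message
)

for message in consumer:
    # message.value is raw bytes unless a value_deserializer is configured
    print(message.topic, message.partition, message.offset, message.value)
```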
Acceldata

at Acceldata

5 recruiters
Richa  Kukar
Posted by Richa Kukar
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹40L / yr
Hadoop
SRE
DevOps
Reliability engineering
Load balancing
Acceldata is creating the Data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate Data pipelines at Petabyte scale. Our customers include a Fortune 500 company, one of Asia's largest telecom companies, and a unicorn fintech startup. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
 
We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data operations platform that focuses on optimizing modern data lakes for both on-premise and cloud environments.

 

Responsibilities

  • Our Site reliability engineers work on improving the availability, scalability, performance, and reliability of enterprise production services for our products as well as our customer’s data lake environments.
  • You will use your expertise to improve the reliability and performance of Hadoop Data lake clusters and data management services. Like our products, our SREs are expected to be platform- and vendor-agnostic when it comes to implementing, stabilizing, and tuning Hadoop ecosystems.
  • You’d be required to provide implementation guidance, a best-practices framework, and technical thought leadership to our customers for their Hadoop Data lake implementation and migration initiatives.
  • You need to be 100% hands-on and, as required, test, monitor, administer, and operate multiple data lake clusters across data centers.
  • Troubleshoot issues across the entire stack - hardware, software, application, and network.
  • Dive into problems with an eye to both immediate remediations as well as the follow-through changes and automation that will prevent future occurrences.
  • Must demonstrate exceptional troubleshooting and strong architectural skills and clearly and effectively describe this in both a verbal and written format.

Requirements

  • Customer-focused, Self-driven, and Motivated with a strong work ethic and a passion for problem-solving.
  • 4+ years of designing, implementing, tuning, and managing services in a distributed, enterprise-scale on-premise and public/private cloud environment.
  • Familiarity with infrastructure management and operations lifecycle concepts and ecosystem.
  • Hadoop cluster design, implementation, management, and performance-tuning experience with HDFS, YARN, Hive/Impala, Spark, Kerberos, and related Hadoop technologies is a must.
  • Must have strong SQL/HQL query troubleshooting and tuning skills on Hive/HBase.
  • Must have a strong capacity planning experience for Hadoop ecosystems/data lakes.
  • Good to have hands-on experience with KAFKA, RANGER/SENTRY, NiFi, Ambari, Cloudera Manager, and HBASE.
  • Good to have data modeling, data engineering, and data security experience within the Hadoop ecosystem.
  • Good to have deep JVM/Java debugging and tuning skills.
AI-powered cloud-based SaaS solution provider

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
8 - 15 yrs
₹25L - ₹60L / yr
Data engineering
Big Data
Spark
Apache Kafka
Cassandra
Responsibilities

● Able to contribute to the gathering of functional requirements, developing technical
specifications, and test case planning
● Demonstrating technical expertise, and solving challenging programming and design
problems
● 60% hands-on coding with architecture ownership of one or more products
● Ability to articulate architectural and design options, and educate development teams and
business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
patches
● Mentor and guide team members
● Work cross-functionally with various Bidgely teams including product management, QA/QE,
various product lines, and/or business units to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 8-12 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with Big data EcoSystems.
● Past experience with Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, ZooKeeper
● Expertise with any of the following Object-Oriented Languages (OOD): Java/J2EE,Scala,
Python
● Ability to lead and mentor technical team members
● Expertise with the entire Software Development Life Cycle (SDLC)
● Excellent communication skills: Demonstrated ability to explain complex technical issues to
both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Business Acumen - strategic thinking & strategy development
● Experience on Cloud or AWS is preferable
● Have a good understanding and ability to develop software, prototypes, or proofs of
concepts (POC's) for various Data Engineering requirements.
● Experience with Agile Development, SCRUM, or Extreme Programming methodologies
AI-powered cloud-based SaaS solution

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
2 - 10 yrs
₹15L - ₹50L / yr
Data engineering
Big Data
Data Engineer
Big Data Engineer
Hibernate (Java)
Responsibilities

● Able to contribute to the gathering of functional requirements, developing technical
specifications, and project & test planning
● Demonstrating technical expertise, and solving challenging programming and design
problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate
architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
patches
● Work cross-functionally with various bidgely teams including: product management,
QA/QE, various product lines, and/or business units to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with Big data Eco Systems.
● Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, ZooKeeper
● Expertise with any of the following Object-Oriented Languages (OOD): Java/J2EE,Scala,
Python
● Strong leadership experience: Leading meetings, presenting if required
● Excellent communication skills: Demonstrated ability to explain complex technical
issues to both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience on Cloud or AWS is preferable
● Have a good understanding and ability to develop software, prototypes, or proofs of
concepts (POC's) for various Data Engineering requirements.
Banyan Data Services

at Banyan Data Services

1 recruiter
Sathish Kumar
Posted by Sathish Kumar
Bengaluru (Bangalore)
3 - 15 yrs
₹6L - ₹20L / yr
Data Science
Data Scientist
MongoDB
Java
Big Data

Senior Big Data Engineer 

Note: Notice period is 45 days

Banyan Data Services (BDS) is a US-based data-focused Company that specializes in comprehensive data solutions and services, headquartered in San Jose, California, USA. 

 

We are looking for a Senior Hadoop Bigdata Engineer who has expertise in solving complex data problems across a big data platform. You will be a part of our development team based out of Bangalore. This team focuses on the most innovative and emerging data infrastructure software and services to support highly scalable and available infrastructure. 

 

It's a once-in-a-lifetime opportunity to join our rocket ship startup run by a world-class executive team. We are looking for candidates that aspire to be a part of the cutting-edge solutions and services we offer that address next-gen data evolution challenges. 

 

 

Key Qualifications

 

·   5+ years of experience working with Java and Spring technologies

· At least 3 years of programming experience working with Spark on big data; including experience with data profiling and building transformations

· Knowledge of microservices architecture is a plus

· Experience with any NoSQL databases such as HBase, MongoDB, or Cassandra

· Experience with Kafka or any streaming tools

· Knowledge of Scala would be preferable

· Experience with agile application development 

· Exposure of any Cloud Technologies including containers and Kubernetes 

· Demonstrated experience of performing DevOps for platforms 

· Strong skills in data structures and algorithms, with a focus on writing efficient, low-complexity code

· Exposure to Graph databases

· Passion for learning new technologies and the ability to do so quickly 

· A Bachelor's degree in a computer-related field or equivalent professional experience is required

 

Key Responsibilities

 

· Scope and deliver solutions with the ability to design solutions independently based on high-level architecture

· Design and develop the big data-focused micro-Services

· Be involved in big data infrastructure, distributed systems, data modeling, and query processing

· Build software with cutting-edge technologies on cloud

· Willing to learn new technologies and take on research-oriented projects

· Proven interpersonal skills, contributing to team efforts by accomplishing related results as needed

a global business process management company

Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore)
3 - 8 yrs
₹14L - ₹20L / yr
Business Intelligence (BI)
PowerBI
Windows Azure
Git
SVN

Power BI Developer(Azure Developer )

Job Description:

We are looking for a senior visualization engineer with an understanding of Azure Data Factory and Databricks to develop and deliver solutions that enable the delivery of information to audiences in support of key business processes.

Ensure code and design quality through execution of test plans, and assist in the development of standards & guidelines, working closely with internal and external design, business, and technical counterparts.

 

Desired Competencies:

  • Strong design concepts for data visualization centered on the business user, and a knack for communicating insights visually.
  • Ability to produce any of the charting methods available with drill down options and action-based reporting. This includes use of right graphs for the underlying data with company themes and objects.
  • Publishing reports & dashboards on reporting server and providing role-based access to users.
  • Ability to create wireframes on any tool for communicating the reporting design.
  • Creation of ad-hoc reports & dashboards to visually communicate data hub metrics (metadata information) for top management understanding.
  • Should be able to handle huge volumes of data from databases such as SQL Server, Synapse, Delta Lake, or flat files and create high-performance dashboards.
  • Should be good in Power BI development
  • Expertise in 2 or more BI (Visualization) tools in building reports and dashboards.
  • Understanding of Azure components like Azure Data Factory, Data lake Store, SQL Database, Azure Databricks
  • Strong knowledge in SQL queries
  • Must have worked in full life-cycle development from functional design to deployment
  • Intermediate understanding to format, process and transform data
  • Should have working knowledge of GIT, SVN
  • Good experience in establishing connections with heterogeneous sources like Hadoop, Hive, Amazon, Azure, Salesforce, SAP, HANA, APIs, various databases, etc.
  • Basic understanding of data modelling and ability to combine data from multiple sources to create integrated reports

 

Preferred Qualifications:

  • Bachelor's degree in Computer Science or Technology
  • Proven success in contributing to a team-oriented environment
Games 24x7

Agency job
via zyoin by Shubha N
Bengaluru (Bangalore)
0 - 6 yrs
₹10L - ₹21L / yr
PowerBI
Big Data
Hadoop
Apache Hive
Business Intelligence (BI)
Location: Bangalore
Work Timing: 5 Days A Week

Responsibilities include:

• Ensure the right stakeholders get the right information at the right time
• Requirement gathering with stakeholders to understand their data requirement
• Creating and deploying reports
• Participate actively in datamarts design discussions
• Work on both RDBMS as well as Big Data for designing BI Solutions
• Write code (queries/procedures) in SQL / Hive / Drill that is both functional and elegant,
following appropriate design patterns
• Design and plan BI solutions to automate regular reporting
• Debugging, monitoring and troubleshooting BI solutions
• Creating and deploying datamarts
• Writing relational and multidimensional database queries
• Integrate heterogeneous data sources into BI solutions
• Ensure Data Integrity of data flowing from heterogeneous data sources into BI solutions.

Minimum Job Qualifications:
• BE/B.Tech in Computer Science/IT from Top Colleges
• 1-5 years of experience in data warehousing and SQL
• Excellent Analytical Knowledge
• Excellent technical as well as communication skills
• Attention to even the smallest detail is mandatory
• Knowledge of SQL query writing and performance tuning
• Knowledge of Big Data technologies like Apache Hadoop, Apache Hive, Apache Drill
• Knowledge of fundamentals of Business Intelligence
• In-depth knowledge of RDBMS systems, data warehousing, and data marts
• Smart, motivated, and team-oriented
Desirable Requirements
• Sound knowledge of software development in Programming (preferably Java )
• Knowledge of the software development lifecycle (SDLC) and models
world’s fastest growing consumer internet company

Agency job
via Hunt & Badge Consulting Pvt Ltd by Chandramohan Subramanian
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹35L / yr
Big Data
Data engineering
Big Data Engineering
Data Engineer
ETL

Data Engineer JD:

  • Designing, developing, constructing, installing, testing and maintaining the complete data management & processing systems.
  • Building highly scalable, robust, fault-tolerant, & secure user data platform adhering to data protection laws.
  • Taking care of the complete ETL (Extract, Transform & Load) process.
  • Ensuring architecture is planned in such a way that it meets all the business requirements.
  • Exploring new ways of using existing data, to provide more insights out of it.
  • Proposing ways to improve data quality, reliability & efficiency of the whole system.
  • Creating data models to reduce system complexity and hence increase efficiency & reduce cost.
  • Introducing new data management tools & technologies into the existing system to make it more efficient.
  • Setting up monitoring and alarming on data pipeline jobs to detect failures and anomalies

What do we expect from you?

  • BS/MS in Computer Science or equivalent experience
  • 5 years of recent experience in Big Data Engineering.
  • Good experience in working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Zookeeper, Storm, Spark, Airflow and NoSQL systems
  • Excellent programming and debugging skills in Java or Python.
  • Apache Spark, Python, and hands-on experience in deploying ML models
  • Has worked on streaming and real-time pipelines
  • Experience with Apache Kafka, or has worked with any of Spark Streaming, Flume, or Storm (see the sketch below)
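
For illustration, a minimal PySpark Structured Streaming sketch that reads from Kafka and prints to the console; it assumes the spark-sql-kafka connector package is supplied at submit time, and the broker and topic names are placeholders:

```python
# Minimal PySpark Structured Streaming job: Kafka source -> console sink.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                        # placeholder topic
    .load()
)

# Kafka delivers key/value as binary; cast the value to a readable string.
decoded = events.selectExpr("CAST(value AS STRING) AS value")

query = decoded.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```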

Focus Areas:

R1 - Data Structures & Algorithms
R2 - Problem Solving + Coding
R3 - Design (LLD)

UAE Client

Agency job
via Fragma Data Systems by Harpreet kour
Dubai, Bengaluru (Bangalore)
4 - 8 yrs
₹6L - ₹16L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Apache Spark
• Responsible for developing and maintaining applications with PySpark 
• Contribute to the overall design and architecture of the application developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement Projects based on functional specifications.

Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (see the sketch after this list)
• Good experience in SQL databases - able to write queries of fair complexity.
• Should have excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture. Business rules processing and data extraction from Data Lake into data streams for business consumption.
• Good customer communication.
• Good Analytical skills
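
As a flavour of the DataFrame/Spark SQL work and the tuning mentioned above, a minimal PySpark sketch; the paths, column names, and config values are illustrative placeholders, not tuning recommendations:

```python
# Minimal PySpark sketch: DataFrame aggregation, Spark SQL, and tuning knobs.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("etl-demo")
    .config("spark.executor.memory", "4g")          # illustrative executor sizing
    .config("spark.sql.shuffle.partitions", "64")   # illustrative partition tuning
    .getOrCreate()
)

df = spark.read.parquet("/data/transactions")       # placeholder input path
daily = df.groupBy("txn_date").agg(F.sum("amount").alias("total_amount"))

daily.createOrReplaceTempView("daily_totals")
top_days = spark.sql(
    "SELECT txn_date, total_amount FROM daily_totals "
    "ORDER BY total_amount DESC LIMIT 10"
)

# Reduce the number of output files before writing; 8 is an arbitrary value.
top_days.repartition(8).write.mode("overwrite").parquet("/data/reports/top_days")
```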
UAE Client

Agency job
via Fragma Data Systems by Evelyn Charles
Remote, Bengaluru (Bangalore), Hyderabad
6 - 10 yrs
₹15L - ₹22L / yr
Informatica
Big Data
SQL
Hadoop
Apache Spark

Skills- Informatica with Big Data Management

 

1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working on Spark/SQL
3. Develops Informatica mappings/SQL
4. Should have experience in Hadoop, Spark, etc.
ACT FIBERNET

at ACT FIBERNET

1 video
2 recruiters
Sumit Sindhwani
Posted by Sumit Sindhwani
Bengaluru (Bangalore)
9 - 14 yrs
₹20L - ₹36L / yr
Data engineering
Data Engineer
Hadoop
Informatica
Qlikview

Key  Responsibilities :

  • Development of proprietary processes and procedures designed to process various data streams around critical databases in the org
  • Manage technical resources around data technologies, including relational databases, NO SQL DBs, business intelligence databases, scripting languages, big data tools and technologies, visualization tools.
  • Creation of a project plan including timelines and critical milestones to success in support of the project
  • Identification of the vital skill sets/staff required to complete the project
  • Identification of crucial sources of the data needed to achieve the objective.

 

Skill Requirement :

  • Experience with data pipeline processes and tools
  • Well versed in the Data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, ETL, ESB)
  • Experience with an existing ETL tool, e.g., Informatica, Ab Initio, etc.
  • Deep understanding of big data systems like Hadoop, Spark, YARN, Hive, Ranger, Ambari
  • Deep knowledge of the Qlik ecosystem, including QlikView, Qlik Sense, and NPrinting
  • Python, or a similar programming language
  • Exposure to data science and machine learning
  • Comfort working in a fast-paced environment

Soft attributes :

  • Independence: Must have the ability to work on his/her own without constant direction or supervision. He/she must be self-motivated and possess a strong work ethic to strive to put forth extra effort continually
  • Creativity: Must be able to generate imaginative, innovative solutions that meet the needs of the organization. You must be a strategic thinker/solution seller and should be able to think of integrated solutions (with field force apps, customer apps, CCT solutions, etc.), approaching each unique situation/challenge in different ways using the same tools.
  • Resilience: Must remain effective in high-pressure situations, using both positive and negative outcomes as an incentive to move forward toward fulfilling commitments to achieving personal and team goals.
upGrad

at upGrad

1 video
19 recruiters
Priyanka Muralidharan
Posted by Priyanka Muralidharan
Mumbai, Bengaluru (Bangalore)
8 - 12 yrs
₹40L - ₹60L / yr
Technical Architecture
Technical architect
Java
Go Programming (Golang)
React.js
About Us

upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, and Entrepreneurship, etc. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and stay relevant and help build the careers of tomorrow.
  • upGrad was awarded the Best Tech for Education by IAMAI for 2018-19,
  • upGrad was also ranked as one of the LinkedIn Top Startups 2018: The 25 most sought-after startups in India.
  • upGrad was earlier selected as one of the top ten most innovative companies in India by FastCompany.
  • We were also covered by the Financial Times along with other disruptors in Ed-Tech.
  • upGrad is the official education partner for Government of India - Startup India program.
  • Our program with IIIT B has been ranked #1 program in the country in the domain of Artificial Intelligence and Machine Learning.

About the Role

A highly motivated individual who has experience in architecting end-to-end web-based ecommerce/online/SaaS products and systems, bringing them to production quickly and with high quality. Able to understand expected business results and map architecture to drive business forward. Passionate about building world-class solutions.

Role and Responsibilities

  • Work with Product Managers and Business to understand business/product requirements and vision.
  • Provide a clear architectural vision in line with business and product vision.
  • Lead a team of architects, developers, and data engineers to provide platform services to other engineering teams.
  • Provide architectural oversight to engineering teams across the organization.
  • Hands on design and development of platform services and features owned by self - this is a hands-on coding role.
  • Define guidelines for best practices covering design, unit testing, secure coding etc.
  • Ensure quality by reviewing design, code, test plans, load test plans etc. as appropriate.
  • Work closely with the QA and Support teams to track quality and proactively identify improvement opportunities.
  • Work closely with DevOps and IT to ensure highly secure and cost optimized operations in the cloud.
  • Grow technical skills in the team - identify skill gaps with plans to address them, participate in hiring, mentor other architects and engineers.
  • Support other engineers in resolving complex technical issues as a go-to person.

Skills/Experience
  • 12+ years of experience in design and development of ecommerce scale systems and highly scalable SaaS or enterprise products.
  • Extensive experience in developing extensible and scalable web applications with
    • Java, Spring Boot, Go
    • Web Services - REST, OAuth, OData
    • Database/Caching - MySQL, Cassandra, MongoDB, Memcached/Redis
    • Queue/Broker services - RabbitMQ/Kafka
    • Microservices architecture via Docker on AWS or Azure.
    • Experience with web front end technologies - HTML5, CSS3, JavaScript libraries and frameworks such as jQuery, AngularJS, React, Vue.js, Bootstrap etc.
  • Extensive experience with cloud based architectures and how to optimize design for cost.
  • Expert level understanding of secure application design practices and a working understanding of cloud infrastructure security.
  • Experience with CI/CD processes and design for testability.
  • Experience working with big data technologies such as Spark/Storm/Hadoop/Data Lake Architectures is a big plus.
  • Action and result-oriented problem-solver who works well both independently and as part of a team; able to foster and develop others' ideas as well as his/her own.
  • Ability to organize, prioritize and schedule a high workload and multiple parallel projects efficiently.
  • Excellent verbal and written communication with stakeholders in a matrixed environment.
  • Long term experience with at least one product from inception to completion and evolution of the product over multiple years.
Qualification
B.Tech/MCA (IT/Computer Science) from a premier institution (IIT/NIT/BITS) and/or a US Master's degree in Computer Science.
Looking to hire Data Engineers for a client in Bangalore.

Agency job
via Artifex HR by Maria Theyos
Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹10L / yr
Big Data
Hadoop
Apache Spark
Spark
Apache Kafka

We are looking for a savvy Data Engineer to join our growing team of analytics experts. 

 

The hire will be responsible for:

- Expanding and optimizing our data and data pipeline architecture

- Optimizing data flow and collection for cross functional teams.

- Will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.

- Must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.

- Experience with Azure: ADLS, Databricks, Stream Analytics, SQL DW, COSMOS DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates

- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

- Experience with object-oriented/object function scripting languages: Python, SQL, Scala, Spark-SQL etc.

Nice to have experience with :

- Big data tools: Hadoop, Spark and Kafka

- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow

- Stream-processing systems: Storm

Database: SQL DB

Programming languages: PL/SQL, Spark SQL

Looking for candidates with Data Warehousing experience, strong domain knowledge & experience working as a Technical lead.

The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

IT MNC
Agency job
via Apical Mind by Madhusudan Patade
Bengaluru (Bangalore), Hyderabad, Noida, Chennai, NCR (Delhi | Gurgaon | Noida)
3 - 12 yrs
₹15L - ₹40L / yr
Presto
Hadoop
SQL

Experience – 3 – 12 yrs

Budget - Open

Location - PAN India (Noida/Bangaluru/Hyderabad/Chennai)


Presto Developer (4)

 

Understanding of a distributed SQL query engine running on Hadoop 

Design and develop core components for Presto 

Contribute to the ongoing Presto development by implementing new features, bug fixes, and other improvements 

Develop new and extend existing Presto connectors to various data sources 

Lead complex and technically challenging projects from concept to completion 

Write tests and contribute to ongoing automation infrastructure development 

Run and analyze software performance metrics 

Collaborate with teams globally across multiple time zones and operate in an Agile development environment 

Hands-on experience and interest with Hadoop 
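
Connector development itself happens in Presto's Java codebase, but as a quick illustration of the query-engine side, here is a minimal sketch using the presto-python-client package; host, catalog, schema, and table are placeholders:

```python
# Minimal sketch: run a query against a Presto coordinator from Python.
import prestodb

conn = prestodb.dbapi.connect(
    host="localhost",   # placeholder coordinator host
    port=8080,
    user="demo",
    catalog="hive",     # placeholder catalog
    schema="default",   # placeholder schema
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM orders")  # placeholder table
print(cur.fetchall())
```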

Bidgely

at Bidgely

1 recruiter
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹40L / yr
SDET
Automation
Test Automation (QA)
Selenium
TestNG
Responsibilities

  • Design and develop a framework, internal tools, and scripts for testing large-scale data systems, machine learning algorithms, and responsive User Interfaces.
  • Create repeatability in testing through automation
  • Participate in code reviews, design reviews, architecture discussions.
  • Performance testing and benchmarking of Bidgely product suites
  • Driving the adoption of these best practices around coding, design, quality, performance in your team.
  • Lead the team on all technical aspects and own the quality of your teams’ deliverables
  • Understand requirements, design exhaustive test scenarios, execute manual and automated test cases, dig deeper into issues, identify root causes, and articulate defects clearly.
  • Strive for excellence in quality by looking beyond obvious scenarios and stated requirements and by keeping end-user needs in mind.
  • Debug automation, product, deployment, and production issues and work with stakeholders/team on quick resolution
  • Deliver a high-quality robust product in a fast-paced start-up environment.
  • Collaborate with the engineering team and product management to elicit & understand their requirements and develop potential solutions.
  • Stay current with the latest technology, tools, and methodologies; share knowledge by clearly articulating results and ideas to key decision-makers.

Requirements

  •  BS/MS in Computer Science, Electrical or equivalent
  • 6+ years of experience in designing automation frameworks, tools
  • Strong object-oriented design skills, knowledge of design patterns, and an uncanny ability to
    design intuitive module and class-level interfaces
  • Deep understanding of design patterns, optimizations
  • Experience leading multi-engineer projects and mentoring junior engineers
  • Good understanding of data structures and algorithms and their space and time complexities. Strong technical aptitude and a good knowledge of CS fundamentals
  • Experience in non-functional testing and performance benchmarking
  • Knowledge of Test-Driven Development & implementing CI/CD
  • Strong hands-on and practical working experience with at least one programming language: Java/Python/C++
  • Strong analytical, problem solving, and debugging skills.
  • Strong experience in API automation using Jersey/Rest Assured.
  • Fluency in automation tools and frameworks such as Selenium, TestNG, JMeter, JUnit, Jersey, etc. (see the sketch after this list)
  • Exposure to distributed systems or web applications
  • Good in RDBMS or any of the large data systems such as Hadoop, Cassandra, etc.
  • Hands-on experience with build tools like Maven/Gradle &  Jenkins
  • Experience in testing on various browsers and devices.
  • Strong communication and collaboration skills.
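
As a flavour of the UI-automation side of this role, a minimal Selenium sketch in Python (the stack above also lists Java-side tools such as TestNG and Rest Assured); the URL and element locators are placeholders:

```python
# Minimal Selenium 4 sketch: log in and assert the landing page title.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a compatible ChromeDriver is available
try:
    driver.get("https://example.com/login")            # placeholder URL
    driver.find_element(By.ID, "username").send_keys("demo-user")
    driver.find_element(By.ID, "password").send_keys("demo-pass")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title                 # illustrative check
finally:
    driver.quit()
```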
Bidgely

at Bidgely

1 recruiter
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹40L / yr
Java
Spring Boot
NoSQL Databases
SQL
Amazon Web Services (AWS)
Responsibilities

● Design and deliver scalable web services, APIs, and backend data modules.
Understand requirements and develop reusable code using design patterns &
component architecture and write unit test cases.

● Collaborate with product management and engineering teams to elicit &
understand their requirements & challenges and develop potential solutions

● Stay current with the latest tools, technology ideas, and methodologies; share
knowledge by clearly articulating results and ideas to key decision-makers.

Requirements

● 3-6 years of strong experience in developing highly scalable backend and
middle tier. BS/MS in Computer Science or equivalent from premier institutes
Strong in problem-solving, data structures, and algorithm design. Strong
experience in system architecture, Web services development, highly scalable
distributed applications.

● Good with large data systems such as Hadoop, MapReduce, and NoSQL stores like Cassandra. Fluency in Java, Spring, Hibernate, J2EE, and REST services. Ability to deliver code quickly from given scenarios in a fast-paced start-up environment.

● Attention to detail. Strong communication and collaboration skills.
Rakuten

at Rakuten

1 video
1 recruiter
Agency job
via zyoin by RAKESH RANJAN
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹38L / yr
Big Data
Spark
Hadoop
Apache Kafka
Apache Hive

Company Overview:

Rakuten, Inc. (TSE's first section: 4755) is the largest ecommerce company in Japan, and the third largest eCommerce marketplace company worldwide. Rakuten provides a variety of consumer and business-focused services including e-commerce, e-reading, travel, banking, securities, credit card, e-money, portal and media, online marketing and professional sports. The company is expanding globally and currently has operations throughout Asia, Western Europe, and the Americas. Founded in 1997, Rakuten is headquartered in Tokyo, with over 17,000 employees and partner staff worldwide. Rakuten's 2018 revenues were 1,101.48 billion yen. In Japanese, Rakuten stands for ‘optimism.’ It means we believe in the future. It’s an understanding that, with the right mind-set, we can make the future better by what we do today. Today, our 70+ businesses span e-commerce, digital content, communications and FinTech, bringing the joy of discovery to more than 1.2 billion members across the world.


Website: https://www.rakuten.com/

Crunchbase: Rakuten has raised a total of $42.4M in funding over 2 rounds (https://www.crunchbase.com/organization/rakuten)

Company size: 10,001+ employees

Founded: 1997

Headquarters: Tokyo, Japan

Work location: Bangalore (M.G. Road)


Please find below Job Description.


Role Description – Data Engineer for AN group (Location - India)

 

Key responsibilities include:

 

We are looking for an engineering candidate for our Autonomous Networking Team. The ideal candidate must have the following abilities:

 

  • Hands-on experience in big data computation technologies (at least one and potentially several of the following: Spark and Spark Streaming, Hadoop, Storm, Kafka Streaming, Flink, etc.)
  • Familiar with other related big data technologies, such as big data storage technologies (e.g., Phoenix/HBase, Redshift, Presto/Athena, Hive, Spark SQL, BigTable, BigQuery, ClickHouse, etc.), messaging layers (Kafka, Kinesis, etc.), cloud and container-based deployments (Docker, Kubernetes, etc.), Scala, Akka, SocketIO, ElasticSearch, RabbitMQ, Redis, Couchbase, Java, Go.
  • Partner with product management and delivery teams to align and prioritize current and future new product development initiatives in support of our business objectives
  • Work with cross functional engineering teams including QA, Platform Delivery and DevOps
  • Evaluate current state solutions to identify areas to improve standards, simplify, and enhance functionality and/or transition to effective solutions to improve supportability and time to market
  • Not afraid of refactoring existing systems and guiding the team through the same.
  • Experience with event-driven architecture and complex event processing
  • Extensive experience building and owning large-scale distributed backend systems.
Recko

at Recko

1 recruiter
Agency job
via Zyoin Web Private Limited by Chandrakala M
Bengaluru (Bangalore)
3 - 7 yrs
₹16L - ₹40L / yr
Big Data
Hadoop
Spark
Apache Hive
Data engineering

Recko Inc. is looking for data engineers to join our kick-ass engineering team. We are looking for smart, dynamic individuals to connect all the pieces of the data ecosystem.

 

What are we looking for:

  1. 3+ years of development experience in at least one of MySQL, Oracle, PostgreSQL, or MSSQL, and experience working with Big Data frameworks/platforms/data stores like Hadoop, HDFS, Spark, Oozie, Hue, EMR, Scala, Hive, Glue, Kerberos, etc.

  2. Strong experience setting up data warehouses, data modeling, data wrangling and dataflow architecture on the cloud

  3. 2+ years of experience with public cloud services such as AWS, Azure, or GCP, and languages like Java/Python, etc.

  4. 2+ years of development experience in Amazon Redshift, Google Bigquery or Azure data warehouse platforms preferred

  5. Knowledge of statistical analysis tools like R, SAS, etc.

  6. Familiarity with any data visualization software

  7. A growth mindset and passionate about building things from the ground up and most importantly, you should be fun to work with

As a data engineer at Recko, you will:

  1. Create and maintain optimal data pipeline architecture,

  2. Assemble large, complex data sets that meet functional / non-functional business requirements.

  3. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

  4. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.

  5. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.

  6. Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

  7. Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

  8. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

  9. Work with data and analytics experts to strive for greater functionality in our data systems.

 

About Recko: 

Recko was founded in 2017 to organise the world’s transactional information and provide intelligent applications to finance and product teams to make sense of the vast amount of data available. With the proliferation of digital transactions over the past two decades, enterprises, banks, and financial institutions are finding it difficult to keep track of the money flowing across their systems. With the Recko Platform, businesses can build, integrate, and adapt innovative and complex financial use cases within the organization and across external payment ecosystems with agility, confidence, and at scale. Today, customer-obsessed brands such as Deliveroo, Meesho, Grofers, Dunzo, Acommerce, etc. use Recko so their finance teams can optimize resources with automation and prioritize growth over repetitive and time-consuming tasks around day-to-day operations.

 

Recko is a Series A funded startup, backed by marquee investors like Vertex Ventures, Prime Venture Partners and Locus Ventures. Traditionally enterprise software is always built around functionality. We believe software is an extension of one’s capability, and it should be delightful and fun to use.

 

Working at Recko: 

We believe that great companies are built by amazing people. At Recko, We are a group of young Engineers, Product Managers, Analysts and Business folks who are on a mission to bring consumer tech DNA to enterprise fintech applications. The current team at Recko is 60+ members strong with stellar experience across fintech, e-commerce, digital domains at companies like Flipkart, PhonePe, Ola Money, Belong, Razorpay, Grofers, Jio, Oracle etc. We are growing aggressively across verticals.

Digital Banking Firm

Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹40L / yr
Apache Kafka
Hadoop
Spark
Apache Hadoop
Big Data
Location - Bangalore (Remote for now)
 
Designation - Sr. SDE (Platform Data Science)
 
About Platform Data Science Team

The Platform Data Science team works at the intersection of data science and engineering. Domain experts develop and advance platforms, including the data platforms, machine learning platform, other platforms for Forecasting, Experimentation, Anomaly Detection, Conversational AI, Underwriting of Risk, Portfolio Management, Fraud Detection & Prevention and many more. We also are the Data Science and Analytics partners for Product and provide Behavioural Science insights across Jupiter.
 
About the role:

We’re looking for strong Software Engineers that can combine EMR, Redshift, Hadoop, Spark, Kafka, Elastic Search, Tensorflow, Pytorch and other technologies to build the next generation Data Platform, ML Platform, Experimentation Platform. If this sounds interesting we’d love to hear from you!
This role will involve designing and developing software products that impact many areas of our business. The individual in this role will help define requirements, create software designs, implement code to these specifications, provide thorough unit and integration testing, and support products while they are deployed and used by our stakeholders.

Key Responsibilities:

Participate, Own & Influence in architecting & designing of systems
Collaborate with other engineers, data scientists, product managers
Build intelligent systems that drive decisions
Build systems that enable us to perform experiments and iterate quickly
Build platforms that enable scientists to train, deploy and monitor models at scale
Build analytical systems that drives better decision making
 

Required Skills:

Programming experience with at least one modern language such as Java or Scala, including object-oriented design
Experience in contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
Bachelor’s degree in Computer Science or related field
Computer Science fundamentals in object-oriented design
Computer Science fundamentals in data structures
Computer Science fundamentals in algorithm design, problem solving, and complexity analysis
Experience in databases, analytics, big data systems, or business intelligence products:
Data lake, data warehouse, ETL, ML platform
Big data tech like Hadoop and Apache Spark
Persistent Systems

at Persistent Systems

1 video
1 recruiter
Agency job
via Milestone Hr Consultancy by Haina khan
Bengaluru (Bangalore), Hyderabad, Pune
9 - 16 yrs
₹7L - ₹32L / yr
Big Data
Scala
Spark
Hadoop
Python
Greetings!

We have an urgent requirement for the post of Big Data Architect at a reputed MNC.
 
 


Location: Pune/Nagpur, Goa, Hyderabad/Bangalore

Job Requirements:

  • 9+ years of total experience, preferably in the big data space.
  • Creating spark applications using Scala to process data.
  • Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
  • Experience in spark job performance tuning and optimizations.
  • Should have experience in processing data using Kafka/Python.
  • Individual should have experience and understanding in configuring Kafka topics to optimize performance.
  • Should be proficient in writing SQL queries to process data in Data Warehouse.
  • Hands on experience in working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
  • Experience on AWS services like EMR.