Spark Jobs in Bangalore (Bengaluru)

Explore top Spark job opportunities in Bangalore (Bengaluru) from top companies and startups. All jobs are added by verified employees who can be contacted directly.

A leading provider of analytics solutions for the Telecom industry

Agency job
via Talengage by Himani Walia
Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
Responsibilities & Deliverables:

This position reports to the Platform Development Manager and is responsible for the design, development and deployment of platform components. As a software engineer on the Platform team, you will be responsible for:

- Design and development of Data platform components and feature-sets as specified in requirement specs

- Taking a research-and-innovation approach to deliver highly reliable and performant components that can support extremely high volumes of data and unforeseen scenarios

- Collaborate with peer engineers and architects on various technical projects to maintain and enhance platform capabilities.

- Participate in code and design review of different platform components

- Write and run comprehensive integration tests to deliver high-quality products

- Capacity planning and specification of resource requirements for different deployment scenarios

- Troubleshoot and fix any production bug in a timely manner

- Gather feedback from stakeholders for improvement in code-stack and feature-sets

- Assist the documentation team in producing customer documentation, support and deployment guides

- Adopt agile practices to track and update the status of assigned tasks/stories

Educational Qualification: B.E./B.Tech., M.E./M.Tech. or M.Sc. in Computer Science, or an equivalent degree from a premier institute

Skill Set:

- Good experience in Software Product development (B2B)

- Hands on programming experience in Java and Linux/Unix commands

- Practical knowledge of object-oriented programming concepts, data structures and algorithms

- Understanding of distributed in-memory computing

- Hands-on experience in Java/Scala and the Big Data tech stack (Apache Pinot/Druid/Hudi, HDFS, Hive, Presto, Flume, Storm, Kafka, Spark, NoSQL databases) is a must

- Good understanding of distributed systems and parallel computing

- Understanding of source code maintenance and CI/CD (Continuous Integration/Continuous Delivery) practices

- Familiarity with containerization and microservice architecture is a plus

Diggibyte Technologies

Agency job
via KUKULKAN by Pragathi P
Bengaluru (Bangalore)
2 - 3 yrs
₹10L - ₹15L / yr
Scala
Spark
Python
Microsoft Windows Azure
SQL Azure

Hiring For Data Engineer - Bangalore (Novel Tech Park)

Salary: up to ₹15 LPA

Experience: 3-5 years

  • We are looking for an experienced (3-5 years) Data Engineer to join our team in Bangalore.
  • Someone who can help clients build scalable, reliable, and secure data analytics solutions.

 

Technologies you will get to work with:

 

1. Azure Databricks

2. Azure Data Factory

3. Azure DevOps

4. Spark with Python & Scala, and Airflow scheduling.

 

What you will do:

 

* Build large-scale batch and real-time data pipelines with data processing frameworks like Spark (with Scala) on the Azure platform.

* Collaborate with other software engineers, ML engineers and stakeholders, taking learning and leadership opportunities that will arise every single day.

* Use best practices in continuous integration and delivery.

* Share technical knowledge with other members of the Data Engineering team.

* Work in multi-functional agile teams to continuously experiment, iterate and deliver on new product objectives.

* You will get to work with massive data sets and learn to apply the latest big data technologies on a leading-edge platform.
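The batch side of the pipeline work described above can be sketched, framework-agnostically, as an extract/transform/load sequence. The record fields and the sink are invented for illustration; on Azure Databricks these stages would typically become Spark reads, DataFrame transformations, and writes to Delta/ADLS rather than plain Python:

```python
# Minimal extract -> transform -> load sketch of a batch pipeline.
# All names and the record shape are hypothetical.

def extract():
    # Stand-in for reading from a source system (files, Kafka, a database).
    return [
        {"user": "a", "amount": 10},
        {"user": "b", "amount": -5},   # invalid record: negative amount
        {"user": "a", "amount": 7},
    ]

def transform(records):
    # Validate, then aggregate amount per user.
    totals = {}
    for r in records:
        if r["amount"] < 0:
            continue  # drop bad records (a real pipeline would quarantine them)
        totals[r["user"]] = totals.get(r["user"], 0) + r["amount"]
    return totals

def load(totals, sink):
    # Stand-in for writing to a warehouse table.
    sink.update(totals)

sink = {}
load(transform(extract()), sink)
print(sink)  # {'a': 17}
```

The same three-stage shape holds whether the job is a nightly batch or a streaming micro-batch; only the sources, sinks and execution engine change.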

 

Job Function: Information Technology

Employment Type: Full-time

Seniority Level: Mid / Entry level

Posted by Nelson Xavier
Bengaluru (Bangalore), Pune, Hyderabad
4 - 8 yrs
₹10L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more

Job responsibilities

- You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges

- You will pair to write clean and iterative code based on TDD

- Leverage various continuous delivery practices to deploy, support and operate data pipelines

- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

- Create data models and speak to the tradeoffs of different modeling approaches

- Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

- Encouraging open communication and advocating for shared outcomes
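As a toy illustration of the TDD point in the list above (the helper and its spec are hypothetical, not from any client codebase): the assertions are written first, then the function is implemented to make them pass.

```python
# Hypothetical TDD illustration on a small pipeline utility: the
# assertions below were written first ("red"), then
# normalize_column_name was implemented to make them pass ("green").

def normalize_column_name(name: str) -> str:
    """Normalize a raw column header into snake_case."""
    cleaned = name.strip().lower()
    for ch in (" ", "-", "/"):
        cleaned = cleaned.replace(ch, "_")
    return cleaned

# The tests that drove the implementation:
assert normalize_column_name("  Order Date ") == "order_date"
assert normalize_column_name("price/unit") == "price_unit"
print("all tests pass")
```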

 

Technical skills

- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Spark (Scala) and Hadoop

- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

- Hands-on experience in MapR, Cloudera, Hortonworks and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)

- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems

- You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

 



Professional skills

- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

- An interest in coaching, sharing your experience and knowledge with teammates

- You enjoy influencing others and always advocate for technical excellence while being open to change when needed

- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more


For an MNC currently offering remote working

Agency job
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Jaipur, Chandigarh
7 - 10 yrs
₹1L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more

Role: Data Engineer

Experience: 7 to 10 Years

Skills Required:

Languages: Python

- AWS: Glue, Lambda, Athena, Lake Formation, ECS, IAM, SQS, SNS, KMS

- Spark/PySpark (experience in AWS PaaS, not scoped only to EMR, or on-prem distributions like Cloudera)

 

Secondary skills

------------------------

- Airflow (Good to have understanding)

- PyTest or UnitTest (Any testing Framework)

- CI/CD: Drone, CircleCI, TravisCI or any other tool (understanding of how it works)

- Understanding of configuration formats (YAML, JSON, Starlark, etc.)

- Terraform (IaC)

- Kubernetes (Understanding of containers and their deployment)
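A minimal sketch of the configuration-format point above, using JSON from the standard library. The schema and key names are invented for illustration; YAML or Starlark would follow the same load-then-validate pattern with their own parsers:

```python
# Sketch of driving a pipeline from a declarative config, here JSON via
# the standard library. The schema (keys like "schedule" and "tasks")
# is made up for this example.
import json

raw = """
{
  "pipeline": "daily_ingest",
  "schedule": "0 2 * * *",
  "tasks": [
    {"name": "extract", "retries": 3},
    {"name": "load", "retries": 1}
  ]
}
"""

config = json.loads(raw)
# Validate the fields a scheduler would actually depend on.
assert config["pipeline"]
assert all("name" in t for t in config["tasks"])
print([t["name"] for t in config["tasks"]])  # ['extract', 'load']
```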

 

Bengaluru (Bangalore)
3 - 7.5 yrs
₹10L - ₹25L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Spark
Software deployment
+1 more
Job ID: ZS0701

Hi,

We are hiring a Data Scientist for Bangalore.

Req Skills:

  • NLP 
  • ML programming
  • Spark
  • Model Deployment
  • Experience processing unstructured data and building NLP models
  • Experience with big data tools such as PySpark
  • Pipeline orchestration using Airflow and model deployment experience is preferred

at PayU

Posted by Vishakha Sonde
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹20L / yr
Python
ETL
Data engineering
Informatica
SQL
+2 more

Role: Data Engineer  
Company: PayU

Location: Bangalore/ Mumbai

Experience : 2-5 yrs


About Company:

PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities.

The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks to merchants, allowing consumers to use credit in ways that suit them and enabling a greater number of global citizens to access credit services.

Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa and South East Asia enable us to combine the expertise of high growth companies with our own unique local knowledge and technology to ensure that our customers have access to the best financial services.

India is the biggest market for PayU globally, and the company has already invested $400 million in this region in the last 4 years. PayU, in its next phase of growth, is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We are going to do this through three mechanisms: build; co-build/partner; and select strategic investments.

PayU supports over 350,000+ merchants and millions of consumers making payments online with over 250 payment methods and 1,800+ payment specialists. The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and a huge growth potential for merchants. 

Job responsibilities:

  • Design infrastructure for data, especially for but not limited to consumption in machine learning applications 
  • Define database architecture needed to combine and link data, and ensure integrity across different sources 
  • Ensure performance of data systems for machine learning, from customer-facing web and mobile applications using cutting-edge open-source frameworks, to highly available RESTful services, to back-end Java-based systems 
  • Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data handling techniques if needed 
  • Build data pipelines: implement, test, and maintain infrastructural components related to the data engineering stack.
  • Work closely with Data Engineers, ML Engineers and SREs to gather data engineering requirements to prototype, develop, validate and deploy data science and machine learning solutions

Requirements to be successful in this role: 

  • Strong knowledge and experience in Python, Pandas, Data wrangling, ETL processes, statistics, data visualisation, Data Modelling and Informatica.
  • Strong experience with scalable compute solutions such as Kafka and Snowflake
  • Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions etc. 
  • Strong experience with data engineering practices (i.e. data ingestion pipelines and ETL) 
  • A good understanding of machine learning methods, algorithms, pipelines, testing practices and frameworks 
  • (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics, or equivalent (preference: DS/AI) 
  • Experience with designing and implementing tools that support sharing of data, code, practices across organizations at scale 

Tier 1 MNC

Agency job
Chennai, Pune, Bengaluru (Bangalore), Noida, Gurugram, Kochi (Cochin), Coimbatore, Hyderabad, Mumbai, Navi Mumbai
3 - 12 yrs
₹3L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more
Greetings,
We are hiring for a Tier 1 MNC: a software developer with good knowledge of Spark, Hadoop and Scala.
Bengaluru (Bangalore), Mangalore
4 - 8 yrs
Best in industry
Data engineering
SQL
Spark
Apache
HiveQL
+1 more
Roles and Responsibilities:
• Ingest data from files, streams and databases; process the data with Apache Kafka, Spark, Google Firestore and Google BigQuery
• Drive Data Foundation initiatives, like modelling, data quality management, data governance, data maturity assessments and data strategy, in support of the key business stakeholders
• Implement ETL processes using Google BigQuery
• Monitor performance and advise on any necessary infrastructure changes
• Implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies such as PySpark, Kafka and Google BigQuery
• Select and integrate any Big Data tools and frameworks required to provide requested capabilities
• Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems
• Develop efficient software code for multiple use cases built on the platform, leveraging Python and Big Data technologies
• Provide high operational excellence, guaranteeing high availability and platform stability

Desired Profile:
• Deep understanding of the ecosystem, including ingestion (e.g. Kafka, Kinesis, Apache Airflow), processing frameworks (e.g. Spark, Flink) and storage engines (e.g. Google Firestore, Google BigQuery)
• In-depth understanding of BigQuery architecture, table partitioning, clustering, best practices, table types, etc.
• Should know how to reduce BigQuery costs by reducing the amount of data processed by your queries
• Practical knowledge of Kafka for building real-time streaming data pipelines and applications that adapt to the data streams
• Should be able to speed up queries by using denormalized data structures, with or without nested repeated fields
• Experience implementing ETL jobs using BigQuery
• Understanding of BigQuery ML
• Knowledge of the latest database technologies like MongoDB, Cassandra, Databricks, etc.
• Experience with various messaging systems, such as Kafka or RabbitMQ
• Experience in GCP and managed services of GCP
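To make the cost point above concrete: BigQuery's on-demand pricing charges by bytes scanned, so the two usual levers are projecting only the needed columns (storage is columnar) and filtering on the partition column so whole partitions are pruned. A hedged sketch follows; the project, dataset, table and column names are invented, and in practice the SQL would be executed with the google-cloud-bigquery client rather than built as a plain string:

```python
# Hypothetical sketch of a cost-conscious BigQuery query builder.
# _PARTITIONDATE is the pseudo-column of an ingestion-time-partitioned
# table; filtering on it lets BigQuery skip entire partitions.
# All other names are invented for illustration.

def build_events_query(columns, start_date, end_date):
    """Build a query that scans only the needed columns and partitions."""
    if not columns:
        raise ValueError("never fall back to SELECT * - it scans every column")
    col_list = ", ".join(columns)
    return (
        f"SELECT {col_list} "
        "FROM `my_project.analytics.events` "
        f"WHERE _PARTITIONDATE BETWEEN '{start_date}' AND '{end_date}'"
    )

query = build_events_query(["user_id", "event_name"], "2022-01-01", "2022-01-07")
print(query)
```

The same pruning idea applies to clustered tables, where filtering on the clustering columns narrows the blocks read.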
Posted by Payal Joshi
Bengaluru (Bangalore)
5 - 10 yrs
Best in industry
Data Science
Machine Learning (ML)
Computer Vision
Forecasting
Python
+3 more

Qualifications

  • 5+ years of professional experience in experiment design and applied machine learning predicting outcomes in large-scale, complex datasets.
  • Proficiency in Python, Azure ML, or other statistics/ML tools.
  • Proficiency in Deep Neural Network, Python based frameworks.
  • Proficiency in Azure DataBricks, Hive, Spark.
  • Proficiency in deploying models into production (Azure stack).
  • Moderate coding skills. SQL or similar required. C# or other languages strongly preferred.
  • Outstanding communication and collaboration skills. You can learn from and teach others.
  • Strong drive for results. You have a proven record of shepherding experiments to create successful shipping products/services.
  • Experience with prediction in adversarial (energy) environments highly desirable.
  • Understanding of the model development ecosystem across platforms, including development, distribution, and best practices, highly desirable.


As a dedicated Data Scientist on our Research team, you will apply data science and your machine learning expertise to enhance our intelligent systems to predict and provide proactive advice. You’ll work with the team to identify and build features, create experiments, vet ML models, and ship successful models that provide value additions for hundreds of EE customers.

At EE, you’ll have access to vast amounts of energy-related data from our sources. Our data pipelines are curated and supported by engineering teams (so you won't have to do much data engineering - you get to do the fun stuff.) We also offer many company-sponsored classes and conferences that focus on data science and ML. There’s great growth opportunity for data science at EE.

Posted by Lokesh Manikappa
Bengaluru (Bangalore)
5 - 12 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data modeling
Spark
+5 more

Job Description

The applicant must have a minimum of 5 years of hands-on IT experience, working on a full software lifecycle in Agile mode.

Good to have experience in data modeling and/or systems architecture.
Responsibilities will include technical analysis, design, development and enhancements.

You will participate in all/most of the following activities:
- Working with business analysts and other project leads to understand requirements.
- Modeling and implementing database schemas in DB2 UDB or other relational databases.
- Designing, developing, maintaining and processing data using Python, DB2, Greenplum, Autosys and other technologies

 

Skills /Expertise Required :

Work experience in developing large-volume databases (DB2/Greenplum/Oracle/Sybase).

Good experience in writing stored procedures, integration of database processing, tuning and optimizing database queries.

Strong knowledge of table partitions, high-performance loading and data processing.
Good to have hands-on experience working with Perl or Python.
Hands-on development using Spark / KDB / Greenplum platforms will be a strong plus.
Designing, developing, maintaining and supporting Data Extract, Transform and Load (ETL) software using Informatica, Shell Scripts, DB2 UDB and Autosys.
Coming up with system architecture/re-design proposals for greater efficiency and ease of maintenance and developing software to turn proposals into implementations.

Need to work with business analysts and other project leads to understand requirements.
Strong collaboration and communication skills


A SaaS company striving to make selling fun with its SaaS incentive gamification product

Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹25L / yr
Relational Database (RDBMS)
PostgreSQL
MySQL
Python
Spark
+6 more

What is the role?

You will be responsible for developing and designing front-end web architecture, ensuring the responsiveness of applications, and working alongside graphic designers for web design features, among other duties. You will be responsible for the functional/technical track of the project

Key Responsibilities

  • Develop and automate large-scale, high-performance data processing systems (batch and/or streaming).
  • Build high-quality software engineering practices towards building data infrastructure and pipelines at scale.
  • Lead data engineering projects to ensure pipelines are reliable, efficient, testable, & maintainable
  • Optimize performance to meet high throughput and scale

What are we looking for?

  • 4+ years of relevant industry experience.
  • Working with data at the terabyte scale.
  • Experience designing, building and operating robust distributed systems.
  • Experience designing and deploying high throughput and low latency systems with reliable monitoring and logging practices.
  • Building and leading teams.
  • Working knowledge of relational databases like PostgreSQL/MySQL.
  • Experience with Python / Spark / Kafka / Celery
  • Experience working with OLTP and OLAP systems
  • Excellent communication skills, both written and verbal.
  • Experience working in cloud e.g., AWS, Azure or GCP

Whom will you work with?

You will work with a top-notch tech team, working closely with the architect and engineering head.

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at this company

We are

We strive to make selling fun with our SaaS incentive gamification product. Company is the #1 gamification software that automates and digitizes Sales Contests and Commission Programs. With game-like elements, rewards, recognitions, and complete access to relevant information, Company turbocharges an entire salesforce. Company also empowers Sales Managers with easy-to-publish game templates, leaderboards, and analytics to help accelerate performances and sustain growth.

We are a fun and high-energy team, with people from diverse backgrounds - united under the passion of getting things done. Rest assured that you shall get complete autonomy in your tasks and ample opportunities to develop your strengths.

Way forward

If you find this role exciting and want to join us in Bangalore, India, then apply by clicking below. Provide your details and upload your resume. All received resumes will be screened; shortlisted candidates will be invited for a discussion, and on mutual alignment and agreement we will proceed with hiring.

 

Product based company

Agency job
via Zyvka Global Services by Ridhima Sharma
Bengaluru (Bangalore)
3 - 12 yrs
₹5L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

Responsibilities:

  • Should act as a technical resource for the Data Science team and be involved in creating and implementing current and future Analytics projects like data lake design, data warehouse design, etc.
  • Analysis and design of ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems etc.
  • Developing and maintaining data pipelines for real time analytics as well as batch analytics use cases.
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phase of model building
  • Collaborate with product development and dev ops teams in implementing the data collection and aggregation solutions
  • Ensure quality and consistency of the data in Data warehouse and follow best data governance practices
  • Analyse large amounts of information to discover trends and patterns
  • Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.

Requirements

  • Bachelor’s or Masters in a highly numerate discipline such as Engineering, Science and Economics
  • 2-6 years of proven experience working as a Data Engineer preferably in ecommerce/web based or consumer technologies company
  • Hands-on experience of working with different big data tools like Hadoop, Spark, Flink, Kafka and so on
  • Good understanding of AWS ecosystem for big data analytics
  • Hands on experience in creating data pipelines either using tools or by independently writing scripts
  • Hands on experience in scripting languages like Python, Scala, Unix Shell scripting and so on
  • Strong problem solving skills with an emphasis on product development.
  • Experience using business intelligence tools e.g. Tableau, Power BI would be an added advantage (not mandatory)
Pune, Bengaluru (Bangalore), Coimbatore, Hyderabad, Gurugram
3 - 10 yrs
₹18L - ₹40L / yr
Apache Kafka
Spark
Hadoop
Apache Hive
Big Data
+5 more

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.



You’ll spend time on the following:

  • You will partner with teammates to create complex data processing pipelines in order to solve our clients’ most ambitious challenges
  • You will collaborate with Data Scientists in order to design scalable implementations of their models
  • You will pair to write clean and iterative code based on TDD
  • Leverage various continuous delivery practices to deploy data pipelines
  • Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
  • Develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
  • Create data models and speak to the tradeoffs of different modeling approaches

Here’s what we’re looking for:

 

  • You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
  • You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
  • Hands-on experience in MapR, Cloudera, Hortonworks and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
  • You are comfortable taking data-driven approaches and applying data security strategy to solve business problems 
  • Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
  • Strong communication and client-facing skills with the ability to work in a consulting environment

Top 3 Fintech Startup

Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
6 - 9 yrs
₹16L - ₹24L / yr
SQL
Amazon Web Services (AWS)
Spark
PySpark
Apache Hive

We are looking for an exceptionally talented Lead Data Engineer with exposure to implementing AWS services to build data pipelines, API integration and data warehouse design. A candidate with both hands-on and leadership capabilities will be ideal for this position.

 

Qualification: At least a bachelor's degree in Science, Engineering or Applied Mathematics; a master's degree is preferred.

 

Job Responsibilities:

• Total 6+ years of experience as a Data Engineer and 2+ years of experience in managing a team

• Have minimum 3 years of AWS Cloud experience.

• Well-versed in languages such as Python, PySpark, SQL, NodeJS, etc.

• Has extensive experience in the real-time Spark ecosystem and has worked on both real-time and batch processing

• Have experience in AWS Glue, EMR, DMS, Lambda, S3, DynamoDB, Step functions, Airflow, RDS, Aurora etc.

• Experience with modern Database systems such as Redshift, Presto, Hive etc.

• Worked on building data lakes in the past on S3 or Apache Hudi

• Solid understanding of Data Warehousing Concepts

• Good to have experience on tools such as Kafka or Kinesis

• Good to have AWS Developer Associate or Solutions Architect Associate Certification

• Have experience in managing a team

Posted by Shanu Mohan
Gurugram, Mumbai, Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹17L / yr
Python
PySpark
Amazon Web Services (AWS)
Spark
Scala
+2 more
  • Hands-on experience in any cloud platform
  • Versed in Spark, Scala/Python, SQL
  • Microsoft Azure experience
  • Experience working on real-time data processing pipelines

Persistent System Ltd

Agency job
via Milestone Hr Consultancy by Haina khan
Bengaluru (Bangalore), Pune, Hyderabad
4 - 6 yrs
₹6L - ₹22L / yr
Apache HBase
Apache Hive
Apache Spark
Go Programming (Golang)
Ruby on Rails (ROR)
+5 more
Urgently require a Hadoop Developer for a reputed MNC

Location: Bangalore/Pune/Hyderabad/Nagpur

4-5 years of overall experience in software development.
- Experience on Hadoop (Apache/Cloudera/Hortonworks) and/or other Map Reduce Platforms
- Experience on Hive, Pig, Sqoop, Flume and/or Mahout
- Experience on NoSQL – HBase, Cassandra, MongoDB
- Hands on experience with Spark development,  Knowledge of Storm, Kafka, Scala
- Good knowledge of Java
- Good background of Configuration Management/Ticketing systems like Maven/Ant/JIRA etc.
- Knowledge around any Data Integration and/or EDW tools is plus
- Good to have knowledge of Python/Perl/Shell

 

Please note: HBase, Hive and Spark are a must.

Posted by Karunya P
Bengaluru (Bangalore), Hyderabad
1 - 9 yrs
₹1L - ₹15L / yr
SQL
Python
Hadoop
HiveQL
Spark
+1 more

Responsibilities:

 

* 3+ years of Data Engineering Experience - Design, develop, deliver and maintain data infrastructures.

* SQL specialist – strong knowledge of and seasoned experience with SQL queries

Languages: Python

* Good communicator, shows initiative, works well with stakeholders.

* Experience working closely with Data Analysts, providing the data they need and guiding them on issues.

* Solid ETL experience and Hadoop/Hive/Pyspark/Presto/ SparkSQL

* Solid communication and articulation skills

* Able to handle stakeholders independently, with minimal intervention from the reporting manager.

* Develop strategies to solve problems in logical yet creative ways.

* Create custom reports and presentations accompanied by strong data visualization and storytelling

 

We would be excited if you have:

 

* Excellent communication and interpersonal skills

* Ability to meet deadlines and manage project delivery

* Excellent report-writing and presentation skills

* Critical thinking and problem-solving capabilities


Persistent System Ltd

Agency job
via Milestone Hr Consultancy by Haina khan
Pune, Bengaluru (Bangalore), Hyderabad
4 - 9 yrs
₹8L - ₹27L / yr
Python
PySpark
Amazon Web Services (AWS)
Spark
Scala
Greetings,

We have an urgent requirement for a Data Engineer/Sr. Data Engineer at a reputed MNC.

Exp: 4-9 yrs

Location: Pune/Bangalore/Hyderabad

Skills: We need candidates with either Python + AWS, PySpark + AWS, or Spark + Scala.

PRODUCT ENGINEERING BASED MNC

Agency job
via Exploro Solutions by ramya ramchandran
Bengaluru (Bangalore)
5 - 8 yrs
₹12L - ₹25L / yr
Scala
Spark
Apache Spark
ETL
SQL

  • Strong experience in SQL and relational databases

  • Good programming exp on Scala & Spark
  • Good experience in ETL batch data pipeline development and migration/upgrading
  • Python – Good to have.
  • AWS – Good to have
  • Knowledgeable in the areas of Big data/Hadoop/S3/HIVE. Nice to have exp on ETL frameworks (ex: Airflow, Flume, Oozie etc.)
  • Ability to work independently, take ownership and strong troubleshooting/debugging skills
  • Good communication and collaboration skills

A platform powered by machine learning. (TE1)

Agency job
via Multi Recruit by Paramesh P
Bengaluru (Bangalore)
1.5 - 4 yrs
₹8L - ₹16L / yr
Scala
Java
Spark
Hadoop
Rest API
+1 more
  • Involvement in the overall application lifecycle
  • Design and develop software applications in Scala and Spark
  • Understand business requirements and convert them to technical solutions
  • Rest API design, implementation, and integration
  • Collaborate with Frontend developers and provide mentorship for Junior engineers in the team
  • An interest and preferably working experience in agile development methodologies
  • A team player, eager to invest in personal and team growth
  • Staying up to date with cutting edge technologies and best practices
  • Advocate for improvements to product quality, security, and performance

 

Desired Skills and Experience

  • At least 1.5 years of development experience in Scala/Java
  • Strong understanding of the development cycle, programming techniques, and tools.
  • Strong problem-solving and verbal and written communication skills.
  • Experience in web development using J2EE or similar frameworks
  • Experience in developing REST APIs
  • BE in Computer Science
  • Experience with Akka or microservices is a plus
  • Experience with Big Data technologies like Spark/Hadoop is a plus. The company offers very competitive compensation packages commensurate with your experience, full benefits, continual career & compensation growth, and many other perks.

 

Read more

Big revolution in the e-gaming industry. (GK1)

Agency job
via Multi Recruit by Ayub Pasha
icon
Bengaluru (Bangalore)
icon
2 - 3 yrs
icon
₹15L - ₹20L / yr
Python
Scala
Hadoop
Spark
Data Engineer
+4 more
  • We are looking for a Data Engineer to build the next-generation mobile applications for our world-class fintech product.
  • The candidate will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams.
  • The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up.
  • Looking for a person with a strong ability to analyse and provide valuable insights to the product and business team to solve daily business problems.
  • You should be able to work in a high-volume environment and have outstanding planning and organisational skills.

 

Qualifications for Data Engineer

 

  • Working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimising ‘big data’ data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Looking for a candidate with 2-3 years of experience in a Data Engineer role who is a CS graduate or has equivalent experience.

 

What we're looking for:

 

  • Experience with big data tools: Hadoop, Spark, Kafka, and alternatives.
  • Experience with relational SQL and NoSQL databases, including MySQL/Postgres and MongoDB.
  • Experience with data pipeline and workflow management tools: Luigi, Airflow.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark Streaming.
  • Experience with object-oriented/functional scripting languages: Python, Java, Scala.
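For context on the stream-processing bullet, a tumbling-window count — the basic primitive behind systems like Storm and Spark Streaming — can be sketched in plain Python. This is a single-process, illustrative version only; the event shapes and window size are hypothetical:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Group (timestamp, key) events into fixed, non-overlapping
    time windows and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = (ts // window_secs) * window_secs
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Events arriving with epoch-second timestamps, counted in 60 s windows.
events = [(0, "click"), (10, "view"), (59, "click"), (61, "click"), (130, "view")]
print(tumbling_window_counts(events, 60))
```

A real streaming engine does the same grouping continuously and in a distributed, fault-tolerant way, which is what the tools above provide.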
Read more
Posted by Rajesh C
icon
Bengaluru (Bangalore), Chennai
icon
12 - 15 yrs
icon
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
Data Architect is responsible for
• Demonstrated technology and personal leadership experience in architecting,
designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration,
data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration
frameworks, highly scalable distributed systems using open source and emerging data
architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is
highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor
technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory
databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies
is desirable.
• This role requires 15+ years of data solution architecture, design and development
delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate
problems. Current on relational databases and NoSQL databases on cloud.
• Proven leadership skills; demonstrated ability to mentor, influence, and partner with
cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage etc.)/ELT
and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and
Spark.
• Creative view of markets and technologies combined with a passion to create the
future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data
Lake knowledge is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop architecture and Hive SQL
• Knowledge of at least one workflow orchestration tool
• Understanding of Agile framework and delivery

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure in Workflow Orchestration like Airflow is a plus
● Exposure in any one of the NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a
plus
● Understanding of Digital web events, ad streams, context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aimed to create a company culture where data was the
common language and facilitate an environment where insights shared in real-time could
improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
Read more
icon
Bengaluru (Bangalore), Chennai
icon
10 - 14 yrs
icon
₹40L - ₹60L / yr
Engineering Management
Python
Spark
Java
Big Data
+3 more

Job Title: Engineering Manager

Job Location: Chennai, Bangalore
Job Summary
The Engineering Org is looking for a proficient Engineering Manager to join a team that is building exciting
and futuristic Data Products at Condé Nast to enable both internal and external marketers to target
audiences in real time. As an Engineering Manager, you will drive the day-to-day execution of technical
and architectural decisions. The EM will own engineering deliverables, inclusive of solving dependencies
such as architecture, solutions, sequencing, and working with other engineering delivery teams. This role
is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will
influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
● Manage a high performing team of Software and Data Engineers within the Data & ML
Engineering team part of Engineering Data Organization.
● Provide leadership and guidance to the team in Data Discovery, Data Ingestion, Transformation
and Storage
● Utilizing product mindset to build, scale and deploy holistic data products after successful
prototyping and drive their engineering implementation
● Provide technical coaching and lead direct reports and other members of adjacent support teams
to the highest level of performance.
● Evaluate performance of direct reports and offer career development guidance.
● Meeting hiring and retention targets of the team & building a high-performance culture
● Handle escalations from internal stakeholders and manage critical issues to resolution.
● Collaborate with Architects, Product Manager, Project Manager and other teams to deliver high
quality products.
● Identify recurring system and application issues and enable engineers to work with release teams,
infra teams, product development, vendors and other stakeholders in investigating and resolving
the cause.
Required Skills
● 4+ years of managing Software Development teams, preferably in ML and Data
Engineering teams.
● 4+ years of Agile Software development practices
● 12+ years of Software Development experience.
● Excellent Problem Solving and System Design skill
● Hands on: Writing and Reviewing code primarily in Spark, Python and/or Java
● Hands on: Architect & design end-to-end data pipelines (NoSQL databases, job schedulers, Big
Data development, preferably on Databricks / cloud)
● Experience with SOA & Microservice architecture
● Knowledge of Software Engineering best practices with experience on implementing CI/CD,
Log aggregation/Monitoring/alerting for production system
● Working Knowledge of cloud and devops skills (AWS will be preferred)
● Strong verbal and written communication skills.
● Experience in evaluating team member performance and offering career development
guidance.
● Experience in providing technical coaching to direct reports.
● Experience in architecting highly scalable products.
● Experience in collaborating with global stakeholder teams.
● Experience in working on highly available production systems.
● Strong knowledge of software release process and release pipeline.

Read more

Snapblocs

Agency job
via wrackle by Naveen Taalanki
icon
Bengaluru (Bangalore)
icon
3 - 10 yrs
icon
₹20L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+1 more
• You should hold a B.Tech/M.Tech degree.
• You should have 5 to 10 years of experience, with a minimum of 3 years working in a data-driven company/platform.
• Competency in core Java is a must.
• You should have worked with distributed data processing frameworks like Apache Spark, Apache Flink, or Hadoop.
• You should be a team player with an open mind, approaching problems with the right tools and technologies while working with the team.
• You should have knowledge of frameworks & distributed systems and be good at algorithms, data structures, and design patterns.
• You should have an in-depth understanding of big data technologies and NoSQL databases (Kafka, HBase, Spark, Cassandra, MongoDB, etc.).
• Work experience with the AWS cloud platform, Spring Boot, and API development will be a plus.
• You should have exceptional problem-solving and analytical abilities, and organisation skills with an eye for detail.
Read more
icon
Chennai, Bengaluru (Bangalore)
icon
10 - 13 yrs
icon
₹30L - ₹50L / yr
Engineering Management
Engineering Manager
Machine Learning (ML)
Deep Learning
Python
+2 more
Job Title: Engineering Manager
Job Location: Chennai
Job Summary
The Engineering Org is looking for a proficient Engineering Manager to join a team that is building exciting
and futuristic Data Products at Condé Nast to enable both internal and external marketers to target
audiences in real time. As an Engineering Manager, you will drive the day-to-day execution of technical
and architectural decisions. The EM will own engineering deliverables, inclusive of solving dependencies
such as architecture, solutions, sequencing, and working with other engineering delivery teams. This role
is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will
influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
● Manage a high performing team of Software and Data Engineers within the Data & ML
Engineering team part of Engineering Data Organization.
● Provide leadership and guidance to the team in Data Discovery, Data Ingestion, Transformation
and Storage
● Utilizing product mindset to build, scale and deploy holistic data products after successful
prototyping and drive their engineering implementation
● Provide technical coaching and lead direct reports and other members of adjacent support teams
to the highest level of performance.
● Evaluate performance of direct reports and offer career development guidance.
● Meeting hiring and retention targets of the team & building a high-performance culture
● Handle escalations from internal stakeholders and manage critical issues to resolution.
● Collaborate with Architects, Product Manager, Project Manager and other teams to deliver high
quality products.
● Identify recurring system and application issues and enable engineers to work with release teams,
infra teams, product development, vendors and other stakeholders in investigating and resolving
the cause.
Required Skills
● 4+ years of managing Software Development teams, preferably in ML and Data
Engineering teams.
● 4+ years of Agile Software development practices
● 12+ years of Software Development experience.
● Excellent Problem Solving and System Design skill
● Hands on: Writing and Reviewing code primarily in Spark, Python and/or Java
● Hands on: Architect & design end-to-end data pipelines (NoSQL databases, job schedulers, Big
Data development, preferably on Databricks / cloud)
● Experience with SOA & Microservice architecture
● Knowledge of Software Engineering best practices with experience on implementing CI/CD,
Log aggregation/Monitoring/alerting for production system
● Working Knowledge of cloud and devops skills (AWS will be preferred)
● Strong verbal and written communication skills.
● Experience in evaluating team member performance and offering career development
guidance.
● Experience in providing technical coaching to direct reports.
● Experience in architecting highly scalable products.
● Experience in collaborating with global stakeholder teams.
● Experience in working on highly available production systems.
● Strong knowledge of software release process and release pipeline.
Read more

Global SaaS product built to help revenue teams. (TP1)

Agency job
via Multi Recruit by Kavitha S
icon
Bengaluru (Bangalore)
icon
1 - 5 yrs
icon
₹30L - ₹40L / yr
Spark
Data Engineer
Airflow
SQL
No SQL
+1 more
  • 3-6 years of relevant work experience in a Data Engineering role.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimizing data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • A good understanding of Airflow, Spark, NoSQL databases, Kafka is nice to have.
  • Premium Institute Candidates only
Read more

Leading Sales Enabler

Agency job
via Qrata by Blessy Fernandes
icon
Bengaluru (Bangalore)
icon
5 - 10 yrs
icon
₹25L - ₹40L / yr
ETL
Spark
Python
Amazon Redshift
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow.
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
Read more
Posted by Rajesh C
icon
Bengaluru (Bangalore), Noida
icon
5 - 9 yrs
icon
₹10L - ₹17L / yr
Data engineering
Spark
Scala
Hadoop
Apache Hadoop
+1 more
  • We are looking for: Data Engineer
  • Spark
  • Scala
  • Hadoop
Exp: 5 to 9 years
Notice period: 15 to 30 days
Location: Bangalore / Noida
Read more

AI-powered Growth Marketing platform

Agency job
via Jobdost by Sathish Kumar
icon
Mumbai, Bengaluru (Bangalore)
icon
2 - 7 yrs
icon
₹8L - ₹25L / yr
Java
NOSQL Databases
MongoDB
Cassandra
Apache
+3 more
The Impact You Will Create
  • Build campaign generation services which can send app notifications at a speed of 10 million a minute
  • Dashboards to show real-time key performance indicators to clients
  • Develop complex user segmentation engines which create segments on terabytes of data within a few seconds
  • Building highly available & horizontally scalable platform services for ever-growing data
  • Use cloud-based services like AWS Lambda for blazing-fast throughput & auto-scalability
  • Work on complex analytics on terabytes of data, like building cohorts, funnels, user path analysis, and Recency, Frequency & Monetary (RFM) analysis at blazing speed
  • You will build backend services and APIs to create scalable engineering systems.
  • As an individual contributor, you will tackle some of our broadest technical challenges that require deep technical knowledge, hands-on software development, and seamless collaboration with all functions.
  • You will envision and develop features that are highly reliable and fault tolerant to deliver a superior customer experience.
  • Collaborate with various highly functional teams in the company to meet deliverables throughout the software development lifecycle.
  • Identify and improve areas of improvement through data insights and research.
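The Recency, Frequency & Monetary analysis mentioned in the responsibilities can be illustrated with a minimal single-machine sketch (the real computation would run distributed over terabytes; the transaction shape here is hypothetical):

```python
from datetime import date

def rfm(transactions, today):
    """Compute per-user Recency (days since last purchase),
    Frequency (number of purchases) and Monetary (total spend)."""
    acc = {}
    for user, day, amount in transactions:
        r = acc.setdefault(user, {"last": day, "frequency": 0, "monetary": 0.0})
        r["last"] = max(r["last"], day)   # most recent purchase date
        r["frequency"] += 1
        r["monetary"] += amount
    return {
        u: {"recency": (today - r["last"]).days,
            "frequency": r["frequency"],
            "monetary": r["monetary"]}
        for u, r in acc.items()
    }

txns = [
    ("u1", date(2022, 1, 1), 50.0),
    ("u1", date(2022, 1, 20), 30.0),
    ("u2", date(2022, 1, 10), 200.0),
]
print(rfm(txns, today=date(2022, 2, 1)))
```

At scale the same per-user aggregation maps naturally onto a Spark `groupBy` over the user key.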
What we look for?
  • 2-5 years of experience in backend development; must have worked with Java/shell/Perl/Python scripting.
  • Solid understanding of engineering best practices, continuous integration, and incremental delivery.
  • Strong analytical skills, debugging and troubleshooting skills, product line analysis.
  • Follower of agile methodology (sprint planning, working on JIRA, retrospectives, etc.).
  • Proficiency in tools like Docker, Maven, and Jenkins, and knowledge of Java frameworks like Spring, Spring Boot, Hibernate, and JPA.
  • Ability to design application modules using concepts like object orientation, multi-threading, synchronization, caching, fault tolerance, sockets, various IPCs, database interfaces, etc.
  • Hands-on experience with Redis, MySQL, streaming technologies like Kafka producers/consumers, and NoSQL databases like MongoDB/Cassandra.
  • Knowledge of version control tools like Git and deployment processes like CI/CD.
Read more

Into designing a generic ML platform team as a product

Agency job
via Qrata by Blessy Fernandes
icon
Bengaluru (Bangalore)
icon
3 - 6 yrs
icon
₹20L - ₹25L / yr
Python
AWS Lambda
Spark

Requirements

  • 3+ years of work experience with production-grade Python. Contribution to open-source repos is preferred.
  • Experience writing concurrent and distributed programs; AWS Lambda, Kubernetes, Docker, Spark experience is preferred.
  • Experience with one relational & one non-relational DB is preferred.
  • Prior work in the ML domain will be a big boost.

What You’ll Do

  • Help realize the product vision: production-ready machine learning models with monitoring within moments, not months.
  • Help companies deploy their machine learning models at scale across a wide range of use-cases and sectors.
  • Build integrations with other platforms to make it easy for our customers to use our product without changing their workflow.
  • Write maintainable, scalable, performant Python code.
  • Build gRPC and REST API servers.
  • Work with Thrift, Protobufs, etc.
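As a rough illustration of the REST-API-server bullet, a minimal model-serving endpoint can be written with only the Python standard library. The `/predict` route and the stand-in scoring function are hypothetical, invented for this sketch, not part of any product mentioned here:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Stand-in for a real model: a hypothetical mean-of-features scorer."""
    return sum(features) / len(features)

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and score it.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        payload = json.dumps({"score": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the example quiet

# Bind an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

A production service would use a proper framework (or gRPC, as the listing mentions), but the request/response shape is the same.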
Read more

a global provider of Business Process Management company

Agency job
via Jobdost by Saida Jabbar
icon
Bengaluru (Bangalore)
icon
4 - 10 yrs
icon
₹15L - ₹22L / yr
SQL Azure
ADF
Business process management
Windows Azure
SQL
+12 more

Desired Competencies:

 

•  Expertise in Azure Data Factory V2

•  Expertise in other Azure components like Data Lake Store, SQL Database, Databricks

•  Must have working knowledge of Spark programming

•  Good exposure to data projects dealing with data design and source-to-target documentation, including defining transformation rules

•  Strong knowledge of the CI/CD process

•  Experience in building Power BI reports

•  Understanding of different components like pipelines, activities, datasets & linked services

•  Exposure to dynamic configuration of pipelines using datasets and linked services

•  Experience in designing, developing, and deploying pipelines to higher environments

•  Good knowledge of file formats for flexible usage and file location objects (SFTP, FTP, local, HDFS, ADLS, BLOB, Amazon S3, etc.)

•  Strong knowledge of SQL queries

•  Must have worked in full life-cycle development from functional design to deployment

•  Should have working knowledge of Git, SVN

•  Good experience in establishing connections with heterogeneous sources like Hadoop, Hive, Amazon, Azure, Salesforce, SAP, HANA, APIs, various databases, etc.

•  Should have working knowledge of different resources available in Azure like Storage Account, Synapse, Azure SQL Server, Azure Databricks, Azure Purview

•  Any experience related to metadata management, data modelling, and related tools (Erwin, ER Studio, or others) would be preferred

 

Preferred Qualifications:

•  Bachelor's degree in Computer Science or Technology

•  Proven success in contributing to a team-oriented environment

•  Proven ability to work creatively and analytically in a problem-solving environment

•  Excellent communication (written and oral) and interpersonal skills

Qualifications

BE/BTECH

KEY RESPONSIBILITIES :

You will join a team designing and building a data warehouse covering both relational and dimensional models, developing reports, data marts, and other extracts, and delivering these via SSIS, SSRS, SSAS, and Power BI. The role plays a vital part in delivering a single version of the truth on the client's data and in delivering the MI & BI that enable both operational and strategic decision making.

You will be able to take responsibility for projects over the entire software lifecycle and work with minimum supervision. This would include technical analysis, design, development, and test support as well as managing the delivery to production.

The initial project being resourced is around the development and implementation of a Data Warehouse and associated MI/BI functions.

 

Principal Activities:

1. Interpret written business requirements documents.

2. Specify (High-Level Design and Tech Spec), code, and write automated unit tests for new aspects of the MI/BI service.

3. Write clear and concise supporting documentation for deliverable items.

4. Become a member of the skilled development team, willing to contribute, share experiences, and learn as appropriate.

5. Review and contribute to requirements documentation.

6. Provide third-line support for internally developed software.

7. Create and maintain continuous deployment pipelines.

8. Help maintain Development Team standards and principles.

9. Contribute and share learning and experiences with the greater Development team.

10. Work within the company's approved processes, including design and service transition.

11. Collaborate with other teams and departments across the firm.

12. Be willing to travel to other offices when required.

13. You agree to comply with any reasonable instructions or regulations issued by the Company from time to time, including those set out in the terms of the dealing and other manuals, staff handbooks, and all other group policies.


Location – Bangalore

 

Read more

UAE Client

Agency job
via Fragma Data Systems by Harpreet kour
icon
Bengaluru (Bangalore)
icon
3 - 8 yrs
icon
₹15L - ₹20L / yr
Apache Kafka
kafka
Spark
Hadoop
confluence
Experience in Apache Kafka
Good communication skills
Hadoop and Spark experience is good to have
Hands-on with Apache Kafka and Confluent Kafka
Read more

Leading Sales Platform

Agency job
via Qrata by Blessy Fernandes
icon
Bengaluru (Bangalore)
icon
5 - 10 yrs
icon
₹30L - ₹45L / yr
Big Data
ETL
Spark
Data engineering
Data governance
+4 more
  • Work with product managers and development leads to create testing strategies
  • Develop and scale an automated data validation framework
  • Build and monitor key metrics of data health across the entire Big Data pipelines
  • Early alerting and escalation process to quickly identify and remedy quality issues before anything ever goes ‘live’ in front of the customer
  • Build/refine tools and processes for quick root-cause diagnostics
  • Contribute to the creation of quality assurance standards, policies, and procedures to influence the DQ mind-set across the company

Required skills and experience:
  • Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
  • Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
  • Experience building monitoring/alerting frameworks with tools like New Relic and escalations with Slack/email/dashboard integrations, etc.
  • Executive-level communication, prioritization, and team leadership skills
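An automated data validation framework of the kind described can start from a few simple health checks. A minimal sketch in plain Python follows; the rule names and thresholds are invented for illustration, not taken from any real framework:

```python
def validate(rows, rules):
    """Run simple data-health checks over a batch of rows and
    return a list of human-readable failure messages.

    rules: {"min_rows": int,
            "required": [column, ...],
            "max_null_rate": {column: fraction}}
    """
    failures = []
    # Row-count check: catch truncated or empty loads.
    if len(rows) < rules.get("min_rows", 0):
        failures.append(f"row count {len(rows)} below minimum {rules['min_rows']}")
    # Schema check: required columns must appear in every row.
    for col in rules.get("required", []):
        if any(col not in r for r in rows):
            failures.append(f"column '{col}' missing in some rows")
    # Null-rate check: flag columns whose null fraction exceeds the limit.
    for col, limit in rules.get("max_null_rate", {}).items():
        nulls = sum(1 for r in rows if r.get(col) is None)
        if rows and nulls / len(rows) > limit:
            failures.append(f"null rate for '{col}' exceeds {limit:.0%}")
    return failures

rows = [{"id": 1, "email": None}, {"id": 2, "email": "a@b.c"}, {"id": 3, "email": None}]
print(validate(rows, {"min_rows": 5, "required": ["id"], "max_null_rate": {"email": 0.5}}))
```

Wiring checks like these into the pipeline, with alerting on any non-empty failure list, is the essence of catching quality issues before they go live.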
Read more

AI-powered cloud-based SaaS solution

Agency job
via wrackle by Naveen Taalanki
icon
Bengaluru (Bangalore)
icon
2 - 10 yrs
icon
₹15L - ₹50L / yr
Data engineering
Big Data
Data Engineer
Big Data Engineer
Hibernate (Java)
+18 more
Responsibilities

● Able to contribute to the gathering of functional requirements, developing technical
specifications, and project & test planning
● Demonstrating technical expertise, and solving challenging programming and design
problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate
architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
patches
● Work cross-functionally with various Bidgely teams including product management,
QA/QE, various product lines, and/or business units to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years' experience designing and developing applications in Data Engineering
● Hands-on experience with Big Data ecosystems:
● Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka,
Zookeeper
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala,
Python
● Strong leadership experience: Leading meetings, presenting if required
● Excellent communication skills: Demonstrated ability to explain complex technical
issues to both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience on Cloud or AWS is preferable
● Have a good understanding and ability to develop software, prototypes, or proofs of
concepts (POC's) for various Data Engineering requirements.
Read more
icon
Bengaluru (Bangalore)
icon
2 - 10 yrs
icon
₹5L - ₹15L / yr
Java
Python
Spark
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+3 more

Cloud Software Engineer

Notice Period: 45 days / Immediate Joining

 

Banyan Data Services (BDS) is a US-based infrastructure services company headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on standard DevOps practice and the SRE model.

 

We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges, and who are willing to apply their experience in areas directly related to Infrastructure Services, Software as a Service, and Cloud Services to create a niche in the market.

 

Roles and Responsibilities

· Work on a wide variety of engineering projects including data visualization, web services, data engineering, web portals, SDKs, and integrations in numerous languages, frameworks, and cloud platforms

· Apply continuous delivery practices to deliver high-quality software and value as early as possible.

· Work in collaborative teams to build new experiences

· Participate in the entire cycle of software consulting and delivery from ideation to deployment

· Integrating multiple software products across cloud and hybrid environments

· Developing processes and procedures for software applications migration to the cloud, as well as managed services in the cloud

· Migrating existing on-premises software applications to cloud leveraging a structured method and best practices

 

Desired Candidate Profile : *** freshers can also apply ***

 

· 2+ years of experience with one or more development languages such as Java, Python, or Spark.

· 1+ year of experience with private/public/hybrid cloud model design, implementation, orchestration, and support.

· Certification in, or completed training on, any one of the cloud environments such as AWS, GCP, Azure, Oracle Cloud, or DigitalOcean.

· Strong problem-solvers who are comfortable in unfamiliar situations, and can view challenges through multiple perspectives

· Driven to develop technical skills for oneself and team-mates

· Hands-on experience with cloud computing and/or traditional enterprise datacentre technologies, i.e., network, compute, storage, and virtualization.

· Possess at least one cloud-related certification from AWS, Azure, or equivalent

· Ability to write high-quality, well-tested code and comfort with Object-Oriented or functional programming patterns

· Past experience quickly learning new languages and frameworks

· Ability to work with a high degree of autonomy and self-direction

www.banyandata.com

Agency job
via Nu-Pie by Sanjay Biswakarma
Bengaluru (Bangalore)
4 - 8 yrs
₹7L - ₹15L / yr
Data engineering
Data Engineer
Data modeling
Scheme
Data Analytics
+18 more

Role description

  • Work with the application development team to implement database schemas and data strategies, and build data flows
  • Should have expertise across databases; responsible for gathering and analyzing data requirements and data flows to other systems
  • Perform database modeling and SQL scripting

 

Background/ Experience

  • 8+ years of experience in Database modeling and SQL scripting
  • Experience in OLTP and OLAP database modeling
  • Experience in Data modeling for Relational (PostgreSQL) and Non-relational Databases
  • Strong scripting experience in PL/SQL (views, stored procedures), Python, and Spark
  • Develop notebooks & jobs using the Python language
  • Big Data stack: Spark, Hadoop, Sqoop, Pig, Hive, HBase, Flume, Kafka, Storm
  • Should have a good understanding of Azure Cosmos DB and Azure DW
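As a rough sketch of the database modeling and SQL scripting described above, using Python's built-in sqlite3 for portability (the schema and table names are invented; a real role would target PostgreSQL, Azure Cosmos DB, or a warehouse engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# OLTP-style normalized table plus an OLAP-style aggregate view.
cur.executescript("""
CREATE TABLE orders (
    order_id   INTEGER PRIMARY KEY,
    customer   TEXT NOT NULL,
    amount     REAL NOT NULL,
    order_date TEXT NOT NULL
);
CREATE VIEW daily_revenue AS
    SELECT order_date, SUM(amount) AS revenue, COUNT(*) AS n_orders
    FROM orders GROUP BY order_date;
""")

cur.executemany(
    "INSERT INTO orders (customer, amount, order_date) VALUES (?, ?, ?)",
    [("acme", 120.0, "2023-05-01"),
     ("acme",  80.0, "2023-05-01"),
     ("zeta",  50.0, "2023-05-02")],
)

rows = cur.execute(
    "SELECT order_date, revenue, n_orders FROM daily_revenue ORDER BY order_date"
).fetchall()
# rows -> [("2023-05-01", 200.0, 2), ("2023-05-02", 50.0, 1)]
```

The same split — normalized tables for OLTP writes, aggregate views for OLAP reads — is what the OLTP/OLAP modeling bullet above refers to.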

Education

  • Bachelor's or Master's degree in Information Technology / Computer Science / Engineering (or equivalent)

 


world’s fastest growing consumer internet company

Agency job
via Hunt & Badge Consulting Pvt Ltd by Chandramohan Subramanian
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹35L / yr
Big Data
Data engineering
Big Data Engineering
Data Engineer
ETL
+5 more

Data Engineer JD:

  • Designing, developing, constructing, installing, testing and maintaining the complete data management & processing systems.
  • Building highly scalable, robust, fault-tolerant, & secure user data platform adhering to data protection laws.
  • Taking care of the complete ETL (Extract, Transform & Load) process.
  • Ensuring architecture is planned in such a way that it meets all the business requirements.
  • Exploring new ways of using existing data, to provide more insights out of it.
  • Proposing ways to improve data quality, reliability & efficiency of the whole system.
  • Creating data models to reduce system complexity and hence increase efficiency & reduce cost.
  • Introducing new data management tools & technologies into the existing system to make it more efficient.
  • Setting up monitoring and alarming on data pipeline jobs to detect failures and anomalies

What do we expect from you?

  • BS/MS in Computer Science or equivalent experience
  • 5 years of recent experience in Big Data Engineering.
  • Good experience in working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Zookeeper, Storm, Spark, Airflow and NoSQL systems
  • Excellent programming and debugging skills in Java or Python.
  • Apache Spark, Python; hands-on experience in deploying ML models
  • Has worked on streaming and real-time pipelines
  • Experience with Apache Kafka, or has worked with any of Spark Streaming, Flume or Storm
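Conceptually, the streaming/real-time requirement above comes down to windowed aggregation over an event stream. A minimal plain-Python sketch of a tumbling one-minute count — the kind of operation Spark Streaming or Kafka consumers perform at scale (event timestamps and keys are invented):

```python
from collections import Counter

WINDOW_SECONDS = 60

def tumbling_window_counts(events):
    """Count events per (window_start, key); events are (epoch_seconds, key)."""
    counts = Counter()
    for ts, key in events:
        window_start = ts - (ts % WINDOW_SECONDS)  # floor to the window boundary
        counts[(window_start, key)] += 1
    return counts

events = [(10, "click"), (25, "click"), (70, "click"), (75, "view")]
counts = tumbling_window_counts(events)
# counts[(0, "click")] == 2; counts[(60, "click")] == 1; counts[(60, "view")] == 1
```

A real pipeline adds partitioning, watermarking for late events, and fault-tolerant state, which is exactly what the frameworks listed above provide.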

Focus Area:

R1: Data Structures & Algorithms
R2: Problem Solving + Coding
R3: Design (LLD)


AI Based SAAS company

Agency job
via wrackle by Naveen Taalanki
icon
Bengaluru (Bangalore)
icon
12 - 22 yrs
icon
₹50L - ₹99L / yr
Engineering Management
Engineering Manager
Engineering head
Technical Architecture
Technical lead
+20 more

Location: Bangalore

Function: Software Engineering → Backend Development

 

We are looking for an extraordinary and dynamic Director of Engineering to be part of its Engineering team in Bangalore. You must have a good record of architecting scalable solutions, hiring and mentoring talented teams and working with product managers to build great products. You must be highly analytical and a good problem solver. You will be part of a highly energetic and innovative team that believes nothing is impossible with some creativity and hard work.

 

Responsibilities:

  • Own the overall solution design and implementation for backend systems. This includes requirement analysis, scope discussion, design, architecture, implementation, delivery and resolving production issues related to engineering.
  • Owner of the technology roadmap of our products from core back end engineering perspective.
  • Ability to guide the team in debugging production issues and write best-of-breed code.
  • Drive engineering excellence (defects, productivity through automation, performance of products etc) through clearly defined metrics.
  • Stay current with the latest tools, technology ideas and methodologies; share knowledge by clearly articulating results and ideas to key decision makers.
  • Hiring, mentoring, and retaining a very talented team.

 

Requirements:

  • 12 - 20 years of strong experience in product development.
  • Strong experience in building data engineering (NoSQL DBs, HDFS, Kafka, Cassandra, Elasticsearch, Spark etc.) intensive backends.
  • Excellent track record of designing and delivering system architecture, implementation and deployment of successful solutions in a customer-facing role
  • Strong in problem solving and analytical skills.
  • Ability to influence decision making through data and be metric driven.
  • Strong understanding of non-functional requirements like security, test automation etc.
  • Fluency in Java, Spring, Hibernate, J2EE, REST Services.
  • Ability to hire, mentor and retain best-of-the-breed engineers.
  • Exposure to Agile development methodologies.
  • Ability to collaborate across teams and strong interpersonal skills.
  • SAAS experience a plus.

 

Posted by Apurva kalsotra
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
+14 more
Day-to-day Activities
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems, technical requirements to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehouse
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus 
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud

A FinTech NBFC dedicated to driving Financial inclusion

Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
8 - 12 yrs
₹20L - ₹25L / yr
Data engineering
Spark
Big Data
Data engineer
Hadoop
+13 more
  • Play a critical role as a member of the leadership team in shaping and supporting our overall company vision, day-to-day operations, and culture.
  • Set the technical vision and build the technical product roadmap from launch to scale; including defining long-term goals and strategies
  • Define best practices around coding methodologies, software development, and quality assurance
  • Define innovative technical requirements and systems while balancing time, feasibility, cost and customer experience
  • Build and support production products
  • Ensure our internal processes and services comply with privacy and security regulations
  • Establish a high performing, inclusive engineering culture focused on innovation, execution, growth and development
  • Set a high bar for our overall engineering practices in support of our mission and goals
  • Develop goals, roadmaps and delivery dates to help us scale quickly and sustainably
  • Collaborate closely with Product, Business, Marketing and Data Science
  • Experience with financial and transactional systems
  • Experience engineering for large volumes of data at scale
  • Experience with financial audit and compliance is a plus
  • Experience building successful consumer-facing web and mobile apps at scale

UAE Client

Agency job
via Fragma Data Systems by Evelyn Charles
Remote, Bengaluru (Bangalore), Hyderabad
6 - 10 yrs
₹15L - ₹22L / yr
Informatica
Big Data
SQL
Hadoop
Apache Spark
+1 more

Skills- Informatica with Big Data Management

 

1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working on Spark/SQL
3. Develops Informatica mappings/SQL
4. Should have experience in Hadoop, Spark, etc.

Looking to hire Data Engineers for a client in Bangalore.

Agency job
via Artifex HR by Maria Theyos
Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹10L / yr
Big Data
Hadoop
Apache Spark
Spark
Apache Kafka
+11 more

We are looking for a savvy Data Engineer to join our growing team of analytics experts. 

 

The hire will be responsible for:

- Expanding and optimizing our data and data pipeline architecture

- Optimizing data flow and collection for cross functional teams.

- Will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.

- Must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.

- Experience with Azure : ADLS, Databricks, Stream Analytics, SQL DW, COSMOS DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates

- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

- Experience with object-oriented/functional scripting languages: Python, SQL, Scala, Spark-SQL etc.

Nice to have experience with :

- Big data tools: Hadoop, Spark and Kafka

- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow

- Stream-processing systems: Storm

Database : SQL DB

Programming languages : PL/SQL, Spark SQL

Looking for candidates with Data Warehousing experience, strong domain knowledge & experience working as a Technical lead.

The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.


They provide both wholesale and retail funding. PM1

Agency job
via Multi Recruit by Kavitha S
Bengaluru (Bangalore)
1 - 5 yrs
₹15L - ₹20L / yr
Spark
Big Data
Data Engineer
Hadoop
Apache Kafka
+4 more
  • 1-5 years of experience in building and maintaining robust data pipelines, data enrichment, and low-latency/high-performance data analytics applications.
  • Experience handling complex, high volume, multi-dimensional data and architecting data products in streaming, serverless, and microservices-based Architecture and platform.
  • Experience in Data warehousing, Data modeling, and Data architecture.
  • Expert level proficiency with the relational and NoSQL databases.
  • Expert level proficiency in Python, and PySpark.
  • Familiarity with Big Data technologies and utilities (Spark, Hive, Kafka, Airflow).
  • Familiarity with cloud services (preferably AWS)
  • Familiarity with MLOps processes such as data labeling, model deployment, data-model feedback loop, data drift.
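As a sketch of the data-drift check named in the MLOps bullet above — compare a live feature sample against a reference sample and flag large mean shifts. The threshold and data are invented; production systems typically use proper statistical tests (KS, PSI) over full distributions:

```python
from statistics import mean, stdev

def mean_shift_drift(reference, live, z_threshold=3.0):
    """Flag drift when the live mean sits more than z_threshold
    reference standard deviations away from the reference mean."""
    ref_mean, ref_std = mean(reference), stdev(reference)
    if ref_std == 0:
        return bool(live) and mean(live) != ref_mean
    z = abs(mean(live) - ref_mean) / ref_std
    return z > z_threshold

reference = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1]
assert not mean_shift_drift(reference, [10.0, 10.3, 9.7])   # no drift
assert mean_shift_drift(reference, [14.0, 15.2, 14.8])      # clear shift
```

In the data-model feedback loop mentioned above, a positive drift signal would trigger relabeling and retraining rather than an immediate alert alone.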

Key Roles/Responsibilities:

  • Act as a technical leader for resolving problems, with both technical and non-technical audiences.
  • Identifying and solving issues with data pipelines regarding consistency, integrity, and completeness.
  • Lead data initiatives, architecture design discussions, and implementation of next-generation BI solutions.
  • Partner with data scientists, tech architects to build advanced, scalable, efficient self-service BI infrastructure.
  • Provide thought leadership and mentor data engineers in information presentation and delivery.

 

 


India's best Short Video App

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
4 - 12 yrs
₹25L - ₹50L / yr
Data engineering
Big Data
Spark
Apache Kafka
Apache Hive
+26 more
What Makes You a Great Fit for The Role?

You’re awesome at and will be responsible for
 
Extensive programming experience with cross-platform development in one of the following: Java/Spring Boot, JavaScript/Node.js, Express.js, or Python
3-4 years of experience in big data analytics technologies like Storm, Spark/Spark Streaming, Flink, AWS Kinesis, Kafka Streaming, Hive, Druid, Presto, Elasticsearch, Airflow, etc.
3-4 years of experience in building high-performance RPC services using different high-performance paradigms: multi-threading, multi-processing, asynchronous programming (non-blocking IO), reactive programming
3-4 years of experience working with high-throughput, low-latency databases and cache layers like MongoDB, HBase, Cassandra, DynamoDB, ElastiCache (Redis + Memcached)
Experience with designing and building high scale app backends and micro-services leveraging cloud native services on AWS like proxies, caches, CDNs, messaging systems, Serverless compute(e.g. lambda), monitoring and telemetry.
Strong understanding of distributed systems fundamentals around scalability, elasticity, availability, fault-tolerance.
Experience in analysing and improving the efficiency, scalability, and stability of distributed systems and backend micro services.
5-7 years of strong design/development experience in building massively large scale, high throughput low latency distributed internet systems and products.
Good experience in working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Storm, HBase, Scribe, Zookeeper and NoSQL systems etc.
Agile methodologies, Sprint management, Roadmap, Mentoring, Documenting, Software architecture.
Liaison with Product Management, DevOps, QA, Client and other teams
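The "asynchronous programming (non-blocking IO)" paradigm named in the RPC bullet above can be sketched with Python's asyncio — fan several slow calls out concurrently instead of serially. The fetch call and its 50 ms latency are invented stand-ins for real RPCs:

```python
import asyncio

async def fetch(key, delay):
    """Stand-in for a non-blocking RPC; awaiting yields control to the event loop."""
    await asyncio.sleep(delay)
    return key, delay

async def fetch_all():
    # Three 50 ms calls run concurrently: wall time is ~50 ms, not ~150 ms.
    return await asyncio.gather(
        fetch("profile", 0.05), fetch("feed", 0.05), fetch("ads", 0.05)
    )

results = asyncio.run(fetch_all())
# results -> [("profile", 0.05), ("feed", 0.05), ("ads", 0.05)]
```

The same idea — never block a thread on IO — underpins the reactive and multi-processing paradigms the listing mentions.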
 
Your Experience Across The Years in the Roles You’ve Played
 
Have a total of 5-7 years of experience, with 2-3 years in a startup.
Have a B.Tech, M.Tech, or equivalent academic qualification from a premier institute.
Experience in Product companies working on Internet-scale applications is preferred
Thoroughly aware of cloud computing infrastructure on AWS leveraging cloud native service and infrastructure services to design solutions.
Follow the Cloud Native Computing Foundation ecosystem, leveraging mature open-source projects, including an understanding of containerisation/Kubernetes.
 
You are passionate about learning or growing your expertise in some or all of the following
Data Pipelines
Data Warehousing
Statistics
Metrics Development
 
We Value Engineers Who Are
 
Customer-focused: We believe that doing what’s right for the creator is ultimately what will drive our business forward.
Obsessed with Quality: Your Production code just works & scales linearly
Team players. You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow/improve.
Pragmatic: We do things quickly to learn what our creators desire. You know when it’s appropriate to take shortcuts that don’t sacrifice quality or maintainability.
Owners: Engineers at Chingari know how to positively impact the business.
Remote, Bengaluru (Bangalore)
3.5 - 8 yrs
₹5L - ₹18L / yr
PySpark
Data engineering
Data Warehouse (DWH)
SQL
Spark
+1 more
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience in SQL DBs - able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture; business rules processing and data extraction from the Data Lake into data streams for business consumption
• Good customer communication
• Good analytical skills
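A toy illustration of the "business rules processing and data extraction from Data Lake into data streams" skill above, in plain Python; in practice this would be PySpark DataFrame filters or Spark SQL, and the records and rules here are invented:

```python
raw_lake_records = [
    {"id": 1, "country": "IN", "amount": 1200.0, "status": "OK"},
    {"id": 2, "country": "IN", "amount": -50.0,  "status": "OK"},    # fails amount rule
    {"id": 3, "country": "US", "amount": 300.0,  "status": "VOID"},  # fails status rule
    {"id": 4, "country": "IN", "amount": 800.0,  "status": "OK"},
]

def apply_business_rules(records):
    """Yield only valid records, enriched for downstream business consumption."""
    for rec in records:
        if rec["status"] != "OK" or rec["amount"] <= 0:
            continue  # in production, rejects are routed to a quarantine stream
        yield {**rec, "amount_band": "high" if rec["amount"] >= 1000 else "low"}

stream = list(apply_business_rules(raw_lake_records))
# stream keeps ids 1 and 4; id 1 is banded "high", id 4 "low"
```

The equivalent PySpark expression would be a `filter` followed by `withColumn`, pushed down to the lake rather than run in-process.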
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large scale enterprise data solutions and applications using one or more of AZURE data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsights, Databricks, CosmosDB, EventHub/IOTHub.
  • Experience in migrating on-premise data warehouses to data platforms on AZURE cloud. 
  • Designing and implementing data engineering, ingestion, and transformation functions
  • Azure Synapse or Azure SQL data warehouse
  • Spark on Azure (available in HDInsight and Databricks)
 
Good to Have: 
  • Experience with Azure Analysis Services
  • Experience in Power BI
  • Experience with third-party solutions like Attunity/Stream sets, Informatica
  • Experience with PreSales activities (Responding to RFPs, Executing Quick POCs)
  • Capacity Planning and Performance Tuning on Azure Stack and Spark.

Curl

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹25L / yr
Data Visualization
PowerBI
ETL
Business Intelligence (BI)
Data Analytics
+6 more
Main Responsibilities:

• Work closely with different Front Office and Support Function stakeholders, including but not restricted to Business Management, Accounts, Regulatory Reporting, Operations, Risk, Compliance, and HR, on all data collection and reporting use cases.
• Collaborate with Business and Technology teams to understand enterprise data, and create an innovative narrative to explain, engage and enlighten regular staff members as well as executive leadership with data-driven storytelling.
• Solve data consumption and visualization through a data-as-a-service distribution model.
• Articulate findings clearly and concisely for different target use cases, including through presentations, design solutions, and visualizations.
• Perform ad hoc / automated report generation tasks using Power BI, Oracle BI, and Informatica.
• Perform data access/transfer and ETL automation tasks using Python, SQL, OLAP/OLTP, RESTful APIs, and IT tools (CFT, MQ-Series, Control-M, etc.).
• Provide support and maintain the availability of BI applications irrespective of the hosting location.
• Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability; provide incident-related communications promptly.
• Work with strict deadlines on high-priority regulatory reports.
• Serve as a liaison between business and technology to ensure that data-related business requirements for protecting sensitive data are clearly defined, communicated, well understood, and considered as part of operational prioritization and planning.
• Work for the APAC Chief Data Office and coordinate with a fully decentralized team across different locations in APAC and global HQ (Paris).
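The automated report-generation and ETL responsibilities above, reduced to a minimal stdlib Python sketch — aggregate records and emit a CSV that a BI tool such as Power BI could consume (desk names and figures are invented):

```python
import csv
import io
from collections import defaultdict

trades = [
    {"desk": "FX", "notional": 1_000_000},
    {"desk": "FX", "notional": 250_000},
    {"desk": "Rates", "notional": 500_000},
]

# Transform step: roll notionals up per desk.
totals = defaultdict(int)
for t in trades:
    totals[t["desk"]] += t["notional"]

# Load step: write the aggregate as CSV (here to a string buffer;
# a real job would write to a file share or object store).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["desk", "total_notional"])
for desk in sorted(totals):
    writer.writerow([desk, totals[desk]])

report = buf.getvalue()
```

In the role described above, a scheduler such as Control-M would run this kind of job and ship the output to the reporting layer.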

General Skills:

• Excellent knowledge of RDBMS and hands-on experience with complex SQL is a must; some experience in NoSQL and Big Data technologies like Hive and Spark would be a plus.
• Experience with industrialized reporting on BI tools like Power BI and Informatica.
• Knowledge of data-related industry best practices in the highly regulated CIB industry; experience with regulatory report generation for financial institutions.
• Knowledge of industry-leading data access, data security, Master Data and Reference Data Management, and establishing data lineage.
• 5+ years of experience in Data Visualization / Business Intelligence / ETL developer roles.
• Ability to multi-task and manage various projects simultaneously.
• Attention to detail.
• Ability to present to Senior Management and ExCo; excellent written and verbal communication skills.

Curl Analytics

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹30L / yr
ETL
Big Data
Data engineering
Apache Kafka
PySpark
+11 more
What you will do
  • Bring in industry best practices around creating and maintaining robust data pipelines for complex data projects with/without AI component
    • programmatically ingesting data from several static and real-time sources (incl. web scraping)
    • rendering results through dynamic interfaces incl. web / mobile / dashboard with the ability to log usage and granular user feedbacks
    • performance tuning and optimal implementation of complex Python scripts (using SPARK), SQL (using stored procedures, HIVE), and NoSQL queries in a production environment
  • Industrialize ML / DL solutions and deploy and manage production services; proactively handle data issues arising on live apps
  • Perform ETL on large and complex datasets for AI applications - work closely with data scientists on performance optimization of large-scale ML/DL model training
  • Build data tools to facilitate fast data cleaning and statistical analysis
  • Ensure data architecture is secure and compliant
  • Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability
  • Work closely with APAC CDO and coordinate with a fully decentralized team across different locations in APAC and global HQ (Paris).

You should be

  •  Expert in structured and unstructured data in traditional and Big data environments – Oracle / SQLserver, MongoDB, Hive / Pig, BigQuery, and Spark
  • Have excellent knowledge of Python programming both in traditional and distributed models (PySpark)
  • Expert in shell scripting and writing schedulers
  • Hands-on experience with Cloud - deploying complex data solutions in hybrid cloud / on-premise environment both for data extraction/storage and computation
  • Hands-on experience in deploying production apps using large volumes of data with state-of-the-art technologies like Dockers, Kubernetes, and Kafka
  • Strong knowledge of data security best practices
  • 5+ years experience in a data engineering role
  • Science / Engineering graduate from a Tier-1 university in the country
  • And most importantly, you must be a passionate coder who really cares about building apps that can help people do things better, smarter, and faster even when they sleep

Ai Product Company in Energy Domain

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
8 - 12 yrs
₹40L - ₹80L / yr
Engineering Management
Engineering Manager
Engineering Head
Technical Architecture
Technical lead
+17 more
Job Description

The client is looking for an extraordinary and dynamic Engineering Manager to be
part of its Engineering team in Bangalore. You must have a good record of
architecting scalable solutions, hiring and mentoring talented teams, and working
with product managers to build great products. You must be highly analytical
and a good problem solver. You will be part of a highly energetic and innovative
team that believes nothing is impossible with some creativity and hard work.

Responsibilities
● Own the overall solution design and implementation of the core infrastructure
for backend systems. This includes requirement analysis, scope discussion,
design, architecture, implementation, delivery, and resolving production issues
related to engineering. The core back-end system is a large-scale data platform
that ingests data, applies ML models, and streams the output to the Data lake and
serving layer. As of today, we ingest 2 Bn data points every day, which needs to
scale to handle 200 Bn data points every single day.
● The end-to-end backend engineering infra charter includes DevOps, global
deployment, security, and compliance according to the latest practices.
● Ability to guide the team in debugging production issues and write best-of-breed
code.
● Drive “engineering excellence” (defects, productivity through automation,
the performance of products, etc) through clearly defined metrics.
● Stay current with the latest tools, technology ideas, and methodologies; share
knowledge by clearly articulating results and ideas to key decision-makers.
● Hiring, mentoring, and retaining a very talented team.

Requirements
● 8-12 years of strong experience in product development.
● Strong experience in building data engineering (NoSQL DBs, HDFS, Kafka,
Cassandra, Elasticsearch, Spark, etc) intensive backends
● Experience with DAG-based data processing is highly desirable
● Excellent track record of designing and delivering System architecture,
implementation and deployment of successful solutions in a customer-facing
role.
● Strong problem-solving and analytical skills.
● Ability to influence decision making through data and be metric driven
● Strong understanding of non-functional requirements like security, test
automation etc
● Fluency in Java, Spring, Hibernate, J2EE, REST Services
● Ability to hire, mentor and retain best-of-the-breed engineers
● Exposure to Agile development methodologies
● Ability to collaborate across teams and strong interpersonal skills
● SAAS experience a plus
Agency job
via zyoin by RAKESH RANJAN
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹38L / yr
Big Data
Spark
Hadoop
Apache Kafka
Apache Hive
+4 more

Company Overview:

Rakuten, Inc. (TSE first section: 4755) is the largest e-commerce company in Japan, and the third largest e-commerce marketplace company worldwide. Rakuten provides a variety of consumer and business-focused services including e-commerce, e-reading, travel, banking, securities, credit card, e-money, portal and media, online marketing and professional sports. The company is expanding globally and currently has operations throughout Asia, Western Europe, and the Americas. Founded in 1997, Rakuten is headquartered in Tokyo, with over 17,000 employees and partner staff worldwide. Rakuten's 2018 revenues were 1,101.48 billion yen. In Japanese, Rakuten stands for 'optimism': we believe in the future, and with the right mindset we can make the future better by what we do today. Today, our 70+ businesses span e-commerce, digital content, communications and FinTech, bringing the joy of discovery to more than 1.2 billion members across the world.


Website
: https://www.rakuten.com/

Crunchbase : Rakuten has raised a total of $42.4M in funding over 2 rounds

Company size : 10,001+ Employees

Founded : 1997

Headquarters : Tokyo, Japan

Work location : Bangalore (M.G.Road)


Please find below Job Description.


Role Description – Data Engineer for AN group (Location - India)

 

Key responsibilities include:

 

We are looking for an engineering candidate for our Autonomous Networking Team. The ideal candidate must have the following abilities:

 

  • Hands-on experience in big data computation technologies (at least one and potentially several of the following: Spark and Spark Streaming, Hadoop, Storm, Kafka Streaming, Flink, etc.)
  • Familiarity with other related big data technologies, such as big data storage technologies (e.g., Phoenix/HBase, Redshift, Presto/Athena, Hive, Spark SQL, BigTable, BigQuery, ClickHouse, etc.), messaging layers (Kafka, Kinesis, etc.), cloud and container-based deployments (Docker, Kubernetes, etc.), Scala, Akka, Socket.IO, Elasticsearch, RabbitMQ, Redis, Couchbase, Java, Go.
  • Partner with product management and delivery teams to align and prioritize current and future new product development initiatives in support of our business objectives
  • Work with cross functional engineering teams including QA, Platform Delivery and DevOps
  • Evaluate current state solutions to identify areas to improve standards, simplify, and enhance functionality and/or transition to effective solutions to improve supportability and time to market
  • Not afraid of refactoring existing systems and guiding the team through the same.
  • Experience with Event-driven Architecture and Complex Event Processing
  • Extensive experience building and owning large-scale distributed backend systems.
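A minimal sketch of the event-driven architecture item above — an in-process publish/subscribe dispatcher in plain Python. In production this pattern would sit behind a broker such as Kafka or RabbitMQ; the topic and handler names here are invented:

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process pub/sub: handlers subscribe to topics, publish fans out."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._handlers[topic]:
            handler(event)

bus = EventBus()
alerts = []
bus.subscribe("link_down", lambda e: alerts.append(f"ALERT: {e['device']}"))
bus.publish("link_down", {"device": "router-7"})
bus.publish("link_up", {"device": "router-7"})  # no subscriber: silently dropped
# alerts -> ["ALERT: router-7"]
```

Complex event processing extends this by letting handlers match patterns across multiple correlated events rather than reacting to one event at a time.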

Digital Banking Firm

Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹40L / yr
Apache Kafka
Hadoop
Spark
Apache Hadoop
Big Data
+5 more
Location - Bangalore (Remote for now)
 
Designation - Sr. SDE (Platform Data Science)
 
About Platform Data Science Team

The Platform Data Science team works at the intersection of data science and engineering. Domain experts develop and advance platforms, including the data platform, machine learning platform, and other platforms for Forecasting, Experimentation, Anomaly Detection, Conversational AI, Underwriting of Risk, Portfolio Management, Fraud Detection & Prevention, and many more. We are also the Data Science and Analytics partners for Product and provide Behavioural Science insights across Jupiter.
 
About the role:

We’re looking for strong Software Engineers that can combine EMR, Redshift, Hadoop, Spark, Kafka, Elastic Search, Tensorflow, Pytorch and other technologies to build the next generation Data Platform, ML Platform, Experimentation Platform. If this sounds interesting we’d love to hear from you!
This role will involve designing and developing software products that impact many areas of our business. The individual in this role will help define requirements, create software designs, implement code to these specifications, provide thorough unit and integration testing, and support products while they are deployed and used by our stakeholders.

Key Responsibilities:

Participate, Own & Influence in architecting & designing of systems
Collaborate with other engineers, data scientists, product managers
Build intelligent systems that drive decisions
Build systems that enable us to perform experiments and iterate quickly
Build platforms that enable scientists to train, deploy and monitor models at scale
Build analytical systems that drive better decision making
 

Required Skills:

Programming experience with at least one modern language such as Java, Scala including object-oriented design
Experience in contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
Bachelor’s degree in Computer Science or related field
Computer Science fundamentals in object-oriented design
Computer Science fundamentals in data structures
Computer Science fundamentals in algorithm design, problem solving, and complexity analysis
Experience in databases, analytics, big data systems or business intelligence products:
Data lake, data warehouse, ETL, ML platform
Big data tech like: Hadoop, Apache Spark