
Spark Jobs in Bangalore (Bengaluru)

Explore top Spark job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Engineering Manager - Data

Founded 2014
Location: Remote, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
Experience: 6 - 12 years
Salary: Best in industry

Why are we building Urban Company?

Organized service commerce is a large yet young industry in India. While India is a very large market for home and local services (~USD 50 billion in retail spend) and is expected to double in the next 5 years, there is no billion-dollar company in this segment today.

The industry is barely ~20 years old, with a sub-optimal market architecture typical of an unorganized market: a fragmented supply side operated by middlemen. As a result, experiences are broken for both customers and service professionals, each largely relying on word of mouth to discover the other. The industry could easily be 1.5-2x larger than it is today if the frictions in the customer's and professional's journeys were removed and the experiences made more meaningful and joyful.

The Urban Company team is young and passionate, and we see a massive disruption opportunity in this industry. By leveraging technology and a set of simple yet powerful processes, we wish to build a platform that can organize the world of services and bring them to your fingertips. We believe there is immense value (akin to serendipity) in bringing together customers and professionals looking for each other. In the process, we hope to impact the lives of millions of service entrepreneurs, and transform service commerce the way Amazon transformed product commerce.

Job Description:

Urban Company has grown 3x year over year, and so has our tech stack. We have evolved a data-driven approach to the product over the last few years. We deal with around 10 TB of analytics data, at around 50 Mn/day. We adopted platform thinking very early at UC: we started building central platform teams dedicated to solving core engineering problems around 2-3 years ago, and this has since evolved into a full-fledged vertical. Our platform vertical mainly includes Data Engineering, Service and Core Platform, Infrastructure, and Security. We are currently looking for an Engineering Manager for the Data Engineering team: someone who loves solving standardization problems, has strong platform thinking and opinions, and has built Data Engineering, Data Science and analytics platforms.

Job Responsibilities
- Build high-octane teams with strong opinions and strong platform thinking.
- Work on complex design and architectural problems.
- Solve funnel analytics and product insights, and build a highly scalable data platform.
- Build a Data Science platform.
- Build highly productive, data-driven models that contribute to product success.
- Define the roadmap and the thinking behind taking the current tech stack to the next level.
- Build and maintain a high NPS (70%) for platform products.
- Be a strong decision-maker with hands-on experience.
- Think in terms of abstractions, systems and services, and write high-quality code.
- Understand loopholes in current systems/architecture that could break in the future, and push to solve them with other stakeholders.
- Think through complex architecture to build robust platforms that serve all categories and flows, solve for scale, and work on internally built services that cater to our growing needs.

Job Requirements
- At least 1-2+ years of experience managing teams.
- 5-8 years of industry experience solving complex problems from scratch, with a graduate/post-graduate degree from a top-tier university.
- A thinker with strong opinions and the ability to turn those opinions into reality.
- Prior experience creating complex systems.
- Ability to build scalable, sustainable, reliable and secure products, based on past experience leading teams and projects.
- Ability to bring new practices, architectural choices and new initiatives to the table to make the overall tech stack more robust.
- History of and familiarity with server-side architecture based on APIs, databases, infrastructure and systems.
- Ability to own the technical roadmap for systems/components.

What can you expect?
- A phenomenal work environment, with massive ownership and growth opportunities.
- A high-performance, high-velocity environment at the cutting edge of growth.
- Strong ownership expectations and the freedom to fail.
- Quick iterations and deployments: a fail-fast attitude.
- The opportunity to work on cutting-edge technologies.
- The massive, direct impact of your work on the lives of people.

Job posted by Mohit Agrawal

Software Engineer

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 1 - 9 years
Salary: ₹5L - ₹25L

Key Skills: Big Data, Hadoop, Spark, Scala, strong Java programming
- Extensive experience in Hadoop, Hive, HBase and Spark.
- Hands-on development experience in Java and Spark with Scala, using Maven.
- Clear understanding of Hadoop DFS and MapReduce internal operations.
- Clear understanding of the internal execution mechanism of Spark.
- In-depth understanding of Hive on the Spark engine, and a clear understanding of HBase internals.
- Strong Java programming concepts and a clear understanding of design patterns.
- Experienced in implementing data munging, transformation and processing solutions using Spark.
- Experienced in developing performance-optimized analytical Hive queries that execute against huge datasets.
- Experience in HBase data model design and Hive physical storage model design.

Job posted by Staffio HR

Azure Data Architect

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 8 - 10 years
Salary: ₹16L - ₹28L

Technology Skills:
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services, in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience migrating on-premise data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion and transformation functions (see the sketch below).

Good to Have:
- Experience with Azure Analysis Services.
- Experience with Power BI.
- Experience with third-party solutions like Attunity/StreamSets, Informatica.
- Experience with pre-sales activities (responding to RFPs, executing quick POCs).
- Capacity planning and performance tuning on the Azure stack and Spark.
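To make the ingestion/transformation point concrete, here is a minimal, hedged PySpark sketch of landing raw files from ADLS Gen2 into a curated Parquet layer; the storage account, container names and paths are hypothetical placeholders, and the cluster is assumed to have the hadoop-azure connector available.

    # Illustrative only: read raw JSON events from ADLS Gen2, clean them, write curated Parquet.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("adls-ingest").getOrCreate()

    # Account-key auth is used here for brevity; a service principal or managed
    # identity is the usual production choice.
    spark.conf.set(
        "fs.azure.account.key.examplestore.dfs.core.windows.net",
        "<storage-account-key>",
    )

    raw = spark.read.json("abfss://landing@examplestore.dfs.core.windows.net/events/")

    curated = (
        raw.withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"])
    )

    curated.write.mode("overwrite").partitionBy("event_date").parquet(
        "abfss://curated@examplestore.dfs.core.windows.net/events/"
    )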

Job posted by Priyanka U

Data Engineer (PySpark)

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: ₹8L - ₹16L

Roles and Responsibilities:
- Develop and maintain applications with PySpark.
- Contribute to the overall design and architecture of the applications developed and deployed.
- Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
- Interact with business users to understand requirements and troubleshoot issues.
- Implement projects based on functional specifications.

Must-Have Skills:
- Good experience in PySpark, including DataFrame core functions and Spark SQL (a brief sketch follows below).
- Good experience with SQL databases; able to write queries of fair complexity.
- Excellent experience in Big Data programming for data transformation and aggregation.
- Good ETL architecture skills: business-rule processing and data extraction from a data lake into data streams for business consumption.
- Good customer communication.
- Good analytical skills.
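As a minimal sketch of the DataFrame-plus-Spark-SQL work described above (table, column and path names are hypothetical):

    # Illustrative only: DataFrame API aggregation and the same data exposed to Spark SQL.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-aggregation").getOrCreate()

    orders = spark.read.parquet("/data/lake/orders")   # assumed input path

    # DataFrame core functions: filter, derive a column, aggregate.
    daily = (
        orders.filter(F.col("status") == "COMPLETED")
              .withColumn("order_date", F.to_date("created_at"))
              .groupBy("order_date", "city")
              .agg(F.sum("amount").alias("gmv"),
                   F.count(F.lit(1)).alias("order_count"))
    )

    # The same dataset queried through Spark SQL.
    orders.createOrReplaceTempView("orders")
    top_cities = spark.sql("""
        SELECT city, SUM(amount) AS gmv
        FROM orders
        WHERE status = 'COMPLETED'
        GROUP BY city
        ORDER BY gmv DESC
        LIMIT 10
    """)

    daily.write.mode("overwrite").parquet("/data/lake/curated/daily_orders")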

Job posted by Priyanka U

Big Data Developer

Founded 2000
Location: Bengaluru (Bangalore), Chennai, Pune
Experience: 4 - 10 years
Salary: ₹8L - ₹15L

Role Summary/Purpose: We are looking for Developers/Senior Developers to be part of building an advanced analytics platform leveraging Big Data technologies and transforming legacy systems. The role offers an exciting, fast-paced, constantly changing and challenging work environment, and will play an important part in resolving and influencing high-level decisions.

Requirements:
- The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
- A minimum of 4 to 8 years of overall software development experience, with 2 years of Data Warehousing domain knowledge.
- At least 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc. (see the streaming sketch below).
- Excellent knowledge of SQL and Linux shell scripting.
- Bachelor's/Master's/Engineering degree from a well-reputed university.
- Strong communication, interpersonal, learning and organizing skills, matched with the ability to manage stress, time and people effectively.
- Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment.
- Ability to manage a diverse and challenging stakeholder community.
- Broad knowledge of and experience with Agile deliveries and Scrum teams.

Responsibilities:
- Work as a senior developer/individual contributor as the situation demands.
- Take part in Scrum discussions and requirement gathering.
- Adhere to the Scrum timeline and deliver accordingly.
- Participate in a team environment for design, development and implementation.
- Take on L3 activities on a need basis.
- Prepare unit/SIT/UAT test cases and log the results.
- Coordinate SIT and UAT testing; take feedback and provide the necessary remediation/recommendations in time.
- Treat quality delivery and automation as top priorities.
- Coordinate changes and deployments in time.
- Create healthy harmony within the team.
- Own interaction points with members of the core team (e.g. BA, testing and business teams) and any other relevant stakeholders.
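A hedged sketch of the Kafka-plus-Spark-Streaming combination the requirements mention, using Structured Streaming; the topic, schema and paths are illustrative, and the job assumes the spark-sql-kafka connector package is on the classpath.

    # Illustrative only: consume a Kafka topic, parse JSON, append to a data-lake path.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

    schema = StructType([
        StructField("txn_id", StringType()),
        StructField("account", StringType()),
        StructField("amount", DoubleType()),
    ])

    stream = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker1:9092")
             .option("subscribe", "transactions")
             .load()
    )

    parsed = (
        stream.select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
              .select("t.*")
    )

    query = (
        parsed.writeStream.format("parquet")
              .option("path", "/data/lake/transactions")
              .option("checkpointLocation", "/checkpoints/transactions")
              .outputMode("append")
              .start()
    )
    query.awaitTermination()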

Job posted by Rashmi Poovaiah

Data Platform Engineer (SDE 1/2/3)

Founded 2014
Location: Remote, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
Experience: 3 - 8 years
Salary: Best in industry

Why are we building Urban Company?

Organized service commerce is a large yet young industry in India. While India is a very large market for home and local services (~USD 50 billion in retail spend) and is expected to double in the next 5 years, there is no billion-dollar company in this segment today.

The industry is barely ~20 years old, with a sub-optimal market architecture typical of an unorganized market: a fragmented supply side operated by middlemen. As a result, experiences are broken for both customers and service professionals, each largely relying on word of mouth to discover the other. The industry could easily be 1.5-2x larger than it is today if the frictions in user and professional journeys were removed and the experiences made more meaningful and joyful.

The Urban Company team is young and passionate, and we see a massive disruption opportunity in this industry. By leveraging technology and a set of simple yet powerful processes, we wish to build a platform that can organize the world of services and bring them to your fingertips. We believe there is immense value (akin to serendipity) in bringing together customers and professionals looking for each other. In the process, we hope to impact the lives of millions of service entrepreneurs, and transform service commerce the way Amazon transformed product commerce.

Urban Company has grown 3x year over year, and so has our tech stack. We have evolved a data-driven approach to the product over the last few years. We deal with around 10 TB of analytics data, at around 50 Mn/day. We adopted platform thinking at a very early stage of UC: we started building central platform teams dedicated to solving core engineering problems around 2-3 years ago, and this has since evolved into a full-fledged vertical. Our platform vertical mainly includes Data Engineering, Service and Core Platform, Infrastructure, and Security. We are looking for Data Engineers: people who love solving standardization problems, have strong platform thinking and opinions, and have built Data Engineering, Data Science and analytics platforms.

Job Responsibilities
- Take a platform-first approach to engineering problems.
- Create highly autonomous systems with minimal manual intervention.
- Build frameworks that can be extended to larger audiences through open source.
- Extend and modify open-source projects to adapt them to Urban Company use cases.
- Improve developer productivity.
- Build highly abstracted and standardized frameworks such as microservices, event-driven architecture, etc.

Job Requirements/Potential Backgrounds
- Bachelor's/Master's in Computer Science from a top-tier engineering school.
- Experience with data pipeline and workflow management tools like Luigi, Airflow, etc. (an illustrative DAG follows below).
- Proven ability to work in a fast-paced environment.
- History of and familiarity with server-side development of APIs, databases, dev-ops and systems.
- Fanatic about building scalable, reliable data products.
- Experience with Big Data tools (Hadoop, Kafka/Kinesis, Flume, etc.) is an added advantage.
- Experience with relational SQL and NoSQL databases like HBase, Cassandra, etc.
- Experience with stream processing engines like Spark, Flink, Storm, etc. is an added advantage.

What UC has in store for you
- A phenomenal work environment, with massive ownership and growth opportunities.
- A high-performance, high-velocity environment at the cutting edge of growth.
- Strong ownership expectations and the freedom to fail.
- Quick iterations and deployments: a fail-fast attitude.
- The opportunity to work on cutting-edge technologies.
- The massive, direct impact of your work on the lives of people.
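For the workflow-management point above, here is an illustrative Airflow 2.x DAG sketch; the task names, scripts and schedule are hypothetical, and the Luigi-based equivalent would look different.

    # Illustrative only: a daily two-step pipeline (extract, then a Spark transform).
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_events_pipeline",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = BashOperator(
            task_id="extract_events",
            bash_command="python /opt/pipelines/extract_events.py --date {{ ds }}",
        )
        transform = BashOperator(
            task_id="transform_events",
            bash_command="spark-submit /opt/pipelines/transform_events.py --date {{ ds }}",
        )
        extract >> transform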

Job posted by Mohit Agrawal

Technical Lead

Founded 2013
Location: Bengaluru (Bangalore)
Experience: 6 - 9 years
Salary: ₹25L - ₹35L

About Vymo
Vymo is a San Francisco-based, next-generation sales productivity SaaS company with offices in 7 locations. Vymo is funded by top-tier VC firms like Emergence Capital and Sequoia Capital. Vymo is a category creator: an intelligent Personal Sales Assistant that captures sales activities automatically, learns from top performers, and predicts 'next best actions' contextually. Vymo has 100,000 users across 60+ large enterprises such as AXA, Allianz and Generali. Vymo has seen 3x annual growth over the last few years and aspires to do even better this year by building up the team globally.

What is the Personal Sales Assistant?
A game-changer! We thrive in the CRM space, where every company is struggling to deliver meaningful engagement to its sales teams and IT systems. Vymo was engineered with a mobile-first philosophy. Through AI/ML, the platform detects, predicts, and learns how to make sales representatives more productive through nudges and suggestions on a mobile device. Explore Vymo at https://getvymo.com/

What you will do at Vymo
From young open-source enthusiasts to experienced Googlers, this team develops products like the Lead Management System, Intelligent Allocations & Route Mapping, and Intelligent Interventions, which help improve the effectiveness of sales teams manifold. These products power the "Personal Assistant" app that automates sales-force activities, leveraging our cutting-edge location-based technology and intelligent routing algorithms.

A Day in Your Life
- Design, develop and maintain robust data platforms on top of Kafka, Spark, ES, etc.
- Provide leadership to a group of engineers in an innovative and fast-paced environment.
- Manage and drive complex technical projects from the planning stage through execution.

What you would have done
- B.E. (or equivalent) in Computer Science.
- 6-9 years of experience building enterprise-class products/platforms.
- Knowledge of Big Data systems and/or data pipeline building experience is preferred.
- 2-3 years of relevant work experience as a technical lead, or technical management experience.
- Excellent coding skills in Core Java or NodeJS.
- Demonstrated problem-solving skills in previous roles.
- Good communication skills.

Job posted by Nisha Kini Apte

Data Engineer

Founded 2016
Location: Remote, Pune, Mumbai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida)
Experience: 3 - 5 years
Salary: ₹8L - ₹10L

The Person:
- Articulate, with high energy and a passion to learn.
- A high sense of ownership; able to work in a fast-paced, deadline-driven environment.
- Loves technology, is highly skilled at data interpretation, and is a problem solver.
- Able to see how technology and people together can create stickiness for long-term engagements.
- Able to work in a challenging, complex project environment, handle ambiguity, and manage chaos.
- Naturally curious, with a passion for understanding consumer behaviour.
- Excellent communication skills, needed to manage an incredibly diverse slate of work, team personalities, and multiple projects in a deadline-driven, fast-paced environment.

Requirements:
- Expertise in Python, PySpark, MySQL and AWS.
- 2+ years of recent experience in data engineering.
- B.Tech. or an equivalent degree in CS/CE/IT/ECE/EEE.

Responsibilities:
- Build data pipelines to ingest structured and unstructured data; candidates should be comfortable implementing an end-to-end ETL pipeline (a brief sketch follows below).
- Be comfortable with well-known JDBC connectors, like MySQL, PostgreSQL, etc.
- Be comfortable with both Spark and Python scripting.
- Have extensive experience with AWS Glue, crawlers and catalog databases; candidates should know how triggers work in AWS Glue.
- Be comfortable with SQL and HQL (Hive Query Language).
- Experience with AWS Lambda and API Gateway is a plus.
- Experience with CDI/CDP platforms like Segment, Mixpanel, etc. is a plus.
- Good to have: Data Wrangler, Glue dynamic DataFrames and PySpark workloads on EMR clusters, AWS Step Functions, serialization/deserialization techniques, and other AWS services such as DMS, Data Pipeline and SCT.
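A hedged sketch of the JDBC-to-data-lake ingestion step described above, using plain PySpark rather than Glue-specific APIs; the connection details, table and bucket names are placeholders, and the MySQL JDBC driver is assumed to be on the classpath.

    # Illustrative only: pull a MySQL table over JDBC and land it as Parquet in S3
    # so a Glue crawler can subsequently catalog it.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("mysql-ingest").getOrCreate()

    users = (
        spark.read.format("jdbc")
             .option("url", "jdbc:mysql://db.example.internal:3306/app")
             .option("dbtable", "users")
             .option("user", "etl_user")
             .option("password", "<secret>")
             .load()
    )

    users.write.mode("overwrite").parquet("s3://example-datalake/raw/users/")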

Job posted by Anila Nair

Data Engineer (Remote/ Bengaluru)

Founded 2016
Location: Remote, Bengaluru (Bangalore)
Experience: 2 - 10 years
Salary: ₹2L - ₹15L

Job Title: Data Engineer (Remote)

Job Description
You will work on: We help many of our clients make sense of their large investments in data, be it building analytics solutions or machine learning applications. You will work on cutting-edge, cloud-native technologies to crunch terabytes of data into meaningful insights.

What you will do (Responsibilities):
- Collaborate with Business, Marketing and CRM teams to build highly efficient data pipelines.
- Deal with customer data and build highly efficient pipelines.
- Build insights dashboards.
- Troubleshoot data loss, data inconsistency and other data-related issues.
- Maintain backend services (written in Golang) for metadata generation.
- Provide prompt support and solutions for Product, CRM and Marketing partners.

What you bring (Skills):
- 2+ years of experience in data engineering.
- Coding experience in one of the following languages: Golang, Java, Python, C++.
- Fluent in SQL.
- Working experience with at least one of the following data-processing engines: Flink, Spark, Hadoop, Hive.

Great if you know (Skills):
- T-shaped skills are always preferred, so a passion to work across the full-stack spectrum is more than welcome.
- Exposure to infrastructure skills like Docker, Istio and Kubernetes is a plus.
- Experience building and maintaining large-scale and/or real-time complex data-processing pipelines using Flink, Hadoop, Hive, Storm, etc.

Advantage Cognologix:
- A higher degree of autonomy, startup culture and small teams.
- Opportunities to become an expert in emerging technologies.
- Remote working options for the right maturity level.
- Competitive salary and family benefits.
- Performance-based career advancement.

About Cognologix: Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals. We are a data-focused organization, helping our clients deliver their next generation of products in the most efficient, modern and cloud-native way.

Skills: Java, Python, Hadoop, Hive, Spark programming, Kafka

Thanks & regards, Cognologix HR Dept.

Job posted by Rupa Kadam

Data Science Engineer (SDE I)

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: ₹12L - ₹20L

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to empower real-time experiences for their combined >200 million end users. For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with Big Data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, Big Data analytics, and Unix and production servers. A Tier-1 college background (B.E. from the IITs, BITS Pilani, top NITs or IIITs, or an MS from Stanford, Berkeley, CMU or UW-Madison) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.

Job posted by Shobhit Agarwal

Data Engineer

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 2 - 6 years
Salary: ₹8L - ₹14L

Roles and Responsibilities:
- Develop and maintain applications with PySpark.
- Contribute to the overall design and architecture of the applications developed and deployed.
- Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc. (a tuning sketch follows below).
- Interact with business users to understand requirements and troubleshoot issues.
- Implement projects based on functional specifications.

Must-Have Skills:
- Good experience in PySpark, including DataFrame core functions and Spark SQL.
- Good experience with SQL databases; able to write queries of fair complexity.
- Excellent experience in Big Data programming for data transformation and aggregation.
- Good ETL architecture skills: business-rule processing and data extraction from a data lake into data streams for business consumption.
- Good customer communication.
- Good analytical skills.
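A minimal sketch of the performance-tuning levers mentioned above, namely executor sizing through configuration and partition tuning in code; all numbers, paths and column names are illustrative, not recommendations.

    # Illustrative only: executor sizing via config, plus shuffle-partition and
    # repartition tuning before a wide aggregation.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("tuned-etl")
        .config("spark.executor.instances", "8")        # executor sizing
        .config("spark.executor.cores", "4")
        .config("spark.executor.memory", "8g")
        .config("spark.sql.shuffle.partitions", "256")  # shuffle partition tuning
        .getOrCreate()
    )

    events = spark.read.parquet("/data/lake/events")

    # Repartition by the aggregation key to balance work before the wide operation.
    balanced = events.repartition(256, "customer_id")
    summary = balanced.groupBy("customer_id").count()
    summary.write.mode("overwrite").parquet("/data/lake/curated/event_counts")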

Job posted by Sudarshini K

Data Engineer

Founded 2002
via PayU
Location: Remote, Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: ₹5L - ₹20L

Role: Data Engineer
Company: PayU
Location: Bangalore/Mumbai
Experience: 2-5 yrs

About the Company: PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities. The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks for merchants, allowing consumers to use credit in ways that suit them, and enabling a greater number of global citizens to access credit services. Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa and South East Asia enable us to combine the expertise of high-growth companies with our own unique local knowledge and technology, ensuring that our customers have access to the best financial services.

India is the biggest market for PayU globally, and the company has already invested $400 million in this region in the last 4 years. In its next phase of growth, PayU is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We will do this through three mechanisms: build; co-build/partner; and select strategic investments. PayU supports over 350,000+ merchants and millions of consumers making payments online, with over 250 payment methods and 1,800+ payment specialists. The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and huge growth potential for merchants.

Job responsibilities:
- Design infrastructure for data, especially for (but not limited to) consumption in machine learning applications.
- Define the database architecture needed to combine and link data, and ensure integrity across different sources.
- Ensure the performance of data systems for machine learning, from customer-facing web and mobile applications using cutting-edge open-source frameworks, to highly available RESTful services, to back-end Java-based systems.
- Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data-handling techniques where needed.
- Build data pipelines, including implementing, testing and maintaining infrastructural components related to the data engineering stack (a small wrangling sketch follows below).
- Work closely with Data Engineers, ML Engineers and SREs to gather data engineering requirements and to prototype, develop, validate and deploy data science and machine learning solutions.

Requirements to be successful in this role:
- Strong knowledge of and experience with Python, Pandas, data wrangling, ETL processes, statistics, data visualisation, data modelling and Informatica.
- Strong experience with scalable compute solutions such as Kafka and Snowflake.
- Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions, etc.
- Strong experience with data engineering practices (i.e. data ingestion pipelines and ETL).
- A good understanding of machine learning methods, algorithms, pipelines, testing practices and frameworks.
- (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics or equivalent (preference: DS/AI).
- Experience designing and implementing tools that support sharing of data, code and practices across organizations at scale.
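As a small, hedged illustration of the Python/Pandas wrangling step in such a pipeline (file names, columns and the output format are hypothetical):

    # Illustrative only: read a raw extract, clean it, aggregate to a modelled table.
    import pandas as pd

    raw = pd.read_csv("payments_raw.csv", parse_dates=["created_at"])

    clean = (
        raw.dropna(subset=["merchant_id", "amount"])
           .assign(amount=lambda df: df["amount"].astype(float))
           .drop_duplicates(subset=["txn_id"])
    )

    daily_merchant = (
        clean.groupby([clean["created_at"].dt.date, "merchant_id"])["amount"]
             .sum()
             .reset_index(name="gmv")
    )

    daily_merchant.to_parquet("daily_merchant_gmv.parquet", index=False)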

Job posted by Vishakha Sonde

Data Engineer

Founded 2019
Location: Remote, Bengaluru (Bangalore), Mumbai, NCR (Delhi | Gurgaon | Noida)
Experience: 2 - 8 years
Salary: ₹8L - ₹15L

- Be an integral part of large-scale client business development and delivery engagements.
- Develop the software and systems needed for end-to-end execution on large projects.
- Work across all phases of the SDLC, and use software engineering principles to build scaled solutions.
- Build the knowledge base required to deliver increasingly complex technology projects.

- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++) and frameworks (e.g. J2EE or .NET).
- Database programming using any flavour of SQL.
- Expertise in relational and dimensional modelling, including with big data technologies (an illustrative Spark SQL sketch follows below).
- Exposure to the whole SDLC process, including testing and deployment.
- Expertise in Microsoft Azure is mandatory, including components like Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service, etc.
- Good knowledge of Python and Spark is required.
- A good understanding of how to enable analytics using cloud technology and MLOps.
- Experience with Azure infrastructure and Azure DevOps will be a strong plus.
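To ground the dimensional-modelling point, here is a hedged Spark SQL sketch of a fact-to-dimension join feeding an aggregate mart; the table, column and mount-point names are hypothetical.

    # Illustrative only: join a sales fact to a product dimension and aggregate by month.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("dimensional-model").getOrCreate()

    spark.read.parquet("/mnt/lake/fact_sales").createOrReplaceTempView("fact_sales")
    spark.read.parquet("/mnt/lake/dim_product").createOrReplaceTempView("dim_product")

    monthly_by_category = spark.sql("""
        SELECT date_format(f.sale_date, 'yyyy-MM') AS month,
               d.category,
               SUM(f.net_amount) AS revenue
        FROM fact_sales f
        JOIN dim_product d ON f.product_key = d.product_key
        GROUP BY date_format(f.sale_date, 'yyyy-MM'), d.category
    """)

    monthly_by_category.write.mode("overwrite").parquet("/mnt/lake/marts/monthly_revenue")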

Job posted by Karthik N