
Spark Jobs in Mumbai

Explore top Spark job opportunities in Mumbai at leading companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Data Engineer - AWS

Founded 2016 · Products and services
Location: Remote, Pune, Mumbai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida)
Experience: 3 - 5 years
Salary: ₹8,00,000 - ₹10,00,000

The Person:
- Articulate, high-energy, with a passion to learn and a high sense of ownership
- Able to work in a fast-paced, deadline-driven environment
- Loves technology and is highly skilled at data interpretation
- A problem solver who can see how technology and people together create stickiness for long-term engagements
- Comfortable working in challenging, complex project environments
- Naturally curious, with a passion for understanding consumer behavior
- Excellent communication skills, needed to manage a diverse slate of work and team personalities
- Able to manage multiple projects, work in ambiguity, and manage chaos

Requirements:
- Expertise in Python, PySpark, MySQL, and AWS
- 2+ years of recent experience in data engineering
- Tech. or equivalent degree in CS/CE/IT/ECE/EEE

Responsibilities:
- Build data pipelines to ingest structured and unstructured data; candidates should be comfortable implementing an end-to-end ETL pipeline
- Be comfortable with well-known JDBC connectors such as MySQL and PostgreSQL
- Be comfortable with both Spark and Python scripting
- Have extensive experience with AWS Glue, crawlers, and catalog databases; candidates should know how triggers work in AWS Glue
- Be comfortable with SQL and HQL (Hive Query Language)
- Experience with AWS Lambda and API Gateway is a plus
- Experience with CDI/CDP platforms such as Segment or Mixpanel is a plus

Good to have:
- Data Wrangler, Glue dynamic DataFrames, and PySpark workloads on EMR clusters
- AWS Step Functions
- Serialization/deserialization techniques
- Other AWS services such as DMS, Data Pipeline, and SCT
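The end-to-end extract-transform-load flow this role describes can be sketched in miniature. This is a hedged, illustrative example only: it uses Python's stdlib sqlite3 in place of a real MySQL/PostgreSQL JDBC source and writes JSON lines instead of a Spark/Glue catalog table, and the `events` table, its columns, and the paise-to-rupees transform are all hypothetical.

```python
import json
import sqlite3
import tempfile

def run_mini_etl(db_path: str, out_path: str) -> int:
    """Extract rows from a SQL source, apply a transform, load to JSON lines.

    A toy stand-in for a Spark/Glue job; not a real Glue API.
    """
    # Extract: read from the (hypothetical) source table.
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT id, amount_paise FROM events").fetchall()
    conn.close()

    # Transform + load: convert paise to rupees and write one JSON record per line.
    with open(out_path, "w") as f:
        for row_id, amount_paise in rows:
            record = {"id": row_id, "amount_inr": amount_paise / 100}
            f.write(json.dumps(record) + "\n")
    return len(rows)

# Seed a throwaway source database, then run the pipeline end to end.
db = tempfile.NamedTemporaryFile(suffix=".db", delete=False).name
out = tempfile.NamedTemporaryFile(suffix=".jsonl", delete=False).name
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE events (id INTEGER, amount_paise INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)", [(1, 9900), (2, 15000)])
conn.commit()
conn.close()

n = run_mini_etl(db, out)
```

In a real deployment the extract step would be a Glue crawler or Spark JDBC read and the load step a partitioned write to S3; the shape of the job is the same.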

Job posted by Anila Nair

Data Architect

Founded 2004 · Products and services
Location: Mumbai, Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: ₹11,00,000 - ₹20,00,000

Who we are?
Searce is a niche cloud consulting business with a futuristic tech DNA. We do new-age tech to realise the "Next" in the "Now" for our clients. We specialise in cloud data engineering, AI/machine learning, and advanced cloud infrastructure such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.

What we believe?
- Best practices are overrated: implementing best practices can only take one so far.
- Honesty and transparency: we believe in the naked truth. We do what we tell and tell what we do.
- Client partnership: a client-vendor relationship? No. We partner with clients instead, and our sales team comprises 100% of our clients.

How we work?
It's all about being happier first, and the rest follows. Searce work culture is defined by HAPPIER:
- Humble: happy people don't carry ego around. We listen to understand, not to respond.
- Adaptable: we are comfortable with uncertainty, and we accept change well, as that's what life is about.
- Positive: we are super positive about work and life in general. We love to forget and forgive; we don't hold grudges, and don't have the time or space for them.
- Passionate: we are as passionate about the great street-food vendor across the street as about Tesla's new model. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: innovate or die. We love to challenge the status quo.
- Experimental: we encourage curiosity and making mistakes.
- Responsible: driven, self-motivated, self-governing teams. We own it.

Responsibilities:
As a Data Architect, you will work with business leads, analysts, and data scientists to understand the business domain, and manage data engineers to build data products that empower better decision-making. You are passionate about the data quality of our business metrics and about solutions flexible enough to scale to broader business questions.

If you love to solve problems using your skills, come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.

What You'll Do:
- Understand the business problem and translate it into data services and engineering outcomes
- Explore new technologies and learn new techniques to solve business problems creatively
- Collaborate with many teams, engineering and business, to build better data products
- Manage the team and handle delivery of 2-3 projects

What We're Looking For:
- 4-7 years of experience, with hands-on experience in at least one programming language (Python, Java, Scala)
- Understanding of SQL is a must
- Big data (Hadoop, Hive, YARN, Sqoop)
- MPP platforms (Spark, Presto)
- Data-pipeline and scheduler tools (Oozie, Airflow, NiFi)
- Streaming engines (Kafka, Storm, Spark Streaming)
- Experience with any relational database or data warehouse
- Experience with any ETL tool
- Hands-on experience in pipeline design, ETL, and application development
- Hands-on experience with cloud platforms such as AWS and GCP
- Good communication skills and strong analytical skills
- Experience in team handling and project delivery
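The pipeline and scheduler tools this listing names (Oozie, Airflow, NiFi) all reduce to one core idea: run tasks in dependency order over a DAG. A hedged, conceptual sketch of that idea using the stdlib, with entirely made-up task names; real schedulers add retries, scheduling intervals, and distributed execution on top.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline DAG: each task maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "clean": {"extract"},
    "aggregate": {"clean"},
    "load_warehouse": {"aggregate"},
    "notify": {"load_warehouse"},
}

def execution_order(dependencies):
    """Return a valid run order for the tasks — the core of what a scheduler computes."""
    return list(TopologicalSorter(dependencies).static_order())

order = execution_order(dag)
```

An Airflow DAG definition expresses exactly this structure, just with operators (`extract >> clean >> ...`) instead of a dict.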

Job posted by Nikita Rathi

Data Engineer

Founded 2019 · Products and services
Location: Remote, Bengaluru (Bangalore), Mumbai, NCR (Delhi | Gurgaon | Noida)
Experience: 2 - 8 years
Salary: ₹8,00,000 - ₹15,00,000

Responsibilities:
- Be an integral part of large-scale client business development and delivery engagements
- Develop the software and systems needed for end-to-end execution on large projects
- Work across all phases of the SDLC, and use software engineering principles to build scaled solutions
- Build the knowledge base required to deliver increasingly complex technology projects

Requirements:
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++) and frameworks (e.g. J2EE or .NET)
- Database programming using any flavour of SQL
- Expertise in relational and dimensional modelling, including big data technologies
- Exposure across the whole SDLC process, including testing and deployment
- Expertise in Microsoft Azure is mandatory, including components such as Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service, etc.
- Good knowledge of Python and Spark is required
- Good understanding of how to enable analytics using cloud technology and MLOps
- Experience with Azure infrastructure and Azure DevOps is a strong plus

Job posted by Karthik N

Data Engineer

Founded 2015 · Products and services
Location: Mumbai
Experience: 2 - 6 years
Salary: ₹6,00,000 - ₹15,00,000

The Data Engineering team is one of the core technology teams of Lumiq.ai and is responsible for creating all the data-related products and platforms, which scale to any amount of data, users, and processing. The team also interacts with our customers to work out solutions, create technical architectures, and deliver products and solutions. If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer, or how a customer can use our products, then Lumiq is the place of opportunities.

Who are you?
- Enthusiast is your middle name. You know what's new in big data technologies and how things are moving.
- Apache is your toolbox: you have contributed to open-source projects or discussed problems with the community on several occasions.
- You use the cloud for more than just provisioning a virtual machine.
- Vim is friendly to you, and you know how to exit Nano.
- You check logs before screaming about an error.
- You are a solid engineer who writes modular code and commits in Git.
- You are a doer who doesn't say "no" without first understanding.
- You understand the value of documenting your work.
- You are familiar with the machine learning ecosystem and how you can help fellow data scientists explore data and create production-ready ML pipelines.

Eligibility:
- At least 2 years of data engineering experience
- Has interacted with customers

Must-have skills:
- Amazon Web Services (AWS): EMR, Glue, S3, RDS, EC2, Lambda, SQS, SES
- Apache Spark
- Python
- Scala
- PostgreSQL
- Git
- Linux

Good-to-have skills:
- Apache NiFi
- Apache Kafka
- Apache Hive
- Docker
- AWS certification

Job posted by Seema Pahwa

Data Engineer

Founded 2016 · Products and services
Location: Mumbai
Experience: 3 - 9 years
Salary: ₹5,00,000 - ₹12,00,000

Job Overview:
Your mission is to help lead the team towards creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing, and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex, mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit.

Responsibilities and Duties:
- As a Data Engineer, you will be responsible for developing data pipelines for numerous applications handling all kinds of data: structured, semi-structured, and unstructured. Big data knowledge, especially in Spark and Hive, is highly preferred.
- Work in a team and provide proactive technical oversight; advise development teams, fostering re-use, design for scale, stability, and operational efficiency of data/analytical solutions.

Education:
- Bachelor's degree in Computer Science or equivalent

Experience:
- Minimum 3+ years of relevant experience on production-grade projects, with hands-on, end-to-end software development
- Expertise in application, data, and infrastructure architecture disciplines
- Expert at designing data integrations using ETL and other data integration patterns
- Advanced knowledge of architecture, design, and business processes

Proficiency in:
- Modern programming languages such as Java, Python, and Scala
- Big data technologies: Hadoop, Spark, Hive, Kafka
- Writing decently optimized SQL queries
- Orchestration and deployment tools such as Airflow and Jenkins for CI/CD (optional)
- Design and development of integration solutions with Hadoop/HDFS, real-time systems, data warehouses, and analytics solutions
- Knowledge of system development lifecycle methodologies, such as waterfall and Agile
- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices
- Experience generating physical data models and the associated DDL from logical data models
- Experience developing data models for operational, transactional, and operational reporting, including developing or interfacing with data analysis, data mapping, and data rationalization artifacts
- Experience enforcing data modeling standards and procedures
- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development, and big data solutions
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals

Skills (must know):
- Core big-data concepts
- Spark (PySpark/Scala)
- At least one data integration tool such as Pentaho, NiFi, or SSIS
- Handling of various file formats
- A cloud platform: AWS/Azure/GCP
- Orchestration tool: Airflow
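"Handling of various file formats" in listings like this typically means normalising records from CSV, JSON lines, and similar sources into one common shape. A hedged stdlib sketch of that idea (the field names are hypothetical, and real Spark workloads would use built-in readers for Parquet/Avro/ORC instead):

```python
import csv
import io
import json

def load_records(payload: str, fmt: str) -> list[dict]:
    """Parse CSV or JSON-lines text into a common list-of-dicts shape."""
    if fmt == "csv":
        # Note: csv.DictReader yields all values as strings.
        return [dict(row) for row in csv.DictReader(io.StringIO(payload))]
    if fmt == "jsonl":
        return [json.loads(line) for line in payload.splitlines() if line]
    raise ValueError(f"unsupported format: {fmt}")

# Two toy payloads carrying the same (made-up) records in different formats.
csv_data = "city,orders\nMumbai,12\nPune,7\n"
jsonl_data = '{"city": "Mumbai", "orders": 12}\n{"city": "Pune", "orders": 7}\n'

from_csv = load_records(csv_data, "csv")
from_jsonl = load_records(jsonl_data, "jsonl")
```

One practical wrinkle the sketch surfaces: CSV loses type information (orders arrive as strings), which is exactly why schema-aware formats and explicit casts matter in real pipelines.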

Job posted by Aishwarya Hire

Data Engineer

Founded 2016 · Products and services
Location: Mumbai
Experience: 3 - 7 years
Salary: ₹7,00,000 - ₹20,00,000

Job Overview:
Your mission is to help lead the team towards creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing, and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex, mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit.

Responsibilities and Duties:
- As a Data Engineer, you will be responsible for developing data pipelines for numerous applications handling all kinds of data: structured, semi-structured, and unstructured. Big data knowledge, especially in Spark and Hive, is highly preferred.
- Work in a team and provide proactive technical oversight; advise development teams, fostering re-use, design for scale, stability, and operational efficiency of data/analytical solutions.

Education:
- Bachelor's degree in Computer Science or equivalent

Experience:
- Minimum 5+ years of relevant experience on production-grade projects, with hands-on, end-to-end software development
- Expertise in application, data, and infrastructure architecture disciplines
- Expert at designing data integrations using ETL and other data integration patterns
- Advanced knowledge of architecture, design, and business processes

Proficiency in:
- Modern programming languages such as Java, Python, and Scala
- Big data technologies: Hadoop, Spark, Hive, Kafka
- Writing decently optimized SQL queries
- Orchestration and deployment tools such as Airflow and Jenkins for CI/CD (optional)
- Design and development of integration solutions with Hadoop/HDFS, real-time systems, data warehouses, and analytics solutions
- Knowledge of system development lifecycle methodologies, such as waterfall and Agile
- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices
- Experience generating physical data models and the associated DDL from logical data models
- Experience developing data models for operational, transactional, and operational reporting, including developing or interfacing with data analysis, data mapping, and data rationalization artifacts
- Experience enforcing data modeling standards and procedures
- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development, and big data solutions
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals

Skills (must know):
- Core big-data concepts
- Spark (PySpark/Scala)
- At least one data integration tool such as Pentaho, NiFi, or SSIS
- Handling of various file formats
- A cloud platform: AWS/Azure/GCP
- Orchestration tool: Airflow

Job posted by Aishwarya Hire

Core Java Developer

Founded 2003 · Products and services
Location: Mumbai
Experience: 5 - 10 years
Salary: ₹12,00,000 - ₹20,00,000

We are looking for a Java developer for one of our major investment banking clients who can take ownership of the whole end-to-end delivery, performing analysis, design, coding, testing, and maintenance of large-scale, distributed applications. Please find the JD below for your reference.

Job Profile: Java Developer
Location: Mumbai

Description:
A core Java developer is required for a Tier 1 investment bank, supporting the Delta One Structured Products IT group. This is a global front-office team that supports the global OTC equity swap portfolio, single-name, and index derivative businesses. We are designing a complete restructure of the equity swaps trading platform, and this particular role is within the core cash flow and valuations area. The role will require the candidate to work closely with the cash flow engines team to solve problems that combine both finance and technology. This is an exciting hands-on role for a self-starter with a thirst for new challenges as well as new technologies. The candidate should possess good analytical skills, strong software engineering skills, and a logical approach to problem-solving; be able to work in a fast-paced environment, liaising with demanding stakeholders to understand complex requirements; and be able to prioritize work under pressure with minimal supervision. The candidate should be a problem solver, bringing positivity and enthusiasm to thinking about and offering potential solutions for architectural considerations.

Position Profile:
We are looking for someone to own problems and demonstrate leadership and responsibility for the delivery of new features. As part of the development cycle, you would be expected to write quality unit tests, supply documentation where relevant for new feature build-outs, and be involved in the test cycle (UAT, integration, regression) for the delivery of new features and fixing of bugs. Although the role is predominantly Java, we require someone who is flexible with the development environment: some days you might be writing Java, and other days you might be fixing stored procedures or Perl scripts. You would be expected to take part in the Level 3 production support rota, which is shared between our developers on a monthly cycle, and occasionally help with weekend deployment activities to deploy and verify any code changes you have been involved in.

Team Profile:
The team and role are ideal for someone looking for a strong career development path with many opportunities to grow, learn, and develop. The role requires someone who is flexible and able to respond to a dynamic business environment, adaptable to working across multiple technologies and disciplines, with a focus on delivering quality solutions for the business in a timely fashion. This role suits people experienced in complex data domains.

Required Skills:
- Experience with agile and Scrum methodologies
- Core Java
- Unix shell scripting
- SQL and relational databases such as DB2
- Integration technologies: MQ, XML, SOAP, JSON, Protocol Buffers, Spring
- Enterprise architecture patterns, GoF design patterns
- Build and agile tooling: Ant, Gradle/Maven, Sonar, Jenkins/Hudson, Git/Perforce
- Sound understanding of object-oriented analysis, design, and programming
- Strong communication and stakeholder management skills
- Scala/Spark or big data experience is an added advantage
- Good experience with databases
- Excellent communication and problem-solving skills

Desired Skills:
- Experience in banking and regulatory reporting (SFTR, MAS/ASIC, etc.)
- Knowledge of OTC, listed, and cash products
- Domain-driven design and microservices

Job posted by Risha P

Hadoop Developer

Founded 2016 · Products and services
Location: Mumbai
Experience: 3+ years
Salary: ₹4,00,000 - ₹15,00,000

Looking for big data developers in Mumbai.

Job posted by Sheela P

Technical Architect/CTO

Founded 2016 · Products and services
Location: Mumbai
Experience: 5 - 11 years
Salary: ₹15,00,000 - ₹30,00,000

ABOUT US:
Arque Capital is a FinTech startup working with AI in finance, in domains like asset management (hedge funds, ETFs, and structured products), robo-advisory, bespoke research, alternate brokerage, and other applications of technology and quantitative methods in big finance.

PROFILE DESCRIPTION:
1. Get the "tech" in order for the hedge fund: help answer fundamental questions about the technology blocks to be used and the choice of one platform or technology over another, and help the team visualize the product with the available resources and assets.
2. Build, manage, and validate a tech roadmap for our products.
3. Architecture practices: at startups, the dynamics change very fast, so making sure that best practices are defined and followed by the team is very important. The CTO may have to play garbage collector and clean the code from time to time; reviewing code quality is an important activity the CTO should keep up.
4. Build a progressive learning culture and establish a predictable model of envisioning, designing, and developing products.
5. Drive product innovation through research and continuous improvement.
6. Build out the technological infrastructure for the hedge fund.
7. Hire and build out the technology team.
8. Set up and manage the entire IT infrastructure, hardware as well as cloud.
9. Ensure company-wide security and IP protection.

REQUIREMENTS:
- Computer science engineer from Tier-I colleges only (IIT, IIIT, NIT, BITS, DHU, Anna University, MU)
- 5-10 years of relevant technology experience (no infra or database persons)
- Expertise in Python and C++ (3+ years minimum)
- 2+ years of experience building and managing big data projects
- Experience with technical design and architecture (1+ years minimum)
- Experience with high-performance computing (optional)
- Experience as a tech lead, IT manager, director, VP, or CTO
- 1+ years of experience managing cloud computing infrastructure (Amazon AWS preferred) (optional)
- Ability to work in an unstructured environment
- Looking to work in a small, startup-type environment based out of Mumbai

COMPENSATION: Co-founder status and equity partnership

Job posted by Hrishabh Sanghvi

Hadoop Developer

Founded 2009 · Products and services
Location: Mumbai
Experience: 5 - 14 years
Salary: ₹8,00,000 - ₹18,00,000

US-based multinational company; hands-on Hadoop experience required.

Job posted by Neha Mayekar

Product Tech Lead

Founded 2007 · Products and services
Location: Pune, Mumbai
Experience: 3 - 9 years
Salary: ₹5,00,000 - ₹14,00,000

Ixsight Technologies is an innovative IT company with strong intellectual property. Ixsight is focused on creating customer data value through its solutions for identity management, locational analytics, address science, and customer engagement, and is also adapting its solutions to big data and the cloud. We are in the process of creating new solutions across platforms. Ixsight has served over 80 clients in India, across various end-user applications in the traditional BFSI and telecom sectors; more recently we have been catering to new-generation verticals such as hospitality and e-commerce. Ixsight has been featured in Gartner's India Technology Hype Cycle and has been recognised by both clients and peers for pioneering, excellent solutions. If you wish to play a direct part in creating new products, building IP, and being part of product creation, Ixsight is the place.

Job posted by Uma Venkataraman
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No spam.