
Spark Jobs

Explore top Spark job opportunities at leading companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Senior Specialist - BigData Engineering

Founded 2000
Location: Mumbai, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 15 - 35 lacs/annum

Role Brief: 6+ years of demonstrable experience designing technological solutions to complex data problems, and developing and testing modular, reusable, efficient and scalable code to implement those solutions.

About Fractal & the team: Fractal Analytics helps leading Fortune 500 companies leverage Big Data, analytics and technology to drive smarter, faster and more accurate decisions in every aspect of their business. Our Big Data capability team is hiring technologists who can produce beautiful, functional code to solve complex analytics problems. If you are an exceptional developer who loves to push the boundaries to solve complex business problems with innovative solutions, we would like to talk with you.

Job Responsibilities:
- Provide technical leadership in the Big Data space (Hadoop stack: MapReduce, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; NoSQL stores: Cassandra, HBase, etc.) across Fractal, and contribute to open-source Big Data technologies.
- Visualize and evangelize next-generation infrastructure in the Big Data space (batch, near real-time and real-time technologies).
- Evaluate and recommend a Big Data technology stack that aligns with the company's technology.
- Show passion for continuous learning: experiment with, apply and contribute to cutting-edge open-source technologies and software paradigms.
- Drive significant technology initiatives end to end and across multiple layers of architecture.
- Provide strong technical expertise (performance, application design, stack upgrades) to lead Platform Engineering.
- Define and drive best practices for the Big Data stack, and evangelize them across teams and BUs.
- Drive operational excellence through root-cause analysis and continuous improvement of Big Data technologies and processes, and contribute back to the open-source community.
- Provide technical leadership and be a role model to data engineers pursuing a technical career path in engineering.
- Provide and inspire innovations that fuel the growth of Fractal as a whole.

Experience (must have; ideally this would include work on the following technologies):
- Expert-level proficiency in at least one of Java, C++ or Python (preferred); Scala knowledge is a strong advantage.
- Strong understanding of and experience with distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN, MapReduce, HDFS) and associated technologies -- one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc. Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) is a strong advantage.
- Operating knowledge of cloud computing platforms (AWS, especially the EMR, EC2, S3 and SWF services and the AWS CLI).
- Experience working in a Linux environment and with command-line tools, including shell/Python scripting for automating common tasks.
- Ability to work in a team in an agile setting, familiarity with JIRA and a clear understanding of how Git works.
- A technologist who loves to code and design.
In addition, the ideal candidate would have great problem-solving skills and the ability and confidence to hack their way out of tight corners.

Relevant Experience: Java, Python or C++ expertise; Linux environment and shell scripting; distributed computing frameworks (Hadoop or Spark); cloud computing platforms (AWS).

Good to have: a statistical or machine-learning DSL such as R; distributed and low-latency (streaming) application architecture; row-store distributed DBMSs such as Cassandra; familiarity with API design.

Qualification: B.E/B.Tech/M.Tech in Computer Science or a related technical degree, or equivalent.
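An illustrative aside (not part of the original posting): a minimal PySpark batch job of the kind the Hadoop/Spark requirements above point to. The input path, column names and output location are hypothetical placeholders.

    # Minimal PySpark batch job: aggregate daily event counts from HDFS and write Parquet.
    # Paths and column names below are invented for illustration.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("daily-event-counts")   # appears in the YARN / Spark UI
             .getOrCreate())

    events = spark.read.json("hdfs:///data/events/2019-03-01/")   # hypothetical input

    daily_counts = (events
                    .withColumn("event_date", F.to_date("timestamp"))
                    .groupBy("event_date", "event_type")
                    .agg(F.count("*").alias("events")))

    daily_counts.write.mode("overwrite").parquet("hdfs:///warehouse/daily_event_counts/")
    spark.stop()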

Job posted by Jesvin Varghese

Data Engineer

Founded 2018
Location: Bengaluru (Bangalore)
Experience: 5 - 14 years
Salary: 10 - 28 lacs/annum

ITTStar Global Services is a subsidiary unit in Bengaluru with its head office in Atlanta, Georgia. We are primarily into data management and data life cycle solutions, including machine learning and artificial intelligence. For further info, visit ITTstar.com. We are looking for enthusiastic and experienced data engineers to be part of our bustling team of professionals at our Bengaluru location.

JOB DESCRIPTION:
1. Experience in Spark & Big Data is mandatory.
2. Strong programming skills in Python / Java / Scala / Node.js.
3. Hands-on experience handling multiple data types: JSON / XML / delimited / unstructured.
4. Hands-on experience working with at least one relational and/or NoSQL database.
5. Knowledge of SQL queries and data modeling.
6. Hands-on experience working on ETL use cases, either on-premise or in the cloud.
7. Experience with any cloud platform (AWS, Azure, GCP, Alibaba).
8. Knowledge of one or more AWS services such as Kinesis, EC2, EMR, Hive integration, Athena, Firehose, Lambda, S3, Glue Crawler, Redshift or RDS is a plus.
9. Good communication skills and self-driven: should be able to deliver projects with minimum instructions from the client.
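As an illustrative sketch only (not part of the posting): a small PySpark ETL step that handles two of the data types mentioned above (JSON and delimited) and lands the result in S3. Bucket names, file layouts and the join key are assumptions.

    # Read a JSON feed and a pipe-delimited file, join them, and write curated Parquet.
    # All paths and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("multi-format-etl").getOrCreate()

    orders = spark.read.json("s3a://raw-bucket/orders/")          # JSON input
    customers = (spark.read
                 .option("header", "true")
                 .option("delimiter", "|")
                 .csv("s3a://raw-bucket/customers.psv"))          # delimited input

    enriched = orders.join(customers, on="customer_id", how="left")
    enriched.write.mode("append").parquet("s3a://curated-bucket/orders_enriched/")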

Job posted by Thatchinamoorthy Arumugam

Software Developer

via IQVIA
Founded 1969
Location: Bengaluru (Bangalore), Kochi (Cochin)
Experience: 2 - 7 years
Salary: 5 - 25 lacs/annum

Job Skill Requirements:
• 4+ years of experience building and managing complex products/solutions
• 2+ years of experience in DW/ELT/ETL technologies (nice to have)
• 3+ years of hands-on development experience using Big Data technologies such as Hadoop and Spark
• 3+ years of hands-on development experience using Big Data ecosystem components such as Hive, Impala, HBase, Sqoop, Oozie, etc.
• Proficient-level programming in Scala
• Good to have: hands-on experience building web services in a Python/Scala stack
• Good to have: experience developing RESTful web services
• Knowledge of web technologies and protocols (NoSQL/JSON/REST/JMS)

Job posted by Ambili Sasidharan

Big Data Engineer

Founded 2015
Location: Navi Mumbai, Noida, NCR (Delhi | Gurgaon | Noida)
Experience: 1 - 2 years
Salary: 4 - 10 lacs/annum

Job Requirements:
- Installation, configuration and administration of Big Data components (including Hadoop/Spark) for batch and real-time analytics and data hubs
- Capable of processing large sets of structured, semi-structured and unstructured data
- Able to assess business rules, collaborate with stakeholders and perform source-to-target data mapping, design and review
- Familiar with data architecture: data ingestion pipeline design, Hadoop information architecture, data modeling and data mining, machine learning and advanced data processing
- Optional: a visual communicator, able to convert and present data in easily comprehensible visualizations using tools like D3.js and Tableau
- Enjoys being challenged and solving complex problems on a daily basis
- Proficient in executing efficient and robust ETL workflows
- Able to work in teams and collaborate with others to clarify requirements
- Able to tune Hadoop solutions to improve performance and the end-user experience
- Strong coordination and project management skills to handle complex projects
- Engineering background

Job posted by Sneha Pandey

Data Scientist

Founded 2012
Location: Chennai, Hyderabad
Experience: 3 - 7 years
Salary: 15 - 45 lacs/annum

Software Engineer – ML at Indix provides an opportunity to design and build systems that crunch large amounts of data every day.

What We're Looking For:
- 3+ years of experience
- Ability to propose hypotheses and design experiments in the context of specific problems; should come from a strong engineering background
- Good overlap with the Indix data tech stack, such as Hadoop, MapReduce, HDFS, Spark, Scalding, Scala/Python/C++
- Dedication and diligence in understanding the application domain, collecting/cleaning data and conducting experiments
- Creativity in model and algorithm development
- An obsession to develop algorithms/models that directly impact business
- Master's/PhD in Computer Science/Statistics is a plus

Job Expectations:
- Experience working in text mining and Python libraries like scikit-learn, numpy, etc.
- Collect relevant data from production systems / use crawling and parsing infrastructure to put together data sets
- Survey academic literature and identify potential approaches for exploration
- Craft, conduct and analyze experiments to evaluate models/algorithms
- Communicate findings and take algorithms/models to production with end-to-end ownership
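For illustration only (not part of the posting): a toy text-classification sketch in the spirit of the "text mining ... scikit-learn, numpy" requirement. The product titles, labels and expected output are made up.

    # Tiny TF-IDF + logistic regression pipeline over made-up product titles.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    titles = ["apple iphone 7 32gb", "samsung 55 inch led tv",
              "nike running shoes size 10", "adidas football studs"]
    labels = ["electronics", "electronics", "footwear", "footwear"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(titles, labels)
    print(model.predict(["sony bravia 43 inch tv"]))   # expected: ['electronics']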

Job posted by Sri Devi

Senior Software Engineer - Data Engineering

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 2 - 8 years
Salary: 20 - 40 lacs/annum

At Hotstar, we have over 100 million users and capture close to a billion clickstream messages daily. The engineering team at Hotstar is at the centre of the action and is responsible for creating the unmatched user experience. Our engineers solve real-life complex problems and create compelling experiences for our customers. As a Data Engineer in the Data Infrastructure team, you will build platforms and tools that churn through, process & analyze petabytes of data and lead a robust team. You will work on technologies such as Apache Kafka, Apache Spark, Aerospike and Redshift to build a scalable infrastructure that delivers recommendations to our users in real time. The pace of our growth is incredible – if you want to tackle hard and interesting problems at scale, and create an impact within an entrepreneurial environment, join us!

Your Key Responsibilities
• You will work closely with Software Engineers & ML engineers to build the data infrastructure that fuels the needs of multiple teams, systems and products
• You will automate manual processes, optimize data delivery and build the infrastructure required for optimal extraction, transformation and loading of data for a wide variety of use cases using SQL/Spark
• You will build stream-processing pipelines and tools to support a vast variety of analytics and audit use cases
• You will continuously evaluate relevant technologies, influence and drive architecture and design discussions
• You will work in a cross-functional team and collaborate with peers during the entire SDLC

What to Bring
• BE/B.Tech/BS/MS/PhD in Computer Science or a related field (ideal)
• Minimum 2+ years of work experience building data warehouse and BI systems
• Strong Java skills
• Experience in either Go or Python (plus to have)
• Experience in Apache Spark, Hadoop, Redshift, Athena
• Strong understanding of database and storage fundamentals
• Experience with the AWS stack
• Ability to create data-flow designs and write complex SQL / Spark-based transformations
• Experience working on real-time streaming data pipelines using Spark Streaming or Storm
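Purely as an illustration of the Kafka + Spark streaming stack named above (not part of the posting): a minimal Structured Streaming job. The broker address and topic are hypothetical, and the spark-sql-kafka connector must be supplied at submit time.

    # Count clickstream events per one-minute window from a Kafka topic and print to console.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("clickstream-counts").getOrCreate()

    clicks = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
              .option("subscribe", "clickstream")                 # hypothetical topic
              .load())

    per_minute = (clicks
                  .groupBy(F.window("timestamp", "1 minute"))     # Kafka source exposes a timestamp column
                  .count())

    (per_minute.writeStream
     .outputMode("complete")
     .format("console")
     .start()
     .awaitTermination())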

Job posted by Deepayan Mallick

Tech Lead Backend

Founded 2016
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 15 - 50 lacs/annum

RESPONSIBILITIES:
1. Full ownership of tech, right from driving product decisions to architecture to deployment.
2. Develop a cutting-edge user experience and build cutting-edge technology solutions like instant messaging on poor networks, live discussions, live videos and optimal matching.
3. Use billions of data points to build a user personalization engine.
4. Build a data network effects engine to increase engagement & virality.
5. Scale the systems to billions of daily hits.
6. Deep dive into performance, power management, memory optimization & network connectivity optimization for the next billion Indians.
7. Orchestrate complicated workflows, asynchronous actions, and higher-order components.
8. Work directly with the Product and Design teams.

REQUIREMENTS:
1. Should have hacked some (computer or non-computer) system to your advantage.
2. Built and managed systems with a scale of 10Mn+ daily hits.
3. Strong architectural experience.
4. Strong experience in memory management, performance tuning and resource optimization.
5. PREFERENCE: if you are a woman, an ex-entrepreneur, or have a CS bachelor's degree from IIT/BITS/NIT.
P.S. If you don't fulfill one of the requirements, you need to be exceptional in the others to be considered.

Job posted by Shubham Maheshwari

Bigdata Lead

Founded 1997
Location: Pune
Experience: 2 - 5 years
Salary: 1 - 18 lacs/annum

Description
- Deep experience with and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet and MapReduce.
- Strong understanding of development languages including Java, Python, Scala and shell scripting.
- Expertise in Apache Spark 2.x framework principles and usage.
- Proficient in developing Spark batch and streaming jobs in Python, Scala or Java.
- Proven experience in performance tuning of Spark applications, both from the application-code and the configuration perspective.
- Proficient in Kafka and its integration with Spark.
- Proficient in Spark SQL and data warehousing techniques using Hive.
- Very proficient in Unix shell scripting and in operating on Linux.
- Knowledge of any cloud-based infrastructure.
- Good experience in tuning Spark applications and performance improvements.
- Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities.
- Experience with best practices of software development: version control systems, automated builds, etc.
- Experienced in, and able to lead, the following phases of the Software Development Life Cycle on any project: feasibility planning, analysis, development, integration, test and implementation.
- Capable of working within a team or as an individual.
- Experience creating technical documentation.
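A hedged sketch of the Spark SQL / Hive / configuration-tuning combination this posting describes (not taken from the posting itself); the database, table names and configuration values are assumptions.

    # Hive-backed Spark SQL batch job with two example configuration-level tuning knobs.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hive-warehouse-job")
             .config("spark.sql.shuffle.partitions", "400")   # tune shuffle parallelism
             .config("spark.executor.memory", "8g")           # tune executor sizing
             .enableHiveSupport()                              # read/write Hive metastore tables
             .getOrCreate())

    spark.sql("""
        SELECT region, SUM(amount) AS total_sales
        FROM   sales.transactions
        WHERE  dt = '2019-03-01'
        GROUP  BY region
    """).write.mode("overwrite").saveAsTable("sales.daily_region_totals")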

Job posted by Sandeep Chaudhary

Sr. Data Analyst

Founded 1997
Location: Pune
Experience: 6 - 11 years
Salary: 1 - 12 lacs/annum

Description
Requirements:
- Overall experience of 10 years, with a minimum of 6 years of data analysis experience
- MBA Finance or a similar background profile
- Ability to lead projects and work independently
- Must be able to write complex SQL and perform cohort analysis, comparative analysis, etc.
- Experience working directly with business users to build reports and dashboards and to answer business questions with data
- Experience doing analysis using Python and Spark is a plus
- Experience with MicroStrategy or Tableau is a plus
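To make the "complex SQL / cohort analysis" requirement concrete, here is a hedged sketch run through Spark SQL; the orders table and its columns are invented for illustration.

    # Group users by the month of their first order and count how many are active
    # in each later month (a basic retention cohort).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("cohort-analysis").enableHiveSupport().getOrCreate()

    cohorts = spark.sql("""
        WITH first_orders AS (
            SELECT user_id, DATE_TRUNC('month', MIN(order_date)) AS cohort_month
            FROM   orders
            GROUP  BY user_id
        )
        SELECT f.cohort_month,
               CAST(MONTHS_BETWEEN(DATE_TRUNC('month', o.order_date), f.cohort_month) AS INT) AS month_offset,
               COUNT(DISTINCT o.user_id) AS active_users
        FROM   orders o
        JOIN   first_orders f USING (user_id)
        GROUP  BY 1, 2
        ORDER  BY 1, 2
    """)
    cohorts.show()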

Job posted by Sandeep Chaudhary

Senior Software Engineer - Backend

Founded 2015
Location: Pune
Experience: 5 - 10 years
Salary: 17 - 25 lacs/annum

Responsibilities
- Ensure timely and top-quality product delivery
- Ensure that the end product is fully and correctly defined and documented
- Ensure implementation/continuous improvement of formal processes to support product development activities
- Drive the architecture/design decisions needed to achieve cost-effective and high-performance results
- Conduct feasibility analysis, produce functional and design specifications of proposed new features
- Provide helpful and productive code reviews for peers and junior members of the team
- Troubleshoot complex issues discovered in-house as well as in customer environments

Qualifications
- Strong computer science fundamentals in algorithms, data structures, databases, operating systems, etc.
- Expertise in Java, object-oriented programming, design patterns
- Experience in coding and implementing scalable solutions in a large-scale distributed environment
- Working experience in a Linux/UNIX environment is good to have
- Experience with relational databases and database concepts, preferably MySQL
- Experience with SQL and Java optimization for real-time systems
- Familiarity with version control systems (Git) and build tools like Maven
- Excellent interpersonal, written, and verbal communication skills
- BE/B.Tech./M.Sc./MCS/MCA in Computers or equivalent

Job posted by Sourabh Gandhe

Data Engineer

Founded 2014
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 1 - 5 years
Salary: 6 - 18 lacs/annum

We are looking for a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer.
- Experience with Big Data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc.
- Experience in architecting data ingestion, storage and consumption models.
- Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
- Knowledge of various ETL tools & techniques.

Job posted by Raghavendra Mishra

Senior/Lead Data Scientist for Applift @ Bangalore (4-8yrs Experience)

Founded 2012
Location: Bengaluru (Bangalore)
Experience: 3 - 10 years
Salary: 15 - 40 lacs/annum

AppLift is a data-driven technology company that empowers mobile app advertisers to acquire and re-engage quality users at scale on a performance basis. AppLift's programmatic media-buying platform DataLift provides access to all automated supply sources in the market, reaching over a billion users. The technology leverages first- and third-party data to optimize media buys across all stages of the conversion funnel and, through its proprietary LTV optimization technology, enables ROI-maximized user acquisition. AppLift is trusted by 500+ leading global advertisers across all verticals, such as King, Zynga, OLX, Glu Mobile, Myntra, Paltalk, Nexon, and Tap4Fun.

Experience: 4-8 yrs

Your Responsibilities:
- You are hands-on with data, implementation and methodologies
- You are able to implement, measure and evaluate different algorithmic approaches, with great problem-solving skills and a strong theoretical foundation
- You have expertise in implementing machine-learning and algorithmic concepts, proficiency in coding and an understanding of engineering trade-offs
- You innovate and develop approaches to improve click/conversion rates, eliminate impression/click fraud and enhance bidding strategies
- You perform statistical analysis and data mining to model user behaviour and improve ad relevance
- You are able to work independently and deliver practical results on real data with high accountability

Our requirements:
- 4+ years of applied research or industry work experience
- Degree in statistics, applied mathematics, machine learning, or other highly quantitative experience
- Experience working with technologies and tools like R, GraphLab, Hadoop, Hive, Spark, Pig
- Coding proficiency in at least one language like Python or Java
- Prior experience in ad tech is a plus

What do we offer?
- You get valuable insights into mobile marketing/entrepreneurship and have a high impact on shaping the expansion and success of AppLift across India
- Profit from working with European serial entrepreneurs who co-founded over 10 successful companies within the last 8 years, get access to a well-established network and build your own top-tier network & reputation
- Learn and grow in an environment characterized by flat hierarchy, entrepreneurial drive and fun
- You experience an excellent learning culture
- Competitive remuneration package and much more!

If interested, mail your resume to divya.pushpa<at>applift.com. Candidates from Tier 1 colleges are preferred.

Job posted by Divya Pushpa

Data Architect

Founded 2011
Location: Hyderabad
Experience: 9 - 13 years
Salary: 10 - 23 lacs/annum

Data Architect who will lead a team of 5 members. Required skills: Spark, Scala, Hadoop.

Job posted by Sravanthi Alamuri

Data Engineer

Founded 2011
Location: Ahmedabad
Experience: 1 - 5 years
Salary: 5 - 15 lacs/annum

Byte Prophecy is looking for Data Engineers to build a critical piece of the enterprise data pipeline in our platform, MonitorFirst.

Candidates should:
- Have at least 1-2 years of relevant experience in any of the following technologies in our data pipeline: ETL tools, Kafka, Spark, Cassandra, Scala or Python
- Be hands-on and proficient in Java, Scala and SQL
- Have strong fundamentals in data structures, algorithms and distributed systems
- Preferably be experienced in product engineering and production-ready data pipelines

Candidates will:
- Work in an agile environment in small, focused teams
- Need to be proactive and goal-oriented
- Enjoy working as a team player

About Byte Prophecy
We are an enterprise analytics platform company that helps some of the largest companies in India make key business decisions every day. As a unique single platform encompassing collection, transformation, processing, augmented analytics and automated alerts, we've been getting great traction from key stakeholders in the enterprise ecosystem. For our next round of growth, we are looking to hire Data Engineers and Product Analysts for our office in Ahmedabad. Please send your CVs to work@byteprophecy.com. Thanks!

Job posted by Adityavijay Rathore

Java Application Developer (4+ Yrs of Workex), Graph Based Product Dev

Founded 2014
Location: Pune
Experience: 4 - 9 years
Salary: 4 - 12 lacs/annum

We are looking to hire passionate Java techies who will be comfortable learning and working on Java and any open-source frameworks & technologies. She/he should be a 100% hands-on person on technology skills and interested in solving complex analytics use cases. We are working on a complete stack platform which has already been adopted by some very large enterprises across the world. Candidates with prior experience of having worked in a typical R&D environment and/or product-based companies with a dynamic work environment will have an additional edge. We currently work on some of the latest technologies like Cassandra, Hadoop, Apache Solr, Spark and Lucene, and some core Machine Learning and AI technologies. Even though prior knowledge of these skills is not mandatory at all for selection, you would be expected to learn new skills on the job.

Job posted by Neha Ambastha

Hadoop Developer

Founded 2016
Location: Mumbai
Experience: 3 - 20+ years
Salary: 4 - 15 lacs/annum

Looking for Big Data developers in Mumbai.

Job posted by Sheela P

Hadoop Lead Engineers

Founded 2012
Location: Bengaluru (Bangalore)
Experience: 7 - 9 years
Salary: 27 - 34 lacs/annum

Position Description
- Assist in providing guidance to small groups of two to three engineers, including offshore associates, for assigned engineering projects
- Demonstrate up-to-date expertise in software engineering and apply it to the development, execution, and improvement of action plans
- Generate weekly, monthly and yearly reports using JIRA and open-source tools and provide updates to leadership teams
- Proactively identify issues and identify the root cause of critical issues
- Work with cross-functional teams, set up KT sessions and mentor team members
- Coordinate with the Sunnyvale and Bentonville teams
- Model compliance with company policies and procedures and support the company mission, values, and standards of ethics and integrity
- Provide and support the implementation of business solutions
- Provide support to the business; troubleshoot business and production issues and provide on-call support

Minimum Qualifications
- BS/MS in Computer Science or a related field
- 8+ years' experience building web applications
- Solid understanding of computer science principles
- Excellent soft skills
- Understanding of major algorithms like searching and sorting
- Strong skills in writing clean code using languages like Java and J2EE technologies
- Understanding of how to engineer RESTful APIs and microservices, and knowledge of major software patterns like MVC, Singleton, Facade and Business Delegate
- Deep knowledge of web technologies such as HTML5, CSS, JSON
- Good understanding of continuous integration tools and frameworks like Jenkins
- Experience working in Agile environments, like Scrum and Kanban
- Experience dealing with performance tuning for very large-scale apps
- Experience writing scripts using Perl, Python and shell scripting
- Experience writing jobs using open-source cluster computing frameworks like Spark
- Database design experience: relational (MySQL, Oracle), SOLR, NoSQL (Cassandra, MongoDB) and Hive
- Aptitude for writing clean, succinct and efficient code
- Attitude to thrive in a fun, fast-paced, start-up-like environment

Job posted by Sampreetha Pai

Data Science Engineer (SDE I)

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: 12 - 20 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined >200 million end users.

For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data & algorithms, loves to play with big-data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL DBs, big data analytics and handling Unix and production servers.

A Tier-1 college (BE from IITs, BITS-Pilani, top NITs, IIITs, or MS from Stanford, Berkeley, CMU, UW–Madison) or an exceptionally bright work history is a must.

Let us know if this interests you and you would like to explore the profile further.

Job posted by Shobhit Agarwal

Senior Data Engineer (SDE II)

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 2 - 7 years
Salary: 15 - 30 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined >200 million end users.

The founding team consists of BITS Pilani alumni with experience of creating global startup success stories. The core team we are building consists of some of the best minds in India in artificial intelligence research and data engineering.

We are looking for multiple different roles with 2-7 years of research / large-scale production implementation experience with:
- Rock-solid algorithmic capabilities
- Production deployments for massively large-scale systems, real-time personalization, big data analytics, and semantic search
- Or credible research experience in innovating new ML algorithms and neural nets

A GitHub profile link is highly valued. For the right fit into the Couture.ai family, compensation is no bar.

Job posted by Shobhit Agarwal

Lead Data Engineer (SDE III)

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: 25 - 55 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined >200 million end users.

For this role, a credible display of innovation in past projects is a must. We are looking for hands-on leaders in data engineering with 5-11 years of research / large-scale production implementation experience with:
- Proven expertise in Spark, Kafka, and the Hadoop ecosystem
- Rock-solid algorithmic capabilities
- Production deployments for massively large-scale systems, real-time personalization, big data analytics and semantic search
- Expertise in containerization (Docker, Kubernetes) and cloud infrastructure, preferably OpenStack
- Experience with Spark ML, TensorFlow (& TF Serving), MXNet, Scala, Python, NoSQL DBs, Kubernetes, ElasticSearch/Solr in production

A Tier-1 college (BE from IITs, BITS-Pilani, IIITs, top NITs, DTU, NSIT, or MS from Stanford, UC, MIT, CMU, UW–Madison, ETH, top global schools) or an exceptionally bright work history is a must.

Let us know if this interests you and you would like to explore the profile further.

Job posted by Shobhit Agarwal

Senior Backend Developer

Founded 2016
Location: Bengaluru (Bangalore)
Experience: 1 - 7 years
Salary: 15 - 40 lacs/annum

RESPONSIBILITIES:
1. Full ownership of tech, right from driving product decisions to architecture to deployment.
2. Develop a cutting-edge user experience and build cutting-edge technology solutions like instant messaging on poor networks, live discussions, live videos and optimal matching.
3. Use billions of data points to build a user personalisation engine.
4. Build a data network effects engine to increase engagement & virality.
5. Scale the systems to billions of daily hits.
6. Deep dive into performance, power management, memory optimisation & network connectivity optimisation for the next billion Indians.
7. Orchestrate complicated workflows, asynchronous actions, and higher-order components.
8. Work directly with the Product and Design teams.

REQUIREMENTS:
1. Should have hacked some (computer or non-computer) system to your advantage.
2. Built and managed systems with a scale of 10Mn+ daily hits.
3. Strong architectural experience.
4. Strong experience in memory management, performance tuning and resource optimisations.
5. PREFERENCE: if you are a woman, an ex-entrepreneur, or have a CS bachelor's degree from IIT/BITS/NIT.
P.S. If you don't fulfil one of the requirements, you need to be exceptional in the others to be considered.

Job posted by Shubham Maheshwari

Technical Architect/CTO

Founded 2016
Location: Mumbai
Experience: 5 - 11 years
Salary: 15 - 30 lacs/annum

ABOUT US: Arque Capital is a FinTech startup working with AI in Finance, in domains like Asset Management (hedge funds, ETFs and structured products), robo-advisory, bespoke research, alternate brokerage, and other applications of technology & quantitative methods in big finance.

PROFILE DESCRIPTION:
1. Get the "Tech" in order for the hedge fund - help answer fundamentals of the technology blocks to be used and the choice of one platform/tech over another, and help the team visualize the product with the available resources and assets.
2. Build, manage, and validate a tech roadmap for our products.
3. Architecture practices - at startups, the dynamics change very fast. Making sure that best practices are defined and followed by the team is very important. The CTO may have to be the garbage guy and clean up the code from time to time; reviewing code quality is an important activity the CTO should follow.
4. Build a progressive learning culture and establish a predictable model of envisioning, designing and developing products.
5. Product innovation through research and continuous improvement.
6. Build out the technological infrastructure for the hedge fund.
7. Hire and build out the technology team.
8. Set up and manage the entire IT infrastructure - hardware as well as cloud.
9. Ensure company-wide security and IP protection.

REQUIREMENTS:
- Computer Science Engineer from Tier-I colleges only (IIT, IIIT, NIT, BITS, DHU, Anna University, MU)
- 5-10 years of relevant technology experience (no infra or database persons)
- Expertise in Python and C++ (3+ years minimum)
- 2+ years of experience building and managing Big Data projects
- Experience with technical design & architecture (1+ years minimum)
- Experience with high-performance computing - OPTIONAL
- Experience as a Tech Lead, IT Manager, Director, VP, or CTO
- 1+ years of experience managing cloud computing infrastructure (Amazon AWS preferred) - OPTIONAL
- Ability to work in an unstructured environment
- Looking to work in a small, start-up type environment based out of Mumbai

COMPENSATION: Co-founder status and equity partnership.

Job posted by Hrishabh Sanghvi

Hadoop Developer

Founded 2009
Location: Mumbai
Experience: 5 - 14 years
Salary: 8 - 18 lacs/annum

US-based multinational company. Hands-on Hadoop experience.

Job posted by Neha Mayekar

Big Data Evangelist

Founded 2016
Location: Noida, Hyderabad, NCR (Delhi | Gurgaon | Noida)
Experience: 2 - 6 years
Salary: 4 - 12 lacs/annum

Looking for a technically sound and excellent trainer on big data technologies. Get an opportunity to become popular in the industry and get visibility. Host regular sessions on Big data related technologies and get paid to learn.

Job posted by Suchit Majumdar

Database Architect

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 10 - 20 lacs/annum

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include the development of Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse which pools all the data from the different regions and stores of GRAND in GCC
- Ensure source data quality measurement, and enrichment and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills Needed:
- Very strong in SQL; demonstrated experience with RDBMSs, Unix shell scripting preferred (e.g., SQL, Postgres, MongoDB, etc.)
- Experience with UNIX and comfortable working with the shell (bash or KRON preferred)
- Good understanding of data warehousing concepts; big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
- Working with data delivery teams to set up new Hadoop users; this includes setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios, Cloudera Manager Enterprise, and other tools
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screening Hadoop cluster job performance and capacity planning
- Monitoring Hadoop cluster connectivity and security
- File system management and monitoring; HDFS support and maintenance
- Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades when required
- Defining, developing, documenting and maintaining Hive-based ETL mappings and scripts

Job posted by Rahul Malani

Big Data Engineer

Founded 2015
Location: Noida, NCR (Delhi | Gurgaon | Noida)
Experience: 2 - 7 years
Salary: 5 - 12 lacs/annum

Our company is working on some really interesting projects in the Big Data domain across various fields (utility, retail, finance). We are working with some big corporates and MNCs around the world. Working here as a Big Data Engineer, you will be dealing with big data in structured and unstructured form, as well as streaming data from Industrial IoT infrastructure. You will be working on cutting-edge technologies and exploring many others, while also contributing back to the open-source community. You will get to know and work on an end-to-end processing pipeline which covers storage, processing, machine learning, visualization, and more.

Job posted by Harsh Choudhary

Java/Python Big Data Experienced Developer (3-8 Years)

Location: Mumbai
Experience: 3 - 8 years
Salary: 3 - 11 lacs/annum

JOB RESPONSIBILITIES
- Architecture, design and development of reusable server components for web & mobile applications
- Work with the engineering team on the development of new features and applications
- Rapid prototyping of applications based on requirements
- Maintain and optimize new and existing code with an emphasis on quality and re-usability
- Key contributor in technical design and architecture processes
- Provide technical guidance and solutions to technical problems that may arise
- Collaborate with other team members and stakeholders
- Production application management, including DevOps, support and troubleshooting
- Perform peer code review

REQUIRED BEHAVIORAL SKILLS
- Commitment to work and deliver under pressure
- Team player
- Enthusiasm for solving challenging problems and good analytical skills

REQUIRED TECHNICAL SKILLS
- Strong computer science fundamentals
- Extensive experience as a backend developer in Java/Python; experience in both languages is a plus
- Must have hands-on experience in the design, implementation, and build of applications or solutions using core Java/Python
- Must know how to scale applications through AWS, IBM SoftLayer (or similar platforms)
- Knowledge of and experience in programming NoSQL databases like OrientDB, MongoDB, Titan, Cassandra, etc.
- Experience with big data tools like Spark, Storm, Flume, etc., and query tools like SQL, Hive, Pig
- Web frameworks (Django, Spring, etc.)
- Background in all aspects of software engineering with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture
- Strong understanding of and hands-on experience with the Unix/Linux shell

DESIRED TECHNICAL SKILLS
- Experience in machine learning models, data mining algorithms, probabilistic algorithms
- HTML, CSS and JavaScript
- Experience working with an agile management tool (e.g., JIRA)
- Experience with AWS or IBM SoftLayer/Bluemix
- UI/UX development experience
- Experience working with asset management and broker-dealer institutions

ABOUT GALAXIA SOLUTIONS (www.galaxiasol.com)
Galaxia Solutions is a privately held company that provides business, technology, and data solutions to the world's leading asset managers, hedge funds, Third Party Administrators (TPAs), prime brokers, and broker-dealers, providing them with investment operations and technology improvement services that enhance their overall business performance. Our solutions are targeted to achieve better controls, gain process efficiencies, and deploy holistic technology and data solutions reflecting an end-to-end perspective, thereby helping firms manage risk and reduce cost.

CONTACT: Email us your resume at recruit@galaxiasol.com

Job posted by Mihir Shah

Python Developer

Location: Chennai
Experience: 2 - 7 years
Salary: 6 - 18 lacs/annum

Full Stack Developer for Big Data Practice. Will include everything from architecture to ETL to model building to visualization.

Job posted by Bavani T

Big Data

Founded 2014
Location: Pune
Experience: 5 - 10 years
Salary: 5 - 5 lacs/annum

We at InfoVision Labs are passionate about technology and what our clients would like to get accomplished. We continuously strive to understand business challenges and the changing competitive landscape, and how cutting-edge technology can help position our clients at the forefront of the competition. We are a fun-loving team of usability experts and software engineers focused on mobile technology, responsive web solutions and cloud-based solutions.

Job Responsibilities:
◾ Minimum 3 years of experience in Big Data skills required
◾ Complete life cycle experience with Big Data is highly preferred
◾ Skills – Hadoop, Spark, "R", Hive, Pig, HBase and Scala
◾ Excellent communication skills
◾ Ability to work independently with no supervision

Job posted by Shekhar Singh kshatri

Senior Software Engineer

via zeotap
Founded 2014
Location: Bengaluru (Bangalore)
Experience: 6 - 8 years
Salary: 5 - 30 lacs/annum

zeotap helps telecom operators unlock the potential of their data safely across industries using privacy-by-design technology http://www.zeotap.com

Job posted by Ameya Agnihotri

Senior Software Engineer

via zeotap
Founded 2014
Location: Bengaluru (Bangalore)
Experience: 6 - 10 years
Salary: 5 - 40 lacs/annum

Check our JD: https://www.zeotap.com/job/senior-tech-lead-m-f-for-zeotap/oEQK2fw0

Job posted by Projjol Banerjea

Product Tech Lead

Founded 2007
Location: Pune, Mumbai
Experience: 3 - 9 years
Salary: 5 - 14 lacs/annum

Ixsight Technologies is an innovative IT company with strong intellectual property. Ixsight is focused on creating customer data value through its solutions for identity management, locational analytics, address science and customer engagement. Ixsight is also adapting its solutions to Big Data and Cloud, and we are in the process of creating new solutions across platforms. Ixsight has served 80+ clients in India for various end-user applications across the traditional BFSI and telecom sectors, and in the recent past has been catering to new-generation verticals – hospitality, ecommerce, etc. Ixsight has been featured in Gartner's India Technology Hype Cycle and has been recognised by both clients and peers for pioneering and excellent solutions. If you wish to play a direct part in creating new products, building IP and being part of product creation, Ixsight is the place.

Job posted by Uma Venkataraman
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No 3rd party recruiters. No spam.