
Spark Jobs in Bangalore (Bengaluru)

Explore top Spark job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are added by verified employees who can be contacted directly below.

Senior Software Engineer - Data Engineering

Founded 2015
Products and services
Bengaluru (Bangalore)
Experience: 2 - 8 years
Salary: 20 - 40 lacs/annum

At Hotstar, we have over 100 million users and capture close to a billion click-stream messages daily. The engineering team at Hotstar is at the centre of the action and is responsible for creating an unmatched user experience. Our engineers solve real-life complex problems and create compelling experiences for our customers. As a Data Engineer in the Data Infrastructure team, you will build platforms and tools that churn through, process and analyze petabytes of data, and lead a robust team. You will work on technologies such as Apache Kafka, Apache Spark, Aerospike and Redshift to build a scalable infrastructure that delivers recommendations to our users in real time. The pace of our growth is incredible; if you want to tackle hard and interesting problems at scale and create an impact within an entrepreneurial environment, join us!

Your Key Responsibilities
• You will work closely with Software Engineers and ML Engineers to build the data infrastructure that fuels the needs of multiple teams, systems and products
• You will automate manual processes, optimize data delivery and build the infrastructure required for optimal extraction, transformation and loading of data for a wide variety of use cases using SQL/Spark
• You will build stream processing pipelines and tools to support a vast variety of analytics and audit use cases
• You will continuously evaluate relevant technologies, and influence and drive architecture and design discussions
• You will work in a cross-functional team and collaborate with peers during the entire SDLC

What to Bring
• BE/B.Tech/BS/MS/PhD in Computer Science or a related field (ideal)
• At least 2 years of work experience building data warehouse and BI systems
• Strong Java skills
• Experience in either Go or Python (a plus)
• Experience with Apache Spark, Hadoop, Redshift, Athena
• Strong understanding of database and storage fundamentals
• Experience with the AWS stack
• Ability to create data-flow designs and write complex SQL/Spark-based transformations
• Experience working on real-time streaming data pipelines using Spark Streaming or Storm
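As an illustration of the stream-processing work described above, here is a minimal PySpark Structured Streaming sketch (not Hotstar's actual pipeline): it reads click-stream events from a Kafka topic and counts them per content item in one-minute windows. The broker address, topic name and event schema are assumptions for the example, and the spark-sql-kafka connector package must be on the classpath.

# Minimal Structured Streaming sketch: Kafka click-stream -> windowed counts.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("clickstream-aggregation").getOrCreate()

# Assumed event schema for the example.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("content_id", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker address
    .option("subscribe", "clickstream")                    # assumed topic name
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Count clicks per content item in 1-minute event-time windows, tolerating late data.
counts = (
    events
    .withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "1 minute"), "content_id")
    .count()
)

query = (
    counts.writeStream
    .outputMode("update")
    .format("console")  # a production job would write to a store such as Aerospike
    .start()
)
query.awaitTermination()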

Job posted by Deepayan Mallick

Tech Lead Backend

Founded 2016
Products and services
Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 15 - 50 lacs/annum

RESPONSIBILITIES:
1. Full ownership of tech, right from driving product decisions to architecture to deployment.
2. Develop cutting-edge user experiences and build technology solutions such as instant messaging on poor networks, live discussions, live videos and optimal matching.
3. Use billions of data points to build a user personalization engine.
4. Build a data network effects engine to increase engagement and virality.
5. Scale the systems to billions of daily hits.
6. Deep dive into performance, power management, memory optimization and network connectivity optimization for the next billion Indians.
7. Orchestrate complicated workflows, asynchronous actions and higher-order components.
8. Work directly with the Product and Design teams.

REQUIREMENTS:
1. Should have hacked some (computer or non-computer) system to your advantage.
2. Built and managed systems with a scale of 10Mn+ daily hits.
3. Strong architectural experience.
4. Strong experience in memory management, performance tuning and resource optimization.
5. PREFERENCE: if you are a woman, an ex-entrepreneur, or have a CS bachelor's degree from IIT/BITS/NIT.

P.S. If you don't fulfil one of the requirements, you need to be exceptional in the others to be considered.

Job posted by Shubham Maheshwari

Data Scientist / ML Engineer

Founded 2012
Products and services
Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 10 - 25 lacs/annum

HackerEarth provides enterprise software solutions that help organisations with their innovation management and talent assessment needs. HackerEarth Recruit is a talent assessment platform that enables efficient technical talent screening, allowing organisations to build strong, proficient teams. HackerEarth Sprint is an innovation management software that helps organisations drive innovation through internal and external talent pools, including HackerEarth's global community of 2M+ developers. Today, HackerEarth serves 750+ organisations, including leading Fortune 500 companies from around the world. General Electric, IBM, Amazon, Apple, Wipro, Walmart Labs and Bosch are some of the brands that trust HackerEarth to help them drive growth.

Job Description
We are looking for an ML Engineer who will help us discover the information hidden in vast amounts of data and help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality models integrated with our products. You will primarily work on recommendation engines, text classification, automated tagging of documents, lexical similarity, semantic similarity and similar problems to start with.

Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Extending the company's data with third-party sources of information when needed
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance
- Developing custom data models and algorithms to apply to data sets
- Assessing the effectiveness and accuracy of new data sources and data-gathering techniques
- Coordinating with different functional teams to implement models and monitor outcomes
- Developing processes and tools to monitor and analyze model performance and data accuracy

Skills and Qualifications
- 4+ years of experience using statistical computer languages (R, Python, etc.) to manipulate data and draw insights from large data sets
- Good applied statistics skills, such as distributions, statistical testing, regression, etc.
- Proficiency in using query languages such as SQL, Hive, Pig
- Experience with distributed data/computing tools: MapReduce, Hadoop, Hive, Spark, etc.
- Experience using web services: Redshift, S3, etc.
- Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modelling, clustering, decision trees, neural networks, etc.
- Knowledge and experience of statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, social network analysis, etc.
- Experience working with and creating data architectures
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages and drawbacks
- Excellent written and verbal communication skills for coordinating across teams
- A drive to learn and master new technologies and techniques

You should be creative, enthusiastic, and take pride in the work that you produce. Above all, you should love to build and ship solutions that real people will use every day.
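As a small illustration of the text-classification and automated-tagging work mentioned above, here is a minimal, hypothetical scikit-learn sketch (not HackerEarth's production code): TF-IDF features feeding a logistic-regression classifier. The toy documents and labels are invented for the example.

# Baseline text classifier: TF-IDF features + logistic regression.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy data; a real system would pull labelled documents from the product database.
docs = [
    "binary search on sorted arrays",
    "css flexbox layout tricks",
    "gradient descent convergence",
    "responsive navbar in html",
]
labels = ["algorithms", "frontend", "algorithms", "frontend"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),  # word + bigram features
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(docs, labels)

# Predict a tag for an unseen document.
print(model.predict(["dynamic programming on trees"]))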

Job posted by Partha Dewri

Senior/Lead Data Scientist for AppLift @ Bangalore (4-8 yrs experience)

Founded 2012
Products and services
Bengaluru (Bangalore)
Experience: 3 - 10 years
Salary: 15 - 40 lacs/annum

AppLift is a data-driven technology company that empowers mobile app advertisers to acquire and re-engage quality users at scale on a performance basis. AppLift's programmatic media buying platform, DataLift, provides access to all automated supply sources in the market, reaching over a billion users. The technology leverages first- and third-party data to optimize media buys across all stages of the conversion funnel and, through its proprietary LTV optimization technology, enables ROI-maximized user acquisition. AppLift is trusted by 500+ leading global advertisers across all verticals, such as King, Zynga, OLX, Glu Mobile, Myntra, Paltalk, Nexon, and Tap4Fun.

Experience: 4-8 years

Your Responsibilities:
- You are hands-on with data, implementation and methodologies
- You are able to implement, measure and evaluate different algorithmic approaches, with great problem-solving skills and a strong theoretical foundation
- You have expertise in implementing machine-learning and algorithmic concepts, with proficiency in coding and an understanding of engineering trade-offs
- You innovate and develop approaches to improve click/conversion rates, eliminate impression/click fraud and enhance bidding strategies
- You perform statistical analysis and data mining to model user behaviour and improve ad relevance
- You are able to work independently and deliver practical results on real data with high accountability

Our Requirements:
- 4+ years of experience in applied research or industry work
- Degree in statistics, applied mathematics, machine learning, or another highly quantitative field
- Experience working with technologies and tools like R, GraphLab, Hadoop, Hive, Spark, Pig
- Coding proficiency in at least one language such as Python or Java
- Prior experience in ad tech is a plus

What do we offer?
- You get valuable insights into mobile marketing and entrepreneurship and have a high impact on shaping the expansion and success of AppLift across India
- Profit from working with European serial entrepreneurs who co-founded over 10 successful companies within the last 8 years; get access to a well-established network and build your own top-tier network and reputation
- Learn and grow in an environment characterized by flat hierarchy, entrepreneurial drive and fun
- You experience an excellent learning culture
- Competitive remuneration package and much more!

If interested, mail your resume to divya.pushpa<at>applift.com. Candidates from Tier 1 colleges are preferred.
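To illustrate the click/conversion-rate modelling mentioned above, here is a small, hypothetical sketch (not AppLift's models): a logistic-regression click-through-rate baseline over one-hot encoded impression features. The feature names and toy data are assumptions for the example.

# Baseline CTR model: one-hot categorical features + logistic regression.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

# Toy impression log; a real pipeline would read billions of rows from the DWH.
impressions = pd.DataFrame({
    "app_category": ["games", "games", "shopping", "news", "shopping", "games"],
    "ad_format":    ["video", "banner", "banner", "video", "video", "banner"],
    "clicked":      [1, 0, 0, 1, 1, 0],
})

features = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"), ["app_category", "ad_format"])]
)
ctr_model = Pipeline([("features", features), ("clf", LogisticRegression())])
ctr_model.fit(impressions[["app_category", "ad_format"]], impressions["clicked"])

# The predicted click probability for a new impression would drive the bid price.
new_impression = pd.DataFrame({"app_category": ["games"], "ad_format": ["video"]})
print(ctr_model.predict_proba(new_impression)[:, 1])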

Job posted by Divya Pushpa

Hadoop Lead Engineers

Founded 2012
Products and services
Bengaluru (Bangalore)
Experience: 7 - 9 years
Salary: 27 - 34 lacs/annum

Position Description
- Assists in providing guidance to small groups of two to three engineers, including offshore associates, for assigned engineering projects
- Demonstrates up-to-date expertise in software engineering and applies this to the development, execution and improvement of action plans
- Generates weekly, monthly and yearly reports using JIRA and open-source tools and provides updates to leadership teams
- Proactively identifies issues and the root causes of critical issues
- Works with cross-functional teams, sets up KT sessions and mentors team members
- Coordinates with the Sunnyvale and Bentonville teams
- Models compliance with company policies and procedures and supports the company mission, values and standards of ethics and integrity
- Provides and supports the implementation of business solutions
- Provides support to the business; troubleshoots business and production issues and provides on-call support

Minimum Qualifications
- BS/MS in Computer Science or a related field
- 8+ years' experience building web applications
- Solid understanding of computer science principles
- Excellent soft skills
- Understanding of major algorithms such as searching and sorting
- Strong skills in writing clean code using languages like Java and J2EE technologies
- Understanding of how to engineer RESTful microservices, and knowledge of major software patterns like MVC, Singleton, Facade, Business Delegate
- Deep knowledge of web technologies such as HTML5, CSS, JSON
- Good understanding of continuous integration tools and frameworks like Jenkins
- Experience working in Agile environments such as Scrum and Kanban
- Experience with performance tuning for very large-scale apps
- Experience writing scripts using Perl, Python and shell scripting
- Experience writing jobs using open-source cluster computing frameworks like Spark
- Relational database design experience: MySQL, Oracle, SOLR; NoSQL: Cassandra, MongoDB and Hive
- Aptitude for writing clean, succinct and efficient code
- Attitude to thrive in a fun, fast-paced, start-up-like environment
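As an illustration of the kind of Spark batch job the qualifications mention, here is a minimal PySpark sketch under assumed Hive table and column names: it aggregates order events per store and day and writes the result back to a reporting table.

# Minimal batch Spark job: daily roll-up of order events from Hive.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-order-rollup")
    .enableHiveSupport()
    .getOrCreate()
)

orders = spark.table("ecommerce.order_events")  # assumed Hive table

daily = (
    orders
    .groupBy(F.to_date("event_ts").alias("order_date"), "store_id")
    .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

# Overwrite the reporting table consumed by downstream dashboards.
daily.write.mode("overwrite").saveAsTable("reporting.daily_store_orders")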

Job posted by Sampreetha Pai

Data Science Engineer (SDE I)

Founded 2017
Products and services
Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: 12 - 20 lacs/annum

Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined >200 million end users.

For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with big data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, big data analytics, and handling Unix and production servers.

A Tier-1 college (BE from IITs, BITS-Pilani, top NITs, IIITs, or MS from Stanford, Berkeley, CMU, UW–Madison) or an exceptionally bright work history is a must. Let us know if this interests you, and we can explore the profile further.

Job posted by Shobhit Agarwal

Senior Data Engineer (SDE II)

Founded 2017
Products and services
Bengaluru (Bangalore)
Experience: 2 - 7 years
Salary: 15 - 30 lacs/annum

Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined >200 million end users.

The founding team consists of BITS Pilani alumni with experience of creating global startup success stories. The core team we are building consists of some of the best minds in India in artificial intelligence research and data engineering.

We are looking for multiple roles with 2-7 years of research or large-scale production implementation experience with:
- Rock-solid algorithmic capabilities
- Production deployments of massively large-scale systems, real-time personalization, big data analytics and semantic search
- Or credible research experience in innovating new ML algorithms and neural nets

A GitHub profile link is highly valued. For the right fit into the Couture.ai family, compensation is no bar.

Job posted by Shobhit Agarwal

Lead Data Engineer (SDE III)

Founded 2017
Products and services
Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: 25 - 55 lacs/annum

Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined >200 million end users.

For this role, a credible display of innovation in past projects is a must. We are looking for hands-on leaders in data engineering with 5-11 years of research or large-scale production implementation experience with:
- Proven expertise in Spark, Kafka and the Hadoop ecosystem
- Rock-solid algorithmic capabilities
- Production deployments of massively large-scale systems, real-time personalization, big data analytics and semantic search
- Expertise in containerization (Docker, Kubernetes) and cloud infrastructure, preferably OpenStack
- Experience with Spark ML, TensorFlow (and TF Serving), MXNet, Scala, Python, NoSQL DBs, Kubernetes and ElasticSearch/Solr in production

A Tier-1 college (BE from IITs, BITS-Pilani, IIITs, top NITs, DTU, NSIT, or MS from Stanford, UC, MIT, CMU, UW–Madison, ETH, or other top global schools) or an exceptionally bright work history is a must. Let us know if this interests you, and we can explore the profile further.
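As a small illustration of the Spark ML experience the role asks for, here is a hypothetical sketch (not Couture.ai code) that trains an ALS collaborative-filtering model for personalization on toy interaction data.

# Minimal Spark ML sketch: ALS collaborative filtering for recommendations.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("als-recommender").getOrCreate()

# Toy interaction data; production would read months of events from Kafka/HDFS.
ratings = spark.createDataFrame(
    [(0, 10, 1.0), (0, 11, 3.0), (1, 10, 4.0), (1, 12, 2.0), (2, 11, 5.0)],
    ["user_id", "item_id", "rating"],
)

als = ALS(
    userCol="user_id", itemCol="item_id", ratingCol="rating",
    rank=8, maxIter=5, coldStartStrategy="drop",
)
model = als.fit(ratings)

# Top-3 item recommendations per user would be pushed to the serving layer.
model.recommendForAllUsers(3).show(truncate=False)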

Job posted by Shobhit Agarwal

Senior Backend Developer

Founded 2016
Products and services
Bengaluru (Bangalore)
Experience: 1 - 7 years
Salary: 15 - 40 lacs/annum

RESPONSIBILITIES:
1. Full ownership of tech, right from driving product decisions to architecture to deployment.
2. Develop cutting-edge user experiences and build technology solutions such as instant messaging on poor networks, live discussions, live videos and optimal matching.
3. Use billions of data points to build a user personalisation engine.
4. Build a data network effects engine to increase engagement and virality.
5. Scale the systems to billions of daily hits.
6. Deep dive into performance, power management, memory optimisation and network connectivity optimisation for the next billion Indians.
7. Orchestrate complicated workflows, asynchronous actions and higher-order components.
8. Work directly with the Product and Design teams.

REQUIREMENTS:
1. Should have hacked some (computer or non-computer) system to your advantage.
2. Built and managed systems with a scale of 10Mn+ daily hits.
3. Strong architectural experience.
4. Strong experience in memory management, performance tuning and resource optimisations.
5. PREFERENCE: if you are a woman, an ex-entrepreneur, or have a CS bachelor's degree from IIT/BITS/NIT.

P.S. If you don't fulfil one of the requirements, you need to be exceptional in the others to be considered.

Job posted by Shubham Maheshwari

Database Architect

Founded 2017
Products and services
Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 10 - 20 lacs/annum

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalisation to drive high-visibility, cross-division outcomes. Expected deliverables include the development of big data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's data lake.

Key Responsibilities:
- Create a GRAND data lake and warehouse which pools all the data from different regions and stores of GRAND in GCC
- Ensure source data quality measurement, enrichment and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills Needed:
- Very strong in SQL; demonstrated experience with RDBMS (e.g., SQL, Postgres, MongoDB), Unix shell scripting preferred
- Experience with Unix and comfortable working with the shell (bash or Korn preferred)
- Good understanding of data warehousing concepts; big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments
- Working with data delivery teams to set up new Hadoop users; this includes setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise and others
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screening Hadoop cluster job performance and capacity planning
- Monitoring Hadoop cluster connectivity and security
- File system management and monitoring
- HDFS support and maintenance
- Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades when required
- Defining, developing, documenting and maintaining Hive-based ETL mappings and scripts
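To illustrate the big data ELT work described above, here is a minimal PySpark sketch under assumed source paths and column names (not GRAND's actual jobs): it reads raw point-of-sale CSV extracts, applies light cleansing, and writes partitioned Parquet into a curated data-lake zone.

# Minimal ELT sketch: raw CSV extracts -> cleansed, partitioned Parquet in the lake.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("grand-sales-elt").getOrCreate()

raw = (
    spark.read
    .option("header", True)
    .csv("s3a://grand-raw-zone/pos_sales/*.csv")  # assumed source path
)

clean = (
    raw
    .withColumn("sale_date", F.to_date("sale_date"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["transaction_id"])           # basic data-quality rule
    .filter(F.col("amount").isNotNull())
)

(
    clean.write
    .mode("overwrite")
    .partitionBy("region", "sale_date")
    .parquet("s3a://grand-curated-zone/pos_sales/")  # assumed lake path
)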

Job posted by Rahul Malani

Senior Software Engineer

via zeotap
Founded 2014
Products and services
Bengaluru (Bangalore)
Experience: 6 - 8 years
Salary: 5 - 30 lacs/annum

zeotap helps telecom operators unlock the potential of their data safely across industries using privacy-by-design technology. http://www.zeotap.com

Job posted by Ameya Agnihotri

Senior Software Engineer

via zeotap
Founded 2014
Products and services
Bengaluru (Bangalore)
Experience: 6 - 10 years
Salary: 5 - 40 lacs/annum

Check our JD: https://www.zeotap.com/job/senior-tech-lead-m-f-for-zeotap/oEQK2fw0

Job posted by Projjol Banerjea
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No 3rd party recruiters. No spam.