
Spark Jobs

Explore top Spark job opportunities at leading companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Senior Engineer - Java/Hadoop

Founded 2012
Product
250+ employees
Profitable
Java
Hadoop
NOSQL Databases
Cassandra
Spark
MapReduce
J2EE
Data Structures
Location: Bengaluru (Bangalore)
Experience: 6 - 10 years
Salary: 20 - 50 lacs/annum

Our Company
We help people around the world save money and live better -- anytime and anywhere -- in retail stores, online and through their mobile devices. Each week, more than 220 million customers and members visit our 11,096 stores under 69 banners in 27 countries and e-commerce websites in 10 countries. With last fiscal revenues of approximately $486 billion, Walmart employs 2.2 million people worldwide. At Walmart Labs in Bangalore, we use technology to build brand-new platforms and services on the latest technology stack to support both our stores and e-commerce businesses worldwide.

Our Team
The Global Data and Analytics Platforms (GDAP) team at Walmart Labs in Bangalore provides Data Foundation Infrastructure, a Visualization Portal, a Machine Learning Platform, a Customer Platform and Data Science products that form part of the core platforms and services that drive Walmart's business. The group also develops analytical products for several verticals such as supply chain, pricing, customer and HR. Our team, part of GDAP Bangalore, is responsible for the Customer Platform (a one-stop shop for all customer analytics for Walmart stores), a Machine Learning Platform that provides end-to-end infrastructure for data scientists to build ML solutions, and an Enterprise Analytics group that provides analytics for HR, Global Governance and Security. The team owns time-critical, business-critical and highly reliable systems that influence almost every part of the Walmart business. The team is spread over multiple locations, and the Bangalore centre owns critical end-to-end pieces that we design, build and support.

Your Opportunity
As part of the Customer Analytics team at Walmart Labs, you'll have the opportunity to make a difference as part of a development team that builds products at Walmart scale, forming the foundation of customer analytics across Walmart. A key attribute of this job is the need to continuously innovate and apply technology to provide a 360-degree business view of Walmart customers.

Your Responsibilities
• Design, build, test and deploy cutting-edge solutions at scale, impacting millions of customers worldwide, and drive value from data at Walmart scale.
• Interact with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.
• Engage with Product Management and Business to drive the agenda, set your priorities and deliver product features that keep the platform ahead of market scenarios.
• Identify the right open-source tools to deliver product features by performing research, POCs/pilots and/or interacting with open-source forums.
• Develop and/or contribute features that enable customer analytics at Walmart scale.
• Deploy and monitor products on cloud platforms.
• Develop and implement best-in-class monitoring processes to enable data applications to meet SLAs.

Our Ideal Candidate
You have a deep interest in and passion for technology. You love writing and owning code and enjoy working with people who will keep challenging you at every stage. You have strong problem-solving, analytical and decision-making abilities, along with excellent communication and interpersonal skills. You are self-driven and motivated, with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities.

Your Qualifications
• Bachelor's degree with 7+ years of experience, or Master's degree with 6+ years of experience, in Computer Science or a related field.
• Expertise in the Big Data ecosystem, with deep experience in Java, Hadoop, Spark, Storm, Cassandra, NoSQL databases, etc.
• Expertise in MPP architecture and knowledge of an MPP engine (Spark, Impala, etc.).
• Experience building scalable, highly available distributed systems in production.
• Understanding of stream processing, with expert knowledge of Kafka and either Spark Streaming or Storm.
• Experience with SOA.
• Knowledge of a graph database (Neo4j, Titan) is definitely a plus.
• Knowledge of software engineering best practices, with experience implementing CI/CD and log aggregation/monitoring/alerting for production systems.
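For readers less familiar with the stream-processing stack named in the qualifications, here is a minimal, illustrative Spark Structured Streaming sketch in Scala that consumes a Kafka topic and counts events per key. It is not code from this role or employer; the broker address and topic name are placeholders, and it assumes the spark-sql-kafka connector is on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; a real job would run on a cluster.
    val spark = SparkSession.builder()
      .appName("kafka-stream-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Read a stream from a hypothetical "customer-events" topic.
    // Requires the spark-sql-kafka-0-10 package on the classpath.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
      .option("subscribe", "customer-events")              // hypothetical topic
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Running count of events per key, written to the console sink.
    val counts = events.groupBy($"key").count()

    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```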

Job posted by Lakshman Dornala

Data Science Engineer

Founded 2017
Product
6-50 employees
Profitable
Data Structures
Algorithms
Scala
Machine Learning (ML)
Deep Learning
Spark
Big Data
Hadoop
Location: Bengaluru (Bangalore)
Experience: 0 - 3 years
Salary: 12 - 20 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and breathes data and algorithms, loves big data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, big data analytics, and Unix and production servers. A Tier-1 college background (BE from IITs, BITS Pilani, top NITs or IIITs, or an MS from Stanford, Berkeley, CMU or UW–Madison) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.

Job posted by Shobhit Agarwal

Staff Engineer - Java/Hadoop

Founded 2012
Product
250+ employees
Profitable
Java
Hadoop
Spark
Big Data
J2EE
Data Structures
NOSQL Databases
Cassandra
Location: Bengaluru (Bangalore)
Experience: 8 - 14 years
Salary: 30 - 70 lacs/annum

Our Company
We help people around the world save money and live better -- anytime and anywhere -- in retail stores, online and through their mobile devices. Each week, more than 220 million customers and members visit our 11,096 stores under 69 banners in 27 countries and e-commerce websites in 10 countries. With last fiscal revenues of approximately $486 billion, Walmart employs 2.2 million people worldwide. At Walmart Labs in Bangalore, we use technology to build brand-new platforms and services on the latest technology stack to support both our stores and e-commerce businesses worldwide.

Our Team
The Global Data and Analytics Platforms (GDAP) team at Walmart Labs in Bangalore provides Data Foundation Infrastructure, a Visualization Portal, a Machine Learning Platform, a Customer Platform and Data Science products that form part of the core platforms and services that drive Walmart's business. The group also develops analytical products for several verticals such as supply chain, pricing, customer and HR. Our team, part of GDAP Bangalore, is responsible for the Customer Platform (a one-stop shop for all customer analytics for Walmart stores), a Machine Learning Platform that provides end-to-end infrastructure for data scientists to build ML solutions, and an Enterprise Analytics group that provides analytics for HR, Global Governance and Security. The team owns time-critical, business-critical and highly reliable systems that influence almost every part of the Walmart business. The team is spread over multiple locations, and the Bangalore centre owns critical end-to-end pieces that we design, build and support.

Your Opportunity
As part of the Customer Analytics team at Walmart Labs, you'll have the opportunity to make a difference as part of a development team that builds products at Walmart scale, forming the foundation of customer analytics across Walmart. A key attribute of this job is the need to continuously innovate and apply technology to provide a 360-degree business view of Walmart customers.

Your Responsibilities
• Design, build, test and deploy cutting-edge solutions at scale, impacting millions of customers worldwide, and drive value from data at Walmart scale.
• Interact with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.
• Engage with Product Management and Business to drive the agenda, set your priorities and deliver product features that keep the platform ahead of market scenarios.
• Identify the right open-source tools to deliver product features by performing research, POCs/pilots and/or interacting with open-source forums.
• Develop and/or contribute features that enable customer analytics at Walmart scale.
• Deploy and monitor products on cloud platforms.
• Develop and implement best-in-class monitoring processes to enable data applications to meet SLAs.

Our Ideal Candidate
You have a deep interest in and passion for technology. You love writing and owning code and enjoy working with people who will keep challenging you at every stage. You have strong problem-solving, analytical and decision-making abilities, along with excellent communication and interpersonal skills. You are self-driven and motivated, with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities.

Your Qualifications
• Bachelor's degree with 8+ years of experience, or Master's degree with 6+ years of experience, in Computer Science or a related field.
• Expertise in the Big Data ecosystem, with deep experience in Java, Hadoop, Spark, Storm, Cassandra, NoSQL databases, etc.
• Expertise in MPP architecture and knowledge of an MPP engine (Spark, Impala, etc.).
• Experience building scalable, highly available distributed systems in production.
• Understanding of stream processing, with expert knowledge of Kafka and either Spark Streaming or Storm.
• Experience with SOA.
• Knowledge of a graph database (Neo4j, Titan) is definitely a plus.
• Knowledge of software engineering best practices, with experience implementing CI/CD and log aggregation/monitoring/alerting for production systems.
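As an aside on the MPP requirement above, the sketch below shows the kind of distributed join-and-aggregate query that a Spark SQL (or Impala-style) engine parallelises across a cluster. It is purely illustrative; the tables, columns and values are invented for the example.

```scala
import org.apache.spark.sql.SparkSession

object MppQuerySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("mpp-query-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Tiny in-memory stand-ins for warehouse fact and dimension tables.
    val orders = Seq((1, "store-a", 120.0), (2, "store-b", 80.0), (3, "store-a", 40.0))
      .toDF("order_id", "store_id", "amount")
    val stores = Seq(("store-a", "Bengaluru"), ("store-b", "Mumbai"))
      .toDF("store_id", "city")

    orders.createOrReplaceTempView("orders")
    stores.createOrReplaceTempView("stores")

    // Spark plans this join + aggregation as a distributed query, the same
    // pattern an MPP engine would execute across many nodes.
    spark.sql(
      """SELECT s.city, SUM(o.amount) AS revenue
        |FROM orders o JOIN stores s ON o.store_id = s.store_id
        |GROUP BY s.city
        |ORDER BY revenue DESC""".stripMargin
    ).show()

    spark.stop()
  }
}
```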

Job posted by Lakshman Dornala

Lead Data Engineer

Founded 2017
Product
6-50 employees
Profitable
Spark
Apache Kafka
Hadoop
TensorFlow
Scala
Machine Learning (ML)
OpenStack
MxNet
Location: Bengaluru (Bangalore)
Experience: 3 - 9 years
Salary: 25 - 50 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects is a must. We are looking for hands-on leaders in data engineering with 5-11 years of research or large-scale production implementation experience, with:
- Proven expertise in Spark, Kafka and the Hadoop ecosystem.
- Rock-solid algorithmic capabilities.
- Production deployments of massively large-scale systems, real-time personalization, big data analytics and semantic search.
- Expertise in containerization (Docker, Kubernetes) and cloud infrastructure, preferably OpenStack.
- Experience with Spark ML, TensorFlow (and TF Serving), MXNet, Scala, Python, NoSQL databases, Kubernetes and Elasticsearch/Solr in production.
A Tier-1 college background (BE from IITs, BITS Pilani, IIITs, top NITs, DTU or NSIT, or an MS from Stanford, UC, MIT, CMU, UW–Madison, ETH or other top global schools) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.

Job posted by Shobhit Agarwal

Senior Data Engineer

Founded 2017
Product
6-50 employees
Profitable
Shell Scripting
Apache Kafka
TensorFlow
Spark
Hadoop
Elastic Search
MXNet
SMACK
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 20 - 40 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects is a must. We are hiring for several different roles with 2-8 years of research or large-scale production implementation experience, with:
- Proven expertise in Spark, Kafka and the Hadoop ecosystem.
- Rock-solid algorithmic capabilities.
- Production deployments of massively large-scale systems, real-time personalization, big data analytics and semantic search.
- Or experience with Spark ML, TensorFlow (and TF Serving), MXNet, Scala, Python, NoSQL databases, Kubernetes or Elasticsearch/Solr in production.
A Tier-1 college background (BE from IITs, BITS Pilani, IIITs or top NITs, or an MS from Stanford, Berkeley, CMU or UW–Madison) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.

Job posted by Shobhit Agarwal

Senior Software Engineer

Founded 2008
Product
250+ employees
Profitable
Python
Spark
Machine Learning (ML)
Scala
Location: Bengaluru (Bangalore)
Experience: 5 - 14 years
Salary: 1 - 500 lacs/annum

Keyword requirements: (Python OR Java OR Scala) AND "machine learning" AND Spark

Job posted by Hai Anh Nguyen

Lead Engineer

Founded 2011
Product
250+ employees
Raised funding
Java
React.js
AngularJS
MongoDB
PostgreSQL
Spark
Location: Bengaluru (Bangalore)
Experience: 8 - 13 years
Salary: 35 - 50 lacs/annum

As one of the fastest-growing e-commerce and logistics companies in Asia, RedMart offers an unparalleled experience scaling a startup. Our culture: entrepreneurial, fiercely intelligent, team-oriented, deeply creative, and whatever you add to it! We're fanatical about improving our customer experience and providing "wow" customer service. We're interested in talented, creative and passionate people joining our All-Star team who believe in our mission: to save our customers time and money for the important things in life!

As a Lead Software Engineer you will:
• Provide technical leadership on a team of software engineers that has full ownership of TOPS technology and products at RedMart. You'll need superb communication skills, thrive in a collaborative environment, and be committed to the success of the team as a whole.
• Set the technical architecture and roadmap for the team, but also dive in and get your hands into the code. (Most of the team's systems are Scala or Java microservices.)
• Align the team on an effective and continuously improving development process.
• Learn the business and get close to users and customers. Build close relationships with the internal business stakeholders for your domain.
• Design, implement and test robust technical solutions that our 24/7 store and operations can rely on.
• Write clean code that's testable, maintainable, solves the right problem and does it well -- code you can be proud of.
• Champion engineering excellence. Influence and drive best engineering practices within your team and organization.
• Mentor more junior engineers, improving their skills, their knowledge of our systems, and their ability to get things done.
• Have the opportunity, and be expected, to innovate and demonstrate your creativity. Do you have ideas on how to improve TOPS or effectively use a new technology? Can you find a way to do what others thought impossible?

Job posted by Shalini Vijay

Senior Backend Developer

Founded 2016
Product
6-50 employees
Raised funding
NOSQL Databases
Java
Google App Engine (GAE)
Firebase
Cassandra
Aerospike
Spark
Apache Kafka
Location: Bengaluru (Bangalore)
Experience: 1 - 7 years
Salary: 15 - 40 lacs/annum

RESPONSIBILITIES:
1. Full ownership of tech, from driving product decisions to architecture to deployment.
2. Develop cutting-edge user experiences and build technology solutions such as instant messaging on poor networks, live discussions, live videos and optimal matching.
3. Use billions of data points to build a user personalisation engine.
4. Build a data network effects engine to increase engagement and virality.
5. Scale the systems to billions of daily hits.
6. Deep-dive into performance, power management, memory optimisation and network connectivity optimisation for the next billion Indians.
7. Orchestrate complicated workflows, asynchronous actions and higher-order components.
8. Work directly with the Product and Design teams.

REQUIREMENTS:
1. Should have hacked some (computer or non-computer) system to your advantage.
2. Built and managed systems with a scale of 10M+ daily hits.
3. Strong architectural experience.
4. Strong experience in memory management, performance tuning and resource optimisation.
5. PREFERENCE: if you are a woman, an ex-entrepreneur, or hold a CS bachelor's degree from IIT/BITS/NIT.

P.S. If you don't fulfil one of the requirements, you need to be exceptional in the others to be considered.

Job posted by Shubham Maheshwari

Technical Architect/CTO

Founded 2016
Products and services
1-5 employees
Bootstrapped
Python
C/C++
Big Data
Cloud Computing
Technical Architecture
Hadoop
Spark
Cassandra
Location: Mumbai
Experience: 5 - 11 years
Salary: 15 - 30 lacs/annum

ABOUT US: Arque Capital is a FinTech startup applying AI in finance, in domains such as asset management (hedge funds, ETFs and structured products), robo-advisory, bespoke research, alternate brokerage, and other applications of technology and quantitative methods in big finance.

PROFILE DESCRIPTION:
1. Get the "tech" in order for the hedge fund: help answer which technology blocks to use and why one platform or technology should be chosen over another, and help the team visualize the product with the available resources and assets.
2. Build, manage and validate a tech roadmap for our products.
3. Architecture practices: at startups, the dynamics change very fast, so making sure that best practices are defined and followed by the team is very important. The CTO may have to act as the garbage collector and clean up the code from time to time, and reviewing code quality is an important activity the CTO should own.
4. Build a progressive learning culture and establish a predictable model of envisioning, designing and developing products.
5. Drive product innovation through research and continuous improvement.
6. Build out the technological infrastructure for the hedge fund.
7. Hire and build out the technology team.
8. Set up and manage the entire IT infrastructure, hardware as well as cloud.
9. Ensure company-wide security and IP protection.

REQUIREMENTS:
- Computer Science engineer from Tier-I colleges only (IIT, IIIT, NIT, BITS, DHU, Anna University, MU)
- 5-10 years of relevant technology experience (no infra- or database-only profiles)
- Expertise in Python and C++ (3+ years minimum)
- 2+ years of experience building and managing Big Data projects
- Experience with technical design and architecture (1+ years minimum)
- Experience with high-performance computing (optional)
- Experience as a Tech Lead, IT Manager, Director, VP or CTO
- 1+ years of experience managing cloud computing infrastructure (Amazon AWS preferred) (optional)
- Ability to work in an unstructured environment
- Looking to work in a small, startup-type environment based out of Mumbai

COMPENSATION: Co-founder status and equity partnership.

Job posted by Hrishabh Sanghvi

Big Data Engineer

Founded 2015
Products and services
6-50 employees
Profitable
AWS CloudFormation
Spark
Apache Kafka
Hadoop
HDFS
Location: Noida
Experience: 1 - 7 years
Salary: 4 - 9 lacs/annum

We are a team of Big Data, IoT, ML and security experts. We are a technology company working in the Big Data analytics domain, ranging from industrial IoT to machine learning and AI. What we are doing is really challenging, interesting and cutting-edge, and we need people with similar passion to work with us.

Job posted by Sneha Pandey

Hadoop Developer

Founded 2009
Products and services
250+ employees
Profitable
HDFS
HBase
Spark
Flume
Hive
Sqoop
Scala
Location: Mumbai
Experience: 5 - 14 years
Salary: 8 - 18 lacs/annum

A US-based multinational company. Hands-on Hadoop experience required.

Job posted by Neha Mayekar

Data Scientist

Founded 2017
Product
1-5 employees
Raised funding
Data Science
Python
Hadoop
Elastic Search
Machine Learning (ML)
Big Data
Spark
Algorithms
Location: Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: 20 - 30 lacs/annum

## Responsibilities
* Experience: 4-8 years
* Design and build the initial version of the offline product, using machine learning to recommend video content to 1M+ user profiles.
* Design the personalized recommendation algorithm and optimize the model.
* Develop the features of the recommendation system.
* Analyze user behavior and build up the user portrait and tag system.

## Desired Skills and Experience
* B.S./M.S. degree in computer science, mathematics, statistics or a similar quantitative field, with a good college background.
* 3+ years of work experience in a relevant field (data engineer, R&D engineer, etc.).
* Experience in machine learning and prediction & recommendation techniques.
* Experience with Hadoop/MapReduce/Elastic Stack (ELK) and big data querying tools such as Pig, Hive and Impala.
* Proficiency in a major programming language (e.g. C/C++/Scala) and/or a scripting language (Python/R).
* Experience with one or more NoSQL databases, such as MongoDB, Cassandra, HBase, Hive, Vertica or Elasticsearch.
* Experience with cloud solutions/AWS, and strong knowledge of Linux and Apache.
* Experience with a MapReduce framework such as Spark/EMR.
* Experience building reports and/or data visualizations.
* Strong communication skills and the ability to discuss the product with PMs and business owners.
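As illustrative context only (not code from this employer), a content-recommendation workload like the one described above is often prototyped with Spark MLlib's ALS collaborative filtering. The sketch below trains on a tiny invented ratings set and produces top-N recommendations per user; the case class, column names and hyperparameters are assumptions made for the example.

```scala
import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.sql.SparkSession

object AlsSketch {
  // Hypothetical schema for user-item feedback.
  case class Rating(userId: Int, itemId: Int, rating: Float)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("als-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Tiny in-memory ratings standing in for real user behaviour logs.
    val ratings = Seq(
      Rating(1, 10, 5.0f), Rating(1, 11, 3.0f),
      Rating(2, 10, 4.0f), Rating(2, 12, 1.0f),
      Rating(3, 11, 2.0f), Rating(3, 12, 5.0f)
    ).toDF()

    // Collaborative-filtering model; column names match the case class above.
    val als = new ALS()
      .setUserCol("userId")
      .setItemCol("itemId")
      .setRatingCol("rating")
      .setRank(8)
      .setMaxIter(5)
      .setRegParam(0.1)

    val model = als.fit(ratings)

    // Top-2 item recommendations for every user.
    model.recommendForAllUsers(2).show(false)

    spark.stop()
  }
}
```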

Job posted by Xin Lin

Big Data Evangelist

Founded 2016
Products and services
6-50 employees
Profitable
Spark
Hadoop
Apache Kafka
Apache Flume
Scala
Python
MongoDB
Cassandra
Location: Noida
Experience: 2 - 6 years
Salary: 4 - 12 lacs/annum

Looking for a technically sound and excellent trainer on big data technologies. Get an opportunity to become popular in the industry and gain visibility. Host regular sessions on big data-related technologies and get paid to learn.

Job posted by Suchit Majumdar

Database Architect

Founded 2017
Products and services
6-50 employees
Raised funding
ETL
Data Warehouse (DWH)
DWH Cloud
Hadoop
Apache Hive
Spark
MongoDB
PostgreSQL
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 10 - 20 lacs/annum

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse that pools data from GRAND's different regions and stores in GCC.
- Ensure source data quality measurement, enrichment and reporting of data quality.
- Manage all ETL and data model update routines.
- Integrate new data sources into the DWH.
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure.

Skills Needed:
- Very strong in SQL, with demonstrated experience with RDBMS (e.g. SQL, Postgres, MongoDB); Unix shell scripting preferred.
- Experience with Unix and comfortable working with the shell (bash or Korn shell preferred).
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce.
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
- Working with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users.
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios and Cloudera Manager Enterprise.
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Screening Hadoop cluster job performance and capacity planning.
- Monitoring Hadoop cluster connectivity and security.
- File system management and monitoring; HDFS support and maintenance.
- Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades when required.
- Defining, developing, documenting and maintaining Hive-based ETL mappings and scripts.
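For context on the ELT/ETL deliverables above, here is a minimal, hypothetical Spark ETL sketch that reads raw CSV drops, applies light cleansing, and writes a partitioned Hive table. The paths, database, table and column names are invented for illustration and are not GRAND's actual schema; it also assumes a Hive-enabled Spark build and an existing "data_lake" database.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EtlSketch {
  def main(args: Array[String]): Unit = {
    // Hive support so the job can register warehouse tables
    // (assumes Spark is built with Hive and run via spark-submit).
    val spark = SparkSession.builder()
      .appName("etl-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Extract: raw CSV drops from different regions (placeholder path).
    val raw = spark.read
      .option("header", "true")
      .csv("hdfs:///landing/sales/*.csv")

    // Transform: basic cleansing and typing before the data hits the lake.
    val cleaned = raw
      .filter(col("order_id").isNotNull)
      .withColumn("amount", col("amount").cast("double"))
      .withColumn("order_date", to_date(col("order_date"), "yyyy-MM-dd"))

    // Load: partitioned Parquet registered as a Hive table
    // (assumes the data_lake database already exists).
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .format("parquet")
      .saveAsTable("data_lake.sales_cleaned")

    spark.stop()
  }
}
```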

Job posted by Rahul Malani

Big Data Engineer

Founded 2015
Products and services
6-50 employees
Profitable
Apache Storm
Spark
Apache Kafka
Hadoop
Zookeeper
Kubernetes
Docker
Amazon Web Services (AWS)
Location: Noida
Experience: 2 - 7 years
Salary: 5 - 12 lacs/annum

Our company is working on some really interesting projects in the Big Data domain across various fields (utility, retail, finance). We work with some big corporates and MNCs around the world. As a Big Data Engineer here, you will deal with big data in structured and unstructured form, as well as streaming data from industrial IoT infrastructure. You will work on cutting-edge technologies, explore many others, and contribute back to the open-source community. You will get to know and work on an end-to-end processing pipeline that covers all types of work: storing, processing, machine learning, visualization and more.
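To make the streaming-pipeline idea above concrete, here is a small, self-contained Spark Structured Streaming sketch that uses Spark's built-in rate source as a stand-in for an industrial IoT feed, with a watermark and one-minute tumbling windows. It is an illustration under stated assumptions, not this company's actual pipeline.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object WindowedSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("windowed-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // The built-in "rate" source stands in for an IoT event feed; each row
    // carries a timestamp and a monotonically increasing value.
    val readings = spark.readStream
      .format("rate")
      .option("rowsPerSecond", "10")
      .load()

    // Tumbling one-minute windows with a watermark, so events arriving more
    // than two minutes late are dropped instead of held in state forever.
    val perMinute = readings
      .withWatermark("timestamp", "2 minutes")
      .groupBy(window($"timestamp", "1 minute"))
      .agg(count("*").as("events"), avg($"value").as("avg_value"))

    perMinute.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```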

Job posted by Harsh Choudhary

Java
Python
Linux/Unix
Graph Databases
Amazon Web Services (AWS)
Agile/Scrum
Spark
Apache Storm
Location: Mumbai
Experience: 3 - 8 years
Salary: 3 - 11 lacs/annum

JOB RESPONSIBILITIES
- Architecture, design and development of reusable server components for web and mobile applications.
- Work with the engineering team on the development of new features and applications.
- Rapid prototyping of applications based on requirements.
- Maintain and optimize new and existing code with an emphasis on quality and re-usability.
- Key contributor in technical design and architecture processes.
- Provide technical guidance and solutions to technical problems that may arise.
- Collaborate with other team members and stakeholders.
- Production application management, including DevOps, support and troubleshooting.
- Perform peer code review.

REQUIRED BEHAVIORAL SKILLS
- Commitment to work and deliver under pressure.
- Team player.
- Enthusiasm for solving challenging problems and good analytical skills.

REQUIRED TECHNICAL SKILLS
- Strong computer science fundamentals.
- Extensive experience as a backend developer in Java or Python; experience in both languages is a plus.
- Hands-on experience in the design, implementation and build of applications or solutions using core Java/Python.
- Must know how to scale applications on AWS, IBM SoftLayer or similar platforms.
- Knowledge of and experience programming NoSQL databases such as OrientDB, MongoDB, Titan and Cassandra.
- Experience with big data tools like Spark, Storm and Flume, and query tools like SQL, Hive and Pig.
- Web frameworks (Django, Spring, etc.).
- Background in all aspects of software engineering, with strong skills in parallel data processing, data flows, REST APIs, JSON, XML and microservice architecture.
- Strong understanding of and hands-on experience with the Unix/Linux shell.

DESIRED TECHNICAL SKILLS
- Experience with machine learning models, data mining algorithms and probabilistic algorithms.
- HTML, CSS and JavaScript.
- Experience working with an agile management tool (e.g. JIRA).
- Experience with AWS or IBM SoftLayer/Bluemix.
- UI/UX development experience.
- Experience working with asset management and broker-dealer institutions.

ABOUT GALAXIA SOLUTIONS (www.galaxiasol.com)
Galaxia Solutions is a privately held company that provides business, technology and data solutions to the world's leading asset managers, hedge funds, third-party administrators (TPAs), prime brokers and broker-dealers, offering investment operations and technology improvement services that enhance their overall business performance. Our solutions are targeted to achieve better controls, gain process efficiencies and deploy holistic technology and data solutions that reflect an end-to-end perspective, thereby helping firms manage risk and reduce cost.

CONTACT: Email us your resume at recruit@galaxiasol.com

Job posted by Mihir Shah

Python Developer

Founded
employees
MySQL
MongoDB
Spark
Apache Hive
Location: Chennai
Experience: 2 - 7 years
Salary: 6 - 18 lacs/annum

Full-stack developer for our Big Data practice. The role will include everything from architecture to ETL to model building to visualization.

Job posted by Bavani T

Big Data Engineer

Founded 2014
Products and services
51-250 employees
Profitable
Spark Streaming
Spark SQL
Java
Hadoop
Scala
Spark
Location: Pune
Experience: 4 - 8 years
Salary: 5 - 16 lacs/annum

Greetings from InfoVision Labs. InfoVision was founded in 1995 by technology professionals with a vision to provide quality and cost-effective IT solutions worldwide. InfoVision is a global IT services and solutions company with a primary focus on strategic resources, enterprise applications and technology solutions. Our core practice areas include application security, business analytics, visualization & collaboration, and wireless & IP communications. Our IT services cover the full range of enterprise needs, from staffing to solutions. Over the past decade, our ability to serve our clients has steadily evolved; it now covers multiple industries, numerous geographies and flexible delivery models, as well as state-of-the-art technologies. InfoVision opened its development and delivery centre in Pune in 2014 and has been expanding with project engagements with clients based in the US and India. We can offer the right individuals an industry-leading package and fast career-growth prospects. Please get to know more about us at http://infovisionlabs.com/about/

Job posted by Ankita Lonagre

Big Data

Founded 2014
Products and services
51-250 employees
Profitable
Hadoop
Scala
Spark
Location: Pune
Experience: 5 - 10 years
Salary: 5 - 5 lacs/annum

We at InfoVision Labs are passionate about technology and what our clients would like to get accomplished. We continuously strive to understand business challenges and the changing competitive landscape, and how cutting-edge technology can help position our clients at the forefront of the competition. We are a fun-loving team of usability experts and software engineers focused on mobile technology, responsive web solutions and cloud-based solutions.

Job Responsibilities:
◾ Minimum 3 years of experience in Big Data skills required.
◾ Complete life-cycle experience with Big Data is highly preferred.
◾ Skills: Hadoop, Spark, R, Hive, Pig, HBase and Scala.
◾ Excellent communication skills.
◾ Ability to work independently with no supervision.

Job posted by Shekhar Singh kshatri

Senior Software Engineer

Founded 2014
Product
6-50 employees
Raised funding
druid
Java
Python
Go Programming (Golang)
Spark
Location: Bengaluru (Bangalore)
Experience: 6 - 8 years
Salary: 5 - 30 lacs/annum

zeotap helps telecom operators unlock the potential of their data safely across industries using privacy-by-design technology: http://www.zeotap.com

Job posted by Ameya Agnihotri

Senior Software Engineer

Founded 2014
Product
6-50 employees
Raised funding
Python
Big Data
Hadoop
Scala
Spark
Location: Bengaluru (Bangalore)
Experience: 6 - 10 years
Salary: 5 - 40 lacs/annum

Check our JD: https://www.zeotap.com/job/senior-tech-lead-m-f-for-zeotap/oEQK2fw0

Job posted by Projjol Banerjea

Lead Java Engineer

Founded 1905
Products and services
250+ employees
Profitable
Java
J2EE
Apache Storm
Scala
Solr
Spark
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 5 - 25 lacs/annum

Expect the Best. At Target, we have a vision: to become the best - the best culture and brand, the best place for growth and the company with the best reputation. We offer an inclusive, collaborative and energetic work environment that rewards those who perform. We deliver engaging, innovative and on-trend experiences for our team members and our guests. We invest in our team members' futures by developing leaders and providing a breadth of opportunities for professional development. It takes the best to become the best, and we are committed to building a team that does the right thing for our guests, shareholders, team members and communities. Minneapolis-based Target Corporation serves guests at stores nationwide and at Target.com. Target is committed to providing a fun and convenient shopping experience with access to unique and highly differentiated products at affordable prices. Since 1946, the corporation has given 5 percent of its income through community grants and programs like Take Charge of Education®.

Job posted by Karishma Shah

Product Tech Lead

Founded 2007
Products and services
6-50 employees
Profitable
C/C++
Architecture
C#
Spark
Location: Pune, Mumbai
Experience: 3 - 9 years
Salary: 5 - 14 lacs/annum

Ixsight Technologies is an innovative IT company with strong intellectual property. Ixsight is focused on creating customer data value through its solutions for identity management, locational analytics, address science and customer engagement. Ixsight is also adapting its solutions to Big Data and the cloud, and we are in the process of creating new solutions across platforms. Ixsight has served over 80 clients in India for various end-user applications across the traditional BFSI and telecom sectors. More recently, we have been catering to new-generation verticals such as hospitality and e-commerce. Ixsight has been featured in Gartner's India Technology Hype Cycle and has been recognised by both clients and peers for pioneering and excellent solutions. If you wish to play a direct part in creating new products, building IP and being part of product creation, Ixsight is the place.

Job posted by Uma Venkataraman