Big Data Evangelist
Posted by Suchit Majumdar

Locations

Noida

Experience

2 - 6 years

Salary

INR 4L - 12L

Skills

Spark
Hadoop
Apache Kafka
Apache Flume
Scala
Python
MongoDB
Cassandra

Job description

We are looking for a technically sound, excellent trainer in big data technologies. This is an opportunity to gain visibility and build your reputation in the industry: host regular sessions on big data technologies and get paid while you learn.

About the company

UpX Academy is an online training institute for courses in Big Data Analytics. Learn from the best instructors in the world & become an expert.

Founded

2016

Type

Products & Services

Size

6-50 employees

Stage

Profitable

Similar jobs

"Hadoop Developer"

Founded 2012
Products and services
Location
Bengaluru (Bangalore)
Experience
4 - 7 years
Salary
23 - 30 lacs/annum

"Position Description\n\nDemonstrates up-to-date expertise in Software Engineering and applies this to the development, execution, and improvement of action plans\nModels compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity\nProvides and supports the implementation of business solutions\nProvides support to the business\nTroubleshoots business, production issues and on call support.\n \n\nMinimum Qualifications\n\nBS/MS in Computer Science or related field 5+ years’ experience building web applications \nSolid understanding of computer science principles \nExcellent Soft Skills\nUnderstanding the major algorithms like searching and sorting\nStrong skills in writing clean code using languages like Java and J2EE technologies. \nUnderstanding how to engineer the RESTful, Micro services and knowledge of major software patterns like MVC, Singleton, Facade, Business Delegate\nDeep knowledge of web technologies such as HTML5, CSS, JSON\nGood understanding of continuous integration tools and frameworks like Jenkins\nExperience in working with the Agile environments, like Scrum and Kanban.\nExperience in dealing with the performance tuning for very large-scale apps.\nExperience in writing scripting using Perl, Python and Shell scripting.\nExperience in writing jobs using Open source cluster computing frameworks like Spark\nRelational database design experience- MySQL, Oracle, SOLR, NoSQL - Cassandra, Mango DB and Hive.\nAptitude for writing clean, succinct and efficient code.\nAttitude to thrive in a fun, fast-paced start-up like environment"

Job posted by Sampreetha Pai

"Senior Engineer - Hadoop"

Founded 2012
Products and services
Location
Bengaluru (Bangalore)
Experience
6 - 10 years
Salary
25 - 50 lacs/annum

"Senior Engineer - Development\n\nOur Company\nWe help people around the world save money and live better -- anytime and anywhere -- in retail stores, online and through their mobile devices. Each week, more than 220 million customers and members visit our 11,096 stores under 69 banners in 27 countries and e-commerce websites in 10 countries. With last fiscal revenues of approximately $486 billion, Walmart employs 2.2 million employees worldwide. \n\n@ Walmart Labs in Bangalore, we use technology for the charter of building brand new platforms and services on the latest technology stack to support both our stores and e-commerce businesses worldwide. \n\nOur Team\nThe Global Data and Analytics Platforms (GDAP) team @ Walmart Labs in Bangalore provides Data Foundation Infrastructure, Visualization Portal, Machine Learning Platform, Customer platform and Data Science products that form part of core platforms and services that drive Walmart business. The group also develops analytical products for several verticals like supply chain, pricing, customer, HR etc.\n \nOur team which is part of GDAP Bangalore is responsible for creating the Customer Platform which is a one stop shop for all customer analytics for Walmart stores, a Machine Learning Platform that provides end-to-end infrastructure for Data Scientists to build ML solutions and an Enterprise Analytics group that provides analytics for HR, Global Governance and Security. The team is responsible for time critical, business critical and highly reliable systems that influence almost every part of the Walmart business. The team is spread over multiple locations and the Bangalore centre owns critical end to end pieces, that we design, build and support.\n \nYour Opportunity\nAs part of the Customer Analytics Team @Walmart Labs, you’ll have the opportunity to make a difference by being a part of development team that builds products at Walmart scale, which is the foundation of Customer Analytics across Walmart. One of the key attribute of this job is that you are required to continuously innovate and apply technology to provide business 360 view of Walmart customers.\n\nYour Responsibility\n•\tDesign, build, test and deploy cutting edge solutions at scale, impacting millions of customers worldwide drive value from data at Walmart Scale \n•\tInteract with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.\n•\tEngage with Product Management and Business to drive the agenda, set your priorities and deliver awesome product features to keep platform ahead of market scenarios.\n•\tIdentify right open source tools to deliver product features by performing research, POC/Pilot and/or interacting with various open source forums\n•\tDevelop and/or Contribute to add features that enable customer analytics at Walmart scale\n•\tDeploy and monitor products on Cloud platforms\n•\tDevelop and implement best-in-class monitoring processes to enable data applications meet SLAs \n\nOur Ideal Candidate\nYou have a deep interest and passion for technology. You love writing and owning codes and enjoy working with people who will keep challenging you at every stage. You have strong problem solving, analytic, decision-making and excellent communication with interpersonal skills. You are self-driven and motivated with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities. \nYour Qualifications\n•\tBachelor's Degree and 7+ yrs. 
of experience or Master’s Degree with 6+ years of experience in Computer Science or related field \n•\tExpertise in Big Data Ecosystem with deep experience in Java, Hadoop, Spark, Storm, Cassandra, NoSQL etc.\n•\tExpertise in MPP architecture and knowledge of MPP engine (Spark, Impala etc).\n•\tExperience in building scalable/highly available distributed systems in production.\n•\tUnderstanding of stream processing with expert knowledge on Kafka and either Spark streaming or Storm.\n•\tExperience with SOA. \n•\tKnowledge of graph database neo4j, Titan is definitely a plus. \n•\tKnowledge of Software Engineering best practices with experience on implementing CI/CD, Log aggregation/Monitoring/alerting for production system."

Job posted by Lakshman Dornala

"Software Developer (SPARK)"

Founded 2012
Products and services
Location
Pune
Experience
2 - 4 years
Salary
4 - 7 lacs/annum

"- 3+ years of appropriate technical experience\n- Strong proficiency with Core Java or with Scala on Spark (Hadoop)\n- Database experience preferably with DB2, Sybase, or Oracle\n- Complete SDLC process and Agile Methodology (Scrum)\n- Strong oral and written communication skills\n- Excellent interpersonal skills and professional approach"

Job posted by Minal Patange

"Hadoop Lead Engineers"

Founded 2012
Products and services
Location
Bengaluru (Bangalore)
Experience
7 - 9 years
Salary
27 - 34 lacs/annum

"Position Description\n\nAssists in providing guidance to small groups of two to three engineers, including offshore associates, for assigned Engineering projects\nDemonstrates up-to-date expertise in Software Engineering and applies this to the development, execution, and improvement of action plans\nGenerate weekly, monthly and yearly report using JIRA and Open source tools and provide updates to leadership teams.\nProactively identify issues, identify root cause for the critical issues.\nWork with cross functional teams, Setup KT sessions and mentor the team members.\nCo-ordinate with Sunnyvale and Bentonville teams.\nModels compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity\nProvides and supports the implementation of business solutions\nProvides support to the business\nTroubleshoots business, production issues and on call support.\n \n\nMinimum Qualifications\n\nBS/MS in Computer Science or related field 8+ years’ experience building web applications \nSolid understanding of computer science principles \nExcellent Soft Skills\nUnderstanding the major algorithms like searching and sorting\nStrong skills in writing clean code using languages like Java and J2EE technologies. \nUnderstanding how to engineer the RESTful, Micro services and knowledge of major software patterns like MVC, Singleton, Facade, Business Delegate\nDeep knowledge of web technologies such as HTML5, CSS, JSON\nGood understanding of continuous integration tools and frameworks like Jenkins\nExperience in working with the Agile environments, like Scrum and Kanban.\nExperience in dealing with the performance tuning for very large-scale apps.\nExperience in writing scripting using Perl, Python and Shell scripting.\nExperience in writing jobs using Open source cluster computing frameworks like Spark\nRelational database design experience- MySQL, Oracle, SOLR, NoSQL - Cassandra, Mango DB and Hive.\nAptitude for writing clean, succinct and efficient code.\nAttitude to thrive in a fun, fast-paced start-up like environment"

Job posted by Sampreetha Pai

"Bigdata"

Founded 2017
Products and services
Location
Hyderabad
Experience
0 - 1 years
Salary
1 - 1 lacs/annum

"Bigdata, Business intelligence , python, R with their skills"

Job posted by Jasmine Shaik

"Principal Member Technical Staff (SDE3)"

Founded 2005
Products and services
Location
Bengaluru (Bangalore)
Experience
8 - 15 years
Salary
25 - 50 lacs/annum

"About [24]7 Innovation Labs \n\nData is changing human lives at the core - we collect so much data about everything, use it to learn many things and apply the learnings in all aspects of our lives. [24]7 is at the fore front of applying data and machine learning to the world of customer acquisition and customer engagement. Our customer acquisition cloud uses best of ML and AI to get the right audiences and our engagement cloud powers the interactions for best experience. We service Fortune 100 enterprises globally and hundreds of millions of their customers every year. We enable 1.5B customer interactions every year. \n\nWe work on several challenging problems in the world of data processing, machine learning and use artificial intelligence to power Smart Agents. \n \nHow do you process millions of events in a stream to derive intelligence?\nHow do you learn from troves of data applying scalable machine learning algorithms?\nHow do you switch the learnings with real time streams to make decisions in sub 300 msec at scale?\n \nWe work with the best of open source technologies - Akka, Scala, Undertow, Spark, Spark ML, Hadoop, Cassandra, Mongo. Platform scale and real time are in our DNA and we work hard every day to change the game in customer engagement.\n \nWe believe in empowering smart people to work on larger than life problems with the best of technologies and like-minded people.\nWe are a Pre-IPO Silicon Valley based company with many global brands as our customers – Hilton, eBay, Time Warner Cable, Best Buy, Target, American Express, Capital One and United Airlines. We touch more than 300 M visitors online every month with our technologies. We have one of the best work environments in Bangalore.\n \n\nOpportunity\nPrincipal Member of Technical Staff is one of our distinguished individual contributors who can takes on problems of size and scale. You will be responsible for working with a team of smart and highly capable engineers to design a solution and work closely in the implementation, testing, deployment and runtime operation 24x7 with 99.99% uptime. You will have to demonstrate your technical mettle and influence and inspire the engineers to build things right. You will be working on the problems in one or more areas of : \n\nData Collection: Horizontally scalable platform to collect Billions of events from around the world in as little as 50 msec. \nIntelligent Campaign Engines: Make real time decisions using the events on best experience to display in as little as 200 msec. \nReal time Stream Computation: Compute thousands of metrics on the incoming billions of events to make it available for decisioning and analytics. \nData Pipeline: Scaleable data transport layer using Apache Kafka running across hundreds of servers and transporting billions of events in real time. \nData Analysis: Distributed OLAP engines on Hadoop or Spark to provide real time analytics on the data \nLarge scale Machine Learning: Supervised and Unsupervised learning on Hadoop and Spark using the best of open source frameworks. \n\nIn this role, you will be presenting your work at Meetup events, Conferences worldwide and contributing to Open Source. You will be helping with attracting the right talent and grooming the engineers to shape up to be the best. 
\n\n\nMust Have\n\nEngineering\n• Strong foundation in Computer Science - through education and/or experience - Data Structures, Algorithms, Design thinking, Optimizations.\n• Should have been an outstanding technical contributor with accomplishments include building products and platforms of scale.\n• Outstanding technical acumen and deep understanding of problems with distributed systems and scale with strong orientation towards open source.\n• Experience building platforms that have 99.99% uptime requirements and have scale.\n• Experience in working in a fast paced environment with attention to detail and incremental delivery through automation.\n• Loves to code than to talk.\n• 10+ years of experience in building software systems or able to demonstrate such maturity without the years under the belt.\n\nBehavioral\n• Loads of energy and can-do attitude to take BIG problems by their horns and solve them.\n• Entrepreneurial spirit to conceive ideas, turn challenges into opportunities and build products.\n• Ability to inspire other engineers to do the unimagined and go beyond their comfort lines.\n• Be a role model for upcoming engineers in the organization especially new college grads.\n\n\nTechnology background\n• Strong preference with experience in open source technologies: working with various Java application servers or Scala \n• Experience in deploying web applications, services that run across thousands of servers globally with very low latency and high uptime."

Job posted by Achappa Bheemaiah

"Senior Engineer - Java/Hadoop"

Founded 2012
Products and services
Location
Bengaluru (Bangalore)
Experience
6 - 10 years
Salary
20 - 50 lacs/annum

"Our Company\nWe help people around the world save money and live better -- anytime and anywhere -- in retail stores, online and through their mobile devices. Each week, more than 220 million customers and members visit our 11,096 stores under 69 banners in 27 countries and e-commerce websites in 10 countries. With last fiscal revenues of approximately $486 billion, Walmart employs 2.2 million employees worldwide. \n\n@ Walmart Labs in Bangalore, we use technology for the charter of building brand new platforms and services on the latest technology stack to support both our stores and e-commerce businesses worldwide. \n\nOur Team\nThe Global Data and Analytics Platforms (GDAP) team @ Walmart Labs in Bangalore provides Data Foundation Infrastructure, Visualization Portal, Machine Learning Platform, Customer platform and Data Science products that form part of core platforms and services that drive Walmart business. The group also develops analytical products for several verticals like supply chain, pricing, customer, HR etc.\n \nOur team which is part of GDAP Bangalore is responsible for creating the Customer Platform which is a one stop shop for all customer analytics for Walmart stores, a Machine Learning Platform that provides end-to-end infrastructure for Data Scientists to build ML solutions and an Enterprise Analytics group that provides analytics for HR, Global Governance and Security. The team is responsible for time critical, business critical and highly reliable systems that influence almost every part of the Walmart business. The team is spread over multiple locations and the Bangalore centre owns critical end to end pieces, that we design, build and support.\n \nYour Opportunity\nAs part of the Customer Analytics Team @Walmart Labs, you’ll have the opportunity to make a difference by being a part of development team that builds products at Walmart scale, which is the foundation of Customer Analytics across Walmart. One of the key attribute of this job is that you are required to continuously innovate and apply technology to provide business 360 view of Walmart customers.\n\nYour Responsibility\n•\tDesign, build, test and deploy cutting edge solutions at scale, impacting millions of customers worldwide drive value from data at Walmart Scale \n•\tInteract with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.\n•\tEngage with Product Management and Business to drive the agenda, set your priorities and deliver awesome product features to keep platform ahead of market scenarios.\n•\tIdentify right open source tools to deliver product features by performing research, POC/Pilot and/or interacting with various open source forums\n•\tDevelop and/or Contribute to add features that enable customer analytics at Walmart scale\n•\tDeploy and monitor products on Cloud platforms\n•\tDevelop and implement best-in-class monitoring processes to enable data applications meet SLAs \n\nOur Ideal Candidate\nYou have a deep interest and passion for technology. You love writing and owning codes and enjoy working with people who will keep challenging you at every stage. You have strong problem solving, analytic, decision-making and excellent communication with interpersonal skills. You are self-driven and motivated with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities. \nYour Qualifications\n•\tBachelor's Degree and 7+ yrs. 
of experience or Master’s Degree with 6+ years of experience in Computer Science or related field \n•\tExpertise in Big Data Ecosystem with deep experience in Java, Hadoop, Spark, Storm, Cassandra, NoSQL etc.\n•\tExpertise in MPP architecture and knowledge of MPP engine (Spark, Impala etc).\n•\tExperience in building scalable/highly available distributed systems in production.\n•\tUnderstanding of stream processing with expert knowledge on Kafka and either Spark streaming or Storm.\n•\tExperience with SOA. \n•\tKnowledge of graph database neo4j, Titan is definitely a plus. \n•\tKnowledge of Software Engineering best practices with experience on implementing CI/CD, Log aggregation/Monitoring/alerting for production system."

Job posted by Lakshman Dornala

"Data Science Engineer (SDE I)"

Founded 2017
Products and services
Location
Bengaluru (Bangalore)
Experience
1 - 3 years
Salary
12 - 20 lacs/annum

"Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and few European retailers, to empower real-time experiences for their combined >200 million end users.\n\nFor this role, credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks Data & Algorithms, love to play with BigData engineering, hands-on with Apache Spark, Kafka, RDBMS/NoSQL DBs, Big Data Analytics and handling Unix & Production Server.\nTier-1 college (BE from IITs, BITS-Pilani, top NITs, IIITs or MS in Stanford, Berkley, CMU, UW–Madison) or exceptionally bright work history is a must. Let us know if this interests you to explore the profile further."

Job posted by Shobhit Agarwal

"Staff Engineer - Java/Hadoop"

Founded 2012
Products and services
Location
Bengaluru (Bangalore)
Experience
8 - 14 years
Salary
30 - 70 lacs/annum

"Our Company\n\nWe help people around the world save money and live better -- anytime and anywhere -- in retail stores, online and through their mobile devices. Each week, more than 220 million customers and members visit our 11,096 stores under 69 banners in 27 countries and e-commerce websites in 10 countries. With last fiscal revenues of approximately $486 billion, Walmart employs 2.2 million employees worldwide. \n\n@ Walmart Labs in Bangalore, we use technology for the charter of building brand new platforms and services on the latest technology stack to support both our stores and e-commerce businesses worldwide. \n\nOur Team\n\nThe Global Data and Analytics Platforms (GDAP) team @ Walmart Labs in Bangalore provides Data Foundation Infrastructure, Visualization Portal, Machine Learning Platform, Customer platform and Data Science products that form part of core platforms and services that drive Walmart business. The group also develops analytical products for several verticals like supply chain, pricing, customer, HR etc.\n \nOur team which is part of GDAP Bangalore is responsible for creating the Customer Platform which is a one stop shop for all customer analytics for Walmart stores, a Machine Learning Platform that provides end-to-end infrastructure for Data Scientists to build ML solutions and an Enterprise Analytics group that provides analytics for HR, Global Governance and Security. The team is responsible for time critical, business critical and highly reliable systems that influence almost every part of the Walmart business. The team is spread over multiple locations and the Bangalore centre owns critical end to end pieces, that we design, build and support.\n \nYour Opportunity\n\nAs part of the Customer Analytics Team @Walmart Labs, you’ll have the opportunity to make a difference by being a part of development team that builds products at Walmart scale, which is the foundation of Customer Analytics across Walmart. One of the key attribute of this job is that you are required to continuously innovate and apply technology to provide business 360 view of Walmart customers.\n\nYour Responsibility\n\n•\tDesign, build, test and deploy cutting edge solutions at scale, impacting millions of customers worldwide drive value from data at Walmart Scale \n•\tInteract with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.\n•\tEngage with Product Management and Business to drive the agenda, set your priorities and deliver awesome product features to keep platform ahead of market scenarios.\n•\tIdentify right open source tools to deliver product features by performing research, POC/Pilot and/or interacting with various open source forums\n•\tDevelop and/or Contribute to add features that enable customer analytics at Walmart scale\n•\tDeploy and monitor products on Cloud platforms\n•\tDevelop and implement best-in-class monitoring processes to enable data applications meet SLAs \n\nOur Ideal Candidate\n\nYou have a deep interest and passion for technology. You love writing and owning codes and enjoy working with people who will keep challenging you at every stage. You have strong problem solving, analytic, decision-making and excellent communication with interpersonal skills. You are self-driven and motivated with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities. \n\nYour Qualifications\n•\tBachelor's Degree and 8+ yrs. 
of experience or Master’s Degree with 6+ years of experience in Computer Science or related field \n•\tExpertise in Big Data Ecosystem with deep experience in Java, Hadoop, Spark, Storm, Cassandra, NoSQL etc.\n•\tExpertise in MPP architecture and knowledge of MPP engine (Spark, Impala etc).\n•\tExperience in building scalable/highly available distributed systems in production.\n•\tUnderstanding of stream processing with expert knowledge on Kafka and either Spark streaming or Storm.\n•\tExperience with SOA. \n•\tKnowledge of graph database neo4j, Titan is definitely a plus. \n•\tKnowledge of Software Engineering best practices with experience on implementing CI/CD, Log aggregation/Monitoring/alerting for production system."

Job posted by Lakshman Dornala

"Lead Data Engineer (SDE III)"

Founded 2017
Products and services
Location
Bengaluru (Bangalore)
Experience
5 - 8 years
Salary
25 - 55 lacs/annum

"Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and few European retailers, to empower real-time experiences for their combined >200 million end users.\n\nFor this role, credible display of innovation in past projects is a must.\nWe are looking for hands-on leaders in data engineering with the 5-11 year of research/large-scale production implementation experience with:\n- Proven expertise in Spark, Kafka, and Hadoop ecosystem.\n- Rock-solid algorithmic capabilities.\n- Production deployments for massively large-scale systems, real-time personalization, big data analytics and semantic search.\n- Expertise in Containerization (Docker, Kubernetes) and Cloud Infra, preferably OpenStack.\n- Experience with Spark ML, Tensorflow (& TF Serving), MXNet, Scala, Python, NoSQL DBs, Kubernetes, ElasticSearch/Solr in production.\n\nTier-1 college (BE from IITs, BITS-Pilani, IIITs, top NITs, DTU, NSIT or MS in Stanford, UC, MIT, CMU, UW–Madison, ETH, top global schools) or exceptionally bright work history is a must. \n\nLet us know if this interests you to explore the profile further."

Job posted by Shobhit Agarwal

Want to apply for this role at UpX Academy? The hiring team responds within a day.

Why apply via CutShort? Connect with actual hiring teams and get their fast response. No third-party recruiters. No spam.