
Hadoop Jobs

Explore top Hadoop job opportunities at top companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

"Hadoop Developer"

Founded 2012
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[3 - 1]}} employees
{{j_company_stages[ - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
4 - 7 years
Experience icon
23 - 30 lacs/annum

"Position Description\n\nDemonstrates up-to-date expertise in Software Engineering and applies this to the development, execution, and improvement of action plans\nModels compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity\nProvides and supports the implementation of business solutions\nProvides support to the business\nTroubleshoots business, production issues and on call support.\n \n\nMinimum Qualifications\n\nBS/MS in Computer Science or related field 5+ years’ experience building web applications \nSolid understanding of computer science principles \nExcellent Soft Skills\nUnderstanding the major algorithms like searching and sorting\nStrong skills in writing clean code using languages like Java and J2EE technologies. \nUnderstanding how to engineer the RESTful, Micro services and knowledge of major software patterns like MVC, Singleton, Facade, Business Delegate\nDeep knowledge of web technologies such as HTML5, CSS, JSON\nGood understanding of continuous integration tools and frameworks like Jenkins\nExperience in working with the Agile environments, like Scrum and Kanban.\nExperience in dealing with the performance tuning for very large-scale apps.\nExperience in writing scripting using Perl, Python and Shell scripting.\nExperience in writing jobs using Open source cluster computing frameworks like Spark\nRelational database design experience- MySQL, Oracle, SOLR, NoSQL - Cassandra, Mango DB and Hive.\nAptitude for writing clean, succinct and efficient code.\nAttitude to thrive in a fun, fast-paced start-up like environment"

Job posted by Sampreetha Pai

"Hadoop Engineers"

Founded 2012
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[3 - 1]}} employees
{{j_company_stages[ - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
4 - 7 years
Experience icon
24 - 30 lacs/annum

"Position Description\n\nDemonstrates up-to-date expertise in Software Engineering and applies this to the development, execution, and improvement of action plans\nModels compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity\nProvides and supports the implementation of business solutions\nProvides support to the business\nTroubleshoots business, production issues and on call support.\n \n\nMinimum Qualifications\n\nBS/MS in Computer Science or related field 5+ years’ experience building web applications \nSolid understanding of computer science principles \nExcellent Soft Skills\nUnderstanding the major algorithms like searching and sorting\nStrong skills in writing clean code using languages like Java and J2EE technologies. \nUnderstanding how to engineer the RESTful, Micro services and knowledge of major software patterns like MVC, Singleton, Facade, Business Delegate\nDeep knowledge of web technologies such as HTML5, CSS, JSON\nGood understanding of continuous integration tools and frameworks like Jenkins\nExperience in working with the Agile environments, like Scrum and Kanban.\nExperience in dealing with the performance tuning for very large-scale apps.\nExperience in writing scripting using Perl, Python and Shell scripting.\nExperience in writing jobs using Open source cluster computing frameworks like Spark\nRelational database design experience- MySQL, Oracle, SOLR, NoSQL - Cassandra, Mango DB and Hive.\nAptitude for writing clean, succinct and efficient code.\nAttitude to thrive in a fun, fast-paced start-up like environment"

Job posted by Sampreetha Pai

"Senior Engineer - Hadoop"

Founded 2012
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[4 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
6 - 10 years
Experience icon
25 - 50 lacs/annum

"Senior Engineer - Development\n\nOur Company\nWe help people around the world save money and live better -- anytime and anywhere -- in retail stores, online and through their mobile devices. Each week, more than 220 million customers and members visit our 11,096 stores under 69 banners in 27 countries and e-commerce websites in 10 countries. With last fiscal revenues of approximately $486 billion, Walmart employs 2.2 million employees worldwide. \n\n@ Walmart Labs in Bangalore, we use technology for the charter of building brand new platforms and services on the latest technology stack to support both our stores and e-commerce businesses worldwide. \n\nOur Team\nThe Global Data and Analytics Platforms (GDAP) team @ Walmart Labs in Bangalore provides Data Foundation Infrastructure, Visualization Portal, Machine Learning Platform, Customer platform and Data Science products that form part of core platforms and services that drive Walmart business. The group also develops analytical products for several verticals like supply chain, pricing, customer, HR etc.\n \nOur team which is part of GDAP Bangalore is responsible for creating the Customer Platform which is a one stop shop for all customer analytics for Walmart stores, a Machine Learning Platform that provides end-to-end infrastructure for Data Scientists to build ML solutions and an Enterprise Analytics group that provides analytics for HR, Global Governance and Security. The team is responsible for time critical, business critical and highly reliable systems that influence almost every part of the Walmart business. The team is spread over multiple locations and the Bangalore centre owns critical end to end pieces, that we design, build and support.\n \nYour Opportunity\nAs part of the Customer Analytics Team @Walmart Labs, you’ll have the opportunity to make a difference by being a part of development team that builds products at Walmart scale, which is the foundation of Customer Analytics across Walmart. One of the key attribute of this job is that you are required to continuously innovate and apply technology to provide business 360 view of Walmart customers.\n\nYour Responsibility\n•\tDesign, build, test and deploy cutting edge solutions at scale, impacting millions of customers worldwide drive value from data at Walmart Scale \n•\tInteract with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.\n•\tEngage with Product Management and Business to drive the agenda, set your priorities and deliver awesome product features to keep platform ahead of market scenarios.\n•\tIdentify right open source tools to deliver product features by performing research, POC/Pilot and/or interacting with various open source forums\n•\tDevelop and/or Contribute to add features that enable customer analytics at Walmart scale\n•\tDeploy and monitor products on Cloud platforms\n•\tDevelop and implement best-in-class monitoring processes to enable data applications meet SLAs \n\nOur Ideal Candidate\nYou have a deep interest and passion for technology. You love writing and owning codes and enjoy working with people who will keep challenging you at every stage. You have strong problem solving, analytic, decision-making and excellent communication with interpersonal skills. You are self-driven and motivated with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities. \nYour Qualifications\n•\tBachelor's Degree and 7+ yrs. 
of experience or Master’s Degree with 6+ years of experience in Computer Science or related field \n•\tExpertise in Big Data Ecosystem with deep experience in Java, Hadoop, Spark, Storm, Cassandra, NoSQL etc.\n•\tExpertise in MPP architecture and knowledge of MPP engine (Spark, Impala etc).\n•\tExperience in building scalable/highly available distributed systems in production.\n•\tUnderstanding of stream processing with expert knowledge on Kafka and either Spark streaming or Storm.\n•\tExperience with SOA. \n•\tKnowledge of graph database neo4j, Titan is definitely a plus. \n•\tKnowledge of Software Engineering best practices with experience on implementing CI/CD, Log aggregation/Monitoring/alerting for production system."

Job posted by Lakshman Dornala

"Software Developer (SPARK)"

Founded 2012
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Pune
Experience icon
2 - 4 years
Experience icon
4 - 7 lacs/annum

"- 3+ years of appropriate technical experience\n- Strong proficiency with Core Java or with Scala on Spark (Hadoop)\n- Database experience preferably with DB2, Sybase, or Oracle\n- Complete SDLC process and Agile Methodology (Scrum)\n- Strong oral and written communication skills\n- Excellent interpersonal skills and professional approach"

Job posted by Minal Patange

"Hadoop Lead Engineers"

Founded 2012
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[3 - 1]}} employees
{{j_company_stages[ - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
7 - 9 years
Experience icon
27 - 34 lacs/annum

"Position Description\n\nAssists in providing guidance to small groups of two to three engineers, including offshore associates, for assigned Engineering projects\nDemonstrates up-to-date expertise in Software Engineering and applies this to the development, execution, and improvement of action plans\nGenerate weekly, monthly and yearly report using JIRA and Open source tools and provide updates to leadership teams.\nProactively identify issues, identify root cause for the critical issues.\nWork with cross functional teams, Setup KT sessions and mentor the team members.\nCo-ordinate with Sunnyvale and Bentonville teams.\nModels compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity\nProvides and supports the implementation of business solutions\nProvides support to the business\nTroubleshoots business, production issues and on call support.\n \n\nMinimum Qualifications\n\nBS/MS in Computer Science or related field 8+ years’ experience building web applications \nSolid understanding of computer science principles \nExcellent Soft Skills\nUnderstanding the major algorithms like searching and sorting\nStrong skills in writing clean code using languages like Java and J2EE technologies. \nUnderstanding how to engineer the RESTful, Micro services and knowledge of major software patterns like MVC, Singleton, Facade, Business Delegate\nDeep knowledge of web technologies such as HTML5, CSS, JSON\nGood understanding of continuous integration tools and frameworks like Jenkins\nExperience in working with the Agile environments, like Scrum and Kanban.\nExperience in dealing with the performance tuning for very large-scale apps.\nExperience in writing scripting using Perl, Python and Shell scripting.\nExperience in writing jobs using Open source cluster computing frameworks like Spark\nRelational database design experience- MySQL, Oracle, SOLR, NoSQL - Cassandra, Mango DB and Hive.\nAptitude for writing clean, succinct and efficient code.\nAttitude to thrive in a fun, fast-paced start-up like environment"

Job posted by Sampreetha Pai

"Senior Engineer - Java/Hadoop"

Founded 2012
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[4 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
6 - 10 years
Experience icon
20 - 50 lacs/annum

"Our Company\nWe help people around the world save money and live better -- anytime and anywhere -- in retail stores, online and through their mobile devices. Each week, more than 220 million customers and members visit our 11,096 stores under 69 banners in 27 countries and e-commerce websites in 10 countries. With last fiscal revenues of approximately $486 billion, Walmart employs 2.2 million employees worldwide. \n\n@ Walmart Labs in Bangalore, we use technology for the charter of building brand new platforms and services on the latest technology stack to support both our stores and e-commerce businesses worldwide. \n\nOur Team\nThe Global Data and Analytics Platforms (GDAP) team @ Walmart Labs in Bangalore provides Data Foundation Infrastructure, Visualization Portal, Machine Learning Platform, Customer platform and Data Science products that form part of core platforms and services that drive Walmart business. The group also develops analytical products for several verticals like supply chain, pricing, customer, HR etc.\n \nOur team which is part of GDAP Bangalore is responsible for creating the Customer Platform which is a one stop shop for all customer analytics for Walmart stores, a Machine Learning Platform that provides end-to-end infrastructure for Data Scientists to build ML solutions and an Enterprise Analytics group that provides analytics for HR, Global Governance and Security. The team is responsible for time critical, business critical and highly reliable systems that influence almost every part of the Walmart business. The team is spread over multiple locations and the Bangalore centre owns critical end to end pieces, that we design, build and support.\n \nYour Opportunity\nAs part of the Customer Analytics Team @Walmart Labs, you’ll have the opportunity to make a difference by being a part of development team that builds products at Walmart scale, which is the foundation of Customer Analytics across Walmart. One of the key attribute of this job is that you are required to continuously innovate and apply technology to provide business 360 view of Walmart customers.\n\nYour Responsibility\n•\tDesign, build, test and deploy cutting edge solutions at scale, impacting millions of customers worldwide drive value from data at Walmart Scale \n•\tInteract with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.\n•\tEngage with Product Management and Business to drive the agenda, set your priorities and deliver awesome product features to keep platform ahead of market scenarios.\n•\tIdentify right open source tools to deliver product features by performing research, POC/Pilot and/or interacting with various open source forums\n•\tDevelop and/or Contribute to add features that enable customer analytics at Walmart scale\n•\tDeploy and monitor products on Cloud platforms\n•\tDevelop and implement best-in-class monitoring processes to enable data applications meet SLAs \n\nOur Ideal Candidate\nYou have a deep interest and passion for technology. You love writing and owning codes and enjoy working with people who will keep challenging you at every stage. You have strong problem solving, analytic, decision-making and excellent communication with interpersonal skills. You are self-driven and motivated with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities. \nYour Qualifications\n•\tBachelor's Degree and 7+ yrs. 
of experience or Master’s Degree with 6+ years of experience in Computer Science or related field \n•\tExpertise in Big Data Ecosystem with deep experience in Java, Hadoop, Spark, Storm, Cassandra, NoSQL etc.\n•\tExpertise in MPP architecture and knowledge of MPP engine (Spark, Impala etc).\n•\tExperience in building scalable/highly available distributed systems in production.\n•\tUnderstanding of stream processing with expert knowledge on Kafka and either Spark streaming or Storm.\n•\tExperience with SOA. \n•\tKnowledge of graph database neo4j, Titan is definitely a plus. \n•\tKnowledge of Software Engineering best practices with experience on implementing CI/CD, Log aggregation/Monitoring/alerting for production system."

Job posted by Lakshman Dornala

"Data Science Engineer (SDE I)"

Founded 2017
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
1 - 3 years
Experience icon
12 - 20 lacs/annum

"Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and few European retailers, to empower real-time experiences for their combined >200 million end users.\n\nFor this role, credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks Data & Algorithms, love to play with BigData engineering, hands-on with Apache Spark, Kafka, RDBMS/NoSQL DBs, Big Data Analytics and handling Unix & Production Server.\nTier-1 college (BE from IITs, BITS-Pilani, top NITs, IIITs or MS in Stanford, Berkley, CMU, UW–Madison) or exceptionally bright work history is a must. Let us know if this interests you to explore the profile further."

Job posted by Shobhit Agarwal

"Staff Engineer - Java/Hadoop"

Founded 2012
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[4 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
8 - 14 years
Experience icon
30 - 70 lacs/annum

"Our Company\n\nWe help people around the world save money and live better -- anytime and anywhere -- in retail stores, online and through their mobile devices. Each week, more than 220 million customers and members visit our 11,096 stores under 69 banners in 27 countries and e-commerce websites in 10 countries. With last fiscal revenues of approximately $486 billion, Walmart employs 2.2 million employees worldwide. \n\n@ Walmart Labs in Bangalore, we use technology for the charter of building brand new platforms and services on the latest technology stack to support both our stores and e-commerce businesses worldwide. \n\nOur Team\n\nThe Global Data and Analytics Platforms (GDAP) team @ Walmart Labs in Bangalore provides Data Foundation Infrastructure, Visualization Portal, Machine Learning Platform, Customer platform and Data Science products that form part of core platforms and services that drive Walmart business. The group also develops analytical products for several verticals like supply chain, pricing, customer, HR etc.\n \nOur team which is part of GDAP Bangalore is responsible for creating the Customer Platform which is a one stop shop for all customer analytics for Walmart stores, a Machine Learning Platform that provides end-to-end infrastructure for Data Scientists to build ML solutions and an Enterprise Analytics group that provides analytics for HR, Global Governance and Security. The team is responsible for time critical, business critical and highly reliable systems that influence almost every part of the Walmart business. The team is spread over multiple locations and the Bangalore centre owns critical end to end pieces, that we design, build and support.\n \nYour Opportunity\n\nAs part of the Customer Analytics Team @Walmart Labs, you’ll have the opportunity to make a difference by being a part of development team that builds products at Walmart scale, which is the foundation of Customer Analytics across Walmart. One of the key attribute of this job is that you are required to continuously innovate and apply technology to provide business 360 view of Walmart customers.\n\nYour Responsibility\n\n•\tDesign, build, test and deploy cutting edge solutions at scale, impacting millions of customers worldwide drive value from data at Walmart Scale \n•\tInteract with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.\n•\tEngage with Product Management and Business to drive the agenda, set your priorities and deliver awesome product features to keep platform ahead of market scenarios.\n•\tIdentify right open source tools to deliver product features by performing research, POC/Pilot and/or interacting with various open source forums\n•\tDevelop and/or Contribute to add features that enable customer analytics at Walmart scale\n•\tDeploy and monitor products on Cloud platforms\n•\tDevelop and implement best-in-class monitoring processes to enable data applications meet SLAs \n\nOur Ideal Candidate\n\nYou have a deep interest and passion for technology. You love writing and owning codes and enjoy working with people who will keep challenging you at every stage. You have strong problem solving, analytic, decision-making and excellent communication with interpersonal skills. You are self-driven and motivated with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities. \n\nYour Qualifications\n•\tBachelor's Degree and 8+ yrs. 
of experience or Master’s Degree with 6+ years of experience in Computer Science or related field \n•\tExpertise in Big Data Ecosystem with deep experience in Java, Hadoop, Spark, Storm, Cassandra, NoSQL etc.\n•\tExpertise in MPP architecture and knowledge of MPP engine (Spark, Impala etc).\n•\tExperience in building scalable/highly available distributed systems in production.\n•\tUnderstanding of stream processing with expert knowledge on Kafka and either Spark streaming or Storm.\n•\tExperience with SOA. \n•\tKnowledge of graph database neo4j, Titan is definitely a plus. \n•\tKnowledge of Software Engineering best practices with experience on implementing CI/CD, Log aggregation/Monitoring/alerting for production system."

Job posted by Lakshman Dornala

"Senior Data Engineer (SDE II)"

Founded 2017
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
2 - 7 years
Experience icon
15 - 30 lacs/annum

"Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and few European retailers, to empower real-time experiences for their combined >200 million end users.\n\nThe founding team consists of BITS Pilani alumni with experience of creating global startup success stories. The core team, we are building, consists of some of the best minds in India in artificial intelligence research and data engineering.\n\nWe are looking for multiple different roles with 2-7 year of research/large-scale production implementation experience with:\n- Rock-solid algorithmic capabilities.\n- Production deployments for massively large-scale systems, real-time personalization, big data analytics, and semantic search.\n- Or credible research experience in innovating new ML algorithms and neural nets.\n\nGithub profile link is highly valued.\n\nFor right fit into the Couture.ai family, compensation is no bar."

Job posted by Shobhit Agarwal

"Lead Data Engineer (SDE III)"

Founded 2017
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
5 - 8 years
Experience icon
25 - 55 lacs/annum

"Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and few European retailers, to empower real-time experiences for their combined >200 million end users.\n\nFor this role, credible display of innovation in past projects is a must.\nWe are looking for hands-on leaders in data engineering with the 5-11 year of research/large-scale production implementation experience with:\n- Proven expertise in Spark, Kafka, and Hadoop ecosystem.\n- Rock-solid algorithmic capabilities.\n- Production deployments for massively large-scale systems, real-time personalization, big data analytics and semantic search.\n- Expertise in Containerization (Docker, Kubernetes) and Cloud Infra, preferably OpenStack.\n- Experience with Spark ML, Tensorflow (& TF Serving), MXNet, Scala, Python, NoSQL DBs, Kubernetes, ElasticSearch/Solr in production.\n\nTier-1 college (BE from IITs, BITS-Pilani, IIITs, top NITs, DTU, NSIT or MS in Stanford, UC, MIT, CMU, UW–Madison, ETH, top global schools) or exceptionally bright work history is a must. \n\nLet us know if this interests you to explore the profile further."

Job posted by Shobhit Agarwal

"Big Data- Hadoop"

Founded 2011
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[3 - 1]}} employees
{{j_company_stages[1 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
2 - 5 years
Experience icon
12 - 30 lacs/annum

"Technical Lead – Big data analytics\nWe are looking for a senior engineer to work on our next generation marketing analytics platform. The engineer should have working experience in handling big sets of raw data and transforming them into meaningful insights using any of these tools - Hive/Presto/Spark, Redshift, Kafka/Kinesis etc. \n\nLeadSquared is a leading customer acquisition SaaS platform used by over 15,000 users across 25 countries to run their sales and marketing processes. Our goal is to have million+ users on our platform in the next 5 years, which is an extraordinary and exciting challenge for Engineering team to work on.\n\nThe Role\n\nLeadSquared is looking for a senior engineer to be part of Marketing Analytics platform where we are building a system to gather multi-channel customer behavior data and generate meaningful insights and actions to eventually accelerate revenues. \nThe individual will work in a small team to build the system to ingest large volumes of data, and setup ways to transform the data to generate insights as well as real-time interactive analytics. \nRequirements\n•\tPassion for building and delivering great software. \n•\tAbility to work in a small team and take full ownership and responsibility of critical projects \n•\t5+ years of experience in data-driven environment designing and building business applications\n•\tStrong software development skills in one or more programming languages (Python, Java or C#)\n•\tAtleast 1-year experience in distributed analytic processing technologies such as Hadoop, Hive, Pig, Presto, MapReduce, Kafka, Spark etc.\nBasic Qualifications\n•\tStrong understanding of Distributed Computing Principles\n•\tProficiency with Distributed file\\object storage systems like HDFS\n•\tHands-on experience with computation frameworks like Spark Streaming, MapReduce V2\n•\tEffectively implemented one of big data ingestion and transformation pipelines e.g Kafka, Kinesis, Fluentd, LogStash, ELK stack\n•\tDatabase proficiency and strong experience in one of NoSQL data store systems e.g MongoDB, HBase, Cassandra\n•\tHands-on working knowledge of data warehouse systems e.g Hive, AWS Redshift\n•\tParticipated in scaling and processing of large sets of data [in the order of Petabytes]\nPreferred Qualifications\n•\tExpert level proficiency in SQL. Ability to perform complex data analysis with large volumes of data\n•\tUnderstanding of ad-hoc interactive query engines like Apache Drill, Presto, Google Big Query, AWS Athena\n•\tExposure to one or more search stores like Solr, ElasticSearch is a plus\n•\tExperience working with distributed messaging systems like RabbitMQ\n•\tExposure to infrastructure automation tools like Chef"

Job posted by Vish As

"Technical Architect/CTO"

Founded 2016
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[1 - 1]}} employees
{{j_company_stages[1 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Mumbai
Experience icon
5 - 11 years
Experience icon
15 - 30 lacs/annum

"ABOUT US:\nArque Capital is a FinTech startup working with AI in Finance in domains like Asset Management (Hedge Funds, ETFs and Structured Products), Robo Advisory, Bespoke Research, Alternate Brokerage, and other applications of Technology & Quantitative methods in Big Finance.\n\nPROFILE DESCRIPTION:\n1. Get the \"Tech\" in order for the Hedge Fund - Help answer fundamentals of technology blocks to be used, choice of certain platform/tech over other, helping team visualize product with the available resources and assets\n2. Build, manage, and validate a Tech Roadmap for our Products\n3. Architecture Practices - At startups, the dynamics changes very fast. Making sure that best practices are defined and followed by team is very important. CTO’s may have to garbage guy and clean the code time to time. Making reviews on Code Quality is an important activity that CTO should follow.\n4. Build progressive learning culture and establish predictable model of envisioning, designing and developing products\n5. Product Innovation through Research and continuous improvement\n6. Build out the Technological Infrastructure for the Hedge Fund\n7. Hiring and building out the Technology team\n8. Setting up and managing the entire IT infrastructure - Hardware as well as Cloud\n9. Ensure company-wide security and IP protection\n\nREQUIREMENTS:\nComputer Science Engineer from Tier-I colleges only (IIT, IIIT, NIT, BITS, DHU, Anna University, MU)\n5-10 years of relevant Technology experience (no infra or database persons)\nExpertise in Python and C++ (3+ years minimum)\n2+ years experience of building and managing Big Data projects\nExperience with technical design & architecture (1+ years minimum)\nExperience with High performance computing - OPTIONAL\nExperience as a Tech Lead, IT Manager, Director, VP, or CTO\n1+ year Experience managing Cloud computing infrastructure (Amazon AWS preferred) - OPTIONAL\nAbility to work in an unstructured environment\nLooking to work in a small, start-up type environment based out of Mumbai\n\nCOMPENSATION:\nCo-Founder status and Equity partnership"

Job posted by Hrishabh Sanghvi

"Big Data Engineer"

Founded 2015
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Noida
Experience icon
1 - 6 years
Experience icon
4 - 9 lacs/annum

"We are a team of Big Data, IoT, ML and security experts. We are a technology company working in Big Data Analytics domain ranging from industrial IOT to Machine Learning and AI. What we doing is really challenging, interesting and cutting edge and we need similar passion people to work with us."

Job posted by Sneha Pandey

"Big Data Engineer"

Founded 2015
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Noida
Experience icon
1 - 7 years
Experience icon
4 - 9 lacs/annum

"We are a team of Big Data, IoT, ML and security experts. We are a technology company working in Big Data Analytics domain ranging from industrial IOT to Machine Learning and AI. What we doing is really challenging, interesting and cutting edge and we need similar passion people to work with us."

Job posted by Sneha Pandey

"Big Data Evangelist"

Founded 2016
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Noida
Experience icon
2 - 6 years
Experience icon
4 - 12 lacs/annum

"Looking for a technically sound and excellent trainer on big data technologies. Get an opportunity to become popular in the industry and get visibility. Host regular sessions on Big data related technologies and get paid to learn."

Job posted by Suchit Majumdar

"Database Architect"

Founded 2017
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[2 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
5 - 10 years
Experience icon
10 - 20 lacs/annum

"candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables will include the development of Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into the GRAND's Data Lake.\n\nKey Responsibilities :\n\n- Create a GRAND Data Lake and Warehouse which pools all the data from different regions and stores of GRAND in GCC\n\n- Ensure Source Data Quality Measurement, enrichment and reporting of Data Quality\n\n- Manage All ETL and Data Model Update Routines\n\n- Integrate new data sources into DWH\n\n- Manage DWH Cloud (AWS/AZURE/Google) and Infrastructure\n\nSkills Needed :\n\n- Very strong in SQL. Demonstrated experience with RDBMS, Unix Shell scripting preferred (e.g., SQL, Postgres, Mongo DB etc)\n\n- Experience with UNIX and comfortable working with the shell (bash or KRON preferred)\n\n- Good understanding of Data warehousing concepts.\n\nBig data systems : Hadoop, NoSQL, HBase, HDFS, MapReduce\n\n- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.\n\n- Working with data delivery teams to set up new Hadoop users. This job includes setting up Linux users, setting up and testing HDFS, Hive, Pig and MapReduce access for the new users.\n\n- Cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios, Cloudera Manager Enterprise, and other tools.\n\n- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.\n\n- Screen Hadoop cluster job performances and capacity planning\n\n- Monitor Hadoop cluster connectivity and security\n\n- File system management and monitoring.\n\n- HDFS support and maintenance.\n\n- Collaborating with application teams to install operating system and\n\n- Hadoop updates, patches, version upgrades when required.\n\n- Defines, develops, documents and maintains Hive based ETL mappings and scripts"

Job posted by Rahul Malani

"Hadoop Administrator"

Founded 2008
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[4 - 1]}} employees
{{j_company_stages[1 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
2 - 5 years
Experience icon
5 - 15 lacs/annum

"Securonix is a security analytics product company. Our product provides real-time behavior analytics capabilities and uses the following Hadoop components - Kafka, Spark, Impala, HBase. We support very large customers for all our customers globally, with full access to the cluster. Cloudera Certification is a big plus."

Job posted by Ramakrishna Murthy

"Hadoop Developer"

Founded 2008
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[4 - 1]}} employees
{{j_company_stages[1 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Pune
Experience icon
3 - 7 years
Experience icon
10 - 15 lacs/annum

"Securonix is a Big Data Security Analytics product company. The only product which delivers real-time behavior analytics (UEBA) on Big Data."

Job posted by Ramakrishna Murthy

"Big Data Engineer"

Founded 2015
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Noida
Experience icon
2 - 7 years
Experience icon
5 - 12 lacs/annum

"Our company is working on some really interesting projects in Big Data Domain in various fields (Utility, Retail, Finance). We are working with some big corporates and MNCs around the world.\n\nWhile working here as Big Data Engineer, you will be dealing with big data in structured and unstructured form and as well as streaming data from Industrial IOT infrastructure. You will be working on cutting edge technologies and exploring many others while also contributing back to the open-source community. You will get to know and work on end-to-end processing pipeline which deals with all type of work like storing, processing, machine learning, visualization etc."

Job posted by Harsh Choudhary

"HBase Architect Developer"

Founded 2017
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[1 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
1 - 3 years
Experience icon
6 - 20 lacs/annum

"www.aaknet.co.in/careers/careers-at-aaknet.html\n\nYou are extra-ordinary, a rock-star, hardly found a place to leverage or challenge your potential, did not spot a sky rocketing opportunity yet?\n\nCome play with us – face the challenges we can throw at you, chances are you might be humiliated (positively); do not take it that seriously though! \n\nPlease be informed, we rate CHARACTER, attitude high if not more than your great skills, experience and sharpness etc. :)\n\nBest wishes & regards,\nTeam Aak!"

Job posted by Debdas Sinha

"Freelance Faculty"

Founded 2009
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[4 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Anywhere, United States, Canada
Experience icon
3 - 10 years
Experience icon
2 - 10 lacs/annum

"To introduce myself I head Global Faculty Acquisition for Simplilearn.\n\nAbout My Company:\nSIMPLILEARN is a company which has transformed 500,000+ carriers across 150+ countries with 400+ courses and yes we are a Registered Professional Education Provider providing PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI - ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. : \n\nOur Official website : www.simplilearn.com \n\nIf you're interested in teaching, interacting, sharing real life experiences and passion to transform Careers, please join hands with us.\n\nOnboarding Process \n•\tUpdated CV needs to be sent to my email id , with relevant certificate copy. \n•\tSample ELearning access will be shared with 15days trail post your registration in our website. \n•\tMy Subject Matter Expert will evaluate you on your areas of expertise over a telephonic conversation - Duration 15 to 20 minutes \n•\tCommercial Discussion. \n•\tWe will register you to our on-going online session to introduce you to our course content and the Simplilearn style of teaching. \n•\tA Demo will be conducted to check your training style, Internet connectivity. \n•\tFreelancer Master Service Agreement \n\nPayment Process : \n•\tOnce a workshop/ Last day of the training for the batch is completed you have to share your invoice. \n•\tAn automated Tracking Id will be shared from our automated ticketing system. \n•\tOur Faculty group will verify the details provided and share the invoice to our internal finance team to process your payment, if there are any additional information required we will co-ordinate with you. \n•\tPayment will be processed in 15 working days as per the policy this 15 days is from the date of invoice received. \n\nPlease share your updated CV to get this for next step of on-boarding process."

Job posted by STEVEN JOHN

"Big Data Engineer,"

Founded 2014
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[3 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Pune
Experience icon
4 - 8 years
Experience icon
5 - 16 lacs/annum

"Greetings from Info Vision labs\nInfoVision was founded in 1995 by technology professionals with a vision to provide quality and cost-effective IT solutions worldwide. InfoVision is a global IT Services and Solutions company with primary focus on Strategic Resources, Enterprise Applications and Technology Solutions. Our core practice areas include Applications Security, Business Analytics, Visualization & Collaboration and Wireless & IP Communications. Our IT services cover the full range of needs of enterprises, from Staffing to Solutions. Over the past decade, our ability to serve our clients has steadily evolved. It now covers multiple industries, numerous geographies and flexible delivery models, as well as the state-of-the-art technologies. \nInfoVision opened its development and delivery center in 2014, at Pune and has been expanding with project engagements with clients based in US and India. \nWe can offer the right individuals an industry leading package and fast career growth prospects.\n\nPlease get to know about us at - http://infovisionlabs.com/about/"

Job posted by Ankita Lonagre

"Big Data"

Founded 2014
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[3 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Pune
Experience icon
5 - 10 years
Experience icon
5 - 5 lacs/annum

"We at InfoVision Labs, are passionate about technology and what our clients would like to get accomplished. We continuously strive to understand business challenges, changing competitive landscape and how the cutting edge technology can help position our client to the forefront of the competition.We are a fun loving team of Usability Experts and Software Engineers, focused on Mobile Technology, Responsive Web Solutions and Cloud Based Solutions.\n\nJob Responsibilities:\n◾Minimum 3 years of experience in Big Data skills required.\n◾Complete life cycle experience with Big Data is highly preferred\n◾Skills – Hadoop, Spark, “R”, Hive, Pig, H-Base and Scala\n◾Excellent communication skills\n◾Ability to work independently with no-supervision."

Job posted by Shekhar Singh kshatri

"Data Scientist"

Founded 2014
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Ahmedabad
Experience icon
3 - 7 years
Experience icon
5 - 12 lacs/annum

"Job Role\nDevelop and refine algorithms for machine learning from large datasets.\nWrite offline as well as efficient runtime programs for meaning extraction and real-time response systems.\nDevelop and improve Ad-Targeting based on various criteria like demographics, location, user-interests and many more.\nDesign and develop techniques for handling real-time budget and campaign updates.\nBe open to learning new technologies.\nCollaborate with team members in building products\nSkills Required\nMS/PhD in Computer Science or other highly quantitative field\nMinimum 8 - 10 yrs of hands on experience in different machine-learning techniques\nStrong expertise in Big-data processing\n(Combination of the technologies you should be familiar with Kafka, Storm, Logstash, ElasticSearch, Hadoop, Spark)\nStrong coding skills in at-least one object-oriented programming language (e.g. Java, Python)\nStrong problem solving and analytical ability\nPrior 3+ year experience in advertising technology is preferred"

Job posted by Ankit Vyas

"Senior Software Engineer"

Founded 2014
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[2 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
6 - 10 years
Experience icon
5 - 40 lacs/annum

"Check our JD: https://www.zeotap.com/job/senior-tech-lead-m-f-for-zeotap/oEQK2fw0"

Job posted by Projjol Banerjea

"Big Data Engineer"

Founded 2007
Products and services{{j_company_types[1 - 1]}}
{{j_company_sizes[4 - 1]}} employees
{{j_company_stages[2 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Bengaluru (Bangalore)
Experience icon
3 - 10 years
Experience icon
16 - 35 lacs/annum

"- Passion to build analytics & personalisation platform at scale\n- 4 to 9 years of software engineering experience with product based company in data analytics/big data domain\n- Passion for the Designing and development from the scratch. \n- Expert level Java programming and experience leading full lifecycle of application Dev.\n- Exp in Analytics, Hadoop, Pig, Hive, Mapreduce, ElasticSearch, MongoDB is an additional advantage\n- Strong communication skills, verbal and written"

Job posted by Vijaya Kiran

"Sr. Tech Lead"

Founded 2015
Products and services{{j_company_types[3 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Pune
Experience icon
8 - 10 years
Experience icon
17 - 20 lacs/annum

"Responsibilities:\n\nResponsible for all aspects of development and support for internally created or supported application software, including: the development methodologies, technologies (language, databases, support tools), development and testing hardware/software environments, and management of the application development staff and project workload for the agency. Your job is to manage a project and manage a set of engineers. You are responsible for making your team happy and productive, helping them manage their careers. You are responsible for delivering great product on time and with quality. \n\nESSENTIAL DUTIES AND RESPONSIBILITIES\n•\tSupervise the projects and responsibilities of the Web and Software Developers.\n•\tResponsible for the prioritization of projects assigned to the Application Development team.\n•\tResponsible for the complete development lifecycle of the agency software systems; including gathering requirements, database management, software development, testing, implementation, user follow up, support and Project Management.\n•\tResponsible for the Integrity, Maintenance and changes to the Application Development Servers and Databases. (DBA)\n•\tResponsible for developing and implementing change control processes for the development team to follow.\n•\tProvides ad-hoc reporting and decision support required for management decision processes. \n•\tMakes technology decisions that effect Software Development. \n•\tWorks on special I.T. projects as needed.\n\nFamiliarity with Technologies:\n•\tJava, Spring, Hibernate, Laravel\n•\tMySQL, MongoDB, Amazon RedShift, Hadoop\n•\tAngular.js, Boostrap\n•\tAWS cloud infrastructure\n\nQUALIFICATIONS\n•\tBachelor’s degree in Information Science or Computer Science required.\n•\t8-10 years of Application Development Experience required.\n•\tFive plus years of Database Design and Analysis required.\n•\tStrong verbal communication skills required."

Job posted by Aditya Bhelande

"Data Scientist"

Founded 2014
Products and services{{j_company_types[2 - 1]}}
{{j_company_sizes[2 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Navi Mumbai
Experience icon
4 - 8 years
Experience icon
5 - 15 lacs/annum

"Nextalytics is an offshore research, development and consulting company based in India that focuses on high quality and cost effective software development and data science solutions. At Nextalytics, we have developed a culture that encourages employees to be creative, innovative, and playful. We reward intelligence, dedication and out-of-the-box thinking; if you have these, Nextalytics will be the perfect launch pad for your dreams. Nextalytics is looking for smart, driven and energetic new team members."

Job posted by Harshal Patni

"Ab>initio, Big Data, Informatica, Tableau, Data Architect, Cognos, Microstrategy, Healther Business Analysts, Cloud etc."

Founded 2012
Products and services{{j_company_types[2 - 1]}}
{{j_company_sizes[3 - 1]}} employees
{{j_company_stages[3 - 1]}}
{{rendered_skills_map[skill] || skill}}
Location icon
Pune, New Yor, Chicago, Hyderabad
Experience icon
1 - 15 years
Experience icon
5 - 10 lacs/annum

"Exusia, Inc. (ex-OO-see-ah: translated from Greek to mean \"Immensely Powerful and Agile\") was founded with the objective of addressing a growing gap in the data innovation and engineering space as the next global leader in big data, analytics, data integration and cloud computing solutions. Exusia is a multinational, delivery centric firm that provides consulting and software as a service (SaaS) solutions to leading financial, government, healthcare, telecommunications and high technology organizations facing the largest data volumes and the most complex information management requirements. \n\nExusia was founded in the United States in 2012 with headquarters in New York City and regional US offices in Chicago, Atlanta and Los Angeles. Exusia’s international presence continues to expand and is driven from Toronto (Canada), Sao Paulo (Brazil), Johannesburg (South Africa) and Pune (India).\n\nOur mission is to empower clients to grow revenue, optimize costs and satisfy regulatory requirements through the innovative use of information and analytics. We leverage a unique blend of strategy, intellectual property, technical execution and outsourcing to enable our clients to achieve significant returns on investment for their business, data and technology initiatives.\n\nAt the core of our philosophy is a quality-first, trust-building, delivery-focused client relationship. The foundation of this relationship is the talent of our team. By recruiting and retaining the best talent in the industry, we are able to deliver to clients, whose data volumes and requirements number among the largest in the world, a broad range of customized, cutting edge solutions."

Job posted by Dhaval Upadhyay
Why apply via CutShort?
Connect with actual hiring teams and get fast responses. No third-party recruiters. No spam.