
Apache Kafka Jobs in Bangalore (Bengaluru)

Explore top Apache Kafka job opportunities in Bangalore (Bengaluru) at leading companies and startups. All jobs are added by verified employees who can be contacted directly below.

Senior Software Engineer

Founded 2017
Product
6-50 employees
Profitable
Java
Data Structures
Algorithms
Scala
Apache Kafka
RabbitMQ
Hadoop
Location: Bengaluru (Bangalore)
Experience: 2 - 7 years
Salary: 10 - 15 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. The founding team consists of BITS Pilani alumni with experience in creating global startup success stories, and the core team we are building brings together some of the best minds in India in artificial intelligence research and data engineering. We are hiring for multiple roles requiring 2-7 years of research or large-scale production implementation experience, with:
- Rock-solid algorithmic capabilities.
- Production deployments of massively large-scale systems, real-time personalization, big data analytics, and semantic search.
- Or credible research experience in innovating new ML algorithms and neural nets.
A GitHub profile link is highly valued. For the right fit into the Couture.ai family, compensation is no bar.

Job posted by Shobhit Agarwal

Lead Data Engineer

Founded 2017
Product
6-50 employees
Profitable
Spark
Apache Kafka
Hadoop
TensorFlow
Scala
Machine Learning (ML)
OpenStack
MXNet
Location: Bengaluru (Bangalore)
Experience: 3 - 9 years
Salary: 25 - 50 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects is a must. We are looking for hands-on leaders in data engineering with 5-11 years of research or large-scale production implementation experience, with:
- Proven expertise in Spark, Kafka, and the Hadoop ecosystem.
- Rock-solid algorithmic capabilities.
- Production deployments of massively large-scale systems, real-time personalization, big data analytics, and semantic search.
- Expertise in containerization (Docker, Kubernetes) and cloud infrastructure, preferably OpenStack.
- Experience with Spark ML, TensorFlow (and TF Serving; see the sketch below), MXNet, Scala, Python, NoSQL DBs, Kubernetes, and ElasticSearch/Solr in production.
A tier-1 college (BE from IITs, BITS-Pilani, IIITs, top NITs, DTU, NSIT, or MS from Stanford, UC, MIT, CMU, UW–Madison, ETH, or other top global schools) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.
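
As a rough illustration of the TF Serving experience this role asks for (a hedged sketch, not Couture.ai's actual stack: the host, the model name "personalization" and the feature vector are invented assumptions), a model served by TensorFlow Serving can be queried over its REST API in a few lines of Python:

```python
# Hypothetical sketch: querying a TensorFlow Serving REST endpoint.
# Port 8501 is TF Serving's default REST port; model name and inputs
# below are illustrative only.
import requests

def predict(features, host="localhost", model="personalization"):
    url = f"http://{host}:8501/v1/models/{model}:predict"
    response = requests.post(url, json={"instances": [features]}, timeout=2)
    response.raise_for_status()
    return response.json()["predictions"][0]

if __name__ == "__main__":
    print(predict([0.1, 0.7, 0.2]))
```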

Job posted by Shobhit Agarwal

Senior Data Engineer

Founded 2017
Product
6-50 employees
Profitable
Shell Scripting
Apache Kafka
TensorFlow
Spark
Hadoop
Elasticsearch
MXNet
SMACK
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 20 - 40 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects is a must. We are hiring for multiple roles requiring 2-8 years of research or large-scale production implementation experience, with:
- Proven expertise in Spark, Kafka, and the Hadoop ecosystem.
- Rock-solid algorithmic capabilities.
- Production deployments of massively large-scale systems, real-time personalization, big data analytics, and semantic search.
- Or experience with Spark ML, TensorFlow (and TF Serving), MXNet, Scala, Python, NoSQL DBs, Kubernetes, or ElasticSearch/Solr in production.
A tier-1 college (BE from IITs, BITS-Pilani, IIITs, top NITs, or MS from Stanford, Berkeley, CMU, UW–Madison) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.

Job posted by Shobhit Agarwal

DevOps Engineer

Founded 2014
Product
51-250 employees
Raised funding
DevOps
Python
Kubernetes
Apache Kafka
Amazon Web Services (AWS)
Puppet
Location: Bengaluru (Bangalore)
Experience: 2 - 4 years
Salary: 5 - 10 lacs/annum

Requirements:
- Strong background in Linux fundamentals and system administration.
- Good command of scripting languages such as Python and shell scripting.
- Experience with Docker and at least one container management system such as Kubernetes, Docker Swarm or Apache Mesos.
- Ability to use a wide variety of open source technologies and cloud services such as AWS, GCP or Azure.
- Experience with automation/configuration management using Puppet, Chef or an equivalent.
- Experience managing in-memory databases such as Redis and Memcached.
- Experience with messaging systems such as Kafka, RabbitMQ or ActiveMQ.
- Knowledge of best practices and IT operations in an always-up, always-available service.
- Good experience with monitoring and alerting systems such as Nagios and Zabbix.
- Experience with CI and CD tools.
Job Description:
- Manage the cloud infrastructure.
- Explore new and more efficient open source modules and bring them into the bot detection infrastructure.
- Explore new Kubernetes features and implement them to build scalable infrastructure.
- Implement effective monitoring tools and proactively automate day-to-day activities (see the sketch below).
- Linux administration.
- Manage the office network and hardware.
- Implement policies dictated by the compliance team to secure the ShieldSquare backend.
- Follow the tracker and resolve defects.
- Understand the product and manage the staging/production setup.
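
A minimal sketch of the kind of day-to-day automation this role describes, assuming a local Redis instance, a local Kafka broker and a hypothetical "events" topic; this is illustrative only, not the employer's actual tooling:

```python
# Illustrative health probe for Redis and Kafka; hostnames, ports and
# the topic name are assumptions, not real infrastructure details.
import sys

import redis                      # pip install redis
from kafka import KafkaConsumer   # pip install kafka-python

def check_redis(host="localhost", port=6379):
    try:
        return redis.Redis(host=host, port=port, socket_timeout=2).ping()
    except redis.RedisError:
        return False

def check_kafka(bootstrap="localhost:9092", topic="events"):
    try:
        consumer = KafkaConsumer(bootstrap_servers=bootstrap,
                                 consumer_timeout_ms=2000)
        available = topic in consumer.topics()
        consumer.close()
        return available
    except Exception:
        return False

if __name__ == "__main__":
    ok = check_redis() and check_kafka()
    print("healthy" if ok else "degraded")
    sys.exit(0 if ok else 1)
```

In practice a probe like this would feed an alerting system such as Nagios or Zabbix rather than print to stdout.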

Job posted by Vinoth Kumar

Python Developer

Founded 2015
Product
6-50 employees
Raised funding
Python
Django
Flask
MongoDB
Apache Kafka
Apache HBase
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 7 - 9 lacs/annum

We are looking for a full-time senior resource to lead the Python-driven development team. The candidate will be responsible for designing, developing and taking highly scalable applications to production, with the opportunity to work on leading ML frameworks as well as custom-built frameworks for enhanced financial analytics. Crediwatch is an automated and intelligent data curation platform that helps businesses make faster and smarter decisions. Crediwatch supports sophisticated credit and other risk assessment models by providing data intelligence, predictive analysis and decision-enabling technologies that maximise customer profitability and performance. Crediwatch has received accolades in the Citibank Tech4Integrity challenge (worldwide), the Barclays Rise accelerator and Tech30 by YourStory, to name a few. We are based in the heart of Bangalore and are growing fast.
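
To make the listed stack concrete, here is a hedged sketch (not Crediwatch's codebase; the topic name "raw-documents", the port and the payload shape are invented) of a Flask endpoint that accepts a JSON document and hands it to Kafka for downstream curation:

```python
# Hypothetical ingestion endpoint: Flask + kafka-python.
import json

from flask import Flask, jsonify, request
from kafka import KafkaProducer   # pip install kafka-python

app = Flask(__name__)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

@app.route("/documents", methods=["POST"])
def ingest_document():
    doc = request.get_json(force=True)
    # Fire-and-forget publish; a real service would handle delivery errors.
    producer.send("raw-documents", doc)
    return jsonify({"status": "queued"}), 202

if __name__ == "__main__":
    app.run(port=5000)
```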

Job posted by Hemanth G C

DevOps lead

Founded 2014
Product
51-250 employees
Raised funding
Linux/Unix
Python
Shell Scripting
Kubernetes
Apache Kafka
RabbitMQ
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: 10 - 20 lacs/annum

Experience: 7+ years
Job Responsibilities:
- Manage a cloud deployment with thousands of VMs and containers at 100% uptime.
- Budget the infrastructure costs and plan for continued cost optimisation.
- Manage and motivate the team members.
- Design the architecture to scale the back-end to meet business requirements.
Requirements:
- Strong background in Linux fundamentals and system administration.
- Good command of scripting languages such as Python and shell scripting.
- Experience with Docker and at least one container management system such as Kubernetes, Docker Swarm or Apache Mesos (see the sketch below).
- Ability to use a wide variety of open source technologies and cloud services such as AWS, GCP or Azure.
- Experience with automation/configuration management using Puppet, Chef or an equivalent.
- Experience managing in-memory databases such as Redis and Memcached.
- Experience with messaging systems such as Kafka, RabbitMQ or ActiveMQ.
- Knowledge of best practices and IT operations in an always-up, always-available service.
- Good team management and communication skills.
- Good experience with monitoring and alerting systems such as Nagios and Zabbix.
- Experience with CI and CD tools.
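
As an illustration of the container-management side of this role (a generic sketch assuming kubeconfig access to some cluster; it is not tied to the employer's infrastructure), the official Kubernetes Python client can flag pods that are not healthy:

```python
# Illustrative sketch using the Kubernetes Python client.
from kubernetes import client, config   # pip install kubernetes

def unhealthy_pods():
    config.load_kube_config()           # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            yield pod.metadata.namespace, pod.metadata.name, pod.status.phase

if __name__ == "__main__":
    for ns, name, phase in unhealthy_pods():
        print(f"{ns}/{name}: {phase}")
```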

Job posted by Vinoth Kumar

Senior Backend Developer

Founded 2016
Product
6-50 employees
Raised funding
NoSQL Databases
Java
Google App Engine (GAE)
Firebase
Cassandra
Aerospike
Spark
Apache Kafka
Location: Bengaluru (Bangalore)
Experience: 1 - 7 years
Salary: 15 - 40 lacs/annum

RESPONSIBILITIES:
1. Full ownership of tech, from driving product decisions to architecture to deployment.
2. Develop cutting-edge user experiences and build technology solutions such as instant messaging over poor networks, live discussions, live videos and optimal matching.
3. Use billions of data points to build a user personalisation engine.
4. Build a data network effects engine to increase engagement and virality.
5. Scale the systems to billions of daily hits.
6. Deep-dive into performance, power management, memory optimisation and network connectivity optimisation for the next billion Indians.
7. Orchestrate complicated workflows, asynchronous actions and higher-order components.
8. Work directly with the Product and Design teams.
REQUIREMENTS:
1. Should have hacked some (computer or non-computer) system to your advantage.
2. Built and managed systems with a scale of 10M+ daily hits.
3. Strong architectural experience.
4. Strong experience in memory management, performance tuning and resource optimisation.
5. Preference: if you are a woman, an ex-entrepreneur, or hold a CS bachelor's degree from IIT/BITS/NIT.
P.S. If you don't fulfil one of the requirements, you need to be exceptional in the others to be considered.

Job posted by Shubham Maheshwari

Big Data - Hadoop

Founded 2011
Product
51-250 employees
Bootstrapped
Apache Hive
Hadoop
Java
Apache Kafka
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: 12 - 30 lacs/annum

Technical Lead - Big Data Analytics
We are looking for a senior engineer to work on our next-generation marketing analytics platform. The engineer should have working experience in handling big sets of raw data and transforming them into meaningful insights using tools such as Hive/Presto/Spark, Redshift, or Kafka/Kinesis. LeadSquared is a leading customer acquisition SaaS platform used by over 15,000 users across 25 countries to run their sales and marketing processes. Our goal is to have a million+ users on our platform in the next 5 years, which is an extraordinary and exciting challenge for the Engineering team to work on.
The Role
LeadSquared is looking for a senior engineer to be part of the Marketing Analytics platform, where we are building a system to gather multi-channel customer behaviour data and generate meaningful insights and actions to eventually accelerate revenues. The individual will work in a small team to build the system to ingest large volumes of data and set up ways to transform the data to generate insights as well as real-time interactive analytics (an illustrative ingestion sketch follows below).
Requirements
- Passion for building and delivering great software.
- Ability to work in a small team and take full ownership and responsibility for critical projects.
- 5+ years of experience in a data-driven environment designing and building business applications.
- Strong software development skills in one or more programming languages (Python, Java or C#).
- At least 1 year of experience in distributed analytic processing technologies such as Hadoop, Hive, Pig, Presto, MapReduce, Kafka or Spark.
Basic Qualifications
- Strong understanding of distributed computing principles.
- Proficiency with distributed file/object storage systems such as HDFS.
- Hands-on experience with computation frameworks such as Spark Streaming and MapReduce v2.
- Effectively implemented big data ingestion and transformation pipelines, e.g. Kafka, Kinesis, Fluentd, Logstash, the ELK stack.
- Database proficiency and strong experience with at least one NoSQL data store, e.g. MongoDB, HBase, Cassandra.
- Hands-on working knowledge of data warehouse systems, e.g. Hive, AWS Redshift.
- Participated in scaling and processing of large sets of data (in the order of petabytes).
Preferred Qualifications
- Expert-level proficiency in SQL; ability to perform complex data analysis with large volumes of data.
- Understanding of ad-hoc interactive query engines such as Apache Drill, Presto, Google BigQuery, AWS Athena.
- Exposure to one or more search stores such as Solr or Elasticsearch is a plus.
- Experience working with distributed messaging systems such as RabbitMQ.
- Exposure to infrastructure automation tools such as Chef.
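
A hedged sketch of the kind of Kafka-to-Spark ingestion pipeline described above, not LeadSquared's implementation: the topic name, event schema and campaign fields are hypothetical, and running it requires the spark-sql-kafka connector package on the classpath.

```python
# Illustrative Spark Structured Streaming job: count marketing events
# per campaign as they arrive on a (hypothetical) Kafka topic.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("marketing-events").getOrCreate()

schema = StructType([
    StructField("campaign_id", StringType()),
    StructField("event_type", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "marketing-events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

counts = events.groupBy("campaign_id", "event_type").count()

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```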

Job posted by Vish As

Backend Engineer (Python/Distributed Systems)

Founded 2015
Products and services
6-50 employees
Profitable
Shell Scripting
NodeJS (Node.js)
Javascript
Java
Cassandra
Apache Kafka
NoSQL Databases
Python
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 6 - 12 lacs/annum

Systems Engineer
About Intellicar Telematics Pvt Ltd
Intellicar Telematics Private Limited is a vehicular telematics organization founded in 2015 with the vision of connecting businesses and customers to their vehicles in a meaningful way. We provide vehicle owners with the ability to connect to and diagnose vehicles remotely in real time. Our team consists of individuals with in-depth knowledge and understanding of automotive engineering, driver analytics and information technology. By leveraging our expertise in the automotive domain, we have created solutions to reduce the operational and maintenance costs of large fleets and ensure safety at all times.
Solutions: Enterprise fleet management, GPS tracking, remote engine diagnostics, driver behaviour & training.
Technology integration: GIS, GPS, GPRS, OBD, web, accelerometer, RFID, on-board storage.
Intellicar's team of accomplished automotive engineers, hardware manufacturers, software developers and data scientists has developed the best solutions to track vehicles and drivers and ensure optimum performance, utilization and safety at all times. We cater to the needs of clients across industries such as self-drive cars, taxi cab rentals, taxi cab aggregators, logistics, driver training, bike rentals, construction, e-commerce, armoured trucks, manufacturing, dealerships and more.
Desired skills as a developer
- Education: BE/B.Tech in Computer Science or a related field.
- 4+ years of experience with scalable distributed systems applications and building scalable multi-threaded server applications.
- Strong programming skills in Java or Python on Linux or a Unix-based OS.
- Create new features from scratch, enhance existing features and optimize existing functionality, from conception and design through testing and deployment.
- Work on projects that make our network more stable, faster and secure.
- Work with our development QA and system QA teams to come up with regression tests that cover new changes to our software.
Desired skills for storage and database management systems
- Understanding of distributed systems such as Cassandra and Kafka (see the sketch below).
- Experience working with Oracle or MySQL.
- Experience in database design and normalization.
- Create databases, tables and views.
- Write SQL queries and create stored procedures and triggers.
Desired skills for automating operations
- Maintain/enhance/develop test tools and automation frameworks.
- Scripting experience using Bash and Python/Perl.
- Benchmark various server metrics across releases/hardware to ensure quality and high performance.
- Investigate and analyze root causes of technical issues and performance bottlenecks.
- Follow good QA methodology, including collaboration with development and support teams to successfully deploy new system components.
- Work with operations support to troubleshoot complex problems in our network for our customers.
Desired skills for UI development (good to have)
- Design and develop next-generation UIs using the latest technologies.
- Strong experience with JavaScript, REST APIs and Node.js.
- Experience in information architecture, data visualization and UI prototyping is a plus.
- Help manage change to existing customer applications.
- Design and develop new customer-facing web applications in Java.
- Create a superb user experience focused on usability, performance and robustness.
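
For a purely illustrative picture of the Cassandra/Kafka work mentioned above, a small consumer could persist vehicle telemetry from a Kafka topic into a Cassandra table. The topic, keyspace, table and column names are assumptions, not Intellicar's schema:

```python
# Hypothetical telemetry sink: Kafka consumer -> Cassandra insert.
import json

from cassandra.cluster import Cluster   # pip install cassandra-driver
from kafka import KafkaConsumer         # pip install kafka-python

session = Cluster(["127.0.0.1"]).connect("telematics")
insert = session.prepare(
    "INSERT INTO vehicle_events (vehicle_id, ts, speed_kmph) VALUES (?, ?, ?)"
)

consumer = KafkaConsumer(
    "vehicle-telemetry",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    session.execute(insert, (event["vehicle_id"], event["ts"], event["speed_kmph"]))
```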

Job posted by Shajo Kalliath

Elasticsearch and NoSQL Engineers

Founded 2017
Products and services
6-50 employees
Bootstrapped
Elasticsearch
Apache Kafka
Solr
Apache HBase
Location: Bengaluru (Bangalore)
Experience: 1 - 2 years
Salary: 6 - 18 lacs/annum

www.aaknet.co.in/careers/careers-at-aaknet.html
You are extraordinary, a rock star who has hardly found a place to leverage or challenge your potential, and who hasn't spotted a skyrocketing opportunity yet? Come play with us and face the challenges we can throw at you; chances are you might be humbled (positively), but don't take it too seriously! Please be informed that we rate character and attitude as highly as, if not more than, your skills, experience and sharpness. :)
Best wishes & regards,
Team Aak!

Job posted by Debdas Sinha

Data Engineer

Founded 2012
Product
51-250 employees
Profitable
Python
numpy
scipy
cython
scikit-learn
MapReduce
Apache Kafka
Pig
Location: Bengaluru (Bangalore)
Experience: 2 - 6 years
Salary: 5 - 25 lacs/annum

Brief about the company
EdGE Networks Pvt. Ltd. is an innovative HR technology solutions provider focused on helping organizations meet their talent-related challenges. With our expertise in artificial intelligence, semantic analysis, data science, machine learning and predictive modelling, we enable HR organizations to lead with data and intelligence. Our solutions significantly improve workforce availability, billing and allocation, and drive straight bottom-line impact. For more details, please log on to www.edgenetworks.in and www.hirealchemy.com.
Do apply if you meet most of the following requirements:
- Very strong Python, Java or Scala experience, especially in open source, data-intensive, distributed environments.
- Work experience with libraries such as scikit-learn, NumPy, SciPy and Cython (see the sketch below).
- Expertise in Spark, MapReduce, Pig, Hive, Kafka, Storm, etc., including performance tuning.
- Implemented complex projects dealing with considerable data size and high complexity.
- Good understanding of algorithms, data structures and performance optimization techniques.
- Excellent problem solver, analytical thinker and quick learner.
- Search capabilities such as Elasticsearch, with experience in MongoDB.
Nice to have:
- Excellent written and verbal communication skills.
- Experience writing Spark and/or MapReduce v2 jobs.
- Ability to translate requirements and/or specifications into code that is relatively bug-free.
- Write unit and integration tests.
- Knowledge of C++.
- Knowledge of Theano, TensorFlow, Caffe, Torch, etc.
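
As a toy illustration of the scikit-learn skills listed above (not EdGE Networks' models; the profile and job texts are made up), TF-IDF plus cosine similarity can rank candidate profiles against a job description:

```python
# Illustrative talent-matching sketch with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

profiles = [
    "python spark kafka data pipelines",
    "java spring microservices",
    "machine learning scikit-learn numpy pandas",
]
job = ["data engineer with spark, kafka and python experience"]

vectorizer = TfidfVectorizer()
profile_vecs = vectorizer.fit_transform(profiles)
job_vec = vectorizer.transform(job)

scores = cosine_similarity(job_vec, profile_vecs).ravel()
for profile, score in sorted(zip(profiles, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {profile}")
```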

Job posted by Naveen Taalanki

Big Data Developer

Founded 2008
Product
6-50 employees
Raised funding
Spark Streaming
Aerospike
Cassandra
Apache Kafka
Big Data
Elasticsearch
Scala
Location: Bengaluru (Bangalore)
Experience: 1 - 7 years
Salary: 0 - 0 lacs/annum

Develop analytic tools, working on big data and distributed systems.
- Provide technical leadership in developing our core analytics platform.
- Lead development efforts on product features using Scala/Java.
- Demonstrable excellence in innovation, problem solving, analytical skills, data structures and design patterns.
- Expert in building applications using Spark and Spark Streaming.
- Exposure to NoSQL (HBase/Cassandra), Hive, Pig Latin and Mahout.
- Extensive experience with Hadoop and machine learning algorithms.

Job posted by Katreddi Kiran Kumar
Why apply on CutShort?
Connect with actual hiring teams and get a fast response. No third-party recruiters. No spam.