
Big Data Jobs in Bangalore (Bengaluru)

Explore top Big Data job opportunities in Bangalore (Bengaluru) at leading companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Senior Backend Developer

Founded 2016
Product
6-50 employees
Raised funding
NOSQL Databases
Java
Google App Engine (GAE)
Firebase
Cassandra
Aerospike
Spark
Apache Kafka
Location: Bengaluru (Bangalore)
Experience: 1 - 7 years
Salary: 15 - 40 lacs/annum

RESPONSIBILITIES:
1. Full ownership of the tech stack, from driving product decisions to architecture to deployment.
2. Develop cutting-edge user experiences and technology solutions such as instant messaging on poor networks, live discussions, live videos and optimal matching.
3. Use billions of data points to build a user personalisation engine.
4. Build a data network effects engine to increase engagement and virality.
5. Scale the systems to billions of daily hits.
6. Deep dive into performance, power management, memory optimisation and network connectivity optimisation for the next billion Indians.
7. Orchestrate complicated workflows, asynchronous actions and higher-order components.
8. Work directly with the Product and Design teams.

REQUIREMENTS:
1. Should have hacked some (computer or non-computer) system to your advantage.
2. Built and managed systems handling 10Mn+ daily hits.
3. Strong architectural experience.
4. Strong experience in memory management, performance tuning and resource optimisation.
5. Preference if you are a woman, an ex-entrepreneur, or hold a CS bachelor's degree from IIT/BITS/NIT.

P.S. If you don't fulfil one of the requirements, you need to be exceptional in the others to be considered.

Job posted by Shubham Maheshwari

Python Developer

Founded 2015
Products and services
250+ employees
Profitable
Python
Apache Hive
Big Data
Location: Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: 12 - 25 lacs/annum

• Good experience in Python and SQL
• Experience in Hive / Presto is a plus
• Strong skills in using Python / R for building data pipelines and analysis
• Good programming background:
  o Writing efficient and re-usable code
  o Comfort with working on the CLI and with tools like GitHub etc.

Other, softer aspects that are important:
• Fast learner - no matter how much programming a person has done in the past, willingness to learn new tools is key
• An eye for standardization and scalability of processes - the person will not need to do this alone, but it helps if everyone on the team has this orientation
• A generalist mindset - everyone on the team will also need to work on front-end tools (Tableau and Unidash), so openness to playing a little outside the comfort zone is important

Job posted by Jiten Chanana

Python, SQL, Hive, Presto Analytics Background

Founded 2015
Products and services
250+ employees
Profitable
Python
SQL server
Apache Hive
Location: Bangalore
Experience: 4 - 8 years
Salary: 12 - 20 lacs/annum

Good programming background: writing efficient and re-usable code, and comfort with working on the CLI and GitHub.

Job posted by Shankar Raman

Big Data - Hadoop

Founded 2011
Product
51-250 employees
Bootstrapped
Apache Hive
Hadoop
Java
Apache Kafka
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: 12 - 30 lacs/annum

Technical Lead – Big Data Analytics

We are looking for a senior engineer to work on our next-generation marketing analytics platform. The engineer should have working experience in handling big sets of raw data and transforming them into meaningful insights using any of these tools: Hive/Presto/Spark, Redshift, Kafka/Kinesis etc.

LeadSquared is a leading customer acquisition SaaS platform used by over 15,000 users across 25 countries to run their sales and marketing processes. Our goal is to have a million+ users on our platform in the next 5 years, which is an extraordinary and exciting challenge for the Engineering team to work on.

The Role
LeadSquared is looking for a senior engineer to be part of the Marketing Analytics platform, where we are building a system to gather multi-channel customer behavior data and generate meaningful insights and actions to eventually accelerate revenues. The individual will work in a small team to build the system to ingest large volumes of data, and set up ways to transform the data to generate insights as well as real-time interactive analytics.

Requirements
• Passion for building and delivering great software.
• Ability to work in a small team and take full ownership and responsibility of critical projects.
• 5+ years of experience in a data-driven environment designing and building business applications.
• Strong software development skills in one or more programming languages (Python, Java or C#).
• At least 1 year of experience in distributed analytic processing technologies such as Hadoop, Hive, Pig, Presto, MapReduce, Kafka, Spark etc.

Basic Qualifications
• Strong understanding of distributed computing principles.
• Proficiency with distributed file/object storage systems like HDFS.
• Hands-on experience with computation frameworks like Spark Streaming, MapReduce V2.
• Effectively implemented big data ingestion and transformation pipelines, e.g. Kafka, Kinesis, Fluentd, LogStash, ELK stack.
• Database proficiency and strong experience in one of the NoSQL data stores, e.g. MongoDB, HBase, Cassandra.
• Hands-on working knowledge of data warehouse systems, e.g. Hive, AWS Redshift.
• Participated in scaling and processing of large data sets (in the order of petabytes).

Preferred Qualifications
• Expert-level proficiency in SQL; ability to perform complex data analysis with large volumes of data.
• Understanding of ad-hoc interactive query engines like Apache Drill, Presto, Google BigQuery, AWS Athena.
• Exposure to one or more search stores like Solr or ElasticSearch is a plus.
• Experience working with distributed messaging systems like RabbitMQ.
• Exposure to infrastructure automation tools like Chef.

Job posted by Vish As

Technical Content Engineer

Founded 2015
Products and services
6-50 employees
Profitable
Cloud Computing
Big Data
Kubernetes
Machine Learning
Location: Bengaluru (Bangalore)
Experience: 0 - 10 years
Salary: 3 - 35 lacs/annum

About Loonycorn: We are a technical content studio - we make technical videos on topics such as machine learning, cloud computing and big data and monetize these on platforms such as Udemy, Pluralsight and StackCommerce. On Udemy alone we have ~75 courses and 50k+ students, as well as 35 courses on Pluralsight. Our co-founders are ex-Google, Stanford-educated. Loonycorn is highly profitable.

About the Role: We are looking for folks to make technical content similar to what you'd find at the links below:
https://www.udemy.com/u/janani-ravi-2/
https://www.pluralsight.com/search?q=janani+ravi
https://www.pluralsight.com/search?q=vitthal+srinivasan

This involves:
- learning a new technology from scratch
- building realistic, pretty complicated programs in that technology
- creating clear, creative slides or animations explaining the concepts behind that technology
- combining these into a polished video that you will voice-over

What is important to us:
- Grit - perseverance in working on hard problems. Technical video-making is difficult and detail-oriented (that's why it is a highly profitable business).
- Craftsmanship - our video-making is quite artisanal: lots of hard work and small details. There are many excellent roles where doing smart 80-20 trade-offs is the way to succeed - this is not one of them.
- Clarity - talking and thinking in direct, clear ways is super-important in what we do. Folks who use a lot of jargon or cliches, or who over-complicate technical problems, will not enjoy the work.
- Creativity - analogies, technical metaphors and other artistic elements are an important part of what we do.

What is not all that important to us:
- Your school or labels: perfectly fine whatever college or company you are applying from.
- English vocabulary or pronunciation: you don't need to 'talk well' or be flashy to make good content.

Job posted by Vitthal Srinivasan

Data Scientist

Founded 2017
Product
1-5 employees
Raised funding
Data Science
Python
Hadoop
Elastic Search
Machine Learning
Big Data
Spark
Algorithms
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: 12 - 25 lacs/annum

## Responsibilities
* Experience: 2-5 years
* Design and build the initial version of the off-line product, using machine learning to recommend video content to 1M+ user profiles.
* Design the personalized recommendation algorithm and optimize the model.
* Develop the features of the recommendation system.
* Analyze user behavior and build up the user portrait and tag system.

## Desired Skills and Experience
* B.S./M.S. degree in computer science, mathematics, statistics or a similar quantitative field from a good college.
* 3+ years of work experience in a relevant field (Data Engineer, R&D engineer, etc.).
* Experience in machine learning and prediction & recommendation techniques.
* Experience with Hadoop/MapReduce/Elastic Stack/ELK and big data querying tools such as Pig, Hive and Impala.
* Proficiency in a major programming language (e.g. C/C++/Scala) and/or a scripting language (Python/R).
* Experience with one or more NoSQL databases, such as MongoDB, Cassandra, HBase, Hive, Vertica, Elastic Search.
* Experience with cloud solutions/AWS; strong knowledge of Linux and Apache.
* Experience with a map-reduce framework such as Spark/EMR.
* Experience in building reports and/or data visualizations.
* Strong communication skills and the ability to discuss the product with PMs and business owners.

Job posted by Xin Lin

Database Architect

Founded 2017
Products and services
6-50 employees
Raised funding
ETL
Data Warehouse (DWH)
DWH Cloud
Hadoop
Apache Hive
Spark
MongoDB
PostgreSQL
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 10 - 20 lacs/annum

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include the development of big data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse which pools all the data from GRAND's different regions and stores in GCC.
- Ensure source data quality measurement, enrichment and reporting of data quality.
- Manage all ETL and data model update routines.
- Integrate new data sources into the DWH.
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure.

Skills Needed:
- Very strong in SQL. Demonstrated experience with RDBMS; Unix shell scripting preferred (e.g. SQL, Postgres, MongoDB etc.).
- Experience with UNIX and comfortable working with the shell (bash or KRON preferred).
- Good understanding of data warehousing concepts. Big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce.
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments.
- Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users.
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise and other tools.
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Screen Hadoop cluster job performance and handle capacity planning.
- Monitor Hadoop cluster connectivity and security.
- File system management and monitoring.
- HDFS support and maintenance.
- Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades when required.
- Define, develop, document and maintain Hive-based ETL mappings and scripts.

Job posted by Rahul Malani

Senior Technologist @ Intelligent Travel Search startup

Founded 2016
Product
1-5 employees
Raised funding
Big Data
Fullstack Developer
Technical Architecture
Web Development
Mobile App Development
Databases
NOSQL Databases
Amazon Web Services (AWS)
Location: Bengaluru (Bangalore)
Experience: 5 - 15 years
Salary: 6 - 18 lacs/annum

Key Skills Expected
• Will be expected to architect, develop and maintain large-scale distributed systems.
• Should have excellent coding skills and a good understanding of MVC frameworks.
• Strong understanding of and experience in building efficient search & recommendation algorithms; experience in machine/deep learning would be beneficial.
• Experience in Python-Django would be a plus.
• Strong knowledge of hosting web services like AWS, Google Cloud Platform, etc. is critical.
• Sound understanding of front-end web technologies such as HTML, CSS, JavaScript, jQuery, AngularJS etc.

We are looking for self-starters who want to solve hard problems.

Job posted by Varun Gupta

Back-end Developer (Python, Node.js, NoSQL, REST API, Distributed Systems)

Founded 2015
Products and services
6-50 employees
Profitable
Shell Scripting
NodeJS (Node.js)
Javascript
Java
Cassandra
Apache Kafka
NOSQL Databases
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 6 - 12 lacs/annum

Systems Engineer

About Intellicar Telematics Pvt Ltd
Intellicar Telematics Private Limited is a vehicular telematics organization founded in 2015 with the vision of connecting businesses and customers to their vehicles in a meaningful way. We provide vehicle owners with the ability to connect and diagnose vehicles remotely in real time. Our team consists of individuals with in-depth knowledge and understanding of automotive engineering, driver analytics and information technology. By leveraging our expertise in the automotive domain, we have created solutions to reduce the operational and maintenance costs of large fleets, and to ensure safety at all times.

Solutions: enterprise fleet management, GPS tracking, remote engine diagnostics, driver behavior & training.
Technology integration: GIS, GPS, GPRS, OBD, WEB, accelerometer, RFID, on-board storage.

Intellicar's team of accomplished automotive engineers, hardware manufacturers, software developers and data scientists has developed the best solutions to track vehicles and drivers, and to ensure optimum performance, utilization and safety at all times. We cater to the needs of clients across various industries such as self-drive cars, taxi cab rentals, taxi cab aggregators, logistics, driver training, bike rentals, construction, e-commerce, armored trucks, manufacturing, dealerships and more.

Desired skills as a developer
● Education: BE/B.Tech in Computer Science or a related field.
● 4+ years of experience with scalable distributed systems applications and building scalable multi-threaded server applications.
● Strong programming skills in Java, Node.js or Python on Linux or a Unix-based OS.
● Create new features from scratch, enhance existing features and optimize existing functionality, from conception and design through testing and deployment.
● Work on projects that make our network more stable, faster and secure.
● Work with our development QA and system QA teams to come up with regression tests that cover new changes to our software.

Desired skills for storage and database management systems
● Understanding of distributed systems like Cassandra and Kafka.
● Experience working with Oracle or MySQL.
● Experience in database design and normalization.
● Create databases, tables and views.
● Write SQL queries and create stored procedures and triggers.

Desired skills for automating operations
● Maintain/enhance/develop test tools and automation frameworks.
● Scripting experience using Bash, Python/Perl.
● Benchmark various server metrics across releases/hardware to ensure quality and high performance.
● Investigate and analyze root causes of technical issues / performance bottlenecks.
● Follow good QA methodology, including collaboration with development and support teams to successfully deploy new system components.
● Work with operations support to troubleshoot complex problems in our network for our customers.

Desired skills for UI development (good to have)
● Design and develop next-generation UIs using the latest technologies.
● Strong experience with JavaScript, REST APIs, Node.js.
● Experience in information architecture, data visualization and UI prototyping is a plus.
● Help manage change to existing customer applications.
● Design and develop new customer-facing web applications in Java.
● Create a superb user experience focused on usability, performance and robustness.

Job posted by Shajo Kalliath

Cassandra Engineer/Developer/Architect

Founded 2017
Products and services
6-50 employees
Bootstrapped
Cassandra
Linux/Unix
JVM
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: 6 - 20 lacs/annum

www.aaknet.co.in/careers/careers-at-aaknet.html

Are you extraordinary - a rock star who has hardly found a place to leverage or challenge your potential, and who hasn't spotted a sky-rocketing opportunity yet? Come play with us: face the challenges we can throw at you; chances are you might be humbled (positively) - do not take it that seriously, though! Please be informed that we rate CHARACTER and attitude as high as, if not higher than, your great skills, experience and sharpness. :)

Best wishes & regards,
Team Aak!

Job posted by Debdas Sinha

Data Engineer

Founded 2012
Product
51-250 employees
Profitable
Python
numpy
scipy
cython
scikit learn
MapReduce
Apache Kafka
Pig
Location: Bengaluru (Bangalore)
Experience: 2 - 6 years
Salary: 5 - 25 lacs/annum

Brief About the Company
EdGE Networks Pvt. Ltd. is an innovative HR technology solutions provider focused on helping organizations meet their talent-related challenges. With our expertise in Artificial Intelligence, Semantic Analysis, Data Science, Machine Learning and Predictive Modelling, we enable HR organizations to lead with data and intelligence. Our solutions significantly improve workforce availability, billing and allocation, and drive straight bottom-line impacts. For more details, please log on to www.edgenetworks.in and www.hirealchemy.com

Do apply if you meet most of the following requirements:
- Very strong Python, Java or Scala experience, especially in open source, data-intensive, distributed environments.
- Work experience with libraries like scikit-learn, numpy, scipy, cython.
- Expert in Spark, MapReduce, Pig, Hive, Kafka, Storm, etc., including performance tuning.
- Implemented complex projects dealing with considerable data size and high complexity.
- Good understanding of algorithms, data structures and performance optimization techniques.
- Excellent problem solver, analytical thinker and quick learner.
- Search capabilities such as ElasticSearch, with experience in MongoDB.

Nice to have:
- Excellent written and verbal communication skills.
- Experience writing Spark and/or MapReduce V2.
- Ability to translate requirements and/or specifications into code that is relatively bug-free.
- Write unit and integration tests.
- Knowledge of C++.
- Knowledge of Theano, TensorFlow, Caffe, Torch etc.

Job posted by Naveen Taalanki

Data Crawler

Founded 2012
Product
51-250 employees
Profitable
Python
Selenium WebDriver
Scrapy
Web crawling
Apache Nutch
output.io
Crawlera
Cassandra
Location: Bengaluru (Bangalore)
Experience: 2 - 7 years
Salary: 5 - 20 lacs/annum

Brief About the Company
EdGE Networks Pvt. Ltd. is an innovative HR technology solutions provider focused on helping organizations meet their talent-related challenges. With our expertise in Artificial Intelligence, Semantic Analysis, Data Science, Machine Learning and Predictive Modelling, we enable HR organizations to lead with data and intelligence. Our solutions significantly improve workforce availability, billing and allocation, and drive straight bottom-line impacts. For more details, please log on to www.edgenetworks.in and www.hirealchemy.com

Summary of the Role:
We are looking for a skilled and enthusiastic Data Procurement Specialist for web crawling and public data scraping.
- Design, build and improve our distributed system of web crawlers.
- Integrate with third-party APIs to improve results.
- Integrate the crawled and scraped data into our databases.
- Create more and better ways to crawl relevant information.
- Strong knowledge of web technologies (HTML, CSS, Javascript, XPath, RegEx).
- Good knowledge of Linux command-line tools.
- Experienced in Python, with knowledge of the Scrapy framework.
- Strong knowledge of Selenium (Selenium WebDriver is a must).
- Familiarity with crawl frontiers like Frontera.
- Familiarity with distributed messaging middleware (Kafka).

Desired:
- Practical, hands-on experience with modern Agile development methodologies.
- Ability to thrive in a fast-paced, test-driven, collaborative and iterative programming environment.
- Experience with web crawling projects.
- Experience with NoSQL databases (HBase, Cassandra, MongoDB, etc.).
- Experience with CI tools (Git, Jenkins, etc.).
- Experience with distributed systems.
- Familiarity with data loading tools like Flume.

Job posted by Naveen Taalanki

Data Scientist

Founded 2012
Product
51-250 employees
Profitable
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location: Bengaluru (Bangalore)
Experience: 2 - 6 years
Salary: 5 - 25 lacs/annum

Do apply if any of this sounds familiar!
o You have expertise in NLP, Machine Learning, Information Retrieval and Data Mining.
o You have experience building systems based on machine learning and/or deep learning methods.
o You have expertise in graphical models like HMM, CRF etc.
o You are familiar with learning to rank, matrix factorization and recommendation systems.
o You are familiar with the latest data science trends, tools and packages.
o You have strong technical and programming skills, and are familiar with relevant technologies and languages (e.g. Python, Java, Scala etc.).
o You have knowledge of Lucene-based search engines like ElasticSearch, Solr, etc. and NoSQL DBs like Neo4j and MongoDB.
o You are really smart and you have some way of proving it (e.g. you hold an MS/M.Tech or PhD in Computer Science, Machine Learning, Mathematics, Statistics or a related field).
o There is at least one project on your resume that you are extremely proud to present.
o You have at least 4 years' experience driving projects, tackling roadblocks and navigating solutions/projects through to completion.
o Execution: ability to manage your own time and work effectively with others on projects.
o Communication: excellent verbal and written communication skills, and the ability to communicate technical topics to non-technical individuals.

Good to have:
o Experience in a data-driven environment: leveraging analytics and large amounts of (streaming) data to drive significant business impact.
o Knowledge of MapReduce, Hadoop, Spark, etc.
o Experience in creating compelling data visualizations.

Job posted by Naveen Taalanki

ML/NLP Engineer

Founded 2007
Product
250+ employees
Raised funding
Machine Learning
Natural Language Processing (NLP)
Python
Big Data
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 15 - 35 lacs/annum

What we do
We are building India's largest, hyperlocal, mobile-first application and back-end platforms that will serve over 100 million monthly active Indian local-language users, scaling to over 1 billion page views a day. The platform currently powers 5 billion page views a month, serving a user base of 90 million installs spread across 800 cities in India, who consume services in 15 Indian local languages.

What You'll Do:
Work in cohesion with the R&D team towards building new products and enriching existing ones with ML/NLP.

Desired Skills:
- Programming languages: Java, Python, R
- Tools and frameworks: NLTK, Mahout, GATE, Stanford NLP suite, Weka, scikit-learn
- Deep learning: understanding of deep learning models applied to NLP; neural networks, word embeddings, sequence learning, RNNs
- NLP: statistical NLP models, POS tagging, parsing, sequence tagging, word sense disambiguation, language models, topic modelling, NER
- ML: linear regression, logistic regression, Naive Bayes, SVM, decision trees, random forest, boosting, bagging, HMM, CRF, LSI/LDA, clustering, unsupervised/semi-supervised methods
- DS/Algo/Prob: efficient data structures, object-oriented design, algorithms, probability & statistics, optimization methods

Job posted by Vijaya Kiran

Senior Software Engineer / Technical Architect

Founded 2015
Product
6-50 employees
Raised funding
Ruby on Rails (ROR)
MongoDB
Big Data
Location: Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: 15 - 25 lacs/annum

About Social Frontier:
Social Frontier is a comprehensive SaaS offering for automation and optimization of social media marketing channels. We help businesses increase reach and engagement on social media to maximize website traffic or application installs. Social Frontier empowers businesses to monitor and manage all their social media platforms effectively and efficiently. This means simplifying the process of running large and complex ad campaigns for the in-house digital marketer. Social Frontier arms marketing teams with intuitive technology to take control of their digital presence through sophisticated workflow automation and predictive optimization. In other words, Social Frontier makes sure that your posts and campaigns get the attention of people. And not just any people, but potential customers of your business. And not just once, but constantly and consistently. Social Frontier is funded by Growth Story, a Bangalore-based incubator which has previously funded companies like Tutorvista, Big Basket, Bluestone, Must See India, Fresh Menu, Portea Medical & Housejoy.

About the Role:
As a senior dev, you will partner closely with product management to influence and prioritize roadmaps, drive engineering excellence within the technology team, come up with architectures and designs, work closely with engineers in the team to review designs, and contribute individually to code when required. You would be expected to contribute in the following ways:
- Translate complex functional and technical requirements into detailed architecture, design and code.
- Take ownership of your module; maintain it, fix bugs and improve code performance.
- Work with team members to manage day-to-day development activities; participate in designs, design reviews, code reviews and implementation.
- Maintain current technical knowledge to support rapidly changing technology; always be on the lookout for new technologies and work with the team in bringing them in.

Skills Required:
- Strong Computer Science fundamentals with a minimum BE/BTech degree in Computer Science from a prestigious institute.
- Should have worked at a company in the internet domain and faced non-trivial scaling challenges.
- Experience and expertise in building full-stack systems: front-end, web applications, back-end services and data systems.
- Experience in Ruby on Rails and in big data systems such as MongoDB preferred.
- Willingness to learn and understand the domain, which is digital marketing.

Work Experience: 4-8 years
Location: Bangalore (Indiranagar)
Tech Stack: Ruby on Rails, MongoDB

How to Apply: Send your resume, current & expected CTC, and notice period to careers@socialfrontier.com.

Founders' Bios:
Sanjay Goel - Sanjay comes with more than 14 years of technical and 8 years of entrepreneurial experience. He handles technology at Social Frontier.
Abdulla Basha - Basha is the marketing whizkid of Social Frontier. He currently helps generate traffic of more than 1 billion hits per month across multiple clients.
Anand Rao - With more than 20 years of enterprise sales experience, Anand handles sales for Social Frontier.

Job posted by Bhargavi N

Principal Software Engineer

Java
Big Data
Scala
Location: Bengaluru (Bangalore)
Experience: 9 - 12 years
Salary: 20 - 40 lacs/annum

Scienaptic (www.scienaptic.com) is a new-age technology and analytics company based in NY and Bangalore. Our mission is to infuse robust decision science into organizations. Our mantra for achieving that mission is to reduce friction among technology, processes and humans. We believe that good design thinking needs to permeate all aspects of our activities so that our customers get the best possible aesthetic and lowest-friction experience of our software and services.

As a Principal Software Development Engineer, you will be responsible for the development and augmentation of the software components used to solve the analytics problems of large enterprises. These components are highly scalable, connect with multiple data sources and implement some complex algorithms.

We are funded by very senior and eminent business leaders in India and the US. Our lead investor is Pramod Bhasin, who is known as a pioneer of the ITES revolution. We have the working environment of a new-age, cool startup. We are firm believers that the best talent grounds are non-hierarchical in structure and spirit, and we expect you to enjoy, thrive in and empower others by progressing that culture.

Requirements:
- All-round experience in developing and delivering large-scale business applications on scale-up systems as well as scale-out distributed systems.
- Identify the appropriate software technology/tools based on the requirements and design elements contained in a system specification.
- Implement complex algorithms in a scalable fashion.
- Work closely with product and analytics managers, user interaction designers, and other software engineers to develop new product offerings and improve existing ones.

Qualifications/Experience:
- Bachelor's or Master's degree in computer science or a related field.
- 10 to 12 years of experience in core Java programming (JDK 1.7/JDK 1.8); familiarity with big data systems like Hadoop and Spark is an added bonus.
- Familiarity with dependency injection, concurrency, Guice/Spring.
- Familiarity with the JDBC API and databases like MySQL, Oracle, Hadoop.
- Knowledge of graph databases and traversal.
- Knowledge of SOLR/ElasticSearch; cloud-based deployment would be preferred.

Job posted by Zoheab Rehaman

UI/UX Designer and Developer

Founded 2007
Services
1-5 employees
Bootstrapped
UI/UX
Magento
HTML/CSS
Android App Development
Cassandra
Bootstrap
Joomla
Location: Bengaluru (Bangalore)
Experience: 1 - 1 years
Salary: 0 - 3 lacs/annum

http://connecttosunil.esy.es - please go to this website; you will find everything about me there.

Job posted by Sunil Bedre

Freelance Faculty

Founded 2009
Products and services
250+ employees
Profitable
Java
Amazon Web Services (AWS)
Big Data
Corporate Training
Data Science
Digital Marketing
Hadoop
Location: Anywhere, United States, Canada
Experience: 3 - 10 years
Salary: 2 - 10 lacs/annum

To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About My Company:
SIMPLILEARN is a company which has transformed 500,000+ careers across 150+ countries with 400+ courses, and yes, we are a Registered Professional Education Provider providing PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud.

Our official website: www.simplilearn.com

If you're interested in teaching, interacting, sharing real-life experiences and have a passion to transform careers, please join hands with us.

Onboarding Process:
• Your updated CV needs to be sent to my email id, with relevant certificate copies.
• Sample e-learning access will be shared with a 15-day trial after you register on our website.
• My Subject Matter Expert will evaluate you on your areas of expertise over a telephonic conversation (duration 15 to 20 minutes).
• Commercial discussion.
• We will register you to one of our ongoing online sessions to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and Internet connectivity.
• Freelancer Master Service Agreement.

Payment Process:
• Once a workshop / the last day of training for the batch is completed, you have to share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days as per policy; the 15 days are counted from the date the invoice is received.

Please share your updated CV to move to the next step of the onboarding process.

Job posted by STEVEN JOHN

Senior Software Engineer

Founded 2014
Product
6-50 employees
Raised funding
Python
Big Data
Hadoop
Scala
Spark
Location: Bengaluru (Bangalore)
Experience: 6 - 10 years
Salary: 5 - 40 lacs/annum

Check our JD: https://www.zeotap.com/job/senior-tech-lead-m-f-for-zeotap/oEQK2fw0

Job posted by Projjol Banerjea

Big Data Engineer

Founded 2007
Product
250+ employees
Raised funding
Java
Cassandra
Apache Hive
Pig
Big Data
Hadoop
JSP
NodeJS (Node.js)
Location: Bengaluru (Bangalore)
Experience: 3 - 10 years
Salary: 16 - 35 lacs/annum

- Passion to build an analytics & personalisation platform at scale
- 4 to 9 years of software engineering experience with a product-based company in the data analytics / big data domain
- Passion for design and development from scratch
- Expert-level Java programming and experience leading the full lifecycle of application development
- Experience in Analytics, Hadoop, Pig, Hive, MapReduce, ElasticSearch, MongoDB is an additional advantage
- Strong communication skills, verbal and written

Job posted by Vijaya Kiran

Freelance Trainers

Founded 2015
Services
1-5 employees
Bootstrapped
nano electronics
vehicle dynamics
computational dynamics
Android App Development
Big Data
Industrial Design
Internet of Things (IOT)
Robotics
Location: Anywhere
Experience: 8 - 11 years
Salary: 5 - 10 lacs/annum

We are a team with a mission: a mission to create and deliver great learning experiences to engineering students through various workshops and courses. If you are an industry professional and:
- See great scope for improvement in higher technical education across the country and connect with our purpose of impacting it for good.
- Are keen on sharing your technical expertise to enhance the practical learning of students.
- Are innovative in your ways of creating content and delivering it.
- Don't mind earning a few extra bucks while doing this in your free time.
Buzz us at info@monkfox.com and let us discuss how, together, we can take technological education in the country to new heights.

Job posted by Tanu Mehra

Data Scientist

Founded 2015
Product
6-50 employees
Bootstrapped
Pyomo
Watson
Python
Big Data
Data Science
Machine Learning
Location: Bangalore, Bengaluru (Bangalore)
Experience: 3 - 6 years
Salary: 3 - 8 lacs/annum

FlyNava Technologies is a start-up organization whose vision is to create the finest airline software for distinct competitive advantages in revenue generation and cost management. The software products have been designed and created by veterans of the airline and airline IT industry to meet the needs of this special customer segment. The software takes an innovative approach to age-old practices of pricing, hedging and aircraft induction, and will be path-breaking to use, encouraging users to rely and depend on its capabilities. We will leverage our competitive edge by incorporating new technology, big data models, operations research and predictive analytics into our software products, creating interest and creativity while using the software. This interest and creativity will increase potential revenues or reduce costs considerably, thereby creating a distinct competitive differentiation. FlyNava is convinced that when airline users create that differentiation easily, their alignment to the products will be self-motivated rather than mandated.

A high level of competitive advantage will also flow from the following:
- All products, solutions and services will be copyrighted. FlyNava will benefit from high IPR value, including its base thesis/research, as the sole owner.
- Existing product companies are investing in other core areas, while our business areas are predominantly manual processes today.
- Solutions are based on master's theses which need 2-3 years to complete, and more time to make them relevant for software development; expertise in these areas is far and few between.

Responsibilities:
- Responsible for collecting, cataloguing and filtering data and benchmarking solutions.
- Contribute to model-related data analytics and reporting.
- Contribute to secured software release activities.

Education & Experience:
- B.E/B.Tech or M.Tech/MCA in Computer Science / Information Science / Electronics & Communication.
- 3 - 6 years of experience.

Must Have:
- Strong in data analytics via Pyomo (for optimization), scikit-learn (for small-data ML algorithms) and MLlib (Apache Spark big data ML algorithms).
- Strong in representing metrics and reports via JSON.
- Strong in scripting with Python.
- Familiar with machine learning and pattern recognition algorithms.
- Familiar with the Software Development Life Cycle.
- Effective interpersonal skills.

Good to have: social analytics, big data.

Job posted by Thahaseen Salahuddin

Big Data Developer

Founded 2008
Product
6-50 employees
Raised funding
Spark Streaming
Aerospike
Cassandra
Apache Kafka
Big Data
Elastic Search
Scala
Location: Bangalore, Bengaluru (Bangalore)
Experience: 1 - 7 years
Salary: 0 - 0 lacs/annum

Develop analytic tools, working on big data and distributed systems.
- Provide technical leadership on developing our core analytics platform.
- Lead development efforts on product features using Scala/Java.
- Demonstrable excellence in innovation, problem solving, analytical skills, data structures and design patterns.
- Expert in building applications using Spark and Spark Streaming.
- Exposure to NoSQL (HBase/Cassandra), Hive, Pig Latin and Mahout.
- Extensive experience with Hadoop and machine learning algorithms.

Job posted by Katreddi Kiran Kumar
Why apply on CutShort?
Connect with actual hiring teams and get fast responses. No third-party recruiters. No spam.