
Big Data Jobs

Explore top Big Data job opportunities at leading companies and startups. All jobs are added by verified employees who can be contacted directly.

Backend Developer

Founded 2017
Services
1-5 employees
Bootstrapped
NodeJS (Node.js)
MySQL
Cassandra
Docker
Redis
socket.io
Typescript
Java
Location: Mumbai, Anywhere
Experience: 1 - 5 years
Salary: 6 - 12 lacs/annum

Overview: We're a mass-market B2C service focused on knowledge exchange. We are an early-stage startup, currently in private beta.
Founders: Both founders have successful backgrounds. Advait was the founding employee at a UK-based startup that raised $1.6M on Kickstarter, putting it among the 0.01% of crowdfunding projects that raise over $1M. He also worked at a prestigious venture capital firm, where he interacted with the founders of large portfolio companies. Rishabh is highly technical, having graduated with a first-class degree in Computer Science from Imperial College London (ranked in the global top 10 and UK top 3 by a variety of publishers). He has also worked in the startup space before and has built complex system architectures.
Location: We are based in Mumbai but are open to the candidate working remotely. If the compensation does not meet your requirements, we would be happy to discuss it.

Job posted by Advait Ruia

Enterprise Architect

Founded 2012
Products and services
51-250 employees
Raised funding
Big Data
Hadoop
HDFS
HIVE
data streaming
IAAS
Azure
net
Location: Bengaluru (Bangalore)
Experience: 10 - 15 years
Salary: 35 - 50 lacs/annum

- At least 10 years of hands-on experience migrating complex software packages and products to Azure (cloud service provider, CSP) IaaS and PaaS
- At least 7 years of hands-on experience with programming and scripting languages (.NET, C#, WCF, MVC Web API, SQL Server, SQL Azure, PowerShell)
- Good to have: experience with IT systems, operations, automation and configuration tools that enable continuous integration and deployment (Jenkins)
- Solid understanding of database management systems, including traditional RDBMS (MS SQL)
- Ability to wear multiple hats spanning the software development life cycle across requirements, design, development, QA, testing and deployment; experience working in an Agile/Scrum methodology
- Analytical and communication skills

Job posted by Thouseef Ahmed

Backend Engineer

Java
Big Data
influxdb
Apache Mesos
Machine Learning
Spark
Location: Bengaluru (Bangalore), NCR (Delhi | Gurgaon | Noida)
Experience: 3 - 20 years
Salary: 15 - 80 lacs/annum

Key responsibilities:
- Architect systems capable of serving as the brains of complex distributed products
- Build reusable code and libraries for future use
- Integrate user-facing elements with server-side logic
- Thrive in a complex and ambiguous environment, continuously adapting for the business and its users
- Maintain, contribute to and adhere to programming best practices and guidelines

Job posted by Jibran Khan

Artificial Intelligence Developers

Founded 2016
Product
6-50 employees
Raised funding
Artificial Intelligence (AI)
Artificial Neural Network (ANN)
Machine Learning
Python
TensorFlow
Natural Language Processing (NLP)
Data Science
Big Data
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 1 - 3 years
Salary: 3 - 5 lacs/annum

Precily AI: automatic summarization that shortens a business document or book with our AI, creating a summary of the major points of the original document. The AI can produce a coherent summary that takes into account variables such as length, writing style and syntax. We are also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and neural networks to process data and provide solutions for industries such as enterprise, healthcare and legal.

Job posted by Bharath Rao

Sr. Tech Program Manager

Engineering Management
Delivery Management
Project Management
Big Data
Location: Bengaluru (Bangalore)
Experience: 12 - 18 years
Salary: 15 - 30 lacs/annum

Sigmoid is a fast growing Product Based BIG DATA startup. Sequoia Funded & Backed by experienced Professionals & Advisors. Sigmoid is revolutionizing business intelligence and analytics by providing unified tools for historical and real-time analysis on Apache Spark. With their suite of products, Sigmoid is democratizing streaming use-cases like RTB Data Analytics, Log Analytics, Fraud Detection, Sensor Data Analytics etc. Sigmoid can enable the customers' engineering team to set up their infrastructure on Spark and ramp up their development timelines or enable the analytics team to derive insights from their data. Sigmoid has created a real-time exploratory analytics tool using on Apache SPARK which not only vastly improves performance but also reduces the cost. A user can quickly analyse huge volumes of data, filter through multiple dimensions, compare results across time periods and carry out root cause analysis in a matter of seconds. Leading organisations across industry verticals are currently using Sigmoid's platform in production to create success stories. Key Responsibilities: Own scoping, project delivery and client communication for multiple projects. Lead, motivate and coach project team members. Building and managing strong client relationships. Responsible for successful timely delivery of projects. Provide thought leadership and innovation within and across projects. Develop and implement specific frameworks and processes for smooth execution. Contribute to Sigmoid's capability building and knowledge creation. Play an active role in growth-related initiatives (e.g. recruiting, training, etc.) Minimum Qualifications: Total experience of 12 + Years. Min 7+ years experience with managing technical projects. Min 4+ Yrs of customer facing role for the end to end delivery. Strong hands-on program and project management skills. Good planning, problem-solving & debugging skills. Excellent communication, documentation and presentation skills. Familiarity with JIRA or any other project management tool/issue tracking software. Experience with source code management tools like GIT, SVN etc. Good understanding of SDLC and project management process. Preferred Qualifications: Prior experience on a big data project. Experience working with US customers/clients. Salary is not a constraint for the right talent. More@SIGMOID: https://www.sigmoid.com/careers/

Job posted by Avinash Kumar nirala

Data Scientist

Founded 2017
Product
1-5 employees
Raised funding
Data Science
Python
Hadoop
Elastic Search
Machine Learning
Big Data
Spark
Algorithms
Location: Bengaluru (Bangalore)
Experience: 3 - 5 years
Salary: 12 - 25 lacs/annum

Responsibilities (3-5 years of experience):
- Build a strong and scalable crawler system for leveraging external user and content data sources from Facebook, YouTube and other internet products or services
- Extract top trending keywords and topics from social media
- Independently design and build the initial version of a real-time analytics product that uses machine learning models to recommend video content in real time to 10M+ user profiles
- Architect and build Big Data infrastructure using Java, Kafka, Storm, Hadoop, Spark and other related frameworks; experience with Elasticsearch is a plus
- Excellent analytical, research and problem-solving skills, with in-depth knowledge of data structures
Desired skills and experience:
- B.S./M.S. degree in computer science, mathematics, statistics or a similar quantitative field from a good college
- 3+ years of work experience in a relevant field (data engineer, R&D engineer, etc.)
- Experience in machine learning and prediction and recommendation techniques
- Experience with Hadoop/MapReduce/Elastic Stack (ELK) and Big Data querying tools such as Pig, Hive and Impala
- Proficiency in a major programming language (e.g. Java/C/Scala) and/or a scripting language (Python)
- Experience with one or more NoSQL databases, such as MongoDB, Cassandra, HBase, Hive, Vertica, Elasticsearch
- Experience with cloud solutions/AWS, with strong knowledge of Linux and Apache
- Experience with any MapReduce/Spark/EMR setup
- Experience in building reports and/or data visualization
- Strong communication skills and the ability to discuss the product with PMs and business owners

Job posted by Xin Lin

Big Data Evangelist

Founded 2016
Products and services
6-50 employees
Profitable
Spark
Hadoop
Apache Kafka
Apache Flume
Scala
Python
MongoDB
Cassandra
Location: Noida
Experience: 2 - 6 years
Salary: 4 - 12 lacs/annum

Looking for a technically sound and excellent trainer on big data technologies. Get an opportunity to become popular in the industry and get visibility. Host regular sessions on Big data related technologies and get paid to learn.

Job posted by Suchit Majumdar

Big Data Architect

Spark
HDFS
Cassandra
MongoDB
Apache Storm
Apache Hive
Apache Kafka
Apache HBase
Location: Anywhere
Experience: 5 - 11 years
Salary: 25 - 60 lacs/annum

Sigmoid is a fast-growing product-based Big Data startup, funded by Sequoia and backed by experienced professionals and advisors. Sigmoid is revolutionizing business intelligence and analytics by providing unified tools for historical and real-time analysis on Apache Spark. With its suite of products, Sigmoid is democratizing streaming use cases like RTB data analytics, log analytics, fraud detection, sensor data analytics etc. Sigmoid can enable a customer's engineering team to set up their infrastructure on Spark and ramp up their development timelines, or enable the analytics team to derive insights from their data. Sigmoid has created a real-time exploratory analytics tool using Apache Spark which not only vastly improves performance but also reduces cost. A user can quickly analyse huge volumes of data, filter through multiple dimensions, compare results across time periods and carry out root-cause analysis in a matter of seconds. Leading organisations across industry verticals are currently using Sigmoid's platform in production to create success stories.
What Sigmoid offers you:
- Work in a well-funded (Sequoia Capital) Big Data company
- Deal with terabytes of data on a regular basis
- Opportunity to contribute to top Big Data projects
- Work on complex problems faced by leading global companies in areas such as fraud detection, real-time analytics, pricing modeling and so on
We are looking for someone who:
- Has 6+ years of demonstrable experience designing technological solutions to complex data problems and developing efficient, scalable code
- Has experience in the design, architecture and development of Big Data technologies
- Provides technical leadership in the Big Data space (Apache Spark, Kafka, Flink, Hadoop, MapReduce, HDFS, Hive, HBase, Flume, Sqoop, NoSQL, Cassandra)
- Has a strong understanding of databases and SQL
- Defines and drives best practices in the Big Data stack
- Drives operational excellence through root-cause analysis and continuous improvement for Big Data technologies and processes
- Has operating knowledge of cloud computing platforms (AWS and/or Azure or Google Cloud)
- Mentors and coaches engineers to facilitate their development and provides technical leadership to them
- Loves to code and design, with great problem-solving skills and the ability and confidence to hack their way out of tight corners
Preferred qualifications: Engineering Bachelors/Masters in Computer Science/IT. Candidates from top-tier colleges (IIT, NIT, IIIT, etc.) will be preferred. Salary is not a constraint for the right talent.

Job posted by Karthik Selvaraj

Server Side Engineer

Founded 2012
Products and services
51-250 employees
Profitable
Java
Python
Machine Learning
Cassandra
Scala
Apache
Apache Kafka
Location: Hyderabad
Experience: 3 - 7 years
Salary: 9 - 12 lacs/annum

Experience: minimum of 3 years of relevant development experience. Qualification: BS in Computer Science or equivalent.
Skills required:
• Server-side developers with good server-side development experience in Java and/or Python
• Exposure to data platforms (Cassandra, Spark, Kafka) is a plus
• Interest in machine learning is a plus
• Good to great problem-solving and communication skills
• Ability to deliver in an extremely fast-paced development environment
• Ability to handle ambiguity
• Should be a good team player
Job responsibilities:
• Learn the technology area where you are going to work
• Develop bug-free, unit-tested and well-documented code as per requirements
• Stringently adhere to delivery timelines
• Provide mentoring support to Software Engineers and/or Associate Software Engineers
• Any other duties as specified by the reporting authority

Job posted by Nisha Sharma

Database Architect

Founded 2017
Products and services
6-50 employees
Raised funding
ETL
Data Warehouse (DWH)
DWH Cloud
Hadoop
Apache Hive
Spark
MongoDB
PostgreSQL
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 10 - 20 lacs/annum

candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables will include the development of Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into the GRAND's Data Lake. Key Responsibilities : - Create a GRAND Data Lake and Warehouse which pools all the data from different regions and stores of GRAND in GCC - Ensure Source Data Quality Measurement, enrichment and reporting of Data Quality - Manage All ETL and Data Model Update Routines - Integrate new data sources into DWH - Manage DWH Cloud (AWS/AZURE/Google) and Infrastructure Skills Needed : - Very strong in SQL. Demonstrated experience with RDBMS, Unix Shell scripting preferred (e.g., SQL, Postgres, Mongo DB etc) - Experience with UNIX and comfortable working with the shell (bash or KRON preferred) - Good understanding of Data warehousing concepts. Big data systems : Hadoop, NoSQL, HBase, HDFS, MapReduce - Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments. - Working with data delivery teams to set up new Hadoop users. This job includes setting up Linux users, setting up and testing HDFS, Hive, Pig and MapReduce access for the new users. - Cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios, Cloudera Manager Enterprise, and other tools. - Performance tuning of Hadoop clusters and Hadoop MapReduce routines. - Screen Hadoop cluster job performances and capacity planning - Monitor Hadoop cluster connectivity and security - File system management and monitoring. - HDFS support and maintenance. - Collaborating with application teams to install operating system and - Hadoop updates, patches, version upgrades when required. - Defines, develops, documents and maintains Hive based ETL mappings and scripts
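As a loose illustration of the Hive-based ETL work this posting describes, here is a minimal PySpark sketch that reads raw records, cleans them and writes a partitioned Hive table. It assumes a Spark installation with Hive support enabled; the paths, column names and table name are hypothetical placeholders, not details from the job.

from pyspark.sql import SparkSession, functions as F

# Assumes Spark is configured with Hive support.
spark = (
    SparkSession.builder
    .appName("datalake-etl-sketch")
    .enableHiveSupport()
    .getOrCreate()
)

# Hypothetical raw sales extracts landed in the data lake as CSV.
raw = spark.read.csv("/datalake/raw/sales/", header=True, inferSchema=True)

# Basic quality filtering and enrichment before loading the warehouse table.
clean = (
    raw.dropna(subset=["store_id", "amount"])
       .withColumn("sale_date", F.to_date("sale_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
)

# Write/refresh a partitioned Hive table for downstream analytics.
(clean.write
      .mode("overwrite")
      .partitionBy("sale_date")
      .saveAsTable("dwh.sales_clean"))

spark.stop()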

Job posted by Rahul Malani

Senior Technologist @ Intelligent Travel Search startup

Founded 2016
Product
1-5 employees
Raised funding
Big Data
Fullstack Developer
Technical Architecture
Web Development
Mobile App Development
Databases
NOSQL Databases
Amazon Web Services (AWS)
Location: Bengaluru (Bangalore)
Experience: 5 - 15 years
Salary: 6 - 18 lacs/annum

Key skills expected:
• Will be expected to architect, develop and maintain large-scale distributed systems
• Should have excellent coding skills and a good understanding of MVC frameworks
• Strong understanding of and experience in building efficient search and recommendation algorithms; experience in machine/deep learning would be beneficial
• Experience in Python/Django would be a plus
• Strong knowledge of hosting platforms such as AWS, Google Cloud Platform, etc. is critical
• Sound understanding of front-end web technologies such as HTML, CSS, JavaScript, jQuery, AngularJS etc.
We are looking for self-starters who want to solve hard problems.

Job posted by Varun Gupta

Data Engineers

Founded 2002
Products and services
6-50 employees
Profitable
Python
Cassandra
NOSQL Databases
Location: Goa, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: 4 - 11 lacs/annum

Srijan Technologies Pvt Ltd. ​is a 14 years old enterprise web content management consulting and development company with expertise in building high-traffic websites and complex web applications. Over this period we have served over 200 clients across Asia, Europe, United States and Middle East. We are the only Acquia Enterprise partner in India. Job Description We are looking for a Data Engineer responsible for managing the interchange of data between the server and the users. Your primary focus will be the development of all server-side logic, ensuring high performance and responsiveness to requests from the front-end. You will also be responsible for integrating the front-end elements built by your co-workers into the application; therefore, a basic understanding of front-end technologies is necessary as well. Responsibilities: ● ​Writing reusable, testable, and efficient code. ● ​Understand client’s business needs and develop a software solution with necessary validations ● Attend client calls, demonstrations to the client. ● Provide assistance, guidance and support to other developers when necessary​. Review codes of peers. ● ​Maintain appropriate documentation with code. ● Undertake quality assurance and testing for functionalities developed. Communication Responsibilities: ● Deliver engaging, informative and well-organized presentations. ● Resolves and/or escalates issues in a timely fashion. Other Responsibilities: ● Disseminate technology best practices. ● Work with senior developers in adoption of new technologies within our Technology practice Requirements, Skills, Qualifications: • Expert in Python, with knowledge of at least one Python web framework such as Django, Flask, etc depending on your technology stack. • Familiarity with some ORM (Object Relational Mapper) libraries • Able to integrate multiple data sources and databases into one system • Understanding of the threading limitations of Python, and multi-process architecture • Good understanding of server-side templating languages such as Jinja 2, Mako, etc . ● Good understanding of MySQL and relational databases. ● Experience with Cassandra or other “newSQL” databases is a plus. ● Experience with AWS - including Lambda, DynamoDB, Cognito is a major plus. ● Expertise in JavaScript and mainstream JavaScript libraries such as JQuery and working knowledge of Ajax. ● Good understanding of web technologies and HTTP. Good Linux skills HTML and CSS skills commensurate with years of experience. ● Git knowledge/ version control knowledge and skills.

Job posted by Sayali Prabhudesai

Data Scientist

Founded 1999
Services
250+ employees
Profitable
Big Data
Data Science
Machine Learning
Python
Location: Hyderabad
Experience: 3 - 7 years
Salary: 7 - 15 lacs/annum

Skills required: SQL, the Python data science stack (strongly preferred), machine learning and statistics, data visualization, A/B testing, bandit problems, recommendation systems, reinforcement learning.
Roles & responsibilities:
· Ability to work independently, applying technical skills to meet a business objective
· Exploratory data analysis on large data sets, using visualization and basic statistical techniques to develop intuitions and identify key variables in data science projects
· Data extraction and joining of data from multiple sources; SQL is a must, Python preferred
· Strong familiarity and experience with machine learning algorithms; the Python data science stack is ideal (e.g. sklearn): regression, clustering, SVMs, decision trees, DNNs, recommendation algorithms, etc.
· Specific experience sought: real-world A/B testing, multi-armed bandit algorithms, causal inference
· Ability to read and apply, with guidance, cutting-edge machine learning research papers to solve business problems
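Since the posting calls out multi-armed bandit algorithms alongside classical A/B testing, here is a minimal, hedged sketch of an epsilon-greedy bandit in Python; the arm count, reward probabilities and epsilon value are illustrative assumptions, not anything specified by the employer.

import random

# Hypothetical conversion rates for three variants (unknown in practice).
true_rates = [0.05, 0.08, 0.11]
epsilon = 0.1              # probability of exploring a random arm
counts = [0, 0, 0]         # pulls per arm
values = [0.0, 0.0, 0.0]   # running mean reward per arm

def pull(arm):
    # Simulate a Bernoulli reward for the chosen arm.
    return 1.0 if random.random() < true_rates[arm] else 0.0

for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(len(true_rates))                    # explore
    else:
        arm = max(range(len(values)), key=lambda a: values[a])     # exploit
    reward = pull(arm)
    counts[arm] += 1
    # Incremental update of the running mean reward for this arm.
    values[arm] += (reward - values[arm]) / counts[arm]

print("estimated arm values:", [round(v, 3) for v in values])
print("pulls per arm:", counts)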

Job posted by Muthyala Shirish Kumar

Big Data Engineer

Founded 2012
Product
51-250 employees
Profitable
Spark
Apache Storm
Cassandra
Location: Bengaluru (Bangalore)
Experience: 3 - 8 years
Salary: 10 - 20 lacs/annum

- Bachelor's or Master's degree in computer science or software engineering
- Experience with object-oriented design, coding and testing patterns, as well as experience engineering (commercial or open source) software platforms and large-scale data infrastructures
- Ability to architect highly scalable distributed systems using different open source tools
- Experience building high-performance algorithms
- Extensive knowledge of programming or scripting languages such as Python and Scala, and of Apache Spark
- Experience with different (NoSQL or RDBMS) databases and tools such as MongoDB, Google BigQuery, Cassandra, Elasticsearch, HBase, data pipelines, Impala
- Experience building data processing systems with Hadoop and Hive using Python
- Good exposure to AWS Lambda, Kinesis, EMR, Redshift, Kafka

Job posted by Naveen Taalanki

Senior Software Engineer- Ruby On Rails

Founded 2016
Products and services
6-50 employees
Profitable
Ruby on Rails (ROR)
Javascript
MVC Framework
MongoDB
Cassandra
MySQL
Location: Pune
Experience: 2 - 10 years
Salary: 5 - 22 lacs/annum

Responsibilities:
- Developing intelligent and scalable engineering solutions from scratch
- Working on high- and low-level product designs and roadmaps along with a team of ace developers
- Building products on bleeding-edge technologies using Ruby on Rails
- Building innovative products for customers in Cloud, DevOps, Analytics, AI/ML and lots more

Job posted by Kalpak Shah

Cassandra Engineer/Developer/Architect

Founded 2017
Products and services
6-50 employees
Bootstrapped
Cassandra
Linux/Unix
JVM
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: 6 - 20 lacs/annum

www.aaknet.co.in/careers/careers-at-aaknet.html You are extra-ordinary, a rock-star, hardly found a place to leverage or challenge your potential, did not spot a sky rocketing opportunity yet? Come play with us – face the challenges we can throw at you, chances are you might be humiliated (positively); do not take it that seriously though! Please be informed, we rate CHARACTER, attitude high if not more than your great skills, experience and sharpness etc. :) Best wishes & regards, Team Aak!

Job posted by Debdas Sinha

Python Developer

MySQL
MongoDB
Spark
Apache Hive
Location: Chennai
Experience: 2 - 7 years
Salary: 6 - 18 lacs/annum

Full Stack Developer for Big Data Practice. Will include everything from architecture to ETL to model building to visualization.

Job posted by Bavani T

Data Engineer

Founded 2012
Product
51-250 employees
Profitable
Python
numpy
scipy
cython
scikit learn
MapReduce
Apache Kafka
Pig
Location: Bengaluru (Bangalore)
Experience: 2 - 6 years
Salary: 5 - 25 lacs/annum

Brief about the company: EdGE Networks Pvt. Ltd. is an innovative HR technology solutions provider focused on helping organizations meet their talent-related challenges. With our expertise in Artificial Intelligence, Semantic Analysis, Data Science, Machine Learning and Predictive Modelling, we enable HR organizations to lead with data and intelligence. Our solutions significantly improve workforce availability, billing and allocation, and drive straight bottom-line impact. For more details, please log on to www.edgenetworks.in and www.hirealchemy.com.
Do apply if you meet most of the following requirements:
- Very strong Python, Java or Scala experience, especially in open source, data-intensive, distributed environments
- Work experience with libraries like scikit-learn, numpy, scipy, cython
- Expertise in Spark, MapReduce, Pig, Hive, Kafka, Storm, etc., including performance tuning
- Have implemented complex projects dealing with considerable data size and high complexity
- Good understanding of algorithms, data structures and performance optimization techniques
- Excellent problem solver, analytical thinker and quick learner
- Search capabilities such as Elasticsearch, with experience in MongoDB
Nice to have:
- Must have excellent written and verbal communication skills
- Experience writing Spark and/or MapReduce v2 jobs
- Ability to translate requirements and/or specifications into code that is relatively bug-free
- Write unit and integration tests
- Knowledge of C++
- Knowledge of Theano, TensorFlow, Caffe, Torch etc.

Job posted by Naveen Taalanki

Data Crawler

Founded 2012
Product
51-250 employees
Profitable
Python
Selenium Web driver
Scrapy
Web crawling
Apache Nutch
output.io
Crawlera
Cassandra
Location: Bengaluru (Bangalore)
Experience: 2 - 7 years
Salary: 5 - 20 lacs/annum

Brief about the company: EdGE Networks Pvt. Ltd. is an innovative HR technology solutions provider focused on helping organizations meet their talent-related challenges. With our expertise in Artificial Intelligence, Semantic Analysis, Data Science, Machine Learning and Predictive Modelling, we enable HR organizations to lead with data and intelligence. Our solutions significantly improve workforce availability, billing and allocation, and drive straight bottom-line impact. For more details, please log on to www.edgenetworks.in and www.hirealchemy.com.
Summary of the role: We are looking for a skilled and enthusiastic Data Procurement Specialist for web crawling and public data scraping.
- Design, build and improve our distributed system of web crawlers
- Integrate with third-party APIs to improve results
- Integrate the crawled and scraped data into our databases
- Create more and better ways to crawl relevant information
- Strong knowledge of web technologies (HTML, CSS, JavaScript, XPath, RegEx)
- Good knowledge of Linux command-line tools
- Experienced in Python, with knowledge of the Scrapy framework
- Strong knowledge of Selenium (Selenium WebDriver is a must)
- Familiarity with crawl frontiers like Frontera
- Familiarity with distributed messaging middleware (Kafka)
Desired:
- Practical, hands-on experience with modern Agile development methodologies
- Ability to thrive in a fast-paced, test-driven, collaborative and iterative programming environment
- Experience with web crawling projects
- Experience with NoSQL databases (HBase, Cassandra, MongoDB, etc.)
- Experience with CI tools (Git, Jenkins, etc.)
- Experience with distributed systems
- Familiarity with data loading tools like Flume
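As a rough illustration of the Scrapy skills the posting asks for, here is a minimal spider sketch; the target URL and CSS selectors are purely hypothetical placeholders, not details taken from the job.

import scrapy

class JobListingSpider(scrapy.Spider):
    # Toy spider: collects listing fields and follows pagination links.
    name = "job_listings"
    start_urls = ["https://example.com/jobs"]  # hypothetical starting point

    def parse(self, response):
        # Extract one item per listing card; the selectors are illustrative only.
        for card in response.css("div.job-card"):
            yield {
                "title": card.css("h2::text").get(),
                "location": card.css("span.location::text").get(),
            }
        # Follow the "next page" link, if present.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)

# Typical usage from a shell:
#   scrapy runspider job_listing_spider.py -o jobs.json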

Job posted by Naveen Taalanki

Data Scientist

Founded 2012
Product
51-250 employees
Profitable
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location: Bengaluru (Bangalore)
Experience: 2 - 6 years
Salary: 5 - 25 lacs/annum

Do apply if any of this sounds familiar! o You have expertise in NLP, Machine Learning, Information Retrieval and Data Mining. o Experience building systems based on machine learning and/or deep learning methods. o You have expertise in Graphical Models like HMM, CRF etc. o Familiar with learning to rank, matrix factorization, recommendation system. o You are familiar with the latest data science trends, tools and packages. o You have strong technical and programming skills. You are familiar with relevant technologies and languages (e.g. Python, Java, Scala etc.) o You have knowledge of Lucene based search-engines like ElasticSearch, Solr, etc and NoSQL DBs like Neo4j and MongoDB. o You are really smart and you have some way of proving it (e.g. you hold a MS/M.Tech or PhD in Computer Science, Machine Learning, Mathematics, Statistics or related field). o There is at least one project on your resume that you are extremely proud to present. o You have at least 4 years’ experience driving projects, tackling roadblocks and navigating solutions/projects through to completion o Execution - ability to manage own time and work effectively with others on projects o Communication - excellent verbal and written communication skills, ability to communicate technical topics to non-technical individuals Good to have: o Experience in a data-driven environment: Leveraging analytics and large amounts of (streaming) data to drive significant business impact. o Knowledge of MapReduce, Hadoop, Spark, etc. o Experience in creating compelling data visualizations

Job posted by Naveen Taalanki

ML/NLP Engineer

Founded 2007
Product
250+ employees
Raised funding
Machine Learning
Natural Language Processing (NLP)
Python
Big Data
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 15 - 35 lacs/annum

What we do: building India's largest, hyperlocal, mobile-first application and back-end platforms that will serve over 100 million monthly active Indian local-language users, scaling to over 1 billion page views a day. The platform currently powers 5 billion page views a month, serving a user base of 90 million installs spread across 800 cities in India, who consume services in 15 Indian local languages.
What you'll do: work in cohesion with the R&D team towards building new products and enriching existing ones with ML/NLP.
Desired skills:
- Programming languages: Java, Python, R
- Tools and frameworks: NLTK, Mahout, GATE, Stanford NLP suite, Weka, scikit-learn
- Deep learning: understanding of deep learning models applied to NLP; neural networks, word embeddings, sequence learning, RNNs
- NLP: statistical NLP models, POS tagging, parsing, sequence tagging, word sense disambiguation, language models, topic modelling, NER
- ML: linear regression, logistic regression, Naive Bayes, SVM, decision trees, random forests, boosting, bagging, HMM, CRF, LSI/LDA, clustering, unsupervised/semi-supervised methods
- DS/algorithms/probability: efficient data structures, object-oriented design, algorithms, probability and statistics, optimization methods
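To make the NLP toolkit requirement concrete, here is a tiny, hedged NLTK example covering tokenization, POS tagging and named-entity chunking. It assumes nltk is installed and that the referenced tokenizer, tagger and chunker data packages can be downloaded; the sample sentence is invented.

import nltk

# One-time downloads of the models this sketch relies on (standard NLTK package ids).
for pkg in ["punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words"]:
    nltk.download(pkg, quiet=True)

sentence = "The CutShort platform lists Big Data jobs in Bengaluru and Mumbai."

tokens = nltk.word_tokenize(sentence)   # tokenization
tagged = nltk.pos_tag(tokens)           # part-of-speech tagging
tree = nltk.ne_chunk(tagged)            # simple named-entity chunking

print(tagged)
print(tree)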

Job posted by Vijaya Kiran

Senior Software Engineer / Technical Architect

Founded 2015
Product
6-50 employees
Raised funding
Ruby on Rails (ROR)
MongoDB
Big Data
Location: Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: 15 - 25 lacs/annum

About Social Frontier: Social Frontier is a comprehensive SaaS offering for automation and optimization of social media marketing channels. We help businesses increase reach & engagement on social media to maximize website traffic or application installs. Social Frontier empowers businesses to monitor and manage all their social media platforms effectively and efficiently. This means simplifying the process of running large and complex ad campaigns for the in-house digital marketer. Social Frontier arms marketing teams with intuitive technology to take control of their digital presence through sophisticated workflow automation and predictive optimization. In other words, Social Frontier makes sure that your posts and campaigns get the attention of people. And not just any people, but potential customers of your business. And not just once, but constantly and consistently. Social Frontier is funded by Growth Story, a Bangalore based incubator which has previously funded companies like Tutorvista, Big Basket, Bluestone, Must See India, Fresh Menu, Portea Medical & Housejoy. About the Role: As a Senior Dev, you would have to partner closely with product management to influence and prioritize roadmaps, drive engineering excellence within the technology team, come up with architectures and designs, work closely with engineers in the team to do review designs and contribute individually to code when required. You would be expected to contribute in the following ways: Translate complex functional and technical requirements into detailed architecture, design and code. Take ownership of your module, maintain, fix bugs and improve code performance. Work with team members to manage the day-to-day development activities, participate in designs, design review, code review, and implementation. Maintain current technical knowledge to support rapidly changing technology, always on a lookout for new technologies and work with the team in bringing in new technologies. Skills Required: Should have strong Computer Science Fundamentals with a minimum BE/BTech degree in Comp Sc from a prestigious institute. Should have worked in a company in the internet domain and should have faced non trivial scaling challenges. Experience and expertise in building full stack systems: front-end, web applications, back-end services and data systems. Experience in Ruby on Rails & in Big Data systems such as MongoDB preferred. Should be willing to learn & understand the domain, which is Digital Marketing. Work Experience: 4-8 years Location: Bangalore (Indiranagar) Tech Stack: Ruby on Rails, MongoDB How to Apply: Send your resume, current & expected CTC, notice period to careers@socialfrontier.com to apply. Founder’s Bio: Sanjay Goel - Sanjay comes with more than 14 years of technical & 8 years of entrepreneurial experience. He handles technology at Social Frontier. Abdulla Basha - Basha is the marketing whizkid of Social Frontier. Currently he is helping generate a traffic of more than 1 billion hits per month, across multiple clients. Anand Rao - With more than 20 years of Enterprise Sales experience, Anand handles sales for Social Frontier.

Job posted by Bhargavi N

Python Developer

Founded 2015
Product
51-250 employees
Profitable
Pandas
Numpy
Bash
Structured Query Language
Python
Big Data
NOSQL Databases
Location: Pune
Experience: 2 - 5 years
Salary: 5 - 10 lacs/annum

We are an early-stage startup working in the space of analytics, big data, machine learning, data visualization on multiple platforms and SaaS. We have our offices in Palo Alto and WTC, Kharadi, Pune, and have some marquee names as our customers. We are looking for a really good Python programmer who MUST have scientific programming experience (Python, etc.). Hands-on experience with numpy and the Python scientific stack is a must, along with a demonstrated ability to track and work with hundreds to thousands of files and GB-TB of data. Exposure to ML and data mining algorithms is expected, and you need to be comfortable working in a Unix environment and with SQL.
You will be required to do the following:
- Use command-line tools to perform data conversion and analysis
- Support other team members in retrieving and archiving experimental results
- Quickly write scripts to automate routine analysis tasks
- Create insightful, simple graphics to represent complex trends
- Explore/design/invent new tools and design patterns to solve complex big data problems
Experience working on a long-term, lab-based project is desirable (academic experience acceptable).
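For flavour, here is a small, hedged sketch of the kind of routine analysis script the posting describes, using pandas and matplotlib; the CSV path, column names and output file are hypothetical.

import pandas as pd
import matplotlib

matplotlib.use("Agg")  # render to file, no display needed
import matplotlib.pyplot as plt

# Hypothetical input: a CSV of experiment results with 'run' and 'metric' columns.
df = pd.read_csv("results.csv")

# Quick summary statistics printed to the console.
print(df.describe())

# Aggregate the metric per run and plot a simple trend chart.
summary = df.groupby("run")["metric"].mean()
summary.plot(kind="line", title="Mean metric per run")
plt.xlabel("run")
plt.ylabel("mean metric")
plt.tight_layout()
plt.savefig("metric_trend.png")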

Job posted by Nischal Vohra

Power Business Intelligent Developer

Founded 2014
Services
1-5 employees
Bootstrapped
Dashboard development
Data visualization
Microsoft Power Pivot
Power BI Desktop
Big Data
Location: Pune, Kharadi
Experience: 2 - 5 years
Salary: 4 - 7 lacs/annum

- Should be able to create awesome dashboards
- Should have hands-on knowledge of all of the following: visualizations, datasets, reports, dashboards, tiles
- Excellent querying skills using T-SQL
- Should have prior exposure to SSRS and/or SSAS
- Working knowledge of Microsoft Power Pivot, Power View and Power BI Desktop

Job posted by Yogita Purandare

Jr. Data Scientist at SquadRun, Inc.

Big Data
Data Analytics
Data Science
Databases
Location: Noida
Experience: 1 - 4 years
Salary: 8 - 12 lacs/annum

SquadRun is a profitable SaaS startup that leverages the best of machines and humans to automate digital operations/ business processes for enterprises. We have offices in Noida and San Francisco. This position is primarily based in Noida. We combine customised workflows, workflow automation and a distributed human talent pool of stay at home moms and college students delivering guaranteed SLAs of high quality output, speed and scale, at great cost efficiency. Please see this overview deck for more context. In every business, there are digital operations/ business processes that need to be executed. We are disrupting the business process outsourcing industry and in the process solving some of the most exciting data science problems. You would need to apply the best machine learning techniques to various aspects of business process automation and help businesses perform operational work with 10X efficiency (cost, speed, quality) than existing alternatives. For example, catalog management for commerce, content moderation for social businesses, training for AI algorithms, customer onboarding for banking and insurance etc. One of the biggest challenge lies in shaping the product which can have flexible modules that can come together to scale most major workflows. We’re looking for a growth hungry early member who would like to apply data science to a real product right from the ground-up. Roles & Responsibilities You would closely work on building a start-of-the-art work automation pipelines for our platform product with the Data Science Lead: Workforce management to acquire, qualify, match, train and verify / Quality Control. Workflow engine to replicate each business process smartly with different configurations. Work automation (SquadAI) to help contractors by automating part of the workflows and ‘Humans ’in the loop machine learning to automate cognitive decisions once we have enough quality training data. Facilitate data driven experiments to derive product insights. Participate in research in artificial intelligence and machine learning applications from time to time. Background & Key traits 1+ years experience working with data intensive problems at scale from inception to business impact. Experience with modeling techniques such as generalized linear models, cluster analysis, random forests, boosting, decision trees, time-series, neural networks, deep learning etc. Experience using programming (Python/R/Scala) and SQL languages in analytical contexts. Experience with distributed machine learning and computing framework would be a plus (Spark, Mahout or equivalent). Why should you consider this seriously? We’re one of the few applications of AI/ data science that actually has a massive market and business model to create a long term valuable business rather than a short term acquisition play! In the last one year, we have built a product and solved problems for some of the largest brands in the world and tested platform at scale (processed 50+ million units of data). Our customers include Uber, Sephora, Teespring, Snapdeal, the Tata and Flipkart Group, amongst others and we have plans to grow 10x in the next 1.5 years. We are a well balanced team of experienced entrepreneurs and are backed by top investors across India and the Silicon Valley. The platform empowers college students, stay at home mothers and grey collared workers as a stable source of income for working on their smartphone (Android App Link). Our contractors earn 3x of what a typical back office BPO employee makes. 
For us, this is truly impactful. Every day, we see success stories such as this one - how a single mother is sustaining herself. Empowering our contractors to be financially independent is a strong part of our vision! Compensation INR 8-12 L cash + ESOP upto 4L. To Apply Download our app, go through our website, social media pages, linkedin profiles, blogs, etc. Read about our hiring framework here (Must read!). Mail cover letter and resume to code@squadrun.co with the subject “Jr. Data Scientist @ SquadRun Inc”. In your cover letter, tell us why you are a good fit for this role!

Job posted by Rishabh Dev Singh

Data Science & Product Lead at SquadRun, Inc.
Python
Big Data
Data Science
Location: Noida
Experience: 2 - 4 years
Salary: 20 - 28 lacs/annum

SquadRun is a profitable SaaS startup that leverages the best of machines and humans to automate digital operations/ business processes for enterprises. We have offices in Noida and San Francisco. This position is primarily based in Noida with some travel. We combine customised workflows, workflow automation and a distributed human talent pool of stay at home moms and college students delivering guaranteed SLAs of high quality output, speed and scale, at great cost efficiency. In every business, there are digital operations/ business processes that need to be executed. We are disrupting the business process outsourcing industry and in the process solving some of the most exciting data science problems. We’re looking for a senior candidate to own and build our data science and platform product roadmap right from the ground up. You would need to apply the best machine learning techniques to various aspects of business process automation and help businesses perform operational work with 10X efficiency (cost, speed, quality) than existing alternatives. For example, catalog management for commerce, content moderation for social businesses, training for AI algorithms, customer onboarding for banking and insurance etc. One of the biggest challenge lies in shaping the product which can have flexible modules that can come together to scale most major workflows. We are looking for an early member who not only is a data science wizard but has strong product chops. Sample Workflow: Roles & Responsibilities You would closely work on building a start-of-the-art work automation pipelines for our platform product: Workforce management to acquire, qualify, match, train and verify / Quality Control. Workflow engine to replicate each business process smartly with different configurations. Work automation (SquadAI) to help contractors by automating part of the workflows and ‘Humans ’in the loop machine learning to automate cognitive decisions once we have enough quality training data. Architect & Build the work automation product layer right from the scratch by working closely with the platform engineering team. Define the long term data science platform product roadmap with focus on a strong platform layer foundation. Collaborate closely with engineering, business operations & product teams and leverage your expertise in devising appropriate measurements and metrics, designing randomized controlled experiments, architecting business intelligence tooling and tackling hard, open-ended problem, etc. Facilitate data driven experiments to derive product insights. Identify new opportunities to leverage data science to different parts of the our platform. Build a sharp data science team and culture. Background & Key traits 4+ years experience working with data intensive problems at scale from inception to business impact. Experience with modeling techniques such as generalized linear models, cluster analysis, random forests, boosting, decision trees, time-series, neural networks & deep learning. Strong communication and documentation skills. Experience using programming (Python/R/Scala) and SQL languages in analytical contexts. Experience dealing with large datasets. Advanced degree in a relevant field (preferred). Experience with distributed machine learning and computing framework would be a plus (Spark, Mahout or equivalent). Why should you consider this seriously? 
We’re one of the few applications of AI/ data science that actually has a massive market and business model to create a long term valuable business rather than a short term acquisition play! In the last one year, we have built a product and solved problems for some of the largest brands in the world and tested platform at scale (processed 50+ million units of data). Our customers include Uber, Sephora, Teespring, Snapdeal, the Tata and Flipkart Group, amongst others and we have plans to grow 10x in the next 1.5 years. We are a well balanced team of experienced entrepreneurs and are backed by top investors across India and the Silicon Valley. The platform empowers college students, stay at home mothers and grey collared workers as a stable source of income for working on their smartphone (Android App Link). Our contractors earn 3x of what a typical back office BPO employee makes. For us, this is truly impactful. Every day, we see success stories such as this one - how a single mother is sustaining herself. Empowering our contractors to be financially independent is a strong part of our vision! Compensation INR 20-28L cash + ESOP upto 30L. To Apply Download our app, go through our website, social media pages, linkedin profiles, blogs, etc. Read about our hiring framework here. (Must read!). Mail cover letter and resume to code@squadrun.co with the subject “Data Science & Product Lead @ SquadRun”. In your cover letter, tell us why you are a good fit for this role!

Job posted by Rishabh Dev Singh

Supply Data Analyst at SquadRun, Inc.

Analytics
Big Data
Data Analytics
Location: Noida
Experience: 1 - 3 years
Salary: 6 - 8 lacs/annum

Supply Data Analyst at SquadRun, Inc. We’re looking for an entrepreneurial candidate with extremely strong analytical skills and with high attention to detail to join our team. Your primary efforts will focus around our contractor (supply) base, helping to make it more efficient and more productive by analysing and mapping contractors’ working trends, platform behaviour etc. Your insights will be used to develop product strategy on the supply side, as also to design and deploy productivity tools for our contractor base to use. You will also be responsible for analysis of data related to determining of rewards, driving engagement and retention and leading new growth through insights from internal research and external data. About SquadRun SquadRun helps businesses outsource operational work to a distributed mobile workforce of college students, young professionals, housewives, etc. Tasks include: Data operations: moderating and classifying catalog, tagging content for a consumer/ social app, etc. Outbound calling operations: calling contacts to collect/ update information, lead qualification, feedback surveys etc. Roles and Responsibilities Delineate, plan and update metrics that indicate holistic platform health and growth Identify and map key data points that will help increase contractor efficiency and productivity Identify and map key data points that will help promote ease of adoption and onboarding Drive increased engagement on the platform through predictive models that forecast ‘user journey and platform behaviour’ Analyze data to determine ‘mission rewards’ framework Create data maps / systems that help to match capabilities of a contractor to the nature of work (output) expected Derive insights based on internal metrics and external research, for identifying new segments for growth Share inputs for design of workflows suited to extract optimum performance from supply base Qualifications At least 1-2 years experience of working in a fast-paced environment Bachelor’s degree in Engineering, Business Administration, Management Ability to work with large amounts of data: facts, figures, and number crunching. You will need to see through the data and analyze it to find conclusions. Data analysts are often called to present their findings, or translate the data into an understandable document. You will need to write and speak clearly, easily communicating complex ideas. You must look at the numbers, trends, and data and come to new conclusions based on the findings. Attention to Detail: Data is precise. You have to make sure they are vigilant in their analysis to come to correct conclusions. Why should you consider this seriously? We’re one of the few applications of AI/ data science that actually has a massive market and business model to create a long term valuable business rather than a short term acquisition play! In the last one year, we have built a product and solved problems for some of the largest brands in the world and tested platform at scale (processed 50+ million units of data). Our customers include Uber, Sephora, Teespring, Snapdeal, the Tata and Flipkart Group, amongst others and we have plans to grow 10x in the next 1.5 years We are a well balanced team of experienced entrepreneurs and are backed by top investors across India and the Silicon Valley The platform empowers college students, stay at home mothers and grey collared workers as a stable source of income for working on their smartphone (Android App Link). 
Our contractors earn 3x of what a typical back office BPO employee makes. For us, this is truly impactful. Every day, we see success stories such as this one - how a single mother is sustaining herself. Empowering our contractors to be financially independent is a strong part of our vision! To Apply: Download our app, play through it to get a feel on what we are all about. Go through our deck, website, social media pages, linkedin profiles, player blog, business blog etc Read about what we’re building and our hiring framework here Mail cover letter and resume to work@squadrun.co with the subject “Supply Data Analyst at SquadRun” In your cover letter, tell us why you are perfect for this role. Include links to your past projects. If you write a blog, contribute to an open source project, or tried solving an interesting problem, we want to hear about it

Job posted by Rishabh Dev Singh

Tech Lead - Java

Founded 2014
Product
51-250 employees
Raised funding
Big Data
Hibernate (Java)
Spring
Location: Mumbai
Experience: 5 - 9 years
Salary: 17 - 25 lacs/annum

LogiNext is looking for a technically savvy and experienced technical architect to serve as the lead, and mentor for a growing team of strong developers. You will help the team grow in size and skills, optimizing their code while working on your owl. With your technical expertise you will manage priorities, deadlines and deliverables, identify and mitigate the risks. You will design and develop the products that exceed client expectations in terms of value and benefit. You have deep expertise in building secure, high-performing and scalable systems in Java. You have successfully managed complex and cross discipline Big Data projects in the past. Your design intuition inclines towards usability, elegance and simplicity. You have successfully shipped applications with beautiful front-end and intelligent backend. You have demonstrated strong interpersonal and communication skills. Responsibilities • Lead end-to-end design and development of cutting-edge products • Work with product management and engineering team to build highly scalable products • Keep an eye out for open source projects and technology trends that can be introduced in the products • Be hands-on, adopt practical approach to software and technology • Work closely with development team to explain the requirements and constantly monitor the progress • Suggest improvements in systems & processes and assist technical team with issues needing technical expertise • Create technical content such as blogs, technical specification documents and system integration requirements documents Requirements • Master’s or Bachelor’s in Computer Science, Information Technology, Info Systems, or related field • 5+ years of relevant experience in designing and developing scalable and distributed enterprise applications • Expertise in common frameworks like Spring, Hibernate, RESTful Web Services, etc. and managing and optimizing data stores such as MySQL, MongoDB, Elasticsearch, etc • Experience in front-end tools and technologies (HTML5, CSS, JavaScript, jQuery, etc) and Geographic Information System (GIS) is preferred • Experience in building the configuration details for installation, deployment and configuration of cloud automation solutions on AWS • Strong foundation in computer science, with strong competencies in data structures, algorithms and software design • Proven ability to drive large scale projects with deep understanding of Agile SDLC, high collaboration and leadership • Excellent written and oral communication skills, judgment and decision making skills, and the ability to work under continual deadline pressure • Excellent written and oral communication skills, judgment and decision making skills, and the ability to work under continual deadline pressure

Job posted by Avi Sisodia

DevOps Engineer

Founded 2015
Product
6-50 employees
Raised funding
MySQL
Apache
Cassandra
Amazon Web Services (AWS)
DevOps
Git
Nginx
Shell Scripting
Location: Bengaluru (Bangalore)
Experience: 2 - 4 years
Salary: 3 - 5 lacs/annum

bounty app team is seeking a full-time DevOps engineer well versed in cloud/Linux based server administration. Applications are accepted from candidates with a minimum of 2 years of relevant experience.
Skills: Linux, Apache, NGINX, AWS and other cloud hosting, MySQL Server, Cassandra, ElasticSearch, RabbitMQ, and some programming/scripting knowledge in Python, Java, PHP, etc.
Responsibilities:
- Provide administration functions for Linux based servers hosted on a cloud platform
- Monitor systems for performance, health-check, utilization and security
- Write scripts to set up recurring tasks as cron jobs
- Apache and NGINX web server configuration and monitoring
- MySQL database server administration
- Cassandra and ElasticSearch administration
- Perform maintenance tasks such as db backup and restore
- Write process documentation/checklists to follow
- Maintain and audit servers/services
- Track vulnerabilities and apply appropriate patches and upgrades
- Be on call and respond quickly to system maintenance needs
Eligibility Criteria:
- 2 to 4 years (minimum 2 years) of relevant experience
- Fluency in administering web servers and experience in IT infrastructure
- Strong knowledge of NGINX and Java application servers
- Strong knowledge of Amazon Web Services
- Strong knowledge of MySQL database administration
- Well versed in scheduling and monitoring cron jobs
- Working knowledge of Java programming and Python/shell script writing
- Working knowledge of DNS, TCP/IP, DHCP
- Familiar with MySQL queries and capable of maintenance tasks such as db backup and restore
- Ability to work in a fast-paced startup environment and perform well under a tight schedule
Please note this is not a 9-to-5 job; it may require working off hours, late at night or early in the morning during deployments.
About the Company: bounty app is the product of Nanolocal Technologies Private Limited. With bounty app, you get rewarded every time you walk into bounty partner places. All you need to do is open the app and register. You do not even have to remember to check in when you are at a specific partner place. Whenever you are in a partner place, the app automatically recognizes that and pops up to alert you that you are in a bounty rewards zone; all you need to do is tap once. This intelligent assist is based on a multitude of factors and works even if you are NOT connected to the internet - yes, even offline you can earn rewards. It is context-aware, hyper location-aware and gets personalized over time. There is no posting on FB/Twitter when you check in, and you have absolute privacy. You check in to earn reward points that are redeemable against a host of e-gift cards with no tricky conditions! The successful candidate will be a major contributor to the development of the bounty app platform, a very innovative concept. Check out more detail at www.bountyapp.in
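For candidates unfamiliar with the kind of cron-driven maintenance scripting this posting mentions, here is a minimal sketch of how a nightly MySQL backup could be written in Python. The database name, backup directory and credential handling are illustrative assumptions, not details taken from the posting.

```python
#!/usr/bin/env python3
"""Nightly MySQL backup sketch, meant to be scheduled from cron.

Hypothetical values: DB_NAME and BACKUP_DIR are placeholders, not
taken from the job posting.
"""
import datetime
import gzip
import pathlib
import subprocess

DB_NAME = "appdb"                                 # hypothetical database name
BACKUP_DIR = pathlib.Path("/var/backups/mysql")   # hypothetical target directory


def backup() -> pathlib.Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    target = BACKUP_DIR / f"{DB_NAME}-{stamp}.sql.gz"
    # mysqldump is assumed to read credentials from ~/.my.cnf,
    # so no password appears in the command line.
    dump = subprocess.run(
        ["mysqldump", "--single-transaction", DB_NAME],
        check=True,
        capture_output=True,
    )
    with gzip.open(target, "wb") as fh:
        fh.write(dump.stdout)
    return target


if __name__ == "__main__":
    print(f"backup written to {backup()}")
```

Scheduled from cron with something like `30 2 * * * /usr/bin/python3 /opt/scripts/mysql_backup.py` (a hypothetical path), this covers the "db backup and restore" and "cron jobs" items in one small script.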

Job posted by Anita Bhat

Principal Software Engineer

Founded
employees
Java
Big Data
Scala
Location: Bengaluru (Bangalore)
Experience: 9 - 12 years
Salary: 20 - 40 lacs/annum

Scienaptic (www.scienaptic.com) is a new age technology and analytics company based in NY and Bangalore. Our mission is to infuse robust decision science into organizations. Our mantra to achieve our mission is to reduce friction among technology, processes and humans. We believe that good design thinking needs to permeate all aspects of our activities so that our customers get the best possible aesthetic and least-friction experience of our software and services. As a Principal Software Development Engineer you will be responsible for the development and augmentation of the software components which will be used to solve the analytics problems of large enterprises. These components are highly scalable, connect with multiple data sources and implement some complex algorithms. We are funded by very senior and eminent business leaders in India and the US. Our lead investor is Pramod Bhasin, who is known as a pioneer of the ITES revolution. We have the working environment of a new age, cool startup. We are firm believers that the best talent grounds are non-hierarchical in structure and spirit. We expect you to enjoy, thrive and empower others by progressing that culture.
Requirements:
- All-round experience in developing and delivering large-scale business applications in scale-up systems as well as scale-out distributed systems
- Identify the appropriate software technology/tools based on the requirements and design elements contained in a system specification
- Implement complex algorithms in a scalable fashion
- Work closely with product and analytics managers, user interaction designers, and other software engineers to develop new product offerings and improve existing ones
Qualifications/Experience:
- Bachelor's or Master's degree in computer science or a related field
- 10 to 12 years of experience in core Java programming (JDK 1.7/JDK 1.8); familiarity with Big Data systems like Hadoop and Spark is an added bonus
- Familiarity with dependency injection, concurrency, Guice/Spring
- Familiarity with the JDBC API and databases like MySQL, Oracle, Hadoop
- Knowledge of graph databases and traversal
- Knowledge of SOLR/ElasticSearch; cloud-based deployment would be preferred

Job posted by Zoheab Rehaman

UI/UX Designer and Developer

Founded 2007
Services
1-5 employees
Bootstrapped
ui/ux
Magento
HTML/CSS
Android App Development
Cassandra
Bootstrap
Joomla
Location: Bengaluru (Bangalore)
Experience: 1 - 1 years
Salary: 0 - 3 lacs/annum

http://connecttosunil.esy.es - please visit this website; it will tell you everything about me.

Job posted by Sunil Bedre

Freelance Faculty

Founded 2009
Products and services
250+ employees
Profitable
Java
Amazon Web Services (AWS)
Big Data
Corporate Training
Data Science
Digital Marketing
Hadoop
Location: Anywhere, United States, Canada
Experience: 3 - 10 years
Salary: 2 - 10 lacs/annum

To introduce myself, I head Global Faculty Acquisition for Simplilearn.
About My Company: SIMPLILEARN is a company which has transformed 500,000+ careers across 150+ countries with 400+ courses, and yes, we are a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com
If you're interested in teaching, interacting, sharing real-life experiences and have a passion to transform careers, please join hands with us.
Onboarding Process:
• Send your updated CV to my email id, with copies of relevant certificates.
• Sample eLearning access will be shared with a 15-day trial after your registration on our website.
• My Subject Matter Expert will evaluate you on your areas of expertise over a telephonic conversation - duration 15 to 20 minutes.
• Commercial discussion.
• We will register you to our ongoing online session to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and internet connectivity.
• Freelancer Master Service Agreement.
Payment Process:
• Once a workshop / the last day of training for the batch is completed, you share your invoice.
• An automated tracking id will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days from the date the invoice is received, as per policy.
Please share your updated CV to move to the next step of the onboarding process.

Job posted by STEVEN JOHN

Data Scientist

Founded 2014
Product
6-50 employees
Profitable
R
Python
Big Data
Data Science
Hadoop
Machine Learning
Haskell
Location: Ahmedabad
Experience: 3 - 7 years
Salary: 5 - 12 lacs/annum

Job Role:
- Develop and refine algorithms for machine learning from large datasets.
- Write offline as well as efficient runtime programs for meaning extraction and real-time response systems.
- Develop and improve ad targeting based on various criteria like demographics, location, user interests and many more.
- Design and develop techniques for handling real-time budget and campaign updates.
- Be open to learning new technologies.
- Collaborate with team members in building products.
Skills Required:
- MS/PhD in Computer Science or another highly quantitative field
- Minimum 8-10 years of hands-on experience in different machine-learning techniques
- Strong expertise in big-data processing (a combination of the technologies you should be familiar with: Kafka, Storm, Logstash, ElasticSearch, Hadoop, Spark)
- Strong coding skills in at least one object-oriented programming language (e.g. Java, Python)
- Strong problem solving and analytical ability
- Prior 3+ years of experience in advertising technology is preferred
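As an illustration of the ad-targeting modelling described in this role, here is a minimal, self-contained scikit-learn sketch that fits a click-probability model on synthetic demographic features. All feature names and data below are invented for the example; the posting does not prescribe any particular library or model.

```python
"""Toy sketch of a demographics-based click-prediction model.

Everything here (feature names, synthetic labels) is hypothetical and
exists only to make the example runnable end to end.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical user features: age, hour of day, 0/1 flag for metro location.
X = np.column_stack([
    rng.integers(18, 65, n),
    rng.integers(0, 24, n),
    rng.integers(0, 2, n),
])

# Synthetic click labels loosely tied to the features, just for illustration.
logit = 0.03 * (X[:, 0] - 40) + 0.05 * X[:, 2] - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```

In a production ad-targeting stack the same idea would sit behind the streaming technologies the posting lists (Kafka/Spark), with features computed from real user and campaign data rather than synthetic arrays.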

Job posted by Ankit Vyas

Data Scientist

Founded 2016
Product
6-50 employees
Raised funding
R
Python
Big Data
Data Science
Machine Learning
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 1 - 6 years
Salary: 5 - 8 lacs/annum

Transporter is an AI-enabled location stack that helps companies improve their commerce, engagement or operations through their mobile apps for the next generation of online commerce

Job posted by Shailendra Singh

Data Scientist

Founded 2015
Product
6-50 employees
Raised funding
R
Python
Big Data
Data Science
Hadoop
Machine Learning
Haskell
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 14 - 21 lacs/annum

We are looking for people with experience in Machine Learning. We work on DNNs, and if you are deeply interested you can spend time in training as well (provided you are well versed in other ML concepts). The work is exciting and we have an awesome team on board already!

Job posted by Rohan Shravan

Senior Software Engineer

Founded 2014
Product
6-50 employees
Raised funding
Python
Big Data
Hadoop
Scala
Spark
Location: Bengaluru (Bangalore)
Experience: 6 - 10 years
Salary: 5 - 40 lacs/annum

Check our JD: https://www.zeotap.com/job/senior-tech-lead-m-f-for-zeotap/oEQK2fw0

Job posted by Projjol Banerjea

Data Scientist

Founded 2009
Products and services
51-250 employees
Profitable
R
Python
Big Data
Data Science
Hadoop
Machine Learning
Haskell
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 3 - 7 years
Salary: 5 - 10 lacs/annum

JSM is a data sciences company, founded in 2009 with the purpose of helping clients make data-driven decisions. JSM specifically focuses on unstructured data. It is estimated that 90% of all data generated is unstructured and still mostly under-utilized for actionable insights, largely due to the high costs involved in speedily mining these large volumes of data. JSM is committed to creating cost-effective, innovative solutions in pursuit of highly actionable, easy-to-consume insights with a clearly defined ROI.

Job posted by Manas Ranjan Kar

Big Data Engineer

Founded 2007
Product
250+ employees
Raised funding
Java
Cassandra
Apache Hive
Pig
Big Data
Hadoop
JSP
NodeJS (Node.js)
Location: Bengaluru (Bangalore)
Experience: 3 - 10 years
Salary: 16 - 35 lacs/annum

- Passion to build analytics & personalisation platforms at scale
- 4 to 9 years of software engineering experience with a product-based company in the data analytics/big data domain
- Passion for design and development from scratch
- Expert-level Java programming and experience leading the full lifecycle of application development
- Experience in analytics, Hadoop, Pig, Hive, MapReduce, ElasticSearch, MongoDB is an additional advantage
- Strong communication skills, verbal and written

Job posted by Vijaya Kiran

Senior Systems Engineer

Founded 2015
Products and services
6-50 employees
Profitable
Java
Cassandra
Apache Kafka
NodeJS (Node.js)
NOSQL Databases
Location: Bengaluru (Bangalore)
Experience: 3 - 8 years
Salary: 6 - 12 lacs/annum

Senior Systems Engineer
About Intellicar Telematics Pvt Ltd
Intellicar Telematics Private Limited is a vehicular telematics organization founded in 2015 with the vision of connecting businesses and customers to their vehicles in a meaningful way. We provide vehicle owners with the ability to connect and diagnose vehicles remotely in real time. Our team consists of individuals with in-depth knowledge and understanding of automotive engineering, driver analytics and information technology. By leveraging our expertise in the automotive domain, we have created solutions to reduce operational and maintenance costs of large fleets, and ensure safety at all times.
Solutions: Enterprise Fleet Management, GPS Tracking, Remote engine diagnostics, Driver behavior & training.
Technology Integration: GIS, GPS, GPRS, OBD, WEB, Accelerometer, RFID, On-board Storage.
Management team: Accomplished automotive engineers, software developers & data scientists.
Presently serving customers in the following industries: Self Drive Cars, Cab Rentals, Logistics, Driver Training and Construction.
Desired skills as a developer
● Education: BE/B.Tech in Computer Science or a related field.
● 4+ years of experience with scalable distributed systems applications and building scalable multi-threaded server applications.
● Strong programming skills in Java and/or Python on Linux or a Unix-based OS.
● Create new features from scratch, enhance existing features and optimize existing functionality, from conception and design through testing and deployment.
● Work on projects that make our network more stable, faster, and secure.
● Work with our development QA and system QA teams to come up with regression tests that cover new changes to our software.
Desired skills for storage and database management systems
● Understanding of distributed systems like Cassandra, Kafka
● Experience working with Oracle or MySQL
● Experience in database design and normalization
● Create databases, tables, views
● Writing SQL queries and creating stored procedures and triggers
Desired skills for automating operations
● Maintain/enhance/develop test tools and automation frameworks.
● Scripting experience using Bash, Python/Perl.
● Benchmark various server metrics, across releases/hardware, to ensure quality and high performance.
● Investigate and analyze root causes of technical issues / performance bottlenecks.
● Follow good QA methodology, including collaboration with development and support teams to successfully deploy new system components.
● Work with operations support to troubleshoot complex problems in our network for our customers.
Desired skills for UI development (good to have)
● Design and develop next-generation UI using the latest technologies.
● Strong experience with JavaScript, REST APIs, Node.js.
● Experience in information architecture, data visualization and UI prototyping is a plus.
● Help manage change to existing customer applications.
● Design and develop new customer-facing web applications in Java.
● Create a superb user experience focused on usability, performance, and robustness.
Other important responsibilities
● Be an all-rounder, able to work on any technical issue, backend or frontend.
● Work effectively in all phases of software development, from requirements gathering to design, implementation, testing, and release.
● Translate business functional requirements into technical solutions.
● Follow design specifications and standards.
● Assist with implementing development work plans.
● Design, develop & deploy new application components.
● Execute processes to ensure application integrity and availability.
● Implement highly transactional systems efficiently and effectively.
● Troubleshoot and resolve application defects and outages.
● Execute on other directives as needed.
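To give a concrete flavour of the Kafka-based distributed systems work this role mentions, below is a minimal sketch of a telemetry consumer using the kafka-python client. The topic name, broker address and message schema are hypothetical assumptions made for the example, not details from the posting.

```python
"""Minimal sketch of consuming vehicle telemetry from Kafka.

Assumes the kafka-python package; the topic, broker and message fields
below are placeholders for illustration only.
"""
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "vehicle-telemetry",                 # hypothetical topic name
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # e.g. {"vehicle_id": "KA01AB1234", "speed_kmph": 92.5, "ts": 1700000000}
    if event.get("speed_kmph", 0) > 80:
        print(f"overspeed alert for {event.get('vehicle_id')}")
```

In a real fleet-management pipeline, the consumed events would typically be written to a store such as Cassandra for later analytics rather than printed.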

Job posted by Shajo Kalliath

Freelance Trainers

Founded 2015
Services
1-5 employees
Bootstrapped
nano electronics
vehicle dynamics
computational dynamics
Android App Development
Big Data
Industrial Design
Internet of Things (IOT)
Robotics
Location: Anywhere
Experience: 8 - 11 years
Salary: 5 - 10 lacs/annum

We are a team with a mission - a mission to create and deliver great learning experiences to engineering students through various workshops and courses. Buzz us at info@monkfox.com if you are an industry professional and:
- See great scope for improvement in higher technical education across the country and connect with our purpose of impacting it for good.
- Are keen on sharing your technical expertise to enhance the practical learning of students.
- Are innovative in your ways of creating content and delivering it.
- Don't mind earning a few extra bucks while doing this in your free time.
Let us discuss how together we can take technological education in the country to new heights.

Job posted by Tanu Mehra

Data Scientist

Founded 2015
Product
6-50 employees
Bootstrapped
Pyomo
Watson
Python
Big Data
Data Science
Machine Learning
Location: Bengaluru (Bangalore)
Experience: 3 - 6 years
Salary: 3 - 8 lacs/annum

FlyNava Technologies is a start-up organization whose vision is to create the finest airline software for distinct competitive advantages in revenue generation and cost management. The software products have been designed and created by veterans of the airline and airline IT industry to meet the needs of this special customer segment. The software takes an innovative approach to age-old practices of pricing, hedging and aircraft induction, and will be path-breaking to use, encouraging users to rely and depend on its capabilities. We will leverage our competitive edge by incorporating new technology, big data models, operations research and predictive analytics into software products, as a means of creating interest and creativity while using the software. This interest and creativity will increase potential revenues or reduce costs considerably, thereby creating a distinct competitive differentiation. FlyNava is convinced that when airline users create that differentiation easily, their alignment to the products will be self-motivated rather than mandated. A high level of competitive advantage will also flow from the following:
- All products, solutions and services will be copyrighted; FlyNava will retain high IPR value, including its base thesis/research, as the sole owner.
- Existing product companies are investing in other core areas, while our business areas are still predominantly manual processes.
- Solutions are based on master's theses which need 2-3 years to complete, and more time to make them relevant for software development; expertise in these areas is few and far between.
Responsibilities:
- Responsible for collecting, cataloguing and filtering data, and benchmarking solutions
- Contribute to model-related data analytics and reporting
- Contribute to secured software release activities
Education & Experience:
- B.E/B.Tech or M.Tech/MCA in Computer Science / Information Science / Electronics & Communication
- 3 - 6 years of experience
Must Have:
- Strong in data analytics via Pyomo (for optimization), scikit-learn (for small-data ML algorithms) and MLlib (Apache Spark big-data ML algorithms)
- Strong in representing metrics and reports via JSON
- Strong in scripting with Python
- Familiar with machine learning and pattern recognition algorithms
- Familiar with the Software Development Life Cycle
- Effective interpersonal skills
Good to have: Social analytics, Big data
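Since the posting names Pyomo for optimization, here is a minimal, hypothetical Pyomo sketch of a two-fare-class seat allocation problem, loosely in the spirit of the airline revenue work described above. The fare and demand numbers are invented for illustration, and a solver such as GLPK must be installed separately for solve() to run.

```python
"""Tiny Pyomo sketch of a two-fare-class seat allocation LP.

All numbers (capacity, fares, demand forecasts) are made up for the
example and do not come from the posting.
"""
from pyomo.environ import (ConcreteModel, Constraint, NonNegativeReals,
                           Objective, SolverFactory, Var, maximize)

m = ConcreteModel()

# Seats sold in a hypothetical discount and full-fare class.
m.discount = Var(within=NonNegativeReals)
m.full = Var(within=NonNegativeReals)

m.capacity = Constraint(expr=m.discount + m.full <= 180)  # aircraft seats
m.demand_full = Constraint(expr=m.full <= 60)             # forecast full-fare demand
m.demand_disc = Constraint(expr=m.discount <= 150)        # forecast discount demand

# Maximize revenue at illustrative fares of 4000 and 9000 per seat.
m.revenue = Objective(expr=4000 * m.discount + 9000 * m.full, sense=maximize)

SolverFactory("glpk").solve(m)
print("discount seats:", m.discount.value, "full-fare seats:", m.full.value)
```

The same pattern scales to the larger pricing and induction models the description alludes to; only the variables, constraints and data sources grow.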

Job posted by Thahaseen Salahuddin

Data Scientist

Founded 2014
Services
6-50 employees
Profitable
R
Artificial Neural Networks
UIMA
Python
Big Data
Hadoop
Machine Learning
Natural Language Processing (NLP)
Location: Navi Mumbai
Experience: 4 - 8 years
Salary: 5 - 15 lacs/annum

Nextalytics is an offshore research, development and consulting company based in India that focuses on high quality and cost effective software development and data science solutions. At Nextalytics, we have developed a culture that encourages employees to be creative, innovative, and playful. We reward intelligence, dedication and out-of-the-box thinking; if you have these, Nextalytics will be the perfect launch pad for your dreams. Nextalytics is looking for smart, driven and energetic new team members.

Job posted by Harshal Patni

Big Data Developer

Founded 2008
Product
6-50 employees
Raised funding
Spark Streaming
Aerospike
Cassandra
Apache Kafka
Big Data
Elastic Search
Scala
Location: Bengaluru (Bangalore)
Experience: 1 - 7 years
Salary: 0 - 0 lacs/annum

- Develop analytic tools, working on big data and distributed systems
- Provide technical leadership on developing our core analytics platform
- Lead development efforts on product features using Scala/Java
- Demonstrable excellence in innovation, problem solving, analytical skills, data structures and design patterns
- Expert in building applications using Spark and Spark Streaming
- Exposure to NoSQL (HBase/Cassandra), Hive, Pig Latin, Mahout
- Extensive experience with Hadoop and machine learning algorithms
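As a small illustration of the Spark work this role centres on, below is a minimal PySpark sketch that aggregates a hypothetical clickstream by event type. The input path and column names are assumptions made for the example; the posting itself targets Scala/Java, so treat this only as a language-agnostic outline of the idea.

```python
"""Minimal PySpark sketch: count events by type in a hypothetical clickstream.

The HDFS path and column names are placeholders for illustration.
"""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-counts").getOrCreate()

# Hypothetical clickstream files with columns: user_id, event_type, ts.
events = spark.read.json("hdfs:///data/events/*.json")

counts = (
    events
    .groupBy("event_type")
    .agg(F.count("*").alias("n"))
    .orderBy(F.desc("n"))
)

counts.show(20, truncate=False)
spark.stop()
```

The streaming variant of the same aggregation would use readStream/writeStream instead of a batch read, which is where the Spark Streaming experience the posting asks for comes in.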

Job posted by Katreddi Kiran Kumar

Fullstack Developer

Founded 2015
Product
51-250 employees
Raised funding
Javascript
PHP
Amazon Web Services (AWS)
Big Data
LAMP Stack
MongoDB
NodeJS (Node.js)
Location: Mumbai, NCR (Delhi | Gurgaon | Noida)
Experience: 0 - 15 years
Salary: 3 - 10 lacs/annum

We're busy solving some of the hardest problems while having great fun doing it. Come join us if you want to be part of a young and dynamic team who is working on bleeding edge tech.

Job posted by Pandurang Nayak

Senior Software Engineer

Founded 2007
Products and services
51-250 employees
Bootstrapped
Java
C/C++
Agile/Scrum
Algorithms
Big Data
C#
Data Structures
Scala
Location: Pune
Experience: 1 - 6 years
Salary: 2 - 10 lacs/annum

- Challenging projects
- Steep learning curve
- Fast-growing and future-focused organization
- Leadership development opportunities

Job posted by Ambika Singh
Why apply on CutShort?
Connect with actual hiring teams and get their fast response. No 3rd party recruiters. No spam.