
Locations

Bengaluru (Bangalore)

Experience

3 - 8 years

Salary

INR 10L - 20L

Skills

Spark
Apache Storm
Cassandra

Job description

Bachelor’s or Master’s degree in computer science or software engineering. Experience with object-oriented design, coding and testing patterns, as well as experience engineering (commercial or open source) software platforms and large-scale data infrastructures. Ability to architect highly scalable distributed systems using different open source tools. Experience building high-performance algorithms. Extensive knowledge of programming or scripting languages such as Python and Scala, plus Apache Spark. Experience with different (NoSQL or RDBMS) databases and tools such as MongoDB, Google BigQuery, Cassandra, Elasticsearch, HBase and Impala, and with building data pipelines. Experience building data processing systems with Hadoop and Hive using Python. Good exposure to AWS Lambda, Kinesis, EMR, Redshift and Kafka.
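
For a flavour of the Hadoop/Hive-with-Python work the description asks for, here is a minimal PySpark sketch of a Hive-backed batch job. It is an illustration only: the table names, column names and rollup logic are hypothetical, and it assumes Hive support is configured for the Spark cluster.

```python
# Minimal PySpark sketch: read a Hive table, aggregate, and write back.
# Table and column names here are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-event-rollup")
    .enableHiveSupport()          # lets Spark read and write Hive tables
    .getOrCreate()
)

events = spark.table("raw.events")            # hypothetical Hive table

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

daily_counts.write.mode("overwrite").saveAsTable("analytics.daily_event_counts")
spark.stop()
```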

About the company

EdGE Networks offers next-gen HR technology solutions for talent acquisition and workforce optimization, powered by Data Science and Artificial Intelligence.

Founded

2012

Type

Product

Size

51-250 employees

Stage

Profitable

Similar jobs

Server Side Engineer

Founded 2012
Products and services
51-250 employees
Profitable
Java
Python
Machine Learning
Cassandra
Scala
Apache
Apache Kafka
Location
Hyderabad
Experience
3 - 7 years

Experience: Minimum of 3 years of relevant development experience
Qualification: BS in Computer Science or equivalent
Skills Required:
• Server-side developers with good server-side development experience in Java and/or Python
• Exposure to data platforms (Cassandra, Spark, Kafka) will be a plus
• Interest in Machine Learning will be a plus
• Good problem-solving and communication skills
• Ability to deliver in an extremely fast-paced development environment
• Ability to handle ambiguity
• Should be a good team player
Job Responsibilities:
• Learn the technology area where you are going to work
• Develop bug-free, unit-tested and well-documented code as per requirements
• Stringently adhere to delivery timelines
• Provide mentoring support to Software Engineers and/or Associate Software Engineers
• Any other responsibilities as specified by the reporting authority
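
Since Cassandra exposure is listed as a plus, the sketch below shows the basic shape of working with Cassandra from Python via the DataStax driver. It is a minimal illustration; the contact point, keyspace and table are hypothetical.

```python
# Minimal sketch using the DataStax cassandra-driver for Python:
# connect, create a table, then insert and read one row.
# The contact point, keyspace and table are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])          # contact point(s) of the cluster
session = cluster.connect("demo_ks")      # assumes this keyspace already exists

session.execute(
    """
    CREATE TABLE IF NOT EXISTS users (
        user_id text PRIMARY KEY,
        name    text
    )
    """
)

session.execute(
    "INSERT INTO users (user_id, name) VALUES (%s, %s)",
    ("u-1001", "Asha"),
)

row = session.execute("SELECT name FROM users WHERE user_id = %s", ("u-1001",)).one()
print(row.name if row else "not found")

cluster.shutdown()
```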

Job posted by Nisha Sharma

Database Architect

Founded 2017
Products and services
6-50 employees
Raised funding
ETL
Data Warehouse (DWH)
DWH Cloud
Hadoop
Apache Hive
Spark
MongoDB
PostgreSQL
Location
Bengaluru (Bangalore)
Experience
5 - 10 years

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include the development of Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.
Key Responsibilities:
- Create a GRAND Data Lake and Warehouse which pools all the data from the different regions and stores of GRAND in GCC
- Ensure source data quality measurement, enrichment and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure
Skills Needed:
- Very strong in SQL; demonstrated experience with RDBMS (e.g. SQL, Postgres, MongoDB); Unix shell scripting preferred
- Experience with UNIX and comfortable working with the shell (bash or Korn shell preferred)
- Good understanding of data warehousing concepts and of big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
- Working with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise and others
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screening Hadoop cluster job performance and capacity planning
- Monitoring Hadoop cluster connectivity and security
- File system management and monitoring
- HDFS support and maintenance
- Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades when required
- Defining, developing, documenting and maintaining Hive-based ETL mappings and scripts
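
As a rough sketch of what a Hive-based ETL mapping driven from Python might look like (the last responsibility above), the snippet below uses PyHive. The host, databases, table and column names are hypothetical, and a real job would add scheduling, logging and error handling.

```python
# Minimal sketch of a Hive-based ETL mapping run from Python via PyHive.
# The host, databases, tables and columns are hypothetical placeholders.
from pyhive import hive

conn = hive.Connection(host="hive-server.example.com", port=10000, database="staging")
cursor = conn.cursor()

# A simple "mapping": cleanse a staging table into a curated warehouse table.
cursor.execute(
    """
    INSERT OVERWRITE TABLE dwh.orders_clean
    SELECT order_id,
           customer_id,
           CAST(order_total AS DECIMAL(12, 2)) AS order_total,
           to_date(order_ts)                   AS order_date
    FROM   staging.orders_raw
    WHERE  order_id IS NOT NULL
    """
)

cursor.close()
conn.close()
```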

Job posted by Rahul Malani

Senior Developer (Data Science, Machine Learning, chatbot)

Founded 2002
Products and services
6-50 employees
Profitable
Data Science
Chatbot
Alexa
Machine Learning
Kaggle
Cassandra
NoSQL Databases
Location
Bengaluru (Bangalore), Goa, NCR (Delhi | Gurgaon | Noida)
Experience
4 - 10 years

About Srijan: Srijan Technologies Pvt Ltd. is a 15+ year old enterprise web content management consulting and development company with expertise in building high-traffic websites and complex web applications. Over this period we have served over 200 clients across Asia, Europe, the United States and the Middle East. We are the only Acquia Enterprise partner in India, and one of the largest code contributors to Drupal. We also aspire to be among the largest contributors to Ruby on Rails, Data Science and JavaScript (NodeJS, AngularJS/ReactJS).
Job Description: In this role, the candidate will be responsible for building, testing and delivering cutting-edge technology applications, including scalable web interfaces (Node.js or AngularJS) and web-based chatbot (virtual assistant) solutions. Candidates are expected to play a consultative role with a comprehensive understanding of intelligent virtual assistants.
• Has successfully led and worked on developing a chatbot/virtual assistant for a large enterprise.
• Has played an active part in developing a Google/Amazon/Microsoft/Apple virtual assistant.
• Experienced in ingesting large sets of unstructured text data, including chat logs and voice logs, and modeling it in a graph DB using semantic annotation.
• Experience in machine learning, data and text mining, and predictive analysis.
• Able to understand business requirements and deliver quick prototypes; strong communication skills.
• Experienced in modeling and coding NLP algorithms for a conversational bot in production.
• Experience with NoSQL databases like Cassandra.
• Experience with distributed caching frameworks like Hazelcast, Ignite, Redis.
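
Distributed caching (Hazelcast, Ignite, Redis) is one of the listed skills; purely as an illustration, here is a minimal Python sketch of keeping per-user chatbot session state in Redis. The key format, TTL and state fields are hypothetical choices.

```python
# Minimal sketch: caching per-user conversation state for a chatbot in Redis.
# The host, key naming scheme, TTL and state fields are hypothetical.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def save_session(user_id: str, state: dict, ttl_seconds: int = 1800) -> None:
    """Store the bot's conversation state for a user, expiring after a TTL."""
    r.setex(f"chat:session:{user_id}", ttl_seconds, json.dumps(state))

def load_session(user_id: str) -> dict:
    """Fetch the conversation state, or start fresh if it has expired."""
    raw = r.get(f"chat:session:{user_id}")
    return json.loads(raw) if raw else {"turn": 0, "slots": {}}

state = load_session("u-42")
state["turn"] += 1
save_session("u-42", state)
```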

Job posted by Sayali Prabhudesai

Data Engineers

Founded 2002
Products and services
6-50 employees
Profitable
Python
Cassandra
NoSQL Databases
Location
Goa, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
Experience
2 - 5 years

Srijan Technologies Pvt Ltd. is a 14 year old enterprise web content management consulting and development company with expertise in building high-traffic websites and complex web applications. Over this period we have served over 200 clients across Asia, Europe, the United States and the Middle East. We are the only Acquia Enterprise partner in India.
Job Description: We are looking for a Data Engineer responsible for managing the interchange of data between the server and the users. Your primary focus will be the development of all server-side logic, ensuring high performance and responsiveness to requests from the front end. You will also be responsible for integrating the front-end elements built by your co-workers into the application; therefore, a basic understanding of front-end technologies is necessary as well.
Responsibilities:
● Writing reusable, testable and efficient code.
● Understanding the client's business needs and developing a software solution with the necessary validations.
● Attending client calls and demonstrations to the client.
● Providing assistance, guidance and support to other developers when necessary; reviewing peers' code.
● Maintaining appropriate documentation with code.
● Undertaking quality assurance and testing for developed functionality.
Communication Responsibilities:
● Deliver engaging, informative and well-organized presentations.
● Resolve and/or escalate issues in a timely fashion.
Other Responsibilities:
● Disseminate technology best practices.
● Work with senior developers on the adoption of new technologies within our technology practice.
Requirements, Skills, Qualifications:
● Expert in Python, with knowledge of at least one Python web framework such as Django or Flask, depending on your technology stack.
● Familiarity with ORM (Object Relational Mapper) libraries.
● Able to integrate multiple data sources and databases into one system.
● Understanding of the threading limitations of Python and of multi-process architecture.
● Good understanding of server-side templating languages such as Jinja2, Mako, etc.
● Good understanding of MySQL and relational databases.
● Experience with Cassandra or other NoSQL databases is a plus.
● Experience with AWS, including Lambda, DynamoDB and Cognito, is a major plus.
● Expertise in JavaScript and mainstream JavaScript libraries such as jQuery, and working knowledge of Ajax.
● Good understanding of web technologies and HTTP. Good Linux skills. HTML and CSS skills commensurate with years of experience.
● Git/version control knowledge and skills.
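
To illustrate the server-side stack described above (a Python web framework, a relational database, Jinja2-style templating), here is a minimal sketch using Flask and SQLAlchemy. The database URL, table and template name are hypothetical.

```python
# Minimal sketch: a Flask view that queries a relational database
# and renders a Jinja2 template. The database URL, users table and
# users.html template are hypothetical placeholders.
from flask import Flask, render_template
from sqlalchemy import create_engine, text

app = Flask(__name__)
engine = create_engine("mysql+pymysql://user:password@localhost/demo")

@app.route("/users")
def list_users():
    with engine.connect() as conn:
        rows = conn.execute(text("SELECT id, name FROM users")).fetchall()
    # users.html is assumed to live in the templates/ folder
    return render_template("users.html", users=rows)

if __name__ == "__main__":
    app.run(debug=True)
```

Running the module starts a local development server, and the page is served at http://127.0.0.1:5000/users.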

Job posted by Sayali Prabhudesai

Data Engineer

Founded 2007
Products and services
6-50 employees
Raised funding
Spark
Python
Big Data
Cloud Computing
Location
Bengaluru (Bangalore)
Experience
2 - 7 years

We’re looking for an experienced Data Engineer with strong cloud technology experience to join our team and help our big data team take our products to the next level. This is a hands-on role: you will be required to code and develop the product in addition to your leadership role. You need to have a strong software development background and love working with cutting-edge big data platforms. You are expected to bring extensive hands-on experience with Amazon Web Services (Kinesis streams, EMR, Redshift), Spark and other big data processing frameworks and technologies, as well as advanced knowledge of RDBMS and data warehousing solutions.
REQUIREMENTS
- Strong background working on large-scale data warehousing and data processing solutions.
- Strong Python and Spark programming experience.
- Strong experience in building big data pipelines.
- Very strong SQL skills are an absolute must.
- Good knowledge of OO, functional and procedural programming paradigms.
- Strong understanding of various design patterns.
- Strong understanding of data structures and algorithms.
- Strong experience with Linux operating systems.
- At least 2+ years of experience working as a software developer or in a data-driven environment.
- Experience working in an agile environment.
- Lots of passion, motivation and drive to succeed!
Highly desirable:
- Understanding of agile principles, specifically Scrum.
- Exposure to Google Cloud Platform services such as BigQuery, Compute Engine, etc.
- Docker, Puppet, Ansible, etc.
- Understanding of the digital marketing and digital advertising space would be advantageous.
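
The posting asks for hands-on AWS experience with Kinesis streams; as an illustration only, here is a minimal boto3 sketch that publishes JSON events to a Kinesis data stream. The stream name, region and event fields are hypothetical, and AWS credentials are assumed to be configured in the environment.

```python
# Minimal sketch: pushing JSON events onto an AWS Kinesis data stream with boto3.
# The stream name, region and event shape are hypothetical.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")

def publish_event(event: dict) -> None:
    """Serialize an event and put it on the stream, partitioned by user id."""
    kinesis.put_record(
        StreamName="clickstream-events",          # hypothetical stream
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("user_id", "anonymous")),
    )

publish_event({"user_id": 42, "action": "page_view", "path": "/pricing"})
```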

Job posted by Siddharth Manuja

Cassandra Engineer/Developer/Architect

Founded 2017
Products and services
6-50 employees
Bootstrapped
Cassandra
Linux/Unix
JVM
Location
Bengaluru (Bangalore)
Experience
1 - 3 years

www.aaknet.co.in/careers/careers-at-aaknet.html
You are extraordinary, a rock star who has hardly found a place to leverage or challenge your potential, and has not spotted a skyrocketing opportunity yet? Come play with us and face the challenges we can throw at you; chances are you might be humbled (positively), but do not take it too seriously! Please be informed that we rate CHARACTER and attitude as high as, if not higher than, your great skills, experience and sharpness. :) Best wishes & regards, Team Aak!

Job posted by Debdas Sinha

Python Developer

MySQL
MongoDB
Spark
Apache Hive
Location
Chennai
Experience
2 - 7 years

Full Stack Developer for the Big Data practice. The role will include everything from architecture to ETL to model building to visualization.

Job posted by Bavani T

Big Data Engineer

Founded 2012
Product
51-250 employees
Profitable
Spark
Apache Storm
Cassandra
Location
Bengaluru (Bangalore)
Experience
3 - 8 years

Bachelor’s or Master’s degree in computer science or software engineering. Experience with object-oriented design, coding and testing patterns, as well as experience engineering (commercial or open source) software platforms and large-scale data infrastructures. Ability to architect highly scalable distributed systems using different open source tools. Experience building high-performance algorithms. Extensive knowledge of programming or scripting languages such as Python and Scala, plus Apache Spark. Experience with different (NoSQL or RDBMS) databases and tools such as MongoDB, Google BigQuery, Cassandra, Elasticsearch, HBase and Impala, and with building data pipelines. Experience building data processing systems with Hadoop and Hive using Python. Good exposure to AWS Lambda, Kinesis, EMR, Redshift and Kafka.

Job posted by Naveen Taalanki

Data Crawler

Founded 2012
Product
51-250 employees
Profitable
Python
Selenium WebDriver
Scrapy
Web crawling
Apache Nutch
output.io
Crawlera
Cassandra
Location
Bengaluru (Bangalore)
Experience
2 - 7 years

Brief About the Company: EdGE Networks Pvt. Ltd. is an innovative HR technology solutions provider focused on helping organizations meet their talent-related challenges. With our expertise in Artificial Intelligence, Semantic Analysis, Data Science, Machine Learning and Predictive Modelling, we enable HR organizations to lead with data and intelligence. Our solutions significantly improve workforce availability, billing and allocation, and drive straight bottom-line impact. For more details, please log on to www.edgenetworks.in and www.hirealchemy.com.
Summary of the Role: We are looking for a skilled and enthusiastic Data Procurement Specialist for web crawling and public data scraping.
- Design, build and improve our distributed system of web crawlers.
- Integrate with third-party APIs to improve results.
- Integrate the crawled and scraped data into our databases.
- Create more and better ways to crawl relevant information.
- Strong knowledge of web technologies (HTML, CSS, JavaScript, XPath, RegEx).
- Good knowledge of Linux command-line tools.
- Experienced in Python, with knowledge of the Scrapy framework.
- Strong knowledge of Selenium (Selenium WebDriver is a must).
- Familiarity with crawl frontier frameworks like Frontera.
- Familiarity with distributed messaging middleware (Kafka).
Desired:
- Practical, hands-on experience with modern Agile development methodologies.
- Ability to thrive in a fast-paced, test-driven, collaborative and iterative programming environment.
- Experience with web crawling projects.
- Experience with NoSQL databases (HBase, Cassandra, MongoDB, etc.).
- Experience with CI tools (Git, Jenkins, etc.).
- Experience with distributed systems.
- Familiarity with data loading tools like Flume.
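
As a rough sketch of the Scrapy work this role centres on, here is a minimal spider against the public practice site quotes.toscrape.com. A production crawler would layer in a crawl frontier (Frontera), Selenium for JavaScript-heavy pages, and pipelines into the databases mentioned above.

```python
# Minimal Scrapy spider sketch for the kind of public-data crawling described above.
# quotes.toscrape.com is a public practice site; the selectors match its markup.
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Extract one item per quote block using CSS selectors
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow pagination until there is no "Next" link
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

It can be run with "scrapy runspider quotes_spider.py -o quotes.json"; the file names are arbitrary.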

Job posted by Naveen Taalanki

Senior Software Engineer- Ruby On Rails

Founded 2016
Products and services
6-50 employees
Profitable
Ruby on Rails (ROR)
Javascript
MVC Framework
MongoDB
Cassandra
MySQL
Location
Pune
Experience
2 - 10 years

Responsibilities:
- Developing intelligent and scalable engineering solutions from scratch.
- Working on high- and low-level product designs and roadmaps along with a team of ace developers.
- Building products with bleeding-edge technologies using Ruby on Rails.
- Building innovative products for customers in Cloud, DevOps, Analytics, AI/ML and lots more.

Job posted by Kalpak Shah