Technical Skills:
- DB: Neo4j (GraphDB) / Oracle DB
- Programming language: Java/J2EE
- GUI skills: Angular 2+, D3.js (diagramming)
- Other skills: Apache Solr, Drools, Kafka integration, Spring, REST, web services, LDAP, SSO
- Domain: Telecom, OSS, Inventory (a bonus)

Roles and Responsibilities:
- Responsible for the design, development, testing, documentation and analysis of modules or features of new or upgraded software systems and products
- Develops and/or executes implementation according to project plans and priorities
- Creating/extending GUI components such as grids and reports using the core product modules
- Creating network, service-topology and device GUI diagrams for Ethernet and SDN products such as EDI, EPL, EVPL and SD-WAN
- Creating device and service models for networking technologies such as DWDM/OTN, SONET, MPLS, GPON and FTTH
- Understanding the existing code and extending the core product components
- Writing queries and procedures on databases, primarily Oracle and Neo4j
- Writing Java/J2EE code
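The topology work described above, tracing connectivity between devices for a service, is at heart a graph-traversal problem; in Neo4j this would be expressed as a Cypher path query, but the idea can be sketched in plain Python (the device names and adjacency model here are hypothetical, not from the posting):

```python
from collections import deque

# Hypothetical device topology: adjacency list keyed by device name.
TOPOLOGY = {
    "edge-router-1": ["agg-switch-1"],
    "agg-switch-1": ["edge-router-1", "core-switch-1"],
    "core-switch-1": ["agg-switch-1", "agg-switch-2"],
    "agg-switch-2": ["core-switch-1", "edge-router-2"],
    "edge-router-2": ["agg-switch-2"],
}

def service_path(src, dst, topology):
    """Breadth-first search for the shortest device path between two endpoints."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor in topology.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # endpoints are not connected

print(service_path("edge-router-1", "edge-router-2", TOPOLOGY))
```

The equivalent Cypher would use a shortest-path pattern over `CONNECTED_TO` relationships; the in-memory version above only illustrates the traversal concept.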
Must-have language: Java
Nice to have: Python
Must-have frameworks and technologies: Spring Boot, Kafka, MQTT, Docker/Kubernetes, REST APIs
Persistence layer: MongoDB, Elasticsearch, any graph DB (Neo4j/ArangoDB), SQL, HBase
Must have: exposure to large-scale architecture (queues, microservices, functional programming)
Must have: strong data structures and design principles
Expert in developing Node.js applications, with a strong understanding of NPM and modular application development; proficiency and hands-on experience with Node.js, Express, sockets, MongoDB/Elasticsearch/Redis/MySQL, and Apache Kafka/Google Pub/Sub. Experience working with the MEAN stack is a plus.
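The "concept of queues" called out above is the producer/consumer pattern that underlies Kafka- or MQTT-based microservices. A minimal in-process sketch, using Python's standard library as a stand-in for a real broker (the message payloads and "business logic" are invented for illustration):

```python
import json
import queue
import threading

# In-memory stand-in for a broker topic (Kafka/MQTT in a real deployment).
events = queue.Queue()
processed = []

def worker():
    """Consume messages until a None sentinel arrives, mimicking a microservice consumer."""
    while True:
        message = events.get()
        if message is None:
            break
        payload = json.loads(message)          # messages arrive serialized, as on a wire
        processed.append(payload["device"].upper())  # hypothetical business logic
        events.task_done()

t = threading.Thread(target=worker)
t.start()
for device in ["sensor-a", "sensor-b"]:        # producer side
    events.put(json.dumps({"device": device}))
events.put(None)                               # signal shutdown
t.join()
print(processed)
```

A real broker adds persistence, partitioning and consumer groups, but the decoupling between producer and consumer is the same.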
Job Requirements:
- Installation, configuration and administration of Big Data components (including Hadoop/Spark) for batch and real-time analytics and data hubs
- Capable of processing large sets of structured, semi-structured and unstructured data
- Able to assess business rules, collaborate with stakeholders and perform source-to-target data mapping, design and review
- Familiar with data architecture: data-ingestion pipeline design, Hadoop information architecture, data modeling, data mining, machine learning and advanced data processing
- Optional: visual communicator, able to convert and present data as easily comprehensible visualizations using tools like D3.js or Tableau
- Enjoys being challenged and solving complex problems on a daily basis
- Proficient in executing efficient and robust ETL workflows
- Able to work in teams and collaborate with others to clarify requirements
- Able to tune Hadoop solutions to improve performance and the end-user experience
- Strong coordination and project-management skills to handle complex projects
- Engineering background
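The source-to-target mapping mentioned above is the core of most ETL work: each source column is mapped to a target column plus a cleaning/conversion rule. A minimal sketch (the column names, rules and sample row are all hypothetical):

```python
# Hypothetical source-to-target mapping: source column -> (target column, transform).
MAPPING = {
    "cust_nm": ("customer_name", str.strip),
    "ord_amt": ("order_amount", float),
    "ord_dt": ("order_date", lambda v: v.replace("/", "-")),
}

def transform(record):
    """Apply the mapping to one source record, dropping unmapped columns."""
    return {target: fn(record[source])
            for source, (target, fn) in MAPPING.items()
            if source in record}

source_rows = [
    {"cust_nm": "  Acme Corp ", "ord_amt": "199.50", "ord_dt": "2021/03/14", "junk": "x"},
]
target_rows = [transform(row) for row in source_rows]
print(target_rows)
```

At Hadoop/Spark scale the same mapping table would drive a distributed job rather than a list comprehension, but the design artifact (the mapping itself) is identical.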
Requirements:
- Minimum 4 years' work experience in building, managing and maintaining analytics applications
- B.Tech/BE in CS/IT from Tier 1/2 institutes
- Strong fundamentals of data structures and algorithms
- Good analytical and problem-solving skills
- Strong hands-on experience in Python
- In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ)
- Experience in building data pipelines and real-time analytics systems
- Experience with SQL (MySQL) and NoSQL (MongoDB/Cassandra) databases is a plus
- Understanding of service-oriented architecture
- Track record of delivering high-quality work with significant contributions
- Expert in Git, unit tests, technical documentation and other development best practices
- Experience in handling small teams
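Real-time analytics systems of the kind listed above typically compute windowed aggregates over a message stream. As a minimal in-process sketch (broker consumption omitted; the window size and values are invented):

```python
from collections import deque

class SlidingAverage:
    """Rolling mean over the last `size` readings, a basic real-time analytics primitive."""
    def __init__(self, size):
        self.window = deque(maxlen=size)  # old readings fall off automatically

    def update(self, value):
        self.window.append(value)
        return sum(self.window) / len(self.window)

avg = SlidingAverage(size=3)
results = [avg.update(v) for v in [10, 20, 30, 40]]
print(results)
```

In production the `update` calls would be driven by a Kafka/RabbitMQ consumer loop, and the aggregate would be pushed to a dashboard or alerting rule.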
Looking for a technically sound and excellent trainer in big data technologies. This is an opportunity to build a reputation and gain visibility in the industry. Host regular sessions on big-data-related technologies and get paid while you learn.
Our company is working on some genuinely interesting projects in the Big Data domain across various fields (utility, retail, finance), with major corporates and MNCs around the world. As a Big Data Engineer here, you will deal with big data in structured and unstructured form, as well as streaming data from Industrial IoT infrastructure. You will work with cutting-edge technologies, explore many others, and contribute back to the open-source community. You will get to know and work on an end-to-end processing pipeline covering storage, processing, machine learning, visualization and more.