Job Description:
We are looking for someone who can work with the platform or analytics vertical to extend and scale our product line. Every product line depends on other products within the LimeTray ecosystem, and the SSE-2 is expected to collaborate with different internal teams to own stable and scalable releases. While every product line has its own tech stack, the person is expected to be comfortable working across all of them as and when needed. Some of the technologies/frameworks we work with: Microservices, Java, Node, MySQL, MongoDB, Angular, React, Kubernetes, AWS, Python.

Requirements:
- Minimum 3 years of work experience in building, managing and maintaining Python-based backend applications
- B.Tech/BE in CS from Tier 1/2 institutes
- Strong fundamentals of data structures and algorithms
- Experience in Python and design patterns
- Expert in Git, unit tests, technical documentation and other development best practices
- Worked with SQL and NoSQL databases (Cassandra, MySQL)
- Understanding of async programming; knowledge of messaging services such as pub/sub or streaming (e.g. Kafka, ActiveMQ, RabbitMQ)
- Understanding of algorithms, data structures and server management
- Understanding of microservice or distributed architecture
- Delivered high-quality work with significant contributions
- Experience in handling small teams
- Good debugging skills
- Good analytical and problem-solving skills

What we are looking for:
- Ownership driven - owns end-to-end development
- Team player - works well in a team; collaborates within and outside the team
- Communication - speaks and writes clearly and articulately; maintains this standard in all forms of written communication, including email
- Proactive and persistent - acts without being told to and demonstrates a willingness to go the distance to get something done
- Develops an emotional bond with the product and does what is good for the product
- Customer-first mentality - understands customers' pain and works towards solutions
- Honest and always keeps high standards; expects the same from the team
- Strict on quality and stability of the product
Role Brief:
6+ years of demonstrable experience designing technological solutions to complex data problems, and developing and testing modular, reusable, efficient and scalable code to implement those solutions.

Brief about Fractal & Team:
Fractal Analytics helps leading Fortune 500 companies leverage Big Data, analytics, and technology to drive smarter, faster and more accurate decisions in every aspect of their business. Our Big Data capability team is hiring technologists who can produce beautiful and functional code to solve complex analytics problems. If you are an exceptional developer who loves to push the boundaries to solve complex business problems with innovative solutions, then we would like to talk with you.

Job Responsibilities:
- Provide technical leadership in the Big Data space (Hadoop stack such as MapReduce, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; NoSQL stores such as Cassandra and HBase) across Fractal, and contribute to open-source Big Data technologies.
- Visualize and evangelize next-generation infrastructure in the Big Data space (batch, near real-time and real-time technologies).
- Evaluate and recommend a Big Data technology stack that aligns with the company's technology.
- Be passionate about continuously learning, experimenting with, applying and contributing to cutting-edge open-source technologies and software paradigms.
- Drive significant technology initiatives end to end and across multiple layers of architecture.
- Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across the company.
- Provide strong technical expertise (performance, application design, stack upgrades) to lead platform engineering.
- Define and drive best practices that can be adopted in the Big Data stack; evangelize these best practices across teams and BUs.
- Drive operational excellence through root cause analysis and continuous improvement for Big Data technologies and processes, and contribute back to the open-source community.
- Provide technical leadership and be a role model to data engineers pursuing a technical career path in engineering.
- Provide and inspire innovations that fuel the growth of Fractal as a whole.

Experience (Must Have):
Ideally, this would include work on the following technologies:
- Expert-level proficiency in at least one of Java, C++ or Python (preferred); Scala knowledge is a strong advantage.
- Strong understanding of and experience in distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN; MapReduce and HDFS) and associated technologies - one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc. Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) is a strong advantage.
- Operating knowledge of cloud computing platforms (AWS, especially the EMR, EC2, S3 and SWF services, and the AWS CLI).
- Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks.
- Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works.
- A technologist who loves to code and design.
In addition, the ideal candidate would have great problem-solving skills, and the ability and confidence to hack their way out of tight corners.
Relevant Experience:
- Java, Python or C++ expertise
- Linux environment and shell scripting
- Distributed computing frameworks (Hadoop or Spark)
- Cloud computing platforms (AWS)

Good to have:
- A statistical or machine learning DSL such as R
- Distributed and low-latency (streaming) application architecture
- Row-store distributed DBMSs such as Cassandra
- Familiarity with API design

Qualification: B.E/B.Tech/M.Tech in Computer Science or a related technical degree, or equivalent
Job Requirements:
- Installation, configuration and administration of Big Data components (including Hadoop/Spark) for batch and real-time analytics and data hubs
- Capable of processing large sets of structured, semi-structured and unstructured data
- Able to assess business rules, collaborate with stakeholders and perform source-to-target data mapping, design and review
- Familiar with data architecture, including data ingestion pipeline design, Hadoop information architecture, data modeling, data mining, machine learning and advanced data processing
- Optional: a visual communicator, able to convert and present data in easily comprehensible visualizations using tools like D3.js and Tableau
- Enjoys being challenged and solving complex problems on a daily basis
- Proficient in executing efficient and robust ETL workflows
- Able to work in teams and collaborate with others to clarify requirements
- Able to tune Hadoop solutions to improve performance and the end-user experience
- Strong coordination and project management skills to handle complex projects
- Engineering background
We are looking for a Big Data Engineer with:
- At least 3-5 years of experience as a Big Data Developer/Engineer
- Experience with Big Data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc.
- Experience in architecting data ingestion, storage and consumption models
- Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
- Knowledge of various ETL tools and techniques
We are looking for a technically sound and excellent trainer on Big Data technologies. Get an opportunity to build your reputation in the industry and gain visibility. Host regular sessions on Big Data-related technologies and get paid to learn.
Our company is working on some really interesting projects in the Big Data domain across various fields (utility, retail, finance). We work with some large corporates and MNCs around the world. As a Big Data Engineer here, you will deal with big data in structured and unstructured form, as well as streaming data from Industrial IoT infrastructure. You will work on cutting-edge technologies and explore many others, while also contributing back to the open-source community. You will get to know and work on an end-to-end processing pipeline that covers everything from storage and processing to machine learning and visualization.