Must be willing to relocate to Chandigarh/Mohali; you will work in our state-of-the-art office in Quark City, Mohali. We're looking for a data engineer. Responsibilities include building out infrastructure and implementing end-to-end data flows, from ingest through data transformation to machine learning for client-side applications.

• Must have hands-on Java/J2EE, Spring, microservices, and Python
• Must be an integration expert in the Hadoop ecosystem and an ML expert
• Knowledge of Spark, Accumulo, HDFS, YARN, MapReduce, and Kafka is necessary

As a data engineer, you may have experience spanning traditional DW and ETL architectures, but for this role it is important to have industry experience with big data ecosystems such as Spark/Hadoop and Redshift. You've probably been in the industry as an engineer for 2+ years and have developed a passion for the data that drives businesses.

On your first day, we'll expect you to have:
• A deep understanding of big data challenges and the ecosystem
• Experience building and architecting solutions with public cloud offerings such as Amazon Web Services: Redshift, S3, EMR/Spark, Presto/Athena
• Experience with Spark and Hive
• Expertise in SQL, SQL tuning, schema design, Python, and ETL processes
• Expertise in building data pipelines with workflow tools such as Airflow, Oozie, or Luigi
• Solid experience building RESTful APIs and microservices, e.g. with Flask
• Experience with test automation and ensuring data quality across multiple datasets used for analytical purposes
• Experience with the Lambda Architecture or other big data architectural best practices
• A graduate degree in Computer Science or a similar discipline
• Code committed to open source projects
• Experience with test automation and continuous delivery

It's great, but not required, if you have:
• Experience with Tableau
• Experience with machine learning
• Worked with data scientists
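The workflow tools named above (Airflow, Oozie, Luigi) all solve the same core problem: running pipeline tasks in dependency order. A toy stdlib-only sketch of that idea, with illustrative task names rather than any real DAG API:

```python
# Toy sketch of the dependency ordering that workflow tools like
# Airflow, Oozie, or Luigi manage for data pipelines: each task runs
# only after its upstream tasks complete. Task names are hypothetical.
from graphlib import TopologicalSorter

# Pipeline: ingest -> transform -> {load_warehouse, train_model}
# Each key maps a task to the set of tasks it depends on.
dag = {
    "transform": {"ingest"},
    "load_warehouse": {"transform"},
    "train_model": {"transform"},
}

# A real orchestrator would also schedule, retry, and parallelize;
# here we only compute a valid execution order.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

In Airflow proper the same structure is declared with operators and `>>` dependencies, and the scheduler handles retries and backfills.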
Job Requirements
• Installation, configuration, and administration of big data components (including Hadoop/Spark) for batch and real-time analytics and data hubs
• Capable of processing large sets of structured, semi-structured, and unstructured data
• Able to assess business rules, collaborate with stakeholders, and perform source-to-target data mapping, design, and review
• Familiar with data architecture: data ingestion pipeline design, Hadoop information architecture, data modeling and data mining, machine learning, and advanced data processing
• Optional: a visual communicator able to convert and present data in easily comprehensible visualizations using tools like D3.js and Tableau
• Enjoys being challenged and solving complex problems on a daily basis
• Proficient in executing efficient and robust ETL workflows
• Able to work in teams and collaborate with others to clarify requirements
• Able to tune Hadoop solutions to improve performance and the end-user experience
• Strong coordination and project management skills to handle complex projects
• Engineering background
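Source-to-target data mapping, mentioned above, means specifying how each target column is derived from source fields (renames, combinations, unit conversions). A minimal sketch with hypothetical field names, using only the standard library:

```python
# Minimal sketch of a source-to-target mapping inside an ETL transform
# step. The source schema and field names here are hypothetical.
import csv
import io

SOURCE = "id,first,last,amount_cents\n1,Ada,Lovelace,1250\n2,Alan,Turing,800\n"

def transform(row):
    # Source-to-target mapping: rename, combine fields, convert units.
    return {
        "customer_id": int(row["id"]),                    # rename + cast
        "full_name": f'{row["first"]} {row["last"]}',     # combine
        "amount_usd": int(row["amount_cents"]) / 100,     # unit conversion
    }

# Extract from the source, apply the mapping; a real pipeline would
# then load the rows into the target table.
rows = [transform(r) for r in csv.DictReader(io.StringIO(SOURCE))]
print(rows)
```

In practice the same mapping is typically expressed in SQL or a Spark job, but the column-level contract it documents is the same.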
About the job:
- You will work with data scientists to architect, code, and deploy ML models
- You will solve problems of storing and analyzing large-scale data in milliseconds
- You will architect and develop data processing and warehouse systems
- You will code, drink, breathe, and live Python, sklearn, and pandas. Experience with these is good to have but not a necessity, as long as you're super comfortable in a language of your choice
- You will develop tools and products that give analysts ready access to the data

About you:
- Strong CS fundamentals
- Strong experience working with production environments
- You write code that is clean, readable, and tested
- Instead of doing it a second time, you automate it
- You have worked with some of the commonly used databases and computing frameworks (PostgreSQL, S3, Hadoop, Hive, Presto, Spark, etc.)
- It would be great if you have a Kaggle or GitHub profile to share
- You are an expert in one or more programming languages (Python preferred); experience with Python-based application development and data science libraries is also good to have
- Ideally, you have 2+ years of experience in tech and/or data
- A degree in CS/Maths from a Tier-1 institute
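The fit-then-deploy handoff described above (a data scientist trains a model, an engineer wraps it for serving) can be sketched in miniature. This toy uses a hand-rolled least-squares fit so it needs only the standard library; in practice the model would come from sklearn, as the posting suggests:

```python
# Toy sketch of fitting a model and exposing a predict function, the
# kind of train/deploy handoff described above. Stdlib only; a real
# pipeline would use sklearn/pandas and a serving layer.
from statistics import mean

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    mx, my = mean(xs), mean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

def make_predictor(a, b):
    # The "deployed" artifact: a closure over the fitted parameters.
    return lambda x: a * x + b

# Training data lies exactly on y = 2x + 1, so the fit recovers it.
a, b = fit_linear([1, 2, 3, 4], [3, 5, 7, 9])
predict = make_predictor(a, b)
print(predict(10))  # → 21.0
```

The same shape scales up: serialize the fitted parameters (e.g. with pickle or a model registry) and load them behind an API endpoint.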
We are looking for a Big Data Engineer with at least 3-5 years of experience as a big data developer/engineer:
• Experience with big data technologies and tools such as Hadoop, Hive, MapR, Kafka, and Spark
• Experience architecting data ingestion, storage, and consumption models
• Experience with NoSQL databases such as MongoDB, HBase, and Cassandra
• Knowledge of various ETL tools and techniques
Looking for a technically sound, excellent trainer in big data technologies. This is an opportunity to become well known in the industry and gain visibility. Host regular sessions on big-data-related technologies and get paid to learn.