Designation: Sr. Software Engineer / Software Engineer - Hadoop/Big Data
Job Location: Gurgaon/Jaipur

Job Description:
- Knowledge of Big Data technologies: Hadoop, Pig, Hive, and Spark
- Extensive experience working with any Hadoop platform
- Experience migrating workloads from on-premise to cloud, and between clouds
- Experience programming MapReduce and Spark jobs
- Knowledge of Hadoop sizing and implementation
- Good hands-on experience with Java (core Java) and J2EE technologies, Python/Scala, and Unix/Bash scripts
- Strong analytical, problem-solving, data-analysis, and research skills
- Demonstrable ability to think outside the box and not depend on readily available tools

Good to have Skills:
- Hands-on experience with AWS compute, storage, networking, and security services
- Hands-on experience setting up cloud platforms for specific use cases

Position Description:
As part of a team, you will build end-to-end IoT solutions for various industry problems using Hadoop. You will be responsible for building quick prototypes and/or demonstrations to help management better understand the value of various technologies, especially IoT, Machine Learning, Cloud, Microservices, DevOps, and AI. You will develop reusable components, frameworks, and accelerators to shorten the development lifecycle of future IoT projects. In this role, you must be able to work with minimal direction and supervision.

Skills Required:
- Minimum of 3 years of IT experience, with at least 2 years working on cloud technologies (AWS or Azure)
- Architecture and design experience building highly scalable, enterprise-grade applications
- Ability to design and set up a continuous integration environment, with processes and tools in place for the target cloud platform
- Proficiency in Java and the Spring Framework
- Strong background in IoT concepts: connectivity, protocols, security, and data streams
- Familiarity, at least at a conceptual level, with emerging technologies including Big Data, NoSQL, Machine Learning, AI, Blockchain, etc.
- A team player who is excited and motivated by hard technical challenges
- Ability to motivate the team and work within constraints, challenges, and deadlines
- Experience working in an agile environment
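The MapReduce programming experience called for above follows a fixed shape: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. A minimal plain-Python sketch of that shape (the classic word count, not actual Hadoop code):

```python
from collections import defaultdict
from itertools import chain

# Toy illustration of the MapReduce model: map emits (key, value)
# pairs, shuffle groups by key, reduce aggregates each group.
# Real Hadoop/Spark jobs distribute these phases across a cluster.

def map_phase(line):
    """Emit (word, 1) for every word in a line of text."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    """Group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Sum the counts emitted for a single word."""
    return key, sum(values)

def word_count(lines):
    mapped = chain.from_iterable(map_phase(line) for line in lines)
    return dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())

print(word_count(["big data big jobs", "big cluster"]))
# {'big': 3, 'data': 1, 'jobs': 1, 'cluster': 1}
```

The same three phases underlie a Spark job; only the distribution and fault tolerance change.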
Job Description:
The Data Engineering team is one of the core technology teams of Lumiq.ai and is responsible for creating all the data-related products and platforms, which scale to any amount of data, users, and processing. The team also interacts with our customers to work out solutions, create technical architectures, and deliver products and solutions. If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer, or how a customer can use our products, then Lumiq is the place of opportunities.

Who are you?
- Enthusiast is your middle name. You know what's new in Big Data technologies and how things are moving.
- Apache is your toolbox: you have contributed to open-source projects or discussed problems with the community on several occasions.
- You use the cloud for more than just provisioning a virtual machine.
- Vim is friendly to you, and you know how to exit Nano.
- You check logs before screaming about an error.
- You are a solid engineer who writes modular code and commits to Git.
- You are a doer who doesn't say "no" without first understanding.
- You understand the value of documenting your work.
- You are familiar with the Machine Learning ecosystem and how you can help your fellow Data Scientists explore data and create production-ready ML pipelines.

Eligibility:
- At least 2 years of data engineering experience
- Experience interacting with customers

Must Have Skills:
- Amazon Web Services (AWS): EMR, Glue, S3, RDS, EC2, Lambda, SQS, SES
- Apache Spark
- Python
- Scala
- PostgreSQL
- Git
- Linux

Good to have Skills:
- Apache NiFi
- Apache Kafka
- Apache Hive
- Docker
- Amazon certification
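The "production-ready ML pipelines" mentioned above usually chain fit/transform steps, so a model trained on one dataset can be applied consistently to new data. A toy stdlib sketch of that pattern (the step classes are illustrative, not the scikit-learn or Spark ML API, though both follow the same idea):

```python
# Toy fit/transform pipeline: each step learns its parameters from
# the training data in fit(), then applies them in transform().

class Scaler:
    """Scale values to [0, 1] using the min/max seen at fit time."""
    def fit(self, xs):
        self.lo, self.hi = min(xs), max(xs)
        return self
    def transform(self, xs):
        span = (self.hi - self.lo) or 1.0  # avoid division by zero
        return [(x - self.lo) / span for x in xs]

class Pipeline:
    """Fit each step on the training data, then chain transforms."""
    def __init__(self, steps):
        self.steps = steps
    def fit(self, xs):
        for step in self.steps:
            xs = step.fit(xs).transform(xs)
        return self
    def transform(self, xs):
        for step in self.steps:
            xs = step.transform(xs)
        return xs

pipe = Pipeline([Scaler()]).fit([0.0, 5.0, 10.0])
print(pipe.transform([2.5, 10.0]))  # [0.25, 1.0]
```

Because the pipeline object holds everything learned at fit time, it is the unit you serialize and ship, which is what makes the pattern production-friendly.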
About the job:
- You will work with data scientists to architect, code, and deploy ML models
- You will solve problems of storing and analyzing large-scale data in milliseconds
- You will architect and develop data processing and warehouse systems
- You will code, drink, breathe, and live Python, sklearn, and pandas. Experience with these is good to have but not a necessity, as long as you're super comfortable in a language of your choice.
- You will develop tools and products that give analysts ready access to data

About you:
- Strong CS fundamentals
- Strong experience working with production environments
- You write code that is clean, readable, and tested
- Instead of doing it a second time, you automate it
- You have worked with some of the commonly used databases and computing frameworks (PostgreSQL, S3, Hadoop, Hive, Presto, Spark, etc.)
- It will be great if you have a Kaggle or GitHub profile to share
- You are an expert in one or more programming languages (Python preferred). Experience with Python-based application development and data science libraries is also good to have.
- Ideally, you have 2+ years of experience in tech and/or data
- Degree in CS/Maths from a Tier-1 institute
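Answering analytical queries over large data "in milliseconds", as the role above describes, typically means precomputing an index or aggregate once rather than scanning every row per query. A minimal stdlib sketch of that trade-off (the event data and field names are made up for illustration):

```python
from collections import defaultdict

# Pre-aggregate once in a single pass, then answer each query
# with an O(1) dictionary lookup instead of a full scan.

events = [
    ("alice", 120), ("bob", 30), ("alice", 45), ("carol", 60), ("bob", 15),
]

def build_index(rows):
    """One pass over the data: total duration per user."""
    totals = defaultdict(int)
    for user, duration in rows:
        totals[user] += duration
    return totals

index = build_index(events)
print(index["alice"])  # 165 -- lookup cost is independent of data size
```

Warehouse systems apply the same idea at scale: materialized views, rollup tables, and columnar indexes all move work from query time to load time.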
We are looking for a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer:
- Experience with Big Data technologies and tools such as Hadoop, Hive, MapR, Kafka, Spark, etc.
- Experience architecting data ingestion, storage, and consumption models
- Experience with NoSQL databases such as MongoDB, HBase, Cassandra, etc.
- Knowledge of various ETL tools and techniques
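The ETL tools and techniques mentioned above all share one shape: extract raw records from a source, transform them into clean, typed rows, and load them into a target store. A minimal stdlib sketch of that flow, using an in-memory SQLite database as the load target (the CSV payload and the `orders` table are illustrative, not any specific tool's schema):

```python
import csv
import io
import sqlite3

# Minimal extract-transform-load (ETL) flow with the standard library.

RAW = "order_id,amount\n1, 10.5 \n2, 20.0 \n3,  4.25\n"

def extract(text):
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: strip whitespace and cast fields to typed values."""
    return [(int(r["order_id"]), float(r["amount"].strip())) for r in rows]

def load(rows, conn):
    """Load: bulk-insert the cleaned rows into the target table."""
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
print(conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0])  # 34.75
```

Dedicated ETL tools (NiFi, Glue, and the like) add scheduling, retries, and lineage on top, but the extract/transform/load decomposition is the same.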