Hadoop Jobs in Jaipur


Apply to 2+ Hadoop Jobs in Jaipur on CutShort.io. Explore the latest Hadoop job opportunities across top companies like Google, Amazon & Adobe.

Celebal Technologies

2 recruiters
Posted by Payal Hasnani
Jaipur, Noida, Gurugram, Delhi, Ghaziabad, Faridabad, Pune, Mumbai
5 - 15 yrs
₹7L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
Job Responsibilities:

• Project Planning and Management
o Take end-to-end ownership of multiple projects / project tracks
o Create and maintain project plans and other related documentation for project
objectives, scope, schedule and delivery milestones
o Lead and participate across all the phases of software engineering, right from
requirements gathering to GO LIVE
o Lead internal team meetings on solution architecture, effort estimation, manpower
planning and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by
developing effective mitigation plans
• Team Management
o Act as the Scrum Master
o Conduct SCRUM ceremonies like Sprint Planning, Daily Standup, Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and SCRUM principles
o Make the team accountable for their tasks and help the team in achieving them
o Identify the requirements and come up with a plan for Skill Development for all team
members
• Communication
o Be the Single Point of Contact for the client in terms of day-to-day communication
o Periodically communicate project status to all the stakeholders (internal/external)
• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team

Must have:
• Minimum 8 years of experience (hands-on as well as leadership) in software / data engineering
across multiple job functions such as Business Analysis, Development, Solutioning, QA, DevOps and
Project Management
• Hands-on as well as leadership experience in Big Data Engineering projects
• Experience developing or managing cloud solutions using Azure or another cloud provider
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL databases, SQL, Data Warehousing, ETL/ELT,
and DevOps tools
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems level critical thinking skills
• Strong collaboration and influencing skills
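For candidates new to the Hadoop/Spark skills listed above, the paradigm they share is distributed map-and-reduce processing. The classic word-count example can be sketched in plain Python as a local stand-in for the mapper, shuffle, and reducer phases a Hadoop or Spark job would run (function names here are illustrative, not part of any framework API):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Emit (word, 1) pairs, as a Hadoop mapper would."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Group values by key, mimicking the framework's shuffle step."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Sum the counts for each word, as a reducer would."""
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data big pipelines", "data engineering"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts["big"])   # 2
print(counts["data"])  # 2
```

In a real cluster the shuffle is handled by the framework and the map/reduce functions run in parallel across nodes; the local sketch only shows the data flow.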

Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL
Pool, Databricks, Power BI, Machine Learning, and Cloud Infrastructure
• Background in BFSI with focus on core banking
• Willingness to travel

Work Environment
• Customer Office (Mumbai) / Remote Work

Education
• UG: B.Tech – Computers / B.E. – Computers / BCA / B.Sc. Computer Science
YourHRfolks

6 recruiters
Posted by Bharat Saxena
Remote, Jaipur, NCR (Delhi | Gurgaon | Noida), Chennai, Bangarmau
5 - 10 yrs
₹15L - ₹30L / yr
Big Data
Hadoop
Spark
Apache Kafka
Amazon Web Services (AWS)
+2 more

Position: Big Data Engineer

What You'll Do

Punchh is seeking to hire a Big Data Engineer at either a senior or tech lead level. Reporting to the Director of Big Data, this engineer will play a critical role in leading Punchh’s big data innovations. By leveraging prior industry experience in big data, they will help create cutting-edge data and analytics products for Punchh’s business partners.

This role requires close collaboration with the data, engineering, and product organizations. Job functions include:

  • Work with large data sets and implement sophisticated data pipelines over both structured and unstructured data.
  • Collaborate with stakeholders to design scalable solutions.
  • Manage and optimize our internal data pipeline that supports marketing, customer success and data science to name a few.
  • Serve as a technical leader of Punchh’s big data platform, which supports AI and BI products.
  • Work with the infra and operations teams to monitor and optimize existing infrastructure.
  • Occasional business travel is required.
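The pipeline work described above can be sketched as a minimal extract-transform flow. This plain-Python version is illustrative only: the record shapes and function names are assumptions for the sketch, not Punchh's actual stack, which would use the Spark/Kafka tooling listed below:

```python
import json

# Illustrative raw events: a mix of well-formed and malformed records,
# standing in for the structured and unstructured inputs a pipeline ingests.
raw_events = [
    '{"user": "a1", "amount": 30}',
    'not-json',                      # malformed record, filtered out below
    '{"user": "a1", "amount": 12}',
    '{"user": "b2", "amount": 5}',
]

def extract(lines):
    """Parse each line, dropping records that fail validation."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # a production pipeline would route these to a dead-letter store

def transform(events):
    """Aggregate spend per user."""
    totals = {}
    for event in events:
        totals[event["user"]] = totals.get(event["user"], 0) + event["amount"]
    return totals

totals = transform(extract(raw_events))
print(totals)  # {'a1': 42, 'b2': 5}
```

The same extract/validate/aggregate stages map directly onto Spark transformations or Kafka consumers at scale.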

What You'll Need

  • 5+ years of experience as a Big Data engineering professional, developing scalable big data solutions.
  • Advanced degree in computer science, engineering or other related fields.
  • Demonstrated strength in data modeling, data warehousing and SQL.
  • Extensive knowledge of cloud technologies, e.g. AWS and Azure.
  • Excellent software engineering background. High familiarity with software development life cycle. Familiarity with GitHub/Airflow.
  • Advanced knowledge of big data technologies, including programming languages (Python, Java), relational databases (Postgres, MySQL), NoSQL stores (MongoDB), Hadoop (EMR), and streaming (Kafka, Spark).
  • Strong problem-solving skills, with demonstrated rigor in building and maintaining complex data pipelines.
  • Exceptional communication skills and the ability to articulate complex concepts with thoughtful, actionable recommendations.