MapReduce Jobs in Delhi, NCR and Gurgaon

Apply to 3+ MapReduce Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest MapReduce Job opportunities across top companies like Google, Amazon & Adobe.

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
10 - 15 yrs
₹105L - ₹140L / yr
Data engineering
Apache Spark
Apache
Apache Kafka
Java

MANDATORY:

  • Top-tier Data Architect / Data Engineering Manager / Director profile
  • Must have 12+ years of experience in Data Engineering roles, with at least 2+ years in a leadership role
  • Must have 7+ years of hands-on development experience with Java (highly preferred), Python, Node.js, or GoLang
  • Must have strong experience with large-scale data technologies and tools such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, Presto, etc.
  • Strong expertise in HLD and LLD to design scalable, maintainable data architectures
  • Must have managed a team of at least 5 Data Engineers (the leadership role should be evident in the CV)
  • Product companies preferred (ideally high-scale, data-heavy companies)


PREFERRED:

  • Must be from a Tier-1 college, IIT preferred
  • Candidates should have spent a minimum of 3 years at each company
  • Must have 4+ recent years of experience with high-growth product startups, and should have implemented Data Engineering systems from an early stage in the company


ROLES & RESPONSIBILITIES:

  • Lead and mentor a team of data engineers, ensuring high performance and career growth.
  • Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
  • Drive the development and implementation of data governance frameworks and best practices.
  • Work closely with cross-functional teams to define and execute a data roadmap.
  • Optimize data processing workflows for performance and cost efficiency.
  • Ensure data security, compliance, and quality across all data platforms.
  • Foster a culture of innovation and technical excellence within the data team.


IDEAL CANDIDATE:

  • 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
  • Expertise in backend development with programming languages such as Java, PHP, Python, Node.js, GoLang, JavaScript, HTML, and CSS.
  • Proficiency in SQL, Python, and Scala for data processing and analytics.
  • Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
  • Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice.
  • Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
  • Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery.
  • Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
  • Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
  • Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
  • Proven ability to drive technical strategy and align it with business objectives.
  • Strong leadership, communication, and stakeholder management skills.


PREFERRED QUALIFICATIONS:

  • Experience in machine learning infrastructure or MLOps is a plus.
  • Exposure to real-time data processing and analytics.
  • Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
  • Prior experience in a SaaS or high-growth tech company.
Tata Consultancy Services
Bengaluru (Bangalore), Hyderabad, Pune, Delhi, Kolkata, Chennai
5 - 8 yrs
₹7L - ₹30L / yr
Scala
Python
PySpark
Apache Hive
Spark

Skills and competencies:

Required:

  • Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
  • Working experience in PySpark and Scala to develop code that validates and implements models in Credit Risk/Banking.
  • Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
  • Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, SparkML, TensorFlow, PyTorch).
  • Experience in systems integration, web services, and batch processing.
  • Experience in migrating code to PySpark/Scala is a big plus.
  • Ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal command of business strategy and IT strategy, business processes, and workflow.
  • Flexibility in approach and thought process.
  • Willingness to learn and comprehend periodic changes in regulatory requirements as per the FED.

Paisabazaar.com
Posted by Amit Gupta
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹6L - ₹18L / yr
Spark
MapReduce
Hadoop
ETL
We are looking for a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer.

  • Experience with Big Data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc.
  • Experience architecting data ingestion, storage, and consumption models.
  • Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
  • Knowledge of various ETL tools and techniques.