Big Data Engineer

at YourHRfolks

Posted by Bharat Saxena
Remote, Jaipur, Delhi, Gurugram, Noida, Chennai, Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹30L / yr
Full time
Skills
Big Data
Hadoop
Spark
Apache Kafka
Amazon Web Services (AWS)
MongoDB
PL/SQL

Position: Big Data Engineer

What You'll Do

Punchh is seeking to hire a Big Data Engineer at either a senior or tech lead level. Reporting to the Director of Big Data, you will play a critical role in leading Punchh’s big data innovations. Leveraging prior industry experience in big data, you will help create cutting-edge data and analytics products for Punchh’s business partners.

This role requires close collaboration with the data, engineering, and product organizations. Job functions include:

  • Work with large data sets and implement sophisticated data pipelines for both structured and unstructured data.
  • Collaborate with stakeholders to design scalable solutions.
  • Manage and optimize our internal data pipeline that supports marketing, customer success, and data science, among others.
  • Serve as a technical leader for Punchh’s big data platform that supports AI and BI products.
  • Work with the infrastructure and operations teams to monitor and optimize existing infrastructure.
  • Occasional business travel is required.

What You'll Need

  • 5+ years of experience as a Big Data engineering professional, developing scalable big data solutions.
  • Advanced degree in computer science, engineering or other related fields.
  • Demonstrated strength in data modeling, data warehousing and SQL.
  • Extensive knowledge of cloud technologies, e.g., AWS and Azure.
  • Strong software engineering background with high familiarity with the software development life cycle. Familiarity with GitHub and Airflow.
  • Advanced knowledge of big data technologies, such as programming languages (Python, Java), relational databases (Postgres, MySQL), NoSQL (MongoDB), Hadoop (EMR), and streaming (Kafka, Spark).
  • Strong problem-solving skills with demonstrated rigor in building and maintaining complex data pipelines.
  • Exceptional communication skills and the ability to articulate complex concepts with thoughtful, actionable recommendations.

About YourHRfolks


Founded 2022  •  Services  •  Bootstrapped
Similar jobs

Data Analyst

at a company building a cutting-edge data science department to serve the older adult community and marketplace.

Agency job
via HyrHub
Amazon Web Services (AWS)
Business Intelligence (BI)
SQL server
Tableau
SQL
PowerBI
Relational Database (RDBMS)
Qlik
Data Analytics
Data Analyst
Chandigarh
4 - 8 yrs
₹8L - ₹12L / yr

We are currently seeking a talented and highly motivated Data Analyst to lead the development of our discovery and support platform. The successful candidate will join a small, global team of data-focused associates that has built and maintained a best-in-class traditional data warehouse, based on Kimball methodology and SQL Server, together with Qlik Sense BI dashboards. The successful candidate will take the lead in managing our master data set and developing reports and analytics dashboards.

To do well in this role you need a very fine eye for detail, experience as a data analyst, and a deep understanding of popular data analysis tools and databases.

Specific responsibilities will be to:

  • Managing master data, including creation, updates, and deletion.
  • Managing users and user roles.
  • Providing quality assurance of imported data, working with quality assurance analysts where necessary.
  • Commissioning and decommissioning data sets.
  • Processing confidential data and information in accordance with applicable compliance requirements.
  • Helping develop reports and analyses.
  • Managing and designing the reporting environment, including data sources, security, and metadata.
  • Supporting the data warehouse in identifying and revising reporting requirements.
  • Supporting initiatives for data integrity and normalization.
  • Assessing, testing, and implementing new or upgraded software, and assisting with strategic decisions on new systems.
  • Generating reports from single or multiple systems.
  • Troubleshooting the reporting database environment and reports.
  • Evaluating changes and updates to source production systems.
  • Training end-users on new reports and dashboards.
  • Providing technical expertise in data storage structures, data mining, and data cleansing.

Job Requirements:

  • Master’s Degree (or equivalent experience) in computer science, data science or a scientific field that has relevance to healthcare in the United States.
  • Work experience as a data analyst or in a related field for more than 5 years.
  • Proficiency in statistics, data analysis, data visualization and research methods.
  • Strong SQL and Excel skills with ability to learn other analytic tools.
  • Experience with BI dashboard tools like Qlik Sense, Tableau, Power BI.
  • Experience with AWS services like EC2, S3, Athena and QuickSight.
  • Ability to work with stakeholders to assess potential risks.
  • Ability to analyze existing tools and databases and provide software solution recommendations.
  • Ability to translate business requirements into non-technical, lay terms.
  • High-level experience in methodologies and processes for managing large-scale databases.
  • Demonstrated experience in handling large data sets and relational databases.
  • Understanding of addressing and metadata standards.
  • High-level written and verbal communication skills.
Job posted by
Shwetha Naik

Principal Data Engineer

at an AI-powered, cloud-based SaaS solution provider

Agency job
via wrackle
Data engineering
Big Data
Spark
Apache Kafka
Cassandra
Apache ZooKeeper
Data engineer
Hadoop
HDFS
MapReduce
AWS CloudFormation
EMR
Amazon EMR
Amazon S3
Apache Spark
Java
Python
Test driven development (TDD)
Cloud Computing
Google Cloud Platform (GCP)
Agile/Scrum
OOD
Software design
Architecture
YARN
Bengaluru (Bangalore)
8 - 15 yrs
₹25L - ₹60L / yr
Responsibilities

● Contribute to the gathering of functional requirements, developing technical specifications, and test case planning
● Demonstrate technical expertise, solving challenging programming and design problems
● 60% hands-on coding with architecture ownership of one or more products
● Articulate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Mentor and guide team members
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units, to drive results

Requirements

● BS/MS in computer science or equivalent work experience
● 8-12 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with big data ecosystems
● Experience with Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, ZooKeeper
● Expertise in any of the following object-oriented languages: Java/J2EE, Scala, Python
● Ability to lead and mentor technical team members
● Expertise with the entire Software Development Life Cycle (SDLC)
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Business acumen: strategic thinking & strategy development
● Experience with cloud platforms, preferably AWS
● Good understanding of, and ability to develop, software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements
● Experience with Agile development, Scrum, or Extreme Programming methodologies
Job posted by
Naveen Taalanki

Data Scientist

at TVS Credit Services Ltd

Founded 2009  •  Services  •  100-1000 employees  •  Profitable
Data Science
R Programming
Python
Machine Learning (ML)
Hadoop
SQL server
Linear regression
Predictive modelling
Chennai
4 - 10 yrs
₹10L - ₹20L / yr
Job Description:

  • Be responsible for scaling our analytics capability across all internal disciplines and guide our strategic direction with regard to analytics.
  • Organize and analyze large, diverse data sets across multiple platforms.
  • Identify key insights and leverage them to inform and influence product strategy.
  • Interact with vendors or partners in a technical capacity on scope/approach and deliverables.
  • Develop proofs of concept to prove or disprove the validity of a concept.
  • Work with all parts of the business to identify analytical requirements and formalize an approach for reliable, relevant, accurate, and efficient reporting on those requirements.
  • Design and implement advanced statistical testing for customized problem solving.
  • Deliver concise verbal and written explanations of analyses to senior management that elevate findings into strategic recommendations.

Desired Candidate Profile:

  • MTech / BE / BTech / MSc in CS, Stats, Maths, Operations Research, Econometrics, or any quantitative field.
  • Experience using Python, R, SAS.
  • Experience working with large data sets and big data systems (SQL, Hadoop, Hive, etc.).
  • Keen aptitude for large-scale data analysis with a passion for identifying key insights from data.
  • Expert working knowledge of various machine learning algorithms such as XGBoost, SVM, etc.

We are looking for candidates with experience in any of the following:

  • Unsecured Loans & SME Loans analytics (cards, installment loans): risk-based pricing analytics.
  • Differential pricing / selection analytics (retail, airlines / travel, etc.).
  • Digital product companies or digital eCommerce, with a product mindset.
  • Fraud / Risk at banks, NBFCs / fintechs / credit bureaus.
  • Online media, with knowledge of media, online ads & sales (agencies); knowledge of DMP, DFP, Adobe/Omniture tools, Cloud.
  • Consumer Durable Loans lending companies (experience in Credit Cards, Personal Loans optional).
  • Tractor Loans lending companies (experience in Farm).
  • Recovery / Collections analytics.
  • Marketing analytics with digital marketing, market-mix modelling, advertising technology.
Job posted by
Vinodhkumar Panneerselvam

Data Engineer

at Lowe's

Big Data
Hadoop
Data engineering
data engineer
Google Cloud Platform (GCP)
Data Warehouse (DWH)
ETL
Systems Development Life Cycle (SDLC)
Java
Scala
Python
SQL
Scripting
Teradata
HiveQL
Pig
Spark
Apache Kafka
Windows Azure
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹4L - ₹16L / yr
Job Description
Job Title: Data Engineer
Tech Job Family: DACI
• Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
Preferred Qualifications:
• Master's Degree in Computer Science, CIS, or related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working with an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
Data Engineering
• 2 years of experience in Hadoop or any cloud Big Data components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka, or equivalent cloud Big Data components (specific to the Data Engineering role)
BI Engineering
• Expertise in MicroStrategy/Power BI/SQL, scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), dashboard development, mobile development (specific to the BI Engineering role)
Platform Engineering
• 2 years of experience in Hadoop, NoSQL, RDBMS or any cloud Big Data components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka, or equivalent cloud Big Data components (specific to the Platform Engineering role)
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
Job posted by
Sanjay Biswakarma

Data Engineer

at TIGI HR Solution Pvt. Ltd.

Founded 2014  •  Services  •  Profitable
Data engineering
Hadoop
Big Data
Python
SQL
Amazon Web Services (AWS)
Windows Azure
Mumbai, Bengaluru (Bangalore), Pune, Hyderabad, Noida
2 - 5 yrs
₹10L - ₹17L / yr
Position: Data Engineer
Employee strength: around 600 across India
Working days: 5 days
Working time: flexible
Salary: 30-40% hike on current CTC
Work from home for now.
 
Job description:
  • Design, implement and support an analytical data infrastructure, providing ad hoc access to large data sets and computing power.
  • Contribute to the development of standards and to the design and implementation of proactive processes to collect and report data and statistics on assigned systems.
  • Research opportunities for data acquisition and new uses for existing data.
  • Provide technical development expertise for designing, coding, testing, debugging, documenting and supporting data solutions.
  • Experience building data pipelines to connect analytics stacks, client data visualization tools and external data sources.
  • Experience with cloud and distributed systems principles
  • Experience with Azure/AWS/GCP cloud infrastructure
  • Experience with Databricks Clusters and Configuration
  • Experience with Python, R, sh/bash and JVM-based languages including Scala and Java.
  • Experience with Hadoop family languages including Pig and Hive.
Job posted by
Rutu Lakhani

Data Engineer

at Futurense Technologies

Founded 2020  •  Services  •  20-100 employees  •  Bootstrapped
ETL
Data Warehouse (DWH)
Apache Hive
Informatica
Data engineering
Python
SQL
Amazon Web Services (AWS)
Snowflake schema
SSIS
icon
Bengaluru (Bangalore)
icon
2 - 7 yrs
icon
₹6L - ₹12L / yr
1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet business requirements
3. Identify, design, and implement internal process improvements
4. Optimize data delivery and re-design infrastructure for greater scalability
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
6. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
7. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
8. Create data tools for analytics and data scientist team members
 
Skills Required:
 
1. Working knowledge of ETL on any cloud (Azure / AWS / GCP)
2. Proficient in Python (Programming / Scripting)
3. Good understanding of any of the data warehousing concepts (Snowflake / AWS Redshift / Azure Synapse Analytics / Google BigQuery / Hive)
4. In-depth understanding of principles of database structure
5.  Good understanding of any of the ETL technologies (Informatica PowerCenter / AWS Glue / Data Factory / SSIS / Spark / Matillion / Talend / Azure)
6. Proficient in SQL (query solving)
7. Knowledge of change management / version control tools (VSS, Azure DevOps, TFS, GitHub, Bitbucket, CI/CD with Jenkins)
Job posted by
Rajendra Dasigari

Assistant Manager - Analytics & Business Intelligence

at Home Credit

Founded 2012  •  Services  •  100-1000 employees  •  Profitable
Business Intelligence (BI)
Data Analytics
Analytics
Oracle SQL Developer
PowerBI
Business Analysis
Tableau
SQL
PL/SQL
NCR (Delhi | Gurgaon | Noida)
3 - 5 yrs
₹6L - ₹11L / yr
Role Summary: The position holder will be responsible for supporting various aspects of the organization's Analytics & BI activities. As a member of the team, the candidate will collaborate with a multi-disciplinary team of experts and the SMT group on a wide range of problems, giving them the opportunity to solve critical business problems using analytical and statistical techniques.

Essential/Key Responsibilities:

  • Analyze data/reports to identify early warning signals (unusual trends, patterns, process gaps, etc.) and proactively provide feedback so corrective actions can be taken, driving continuous process improvement (performance, cost reduction, technological improvement, etc.).
  • Create/define BI (Business Intelligence) and AI (Analytical Intelligence) standards for Home Credit.
  • As part of the BICC team, provide a high level of Business Intelligence support (regular reports, weekly presentations, etc.) to top management.
  • Ensure automation & centralization of BI activities for better utilization of resources.
  • Support data-driven ad hoc and critical requirements.

Qualifications/Requirements:

  • MBA / M.Tech / B.Tech or Bachelor's in a quantitative discipline such as Computer Science, Engineering, Mathematics, Statistics, Operations Research, or Economics from premier / Tier 1 colleges, with a minimum of 3 years of experience in Analytics / Business Intelligence.
  • Highly numerate with strong statistical knowledge: able to work with numbers and understand data trends.
  • Ability to work with both business and technical communities.
  • Good to have: financial analysis / modeling to support the various teams on specific analysis projects.

Skills/Desired Characteristics:

  • Able to think analytically and use a systematic and logical approach to analyze data, problems, and situations.
  • Good database skills with exposure to Oracle (11g) systems and tools.
  • Highly skilled in Excel, SQL, R/Python or Power BI / Tableau or VBA.
  • Ability to manage multiple deliverables with minimal guidance and proactively set up communication processes with stakeholders.
  • Willing to work in an IC (Individual Contributor) role.
  • Excellent written and verbal communication skills in English.
  • Good knowledge of project management and program management.

Who should join us:

  • If you are willing to face new challenges and want to apply your data knowledge to the growth and future of the company, Home Credit can give you this opportunity and a platform to show your skills and suggest valuable ideas.
  • You will get the opportunity to work on company-level platforms and be part of the company's BI platform.
  • Opportunity to work in a team of enthusiastic professionals.
Job posted by
Garima Singh

Snowflake with Spark-ETL Developer

at a service-based company

Agency job
via Myna Solutions
ETL
Snowflake
Data Warehouse (DWH)
Data Warehousing
Apache Spark
Spark
Hadoop
Windows Azure
Hyderabad
5 - 9 yrs
₹12L - ₹14L / yr
  • Overall experience of 4 - 8 years in DW / BI technologies.
  • Minimum 2 years of work experience on Snowflake and Azure storage.
  • Minimum 3 years of development experience with an ETL tool.
  • Strong SQL skills in databases such as Oracle, SQL Server, DB2, and Teradata.
  • Hadoop and Spark experience is good to have.
  • Good conceptual knowledge of data warehousing and its various methodologies.
  • Working knowledge of scripting, e.g., UNIX shell.
  • Good presentation and communication skills.
  • Should be flexible with overlapping working hours.
  • Should be able to work independently and be proactive.
  • Good understanding of the Agile development cycle.
Job posted by
Preethi M

Machine Learning Engineer

at SmartJoules

Founded 2015  •  Product  •  100-500 employees  •  Profitable
Machine Learning (ML)
Python
Big Data
Apache Spark
Deep Learning
Remote, NCR (Delhi | Gurgaon | Noida)
3 - 5 yrs
₹8L - ₹12L / yr

Responsibilities:

  • Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world.
  • Verifying data quality, and/or ensuring it via data cleaning.
  • Able to adapt and work quickly to produce output that improves stakeholders' decision-making using ML.
  • To design and develop Machine Learning systems and schemes. 
  • To perform statistical analysis and fine-tune models using test results.
  • To train and retrain ML systems and models as and when necessary. 
  • To deploy ML models in production and maintain the cost of cloud infrastructure.
  • To develop Machine Learning apps according to client and data scientist requirements.
  • To analyze the problem-solving capabilities and use-cases of ML algorithms and rank them by how successful they are in meeting the objective.


Technical Knowledge:


  • Has worked on real-time problems, solved them using ML and deep learning models deployed in real time, and has strong projects to showcase.
  • Proficiency in Python and experience working with Jupyter notebooks, Google Colab, and cloud-hosted notebooks such as AWS SageMaker, Databricks, etc.
  • Proficiency in working with scikit-learn, TensorFlow, OpenCV, PySpark, pandas, NumPy, and related libraries.
  • Expert in visualising and manipulating complex datasets.
  • Proficiency in working with visualisation libraries such as seaborn, plotly, matplotlib, etc.
  • Proficiency in the linear algebra, statistics, and probability required for Machine Learning.
  • Proficiency in ML algorithms, e.g., gradient boosting, stacked machine learning, classification algorithms, and deep learning algorithms, with experience in hyperparameter tuning of various models and comparing the results of algorithm performance.
  • Big data technologies such as the Hadoop stack and Spark.
  • Basic use of cloud VMs (e.g., EC2).
  • Brownie points for Kubernetes and task queues.
  • Strong written and verbal communication skills.
  • Experience working in an Agile environment.
Job posted by
Saksham Dutta

Data Architect

at I Base IT

Founded 2011  •  Product  •  100-500 employees  •  Raised funding
Data Analytics
Data Warehouse (DWH)
Data Structures
Spark
Architecture
cube building
data lake
Hadoop
Java
Hyderabad
9 - 13 yrs
₹10L - ₹23L / yr
A Data Architect who will lead a team of 5 members. Required skills: Spark, Scala, Hadoop.
Job posted by
Sravanthi Alamuri