Bigdata Professional
HCL Technologies

Agency job
5 - 10 yrs
₹5L - ₹20L / yr
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Hyderabad, Chennai, Pune, Mumbai, Kolkata
Skills
Spark
Hadoop
Big Data
Data engineering
PySpark
Scala
Windows Azure
Experience: 5+ years
Skills: Spark and Scala, along with Azure
Location: Pan India

Looking for a Big Data professional with Azure experience.

About HCL Technologies

Founded: 2011
Size: 100-1000
Stage: Bootstrapped
About
HCL is a leading BPO support company with services and solutions aimed to deliver value to each client's specific business goals. Visit us today!
Connect with the team
Lina Mahurkar
T Teerion
Shyamsundar Ananthakrishnan

Similar jobs

Publicis Sapient
10 recruiters
Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the health of the overall solution.

Job Summary:

As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on the design, development, and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search and aggregation
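In practice, the batch side of these responsibilities reduces to an ingest -> transform -> aggregate shape. A minimal sketch in plain Python (at production scale this would be a Spark job; the CSV source and column names here are illustrative):

```python
import csv
import io
from collections import defaultdict

def ingest_and_aggregate(csv_text):
    """Parse raw CSV records and aggregate revenue per region."""
    reader = csv.DictReader(io.StringIO(csv_text))
    totals = defaultdict(float)
    for row in reader:
        # Transformation step: cast and accumulate per grouping key.
        totals[row["region"]] += float(row["revenue"])
    return dict(totals)

raw = "region,revenue\nsouth,100.0\nnorth,50.5\nsouth,25.0\n"
print(ingest_and_aggregate(raw))  # {'south': 125.0, 'north': 50.5}
```

The same shape maps onto Spark as a `groupBy(...).agg(...)` over a batch or streaming DataFrame.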


Experience Guidelines:

Mandatory Experience and Competencies:


1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies.

2. Minimum 1.5 years of experience in Big Data technologies.

3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


Preferred Experience and Knowledge (Good to Have):


1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience.

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures.

4. Performance tuning and optimization of data pipelines.

5. CI/CD: infrastructure provisioning on cloud, automated build and deployment pipelines, code quality.

6. Working knowledge of data-platform-related services on at least one cloud platform, IAM, and data security.

7. Cloud data specialty and other related Big Data technology certifications.



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Archwell
Agency job
via AVI Consulting LLP by Sravanthi Puppala
Mysore
2 - 8 yrs
₹1L - ₹15L / yr
Snowflake
Python
SQL
Amazon Web Services (AWS)
Windows Azure
+6 more

Title: Data Engineer – Snowflake

 

Location: Mysore (Hybrid model)

Experience: 2-8 yrs

Type: Full Time

Walk-in date: 25th Jan 2023 @Mysore 

 

Job Role: We are looking for an experienced Snowflake developer to join our team as a Data Engineer, working as part of a team to design and develop data-driven solutions that deliver insights to the business. The ideal candidate is a data pipeline builder and data wrangler who enjoys building data-driven systems from the ground up. You will be responsible for building and optimizing our data pipelines and for building automated processes for production jobs, and you will support our software developers, database architects, data analysts, and data scientists on data initiatives.

 

Key Roles & Responsibilities:

  • Use advanced Snowflake, Python, and SQL to extract data from source systems for ingestion into a data pipeline.
  • Design, develop, and deploy scalable and efficient data pipelines.
  • Analyze and assemble large, complex datasets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements, for example: automating manual processes, optimizing data delivery, and re-designing data platform infrastructure for greater scalability.
  • Build the infrastructure required for optimal extraction, loading, and transformation (ELT) of data from various data sources using AWS and Snowflake, leveraging Python or SQL.
  • Monitor cloud-based systems and components for availability, performance, reliability, security, and efficiency.
  • Create and configure appropriate cloud resources to meet the needs of end users.
  • As needed, document topology, processes, and solution architecture.
  • Share your passion for staying on top of tech trends, experimenting with, and learning new technologies.
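The Snowflake Python connector follows the standard DB-API (PEP 249) cursor interface, so the extraction step described here can be sketched against any DB-API driver. Below, the stdlib `sqlite3` module stands in for the Snowflake connection, and the table and column names are invented for illustration (note that Snowflake's connector uses `%s` placeholders rather than sqlite's `?`):

```python
import sqlite3

def extract(conn, min_amount):
    """Run a parameterized extraction query through a DB-API cursor,
    the same shape used with snowflake-connector-python (PEP 249)."""
    cur = conn.cursor()
    cur.execute(
        "SELECT id, amount FROM orders WHERE amount >= ? ORDER BY id",
        (min_amount,),
    )
    return cur.fetchall()

# Local stand-in for a Snowflake source system (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (2, 99.5), (3, 7.25)])
print(extract(conn, 8.0))  # [(1, 10.0), (2, 99.5)]
```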

 

Qualification & Experience Requirements:

  • Bachelor's degree in computer science, computer engineering, or a related field.
  • 2-8 years of experience working with Snowflake.
  • 2+ years of experience with AWS services.
  • Candidate should be able to write stored procedures and functions in Snowflake.
  • At least 2 years of experience as a Snowflake developer.
  • Strong SQL knowledge.
  • Data ingestion into Snowflake using Snowflake procedures.
  • ETL experience is a must (any tool).
  • Candidate should be aware of Snowflake architecture.
  • Experience working on migration projects.
  • DW concepts (optional).
  • Experience with cloud data storage and compute components, including Lambda functions, EC2 instances, and containers.
  • Experience with data pipeline and workflow management tools: Airflow, etc.
  • Experience cleaning, testing, and evaluating data quality from a wide variety of ingestible data sources.
  • Experience working with Linux and UNIX environments.
  • Experience profiling data, with and without data definition documentation.
  • Familiarity with Git.
  • Familiarity with issue tracking systems like JIRA or Trello.
  • Experience working in an agile environment.

Desired Skills:

  • Experience in Snowflake. Must be willing to be Snowflake certified in the first 3 months of employment.
  • Experience with a stream-processing system: Snowpipe
  • Working knowledge of AWS or Azure
  • Experience in migrating from on-prem to cloud systems
Hyderabad
5 - 10 yrs
₹19L - ₹25L / yr
ETL
Informatica
Data Warehouse (DWH)
Microsoft Azure
+4 more

A business transformation organization that partners with businesses to co-create customer-centric, hyper-personalized solutions to achieve exponential growth. Invente offers platforms and services that enable businesses to provide human-free customer experience and business process automation.


Location: Hyderabad (WFO)

Budget: Open

Position: Azure Data Engineer

Experience: 5+ years of commercial experience


Responsibilities

●     Design and implement Azure data solutions using ADLS Gen 2.0, Azure Data Factory, Synapse, Databricks, SQL, and Power BI

●     Build and maintain data pipelines and ETL processes to ensure efficient data ingestion and processing

●     Develop and manage data warehouses and data lakes

●     Ensure data quality, integrity, and security

●     Implement the use cases required by the AI and analytics teams.

●     Collaborate with other teams to integrate data solutions with other systems and applications

●     Stay up-to-date with emerging data technologies and recommend new solutions to improve our data infrastructure
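A recurring piece of pipeline work behind these responsibilities is incremental (delta) loading, usually implemented with a high-water-mark column. A minimal sketch in plain Python with an illustrative row shape; in Azure Data Factory this same pattern is expressed as a stored watermark plus a filtered copy activity:

```python
def incremental_load(source_rows, watermark):
    """Select only rows newer than the stored watermark, then advance it.

    The high-water-mark pattern behind incremental copies; the
    'updated_at' field name is illustrative.
    """
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "updated_at": "2023-01-01"},
    {"id": 2, "updated_at": "2023-01-05"},
    {"id": 3, "updated_at": "2023-01-09"},
]
loaded, wm = incremental_load(rows, "2023-01-02")
print(len(loaded), wm)  # 2 2023-01-09
```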


AppsTek Corp
Agency job
via Venaatics Consulting by Mastanvali Shaik
Gurugram, Chennai
6 - 10 yrs
Best in industry
Data management
Data modeling
skill iconPostgreSQL
SQL
MySQL
+3 more

Function: Sr. DB Developer

Location: India / Gurgaon / Tamil Nadu

 

>> THE INDIVIDUAL

  • Have a strong background in data platform creation and management.
  • Possess in-depth knowledge of data management, data modelling, and ingestion - able to develop data models and ingestion frameworks based on client requirements and advise on system optimization.
  • Hands-on experience with SQL databases (PostgreSQL) and NoSQL databases (MongoDB).
  • Hands-on experience in database performance tuning.
  • Good to have: knowledge of database setup in cluster nodes.
  • Should be well versed in data security aspects and data governance frameworks.
  • Hands-on experience with Spark, Airflow, and ELK.
  • Good to have: knowledge of a data cleansing tool like Apache Griffin.
  • Preferably gets involved during project implementation, building a background in business knowledge as well as technical requirements.
  • Strong analytical and problem-solving skills. Exposure to data analytics and knowledge of advanced data analytical tools will be an advantage.
  • Strong written and verbal communication skills (presentation skills).
  • Certifications in the above technologies are preferred.
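On the database performance-tuning point, the first lever is usually indexing. A small illustration using Python's built-in sqlite3 as a stand-in for PostgreSQL (the table and index names are invented); `EXPLAIN QUERY PLAN` confirms the lookup is served by the index rather than a full table scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, kind TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i % 100, "click") for i in range(1000)])

# An index on the filter column turns the lookup below from a
# full scan into an index seek.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM events WHERE user_id = 42"
).fetchall()
print(plan[0][-1])  # the plan detail names idx_events_user
```

PostgreSQL's `EXPLAIN ANALYZE` plays the same role, and the same reasoning applies to choosing partition and cluster keys on MPP platforms.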

 

>> Qualification

 

  1. B.Tech / B.E. / MCA / M.Tech from a reputed institute.

  2. More than 4 years of experience in data management, data modelling, and ingestion, with total experience of 8-10 years.

Hyderabad
4 - 8 yrs
₹5L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more
  • Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight.
  • Experience in developing Lambda functions with AWS Lambda.
  • Expertise with Spark/PySpark: candidates should be hands-on with PySpark code and able to implement transformations with Spark.
  • Should be able to code in Python and Scala.
  • Snowflake experience is a plus.
Bungee Tech India
Posted by Abigail David
Remote, NCR (Delhi | Gurgaon | Noida), Chennai
5 - 10 yrs
₹10L - ₹30L / yr
Big Data
Hadoop
Apache Hive
Spark
ETL
+3 more

Company Description

At Bungee Tech, we help retailers and brands meet customers everywhere, on every occasion they are in. We believe that accurate, high-quality data matched with compelling market insights empowers retailers and brands to keep their customers at the center of all the innovation and value they deliver.

 

We provide retailers and brands a clear and complete omnichannel picture of their competitive landscape. We collect billions of data points every day, multiple times a day, from publicly available sources. Using high-quality extraction, we uncover detailed information on products and services, which we automatically match and then proactively track for price, promotion, and availability. Plus, anything we do not match helps identify a new assortment opportunity.

 

Empowered with this unrivalled intelligence, we unlock compelling analytics and insights that, once blended with verified partner data from trusted sources such as Nielsen, paint a complete, consolidated picture of the competitive landscape.

We are looking for a Big Data Engineer who will work on the collecting, storing, processing, and analyzing of huge sets of data. The primary focus will be on choosing optimal solutions to use for these purposes, then maintaining, implementing, and monitoring them.

You will also be responsible for integrating them with the architecture used in the company.

 

We're working on the future. If you are seeking an environment where you can drive innovation, If you want to apply state-of-the-art software technologies to solve real world problems, If you want the satisfaction of providing visible benefit to end-users in an iterative fast paced environment, this is your opportunity.

 

Responsibilities

As an experienced member of the team, in this role, you will:

 

  • Contribute to evolving the technical direction of analytical systems and play a critical role in their design and development.

 

  • You will research, design and code, troubleshoot and support. What you create is also what you own.

 

  • Develop the next generation of automation tools for monitoring and measuring data quality, with associated user interfaces.

 

  • Be able to broaden your technical skills and work in an environment that thrives on creativity, efficient execution, and product innovation.
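A data-quality monitor of the kind described in these responsibilities usually starts with simple per-column metrics such as missing-value rates. A minimal sketch in plain Python (the field names and threshold semantics are illustrative):

```python
def profile_nulls(rows, columns):
    """Compute the fraction of missing values per column -- the kind of
    check a data-quality monitoring job would emit as a metric."""
    counts = {c: 0 for c in columns}
    for row in rows:
        for c in columns:
            if row.get(c) in (None, ""):
                counts[c] += 1
    n = len(rows) or 1  # avoid division by zero on an empty batch
    return {c: counts[c] / n for c in columns}

rows = [
    {"sku": "A1", "price": 9.99},
    {"sku": "B2", "price": None},
    {"sku": "", "price": 4.50},
    {"sku": "C3", "price": 2.00},
]
print(profile_nulls(rows, ["sku", "price"]))  # {'sku': 0.25, 'price': 0.25}
```

In production the same metric would be computed per batch and compared against a threshold to raise an alert.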

 

BASIC QUALIFICATIONS

  • Bachelor's degree or higher in an analytical area such as Computer Science, Physics, Mathematics, Statistics, Engineering, or similar.
  • 5+ years of relevant professional experience in data engineering and business intelligence.
  • 5+ years with advanced SQL (analytical functions), ETL, and data warehousing.
  • Strong knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments, data structures, data modeling, and performance tuning.
  • Ability to effectively communicate with both business and technical teams.
  • Excellent coding skills in Java, Python, C++, or an equivalent object-oriented programming language.
  • Understanding of relational and non-relational databases and basic SQL.
  • Proficiency with at least one of these scripting languages: Perl / Python / Ruby / shell script.

 

PREFERRED QUALIFICATIONS

 

  • Experience building data pipelines from application databases.
  • Experience with AWS services - S3, Redshift, Spectrum, EMR, Glue, Athena, ELK, etc.
  • Experience working with data lakes.
  • Experience providing technical leadership and mentoring other engineers on best practices in the data engineering space.
  • Sharp problem-solving skills and the ability to resolve ambiguous requirements.
  • Experience working with Big Data.
  • Knowledge of and experience working with Hive and the Hadoop ecosystem.
  • Knowledge of Spark.
  • Experience working with data science teams.
Scry AI
1 recruiter
Posted by Siddarth Thakur
Remote only
3 - 8 yrs
₹15L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Title: Data Engineer (Azure) (Location: Gurgaon/Hyderabad)

Salary: Competitive as per Industry Standard

We are expanding our Data Engineering Team and hiring passionate professionals with extensive knowledge and experience in building and managing large enterprise data and analytics platforms. We are looking for creative individuals with strong programming skills, who can understand complex business and architectural problems and develop solutions. The individual will work closely with the rest of our data engineering and data science team in implementing and managing scalable smart data lakes, data ingestion platforms, machine learning and NLP-based analytics platforms, hyper-scale processing clusters, data mining, and search engines.

What You’ll Need:

  • 3+ years of industry experience in creating and managing end-to-end data solutions, optimal data processing pipelines, and architecture dealing with large-volume big data sets of varied data types.
  • Proficiency in Python, Linux, and shell scripting.
  • Strong knowledge of working with PySpark DataFrames and Pandas DataFrames for writing efficient pre-processing and other data manipulation tasks.
  • Strong experience in developing the infrastructure required for data ingestion and for optimal extraction, transformation, and loading of data from a wide variety of data sources, using tools like Azure Data Factory and Azure Databricks (or Jupyter notebooks / Google Colab, or other similar tools).

  • Working knowledge of GitHub or other version control tools.
  • Experience with creating RESTful web services and API platforms.
  • Work with data science and infrastructure team members to implement practical machine learning solutions and pipelines in production.
  • Experience with cloud providers like Azure/AWS/GCP.
  • Experience with SQL and NoSQL databases: MySQL, Azure Cosmos DB, HBase, MongoDB, Elasticsearch, etc.
  • Experience with stream-processing systems (Spark Streaming, Kafka, etc.) and working experience with event-driven architectures.
  • Strong analytic skills related to working with unstructured datasets.
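The core idea behind the stream-processing systems listed here is windowed aggregation. A minimal sketch of tumbling (fixed, non-overlapping) windows in plain Python, using integer timestamps and event keys for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping windows --
    the tumbling-window aggregation at the heart of Spark Streaming jobs."""
    windows = defaultdict(int)
    for ts, key in events:
        # Snap each timestamp down to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        windows[(window_start, key)] += 1
    return dict(windows)

events = [(0, "login"), (3, "login"), (7, "click"), (11, "login")]
print(tumbling_window_counts(events, 5))
# {(0, 'login'): 2, (5, 'click'): 1, (10, 'login'): 1}
```

Spark Structured Streaming expresses the same thing declaratively with `groupBy(window(col("ts"), "5 seconds"), col("key")).count()`.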

 

Good to have (to filter or prioritize candidates)

  • Experience with testing libraries such as pytest for writing unit tests for the developed code.
  • Knowledge of machine learning algorithms and libraries would be good to have; implementation experience would be an added advantage.
  • Knowledge and experience of data lakes, Docker, and Kubernetes would be good to have.
  • Knowledge of Azure Functions, Elasticsearch, etc. would be good to have.
  • Experience with model versioning (MLflow) and data versioning would be beneficial.
  • Experience with microservices libraries, or with Python libraries such as Flask for hosting ML services and models, would be great.
Fragma Data Systems
8 recruiters
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore)
1 - 5 yrs
₹5L - ₹15L / yr
Spark
PySpark
Big Data
Python
SQL
+1 more
Must-Have Skills:
  • Good experience in PySpark, including DataFrame core functions and Spark SQL.
  • Good experience with SQL databases; able to write queries of fair complexity.
  • Excellent experience in Big Data programming for data transformation and aggregations.
  • Good at ELT architecture: business-rules processing and data extraction from the data lake into data streams for business consumption.
  • Good customer communication skills.
  • Good analytical skills.
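Business-rules processing over extracted data, as mentioned in the skills here, can be sketched as a small pass/quarantine filter; the rules and record shape below are illustrative:

```python
# Each rule is (name, predicate); records that pass every rule flow on
# to the business stream, the rest are quarantined for review.
RULES = [
    ("positive_amount", lambda r: r["amount"] > 0),
    ("known_currency", lambda r: r["currency"] in {"INR", "USD", "EUR"}),
]

def apply_rules(records):
    passed, quarantined = [], []
    for rec in records:
        failures = [name for name, pred in RULES if not pred(rec)]
        (quarantined if failures else passed).append(rec)
    return passed, quarantined

records = [
    {"amount": 120.0, "currency": "INR"},
    {"amount": -5.0, "currency": "USD"},
    {"amount": 8.0, "currency": "XYZ"},
]
ok, bad = apply_rules(records)
print(len(ok), len(bad))  # 1 2
```

In a PySpark pipeline each predicate would typically become a `filter`/`when` expression over a DataFrame rather than a Python lambda.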
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse / Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub / IoT Hub.
  • Experience in migrating on-premise data warehouses to data platforms on the Azure cloud.
  • Designing and implementing data engineering, ingestion, and transformation functions.
  • Azure Synapse or Azure SQL Data Warehouse.
  • Spark on Azure, available in HDInsight and Databricks.
Skandhanshi Infra Projects
Posted by Nagraj Kumar
Bengaluru (Bangalore)
2 - 8 yrs
₹6L - ₹25L / yr
Scala
Apache Spark
Big Data
Preferred Skills:
• Should have a minimum of 3 years of experience in software development
• Strong experience in Spark and Scala development
• Strong experience with AWS cloud platform services
• Should have good knowledge of and exposure to Amazon EMR and EC2
• Should be good with databases like DynamoDB and Snowflake
Largest Analytical firm
Bengaluru (Bangalore)
4 - 14 yrs
₹10L - ₹28L / yr
Hadoop
Big Data
Spark
Scala
Python
+2 more

• Advanced Spark programming skills
• Advanced Python skills
• Data engineering ETL and ELT skills
• Expertise in streaming data
• Experience with the Hadoop ecosystem
• Basic understanding of cloud platforms
• Technical design skills, alternative approaches
• Hands-on expertise in writing UDFs
• Hands-on expertise in streaming data ingestion
• Able to independently tune Spark scripts
• Advanced debugging skills and large-volume data handling
• Independently break down and plan technical tasks
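A UDF is just a row-level function that the engine applies per record; in PySpark it would be registered with `pyspark.sql.functions.udf` and applied to a DataFrame column. A minimal sketch in plain Python with an invented masking rule:

```python
def mask_email(value):
    """Row-level transform of the kind registered as a Spark UDF
    (in PySpark: udf(mask_email, StringType()))."""
    local, _, domain = value.partition("@")
    # Keep the first character of the local part, mask the rest.
    return local[0] + "***@" + domain if domain else value

rows = ["alice@example.com", "bob@test.org", "not-an-email"]
print([mask_email(r) for r in rows])
# ['a***@example.com', 'b***@test.org', 'not-an-email']
```

Note that Python UDFs in Spark incur serialization overhead, which is exactly where the tuning skills listed here come in: built-in column functions are preferred when an equivalent exists.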
