
5+ Glue semantics Jobs in India

Apply to 5+ Glue semantics Jobs on CutShort.io. Find your next job, effortlessly. Browse Glue semantics Jobs and apply today!

GroundTruth
Posted by Laxmi Pal
Remote only
4 - 6 yrs
₹14L - ₹16L / yr
Python
SQL
Data Analytics
Data Structures
Amazon Web Services (AWS)
+4 more

Role Characteristics:

The Analytics team provides analytical support to multiple stakeholders (Product, Engineering, Business Development, Ad Operations) by developing scalable analytical solutions, identifying problems, defining KPIs, and monitoring them to measure the impact and success of product improvements and process changes. This is an exciting and challenging role in which you will work with large data sets, be exposed to cutting-edge analytical techniques, work with the latest AWS analytics infrastructure (Redshift, S3, Athena), and gain experience in using location data to drive businesses. Working in a dynamic start-up environment will give you significant opportunities for growth within the organization. A successful applicant will be passionate about technology and about developing a deep understanding of human behavior in the real world. They will also have excellent communication skills, be able to synthesize and present complex information, and be a fast learner.


You Will:

  • Perform root-cause analysis with minimal guidance to identify the reasons behind sudden changes or anomalies in metrics
  • Understand the objective/business context of tasks and seek clarity by collaborating with different stakeholders (such as Product and Engineering)
  • Derive insights and put them together to build a story that solves a given problem
  • Suggest process improvements such as script optimization and automation of repetitive tasks
  • Create and automate reports and dashboards in Python to track given metrics per requirements (a minimal sketch follows this list)
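For flavor only, here is a minimal sketch of the kind of report automation that last bullet describes, assuming a hypothetical raw-events CSV; the file, column, and metric names are illustrative, not details of the role.

# Minimal, hypothetical sketch: roll raw events up into a daily metrics report.
# File and column names (events.csv, event_time, user_id, event_id) are placeholders.
import pandas as pd

def build_daily_report(events_csv: str, out_csv: str) -> pd.DataFrame:
    events = pd.read_csv(events_csv, parse_dates=["event_time"])
    daily = (
        events
        .assign(day=events["event_time"].dt.date)
        .groupby("day")
        .agg(active_users=("user_id", "nunique"),
             events=("event_id", "count"))
        .reset_index()
    )
    daily.to_csv(out_csv, index=False)  # output feeds a dashboard or scheduled email
    return daily

if __name__ == "__main__":
    build_daily_report("events.csv", "daily_metrics.csv")

A report job like this would typically run on a scheduler (cron, Airflow, or a Lambda trigger) so the output refreshes without manual effort.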


Technical Skills (Must have)

  • B.Tech degree in Computer Science, Statistics, Mathematics, Economics, or a related field
  • 4-6 years of experience working with data and conducting statistical and/or numerical analysis
  • Ability to write SQL code
  • Scripting and automation using Python
  • Hands-on experience with a data visualization tool such as Looker, Tableau, or QuickSight
  • Basic to advanced understanding of statistics


Other Skills (Must have)

  • Willingness and ability to quickly learn about new businesses, database technologies, and analysis techniques
  • Strong oral and written communication skills
  • Ability to recognize patterns/trends and draw insights from them


Preferred Qualifications (Nice to have)

  • Experience working with large datasets
  • Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3)
  • Hands-on experience with AWS services such as Lambda, Step Functions, Glue, and EMR, plus exposure to PySpark


What we offer

At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.

  • Parental leave (maternity and paternity)
  • Flexible Time Offs (Earned Leaves, Sick Leaves, Birthday leave, Bereavement leave & Company Holidays)
  • In Office Daily Catered Lunch
  • Fully stocked snacks/beverages
  • Health cover for hospitalization, covering both the nuclear family and parents
  • Tele-med for free doctor consultations, plus discounts on health checkups and medicines
  • Wellness/Gym Reimbursement
  • Pet Expense Reimbursement
  • Childcare Expenses and reimbursements
  • Employee assistance program
  • Employee referral program
  • Education reimbursement program
  • Skill development program
  • Cell phone reimbursement (Mobile Subsidy program)
  • Internet reimbursement
  • Birthday treat reimbursement
  • Employee Provident Fund scheme offering tax-saving options such as VPF, with employee and employer contributions of up to 12% of basic salary
  • Creche reimbursement
  • Co-working space reimbursement
  • NPS employer match
  • Meal card for tax benefit
  • Special benefits on salary account


We are an equal opportunity employer and value diversity, inclusion and equity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Deqode
Posted by Alisha Das
Pune, Mumbai, Bengaluru (Bangalore), Chennai
4 - 7 yrs
₹5L - ₹15L / yr
Amazon Web Services (AWS)
Python
PySpark
Glue semantics
Amazon Redshift
+1 more

Job Overview:

We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.

Key Responsibilities:

  • Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools (a minimal sketch appears after this list).
  • Integrate data from diverse sources and ensure its quality, consistency, and reliability.
  • Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
  • Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
  • Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
  • Automate data validation, transformation, and loading processes to support real-time and batch data processing.
  • Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
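To make the Glue/PySpark expectation concrete, here is a minimal sketch of a Glue ETL job of the kind described above, assuming it runs inside the AWS Glue job runtime; the database, table, and bucket names are placeholders, not details from this posting.

# Minimal, hypothetical AWS Glue ETL job (runs in the Glue job runtime).
# Database, table, and bucket names are illustrative placeholders.
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table from the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders")

# Drop rows missing the primary key, then write Parquet back to S3.
clean = DynamicFrame.fromDF(
    source.toDF().dropna(subset=["order_id"]), glue_context, "clean")
glue_context.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()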

Required Skills:

  • 5 to 7 years of hands-on experience in data engineering roles.
  • Strong proficiency in Python and PySpark for data transformation and scripting.
  • Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
  • Solid understanding of SQL and database optimization techniques.
  • Experience working with large-scale data pipelines and high-volume data environments.
  • Good knowledge of data modeling, warehousing, and performance tuning.

Preferred/Good to Have:

  • Experience with workflow orchestration tools like Airflow or Step Functions.
  • Familiarity with CI/CD for data pipelines.
  • Knowledge of data governance and security best practices on AWS.
NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Gurugram
7 - 15 yrs
₹5L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+20 more

Job Title : Tech Lead - Data Engineering (AWS, 7+ Years)

Location : Gurugram

Employment Type : Full-Time


Job Summary :

Seeking a Tech Lead - Data Engineering with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.


Key Responsibilities :

  • Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
  • Maintain data lakes & warehouses for analytics.
  • Ensure data integrity through quality checks (a minimal sketch follows this list).
  • Collaborate with data scientists & engineers to deliver solutions.
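As one illustrative (not prescriptive) example of the quality checks mentioned above, here is a minimal PySpark sketch; the S3 path, column name, and 1% threshold are all assumptions for illustration.

# Minimal, hypothetical data-quality gate in PySpark.
# Path, column name, and threshold are illustrative placeholders.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/orders/")

# Fail the pipeline if more than 1% of rows are missing the primary key.
total = df.count()
missing = df.filter(F.col("order_id").isNull()).count()
if total and missing / total > 0.01:
    raise ValueError(f"order_id null rate {missing / total:.2%} exceeds 1%")
print(f"Quality check passed: {total} rows, {missing} missing keys")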

Qualifications :

  • 7+ Years in Data Engineering.
  • Expertise in AWS services, SQL, Python, Spark, Kafka.
  • Experience with CI/CD, DevOps practices.
  • Strong problem-solving skills.

Preferred Skills :

  • Experience with Snowflake, Databricks.
  • Knowledge of BI tools (Tableau, Power BI).
  • Healthcare/Insurance domain experience is a plus.


Hyderabad
5 - 15 yrs
₹4L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more
Big Data Engineer


  • Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
  • Experience in developing Lambda functions with AWS Lambda
  • Expertise with Spark/PySpark: candidates should be hands-on with PySpark code and able to perform transformations with Spark (a minimal sketch follows this list)
  • Ability to code in Python and Scala
  • Snowflake experience will be a plus
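For flavor, here is a minimal hypothetical PySpark transformation of the kind this listing asks for; the input path and column names are illustrative placeholders.

# Minimal, hypothetical PySpark transformation: daily revenue rollup.
# Input path and column names are illustrative placeholders.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Derive an order date from the timestamp and aggregate revenue per day.
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"),
         F.countDistinct("customer_id").alias("customers"))
)
daily_revenue.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/daily_revenue/")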
Urgent Openings with one of our clients

Hyderabad
3 - 7 yrs
₹3L - ₹10L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more


Experience : 3 to 7 Years
Number of Positions : 20

Job Location : Hyderabad

Notice : 30 Days

 

1. Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight

2. Experience in developing Lambda functions with AWS Lambda (a minimal sketch appears at the end of this listing)

3. Expertise with Spark/PySpark: candidates should be hands-on with PySpark code and able to perform transformations with Spark

4. Ability to code in Python and Scala

5. Snowflake experience will be a plus

 

Hadoop and Hive are good to have; a working understanding of them is enough.
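As an illustration of the Lambda piece of such a pipeline (item 2 above), here is a minimal hypothetical handler that kicks off an Athena query; the database, table, and S3 output location are placeholders.

# Minimal, hypothetical AWS Lambda handler that starts an Athena query.
# Database, table, and S3 output location are illustrative placeholders.
import boto3

athena = boto3.client("athena")

def lambda_handler(event, context):
    # Start the query; Athena writes results to the given S3 location,
    # where a QuickSight dataset (or another job) can pick them up.
    response = athena.start_query_execution(
        QueryString=(
            "SELECT order_date, SUM(amount) AS revenue "
            "FROM curated_db.daily_revenue GROUP BY order_date"
        ),
        QueryExecutionContext={"Database": "curated_db"},
        ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
    )
    return {"queryExecutionId": response["QueryExecutionId"]}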
