AWS Glue Developer


Agency job
6 - 8 yrs
₹10L - ₹15L / yr
Noida, Bengaluru (Bangalore), Chennai, Hyderabad
Skills
AWS Glue
SQL
Python
Spark
Hadoop
Big Data
Data engineering
PySpark
DMS
Data integration
Data Ops

AWS Glue Developer 

Work Experience: 6 to 8 Years

Work Location: Noida, Bangalore, Chennai & Hyderabad

Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, Data Integration, and Data Ops

Job Reference ID: BT/F21/IND


Job Description:

Design, build and configure applications to meet business process and application requirements.


Responsibilities:

➢ 7 years of work experience with ETL, data modelling, and data architecture.

➢ Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark.

➢ Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions.

➢ Orchestrate workflows using Airflow.


Technical Experience:

Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines. Experience building data pipelines and applications to stream and process large datasets at low latency.


➢ Enhancements, new development, defect resolution, and production support of big data ETL pipelines using AWS native services.

➢ Create data pipeline architecture by designing and implementing data ingestion solutions.

➢ Integrate data sets using AWS services such as Glue, Lambda, and Airflow.

➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.

➢ Author ETL processes using Python and PySpark.

➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.

➢ Monitor ETL processes using CloudWatch Events.

➢ Work in collaboration with other teams; good communication is a must.

➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
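The bullets above describe authoring ETL in Python and SQL against AWS data stores. Below is a minimal, stdlib-only sketch of the extract-transform-load shape; sqlite3 stands in for a warehouse such as Redshift, and the table and column names are invented for illustration:

```python
import sqlite3

def run_etl(conn):
    """Toy ETL: extract raw orders, transform (filter + aggregate), load a summary table."""
    cur = conn.cursor()
    # Extract: read raw rows (a real Glue job would read from S3 via the Data Catalog).
    rows = cur.execute("SELECT customer, amount FROM raw_orders").fetchall()
    # Transform: drop invalid amounts and aggregate per customer.
    totals = {}
    for customer, amount in rows:
        if amount is not None and amount > 0:
            totals[customer] = totals.get(customer, 0) + amount
    # Load: write the summary to a target table (stand-in for a Redshift/Athena target).
    cur.execute("CREATE TABLE IF NOT EXISTS order_totals (customer TEXT PRIMARY KEY, total REAL)")
    cur.executemany("INSERT OR REPLACE INTO order_totals VALUES (?, ?)", totals.items())
    conn.commit()
    return totals

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                     [("a", 10.0), ("a", 5.0), ("b", -1.0), ("b", 7.5)])
    print(run_etl(conn))  # {'a': 15.0, 'b': 7.5}
```

In a real Glue job the extract and load steps would go through Glue DynamicFrames or Spark DataFrames; this sketch only shows the pipeline shape.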


Professional Attributes:

➢ Experience operating very large data warehouses or data lakes. Expert-level skills in writing and optimizing SQL. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.

➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.

➢ Expertise in S3, RDS, Redshift, Kinesis, EC2 clusters highly desired.


Qualification:

➢ Degree in Computer Science, Computer Engineering or equivalent.


Salary: Commensurate with experience and demonstrated competence


About A fast growing Big Data company


Similar jobs

contract intelligence platform
Pune
12 - 20 yrs
₹50L - ₹100L / yr
Data Science
Natural Language Processing (NLP)
Machine Learning (ML)
Algorithms
Python
+5 more
Responsibilities
  • Partner with business stakeholders to translate business objectives into clearly defined analytical projects.
  • Identify opportunities for text analytics and NLP to enhance the core product platform, select the best machine learning techniques for the specific business problem and then build the models that solve the problem.
  • Own the end-to-end process, from recognizing the problem to implementing the solution.
  • Define the variables and their inter-relationships and extract the data from our data repositories, leveraging infrastructure including Cloud computing solutions and relational database environments.
  • Build predictive models that are accurate and robust and that help our customers to utilize the core platform to the maximum extent.

Skills and Qualification
  • 12 to 15 yrs of experience.
  • An advanced degree in predictive analytics, machine learning, or artificial intelligence; or a degree in programming plus significant experience with text analytics/NLP. Candidates should have a strong background in machine learning (unsupervised and supervised techniques), and in particular an excellent understanding of machine learning techniques and algorithms such as k-NN, Naive Bayes, SVM, decision forests, logistic regression, MLPs, RNNs, etc.
  • Experience with text mining, parsing, and classification using state-of-the-art techniques.
  • Experience with information retrieval, Natural Language Processing, Natural Language Understanding, and neural language modeling.
  • Ability to evaluate the quality of ML models and to define the right performance metrics for models in accordance with the requirements of the core platform.
  • Experience in the Python data science ecosystem: Pandas, NumPy, SciPy, scikit-learn, NLTK, Gensim, etc.
  • Excellent verbal and written communication skills, particularly the ability to present technical results and recommendations to both technical and non-technical audiences.
  • Ability to perform high-level work both independently and collaboratively as a project member or leader on multiple projects.
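The algorithm list above includes k-NN. As a toy illustration of the idea (a from-scratch sketch with invented sample data; real work would use scikit-learn with feature scaling and cross-validation):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of (features, label) pairs; plain Euclidean distance,
    with none of the scaling or tie-breaking refinements a real library adds."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Tiny illustrative dataset: two well-separated clusters.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.2, 0.1)))  # a
print(knn_predict(train, (1.0, 0.9)))  # b
```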
Gurugram, Bengaluru (Bangalore), Mumbai
4 - 9 yrs
Best in industry
Machine Learning (ML)
Data Science
Media analytics
SQL
Python
+4 more

Our client combines Adtech and Martech platform strategy with data science & data engineering expertise, helping our clients make advertising work better for people.

 
Key Role:
  • Act as primary day-to-day contact on analytics to agency-client leads
  • Develop bespoke analytics proposals for presentation to agencies & clients, for delivery within the teams
  • Ensure delivery of projects and services across the analytics team meets our stakeholder requirements (time, quality, cost)
  • Hands-on platform work to perform data pre-processing, involving data transformation as well as data cleaning
  • Ensure data quality and integrity
  • Interpret and analyse data problems
  • Build analytic systems and predictive models
  • Increase the performance and accuracy of machine learning algorithms through fine-tuning and further optimization
  • Visualize data and create reports
  • Experiment with new models and techniques
  • Align data projects with organizational goals


Requirements

  • Minimum 6-7 years' experience working in Data Science
  • Prior experience as a Data Scientist within digital media is desirable
  • Solid understanding of machine learning
  • A degree in a quantitative field (e.g. economics, computer science, mathematics, statistics, engineering, physics, etc.)
  • Experience with SQL/ Big Query/GMP tech stack / Clean rooms such as ADH
  • A knack for statistical analysis and predictive modelling
  • Good knowledge of R, Python
  • Experience with SQL, MYSQL, PostgreSQL databases
  • Knowledge of data management and visualization techniques
  • Hands-on experience with BI/visual analytics tools like Power BI, Tableau, or Data Studio
  • Evidence of technical comfort and good understanding of internet functionality desirable
  • Analytical pedigree - evidence of having approached problems from a mathematical perspective and working through to a solution in a logical way
  • Proactive and results-oriented
  • A positive, can-do attitude with a thirst to continually learn new things
  • An ability to work independently and collaboratively with a wide range of teams
  • Excellent communication skills, both written and oral
Arting Digital
Pragati Bhardwaj
Posted by Pragati Bhardwaj
Navi Mumbai
6 - 10 yrs
₹15L - ₹18L / yr
Data Science
Machine Learning (ML)
Python
SQL
AWS
+3 more

Title: Data Scientist

Experience: 6 years

Work Mode: Onsite

Primary Skills: Data Science, SQL, Python, Data Modelling, Azure, AWS, Banking Domain (BFSI/NBFC)

Qualification: Any

Roles & Responsibilities:

 

1. Acquiring, cleaning, and preprocessing raw data for analysis.

2. Utilizing statistical methods and tools for analyzing and interpreting complex datasets.

3. Developing and implementing machine learning models for predictive analysis.

4. Creating visualizations to effectively communicate insights to both technical and non-technical stakeholders.

5. Collaborating with cross-functional teams, including data engineers, business analysts, and domain experts.

6. Evaluating and optimizing the performance of machine learning models for accuracy and efficiency.

7. Identifying patterns and trends within data to inform business decision-making.

8. Staying updated on the latest advancements in data science, machine learning, and relevant technologies.

 

Requirements:

 

1. Experience with modelling techniques such as linear regression, clustering, and classification.

2. Must have a passion for data, structured or unstructured. 0.6–5 years of hands-on experience with Python and SQL is a must.

3. Should have sound experience in data mining, data analysis, and machine learning techniques.

4. Excellent critical thinking, verbal, and written communication skills.

5. Ability and desire to work in a proactive, highly engaging, high-pressure, client-service environment.

6. Good presentation skills.
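For the linear regression technique mentioned in point 1, here is a minimal closed-form ordinary-least-squares sketch (one feature, pure Python, invented sample data; real projects would use scikit-learn or statsmodels):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form, single feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y over variance of x.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx  # intercept passes through the mean point
    return a, b

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]                 # exactly y = 1 + 2x
a, b = fit_line(xs, ys)
print(round(a, 6), round(b, 6))   # 1.0 2.0
```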


AxionConnect Infosolutions Pvt Ltd
Shweta Sharma
Posted by Shweta Sharma
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur, Chennai
5.5 - 7 yrs
₹20L - ₹25L / yr
Django
Flask
Snowflake
Snow flake schema
SQL
+4 more

Job Location: Hyderabad/Bangalore/Chennai/Pune/Nagpur

Notice period: Immediate - 15 days

 

Python Developer with Snowflake

 

Job Description:


  1. 5.5+ years of strong Python development experience with Snowflake.
  2. Strong hands-on experience with SQL; able to write complex queries.
  3. Strong understanding of how to connect to Snowflake using Python; able to handle any type of file.
  4. Development of data analysis and data processing engines using Python.
  5. Good experience in data transformation using Python.
  6. Experience in Snowflake data loads using Python.
  7. Experience creating user-defined functions in Snowflake.
  8. SnowSQL implementation.
  9. Knowledge of query performance tuning is an added advantage.
  10. Good understanding of data warehouse (DWH) concepts.
  11. Interpret/analyze business requirements and functional specifications.
  12. Good to have DBT, Fivetran, and AWS knowledge.
Graasai
Vineet A
Posted by Vineet A
Pune
3 - 7 yrs
₹10L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

Graas uses predictive AI to turbo-charge growth for eCommerce businesses; we are “Growth-as-a-Service”. Graas integrates traditional data silos and applies a machine-learning AI engine, acting as an in-house data scientist to predict trends and give real-time insights and actionable recommendations for brands. The platform can also turn insights into action by seamlessly executing these recommendations across marketplace storefronts, brand.coms, social and conversational commerce, performance marketing, inventory management, warehousing, and last-mile logistics, all of which impact a brand’s bottom line, driving profitable growth.


Roles & Responsibilities:

Work on implementation of real-time and batch data pipelines for disparate data sources.

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.
  • Build and maintain an analytics layer that utilizes the underlying data to generate dashboards and provide actionable insights.
  • Identify improvement areas in the current data system and implement optimizations.
  • Work on specific areas of data governance including metadata management and data quality management.
  • Participate in discussions with Product Management and Business stakeholders to understand functional requirements and interact with other cross-functional teams as needed to develop, test, and release features.
  • Develop Proof-of-Concepts to validate new technology solutions or advancements.
  • Work in an Agile Scrum team and help with planning, scoping and creation of technical solutions for the new product capabilities, through to continuous delivery to production.
  • Work on building intelligent systems using various AI/ML algorithms. 

 

Desired Experience/Skill:

 

  • Must have worked on Analytics Applications involving Data Lakes, Data Warehouses and Reporting Implementations.
  • Experience with private and public cloud architectures with pros/cons.
  • Ability to write robust code in Python and SQL for data processing. Experience in libraries such as Pandas is a must; knowledge of one of the frameworks such as Django or Flask is a plus.
  • Experience in implementing data processing pipelines using AWS services: Kinesis, Lambda, Redshift/Snowflake, RDS.
  • Knowledge of Kafka and Redis is preferred.
  • Experience in the design and implementation of real-time and batch pipelines. Knowledge of Airflow is preferred.
  • Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
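Several bullets above concern real-time and batch pipelines (Kinesis, Kafka, Airflow). The core of such stream processing is windowed aggregation; here is a toy, stdlib-only sketch (the event shapes and names are invented for illustration):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed-size windows and count per key.

    A toy version of the aggregation a Kinesis/Kafka consumer might perform;
    real pipelines also handle late and out-of-order events via watermarks."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each timestamp to the start of its window.
        windows[ts // window_secs * window_secs][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(5, "click"), (42, "view"), (61, "click"), (119, "click"), (120, "view")]
print(tumbling_window_counts(events))
# {0: {'click': 1, 'view': 1}, 60: {'click': 2}, 120: {'view': 1}}
```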
leading pharmacy provider
Agency job
via Econolytics by Jyotsna Econolytics
Noida, NCR (Delhi | Gurgaon | Noida)
4 - 10 yrs
₹18L - ₹24L / yr
Data Science
R Programming
Python
Algorithms
Predictive modelling
Job Description:

• Help build a Data Science team engaged in researching, designing, implementing, and deploying full-stack, scalable data analytics and machine learning solutions to address various business issues.
• Model complex algorithms, discover insights, and identify business opportunities through the use of algorithmic, statistical, visualization, and mining techniques.
• Translate business requirements into quick prototypes and enable the development of big data capabilities that drive business outcomes.
• Take responsibility for data governance and for defining data collection and collation guidelines.
• Advise, guide, and train junior data engineers.

Must Have:

• 4+ years of experience in a leadership role as a Data Scientist
• Preferably from the retail, manufacturing, or healthcare industry (not mandatory)
• Willing to start from scratch and build up a team of Data Scientists
• Open to taking up challenges with end-to-end ownership
• Confident, with excellent communication skills, and a good decision maker
NCR (Delhi | Gurgaon | Noida)
2 - 12 yrs
₹25L - ₹40L / yr
Data governance
DevOps
Data integration
Data engineering
skill iconPython
+14 more
Data Platforms (Data Integration) is responsible for envisioning, building, and operating the Bank’s data integration platforms. The successful candidate will work out of Gurgaon as part of a high-performing team distributed across our two development centres – Copenhagen and Gurugram. The individual must be driven, passionate about technology, and display a level of customer service that is second to none.

Roles & Responsibilities

  • Designing and delivering a best-in-class, highly scalable data governance platform
  • Improving processes and applying best practices
  • Contribute to all scrum ceremonies, assuming the role of ‘scrum master’ on a rotational basis
  • Development, management, and operation of our infrastructure to ensure it is easy to deploy, scalable, secure, and fault-tolerant
  • Flexible on working hours as per business needs
GitHub
Nataliia Mediana
Posted by Nataliia Mediana
Remote only
3 - 15 yrs
$50K - $80K / yr
Data Science
Data Scientist
Data engineering
Financial analysis
Finance
+8 more

We are a nascent quantitative hedge fund led by an MIT PhD and Math Olympiad medallist, offering opportunities to grow with us as we build out the team. Our fund has world-class investors and big data experts as part of the GP, top-notch ML experts as advisers to the fund, plus equity funding to grow the team, license data, and scale the data processing.

We are interested in researching and taking live a variety of quantitative strategies based on historic and live market data, alternative datasets, social media data (both audio and video), and stock fundamental data.

You would join, and, if qualified, lead a growing team of data scientists and researchers, and be responsible for a complete lifecycle of quantitative strategy implementation and trading.

Requirements:

  • At least 3 years of relevant ML experience
  • Graduation date: 2018 or earlier
  • 3-5 years of experience in high-level Python programming
  • Master's degree (or PhD) in quantitative disciplines such as Statistics, Mathematics, Physics, or Computer Science from top universities
  • Good knowledge of applied and theoretical statistics, linear algebra, and machine learning techniques
  • Ability to leverage financial and statistical insights to research, explore, and harness a large collection of quantitative strategies and financial datasets in order to build strong predictive models
  • Should take ownership of the research, design, development, and implementation of strategy development, and communicate effectively with other teammates
  • Prior experience and good knowledge of the lifecycle and pitfalls of algorithmic strategy development and modelling
  • Good practical knowledge of financial statements, value investing, and portfolio and risk management techniques
  • A proven ability to lead and drive innovation to solve challenges and roadblocks in project completion
  • A valid GitHub profile with some activity in it

Bonus to have:

  • Experience in storing and retrieving data from large and complex time-series databases
  • Very good practical knowledge of time-series modelling and forecasting (ARIMA, ARCH, and stochastic modelling)
  • Prior experience in optimizing and backtesting quantitative strategies, doing return and risk attribution, and feature/factor evaluation
  • Knowledge of the AWS/cloud ecosystem is an added plus (EC2, Lambda, EKS, SageMaker, etc.)
  • Knowledge of REST APIs and data extraction and cleaning techniques
  • Good to have experience in PySpark or other big data programming/parallel computing
  • Familiarity with derivatives, and knowledge of multiple asset classes along with equities
  • Any progress towards CFA or FRM is a bonus
  • Average tenure of at least 1.5 years in a company
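The time-series items above mention ARIMA-family modelling. As a toy illustration of the autoregressive idea at its core, here is an AR(1) fit by least squares without an intercept, on invented data (real backtests would use statsmodels or similar):

```python
def fit_ar1(series):
    """Estimate phi in x_t = phi * x_{t-1} + noise by least squares (no intercept)."""
    pairs = list(zip(series[:-1], series[1:]))
    # Closed-form slope through the origin: sum(prev*cur) / sum(prev^2).
    return sum(prev * cur for prev, cur in pairs) / sum(prev * prev for prev, _ in pairs)

def forecast(last, phi, steps=3):
    """Iterate the fitted recurrence forward from the last observation."""
    out = []
    for _ in range(steps):
        last = phi * last
        out.append(last)
    return out

series = [8.0, 4.0, 2.0, 1.0]      # exactly halves each step: phi = 0.5
phi = fit_ar1(series)
print(phi)                          # 0.5
print(forecast(series[-1], phi))    # [0.5, 0.25, 0.125]
```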
Largest Analytical firm
Bengaluru (Bangalore)
4 - 14 yrs
₹10L - ₹28L / yr
Hadoop
Big Data
Spark
Scala
Python
+2 more

·        Advanced Spark Programming Skills

·        Advanced Python Skills

·        Data Engineering ETL and ELT Skills

·        Expertise on Streaming data

·        Experience in Hadoop eco system

·        Basic understanding of Cloud Platforms

·        Technical design skills; able to propose alternative approaches

·        Hands-on expertise in writing UDFs

·        Hands-on expertise in streaming data ingestion

·        Able to independently tune Spark scripts

·        Advanced debugging skills and large-volume data handling

·        Independently break down and plan technical tasks

Yulu Bikes
Keerthana k
Posted by Keerthana k
Bengaluru (Bangalore)
1 - 2 yrs
₹7L - ₹12L / yr
Data Science
Data Analytics
SQL
Python
Data warehousing
+2 more
Skill Set 
SQL, Python, NumPy, Pandas; knowledge of Hive and data warehousing concepts will be a plus.

JD 

- Strong analytical skills with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports & visualisations.

- Work with management to prioritise business KPIs and information needs. Locate and define new process-improvement opportunities.

- Technical expertise with data models, database design and development, data mining and segmentation techniques

- Proven success in a collaborative, team-oriented environment

- Working experience with geospatial data will be a plus.