Subject Matter Expert
at Amazon India

Posted by Tanya Thakur
Chennai
5 - 12 yrs
₹10L - ₹22L / yr
Full time
Skills
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
Spotfire
Six Sigma Black Belt
SQL
Quality management
Quality improvement

BASIC QUALIFICATIONS

 

  • 2+ years of experience in program or project management
  • Project-handling experience using Six Sigma/Lean processes
  • Experience interpreting data to make business recommendations
  • Bachelor’s degree or higher in Operations, Business, Project Management, or Engineering
  • 5-10 years’ experience in project management/customer satisfaction, with a proven success record
  • Understanding of basic and systematic approaches to managing projects/programs
  • Structured problem-solving approach to identifying and fixing problems
  • Open-minded, creative, and proactive thinking
  • A pioneering mindset: inventing and making a difference
  • Understanding of customer experience: listening to the customer's voice and working backwards to improve business processes and operations
  • Six Sigma certification

 

PREFERRED QUALIFICATIONS

 

  • Automation skills, with experience in advanced SQL, Python, and Tableau

About Amazon India

Founded: 2015
Size: 100-1000
Stage: Profitable

About

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. We are driven by the excitement of building technologies, inventing products, and providing services that change lives. We embrace new ways of doing things, make decisions quickly, and are not afraid to fail. We have the scope and capabilities of a large company, and the spirit and heart of a small one.

 

Together, Amazonians research and develop new technologies from Amazon Web Services to Alexa on behalf of our customers: shoppers, sellers, content creators, and developers around the world.

 

Our mission is to be Earth's most customer-centric company. Our actions, goals, projects, programs, and inventions begin and end with the customer top of mind.

 

You'll also hear us say that at Amazon, it's always "Day 1." What do we mean? That our approach remains the same as it was on Amazon's very first day - to make smart, fast decisions, stay nimble, invent, and focus on delighting our customers.


Similar jobs

Information Solution Provider Company
Agency job
via Jobdost by Sathish Kumar
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 7 yrs
₹10L - ₹20L / yr
PowerBI
Data modeling
SQL
SSIS
SSAS
  • Good experience with Power BI visualizations and DAX queries in Power BI
  • Experience implementing Row-Level Security
  • Able to understand data models and implement simple-to-medium data models
  • Quick learner, able to pick up the application's data design and processes
  • Expert in SQL; able to analyze the current ETL/SSIS process
  • Hands-on experience in data modeling
  • Data warehouse development and work with SSIS & SSAS (good to have)
  • Able to lead a team of 2-3 developers and own deliverables
Remote only
5 - 14 yrs
₹20L - ₹30L / yr
Data Warehouse (DWH)
Informatica
ETL
SQL
Shell Scripting
+5 more

Responsibilities

 

Analyze the mapping documents provided by the business.

Perform technical analysis and ensure that all the standard requirements have been met.

Develop using Ab Initio and Oracle SQL, delivering in a bi-weekly sprint model.

Develop code following all coding and performance-optimization standards.

Participate in peer reviews and document the review comments.

Support upper-environment deployments and production failure analysis and fixes.

 

Eligibility

 

5+ years of overall IT experience.

3+ years of relevant experience in Ab Initio development, with knowledge of UNIX shell scripting, Oracle SQL/PL-SQL, and Autosys.

Hands-on development experience with various Ab Initio components such as Rollup, Scan, Join, Partition by Key, Partition by Round-robin, Gather, Merge, Interleave, Lookup, etc.

Extensive experience with developing, unit testing, and break/fix support of Ab Initio ETL.

Experience with Oracle SQL database programming, SQL performance tuning, and relational model analysis.

Good knowledge of developing UNIX scripts, Oracle SQL/PL-SQL, and Autosys JIL scripts.
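
The Rollup and Join components named above are, at heart, keyed aggregation and keyed joins. As a rough orientation only, a Rollup-style sum aggregation can be sketched in plain Python with the standard library (the record fields below are invented for illustration, not taken from any actual graph):

```python
from itertools import groupby
from operator import itemgetter

# Illustrative records; in Ab Initio these would flow through a graph.
records = [
    {"acct": "A1", "amt": 100},
    {"acct": "A2", "amt": 50},
    {"acct": "A1", "amt": 25},
]

def rollup(rows, key, agg_field):
    """Group rows by `key` and sum `agg_field` -- roughly what an
    Ab Initio Rollup component does with a sum aggregation."""
    rows = sorted(rows, key=itemgetter(key))  # Rollup expects sorted/partitioned input
    return {k: sum(r[agg_field] for r in grp)
            for k, grp in groupby(rows, key=itemgetter(key))}

print(rollup(records, "acct", "amt"))  # {'A1': 125, 'A2': 50}
```

The sort before `groupby` mirrors the sorted/partitioned input the real component expects.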

Series A-funded, Silicon Valley-based BI startup
Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
4 - 6 yrs
₹30L - ₹45L / yr
Data engineering
Data Engineer
Scala
Data Warehouse (DWH)
Big Data
+7 more
The company is the leader in capturing technographics-powered buying intent, helping companies uncover the 3% of active buyers in their target market. It evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market and sales intelligence. Its customers have access to the buying patterns and contact information of more than 17 million companies and 70 million decision makers across the world.

Role – Data Engineer

Responsibilities

  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for the Data Lake/Data Warehouse.
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
  • Streamline existing, and introduce enhanced, reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.

Requirements
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow.
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
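
Apache Airflow, required above, schedules tasks as a DAG of dependencies. The core idea can be sketched with the standard library's `graphlib` (task names are invented for illustration; this shows the concept, not Airflow's API):

```python
from graphlib import TopologicalSorter

# A tiny ETL DAG: extract -> transform -> load, plus an independent
# validate step that must also finish before load. Names are illustrative.
dag = {
    "transform": {"extract"},
    "validate": {"extract"},
    "load": {"transform", "validate"},
}

# static_order yields every task after all of its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)  # extract first, load last, e.g. ['extract', 'transform', 'validate', 'load']
```

Airflow adds scheduling, retries, and distributed execution on top of exactly this dependency ordering.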
Posted by Kalaithendral Nagarajan
Chennai
4 - 9 yrs
₹4L - ₹12L / yr
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
+5 more

4-8 years of overall experience.

  • 1-2 years’ experience in Azure Data Factory: scheduling jobs in flows and ADF pipelines, performance tuning, error logging, etc.
  • 1+ years of experience with Power BI: designing and developing reports, dashboards, metrics, and visualizations in Power BI.
  • (Required) Participate in video-conferencing calls: daily stand-up meetings and all-day work with team members on cloud-migration planning, development, and support.
  • Proficiency in relational database concepts and design using star schema, Azure Data Warehouse, and Data Vault.
  • Requires 2-3 years of experience with SQL scripting (merge, joins, and stored procedures) and best practices.
  • Knowledge of deploying and running SSIS packages in Azure.
  • Knowledge of Azure Databricks.
  • Ability to write and execute complex SQL queries and stored procedures.
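
The "merge" scripting called for above is essentially an upsert. A minimal sketch from Python, using SQLite's `ON CONFLICT` syntax as a stand-in for a T-SQL `MERGE` (the table and data are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO dim_customer VALUES (1, 'Asha')")

# MERGE-like upsert: insert new keys, update existing ones in one statement.
rows = [(1, "Asha K"), (2, "Ravi")]
con.executemany(
    """INSERT INTO dim_customer (id, name) VALUES (?, ?)
       ON CONFLICT(id) DO UPDATE SET name = excluded.name""",
    rows,
)

print(con.execute("SELECT id, name FROM dim_customer ORDER BY id").fetchall())
# [(1, 'Asha K'), (2, 'Ravi')]
```

On Azure SQL / SQL Server the same load would typically be a `MERGE` statement against a staging table; the upsert semantics are the same.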
Posted by Piyush Palkar
Kuala Lumpur
3 - 5 yrs
₹20L - ₹25L / yr
SQL
Python
Business Analysis
Statistical Modeling
MS-Office
+6 more

Your Day-to-Day

  1. Derive insights and drive major strategic projects to improve business metrics, taking responsibility for cost efficiency and revenue management across the country.
  2. Perform market research and post-mortem analyses on competitor expansion and market-penetration patterns.
  3. Provide in-depth business analysis and data insights for internal stakeholders to help improve the business. Derive and launch projects to reduce the gaps between targeted and projected business metrics.
  4. Optimize Carsome's C2B and B2C customer-acquisition and dealer-retention funnels. Work closely with the Marketing and Tech teams to create, produce, and implement creative digital marketing campaigns and drive CRM initiatives and strategies.
  5. Analyse revenue flows and process large datasets to gather process insights, and propose process-improvement ideas for Carsome across Southeast Asia.
  6. Lead commercial projects and process mapping, from conceptualization to completion, to build or re-engineer business models, tools, and processes.
  7. Experience with analyses and insights on unit economics, COGS, and P&L is preferred, but not mandatory.
  8. Use Business Intelligence and Data Science tools to answer the appropriate business problems using SQL, Tableau, or Python.
  9. Coordinate with the HQ Data Insights team and manage internal stakeholders across departments to ensure the smooth delivery of strategic projects.
  10. Work across different departments/functions (BI, DE, Tech, Pricing, Finance, Operations, Marketing, CS, CX), as well as on high-impact projects, and support business-expansion initiatives.
Your Know-Know


  • At least a Bachelor's degree in Accounting/Finance/Business or the equivalent.
  • 3-5 years of experience in strategy/consulting/analytical/project-management roles; experience in e-commerce, start-ups, or unicorns (Cars24, Ola, Swiggy, Flipkart, OYO) or entrepreneurial experience preferred, plus at least 2 years of experience leading a team.
  • Top-notch academics from a Tier-1 college (IIM/IIT/NIT).
  • Must have SQL/PostgreSQL/Tableau experience.
  • Excellent market research, reporting, and analytical skills, including carrying out weekly and monthly reporting.
  • Experience working with a Data/Business Intelligence team.
  • Analytical mindset, with the ability to present data in a structured and informative way.
  • Enjoys a fast-paced environment and can align business objectives with product priorities.
  • Good to have: financial modelling, developing financial forecasts, and developing a financial/strategic plan or framework.
Posted by Raunak Swarnkar
Bengaluru (Bangalore)
0 - 2 yrs
₹10L - ₹15L / yr
Python
PySpark
SQL
pandas
Cloud Computing
+2 more

BRIEF DESCRIPTION:

At least 1 year of Python, Spark, SQL, and data engineering experience

Primary skillset: PySpark, Scala/Python/Spark, Azure Synapse, S3, Redshift/Snowflake

Relevant experience: legacy ETL job migration to AWS Glue / a Python & Spark combination

 

ROLE SCOPE:

Reverse engineer the existing/legacy ETL jobs

Create the workflow diagrams and review the logic diagrams with Tech Leads

Write equivalent logic in Python & Spark

Unit test the Glue jobs and certify the data loads before passing to system testing

Follow the best practices, enable appropriate audit & control mechanism

Analytically skillful, identify the root causes quickly and efficiently debug issues

Take ownership of the deliverables and support the deployments

 

REQUIREMENTS:

Create data pipelines for data integration into cloud stacks, e.g. Azure Synapse

Code data processing jobs in Azure Synapse Analytics, Python, and Spark

Experience in dealing with structured, semi-structured, and unstructured data in batch and real-time environments.

Should be able to process .json, .parquet, and .avro files
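
Of those three formats, JSON is the one the Python standard library handles natively (Parquet and Avro usually need third-party libraries such as pyarrow or fastavro). A minimal sketch of streaming newline-delimited JSON records:

```python
import io
import json

# Newline-delimited JSON (JSONL), a common batch-landing format.
# An inline buffer stands in for a real file here.
raw = io.StringIO('{"id": 1, "amt": 10.0}\n{"id": 2, "amt": 5.5}\n')

total = 0.0
for line in raw:          # stream line by line; avoids loading the file whole
    rec = json.loads(line)
    total += rec["amt"]

print(total)  # 15.5
```

The same loop shape works against an open file handle, which is why JSONL is convenient for batch and streaming jobs alike.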

 

PREFERRED BACKGROUND:

Tier-1/2 candidates from IITs/NITs/IIITs

However, relevant experience and a learning attitude take precedence

Posted by Vijay Hemnath
Chennai
2 - 5 yrs
₹6L - ₹25L / yr
Big Data
Hadoop
Apache Hive
Scala
Spark
+12 more

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role will collaborate closely with the Data Science team and help them build and deploy machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines, and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional/non-functional business requirements, fostering data-driven decision making across the organization.
  • Design and develop distributed, high-volume, high-velocity, multi-threaded event-processing systems.
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
  • Perform root-cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence, guaranteeing high availability and platform stability.
  • Collaborate closely with the Data Science team and help them build and deploy machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipelines, Big Data analytics, and data warehousing.
  • Experience with SQL/NoSQL, schema design, and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, and MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Strong skills in PySpark (Python & Spark): the ability to create, manage, and manipulate Spark DataFrames, with expertise in Spark query tuning and performance optimization.
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Knowledge of shell scripting and Python scripting.
  • High proficiency in database skills (e.g., complex SQL) for data preparation, cleaning, and wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra/MongoDB.
  • Solid experience in all phases of the software development lifecycle: plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test-Driven Development).
  • Experience building and deploying applications on on-premise and cloud-based infrastructure.
  • A good understanding of the machine learning landscape and concepts.
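
The Hadoop/MapReduce stack listed above rests on a simple model: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. A conceptual word-count sketch in plain Python (this illustrates the model, not how Hadoop or Spark is actually invoked):

```python
from collections import defaultdict

docs = ["big data big", "data lake"]  # toy input documents

# Map: emit (word, 1) pairs from each document.
mapped = [(w, 1) for doc in docs for w in doc.split()]

# Shuffle: group emitted values by key.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: aggregate each group independently.
counts = {word: sum(vals) for word, vals in groups.items()}
print(counts)  # {'big': 2, 'data': 2, 'lake': 1}
```

Because each reduce group is independent, the real frameworks can run map and reduce tasks in parallel across a cluster, which is the whole point of the model.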

 

Qualifications and Experience:

Engineering and postgraduate candidates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.

Certifications:

Good to have at least one of the Certifications listed here:

    AZ-900 - Azure Fundamentals

    DP-200, DP-201, DP-203, AZ-204 - Data Engineering

    AZ-400 - DevOps certification

Mumbai, Pune, Hyderabad, Gurugram
2 - 6 yrs
₹4L - ₹7L / yr
Data Analytics
Python
R Programming
SAS
Machine Learning (ML)
+1 more

JOB DESCRIPTION

  • 2 to 6 years of experience in imparting technical training/mentoring
  • Very strong concepts of Data Analytics
  • Hands-on and training experience in Python, advanced Python, R programming, SAS, and machine learning
  • Good knowledge of SQL and advanced SQL
  • Basic knowledge of statistics
  • Good grounding in operating systems (GNU/Linux) and network fundamentals
  • Knowledge of MS Office (Excel/Word/PowerPoint)
  • Self-motivated and passionate about technology
  • Excellent analytical and logical skills; a team player
  • Exceptional communication and presentation skills
  • Good aptitude skills are preferred

Responsibilities:

  • Ability to quickly learn any new technology and impart the same to other employees
  • Ability to resolve all technical queries of students
  • Conduct training sessions and drive placement-driven quality in the training
  • Able to work independently without the supervision of a senior person
  • Participate in reviews/meetings

Qualification:                                                                               

  • UG: Any Graduate in IT/Computer Science, B.Tech/B.E. – IT/ Computers
  • PG: MCA/MS/MSC – Computer Science
  • Any Graduate/ Post graduate, provided they are certified in similar courses

ABOUT EDUBRIDGE

EduBridge is an Equal Opportunity employer and we believe in building a meritorious culture where everyone is recognized for their skills and contribution.

Launched in 2009, EduBridge Learning is a workforce development and skilling organization with 50+ training academies in 18 states pan-India. The organization has been providing skilled manpower to corporates for over 10 years and is a leader in its space. We have trained over a lakh semi-urban and economically underprivileged youth in relevant life skills and industry-specific skills, and provided placements in over 500 companies. Our latest product, E-ON, is committed to complementing our training delivery with an online training platform, enabling students to learn anywhere and anytime.

To know more about EduBridge please visit: http://www.edubridgeindia.com/

You can also visit us on Facebook and LinkedIn for our latest initiatives and products.

A Chemical & Purifier Company headquartered in the US.
Agency job
via Multi Recruit by Fiona RKS
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹18L / yr
Azure Data Factory
Azure Data Engineer
SQL
SQL Azure
+2 more
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Author data services using a variety of programming languages
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centres and Azure regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Work in an Agile environment with Scrum teams.
  • Ensure data quality and help in achieving data governance.


Basic Qualifications
  • 2+ years of experience in a Data Engineer role
  • Undergraduate degree required (Graduate degree preferred) in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
  • Experience using the following software/tools:
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases
  • Experience with data pipeline and workflow management tools
  • Experience with Azure cloud services: ADLS, ADF, ADLA, AAS
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • Understanding of ELT and ETL patterns and when to use each. Understanding of data models and transforming data into the models
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets
  • Strong analytic skills related to working with unstructured datasets
  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
  • Experience supporting and working with cross-functional teams in a dynamic environment
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹6L - ₹12L / yr
ETL
Python
Amazon Web Services (AWS)
SQL
PostgreSQL

We are actively seeking a Senior Data Engineer experienced in building data pipelines and integrations from third-party data sources by writing custom automated ETL jobs in Python. The role will work in partnership with other members of the Business Analytics team to support the development and implementation of new and existing data warehouse solutions for our clients. This includes designing the database import/export processes used to generate client data warehouse deliverables.

 

Requirements
  • 2+ years of experience as an ETL developer, with strong data-architecture knowledge of data warehousing concepts, SQL development and optimization, and operational support models.
  • Experience using Python to automate ETL/data-processing jobs.
  • Design and develop ETL and data-processing solutions using data integration tools, Python scripts, and AWS/Azure/on-premise environments.
  • Experience with (or willingness to learn) AWS Glue / AWS Data Pipeline / Azure Data Factory for data integration.
  • Develop and create transformation queries, views, and stored procedures for ETL processes and process automation.
  • Document data mappings, data dictionaries, processes, programs, and solutions as per established standards for data governance.
  • Work with the data analytics team to assess and troubleshoot potential data-quality issues at key intake points, such as validating control totals at intake and then upon transformation, and transparently build lessons learned into future data-quality assessments.
  • Solid experience with data modeling, business logic, and RESTful APIs.
  • Solid experience in the Linux environment.
  • Experience with NoSQL/PostgreSQL preferred.
  • Experience working with databases such as MySQL, NoSQL, and Postgres, and enterprise-level connectivity experience (such as connecting over TLS and through proxies).
  • Experience with NGINX and SSL.
  • Performance-tune data processes and SQL queries, and recommend and implement data-process optimization and query-tuning techniques.
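
The custom Python ETL jobs described above follow a common extract-transform-load shape. A minimal self-contained sketch, using an inline CSV and SQLite as stand-ins for the real source and warehouse (all names and data are illustrative):

```python
import csv
import io
import sqlite3

# Extract: a CSV source (inline here; in practice a file or API export).
src = io.StringIO("id,amount\n1,100\n2,250\n")
rows = list(csv.DictReader(src))

# Transform: cast types and derive a flag field.
txns = [(int(r["id"]), int(r["amount"]), int(r["amount"]) > 200) for r in rows]

# Load: into a warehouse table (SQLite stands in for Postgres/MySQL).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE txn (id INTEGER PRIMARY KEY, amount INTEGER, is_large BOOLEAN)")
con.executemany("INSERT INTO txn VALUES (?, ?, ?)", txns)

print(con.execute("SELECT COUNT(*), SUM(amount) FROM txn").fetchone())  # (2, 350)
```

A production job would add the pieces the listing asks for: control-total validation at intake, logging, and idempotent re-runs.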