Snowflake schema jobs

29+ Snowflake schema Jobs in India

Apply to 29+ Snowflake schema Jobs on CutShort.io. Find your next job, effortlessly. Browse Snowflake schema Jobs and apply today!

A leading Data & Analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage. We are the partner of choice for enterprises on their digital transformation journey. Our teams offer solutions and services at the intersection of Advanced Data, Analytics, and AI.

Agency job
via HyrHub by Shwetha Naik
Bengaluru (Bangalore), Mangalore
5 - 8 yrs
₹18L - ₹22L / yr
MuleSoft
Datawarehousing
Collibra
Snowflake schema

Role: MuleSoft Developer.

Skills: MuleSoft, Snowflake, data lineage experience using Collibra, data warehousing

Location: Bangalore/Mangalore (hybrid)

Notice period: immediate to 15 days


Requirements:

• 5+ years of experience in MuleSoft
• Strong experience in Snowflake
• Data lineage experience using Collibra
• Data warehousing: experience developing data warehouse, data mart, and data lake solutions
• Problem-solving: strong analytical skills and the ability to combine data from different sources
• Communication: excellent communication skills to work effectively with cross-functional teams
• Good to have: experience with open-source data ingestion tools


Wissen Technology

Posted by Vijayalakshmi Selvaraj
Hyderabad
5 - 10 yrs
Best in industry
SQL
Amazon Web Services (AWS)
ETL
Windows Azure
Snowflake schema

Responsibilities include:  

  • Develop and maintain data validation logic in our proprietary Control Framework tool 
  • Actively participate in business requirement elaboration and functional design sessions to develop an understanding of our Operational teams’ analytical needs, key data flows and sources 
  • Assist Operational teams in the buildout of Checklists and event monitoring workflows within our Enterprise Control Framework platform 
  • Build effective working relationships with Operational users, Reporting and IT development teams and business partners across the organization 
  • Conduct interviews, generate user stories, develop scenarios and workflow analyses  
  • Contribute to the definition of reporting solutions that empower Operational teams to make immediate decisions as to the best course of action 
  • Perform some business user acceptance testing 
  • Provide production support and troubleshooting for existing operational dashboards  
  • Conduct regular demos and training of new features for the stakeholder community 



Qualifications  

  • Bachelor’s degree or equivalent in Business, Accounting, Finance, MIS, Information Technology or related field of study 
  • Minimum 5 years of SQL experience required
  • Experience querying data on cloud platforms (AWS/ Azure/ Snowflake) required 
  • Exceptional problem solving and analytical skills, attention to detail and organization 
  • Able to independently troubleshoot and gather supporting evidence  
  • Prior experience developing within a BI reporting tool (e.g. Spotfire, Tableau, Looker, Information Builders) a plus  
  • Database Management and ETL development experience a plus 
  • Self-motivated, self-assured, and self-managed  
  • Able to multi-task to meet time-driven goals  
  • Asset management experience, including investment operations, a plus



Wissen Technology

Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore), Pune
5 - 10 yrs
Best in industry
ETL
SQL
Snowflake schema
Data Warehouse (DWH)

Job Description for QA Engineer:

  • 6-10 years of experience in ETL testing, Snowflake, and DWH concepts.
  • Strong SQL knowledge and debugging skills are a must.
  • Experience with Azure and Snowflake testing is a plus.
  • Experience with Qlik Replicate and Compose (Change Data Capture) tools is considered a plus.
  • Strong data warehousing concepts; ETL tools like Talend Cloud Data Integration and Pentaho/Kettle.
  • Experience with JIRA and Xray defect management tools is good to have.
  • Exposure to financial domain knowledge is considered a plus.
  • Testing data readiness (data quality) and addressing code or data issues.
  • Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions.
  • Strong collaborative experience across regions (APAC, EMEA, and NA) to effectively and efficiently identify root causes of code/data issues and arrive at permanent solutions.
  • Prior experience with State Street and Charles River Development (CRD) considered a plus.
  • Experience with tools such as PowerPoint, Excel, and SQL.
  • Exposure to third-party data providers such as Bloomberg, Reuters, MSCI, and other rating agencies is a plus.
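The "testing data readiness" bullet above is commonly implemented as a source-vs-target reconciliation after a load. A minimal sketch in Python, using sqlite3 as a stand-in for the warehouse; table and column names are invented for illustration:

```python
import sqlite3

def reconcile(conn, source, target, key_col, amount_col):
    """Return simple data-readiness checks between a source and target table."""
    cur = conn.cursor()
    checks = {}
    # 1. Row-count parity: did every source row arrive?
    src_n = cur.execute(f"SELECT COUNT(*) FROM {source}").fetchone()[0]
    tgt_n = cur.execute(f"SELECT COUNT(*) FROM {target}").fetchone()[0]
    checks["row_count_match"] = src_n == tgt_n
    # 2. Aggregate parity: a cheap checksum on a numeric column.
    src_sum = cur.execute(f"SELECT COALESCE(SUM({amount_col}), 0) FROM {source}").fetchone()[0]
    tgt_sum = cur.execute(f"SELECT COALESCE(SUM({amount_col}), 0) FROM {target}").fetchone()[0]
    checks["amount_sum_match"] = src_sum == tgt_sum
    # 3. Key completeness: count source keys missing from the target.
    checks["missing_keys"] = cur.execute(
        f"SELECT COUNT(*) FROM {source} s LEFT JOIN {target} t "
        f"ON s.{key_col} = t.{key_col} WHERE t.{key_col} IS NULL"
    ).fetchone()[0]
    return checks

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_trades (trade_id INTEGER, amount REAL);
    CREATE TABLE tgt_trades (trade_id INTEGER, amount REAL);
    INSERT INTO src_trades VALUES (1, 100.0), (2, 250.5), (3, 75.0);
    INSERT INTO tgt_trades VALUES (1, 100.0), (2, 250.5);  -- row 3 failed to load
""")
result = reconcile(conn, "src_trades", "tgt_trades", "trade_id", "amount")
print(result)  # row counts differ and trade_id 3 is reported missing
```

In a real Snowflake/Azure setup the same three checks would run as SQL against the staging and target schemas; the structure of the test is identical.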


Key Attributes include:

  • Team player with a professional and positive approach
  • Creative, innovative and able to think outside of the box
  • Strong attention to detail during root cause analysis and defect issue resolution
  • Self-motivated & self-sufficient
  • Effective communicator both written and verbal
  • Brings a high level of energy with enthusiasm to generate excitement and motivate the team
  • Able to work under pressure with tight deadlines and/or multiple projects
  • Experience in negotiation and conflict resolution


A leading Data & Analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage. We are the partner of choice for enterprises on their digital transformation journey. Our teams offer solutions and services at the intersection of Advanced Data, Analytics, and AI.

Agency job
via HyrHub by Shwetha Naik
Bengaluru (Bangalore), Mangalore
5 - 8 yrs
₹15L - ₹20L / yr
Snowflake schema
Python
Amazon Web Services (AWS)
SQL

Skills: ITSM methodologies, Python, Snowflake, and AWS. Open to 18x5 support as well.

Notice period: immediate to 30 days

• Bachelor’s degree in Computer Science, Software Engineering, or a related field
• 5+ years of hands-on experience in ITSM methodologies
• 3+ years of experience in SQL, Snowflake, and Python development
• 2+ years of hands-on experience in Snowflake DW
• Good communication and client/stakeholder management skills
• Willing to work across multiple time zones and manage teams based offshore

Client based at Pune location.

Agency job
Pune
5 - 9 yrs
₹18L - ₹30L / yr
Data Engineer
Python
Datawarehousing
Snowflake schema
Data modeling
+7 more

Skills & Experience:

❖ At least 5+ years of experience as a Data Engineer
❖ Hands-on, in-depth experience with Star/Snowflake schema design, data modeling, data pipelining, and MLOps
❖ Experience with data warehouse technologies (e.g., Snowflake, AWS Redshift)
❖ Experience with AWS data pipelines (Lambda, AWS Glue, Step Functions, etc.)
❖ Proficient in SQL
❖ At least one major programming language (Python/Java)
❖ Experience with data analysis tools such as Looker or Tableau
❖ Experience with Pandas, NumPy, scikit-learn, and Jupyter notebooks preferred
❖ Familiarity with Git, GitHub, and JIRA
❖ Ability to locate and resolve data quality issues
❖ Demonstrated end-to-end data platform support experience

Other Skills:

❖ Individual contributor
❖ Hands-on with programming
❖ Strong analytical and problem-solving skills with meticulous attention to detail
❖ A positive mindset and can-do attitude
❖ A great team player
❖ An eye for detail
❖ Looks for opportunities to simplify, automate tasks, and build reusable components
❖ Ability to judge the suitability of new technologies for solving business problems
❖ Builds strong relationships with analysts, business, and engineering stakeholders
❖ Task prioritization
❖ Familiar with agile methodologies
❖ Fintech or financial services industry experience
❖ Eagerness to learn about the Private Equity/Venture Capital ecosystem and the associated secondary market
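The Star/Snowflake schema design skill above can be sketched minimally: in a star schema each dimension is one denormalized table, while a snowflake schema normalizes dimensions into sub-dimensions, adding a join. The example below uses sqlite3 as a stand-in engine with invented table names:

```python
import sqlite3

# Illustrative snowflake schema: the product dimension references a separate
# category sub-dimension instead of repeating category text on every row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Sub-dimension, split out of the product dimension (the "snowflaking")
    CREATE TABLE dim_category (category_id INTEGER PRIMARY KEY, name TEXT);
    -- Dimension table referencing the sub-dimension
    CREATE TABLE dim_product (
        product_id  INTEGER PRIMARY KEY,
        name        TEXT,
        category_id INTEGER REFERENCES dim_category(category_id)
    );
    -- Fact table: one row per sale, foreign keys into the dimensions
    CREATE TABLE fact_sales (
        sale_id    INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount     REAL
    );
    INSERT INTO dim_category VALUES (1, 'Electronics');
    INSERT INTO dim_product  VALUES (10, 'Headphones', 1);
    INSERT INTO fact_sales   VALUES (100, 10, 50.0), (101, 10, 60.0);
""")

# Queries traverse the extra join that snowflaking introduces.
row = conn.execute("""
    SELECT c.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p  ON f.product_id = p.product_id
    JOIN dim_category c ON p.category_id = c.category_id
    GROUP BY c.name
""").fetchone()
print(row)  # ('Electronics', 110.0)
```

The trade-off being tested in interviews for such roles: snowflaking saves storage and avoids update anomalies in dimensions, at the cost of extra joins at query time.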

Responsibilities:

o Design, develop, and maintain a data platform that is accurate, secure, available, and fast.
o Engineer efficient, adaptable, and scalable data pipelines to process data.
o Integrate and maintain a variety of data sources: different databases, APIs, SaaS applications, files, logs, events, etc.
o Create standardized datasets to service a wide variety of use cases.
o Develop subject-matter expertise in tables, systems, and processes.
o Partner with product and engineering to ensure product changes integrate well with the data platform.
o Partner with diverse stakeholder teams, understand their challenges, and empower them with data solutions to meet their goals.
o Perform data quality checks on data sources, and automate and maintain a quality control capability.

Wissen Technology

Posted by Sukanya Mohan
Bengaluru (Bangalore)
8 - 15 yrs
Best in industry
Snowflake schema
Python
PySpark
databricks

Responsibilities:

  • Lead the design, development, and implementation of scalable data architectures leveraging Snowflake, Python, PySpark, and Databricks.
  • Collaborate with business stakeholders to understand requirements and translate them into technical specifications and data models.
  • Architect and optimize data pipelines for performance, reliability, and efficiency.
  • Ensure data quality, integrity, and security across all data processes and systems.
  • Provide technical leadership and mentorship to junior team members.
  • Stay abreast of industry trends and best practices in data architecture and analytics.
  • Drive innovation and continuous improvement in data management practices.

Requirements:

  • Bachelor's degree in Computer Science, Information Systems, or a related field. Master's degree preferred.
  • 5+ years of experience in data architecture, data engineering, or a related field.
  • Strong proficiency in Snowflake, including data modeling, performance tuning, and administration.
  • Expertise in Python and PySpark for data processing, manipulation, and analysis.
  • Hands-on experience with Databricks for building and managing data pipelines.
  • Proven leadership experience, with the ability to lead cross-functional teams and drive projects to successful completion.
  • Experience in the banking or insurance domain is highly desirable.
  • Excellent communication skills, with the ability to effectively collaborate with stakeholders at all levels of the organization.
  • Strong problem-solving and analytical skills, with a keen attention to detail.

Benefits:

  • Competitive salary and performance-based incentives.
  • Comprehensive benefits package, including health insurance, retirement plans, and wellness programs.
  • Flexible work arrangements, including remote options.
  • Opportunities for professional development and career advancement.
  • Dynamic and collaborative work environment with a focus on innovation and continuous learning.


Nyteco

Posted by Alokha Raj
Remote only
4 - 6 yrs
₹17L - ₹20L / yr
Data Transformation Tool (DBT)
ETL
SQL
Big Data
Google Cloud Platform (GCP)
+2 more

Join Our Journey

Jules develops an amazing end-to-end solution for recycled materials traders, importers, and exporters, which means a lot of internal, structured data to play with in order to provide reporting, alerting, and insights to end users. With about 200 tables covering all business processes, from order management to payments, including logistics, hedging, and claims, the wealth of insight the data entered into Jules can unlock is massive.


After working on a simple stack made of Postgres, SQL queries, and a visualization solution, the company is now ready to set up its data stack and is only missing you. We are thinking DBT, Redshift or Snowflake, Fivetran, Metabase or Luzmo, etc. We also have an AI team already playing around with text-driven data interaction.


As a Data Engineer at Jules AI, your duties will involve both data engineering and product analytics, enhancing our data ecosystem. You will collaborate with cross-functional teams to design, develop, and sustain data pipelines, and conduct detailed analyses to generate actionable insights.


Roles And Responsibilities:

  • Work with stakeholders to determine data needs, and design and build scalable data pipelines.
  • Develop and sustain ELT processes to guarantee timely and precise data availability for analytical purposes.
  • Construct and oversee large-scale data pipelines that collect data from various sources.
  • Expand and refine our DBT setup for data transformation.
  • Engage with our data platform team to address customer issues.
  • Apply your advanced SQL and big data expertise to develop innovative data solutions.
  • Enhance and debug existing data pipelines for improved performance and reliability.
  • Generate and update dashboards and reports to share analytical results with stakeholders.
  • Implement data quality controls and validation procedures to maintain data accuracy and integrity.
  • Work with various teams to incorporate analytics into product development efforts.
  • Use technologies like Snowflake, DBT, and Fivetran effectively.


Mandatory Qualifications:

  • Hold a Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • Possess at least 4 years of experience in Data Engineering, ETL Building, database management, and Data Warehousing.
  • Demonstrated expertise as an Analytics Engineer or in a similar role.
  • Proficient in SQL, a scripting language (Python), and a data visualization tool.
  • Mandatory experience in working with DBT.
  • Experience in working with Airflow, and cloud platforms like AWS, GCP, or Snowflake.
  • Deep knowledge of ETL/ELT patterns.
  • At least 1 year of experience building data pipelines and leading data warehouse projects.
  • Experienced in mentoring data professionals across all levels, from junior to senior.
  • Proven track record in establishing new data engineering processes and navigating through ambiguity.
  • Preferred Skills: Knowledge of Snowflake and reverse ETL tools is advantageous.


Grow, Develop, and Thrive With Us

  • Global Collaboration: Work with a dynamic team that’s making an impact across the globe, in the recycling industry and beyond. We have customers in India, Singapore, the United States, Mexico, Germany, France, and more.
  • Professional Growth: A highway toward setting up a great data team and evolving into a leader.
  • Flexible Work Environment: Competitive compensation, performance-based rewards, health benefits, paid time off, and flexible working hours to support your well-being.


Apply to us directly: https://nyteco.keka.com/careers/jobdetails/41442

Wissen Technology

Posted by Gloria Dsouza
Bengaluru (Bangalore)
5 - 12 yrs
₹15L - ₹15L / yr
Snowflake schema
SQL
Python
Spark
Data Warehouse (DWH)
  • As a data engineer, you will build systems that collect, manage, and convert raw data into usable information for data scientists and business analysts to interpret. Your ultimate goal is to make data accessible so organizations can optimize their performance.
  • Work closely with PMs and business analysts to build and improve data pipelines, and identify and model business objects
  • Write scripts implementing data transformation, data structures, and metadata to bring structure to partially unstructured data and improve data quality
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL
  • Own data pipelines: monitoring, testing, validating, and ensuring meaningful data exists in the data warehouse with a high level of data quality
  • Strong analytical skills with the ability to collect, organise, analyse, and disseminate significant amounts of information with attention to detail and accuracy
  • Create long-term and short-term design solutions through collaboration with colleagues
  • Proactive in experimenting with new tools
  • Strong programming skills in Python
  • Skillset: Python, SQL, ETL frameworks, PySpark, and Snowflake
  • Strong communication and interpersonal skills to interact with senior-level management regarding the implementation of changes
  • Willingness to learn and eagerness to contribute to projects
  • Designing the data warehouse and the most appropriate DB schema for the data product
  • Positive attitude and proactive problem-solving mindset
  • Experience building data pipelines and connectors
  • Knowledge of AWS cloud services preferred
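The bullet about bringing structure to partially unstructured data can be sketched as follows. This is a stand-alone illustration with invented field names, not the team's actual pipeline: loosely shaped event records are coerced onto a fixed schema, with metadata recording which fields could not be recovered:

```python
import json

SCHEMA = ("user_id", "event", "amount")

def structure_record(raw: dict) -> dict:
    """Coerce a loosely shaped record into a fixed schema, tracking quality."""
    out = {
        # Tolerate alternate key spellings seen in semi-structured feeds.
        "user_id": raw.get("user_id") or raw.get("userId"),
        "event": (raw.get("event") or "unknown").strip().lower(),
        "amount": None,
    }
    # Amounts may arrive as strings; normalize to float where possible.
    try:
        out["amount"] = float(raw["amount"])
    except (KeyError, TypeError, ValueError):
        pass
    # Metadata: which schema fields could not be recovered from this record.
    out["_missing"] = [k for k in SCHEMA if out[k] in (None, "unknown")]
    return out

raw_lines = [
    '{"userId": 7, "event": " Login ", "amount": "12.50"}',
    '{"user_id": 8, "amount": "oops"}',
]
structured = [structure_record(json.loads(line)) for line in raw_lines]
print(structured[0]["event"], structured[0]["amount"])  # login 12.5
print(structured[1]["_missing"])  # ['event', 'amount']
```

In production the same pattern would run inside a PySpark UDF or a Snowflake staging step; the `_missing` metadata feeds the data-quality monitoring the listing mentions.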


India-based IT service organization

Agency job
via People First Consultants by Aishwarya KA
Chennai, Tirunelveli
5 - 7 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Greetings!

We are looking for a data engineer for one of our premium clients, for their Chennai and Tirunelveli locations.

Required Education/Experience

● Bachelor’s degree in Computer Science or a related field
● 5-7 years’ experience in the following:
● Snowflake and Databricks management
● Python and AWS Lambda
● Scala and/or Java
● Data integration services, SQL, and ELT (Extract, Load, Transform)
● Azure or AWS for development and deployment
● Jira or a similar tool during the SDLC
● Experience managing a codebase using a code repository in Git/GitHub or Bitbucket
● Experience working with a data warehouse
● Familiarity with structured and semi-structured data formats, including JSON, Avro, ORC, Parquet, and XML
● Exposure to working in an agile environment


AxionConnect Infosolutions Pvt Ltd
Posted by Shweta Sharma
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur, Chennai
5.5 - 7 yrs
₹20L - ₹25L / yr
Django
Flask
Snowflake
Snowflake schema
SQL
+4 more

Job Location: Hyderabad/Bangalore/Chennai/Pune/Nagpur

Notice period: Immediate - 15 days

 

1.      Python Developer with Snowflake

 

Job Description:

  1. 5.5+ years of strong Python development experience with Snowflake.
  2. Strong hands-on experience with SQL and the ability to write complex queries.
  3. Strong understanding of how to connect to Snowflake using Python; should be able to handle any type of file.
  4. Development of data analysis and data processing engines using Python.
  5. Good experience in data transformation using Python.
  6. Experience in Snowflake data loads using Python.
  7. Experience creating user-defined functions in Snowflake.
  8. SnowSQL implementation.
  9. Knowledge of query performance tuning is an added advantage.
  10. Good understanding of data warehouse (DWH) concepts.
  11. Interpret/analyze business requirements and functional specifications.
  12. Good to have: DBT, Fivetran, and AWS knowledge.
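Point 3's "handle any type of file" usually means normalizing heterogeneous inputs (CSV, JSON lines, etc.) into one row shape before loading into Snowflake. A minimal stdlib sketch; file names and payloads are hypothetical, and the actual load step (e.g. a COPY INTO from a stage) is omitted:

```python
import csv
import io
import json

def rows_from_file(name: str, text: str):
    """Yield dict rows from a file payload, dispatching on the extension."""
    if name.endswith(".csv"):
        yield from csv.DictReader(io.StringIO(text))
    elif name.endswith((".json", ".jsonl")):
        for line in text.splitlines():
            if line.strip():
                yield json.loads(line)
    else:
        raise ValueError(f"unsupported file type: {name}")

def transform(row: dict) -> dict:
    """Example transformation: uppercase country codes, cast amounts to float."""
    return {"country": row["country"].upper(), "amount": float(row["amount"])}

csv_payload = "country,amount\nin,10\nus,20\n"
json_payload = '{"country": "de", "amount": "30"}\n'

staged = [transform(r) for r in rows_from_file("a.csv", csv_payload)]
staged += [transform(r) for r in rows_from_file("b.jsonl", json_payload)]
print(staged)
# [{'country': 'IN', 'amount': 10.0}, {'country': 'US', 'amount': 20.0},
#  {'country': 'DE', 'amount': 30.0}]
```

Because every format funnels through the same row shape, the downstream Snowflake load and the transformation logic stay format-agnostic.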
Mobile Programming LLC

Posted by Sukhdeep Singh
Bengaluru (Bangalore)
4 - 6 yrs
₹10L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
Snowflake schema
Snowflake
+5 more

Job Title: AWS-Azure Data Engineer with Snowflake

Location: Bangalore, India

Experience: 4+ years

Budget: 15 to 20 LPA

Notice Period: Immediate joiners or less than 15 days

Job Description:

We are seeking an experienced AWS-Azure Data Engineer with expertise in Snowflake to join our team in Bangalore. As a Data Engineer, you will be responsible for designing, implementing, and maintaining data infrastructure and systems using AWS, Azure, and Snowflake. Your primary focus will be on developing scalable and efficient data pipelines, optimizing data storage and processing, and ensuring the availability and reliability of data for analysis and reporting.

Responsibilities:

  1. Design, develop, and maintain data pipelines on AWS and Azure to ingest, process, and transform data from various sources.
  2. Optimize data storage and processing using cloud-native services and technologies such as AWS S3, AWS Glue, Azure Data Lake Storage, Azure Data Factory, etc.
  3. Implement and manage data warehouse solutions using Snowflake, including schema design, query optimization, and performance tuning.
  4. Collaborate with cross-functional teams to understand data requirements and translate them into scalable and efficient technical solutions.
  5. Ensure data quality and integrity by implementing data validation, cleansing, and transformation processes.
  6. Develop and maintain ETL processes for data integration and migration between different data sources and platforms.
  7. Implement and enforce data governance and security practices, including access control, encryption, and compliance with regulations.
  8. Collaborate with data scientists and analysts to support their data needs and enable advanced analytics and machine learning initiatives.
  9. Monitor and troubleshoot data pipelines and systems to identify and resolve performance issues or data inconsistencies.
  10. Stay updated with the latest advancements in cloud technologies, data engineering best practices, and emerging trends in the industry.
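Responsibility 5 (data validation, cleansing, and transformation) is often implemented as declarative per-record rules, with failing records quarantined rather than loaded. A minimal sketch; the rule names and thresholds below are invented for illustration, not from any real pipeline:

```python
# Declarative validation rules: field name -> predicate over the cleansed value.
RULES = {
    "order_id": lambda v: isinstance(v, int) and v > 0,
    "email": lambda v: isinstance(v, str) and "@" in v,
    "qty": lambda v: isinstance(v, int) and 0 < v <= 1000,
}

def validate(record: dict):
    """Return (cleansed_record, errors). Cleansing here just trims strings."""
    cleansed = {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}
    errors = [field for field, ok in RULES.items() if not ok(cleansed.get(field))]
    return cleansed, errors

good, quarantined = [], []
for rec in [
    {"order_id": 1, "email": " a@x.io ", "qty": 2},
    {"order_id": -5, "email": "broken", "qty": 2000},
]:
    cleansed, errors = validate(rec)
    (good if not errors else quarantined).append((cleansed, errors))

print(len(good), len(quarantined))  # 1 1
```

The same shape scales up directly: in AWS Glue or Snowflake the rules become column expressions, and the quarantine list becomes an error table that feeds the monitoring described in responsibility 9.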

Requirements:

  1. Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
  2. Minimum of 4 years of experience as a Data Engineer, with a focus on AWS, Azure, and Snowflake.
  3. Strong proficiency in data modelling, ETL development, and data integration.
  4. Expertise in cloud platforms such as AWS and Azure, including hands-on experience with data storage and processing services.
  5. In-depth knowledge of Snowflake, including schema design, SQL optimization, and performance tuning.
  6. Experience with scripting languages such as Python or Java for data manipulation and automation tasks.
  7. Familiarity with data governance principles and security best practices.
  8. Strong problem-solving skills and ability to work independently in a fast-paced environment.
  9. Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams and stakeholders.
  10. Immediate joiner or notice period less than 15 days preferred.

If you possess the required skills and are passionate about leveraging AWS, Azure, and Snowflake to build scalable data solutions, we invite you to apply. Please submit your resume and a cover letter highlighting your relevant experience and achievements in the AWS, Azure, and Snowflake domains.

My Client is the world’s largest media investment company.

Agency job
via Merito by Jinita Sumaria
Gurugram
3 - 5 yrs
Best in industry
ETL
Informatica
Data Warehouse (DWH)
Python
SQL
+5 more

The Client is the world’s largest media investment company. Our team of experts supports clients in programmatic, social, paid search, analytics, technology, organic search, affiliate marketing, e-commerce, and across traditional channels. We are currently looking for a Manager Analyst – Analytics to join us. In this role, you will work on various projects for the in-house team across data management, reporting, and analytics.


Responsibility:

 

•       Serve as a subject matter expert on data usage: extraction, manipulation, and inputs for analytics

•       Develop data extraction and manipulation code based on business rules

•       Design and construct data stores and procedures for their maintenance

•       Develop and maintain strong relationships with stakeholders

•       Write high-quality code as per prescribed standards

•       Participate in internal projects as required


Requirements:

 

•       2-5 years of strong experience working with SQL, Python, and ETL development

•       Strong experience writing complex SQL

•       Good communication skills

•       Good experience working with any BI tool, such as Tableau or Power BI

•       Familiarity with various cloud technologies and their offerings within the data specialization and data warehousing

•       Snowflake and AWS are good to have

 

Minimum qualifications:

•       B.Tech/MCA or equivalent preferred

•       Excellent 2+ years of hands-on experience in Big Data, ETL development, and data processing

BlueYonder
Bengaluru (Bangalore), Hyderabad
10 - 14 yrs
Best in industry
Java
J2EE
Spring Boot
Hibernate (Java)
Gradle
+13 more

·      Core responsibilities include analyzing business requirements and designs for accuracy and completeness, and developing and maintaining the relevant products.

·      BlueYonder is seeking a Senior/Principal Architect in the Data Services department (under the Luminate Platform) to act as one of the key technology leaders who build and manage BlueYonder’s technology assets in the Data Platform and Services.

·      This individual will act as a trusted technical advisor and strategic thought leader to the Data Services department. The successful candidate will have the opportunity to lead, participate, guide, and mentor other people in the team on architecture and design in a hands-on manner. You are responsible for technical direction of Data Platform. This position reports to the Global Head, Data Services and will be based in Bangalore, India.

·      Core responsibilities include architecting and designing (along with counterparts and distinguished architects) a ground-up cloud-native (we use Azure) SaaS product in order management and micro-fulfillment.

·      The team currently comprises 60+ global associates across the US, India (COE), and the UK, and is expected to grow rapidly. The incumbent will need leadership qualities to mentor junior and mid-level software associates on our team. This person will lead the Data Platform architecture (streaming and bulk) with Snowflake/Elasticsearch/other tools.

Our current technical environment:

·      Software: Java, Spring Boot, Gradle, Git, Hibernate, REST API, OAuth, Snowflake

·      Application Architecture: Scalable, resilient, event-driven, secure multi-tenant microservices architecture

·      Cloud Architecture: MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD)

·      Frameworks/Others: Kubernetes, Kafka, Elasticsearch, Spark, NoSQL, RDBMS, Spring Boot, Gradle, Git, Ignite

Unthread
Posted by Yashvi Sanghvi
Mumbai
2 - 3 yrs
₹3L - ₹20L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB
+8 more
  • 2+ years of experience working as a React.js developer.
  • In-depth knowledge of JavaScript, CSS, HTML, and other front-end languages.
  • Knowledge of React tools including React.js, Redux, and Material UI.
  • Experience with user interface design and user experience design.
  • Knowledge of performance testing frameworks including Mocha and Jest.
  • Experience with browser-based debugging and performance testing software.
  • Excellent troubleshooting skills.
  • Good project management skills.
  • Developing applications in React, including component design and state management for specific use cases.
  • Experience working with at least one SQL and one NoSQL database (MongoDB, SQL Server, Snowflake, Postgres preferred).
  • Basic experience with the AWS platform.


People Impact

Agency job
via People Impact by Pruthvi K
Remote only
4 - 10 yrs
₹10L - ₹20L / yr
Amazon Redshift
Datawarehousing
Amazon Web Services (AWS)
Snowflake schema
Data Warehouse (DWH)

Job Title: Data Warehouse/Redshift Admin

Location: Remote

Job Description

AWS Redshift Cluster Planning

AWS Redshift Cluster Maintenance

AWS Redshift Cluster Security

AWS Redshift Cluster monitoring.

Experience managing day-to-day operations of provisioning, maintaining backups, DR, and monitoring of AWS Redshift/RDS clusters

Hands-on experience with Query Tuning in high concurrency environment

Expertise setting up and managing AWS Redshift

AWS certifications preferred (e.g., AWS Certified SysOps Administrator)

Hyderabad
4 - 8 yrs
₹6L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
  1. Expertise in building AWS data engineering pipelines with AWS Glue → Athena → QuickSight
  2. Experience developing Lambda functions with AWS Lambda
  3. Expertise with Spark/PySpark: candidates should be hands-on with PySpark code and able to do transformations with Spark
  4. Should be able to code in Python and Scala
  5. Snowflake experience is a plus

 

QUT

Agency job
via Hiringhut Solutions Pvt Ltd by Neha Bhattarai
Bengaluru (Bangalore)
3 - 7 yrs
₹1L - ₹10L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more
What You'll Bring

• 3+ years of experience in big data and data warehousing technologies
• Experience processing and organizing large data sets
• Experience with big data tool sets such as Airflow and Oozie
• Experience working with BigQuery, Snowflake, or other MPP databases, plus Kafka, Azure, GCP, and AWS
• Experience developing in programming languages such as SQL, Python, Java, or Scala
• Experience pulling data from a variety of database systems like SQL Server, MariaDB, and Cassandra (NoSQL)
• Experience working with retail, advertising, or media data at large scale
• Experience with data science engineering and advanced data insights development
• A strong quality proponent who strives to impress with his/her work
• Strong problem-solving skills and the ability to navigate complicated database relationships
• Good written and verbal communication skills; demonstrated ability to work with product management and/or business users to understand their needs
Creating the Data observability space.

Agency job
via Qrata by Rayal Rajan
Bengaluru (Bangalore)
8 - 20 yrs
₹40L - ₹70L / yr
Amazon Web Services (AWS)
Snowflake schema
Google Cloud Platform (GCP)
Microsoft Windows Azure
Architect and build cloud environments with a focus on AWS including the design of production,
staging, QA, and development of cloud infrastructures running in 24×7 environments.
● Most of our deployments are in K8s, You will work with the team to run and manage multiple K8s
environments 24/7
● Implement and oversee all aspects of the cloud environment including provisioning, scale,
monitoring, and security.
● Nurture cloud computing expertise internally and externally to drive cloud adoption.
● Implement systems solutions, and processes needed to manage cloud cost, monitoring, scalability,
and redundancy.
● Ensure all cloud solutions adhere to security and compliance best practices.
● Collaborate with Enterprise Architecture, Data Platform, DevOps, and Integration Teams to ensure
cloud adoption follows standard best practices.
Responsibilities :
● Bachelor’s degree in Computer Science, Computer Engineering or Information Technology or
equivalent experience.
● Experience with Kubernetes on cloud and deployment technologies such as Helm is a major plus
● Expert-level hands-on experience with AWS (Azure and GCP experience is a big plus)
● 10 or more years of experience.
● Minimum of 5 years’ experience building and supporting cloud solutions
Top IT MNC

Agency job
Chennai, Bengaluru (Bangalore), Kochi (Cochin), Coimbatore, Hyderabad, Pune, Kolkata, Noida, Gurugram, Mumbai
5 - 13 yrs
₹8L - ₹20L / yr
Snow flake schema
Python
snowflake
Greetings,

We are looking for a Snowflake developer for one of our premium clients, for their PAN-India locations.
PayU

Posted by Vishakha Sonde
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹20L / yr
Python
ETL
Data engineering
Informatica
SQL

Role: Data Engineer  
Company: PayU

Location: Bangalore/ Mumbai

Experience : 2-5 yrs


About Company:

PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities.

The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks to merchants, allowing consumers to use credit in ways that suit them and enabling a greater number of global citizens to access credit services.

Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa and South East Asia enable us to combine the expertise of high growth companies with our own unique local knowledge and technology to ensure that our customers have access to the best financial services.

India is the biggest market for PayU globally and the company has already invested $400 million in this region in the last 4 years. PayU in its next phase of growth is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We are going to do this through three mechanisms: build; co-build/partner; and select strategic investments.

PayU supports over 350,000+ merchants and millions of consumers making payments online with over 250 payment methods and 1,800+ payment specialists. The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and a huge growth potential for merchants. 

Job responsibilities:

  • Design infrastructure for data, especially for but not limited to consumption in machine learning applications 
  • Define database architecture needed to combine and link data, and ensure integrity across different sources 
  • Ensure performance of data systems for machine learning, from customer-facing web and mobile applications using cutting-edge open-source frameworks, to highly available RESTful services, to back-end Java-based systems 
  • Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data handling techniques if needed 
  • Build data pipelines, includes implementing, testing, and maintaining infrastructural components related to the data engineering stack.
  • Work closely with Data Engineers, ML Engineers and SREs to gather data engineering requirements to prototype, develop, validate and deploy data science and machine learning solutions

Requirements to be successful in this role: 

  • Strong knowledge of and experience in Python, Pandas, data wrangling, ETL processes, statistics, data visualisation, data modelling and Informatica.
  • Strong experience with scalable compute solutions such as Kafka and Snowflake
  • Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions, etc. 
  • Strong experience with data engineering practices (i.e. data ingestion pipelines and ETL) 
  • A good understanding of machine learning methods, algorithms, pipelines, testing practices and frameworks 
  • (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics, or equivalent (preference: DS/AI) 
  • Experience with designing and implementing tools that support sharing of data, code and practices across organizations at scale 
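The pipeline-building responsibilities above boil down to the classic extract-transform-load loop. Here is a deliberately tiny, pure-Python sketch of that shape (the data and field names are invented; a real pipeline would read from databases or streams rather than an inline CSV):

```python
import csv
import io

# Toy extract-transform-load pipeline: the shape of the work, not a real stack.
RAW = """txn_id,amount,currency
t1,100.50,INR
t2,,INR
t3,250.00,USD
"""

def extract(raw_csv):
    """Read raw rows from a CSV source (stand-in for a DB, file drop or stream)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Type-cast fields and drop rows that fail a basic quality rule."""
    clean = []
    for row in rows:
        if not row["amount"]:  # quality rule: amount must be present
            continue
        clean.append({"txn_id": row["txn_id"],
                      "amount": float(row["amount"]),
                      "currency": row["currency"]})
    return clean

def load(rows, warehouse):
    """Upsert rows into an in-memory 'warehouse' table keyed by txn_id."""
    for row in rows:
        warehouse[row["txn_id"]] = row

warehouse = {}
load(transform(extract(RAW)), warehouse)
```

In production the same three stages would typically be separate Airflow tasks, with the load step writing to a warehouse such as Snowflake instead of a dict.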
Product and Service based company

Agency job
via Jobdost by Sathish Kumar
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snow flake schema
Python
Spark

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices.

  • Scripting languages - Python & PySpark
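One file-based ingestion best practice implied by the requirements above is laying out the S3 data lake with Hive-style partition keys in the object path, which AWS Glue crawlers and Spark/Parquet readers can then discover as table partitions. A small sketch (the bucket prefix, table and file names are illustrative):

```python
from datetime import date

def partition_key(prefix, table, dt, filename):
    """Build a Hive-style partitioned S3 object key; AWS Glue crawlers and
    Spark/Parquet readers discover year/month/day as table partitions."""
    return (f"{prefix}/{table}/"
            f"year={dt.year}/month={dt.month:02d}/day={dt.day:02d}/"
            f"{filename}")

key = partition_key("raw", "transactions", date(2024, 1, 5), "part-0000.parquet")
```

Writing ingestion output under such keys lets downstream engines prune partitions by date instead of scanning the whole lake.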

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Data ingestion from different data sources that expose data using different technologies, such as RDBMS, flat files, streams, and time-series data based on various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies 

  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 

  • Develop automated data quality check to make sure right data enters the platform and verifying the results of the calculations 

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring best industry practices.
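The automated data-quality checks mentioned above can be as simple as a rule set applied to every incoming batch before it enters the platform. A minimal sketch (the rule names, fields and sample records are invented for illustration):

```python
# Minimal rule-based data-quality gate for an incoming batch of records.

def check_batch(rows):
    """Return a list of human-readable violations; an empty list means the batch passes."""
    violations = []
    for i, row in enumerate(rows):
        if row.get("customer_id") in (None, ""):
            violations.append(f"row {i}: missing customer_id")
        amount = row.get("amount")
        if not isinstance(amount, (int, float)) or amount < 0:
            violations.append(f"row {i}: invalid amount {amount!r}")
    return violations

batch = [
    {"customer_id": "c1", "amount": 120.0},
    {"customer_id": "",   "amount": 50.0},
    {"customer_id": "c3", "amount": -7},
]
problems = check_batch(batch)
```

In a real pipeline the same idea runs as a gating task (e.g. in Airflow) that quarantines failing batches and alerts the team rather than silently loading bad data.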

 

QUALIFICATIONS

  • 5-7+ years’ experience as data engineer in consumer finance or equivalent industry (consumer loans, collections, servicing, optional product, and insurance sales) 

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of at least one of these languages: Java, Scala, Python, C# 

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with

  • Data mining/programming tools (e.g. SAS, SQL, R, Python)

  • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)

  • Data visualization (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence and other related tools

There is an urgent opening for a Snowflake developer at an MNC.

Agency job
via Volibits by Ankita Mishra
Pune, Mumbai, Bengaluru (Bangalore), Chennai, Noida, Hyderabad
7 - 12 yrs
₹5L - ₹15L / yr
Snow flake schema
SnowSQL
Snowpipe
There is an urgent requirement for a Snowflake Developer at an MNC. Candidates should be able to join in April or by the 2nd week of May. Required skills: Snowpipe, SnowSQL, Snowflake schema, and Snowflake development. The budget for this profile is up to 22 LPA.
Picture the future

Agency job
via Jobdost by Sathish Kumar
Hyderabad
4 - 7 yrs
₹5L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 
  • Data ingestion from different data sources that expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data based on various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies 
  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 
  • Develop automated data quality check to make sure right data enters the platform and verifying the results of the calculations 
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
  • Define process improvement opportunities to optimize data collection, insights and displays.
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 
  • Identify and interpret trends and patterns from complex data sets 
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 
  • Key participant in regular Scrum ceremonies with the agile teams  
  • Proficient at developing queries, writing reports and presenting findings 
  • Mentor junior members and bring best industry practices 

 

QUALIFICATIONS

  • 5-7+ years’ experience as data engineer in consumer finance or equivalent industry (consumer loans, collections, servicing, optional product, and insurance sales) 
  • Strong background in math, statistics, computer science, data science or related discipline
  • Advanced knowledge of at least one of these languages: Java, Scala, Python, C# 
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  
  • Proficient with
  • Data mining/programming tools (e.g. SAS, SQL, R, Python)
  • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
  • Data visualization (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools. 
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 
  • Good written and oral communication skills and ability to present results to non-technical audiences 
  • Knowledge of business intelligence and analytical tools, technologies and techniques.


Mandatory Requirements 

  • Experience in AWS Glue
  • Experience in Apache Parquet 
  • Proficient in AWS S3 and data lake 
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices.
  • Scripting languages - Python & PySpark

 

TekSystems

Agency job
via NDSSG DataSync Pvt Ltd by Hamza Bootwala
Bengaluru (Bangalore), Hyderabad
4 - 8 yrs
₹10L - ₹20L / yr
WebFOCUS
Snow flake schema
Sybase
Data Warehouse (DWH)
Oracle

Required:
1) WebFOCUS BI Reporting
2) WebFOCUS Administration
3) Sybase or Oracle or SQL Server or Snowflake
4) DWH Skills

Nice to have:
1) Experience in SAP BO / Crystal report / SSRS / Power BI
2) Experience in Informix
3) Experience in ETL

Responsibilities:

• Technical knowledge regarding best practices of BI development / integration.
• Candidates must understand business processes, be detail-oriented and quickly grasp new concepts.
• Additionally, the candidate will have strong presentation, interpersonal, software development and work management skills.
• Strong Advanced SQL programming skills are required
• Proficient in MS Word, Excel, Access, and PowerPoint
• Experience working with one or more BI Reporting tools as Analyst/Developer.
• Knowledge of data mining techniques and procedures and knowing when their use is appropriate
• Ability to present complex information in an understandable and compelling manner.
• Experience converting reports from one reporting tool to another

enterprise-grade, streaming integration with intelligence pl

Agency job
via Jobdost by Mamatha A
Chennai
5 - 15 yrs
₹15L - ₹30L / yr
Java
C++
Data Structures
SQL
Amazon RDS

Striim (pronounced “stream” with two i’s for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it’s born.

Striim’s enterprise-grade, streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines – including change data capture (CDC) – to power real-time cloud integration, log correlation, edge processing, and streaming analytics.

Strong Core Java / C++ experience

·       Excellent understanding of logical and object-oriented design patterns, algorithms and data structures.

·       Sound knowledge of application access methods including authentication mechanisms and API quota limits, as well as different endpoints (REST, Java, etc.)

·       Strong experience with databases - not just SQL programming, but knowledge of DB internals

·       Sound knowledge of cloud databases available as a service is a plus (RDS, Cloud SQL, Google BigQuery, Snowflake)

·       Experience working in any cloud environment and microservices based architecture utilizing GCP, Kubernetes, Docker, CircleCI, Azure or similar technologies

·       Experience in application verticals such as ERP, CRM and Sales, with applications such as Salesforce, Workday, SAP (not mandatory; an added advantage)

·       Experience in building distributed systems (not mandatory; an added advantage)

·       Expertise in data warehousing (not mandatory; an added advantage)

·       Experience in developing and delivering products as SaaS (not mandatory; an added advantage)

A logistic Company

Agency job
via Anzy by Dattatraya Kolangade
Bengaluru (Bangalore)
5 - 7 yrs
₹18L - ₹25L / yr
Data engineering
ETL
SQL
Hadoop
Apache Spark
Key responsibilities:
• Create and maintain data pipeline
• Build and deploy ETL infrastructure for optimal data delivery
• Work with various teams, including product, design and executive, to troubleshoot data-related issues
• Create tools for data analysts and scientists to help them build and optimise the product
• Implement systems and processes for data access controls and guarantees
• Distill knowledge from experts in the field outside the org and optimise internal data systems
Preferred qualifications/skills:
• 5+ years of experience
• Strong analytical skills


Freight Commerce Solutions Pvt Ltd. 

• Degree in Computer Science, Statistics, Informatics, Information Systems
• Strong project management and organisational skills
• Experience supporting and working with cross-functional teams in a dynamic environment
• SQL guru with hands on experience on various databases
• NoSQL databases like Cassandra, MongoDB
• Experience with Snowflake, Redshift
• Experience with tools like Airflow, Hevo
• Experience with Hadoop, Spark, Kafka, Flink
• Programming experience in Python, Java, Scala
Futurense Technologies

Posted by Rajendra Dasigari
Bengaluru (Bangalore)
2 - 7 yrs
₹6L - ₹12L / yr
ETL
Data Warehouse (DWH)
Apache Hive
Informatica
Data engineering
1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet business requirements
3. Identify, design, and implement internal process improvements
4. Optimize data delivery and re-design infrastructure for greater scalability
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
6. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
7. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
8. Create data tools for analytics and data scientist team members
 
Skills Required:
 
1. Working knowledge of ETL on any cloud (Azure / AWS / GCP)
2. Proficient in Python (Programming / Scripting)
3. Good understanding of any of the data warehousing concepts (Snowflake / AWS Redshift / Azure Synapse Analytics / Google Big Query / Hive)
4. In-depth understanding of principles of database structure
5.  Good understanding of any of the ETL technologies (Informatica PowerCenter / AWS Glue / Data Factory / SSIS / Spark / Matillion / Talend / Azure)
6. Proficient in SQL (query solving)
7. Knowledge of change/case management and version control tools (VSS / DevOps / TFS / GitHub / Bitbucket; CI/CD with Jenkins)
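Point 3 above references data-warehousing concepts; one such concept, the snowflake schema this page is named after, normalizes a star schema's dimension tables into sub-dimensions. A toy sketch in plain Python (all table and column names are invented):

```python
# Snowflake schema in miniature: the product dimension is normalized
# into a separate category table instead of repeating category attributes.

categories = {                      # dim_category (sub-dimension)
    1: {"name": "Electronics"},
    2: {"name": "Grocery"},
}
products = {                        # dim_product, points at dim_category
    10: {"name": "Phone", "category_id": 1},
    11: {"name": "Rice",  "category_id": 2},
}
sales = [                           # fact table, points at dim_product
    {"product_id": 10, "amount": 300.0},
    {"product_id": 11, "amount": 12.5},
    {"product_id": 10, "amount": 250.0},
]

def revenue_by_category():
    """Join fact -> product -> category, like a two-hop snowflake-schema query."""
    totals = {}
    for row in sales:
        cat = categories[products[row["product_id"]]["category_id"]]["name"]
        totals[cat] = totals.get(cat, 0.0) + row["amount"]
    return totals

totals = revenue_by_category()
```

Compared with a star schema, the extra normalization trades one more join per query for less redundancy in the dimension tables.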
netmedscom

Posted by Vijay Hemnath
Chennai
5 - 10 yrs
₹10L - ₹30L / yr
Machine Learning (ML)
Software deployment
CI/CD
Cloud Computing
Snow flake schema

We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers. A candidate with an adaptable and productive working style that fits a fast-moving environment.

 

Skills:

- 5+ years deploying Machine Learning pipelines in large enterprise production systems.

- Experience developing end-to-end ML solutions, from business hypothesis to deployment, and understanding the entirety of the ML development life cycle.
- Expert in modern software development practices; solid experience using source control management (CI/CD).
- Proficient in designing relevant architecture / microservices to fulfil application integration, model monitoring, training / re-training, model management, model deployment, model experimentation/development, alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like Lambda, Azure Functions, and/or Cloud Functions.
- Orchestration services like Data Factory, Data Pipeline, and/or Dataflow.
- Data science workbench/managed services like Azure Machine Learning, SageMaker, and/or AI Platform.
- Data warehouse services like Snowflake, BigQuery, Azure SQL DW, AWS Redshift.
- Distributed computing services like PySpark, EMR, Databricks.
- Data storage services like Cloud Storage, S3, Blob Storage, S3 Glacier.
- Data visualization tools like Power BI, Tableau, Quicksight, and/or Qlik.
- Proven experience serving up predictive algorithms and analytics through batch and real-time APIs.
- Solid working experience with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design the holistic solution.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.)
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit-learn, Spark MLlib, etc.)
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.)
- Experience in developing and debugging in one or more of the languages Java, Python.
- Ability to work in cross functional teams.
- Apply Machine Learning techniques in production including, but not limited to, neural nets, regression, decision trees, random forests, ensembles, SVM, Bayesian models, K-Means, etc.
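As one concrete instance of the statistical background listed above, ordinary least-squares regression can be fitted by hand in a few lines of pure Python (the data points are toy numbers, invented for illustration):

```python
# Fit y = a + b*x by ordinary least squares, no libraries needed.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.1, 7.9]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope b = cov(x, y) / var(x); intercept a = mean_y - b * mean_x
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x
```

Libraries like scikit-learn and Spark MLlib compute the same quantities at scale, but knowing the closed form is what lets you sanity-check their output.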

 

Roles and Responsibilities:

Deploying ML models into production, and scaling them to serve millions of customers.

Technical solutioning skills with deep understanding of technical API integrations, AI / Data Science, Big Data and public cloud architectures / deployments in a SaaS environment.

Strong stakeholder relationship management skills - able to influence and manage the expectations of senior executives.
Strong networking skills with the ability to build and maintain strong relationships with both business, operations and technology teams internally and externally.

Provide software design and programming support to projects.

 

 Qualifications & Experience:

Engineering and postgraduate candidates, preferably in Computer Science, from premier institutions, with proven work experience as a Machine Learning Architect (Deployments) or in a similar role for 5-7 years.

 

Marktine

Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹10L / yr
Data Warehouse (DWH)
Spark
Data engineering
Python
PySpark
+5 more

Basic Qualifications

- Need to have a working knowledge of AWS Redshift.

- Minimum 1 year of experience designing and implementing a fully operational production-grade large-scale data solution on Snowflake Data Warehouse.

- 3 years of hands-on experience with building productized data ingestion and processing pipelines using Spark, Scala, Python

- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions

- Expertise and excellent understanding of Snowflake Internals and integration of Snowflake with other data processing and reporting technologies

- Excellent presentation and communication skills, both written and verbal

- Ability to problem-solve and architect in an environment with unclear requirements
