8+ Snowflake Schema Jobs in Chennai
Apply to 8+ Snowflake schema jobs in Chennai on CutShort.io. Explore the latest Snowflake schema job opportunities across top companies like Google, Amazon & Adobe.

1. GCP: GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, BQ optimization, Airflow/Composer, Python (preferred)/Java
2. ETL on GCP Cloud: building pipelines (Python/Java) plus scripting, best practices, challenges
3. Knowledge of batch and streaming data ingestion; ability to build end-to-end data pipelines on GCP (a minimal pipeline sketch follows this list)
4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud, SQL vs. NoSQL, and types of NoSQL databases (at least two)
5. Data warehouse concepts: beginner to intermediate level
6. Data modelling, GCP databases, DBSchema (or similar)
7. Hands-on data modelling for OLTP and OLAP systems
8. In-depth knowledge of conceptual, logical, and physical data modelling
9. Strong understanding of indexing, partitioning, and data sharding, with practical experience applying them
10. Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction
11. Working experience with at least one data modelling tool, preferably DBSchema or Erwin
12. Good understanding of GCP databases such as AlloyDB, CloudSQL, and BigQuery
13. Functional knowledge of the mutual fund industry is a plus. Candidates should be willing to work from Chennai; office presence is mandatory
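As a rough illustration of item 3 above, here is a minimal sketch of a streaming ingestion pipeline on GCP using the Apache Beam Python SDK (runnable on Dataflow). The project, topic, and table names are placeholders, and the target BigQuery table is assumed to already exist with a matching schema.

```python
# Minimal sketch only: stream JSON events from Pub/Sub into BigQuery.
# All identifiers below are placeholders, not taken from the role description.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # On Dataflow, also pass --runner=DataflowRunner plus project/region/staging options.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```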
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Design and implement dimension and fact tables.
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary.
● Partner with the BI team to evaluate, design, and develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional data from ERP, CRM, and HRIS applications; model, extract, and transform it into reporting and analytics.
● Define and document BI usage through user experiences/use cases and prototypes; test and deploy BI solutions.
● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.

About the Role:
We are seeking a skilled and driven Backend Developer with expertise in Python (Django/FastAPI) and Node.js (TypeScript) to join our team. The ideal candidate will have experience in database design (RDBMS and NoSQL), REST API and GraphQL development, cloud services, and AI-driven applications. You will be responsible for designing and implementing scalable backend solutions, ensuring high performance, security, and reliability.
If you’re passionate about backend development, Generative AI, and data engineering, this is the role for you!
Key Responsibilities:
Backend Development:
- Develop and maintain robust, scalable backend services using Node.js (TypeScript) and Python (Django/FastAPI).
- Build APIs with REST and GraphQL, ensuring high security and performance.
- Implement authentication mechanisms such as OAuth2.0, SAML, JWT, MFA, and passkeys (optional); a minimal JWT-protected endpoint sketch follows this list.
- Research and integrate Generative AI (Gen AI) models and OpenAI APIs into backend systems.
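As a sketch of the authentication item above (a simplified illustration under assumed names, not the team's actual implementation), here is a FastAPI REST endpoint protected by a JWT bearer token using PyJWT; the secret key, route, and claim names are placeholders.

```python
# Minimal FastAPI sketch: a REST endpoint protected by a JWT bearer token.
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

SECRET_KEY = "change-me"  # placeholder; in practice, load from a secrets manager
app = FastAPI()
bearer = HTTPBearer()


def current_user(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
    try:
        # Verify signature and expiry, then return the decoded claims.
        return jwt.decode(creds.credentials, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")


@app.get("/api/v1/orders")
def list_orders(user: dict = Depends(current_user)):
    # Placeholder response; a real handler would query the database.
    return {"user": user.get("sub"), "orders": []}
```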
Database & Data Engineering:
- Design and optimize schemas for both relational (PostgreSQL, YSQL) and NoSQL (DynamoDB, MongoDB) databases.
- Work with Redshift, BigQuery, and Snowflake to manage large-scale data processing.
- Develop ETL pipelines for data ingestion and transformation.
- Utilize Apache Airflow for workflow automation (see the example DAG after this list).
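A minimal example DAG for the Airflow item above, assuming Airflow 2.x; the task bodies, DAG ID, and schedule are placeholders standing in for real extract/transform/load logic.

```python
# Minimal Airflow 2.x DAG sketch for a daily ETL job; names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from the source system.
    print("extracting source data")


def transform():
    # Placeholder: clean and reshape the extracted records.
    print("transforming records")


def load():
    # Placeholder: load the transformed records into the warehouse.
    print("loading into warehouse")


with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```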
Cloud Services & Serverless Architecture:
- Work extensively with AWS Cloud services, and optionally Azure and GCP.
- Design and implement serverless architectures and event-driven systems using frameworks like AWS Lambda or equivalent on Azure/GCP.
- Configure and manage webhooks for event notifications and integrations (a handler sketch follows this list).
- Integrate Apache Pulsar for real-time event streaming and messaging.
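For the webhook item above, here is a minimal sketch of an AWS Lambda handler behind an API Gateway proxy integration that verifies an HMAC-signed payload before processing it; the signature header name and shared secret are illustrative placeholders.

```python
# Minimal sketch of a webhook receiver as an AWS Lambda function behind API Gateway.
import hashlib
import hmac
import json

SHARED_SECRET = b"change-me"  # placeholder; in practice, load from Secrets Manager or SSM


def handler(event, context):
    body = event.get("body") or ""
    received_sig = (event.get("headers") or {}).get("x-signature", "")

    # Verify the payload against an HMAC signature before trusting it.
    expected_sig = hmac.new(SHARED_SECRET, body.encode("utf-8"), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(received_sig, expected_sig):
        return {"statusCode": 401, "body": json.dumps({"error": "bad signature"})}

    payload = json.loads(body)
    # Placeholder: hand the event off to a queue or stream (e.g. SQS, Pulsar) for async processing.
    print("received webhook event:", payload.get("type"))
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```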
Programming & AI Integration:
- Apply design patterns, SOLID principles, and functional programming practices.
- Develop Python-based AI/ML solutions, leveraging Django/FastAPI for backend services.
- Manage AI/ML environments using Conda.
DevOps & Deployment:
- Utilize Docker and Kubernetes (K8s) for containerization and orchestration.
- Collaborate with DevOps teams for CI/CD pipelines and scalable deployments.
Tools & Utilities:
- Use Postman, Swagger, and cURL for API testing and documentation.
- Demonstrate strong knowledge of Unix commands for troubleshooting and development.
- Work with Git for versioning and code management.
Key Skills & Qualifications:
Must-Have:
✔ Proficiency in Python (Django/FastAPI) and Node.js (TypeScript).
✔ Experience with NestJS framework.
✔ Expertise in RDBMS and NoSQL database design and optimization.
✔ Hands-on experience with REST API and GraphQL development.
✔ Familiarity with authentication protocols such as OAuth2.0, SAML, JWT, and MFA.
✔ Strong understanding of AWS Cloud Services and Serverless Architecture.
✔ Experience with Gen AI, OpenAI APIs, and AI model integration.
✔ Hands-on knowledge of Python and Conda environments.
✔ Expertise in Redshift, BigQuery, Snowflake, and Apache Airflow for Data Engineering.
✔ Exposure to Apache Pulsar for event streaming.
Nice-to-Have:
➕ Exposure to Azure and GCP serverless frameworks.
➕ Knowledge of webhooks for event handling.
➕ Experience with passkeys as an authentication option.
Soft Skills:
✅ Problem-solving mindset with a passion for tackling complex challenges.
✅ Ability to learn and adapt to new tools, frameworks, and programming languages.
✅ Collaborative attitude and strong communication skills.
What We Offer:
💰 Competitive compensation and benefits package.
🚀 Opportunity to work with cutting-edge technologies in a fast-paced environment.
📚 A culture of learning, growth, and collaboration.
🌍 Exposure to large-scale systems, AI/ML integrations, and exciting technical challenges.
Greetings!
We are looking for a data engineer for one of our premium clients for their Chennai and Tirunelveli locations.
Required Education/Experience
● Bachelor's degree in Computer Science or a related field
● 5-7 years' experience in the following:
● Snowflake and Databricks management
● Python and AWS Lambda
● Scala and/or Java
● Data integration services, SQL, and Extract, Transform, Load (ETL)
● Azure or AWS for development and deployment
● Jira or similar tool during SDLC
● Experience managing codebase using Code repository in Git/GitHub or Bitbucket
● Experience working with a data warehouse.
● Familiarity with structured and semi-structured data formats including JSON, Avro, ORC, Parquet, or XML (a small reading sketch follows this list)
● Exposure to working in an agile work environment
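As a small illustration of working with structured and semi-structured formats (see the item above), here is a hedged sketch using pandas with the pyarrow engine; the file paths and the join key are placeholders, not part of the role.

```python
# Minimal sketch: read JSON lines and Parquet, then combine them before a warehouse load.
import json

import pandas as pd

# JSON lines -> DataFrame
with open("events.jsonl") as fh:
    events = pd.DataFrame(json.loads(line) for line in fh)

# Parquet -> DataFrame (requires pyarrow or fastparquet)
trades = pd.read_parquet("trades.parquet")

# Simple transformation before load: normalize column names and join the two sources.
events.columns = [c.lower() for c in events.columns]
trades.columns = [c.lower() for c in trades.columns]
combined = trades.merge(events, on="trade_id", how="left")  # "trade_id" is a placeholder key
print(combined.head())
```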

Job Location: Hyderabad/Bangalore/Chennai/Pune/Nagpur
Notice period: Immediate to 15 days
1. Python Developer with Snowflake
Job Description :
- 5.5+ years of strong Python development experience with Snowflake.
- Strong hands-on experience with SQL and the ability to write complex queries.
- Strong understanding of how to connect to Snowflake using Python; should be able to handle any type of file (a minimal connection-and-load sketch follows this list).
- Development of data analysis and data processing engines using Python.
- Good experience in data transformation using Python.
- Experience in Snowflake data loads using Python.
- Experience in creating user-defined functions in Snowflake.
- SnowSQL implementation.
- Knowledge of query performance tuning is an added advantage.
- Good understanding of data warehouse (DWH) concepts.
- Interpret/analyze business requirements and functional specifications.
- Good to have: dbt, Fivetran, and AWS knowledge.
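A minimal sketch of connecting to Snowflake from Python and loading a local CSV via the table stage, using the snowflake-connector-python package; the account, credentials, file path, and table names are placeholders.

```python
# Minimal sketch: connect to Snowflake and COPY a local CSV into a table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # placeholder account locator
    user="etl_user",
    password="********",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Upload the file to the table's internal stage, then COPY it into the table.
    cur.execute("PUT file:///tmp/trades.csv @%TRADES OVERWRITE = TRUE")
    cur.execute("COPY INTO TRADES FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
    # Sanity-check the load.
    cur.execute("SELECT COUNT(*) FROM TRADES")
    print("rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```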

We are looking for a Snowflake developer for one of our premium clients for their PAN India locations.

Striim (pronounced “stream” with two i’s for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it’s born.
Striim’s enterprise-grade, streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines – including change data capture (CDC) – to power real-time cloud integration, log correlation, edge processing, and streaming analytics.
Strong Core Java / C++ experience
· Excellent understanding of logical and object-oriented design patterns, algorithms, and data structures.
· Sound knowledge of application access methods, including authentication mechanisms, API quota limits, and different endpoint types (REST, Java, etc.)
· Strong experience with databases: not just SQL programming, but knowledge of DB internals.
· Sound knowledge of cloud databases available as a service is a plus (RDS, CloudSQL, Google BigQuery, Snowflake).
· Experience working in any cloud environment and microservices-based architecture utilizing GCP, Kubernetes, Docker, CircleCI, Azure, or similar technologies.
· Experience in application verticals such as ERP, CRM, or Sales, with applications such as Salesforce, Workday, or SAP (not mandatory; added advantage).
· Experience in building distributed systems (not mandatory; added advantage).
· Expertise in data warehouses (not mandatory; added advantage).
· Experience in developing and delivering a product as SaaS (not mandatory; added advantage).

We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers. The ideal candidate has an adaptable and productive working style that fits a fast-moving environment.
Skills:
- 5+ years deploying Machine Learning pipelines in large enterprise production systems.
- Experience developing end-to-end ML solutions, from business hypothesis to deployment, with an understanding of the entire ML development life cycle.
- Expert in modern software development practices; solid experience with source control management and CI/CD.
- Proficient in designing relevant architecture / microservices to fulfil application integration, model monitoring, training / re-training, model management, model deployment, model experimentation/development, alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like AWS Lambda, Azure Functions, and/or Cloud Functions.
- Orchestration services like Data Factory, Data Pipeline, and/or Dataflow.
- Data science workbench/managed services like Azure Machine Learning, SageMaker, and/or AI Platform.
- Data warehouse services like Snowflake, Redshift, BigQuery, and/or Azure SQL DW.
- Distributed computing services like PySpark, EMR, and Databricks.
- Data storage services like Cloud Storage, S3, Blob Storage, and S3 Glacier.
- Data visualization tools like Power BI, Tableau, QuickSight, and/or Qlik.
- Proven experience serving up predictive algorithms and analytics through batch and real-time APIs.
- Solid working experience with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design the holistic solution.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.)
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit-learn, Spark MLlib, etc.); a minimal training-and-scoring sketch follows this list.
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.)
- Experience in developing and debugging in one or more of the languages Java, Python.
- Ability to work in cross-functional teams.
- Apply Machine Learning techniques in production including, but not limited to, neural nets, regression, decision trees, random forests, ensembles, SVM, Bayesian models, K-Means, etc.
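As a rough sketch of the train/persist/batch-score path implied by the items above (illustrative only; the dataset, feature columns, and file paths are placeholders):

```python
# Minimal sketch of the train -> persist -> batch-score path behind an ML deployment.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Train and validate on a placeholder dataset with a binary "label" column.
df = pd.read_csv("training_data.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))

# Persist the model artifact; a real-time API (e.g. FastAPI) or a batch job can load it later.
joblib.dump(model, "model.joblib")

# Batch scoring: load the artifact and score a new partition of data.
scoring_model = joblib.load("model.joblib")
new_batch = pd.read_csv("to_score.csv")
new_batch["score"] = scoring_model.predict_proba(new_batch[X.columns])[:, 1]
new_batch.to_csv("scored.csv", index=False)
```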
Roles and Responsibilities:
Deploy ML models into production and scale them to serve millions of customers.
Technical solutioning skills with a deep understanding of API integrations, AI/Data Science, Big Data, and public cloud architectures/deployments in a SaaS environment.
Strong stakeholder relationship management skills - able to influence and manage the expectations of senior executives.
Strong networking skills with the ability to build and maintain strong relationships with business, operations, and technology teams, both internally and externally.
Provide software design and programming support to projects.
Qualifications & Experience:
Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 5-7 years of proven work experience as a Machine Learning Architect (Deployments) or in a similar role.