33+ Data Engineering Jobs in Pune | Data Engineering Job Openings in Pune
Apply to 33+ Data Engineering Jobs in Pune on CutShort.io. Explore the latest Data Engineering job opportunities across top companies like Google, Amazon & Adobe.
Strong Data Engineer profile
Mandatory (Experience 1): Must have 6 months+ of hands-on Data Engineering experience.
Mandatory (Experience 2): Must have end-to-end experience in building & maintaining ETL/ELT pipelines (not just BI/reporting).
Mandatory (Technical): Must have strong SQL capability
Preferred
Preferred (Experience): Worked on Call center data
Job Specific Criteria
CV Attachment is mandatory
Have you used Databricks or any notebook environment?
Have you worked on ETL/ELT workflow?
We work on alternate Saturdays. Are you comfortable working from home on the 1st and 4th Saturdays?
Role: Azure Fabric Data Engineer
Experience: 5–10 Years
Location: Pune/Bangalore
Employment Type: Full-Time
About the Role
We are looking for an experienced Azure Data Engineer with strong expertise in Microsoft Fabric and Power BI to build scalable data pipelines, Lakehouse architectures, and enterprise analytics solutions on the Azure cloud.
Key Responsibilities
- Design & build data pipelines using Microsoft Fabric (Pipelines, Dataflows Gen2, Notebooks).
- Develop and optimize Lakehouse / Data Lake / Delta Lake architectures.
- Build ETL/ELT workflows using Fabric, Azure Data Factory, or Synapse.
- Create and optimize Power BI datasets, data models, and DAX calculations.
- Implement semantic models, incremental refresh, and Direct Lake/DirectQuery.
- Work with Azure services: ADLS Gen2, Azure SQL, Synapse, Event Hub, Functions, Databricks.
- Build dimensional models (Star/Snowflake) and support BI teams.
- Ensure data governance & security using Purview, RBAC, and AAD.
Required Skills
- Strong hands-on experience with Microsoft Fabric (Lakehouse, Pipelines, Dataflows, Notebooks).
- Expertise in Power BI (DAX, modeling, Dataflows, optimized datasets).
- Deep knowledge of Azure Data Engineering stack (ADF, ADLS, Synapse, SQL).
- Strong SQL, Python/PySpark skills.
- Experience in Delta Lake, Medallion architecture, and data quality frameworks.
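For context on the Delta Lake / Medallion item above, here is a minimal, illustrative PySpark sketch of a Bronze-to-Silver hop; the paths, column names, and session config are assumptions for illustration, not this employer's actual lakehouse:

```python
# Hedged sketch: a minimal Bronze -> Silver hop in a Medallion layout,
# using PySpark with Delta Lake. Paths and column names are illustrative,
# and the Delta Lake package is assumed to be available on the cluster.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("medallion-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Bronze: raw ingested events, stored as-is
bronze = spark.read.format("delta").load("/lakehouse/bronze/events")

# Silver: cleaned and conformed - dedupe, enforce types, add an audit column
silver = (
    bronze.dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
    .withColumn("_ingested_at", F.current_timestamp())
)

silver.write.format("delta").mode("overwrite").save("/lakehouse/silver/events")
```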
Nice to Have
- Azure Certifications (DP-203, PL-300, Fabric Analytics Engineer).
- Experience with CI/CD (Azure DevOps/GitHub).
- Databricks experience (preferred).
Note: One technical round must be taken face-to-face (F2F) at either the Pune or Bangalore office.
Review Criteria
- Strong Senior Data Engineer profile
- 4+ years of hands-on Data Engineering experience
- Must have experience owning end-to-end data architecture and complex pipelines
- Must have advanced SQL capability (complex queries, large datasets, optimization)
- Must have strong Databricks hands-on experience
- Must be able to architect solutions, troubleshoot complex data issues, and work independently
- Must have Power BI integration experience
- The CTC structure is 80% fixed and 20% variable
Preferred
- Worked on call center data; understands the nuances of data generated in call centers
- Experience implementing data governance, quality checks, or lineage frameworks
- Experience with orchestration tools (Airflow, ADF, Glue Workflows), Python, Delta Lake, Lakehouse architecture
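Since the preferred stack names Airflow and Python, a minimal illustrative DAG is sketched below; the task bodies, IDs, and schedule are placeholders, not the team's actual pipeline:

```python
# Hedged sketch: a minimal Airflow DAG for an extract -> transform -> load
# sequence. All names and task bodies are invented for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull records from the source system")

def transform(**context):
    print("clean and reshape the extracted records")

def load(**context):
    print("write the transformed records to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```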
Job Specific Criteria
- CV Attachment is mandatory
- Are you comfortable integrating with Power BI datasets?
- We work on alternate Saturdays. Are you comfortable working from home on the 1st and 4th Saturdays?
Role & Responsibilities
We are seeking a highly experienced Senior Data Engineer with strong architectural capability, excellent optimisation skills, and deep hands-on experience in modern data platforms. The ideal candidate will have advanced SQL skills, strong expertise in Databricks, and practical experience working across cloud environments such as AWS and Azure. This role requires end-to-end ownership of complex data engineering initiatives, including architecture design, data governance implementation, and performance optimisation. You will collaborate with cross-functional teams to build scalable, secure, and high-quality data solutions.
Key Responsibilities-
- Lead the design and implementation of scalable data architectures, pipelines, and integration frameworks.
- Develop, optimise, and maintain complex SQL queries, transformations, and Databricks-based data workflows.
- Architect and deliver high-performance ETL/ELT processes across cloud platforms.
- Implement and enforce data governance standards, including data quality, lineage, and access control.
- Partner with analytics, BI (Power BI), and business teams to enable reliable, governed, and high-value data delivery.
- Optimise large-scale data processing, ensuring efficiency, reliability, and cost-effectiveness.
- Monitor, troubleshoot, and continuously improve data pipelines and platform performance.
- Mentor junior engineers and contribute to engineering best practices, standards, and documentation.
Ideal Candidate
- Proven industry experience as a Senior Data Engineer, with ownership of high-complexity projects.
- Advanced SQL skills with experience handling large, complex datasets.
- Strong expertise with Databricks for data engineering workloads.
- Hands-on experience with major cloud platforms — AWS and Azure.
- Deep understanding of data architecture, data modelling, and optimisation techniques.
- Familiarity with BI and reporting environments such as Power BI.
- Strong analytical and problem-solving abilities with a focus on data quality and governance.
- Proficiency in Python or another programming language is a plus.
PERKS, BENEFITS AND WORK CULTURE:
Our people define our passion and our audacious, incredibly rewarding achievements. The company is one of India’s most diversified Non-banking financial companies, and among Asia’s top 10 Large workplaces. If you have the drive to get ahead, we can help find you an opportunity at any of the 500+ locations we’re present in India.
ROLES AND RESPONSIBILITIES:
We are looking for a Junior Data Engineer who will work under guidance to support data engineering tasks, perform basic coding, and actively learn modern data platforms and tools. The ideal candidate should have foundational SQL knowledge and basic exposure to Databricks. This role is designed for early-career professionals who are eager to grow into full data engineering responsibilities while contributing to data pipeline operations and analytical support.
Key Responsibilities-
- Support the development and maintenance of data pipelines and ETL/ELT workflows under mentorship.
- Write basic SQL queries, transformations, and assist with Databricks notebook tasks.
- Help troubleshoot data issues and contribute to ensuring pipeline reliability.
- Work with senior engineers and analysts to understand data requirements and deliver small tasks.
- Assist in maintaining documentation, data dictionaries, and process notes.
- Learn and apply data engineering best practices, coding standards, and cloud fundamentals.
- Support basic tasks related to Power BI data preparation or integrations as needed.
IDEAL CANDIDATE:
- Foundational SQL skills with the ability to write and understand basic queries.
- Basic exposure to Databricks, data transformation concepts, or similar data tools.
- Understanding of ETL/ELT concepts, data structures, and analytical workflows.
- Eagerness to learn modern data engineering tools, technologies, and best practices.
- Strong problem-solving attitude and willingness to work under guidance.
- Good communication and collaboration skills to work with senior engineers and analysts.
PERKS, BENEFITS AND WORK CULTURE:
Our people define our passion and our audacious, incredibly rewarding achievements. Bajaj Finance Limited is one of India’s most diversified Non-banking financial companies, and among Asia’s top 10 Large workplaces. If you have the drive to get ahead, we can help find you an opportunity at any of the 500+ locations we’re present in India.
We are seeking a highly skilled Senior Data Engineer with expertise in Databricks, Python, Scala, Azure Synapse, and Azure Data Factory to join our data engineering team. The team is responsible for ingesting data from multiple sources, making it accessible to internal stakeholders, and enabling seamless data exchange across internal and external systems.
You will play a key role in enhancing and scaling our Enterprise Data Platform (EDP) hosted on Azure and built using modern technologies such as Databricks, Synapse, Azure Data Factory (ADF), ADLS Gen2, Azure DevOps, and CI/CD pipelines.
Responsibilities
- Design, develop, optimize, and maintain scalable data architectures and pipelines aligned with ETL principles and business goals.
- Collaborate across teams to build simple, functional, and scalable data solutions.
- Troubleshoot and resolve complex data issues to support business insights and organizational objectives.
- Build and maintain data products to support company-wide usage.
- Advise, mentor, and coach data and analytics professionals on standards and best practices.
- Promote reusability, scalability, operational efficiency, and knowledge-sharing within the team.
- Develop comprehensive documentation for data engineering standards, processes, and capabilities.
- Participate in design and code reviews.
- Partner with business analysts and solution architects on enterprise-level technical architectures.
- Write high-quality, efficient, and maintainable code.
Technical Qualifications
- 5–8 years of progressive data engineering experience.
- Strong expertise in Databricks, Python, Scala, and Microsoft Azure services including Synapse & Azure Data Factory (ADF).
- Hands-on experience with data pipelines across multiple source & target systems (Databricks, Synapse, SQL Server, Data Lake, SQL/NoSQL sources, and file-based systems); see the sketch after this list.
- Experience with design patterns, code refactoring, CI/CD, and building scalable data applications.
- Experience developing batch ETL pipelines; real-time streaming experience is a plus.
- Solid understanding of data warehousing, ETL, dimensional modeling, data governance, and handling both structured and unstructured data.
- Deep understanding of Synapse and SQL Server, including T-SQL and stored procedures.
- Proven experience working effectively with cross-functional teams in dynamic environments.
- Experience extracting, processing, and analyzing large / complex datasets.
- Strong background in root cause analysis for data and process issues.
- Advanced SQL proficiency and working knowledge of a variety of database technologies.
- Knowledge of Boomi is an added advantage.
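As referenced in the qualifications above, here is a minimal sketch of one multi-source pipeline step: reading a SQL Server table into the lake with PySpark's JDBC reader. The connection details and table names are placeholder assumptions:

```python
# Hedged sketch: landing a SQL Server table in the lake with PySpark's JDBC
# reader. Server, credentials, and table names are placeholders only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sqlserver-to-lake").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "<secret>")  # fetch from a secret scope in practice
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load()
)

# Land the batch partitioned by date for downstream consumers
orders.write.mode("append").partitionBy("order_date").parquet("/mnt/lake/raw/orders")
```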
Core Skills & Competencies
- Excellent analytical and problem-solving abilities.
- Strong communication and cross-team collaboration skills.
- Self-driven with the ability to make decisions independently.
- Innovative mindset and passion for building quality data solutions.
- Ability to understand operational systems, identify gaps, and propose improvements.
- Experience with large-scale data ingestion and engineering.
- Knowledge of CI/CD pipelines (preferred).
- Understanding of Python and parallel processing frameworks (MapReduce, Spark, Scala).
- Familiarity with Agile development methodologies.
Education
- Bachelor’s degree in Computer Science, Information Technology, MIS, or an equivalent field.
Google Data Engineer - SSE
Position Description
Google Cloud Data Engineer
Notice Period: Immediate to 30 days
Job Description:
We are seeking a highly skilled Data Engineer with extensive experience in Google Cloud Platform (GCP) data services and big data technologies. The ideal candidate will be responsible for designing, implementing, and optimizing scalable data solutions while ensuring high performance, reliability, and security.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines and architectures using GCP data services.
• Implement and optimize solutions using BigQuery, Dataproc, Composer, Pub/Sub, Dataflow, GCS, and BigTable.
• Work with GCP databases such as Bigtable, Spanner, CloudSQL, AlloyDB, ensuring performance, security, and availability.
• Develop and manage data processing workflows using Apache Spark, Hadoop, Hive, Kafka, and other Big Data technologies.
• Ensure data governance and security using Dataplex, Data Catalog, and other GCP governance tooling.
• Collaborate with DevOps teams to build CI/CD pipelines for data workloads using Cloud Build, Artifact Registry, and Terraform.
• Optimize query performance and data storage across structured and unstructured datasets.
• Design and implement streaming data solutions using Pub/Sub, Kafka, or equivalent technologies.
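To make the last responsibility concrete, a minimal Pub/Sub publishing sketch in Python follows; the project ID, topic, and event shape are illustrative assumptions:

```python
# Hedged sketch: publishing an event to a Pub/Sub topic, the entry point of
# a streaming solution. Project and topic IDs are placeholders.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "clickstream-events")

event = {"user_id": "u123", "action": "page_view", "ts": "2024-01-01T00:00:00Z"}

# publish() returns a future; result() blocks until the message is accepted
future = publisher.publish(topic_path, data=json.dumps(event).encode("utf-8"))
print("published message id:", future.result())
```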
Required Skills & Qualifications:
• 8-15 years of experience
• Strong expertise in GCP Dataflow, Pub/Sub, Cloud Composer, Cloud Workflow, BigQuery, Cloud Run, Cloud Build.
• Proficiency in Python and Java, with hands-on experience in data processing and ETL pipelines.
• In-depth knowledge of relational databases (SQL, MySQL, PostgreSQL, Oracle) and NoSQL databases (MongoDB, Scylla, Cassandra, DynamoDB).
• Experience with Big Data platforms such as Cloudera, Hortonworks, MapR, Azure HDInsight, IBM Open Platform.
• Strong understanding of AWS Data services such as Redshift, RDS, Athena, SQS/Kinesis.
• Familiarity with data formats such as Avro, ORC, Parquet.
• Experience handling large-scale data migrations and implementing data lake architectures.
• Expertise in data modeling, data warehousing, and distributed data processing frameworks.
• GCP Data Engineering certification or equivalent.
Good to Have:
• Experience in BigQuery, Presto, or equivalent.
• Exposure to Hadoop, Spark, Oozie, HBase.
• Understanding of cloud database migration strategies.
• Knowledge of GCP data governance and security best practices.
Job Description -
Position: Senior Data Engineer (Azure)
Experience - 6+ Years
Mode - Hybrid
Location - Gurgaon, Pune, Jaipur, Bangalore, Bhopal
Key Responsibilities:
- Data Processing on Azure: Azure Data Factory, Streaming Analytics, Event Hubs, Azure Databricks, Data Migration Service, Data Pipeline.
- Provisioning, configuring, and developing Azure solutions (ADB, ADF, ADW, etc.).
- Design and implement scalable data models and migration strategies.
- Work on distributed big data batch or streaming pipelines (Kafka or similar); see the sketch after this list.
- Develop data integration and transformation solutions for structured and unstructured data.
- Collaborate with cross-functional teams for performance tuning and optimization.
- Monitor data workflows and ensure compliance with data governance and quality standards.
- Contribute to continuous improvement through automation and DevOps practices.
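A minimal sketch of the streaming pattern referenced above, reading Kafka (Event Hubs also exposes a Kafka-compatible endpoint) with Spark Structured Streaming; brokers, topic, and paths are placeholders:

```python
# Hedged sketch: a Structured Streaming job from Kafka into a Delta table.
# Broker address, topic, and paths are illustrative; Delta Lake is assumed
# to be available on the cluster (as on Azure Databricks).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; cast the payload and keep event time for windowing
events = raw.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .outputMode("append")
    .start("/mnt/lake/bronze/events")
)
query.awaitTermination()
```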
Required Skills & Experience:
- 6–10 years of experience as a Data Engineer.
- Strong proficiency in Azure Databricks, PySpark, Python, SQL, and Azure Data Factory.
- Experience in Data Modelling, Data Migration, and Data Warehousing.
- Good understanding of database structure principles and schema design.
- Hands-on experience using MS SQL Server, Oracle, or similar RDBMS platforms.
- Experience in DevOps tools (Azure DevOps, Jenkins, Airflow, Azure Monitor) – good to have.
- Knowledge of distributed data processing and real-time streaming (Kafka/Event Hub).
- Familiarity with visualization tools like Power BI or Tableau.
- Strong analytical, problem-solving, and debugging skills.
- Self-motivated, detail-oriented, and capable of managing priorities effectively.
Wissen Technology is hiring for Data Engineer
About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time. Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia.
Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more.
Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.
Job Summary: Wissen Technology is hiring a Data Engineer with expertise in Python, Pandas, Airflow, and Azure Cloud Services. The ideal candidate will have strong communication skills and experience with Kubernetes.
Experience: 4-7 years
Notice Period: Immediate- 15 days
Location: Pune, Mumbai, Bangalore
Mode of Work: Hybrid
Key Responsibilities:
- Develop and maintain data pipelines using Python and Pandas (see the sketch after this list).
- Implement and manage workflows using Airflow.
- Utilize Azure Cloud Services for data storage and processing.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Optimize and scale data infrastructure to meet business needs.
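A small illustrative sketch of the first responsibility above: a Pandas transform whose output lands in Azure Blob Storage. The input file, container, and connection string are assumptions, not the team's actual pipeline:

```python
# Hedged sketch: a small Pandas transform uploaded to Azure Blob Storage.
# The CSV source, container, and connection string are placeholders.
import pandas as pd
from azure.storage.blob import BlobServiceClient

# Transform: aggregate raw order lines into daily revenue
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
daily = (
    orders.dropna(subset=["amount"])
    .groupby(orders["order_date"].dt.date)["amount"]
    .sum()
    .reset_index(name="revenue")
)

# Load: write the result as CSV into a blob container
service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="curated", blob="daily_revenue.csv")
blob.upload_blob(daily.to_csv(index=False).encode("utf-8"), overwrite=True)
```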
Qualifications and Required Skills:
- Proficiency in Python (Must Have).
- Strong experience with Pandas (Must Have).
- Expertise in Airflow (Must Have).
- Experience with Azure Cloud Services.
- Good communication skills.
Good to Have Skills:
- Experience with Pyspark.
- Knowledge of Kubernetes.
Wissen Sites:
- Website: http://www.wissen.com
- LinkedIn: https://www.linkedin.com/company/wissen-technology
- Wissen Leadership: https://www.wissen.com/company/leadership-team/
- Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
- Wissen Thought Leadership: https://www.wissen.com/articles/
Experience - 7+ Yrs
Must-Have:
- Python (Pandas, PySpark)
- Data engineering & workflow optimization
- Delta Tables, Parquet
Good-to-Have:
- Databricks
- Apache Spark, DBT, Airflow
- Advanced Pandas optimizations
- PyTest/DBT testing frameworks
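Since PyTest is named above, a minimal sketch of a PyTest check for a Pandas transformation follows; the function under test is invented for illustration:

```python
# Hedged sketch: a PyTest unit test for a Pandas transformation. The
# dedupe_latest function is a made-up example, not this team's codebase.
import pandas as pd

def dedupe_latest(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only the most recent row per id, ordered by updated_at."""
    return (
        df.sort_values("updated_at")
        .drop_duplicates(subset="id", keep="last")
        .reset_index(drop=True)
    )

def test_dedupe_latest_keeps_newest_row():
    df = pd.DataFrame(
        {
            "id": [1, 1, 2],
            "updated_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01"]),
            "value": ["old", "new", "only"],
        }
    )
    result = dedupe_latest(df)
    assert len(result) == 2
    assert result.loc[result["id"] == 1, "value"].item() == "new"
```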
Interested candidates can reply with the details below.
Total Experience -
Relevant Experience in Python, Pandas, Data Engineering, workflow optimization, Delta Tables -
Current CTC -
Expected CTC -
Notice Period / LWD -
Current location -
Desired location -
Wissen Technology is hiring for Data Engineer
Job Summary: Wissen Technology is hiring a Data Engineer with a strong background in Python, data engineering, and workflow optimization. The ideal candidate will have experience with Delta Tables, Parquet, and be proficient in Pandas and PySpark.
Experience: 7+ years
Location: Pune, Mumbai, Bangalore
Mode of Work: Hybrid
Key Responsibilities:
- Develop and maintain data pipelines using Python (Pandas, PySpark).
- Optimize data workflows and ensure efficient data processing.
- Work with Delta Tables and Parquet for data storage and management.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Implement best practices for data engineering and workflow optimization.
Qualifications and Required Skills:
- Proficiency in Python, specifically with Pandas and PySpark.
- Strong experience in data engineering and workflow optimization.
- Knowledge of Delta Tables and Parquet.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication skills.
Good to Have Skills:
- Experience with Databricks.
- Knowledge of Apache Spark, DBT, and Airflow.
- Advanced Pandas optimizations.
- Familiarity with PyTest/DBT testing frameworks.
Overview
We are seeking an Azure Solutions Lead who will be responsible for managing and maintaining the overall architecture, design, application management and migrations, and security of the growing cloud infrastructure that supports the company’s core business and infrastructure systems and services. In this role, you will protect our critical information, systems, and assets, build new solutions, implement and configure new applications and hardware, provide training, and optimize/monitor cloud systems. You must be passionate about applying technical skills that create operational efficiencies and offer solutions to support business operations and the strategy roadmap.
Responsibilities:
- Works in tandem with our Architecture, Applications and Security teams
- Identify and implement the most optimal and secure Azure cloud-based solutions for the company.
- Design and implement end-to-end Azure data solutions, including data ingestion, storage, processing, and visualization.
- Architect data platforms using Azure services such as Microsoft Fabric, Azure Data Factory (ADF), Azure Databricks (ADB), Azure SQL Database, OneLake, etc.
- Develop and maintain data pipelines for efficient data movement and transformation.
- Design data models and schemas to support business requirements and analytical insights.
- Collaborate with stakeholders to understand business needs and translate them into technical solutions.
- Provide technical leadership and guidance to the data engineering team.
- Stay updated on emerging Azure technologies and best practices in data architecture.
- Stay current with industry trends, making recommendations as available to help keep the environment operating at its optimum while minimizing waste and maximizing investment.
- Create and update the documentation to facilitate cross-training and troubleshooting
- Work with the Security and Architecture teams to refine and deploy security best practices to identify, detect, protect, respond, and recover from threats to assets and information.
Qualifications:
- Overall 7+ years of IT Experience & minimum 2 years as an Azure Data Lead
- Strong expertise in all aspects of Azure services with a focus on data engineering & BI reporting.
- Proficiency in Azure Data Factory (ADF), Azure Databricks (ADB), SQL, NoSQL, PySpark, Power BI, and other Azure data tools.
- Experience in data modelling, data warehousing, and business intelligence concepts.
- Proven track record of designing and implementing scalable and robust data solutions.
- Excellent communication skills with strong teamwork, analytical and troubleshooting skills, and attentiveness to detail.
- Self-starter, ability to work independently and within a team.
NOTE: IT IS MANDATORY TO ATTEND ONE TECHNICAL ROUND FACE TO FACE.
Technical Architect (Databricks)
- 10+ Years Data Engineering Experience with expertise in Databricks
- 3+ years of consulting experience
- Completed Data Engineering Professional certification & required classes
- Minimum 2-3 projects delivered with hands-on experience in Databricks
- Completed Apache Spark Programming with Databricks, Data Engineering with Databricks, Optimizing Apache Spark™ on Databricks
- Experience in Spark and/or Hadoop, Flink, Presto, other popular big data engines
- Familiarity with Databricks multi-hop pipeline architecture
Sr. Data Engineer (Databricks)
- 5+ Years Data Engineering Experience with expertise in Databricks
- Completed Data Engineering Associate certification & required classes
- Minimum 1 project delivered with hands-on experience in development on Databricks
- Completed Apache Spark Programming with Databricks, Data Engineering with Databricks, Optimizing Apache Spark™ on Databricks
- SQL delivery experience, and familiarity with BigQuery, Synapse or Redshift
- Proficient in Python, with knowledge of additional Databricks programming languages (Scala)
Job Title: Lead Data Engineer
📍 Location: Pune
🧾 Experience: 10+ Years
💰 Budget: Up to 1.7 LPM
Responsibilities
- Collaborate with Data & ETL teams to review, optimize, and scale data architectures within Snowflake.
- Design, develop, and maintain efficient ETL/ELT pipelines and robust data models.
- Optimize SQL queries for performance and cost efficiency (see the sketch after this list).
- Ensure data quality, reliability, and security across pipelines and datasets.
- Implement Snowflake best practices for performance, scaling, and governance.
- Participate in code reviews, knowledge sharing, and mentoring within the data engineering team.
- Support BI and analytics initiatives by enabling high-quality, well-modeled datasets.
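As referenced above, a minimal sketch of running a warehouse-friendly aggregate from Python with the Snowflake connector; account, credentials, and tables are placeholders:

```python
# Hedged sketch: a cost-aware query via the Snowflake Python connector.
# Account, credentials, warehouse, and table names are illustrative only.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="<secret>",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Prune by date early and aggregate before ranking - cheaper than
    # scanning the full fact table and filtering afterwards.
    cur.execute(
        """
        SELECT c.region, SUM(o.amount) AS revenue
        FROM orders o
        JOIN customers c ON c.customer_id = o.customer_id
        WHERE o.order_date >= DATEADD(day, -30, CURRENT_DATE)
        GROUP BY c.region
        ORDER BY revenue DESC
        """
    )
    for region, revenue in cur.fetchall():
        print(region, revenue)
finally:
    conn.close()
```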
Exp: 10+ Years
CTC: 1.7 LPM
Location: Pune
Snowflake Expertise Profile
Should hold 10+ years of experience, with a core understanding of cloud data warehouse principles and extensive experience in designing, building, optimizing, and maintaining robust and scalable data solutions on the Snowflake platform.
Possesses a strong background in data modelling, ETL/ELT, SQL development, performance tuning, scaling, monitoring, and security handling.
Responsibilities:
* Collaborate with the Data and ETL team to review code, understand the current architecture, and help improve it based on Snowflake offerings and experience.
* Review and implement best practices to design, develop, maintain, scale, and efficiently monitor data pipelines and data models on the Snowflake platform for ETL or BI.
* Optimize complex SQL queries for data extraction, transformation, and loading within Snowflake.
* Ensure data quality, integrity, and security within the Snowflake environment.
* Participate in code reviews and contribute to the team's development standards.
Education:
* Bachelor’s degree in Computer Science, Data Science, Information Technology, or an equivalent field.
* Relevant Snowflake certifications are a plus (e.g., SnowPro Core / Advanced / Architect).
Key Responsibilities
● Design & Development
○ Architect and implement data ingestion pipelines using Microsoft Fabric Data Factory (Dataflows) and OneLake sources
○ Build and optimize Lakehouse and Warehouse solutions leveraging Delta Lake, Spark Notebooks, and SQL Endpoints
○ Define and enforce Medallion (Bronze–Silver–Gold) architecture patterns for raw, enriched, and curated datasets
● Data Modeling & Transformation
○ Develop scalable transformation logic in Spark (PySpark/Scala) and Fabric SQL to support reporting and analytics
○ Implement slowly changing dimensions (SCD Type 2), change-data-capture (CDC) feeds, and time-windowed aggregations (see the sketch after this list)
● Performance Tuning & Optimization
○ Monitor and optimize data pipelines for throughput, cost efficiency, and reliability
○ Apply partitioning, indexing, caching, and parallelism best practices in Fabric Lakehouses and Warehouse compute
● Data Quality & Governance
○ Integrate Microsoft Purview for metadata cataloging, lineage tracking, and data discovery
○ Develop automated quality checks, anomaly detection rules, and alerts for data reliability
● CI/CD & Automation
○ Implement infrastructure-as-code (ARM templates or Terraform) for Fabric workspaces, pipelines, and artifacts
○ Set up Git-based version control, CI/CD pipelines (e.g. Azure DevOps) for seamless deployment across environments
● Collaboration & Support
○ Partner with data scientists, BI developers, and business analysts to understand requirements and deliver data solutions
○ Provide production support, troubleshoot pipeline failures, and drive root-cause analysis
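A minimal sketch of the SCD Type 2 step referenced above, using Delta Lake's MERGE from PySpark; it assumes the change feed contains only genuinely changed rows (e.g., CDC output), and all table paths and columns are illustrative:

```python
# Hedged sketch: one common SCD Type 2 pattern with Delta Lake MERGE -
# expire the current row, then append the new version. Assumes the updates
# feed holds only changed rows (CDC output); names are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-demo").getOrCreate()

dim = DeltaTable.forPath(spark, "/lakehouse/gold/dim_customer")
updates = spark.read.format("delta").load("/lakehouse/silver/customer_changes")

# Step 1: close out the current row for each changed customer
(
    dim.alias("d")
    .merge(updates.alias("u"), "d.customer_id = u.customer_id AND d.is_current = true")
    .whenMatchedUpdate(set={"is_current": "false", "end_date": "current_date()"})
    .execute()
)

# Step 2: append the new versions as current rows
new_rows = (
    updates.withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
)
new_rows.write.format("delta").mode("append").save("/lakehouse/gold/dim_customer")
```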
Required Qualifications
● 5+ years of professional experience in data engineering roles, with at least 1 year working hands-on in Microsoft Fabric
● Strong proficiency in:
○ Languages: SQL (T-SQL), Python, and/or Scala
○ Fabric Components: Data Factory Dataflows, OneLake, Spark Notebooks, Lakehouse, Warehouse
○ Data Storage: Delta Lake, Parquet, CSV, JSON formats
● Deep understanding of data modeling principles (star schemas, snowflake schemas, normalized vs. denormalized)
● Experience with CI/CD and infrastructure-as-code for data platforms (ARM templates, Terraform, Git)
● Familiarity with data governance tools, especially Microsoft Purview
● Excellent problem-solving skills and ability to communicate complex technical concepts clearly
NOTE: The candidate should be willing to take one technical round F2F at any of the branch locations (Pune/Mumbai/Bangalore).
- Good experience (5+ years) in SQL and NoSQL database development and optimization.
- Strong hands-on experience with Amazon Redshift, MySQL, MongoDB, and Flyway.
- In-depth understanding of data warehousing principles and performance tuning techniques.
- Strong hands-on experience in building complex aggregation pipelines in NoSQL databases such as MongoDB (see the sketch after this list).
- Proficient in Python or Scala for data processing and automation.
- 3+ years of experience working with AWS-managed database services.
- 3+ years of experience with Power BI or similar BI/reporting platforms.
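A minimal sketch of the MongoDB aggregation-pipeline skill referenced above, using PyMongo; the connection string, collection, and fields are placeholders:

```python
# Hedged sketch: a MongoDB aggregation pipeline computing revenue per
# customer. Connection string, database, and fields are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

pipeline = [
    {"$match": {"status": "COMPLETE"}},                 # filter early
    {"$group": {"_id": "$customer_id",                  # aggregate per customer
                "revenue": {"$sum": "$amount"},
                "orders": {"$sum": 1}}},
    {"$sort": {"revenue": -1}},                         # rank customers
    {"$limit": 10},
]

for doc in orders.aggregate(pipeline):
    print(doc["_id"], doc["revenue"], doc["orders"])
```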
Job Title : AI Architect
Location : Pune (On-site | 3 Days WFO)
Experience : 6+ Years
Shift : US or flexible shifts
Job Summary :
We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.
The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).
Key Responsibilities :
- Define AI strategy and identify business use cases
- Design scalable AI/ML architectures
- Collaborate on data preparation, model development & deployment
- Ensure data quality, governance, and ethical AI practices
- Integrate AI into existing systems and monitor performance
Must-Have Skills :
- Machine Learning, Deep Learning, NLP, Computer Vision
- Data Engineering, Model Deployment (CI/CD, MLOps)
- Python Programming, Cloud (AWS/Azure/GCP)
- Distributed Systems, Data Governance
- Strong communication & stakeholder collaboration
Good to Have :
- AI certifications (Azure/GCP/AWS)
- Experience in big data and analytics
Job Title : Data Engineer – Snowflake Expert
Location : Pune (Onsite)
Experience : 10+ Years
Employment Type : Contractual
Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.
Job Summary :
We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.
The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.
Responsibilities :
- Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
- Design and implement scalable ELT pipelines with performance and cost-efficiency in mind.
- Ensure high data quality, security, and adherence to governance frameworks.
- Conduct code reviews and align development with best practices.
Qualifications :
- Bachelor’s in Computer Science, Data Science, IT, or related field.
- Snowflake certifications (Pro/Architect) preferred.
Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience in Databricks and in setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note: the salary bracket will vary according to the candidate's experience -
- Experience from 4 yrs to 6 yrs - Salary upto 22 LPA
- Experience from 5 yrs to 8 yrs - Salary upto 30 LPA
- Experience more than 8 yrs - Salary upto 40 LPA
Here is the Job Description -
Location -- Viman Nagar, Pune
Mode - 5 Days Working
Required Tech Skills:
● Strong at PySpark, Python
● Good understanding of Data Structure
● Good at SQL query/optimization
● Strong fundamentals of OOPs programming
● Good understanding of AWS Cloud, Big Data.
● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB
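A minimal illustrative sketch of the AWS stack above: querying S3-backed data through Athena with boto3. The database, query, and result bucket are assumptions:

```python
# Hedged sketch: running an Athena query over S3 data with boto3.
# Region, database, table, and bucket names are placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

resp = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
qid = resp["QueryExecutionId"]

# Poll until the query finishes (production code would add a timeout)
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```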
We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms like AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.
Responsibilities:
Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool)
Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.
Develop data routes: You design scalable and powerful data management processes.
Analyze data: You derive sound findings from data sets and present them in an understandable way.
Requirements:
Requirements management and project experience: You successfully implement cloud-based data & analytics projects.
Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.
Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).
SQL know-how: You have a sound and solid knowledge of SQL.
Data management: You are familiar with topics such as master data management and data quality.
Bachelor's degree in computer science, or a related field.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.
Experience: 4+ years.
Location: Vadodara & Pune
Skills Set- Snowflake, Power Bi, ETL, SQL, Data Pipelines
What you'll be doing:
- Develop, implement, and manage scalable Snowflake data warehouse solutions using advanced features such as materialized views, task automation, and clustering.
- Design and build real-time data pipelines from Kafka and other sources into Snowflake using Kafka Connect, Snowpipe, or custom solutions for streaming data ingestion.
- Create and optimize ETL/ELT workflows using tools like DBT, Airflow, or cloud-native solutions to ensure efficient data processing and transformation.
- Tune query performance, warehouse sizing, and pipeline efficiency by utilizing Snowflake's Query Profiling, Resource Monitors, and other diagnostic tools.
- Work closely with architects, data analysts, and data scientists to translate complex business requirements into scalable technical solutions.
- Enforce data governance and security standards, including data masking, encryption, and RBAC, to meet organizational compliance requirements.
- Continuously monitor data pipelines, address performance bottlenecks, and troubleshoot issues using monitoring frameworks such as Prometheus, Grafana, or Snowflake-native tools.
- Provide technical leadership, guidance, and code reviews for junior engineers, ensuring best practices in Snowflake and Kafka development are followed.
- Research emerging tools, frameworks, and methodologies in data engineering and integrate relevant technologies into the data stack.
What you need:
Basic Skills:
- 3+ years of hands-on experience with Snowflake data platform, including data modeling, performance tuning, and optimization.
- Strong experience with Apache Kafka for stream processing and real-time data integration.
- Proficiency in SQL and ETL/ELT processes.
- Solid understanding of cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with scripting languages like Python, Shell, or similar for automation and data integration tasks.
- Familiarity with tools like dbt, Airflow, or similar orchestration platforms.
- Knowledge of data governance, security, and compliance best practices.
- Strong analytical and problem-solving skills with the ability to troubleshoot complex data issues.
- Ability to work in a collaborative team environment and communicate effectively with cross-functional teams
Responsibilities:
- Design, develop, and maintain Snowflake data warehouse solutions, leveraging advanced Snowflake features like clustering, partitioning, materialized views, and time travel to optimize performance, scalability, and data reliability.
- Architect and optimize ETL/ELT pipelines using tools such as Apache Airflow, DBT, or custom scripts, to ingest, transform, and load data into Snowflake from sources like Apache Kafka and other streaming/batch platforms.
- Work in collaboration with data architects, analysts, and data scientists to gather and translate complex business requirements into robust, scalable technical designs and implementations.
- Design and implement Apache Kafka-based real-time messaging systems to efficiently stream structured and semi-structured data into Snowflake, using Kafka Connect, KSQL, and Snowpipe for real-time ingestion (see the sketch after this list).
- Monitor and resolve performance bottlenecks in queries, pipelines, and warehouse configurations using tools like Query Profile, Resource Monitors, and Task Performance Views.
- Implement automated data validation frameworks to ensure high-quality, reliable data throughout the ingestion and transformation lifecycle.
- Pipeline Monitoring and Optimization: Deploy and maintain pipeline monitoring solutions using Prometheus, Grafana, or cloud-native tools, ensuring efficient data flow, scalability, and cost-effective operations.
- Implement and enforce data governance policies, including role-based access control (RBAC), data masking, and auditing to meet compliance standards and safeguard sensitive information.
- Provide hands-on technical mentorship to junior data engineers, ensuring adherence to coding standards, design principles, and best practices in Snowflake, Kafka, and cloud data engineering.
- Stay current with advancements in Snowflake, Kafka, cloud services (AWS, Azure, GCP), and data engineering trends, and proactively apply new tools and methodologies to enhance the data platform.
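A minimal sketch of the Snowpipe side of the ingestion path referenced above, creating an auto-ingest pipe over an existing stage via the Python connector; all object names are placeholders, and the Kafka connector itself would be configured separately:

```python
# Hedged sketch: a Snowpipe definition that auto-copies staged JSON into a
# landing table. Assumes @events_stage already exists; names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="<secret>",
    database="RAW", schema="LANDING", warehouse="INGEST_WH",
)
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS raw_events (payload VARIANT, loaded_at TIMESTAMP_NTZ)
""")

cur.execute("""
    CREATE PIPE IF NOT EXISTS raw_events_pipe
      AUTO_INGEST = TRUE
      AS COPY INTO raw_events (payload, loaded_at)
         FROM (SELECT $1, CURRENT_TIMESTAMP() FROM @events_stage)
         FILE_FORMAT = (TYPE = 'JSON')
""")

conn.close()
```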
I am looking for a MuleSoft Developer for a reputed MNC.
Experience: 6+ Years
Relevant experience: 4 Years
Location : Pune, Mumbai, Bangalore, Indore, Kolkata
Skills:
MuleSoft
• S/he possesses wide exposure to the complete lifecycle of data, from creation to consumption
• S/he has in the past built repeatable tools / data models to solve specific business problems
• S/he should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to:
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes
o Work with business teams, or be part of the team that implemented process re-engineering driven by data analytics/insights
• Should have a deep appreciation of how data can be used in decision-making
• Should have a perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology
• S/he must have a solution-creation mindset, with the ability to design and enhance scalable data platforms to address business needs
• Working experience on data engineering tools for one or more cloud platforms - Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and clients to create last-mile connectivity of the solutions
o Should have experience of working with technology teams
• Demonstrated ability in thought leadership – Articles/White Papers/Interviews
Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
JOB DESCRIPTION: THE IDEAL CANDIDATE WILL:
• Ensure new features and subject areas are modelled to integrate with existing structures and provide a consistent view. Develop and maintain documentation of the data architecture, data flow and data models of the data warehouse, appropriate for various audiences. Provide direction on the adoption of Cloud technologies (Snowflake) and industry best practices in the field of data warehouse architecture and modelling.
• Provide technical leadership to large enterprise-scale projects. You will also be responsible for preparing estimates and defining technical solutions to proposals (RFPs). This role requires a broad range of skills and the ability to step into different roles depending on the size and scope of the project.
ELIGIBILITY CRITERIA: Desired Experience/Skills:
• Must have 5+ years total in IT, including 2+ years' experience working as a Snowflake Data Architect and 4+ years in Data Warehouse, ETL, and BI projects.
• Must have at least two end-to-end implementations of the Snowflake cloud data warehouse and three end-to-end on-premise data warehouse implementations, preferably on Oracle.
• Expertise in Snowflake – data modelling, ELT using Snowflake SQL, implementing complex stored procedures, and standard DWH and ETL concepts
• Expertise in Snowflake advanced concepts like setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone and time travel, and understanding how to use these features (see the sketch after this list)
• Expertise in deploying Snowflake features such as data sharing, events and lake-house patterns
• Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and Big Data modelling techniques using Python
• Experience in Data Migration from RDBMS to Snowflake cloud data warehouse
• Deep understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modelling)
• Experience with data security and data access controls and design
• Experience with AWS or Azure data storage and management technologies such as S3 and ADLS
• Build processes supporting data transformation, data structures, metadata, dependency and workload management
• Proficiency in RDBMS, complex SQL, PL/SQL, Unix Shell Scripting, performance tuning and troubleshooting
• Provide resolution to an extensive range of complicated data pipeline related problems, proactively and as issues surface
• Must have expertise in AWS or Azure Platform as a Service (PAAS)
• Certified Snowflake cloud data warehouse Architect (Desirable)
• Should be able to troubleshoot problems across infrastructure, platform and application domains.
• Must have experience of Agile development methodologies
• Strong written communication skills. Is effective and persuasive in both written and oral communication
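A minimal sketch of the Snowflake features referenced above (zero-copy clone, time travel, resource monitors), issued as SQL through the Python connector; names and quotas are illustrative:

```python
# Hedged sketch: zero-copy clone, time travel, and a resource monitor in
# Snowflake, run via the Python connector. All names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="admin", password="<secret>")
cur = conn.cursor()

# Zero-copy clone: instant, storage-free copy for testing or recovery
cur.execute("CREATE TABLE orders_dev CLONE analytics.public.orders")

# Time travel: query the table as it looked one hour ago
cur.execute("SELECT COUNT(*) FROM analytics.public.orders AT (OFFSET => -3600)")
print(cur.fetchone())

# Resource monitor: cap monthly credit spend on a warehouse
cur.execute("""
    CREATE RESOURCE MONITOR IF NOT EXISTS monthly_cap
      WITH CREDIT_QUOTA = 100
      FREQUENCY = MONTHLY START_TIMESTAMP = IMMEDIATELY
      TRIGGERS ON 90 PERCENT DO SUSPEND
""")
cur.execute("ALTER WAREHOUSE transform_wh SET RESOURCE_MONITOR = monthly_cap")

conn.close()
```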
Nice to have Skills/Qualifications: Bachelor's and/or Master's degree in Computer Science or equivalent experience.
• Strong communication, analytical and problem-solving skills with a high attention to detail.
About you:
• You are self-motivated, collaborative, eager to learn, and hands on
• You love trying out new apps, and find yourself coming up with ideas to improve them
• You stay ahead with all the latest trends and technologies
• You are particular about following industry best practices and have high standards regarding quality
• Project Planning and Management
o Take end-to-end ownership of multiple projects / project tracks
o Create and maintain project plans and other related documentation for project objectives, scope, schedule and delivery milestones
o Lead and participate across all the phases of software engineering, right from requirements gathering to GO LIVE
o Lead internal team meetings on solution architecture, effort estimation, manpower planning and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by developing effective mitigation plans
• Team Management
o Act as the Scrum Master
o Conduct SCRUM ceremonies like Sprint Planning, Daily Standup, Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and SCRUM principles
o Make the team accountable for their tasks and help the team in achieving them
o Identify the requirements and come up with a plan for Skill Development for all team members
• Communication
o Be the Single Point of Contact for the client in terms of day-to-day communication
o Periodically communicate project status to all the stakeholders (internal/external)
• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team
Must have:
• Minimum 8 years of experience (hands-on as well as leadership) in software / data engineering across multiple job functions like Business Analysis, Development, Solutioning, QA, DevOps and Project Management
• Hands-on as well as leadership experience in Big Data Engineering projects
• Experience developing or managing cloud solutions using Azure or other cloud providers
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL DBs, SQL, Data Warehousing, ETL/ELT, DevOps tools
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems-level critical thinking skills
• Strong collaboration and influencing skills
Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL Pool, Databricks, PowerBI, Machine Learning, Cloud Infrastructure
• Background in BFSI with focus on core banking
• Willingness to travel
Work Environment
• Customer Office (Mumbai) / Remote Work
Education
• UG: B. Tech - Computers / B. E. – Computers / BCA / B.Sc. Computer Science
Looking for a Data Engineer for our own organization -
Notice Period - 15-30 days
CTC - up to 15 LPA
Preferred Technical Expertise
- Expertise in Python programming.
- Proficient in the Pandas/NumPy libraries.
- Experience with Django framework and API Development.
- Proficient in writing complex queries using SQL
- Hands on experience with Apache Airflow.
- Experience with source code versioning tools such as Git, Bitbucket, etc.
Good to have Skills:
- Create and maintain Optimal Data Pipeline Architecture
- Experienced in handling large structured data.
- Demonstrated ability in solutions covering data ingestion, data cleansing, ETL, Data mart creation and exposing data for consumers.
- Experience with any cloud platform (GCP is a plus)
- Experience with jQuery, HTML, JavaScript, CSS is a plus.
Job description
Role : Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)
Primary Location : India-Pune, Hyderabad
Experience : 7 - 12 Years
Management Level: 7
Joining Time: Immediate Joiners are preferred
- Attend requirements gathering workshops, estimation discussions, design meetings and status review meetings
- Experience of Solution Design and Solution Architecture for the data engineering model to build and implement Big Data Projects on-premises and on the cloud.
- Align architecture with business requirements and stabilize the developed solution
- Ability to build prototypes to demonstrate the technical feasibility of your vision
- Professional experience facilitating and leading solution design, architecture and delivery planning activities for data intensive and high throughput platforms and applications
- Able to benchmark systems, analyse system bottlenecks and propose solutions to eliminate them
- Able to help programmers and project managers in the design, planning and governance of implementing projects of any kind.
- Develop, construct, test and maintain architectures and run Sprints for development and rollout of functionalities
- Data analysis and code development experience, ideally in Big Data Spark, Hive, Hadoop, Java, Python, PySpark
- Execute projects of various types i.e. Design, development, Implementation and migration of functional analytics Models/Business logic across architecture approaches
- Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions of the product
- Deploy sophisticated analytics programs using any cloud application.
Perks and Benefits we Provide!
- Working with Highly Technical and Passionate, mission-driven people
- Subsidized Meals & Snacks
- Flexible Schedule
- Approachable leadership
- Access to various learning tools and programs
- Pet Friendly
- Certification Reimbursement Policy
- Check out more about us on our website below!
www.datametica.com
- Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
- Hands on experience in implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics) and big data processing using Azure Databricks and Azure HDInsight.
- Expert in T-SQL programming for complex stored procedures, functions, views and query optimization (see the sketch after this list).
- Should be aware of database development for both on-premise and SaaS applications using SQL Server and PostgreSQL.
- Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
- Experience and expertise in building machine learning models using Logistic and linear regression, Decision tree and Random forest Algorithms.
- PolyBase queries for exporting and importing data into Azure Data Lake.
- Building data models both tabular and multidimensional using SQL Server data tools.
- Writing data preparation, cleaning and processing steps using Python, Scala, and R.
- Programming experience using the Python libraries NumPy, Pandas and Matplotlib.
- Implementing NoSQL databases and writing queries using Cypher.
- Designing end user visualizations using Power BI, QlikView and Tableau.
- Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
- Experience using the expression languages MDX and DAX.
- Experience in migrating on-premise SQL server database to Microsoft Azure.
- Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
- Performance tuning complex SQL queries, hands on experience using SQL Extended events.
- Data modeling using Power BI for Adhoc reporting.
- Raw data load automation using T-SQL and SSIS
- Expert in migrating existing on-premise database to SQL Azure.
- Experience in using U-SQL for Azure Data Lake Analytics.
- Hands on experience in generating SSRS reports using MDX.
- Experience in designing predictive models using Python and SQL Server.
- Developing machine learning models using Azure Databricks and SQL Server
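As referenced above, a minimal sketch of calling a T-SQL stored procedure from Python with pyodbc; the server, procedure, and columns are placeholder assumptions:

```python
# Hedged sketch: executing a T-SQL stored procedure via pyodbc.
# Server, database, credentials, and the procedure are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=sales;"
    "UID=etl_user;PWD=<secret>;Encrypt=yes"
)
cursor = conn.cursor()

# Execute a parameterised stored procedure and read its result set
cursor.execute("{CALL dbo.usp_daily_sales (?)}", "2024-01-01")
for row in cursor.fetchall():
    print(row.region, row.revenue)

conn.close()
```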
Job Description-
Backend Developer- Senior
Experience - 3-6 years
Location: Pune/Kota
Minimum Qualifications:
- BE/B.Tech or ME/M.Tech in Computer Science.
- Must have “Can Do Attitude” towards work
- Must have work exp of 3-6 years
- Must have programming exp of 1-2 years in any of Python/Golang/Java languages
- Must have worked in product based company
- Ready to work in a startup and adaptable to a dynamic environment
- Ready to accept ad-hoc requirements and track them till they get implemented
- Ready to learn new technologies like Android, Angular, etc.
- Good at HTTP basics, OOPs concepts, data structures, algorithms, networking and security aspects
- Ability to write clean code and maintain it
- Good at SQL/No-SQL databases
Preferred Qualifications:
- Experience in any good product based startup
- Experience in working with the team and managing a small team of 2-5 associates
- Experience in being a mentor for co-developers
- Experience in design/developing scalable systems.
- Experience in public cloud platforms services/APIs of AWS, Google Cloud, etc.
- Experience in data engineering
- Experience in SOA/Microservice architecture development
Responsibilities:
- Design and develop scalable services and APIs in Python/Golang (see the sketch after this list)
- Always keep the services secure
- Should optimize APIs for mobile data and apps
- Use off-the-shelf and state-of-the-art services for faster development of product
- Guide team members with designs
- Take the end to end ownership of features and resolve customer issues on priority
- Mentor/guide/monitor junior developer
- Gain expertise in Android/Angular to the required extent and guide app developers while designing APIs
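A minimal sketch of a small Python service of the kind described above, using Flask; the endpoints, fields, and in-memory store are illustrative assumptions only:

```python
# Hedged sketch: a tiny Flask service with one JSON resource. The routes
# and in-memory store are invented; a real service would use RDS/MongoDB.
from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS = {}  # in-memory store for illustration only

@app.route("/items/<item_id>", methods=["GET"])
def get_item(item_id):
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify(error="not found"), 404
    return jsonify(item)

@app.route("/items", methods=["POST"])
def create_item():
    body = request.get_json(force=True)
    ITEMS[body["id"]] = body
    return jsonify(body), 201

if __name__ == "__main__":
    app.run(port=8080)
```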
Opportunities in the role:
- Learn Angular, Python, Node.js, Golang, ELK stack, MEAN/MERN
- Work on AWS, Azure, Google Cloud Platform
- Work on databases like RDS, MongoDB, Big Table & DynamoDB, Redis, Aerospike
- Experience with SQL/ NoSQL Databases (RDS, DynamoDB, Google Datastore, Redis)
- Experience with ELK stack.
- Fast prototyping of proof-of-concept features/applications based on a brief
- Work on data engineering