
50+ SQL Jobs in Pune | SQL Job openings in Pune

Apply to 50+ SQL Jobs in Pune on CutShort.io. Explore the latest SQL Job opportunities across top companies like Google, Amazon & Adobe.

NonStop io Technologies Pvt Ltd
Posted by Prashant N
Pune
4 - 8 yrs
Best in industry
.NET
ASP.NET
C#
Entity Framework
LINQ
+5 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

NonStop io is seeking a proficient .NET Developer to join our growing team. You will be responsible for developing, enhancing, and maintaining scalable applications using .NET technologies. This role involves working on a healthcare-focused product and requires strong problem-solving skills, attention to detail, and a passion for software development.


Responsibilities:

  • Design, develop, and maintain applications using .NET Core/.NET Framework, C#, and related technologies
  • Write clean, scalable, and efficient code while following best practices
  • Develop and optimize APIs and microservices
  • Work with SQL Server and other databases to ensure high performance and reliability
  • Collaborate with cross-functional teams, including UI/UX designers, QA, and DevOps
  • Participate in code reviews and provide constructive feedback
  • Troubleshoot, debug, and enhance existing applications
  • Ensure compliance with security and performance standards, especially for healthcare-related applications


Qualifications & Skills:

  • Strong experience in .NET Core/.NET Framework and C#
  • Proficiency in building RESTful APIs and microservices architecture
  • Experience with Entity Framework, LINQ, and SQL Server
  • Familiarity with front-end technologies like React, Angular, or Blazor is a plus
  • Knowledge of cloud services (Azure/AWS) is a plus
  • Experience with version control (Git) and CI/CD pipelines
  • Strong understanding of object-oriented programming (OOP) and design patterns
  • Prior experience in healthcare tech or working with HIPAA-compliant systems is a plus


Why Join Us?

  • Opportunity to work on a cutting-edge healthcare product
  • A collaborative and learning-driven environment
  • Exposure to AI and software engineering innovations
  • Excellent work ethics and culture

If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Bengaluru (Bangalore), Pune, Chennai
5 - 12 yrs
₹5L - ₹25L / yr
PySpark
Automation
SQL

Skill Name: ETL Automation Testing

Location: Bangalore, Chennai and Pune

Experience: 5+ Years


Required:

Experience in ETL Automation Testing

Strong experience in PySpark.

NeoGenCode Technologies Pvt Ltd
Pune
8 - 15 yrs
₹5L - ₹24L / yr
Data engineering
Snowflake schema
SQL
ETL
ELT
+5 more

Job Title : Data Engineer – Snowflake Expert

Location : Pune (Onsite)

Experience : 10+ Years

Employment Type : Contractual

Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.


Job Summary :

We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.

The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.
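A minimal sketch (not part of the posting) of the Streams/Tasks-style incremental ELT step this role describes, driven from Python via the snowflake-connector-python package. The warehouse, table, and credential values are placeholders, and the real object design would depend on the actual data model.

import snowflake.connector

# Placeholder connection details; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS_DB",
)

statements = [
    # Capture row-level changes on a hypothetical raw table.
    "CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw.orders",
    # Scheduled task that loads new rows only when the stream has data.
    """
    CREATE OR REPLACE TASK load_orders_task
      WAREHOUSE = ETL_WH
      SCHEDULE  = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
    AS
      INSERT INTO analytics.orders_clean (order_id, customer_id, amount, order_ts)
      SELECT order_id, customer_id, amount, order_ts
      FROM raw_orders_stream
      WHERE METADATA$ACTION = 'INSERT'
    """,
    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK load_orders_task RESUME",
]

with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)
conn.close()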

Responsibilities :

  • Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
  • Design and implement scalable ELT pipelines with performance and cost-efficiency in mind.
  • Ensure high data quality, security, and adherence to governance frameworks.
  • Conduct code reviews and align development with best practices.

Qualifications :

  • Bachelor’s in Computer Science, Data Science, IT, or related field.
  • Snowflake certifications (Pro/Architect) preferred.

Wissen Technology
Posted by Vishakha Walunj
Bengaluru (Bangalore), Pune, Mumbai
7 - 12 yrs
Best in industry
PySpark
Databricks
SQL
Python

Required Skills:

  • Hands-on experience with Databricks, PySpark
  • Proficiency in SQL, Python, and Spark.
  • Understanding of data warehousing concepts and data modeling.
  • Experience with CI/CD pipelines and version control (e.g., Git).
  • Fundamental knowledge of any cloud services, preferably Azure or GCP.


Good to Have:

  • Bigquery
  • Experience with performance tuning and data governance.



For Fintech Co

Agency job
via Vikash Technologies by Rishika Teja
Pune, Noida
7 - 10 yrs
₹15L - ₹25L / yr
MEAN stack
SQL
NOSQL Databases
Amazon Web Services (AWS)
MVC Framework

Hiring for MEAN Stack Developer - Lead


Experience: 7 - 10 yrs

Work Location: Pune & Noida

Work from Office

Education: BE/B.Tech/MCA only, 60% & above


Job Description:


1) 6+ yrs in MEAN stack (MongoDB, AngularJS, Node.js, Express.js)

2) Experience with large-scale systems

3) Handling a team of 6+

4) Experience with RESTful APIs / SOAP APIs

5) Experience with SQL/NoSQL

6) Experience with AWS

7) Experience with MVC frameworks


TCS

Agency job
via Risk Resources LLP hyd by Jhansi Padiy
Mumbai, Pune, Chennai
4 - 8 yrs
₹6L - ₹20L / yr
Marketing Campaign
SAS
Teradata
SQL

 

Required Technical Skill Set: Teradata with Marketing Campaign knowledge and SAS

Desired Competencies (Technical/Behavioral Competency)

Must-Have

1. Advanced coding skills in Teradata SQL and SAS are required

2. Experience with customer segmentation, marketing optimization, and marketing automation. Thorough understanding of customer contact management principles

3. Design and execution of campaign on consumer and business products using Teradata communication manager and inhouse tools

4. Analyzing effectiveness of various campaigns by doing necessary analysis to add insights and improve future campaigns

5. Timely resolution of Marketing team queries and other ad-hoc requests

Good-to-Have

1. Awareness of CRM tools & processes, and automation

2. Knowledge of commercial databases is preferable

3. People & team management skills

Solidatus
Pune
6 - 8 yrs
₹0.5L - ₹0.5L / yr
Java
Spring Boot
NodeJS (Node.js)
Databases
SQL
+6 more

Competitive Salary


About Solidatus


At Solidatus, we empower organizations to connect and visualize their data relationships, making it easier to identify, access, and understand their data. Our metadata management technology helps businesses establish a sustainable data foundation, ensuring they meet regulatory requirements, drive digital transformation, and unlock valuable insights. 

 

We’re experiencing rapid growth—backed by HSBC, Citi, and AlbionVC, we secured £14 million in Series A funding in 2021. Our achievements include recognition in the Deloitte UK Technology Fast 50, multiple A-Team Innovation Awards, and a top 1% place to work ranking from The Financial Technologist.

 

Now is an exciting time to join us as we expand internationally and continue shaping the future of data management. 


About the Engineering Team


Engineering is the heart of Solidatus. Our team of world-class engineers, drawn from outstanding computer science and technical backgrounds, plays a critical role in crafting the powerful, elegant solutions that set us apart. We thrive on solving challenging visualization and data management problems, building technology that delights users and drives real-world impact for global enterprises.

As Solidatus expands its footprint, we are scaling our capabilities with a focus on building world-class connectors and integrations to extend the reach of our platform. Our engineers are trusted with the freedom to explore, innovate, and shape the product’s future — all while working in a collaborative, high-impact environment. Here, your code doesn’t just ship — it empowers some of the world's largest and most complex organizations to achieve their data ambitions.


Who We Are & What You’ll Do


Join our Data Integration team and help shape the way data flows! 


Your Mission:


To expand and refine our suite of out-of-the-box integrations, using our powerful API and SDK to bring in metadata for visualisation from a vast range of sources including databases with diverse SQL dialects.

But that is just the beginning. At our core, we are problem-solvers and innovators. You’ll have the chance to:

  • Design intuitive layouts representing the flow of data across complex deployments of diverse technologies
  • Design and optimize API connectivity and parsers reading metadata from source systems
  • Explore new paradigms for representing data lineage
  • Enhance our data ingestion capabilities to handle massive volumes of data
  • Dig deep into data challenges to build smarter, more scalable solutions

Beyond engineering, you’ll collaborate with users, troubleshoot tricky issues, streamline development workflows, and contribute to a culture of continuous improvement.


What We’re Looking For


  • We don’t believe in sticking to a single tech stack just for the sake of it. We’re engineers first, and we pick the best tools for the job. More than ticking off a checklist, we value mindset, curiosity, and problem-solving skills.
  • You’re quick to learn and love diving into new technologies
  • You push for excellence and aren’t satisfied with “just okay”
  • You can break down complex topics in a way that anyone can understand
  • You should have 6–8 years of proven experience in developing and delivering high-quality, scalable software solutions
  • You should be a strong self-starter with the ability to take ownership of tasks and drive them to completion with minimal supervision.
  • You should be able to mentor junior developers, perform code reviews, and ensure adherence to best practices in software engineering.


Tech & Skills We’d Love to See


Must-have:

  • Strong hands-on experience with Java, Spring Boot, RESTful APIs, and Node.js
  • Solid knowledge of databases, SQL dialects, and data structures


Nice-to-have:

  • Experience with C#, ASP.NET Core, TypeScript, React.js, or similar frameworks
  • Bonus points for data experience—we love data wizards


If you’re passionate about engineering high-impact solutions, playing with cutting-edge tech, and making data work smarter, we’d love to have you on board!

NonStop io Technologies Pvt Ltd
Pune
8 - 12 yrs
Best in industry
.NET
Java
MySQL
SQL
Microservices
+2 more

About the Role:

We are seeking an experienced Tech Lead with 8+ years of hands-on experience in backend development using .NET or Java. The ideal candidate will have strong leadership capabilities, the ability to mentor a team, and a solid technical foundation to deliver scalable and maintainable backend systems. Prior experience in the healthcare domain is a plus.


Key Responsibilities:

  • Lead a team of backend developers to deliver product and project-based solutions.
  • Oversee the development and implementation of backend services and APIs.
  • Collaborate with cross-functional teams including frontend, QA, DevOps, and Product.
  • Perform code reviews and enforce best practices in coding and design.
  • Ensure performance, quality, and responsiveness of backend applications.
  • Participate in sprint planning, estimations, and retrospectives.
  • Troubleshoot, analyze, and optimize application performance.

Required Skills:

  • 8+ years of backend development experience in .NET or Java.
  • Proven experience as a Tech Lead managing development teams.
  • Strong understanding of REST APIs, microservices, and software design patterns.
  • Familiarity with SQL and NoSQL databases.
  • Good knowledge of Agile/Scrum methodologies.

Preferred Skills:

  • Experience in the healthcare domain.
  • Exposure to frontend frameworks like Angular or React.
  • Understanding of cloud platforms such as Azure/AWS/GCP.
  • CI/CD and DevOps practices.

What We Offer:

  • Collaborative and value-driven culture.
  • Projects with real-world impact in critical domains.
  • Flexibility and autonomy in work.
  • Continuous learning and growth opportunities.



DeepIntent
Posted by Indrajeet Deshmukh
Pune
4 - 10 yrs
Best in industry
Python
Spark
Apache Airflow
Docker
SQL
+2 more

What You’ll Do:


As a Sr. Data Scientist, you will work closely across DeepIntent Data Science teams located in New York City, India, and Bosnia. The role will focus on building predictive models and implementing data-driven solutions to maximize ad effectiveness. You will also lead efforts in generating analyses and insights related to measurement of campaign outcomes, Rx, patient journey, and supporting evolution of the DeepIntent product suite. Activities in this position include developing and deploying models in production, reading campaign results, analyzing medical claims, clinical, demographic, and clickstream data, performing analysis and creating actionable insights, and summarizing and presenting results and recommended actions to internal stakeholders and external clients, as needed.

  • Explore ways to create better predictive models
  • Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights 
  • Explore ways of using inference, statistical, machine learning techniques to improve the performance of existing algorithms and decision heuristics
  • Design and deploy new iterations of production-level code
  • Contribute posts to our upcoming technical blog  


Who You Are:


  • Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, Operations Research, or Data Science. Graduate degree is strongly preferred
  • 5+ years of working experience as Data Scientist or Researcher in digital marketing, consumer advertisement, telecom, or other areas requiring customer level predictive analytics
  • Advanced proficiency in performing statistical analysis in Python, including relevant libraries is required
  • Experience with data processing, transformation, and building model pipelines using tools such as Spark, Airflow, and Docker
  • You have an understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns, or familiarity with the US healthcare patient and provider systems (e.g., medical claims, medications)
  • You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference…) 
  • You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing
  • You can write production level code, work with Git repositories
  • Active Kaggle participant 
  • Working experience with SQL
  • Familiar with medical and healthcare data (medical claims, Rx, preferred)
  • Conversant with cloud technologies such as AWS or Google Cloud



Data Axle
Posted by Eman Khan
Pune
6 - 9 yrs
Best in industry
Machine Learning (ML)
Python
SQL
PySpark
XGBoost

About Data Axle:

Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.


Data Axle Pune is pleased to have achieved certification as a Great Place to Work!


Roles & Responsibilities:

We are looking for a Senior Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.


We are looking for a Senior Data Scientist who will be responsible for:

  1. Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
  2. Design or enhance ML workflows for data ingestion, model design, model inference and scoring
  3. Oversight on team project execution and delivery
  4. Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
  5. Visualize and publish model performance results and insights to internal and external audiences


Qualifications:

  1. Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
  2. Minimum of 5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
  3. Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
  4. Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
  5. Proficiency in Python and SQL required; PySpark/Spark experience a plus
  6. Ability to conduct a productive peer review and proper code structure in Github
  7. Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
  8. Working knowledge of modern CI/CD methods.


This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.


Gruve
Posted by Nikita Sinha
Mumbai, Pune
5 - 10 yrs
Up to ₹22L / yr (varies)
React.js
NextJs (Next.js)
WordPress
PHP
HTML/CSS
+4 more

We are seeking an experienced WordPress Developer with expertise in both frontend and backend development. The ideal candidate will have a deep understanding of headless WordPress architecture, where the backend is managed with WordPress, and the frontend is built using React.js (or Next.js). The developer should follow best coding practices to ensure the website is secure, high-performing, scalable, and fully responsive. 
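As a rough illustration of the decoupled pattern described above, the sketch below pulls published posts from WordPress’s standard REST endpoint (/wp-json/wp/v2/posts). In this role the consumer would be a React.js/Next.js frontend; Python is used here only to keep the example compact, and the site URL is a placeholder.

import requests

WP_BASE = "https://example-cms.com/wp-json/wp/v2"   # hypothetical headless WP backend


def fetch_latest_posts(per_page: int = 5) -> list[dict]:
    """Pull published posts from the headless CMS, newest first."""
    resp = requests.get(
        f"{WP_BASE}/posts",
        params={"per_page": per_page, "orderby": "date", "order": "desc", "_embed": "1"},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {
            "id": post["id"],
            "title": post["title"]["rendered"],   # WordPress returns rendered HTML for titles
            "link": post["link"],
        }
        for post in resp.json()
    ]


if __name__ == "__main__":
    for post in fetch_latest_posts():
        print(post["id"], post["title"])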


Key Responsibilities: 

Backend Development (WordPress): 

  • Develop and maintain a headless WordPress CMS to serve content via REST API / GraphQL. 
  • Create custom WordPress plugins and themes to optimize content delivery. 
  • Ensure secure authentication and role-based access for API endpoints. 
  • Optimize WordPress database queries for better performance. 

Frontend Development (React.js / Next.js): 

  • Build a decoupled frontend using React.js (or Next.js) that fetches content from WordPress. 
  • Experience with Figma for translating UI/UX designs to code. 
  • Ensure seamless integration of frontend with WordPress APIs. 
  • Implement modern UI/UX principles to create responsive, fast-loading web pages. 

Code quality, Performance & Security Optimization: 

  • Optimize website speed using caching, lazy loading, and CDN integration. 
  • Ensure the website follows SEO best practices and is mobile-friendly. 
  • Implement security best practices to prevent vulnerabilities such as SQL injection, XSS, and CSRF. 
  • Write clean, maintainable, and well-documented code following industry standards. 
  • Implement version control using Git/GitHub/GitLab. 
  • Conduct regular code reviews and debugging to ensure a high-quality product. 

Collaboration & Deployment: 

  • Work closely with designers, content teams, and project managers. 
  • Deploy and manage WordPress and frontend code in staging and production environments. 
  • Monitor website performance and implement improvements. 

Required Skills & Qualifications: 

  • B.E/B. Tech Degree, Master’s Degree required
  • Experience: 6 – 8 Years
  • Strong experience in React.js / Next.js for building frontend applications. 
  • Proficiency in JavaScript (ES6+), TypeScript, HTML5, CSS3, and TailwindCSS.
  • Familiarity with SSR (Server Side Rendering) and SSG (Static Site Generation). 
  • Experience in WordPress development (PHP, MySQL, WP REST API, GraphQL). 
  • Experience with ACF (Advanced Custom Fields), Custom Post Types, WP Headless CMS
  • Strong knowledge of WordPress security, database optimization, and caching techniques. 

Why Join Us:

  • Competitive salary and benefits package.
  • Work in a dynamic, collaborative, and creative environment.
  • Opportunity to lead and influence design decisions across various platforms.
  • Professional development opportunities and career growth potential.



Deqode
Posted by Mokshada Solanki
Bengaluru (Bangalore), Mumbai, Pune, Gurugram
4 - 5 yrs
₹4L - ₹20L / yr
SQL
Amazon Web Services (AWS)
Migration
PySpark
ETL

Job Summary:

Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
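For context, a hedged sketch of the kind of AWS Glue PySpark job such a pipeline is typically built from: read a catalogued table, apply a transform, and write partitioned Parquet to S3. The database, table, and bucket names are invented for illustration only.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (e.g. a table crawled from MySQL/RDS); names are hypothetical.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
).toDF()

# Example transform: keep completed orders and stamp the load time.
cleaned = (
    orders.filter(F.col("status") == "COMPLETED")
          .withColumn("load_ts", F.current_timestamp())
)

# Write partitioned Parquet back to the data lake.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-datalake/curated/orders/"
)

job.commit()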


Key Responsibilities:

  • Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
  • Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
  • Work on data migration tasks in AWS environments.
  • Monitor and improve database performance; automate key performance indicators and reports.
  • Collaborate with cross-functional teams to support data integration and delivery requirements.
  • Write shell scripts for automation and manage ETL jobs efficiently.


Required Skills:

  • Strong experience with MySQL, complex SQL queries, and stored procedures.
  • Hands-on experience with AWS Glue, PySpark, and ETL processes.
  • Good understanding of AWS ecosystem and migration strategies.
  • Proficiency in shell scripting.
  • Strong communication and collaboration skills.


Nice to Have:

  • Working knowledge of Python.
  • Experience with AWS RDS.




Deqode
Posted by Shraddha Katare
Bengaluru (Bangalore), Pune, Chennai, Mumbai, Gurugram
5 - 7 yrs
₹5L - ₹19L / yr
Amazon Web Services (AWS)
Python
PySpark
SQL
Redshift

Profile: AWS Data Engineer

Mode: Hybrid

Experience: 5-7 years

Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram


Roles and Responsibilities

  • Design and maintain ETL pipelines using AWS Glue and Python/PySpark
  • Optimize SQL queries for Redshift and Athena
  • Develop Lambda functions for serverless data processing
  • Configure AWS DMS for database migration and replication
  • Implement infrastructure as code with CloudFormation
  • Build optimized data models for performance
  • Manage RDS databases and AWS service integrations
  • Troubleshoot and improve data processing efficiency
  • Gather requirements from business stakeholders
  • Implement data quality checks and validation
  • Document data pipelines and architecture
  • Monitor workflows and implement alerting
  • Keep current with AWS services and best practices


Required Technical Expertise:

  • Python/PySpark for data processing
  • AWS Glue for ETL operations
  • Redshift and Athena for data querying
  • AWS Lambda and serverless architecture
  • AWS DMS and RDS management
  • CloudFormation for infrastructure
  • SQL optimization and performance tuning

Gruve
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
5yrs+
Up to ₹50L / yr (varies)
Python
SQL
Data engineering
Apache Spark
PySpark
+6 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions. 

Key Roles & Responsibilities:

  • Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
  • Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
  • Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
  • Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency.
  • Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
  • Implement data governance, security, and compliance best practices.
  • Build and maintain data models, transformations, and data marts for analytics and reporting.
  • Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
  • Automate infrastructure and deployments using Terraform, Airflow, or dbt.
  • Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
  • Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.


Basic Qualifications:

  • Bachelor’s or Master’s Degree in Computer Science or Data Science.
  • 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
  • Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
  • Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
  • Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
  • Proficiency in SQL, Python, or Scala for data transformation and analytics.
  • Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
  • Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
  • Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
  • Strong understanding of data governance, access control, and encryption strategies.
  • Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.


Preferred Qualifications:

  • Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
  • Experience in BI and analytics tools (Tableau, Power BI, Looker).
  • Familiarity with data observability tools (Monte Carlo, Great Expectations).
  • Experience with machine learning feature engineering pipelines in Databricks.
  • Contributions to open-source data engineering projects.

Deqode
Posted by Alisha Das
Pune, Mumbai, Bengaluru (Bangalore), Chennai
4 - 7 yrs
₹5L - ₹15L / yr
Amazon Web Services (AWS)
Python
PySpark
Glue semantics
Amazon Redshift
+1 more

Job Overview:

We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.

Key Responsibilities:

  • Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
  • Integrate data from diverse sources and ensure its quality, consistency, and reliability.
  • Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
  • Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
  • Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
  • Automate data validation, transformation, and loading processes to support real-time and batch data processing.
  • Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.

Required Skills:

  • 5 to 7 years of hands-on experience in data engineering roles.
  • Strong proficiency in Python and PySpark for data transformation and scripting.
  • Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
  • Solid understanding of SQL and database optimization techniques.
  • Experience working with large-scale data pipelines and high-volume data environments.
  • Good knowledge of data modeling, warehousing, and performance tuning.

Preferred/Good to Have:

  • Experience with workflow orchestration tools like Airflow or Step Functions.
  • Familiarity with CI/CD for data pipelines.
  • Knowledge of data governance and security best practices on AWS.

Deqode
Posted by Shraddha Katare
Pune, Mumbai, Bengaluru (Bangalore), Gurugram
4 - 6 yrs
₹5L - ₹10L / yr
ETL
SQL
Amazon Web Services (AWS)
PySpark
KPI

Role: ETL Developer

Work Mode: Hybrid

Experience: 4+ years

Location: Pune, Gurgaon, Bengaluru, Mumbai

Required Skills: AWS, AWS Glue, PySpark, ETL, SQL

Required Skills:

  • 4+ years of hands-on experience in MySQL, including SQL queries and procedure development
  • Experience in PySpark, AWS, and AWS Glue
  • Experience with AWS migration
  • Experience with automated scripting and tracking KPIs/metrics for database performance
  • Proficiency in shell scripting and ETL.
  • Strong communication skills and a collaborative team player
  • Knowledge of Python and AWS RDS is a plus



Deqode
Posted by purvisha Bhavsar
Pune, Indore
4 - 6 yrs
₹10L - ₹18L / yr
Amazon Web Services (AWS)
JavaScript
PHP
SQL

🚀 We’re Hiring: PHP Developer at Deqode

📍 Location: Pune (Hybrid)

🕒Experience: 4–6 Years

⏱️ Notice Period: Immediate Joiner


We're looking for a skilled PHP Developer to join our team. If you have a strong grasp of secure coding practices, are experienced in PHP upgrades, and thrive in a fast-paced deployment environment, we’d love to connect with you!


🔧 Key Skills:

- PHP | MySQL | JavaScript | Jenkins | Nginx | AWS


🔐 Security-Focused Responsibilities Include:

- Remediation of PenTest findings

- XSS mitigation (input/output sanitization)

- API rate limiting

- 2FA integration

- PHP version upgrade

- Use of AWS Secrets Manager

- Secure session and password policies




Top tier global IT consulting company

Agency job
via AccioJob by AccioJobHiring Board
Pune, Hyderabad, Gurugram, Chennai
0 - 1 yrs
₹11.1L - ₹11.1L / yr
Data Structures
Algorithms
Object Oriented Programming (OOPs)
SQL
Any programming language

AccioJob is conducting an exclusive diversity hiring drive with a reputed global IT consulting company for female candidates only.


Apply Here: https://links.acciojob.com/3SmQ0Bw


Key Details:

• Role: Application Developer

• CTC: ₹11.1 LPA

• Work Location: Pune, Chennai, Hyderabad, Gurgaon (Onsite)

• Required Skills: DSA, OOPs, SQL, and proficiency in any programming language


Eligibility Criteria:

• Graduation Year: 2024–2025

• Degree: B.E/B.Tech or M.E/M.Tech

• CS/IT branches: No prior experience required

• Non-CS/IT branches: Minimum 6 months of technical experience

• Minimum 60% in UG


Selection Process:

Offline Assessment at AccioJob Skill Center(s) in:

• Pune

• Hyderabad

• Noida

• Delhi

• Greater Noida


Further Rounds for Shortlisted Candidates Only:

• Coding Test

• Code Pairing Round

• Technical Interview

• Leadership Round


Note: Candidates must bring their own laptop & earphones for the assessment.


Apply Here: https://links.acciojob.com/3SmQ0Bw


ZeMoSo Technologies
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Chennai, Pune
4 - 8 yrs
₹10L - ₹15L / yr
Data engineering
Python
SQL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+3 more

Work Mode: Hybrid


Need B.Tech, BE, M.Tech, ME candidates - Mandatory



Must-Have Skills:

● Educational Qualification :- B.Tech, BE, M.Tech, ME in any field.

● Minimum of 3 years of proven experience as a Data Engineer.

● Strong proficiency in Python programming language and SQL.

● Experience in DataBricks and setting up and managing data pipelines, data warehouses/lakes.

● Good comprehension and critical thinking skills.


● Kindly note: the salary bracket will vary according to the candidate’s experience -

- Experience from 4 yrs to 6 yrs - Salary up to 22 LPA

- Experience from 5 yrs to 8 yrs - Salary up to 30 LPA

- Experience more than 8 yrs - Salary up to 40 LPA


Data Axle
Posted by Eman Khan
Pune
7 - 10 yrs
Best in industry
Google Cloud Platform (GCP)
ETL
Python
Java
Scala
+4 more

About Data Axle:

Data Axle Inc. has been an industry leader in data, marketing solutions, sales and research for over 45 years in the USA. Data Axle has set up a strategic global center of excellence in Pune. This center delivers mission critical data services to its global customers powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases. Data Axle is headquartered in Dallas, TX, USA.


Roles and Responsibilities:

  • Design, implement, and manage scalable analytical data infrastructure, enabling efficient access to large datasets and high-performance computing on Google Cloud Platform (GCP).
  • Develop and optimize data pipelines using GCP-native services like BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Data Fusion, and Cloud Storage (a minimal example follows this list).
  • Work with diverse data sources to extract, transform, and load data into enterprise-grade data lakes and warehouses, ensuring high availability and reliability.
  • Implement and maintain real-time data streaming solutions using Pub/Sub, Dataflow, and Kafka.
  • Research and integrate the latest big data and visualization technologies to enhance analytics capabilities and improve efficiency.
  • Collaborate with cross-functional teams to implement machine learning models and AI-driven analytics solutions using Vertex AI and BigQuery ML.
  • Continuously improve existing data architectures to support scalability, performance optimization, and cost efficiency.
  • Enhance data security and governance by implementing industry best practices for access control, encryption, and compliance.
  • Automate and optimize data workflows to simplify reporting, dashboarding, and self-service analytics using Looker and Data Studio.
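
As referenced in the pipeline bullet above, here is a hedged, self-contained example of BigQuery usage with the official google-cloud-bigquery client: loading a Parquet extract from Cloud Storage and running a follow-up query. The project, dataset, table, and bucket names are made up for illustration.

from google.cloud import bigquery

client = bigquery.Client(project="example-project")        # hypothetical project
table_id = "example-project.analytics.daily_events"

# Load a Parquet export from Cloud Storage into a BigQuery table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/exports/daily_events/*.parquet",
    table_id,
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # block until the load completes

# Simple follow-up aggregation over the freshly loaded table.
query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.daily_events`
    GROUP BY event_date
    ORDER BY event_date DESC
    LIMIT 7
"""
for row in client.query(query).result():
    print(row.event_date, row.events)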


Basic Qualifications

  • 7+ years of experience in data engineering, software development, business intelligence, or data science, with expertise in large-scale data processing and analytics.
  • Strong proficiency in SQL and experience with BigQuery for data warehousing.
  • Hands-on experience in designing and developing ETL/ELT pipelines using GCP services (Cloud Composer, Dataflow, Dataproc, Data Fusion, or Apache Airflow).
  • Expertise in distributed computing and big data processing frameworks, such as Apache Spark, Hadoop, or Flink, particularly within Dataproc and Dataflow environments.
  • Experience with business intelligence and data visualization tools, such as Looker, Tableau, or Power BI.
  • Knowledge of data governance, security best practices, and compliance requirements in cloud environments.


Preferred Qualifications:

  • Degree/Diploma in Computer Science, Engineering, Mathematics, or a related technical field.
  • Experience working with GCP big data technologies, including BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud SQL.
  • Hands-on experience with real-time data processing frameworks, including Kafka and Apache Beam.
  • Proficiency in Python, Java, or Scala for data engineering and pipeline development.
  • Familiarity with DevOps best practices, CI/CD pipelines, Terraform, and infrastructure-as-code for managing GCP resources.
  • Experience integrating AI/ML models into data workflows, leveraging BigQuery ML, Vertex AI, or TensorFlow.
  • Understanding of Agile methodologies, software development life cycle (SDLC), and cloud cost optimization strategies.

Data Axle
Posted by Eman Khan
Pune
9 - 12 yrs
Best in industry
Python
PySpark
Machine Learning (ML)
SQL
Data Science
+1 more

Roles & Responsibilities:  

We are looking for a Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.  


We are looking for a Lead Data Scientist who will be responsible for  

  • Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture  
  • Design or enhance ML workflows for data ingestion, model design, model inference and scoring
  • Oversight on team project execution and delivery
  • Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies  
  • Visualize and publish model performance results and insights to internal and external audiences  


Qualifications:  

  • Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)  
  • Minimum of 9+ years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
  • Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)  
  • Proficiency in Python and SQL required; PySpark/Spark experience a plus  
  • Ability to conduct a productive peer review and proper code structure in Github
  • Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)  
  • Working knowledge of modern CI/CD methods  


This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level. 


QAgile Services
Posted by Radhika Chotai
Pune
4 - 8 yrs
₹7L - ₹18L / yr
Java
JavaScript
HTML/CSS
PostgreSQL
SQL
+8 more

Key Responsibilities would include: 


1. Design, develop, and maintain enterprise-level Java applications. 

2. Collaborate with cross-functional teams to gather and analyze requirements, and implement solutions. 

3. Develop & customize the application using HTML5, CSS, and jQuery to create dynamic and responsive user interfaces. 

4. Integrate with relational databases (RDBMS) to manage and retrieve data efficiently. 

5. Write clean, maintainable, and efficient code following best practices and coding standards. 

6. Participate in code reviews, debugging, and testing to ensure high-quality deliverables. 

7. Troubleshoot and resolve issues in existing applications and systems. 


Qualification requirement - 


1. 4 years of hands-on experience in Java/J2EE development, preferably with enterprise-level projects.

2. Spring Framework, including SOA, AOP, and Spring Security

3. Proficiency in web technologies including HTML5, CSS, jQuery, and JavaScript.

4. Experience with RESTful APIs and web services.

5. Knowledge of build tools like Maven or Gradle

6. Strong knowledge of relational databases (e.g., MySQL, PostgreSQL, Oracle) and experience with SQL.

7. Experience with version control systems like Git.

8. Understanding of software development lifecycle (SDLC) 

9. Strong problem-solving skills and attention to detail.


Wissen Technology
Posted by Vijayalakshmi Selvaraj
Pune, Ahmedabad
4 - 9 yrs
₹10L - ₹35L / yr
Python
pytest
Amazon Web Services (AWS)
Test Automation (QA)
SQL

At least 5 years of experience in testing and developing automation tests.

A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.

Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.

Familiarity with Playwright or other browser application testing frameworks is a significant advantage.

Proficiency in object-oriented programming and principles is required.

Extensive knowledge of AWS services is essential.

Strong expertise in REST API testing and SQL is required.

A solid understanding of testing and development life cycle methodologies is necessary.

Knowledge of the financial industry and trading systems is a plus


Deqode
Posted by Shraddha Katare
Pune
3 - 5 yrs
₹6L - ₹15L / yr
Software Testing (QA)
SQL
TestNG
Selenium
Automation

Job Title: Sr. QA Engineer

Location: Pune (Baner)

Mode - Hybrid


Major Responsibilities:


  • Understand product requirements and design test plans/ test cases.
  • Collaborate with developers for discussing story design/ test cases/code walkthrough etc.
  • Design automation strategy for regression test cases.
  • Execute tests and collaborate with developers in case of issues.
  • Review unit test coverage/ enhance existing unit test coverage
  • Automate integration/end-to-end tests using JUnit/Mockito/Selenium/Cypress


Requirements: 


  • Experience of web application testing/ test automation
  • Good analytical skills
  • Exposure to test design techniques
  • Exposure to Agile Development methodology, Scrums
  • Should be able to read and understand code.
  • Review and understand unit test cases/ suggest additional unit-level coverage points.
  • Exposure to multi-tier web application deployment/architecture (SpringBoot)
  • Good exposure to SQL query language
  • Exposure to Configuration management tool for code investigation - GitHub
  • Exposure to Web Service / API testing
  • Cucumber – use case-driven test automation
  • System understanding, writing test cases from scratch, requirement analysis, thinking from a user perspective, test designing, and requirement analysis



Innominds
Posted by Reshika Mendiratta
Pune
5yrs+
Up to ₹35L / yr (varies)
Java
Amazon Web Services (AWS)
SQL
Internet of Things (IOT)
Spring
+1 more

In your role as Software Engineer/Lead, you will directly work with other developers, Product Owners, and Scrum Masters to evaluate and develop innovative solutions. The purpose of the role is to design, develop, test, and operate a complex set of applications or platforms in the IoT Cloud area.


The role involves the utilization of advanced tools and analytical methods for gathering facts to develop solution scenarios. The job holder needs to be able to execute quality code, review code, and collaborate with other developers.


We have an excellent mix of people, which we believe makes for a more vibrant, more innovative, and more productive team.


  • A bachelor’s degree, or master’s degree in information technology, computer science, or other relevant education
  • At least 5 years of experience as Software Engineer, in an enterprise context
  • Experience in design, development and deployment of large-scale cloud-based applications and services
  • Good knowledge of cloud (AWS) serverless application development, event-driven architecture, and SQL/NoSQL databases
  • Experience with IoT products, backend services, and design principles
  • Good knowledge of at least one backend technology like Node.js (JavaScript, TypeScript) or the JVM (Java, Scala, Kotlin)
  • Passionate about code quality, security and testing
  • Microservice development experience with Java (Spring) is a plus
  • Good command of English in both Oral & Written



Xebia IT Architects
Posted by Vijay S
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Chennai, Bhopal, Jaipur
10 - 15 yrs
₹30L - ₹40L / yr
Spark
Google Cloud Platform (GCP)
Python
Apache Airflow
PySpark
+1 more

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
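
To make the Raw → Silver → Gold idea concrete, below is a small, assumption-laden sketch of one Raw-to-Silver step written as a PySpark/Delta job of the sort an Airflow task could submit to Databricks. The storage paths, columns, and cleansing rules are invented and do not describe Xebia's actual framework.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw_to_silver_orders").getOrCreate()

RAW_PATH = "gs://example-lake/raw/orders/"        # hypothetical GCS locations
SILVER_PATH = "gs://example-lake/silver/orders/"

# Raw layer: ingested as-is, typically semi-structured.
raw = spark.read.json(RAW_PATH)

# Silver layer: deduplicated, typed, and enriched with a partition column.
silver = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("ingest_date", F.to_date(F.col("ingest_ts")))
)

# Write the cleansed layer as partitioned Delta so the Gold layer can build on it.
(silver.write.format("delta")
       .mode("overwrite")
       .partitionBy("ingest_date")
       .save(SILVER_PATH))

spark.stop()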


  • Shift: 2 PM to 11 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or those with a notice period of up to 30 days


Key Responsibilities:

  • Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
  • Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers.
  • Ensure data integrity, consistency, and availability across all systems.
  • Collaborate with data engineers, analysts, and stakeholders to optimize performance.
  • Document standards and best practices for data engineering workflows.

Required Experience:


  • 7-8 years of experience in data engineering, architecture, and pipeline development.
  • Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
  • Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
  • Understanding of Data Lake table formats (Delta, Iceberg, etc.).
  • Proficiency in Python for scripting and automation.
  • Strong problem-solving skills and collaborative mindset.


⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/


Deqode
Posted by Roshni Maji
Bengaluru (Bangalore), Pune, Gurugram, Chennai, Bhopal, Jaipur
5 - 10 yrs
₹15L - ₹24L / yr
Tableau
SQL

Job Description:

We are seeking a Tableau Developer with 5+ years of experience to join our Core Analytics team. The candidate will work on large-scale BI projects using Tableau and related tools.


Must Have:

  • Strong expertise in Tableau Desktop and Server, including add-ons like Data Management and Server Management.
  • Ability to interpret business requirements, build wireframes, and finalize KPIs, calculations, and designs.
  • Participate in design discussions to implement best practices for dashboards and reports.
  • Build scalable BI and Analytics products based on feedback while adhering to best practices.
  • Propose multiple solutions for a given problem, leveraging toolset functionality.
  • Optimize data sources and dashboards while ensuring business requirements are met.
  • Collaborate with product, platform, and program teams for timely delivery of dashboards and reports.
  • Provide suggestions and take feedback to deliver future-ready dashboards.
  • Peer review team members’ dashboards, offering constructive feedback to improve overall design.
  • Proficient in SQL, UI/UX practices, and Alation, with an understanding of good data models for reporting.
  • Mentor less experienced team members.



Deqode
Posted by Shraddha Katare
Pune
2 - 5 yrs
₹3L - ₹10L / yr
PySpark
Amazon Web Services (AWS)
AWS Lambda
SQL
Data engineering
+2 more


Here is the Job Description - 


Location -- Viman Nagar, Pune

Mode - 5 Days Working


Required Tech Skills:


 ● Strong at PySpark, Python

 ● Good understanding of Data Structure 

 ● Good at SQL query/optimization 

 ● Strong fundamentals of OOPs programming 

 ● Good understanding of AWS Cloud, Big Data. 

 ● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB  



Jio Tesseract
Posted by TARUN MISHRA
Bengaluru (Bangalore), Pune, Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Navi Mumbai, Kolkata, Rajasthan
5 - 24 yrs
₹9L - ₹70L / yr
C
C++
Visual C++
Embedded C++
Artificial Intelligence (AI)
+32 more

JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the cross of hardware, software, content and services with focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR and AI, with some of our notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, AR/VR headsets for consumers and enterprise space.


Mon-fri role, In office, with excellent perks and benefits!


Position Overview

We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.


Key Responsibilities:

1. System Architecture & Design

● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.

● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.

● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.


2. Perception & AI Integration

● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.

● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.

● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.


3. Embedded & Real-Time Systems

● Design high-performance embedded software stacks for real-time robotic control and autonomy.

● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms.

● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.


4. Robotics Simulation & Digital Twins

● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.

● Leverage synthetic data generation (Omniverse Replicator) for training AI models.

● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.


5. Navigation & Motion Planning

● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.

● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.

● Implement reinforcement learning-based policies using Isaac Gym.


6. Performance Optimization & Scalability

● Ensure low-latency AI inference and real-time execution of robotics applications.

● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.

● Develop benchmarking and profiling tools to measure software performance on edge AI devices.


Required Qualifications:

● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.

● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.

● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.

● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.

● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.

● Strong background in robotic perception, planning, and real-time control.

● Experience with cloud-edge AI deployment and scalable architectures.


Preferred Qualifications

● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym

● Knowledge of robot kinematics, control systems, and reinforcement learning

● Expertise in distributed computing, containerization (Docker), and cloud robotics

● Familiarity with automotive, industrial automation, or warehouse robotics

● Experience designing architectures for autonomous systems or multi-robot systems.

● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics

● Experience with microservices or service-oriented architecture (SOA)

● Knowledge of machine learning and AI integration within robotic systems

● Knowledge of testing on edge devices with hardware-in-the-loop (HIL) setups and simulations (Isaac Sim, Gazebo, V-REP, etc.)

Read more
DeepIntent

at DeepIntent

2 candid answers
17 recruiters
Indrajeet Deshmukh
Posted by Indrajeet Deshmukh
Pune
4 - 8 yrs
Best in industry
SQL
skill iconJava
skill iconSpring Boot
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
+1 more

What You’ll Do:


* Establish a formal data practice for the organisation.

* Build & operate scalable and robust data architectures.

* Create pipelines for the self-service introduction and usage of new data.

* Implement DataOps practices.

* Design, develop, and operate data pipelines that support data scientists and machine learning engineers.

* Build simple, highly reliable data storage, ingestion, and transformation solutions that are easy to deploy and manage.

* Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
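As a small, hedged illustration of the pipeline work described above, a bare-bones Airflow DAG might look like the sketch below. The DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal sketch of a daily ingestion DAG; names and task bodies are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Pull the previous day's events from a source system (placeholder).
    print("extracting events for", context["ds"])

def load(**context):
    # Load the transformed data into the warehouse (placeholder).
    print("loading events for", context["ds"])

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```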

 

Who You Are:


* Experience in designing, developing, and operating configurable data pipelines serving high-volume, high-velocity data.

* Experience working with public clouds like GCP/AWS.

* Good understanding of software engineering, DataOps, data architecture, Agile and DevOps methodologies.

* Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.

* Proficient with SQL, Java, Spring Boot, Python or a JVM-based language, and Bash.

* Experience with Apache open-source projects such as Spark, Druid, Beam, or Airflow, and big data databases like BigQuery, ClickHouse, etc.

* Good communication skills with the ability to collaborate with both technical and non-technical people.

* Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious

Read more
Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Pune
7yrs+
Up to ₹35L / yr (varies)
SailPoint
IIQ
IdentityNow
skill iconJava
skill iconXML
+2 more

About the Company:

Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their data life cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to help our customers use their data to make more intelligent business decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.


Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.


Position Summary:

As an Architect, you will be responsible for designing, implementing, and managing SailPoint IdentityIQ (IIQ) solutions to ensure effective identity governance and access management across our enterprise. You will work closely with stakeholders to understand their requirements, develop solutions that align with business objectives, and oversee the deployment and integration of SailPoint technologies.


Key Responsibilities:


Architect and Design Solutions:

= Design and architect SailPoint IIQ solutions that meet business needs and align with IT strategy.

= Develop detailed technical designs, including integration points, workflows, and data models.

Implementation and Integration:

= Lead the implementation and configuration of SailPoint IIQ, including connectors, identity governance, and compliance features.

= Integrate SailPoint with various systems, applications, and directories (e.g., Active Directory, LDAP, databases).

Project Management:

= Manage project timelines, resources, and deliverables to ensure successful deployment of SailPoint IIQ solutions.

= Coordinate with cross-functional teams to address project requirements, risks, and issues.

Customization and Development:

= Customize SailPoint IIQ functionalities, including developing custom connectors, workflows, and rules.

= Develop and maintain documentation related to architecture, configurations, and customizations.

Support and Troubleshooting:

= Provide ongoing support for SailPoint IIQ implementations, including troubleshooting and resolving technical issues.

= Conduct regular reviews and performance tuning to optimize the SailPoint environment.

Compliance and Best Practices:

= Ensure SailPoint IIQ implementations adhere to industry best practices, security policies, and regulatory requirements.

= Stay current with SailPoint updates and advancements, and recommend improvements and enhancements.

Collaboration and Training:

= Collaborate with business and IT stakeholders to understand requirements and translate them into technical solutions.

= Provide training and support to end-users and internal teams on SailPoint IIQ functionalities and best practices.



Education and Experience:

  1. Bachelor’s degree in computer science, Information Technology, or a related field.
  2. Minimum of 5 years of experience with identity and access management (IAM) solutions, with a strong focus on SailPoint IIQ.
  3. Proven experience in designing and implementing SailPoint IIQ solutions in complex environments.

 


Technical Skills:

  1. Expertise in SailPoint IIQ architecture, configuration, and customization.
  2. Strong knowledge of identity governance, compliance, and role-based access control (RBAC).
  3. Experience with integration of SailPoint with various systems and applications.
  4. Proficiency in Java, XML, SQL, and other relevant technologies.


Certification Preferred:

1. SailPoint IIQ Certification (e.g., SailPoint Certified Implementation Engineer).

2. Other relevant IAM or security certifications (e.g., CISSP, CISM).

Read more
Adesso

Adesso

Agency job
via HashRoot by Maheswari M
Kochi (Cochin), Chennai, Pune
3 - 6 yrs
₹4L - ₹24L / yr
Data engineering
skill iconAmazon Web Services (AWS)
Windows Azure
Snowflake
Data Transformation Tool (DBT)
+3 more

We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms like AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.

Responsibilities:

Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool).

Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.

Develop data routes: You design scalable and powerful data management processes.

Analyze data: You derive sound findings from data sets and present them in an understandable way.
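For context, a minimal sketch of the kind of ELT step described under "Customer consulting" is shown below, executed from Python against Snowflake. The account details and the RAW.RAW_ORDERS / ORDERS_DAILY tables are hypothetical; in a dbt project this SELECT would typically live in a model file instead.

```python
# Illustrative only: an ELT-style aggregation run in Snowflake from Python.
# Account, credentials, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    cur.execute("""
        CREATE OR REPLACE TABLE ORDERS_DAILY AS
        SELECT order_date,
               COUNT(*)    AS order_count,
               SUM(amount) AS total_amount
        FROM RAW.RAW_ORDERS
        GROUP BY order_date
    """)
finally:
    cur.close()
    conn.close()
```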

Requirements:

Requirements management and project experience: You successfully implement cloud-based data & analytics projects.

Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.

Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).

SQL know-how: You have a sound, solid knowledge of SQL.

Data management: You are familiar with topics such as master data management and data quality.

Bachelor's degree in computer science, or a related field.

Strong communication and collaboration abilities to work effectively in a team environment.

 

Skills & Requirements

Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.

Read more
OnActive
Mansi Gupta
Posted by Mansi Gupta
Gurugram, Pune, Bengaluru (Bangalore), Chennai, Bhopal, Hyderabad, Jaipur
5 - 8 yrs
₹6L - ₹12L / yr
skill iconPython
Spark
SQL
AWS CloudFormation
skill iconMachine Learning (ML)
+3 more

Level of skills and experience:


5 years of hands-on experience using Python, Spark, and SQL.

Experienced in AWS Cloud usage and management.

Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).

Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.

Experience with orchestrators such as Airflow and Kubeflow.

Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).

Fundamental understanding of Parquet, Delta Lake and other data file formats.

Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.

Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.
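Purely as an illustration of the ML-framework experience listed above, a minimal XGBoost training sketch (using synthetic data in place of real features) could look like this:

```python
# Minimal sketch: training and evaluating an XGBoost classifier.
# The synthetic dataset stands in for real feature data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```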

Read more
Nirmitee.io

at Nirmitee.io

4 recruiters
Gitashri K
Posted by Gitashri K
Pune
5 - 10 yrs
₹4L - ₹14L / yr
MERN Stack
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconPython
skill iconNextJs (Next.js)
+1 more

We’re looking for a Tech Lead with expertise in ReactJS (Next.js), backend technologies, and database management to join our dynamic team.

Key Responsibilities:

  • Lead and mentor a team of 4-6 developers.
  • Architect and deliver innovative, scalable solutions.
  • Ensure seamless performance while handling large volumes of data without system slowdowns.
  • Collaborate with cross-functional teams to meet business goals.

Required Expertise:

  • Frontend: ReactJS (Next.js is a must).
  • Backend: Experience in Node.js, Python, or Java.
  • Databases: SQL (mandatory), MongoDB (nice to have).
  • Caching & Messaging: Redis, Kafka, or Cassandra experience is a plus.
  • Proven experience in system design and architecture.
  • Cloud certification is a bonus.
Read more
DataToBiz Pvt. Ltd.

at DataToBiz Pvt. Ltd.

2 recruiters
Vibhanshi Bakliwal
Posted by Vibhanshi Bakliwal
Pune
8 - 12 yrs
₹15L - ₹18L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more

We are seeking a highly skilled and experienced Power BI Lead / Architect to join our growing team. The ideal candidate will have a strong understanding of data warehousing, data modeling, and business intelligence best practices. This role will be responsible for leading the design, development, and implementation of complex Power BI solutions that provide actionable insights to key stakeholders across the organization.


Location - Pune (Hybrid 3 days)


Responsibilities:


Lead the design, development, and implementation of complex Power BI dashboards, reports, and visualizations.

Develop and maintain data models (star schema, snowflake schema) for optimal data analysis and reporting.

Perform data analysis, data cleansing, and data transformation using SQL and other ETL tools.

Collaborate with business stakeholders to understand their data needs and translate them into effective and insightful reports.

Develop and maintain data pipelines and ETL processes to ensure data accuracy and consistency.

Troubleshoot and resolve technical issues related to Power BI dashboards and reports.

Provide technical guidance and mentorship to junior team members.

Stay abreast of the latest trends and technologies in the Power BI ecosystem.

Ensure data security, governance, and compliance with industry best practices.

Contribute to the development and improvement of the organization's data and analytics strategy.

May lead and mentor a team of junior Power BI developers.


Qualifications:


8-12 years of experience in Business Intelligence and Data Analytics.

Proven expertise in Power BI development, including DAX and advanced data modeling techniques.

Strong SQL skills, including writing complex queries, stored procedures, and views.

Experience with ETL/ELT processes and tools.

Experience with data warehousing concepts and methodologies.

Excellent analytical, problem-solving, and communication skills.

Strong teamwork and collaboration skills.

Ability to work independently and proactively.

Bachelor's degree in Computer Science, Information Systems, or a related field preferred.

Read more
Intellikart Ventures LLP
Prajwal Shinde
Posted by Prajwal Shinde
Pune
2 - 5 yrs
₹9L - ₹15L / yr
PowerBI
SQL
ETL
snowflake
Apache Kafka
+1 more

Experience: 4+ years.

Location: Vadodara & Pune

Skill Set: Snowflake, Power BI, ETL, SQL, Data Pipelines

What you'll be doing:

  • Develop, implement, and manage scalable Snowflake data warehouse solutions using advanced features such as materialized views, task automation, and clustering.
  • Design and build real-time data pipelines from Kafka and other sources into Snowflake using Kafka Connect, Snowpipe, or custom solutions for streaming data ingestion.
  • Create and optimize ETL/ELT workflows using tools like DBT, Airflow, or cloud-native solutions to ensure efficient data processing and transformation.
  • Tune query performance, warehouse sizing, and pipeline efficiency by utilizing Snowflake's Query Profiling, Resource Monitors, and other diagnostic tools.
  • Work closely with architects, data analysts, and data scientists to translate complex business requirements into scalable technical solutions.
  • Enforce data governance and security standards, including data masking, encryption, and RBAC, to meet organizational compliance requirements.
  • Continuously monitor data pipelines, address performance bottlenecks, and troubleshoot issues using monitoring frameworks such as Prometheus, Grafana, or Snowflake-native tools.
  • Provide technical leadership, guidance, and code reviews for junior engineers, ensuring best practices in Snowflake and Kafka development are followed.
  • Research emerging tools, frameworks, and methodologies in data engineering and integrate relevant technologies into the data stack.


What you need:

Basic Skills:


  • 3+ years of hands-on experience with Snowflake data platform, including data modeling, performance tuning, and optimization.
  • Strong experience with Apache Kafka for stream processing and real-time data integration.
  • Proficiency in SQL and ETL/ELT processes.
  • Solid understanding of cloud platforms such as AWS, Azure, or Google Cloud.
  • Experience with scripting languages like Python, Shell, or similar for automation and data integration tasks.
  • Familiarity with tools like dbt, Airflow, or similar orchestration platforms.
  • Knowledge of data governance, security, and compliance best practices.
  • Strong analytical and problem-solving skills with the ability to troubleshoot complex data issues.
  • Ability to work in a collaborative team environment and communicate effectively with cross-functional teams


Responsibilities:

  • Design, develop, and maintain Snowflake data warehouse solutions, leveraging advanced Snowflake features like clustering, partitioning, materialized views, and time travel to optimize performance, scalability, and data reliability.
  • Architect and optimize ETL/ELT pipelines using tools such as Apache Airflow, DBT, or custom scripts, to ingest, transform, and load data into Snowflake from sources like Apache Kafka and other streaming/batch platforms.
  • Work in collaboration with data architects, analysts, and data scientists to gather and translate complex business requirements into robust, scalable technical designs and implementations.
  • Design and implement Apache Kafka-based real-time messaging systems to efficiently stream structured and semi-structured data into Snowflake, using Kafka Connect, KSQL, and Snowpipe for real-time ingestion.
  • Monitor and resolve performance bottlenecks in queries, pipelines, and warehouse configurations using tools like Query Profile, Resource Monitors, and Task Performance Views.
  • Implement automated data validation frameworks to ensure high-quality, reliable data throughout the ingestion and transformation lifecycle.
  • Pipeline Monitoring and Optimization: Deploy and maintain pipeline monitoring solutions using Prometheus, Grafana, or cloud-native tools, ensuring efficient data flow, scalability, and cost-effective operations.
  • Implement and enforce data governance policies, including role-based access control (RBAC), data masking, and auditing to meet compliance standards and safeguard sensitive information.
  • Provide hands-on technical mentorship to junior data engineers, ensuring adherence to coding standards, design principles, and best practices in Snowflake, Kafka, and cloud data engineering.
  • Stay current with advancements in Snowflake, Kafka, cloud services (AWS, Azure, GCP), and data engineering trends, and proactively apply new tools and methodologies to enhance the data platform. 
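As a hedged illustration of the Kafka-based ingestion described above, a minimal producer sketch is shown below. The broker address, topic name, and event fields are hypothetical; downstream ingestion into Snowflake via Kafka Connect/Snowpipe is assumed and not shown.

```python
# Minimal sketch: publishing JSON events to a Kafka topic that a downstream
# connector (e.g., Kafka Connect + Snowpipe) could ingest into Snowflake.
# Broker address, topic name, and event fields are hypothetical.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface errors.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}]")

event = {"order_id": 123, "amount": 42.5, "ts": "2024-01-01T00:00:00Z"}
producer.produce(
    "orders_raw",
    value=json.dumps(event).encode("utf-8"),
    callback=delivery_report,
)
producer.flush()   # block until all queued messages are delivered
```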


Read more
KPMG

at KPMG

Agency job
via Pluginlive by Harsha Saggi
Bengaluru (Bangalore), Delhi, Gurugram, Pune
5 - 15 yrs
₹1L - ₹60L / yr
SQL
skill iconAmazon Web Services (AWS)

About the company

KPMG International Limited, commonly known as KPMG, is one of the largest professional services networks in the world, recognized as one of the "Big Four" accounting firms alongside Deloitte, PricewaterhouseCoopers (PwC), and Ernst & Young (EY). KPMG provides a comprehensive range of professional services primarily focused on three core areas: Audit and Assurance, Tax Services, and Advisory Services. Their Audit and Assurance services include financial statement audits, regulatory audits, and other assurance services. The Tax Services cover various aspects such as corporate tax, indirect tax, international tax, and transfer pricing. Meanwhile, their Advisory Services encompass management consulting, risk consulting, deal advisory, and other related services.


Apply through this link- https://forms.gle/qmX9T7VrjySeWYa37


Job Description

Position: Data Engineer


Experience: 5+ years of relevant experience


Location: WFO (3 days working) – Pune (Kharadi), NCR (Gurgaon), Bangalore


Employment Type: 3-month contract; can be extended based on performance and future requirements


Skills Required:

 • Proficiency in SQL, AWS, and data integration tools like Airflow or equivalent. Knowledge of tools like JIRA, GitHub, etc.

 • The Data Engineer will work on data management activities and orchestration processes.

Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Pune
4 - 8 yrs
₹1L - ₹12L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more

Job Description :

Job Title : Data Engineer

Location : Pune (Hybrid Work Model)

Experience Required : 4 to 8 Years


Role Overview :

We are seeking talented and driven Data Engineers to join our team in Pune. The ideal candidate will have a strong background in data engineering with expertise in Python, PySpark, and SQL. You will be responsible for designing, building, and maintaining scalable data pipelines and systems that empower our business intelligence and analytics initiatives.


Key Responsibilities:

  • Develop, optimize, and maintain ETL pipelines and data workflows.
  • Design and implement scalable data solutions using Python, PySpark, and SQL.
  • Collaborate with cross-functional teams to gather and analyze data requirements.
  • Ensure data quality, integrity, and security throughout the data lifecycle.
  • Monitor and troubleshoot data pipelines to ensure reliability and performance.
  • Work on hybrid data environments involving on-premise and cloud-based systems.
  • Assist in the deployment and maintenance of big data solutions.
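For illustration, a minimal PySpark ETL step of the kind described in these responsibilities might look like the sketch below; the input path, columns, and output location are hypothetical.

```python
# Minimal sketch of an ETL step in PySpark; paths, columns, and the output
# location are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files from a landing zone.
raw = spark.read.option("header", True).csv("/data/landing/orders/*.csv")

# Transform: type-cast, filter out bad rows, and aggregate per day.
daily = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .groupBy("order_date")
       .agg(F.count("*").alias("order_count"),
            F.sum("amount").alias("total_amount"))
)

# Load: write the curated result as partitioned Parquet.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/orders_daily"
)

spark.stop()
```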

Required Skills and Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or related field.
  • 4 to 8 Years of experience in Data Engineering or related roles.
  • Proficiency in Python and PySpark for data processing and analysis.
  • Strong SQL skills with experience in writing complex queries and optimizing performance.
  • Familiarity with data pipeline tools and frameworks.
  • Knowledge of cloud platforms such as AWS, Azure, or GCP is a plus.
  • Excellent problem-solving and analytical skills.
  • Strong communication and teamwork abilities.

Preferred Qualifications:

  • Experience with big data technologies like Hadoop, Hive, or Spark.
  • Familiarity with data visualization tools and techniques.
  • Knowledge of CI/CD pipelines and DevOps practices in a data engineering context.

Work Model:

  • This position follows a hybrid work model, with candidates expected to work from the Pune office as per business needs.

Why Join Us?

  • Opportunity to work with cutting-edge technologies.
  • Collaborative and innovative work environment.
  • Competitive compensation and benefits.
  • Clear career progression and growth opportunities.


Read more
LoanTap Financial Technologies
Agency job
via Headhunter by Abhishek Shah
Pune
3 - 8 yrs
₹4L - ₹18L / yr
skill iconHTML/CSS
skill iconJavascript
skill iconAngular (2+)
skill iconAngularJS (1.x)
ASP.NET
+5 more
  • 8 - 12 years of professional experience in .NET Framework 2.0/3.5/4.0/4.5 (C#, ASP.NET) and ADO.NET. Strong knowledge of software development, debugging, and deployment tools. Knowledge and experience of SQL Server (any version).
  • Knowledge of versioning tools. Strong experience working with HTML, CSS, JavaScript, and jQuery.
  • Should have good problem-solving and analytical skills, with the ability to work independently and as part of a team, and to handle individual as well as team projects.
  • Understanding of MVC, WCF, WPF, and any mobile technology is beneficial. Ability to learn new technologies in a short period.


Read more
ScatterPie Analytics
Akshada Desai
Posted by Akshada Desai
Pune
3 - 5 yrs
₹3L - ₹15L / yr
Microsoft BizTalk Server
SQL
  • BizTalk Development:
  • Design, develop, and deploy BizTalk solutions for integrating business processes.
  • Build custom BizTalk maps, orchestrations, pipelines, and adapters to meet integration requirements.
  • Troubleshoot and resolve issues with BizTalk applications, ensuring minimal downtime.
  • Implement best practices for BizTalk development and integration.
  • Create and maintain documentation related to BizTalk applications and processes.
  • SQL Development & Management:
  • Develop, optimize, and maintain SQL Server databases, including stored procedures, queries, and triggers.
  • Write efficient SQL queries to extract, transform, and load data.
  • Perform database tuning and optimization for performance improvements.
  • Ensure database security, integrity, and backup.
  • Conduct routine maintenance tasks such as patching and updates for SQL databases.
  • Integration & Collaboration:
  • Collaborate with business analysts and other developers to understand business requirements and integrate them into solutions.
  • Participate in design and code reviews, ensuring quality and standards are adhered to.
  • Assist in the deployment of BizTalk solutions and SQL database updates to production environments.
  • Support Existing Systems:
  • Provide ongoing support and maintenance for existing BizTalk applications, SQL databases, and integration processes.
  • Troubleshoot, monitor, and optimize existing BizTalk integration workflows, databases, and related services.
  • Ensure smooth operations by resolving issues and improving the performance of legacy systems.
  • Monitoring & Support:
  • Monitor the performance of BizTalk applications and SQL databases to ensure optimal performance.
  • Provide ongoing support for BizTalk integrations and SQL database issues.
  • Troubleshoot and resolve technical issues related to integrations and SQL queries.
  • Provide 2nd/3rd line support for any issues that arise in the production environment.

Required Skills & Qualifications:

  • Technical Skills:
  • Strong experience with BizTalk Server (preferably BizTalk 2016 or later).
  • Expertise in creating and managing BizTalk orchestrations, maps, and pipelines.
  • Proficient in SQL Server (SQL Server 2012/2014/2016/2017/2019).
  • Experience with T-SQL, stored procedures, triggers, views, and indexes.
  • Familiarity with SQL Server Management Studio (SSMS) and SQL Server Integration Services (SSIS).
  • Knowledge of web services (SOAP, REST) and messaging formats (XML, JSON).
  • Experience with BizTalk adapters and message brokers.
  • Analytical and Troubleshooting Skills:
  • Strong problem-solving skills with the ability to debug and troubleshoot BizTalk and SQL issues.
  • Ability to quickly understand new systems and integrations and provide efficient solutions.


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Mumbai, Pune
4 - 8 yrs
Best in industry
Market Research
SQL
Equity derivatives
SoapUI
skill iconPostman
+4 more
  • 4-8 years of experience in Functional testing with a good foundation in technical expertise
  • Experience in the Capital Markets domain is MUST
  • Exposure to API testing tools like SoapUI and Postman
  • Well versed with SQL
  • Hands on experience in debugging issues using Unix commands
  • Basic understanding of XML and JSON structures
  • Knowledge of FitNesse is good to have
  • Should be an early joiner.


Read more
Cornertree

at Cornertree

1 recruiter
Deepesh Shrimal
Posted by Deepesh Shrimal
Pune, Mumbai
3 - 10 yrs
₹5L - ₹45L / yr
Duck Creek
data insight
SQL Server Reporting Services (SSRS)
SQL
ETL
  • Bachelor's degree or higher (or foreign equivalent) required, preferably in a related area.
  • At least 5 years of experience in Duck Creek Data Insights as a Technical Architect/Senior Developer.
  • Strong technical knowledge of SQL databases and MSBI.
  • Should have strong hands-on knowledge of the Duck Creek Insights product, SQL Server/DB-level configuration, T-SQL, XSL/XSLT, MSBI, etc.
  • Well versed with Duck Creek Extract Mapper Architecture
  • Strong understanding of Data Modelling, Data Warehousing, Data Marts, Business Intelligence with ability to solve business problems
  • Strong understanding of ETL and EDW toolsets on the Duck Creek Data Insights
  • Strong knowledge on Duck Creek Insight product overall architecture flow, Data hub, Extract mapper etc
  • Understanding of data related to business application areas: policy, billing, and claims business solutions
  • Minimum 4 to 7 year working experience on Duck Creek Insights product
  • Strong Technical knowledge on SQL databases, MSBI
  • Preferable having experience in Insurance domain
  • Preferable experience in Duck Creek Data Insights
  • Experience specific to Duck Creek would be an added advantage
  • Strong knowledge of database structure systems and data mining
  • Excellent organisational and analytical abilities
  • Outstanding problem solver


Read more
Pune
4 - 7 yrs
₹18L - ₹30L / yr
Large Language Models (LLM)
skill iconPython
skill iconDocker
Retrieval Augmented Generation (RAG)
SQL
+7 more

Job Description

Phonologies is seeking a Senior Data Engineer to lead data engineering efforts for developing and deploying generative AI and large language models (LLMs). The ideal candidate will excel in building data pipelines, fine-tuning models, and optimizing infrastructure to support scalable AI systems for enterprise applications.


Role & Responsibilities

  • Data Pipeline Management: Design and manage pipelines for AI model training, ensuring efficient data ingestion, storage, and transformation for real-time deployment.
  • LLM Fine-Tuning & Model Lifecycle: Fine-tune LLMs on domain-specific data, and oversee the model lifecycle using tools like MLFlow and Weights & Biases.
  • Scalable Infrastructure: Optimize infrastructure for large-scale data processing and real-time LLM performance, leveraging containerization and orchestration in hybrid/cloud environments.
  • Data Management: Ensure data quality, security, and compliance, with workflows for handling sensitive and proprietary datasets.
  • Continuous Improvement & MLOps: Apply MLOps/LLMOps practices for automation, versioning, and lifecycle management, while refining tools and processes for scalability and performance.
  • Collaboration: Work with data scientists, engineers, and product teams to integrate AI solutions and communicate technical capabilities to business stakeholders.


Preferred Candidate Profile

  • Experience: 5+ years in data engineering, focusing on AI/ML infrastructure, LLM fine-tuning, and deployment.
  • Technical Skills: Advanced proficiency in Python, SQL, and distributed data tools.
  • Model Management: Hands-on experience with MLFlow, Weights & Biases, and model lifecycle management.
  • AI & NLP Expertise: Familiarity with LLMs (e.g., GPT, BERT) and NLP frameworks like Hugging Face Transformers.
  • Cloud & Infrastructure: Strong skills with AWS, Azure, Google Cloud, Docker, and Kubernetes.
  • MLOps/LLMOps: Expertise in versioning, CI/CD, and automating AI workflows.
  • Collaboration & Communication: Proven ability to work with cross-functional teams and explain technical concepts to non-technical stakeholders.
  • Education: Degree in Computer Science, Data Engineering, or related field.

Perks and Benefits

  • Competitive Compensation: INR 20L to 30L per year.
  • Innovative Work Environment for Personal Growth: Work with cutting-edge AI and data engineering tools in a collaborative setting, for continuous learning in data engineering and AI.


Read more
Siddhatech Software

at Siddhatech Software

2 recruiters
Vaidehi Ghangurde
Posted by Vaidehi Ghangurde
Pune
1 - 3 yrs
₹3L - ₹6L / yr
skill iconAndroid Development
skill iconJava
skill iconKotlin
SDK
skill iconGit
+2 more
  • Design and Build Advanced Applications for the Android Platform
  • Collaborate with Cross-Functional Teams to Define, Design and Ship New Features
  • Troubleshoot and Fix Bugs in New and Existing Applications
  • Continuously Discover, Evaluate and Implement New Development Tools
  • Work With Outside Data Sources and APIs
  • Knowledge of Android SDK, Java programming, Kotlin, Jetpack Compose, Realm
  • Version Control, Clean Architecture
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Pune, Mumbai
3 - 6 yrs
Best in industry
API
Functional testing
SQL
Linux/Unix
Investment banking

Job Description:

- 3+ years of experience in Functional testing with a good foundation in technical expertise

- Experience in Capital Markets/Investment Banking domain is MUST

- Exposure to API testing tools like SoapUI and Postman

- Well versed with SQL

- Hands on experience in debugging issues using Unix commands

- Basic understanding of XML and JSON structures

- Knowledge of FitNesse is good to have


Location:

Pune/Mumbai


About Wissen Technology:

 

·       The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.

·       Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.

·       Our workforce consists of 550+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.

·       Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.

·       Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.

·       We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.

·       Wissen Technology has been certified as a Great Place to Work®.

·       Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.

·       Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.

·       We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy. They include Morgan Stanley, MSCI, StateStreet, Flipkart, Swiggy, Trafigura, GE to name a few.

 

 

Website : www.wissen.com 

Read more
TVARIT GmbH

at TVARIT GmbH

2 candid answers
Shivani Kawade
Posted by Shivani Kawade
Pune
4 - 6 yrs
₹15L - ₹25L / yr
PyTorch
skill iconPython
Scikit-Learn
NumPy
pandas
+2 more

Who are we looking for?  


We are looking for a Senior Data Scientist, who will design and develop data-driven solutions using state-of-the-art methods. You should be someone with strong and proven experience in working on data-driven solutions. If you feel you’re enthusiastic about transforming business requirements into insightful data-driven solutions, you are welcome to join our fast-growing team to unlock your best potential.  

 

Job Summary 

  • Supporting company mission by understanding complex business problems through data-driven solutions. 
  • Designing and developing machine learning pipelines in Python and deploying them in AWS/GCP, ... 
  • Developing end-to-end ML production-ready solutions and visualizations. 
  • Analyse large sets of time-series industrial data from various sources, such as production systems, sensors, and databases to draw actionable insights and present them via custom dashboards. 
  • Communicating complex technical concepts and findings to non-technical stakeholders of the projects 
  • Implementing the prototypes using suitable statistical tools and artificial intelligence algorithms. 
  • Preparing high-quality research papers and participating in conferences to present and report experimental results and research findings. 
  • Carrying out research collaborating with internal and external teams and facilitating review of ML systems for innovative ideas to prototype new models. 
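Purely for illustration, a minimal scikit-learn pipeline of the kind used in such data-driven solutions might look like the sketch below; synthetic data stands in for real time-series features.

```python
# Minimal sketch: a scikit-learn pipeline for a regression task on tabular
# sensor-style data. The synthetic data is a stand-in for real features.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                 # e.g. lagged sensor readings
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestRegressor(n_estimators=200, random_state=0)),
])
pipeline.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, pipeline.predict(X_test)))
```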

 

Qualification and experience 

  • B.Tech/Masters/Ph.D. in computer science, electrical engineering, mathematics, data science, and related fields. 
  • 5+ years of professional experience in the field of machine learning, and data science. 
  • Experience with large-scale Time-series data-based production code development is a plus. 

 

Skills and competencies 

  • Familiarity with Docker and ML libraries like PyTorch, sklearn, and pandas, as well as SQL and Git, is a must.
  • Ability to work on multiple projects. Must have strong design and implementation skills. 
  • Ability to conduct research based on complex business problems. 
  • Strong presentation skills and the ability to collaborate in a multi-disciplinary team. 
  • Must have programming experience in Python. 
  • Excellent English communication skills, both written and verbal. 


Benefits and Perks

  • Culture of innovation, creativity, learning, and even failure, we believe in bringing out the best in you. 
  • Progressive leave policy for effective work-life balance. 
  • Get mentored by highly qualified internal resource groups and opportunity to avail industry-driven mentorship program, as we believe in empowering people.  
  • Multicultural peer groups and supportive workplace policies.  
  • Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work. 


 Hiring Process 

  • Call with Talent Acquisition Team: After application screening, a first-level screening with the talent acquisition team to understand the candidate's goals and alignment with the job requirements. 
  • First Round: Technical round 1 to gauge your domain knowledge and functional expertise. 
  • Second Round: In-depth technical round and discussion about the departmental goals, your role, and expectations.
  • Final HR Round: Culture fit round and compensation discussions.
  • Offer: Congratulations you made it!  


If this position sparked your interest, apply now to initiate the screening process.

Read more
TVARIT GmbH

at TVARIT GmbH

2 candid answers
Shivani Kawade
Posted by Shivani Kawade
Remote, Pune
2 - 4 yrs
₹8L - ₹20L / yr
skill iconPython
PySpark
ETL
databricks
Azure
+6 more

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.

 

 

We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English. 

 

 

We are seeking a skilled and motivated Data Engineer from the manufacturing Industry with over two years of experience to join our team. As a data engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives. 

 

 

Skills Required 

  • Experience in the manufacturing industry (metal industry is a plus)  
  • 2+ years of experience as a Data Engineer 
  • Experience in data cleaning & structuring and data manipulation 
  • ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines. 
  • Python: Strong proficiency in Python programming for data manipulation, transformation, and automation. 
  • Experience in SQL and data structures  
  • Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and NoSQL databases.
  • Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform. 
  • Proficient in data management and data governance  
  • Strong analytical and problem-solving skills. 
  • Excellent communication and teamwork abilities. 
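As a hedged illustration of the data cleaning & structuring work listed above, a minimal pandas sketch could look like this; the file paths and column names are hypothetical.

```python
# Minimal sketch: basic data cleaning & structuring with pandas.
# The CSV path and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("/data/raw/furnace_readings.csv")

df = (
    df.drop_duplicates()
      .dropna(subset=["temperature"])                      # drop rows missing the key signal
      .assign(timestamp=lambda d: pd.to_datetime(d["timestamp"]))
      .sort_values("timestamp")
)

# Resample to hourly means to give downstream models a regular time grid.
hourly = (
    df.set_index("timestamp")
      .resample("1h")["temperature"]
      .mean()
      .reset_index()
)
hourly.to_parquet("/data/clean/furnace_hourly.parquet", index=False)
```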

 


Nice To Have 

  • Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database). 
  • Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.


Read more
TVARIT GmbH

at TVARIT GmbH

2 candid answers
Shivani Kawade
Posted by Shivani Kawade
Remote, Pune
2 - 6 yrs
₹8L - ₹25L / yr
SQL Azure
databricks
skill iconPython
SQL
ETL
+9 more

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.


We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.


We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department's data infrastructure, including developing a data model, integrating large amounts of data from different systems, building and enhancing a data lakehouse and the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.


Skills Required:


  • Experience in the manufacturing industry (metal industry is a plus)
  • 4+ years of experience as a Data Engineer
  • Experience in data cleaning & structuring and data manipulation
  • Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
  • ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
  • Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
  • Experience in SQL and data structures
  • Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and NoSQL databases.
  • Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
  • Proficient in data management and data governance
  • Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
  • Strong analytical and problem-solving skills.
  • Excellent communication and teamwork abilities.


Nice To Have:

  • Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
  • Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
  • Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).

Benefits and Perks:

  • A culture that fosters innovation, creativity, continuous learning, and resilience
  • Progressive leave policy promoting work-life balance
  • Mentorship opportunities with highly qualified internal resources and industry-driven programs
  • Multicultural peer groups and supportive workplace policies
  • Annual workcation program allowing you to work from various scenic locations
  • Experience the unique environment of a dynamic start-up


Why should you join TVARIT?


Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.


If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Sukanya Mohan
Posted by Sukanya Mohan
Pune, Bengaluru (Bangalore)
5 - 10 yrs
Best in industry
skill iconAmazon Web Services (AWS)
EMR
skill iconPython
GLUE
SQL
+1 more

Greetings! Wissen Technology is hiring for the position of Data Engineer.

Please find the job description below for your reference:


JD

  • Design, develop, and maintain data pipelines on AWS EMR (Elastic MapReduce) to support data processing and analytics.
  • Implement data ingestion processes from various sources including APIs, databases, and flat files.
  • Optimize and tune big data workflows for performance and scalability.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
  • Manage and monitor EMR clusters, ensuring high availability and reliability.
  • Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and store data in data lakes and data warehouses.
  • Implement data security best practices to ensure data is protected and compliant with relevant regulations.
  • Create and maintain technical documentation related to data pipelines, workflows, and infrastructure.
  • Troubleshoot and resolve issues related to data processing and EMR cluster performance.
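For illustration, a minimal sketch of EMR cluster monitoring with boto3 is shown below; the region is a placeholder and AWS credentials are assumed to be configured in the environment.

```python
# Minimal sketch: listing active EMR clusters and their status with boto3.
# Assumes AWS credentials are configured; the region is a hypothetical choice.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.list_clusters(ClusterStates=["STARTING", "RUNNING", "WAITING"])
for cluster in response["Clusters"]:
    # Print cluster id, name, and current lifecycle state.
    print(cluster["Id"], cluster["Name"], cluster["Status"]["State"])
```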

 

 

Qualifications:

 

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • 5+ years of experience in data engineering, with a focus on big data technologies.
  • Strong experience with AWS services, particularly EMR, S3, Redshift, Lambda, and Glue.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Experience with big data frameworks and tools such as Hadoop, Spark, Hive, and Pig.
  • Solid understanding of data modeling, ETL processes, and data warehousing concepts.
  • Experience with SQL and NoSQL databases.
  • Familiarity with CI/CD pipelines and version control systems (e.g., Git).
  • Strong problem-solving skills and the ability to work independently and collaboratively in a team environment
Read more
IntraEdge

at IntraEdge

1 recruiter
Karishma Shingote
Posted by Karishma Shingote
Pune
5 - 11 yrs
₹5L - ₹15L / yr
SQL
snowflake
Enterprise Data Warehouse (EDW)
skill iconPython
PySpark

Sr. Data Engineer (Data Warehouse-Snowflake)

Experience: 5+yrs

Location: Pune (Hybrid)


As a Senior Data Engineer with Snowflake expertise, you are a subject matter expert who is curious and an innovative thinker, and who mentors young professionals. You are a key person in converting the vision and data strategy into data solutions and delivering them. With your knowledge you will help create data-driven thinking within the organization, not just within data teams, but also in the wider stakeholder community.


Skills Preferred

  • Advanced written, verbal, and analytic skills, and demonstrated ability to influence and facilitate sustained change. Ability to convey information clearly and concisely to all levels of staff and management about programs, services, best practices, strategies, and organizational mission and values.
  • Proven ability to focus on priorities, strategies, and vision.
  • Very Good understanding in Data Foundation initiatives, like Data Modelling, Data Quality Management, Data Governance, Data Maturity Assessments and Data Strategy in support of the key business stakeholders.
  • Actively deliver the roll-out and embedding of Data Foundation initiatives in support of the key business programs advising on the technology and using leading market standard tools.
  • Coordinate the change management process, incident management and problem management process.
  • Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
  • Drive implementation efficiency and effectiveness across the pilots and future projects to minimize cost, increase speed of implementation and maximize value delivery


Knowledge Preferred

  • Extensive knowledge of and hands-on experience with Snowflake and its different components, such as user/group management, data store/warehouse management, external stages/tables, working with semi-structured data, Snowpipe, etc.
  • Implement and manage CI/CD for migrating and deploying Snowflake code to higher environments.
  • Proven experience with Snowflake access control and authentication, data security, data sharing, the VS Code extension for Snowflake, replication and failover, and SQL optimization; the analytical ability to quickly troubleshoot and debug development and production issues is key to success in this role.
  • Proven technology champion in working with relational and data warehouse databases and query authoring (SQL), as well as working familiarity with a variety of databases.
  • Highly experienced in building and optimizing complex queries. Good with manipulating, processing, and extracting value from large, disconnected datasets.
  • Your experience in handling big data sets and big data technologies will be an asset.
  • Proven champion with in-depth knowledge of at least one of the scripting languages: Python, SQL, PySpark.


Primary responsibilities

  • You will be an asset in our team bringing deep technical skills and capabilities to become a key part of projects defining the data journey in our company, keen to engage, network and innovate in collaboration with company wide teams.
  • Collaborate with the data and analytics team to develop and maintain a data model and data governance infrastructure using a range of different storage technologies that enables optimal data storage and sharing using advanced methods.
  • Support the development of processes and standards for data mining, data modeling and data protection.
  • Design and implement continuous process improvements for automating manual processes and optimizing data delivery.
  • Assess and report on the unique data needs of key stakeholders and troubleshoot any data-related technical issues through to resolution.
  • Work to improve data models that support business intelligence tools, improve data accessibility and foster data-driven decision making.
  • Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
  • Manage and lead technical design and development activities for implementation of large-scale data solutions in Snowflake to support multiple use cases (transformation, reporting and analytics, data monetization, etc.).
  • Translate advanced business data, integration and analytics problems into technical approaches that yield actionable recommendations, across multiple, diverse domains; communicate results and educate others through design and build of insightful presentations.
  • Exhibit strong knowledge of the Snowflake ecosystem and can clearly articulate the value proposition of cloud modernization/transformation to a wide range of stakeholders.


Relevant work experience

Bachelor's degree in a Science, Technology, Engineering, Mathematics, or Computer Science discipline (or equivalent) with 7+ years of experience in enterprise-wide data warehousing, governance, policies, procedures, and implementation.

Aptitude for working with data, interpreting results, business intelligence and analytic best practices.


Business understanding

Good knowledge and understanding of Consumer and industrial products sector and IoT. 

Good functional understanding of solutions supporting business processes.


Skill Must have

  • Snowflake 5+ years
  • Overall different Data warehousing techs 5+ years
  • SQL 5+ years
  • Data warehouse designing experience 3+ years
  • Experience with cloud and on-prem hybrid models in data architecture
  • Knowledge of Data Governance and strong understanding of data lineage and data quality
  • Programming & Scripting: Python, PySpark
  • Database technologies such as Traditional RDBMS (MS SQL Server, Oracle, MySQL, PostgreSQL)


Nice to have

  • Demonstrated experience in modern enterprise data integration platforms such as Informatica
  • AWS cloud services: S3, Lambda, Glue and Kinesis and API Gateway, EC2, EMR, RDS, Redshift and Kinesis
  • Good understanding of Data Architecture approaches
  • Experience in designing and building streaming data ingestion, analysis and processing pipelines using Kafka, Kafka Streams, Spark Streaming, Stream sets and similar cloud native technologies.
  • Experience with implementation of operations concerns for a data platform such as monitoring, security, and scalability
  • Experience working in DevOps, Agile, Scrum, Continuous Delivery and/or Rapid Application Development environments
  • Building mock and proof-of-concepts across different capabilities/tool sets exposure
  • Experience working with structured, semi-structured, and unstructured data, extracting information, and identifying linkages across disparate data sets


Read more