
50+ SQL Jobs in Pune | SQL Job openings in Pune

Apply to 50+ SQL Jobs in Pune on CutShort.io. Explore the latest SQL Job opportunities across top companies like Google, Amazon & Adobe.

Acuity Knowledge Partners
Agency job
via HashRoot by Shirin Shahana
Pune, Bengaluru (Bangalore), Gurugram
4 - 7 yrs
₹16L - ₹17L / yr
BMC Control-M
SQL
Powershell
MFT

Job Title: BMC Control-M Specialist

📍 Location: Bangalore / Pune / Gurgaon (Onsite)

🕒 Experience: 4+ Years

🚀 Notice Period: Immediate joiners preferred

💼 Employment Type: Full-Time

About the Role

We’re hiring a BMC Control-M Specialist to join our team supporting the Momentum platform. You’ll be responsible for building and managing job schedules across enterprise systems to ensure smooth, auditable, and optimized batch processing.

What You’ll Do

  • Design and implement Control-M jobs and workflows
  • Manage job dependencies, conditions, and calendars
  • Monitor, troubleshoot, and optimize job performance
  • Work with infra and app teams to resolve failures
  • Contribute to SLA tracking, runbooks, and change management

What You Should Have

✅ 4+ years of experience with BMC Control-M

✅ Strong skills in SQL and PowerShell

✅ Solid understanding of job flow design, batch scheduling, alerts, and dependencies

✅ Experience supporting enterprise platforms (e.g., finance, ERP, regulatory systems)

✅ Hands-on with MFT environments

Bonus Skills

➕ Boomi ETL

➕ Bash scripting

➕ Experience working in a client-facing role

Why Join Us?

  • Mission-critical, high-impact projects
  • Collaborative and fast-paced work culture
  • Opportunity to work with modern enterprise tech stacks
  • Immediate onboarding for quick starters!


Read more
Pune
4 - 6 yrs
₹10L - ₹15L / yr
Angular (2+)
.NET
SQL
Relational Database (RDBMS)

.NET + Angular Full Stack Developer (4–5 Years Experience)

Location: Pune (Onsite)

Experience Required: 4 to 5 years

Communication: Fluent English (verbal & written)

Client-Facing: Must have prior experience working directly with clients

Technology: .NET, Angular

Job Overview

We are seeking a skilled and experienced Full Stack Developer with strong expertise in .NET (C#) and Angular to join our dynamic team in Pune. The ideal candidate will have hands-on experience across the full development stack, a strong understanding of relational databases and SQL, and the ability to work independently with clients. Experience in microservices architecture is a plus.

Key Responsibilities

  • Design, develop, and maintain modern web applications using .NET Core / .NET Framework and Angular

  • Write clean, scalable, and maintainable code for both backend and frontend components

  • Interact directly with clients for requirement gathering, demos, sprint planning, and issue resolution

  • Work closely with designers, QA, and other developers to ensure high-quality product delivery

  • Perform regular code reviews, ensure adherence to coding standards, and mentor junior developers if needed

  • Troubleshoot and debug application issues and provide timely solutions

  • Participate in discussions on architecture, design patterns, and technical best practices

Must-Have Skills

✅ Strong hands-on experience with .NET Core / .NET Framework (Web API, MVC)

✅ Proficiency in Angular (Component-based architecture, RxJS, State Management)

✅ Solid understanding of RDBMS and SQL (preferably with SQL Server)

✅ Familiarity with Entity Framework or Dapper

✅ Strong knowledge of RESTful API design and integration

✅ Version control using Git

✅ Excellent verbal and written communication skills

✅ Ability to work in a client-facing role and handle discussions independently

Good-to-Have / Optional Skills

Understanding or experience in Microservices Architecture

Exposure to CI/CD pipelines, unit testing frameworks, and cloud environments (e.g., Azure or AWS)


Read more
Wissen Technology
Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹26L / yr
Python
PySpark
Django
Flask
RESTful APIs
+3 more

Job Title – Python Developer

Experience – 4 to 6 years

Location – Pune / Mumbai / Bengaluru


Job Description:

Requirements:

  • Proven experience as a Python Developer
  • Strong knowledge of core Python and PySpark concepts
  • Experience with web frameworks such as Django or Flask
  • Good exposure to any cloud platform (GCP preferred)
  • CI/CD exposure required
  • Solid understanding of RESTful APIs and how to build them (a brief sketch follows this list)
  • Experience working with databases like Oracle DB and MySQL
  • Ability to write efficient SQL queries and optimize database performance
  • Strong problem-solving skills and attention to detail
  • Strong SQL programming (stored procedures, functions)
  • Excellent communication and interpersonal skills
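
For illustration only (not part of the original posting): a minimal sketch of the REST API plus SQL skills listed above, assuming Flask and a throwaway SQLite database as stand-ins for the Oracle/MySQL systems mentioned; the route, table, and column names are hypothetical.

    # Minimal Flask + SQL sketch (illustrative only; names are hypothetical).
    import sqlite3

    from flask import Flask, jsonify

    app = Flask(__name__)

    def get_connection():
        # A throwaway SQLite file stands in for the production database.
        conn = sqlite3.connect("orders.db")
        conn.row_factory = sqlite3.Row
        return conn

    @app.route("/orders/<int:order_id>", methods=["GET"])
    def get_order(order_id):
        conn = get_connection()
        try:
            # Parameterized query to avoid SQL injection.
            row = conn.execute(
                "SELECT id, customer, amount FROM orders WHERE id = ?",
                (order_id,),
            ).fetchone()
        finally:
            conn.close()
        if row is None:
            return jsonify({"error": "not found"}), 404
        return jsonify(dict(row))

    if __name__ == "__main__":
        app.run(debug=True)

A real service would add connection pooling, input validation, and a query layer appropriate to the production database.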

Roles and Responsibilities

  • Design, develop, and maintain data pipelines and ETL processes using PySpark
  • Work closely with data scientists and analysts to provide them with clean, structured data.
  • Optimize data storage and retrieval for performance and scalability.
  • Collaborate with cross-functional teams to gather data requirements.
  • Ensure data quality and integrity through data validation and cleansing processes.
  • Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
  • Stay up to date with industry best practices and emerging technologies in data engineering.


Read more
Giant Leap Systems
Posted by Priyanka Kulkarni
Pune
0 - 2 yrs
₹3L - ₹4L / yr
Angular (2+)
React.js
Spring Boot
HTML/CSS
JavaScript
+7 more
Responsibilities:

  • Develop web applications using Angular or React (based on project requirements) and Spring Boot.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Write efficient, reusable, and maintainable code adhering to coding standards.
  • Ensure performance, security, and scalability of applications.
  • Debug issues, resolve bugs, and identify performance bottlenecks.
  • Stay updated with the latest technologies and development trends.
  • Participate in code reviews and receive mentorship from senior developers.
  • Assist in preparing technical documentation for developed features.

Requirements:

  • Bachelor's degree in Computer Science, Software Engineering, or a related field.
  • Knowledge and hands-on experience with Angular and/or React, and Spring Boot.
  • Solid understanding of web development fundamentals: HTML, CSS, JavaScript, and REST APIs.
  • Familiarity with databases and basic SQL queries.
  • Awareness of software development methodologies, design patterns, and best practices.
  • Strong analytical and problem-solving abilities.
  • Good communication and teamwork skills.

Willingness to learn and adapt quickly to new tools and technologies


Read more
Gruve
Posted by Nikita Sinha
Mumbai, Pune
5 - 10 yrs
Upto ₹22L / yr (Varies)
React.js
Next.js
WordPress
PHP
HTML/CSS
+4 more

We are seeking an experienced WordPress Developer with expertise in both frontend and backend development. The ideal candidate will have a deep understanding of headless WordPress architecture, where the backend is managed with WordPress, and the frontend is built using React.js (or Next.js). The developer should follow best coding practices to ensure the website is secure, high-performing, scalable, and fully responsive. 


Key Responsibilities: 

Backend Development (WordPress): 

  • Develop and maintain a headless WordPress CMS to serve content via REST API / GraphQL. 
  • Create custom WordPress plugins and themes to optimize content delivery. 
  • Ensure secure authentication and role-based access for API endpoints. 
  • Optimize WordPress database queries for better performance. 

Frontend Development (React.js / Next.js): 

  • Build a decoupled frontend using React.js (or Next.js) that fetches content from WordPress. 
  • Translate UI/UX designs from Figma into working frontend code. 
  • Ensure seamless integration of frontend with WordPress APIs. 
  • Implement modern UI/UX principles to create responsive, fast-loading web pages. 

Code quality, Performance & Security Optimization: 

  • Optimize website speed using caching, lazy loading, and CDN integration. 
  • Ensure the website follows SEO best practices and is mobile-friendly. 
  • Implement security best practices to prevent vulnerabilities such as SQL injection, XSS, and CSRF. 
  • Write clean, maintainable, and well-documented code following industry standards. 
  • Implement version control using Git/GitHub/GitLab. 
  • Conduct regular code reviews and debugging to ensure a high-quality product. 

Collaboration & Deployment: 

  • Work closely with designers, content teams, and project managers. 
  • Deploy and manage WordPress and frontend code in staging and production environments. 
  • Monitor website performance and implement improvements. 

Required Skills & Qualifications: 

  • B.E/B. Tech Degree, Master’s Degree required
  • Experience: 6 – 8 Years
  • Strong experience in React.js / Next.js for building frontend applications. 
  • Proficiency in JavaScript (ES6+), TypeScript, HTML5, CSS3, and TailwindCSS.
  • Familiarity with SSR (Server Side Rendering) and SSG (Static Site Generation). 
  • Experience in WordPress development (PHP, MySQL, WP REST API, GraphQL). 
  • Experience with ACF (Advanced Custom Fields), Custom Post Types, WP Headless CMS
  • Strong knowledge of WordPress security, database optimization, and caching techniques. 

Why Join Us:

  • Competitive salary and benefits package.
  • Work in a dynamic, collaborative, and creative environment.
  • Opportunity to lead and influence design decisions across various platforms.
  • Professional development opportunities and career growth potential.


Read more
Mindstix Software Labs
Agency job
via AccioJob by AccioJob Hiring Board
Pune
0 - 1 yrs
₹5L - ₹6L / yr
DSA
SQL
Object Oriented Programming (OOPs)

AccioJob is conducting a Walk-In hiring drive in partnership with MindStix to fill the SDE 1 position at their Pune office.


Apply, Register, and select your Slot here: https://go.acciojob.com/hLMAv4


Job Description:

  • Role: SDE 1
  • Work Location: Pune
  • CTC: 5 LPA - 6 LPA

Eligibility Criteria:

  • Degree: B.Tech, BE, M.Tech, MCA, BCA
  • Branch: Open to all streams
  • Graduation Year: 2024 and 2025
  • Notice Period: Candidates should have a notice period of 10 days or less

Evaluation Process:

  1. Offline Assessment at AccioJob Pune Skill Centre
  2. Company-side Process: In-person Assignment, 2 Technical Rounds, 1 HR Round

Note: Please bring your laptop and microphone for the test.


Register Here: https://go.acciojob.com/hLMAv4

Read more
IT Company
Agency job
via Jobdost by Saida Jabbar
Pune
3 - 6 yrs
₹14L - ₹20L / yr
Data Science
Machine Learning (ML)
Python
Scikit-Learn
XGBoost
+6 more

Job Overview

  • Level 1: minimum 5 years of previous working experience as a Data Scientist
  • Level 2: 3 to 5 years of previous working experience as a Data Scientist
  • In-depth knowledge of Agile processes and principles
  • Outstanding communication, presentation, and leadership skills
  • Excellent organizational and time management skills
  • Sharp analytical and problem-solving skills
  • Creative thinker with a vision
  • Flexibility / capacity for adaptation
  • Presentation skills (project reviews with customers and top management)
  • Interest in industrial & automotive topics
  • Fluent in English
  • Ability to work in international teams

  • Engineering degree with a strong background in mathematics and computer science. A PhD in a quantitative field and/or a minimum of 3 years of experience in machine learning is a plus.
  • Excellent understanding of traditional machine learning techniques and algorithms, such as k-NN, SVM, Random Forests, etc.
  • Understanding of deep learning techniques
  • Understanding and, ideally, experience with Reinforcement Learning methods
  • Experience using ML/DL frameworks (Scikit-learn, XGBoost, TensorFlow, Keras, MXNet, etc.)
  • Proficiency in at least one programming language (preferably Python)
  • Experience with SQL and NoSQL databases
  • Excellent verbal and written English skills are mandatory

Appreciated extra skills:

  • Experience in signal and image processing
  • Experience in forecasting and time series modeling
  • Experience with computer vision libraries like OpenCV
  • Experience using cloud platforms
  • Experience with version control systems (Git)
  • Interest in IoT and hardware adapted to ML tasks


Read more
DEMAND MEDIA BPM LLP
Posted by Darshana Mate
Pune
1 - 5 yrs
₹2L - ₹6L / yr
SQL
PowerBI
Python

Job Purpose

Responsible for managing end-to-end database operations, ensuring data accuracy, integrity, and security across systems. The position plays a key role in driving data reliability, availability, and compliance with operational standards.


Key Responsibilities:

  • Collate audit reports from the QA team and structure data in accordance with Standard Operating Procedures (SOP).
  • Perform data transformation and validation for accuracy and consistency.
  • Upload processed datasets into SQL Server using SSIS packages.
  • Monitor and optimize database performance, identifying and resolving bottlenecks.
  • Perform regular backups, restorations, and recovery checks to ensure data continuity.
  • Manage user access and implement robust database security policies.
  • Oversee database storage allocation and utilization.
  • Conduct routine maintenance and support incident management, including root cause analysis and resolution.
  • Design and implement scalable database solutions and architecture.
  • Create and maintain stored procedures, views, and other database components.
  • Optimize SQL queries for performance and scalability.
  • Execute ETL processes and support seamless integration of multiple data sources.
  • Maintain data integrity and quality through validation and cleansing routines.
  • Collaborate with cross-functional teams on data solutions and project deliverables.

 

Educational Qualification: Any Graduate

Required Skills & Qualifications:

  • Proven experience with SQL Server or similar relational database platforms.
  • Strong expertise in SSIS, ETL processes, and data warehousing.
  • Proficiency in SQL/T-SQL, including scripting, performance tuning, and query optimization.
  • Experience in database security, user role management, and access control.
  • Familiarity with backup/recovery strategies and database maintenance best practices.
  • Strong analytical skills with experience working with large and complex datasets.
  • Solid understanding of data modeling, normalization, and schema design.
  • Knowledge of incident and change management processes.
  • Excellent communication and collaboration skills.
  • Experience with Python for data manipulation and automation is a strong plus.


Read more
Phi Commerce
Posted by Nikita Sinha
Pune
6 - 10 yrs
Upto ₹18L / yr (Varies)
Integration
System integration
Linux/Unix
SQL

We are seeking a detail-oriented and experienced Implementation Lead to drive the end-to-end deployment of our payment gateway solutions. This role is crucial in bridging business needs, technical execution, and client success. The ideal candidate will have a solid background in payment systems, experience managing cross-functional teams, conducting UAT (User Acceptance Testing), and overseeing integration flows for payment gateways.


Key Responsibilities:

  • Lead and manage the implementation of payment gateway solutions across varied client environments.
  • Oversee and coordinate the activities of cross-functional teams, including developers, QA, and support, during the implementation lifecycle.
  • Collaborate with clients to gather business and technical requirements, design tailored solutions, and develop clear implementation roadmaps.
  • Plan, conduct, and manage UAT sessions with clients; capture feedback and ensure timely resolution of issues.
  • Design, review, and optimize payment flow processes such as authorization, capture, refunds, and chargebacks.
  • Monitor project progress, ensuring deliverables are met on time and within scope.
  • Act as a Subject Matter Expert (SME) on payment gateway technologies, API integrations, and industry compliance standards (e.g., PCI-DSS).
  • Prepare and maintain comprehensive technical and procedural documentation.
  • Ensure seamless handover of projects to support and operations teams post-implementation.


Qualifications & Experience:

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
  • Minimum 5 years of experience in the payment gateway or fintech domain.
  • At least 2 years of experience leading teams or managing cross-functional projects.
  • Proven track record of managing end-to-end implementation projects successfully.
  • Strong knowledge of payment gateway flows, including APIs, SDKs, tokenization, and reconciliation.
  • Hands-on experience with UAT planning, execution, and client coordination.
  • Familiarity with integrating major payment networks such as Visa, Mastercard, UPI, etc.
  • Excellent communication, analytical, and project management skills.
  • Experience with Agile/Scrum methodologies is a plus.
  • In-depth understanding of compliance standards like PCI-DSS, PSD2, etc.
  • Proficiency with tools such as Jira, Confluence, Postman, and payment testing utilities.
  • Ability to manage multiple client implementations simultaneously.


Read more
Phi Commerce
Posted by Nikita Sinha
Pune
2 - 5 yrs
Upto ₹10L / yr (Varies)
Integration
System integration
SQL
Linux/Unix

We are looking for a skilled and motivated Integration Engineer to join our dynamic team in the payment domain. This role involves the seamless integration of payment systems, APIs, and third-party services into our platform, ensuring smooth and secure payment processing. The ideal candidate will bring experience with payment technologies, integration methodologies, and a strong grasp of industry standards.

Key Responsibilities:

  • System Integration: Design, develop, and maintain integrations between various payment processors, gateways, and internal platforms using RESTful APIs, SOAP, and related technologies.
  • Payment Gateway Integration: Integrate third-party payment solutions such as Visa, MasterCard, PayPal, Stripe, and others into the platform.
  • Troubleshooting & Support: Identify and resolve integration issues including transactional failures, connectivity issues, and third-party service disruptions.
  • Testing & Validation: Conduct end-to-end integration testing to ensure payment system functionality across development, staging, and production environments.

Qualifications:

  • Education: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. Equivalent work experience is also acceptable.
  • Experience: 3+ years of hands-on experience in integrating payment systems and third-party services.
  • Proven experience with payment gateways (e.g., Stripe, Square, PayPal, Adyen) and protocols (e.g., ISO 20022, EMV).
  • Familiarity with payment processing systems and industry standards.

Desirable Skills:

  • Strong understanding of API security, OAuth, and tokenization practices.
  • Experience with PCI-DSS compliance.
  • Excellent problem-solving and debugging skills.
  • Effective communication and cross-functional collaboration capabilities.
Read more
Tata Consultancy Services
Agency job
via Risk Resources LLP hyd by Jhansi Padiy
Anywhere in India, Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Indore, Kolkata
5 - 11 yrs
₹6L - ₹30L / yr
Snowflake
Python
PySpark
SQL

Role descriptions / Expectations from the Role

  • 6-7 years of IT development experience, with a minimum of 3+ years of hands-on experience in Snowflake
  • Strong experience in building/designing data warehouses or data lakes, with end-to-end data mart implementation experience focused on large enterprise-scale Snowflake implementations on any of the hyperscalers
  • Strong experience building productionized data ingestion and data pipelines in Snowflake
  • Good knowledge of Snowflake's architecture and features like Zero-Copy Cloning, Time Travel, and performance tuning capabilities (a brief sketch of the first two follows this list)
  • Should have good experience with Snowflake RBAC and data security
  • Strong experience with Snowflake features, including newly released capabilities
  • Should have good experience in Python/PySpark
  • Should have experience with AWS services (S3, Glue, Lambda, Secrets Manager, DMS) and a few Azure services (Blob Storage, ADLS, ADF)
  • Should have experience/knowledge of orchestration and scheduling tools like Airflow
  • Should have a good understanding of ETL or ELT processes and ETL tools
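
For context on two of the Snowflake features named above, here is a minimal sketch using the snowflake-connector-python client; the account, credentials, and table names are hypothetical placeholders rather than details from this posting.

    # Minimal sketch of Snowflake Zero-Copy Cloning and Time Travel
    # (illustrative only; account, credentials, and tables are hypothetical).
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="xy12345.ap-south-1",  # hypothetical account locator
        user="ETL_USER",
        password="***",
        warehouse="ANALYTICS_WH",
        database="SALES_DB",
        schema="PUBLIC",
    )
    cur = conn.cursor()
    try:
        # Zero-Copy Cloning: a writable copy without duplicating storage.
        cur.execute("CREATE OR REPLACE TABLE ORDERS_DEV CLONE ORDERS")

        # Time Travel: query the table as it looked one hour ago.
        cur.execute("SELECT COUNT(*) FROM ORDERS AT (OFFSET => -3600)")
        print("Row count one hour ago:", cur.fetchone()[0])
    finally:
        cur.close()
        conn.close()

CREATE ... CLONE is a metadata-only operation, so the clone is writable without copying storage, and the AT (OFFSET => ...) clause queries the table as of a past point within the configured Time Travel retention window.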

Read more
NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Pune
5 - 8 yrs
₹10L - ₹18L / yr
Ab Initio
GDE
EME
SQL
Teradata
+5 more

Job Title : Ab Initio Developer

Location : Pune

Experience : 5+ Years

Notice Period : Immediate Joiners Only


Job Summary :

We are looking for an experienced Ab Initio Developer to join our team in Pune.

The ideal candidate should have strong hands-on experience in Ab Initio development, data integration, and Unix scripting, with a solid understanding of SDLC and data warehousing concepts.


Mandatory Skills :

Ab Initio (GDE, EME, graphs, parameters), SQL/Teradata, Data Warehousing, Unix Shell Scripting, Data Integration, DB Load/Unload Utilities.


Key Responsibilities :

  • Design and develop Ab Initio graphs/plans/sandboxes/projects using GDE and EME.
  • Manage and configure standard environment parameters and multifile systems.
  • Perform complex data integration from multiple source and target systems with business rule transformations.
  • Utilize DB Load/Unload Utilities effectively for optimized performance.
  • Implement generic graphs, ensure proper use of parallelism, and maintain project parameters.
  • Work in a data warehouse environment involving SDLC, ETL processes, and data analysis.
  • Write and maintain Unix Shell Scripts and use utilities like sed, awk, etc.
  • Optimize and troubleshoot performance issues in Ab Initio jobs.

Mandatory Skills :

  • Strong expertise in Ab Initio (GDE, EME, graphs, parallelism, DB utilities, multifile systems).
  • Experience with SQL and databases like SQL Server or Teradata.
  • Proficiency in Unix Shell Scripting and Unix utilities.
  • Data integration and ETL from varied source/target systems.

Good to Have :

  • Experience in Ab Initio and AWS integration.
  • Knowledge of Message Queues and Continuous Graphs.
  • Exposure to Metadata Hub.
  • Familiarity with Big Data tools such as Hive, Impala.
  • Understanding of job scheduling tools.
Read more
Tekdi Technologies Pvt. Ltd.
Posted by Tekdi Recruitment
Pune
3 - 5 yrs
₹8L - ₹12L / yr
Java
GWT
RESTful APIs
SQL
Hibernate (Java)
+3 more

The Java developer will be responsible for many duties throughout the development lifecycle of applications, from concept and design right through to testing.

Duties/Responsibilities:

  • To support and maintain existing Java code base, debug the application
  • To analyse user and business requirements and design and implement appropriate solutions
  • To design and code programs following in-house standards and good design principles
  • To ensure that all programs are documented to the company standards
  • To create unit test plans and perform unit testing of the programs
  • To provide advice and guidance to other members of the team

Required Skills/Abilities: 

  • Hands on experience in designing and developing applications using Java EE platforms
  • Object Oriented analysis and design using common design patterns
  • Good knowledge of Relational Databases, SQL and ORM technologies (JPA2, Hibernate)
  • Experience in the Spring Framework
  • Experience in developing web applications using at least one popular web framework (JSF, Wicket, GWT, Spring MVC)
  • Experience in RESTful web services
  • Experience with test-driven development
  • Exposure to portal/mobility development - Desired
  • Exposure to any middleware solution such as MQ, Oracle Fusion Middleware (WebLogic), WebSphere, or open-source alternatives


Read more
TCS

Agency job
via Aavyan Consulting by Jayatri Paul
Bengaluru (Bangalore), Pune, Chennai
5 - 8 yrs
₹8L - ₹12L / yr
Maximo
Java
Oracle
SQL

We’re hiring a Maximo Technical Lead with hands-on experience in Maximo 7.6 or higher, Java, and Oracle DB. The role involves leading Maximo implementations, upgrades, and support projects, especially for manufacturing clients.


Key Skills:

IBM Maximo (MAS 8.x preferred)

Java, Oracle 12c+, WebSphere

Maximo Mobile / Asset Management / Cognos / BIRT

SQL, scripting, troubleshooting

Experience leading tech teams and working with clients


Good to Have:

IBM Maximo Certification

MES/Infrastructure planning knowledge

Experience with Rail or Manufacturing domain


https://lnkd.in/getubzJd

Read more
Mindstix Software Labs
Agency job
via AccioJob by AccioJob Hiring Board
Pune
0 - 1 yrs
₹5L - ₹6L / yr
DSA
SQL
Object Oriented Programming (OOPs)

AccioJob is conducting an offline hiring drive in partnership with MindStix to fill the SDE 1 position at their Pune office.


Apply, Register, and select your Slot here: 

https://go.acciojob.com/Hb8ATw

Job Description:

  • Role: SDE 1
  • Work Location: Pune
  • CTC: 5 LPA - 6 LPA

Eligibility Criteria:

  • Degree: B.Tech, BE, M.Tech, MCA, BCA
  • Branch: Open to all streams
  • Graduation Year: 2024 and 2025
  • Notice Period: Candidates should have a notice period of 10 days or less

Evaluation Process:

  1. Offline Assessment at AccioJob Pune Skill Centre
  2. Company-side Process: In-person Assignment, 2 Technical Rounds, 1 HR Round

Note: Please bring your laptop and microphone for the test.


Register Here: https://go.acciojob.com/Hb8ATw

Read more
Bengaluru (Bangalore), Mumbai, Gurugram, Pune, Hyderabad, Chennai, Kolkata
3 - 8 yrs
₹5L - ₹20L / yr
Oracle Analytics Cloud (OAC)
Fusion Data Intelligence (FDI) Specialist
RPD
OAC Reports
Data Visualization
+7 more

Job Title : Oracle Analytics Cloud (OAC) / Fusion Data Intelligence (FDI) Specialist

Experience : 3 to 8 years

Location : All USI locations – Hyderabad, Bengaluru, Mumbai, Gurugram (preferred) and Pune, Chennai, Kolkata

Work Mode : Hybrid Only (2-3 days from office or all 5 days from office)


Mandatory Skills : Oracle Analytics Cloud (OAC), Fusion Data Intelligence (FDI), RPD, OAC Reports, Data Visualizations, SQL, PL/SQL, Oracle Databases, ODI, Oracle Cloud Infrastructure (OCI), DevOps tools, Agile methodology.


Key Responsibilities :

  • Design, develop, and maintain solutions using Oracle Analytics Cloud (OAC).
  • Build and optimize complex RPD models, OAC reports, and data visualizations.
  • Utilize SQL and PL/SQL for data querying and performance optimization.
  • Develop and manage applications hosted on Oracle Cloud Infrastructure (OCI).
  • Support Oracle Cloud migrations, OBIEE upgrades, and integration projects.
  • Collaborate with teams using the ODI (Oracle Data Integrator) tool for ETL processes.
  • Implement cloud scripting using CURL for Oracle Cloud automation.
  • Contribute to the design and implementation of Business Continuity and Disaster Recovery strategies for cloud applications.
  • Participate in Agile development processes and DevOps practices including CI/CD and deployment orchestration.

Required Skills :

  • Strong hands-on expertise in Oracle Analytics Cloud (OAC) and/or Fusion Data Intelligence (FDI).
  • Deep understanding of data modeling, reporting, and visualization techniques.
  • Proficiency in SQL, PL/SQL, and relational databases on Oracle.
  • Familiarity with DevOps tools, version control, and deployment automation.
  • Working knowledge of Oracle Cloud services, scripting, and monitoring.

Good to Have :

  • Prior experience in OBIEE to OAC migrations.
  • Exposure to data security models and cloud performance tuning.
  • Certification in Oracle Cloud-related technologies.
Read more
NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Bengaluru (Bangalore), Mumbai, Gurugram, Pune, Hyderabad, Chennai
3 - 6 yrs
₹5L - ₹20L / yr
IBM Sterling Integrator Developer
IBM Sterling B2B Integrator
Shell Scripting
Python
SQL
+1 more

Job Title : IBM Sterling Integrator Developer

Experience : 3 to 5 Years

Locations : Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune

Employment Type : Full-Time


Job Description :

We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.

The ideal candidate should have strong expertise in IBM Sterling and integration, along with scripting and database proficiency.

Key Responsibilities :

  • Develop, configure, and maintain IBM Sterling Integrator solutions.
  • Design and implement integration solutions using IBM Sterling.
  • Collaborate with cross-functional teams to gather requirements and provide solutions.
  • Work with custom languages and scripting to enhance and automate integration processes.
  • Ensure optimal performance and security of integration systems.

Must-Have Skills :

  • Hands-on experience with IBM Sterling Integrator and associated integration tools.
  • Proficiency in at least one custom scripting language.
  • Strong command over Shell scripting, Python, and SQL (mandatory).
  • Good understanding of EDI standards and protocols is a plus.

Interview Process :

  • 2 Rounds of Technical Interviews.

Additional Information :

  • Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.
Read more
Partner Company

Agency job
via AccioJob by AccioJob Hiring Board
Hyderabad, Pune, Noida
0 - 0 yrs
₹5L - ₹6L / yr
SQL
MS-Excel
PowerBI
Python

AccioJob is conducting an offline hiring drive in partnership with Our Partner Company to hire Junior Business/Data Analysts for an internship with a Pre-Placement Offer (PPO) opportunity.


Apply, Register and select your Slot here: https://go.acciojob.com/69d3Wd


Job Description:

  • Role: Junior Business/Data Analyst (Internship + PPO)
  • Work Location: Hyderabad
  • Internship Stipend: ₹15,000 - ₹25,000/month
  • Internship Duration: 3 months
  • CTC on PPO: 5 LPA - 6 LPA

Eligibility Criteria:

  • Degree: Open to all academic backgrounds
  • Graduation Year: 2023, 2024, 2025

Required Skills:

  • Proficiency in SQL, Excel, Power BI, and basic Python
  • Strong analytical mindset and interest in solving business problems with data

Hiring Process:

  1. Offline Assessment at AccioJob Skill Centres (Hyderabad, Pune, Noida)
  2. 1 Assignment + 2 Technical Interviews (Virtual; In-person for Hyderabad candidates)

Note: Please bring your laptop and earphones for the test.


Register Here: https://go.acciojob.com/69d3Wd

Read more
Deqode
Posted by Alisha Das
Bengaluru (Bangalore), Mumbai, Pune, Chennai, Gurugram
5.6 - 7 yrs
₹10L - ₹28L / yr
Amazon Web Services (AWS)
Python
PySpark
SQL

Job Summary:

As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining scalable, high-performance data pipelines using AWS services. With 6+ years of experience, you’ll collaborate closely with data architects, analysts, and business stakeholders to build reliable, secure, and cost-efficient data infrastructure across the organization.

Key Responsibilities:

  • Design, develop, and manage scalable data pipelines using AWS Glue, Lambda, and other serverless technologies
  • Implement ETL workflows and transformation logic using PySpark and Python on AWS Glue (a brief sketch follows this list)
  • Leverage AWS Redshift for warehousing, performance tuning, and large-scale data queries
  • Work with AWS DMS and RDS for database integration and migration
  • Optimize data flows and system performance for speed and cost-effectiveness
  • Deploy and manage infrastructure using AWS CloudFormation templates
  • Collaborate with cross-functional teams to gather requirements and build robust data solutions
  • Ensure data integrity, quality, and security across all systems and processes
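
As a rough illustration of the PySpark transformation logic this role calls for (see the ETL responsibility above), here is a minimal sketch; the S3 paths and column names are hypothetical, and in practice similar logic would usually run inside an AWS Glue job rather than a standalone script.

    # Minimal PySpark ETL sketch: read raw CSV, clean and aggregate, write Parquet.
    # Paths and column names are hypothetical; an AWS Glue job would wrap similar
    # logic with GlueContext/DynamicFrames and job bookmarks.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

    raw = (
        spark.read.option("header", True)
        .csv("s3://example-bucket/raw/orders/")  # hypothetical input location
    )

    cleaned = (
        raw.filter(F.col("order_amount").cast("double") > 0)
        .withColumn("order_date", F.to_date("order_ts"))
        .dropDuplicates(["order_id"])
    )

    daily = cleaned.groupBy("order_date").agg(
        F.count("order_id").alias("orders"),
        F.sum(F.col("order_amount").cast("double")).alias("revenue"),
    )

    daily.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/daily_orders/"  # hypothetical output location
    )

    spark.stop()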

Required Skills & Experience:

  • 6+ years of experience in Data Engineering with strong AWS expertise
  • Proficient in Python and PySpark for data processing and ETL development
  • Hands-on experience with AWS Glue, Lambda, DMS, RDS, and Redshift
  • Strong SQL skills for building complex queries and performing data analysis
  • Familiarity with AWS CloudFormation and infrastructure as code principles
  • Good understanding of serverless architecture and cost-optimized design
  • Ability to write clean, modular, and maintainable code
  • Strong analytical thinking and problem-solving skills


Read more
Bengaluru (Bangalore), Pune, Chennai
5 - 12 yrs
₹5L - ₹25L / yr
PySpark
Automation
SQL

Skill Name: ETL Automation Testing

Location: Bangalore, Chennai and Pune

Experience: 5+ Years


Required:

Experience in ETL Automation Testing

Strong experience in PySpark.

Read more
NeoGenCode Technologies Pvt Ltd
Pune
8 - 15 yrs
₹5L - ₹24L / yr
Data engineering
Snowflake schema
SQL
ETL
ELT
+5 more

Job Title : Data Engineer – Snowflake Expert

Location : Pune (Onsite)

Experience : 10+ Years

Employment Type : Contractual

Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.


Job Summary :

We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.

The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.

Responsibilities :

  • Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
  • Design and implement scalable ELT pipelines with performance and cost-efficiency in mind.
  • Ensure high data quality, security, and adherence to governance frameworks.
  • Conduct code reviews and align development with best practices.

Qualifications :

  • Bachelor’s in Computer Science, Data Science, IT, or related field.
  • Snowflake certifications (Pro/Architect) preferred.
Read more
Wissen Technology
Posted by Vishakha Walunj
Bengaluru (Bangalore), Pune, Mumbai
7 - 12 yrs
Best in industry
PySpark
Databricks
SQL
Python

Required Skills:

  • Hands-on experience with Databricks, PySpark
  • Proficiency in SQL, Python, and Spark.
  • Understanding of data warehousing concepts and data modeling.
  • Experience with CI/CD pipelines and version control (e.g., Git).
  • Fundamental knowledge of any cloud services, preferably Azure or GCP.


Good to Have:

  • BigQuery
  • Experience with performance tuning and data governance.


Read more
TCS

Agency job
via Risk Resources LLP hyd by Jhansi Padiy
Mumbai, Pune, Chennai
4 - 8 yrs
₹6L - ₹20L / yr
Marketing Campaign
SAS
Teradata
SQL

Required Technical Skill Set: Teradata with Marketing Campaign knowledge and SAS

Desired Competencies (Technical/Behavioral Competency)

Must-Have

1. Advanced coding skills in Teradata SQL and SAS are required

2. Experience with customer segmentation, marketing optimization, and marketing automation. Thorough understanding of customer contact management principles

3. Design and execution of campaigns on consumer and business products using Teradata communication manager and in-house tools

4. Analyzing the effectiveness of various campaigns through the necessary analysis to add insights and improve future campaigns

5. Timely resolution of Marketing team queries and other ad-hoc requests

Good-to-Have

1. Awareness of CRM tools & process, automation

2. Knowledge of commercial databases preferable

3. People & team management skills

Read more
Solidatus
Pune
6 - 8 yrs
₹0.5L - ₹0.5L / yr
Java
Spring Boot
Node.js
Databases
SQL
+6 more

Competitive Salary


About Solidatus


At Solidatus, we empower organizations to connect and visualize their data relationships, making it easier to identify, access, and understand their data. Our metadata management technology helps businesses establish a sustainable data foundation, ensuring they meet regulatory requirements, drive digital transformation, and unlock valuable insights. 

 

We’re experiencing rapid growth—backed by HSBC, Citi, and AlbionVC, we secured £14 million in Series A funding in 2021. Our achievements include recognition in the Deloitte UK Technology Fast 50, multiple A-Team Innovation Awards, and a top 1% place to work ranking from The Financial Technologist.

 

Now is an exciting time to join us as we expand internationally and continue shaping the future of data management. 


About the Engineering Team


Engineering is the heart of Solidatus. Our team of world-class engineers, drawn from outstanding computer science and technical backgrounds, plays a critical role in crafting the powerful, elegant solutions that set us apart. We thrive on solving challenging visualization and data management problems, building technology that delights users and drives real-world impact for global enterprises.

As Solidatus expands its footprint, we are scaling our capabilities with a focus on building world-class connectors and integrations to extend the reach of our platform. Our engineers are trusted with the freedom to explore, innovate, and shape the product’s future — all while working in a collaborative, high-impact environment. Here, your code doesn’t just ship — it empowers some of the world's largest and most complex organizations to achieve their data ambitions.


Who We Are & What You’ll Do


Join our Data Integration team and help shape the way data flows! 


Your Mission:


To expand and refine our suite of out-of-the-box integrations, using our powerful API and SDK to bring in metadata for visualisation from a vast range of sources including databases with diverse SQL dialects.

But that is just the beginning. At our core, we are problem-solvers and innovators. You’ll have the chance to:

  • Design intuitive layouts representing the flow of data across complex deployments of diverse technologies
  • Design and optimize API connectivity and parsers that read metadata from source systems
  • Explore new paradigms for representing data lineage
  • Enhance our data ingestion capabilities to handle massive volumes of data
  • Dig deep into data challenges to build smarter, more scalable solutions

Beyond engineering, you’ll collaborate with users, troubleshoot tricky issues, streamline development workflows, and contribute to a culture of continuous improvement.


What We’re Looking For


  • We don’t believe in sticking to a single tech stack just for the sake of it. We’re engineers first, and we pick the best tools for the job. More than ticking off a checklist, we value mindset, curiosity, and problem-solving skills.
  • You’re quick to learn and love diving into new technologies
  • You push for excellence and aren’t satisfied with “just okay”
  • You can break down complex topics in a way that anyone can understand
  • You should have 6–8 years of proven experience in developing and delivering high-quality, scalable software solutions
  • You should be a strong self-starter with the ability to take ownership of tasks and drive them to completion with minimal supervision.
  • You should be able to mentor junior developers, perform code reviews, and ensure adherence to best practices in software engineering.


Tech & Skills We’d Love to See


Must-have:

  • Strong hands-on experience with Java, Spring Boot, RESTful APIs, and Node.js
  • Solid knowledge of databases, SQL dialects, and data structures


Nice-to-have:

  • Experience with C#, ASP.NET Core, TypeScript, React.js, or similar frameworks
  • Bonus points for data experience—we love data wizards


If you’re passionate about engineering high-impact solutions, playing with cutting-edge tech, and making data work smarter, we’d love to have you on board!

Read more
NonStop io Technologies Pvt Ltd
Pune
8 - 12 yrs
Best in industry
.NET
Java
MySQL
SQL
Microservices
+2 more

About the Role:

We are seeking an experienced Tech Lead with 8+ years of hands-on experience in backend development using .NET or Java. The ideal candidate will have strong leadership capabilities, the ability to mentor a team, and a solid technical foundation to deliver scalable and maintainable backend systems. Prior experience in the healthcare domain is a plus.


Key Responsibilities:

  • Lead a team of backend developers to deliver product and project-based solutions.
  • Oversee the development and implementation of backend services and APIs.
  • Collaborate with cross-functional teams including frontend, QA, DevOps, and Product.
  • Perform code reviews and enforce best practices in coding and design.
  • Ensure performance, quality, and responsiveness of backend applications.
  • Participate in sprint planning, estimations, and retrospectives.
  • Troubleshoot, analyze, and optimize application performance.

Required Skills:

  • 8+ years of backend development experience in .NET or Java.
  • Proven experience as a Tech Lead managing development teams.
  • Strong understanding of REST APIs, microservices, and software design patterns.
  • Familiarity with SQL and NoSQL databases.
  • Good knowledge of Agile/Scrum methodologies.

Preferred Skills:

  • Experience in the healthcare domain.
  • Exposure to frontend frameworks like Angular or React.
  • Understanding of cloud platforms such as Azure/AWS/GCP.
  • CI/CD and DevOps practices.

What We Offer:

  • Collaborative and value-driven culture.
  • Projects with real-world impact in critical domains.
  • Flexibility and autonomy in work.
  • Continuous learning and growth opportunities.


Read more
DeepIntent
Posted by Indrajeet Deshmukh
Pune
4 - 10 yrs
Best in industry
Python
Spark
Apache Airflow
Docker
SQL
+2 more

What You’ll Do:


As a Data Scientist, you will work closely across DeepIntent Analytics teams located in New York City, India, and Bosnia. The role will support internal and external business partners in defining patient and provider audiences, and generating analyses and insights related to measurement of campaign outcomes, Rx, patient journey, and supporting evolution of DeepIntent product suite. Activities in this position include creating and scoring audiences, reading campaign results, analyzing medical claims, clinical, demographic and clickstream data, performing analysis and creating actionable insights, summarizing, and presenting results and recommended actions to internal stakeholders and external clients, as needed.

  • Explore ways to create better audiences 
  • Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights 
  • Explore ways of using inference, statistical, machine learning techniques to improve the performance of existing algorithms and decision heuristics
  • Design and deploy new iterations of production-level code
  • Contribute posts to our upcoming technical blog  

Who You Are:

  • Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, OR, or Data Science. Graduate degree is strongly preferred 
  • 3+ years of working experience as Data Analyst, Data Engineer, Data Scientist in digital marketing, consumer advertisement, telecom, or other areas requiring customer level predictive analytics
  • Background in either data engineering or analytics
  • Hands-on technical experience is required, including proficiency in performing statistical analysis in Python with relevant libraries
  • You have an advanced understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications)
  • Experience in programmatic, DSP related, marketing predictive analytics, audience segmentation or audience behaviour analysis or medical / healthcare experience
  • You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference) 
  • Familiarity with data science tools such as XGBoost, PyTorch, and Jupyter, and strong LLM user experience (developer/API experience is a plus)
  • You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing


Read more
Data Axle
Posted by Eman Khan
Pune
6 - 9 yrs
Best in industry
Machine Learning (ML)
Python
SQL
PySpark
XGBoost

About Data Axle:

Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.


Data Axle Pune is pleased to have achieved certification as a Great Place to Work!


Roles & Responsibilities:

We are looking for a Senior Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.


We are looking for a Senior Data Scientist who will be responsible for:

  1. Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
  2. Design or enhance ML workflows for data ingestion, model design, model inference and scoring
  3. Oversight on team project execution and delivery
  4. Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
  5. Visualize and publish model performance results and insights to internal and external audiences


Qualifications:

  1. Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
  2. Minimum of 5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
  3. Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
  4. Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
  5. Proficiency in Python and SQL required; PySpark/Spark experience a plus
  6. Ability to conduct a productive peer review and maintain proper code structure in GitHub
  7. Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
  8. Working knowledge of modern CI/CD methods.

This position description is intended to describe the duties most frequently performed by an individual in this position.


It is not intended to be a complete list of assigned duties but to describe a position level.

Read more
Deqode
Posted by Mokshada Solanki
Bengaluru (Bangalore), Mumbai, Pune, Gurugram
4 - 5 yrs
₹4L - ₹20L / yr
SQL
Amazon Web Services (AWS)
Migration
PySpark
ETL

Job Summary:

Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.


Key Responsibilities:

  • Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
  • Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
  • Work on data migration tasks in AWS environments.
  • Monitor and improve database performance; automate key performance indicators and reports.
  • Collaborate with cross-functional teams to support data integration and delivery requirements.
  • Write shell scripts for automation and manage ETL jobs efficiently.


Required Skills:

  • Strong experience with MySQL, complex SQL queries, and stored procedures.
  • Hands-on experience with AWS Glue, PySpark, and ETL processes.
  • Good understanding of AWS ecosystem and migration strategies.
  • Proficiency in shell scripting.
  • Strong communication and collaboration skills.


Nice to Have:

  • Working knowledge of Python.
  • Experience with AWS RDS.



Read more
Deqode
Posted by Shraddha Katare
Bengaluru (Bangalore), Pune, Chennai, Mumbai, Gurugram
5 - 7 yrs
₹5L - ₹19L / yr
Amazon Web Services (AWS)
Python
PySpark
SQL
Redshift

Profile: AWS Data Engineer

Mode - Hybrid

Experience - 5 to 7 years

Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram


Roles and Responsibilities

  • Design and maintain ETL pipelines using AWS Glue and Python/PySpark
  • Optimize SQL queries for Redshift and Athena
  • Develop Lambda functions for serverless data processing
  • Configure AWS DMS for database migration and replication
  • Implement infrastructure as code with CloudFormation
  • Build optimized data models for performance
  • Manage RDS databases and AWS service integrations
  • Troubleshoot and improve data processing efficiency
  • Gather requirements from business stakeholders
  • Implement data quality checks and validation
  • Document data pipelines and architecture
  • Monitor workflows and implement alerting
  • Keep current with AWS services and best practices


Required Technical Expertise:

  • Python/PySpark for data processing
  • AWS Glue for ETL operations
  • Redshift and Athena for data querying
  • AWS Lambda and serverless architecture
  • AWS DMS and RDS management
  • CloudFormation for infrastructure
  • SQL optimization and performance tuning
Read more
Gruve
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
5+ yrs
Upto ₹50L / yr (Varies)
Python
SQL
Data engineering
Apache Spark
PySpark
+6 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions. 

Key Roles & Responsibilities:

  • Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
  • Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
  • Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
  • Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency (a brief sketch follows this list).
  • Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
  • Implement data governance, security, and compliance best practices.
  • Build and maintain data models, transformations, and data marts for analytics and reporting.
  • Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
  • Automate infrastructure and deployments using Terraform, Airflow, or dbt.
  • Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
  • Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
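
As a small illustration of the Delta Lake and partitioning work described above, the sketch below writes a partitioned Delta table and then compacts it; it assumes a Spark session with Delta Lake enabled (for example, a Databricks cluster), and the database, table, and column names are hypothetical.

    # Minimal Delta Lake sketch: write a partitioned table, then compact it.
    # Assumes Delta Lake is available on the cluster; names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

    events = spark.createDataFrame(
        [("u1", "click", "2024-05-01"), ("u2", "view", "2024-05-01")],
        ["user_id", "event_type", "event_date"],
    )

    spark.sql("CREATE DATABASE IF NOT EXISTS analytics")  # hypothetical schema

    (
        events.withColumn("event_date", F.to_date("event_date"))
        .write.format("delta")
        .mode("append")
        .partitionBy("event_date")
        .saveAsTable("analytics.events")  # hypothetical catalog table
    )

    # Compact small files and co-locate rows by user_id for selective reads.
    # OPTIMIZE ... ZORDER BY is Databricks/Delta-specific SQL.
    spark.sql("OPTIMIZE analytics.events ZORDER BY (user_id)")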


Basic Qualifications:

  • Bachelor’s or Master’s Degree in Computer Science or Data Science.
  • 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
  • Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
  • Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
  • Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
  • Proficiency in SQL, Python, or Scala for data transformation and analytics.
  • Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
  • Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
  • Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
  • Strong understanding of data governance, access control, and encryption strategies.
  • Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.


Preferred Qualifications:

  • Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
  • Experience in BI and analytics tools (Tableau, Power BI, Looker).
  • Familiarity with data observability tools (Monte Carlo, Great Expectations).
  • Experience with machine learning feature engineering pipelines in Databricks.
  • Contributions to open-source data engineering projects.
Deqode

at Deqode

1 recruiter
Alisha Das
Posted by Alisha Das
Pune, Mumbai, Bengaluru (Bangalore), Chennai
4 - 7 yrs
₹5L - ₹15L / yr
Amazon Web Services (AWS)
Python
PySpark
Glue semantics
Amazon Redshift
+1 more

Job Overview:

We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.

Key Responsibilities:

  • Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools (a minimal example follows this list).
  • Integrate data from diverse sources and ensure its quality, consistency, and reliability.
  • Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
  • Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
  • Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
  • Automate data validation, transformation, and loading processes to support real-time and batch data processing.
  • Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
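
As a rough illustration of the pipeline work described in the first bullet above, the sketch below shows the skeleton of a Glue PySpark job. The catalog database, table name, field mappings, and S3 path are hypothetical placeholders, not an actual pipeline.

# Illustrative Glue job skeleton -- names and paths are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)

# Cast a few fields on the way through.
mapped = ApplyMapping.apply(
    frame=orders,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
        ("created_at", "string", "created_at", "timestamp"),
    ],
)

# Land the cleaned data as Parquet for downstream Redshift/Athena consumption.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()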

Required Skills:

  • 5 to 7 years of hands-on experience in data engineering roles.
  • Strong proficiency in Python and PySpark for data transformation and scripting.
  • Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
  • Solid understanding of SQL and database optimization techniques.
  • Experience working with large-scale data pipelines and high-volume data environments.
  • Good knowledge of data modeling, warehousing, and performance tuning.

Preferred/Good to Have:

  • Experience with workflow orchestration tools like Airflow or Step Functions.
  • Familiarity with CI/CD for data pipelines.
  • Knowledge of data governance and security best practices on AWS.
Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Pune, Mumbai, Bengaluru (Bangalore), Gurugram
4 - 6 yrs
₹5L - ₹10L / yr
ETL
SQL
Amazon Web Services (AWS)
PySpark
KPI

Role - ETL Developer

Work Mode - Hybrid

Experience - 4+ years

Location - Pune, Gurgaon, Bengaluru, Mumbai

Required Skills - AWS, AWS Glue, PySpark, ETL, SQL

Required Skills:

  • 4+ years of hands-on experience in MySQL, including SQL queries and procedure development
  • Experience in PySpark, AWS, and AWS Glue
  • Experience in AWS migration
  • Experience with automated scripting and tracking KPIs/metrics for database performance
  • Proficiency in shell scripting and ETL.
  • Strong communication skills and a collaborative team player
  • Knowledge of Python and AWS RDS is a plus


Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Pune, Indore
4 - 6 yrs
₹10L - ₹18L / yr
Amazon Web Services (AWS)
JavaScript
PHP
SQL

🚀 We’re Hiring: PHP Developer at Deqode

📍 Location: Pune (Hybrid)

🕒 Experience: 4–6 Years

⏱️ Notice Period: Immediate Joiner


We're looking for a skilled PHP Developer to join our team. If you have a strong grasp of secure coding practices, are experienced in PHP upgrades, and thrive in a fast-paced deployment environment, we’d love to connect with you!


🔧 Key Skills:

- PHP | MySQL | JavaScript | Jenkins | Nginx | AWS


🔐 Security-Focused Responsibilities Include:

- Remediation of PenTest findings

- XSS mitigation (input/output sanitization)

- API rate limiting

- 2FA integration

- PHP version upgrade

- Use of AWS Secrets Manager

- Secure session and password policies



Top tier global IT consulting company

Top tier global IT consulting company

Agency job
via AccioJob by AccioJobHiring Board
Pune, Hyderabad, Gurugram, Chennai
0 - 1 yrs
₹11.1L - ₹11.1L / yr
Data Structures
Algorithms
Object Oriented Programming (OOPs)
SQL
Any programming language

AccioJob is conducting an exclusive diversity hiring drive with a reputed global IT consulting company for female candidates only.


Apply Here: https://links.acciojob.com/3SmQ0Bw


Key Details:

• Role: Application Developer

• CTC: ₹11.1 LPA

• Work Location: Pune, Chennai, Hyderabad, Gurgaon (Onsite)

• Required Skills: DSA, OOPs, SQL, and proficiency in any programming language


Eligibility Criteria:

• Graduation Year: 2024–2025

• Degree: B.E/B.Tech or M.E/M.Tech

• CS/IT branches: No prior experience required

• Non-CS/IT branches: Minimum 6 months of technical experience

• Minimum 60% in UG


Selection Process:

Offline Assessment at AccioJob Skill Center(s) in:

• Pune

• Hyderabad

• Noida

• Delhi

• Greater Noida


Further Rounds for Shortlisted Candidates Only:

• Coding Test

• Code Pairing Round

• Technical Interview

• Leadership Round


Note: Candidates must bring their own laptop & earphones for the assessment.


Apply Here: https://links.acciojob.com/3SmQ0Bw

ZeMoSo Technologies

at ZeMoSo Technologies

11 recruiters
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Chennai, Pune
4 - 8 yrs
₹10L - ₹15L / yr
Data engineering
Python
SQL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+3 more

Work Mode: Hybrid


Need B.Tech, BE, M.Tech, ME candidates - Mandatory



Must-Have Skills:

● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.

● Minimum of 3 years of proven experience as a Data Engineer.

● Strong proficiency in Python programming language and SQL.

● Experience in Databricks, and in setting up and managing data pipelines and data warehouses/lakes.

● Good comprehension and critical thinking skills.


● Kindly note that the salary bracket will vary according to the candidate's experience:

- Experience from 4 yrs to 6 yrs - Salary upto 22 LPA

- Experience from 5 yrs to 8 yrs - Salary upto 30 LPA

- Experience more than 8 yrs - Salary upto 40 LPA

Data Axle

at Data Axle

2 candid answers
Eman Khan
Posted by Eman Khan
Pune
9 - 12 yrs
Best in industry
Python
PySpark
Machine Learning (ML)
SQL
Data Science
+1 more

Roles & Responsibilities:  

We are looking for a Data Scientist to join the Data Science Client Services team to continue our success in identifying high-quality target audiences that generate profitable marketing returns for our clients. We are looking for experienced data science, machine learning, and MLOps practitioners to design, build, and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.


We are looking for a Lead Data Scientist who will be responsible for  

  • Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture (see the sketch after this list)
  • Design or enhance ML workflows for data ingestion, model design, model inference, and scoring
  • Oversight of team project execution and delivery
  • Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies  
  • Visualize and publish model performance results and insights to internal and external audiences  
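
For context on the first responsibility above, here is a minimal, hypothetical sketch of training and logging one such model with XGBoost and MLflow. The feature file, column names, hyperparameters, and metric are placeholders, not the team's actual workflow.

# Hypothetical example: train a propensity model and log it to MLflow.
import mlflow
import mlflow.xgboost
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_parquet("audience_features.parquet")  # placeholder feature table
X = df.drop(columns=["responded"])
y = df["responded"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="propensity-baseline"):
    model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.05)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_metric("test_auc", auc)                       # publish the headline metric
    mlflow.xgboost.log_model(model, artifact_path="model")   # version the trained artifact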


Qualifications:  

  • Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)  
  • Minimum of 9+ years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
  • Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)  
  • Proficiency in Python and SQL required; PySpark/Spark experience a plus  
  • Ability to conduct productive peer reviews and maintain proper code structure in GitHub
  • Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)  
  • Working knowledge of modern CI/CD methods  


This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level. 

QAgile Services

at QAgile Services

1 recruiter
Radhika Chotai
Posted by Radhika Chotai
Pune
4 - 8 yrs
₹7L - ₹18L / yr
Java
JavaScript
HTML/CSS
PostgreSQL
SQL
+8 more

Key Responsibilities would include: 


1. Design, develop, and maintain enterprise-level Java applications. 

2. Collaborate with cross-functional teams to gather and analyze requirements, and implement solutions. 

3. Develop & customize the application using HTML5, CSS, and jQuery to create dynamic and responsive user interfaces. 

4. Integrate with relational databases (RDBMS) to manage and retrieve data efficiently. 

5. Write clean, maintainable, and efficient code following best practices and coding standards. 

6. Participate in code reviews, debugging, and testing to ensure high-quality deliverables. 

7. Troubleshoot and resolve issues in existing applications and systems. 


Qualification requirement - 


1. 4 years of hands-on experience in Java/J2EE development, preferably with enterprise-level projects.

2. Spring Framework, including SOA, AOP, and Spring Security.

3. Proficiency in web technologies including HTML5, CSS, jQuery, and JavaScript.

4. Experience with RESTful APIs and web services.

5. Knowledge of build tools like Maven or Gradle

6. Strong knowledge of relational databases (e.g., MySQL, PostgreSQL, Oracle) and experience with SQL.

7. Experience with version control systems like Git.

8. Understanding of software development lifecycle (SDLC) 

9. Strong problem-solving skills and attention to detail.

Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Pune, Ahmedabad
4 - 9 yrs
₹10L - ₹35L / yr
Python
pytest
Amazon Web Services (AWS)
Test Automation (QA)
SQL

At least 5 years of experience in testing and developing automation tests.

A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.

Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.

Familiarity with Playwright or other browser application testing frameworks is a significant advantage.

Proficiency in object-oriented programming and principles is required.

Extensive knowledge of AWS services is essential.

Strong expertise in REST API testing and SQL is required.
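
To illustrate the REST API testing expectation above, here is a minimal pytest sketch; the base URL, endpoint, and response fields are hypothetical placeholders rather than a real service contract.

# Hypothetical pytest checks against a placeholder REST endpoint.
import pytest
import requests

BASE_URL = "https://api.example.com"  # placeholder service under test


@pytest.fixture(scope="session")
def session():
    s = requests.Session()
    s.headers.update({"Accept": "application/json"})
    return s


def test_get_trade_returns_expected_fields(session):
    resp = session.get(f"{BASE_URL}/trades/123", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Contract checks: required fields and basic typing.
    assert body["trade_id"] == "123"
    assert isinstance(body["quantity"], int)


def test_unknown_trade_returns_404(session):
    resp = session.get(f"{BASE_URL}/trades/does-not-exist", timeout=10)
    assert resp.status_code == 404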

A solid understanding of testing and development life cycle methodologies is necessary.

Knowledge of the financial industry and trading systems is a plus.

Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Pune
3 - 5 yrs
₹6L - ₹15L / yr
Software Testing (QA)
SQL
TestNG
Selenium
Automation

Job Title: Sr. QA Engineer

Location: Baner, Pune

Mode - Hybrid


Major Responsibilities:


  • Understand product requirements and design test plans/ test cases.
  • Collaborate with developers for discussing story design/ test cases/code walkthrough etc.
  • Design automation strategy for regression test cases.
  • Execute tests and collaborate with developers in case of issues.
  • Review unit test coverage/ enhance existing unit test coverage
  • Automate integration/end-to-end tests using JUnit/Mockito/Selenium/Cypress


Requirements: 


  • Experience of web application testing/ test automation
  • Good analytical skills
  • Exposure to test design techniques
  • Exposure to Agile Development methodology, Scrums
  • Should be able to read and understand code.
  • Review and understand unit test cases/ suggest additional unit-level coverage points.
  • Exposure to multi-tier web application deployment/architecture (SpringBoot)
  • Good exposure to SQL query language
  • Exposure to Configuration management tool for code investigation - GitHub
  • Exposure to Web Service / API testing
  • Cucumber – use case-driven test automation
  • System understanding, writing test cases from scratch, requirement analysis, thinking from a user perspective, test designing, and requirement analysis


Innominds

at Innominds

1 video
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Pune
5yrs+
Upto ₹35L / yr (Varies)
Java
Amazon Web Services (AWS)
SQL
Internet of Things (IOT)
Spring
+1 more

In your role as Software Engineer/Lead, you will directly work with other developers, Product Owners, and Scrum Masters to evaluate and develop innovative solutions. The purpose of the role is to design, develop, test, and operate a complex set of applications or platforms in the IoT Cloud area.


The role involves using advanced tools and analytical methods to gather facts and develop solution scenarios. The job holder needs to be able to deliver quality code, review code, and collaborate with other developers.


We have an excellent mix of people, which we believe makes for a more vibrant, more innovative, and more productive team.


  • A bachelor’s degree, or master’s degree in information technology, computer science, or other relevant education
  • At least 5 years of experience as Software Engineer, in an enterprise context
  • Experience in design, development and deployment of large-scale cloud-based applications and services
  • Good knowledge of cloud (AWS) serverless application development, event-driven architecture, and SQL/NoSQL databases
  • Experience with IoT products, backend services, and design principles
  • Good knowledge of at least one backend technology, such as Node.js (JavaScript, TypeScript) or the JVM (Java, Scala, Kotlin)
  • Passionate about code quality, security, and testing
  • Microservice development experience with Java (Spring) is a plus
  • Good command of English, both oral and written


Xebia IT Architects

at Xebia IT Architects

2 recruiters
Vijay S
Posted by Vijay S
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Chennai, Bhopal, Jaipur
10 - 15 yrs
₹30L - ₹40L / yr
Spark
Google Cloud Platform (GCP)
Python
Apache Airflow
PySpark
+1 more

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.


  • Shift: 2 PM to 11 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or those with a notice period of up to 30 days


Key Responsibilities:

  • Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
  • Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers (a skeletal DAG sketch follows this list).
  • Ensure data integrity, consistency, and availability across all systems.
  • Collaborate with data engineers, analysts, and stakeholders to optimize performance.
  • Document standards and best practices for data engineering workflows.
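
As referenced in the second bullet above, the sketch below outlines a skeletal Airflow DAG for a Raw → Silver → Gold flow. The task bodies are stubs, the DAG id and schedule are placeholders, and it assumes a recent Airflow 2.x release; it only illustrates the dependency shape, not the actual framework.

# Skeletal DAG sketch -- task bodies are stubs, names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_raw():
    print("load source files into the raw layer")            # placeholder


def build_silver():
    print("clean and conform raw data into silver tables")   # placeholder


def build_gold():
    print("aggregate silver data into gold marts")            # placeholder


with DAG(
    dag_id="raw_silver_gold_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # the 'schedule' argument assumes Airflow 2.4+
    catchup=False,
) as dag:
    raw = PythonOperator(task_id="ingest_raw", python_callable=ingest_raw)
    silver = PythonOperator(task_id="build_silver", python_callable=build_silver)
    gold = PythonOperator(task_id="build_gold", python_callable=build_gold)

    raw >> silver >> gold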

Required Experience:


  • 7-8 years of experience in data engineering, architecture, and pipeline development.
  • Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
  • Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
  • Understanding of Data Lake table formats (Delta, Iceberg, etc.).
  • Proficiency in Python for scripting and automation.
  • Strong problem-solving skills and collaborative mindset.


⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/

Deqode

at Deqode

1 recruiter
Roshni Maji
Posted by Roshni Maji
Bengaluru (Bangalore), Pune, Gurugram, Chennai, Bhopal, Jaipur
5 - 10 yrs
₹15L - ₹24L / yr
Tableau
SQL

Job Description:

We are seeking a Tableau Developer with 5+ years of experience to join our Core Analytics team. The candidate will work on large-scale BI projects using Tableau and related tools.


Must Have:

  • Strong expertise in Tableau Desktop and Server, including add-ons like Data and Server Management.
  • Ability to interpret business requirements, build wireframes, and finalize KPIs, calculations, and designs.
  • Participate in design discussions to implement best practices for dashboards and reports.
  • Build scalable BI and Analytics products based on feedback while adhering to best practices.
  • Propose multiple solutions for a given problem, leveraging toolset functionality.
  • Optimize data sources and dashboards while ensuring business requirements are met.
  • Collaborate with product, platform, and program teams for timely delivery of dashboards and reports.
  • Provide suggestions and take feedback to deliver future-ready dashboards.
  • Peer review team members’ dashboards, offering constructive feedback to improve overall design.
  • Proficient in SQL, UI/UX practices, and Alation, with an understanding of good data models for reporting.
  • Mentor less experienced team members.


Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Pune
2 - 5 yrs
₹3L - ₹10L / yr
PySpark
Amazon Web Services (AWS)
AWS Lambda
SQL
Data engineering
+2 more


Here is the Job Description - 


Location -- Viman Nagar, Pune

Mode - 5 Days Working


Required Tech Skills:


 ● Strong at PySpark, Python

 ● Good understanding of Data Structure 

 ● Good at SQL query/optimization 

 ● Strong fundamentals of OOPs programming 

 ● Good understanding of AWS Cloud, Big Data. 

 ● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB  


Jio Tesseract
TARUN MISHRA
Posted by TARUN MISHRA
Bengaluru (Bangalore), Pune, Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Navi Mumbai, Kolkata, Rajasthan
5 - 24 yrs
₹9L - ₹70L / yr
C
C++
Visual C++
Embedded C++
Artificial Intelligence (AI)
+32 more

JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.


Mon-Fri role, in office, with excellent perks and benefits!


Position Overview

We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.


Key Responsibilities:

1. System Architecture & Design

● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.

● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.

● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.


2. Perception & AI Integration

● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.

● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.

● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.


3. Embedded & Real-Time Systems

● Design high-performance embedded software stacks for real-time robotic control and autonomy.

● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms.

● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.


4. Robotics Simulation & Digital Twins

● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.

● Leverage synthetic data generation (Omniverse Replicator) for training AI models.

● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.


5. Navigation & Motion Planning

● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.

● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.

● Implement reinforcement learning-based policies using Isaac Gym.


6. Performance Optimization & Scalability

● Ensure low-latency AI inference and real-time execution of robotics applications.

● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.

● Develop benchmarking and profiling tools to measure software performance on edge AI devices.


Required Qualifications:

● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.

● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.

● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.

● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.

● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.

● Strong background in robotic perception, planning, and real-time control.

● Experience with cloud-edge AI deployment and scalable architectures.


Preferred Qualifications

● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym

● Knowledge of robot kinematics, control systems, and reinforcement learning

● Expertise in distributed computing, containerization (Docker), and cloud robotics

● Familiarity with automotive, industrial automation, or warehouse robotics

● Experience designing architectures for autonomous systems or multi-robot systems.

● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics

● Experience with microservices or service-oriented architecture (SOA)

● Knowledge of machine learning and AI integration within robotic systems

● Knowledge of testing on edge devices with HIL and simulations (Isaac Sim, Gazebo, V-REP etc.)

DeepIntent

at DeepIntent

2 candid answers
17 recruiters
Indrajeet Deshmukh
Posted by Indrajeet Deshmukh
Pune
4 - 8 yrs
Best in industry
SQL
Java
Spring Boot
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+1 more

What You’ll Do:


* Establish formal data practice for the organisation.

* Build & operate scalable and robust data architectures.

* Create pipelines for the self-service introduction and usage of new data.

* Implement DataOps practices

* Design, Develop, and operate Data Pipelines which support Data scientists and machine learning Engineers.

* Build simple, highly reliable Data storage, ingestion, and transformation solutions which are easy to deploy and manage.

* Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.

 

Who You Are:


* Experience in designing, developing and operating configurable Data pipelines serving high volume and velocity data.

* Experience working with public clouds like GCP/AWS.

* Good understanding of software engineering, DataOps, data architecture, Agile and DevOps methodologies.

* Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.

* Proficient with SQL, Java, Spring Boot, Python or a JVM-based language, and Bash.

* Experience with Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.

* Good communication skills with the ability to collaborate with both technical and non-technical people.

* Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious

Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Pune
7yrs+
Upto ₹35L / yr (Varies)
SailPoint
IIQ
IdentityNow
Java
XML
+2 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.


Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.


Position Summary:

As Architect, you will be responsible for designing, implementing, and managing SailPoint IdentityNow (IIQ) solutions to ensure effective identity governance and access management across our enterprise. You will work closely with stakeholders to understand their requirements, develop solutions that align with business objectives, and oversee the deployment and integration of SailPoint technologies.


Key Responsibilities:


Architect and Design Solutions:

= Design and architect SailPoint IIQ solutions that meet business needs and align with IT strategy.

= Develop detailed technical designs, including integration points, workflows, and data models.

Implementation and Integration:

= Lead the implementation and configuration of SailPoint IIQ, including connectors, identity governance, and compliance features.

= Integrate SailPoint with various systems, applications, and directories (e.g., Active Directory, LDAP, databases).

Project Management:

= Manage project timelines, resources, and deliverables to ensure successful deployment of SailPoint IIQ solutions.

= Coordinate with cross-functional teams to address project requirements, risks, and issues.

Customization and Development:

= Customize SailPoint IIQ functionalities, including developing custom connectors, workflows, and rules.

= Develop and maintain documentation related to architecture, configurations, and customizations.

Support and Troubleshooting:

= Provide ongoing support for SailPoint IIQ implementations, including troubleshooting and resolving technical issues.

= Conduct regular reviews and performance tuning to optimize the SailPoint environment.

Compliance and Best Practices:

= Ensure SailPoint IIQ implementations adhere to industry best practices, security policies, and regulatory requirements.

= Stay current with SailPoint updates and advancements, and recommend improvements and enhancements.

Collaboration and Training:

= Collaborate with business and IT stakeholders to understand requirements and translate them into technical solutions.

= Provide training and support to end-users and internal teams on SailPoint IIQ functionalities and best practices.



Education and Experience:

  1. Bachelor’s degree in computer science, Information Technology, or a related field.
  2. Minimum of 5 years of experience with identity and access management (IAM) solutions, with a strong focus on SailPoint IIQ.
  3. Proven experience in designing and implementing SailPoint IIQ solutions in complex environments.

 


Technical Skills:

  1. Expertise in SailPoint IIQ architecture, configuration, and customization.
  2. Strong knowledge of identity governance, compliance, and role-based access control (RBAC).
  3. Experience with integration of SailPoint with various systems and applications.
  4. Proficiency in Java, XML, SQL, and other relevant technologies.


Certification Preferred:

1. SailPoint IIQ Certification (e.g., SailPoint Certified Implementation Engineer).

2. Other relevant IAM or security certifications (e.g., CISSP, CISM).

Adesso

Adesso

Agency job
via HashRoot by Maheswari M
Kochi (Cochin), Chennai, Pune
3 - 6 yrs
₹4L - ₹24L / yr
Data engineering
Amazon Web Services (AWS)
Windows Azure
Snowflake
Data Transformation Tool (DBT)
+3 more

We are seeking a skilled Cloud Data Engineer who has experience with cloud data platforms like AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.

Responsibilities:

Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool) 

Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.

Develop data routes: You design scalable and powerful data management processes.

Analyze data: You derive sound findings from data sets and present them in an understandable way.

Requirements:

Requirements management and project experience: You successfully implement cloud-based data & analytics projects.

Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.

Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).

SQL know-how: You have a sound and solid knowledge of SQL.

Data management: You are familiar with topics such as master data management and data quality.

Bachelor's degree in computer science, or a related field.

Strong communication and collaboration abilities to work effectively in a team environment.

 

Skills & Requirements

Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.

OnActive
Mansi Gupta
Posted by Mansi Gupta
Gurugram, Pune, Bengaluru (Bangalore), Chennai, Bhopal, Hyderabad, Jaipur
5 - 8 yrs
₹6L - ₹12L / yr
Python
Spark
SQL
AWS CloudFormation
Machine Learning (ML)
+3 more

Level of skills and experience:


5 years of hands-on experience using Python, Spark, and SQL.

Experienced in AWS Cloud usage and management.

Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).

Experience using various ML models and frameworks such as XGBoost, Lightgbm, Torch.

Experience with orchestrators such as Airflow and Kubeflow.

Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).

Fundamental understanding of Parquet, Delta Lake and other data file formats.

Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.

Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.

DataToBiz Pvt. Ltd.

at DataToBiz Pvt. Ltd.

2 recruiters
Vibhanshi Bakliwal
Posted by Vibhanshi Bakliwal
Pune
8 - 12 yrs
₹15L - ₹18L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more

We are seeking a highly skilled and experienced Power BI Lead / Architect to join our growing team. The ideal candidate will have a strong understanding of data warehousing, data modeling, and business intelligence best practices. This role will be responsible for leading the design, development, and implementation of complex Power BI solutions that provide actionable insights to key stakeholders across the organization.


Location - Pune (Hybrid 3 days)


Responsibilities:


Lead the design, development, and implementation of complex Power BI dashboards, reports, and visualizations.

Develop and maintain data models (star schema, snowflake schema) for optimal data analysis and reporting.

Perform data analysis, data cleansing, and data transformation using SQL and other ETL tools.

Collaborate with business stakeholders to understand their data needs and translate them into effective and insightful reports.

Develop and maintain data pipelines and ETL processes to ensure data accuracy and consistency.

Troubleshoot and resolve technical issues related to Power BI dashboards and reports.

Provide technical guidance and mentorship to junior team members.

Stay abreast of the latest trends and technologies in the Power BI ecosystem.

Ensure data security, governance, and compliance with industry best practices.

Contribute to the development and improvement of the organization's data and analytics strategy.

May lead and mentor a team of junior Power BI developers.


Qualifications:


8-12 years of experience in Business Intelligence and Data Analytics.

Proven expertise in Power BI development, including DAX and advanced data modeling techniques.

Strong SQL skills, including writing complex queries, stored procedures, and views.

Experience with ETL/ELT processes and tools.

Experience with data warehousing concepts and methodologies.

Excellent analytical, problem-solving, and communication skills.

Strong teamwork and collaboration skills.

Ability to work independently and proactively.

Bachelor's degree in Computer Science, Information Systems, or a related field preferred.

Intellikart Ventures LLP
Prajwal Shinde
Posted by Prajwal Shinde
Pune
2 - 5 yrs
₹9L - ₹15L / yr
PowerBI
SQL
ETL
Snowflake
Apache Kafka
+1 more

Experience: 4+ years.

Location: Vadodara & Pune

Skill Set - Snowflake, Power BI, ETL, SQL, Data Pipelines

What you'll be doing:

  • Develop, implement, and manage scalable Snowflake data warehouse solutions using advanced features such as materialized views, task automation, and clustering.
  • Design and build real-time data pipelines from Kafka and other sources into Snowflake using Kafka Connect, Snowpipe, or custom solutions for streaming data ingestion (see the sketch after this list).
  • Create and optimize ETL/ELT workflows using tools like DBT, Airflow, or cloud-native solutions to ensure efficient data processing and transformation.
  • Tune query performance, warehouse sizing, and pipeline efficiency by utilizing Snowflake's Query Profiling, Resource Monitors, and other diagnostic tools.
  • Work closely with architects, data analysts, and data scientists to translate complex business requirements into scalable technical solutions.
  • Enforce data governance and security standards, including data masking, encryption, and RBAC, to meet organizational compliance requirements.
  • Continuously monitor data pipelines, address performance bottlenecks, and troubleshoot issues using monitoring frameworks such as Prometheus, Grafana, or Snowflake-native tools.
  • Provide technical leadership, guidance, and code reviews for junior engineers, ensuring best practices in Snowflake and Kafka development are followed.
  • Research emerging tools, frameworks, and methodologies in data engineering and integrate relevant technologies into the data stack.
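
To make the second bullet above concrete, here is a minimal, hypothetical micro-batch bridge from a Kafka topic into a Snowflake table using kafka-python and the Snowflake Python connector. The topic, credentials, and table names are placeholders; production ingestion would more typically rely on Kafka Connect or Snowpipe, as noted in the bullet.

# Hypothetical micro-batch loader: Kafka topic -> Snowflake table.
import json

import snowflake.connector
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "order-events",                        # placeholder topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=False,
)

conn = snowflake.connector.connect(
    account="example_account", user="example_user", password="***",
    warehouse="LOAD_WH", database="RAW", schema="EVENTS",
)
cur = conn.cursor()

batch = []
for message in consumer:                   # runs until interrupted
    batch.append((message.value.get("order_id"), json.dumps(message.value)))
    if len(batch) >= 500:                  # small micro-batches for the example
        cur.executemany(
            "INSERT INTO ORDER_EVENTS (ORDER_ID, PAYLOAD_JSON) VALUES (%s, %s)",
            batch,
        )
        consumer.commit()                  # commit offsets only after the load succeeds
        batch.clear()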


What you need:

Basic Skills:


  • 3+ years of hands-on experience with Snowflake data platform, including data modeling, performance tuning, and optimization.
  • Strong experience with Apache Kafka for stream processing and real-time data integration.
  • Proficiency in SQL and ETL/ELT processes.
  • Solid understanding of cloud platforms such as AWS, Azure, or Google Cloud.
  • Experience with scripting languages like Python, Shell, or similar for automation and data integration tasks.
  • Familiarity with tools like dbt, Airflow, or similar orchestration platforms.
  • Knowledge of data governance, security, and compliance best practices.
  • Strong analytical and problem-solving skills with the ability to troubleshoot complex data issues.
  • Ability to work in a collaborative team environment and communicate effectively with cross-functional teams


Responsibilities:

  • Design, develop, and maintain Snowflake data warehouse solutions, leveraging advanced Snowflake features like clustering, partitioning, materialized views, and time travel to optimize performance, scalability, and data reliability.
  • Architect and optimize ETL/ELT pipelines using tools such as Apache Airflow, DBT, or custom scripts, to ingest, transform, and load data into Snowflake from sources like Apache Kafka and other streaming/batch platforms.
  • Work in collaboration with data architects, analysts, and data scientists to gather and translate complex business requirements into robust, scalable technical designs and implementations.
  • Design and implement Apache Kafka-based real-time messaging systems to efficiently stream structured and semi-structured data into Snowflake, using Kafka Connect, KSQL, and Snowpipe for real-time ingestion.
  • Monitor and resolve performance bottlenecks in queries, pipelines, and warehouse configurations using tools like Query Profile, Resource Monitors, and Task Performance Views.
  • Implement automated data validation frameworks to ensure high-quality, reliable data throughout the ingestion and transformation lifecycle.
  • Pipeline Monitoring and Optimization: Deploy and maintain pipeline monitoring solutions using Prometheus, Grafana, or cloud-native tools, ensuring efficient data flow, scalability, and cost-effective operations.
  • Implement and enforce data governance policies, including role-based access control (RBAC), data masking, and auditing to meet compliance standards and safeguard sensitive information.
  • Provide hands-on technical mentorship to junior data engineers, ensuring adherence to coding standards, design principles, and best practices in Snowflake, Kafka, and cloud data engineering.
  • Stay current with advancements in Snowflake, Kafka, cloud services (AWS, Azure, GCP), and data engineering trends, and proactively apply new tools and methodologies to enhance the data platform. 

