
50+ SQL Jobs in India

Apply to 50+ SQL Jobs on CutShort.io. Find your next job, effortlessly. Browse SQL Jobs and apply today!

Staffnixcom
Bengaluru (Bangalore)
4 - 8 yrs
₹21L - ₹27L / yr
SQL
Python

Strong Pharma Analytics Profile

Mandatory (Experience): Must have 4+ years of experience as an analytics consultant, with at least 2 years in the pharma domain

Mandatory (Skill 1): Must have hands-on experience working with patient-level datasets (claims, EHR, lab, pharmacy data)

Mandatory (Skill 2): Must have worked on patient journey analysis, treatment patterns, disease progression, and advanced analytics

Mandatory (Skill 3): Must have experience with SQL, Python, and predictive modelling (regression, classification, clustering)

Mandatory (Skill 4): Must have experience combining multiple healthcare datasets and building longitudinal patient views

Mandatory (Skill 5): Ability to translate complex analysis into actionable business/clinical insights

Mandatory (Skill 6): Must have experience with time-series analysis and/or survival analysis, specifically to study treatment duration, patient drop-off, or retention trends

Mandatory (Skill 7): Must have experience building risk stratification models using ML techniques to prioritize patients based on clinical or behavioural risk factors

Mandatory (Company): PharmaTech/life sciences companies
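The survival-analysis requirement in Skill 6 (treatment duration, patient drop-off, retention) is commonly approached with a Kaplan-Meier estimator. A minimal pure-Python sketch, using invented durations and drop-off flags, might look like this:

```python
from collections import Counter

def kaplan_meier(durations, events):
    """Kaplan-Meier survival curve: S(t) multiplies (1 - d_i / n_i)
    over each event time t_i <= t, where d_i patients drop off out of
    n_i still at risk. Censored patients (event = 0) only shrink the
    risk set; they contribute no drop in the curve."""
    at_risk = len(durations)
    drops = Counter(t for t, e in zip(durations, events) if e == 1)
    leaving = Counter(durations)  # everyone exits the risk set at their observed time
    survival, s = {}, 1.0
    for t in sorted(leaving):
        if t in drops:
            s *= 1.0 - drops[t] / at_risk
            survival[t] = s
        at_risk -= leaving[t]
    return survival

# Hypothetical treatment durations (months); 0 marks a still-on-treatment (censored) patient
durations = [1, 2, 3, 4]
events = [1, 1, 0, 1]
curve = kaplan_meier(durations, events)
print(curve)  # {1: 0.75, 2: 0.5, 4: 0.0}
```

Retention at month 2 is 50% here; the censored patient at month 3 leaves the risk set without pulling the curve down.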

Redtring
Hyderabad
0 - 3 yrs
₹6L - ₹18L / yr
Zapier
n8n
Selenium
Workflow management
Python
+4 more

Job Title: Full Stack & Automation Engineer (FinTech Preferred)

Location: On-site | 6 Days/Week

Salary: ₹50,000 – ₹1,50,000/month (Based on experience)

About the Role:

Nova Orbit is looking for a highly capable technical operator who can build scalable systems, automate internal workflows, and strengthen backend infrastructure for a fast-growing finance and investment ecosystem focused on unlisted markets and institutional operations.

Key Responsibilities:

  • Build and manage full stack applications for internal tools, dashboards, and business operations
  • Design scalable backend architecture with strong system design, APIs, database, and security practices
  • Deploy and manage cloud infrastructure across AWS / Azure
  • Automate repetitive tasks across departments including operations, CRM, finance, reporting, and customer workflows
  • Build workflow automation systems, integrations, bots, and process pipelines
  • Improve efficiency by reducing manual processes through technology

Required Skills:

  • Strong full stack development (Frontend + Backend)
  • Deep backend architecture and system design expertise
  • AWS / Azure cloud deployment and infrastructure management
  • Proficiency in Python, Node.js, JavaScript/TypeScript, SQL
  • API development, microservices, database optimization
  • Automation tools like Zapier, n8n, Make, Power Automate, Selenium, or custom scripting
  • Workflow design, process mapping, task automation, and RPA concepts
  • DevOps, CI/CD pipelines, Docker, Git
  • Strong analytical and problem-solving skills

Preferred:

  • FinTech / BFSI experience (payments, investment platforms, compliance, CRM automation)
  • Experience with internal business process automation across multiple departments

Ideal Candidate:

A systems-driven builder who can combine software engineering, backend scalability, cloud expertise, and workflow automation to optimize company-wide operations.


https://loopx.redstring.co.in/form/6a044e6103b64ed11120a12d

Amura Health

Posted by Sangeetha A
Chennai
4 - 7 yrs
₹1L - ₹30L / yr
Python
SQL
Amazon Web Services (AWS)
ELT
ETL

Data Engineer at Amura


Amura’s Vision

We believe that the most under-appreciated route to releasing untapped human potential is to build a healthier body and, through it, a better brain. This allows us to do more of everything that is important to each one of us. Billions of healthier brains, sitting in healthier bodies, can take up more complex problems that defy solutions today, including many existential threats, and solve them in just a few decades.

Billions of healthier brains will make the world richer beyond what we can imagine today. The surplus wealth, combined with better human capabilities, will lead us to a new renaissance, giving us a richer and more beautiful culture. These healthier brains will be equipped with deeper intellect, be less acrimonious, more magnanimous, and have a kinder outlook on the world, resulting in a world that is better than any previous time.

We find this vision of the future exhilarating. Our hopes and dreams are to create this future as quickly as possible and ensure that it is widely distributed and optimized to maximize all forms of human excellence.


Role Overview

We are looking for a hands-on Data Engineer to design, build, and maintain scalable data pipelines and data platforms. You will work on ingesting, transforming, and serving data reliably for analytics, reporting, and downstream applications, collaborating closely with backend engineers, analysts, and data scientists. This role is ideal for someone who enjoys building robust data systems, working with large datasets, and writing clean, production-grade code.


Key Responsibilities


Data Pipelines & Development

  • Build and maintain reliable ETL/ELT pipelines for batch and near-real-time data processing.
  • Ingest data from multiple sources (databases, APIs, event streams, files).
  • Transform raw data into clean, analytics-ready datasets.
  • Optimize pipelines for performance, scalability, and cost.
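In miniature, the extract-transform-load loop described above can be sketched in plain Python (the source records, field names, and target table here are invented for illustration, with SQLite standing in for the warehouse):

```python
import sqlite3

# Extract: raw records as they might arrive from an API or file drop
raw_events = [
    {"user_id": "42", "event": "LOGIN ", "ts": "2024-01-05"},
    {"user_id": "42", "event": "logout", "ts": "2024-01-05"},
    {"user_id": "",   "event": "login",  "ts": "2024-01-06"},  # fails validation
]

# Transform: validate, normalize casing/whitespace, drop rows that fail checks
clean = [
    (int(r["user_id"]), r["event"].strip().lower(), r["ts"])
    for r in raw_events
    if r["user_id"].isdigit()
]

# Load: write the analytics-ready rows to the target table
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, event TEXT, ts TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)", clean)
rows = con.execute("SELECT COUNT(*), MIN(event) FROM events").fetchone()
print(rows)  # (2, 'login')
```

Production pipelines add orchestration, retries, and quality checks around this core, but the extract/transform/load shape is the same.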


Data Storage & Modeling

  • Design and manage data models in data warehouses or data lakes.
  • Work with SQL and NoSQL databases and modern data warehouses.
  • Implement partitioning, indexing, and efficient query patterns.
  • Maintain documentation for schemas, pipelines, and transformations.


Cloud & Tooling

  • Build data solutions on cloud platforms (AWS preferred).
  • Use services such as S3, Redshift, Athena, Glue, EMR, Lambda, Kinesis, or equivalents.
  • Work with orchestration tools like Airflow or similar schedulers.
  • Use version control, CI/CD, and Infrastructure-as-Code where applicable.


Data Quality & Reliability

  • Implement data validation, monitoring, and alerting for pipelines.
  • Troubleshoot data issues and ensure pipeline reliability.
  • Collaborate with stakeholders to resolve data discrepancies.


Collaboration

  • Partner with analytics, product, and engineering teams to understand data needs.
  • Support analysts and data scientists with clean, accessible datasets.
  • Participate in code reviews and contribute to data engineering best practices.

What We’re Looking For


  • Experience: 4-6+ years of experience as a Data Engineer / Data Developer.
  • Programming: Strong programming skills in Python.
  • Databases: Excellent knowledge of SQL and relational data modeling.
  • Pipelines: Experience building ETL/ELT pipelines in production.
  • Cloud: Hands-on experience with cloud-based data platforms (AWS preferred).
  • Concepts: Understanding of data warehousing concepts and best practices.


Nice to Have:

  • Experience with Spark, Kafka, dbt, or Flink.
  • Familiarity with orchestration tools like Airflow.
  • Experience with streaming or event-driven data pipelines.
  • Exposure to data quality or observability tools.
  • Experience working with large-scale or high-volume datasets.

Additional Information

  • Office Location: Chennai (Velachery).
  • Work Model: Work from Office - because great stories are built in person!
  • Online Presence: https://amura.ai (@AmuraHealth on all social media).


VRT Management Group
Posted by Archana Chakali
Hyderabad
0 - 3 yrs
₹3.5L - ₹6.5L / yr
SQL
Power BI
Microsoft Excel
MySQL
PostgreSQL
+3 more

Job Title: BI Analyst

Company: VRT Management Group, LLC

Location: Santosh Nagar, Hyderabad (Onsite)

Type: Full-time


About VRT Management Group

VRT Management Group is an entrepreneurial consulting company founded in 2008 in the USA. Our mission is to empower small and medium-scale business leaders across the United States through high-impact programs such as EGA, EGOS, and Entrepreneurial Edge.


With our growing Hyderabad operations, we are building a strong Data & Analytics team to support business intelligence, reporting, and data-driven decision-making across the organization.


Role Overview

As a BI Analyst, you will play a key role in transforming raw data into meaningful insights that support business growth and strategic decision-making. This is a hands-on role focused on data analysis, dashboard development, reporting automation, and KPI tracking using Power BI and SQL.


You will work closely with leadership and cross-functional teams to understand business requirements, build dashboards, and deliver actionable insights that improve operational and business performance.


Key Responsibilities

  • Develop and maintain interactive dashboards and reports using Power BI.
  • Extract, clean, validate, and analyze data from multiple sources.
  • Build and manage SQL queries, datasets, and reporting workflows.
  • Track and monitor KPIs and business performance metrics.
  • Automate recurring reports and dashboard refresh processes.
  • Translate business requirements into clear and actionable dashboards.
  • Ensure data accuracy, consistency, and reporting reliability.
  • Present insights in a business-friendly and easy-to-understand manner.
  • Collaborate with cross-functional teams to support data-driven decision-making.
  • Maintain proper documentation for reports, dashboards, and workflows.
  • Continuously improve reporting processes and dashboard usability.
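For a flavor of the SQL and KPI work listed above: the aggregate behind a typical Power BI card or bar chart (schema and numbers invented) can be sketched with SQLite standing in for the reporting database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("North", "2024-01", 120.0), ("North", "2024-02", 150.0),
     ("South", "2024-01", 90.0)],
)

# KPI: total revenue per region -- the dataset a dashboard visual would bind to
kpis = con.execute("""
    SELECT region, SUM(revenue) AS total_revenue
    FROM sales
    GROUP BY region
    ORDER BY region
""").fetchall()
print(kpis)  # [('North', 270.0), ('South', 90.0)]
```

In practice the same query would live in a Power BI dataset or a scheduled reporting workflow rather than an in-memory database.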


Required Skills / What We’re Looking For

  • 1+ years of experience in Business Intelligence, Data Analytics, or related roles.
  • Strong hands-on experience with Power BI and dashboard creation.
  • Good knowledge of SQL and relational databases (MySQL/PostgreSQL).
  • Understanding of data cleaning, transformation, and reporting workflows.
  • Ability to understand business requirements and convert them into reports and insights.
  • Strong analytical and problem-solving skills.
  • Good communication and presentation skills.
  • High ownership, attention to detail, and execution mindset.
  • Ability to work in a fast-paced and collaborative environment.


Tools Exposure 

  • Excel / Google Sheets
  • SQL / MySQL / PostgreSQL
  • Power BI
  • Python basics for analytics
  • Data automation and reporting tools


What You’ll Gain

  • Real-world experience in business intelligence and analytics.
  • Opportunity to work closely with leadership and business teams.
  • Exposure to KPI reporting, dashboarding, and data-driven strategy execution.
  • Hands-on experience with business reporting systems and automation.
  • Strong learning and growth opportunities within a fast-paced entrepreneurial environment.
  • Friendly and positive office work environment.
  • Competitive corporate salary package.


NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
3 - 4 yrs
Best in industry
.NET
ASP.NET
C#
.NET Compact Framework
SQL
+7 more

Company Description:

NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.


Brief Description:

NonStop io is seeking a proficient .NET Developer to join our growing team. You will be responsible for developing, enhancing, and maintaining scalable applications using .NET technologies. This role involves working on a healthcare-focused product and requires strong problem-solving skills, attention to detail, and a passion for software development.


Responsibilities:

  • Design, develop, and maintain applications using .NET Core/.NET Framework, C#, and related technologies
  • Write clean, scalable, and efficient code while following best practices
  • Develop and optimize APIs and microservices
  • Work with SQL Server and other databases to ensure high performance and reliability
  • Collaborate with cross-functional teams, including UI/UX designers, QA, and DevOps
  • Participate in code reviews and provide constructive feedback
  • Troubleshoot, debug, and enhance existing applications
  • Ensure compliance with security and performance standards, especially for healthcare-related applications


Qualifications & Skills:

  • Strong experience in .NET Core/.NET Framework and C#
  • Proficiency in building RESTful APIs and microservices architecture
  • Experience with Entity Framework, LINQ, and SQL Server
  • Familiarity with front-end technologies like React, Angular, or Blazor is a plus
  • Knowledge of cloud services (Azure/AWS) is a plus
  • Experience with version control (Git) and CI/CD pipelines
  • Strong understanding of object-oriented programming (OOP) and design patterns
  • Prior experience in healthcare tech or working with HIPAA-compliant systems is a plus


Why Join Us?

  • Opportunity to work on a cutting-edge healthcare product
  • A collaborative and learning-driven environment
  • Exposure to AI and software engineering innovations
  • Excellent work ethics and culture

If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Cymetrix Software

Posted by Netra Shettigar
Mumbai
7 - 10 yrs
₹5L - ₹14L / yr
Tableau
SQL
Data Visualization

Requirement Details:

  • Role: Sr. Tableau Developer
  • Experience: 7+ Years
  • Engagement Type: Freelancer / Consultant
  • Work Duration: 6–8 Hours Daily
  • Location: Mumbai (Hybrid)
  • Work Mode: 2 Days Work from Office / As & When Required by Management for a Few Hours
  • Rate: ₹500 – ₹700 per hour

Required Skills:

  • Strong experience in Tableau Development & Dashboard Creation
  • Expertise in SQL and Data Visualization
  • Experience with Data Warehousing concepts
  • Ability to work independently with business stakeholders
  • Good communication and analytical skills

Interested candidates can share:

  • Updated Resume
  • Current Rate / Expected Rate
  • Availability to Join
  • Current Location
  • Total & Relevant Tableau Experience


Wissen Technology

Posted by Khushbu Parida
Bengaluru (Bangalore)
8 - 15 yrs
Best in industry
Automation Anywhere
A360
end-to-end RPA solutions
AARI
COPILOT
+4 more

Looking for a Solution Architect in Automation Anywhere A360 who can design and scale enterprise-grade RPA solutions. The role is a mix of architecture + hands-on + leadership, where you’ll be working on things like AARI (Copilot), Document Automation, and APA capabilities.

You’ll be driving end-to-end solution design, ensuring scalability, governance, and working closely with business stakeholders while mentoring developers.


Requirements:

1. Strong A360 Expertise

  • Hands-on experience with Automation Anywhere A360
  • Built and deployed bots in enterprise environments (not POCs)

2. Architecture Mindset

  • Designed end-to-end RPA solutions
  • Understands scalability, security, and governance frameworks

3. New Capabilities Exposure

  • Experience with, or at least exposure to:
      • AARI (Copilot)
      • Document Automation
      • APA (Automation Process Automation)

4. Leadership + Stakeholder Handling

  • Has led teams / mentored developers
  • Comfortable working with business stakeholders

5. Tech Add-ons (Good to have)

  • Python / SQL
  • Exposure to AI / LLM integrations
Service Co


Agency job
via Vikash Technologies by Rishika Teja
Gurugram, Noida, Delhi
2 - 3 yrs
₹5L - ₹7L / yr
Python
SQL
Linux/Unix

  • Strong hands-on experience in Python development.
  • Good understanding of SQL and database concepts.
  • Hands-on working experience in a Linux/Unix environment.
  • Knowledge of scripting, debugging, and application troubleshooting.
  • Understanding of APIs, data processing, and backend development concepts.

Remote only
5 - 7 yrs
₹10L - ₹25L / yr
Python
SQL
RDBMS
ETL
Database Design
+7 more

Sr. DE / Data Engineer (Healthcare Data & SQL Expert)

Experience Level: 5–7 Years

Focus: Database Design, Advanced SQL, ETL/ELT Pipelines, and Healthcare Interoperability.

Summary

We are looking for a highly skilled Senior Data Engineer to join our healthcare data team. This role is perfect for a technical powerhouse who excels at building robust data pipelines and deeply understands database internals. You will be responsible for designing schemas, writing complex stored procedures, and optimizing SQL performance to handle clinical and claims data at scale. You will bridge the gap between raw data ingestion and high-performance analytics, ensuring all solutions meet HIPAA and FHIR standards.

What You’ll Do

1. Advanced SQL & Database Development

  • Schema Design: Design and implement relational schemas (MSSQL, PostgreSQL, Oracle) ensuring data integrity through constraints, triggers, and normalized structures.
  • Programmability: Write and maintain sophisticated Stored Procedures, Functions, and Views to handle complex business logic within the database layer.
  • Performance Tuning: Own query optimization. You should be the expert in reading EXPLAIN/ANALYZE plans, implementing advanced indexing strategies (Clustered, Non-Clustered, Columnstore), and managing partitioning.
  • Data Modeling: Build and manage dimensional models (Star/Snowflake) and implement Slowly Changing Dimensions (SCD Types 1, 2, and 4).
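The SCD Type 2 pattern named in the Data Modeling bullet keeps history by closing the current dimension row and inserting a new version. A minimal sketch against SQLite (the table and column names are illustrative, not from any particular schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE dim_patient (
        patient_id INTEGER,   -- business key
        plan       TEXT,
        valid_from TEXT,
        valid_to   TEXT,      -- NULL while the row is current
        is_current INTEGER
    )
""")
con.execute("INSERT INTO dim_patient VALUES (1, 'bronze', '2023-01-01', NULL, 1)")

def scd2_update(con, patient_id, new_plan, change_date):
    """Type 2 change: expire the current version, insert the new one."""
    cur = con.execute(
        "SELECT plan FROM dim_patient WHERE patient_id = ? AND is_current = 1",
        (patient_id,),
    ).fetchone()
    if cur and cur[0] == new_plan:
        return  # attribute unchanged: nothing to version
    con.execute(
        "UPDATE dim_patient SET valid_to = ?, is_current = 0 "
        "WHERE patient_id = ? AND is_current = 1",
        (change_date, patient_id),
    )
    con.execute(
        "INSERT INTO dim_patient VALUES (?, ?, ?, NULL, 1)",
        (patient_id, new_plan, change_date),
    )

scd2_update(con, 1, "gold", "2024-06-01")
history = con.execute(
    "SELECT plan, is_current FROM dim_patient WHERE patient_id = 1 ORDER BY valid_from"
).fetchall()
print(history)  # [('bronze', 0), ('gold', 1)]
```

A Type 1 change would instead overwrite `plan` in place, losing the old value; Type 2 trades storage for a full audit trail of the attribute.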


2. Data Engineering & Ingestion

  • Pipeline Development: Build and operate scalable ETL/ELT pipelines using Python and SQL to ingest data from EHRs, REST APIs, and flat files.
  • Orchestration: Use Apache Airflow to schedule jobs, manage dependencies, and implement robust retry/alerting logic.
  • API Integration: Develop Python-based ingestion frameworks that handle OAuth, pagination, and throttling for third-party healthcare data partners.

3. Healthcare Interoperability & Compliance

  • Standards: Map complex clinical data to HL7 FHIR resources and curated analytic layers.
  • Security: Implement "Privacy by Design" by enforcing HIPAA safeguards, including encryption at rest, access controls, and PII/PHI de-identification.

4. Operational Excellence

  • CI/CD: Use GitHub and automated pipelines to deploy database changes and data code.
  • Observability: Implement data quality tests (using tools like dbt or custom Python/SQL checks) to monitor freshness and accuracy.

What You’ll Bring

  • Experience: 5–7 years of professional data engineering experience, with a heavy emphasis on backend database development.
  • The SQL Expert Toolkit:
      • Expert SQL: Window functions, CTEs, recursive queries, and set-based transformations.
      • DB Internals: Deep knowledge of MSSQL, PostgreSQL, or Oracle. You should understand how the engine stores and retrieves data.
      • Optimization: Proven track record of turning "slow" queries into high-performance assets via indexing and refactoring.
  • The Engineering Toolkit:
      • Python: Intermediate to advanced (Pandas/Polars, Requests, SQLAlchemy, or PySpark).
      • Orchestration: Practical experience with Airflow (or Prefect/Dagster).
      • Legacy/Cloud mix: Proficiency in SSIS/SSMA or PowerShell is a plus for migrating legacy workloads to modern platforms.
  • The Domain Knowledge: Familiarity with FHIR/HL7 and an understanding of the importance of data governance in a regulated environment.
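As a flavor of the "Expert SQL" bar above, here is a CTE combined with a window function, run through Python's sqlite3 so it is self-contained (the visits table and its columns are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE visits (patient_id INTEGER, visit_date TEXT)")
con.executemany(
    "INSERT INTO visits VALUES (?, ?)",
    [(1, "2024-01-03"), (1, "2024-02-10"), (2, "2024-01-15")],
)

# CTE + ROW_NUMBER() window: pick each patient's first visit
first_visits = con.execute("""
    WITH ranked AS (
        SELECT patient_id,
               visit_date,
               ROW_NUMBER() OVER (
                   PARTITION BY patient_id ORDER BY visit_date
               ) AS rn
        FROM visits
    )
    SELECT patient_id, visit_date
    FROM ranked
    WHERE rn = 1
    ORDER BY patient_id
""").fetchall()
print(first_visits)  # [(1, '2024-01-03'), (2, '2024-01-15')]
```

The same PARTITION BY/ORDER BY shape underlies longitudinal patient views: deduplicating, sequencing events per patient, and computing gaps between visits.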

Technical "Must-Haves" for the Interview

  • Ability to whiteboard a complex Database Schema from scratch.
  • Ability to debug a long-running SQL query and explain the IO/CPU trade-offs of different index types.
  • Experience handling JSON/BSON data types within a relational database context.

Nice to Have

  • Experience with NoSQL systems like MongoDB or Elasticsearch.
  • Cloud experience (Azure, AWS, or GCP) specifically regarding managed SQL services.
  • Knowledge of dbt (data build tool) for managing transformations in the warehouse.


Hyderabad
3 - 5 yrs
₹12L - ₹15L / yr
Python
Flask
Django
SQL

Job Title: Freelance Python Team Lead & Developer (Hyderabad-based)


Project Overview

We are a service-based IT firm seeking a Senior Python Developer (3–5 years experience) for a long-term freelance engagement. This is a unique "Player-Coach" role where you will balance high-level individual coding with the leadership of a junior intern team.


Logistics & Commitment

  • Location: In-office (Hyderabad). Candidates must be local.
  • Time Commitment: 3–4 hours per day, 5 days a week (approx. 20 hours/week).
  • Duration: Ongoing freelance contract.


Core Responsibilities

  • Team Leadership: Lead and mentor a team of interns. You will be responsible for task delegation, technical guidance, and conducting rigorous code reviews to ensure production-quality output.
  • Individual Contribution: Act as the lead developer for complex backend modules, API architecture, and system integrations using Python (Django/FastAPI).
  • Delivery Management: Ensure that the intern team meets project milestones and adheres to clean coding standards (PEP 8).


Requirements

  • 3–5 years of professional experience in Python development.
  • Proven experience in a service-based company environment.
  • Strong proficiency in Django, Flask, or FastAPI and SQL/NoSQL databases.
  • Excellent communication skills with a natural ability to mentor and explain technical logic to juniors.
  • Availability to work from our Hyderabad office during standard business hours.


Please include your resume and a brief summary of your experience leading teams or mentoring juniors.

Pragyaware Informatics Private Limited
Posted by Mansi Singh
Panchkula
4 - 5 yrs
₹4L - ₹6L / yr
.NET Core, C#, Web API, SQL Server, Angular
Web API
SQL
.NET
MVC Framework
+1 more

Job Description

Job Title: Senior Software Developer

Status: Full Time

Skills: Microsoft .NET (C#), Reporting Services, jQuery, JavaScript, Web Services, JavaScript frameworks, CMS

Experience: 5 years

JOB DUTIES & RESPONSIBILITIES:

  • Working with Project Managers to determine needs and applying/customizing existing technology to meet those needs
  • Maintaining and supporting multiple projects and deadlines
  • Recording work progress on a weekly basis
  • Documentation


QUALIFICATIONS & RELATED EXPERIENCE:

  • BTech, MCA, or MSc-IT in Information Technology, Computer Science, Business Administration, or a related field preferred; Microsoft .NET certified professionals will be preferred
  • Proven experience with Microsoft .NET technologies, including ASP.NET and ADO.NET
  • Languages: C#, VB.NET, SQL/T-SQL, JavaScript/DHTML, VBScript, HTML, XML
  • Experience developing websites using a Content Management System (CMS)
  • Some experience with front-end UI design preferred
  • Experience in backend software design in SQL Server 2008 or 2012, including stored procedures, ASP.NET, VB.NET, and C#
  • 5 years of hands-on coding experience with Visual Studio
  • Knowledge of front-end frameworks such as jQuery, Twitter Bootstrap, and CSS
  • Experience creating Accounts, Inventory, and HR software
  • Experience working on an ERP
  • Experience creating and consuming web services/APIs

SKILLS:

  • Ability to complete all phases of the software development life cycle, including analysis, design, functionality, testing, and support
  • Ability to develop large-scale web/database applications
  • Ability to work on multiple projects with multiple deadlines
  • Ability to communicate clearly with business users and project managers
  • Ability to innovate and provide functional applications with intuitive interfaces
  • Ability to construct user guides and documentation
  • Project management skills
  • E-commerce integration skills
  • Excellent knowledge of Transact-SQL
  • Working experience with Content Management Systems

NA
Remote only
17 - 20 yrs
₹30L - ₹55L / yr
Data modeling
ADF
databricks
PySpark
SQL
+1 more

Company Description

VDart is a global leader in digital solutions, product development, and professional services. Headquartered in Atlanta, GA, USA, the company has a robust global presence across North America, Europe, the Middle East, and Asia. VDart Digital specializes in delivering cutting-edge digital transformation solutions, leveraging technologies like AI/ML, blockchain, cloud computing, IoT, and data analytics. Its innovative product portfolio includes offerings such as TestSamurAI, LendSmartAI, IDocLens, and more, which are designed to optimize operations and drive business growth.


Role Description

We are looking for a seasoned data leader to design, build, and own enterprise-scale data platforms on Azure. This role goes beyond development — it requires end-to-end accountability for architecture, data pipelines, transformation frameworks, and production readiness.

You will act as the critical link between business stakeholders, data engineering teams, and analytics functions, ensuring scalable and high-performance data solutions are delivered and maintained.


Key Responsibilities:

  • Design and implement robust data pipelines using Azure Data Factory (ADF), including integration with REST APIs and external data sources
  • Build scalable data transformation workflows using Databricks (PySpark), handling complex and nested JSON datasets
  • Architect and implement Delta Lake-based data platforms, including fact and dimension models (star schema)
  • Define and enforce best practices for data modeling, performance optimization, and cost efficiency
  • Own end-to-end data platform lifecycle — from architecture and deployment to monitoring and operational support
  • Establish production readiness frameworks, including logging, alerting, and data quality checks
  • Collaborate closely with business and analytics teams to translate requirements into scalable technical solutions
  • Mentor engineering teams and drive architectural governance across projects 


Required Experience & Skills:


  • Experience building pipelines with Azure Data Factory
  • Experience connecting to REST API sources using Azure Data Factory
  • Experience building transformations with Databricks using PySpark
  • Experience handling complex nested JSON files using PySpark
  • Experience designing dimensional models/star schemas
  • Experience implementing fact and dimension tables in Databricks Delta Lake
  • Around 15-20 years of solid experience in building, managing, and optimizing enterprise data platforms, with at least 5 years in Azure cloud data services
  • Act as a bridge between business, data engineering, and analytics teams to ensure requirements are clearly understood and implemented correctly
  • Own end-to-end production readiness of the data platform, including architectural design, deployment patterns, monitoring strategy, and operational support

Service Co


Agency job
via Vikash Technologies by Rishika Teja
Gurugram
6 - 9 yrs
₹20L - ₹22L / yr
Google Cloud Platform (GCP)
Apache Airflow
BQ
Dataproc
SQL
+1 more

Hands-on experience with Airflow, BigQuery (BQ), and Dataproc on GCP.

Strong knowledge of SQL and cloud technologies such as Airflow and BigQuery.

medha ai


Agency job
via AccioJob by lokit poddar
Bengaluru (Bangalore)
0 - 1 yrs
₹6L - ₹7L / yr
Python
SQL
pandas

AccioJob is conducting a Walk-In Hiring Drive with Medha AI for the position of Data Engineer.


Apply Now:

https://go.acciojob.com/rWVc8X


Required Skills: SQL, Python, Pandas


Eligibility:

Degree: BTech./BE

Branch: All

Graduation Year: 2025, 2026


Work Details:

Work Location: Bangalore Urban (Onsite)

CTC: ₹6 LPA to ₹7 LPA


Evaluation Process:

Round 1: Offline Assessment at AccioJob Bangalore Centre

Further Rounds (for shortlisted candidates only):

Resume Evaluation, Technical Interview 1, Technical Interview 2, HR Discussion


Important Note: Bring your laptop & earphones for the test.


Register here: https://go.acciojob.com/rWVc8X


👇 FAST SLOT BOOKING 👇

[ 📲 DOWNLOAD ACCIOJOB APP ]

https://go.acciojob.com/3rSqgZ

Eassy Onboard
Posted by Pawan Beesetti
Remote only
4 - 8 yrs
₹25L - ₹50L / yr
PySpark
SQL
Spark
Databricks
Google Cloud Platform (GCP)
+2 more

Company Description


Eassy Onboard LLP is a team of Databricks Certified Data Engineers committed to empowering businesses through data-driven solutions. Specializing in automated workflows, scalable architectures, optimized data pipelines, and AI solutions, we help organizations reduce manual effort, optimize costs, and achieve reliable insights. We ensure secure data operations with robust validation processes and strong data integrity. As an Employer of Record (EOR), we assist global companies in hiring top Indian talent, managing payroll, compliance, and regulatory requirements. Our mission is to accelerate enterprise transformation and enable companies to build future-ready, compliant teams.


Role Description


We are seeking a Senior Data Engineer with deep expertise in Spark/PySpark/SQL to join our data team.

This is a hands-on technical role for someone passionate about building scalable data systems, mentoring engineers, and shaping data strategy.

You will architect systems that power high-performance data processing, enable advanced analytics, and accelerate AI initiatives.


What You'll Do

  • Design and evolve scalable, distributed data infrastructure across cloud platforms including GCP and AWS.
  • Build and maintain real-time and batch data processing pipelines supporting AI/ML workloads, consumer applications, and analytics.
  • Develop and manage integrations with third-party e-commerce platforms to expand the data ecosystem.
  • Ensure data availability, reliability, and quality through monitoring and automated auditing.
  • Partner with engineering, AI, and product teams on data solutions for business-critical needs.
  • Mentor and support data engineers, establishing best practices and code quality standards.



Qualifications

  • Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
  • 5+ years of software development and data engineering experience with ownership of production-grade data infrastructure.
  • Deep expertise in scaling Spark, PySpark, and SQL in production, including Databricks or Dataproc on GCP.
  • Strong understanding of distributed computing and modern data modeling for scalable systems.
  • Proficient in Python with experience implementing software engineering best practices.
  • Hands-on experience with both relational and NoSQL systems including MySQL, MongoDB, and Elasticsearch.
  • Strong communicator with experience influencing cross-functional stakeholders.


Nice to Have

  • Experience with job orchestration and containerization tools such as Airflow and Docker.
  • Experience working with vector stores and knowledge graphs.
  • Experience working in early-stage, high-growth environments.
  • Familiarity with MLOps pipelines and integrating ML models into data workflows.
  • A proactive, problem-solving mindset with a passion for innovative solutions.
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Pankhuri Shayad
Posted by Pankhuri Shayad
Mumbai, Pune
6 - 10 yrs
Best in industry
React.js
.NET
C#
SQL

Position: Full-Stack Developer – React / C# / Python / SQL

Location: Mumbai / Pune

Experience: 6–8 Years

Employment Type: Full-time


About the Role

We are looking for a versatile Full-Stack Developer who has working experience in React, C# or Python, and SQL. The candidate doesn’t need to be an expert in all these technologies but should be comfortable taking end-to-end ownership of features with the support of modern AI tools.


Key Responsibilities

  • Develop, test, and maintain scalable frontend applications using React.
  • Build and integrate backend services using C# (.NET) or Python.
  • Write and optimize SQL queries, procedures, and data models.
  • Work closely with product and design teams to deliver high-quality features.
  • Use AI-assisted development tools (like GitHub Copilot / ChatGPT) to speed up coding, debugging, documentation, and solution design.
  • Participate in code reviews, troubleshooting, and performance improvements.
  • Ensure best practices in code quality, security, and deployment.


Required Skill Set

  • Frontend

React.js (Hooks, components, state management, API integration)


  • Backend (any one or both)

C# (.NET Core)

Python (FastAPI / Django / Flask)


  • Database

SQL (MySQL / PostgreSQL / MSSQL)

Experience writing queries, joins, stored procedures, and handling schemas


Good to Have

  • REST API development
  • Basic DevOps understanding (CI/CD, version control – Git)
  • Familiarity with cloud platforms (AWS/Azure/GCP)
  • Ability to learn quickly with AI tools and follow best practices
  • Problem-solving and ownership mindset


What We Are Looking For

  • Someone who can handle full-stack tasks with confidence
  • Not necessary to be an expert in everything
  • Curious, adaptable, and open to using AI tools to deliver faster
  • Strong communication skills and team collaboration
Read more
Remote only
6 - 12 yrs
₹15L - ₹30L / yr
Python
ETL
SQL
Database migration
Cloud transformation
+22 more

Lead / Sr. Data Engineer (Architect & Engineering Owner)

The Role

We are seeking a Lead Data Engineer who operates at the intersection of high-scale engineering and enterprise architecture. In this role, you will "own" our healthcare data platform end-to-end. You aren't just building pipelines; you are designing the blueprint for how clinical, claims, and sales data flow through our ecosystem. You will bridge the gap between legacy systems (MSSQL/Oracle) and modern cloud warehouses (Snowflake/Redshift/Databricks), ensuring our data is governed, HIPAA-compliant, and optimized for advanced analytics.

What You’ll Do

1. Architecture & Strategic Leadership

  • Design the Blueprint: Own the enterprise data architecture (Staging, Integration, Warehouse, and Semantic layers). Define the evolution from monolithic databases to scalable cloud-hosted analytics.
  • Modeling Mastery: Lead the design of complex Dimensional Models (Star/Snowflake) and implement advanced Slowly Changing Dimension (SCD) strategies to track historical clinical events.
  • Set the Standard: Establish coding, version control (GitHub), and CI/CD standards. Conduct design reviews and mentor a team of engineers to move from "task-takers" to "system-builders".
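The SCD strategies mentioned above can be sketched minimally. This is an illustrative Type 2 implementation in plain Python, with hypothetical column names (`plan`, `valid_from`, etc.); a real warehouse would express the same logic in SQL MERGE statements or dbt snapshots:

```python
from datetime import date

# Hypothetical dimension table: one row per version of a patient attribute.
dim = [
    {"patient_id": 1, "plan": "basic", "valid_from": date(2023, 1, 1),
     "valid_to": None, "is_current": True},
]

def apply_scd2(dim, patient_id, new_plan, as_of):
    """Close the current row and open a new version (SCD Type 2)."""
    for row in dim:
        if row["patient_id"] == patient_id and row["is_current"]:
            if row["plan"] == new_plan:
                return  # attribute unchanged, nothing to version
            row["valid_to"] = as_of      # close out the historical row
            row["is_current"] = False
    dim.append({"patient_id": patient_id, "plan": new_plan,
                "valid_from": as_of, "valid_to": None, "is_current": True})

apply_scd2(dim, 1, "premium", date(2024, 6, 1))
# dim now holds two rows: the closed history row and the new current row.
```

The point of Type 2 is that a query "as of" any past date can recover the attribute value that was current then, which is what makes historical clinical events traceable.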

2. Advanced Data Engineering (Hands-on)

  • Modern ELT/ETL: Build and orchestrate production-grade pipelines using Python, Airflow, and dbt. Manage automated ingestion via Fivetran or custom-built frameworks for APIs and EHRs.
  • Multi-Engine Expertise: Operate seamlessly across PostgreSQL, MSSQL, and Oracle, while optimizing petabyte-scale cloud warehouses like Snowflake or Redshift.
  • Performance Tuning: Own query optimization. You should be the expert at using EXPLAIN/ANALYZE, partitioning, and indexing to reduce compute costs and latency.
  • Quality & Reconciliation: Design robust validation frameworks to ensure data integrity—essential for healthcare compliance and clinical trust.
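The query-plan inspection described above can be demonstrated with the standard-library sqlite3 module (production work would use PostgreSQL's EXPLAIN/ANALYZE or the warehouse equivalent, but the idea is the same); the table and index names here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (claim_id INTEGER, member_id INTEGER)")

# Without an index, a filter on member_id forces a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM claims WHERE member_id = 42"
).fetchall()
print(plan[0][-1])  # e.g. 'SCAN claims' (wording varies by SQLite version)

# After adding an index, the planner switches to an index search.
conn.execute("CREATE INDEX idx_member ON claims (member_id)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM claims WHERE member_id = 42"
).fetchall()
print(plan[0][-1])  # e.g. 'SEARCH claims USING INDEX idx_member (member_id=?)'
```

Reading plans like this before and after adding an index or partition is the basic loop behind the compute-cost and latency reductions mentioned above.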

3. Healthcare Interoperability & Governance

  • Data Standards: Map diverse datasets (EHR, API, Flat Files) to HL7 FHIR resources and curated analytic layers.
  • Privacy by Design: Embed HIPAA Security Rule safeguards (encryption, audit trails, and access controls) directly into the code and infrastructure.
  • Interoperability: Handle complex semi-structured data (JSON/XML) from third-party partners and EMR systems.

What You’ll Bring

  • Experience: 8–12+ years in Data Engineering/Architecture. You should have a track record of leading technical projects or mentoring teams.
  • The "Hybrid" Stack: * Expert SQL/PL-SQL: Deep experience with performance tuning in relational environments (Oracle/MSSQL).
  • Modern Tools: Practical experience with Snowflake/Redshift, dbt, and Airflow.
  • Programming: High proficiency in Python (Pandas, PySpark) or Java/Scala for custom ETL routines.
  • Architectural Depth: Clear understanding of SDLC, Agile (Scrum), and Data Modeling frameworks.
  • Healthcare Domain: Exposure to pharmaceutical or clinical data (Life Sciences, EMR, or Claims) is highly preferred.
  • Soft Skills: The ability to translate "clinical business needs" into "technical runbooks" and communicate effectively with stakeholders.

Nice to Have

  • AI/ML Integration: Experience supporting Data Science teams with feature extraction and model deployment (SageMaker/Azure ML).
  • Advanced Tooling: Familiarity with NoSQL (MongoDB), search engines (Elasticsearch), or niche ETL tools (Talend/Informatica) for migration purposes.
  • Cloud Infrastructure: Hands-on experience with AWS Glue, Lambda, or Azure Data Factory.


Read more
venanalytics

at venanalytics

2 candid answers
Rincy jain
Posted by Rincy jain
Pune
3 - 5 yrs
₹10L - ₹17L / yr
SQL
Python
PowerBI
Effective communication

Role Objective


Develop business-relevant, high-quality, scalable Power BI Dashboards keeping customer requirements at the core.


Roles and Responsibilities


  • Technical Expertise: Very clear understanding of core concepts like Data Modelling, DAX, Power Query, and Visualization principles.
  • Programming Languages: Expertise in Python and SQL for data manipulation, analysis, and automation. Additionally, leverage DAX and M Query for complex data transformations within Power BI.
  • Dashboard Development & Quality Measures: Design, develop, and maintain Power BI dashboards ensuring high quality, data accuracy, user-friendly visualization, and adherence to timelines. Define and meet dashboard quality standards such as data accuracy, visualization clarity, and performance benchmarks.


  • Translate Business Problems: Collaborate with stakeholders to translate complex business problems into data-centric solutions that address functional, non-functional, and commercial concerns, such as reliability, scalability, and maintainability
  • Hypothesis Development: Decompose business problems into a series of testable hypotheses, identifying relevant data assets required for evaluation
  • Quality Measures: Define and meet the dashboard quality measures like data accuracy, data visualization, timelines, support, etc.
  • Solution Design: Based on the business requirements design wireframes, POC, and final product.
  • Performance Optimization: Improve user experience by continuously enhancing performance, maintainability and scalability.
  • Troubleshooting: Quick issue resolution as defined by SLA. Resolve issues related to access, latency, data accuracy, security, etc.


Job Requirements


1. Education – MBA, B.Tech (Comp. Sc.), BBA, BCA, MCA or equivalent


2. Experience- 4+ years of experience in developing Power BI Dashboards.


3. Behavioral Skills-


  • Clear and Assertive communication
  • Ability to comprehend the business requirement
  • Teamwork and collaboration
  • Analytical thinking
  • Time Management


4. Technical Skills-


  • Understand the various datasets at play and create a high-quality, robust, and scalable data model in Power BI.
  • Able to write data transformation steps in M language, not just through the UI.
  • Demonstrated ability to write and comprehend complex DAX.
  • Understanding of visualization best practices. We value insightful dashboards over merely colorful ones.
  • Conceptual clarity and hands-on experience with SQL and Python.
  • Analytical skills to create powerful data stories
  • Experience using DAX Studio and Tabular Editor is a plus
  • Experience with high-volume data processing in a production environment is a plus


Note: Candidates based in Pune or those willing to relocate will be preferred.


Why Ven Analytics?


At Ven, we are a fast-paced, innovative startup committed to delivering cutting-edge solutions. Our dynamic work environment fosters creativity, collaboration, and professional growth.


We offer a range of employee benefits, including:


  • Competitive Compensation: Attractive salary packages and performance-based incentives.
  • Career Development: Opportunities for growth, learning, and advancement in a rapidly expanding company.
  • Health Benefits: Wellness programs and resources to support your physical and mental well-being.


Join us at Ven Analytics, where you'll have the chance to make an impact, grow professionally, and be part of an exciting journey!

Read more
venanalytics

at venanalytics

2 candid answers
Rincy jain
Posted by Rincy jain
Pune
2.5 - 4 yrs
₹7L - ₹10L / yr
Django
Python
SQL
CI/CD
HTML/CSS
+1 more

Role Objective

Develop business relevant, high quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.

Roles & Responsibilities

Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.  

Architecture: Create scalable and robust system architecture. The design should be in line with the client infra. This could be on-prem or cloud (Azure, AWS or GCP). 

Development: You will be responsible for the development of the front-end and back-end. The application stack will comprise (depending on the project) SQL, Django, Angular/React, HTML and CSS. Knowledge of Golang and Big Data is a plus.

Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.

Maintenance: Maintaining development and production environments will be a key part of your job profile. This will also include troubleshooting, fixing bugs and suggesting ways to improve the application.

Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans. 

Documentation: Create a detailed document covering important aspects like HLD, Technical Diagram, Script Design, SOP etc.

Client Interaction: You will be interacting with the client on a day-to-day basis and hence having good communication skills is a must. 

Requirements

Education – B.Tech (Comp. Sc., IT) or equivalent

Experience- 3+ years of experience developing applications on Django, Angular/React, HTML and CSS 

Behavioural Skills-

  1. Clear and Assertive communication 

  2. Ability to comprehend the business requirement  

  3. Teamwork and collaboration 

  4. Analytical thinking

  5. Time Management 

  6. Strong troubleshooting and problem-solving skills

Technical Skills-

  1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS. 

  2. Cloud Technologies: AWS, GCP and Azure 

  3. Big Data Technologies: Hadoop and Spark 

  4. Containerized Deployment: Docker and Kubernetes are a plus.

  5. Other: Understanding of Golang is a plus.

AI / LLM Engineering — Good to Have

  • Candidates with exposure to AI/LLM engineering will have a strong advantage as we build intelligent, AI-augmented AdTech solutions. None of the below is mandatory.
  • LLMs: OpenAI (GPT-4/4o), Anthropic (Claude), Meta (Llama)
  • Orchestration & Agents: LangChain, LangGraph, LlamaIndex
  • Tool Calling / MCP: Function Calling (OpenAI / Anthropic), FastMCP or Custom MCP Servers
  • RAG (Retrieval-Augmented Generation): RAG pipeline design, LlamaIndex, LangChain retrievers and chains
  • Vector Databases: Pinecone, Weaviate, FAISS
  • Embeddings: OpenAI Embeddings, Hugging Face Sentence Transformers
  • Observability: LangSmith, Sentry
  • Backend / Infra for AI: Django REST Framework, FastAPI.


Read more
venanalytics

at venanalytics

2 candid answers
Rincy jain
Posted by Rincy jain
Pune
1.5 - 4 yrs
₹7L - ₹10L / yr
Django
Python
SQL
React.js
HTML/CSS
+1 more

Role Objective

Develop business relevant, high quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.

Roles & Responsibilities

Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.  

Architecture: Create scalable and robust system architecture. The design should be in line with the client infra. This could be on-prem or cloud (Azure, AWS or GCP). 

Development: You will be responsible for the development of the front-end and back-end. The application stack will comprise (depending on the project) SQL, Django, Angular/React, HTML and CSS. Knowledge of Golang and Big Data is a plus.

Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.

Maintenance: Maintaining development and production environments will be a key part of your job profile. This will also include troubleshooting, fixing bugs and suggesting ways to improve the application.

Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans. 

Documentation: Create a detailed document covering important aspects like HLD, Technical Diagram, Script Design, SOP etc.

Client Interaction: You will be interacting with the client on a day-to-day basis and hence having good communication skills is a must. 

Requirements

Education – B.Tech (Comp. Sc., IT) or equivalent

Experience- 3+ years of experience developing applications on Django, Angular/React, HTML and CSS 

Behavioural Skills-

  1. Clear and Assertive communication 

  2. Ability to comprehend the business requirement  

  3. Teamwork and collaboration 

  4. Analytical thinking

  5. Time Management 

  6. Strong troubleshooting and problem-solving skills

Technical Skills-

  1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS. 

  2. Cloud Technologies: AWS, GCP and Azure 

  3. Big Data Technologies: Hadoop and Spark 

  4. Containerized Deployment: Docker and Kubernetes are a plus.

  5. Other: Understanding of Golang is a plus.

AI / LLM Engineering — Good to Have

  • Candidates with exposure to AI/LLM engineering will have a strong advantage as we build intelligent, AI-augmented AdTech solutions. None of the below is mandatory.
  • LLMs: OpenAI (GPT-4/4o), Anthropic (Claude), Meta (Llama)
  • Orchestration & Agents: LangChain, LangGraph, LlamaIndex
  • Tool Calling / MCP: Function Calling (OpenAI / Anthropic), FastMCP or Custom MCP Servers
  • RAG (Retrieval-Augmented Generation): RAG pipeline design, LlamaIndex, LangChain retrievers and chains
  • Vector Databases: Pinecone, Weaviate, FAISS
  • Embeddings: OpenAI Embeddings, Hugging Face Sentence Transformers
  • Observability: LangSmith, Sentry
  • Backend / Infra for AI: Django REST Framework, FastAPI.

Employment Type

Full-time

Experience Level

Associate

Work Experience (years)

1.5 - 4 years

Annual Compensation 

INR 700,000 - 1,000,000

No of Openings

2


Read more
venanalytics

at venanalytics

2 candid answers
Rincy jain
Posted by Rincy jain
Pune
0 - 1 yrs
₹1.5L - ₹1.8L / yr
Django
Python
SQL
NodeJS (Node.js)
React.js

Role Objective

As a Fullstack Intern, you will gain hands-on experience in developing business-relevant, high-quality, and scalable web applications. You will work closely with our dynamic AdTech team to solve real-world challenges in the Media and Entertainment sector.

Roles & Responsibilities

  • Application Design: Work with the team to understand requirements, contribute to user stories, and support the design process. Assist in reviewing designs, giving feedback, and ensuring alignment with user expectations.
  • Architecture: Learn and contribute to designing scalable and robust system architectures (on-prem or cloud – Azure, AWS, or GCP).
  • Development: Assist in front-end and back-end development using technologies such as SQL, Django, Angular/React, HTML, and CSS. Exposure to GoLang and Big Data will be a plus.
  • Deployment: Support the implementation of scalable and cost-effective deployment strategies. Contribute to CI/CD processes on IIS or Linux. Familiarity with Docker is a plus.
  • Maintenance: Help in maintaining development and production environments, troubleshooting issues, fixing bugs, and suggesting improvements.
  • Data Migration: Learn and assist in planning and implementing database migration strategies.
  • Documentation: Contribute to technical documentation, including HLD, technical diagrams, script design, and SOPs.
  • Client Interaction: Gain exposure to client communication and understand how to translate business requirements into technical solutions.

Requirements

Education – B.Tech (Computer Science, IT) or equivalent, currently pursuing or recently completed.

Experience – Previous internship experience is a plus.

Behavioural Skills

  • Clear and assertive communication
  • Ability to comprehend business requirements
  • Teamwork and collaboration
  • Analytical thinking
  • Time management
  • Problem-solving and troubleshooting skills.

Technical Skills

  • Back-end & Front-end: Django, Angular/React, HTML, CSS
  • Cloud Technologies: AWS, GCP, Azure
  • Big Data: Hadoop, Spark (knowledge is a plus)
  • Containerized Deployment: Docker/Kubernetes (a plus)
  • Other: Understanding of GoLang is a plus


Read more
Service Co

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Bengaluru (Bangalore), Mumbai, Pune, Hyderabad, Chennai, Gurugram
5 - 7 yrs
₹10L - ₹15L / yr
Java
Spring Boot
Rest API
Microservices
SQL
+1 more
  • 5+ years of experience in Java backend development
  • Strong proficiency in Core Java (Java 8+)
  • Hands-on experience with multithreading, concurrency, and performance tuning
  • Strong understanding of data structures and algorithms
  • Experience with Spring Boot and REST API development
  • Experience in microservices architecture
  • Good understanding of SQL/NoSQL databases
  • Strong debugging and problem-solving skills
Read more
Service Co

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Bengaluru (Bangalore), Mumbai, Pune, Hyderabad, Chennai, Gurugram
5 - 6 yrs
₹10L - ₹13L / yr
Python
Object Oriented Programming (OOPs)
RESTful APIs
SQL
  • Demonstrated experience building production-grade applications with an emphasis on scalability, maintainability, and performance
  • Strong expertise in concurrency and parallelism, including: 
  • Multithreading and multiprocessing
  • Synchronous and asynchronous programming (e.g., async/await)
  • Designing for throughput, latency, and safe shared-state handling
  • Proven experience integrating with external systems via application interfaces, including:
  • Building and consuming RESTful APIs
  • Authentication/authorization patterns (e.g., API keys, OAuth where applicable)
  • Reliable integration patterns (timeouts, retries, idempotency, error handling)
  • Strong SQL skills, including the ability to write efficient, complex queries (joins, aggregations, window functions) and optimize performance where needed. 
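As an illustration of the window-function skill in the last bullet, here is a running total computed with SUM() OVER in SQLite (window functions require SQLite 3.25+), driven from Python; the `sales` table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", [(1, 10), (2, 20), (3, 5)])

# Running total: the window frame accumulates all rows up to the current day.
rows = conn.execute(
    """
    SELECT day,
           SUM(amount) OVER (ORDER BY day) AS running_total
    FROM sales
    ORDER BY day
    """
).fetchall()
print(rows)  # [(1, 10), (2, 30), (3, 35)]
```

Unlike a GROUP BY, the window function keeps one output row per input row, which is what makes it suited to running totals, rankings, and moving averages.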


Read more
A leading data & analytics intelligence technology solutions provider

A leading data & analytics intelligence technology solutions provider

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
4 - 5 yrs
₹12L - ₹18L / yr
PowerBI
Data modeling
DAX
SQL
ETL
+4 more

Key Skills:

Technical Skills

  • Power BI Development: 4-5 years of hands-on experience developing Power BI reports, dashboards, and data models
  • DAX: Strong proficiency in DAX (Data Analysis Expressions) for creating measures, calculated columns, and complex calculations
  • Power Query / M Language: Expertise in data transformation and ETL processes using Power Query
  • Data Modeling: Solid understanding of dimensional modeling, star schema, and data warehouse concepts
  • SQL: Proficient in SQL for data extraction, manipulation, and querying relational databases
  • Power BI Service: Experience with Power BI Service administration, workspace management, scheduled refreshes, and deployment pipelines
  • Custom Visualizations: Experience creating and configuring custom visuals, including use of AppSource visuals and custom visual development using Power BI Visuals SDK
  • API Integration: Hands-on experience with Power BI REST APIs for automating deployments, managing workspaces, and embedding reports
  • Knowledge of data visualization best practices and UI/UX principles for dashboard design
  • Experience with data source connectivity (SQL Server, Azure SQL, Oracle, SAP, Excel, APIs, web services)

Additional Required Qualifications

  • Bachelor’s degree in computer science, Information Systems, Business Analytics, or related field
  • Strong analytical and problem-solving abilities
  • Excellent communication skills to work with both technical and non-technical stakeholders
  • Ability to manage multiple projects and prioritize tasks effectively
  • Detail-oriented with commitment to delivering high-quality work
  • Client-facing experience with ability to gather requirements and present solutions

Preferred Qualifications

  • Microsoft Power BI certification (PL-300 or equivalent)
  • Experience with Azure ecosystem (Azure Data Factory, Azure Synapse Analytics, Azure SQL Database)
  • Knowledge of other Microsoft BI tools (SSRS, SSAS, Excel Power Pivot)
  • Familiarity with Python or R for advanced analytics integration
  • Experience with Dataflows and incremental refresh strategies
  • Understanding of API development for custom visuals or Power BI embedded solutions
  • Experience working in Agile/Scrum development environments
Read more
Pearl
Abhishek Batni
Posted by Abhishek Batni
Remote only
6 - 12 yrs
₹15L - ₹35L / yr
A/B Testing
Exploratory testing
Statistical Analysis
Regression Testing
EDA
+3 more

About Us

Pearl is AI for professional services at global scale—combining advanced AI with verified human expertise to deliver help that’s accurate, accountable, and fast. Since 2003, our network has connected millions of customers with licensed professionals across 196 countries, making real expertise available anytime, anywhere.


Our Values

  • Data driven: Data decides, not egos
  • Courageous: We take risks and challenge the status quo
  • Innovative: We're constantly learning, creating, and adapting
  • Lean: We focus on customers, using lean testing to learn how to serve them best
  • Humble: Past success is not a guarantee of future success


About the Role

As a Senior Analyst at Pearl, you will be a Subject Matter Expert (SME) in the professional services domain, driving long-term growth and leveraging the latest technologies and methodologies.


In this role, you will drive tangible business impact by delivering high-quality insights and recommendations that are grounded in your ability to combine strategic thinking and problem-solving with detailed analysis. This position offers a unique opportunity to collaborate closely with Product Managers and cross-functional teams, uncovering valuable business insights, devising optimization strategies, and validating them through experiments.


What You’ll Do

  • Collaborate with Product and Analytics leadership to conceive and structure analysis and deliver highly actionable insights from “deep dives” into specific areas of our business
  • Conduct analysis of large volumes of internal & external data to find growth & optimization opportunities for the business
  • Package and communicate findings and recommendations to a broad audience (including senior leadership)
  • Perform both descriptive & prescriptive analytics, including experimentation (A/B, MAB), and build reporting to understand trends
  • Implement and track business metrics that will help drive the business
  • Help determine growth strategy from a marketing and operations perspective
  • Operate in an individual capacity as a lead analyst to understand the customer and guide informed strategic decisions and execution


What We’re Looking For

  • 7+ years of experience in e-commerce/customer experience products
  • Proficiency in analysis and business modeling using Excel
  • Experience with Google Analytics, BigQuery, Power BI, and Python / R would be a plus
  • Experience with SQL with a strong ability to write complex queries to extract information and arrive at an answer
  • Expertise in Descriptive Statistical Analysis, Inferential Statistical Analysis
  • Strong experience in setting up and driving A/B Testing or Hypothesis Testing and analyzing the results
  • Ability to translate analysis results into business recommendations, and excellent written and confident verbal communication skills
  • Ability to communicate effectively with all levels of management and partners from a variety of business functions
  • Advanced English level
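The A/B-testing skill listed above often boils down to a two-proportion z-test on conversion rates. Here is a minimal standard-library sketch with hypothetical conversion counts:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical experiment: control converts 200/2000, variant 260/2000.
z, p = two_proportion_z(200, 2000, 260, 2000)
print(round(z, 2), round(p, 4))
```

With these numbers the p-value lands well below 0.05, so the lift would be called significant; in practice an analyst would also check sample-size planning and peeking before drawing that conclusion.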


Our Commitment to an Inclusive Workplace

We welcome people from all backgrounds who seek the opportunity to help build a future where professional services are readily available to all. If you have curiosity, passion, and a collaborative spirit, come work with us. Pearl is committed to an inclusive workplace. Pearl is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, disability, age, or other legally protected status.


AI Disclosure & Informed Consent

Artificial intelligence (AI) technology may be used during the hiring process to record, transcribe, analyze, and rank interview responses. By submitting your application and participating in the interview process, you acknowledge and consent to the use of AI technology in the hiring process. For more information see our AI Disclosure and Consent Policy.

Read more
FloBiz
Naman Sharma
Posted by Naman Sharma
Remote only
3 - 5 yrs
₹25L - ₹35L / yr
Java
Ruby on Rails (ROR)
Python
SQL

About FloBiz

Website: https://flobiz.in/

FloBiz is India's first neobusiness platform, revolutionizing the way Small and Medium-sized Enterprises (SMEs) operate in India. Our mission is to digitize 65 million MSMEs in the country, and we are well on our way to achieving this goal. Our flagship product, myBillBook, has already empowered over 10 million businesses across 2000+ towns with its billing, accounting, inventory management, and payment collection solutions. With over $25 billion in annual transactions, we are proud to be a rapidly growing tech startup serving the needs of SMBs in India.

Our Flagship Product: myBillBook

myBillBook is India's leading GST billing & accounting software with mobile, web app & native desktop offerings and runs on Android as well as iOS. myBillBook has been designed to aid SMB owners to conduct their operations from anywhere and anytime and provides a secure platform for business owners to record transactions & track business performance on the go. It is an ideal software for GST registered businesses where invoicing is one of the core business activities. Also, businesses looking to digitise their operations to understand their financial position better can use this software. It helps them create bills (GST & non-GST), record purchases & expenses, manage inventory and track payables/receivables directly from their mobile phones or computers. Also, the app generates 25 critical business reports that help business owners make effective business decisions. myBillBook is currently available in English, Hindi, Gujarati & Tamil.

Currently, the app has been downloaded by over 6.5M SMBs across the country, with over 10x growth in user base in the last 12 months alone. Even at this pace of adoption, myBillBook continues to be the highest-rated application in its category on the Google Play Store.

Key Responsibilities :

• Design, develop, maintain and optimise complex, scalable and distributed systems capable of handling large-scale datasets and high-throughput workloads.

• Optimise performance, reliability and availability across the whole system.

• Write clean, efficient, and maintainable code in multiple programming languages as needed.

• Contribute to architectural decisions and help improve engineering best practices.

• Work with a builder mindset, contribute and collaborate across cross-functional teams, to unblock and accelerate delivery. No role silos.

• Actively mentor juniors through code reviews, design discussions and pairing.

• Leverage LLM-assisted tools (Claude, Cursor, LLM-powered code review and testing) to accelerate development velocity and improve code quality.

• Build and evolve the platform to be LLM-ready - design APIs, data pipelines and system interfaces that enable seamless LLM integration and automation.

Required Qualifications :

• 3-5 years of experience in back-end software development focusing on large-scale distributed systems.

• BE/B.Tech in Computer Science or a related technical field (or equivalent practical experience).

• Strong software development skills in one or more languages such as Java / Ruby on Rails / Python.

• Working experience with SQL and NoSQL databases (e.g. PostgreSQL, MongoDB), with the ability to design effective schemas and perform various optimisations for large-scale data.

• Deep understanding of system design principles and best practices for building scalable and resilient systems with microservices.

• Excellent problem-solving with experience in incident management, monitoring, alerting, and root cause analysis.

• Experience with event-driven architectures (Kafka, SQS, RabbitMQ, or similar).

• Experience in building intelligent AI agents and systems powered by Large Language Models.

• Hands-on experience with cloud platforms like AWS or Google Cloud Platform.

• Deep understanding of software development best practices, patterns and code reviews.

• Effective communication skills to coordinate with cross-functional teams during large-scale projects.

Perks & Benefits :

• Competitive salary with performance-linked rewards and recognitions.

• Extensive medical insurance that looks out for our employees & their dependants. We'll love you and take care of you; that's our promise.

• FloBiz Academy: helps you learn and enhance your skills.

• A reward system that celebrates hard work, milestones, and performance throughout the year.

• A cool work-from-home setup that makes you feel right at home. An environment so comfortable that you won't miss your home.

Location : Remote (WFH), 5-day work week

Tap Invest
Posted by Anusree TP
Bengaluru (Bangalore)
1 - 2 yrs
₹3L - ₹5L / yr
SQL
Python
pandas
Data Analytics
Business Analysis

As an Analyst at Tap Invest, you’ll turn data into decisions. You’ll work with teams across Product, Ops, Marketing, and Sales to uncover insights, solve real business problems, and drive strategy.

This role is for someone who is comfortable working with data independently and can support business teams with reliable analysis and reporting.

Key Responsibilities

● Gather, organize, and clean data from various sources including databases, spreadsheets, and external sources to ensure accuracy and completeness.

● Write SQL queries to pull, validate, and clean data from production databases.

● Build and maintain dashboards, and generate KPI reports. Track performance against targets and identify areas for optimization.

● Analyze user funnels and investment patterns to surface actionable insights.

● Prepare and present clear, concise reports and visualizations to communicate findings and recommendations to stakeholders across teams.

● Document data definitions, metrics, and assumptions clearly for consistency and reuse.
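The pull-validate-report loop described above can be sketched with Python's stdlib sqlite3 standing in for a production database. The `investments` table and its columns are hypothetical, not Tap Invest's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE investments (user_id INT, amount REAL, status TEXT)")
conn.executemany("INSERT INTO investments VALUES (?, ?, ?)", [
    (1, 5000.0, "settled"),
    (2, None, "settled"),    # bad row: missing amount
    (3, 1200.0, "failed"),
    (1, 800.0, "settled"),
])

# Validation pass: count rows that would silently corrupt downstream KPIs
bad = conn.execute(
    "SELECT COUNT(*) FROM investments WHERE amount IS NULL OR amount <= 0"
).fetchone()[0]

# KPI pull: settled volume per user, restricted to rows that pass validation
kpi = dict(conn.execute("""
    SELECT user_id, SUM(amount)
    FROM investments
    WHERE status = 'settled' AND amount IS NOT NULL
    GROUP BY user_id
"""))
print(bad, kpi)   # 1 {1: 5800.0}
```

Surfacing the count of bad rows alongside the KPI, rather than silently filtering them, is what keeps the reporting trustworthy.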

What We’re Looking For

● 1 to 2 years of experience in Data Analytics, Business Analytics, or a similar role.

● Comfortable writing SQL and validating queries.

● Solid with Excel / Google Sheets (pivot tables, lookups, charts).

● Genuine curiosity about how businesses use data to make decisions.

● Experience writing scripts for data automation.

● Prior projects involving production datasets.

Nice to Have

● Familiarity with pandas or any data manipulation library for advanced automations.

● Interest in capital markets, bonds, fixed income or FinTech.

● Exposure to AI tools.

Tradelab Technologies
Posted by Aakanksha Yadav
Bengaluru (Bangalore)
2 - 5 yrs
₹7L - ₹12L / yr
RMS
OMS
Linux/Unix
SQL
API

🚨 We’re Hiring | Support Engineer – Trading & Capital Markets 🚨

📍 Location: Bangalore

🕘 Shift: Night Shift

🎯 Domain Priority: Trading / Capital Markets ONLY


We are looking for a Support Engineer with hands-on experience in trading systems and capital markets, who thrives in fast-paced, high-availability environments and is passionate about client-facing technical support.


Must-Have Qualifications

  • Bachelor’s degree in Computer Science, IT, or a related field
  • 2+ years of experience in Application / Technical Support, preferably in the broking or trading domain
  • Strong understanding of Capital Markets – Equity, F&O, Currency, Commodities
  • Solid technical troubleshooting skills:
  • Linux/Unix
  • SQL
  • Log analysis
  • Familiarity with Trading Systems, RMS, OMS, APIs (REST/FIX), and order lifecycle
  • Excellent communication and interpersonal skills for effective client interaction
  • Ability to work under pressure during live trading hours and manage multiple priorities
  • Customer-centric mindset with strong problem-solving and relationship-building abilities

🔍 Key Responsibilities

  • Act as the primary point of contact for clients reporting issues related to trading applications and platforms
  • Log, track, and monitor incidents using internal tools and ensure resolution within defined TATs
  • Coordinate with Development, QA, Infrastructure, and other internal teams to drive timely resolution
  • Provide clear, proactive, and regular updates to clients and internal stakeholders
  • Maintain detailed logs of incidents, escalations, resolutions, and fixes for audits and future reference
  • Support clients with queries related to system functionality, performance, and usage
  • Communicate proactively with clients regarding product enhancements, features, and updates


⚠️ Note: Candidates with experience in the Trading / Capital Markets domain will be given first priority.

📩 Interested candidates can comment “Interested” or share their resume via DM.

Let’s connect and build reliable trading support systems together!

Deltek
Posted by Shamitha ID
Remote only
10 - 15 yrs
Best in industry
Database architecture
.NET
Data architecture
SQL
SQL Server

Position Responsibilities :


The Database Architect is responsible for the design, optimization, and evolution of the database layer supporting enterprise applications in cloud environments. This role focuses on ensuring efficient data access patterns, scalable query workloads, and robust database architecture capable of supporting high-volume and high-concurrency systems.

This role goes beyond traditional database administration and requires deep technical expertise in SQL query optimization, database internals, and performance diagnostics. The Database Architect analyzes how applications interact with the database and guides improvements in schema design, data access patterns, and system scalability.

As a senior technical leader, the Database Architect helps define long-term strategies for scalable and efficient data architecture while working closely with engineering teams to promote best practices for database design and SQL development.


KEY RESPONSIBILITIES

Database Architecture & Optimization

  • Design and evolve database architectures for scalable enterprise systems.
  • Define efficient data access patterns that support high concurrency and large datasets.
  • Improve schema design, indexing strategies, and query patterns.
  • Ensure database designs support both transactional and data consumption workloads.

SQL Performance Engineering

  • Analyze and optimize complex SQL queries and execution plans.
  • Improve database performance through indexing strategies, statistics management, and query tuning.
  • Investigate workload behaviour and recommend architectural improvements.

Data Access & Systems Thinking

  • Provide guidance on scalable approaches for retrieving and delivering data for data-intensive application features.
  • Recommend architectural strategies such as data aggregation, caching, or pre-computed datasets where appropriate.
  • Apply systems thinking to improve how data is modeled, accessed, and delivered across the application.

Advanced Diagnostics

  • Diagnose database behaviour using tools such as Query Store, Extended Events, and execution plan analysis.
  • Analyze query performance, wait statistics, and workload patterns to identify optimization opportunities.

Collaboration & Technical Leadership

  • Partner with engineering teams to guide scalable SQL development and data access practices.
  • Participate in architecture and design discussions involving database interactions.
  • Document best practices and architectural recommendations.

AI-Assisted Engineering

  • Use AI-assisted tools to accelerate query analysis, diagnostics, and workload investigations.
  • Validate AI-generated insights through empirical testing and database telemetry.


Qualifications :

TECHNICAL SKILLS & EXPERTISE


Database & SQL Server (Required)

  • Advanced SQL Server performance tuning, including query optimization, execution plan analysis, and index design
  • Strong experience diagnosing and resolving deadlocks using Extended Events and deadlock graphs
  • Deep understanding of locking, blocking, and transaction behaviour, including wait statistics and lock escalation
  • Experience optimizing stored procedures, including mitigation of parameter sniffing and plan cache management
  • Strong knowledge of indexing strategies, including covering indexes and filtered indexes
  • Solid understanding of statistics, cardinality estimation, and query optimizer behaviour


Performance Analysis Tools (Required)

  • Experience using SQL Server Profiler and Extended Events for workload diagnostics
  • Advanced execution plan analysis using SSMS or Azure Data Studio
  • Familiarity with SET STATISTICS IO/TIME for query performance evaluation
  • Strong experience using Query Store to analyse query performance and plan behaviour
  • Ability to diagnose issues through wait statistics and blocking chain analysis
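The tools above are SQL Server specific, but the core plan-analysis workflow (confirming whether a query scans the whole table or seeks via an index) can be illustrated with SQLite's EXPLAIN QUERY PLAN, a lightweight analogue rather than a substitute for Query Store or Extended Events:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the human-readable step
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM orders WHERE customer_id = 42"
before = plan(query)   # no index yet: the plan reports a full table scan
conn.execute("CREATE INDEX ix_orders_customer ON orders (customer_id)")
after = plan(query)    # same query now resolves through the new index
print(before)
print(after)
```

The same before/after discipline applies in SQL Server: capture the plan, change one thing (index, statistics, query shape), and confirm the plan actually changed.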


Enterprise Application Data Architecture

  • Strong understanding of database design within multi-tier enterprise applications
  • Experience optimizing database workloads supporting high-concurrency systems and large datasets
  • Understanding how application query patterns influence database performance
  • Familiarity with application platforms such as .NET, APIs, or modern web frameworks


Cloud & Enterprise Database Environments

  • Experience working with cloud-hosted database environments
  • Understanding of scalability considerations in enterprise systems
  • Experience analyzing and optimizing database workloads in production environments


QUALIFICATIONS

  • 8+ years of experience working with enterprise database systems
  • Proven expertise in SQL performance tuning and database workload optimization
  • Strong experience in analysing execution plans and database performance behaviour
  • Experience collaborating with engineering teams on data architecture and query design
  • Strong analytical and problem-solving skills


AI-FIRST MINDSET REQUIREMENT

We value engineers who view AI as a productivity multiplier. The ideal candidate actively leverages AI tools to accelerate diagnostics, analyze database workloads, and uncover optimization opportunities, while applying strong engineering judgment to validate results.

WINIT
Posted by Aishwarya SURENDRAN
Hyderabad
0 - 2 yrs
₹3L - ₹7L / yr
React.js
React Native
SQL
AI Coding Tools

WINIT is looking for a Full Stack Developer with expertise in React Native, React.js & Backend


Company Name- WINIT (WINIT Mobile Sales Force Automation | WINIT (winitsoftware.com))

Qualification- Any Graduate

Work experience- 0-2 years

Location- Hyderabad


Job Summary

We are looking for a talented and innovative Full Stack Developer with expertise in React Native, React.js, and backend technologies to join our team as a Vibe Coder. In this role, you’ll develop cutting-edge web and mobile applications while leveraging modern AI tools and best-in-class development practices to enhance productivity, performance, and user experience.

Key Responsibilities

  • Develop and maintain responsive web applications using React.js.
  • Build high-performance cross-platform mobile apps using React Native.
  • Design and develop scalable backend systems using Node.js, Express.js, and related technologies.
  • Integrate third-party services, RESTful APIs, and cloud platforms.
  • Optimize performance across frontend and backend components.
  • Use AI tools to streamline development tasks such as code generation, debugging, testing, and UX improvement.
  • Work collaboratively with product managers, UI/UX designers, and QA teams.
  • Participate in code reviews, contribute to technical documentation, and drive innovation within the development team.

Tech Stack & Tools (Vibe Coder Stack)

  • Frontend: React.js, React Native, Redux, JavaScript, TypeScript, HTML5, CSS3
  • Backend: Node.js, Express.js, REST APIs, Firebase Functions
  • Databases: MongoDB, PostgreSQL, Firebase Firestore
  • Dev Tools: Git, GitHub, VS Code, Postman, Docker, Swagger
  • Cloud & Deployment: AWS, Firebase, Vercel, Netlify
  • CI/CD & PM Tools: GitHub Actions, Trello, Jira, Notion

AI Tools & Utilities

  • GitHub Copilot / Amazon CodeWhisperer – AI pair programming
  • Cursor AI / Gemini / Claude code – Code assistance, debugging, documentation

Requirements

  • 1+ years of hands-on experience in full stack development.
  • Proficient in React Native, React.js, and Node.js.
  • Strong understanding of JavaScript and frontend/backend principles.
  • Experience with cloud-based deployments and mobile app publishing.
  • Familiar with Git-based workflows, API integration, and database systems.
  • Excellent problem-solving, debugging, and communication skills.
  • Ability to independently manage tasks and meet project deadlines.

Preferred (Not Mandatory)

  • Experience with GraphQL, TypeScript, or Next.js.
  • Familiarity with Agile/Scrum methodologies.
  • Exposure to AI/ML concepts or LLM integration in applications.


About our company:


We are an mSFA technology company that has evolved from the industry expertise we have gained over 25+ years. With over 600 success stories in mobility, digitization, and consultation, we are today the leaders in mSFA, with 75+ enterprises across the globe trusting WINIT mSFA.

Our state-of-the-art support center provides 24x7 support to our customers worldwide. We continuously strive to help organizations improve their efficiency, effectiveness, market cap, brand recognition, distribution and logistics, regulatory and planogram compliance, and many more through our cutting-edge WINIT mSFA application.

We are committed to enabling our customers to be autonomous with our continuous R&D and improvement in WINIT mSFA. Our application provides customers with machine learning capability so that they can innovate, attain sustainable growth, and become more resilient.

At WINIT, we value diversity, personal and professional growth, and celebrate our global team of passionate individuals who are continuously innovating our technology to help companies tackle real-world problems head-on.



Wissen Technology
Posted by Shrutika SaileshKumar
Bengaluru (Bangalore), Mumbai
4 - 8 yrs
Best in industry
Snowflake
Data Transformation Tool (DBT)
SQL
Snowflake schema
Python

JD - 

 

We are looking for a strong Data Engineer with hands-on experience building pipelines using Snowflake and DBT.

Key Responsibilities:

  • Develop, maintain, and optimize data pipelines using DBT and SQL on Snowflake DB.
  • Collaborate with data analysts, QA and business teams to build scalable data models.
  • Implement data transformations, testing, and documentation within the DBT framework.
  • Work on Snowflake for data warehousing tasks, including data ingestion, query optimization, and performance tuning.
  • Use Python (preferred) for automation, scripting, and additional data processing as needed.

Required Skills:

  • 6+ years of experience in building data engineering pipelines.
  • Strong hands-on expertise with DBT and advanced SQL.
  • Experience working with modern columnar/MPP data warehouses, preferably Snowflake.
  • Knowledge of Python for data manipulation and workflow automation (preferred).
  • Good understanding of data modeling concepts, ETL/ELT processes, and best practices.
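The dbt workflow named above (a SQL transformation materialized as a model, then schema tests run against it) can be sketched with stdlib SQLite standing in for Snowflake; the table and column names here are purely illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id INT, event TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?, ?)", [
    (1, "buy", 10.0), (1, "buy", 5.0), (2, "buy", 7.5), (2, "refund", -7.5),
])

# "Model": an ELT transformation materialized as a table, like a dbt model
conn.execute("""
    CREATE TABLE user_revenue AS
    SELECT user_id, SUM(amount) AS revenue
    FROM raw_events
    GROUP BY user_id
""")

# dbt-style schema tests: unique + not_null on the model's key column
dupes = conn.execute(
    "SELECT COUNT(*) FROM (SELECT user_id FROM user_revenue"
    "         GROUP BY user_id HAVING COUNT(*) > 1)"
).fetchone()[0]
nulls = conn.execute(
    "SELECT COUNT(*) FROM user_revenue WHERE user_id IS NULL"
).fetchone()[0]
assert dupes == 0 and nulls == 0

rows = dict(conn.execute("SELECT user_id, revenue FROM user_revenue"))
print(rows)   # {1: 15.0, 2: 0.0}
```

In dbt proper, the transformation lives in a model file and the uniqueness/not-null checks are declared in `schema.yml`; the pattern of transform-then-test is the same.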
Uni Cards
Posted by Bisman Gill
Bengaluru (Bangalore)
4 yrs+
Up to ₹35L / yr (varies)
Product Management
AI Coding Tools
SQL
MS-Excel
Startups

Job Title: Product Manager 2

Role Overview

We are looking for a driven and analytical Product Manager to own end-to-end product initiatives across our fintech ecosystem. You will work closely with Engineering, Growth, Risk, and Operations teams to build scalable, customer-centric solutions that drive measurable business impact.

Key Responsibilities

  • Own the product lifecycle from problem discovery → PRD → launch → iteration
  • Write clear and structured PRDs, user stories, and acceptance criteria
  • Collaborate cross-functionally with Engineering, Design, Growth, Risk, and Business teams
  • Define product roadmaps and prioritize based on impact and feasibility
  • Analyze product performance, funnels, and user behavior using data
  • Drive experimentation, A/B testing, and continuous optimization
  • Translate business goals into scalable product solutions

Required Skills & Qualifications

  • 3–7 years of Product Management experience (preferably in fintech/startup environments)
  • Strong analytical skills with hands-on experience in SQL and Excel/Google Sheets
  • Experience working in Agile environments with engineering teams
  • Ability to think structurally and solve complex product problems
  • Strong stakeholder management and communication skills
  • Working understanding of APIs, integrations, and system workflows
  • Comfort using AI tools to enhance productivity, documentation, research, and product discovery
  • Basic understanding of prompt structuring to improve research, analysis, and workflow efficiency
Wissen Technology
Posted by Shrutika SaileshKumar
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹28L / yr
SQL
Python
Informatica
Data Transformation Tool (DBT)

Job Description:

We are looking for a skilled Database Developer with strong hands-on experience in SQL + Informatica and programming knowledge in Java or Python. The ideal candidate will design, develop, and maintain robust ETL pipelines and database solutions while collaborating with cross-functional teams to support business data needs and analytics initiatives. 

Key Responsibilities: 

  • Design, develop, and optimize SQL queries, stored procedures, triggers, and views for high performance and scalability. 
  • Develop and maintain ETL workflows using Informatica PowerCenter (or Informatica Cloud). 
  • Integrate and automate data flows between systems using Java or Python for custom scripts and applications. 
  • Perform data analysis, validation, and troubleshooting to ensure data accuracy and consistency across systems. 
  • Work closely with business analysts, data engineers, and application teams to understand data requirements and translate them into efficient database solutions. 
  • Implement performance tuning, query optimization, and indexing strategies for large datasets. 
  • Maintain data security, compliance, and documentation of ETL and database processes. 

Required Skills & Experience: 

  • Bachelor’s degree in Computer Science, Information Technology, or related field. 
  • 5–8 years of hands-on experience as a SQL Developer or ETL Developer.
  • Strong proficiency in SQL (Oracle, SQL Server, or PostgreSQL).
  • Hands-on experience with Informatica PowerCenter / Informatica Cloud.
  • Programming experience in Java or Python (for automation, data integration, or API handling).
  • Good understanding of data warehousing concepts, ETL best practices, and performance tuning.
  • Experience working with version control systems (e.g., Git) and Agile/Scrum methodologies. 

Good to Have: 

  • Exposure to cloud data platforms (AWS, Azure, or GCP). 
  • Familiarity with Unix/Linux scripting
  • Experience in data modeling and data governance frameworks.

 

Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Chennai, Ahmedabad
4 - 6 yrs
₹8L - ₹15L / yr
ASP.NET
.NET Core
MVC
C#
SQL

Position: Microsoft .NET Full Stack Developer

Experience: 4–6 Years

Open Positions: 10

Location: PAN India (Final Round – Face-to-Face Interview)

Budget: Up to 15 LPA

Notice Period: Immediate joiners preferred

Key Responsibilities:

· Work on highly distributed and scalable system architecture

· Design, develop, test, and maintain high-quality software solutions

· Ensure performance, security, and maintainability of applications

· Collaborate with cross-functional teams and stakeholders

· Perform system testing and resolve technical issues


Required Skills:

· Strong experience in ASP.NET, C#, .NET Core, MVC

· Hands-on experience with SQL Server / PostgreSQL

· Experience in Angular / React (Frontend technologies)

· Knowledge of microservices architecture & RESTful APIs

· Familiarity with CQRS pattern

· Exposure to AWS / Docker / Kubernetes

· Experience with CI/CD pipelines (Azure DevOps, Jenkins)

· Knowledge of Node.js is an added advantage

· Understanding of Agile methodology

· Good exposure to cybersecurity and compliance


Technology Stack:

· Microsoft .NET technologies (primary)

· Cloud platforms: AWS (SaaS/PaaS/IaaS)

· Databases: MSSQL, MongoDB, PostgreSQL

· Caching: Redis, Memcached

· Messaging queues: RabbitMQ, Kafka, SQS

 

Wissen Technology
Posted by Shrutika SaileshKumar
Bengaluru (Bangalore)
4 - 8 yrs
Best in industry
Tableau
SQL

 Role Summary

We are seeking a skilled and business-oriented BI Developer with 4 to 5 years of relevant experience in business intelligence, reporting, and data analytics. This role is ideal for a hands-on analyst who can translate business requirements into impactful dashboards, reports, and actionable insights using Tableau, Power BI, and SQL.

 

The successful candidate will work in a cross-functional environment, partnering with business stakeholders, data teams, and technology teams to support reporting modernization, improve data visibility, and enable better decision-making. This role requires strong technical capability, a solid understanding of data modeling fundamentals, and the ability to communicate insights clearly to both technical and non-technical audiences.

As a BI Developer, you will play a key role in designing, developing, and maintaining reporting and analytics solutions that support operational and strategic business needs. You will be responsible for building intuitive dashboards, analyzing data trends, defining KPIs, and contributing to data-driven process improvement initiatives.

Key Responsibilities

  • Design, develop, and maintain interactive dashboards and reports using Tableau and Power BI.
  • Partner with business users to understand reporting needs, define requirements, and translate them into effective BI solutions.
  • Write, optimize, and maintain SQL queries for data extraction, transformation, validation, and analysis.
  • Perform data analysis to identify trends, anomalies, performance drivers, and opportunities for business improvement.
  • Support KPI definition, metric standardization, and reporting governance across business functions.
  • Work with source data from structured systems and help ensure reporting accuracy and consistency.
  • Contribute to data modeling activities by understanding relationships between datasets, business rules, and reporting logic.
  • Create clear technical and functional documentation including report specifications, data mappings, business logic, and user guides.
  • Collaborate with data engineering, application development, and business teams to support reporting modernization initiatives.
  • Participate in testing activities including unit testing, data validation, user acceptance testing, and post-deployment support.

 

Must-Have Skills & Experience

  • Strong hands-on experience in Tableau and/or Power BI for dashboard development, reporting, and data visualization.
  • Strong SQL skills with experience writing complex queries, joins, aggregations, subqueries, and performance-tuned reporting queries.
  • Solid understanding of BI and analytics concepts including KPIs, trend analysis, scorecards, and management reporting.
  • Good knowledge of data modeling fundamentals, including table relationships, dimensions, facts, hierarchies, and basic star-schema concepts.
  • Experience working with structured datasets and multiple data sources to create meaningful analytical outputs.
  • Ability to gather requirements from business stakeholders and convert them into functional reporting solutions.
  • Strong analytical and problem-solving skills with high attention to detail and data accuracy.
  • Ability to work independently, troubleshoot issues, and document solutions clearly.
  • Effective written and verbal communication skills to collaborate with business, technology, and data teams.

 

Good-to-Have Skills

  • Exposure to Power BI Paginated Reports, DAX, or business process automation use cases.
  • Familiarity with Power Query, Excel advanced features, and data modeling concepts.
  • Understanding of ETL concepts and data pipeline dependencies.
  • Basic experience with Python for data analysis or reporting automation.
  • Exposure to UI/UX principles for dashboard usability and visual storytelling.
  • Familiarity with Agile delivery methodologies and working within sprint-based teams.
  • Experience supporting reporting transformation or modernization programs.

 

Experience Requirements

  • 4 to 5 years of relevant experience in business intelligence, reporting, analytics, or related roles.
  • Demonstrated experience building dashboards and reports in Tableau and Power BI within enterprise or business-facing environments.
  • Proven experience using SQL for data analysis, reporting, and data validation.
  • Experience working directly with business stakeholders to define requirements and deliver reporting solutions.
  • Experience in managing reporting deliverables across the full lifecycle, from requirements gathering through development, testing, deployment, and support. 


Remote only
3 - 6 yrs
₹3L - ₹6L / yr
PHP
NodeJS (Node.js)
Human Resource Management System (HRMS)
Data Structures
SQL

Software Developer (HRMS Focus | High-Volume Systems | WFH)

We are looking for a highly skilled software developer with strong expertise in HRMS (Human Resource Management Systems) and proven experience in handling bulk data and high-volume transactions.

The ideal candidate should have hands-on experience in building and scaling HRMS modules such as Payroll, Attendance, Leave Management, and Employee Lifecycle, with solid technical skills in PHP and/or Node.js.


This role offers Permanent Work From Home.

Key Responsibilities

  • Design, develop, and maintain robust HRMS modules (core focus):
  • Payroll processing (large-scale calculations)
  • Attendance & Leave Management
  • Employee lifecycle management
  • Handle bulk data operations (salary processing, employee records, financial data) with performance optimization.
  • Ensure high scalability and performance for systems handling large datasets and concurrent users.
  • Build and optimize REST APIs / GraphQL services.
  • Optimize databases for high-volume transactions and reporting systems.
  • Integrate third-party services (payment gateways, SMS, email, compliance tools).
  • Contribute to additional ERP modules (Education domain as secondary) like Admissions, Fees, LMS, etc.
  • Conduct code reviews and maintain coding standards.
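The bulk-data concern above, in its simplest form: batching a large payroll insert inside one transaction with `executemany` instead of committing row by row. The flat 10% deduction is a hypothetical stand-in for real payroll rules, and the schema is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (emp_id INTEGER PRIMARY KEY, gross REAL, net REAL)")

employees = [(i, 30000.0 + (i % 7) * 1000) for i in range(1, 10001)]

# One transaction for the whole batch: a single commit instead of 10,000,
# which is the difference between seconds and minutes at payroll scale
with conn:
    conn.executemany(
        "INSERT INTO payroll (emp_id, gross, net) VALUES (?, ?, ?)",
        # Hypothetical flat 10% deduction stands in for real payroll rules
        [(emp_id, gross, round(gross * 0.9, 2)) for emp_id, gross in employees],
    )

count, total_net = conn.execute("SELECT COUNT(*), SUM(net) FROM payroll").fetchone()
print(count)   # 10000
```

The same principle (set-based batches wrapped in explicit transactions) carries over to MySQL/PostgreSQL payroll runs, where per-row commits are the usual performance killer.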

Required Skills & Qualifications

  • Strong experience in PHP (Core PHP) and/or Node.js
  • Must-have: deep HRMS expertise covering payroll, attendance systems, and leave & policy management
  • Proven experience in handling bulk data / large datasets / high-load systems
  • Strong database skills:
  • MySQL, MongoDB, PostgreSQL (query optimization, indexing, performance tuning)
  • Experience with:
  • REST APIs / GraphQL
  • High-performance backend systems

Good to Have:

  • Experience in Education ERP systems
  • Frontend: JavaScript, React, Vue
  • Tools: Docker, CI/CD pipelines
  • Cloud: AWS / Azure / GCP
  • Experience with enterprise-scale or high-traffic applications

Preferred Experience

  • 3+ years of development experience
  • Minimum 2 years in HRMS development (strongly preferred)
  • Experience managing large-scale employee data and payroll systems


Oddr
Posted by Deepika Madgunki
Remote only
2 - 6 yrs
₹1L - ₹18L / yr
ETL
API
Microsoft Windows Azure
Integration
Boomi

Job Title: Integration Engineer


Integration Engineers are responsible for defining, developing, delivering, maintaining, and supporting end-to-end enterprise integration solutions. Using a designated iPaaS solution (e.g. Boomi), Integration Engineers integrate multiple cloud and on-premise applications, helping customers publish and consume data between Oddr and third-party systems for a variety of tasks.


Job Summary:

We are seeking a skilled and experienced Integration Engineer to join our Technology team in India. The ideal candidate will have a strong background in implementing low-code/no-code integration platforms as a service (iPaaS), with a preference for experience in Boomi. The role requires an in-depth understanding of SQL and RESTful APIs. Experience with Intapp's Integration Builder is a significant plus.


Key Responsibilities:

- Design and implement integration solutions using iPaaS tools.

- Collaborate with customers, product, engineering and business stakeholders to translate business requirements into robust and scalable integration processes.

- Develop and maintain SQL queries and scripts to facilitate data manipulation and integration.

- Utilize RESTful API design and consumption to ensure seamless data flow between various systems and applications.

- Lead the configuration, deployment, and ongoing management of integration projects.

- Troubleshoot and resolve technical issues related to integration solutions.

- Document integration processes and create user guides for internal and external users.

- Stay current with the latest developments in iPaaS technologies and best practices.


Qualifications:

- Bachelor’s degree in Computer Science, Information Technology, or a related field.

- Minimum of 2 years’ experience in an integration engineering role with hands-on experience in an iPaaS tool, preferably Boomi.

- Proficiency in SQL and experience with database management and data integration patterns.

- Strong understanding of integration patterns and solutions, API design, and cloud-based technologies.

- Good understanding of RESTful APIs and integration.

- Excellent problem-solving and analytical skills.

- Strong communication and interpersonal skills, with the ability to work effectively in a team environment.

- Experience with various integration protocols (REST, SOAP, FTP, etc.) and data formats (JSON, XML, etc.).


Preferred Skills:

- Boomi (or other iPaaS) certifications

- Experience with Intapp's Integration Builder is highly desirable but not mandatory.

- Strong SQL knowledge is important

- Experience in building E2E integrations and communicating with stakeholders

- Knowledge of Azure Functions, Logic Apps, and other Azure services is highly desirable


What we offer:

- Competitive salary and benefits package.

- Dynamic and innovative work environment.

- Opportunities for professional growth and advancement.

Read more
TalentXO
tabbasum shaikh
Posted by tabbasum shaikh
Gurugram
3 - 6 yrs
₹15L - ₹18L / yr
skill iconElastic Search
OpenSearch
.NET
SQL
TypeScript
+1 more

Role & Responsibilities

  • Design, develop, and test new features in the application.
  • Regular communication and collaboration with team members throughout the development process.
  • Implement, test, and fix bugs in application features.
  • Participate in fully agile Scrum deliveries as an active team member.
  • Design, build, and maintain efficient and reliable C# and Angular code.

Ideal Candidate

  • Strong full stack software engineer profile
  • Mandatory (Experience): Must have 3+ years of experience as a Fullstack developer
  • Mandatory (Backend): Must have strong backend development experience in C#, .NET and building RESTful APIs
  • Mandatory (Frontend): Must have hands-on frontend development experience in Angular 14+ and TypeScript/JavaScript
  • Mandatory (Core Skill): Must have working experience in Elasticsearch/OpenSearch (Non-negotiable)
  • Mandatory (DB): Exposure to SQL (Relational DBs)
  • Mandatory (Caching): Must have experience in caching mechanisms (in-memory/shared cache) and database scaling techniques like sharding & replication
  • Mandatory (Authentication): Familiarity with IdentityServer4 and Git
  • Mandatory (Engineering Practices): Must have experience writing unit tests and working in Agile/Scrum environments
  • Mandatory (Architecture Exposure): Candidates should have experience working on microservices architectures, event-driven systems, or distributed systems
  • Mandatory (Company): Product companies
  • Mandatory (Note 2): Please make sure candidate has detailed experience about above skills set in resume
  • Preferred (Skill): Familiarity with deployment processes and packaging libraries for NPM
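
For the caching requirement above, a minimal in-memory TTL cache sketch (illustrative only; a production system at this scale would typically reach for a shared cache such as Redis alongside sharding and replication):

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiry (illustrative only)."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("search:results", ["doc1", "doc2"])
print(cache.get("search:results"))  # ['doc1', 'doc2']
time.sleep(0.06)
print(cache.get("search:results"))  # None (expired)
```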


Read more
TalentXO
Remote only
6 - 10 yrs
₹30L - ₹40L / yr
Agentic AI
Data Product Designer
AI/ML
UX
skill iconFigma
+4 more

Role & Responsibilities

Own the user experience for Dentsu's AI-powered agentic tools and client-facing data products. This is a senior design role responsible for making complex multi-agent systems, Genie spaces, and automated workflows feel simple and intuitive for media teams and clients who are not technical. You will work at the intersection of AI capability and human usability, designing the interfaces that turn powerful backend intelligence into tools people actually want to use.

Key Responsibilities-

  • Lead end-to-end design for agentic AI products: from discovery and user research through wireframes, prototypes, and production-ready specs
  • Design intuitive interfaces for multi-agent systems that serve media planners, analysts, and clients with varying levels of technical sophistication
  • Create UX flows for Genie spaces, conversational data exploration, and automated reporting dashboards that surface insights without requiring SQL or code
  • Develop and maintain a design system for the Decisioning practice's AI product suite, ensuring visual and interaction consistency across all tools
  • Conduct user research with internal media teams and client stakeholders to identify pain points, map workflows, and validate design decisions
  • Design transparency and trust patterns for AI-driven experiences: how users understand what the system did, why, and how to correct it
  • Prototype and test interaction models for agent-to-human handoff, error recovery, and multi-step automated workflows
  • Collaborate closely with AI engineers and data scientists to ensure designs are technically feasible and ship at high fidelity
  • Design onboarding flows and training materials that accelerate adoption of new AI tools across agencies
  • Create client-facing presentation materials, demos, and visual assets that communicate tool capabilities and business value

Ideal Candidate

  • Strong Agentic AI & Data Product Designer Profile
  • Mandatory (Experience 1): Must have 6+ years of total experience in design, with 5+ years in Product Design for data-heavy or complex digital products — enterprise dashboards, analytics tools, workflow platforms, or similar complex environments — with shipped work at scale.
  • Mandatory (Experience 2): Must have 6+ months of experience designing for AI/ML-powered products, such as GenAI features, agentic AI features, and AI automation tools
  • Mandatory (Skill 1): Must have demonstrated expertise in complex workflow design, data visualization, and enterprise UX at scale — designing interfaces that surface insights and enable non-technical users to navigate powerful backend systems
  • Mandatory (Skill 2): Must have strong understanding of design systems and component-based design methodology, with experience building, contributing to, or maintaining systems that ensure visual and interaction consistency across a product suite
  • Mandatory (Skill 3): Must have the ability to design transparency and trust patterns for AI-driven experiences — including how users understand what the system did, why, and how to correct it; plus interaction models for agent-to-human handoff, error recovery, and multi-step automated workflows
  • Mandatory (Tools): Must have deep proficiency in Figma, including component libraries, auto-layout, and interactive prototyping
  • Mandatory (Stakeholder Mgmt & Communication): Must have excellent communication skills for presenting design rationale to engineering, product, and business stakeholders
  • Mandatory (Portfolio): Must have a strong portfolio demonstrating complex workflow design, data visualization work, and ideally AI/agentic or conversational interface projects.
  • Preferred (AI Interaction Design): Experience specifically designing chatbot, copilot, or agent-based interaction patterns
  • Preferred (Industry): Experience in media, advertising, or marketing technology industries


Read more
Quantiphi

at Quantiphi

3 candid answers
1 video
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai
4 - 12 yrs
Best in industry
skill iconPython
SQL
ETL
Google Cloud Platform (GCP)
PySpark

We are seeking a skilled Data Engineer to join the AI Platform Capabilities team supporting the UDP Uplift program.

In this role, you will design, build, and test standardized data and AI platform capabilities across a multi-cloud environment (Azure & GCP).

You will collaborate closely with AI use case teams to develop:

  • Scalable data pipelines
  • Reusable data products
  • Foundational data infrastructure

Your work will support advanced AI solutions such as:

  • GenAI
  • RAG (Retrieval-Augmented Generation)
  • Document Intelligence

Key Responsibilities

  • Design and develop scalable ETL/ELT pipelines for AI workloads
  • Build and optimize data pipelines for structured & unstructured data
  • Enable context processing & vector store integrations
  • Support streaming data workflows and batch processing
  • Ensure adherence to enterprise data models, governance, and security standards
  • Collaborate with DataOps, MLOps, Security, and business teams (LBUs)
  • Contribute to data lifecycle management for AI platforms
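
A minimal, stdlib-only sketch of the pipeline work described above — parsing raw events, dropping malformed or incomplete records, and emitting analytics-ready rows (the field names and cleaning rules are hypothetical; a real UDP pipeline would use PySpark or cloud-native services):

```python
import json

# Raw events as they might land from an upstream source.
raw_events = [
    '{"user": " alice ", "score": "91", "ts": "2024-05-01"}',
    '{"user": "bob", "score": null, "ts": "2024-05-01"}',   # missing score
    'not-json-at-all',                                      # malformed record
]

def transform(line):
    """Parse, validate, and normalise one record; return None to drop it."""
    try:
        rec = json.loads(line)
    except json.JSONDecodeError:
        return None  # route to a dead-letter store in a real pipeline
    if rec.get("score") is None:
        return None  # incomplete record, dropped for data quality
    return {"user": rec["user"].strip(), "score": int(rec["score"]), "ts": rec["ts"]}

clean = [r for r in (transform(l) for l in raw_events) if r is not None]
print(clean)  # [{'user': 'alice', 'score': 91, 'ts': '2024-05-01'}]
```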

Required Skills

  • 5–7 years of hands-on experience in Data Engineering
  • Strong expertise in Python and advanced SQL
  • Experience with GCP and/or Azure cloud-native data services
  • Hands-on experience with PySpark / Spark SQL
  • Experience building data pipelines for ML/AI workloads
  • Understanding of CI/CD, Git, and Agile methodologies
  • Knowledge of data quality, governance, and security practices
  • Strong collaboration and stakeholder management skills

Nice-to-Have Skills

  • Experience with Vector Databases / Vector Stores (for RAG pipelines)
  • Familiarity with MLOps / GenAIOps concepts (feature stores, model registries, prompt management)
  • Exposure to Knowledge Graphs / Context Stores / Document Intelligence workflows
  • Experience with DBT (Data Build Tool)
  • Knowledge of Infrastructure-as-Code (Terraform)
  • Experience in multi-cloud deployments (Azure + GCP)
  • Familiarity with event-driven systems (Kafka, Pub/Sub) & API integrations

Ideal Candidate Profile

  • Strong data engineering foundation with AI/ML exposure
  • Experience working in multi-cloud environments
  • Ability to build production-grade, scalable data systems
  • Comfortable working in cross-functional, fast-paced environments
Read more
Bengaluru (Bangalore)
2 - 5 yrs
₹20.4L - ₹24L / yr
skill iconPython
API
SQL
Systems design
Software deployment

Location: Bangalore

Experience: 2–5 years

Type: Full-time | On-site

Open Roles: 2

Start: Immediate

Why this role exists

Most systems work at a low scale.

Very few survive real production load, complex workflows, and enterprise edge cases.

We are building a platform that must:

  • Scale from 500K → 20M+ interactions/month
  • Handle complex insurance workflows reliably
  • Become easier to deploy as it grows, not harder

This role exists to build the backend foundation that makes this possible.

What you’ll do

You will not just write services.

You will design and own core platform systems.

1. Scale the platform without breaking architecture

  • Scale from 50K → 2M+ interactions/month
  • Ensure:
  • High availability
  • Low latency
  • Fault tolerance
  • Avoid large rewrites — build systems that evolve cleanly

2. Build the workflow automation (WA) engine

  • Design a flexible system with:
  • States
  • Stages
  • Cohorts
  • Dynamic workflows
  • Ensure workflows:
  • Handle edge cases reliably
  • Can be configured easily
  • Move from:
  • Hardcoded flows → configurable execution engine

3. Build the insurance-specific data layer

  • Design data models for:
  • Policy states
  • Claim workflows
  • Consent tracking
  • Ensure the system works across:
  • Multiple insurers
  • Multiple use cases
  • Build a platform-first data layer, not use-case-specific hacks

4. Make deployment and setup simple

  • Ensure workflows and data models are:
  • Easy to configure
  • Easy to launch
  • Reduce friction for:
  • Product teams
  • Deployment teams

5. Create a compounding data advantage

  • Build a data layer that:
  • Improves with every deployment
  • Captures structured signals
  • Ensure data becomes a long-term edge, not just storage

6. Own production reliability

  • Participate in on-call rotation across 3 engineers
  • Ensure:
  • Incidents are handled quickly
  • Root causes are fixed permanently
  • Build systems where reliability is shared, not individual

What success looks like

  • Platform scales to 2M+ interactions/month smoothly
  • Workflow engine supports complex, dynamic use cases
  • Data layer enables fast deployment across accounts
  • Edge cases are handled without constant firefighting
  • System becomes easier to use as it grows
  • Production issues are rare and predictable

Who you are

  • You have 2-5 years of backend engineering experience
  • You have built:
  • Scalable systems
  • Distributed services
  • You think in:
  • Systems
  • Data models
  • Trade-offs
  • You are comfortable owning:
  • Architecture
  • Production systems

What will make you stand out

  • Experience building:
  • Workflow engines
  • State machines
  • Data-heavy platforms
  • Strong understanding of:
  • System design
  • Distributed systems
  • Failure handling
  • Experience working in:
  • High-scale production environments

Why join

  • You will build the core backend of an AI platform
  • Your work directly impacts:
  • Scale
  • Reliability
  • Product capability
  • You will design systems that move from:
  • Use-case specific → platform-level infrastructure

What this role is not

  • Not just API development
  • Not limited to feature-level work
  • Not disconnected from production realities

What this role is

  • A system architect
  • A builder of scalable platforms
  • A driver of long-term technical advantage

One question to self-evaluate

Can you design backend systems that scale, handle edge cases, and become easier to use as they grow?


Read more
LearnTube.ai

at LearnTube.ai

2 candid answers
Vinayak Sharan
Posted by Vinayak Sharan
Remote, Mumbai
3 - 6 yrs
₹14L - ₹32L / yr
skill iconPython
FastAPI
skill iconDocker
skill iconAmazon Web Services (AWS)
SQL
+3 more

Role Overview:


As a Backend Developer at LearnTube.ai, you will ship the backbone that powers 2.3 million learners in 64 countries—owning APIs that crunch 1 billion learning events & the AI that supports it with <200 ms latency.


Skip the wait and get noticed faster by completing our AI-powered screening. Click this link to start your quick interview. It only takes a few minutes and could be your shortcut to landing the job! -https://bit.ly/LT_Python


What You'll Do:


At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns. As a Backend Engineer, your roles and responsibilities will include:

  • Ship Micro-services – Build FastAPI services that handle ≈ 800 req/s today and will triple within a year (sub-200 ms p95).
  • Power Real-Time Learning – Drive the quiz-scoring & AI-tutor engines that crunch millions of events daily.
  • Design for Scale & Safety – Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch.
  • Deploy Globally – Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes.
  • Automate Releases – GitLab CI/CD + blue-green / canary = multiple safe prod deploys each week.
  • Own Reliability – Instrument with Prometheus / Grafana, chase 99.9% uptime, trim infra spend.
  • Expose Gen-AI at Scale – Publish LLM inference & vector-search endpoints in partnership with the AI team.
  • Ship Fast, Learn Fast – Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod in < 2 weeks.


What makes you a great fit?


Must-Haves:

  • 3+ yrs Python back-end experience (FastAPI)
  • Strong with Docker & container orchestration
  • Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS) or GCP (GKE / Compute) in production
  • SQL/NoSQL (Postgres, MongoDB) + You’ve built systems from scratch & have solid system-design fundamentals

Nice-to-Haves

  • k8s at scale, Terraform
  • Experience with AI/ML inference services (LLMs, vector DBs)
  • Go / Rust for high-perf services
  • Observability: Prometheus, Grafana, OpenTelemetry


About Us: 


At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with:

  • AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback.
  • Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.


Meet the Founders: 


LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes. We’re proud to be recognised by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.


Why Work With Us? 


At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:

  • Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
  • Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
  • Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
  • Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
  • Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to.
  • Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.
Read more
SDS softwares

at SDS softwares

2 candid answers
1 recruiter
Tanavee Sharma
Posted by Tanavee Sharma
Remote only
0.6 - 0.8 yrs
₹0.8L - ₹0.9L / yr
Business Analysis
PowerBI
BRD
Tableau
MS-Excel
+5 more

Job Title: Business Analyst (BA)

Job Type: Full-Time | Remote | 5 Days Working

Salary: ₹7,000 – ₹8,000 per month

Experience Required: 6 months to 1 year (Freshers with internship experience can apply)

Joining: Immediate Joiners Only

About the Role:

We are looking for freshers who have strong foundational skills and knowledge in both Business Analysis and manual testing. In this position, you will be responsible for manually handling tasks related to both business analysis and testing functions.

Key Responsibilities:

  • Gather and analyze business requirements from stakeholders
  • Create documentation such as BRDs, FRDs, user stories, and process flows
  • Perform manual testing of software applications
  • Prepare test cases, test plans, and report bugs clearly
  • Collaborate with development and business teams to ensure product quality and requirement clarity
  • Provide timely updates and reports on progress and findings

Requirements:

  • Must have skills and knowledge in Business Analysis
  • Must be able to manage both roles (business analysis and testing) manually and independently
  • Proficiency in tools related to BA
  • Excellent communication skills in English (spoken and written)
  • Must have a personal laptop and a stable internet connection
  • Must be available to join immediately

Who Should Apply:

  • Freshers with 6 months to 1 year of experience in relevant roles
  • Candidates who are confident in handling both BA and manual testing responsibilities
  • Individuals looking to build a strong foundation in both domains in a remote, full-time role


Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Mumbai, Bengaluru (Bangalore)
4 - 6 yrs
₹3L - ₹11L / yr
skill icon.NET
ASP.NET
skill iconC#
skill iconDocker
Microservices
+1 more

🚀 Hiring: .NET Developer at Deqode

⭐ Experience: 4+ Years

📍 Location: Mumbai and Bangalore

⭐ Work Mode: Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)



We are looking for a skilled .NET Developer to design and develop scalable microservices and enterprise-grade applications. The role involves building secure REST APIs, writing clean and testable code, working with Docker-based deployments, and collaborating with cross-functional teams.


Key Responsibilities:

  • Develop .NET Core microservices
  • Build and secure REST APIs
  • Write unit & integration tests
  • Deploy applications using Docker
  • Ensure performance optimization and code quality


3 Mandatory Skills

  1. .NET Core / ASP.NET Core Web API
  2. Microservices & Docker
  3. REST API development with Unit Testing





Read more
BigThinkCode Technologies
Kumar AGS
Posted by Kumar AGS
Chennai
4 - 6 yrs
₹1L - ₹13L / yr
SQL
Data modeling
Pipeline management
Apache
Google BigQuery

At BigThinkCode, our technology solves complex problems. We are looking for a talented Data Engineer to join our Data team in Chennai.

 

Our ideal candidate will have expert knowledge of software development processes, programming, and problem-solving skills. This is an opportunity to join a growing team and make a substantial impact at BigThinkCode Technologies.

 

Please see below our job description, if interested apply / reply sharing your profile to connect and discuss.

 

Company: BigThinkCode Technologies

URL: https://www.bigthinkcode.com/

Work location: Chennai (work from office)

Experience required: 4 - 6 years


Joining time: Immediate – 4 weeks

Work Mode: Work from office (Hybrid)

 

Job Overview:

We are seeking a skilled Data Engineer to design, build, and maintain robust data pipelines and infrastructure. You will play a pivotal role in optimizing data flow, ensuring scalability, and enabling seamless access to structured/unstructured data across the organization. The ideal candidate will design, build, and optimize scalable data pipelines with strong SQL proficiency, data modelling expertise.

Key Responsibilities:

  • Design, develop, and maintain scalable pipelines to process structured and unstructured data.
  • Optimize and manage SQL queries for performance and efficiency in large-scale datasets.
  • Experience working with data warehouse solutions (e.g., Redshift, BigQuery, Snowflake) for analytics and reporting.
  • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions.
  • Experience in Implementing solutions for streaming data (e.g., Apache Kafka, AWS Kinesis) is preferred but not mandatory.
  • Ensure data quality, governance, and security across pipelines and storage systems.
  • Document architectures, processes, and workflows for clarity and reproducibility.
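
As a hedged illustration of the SQL-optimization work described above, here is a minimal, self-contained example (using Python's bundled sqlite3 rather than BigQuery, and an invented orders table) showing how adding an index changes a query plan from a full scan to an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, float(i)) for i in range(1000)])

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

# Without an index, the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

# With the index in place, the same query becomes an index search.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

print(plan_before[0][-1])  # e.g. "SCAN orders"
print(plan_after[0][-1])   # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

The same scan-versus-index reasoning applies to partitioning and clustering decisions in warehouses like BigQuery or Snowflake, even though the tooling differs.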

Required Technical Skills:

  • 4 or more years of experience in the Data Engineering field.
  • Expertise in SQL (complex queries, optimization, and database design).
  • Write optimized and production-grade SQL scripts for transformations and data validation.
  • Solid understanding and hands on experience in creating data pipelines and patterns.
  • Proficiency in any programming languages like Python or R for scripting, automation, and pipeline development.
  • Hands-on experience with Google BigQuery and Apache Airflow.
  • Experience working on any cloud-based platforms like AWS or GCP or Azure.
  • Experience working with structured data (RDBMS) and unstructured data (JSON, Parquet, Avro).
  • Familiarity with cloud-based data warehouses (Redshift, BigQuery, Snowflake).
  • Knowledge of version control systems (e.g., Git) and CI/CD practices.

Why Join Us:

·      Collaborative work environment.

·      Exposure to modern tools and scalable application architectures.

·      Medical cover for employee and eligible dependents.

·      Tax beneficial salary structure.

·      Comprehensive leave policy

·      Competency development training programs.

 

 

Read more
Remote, Noida, Gurugram, Pune, Nagpur, Jaipur, Gandhinagar
8 - 14 yrs
₹12L - ₹18L / yr
skill iconPython
SQL
PySpark
databricks
Snowflake schema
+6 more

Senior Data Engineer (Databricks, BigQuery, Snowflake)

Experience: 8+ Years in Data Engineering

Location: Remote | Onsite (Noida, Gurgaon, Pune, Nagpur, Jaipur, Gandhinagar)

Budget: Open / Competitive


Job Summary:

We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable data solutions that support advanced analytics and machine learning initiatives. You will lead the development of reliable, high-performance data systems and collaborate closely with data scientists to enable data-driven decision-making.

In this role, we expect a forward-thinking professional who utilizes AI-augmented development tools (such as Cursor, Windsurf, or GitHub Copilot) to increase engineering velocity and maintain high code standards in a modern enterprise environment.


Key Responsibilities:

  • Scalable Pipelines: Design, develop, and optimize end-to-end data pipelines using SQL, Python, and PySpark.
  • ETL/ELT Workflows: Build and maintain workflows to transform raw data into structured, analytics-ready datasets.
  • ML Integration: Partner with data scientists to deploy and integrate machine learning models into production environments.
  • Cloud Infrastructure: Manage and scale data infrastructure within AWS and Azure ecosystems.
  • Data Warehousing: Utilize Databricks and Snowflake for big data processing and enterprise warehousing.
  • Automation & IaC: Implement workflow orchestration using Apache Airflow and manage infrastructure as code via Terraform.
  • Performance Tuning: Optimize data storage, retrieval, and system performance across data warehouse platforms.
  • Governance & Compliance: Ensure data quality and security using tools like Unity Catalog or Hive Metastore.
  • AI-Augmented Development: Integrate AI tools and LLM APIs into data pipelines and use AI IDEs to streamline debugging and documentation.


Technical Requirements:

  • Experience: 8+ years of core Data Engineering experience in large-scale enterprise or consulting environments.
  • Languages: Expert proficiency in SQL and Python for complex data processing.
  • Big Data: Hands-on experience with PySpark and large-scale distributed computing.
  • Architecture: Strong understanding of ETL frameworks, data pipeline architecture, and data warehousing best practices.
  • Cloud Platforms: Deep working knowledge of AWS and Azure.
  • Modern Tooling: Proven experience with Databricks, Snowflake, and Apache Airflow.
  • Infrastructure: Experience with Terraform or similar IaC tools for scalable deployments.
  • AI Competency: Proficiency in using AI IDEs (Cursor/Windsurf) and integrating AI/ML models into production data flows.


Preferred Qualifications:

  • Exposure to data governance and cataloging tools (e.g., Unity Catalog).
  • Knowledge of performance tuning for massive-scale big data systems.
  • Familiarity with real-time data processing frameworks.
  • Experience in digital transformation and sustainability-focused data projects.
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Meghana Shinde
Posted by Meghana Shinde
Pune
8 - 12 yrs
Best in industry
Business Analysis
Risk Management
BRD
FRD
SQL

Job Description: Business Analyst (Capital Markets / Investment Management)

Position Summary

We are seeking an experienced Business Analyst with strong techno-functional expertise in Capital Markets, Investment Banking, Asset Management, and Risk Management. The ideal candidate will have hands-on experience across the full trade lifecycle, UAT leadership, risk frameworks, the FIX protocol, and digital transformation initiatives. This role requires close collaboration with front-, middle-, and back-office stakeholders, IT teams, and external vendors to deliver critical business and regulatory solutions.

Key Responsibilities

Business Analysis & Requirements Management

· Lead requirements gathering, documentation (BRD, FRD, User Stories), workflow mapping, and gap analysis.

· Conduct JAD sessions with Trading Desks, Portfolio Managers, Risk, Compliance, and Technology teams.

· Translate business requirements into detailed functional specifications and acceptance criteria.

· Manage and prioritize product backlogs using Agile/Scrum methodologies.

Trade Lifecycle & Capital Markets Expertise

· Support end-to-end trade flows across Equities, Derivatives, Fixed Income, Forex, Options, ETFs, Private Equity, and Structured Products.

· Validate front-to-back trade processes including order placement, execution, allocations, settlement, reconciliation, and reporting.

· Work with OMS/EMS platforms, market connectivity, and brokerage systems.

Risk Management (Market, Model, Liquidity, Credit)

· Analyze VaR, stress testing, scenario analysis, exposure calculations, and liquidity metrics (LCR/NSFR).

· Contribute to market risk policy formulation, governance, and regulatory compliance.

· Identify risk hotspots, process gaps, and control weaknesses with actionable remediation plans.

· Support regulatory reporting including Mark-to-Market and Notional Change requirements.

UAT, QA & Testing Leadership

· Lead end-to-end UAT cycles for trading, risk, and investment applications.

· Create test plans, test cases, and defect logs; track issues through JIRA until closure.

· Perform regression, functional, and production validation testing.

· Coordinate with QA, development teams, and Front Office for seamless deployment.

FIX Protocol & System Integrations

· Gather and validate FIX requirements for OMS/EMS integration.

· Support FIX message mapping, configuration, certification, and UAT.

· Collaborate with brokers, exchanges, and internal development teams for connectivity and workflow enhancements.

Client Management & Onboarding (Buy-side/Sell-side)

· Manage onboarding for clients such as Hedge Funds, Family Offices, Asset Managers, and Prime Brokers.

· Conduct requirement workshops, product demos, trainings, and post-implementation support.

· Serve as the primary point of contact for issue resolution, escalations, and enhancement discussions.

Project & Stakeholder Management

· Drive project plans, milestones, and sprint activities (Planning, Grooming, Stand-ups, Retrospectives).

· Ensure alignment between business needs and technology delivery.

· Prepare executive-level dashboards, presentations, and risk summaries for senior stakeholders.

Skills & Competencies

Technical Skills

· Tools & Platforms: Bloomberg, Refinitiv, FactSet, BlackRock Aladdin, Robinhood, IRIS, Falcon

· Databases: SQL, Excel (advanced), data reconciliation tools

· Project Tools: JIRA, Monday.com, Confluence, MS Visio, Axure

· Risk Systems: VAR models, stress testing tools, exposure monitoring systems

Core Competencies

· Strong stakeholder management & communication

· Business rules analysis & functional documentation

· UI/UX requirement mapping

· Data migration & system integration

· Analytical thinking & problem-solving

· Cross-functional collaboration

Qualifications

· 9+ years of experience in Capital Markets, Investment Management, and Trading/Risk Systems.

· MBA Finance (preferred) / BBA Finance.

· Certifications:

o NISM – Equity, Derivatives, Options Strategies

o CFI – Fixed Income Fundamentals

o Microsoft – Career Essentials in Business Analysis

o FRM (GARP) – Pursuing

Preferred Experience

· Working on end-to-end trading platform implementations.

· Exposure to Hedge Funds, PMS, AIF, Private Equity, and Wealth Management workflows.

· Knowledge of regulatory frameworks (Basel II–IV, SEBI, Risk Governance).

· Experience authoring policies, SOPs, and process documentation.

Soft Skills

· Excellent verbal and written communication.

· Strong analytical and quantitative capabilities.

· Ability to translate technical concepts to business stakeholders.

· High ownership, deadline orientation, and team collaboration skills.

 

 

Read more
Bengaluru (Bangalore)
5 - 10 yrs
₹1L - ₹10L / yr
databricks
PySpark
Apache Spark
ETL
CI/CD
+10 more

Profile - Databricks Developer

Experience- 5+ years

Location- Bangalore (On site)

PF & BGV is Mandatory


Job Description: -

* Design, build, and optimize data pipelines and ETL/ELT workflows using Databricks and

Apache Spark (PySpark).

* Develop scalable, high performance data solutions using Spark distributed processing.

* Lead engineering initiatives focused on automation, performance tuning, and platform

modernization.

* Implement and manage CI/CD pipelines using Git-based workflows and tools such as

GitHub Actions or Jenkins.

* Collaborate with cross-functional teams to translate business needs into technical

solutions.

* Ensure data quality, governance, and security across all processes.

* Troubleshoot and optimize Spark jobs, Databricks clusters, and workflows.

* Participate in code reviews and develop reusable engineering frameworks.

* Apply AI tools to improve productivity and support daily engineering activities.

* Strong knowledge and hands-on experience in Databricks Genie, including prompt engineering, workspace usage, and automation.
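The pipeline responsibilities above follow a load-then-transform (ELT) pattern. A minimal, self-contained sketch of that pattern is below — stdlib sqlite3 stands in for Databricks/PySpark here purely to keep the example runnable, and the trades schema and values are invented for illustration:

```python
import sqlite3

def run_elt(raw_rows):
    """Load raw rows, then transform with SQL into a cleaned aggregate table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE raw_trades (symbol TEXT, qty INTEGER, price REAL)")
    con.executemany("INSERT INTO raw_trades VALUES (?, ?, ?)", raw_rows)
    # Transform step: filter out bad records, aggregate notional per symbol.
    con.execute("""
        CREATE TABLE trades_clean AS
        SELECT symbol,
               SUM(qty)         AS total_qty,
               SUM(qty * price) AS notional
        FROM raw_trades
        WHERE qty > 0 AND price > 0
        GROUP BY symbol
    """)
    return {
        sym: (tq, notional)
        for sym, tq, notional in con.execute(
            "SELECT symbol, total_qty, notional FROM trades_clean ORDER BY symbol")
    }

rows = [("AAPL", 10, 100.0), ("AAPL", 5, 102.0),
        ("MSFT", -3, 300.0), ("MSFT", 4, 310.0)]
print(run_elt(rows))  # → {'AAPL': (15, 1510.0), 'MSFT': (4, 1240.0)}
```

In Databricks the same shape would use PySpark DataFrames and Delta tables, with the filter/aggregate expressed either in Spark SQL or the DataFrame API.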

Required Skills & Experience:

* 5+ years of experience in Data Engineering or related fields.

* Strong hands-on expertise in Databricks (notebooks, Delta Lake, job orchestration).

* Deep knowledge of Apache Spark (PySpark, Spark SQL, optimization techniques).

* Strong proficiency in Python for data processing, automation, and framework development.

* Strong proficiency in SQL, including complex queries, performance tuning, and analytical functions.

* Strong knowledge of Databricks Genie and leveraging it for engineering workflows.

* Strong experience with CI/CD and Git-based development workflows.

* Proficiency in data modeling and ETL/ELT pipeline design.


* Experience with automation frameworks and scheduling tools.

* Solid understanding of distributed systems and big data concepts.

Read more
Gradera AI Technologies
Sirisha Jonnada
Posted by Sirisha Jonnada
Hyderabad
4 - 7 yrs
₹20L - ₹50L / yr
skill iconPython
SQL
databricks

Role & Responsibilities

 

·      Collect, clean, and analyze large structured and unstructured datasets from multiple internal and external sources

·      Conduct thorough exploratory data analysis (EDA) to understand data distributions, relationships, outliers, and missing value patterns

·      Profile and audit datasets to assess data quality, completeness, consistency, and fitness for modeling

·      Investigate and document data lineage — understanding where data originates, how it flows, and how it transforms across systems

·      Identify and resolve data anomalies, inconsistencies, and integrity issues in collaboration with data engineering teams

·      Develop a deep understanding of the business domain and the underlying data that represents it — including what each field means, how it is captured, and what its limitations are

·      Translate raw, messy, real-world data into clean, well-understood analytical datasets ready for modeling and reporting

·      Apply statistical techniques such as correlation analysis, hypothesis testing, variance analysis, and distribution fitting to extract meaningful signals from noise

·      Build and deploy machine learning models including regression, classification, clustering, NLP, and time-series analysis

·      Design, evaluate, and analyze A/B experiments and controlled tests using causal inference techniques

·      Develop data-driven recommendations backed by rigorous statistical reasoning

·      Write clean, production-ready code in Python or R

·      Collaborate with data engineers to build reliable data pipelines and feature stores

·      Deploy and monitor ML models using MLOps best practices on cloud infrastructure

·      Build dashboards and self-serve analytics tools to support stakeholder decision-making
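The A/B experimentation bullet above can be made concrete with a two-proportion z-test, one of the standard tools for comparing conversion rates between a control and a treatment group. The counts below are invented for illustration; only the standard library is used:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: p_a == p_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal tail, via erfc.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical experiment: 12.0% vs 15.0% conversion on 1,000 users each.
z, p = two_proportion_z_test(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(z, p)  # z ≈ -1.96, p ≈ 0.05
```

In practice this sits inside a larger experiment design (power analysis up front, guardrail metrics, and causal-inference adjustments when randomization is imperfect).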

 

Data Understanding & Analysis Skills

 

·      Strong ability to interrogate unfamiliar datasets and quickly develop a working understanding of their structure, semantics, and quirks

·      Experience working with messy, incomplete, or poorly documented real-world data

·      Skilled in identifying hidden patterns, trends, seasonality, and anomalies through visual and statistical exploration

·      Ability to ask the right questions about data — challenging assumptions, validating sources, and understanding the context in which data was collected

·      Proficiency in data profiling, descriptive statistics, and summary reporting to communicate the shape and health of a dataset

·      Experience creating data dictionaries, documentation, and data quality reports to support team-wide data understanding

·      Comfort working across structured (relational tables), semi-structured (JSON, XML), and unstructured (text, logs, sensor streams) data formats
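The profiling and data-quality skills above amount to systematically summarizing each column's completeness and cardinality. A tiny sketch of such a pass over dict-shaped rows (field names invented for illustration):

```python
def profile(records, columns):
    """Return {column: {"rows", "missing", "distinct"}} for a list of dict rows."""
    report = {}
    for col in columns:
        values = [r.get(col) for r in records]
        non_null = [v for v in values if v is not None]
        report[col] = {
            "rows": len(values),
            "missing": len(values) - len(non_null),   # nulls / absent fields
            "distinct": len(set(non_null)),           # cardinality of observed values
        }
    return report

rows = [
    {"id": 1, "country": "IN"},
    {"id": 2, "country": None},
    {"id": 3, "country": "IN"},
]
print(profile(rows, ["id", "country"]))
```

A real profiling pass would add type inference, value distributions, and outlier flags, and would feed a data-quality report or data dictionary of the kind mentioned above.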

 

Technical Skills Required

 

·      Proficiency in Python (pandas, NumPy, scikit-learn, PyTorch or TensorFlow) and/or R

·      Strong SQL skills with hands-on experience in DB2 and SQL Server

·      Experience with Databricks for large-scale data processing, feature engineering, and model training

·      Familiarity with cloud platforms: Azure or AWS

·      Experience with data warehouses and big data platforms (Databricks, Snowflake, or Redshift)

·      Knowledge of MLOps tools such as MLflow, Kubeflow, or Airflow

·      Experience with streaming data technologies such as Kafka or Spark

·      Solid foundation in probability, statistics, linear algebra, and experimental design

 

Nice to Have

 

·      Experience with deep learning, NLP, computer vision, or Bayesian methods

·      Familiarity with real-time or streaming data pipelines

·      Open-source contributions or published research

Read more
Global MNC serving 40+ Fortune 500 Companies

Global MNC serving 40+ Fortune 500 Companies

Agency job
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹26L / yr
Generative AI
Retrieval Augmented Generation (RAG)
skill iconMachine Learning (ML)
LangGraph
langchain
+11 more

Want to work on exciting GenAI projects for Fortune 500 companies across multiple sectors? Then read on..


About Company:

CSG is a multinational company with a presence in 20 countries and 1,600+ engineers. The company works with more than 40 Fortune 500 customers, including Sony, Samsung, ABB, Thyssenkrupp, Toyota, and Mitsubishi.


Job Description:

We are looking for a talented Generative AI Developer to join our dynamic AI/ML team. This position offers an exciting opportunity to leverage cutting-edge Generative AI (GenAI) technologies to drive innovation and solve real-world problems. You will be responsible for developing and optimizing GenAI-based applications, implementing advanced techniques such as Retrieval-Augmented Generation (RAG), Retrieval Interleaved Generation (RIG), agentic frameworks, and vector databases. This is a collaborative role in which you will work directly with customers and cross-functional teams to design, implement, and optimize AI-driven solutions. Exposure to cloud-native AI platforms such as Amazon Bedrock and Microsoft Azure OpenAI is highly desirable.


Key Responsibilities

Generative AI Application Development:

Design, develop, and deploy GenAI-driven applications to address complex industrial challenges.

Implement Retrieval-Augmented Generation (RAG) and agentic frameworks.


Data Management & Optimization:

Design and optimize document chunking strategies tailored to specific datasets and use cases.

Build, manage, and optimize data embeddings for high-performance similarity searches across vector databases.
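The chunking responsibility above is often the first step of RAG document preparation. A minimal sketch of one of the simplest strategies — fixed-size character windows with overlap — where chunk size and overlap are illustrative parameters to tune per dataset:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character windows for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    # Each window starts `step` characters after the previous one, so
    # consecutive chunks share `overlap` characters of context.
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

print(chunk_text("abcdefghij", chunk_size=4, overlap=1))  # → ['abcd', 'defg', 'ghij']
```

Production systems usually prefer semantic or sentence-aware splitting over raw character windows, but the overlap idea — preserving context across chunk boundaries — carries over.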


Collaboration & Integration:

Work closely with data engineers and scientists to integrate AI solutions into existing pipelines.

Collaborate with cross-functional teams to ensure seamless AI implementation.


Cloud & AI Platform Utilization:

Explore and implement best practices for utilizing cloud-native AI platforms, such as Amazon Bedrock and Azure OpenAI, to enhance solution delivery.

Continuous Learning & Innovation:

Stay updated with the latest trends and emerging technologies in the GenAI and AI/ML fields, ensuring our solutions remain cutting-edge.


Requirements:

The ideal candidate will have strong experience in Generative AI technologies, particularly in the areas of RAG, document chunking, and vector database management. They will be able to quickly adapt to evolving AI frameworks and leverage cloud-native platforms to create efficient, scalable solutions. You will be working in a fast-paced and collaborative environment, where innovation and the ability to learn and grow are key to success.

- 3 to 5 years of overall experience in software development, with 3 years focused on AI/ML.

- Minimum 2 years of experience specifically working with Generative AI (GenAI) technologies.

- Working knowledge of Python, PySpark, and SQL is necessary for day-to-day tasks.

- Proven ability to work in a collaborative, fast-paced, and innovative environment.


Technical Skills:

- Generative AI Frameworks & Technologies:

- Expertise in Generative AI frameworks, including prompt engineering, fine-tuning, and few-shot learning.

- Familiarity with frameworks such as T5 (Text-to-Text Transfer Transformer), LangChain, LangGraph, and open-source stacks such as Ollama, Mistral, and DeepSeek.

- Strong knowledge of Retrieval-Augmented Generation (RAG) for combining LLMs with external data retrieval systems.


Data Management:

- Experience in designing chunking strategies for different datasets.

- Expertise in data embedding techniques and experience with vector databases such as Pinecone and ChromaDB.
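The core operation a vector database performs is similarity search over embeddings. A brute-force sketch of that operation with cosine similarity — the toy 2-D vectors stand in for real embedding vectors, and real stores use approximate-nearest-neighbour indexes to do this at scale:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, store, k=2):
    """store: {doc_id: vector}; return the k doc_ids most similar to query."""
    ranked = sorted(store, key=lambda d: cosine(query, store[d]), reverse=True)
    return ranked[:k]

store = {"doc_a": [1.0, 0.0], "doc_b": [0.9, 0.1], "doc_c": [0.0, 1.0]}
print(top_k([1.0, 0.05], store, k=2))  # → ['doc_a', 'doc_b']
```

In a RAG pipeline, the retrieved doc_ids map back to chunks whose text is placed in the LLM prompt as grounding context.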

- Programming & AI/ML Libraries:

- Strong programming skills in Python.

- Experience with AI/ML libraries such as TensorFlow, PyTorch, and Hugging Face Transformers.


Cloud Platforms & Integration:

- Familiarity with cloud services for AI/ML workloads (AWS, Azure).

- Experience with API integration for AI services and building scalable applications.

- Certifications (Optional but Desirable):

- Certification in AI/ML (e.g., TensorFlow, AWS Certified Machine Learning Specialty).

- Certification or coursework in Generative AI or related technologies.

Read more