50+ SQL Jobs in India
Responsibilities:
- Develop and maintain cross-platform applications using .NET MAUI.
- Convert UI/UX designs into responsive and pixel-perfect layouts using XAML.
- Implement new features, resolve bugs, and optimize application performance across Android, iOS, Windows, and macOS.
- Integrate iOS and Android native libraries and handle native library interop.
- Bring at least 1 year of experience in native Android and iOS development (Kotlin/Java, Swift/Objective-C).
- Integrate applications with RESTful APIs, third-party services, authentication mechanisms, and local storage solutions such as SQLite.
- Apply strong knowledge of MVVM architecture, design patterns, and SOLID principles to ensure clean, maintainable, and scalable code.
- Collaborate closely with product managers, UI/UX designers, QA teams, and backend developers to deliver high-quality releases.
- Uphold code quality through code reviews, unit testing, and adherence to engineering best practices.
- Participate actively in sprint planning, technical discussions, and architectural decision-making.
- Prepare and maintain project documentation, technical specifications, workflows, and implementation guides.
- Mentor junior developers and provide technical leadership and guidance.
Role & Responsibilities
- Design, develop, and maintain scalable full stack applications using modern backend and frontend technologies.
- Build and maintain backend services and APIs using technologies such as C#, .NET, Java, or similar backend frameworks.
- Develop responsive and efficient frontend applications using Angular (14+), TypeScript, and JavaScript, or a similar frontend framework.
- Work on applications deployed in on-premise infrastructure environments, ensuring stability and performance.
- Implement and optimize search capabilities using OpenSearch.
- Design and maintain database structures using relational databases (SQL) and NoSQL databases such as MongoDB.
- Collaborate with cross-functional teams to design, implement, test, and deploy new product features.
- Troubleshoot issues, debug applications, and ensure high reliability and performance of the platform.
- Participate in Agile/Scrum development processes, collaborating closely with team members throughout the development lifecycle.
- Contribute to technical discussions, architecture decisions, and engineering best practices.
Ideal Candidate
- Strong full-stack software engineer with experience developing on-premise applications
- Mandatory (Experience 1): Must have 5+ years of experience as a full-stack developer
- Mandatory (Experience 2): Must have hands-on experience in developing and supporting applications deployed on on-premise infrastructure (not cloud)
- Mandatory (Backend): Must have strong backend development experience using technologies such as C#, .NET, Java, or similar backend frameworks
- Mandatory (Frontend): Must have strong frontend development experience using technologies such as React, Angular, TypeScript, JavaScript or similar frontend frameworks
- Mandatory (Core Skill): Must have exposure to OpenSearch (a query sketch follows this list)
- Mandatory (DB): Exposure to SQL (Relational DBs) & NoSQL databases like MongoDB
- Mandatory (Company): Experience at B2B SaaS companies
- Mandatory (Note 1): This is a hybrid role in Udyog Vihar, Gurgaon
- Mandatory (Note 2): The role will convert into a core team position, so we need candidates with strong intent to stay
- Preferred (Skill): Experience leading technical design discussions, mentoring engineers, and setting engineering standards or architectural guidelines
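For illustration, a minimal sketch of the kind of OpenSearch exposure this role asks for, using the official opensearch-py client; the host, credentials, index, and field names are placeholders, not details from the posting:

```python
# Hedged sketch: basic OpenSearch full-text query via opensearch-py.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],  # placeholder host
    http_auth=("admin", "admin"),                 # placeholder credentials
    use_ssl=False,
)

# A bool query combining a full-text match with a range filter,
# the bread and butter of search-backed product features.
response = client.search(
    index="products",  # hypothetical index
    body={
        "query": {
            "bool": {
                "must": [{"match": {"title": "wireless headphones"}}],
                "filter": [{"range": {"price": {"lte": 5000}}}],
            }
        },
        "size": 10,
    },
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```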
About Forbes Advisor
Forbes Digital Marketing Inc. is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.
We do this by combining data-driven content, rigorous product comparisons, and user-first design — all built on top of a modern, scalable platform. Our global teams bring deep expertise across journalism, product, performance marketing, data, and analytics.
The Role
We’re hiring a Data Scientist to help us unlock growth through advanced analytics and machine learning. This role sits at the intersection of marketing performance, product optimization, and decision science.
You’ll partner closely with Paid Media, Product, and Engineering to build models, generate insight, and influence how we acquire, retain, and monetize users. From campaign ROI to user segmentation and funnel optimization, your work will directly shape how we grow. This role is ideal for someone who thrives on business impact, communicates clearly, and wants to build reusable, production-ready insights, not just run one-off analyses.
What You’ll Do
Marketing & Revenue Modelling
• Own end-to-end modelling of LTV, user segmentation, retention, and marketing efficiency to inform media optimization and value attribution (a small pandas sketch follows this list).
• Collaborate with Paid Media and RevOps to optimize SEM performance, predict high-value cohorts, and power strategic bidding and targeting.
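As a flavour of the retention side of this work, here is a small, hedged pandas sketch of a cohort-retention table; the column names and toy data are invented for the example:

```python
# Hedged sketch: monthly cohort retention with pandas.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "signup_date": pd.to_datetime(
        ["2024-01-05", "2024-01-05", "2024-02-10", "2024-02-10", "2024-01-20"]),
    "event_date": pd.to_datetime(
        ["2024-01-06", "2024-02-15", "2024-02-11", "2024-03-01", "2024-01-21"]),
})

# Cohort = signup month; age = whole months between signup and activity.
events["cohort"] = events["signup_date"].dt.to_period("M")
events["age"] = (events["event_date"].dt.to_period("M") - events["cohort"]).apply(lambda d: d.n)

# Distinct active users per cohort at each age...
active = events.groupby(["cohort", "age"])["user_id"].nunique().unstack(fill_value=0)

# ...normalized by cohort size to get the retention curve.
retention = active.div(active[0], axis=0)
print(retention)
```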
Product & Growth Analytics
• Work closely with Product Insights and General Managers (GMs) to define core metrics, KPIs, and success frameworks for new launches and features.
• Conduct deep-dive analysis of user behaviour, funnel performance, and product engagement to uncover actionable insights.
• Monitor and explain changes in key product metrics, identifying root causes and business impact.
• Work closely with Data Engineering to design and maintain scalable data pipelines that support machine learning workflows, model retraining, and real-time inference.
Predictive Modelling & Machine Learning
• Build predictive models for conversion, churn, revenue, and engagement using regression, classification, or time-series approaches (a small sketch follows this list).
• Identify opportunities for prescriptive analytics and automation in key product and marketing workflows.
• Support development of reusable ML pipelines for production-scale use cases in product recommendation, lead scoring, and SEM planning.
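To make the modelling bullets concrete, a hedged scikit-learn sketch of a churn-style classifier on synthetic data; the feature names are hypothetical, not Forbes Advisor's actual features:

```python
# Hedged sketch: churn classification on synthetic data with scikit-learn.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.poisson(5, n),        # sessions_last_30d (hypothetical feature)
    rng.exponential(20, n),   # days_since_last_visit
    rng.normal(100, 30, n),   # lifetime_revenue
])
# Synthetic label: churn is more likely with few sessions and long inactivity.
y = ((X[:, 1] > 25) & (X[:, 0] < 5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, proba), 3))
```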
Collaboration & Communication
• Present insights and recommendations to a variety of stakeholders — from ICs to executives — in a clear and compelling manner.
• Translate business needs into data problems, and complex findings into strategic action plans.
• Work cross-functionally with Engineering, Product, BI, and Marketing to deliver and deploy your work.
What You’ll Bring
Minimum Qualifications
• Bachelor’s degree in a quantitative field (Mathematics, Statistics, CS, Engineering, etc.).
• 4+ years in data science, growth analytics, or decision science roles.
• Strong SQL and Python skills (Pandas, Scikit-learn, NumPy).
• Hands-on experience with Tableau, Looker, or similar BI tools.
• Familiarity with LTV modelling, retention curves, cohort analysis, and media attribution.
• Experience with GA4, Google Ads, Meta, or other performance marketing platforms.
• Clear communication skills and a track record of turning data into decisions.
Nice to Have
• Experience with BigQuery and Google Cloud Platform (or equivalent).
• Familiarity with affiliate or lead-gen business models.
• Exposure to NLP, LLMs, embeddings, or agent-based analytics.
• Ability to contribute to model deployment workflows (e.g., using Vertex AI, Airflow, or Composer).
Why Join Us?
• Remote-first and flexible — work from anywhere in India with global exposure.
• Monthly long weekends (every third Friday off).
• Generous wellness stipends and parental leave.
• A collaborative team where your voice is heard and your work drives real impact.
• Opportunity to help shape the future of data science at one of the world’s most trusted brands.
About the Role
Pendo is looking for a Software Engineer to help build and scale the platform that powers our integrations with enterprise systems such as Salesforce, Slack, Segment, and other partner tools. This team develops the services, APIs, data pipelines, and user interfaces that enable customers to seamlessly connect Pendo into their product and data ecosystems.
In this role, you will primarily focus on building scalable backend systems while also contributing to the frontend experiences that allow customers to configure, manage, and monitor integrations. You’ll collaborate closely with product managers, designers, and infrastructure teams to deliver reliable, high-performance capabilities used by millions of users.
What You'll Do
- Design and build scalable backend services and APIs that power Pendo’s integrations platform.
- Develop and maintain distributed, event-driven data pipelines that process and sync high volumes of behavioral and product analytics data.
- Contribute to frontend applications that allow customers to configure, manage, and monitor integrations and data workflows.
- Lead technical initiatives from design through implementation, testing, and production rollout.
- Integrate with third-party APIs and enterprise platforms using technologies such as REST, webhooks, and OAuth (a webhook sketch follows this list).
- Collaborate with product, design, infrastructure, and partner teams to translate business needs into high-quality technical solutions.
- Use modern development workflows and AI-powered tools to improve developer productivity and streamline engineering processes.
- Participate in design reviews and promote best practices in testing, observability, performance, and system reliability.
- Contribute to improving platform scalability, availability, and operational excellence.
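As a taste of the webhook side of this work, a minimal Python sketch of verifying an incoming webhook signature; the header scheme and secret are hypothetical, since each provider documents its own:

```python
# Hedged sketch: constant-time HMAC-SHA256 webhook signature verification.
import hashlib
import hmac

WEBHOOK_SECRET = b"replace-with-shared-secret"  # placeholder

def verify_signature(payload: bytes, signature_header: str) -> bool:
    """Recompute the HMAC of the raw body and compare in constant time."""
    expected = hmac.new(WEBHOOK_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# Usage with a fake payload and its correct signature:
body = b'{"event": "user.created", "id": 123}'
sig = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
print(verify_signature(body, sig))  # True
```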
What We're Looking For
- Experience building backend services, APIs, or distributed systems.
- Experience developing modern web applications using frameworks such as Vue, React, or Angular.
- Strong proficiency in at least one backend language such as Go, Java, Python, or C++.
- Experience working with cloud infrastructure such as AWS or GCP.
- Familiarity with distributed systems, event-driven architectures, or high-throughput data pipelines.
- Experience writing and maintaining unit, integration, and end-to-end tests.
- Strong collaboration and communication skills.
Nice to Have
- Experience building integration platforms or working with third-party APIs.
- Familiarity with authentication models such as OAuth and enterprise SaaS integrations.
- Experience working with analytics or behavioral event data.
- Experience leveraging AI-assisted development tools or working with modern AI workflows.
Technologies We Use
- Frontend: Vue, Vuex, React, Angular, Highcharts, Jest, Cypress
- Backend: Go, Java, Python, C++
- Cloud & Data: AWS, GCP, Redis, Pub/Sub, SQL/NoSQL
- AI / ML: GenAI, LLMs, LangChain, MLOps
Core Technical Skills
- Strong in Core Java, Java 8, OOPs
- Hands-on experience with Spring Boot, Spring MVC, Spring Data JPA
- Experience in Microservices Architecture & REST API development
- Good knowledge of SQL databases (MySQL / SQL Server / PostgreSQL)
- Experience with AWS services (Lambda, S3, DynamoDB, EC2)
- Familiarity with Kafka / Event-driven architecture (good to have)
- Knowledge of Spring Security, JWT, OAuth 2.0
- Experience with Docker, Jenkins, Git
🔧 Development Responsibilities
- Design and develop scalable REST APIs and microservices
- Work on backend systems handling large data and real-time processing
- Optimize database queries, performance, and indexing
- Collaborate with cross-functional teams for end-to-end delivery
🛠️ Support (L2) Responsibilities
- Handle production issues, bug fixing, and root cause analysis
- Provide post-deployment and migration support
- Troubleshoot performance issues, DB deadlocks, and API failures
- Work on incident resolution and system stability improvements
- Coordinate with L1/support teams (if applicable)
Description
As a Power Apps Developer, you will be at the forefront of crafting innovative, low‑code solutions that streamline business processes and empower end‑users across the organization. You will collaborate closely with functional analysts, business stakeholders, and fellow developers to translate complex requirements into intuitive, scalable applications on the Microsoft Power Platform. The role offers a dynamic environment where continuous learning is encouraged, providing access to the latest Power Apps features, Azure services, and integration techniques. You will contribute to a culture of knowledge sharing, participate in code reviews, and mentor junior team members, ensuring high‑quality deliverables that drive operational efficiency and measurable business impact.
Requirements:
- 5–15 years of experience developing enterprise‑grade solutions using Microsoft Power Apps, Power Automate, and Power BI.
- Strong proficiency in Canvas and Model‑Driven apps, Common Data Service (Dataverse), and integration with Azure services (e.g., Azure Functions, Logic Apps).
- Solid understanding of relational databases, SQL, and data modeling concepts.
- Experience with JavaScript, TypeScript, and RESTful APIs for extending Power Apps functionality.
- Excellent problem‑solving abilities, strong communication skills, and a collaborative mindset.
- Relevant certifications such as Microsoft Power Platform Developer Associate (PL‑400) are a plus.
Roles and Responsibilities:
- Design, develop, and deploy custom Power Apps solutions that meet business requirements and adhere to best practices.
- Create and maintain automated workflows using Power Automate to streamline repetitive tasks and improve efficiency.
- Integrate Power Apps with external systems via connectors, APIs, and Azure services to ensure seamless data flow.
- Perform performance tuning, debugging, and troubleshooting of applications to ensure optimal user experience.
- Collaborate with business analysts and stakeholders to gather requirements, provide technical guidance, and deliver prototypes.
- Conduct code reviews, enforce governance standards, and contribute to the development of a reusable component library.
- Stay updated with the latest Power Platform releases, evaluate new features, and recommend adoption strategies.
- Provide training and mentorship to junior developers and end‑users to foster platform adoption.
Must have skills
Power Apps - 5 years
Microsoft Power Automate - 1 year
Nice to have skills
Canvas App Development and Scripting - 4 years
Canvas Apps Development - 4 years
SQL - 2 years
SharePoint APIs - 1 year
Power Fx - 2 years
C# - 3 years
RESTful API - 2 years
Role: Data Analyst
Experience: 2 to 5 years
Location: Bangalore
Job Role:
● Experience: At least 2 years of professional experience in a data-heavy environment (e-commerce or fintech experience is a plus).
● SQL Mastery: Exceptional ability to write complex joins, window functions, analytical functions, and CTEs; experience with high-scale databases (e.g., BigQuery, Hive, or Postgres). A runnable sketch follows this list.
● Scripting: Functional knowledge of Python for data manipulation (Pandas, NumPy) and basic automation scripts.
● Systems Thinking: Ability to understand upstream data flows and how they impact downstream reporting.
● Problem-Solving: A "detective" mindset: you enjoy digging into a Rs 600 Cr discrepancy until you find the root cause.
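For candidates wondering what the SQL bar looks like, a self-contained sketch using Python's built-in sqlite3 (window functions need SQLite 3.25+); the table and columns are invented for the demo:

```python
# Hedged sketch: CTE + window functions (ROW_NUMBER, running SUM) in SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (user_id INT, order_date TEXT, amount INT);
    INSERT INTO orders VALUES
        (1, '2024-01-01', 500), (1, '2024-01-15', 700),
        (2, '2024-01-03', 900), (2, '2024-02-01', 400);
""")

query = """
WITH ranked AS (
    SELECT user_id, order_date, amount,
           ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY amount DESC) AS rnk,
           SUM(amount)  OVER (PARTITION BY user_id ORDER BY order_date)  AS running_total
    FROM orders
)
SELECT * FROM ranked WHERE rnk = 1;  -- each user's largest order
"""
for row in conn.execute(query):
    print(row)
```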
Company Name: WINIT (Hyderabad)
Role: Xamarin + .NET Developer
Experience: 2+ Years
Role Overview
We are looking for a skilled Xamarin Developer to build, maintain, and optimize cross-platform mobile applications. The ideal candidate should have strong experience in C#, .NET / .NET Core, and mobile application architecture, along with hands-on experience in mobile app deployment.
Key Responsibilities
- Develop cross-platform mobile applications using Xamarin / Xamarin.Forms.
- Implement clean architecture patterns such as MVVM.
- Integrate REST APIs and third-party services.
- Work with local databases such as SQLite.
- Optimize application performance and ensure responsiveness.
- Handle application deployment, updates, and release management.
- Debug, test, and maintain application stability.
- Collaborate with backend (.NET / .NET Core) and UI/UX teams for seamless integration.
- Support application migration or enhancement projects when required.
Required Skills
- Hands-on experience with Xamarin.Forms.
- Strong knowledge of C# and .NET / .NET Core.
- Experience with REST API integration.
- Knowledge of SQLite / local storage.
- Experience with Git version control.
- Understanding of CI/CD pipelines.
Good to Have
- Experience with Firebase integration.
- Exposure to Microsoft Azure.
- Experience with mobile app performance optimization.
- Experience handling production mobile applications.
About our company:
We are an mSFA technology company that has evolved from the industry expertise we have gained over 25+ years. With over 600 success stories in mobility, digitization, and consultation, we are today the leaders in mSFA, with 75+ enterprises trusting WINIT mSFA across the globe.
Our state-of-the-art support center provides 24x7 support to our customers worldwide. We continuously strive to help organizations improve their efficiency, effectiveness, market cap, brand recognition, distribution and logistics, regulatory and planogram compliance, and much more through our cutting-edge WINIT mSFA application.
We are committed to enabling our customers to be autonomous through our continuous R&D and improvement of WINIT mSFA. Our application provides customers with machine learning capability so that they can innovate, attain sustainable growth, and become more resilient.
At WINIT, we value diversity and personal and professional growth, and we celebrate our global team of passionate individuals who are continuously innovating our technology to help companies tackle real-world problems head-on.
Role & Responsibilities:
We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows
- Work with Dataform or DBT to implement transformation logic and data models
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure (a BigQuery sketch follows this list)
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
- Apply data governance and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
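For context, a hedged sketch of the kind of BigQuery transformation step these responsibilities describe, using the google-cloud-bigquery client; project, dataset, and table names are placeholders:

```python
# Hedged sketch: a simple ELT step on BigQuery via google-cloud-bigquery.
from google.cloud import bigquery

client = bigquery.Client()  # relies on application-default credentials

sql = """
CREATE OR REPLACE TABLE `my_project.analytics.daily_revenue` AS
SELECT DATE(order_ts) AS order_date, SUM(amount) AS revenue
FROM `my_project.raw.orders`
GROUP BY order_date
"""

job = client.query(sql)  # submits the transformation
job.result()             # blocks until it finishes
print("Finished job:", job.job_id)
```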
Ideal Candidate:
- Strong Data Engineer Profile
- Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
- Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
- Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
- Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
- Must have strong SQL skills with experience in writing complex queries and optimizing performance.
- Must have programming experience in Python and/or SQL for data processing.
- Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
- Exposure to data migration projects and/or data mesh architecture concepts.
- Experience with Spark / PySpark or large-scale data processing frameworks.
- Experience working in product-based companies or data-driven environments.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
NOTE:
- An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates are expected to be available on these interview dates. Only immediate joiners will be considered.
Senior Data Engineer (Azure Databricks)
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Databricks and PySpark
- Work extensively with PySpark notebooks within Databricks for data processing and transformation
- Build and optimize batch data processing workflows
- Develop and manage data integrations using Azure Functions and Logic Apps
- Write efficient and optimized SQL queries for data extraction and transformation
Required Skills:
- Strong hands-on experience with Azure Databricks, PySpark, and SQL
- Experience working with batch processing frameworks
- Proficiency in building and managing data pipelines in Azure ecosystem
Good to Have:
- Experience with Python
Mandatory Requirement:
- Candidate must have hands-on experience working with PySpark notebooks in Databricks
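For illustration, a minimal sketch of the PySpark notebook-style logic this role involves; in a Databricks notebook `spark` is predefined, and the paths and columns below are placeholders:

```python
# Hedged sketch: a batch transformation with PySpark, written to Delta.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-transform").getOrCreate()

orders = spark.read.parquet("/mnt/raw/orders")  # hypothetical mount path

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("user_id").alias("buyers"),
    )
)

# Delta is the default table format on Databricks.
daily.write.format("delta").mode("overwrite").save("/mnt/curated/daily_revenue")
```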
The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.
What You’ll Own
- Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
- Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
- The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
- Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
- Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.
The Stack You’ll Command
- Languages: Expert-level SQL (CTE, Window Functions, Tuning) and Production Python.
- Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
- Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs); a sketch follows this list.
- Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
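To ground the Airflow line above, a hedged sketch of a small DAG with the retries and SLA settings it mentions (Airflow 2.4+ syntax); the task logic and DAG name are placeholders:

```python
# Hedged sketch: Airflow DAG with retries and a per-task SLA.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull EHR/claims extracts")    # placeholder task logic

def transform():
    print("map to FHIR-aligned models")  # placeholder task logic

with DAG(
    dag_id="clinical_warehouse_load",    # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ keyword
    catchup=False,
    default_args={
        "retries": 3,
        "retry_delay": timedelta(minutes=5),
        "sla": timedelta(hours=2),       # alert when a task overruns its SLA
    },
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2
```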
Who You Are
- Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
- Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
- Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
- Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
Bonus Points for:
- Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
- Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
- Search Experts: Experience with near-real-time indexing via Elasticsearch.
To take your resume to the next stage, please fill out the Google Forms below with your updated resume.
Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7
Details: https://forms.gle/FGgkmQvLnS8tJqo5A
Who are we, a.k.a. "About Us":
We are an early-stage fintech startup working on exciting fintech products for some of the Top 5 Global Banks, while building our own. If you are looking for a place where you can make a mark and not just be a cog in the wheel, Baker street Fintech Pvt Ltd (Parent Company) might be the place for you. We have a flat, ownership-oriented culture and deliver world-class quality. You will be working with a founding team that has delivered over 26 industry-leading product experiences and won Webby Awards for Digital Strategy. In short: a bleeding-edge team.
As Cambridge Wealth, we are well-established in the wealth and mutual fund distribution segment, having won awards from BSE Star as well as Mutual Fund houses. Our UHNI/HNI/NRI clients include renowned professionals from various industries.
What are we looking for, a.k.a. "The JD":
We are seeking a skilled and detail-oriented Data Analyst to join our product team. As a Data Analyst, you will play a crucial role in extracting, analysing, and interpreting complex financial data to drive strategic decision-making and optimize our data solutions. The ideal candidate should possess a strong foundation in SQL / NoSQL databases, Python programming, and proficiency in tools like PostgreSQL and Excel. A deep understanding of financial concepts is also a plus. Additionally, having an interest in business intelligence tools and machine learning will be valuable for this role.
Responsibilities:
- Proficient in writing complex SQL Queries
- Utilize Python for data manipulation, analysis, and visualisation, using libraries such as pandas, matplotlib, psycopg etc.
- Perform database optimization, indexing, and query tuning to ensure high performance (a small sketch follows this list).
- Monitor and maintain data quality, troubleshoot data-related issues, and implement solutions to optimize data integrity and performance.
- Design, configure, and maintain PostgreSQL databases
- Set up and manage database clusters, replication, and backups for disaster recovery
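For a feel of the tuning work above, a small psycopg2 sketch of the check-the-plan loop; the DSN, table, and index names are placeholders:

```python
# Hedged sketch: inspect a Postgres query plan with EXPLAIN ANALYZE via psycopg2.
import psycopg2

conn = psycopg2.connect("dbname=wealth user=analyst")  # placeholder DSN
cur = conn.cursor()

# An index on the filter column is often the first optimization to verify.
cur.execute("CREATE INDEX IF NOT EXISTS idx_txn_client ON transactions (client_id);")
conn.commit()

cur.execute("EXPLAIN ANALYZE SELECT * FROM transactions WHERE client_id = %s;", (42,))
for (line,) in cur.fetchall():
    print(line)  # look for 'Index Scan using idx_txn_client' vs 'Seq Scan'
```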
Preferred Qualifications:
- Intermediate-level Excel skills for data analysis and reporting.
- Strong communication skills to present findings effectively and recommendations to both technical and non-technical stakeholders.
- Detail-oriented mindset with a commitment to data accuracy and quality.
*(Only applicants who have completed their educational commitments are requested to apply.)
Not sure whether you should apply? Here's a quick checklist to make things easier. You are someone who:
- Has worked at an early-stage startup (preferably for 0-1.5 years) or is specifically looking to work with one.
- You are ready to be part of a Zero To One journey, which means you will be involved in building fintech products and processes from the ground up.
- You are comfortable working in an unstructured environment with a small team, where you decide what your day looks like, take the initiative to pick up the right piece of work, own it, and work with the founding team on it.
- This is not an environment where someone will be checking up on you every few hours. It is up to you to schedule check-ins whenever you find the need to; otherwise we assume you are progressing well with your tasks. You will be expected to find solutions to problems and suggest improvements.
- You want complete ownership of your role and the freedom to drive it the way you think is right.
- You are a self-starter who takes ownership of deliverables, builds consensus with the team on approach and methods, and delivers on them.
- You are looking to stick around for the long term and grow with the company.
We have an urgent opening for a skilled and detail-oriented professional for the below role:
Quick Role Overview:
- Role: QA Automation Engineer
- Location: Hyderabad
- Job Type: Full-Time
- Experience: 6 – 10 Years
- Notice Period: Immediate to 30 Days Preferred
Job Description:
We are looking for a highly analytical QA Engineer with strong expertise in data validation and SQL-based testing. The ideal candidate will be responsible for ensuring the quality and integrity of data-driven applications by designing effective test strategies and executing comprehensive test plans.
This role requires hands-on experience with SQL, defect management, CI/CD pipelines, and automation tools, along with the ability to understand business requirements and translate them into test scenarios.
Key Responsibilities:
- Analyze business requirements and translate them into detailed test plans, scenarios, and test cases
- Perform data validation and ensure data accuracy using SQL queries (a sketch follows this list)
- Design, develop, and execute manual and automated test cases
- Manage defects by logging, tracking, and ensuring timely resolution
- Work closely with development and business teams to ensure quality deliverables
- Maintain and produce high-quality testing documentation
- Ensure system data integrity for daily operations
- Participate in CI/CD processes and support release cycles
- Collaborate using tools like Jira, TeamCity, and Octopus
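To illustrate the SQL-validation responsibility, a hedged pytest sketch against a Microsoft SQL Server connection; the pyodbc connection string and table names are placeholders:

```python
# Hedged sketch: SQL-based data validation checks written as pytest tests.
import pyodbc
import pytest

@pytest.fixture(scope="module")
def cursor():
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
        "DATABASE=mydb;Trusted_Connection=yes;"  # placeholder connection string
    )
    yield conn.cursor()
    conn.close()

def test_no_orphan_orders(cursor):
    """Every order must reference an existing customer."""
    cursor.execute("""
        SELECT COUNT(*) FROM orders o
        LEFT JOIN customers c ON o.customer_id = c.customer_id
        WHERE c.customer_id IS NULL
    """)
    assert cursor.fetchone()[0] == 0

def test_row_counts_match(cursor):
    """Staging and target row counts should agree after the daily load."""
    cursor.execute(
        "SELECT (SELECT COUNT(*) FROM stg_orders) - (SELECT COUNT(*) FROM orders)"
    )
    assert cursor.fetchone()[0] == 0
```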
Desired Skills & Competencies:
Must-Have Skills:
- Strong analytical and problem-solving skills
- Good understanding of Data / Data Analysis
- Strong hands-on experience with Microsoft SQL
- Solid understanding of SQL joins and complex queries
- Experience with Jira for defect tracking
- Hands-on experience with TeamCity and Octopus
- Experience with code repository tools (preferably Bitbucket)
- Mandatory experience with any Automation Testing tool
- Knowledge of CI/CD pipelines
Good-to-Have:
- Exposure to Agile methodologies
- Experience in data-driven testing environments
ROLE SUMMARY
The Senior Python Developer designs, builds, and improves Python and Django applications. The role includes developing end‑to‑end integrations using REST and SOAP services and delivering reliable, scalable solutions through hands‑on coding and data transformation work. The developer works closely with Business Analysts, architects, and other teams to ensure technical solutions support business needs. Key responsibilities also include improving SQL performance, taking part in code reviews, supporting DevOps workflows with Git and Azure DevOps, and helping integrate GenAI features—such as GPT models, embeddings, and agent‑based tools—into enterprise applications.
ROLE RESPONSIBILITIES
- Design and develop Python and Django applications that are scalable, secure, and maintainable.
- Implement UI components using CSS, Bootstrap, jQuery, or similar technologies as needed.
- Develop integrations with internal and external systems using REST, SOAP, and WSDL‑based services (a REST sketch follows this list).
- Create and optimize SQL queries, database structures, and data access logic to support application features.
- Work with Business Analysts and stakeholders to translate functional requirements into technical specifications and solutions.
- Implement accurate data mappings and transformations in accordance with business and technical requirements.
- Contribute to code reviews, follow established coding standards, and ensure high‑quality deliverables.
- Support the implementation and maintenance of DevOps pipelines using Git and Azure DevOps.
- Contribute to the integration of GenAI capabilities—including GPT models, embeddings, and agent‑based components—into enterprise applications.
- Troubleshoot issues across the application stack and collaborate closely with peers to resolve technical challenges.
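As a flavour of the REST integration work, a minimal sketch using requests with urllib3's Retry for resilience; the endpoint and token are placeholders:

```python
# Hedged sketch: a REST call with automatic retries on transient failures.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(total=3, backoff_factor=0.5, status_forcelist=[429, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retry))

resp = session.get(
    "https://api.example.com/v1/accounts",        # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=10,
)
resp.raise_for_status()
for account in resp.json():
    print(account)
```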
TECHNICAL QUALIFICATIONS
- 7+ years of hands‑on experience with Python and Django, including complex application development.
- 5+ years of experience with SQL development, optimization, and database design.
- At least 1-2 years of applied experience with GenAI technologies (GPT models, embeddings, agents, etc.).
- Deep expertise in application architecture, system integration, and service‑oriented design.
- Strong experience with DevOps tools and practices, including Git, Azure DevOps, CI/CD pipelines, and automated deployments.
- Advanced understanding of REST, SOAP, WSDL, and large‑scale service integrations.
GENERAL QUALIFICATIONS
- Exceptional verbal and written communication skills.
- Strong analytical, problem‑solving, and architectural reasoning abilities.
- Demonstrated leadership experience with the ability to guide and mentor technical teams.
- Proven ability to work effectively in fast‑paced, collaborative environments.
EDUCATION REQUIREMENTS
- Bachelor’s degree in Computer Science, MIS, or a related field.
- Advanced certifications in Python, cloud technologies, or GenAI are preferred but not required.
Job Summary:
As a Java Full Stack Developer, you will design, develop, and maintain scalable backend services and frontend applications using Java (Spring Boot) and React. You will work closely with cross-functional teams to deliver high-performance and reliable systems.
Key Responsibilities:
• Develop and maintain applications using Java, Spring Boot, and React
• Design and build RESTful APIs for data-driven applications
• Work on frontend development using ReactJS
• Ensure scalability, performance, and reliability of applications
• Collaborate with QA, DevOps, and Product teams
• Participate in code reviews and technical discussions
• Troubleshoot and resolve production issues
• Mentor and guide junior developers
Required Skills & Qualifications:
• Strong experience in Java and Spring Boot
• Hands-on experience with React.js
• Experience with PostgreSQL or other relational databases
• Good understanding of data modeling and backend architecture
• Strong knowledge of OOP concepts
• Familiarity with Agile/Scrum and Git workflows
• Excellent problem-solving and communication skills
Good to Have:
• Experience with Snowflake / Databricks
• Exposure to data-driven or analytics platforms
We are looking for a skilled Data Engineer / Data Warehouse Engineer to design, develop, and maintain scalable data pipelines and enterprise data warehouse solutions. The role involves close collaboration with business stakeholders and BI teams to deliver high-quality data for analytics and reporting.
Key Responsibilities
- Collaborate with business users and stakeholders to understand business processes and data requirements
- Design and implement dimensional data models, including fact and dimension tables
- Identify, design, and implement data transformation and cleansing logic
- Build and maintain scalable, reliable, and high-performance ETL/ELT pipelines
- Extract, transform, and load data from multiple source systems into the Enterprise Data Warehouse
- Develop conceptual, logical, and physical data models, including metadata, data lineage, and technical definitions
- Design, develop, and maintain ETL workflows and mappings using appropriate data load techniques
- Provide high-level design, research, and effort estimates for data integration initiatives
- Provide production support for ETL processes to ensure data availability and SLA adherence
- Analyze and resolve data pipeline and performance issues
- Partner with BI teams to design and develop reports and dashboards while ensuring data integrity and quality
- Translate business requirements into well-defined technical data specifications
- Work with data from ERP, CRM, HRIS, and other transactional systems for analytics and reporting
- Define and document BI usage through use cases, prototypes, testing, and deployment
- Support and enhance data governance and data quality processes
- Identify trends, patterns, anomalies, and data quality issues, and recommend improvements
- Train and support business users, IT analysts, and developers
- Lead and collaborate with teams spread across multiple locations
Required Skills & Qualifications
- Bachelor’s degree in Computer Science or a related field, or equivalent work experience
- 3+ years of experience in Data Warehousing, Data Engineering, or Data Integration
- Strong expertise in data warehousing concepts, tools, and best practices
- Excellent SQL skills
- Strong knowledge of relational databases such as SQL Server, PostgreSQL, and MySQL
- Hands-on experience with Google Cloud Platform (GCP) services, including:
  - BigQuery
  - Cloud SQL
  - Cloud Composer (Airflow)
  - Dataflow
  - Dataproc
  - Cloud Functions
  - Google Cloud Storage (GCS)
- Experience with Informatica PowerExchange for Mainframe, Salesforce, and modern data sources
- Strong experience integrating data using APIs, XML, JSON, and similar formats
- In-depth understanding of OLAP, ETL frameworks, Data Warehousing, and Data Lakes
- Solid understanding of SDLC, Agile, and Scrum methodologies
- Strong problem-solving, multitasking, and organizational skills
- Experience handling large-scale datasets and database design
- Strong verbal and written communication skills
- Experience leading teams across multiple locations
Good to Have
- Experience with SSRS and SSIS
- Exposure to AWS and/or Azure cloud platforms
- Experience working with enterprise BI and analytics tools
Why Join Us
- Opportunity to work on large-scale, enterprise data platforms
- Exposure to modern cloud-native data engineering technologies
- Collaborative environment with strong stakeholder interaction
- Career growth and leadership opportunities
About Shopflo
At Shopflo, we're trying to change the way consumers experience brands and businesses. Our first product was a cart and checkout platform for e-commerce that allowed marketers to personalise discounts, rewards, and payments. We are now also working on a new product that takes this a notch higher by unlocking enterprise-grade personalization for all consumer tech businesses.
Team & Company
Shopflo was founded by three co-founders:
- Ankit Bansal (ex-IIT Kharagpur, Oracle, Gupshup)
- Ishan Rakshit (ex-IIT Bombay, Parthenon, Elevation Capital)
- Priy Ranjan (ex-IIT Madras, McKinsey, Elevation Capital)
We’re a fast-growing team of ~50 people, based in HSR Layout, Bengaluru. We raised a $3.8M seed round from Tiger Global and TQ Ventures.
What you will do
- Design and develop microservices that can work in a large-scale multi-tenant environment.
- Explore design implications and work towards an appropriate balance between functionality, performance, and maintainability.
- Work with a cross-discipline team spanning Design, Product, Data Science, and Analytics.
- Deploy and maintain the application in a secured AWS environment.
- Take ownership from the ideation phase to deployment and maintenance.
- Actively participate in the hiring process to bring world-class programmers into the team.
You should apply if you have:
- 2-4 years of experience in server-side development
- Strong programming skills in Java, Python, Node or Golang
- Hands-on experience in API development and frameworks such as Spring, Node, or Django.
- Good Understanding of SQL and NoSQL databases.
- Experience in test-driven development (writing unit tests and API tests).
- Understanding of basic cloud computing concepts and experience using any of the major cloud service providers (AWS/GCP/Azure).
- Ability to build and deploy the application in a containerized environment.
- Understanding of application logging and monitoring systems like Prometheus or Kibana.
- B.E./B.Tech/M.E./M.Tech/M.S. from a reputed university with a good academic record.
- Curiosity to explore cutting-edge technologies and bake them into the products.
- Zeal and drive to take end-to-end ownership.
Job Description: Product Analyst
We are looking for a Product Analyst who can operate at the intersection of product thinking, analytics, and problem solving. This is a hybrid role for someone who is comfortable working with data, understands how ecommerce businesses function, and can help uncover the "why" behind business or website performance changes.
This role is ideal for someone who enjoys solving real business problems, performing root cause analysis on live websites or running businesses, identifying actionable insights, and supporting experiments that improve outcomes. You will work with both internal teams and clients, alongside Product Managers and other analysts, to drive better decision-making through structured analysis.
What you will do
• Conduct root cause analysis (RCA) for issues affecting live websites and ecommerce businesses
• Analyze business, product, and website performance to identify trends, issues, and opportunities
• Build and maintain dashboards that help teams monitor key business and product metrics
• Perform exploratory analysis to generate insights and support decision-making
• Work with product and client teams to frame problems clearly and convert ambiguous questions into analytical investigations
• Support experimentation by helping define what should be measured, analyzing results, and identifying learnings
• Understand and work with event tracking and analytics implementations
• Collaborate with Product Managers and other team members to improve visibility into business performance
• Communicate findings clearly to both internal stakeholders and clients in a way that drives action
What we are looking for
• 2–4 years of experience in product analytics, business analytics, web analytics, or similar roles
• Strong understanding of ecommerce and web analytics
• Strong SQL skills
• Ability to go beyond data retrieval and focus on analysis, interpretation, and problem solving
• Good understanding of how to investigate changes in metrics, funnels, conversion, and business performance
• Strong structured thinking and ability to break down messy problems logically
• Comfort working across multiple business contexts and learning quickly
• Strong communication skills, especially the ability to explain insights and recommendations to clients and non-technical stakeholders
• A product mindset — someone who can think beyond reporting and connect data to user behavior, business outcomes, and next steps
Good to have
• Familiarity with Shopify
• Experience with A/B testing or experimentation analysis
• Basic understanding of attribution and broader marketing analytics
• Exposure to event tracking setup and analytics instrumentation
• Familiarity with dashboarding and analytics tools, with the ability to quickly learn new tools as needed
• Python or R exposure is a plus, but not required
Who will do well in this role
• Someone who is a strong problem solver
• Someone who enjoys finding answers in imperfect or ambiguous situations
• Someone who is curious, structured, and business-minded
• Someone who can move between product questions, business questions, and analytical investigations without getting stuck in just reporting
• Someone who is comfortable working in a client-facing environment and can present insights with clarity and confidence
Role details
• Role: Product Analyst
• Location: Remote
• Hiring location: India only
• Type: Full-time
🚀 Hiring: Data Engineer (Azure) at Deqode
⭐ Experience: 5+ Years
📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Delhi, Bangalore
⭐ Work Mode: Hybrid
⏱️ Notice Period: Immediate
(Only immediate joiners & candidates serving notice period)
⭐ Hiring: Databricks Data Engineer – Lakeflow | Streaming | DBSQL | Data Intelligence
We are looking for a Databricks Data Engineer (Azure) to build reliable, scalable, and governed data pipelines powering analytics, operational reporting, and the Data Intelligence Layer.
🔹 Key Responsibilities
✅ Build optimized batch pipelines using Delta Lake (partitioning, OPTIMIZE, Z-ORDER, VACUUM)
✅ Implement incremental ingestion using Databricks Autoloader with schema evolution & checkpointing
✅ Develop Structured Streaming pipelines with watermarking, late data handling & restart safety
✅ Implement declarative pipelines using Lakeflow
✅ Design idempotent, replayable pipelines with safe backfills
✅ Optimize Spark workloads (AQE, skew handling, shuffle & join tuning)
✅ Build curated datasets for Databricks SQL (DBSQL), dashboards & downstream applications
✅ Package and deploy using Databricks Repos & Asset Bundles (CI/CD)
✅ Ensure governance using Unity Catalog and embedded data quality checks
✅ Mandatory Skills (Must Have)
👉 Databricks & Delta Lake (Advanced Optimization & Performance Tuning)
👉 Structured Streaming & Autoloader Implementation (a sketch follows this list)
👉 Databricks SQL (DBSQL) & Data Modeling for Analytics
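For illustration, a hedged sketch of the Autoloader pattern named above, with schema evolution and checkpointing; this assumes a Databricks notebook (where `spark` and the cloudFiles source are available) and placeholder paths:

```python
# Hedged sketch: incremental ingestion with Databricks Autoloader.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/_schemas/events")  # schema tracking
    .option("cloudFiles.schemaEvolutionMode", "addNewColumns")    # evolve on new fields
    .load("/mnt/landing/events/")
)

(
    stream.writeStream
    .option("checkpointLocation", "/mnt/_checkpoints/events")  # restart safety
    .trigger(availableNow=True)                                # incremental batch run
    .toTable("bronze.events")
)
```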
🤖 Data Scientist – Frontier AI for Data Platforms & Distributed Systems (4–8 Years)
Experience: 4–8 Years
Location: Bengaluru (On-site / Hybrid)
Company: Publicly Listed, Global Product Platform
🧠 About the Mission
We are building a Top 1% AI-Native Engineering & Data Organization — from first principles.
This is not incremental improvement.
This is a full-stack transformation of a large-scale enterprise into an AI-native data platform company.
We are re-architecting:
- Legacy systems → AI-native architectures
- Static pipelines → autonomous, self-healing systems
- Data platforms → intelligent, learning systems
- Software workflows → agentic execution layers
This is the kind of shift you would expect from companies like Google or Microsoft, except here you will build it from day zero and scale it globally.
🧠 The Opportunity: This role sits at the intersection of three high-impact domains:
1. Frontier AI Systems: Large Language Models (LLMs), Small Language Models (SLMs), and Agentic AI
2. Data Platforms: Warehouses, Lakehouses, Streaming Systems, Query Engines
3. Distributed Systems: High-throughput, low-latency, multi-region infrastructure
We are building systems where:
- Data platforms optimize themselves using ML/LLMs
- Pipelines are autonomous, self-healing, and adaptive
- Queries are generated, optimized, and executed intelligently
- Infrastructure learns from usage and evolves continuously
This is: AI as the control plane for data infrastructure
🧩 What You’ll Work On
You will design and build AI-native systems deeply embedded inside data infrastructure.
1. AI-Native Data Platforms
- Build LLM-powered interfaces: natural language → SQL / pipelines / transformations (a toy sketch follows this section)
- Design semantic data layers: embeddings, vector search, knowledge graphs
- Develop AI copilots for data engineers, analysts, and platform users
2. Autonomous Data Pipelines
- Build self-healing ETL/ELT systems using AI agents
- Create pipelines that:
  - Detect anomalies in real time
  - Automatically debug failures
  - Dynamically optimize transformations
3. Intelligent Query & Compute Optimization
- Apply ML/LLMs to:
  - Query planning and execution
  - Cost-based optimization using learned models
  - Workload prediction and scheduling
- Build systems that:
  - Learn from query patterns
  - Continuously improve performance and cost efficiency
4. Distributed Data + AI Infrastructure
- Architect systems operating at:
  - Billions of events per day
  - Petabyte-scale data
- Work with:
  - Distributed compute engines (Spark / Flink / Ray class systems)
  - Streaming systems (Kafka-class infra)
  - Vector databases and hybrid retrieval systems
5. Learning Systems & Feedback Loops
- Build closed-loop AI systems: execution → feedback → model updates
- Develop:
  - Continual learning pipelines
  - Online learning systems for infra optimization
  - Experimentation frameworks (A/B, bandits, eval pipelines)
6. LLM & Agentic Systems (Infra-Aware)
- Build agents that understand data systems
- Enable:
  - Autonomous pipeline debugging
  - Root cause analysis for infra failures
  - Intelligent orchestration of data workflows
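As a toy illustration of item 1, a heavily hedged natural-language-to-SQL sketch assuming the OpenAI Python client; the model name, schema string, and the guardrails a production system needs (validation, allow-lists, dry runs) are all simplified placeholders:

```python
# Hedged sketch: a minimal NL -> SQL interface via an LLM.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA = "orders(order_id, user_id, order_ts, amount)"  # hypothetical table

def nl_to_sql(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": f"Translate the question into one SQL query over: {SCHEMA}. "
                        "Return only SQL."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(nl_to_sql("What was total revenue per day last week?"))
```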
🧠 What We’re Looking For
Core Foundations
- Strong grounding in:
  - Machine Learning, Deep Learning, NLP
  - Statistics, optimization, probabilistic systems
  - Distributed systems fundamentals
- Deep understanding of:
  - Transformer architectures
  - Modern LLM ecosystems
Hands-On Expertise
- Experience building:
  - LLM / GenAI systems (RAG, fine-tuning, embeddings)
  - Data platforms (warehouse, lake, lakehouse architectures)
  - Distributed pipelines and compute systems
- Strong programming skills:
  - Python (ML/AI stack)
  - SQL (deep understanding: query planning, optimization mindset)
Systems Thinking (Critical)
You think in systems, not components.
- Built or worked on:
  - Large-scale data pipelines
  - High-throughput distributed systems
  - Low-latency, high-concurrency architectures
- Understand:
  - Query optimization and execution
  - Data partitioning, indexing, caching
  - Trade-offs in distributed systems
🔥 What Sets You Apart (Top 1%)
- Built AI-powered data platforms or infra systems in production
- Designed or contributed to:
  - Query engines / optimizers
  - Data observability / lineage systems
  - AI-driven infra or AIOps platforms
- Experience with:
  - Multi-modal AI (logs, metrics, traces, text)
  - Agentic AI systems
  - Autonomous infrastructure
- Worked on systems at scale comparable to:
  - Google (BigQuery-like systems)
  - Meta (real-time analytics infra)
  - Snowflake / Databricks (lakehouse architectures)
🧬 Ideal Background (Not Mandatory)
We often see strong candidates from:
- Data infrastructure or platform engineering teams
- AI-first startups or research-driven environments
- High-scale product companies
Experience building:
- Internal platforms used by 1000s of engineers
- Systems serving millions of users / high throughput workloads
- Multi-region, distributed cloud systems
🧠 The Kind of Problems You’ll Solve
- Can LLMs replace traditional query optimizers?
- How do we build self-healing data pipelines at scale?
- Can data systems learn from every query and improve automatically?
- How do we embed reasoning and planning into infrastructure layers?
- What does a fully autonomous data platform look like?
Backgrounds We Commonly See (But Not Limited To)
Our team often includes engineers from top-tier institutions and strong research or product backgrounds, including:
- Leading engineering schools in India and globally
- Engineers with experience in top product companies, AI startups, or research-driven environments
That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.
🧭 Tech Lead (Backend / Fullstack | 7–10 Years)
Location: Bangalore (On-Site, Hybrid)
Company Type: Public-Listed Product Company
We’re Building a “Top 1% Engineering Org”
We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.
Think:
→ Rewriting legacy systems into AI-native architectures
→ Embedding LLMs + Agentic AI into core workflows
→ Reimagining platforms, infra, and data systems for the next decade
This is the kind of shift you’d expect from Google, Microsoft, or Meta, except you get to build it from day 0 and scale it globally.
About the Role / Team
We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.
This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.
You will be working on:
- Agentic AI systems & LLM-powered workflows
- Distributed, scalable backend systems
- Enterprise-grade AI platforms
- Automation-first engineering environments
🚀 The Mandate
Lead execution of mission-critical systems while staying hands-on — bridging architecture and delivery.
🧩 What You’ll Do
- Own end-to-end delivery of complex engineering initiatives (0→1, 1→N)
- Design systems across backend + frontend (if fullstack)
- Translate ambiguous problems into structured technical solutions
- Drive engineering best practices, code quality, and velocity
- Mentor engineers and elevate team performance
- Collaborate with stakeholders on roadmap and execution strategy
🧠 What We’re Looking For
- Strong experience in backend systems + optional frontend frameworks
- Proven ability to lead projects and deliver at scale
- Solid understanding of system design and architecture patterns
- Ability to balance speed vs quality vs scalability trade-offs
- Strong communication and leadership without authority
- Strong coding skills in Python / Java / Go / Node.js
- Solid understanding of data structures, system design basics, and backend architecture
- Experience building scalable APIs and services
- Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
- Strong debugging, problem-solving, and ownership mindset
Nice to Have
- Experience integrating LLMs, vector databases, or AI pipelines
- Contributions to architecture at scale
- Experience with Agentic AI / LLM orchestration frameworks
- Background in product engineering or platform companies
- Exposure to global-scale systems (millions of users / high throughput)
🔥 What Sets You Apart
- Experience leading platform builds or major system rewrites
- Exposure to AI systems, LLM integrations, or intelligent workflows
- Built platforms used by millions of users / high-throughput systems
- Experience with event-driven systems, stream processing, or infra platforms
- Prior work on AI/ML platforms, model serving, or intelligent systems
Backgrounds We Commonly See (But Not Limited To)
Our team often includes engineers from top-tier institutions and strong research, product-company, DeepTech, or AI-product backgrounds, including:
- Leading engineering schools in India and globally
- Engineers with experience in top product companies, AI startups, or research-driven environments
That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.
Job Details
- Job Title: Senior Backend Engineer
- Industry: SaaS
- Function: Information Technology
- Experience Required: 5-8 years
- Working Days: 6 days a week (5 days in office, Saturdays WFH)
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL
Criteria
· Minimum 5+ years in backend engineering with strong system design expertise
· Experience building scalable systems from scratch
· Expert-level proficiency in Node.js
· Deep understanding of distributed systems
· Strong NoSQL design skills
· Hands-on AWS cloud experience
· Proven leadership and mentoring capability
· Candidates from SaaS/software/IT-services startups or scale-up companies preferred
Job Description
The Role:
What You’ll Build:
1. System Architecture & Design
● Architect highly scalable backend systems from the ground up
● Define technology choices: frameworks, databases, queues, caching layers
● Evaluate microservices vs monoliths based on product stage
● Design REST, GraphQL, and real-time WebSocket APIs
● Build event-driven systems for asynchronous processing
● Architect multi-tenant systems with strict data isolation
● Maintain architectural documentation and technical specs
2. Core Backend Services
● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions
● Create 3D asset processing pipelines for uploads, conversions, and optimization
● Develop distributed job workers for CPU/GPU-intensive tasks
● Build authentication/authorization systems (RBAC)
● Implement billing, subscription, and usage metering
● Build secure webhook systems and third-party integration APIs
● Create real-time collaboration features via WebSockets/SSE
3. Data Architecture & Databases
● Design scalable schemas for 3D metadata, XR sessions, and analytics
● Model complex product catalogs with variants and hierarchies
● Implement Redis-based caching strategies (a sketch follows this list)
● Build search and indexing systems (Elasticsearch/Algolia)
● Architect ETL pipelines and data warehouses
● Implement sharding, partitioning, and replication strategies
● Design backup, restore, and disaster recovery workflows
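To make the caching item concrete, a minimal cache-aside sketch with redis-py; key naming, TTL, and the stand-in database call are placeholders:

```python
# Hedged sketch: cache-aside reads with Redis.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_product_from_db(product_id: int) -> dict:
    return {"id": product_id, "name": "demo asset"}  # stand-in for a real query

def get_product(product_id: int) -> dict:
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit
    product = fetch_product_from_db(product_id)
    r.setex(key, 300, json.dumps(product))  # cache for 5 minutes
    return product

print(get_product(42))
```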
4. Scalability & Performance
● Build systems designed for 10x–100x traffic growth
● Implement load balancing, autoscaling, and distributed processing
● Optimize API response times and database performance
● Implement global CDN delivery for heavy 3D assets
● Build rate limiting, throttling, and backpressure mechanisms
● Optimize storage and retrieval of large 3D files
● Profile and improve CPU, memory, and network performance
5. Infrastructure & DevOps
● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)
● Build CI/CD pipelines for automated deployments and rollbacks
● Use IaC tools (Terraform/CloudFormation) for infra provisioning
● Set up monitoring, logging, and alerting systems
● Use Docker + Kubernetes for container orchestration
● Implement security best practices for data, networks, and secrets
● Define disaster recovery and business continuity plans
6. Integration & APIs
● Build integrations with Shopify, WooCommerce, Magento
● Design webhook systems for real-time events
● Build SDKs, client libraries, and developer tools
● Integrate payment gateways (Stripe, Razorpay)
● Implement SSO and OAuth for enterprise customers
● Define API versioning and lifecycle/deprecation strategies
7. Data Processing & Analytics
● Build analytics pipelines for engagement, conversions, and XR performance
● Process high-volume event streams at scale
● Build data warehouses for BI and reporting
● Develop real-time dashboards and insights systems
● Implement analytics export pipelines and platform integrations
● Enable A/B testing and experimentation frameworks
● Build personalization and recommendation systems
Technical Stack:
1. Backend Languages & Frameworks
● Primary: Node.js (Express, NestJS), Python (FastAPI, Django)
● Secondary: Go, Java/Kotlin (Spring)
● APIs: REST, GraphQL, gRPC
2. Databases & Storage
● SQL: PostgreSQL, MySQL
● NoSQL: MongoDB, DynamoDB
● Caching: Redis, Memcached
● Search: Elasticsearch, Algolia
● Storage/CDN: AWS S3, CloudFront
● Queues: Kafka, RabbitMQ, AWS SQS
3. Cloud & Infrastructure:
● Cloud: AWS (primary), GCP/Azure (nice to have)
● Compute: EC2, Lambda, ECS, EKS
● Infrastructure: Terraform, CloudFormation
● CI/CD: GitHub Actions, Jenkins, CircleCI
● Containers: Docker, Kubernetes
4. Monitoring & Operations
● Monitoring: Datadog, New Relic, CloudWatch
● Logging: ELK Stack, CloudWatch Logs
● Error Tracking: Sentry, Rollbar
● APM tools
5. Security & Auth
● Auth: JWT, OAuth 2.0, SAML
● Secrets: AWS Secrets Manager, Vault
● Security: Encryption (at rest/in transit), TLS/SSL, IAM
What We’re Looking For:
1. Must-Haves
● 5+ years in backend engineering with strong system design expertise
● Experience building scalable systems from scratch
● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)
● Deep understanding of distributed systems and microservices
● Strong SQL/NoSQL design skills with performance optimization
● Hands-on AWS cloud experience
● Ability to write high-quality production code daily
● Experience building and scaling RESTful APIs
● Strong understanding of caching, sharding, horizontal scaling
● Solid security and best-practice implementation experience
● Proven leadership and mentoring capability
2. Highly Desirable
● Experience with large file processing (3D, video, images)
● Background in SaaS, multi-tenancy, or e-commerce
● Experience with real-time systems (WebSockets, streams)
● Knowledge of ML/AI infrastructure
● Experience with HA systems, DR planning
● Familiarity with GraphQL, gRPC, event-driven systems
● DevOps/infrastructure engineering background
● Experience with XR/AR/VR backend systems
● Open-source contributions or technical writing
● Prior senior technical leadership experience
Technical Challenges You’ll Solve:
● Designing large-scale 3D asset processing pipelines
● Serving XR content globally with ultra-low latency
● Scaling from thousands to millions of daily requests
● Efficiently handling CPU/GPU-heavy workloads
● Architecting multi-tenancy with complete data isolation
● Managing billions of analytics events at scale
● Building future-proof APIs with backward compatibility
Why Join Us:
● Architectural Ownership: Build foundational systems from scratch
● Deep Technical Work: Solve distributed systems and scaling challenges
● Hands-On Impact: Design and code mission-critical infrastructure
● Diverse Problems: APIs, infra, data, ML, XR, asset processing
● Massive Scale Opportunity: Build systems for exponential growth
● Modern Stack and best practices
● Product Impact: Your architecture directly powers millions of users
● Leadership Opportunity: Shape engineering culture and direction
● Learning Environment: Stay at the forefront of backend engineering
● Backed by AWS, Microsoft, Google
Location & Work Culture:
● Location: Bengaluru
● Schedule: 6 days a week (5 days in office, Saturdays WFH)
● Culture: Builder mindset, strong ownership, technical excellence
● Team: Small, highly skilled backend and infra team
● Resources: AWS credits, latest tooling, learning budget
Description
Power BI JD
Mandatory:
• 5+ years of Power BI Report development experience.
• Building Analysis Services reporting models.
• Developing visual reports, KPI scorecards, and dashboards using Power BI desktop.
• Connecting data sources, importing data, and transforming data for Business intelligence.
• Analytical thinking for translating data into informative reports and visuals.
• Capable of implementing row-level security on data along with an understanding of application security layer models in Power BI.
• Strong command of writing and optimizing DAX queries in Power BI Desktop.
• Expert in using advanced-level calculations on the data set.
• Responsible for design methodology and project documentation.
• Should be able to develop tabular and multidimensional models that are compatible with data warehouse standards.
• Very good communication skills; must be able to discuss requirements effectively with client teams and internal teams.
• Experience working with the Microsoft Business Intelligence stack, including Power BI, SSAS, SSRS, and SSIS.
• Must have experience with BI tools and systems such as Power BI, Tableau, and SAP.
• Must have 3–4 years of experience in data-specific roles.
• Knowledge of database fundamentals such as multidimensional database design, relational database design, and more.
• Knowledge of the full Power BI product family (Power BI Premium, Power BI Report Server, Power BI Service, Power Query, etc.).
• Strong grip on data analytics.
• Interact with customers to understand their business problems and provide best-in-class analytics solutions
• Proficient in SQL and Query performance tuning skills
• Understand data governance, quality and security and integrate analytics with these corporate platforms
• Attention to detail and ability to deliver accurate client outputs
• Experience of working with large and multiple datasets / data warehouses
• Ability to derive insights from data and analysis and create presentations for client teams
• Experience with performance optimization of the dashboards
• Interact with UX/UI designers to create best-in-class visualizations for the business, harnessing all product capabilities.
• Resilience under pressure and against deadlines.
• Proactive attitude and an open outlook.
• Strong analytical problem-solving skills
• Skill in identifying data issues and anomalies during the analysis
• Strong business acumen and a demonstrated aptitude for analytics that incites action.
• Ability to execute on design requirements defined by business
• Ability to understand required Power BI functionality from wireframes/ requirement documents
• Ability to architect and design reporting solutions based on client needs.
• Ability to communicate with internal/external customers, and a desire to develop communication and client-facing skills.
• Ability to work seamlessly with MS Excel, including working knowledge of pivot tables and related functions.
Good to have:
• Experience in working with Azure and connecting Synapse with Tableau
• Demonstrate strength in data modelling, ETL development, and data warehousing
• Knowledge of leading large-scale data warehousing and analytics projects using Azure, Synapse, MS SQL DB
• Good knowledge of building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets
• Good to have knowledge of Supply Chain Domain.
Rupayya (www.rupayya.com) is looking for a passionate and skilled iOS Developer with 2–4 years of experience in building high-quality mobile applications. The ideal candidate should have strong expertise in Swift, a good understanding of iOS frameworks, and the ability to deliver scalable and performant applications.
🛠️ Key Responsibilities
- Design and develop advanced applications for the iOS platform using Swift / Objective-C
- Collaborate with cross-functional teams including UI/UX designers, backend developers, and product managers
- Integrate RESTful APIs and third-party libraries/services
- Ensure the performance, quality, and responsiveness of applications
- Identify and fix bugs, optimize performance, and improve code quality
- Participate in code reviews and follow best coding practices
- Maintain code versioning using tools like Git
- Stay updated with the latest trends and technologies in iOS development
💡 Required Skills & Qualifications
- 2–4 years of hands-on experience in iOS development
- Strong proficiency in Swift (preferred) and/or Objective-C
- Experience with iOS frameworks such as UIKit, Core Data, Core Animation
- Familiarity with MVVM / MVC architecture patterns
- Experience with RESTful APIs and JSON parsing
- Knowledge of Auto Layout, Storyboards, and SwiftUI (preferred)
- Understanding of app lifecycle, memory management, and multithreading
- Experience with version control systems like Git
⭐ Good to Have
- Experience with SwiftUI and modern iOS development practices
- Knowledge of CI/CD pipelines for mobile apps
- Familiarity with unit testing frameworks (XCTest)
- Experience with Firebase, push notifications, and analytics tools
- Exposure to App Store deployment and guidelines
🎯 Key Competencies
- Strong problem-solving and debugging skills
- Attention to detail and performance optimization
- Good communication and teamwork skills
- Ability to work in an agile environment
🎁 What We Offer
- Opportunity to work on cutting-edge mobile technologies
- Flexible work environment
- Learning and growth opportunities
- ESOPs

Senior Full Stack Developer – Job Description
Job Overview
Surety Seven Technologies Pvt Ltd is looking for an experienced and highly skilled Senior Full Stack Developer with strong expertise in Next.js, Node.js, and React.js. The ideal candidate will lead architecture decisions, build scalable applications, guide development teams, and drive technical excellence across projects.
This role requires strong ownership, leadership capability, and hands-on coding expertise in both frontend and backend technologies.
Key Responsibilities
- Lead the design and architecture of scalable full-stack applications
- Develop, maintain, and optimize web applications using Next.js, React.js, and Node.js
- Build robust RESTful APIs and backend services
- Ensure high performance, security, and responsiveness of applications
- Work closely with Product, Design, and QA teams to deliver high-quality features
- Conduct code reviews and maintain coding standards & best practices
- Mentor and guide junior and mid-level developers
- Manage CI/CD pipelines and deployment processes
- Troubleshoot complex production issues and provide solutions
- Contribute to technical documentation and system design discussions
Required Skills & Qualifications
- 5–8 years of experience in Full Stack Development
- Strong hands-on experience with Next.js, React.js, and Node.js
- Deep knowledge of JavaScript (ES6+), HTML5, CSS3
- Experience with MongoDB / MySQL / PostgreSQL
- Strong understanding of REST APIs, authentication (JWT/OAuth), and API security
- Experience with Git, CI/CD tools, and deployment on cloud platforms (AWS, Azure, or similar)
- Understanding of microservices architecture (preferred)
- Strong problem-solving and debugging skills
- Experience leading technical modules or teams
Java Developer (6+ Years Experience)
We are looking for an experienced Java Developer to join our dynamic team for an exciting project with a leading client.
Role Details:
Location: Bangalore
Key Requirements:
6+ years of hands-on experience in Java development
Strong expertise in Core Java, Spring Boot, Microservices
Experience with REST APIs & backend development
Good understanding of databases (SQL/NoSQL)
Familiarity with Agile methodologies
We are hiring for a Python Developer at Wissen Technology!
📍 Location: Pune (Hybrid)
💼 Experience: 3–6 Years
⏱️ Notice Period: Immediate / 15 days preferred
🔧 Key Skills:
• Strong experience in Python
• Hands-on with Pandas & NumPy (see the sketch after this list)
• Experience with AWS (S3, Lambda preferred)
• Good understanding of data processing & APIs
• SQL knowledge
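As a flavor of the work involved, here is a minimal sketch that reads a CSV from S3 with boto3 and aggregates it with Pandas; the bucket, key, and column names are placeholders.

```python
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")  # credentials resolved from the environment
obj = s3.get_object(Bucket="example-bucket", Key="exports/daily.csv")  # placeholder names
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Basic cleaning and aggregation.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
summary = df.dropna(subset=["amount"]).groupby("region")["amount"].sum()
print(summary)
```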
🏢 About Wissen Technology:
Wissen Technology, part of the Wissen Group (est. 2000), is a fast-growing technology company specializing in high-end consulting across Banking, Finance, Telecom, and Healthcare domains.
✔️ Global presence – US, India, UK, Australia, Mexico & Canada
✔️ Certified Great Place to Work®
✔️ Trusted by Fortune 500 clients like Morgan Stanley, Goldman Sachs, and more
✔️ Strong growth with 400% revenue increase in recent years
🌐 Website: www.wissen.com
🔗 LinkedIn: https://www.linkedin.com/company/wissen-technology/
If you’re interested or have relevant candidates, please share your resume at [your email].
#Hiring #PythonDeveloper #PuneJobs #AWS #ImmediateJoiner
While you may already know about Wissen and the company history, here is a quick rundown for you.
About Wissen Technology:
· The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
· Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
· Our workforce consists of highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League and other premier universities like Wharton, MIT, IITs, IIMs, and NITs, and who bring rich work experience from some of the biggest companies in the world.
· Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
· Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
· We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
· Wissen Technology has been certified as a Great Place to Work®.
· Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
· Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
· We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, Goldman Sachs, MSCI, State Street, Flipkart, Swiggy, Trafigura, and GE, to name a few.
Job Title: Application Development Engineer (Python – Backtesting & Index Platforms)
Role Overview
Key Responsibilities
Engine Development: Design and implement modular, reusable Python components for index construction, rebalancing, and backtesting.
Large-Scale Simulation: Use Pandas, NumPy, and PySpark to run historical calculations across long time horizons and multiple index variants (see the sketch after this list).
Workflow Integration: Integrate engines with orchestrators such as Airflow or Temporal using parameterized, config-driven execution.
Reference Data Consumption: Query and utilize pricing, security master, and corporate action data from Snowflake.
Quality & Reconciliation: Build automated test harnesses to validate outputs, compare against benchmarks, and guarantee reproducibility.
Performance Optimization: Improve runtime efficiency through vectorization, caching, and distributed computing patterns.
Cross-Team Collaboration: Partner with Business, Index Ops, and Platform teams to accelerate research-to-production onboarding.
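To make the engine work concrete, here is a minimal vectorized backtest sketch in Pandas; the weighting scheme, tickers, and base level are illustrative assumptions, not the platform's actual methodology.

```python
import pandas as pd

def backtest_index(prices: pd.DataFrame, weights: pd.DataFrame) -> pd.Series:
    """Index levels from daily prices and rebalance weights, fully vectorized.

    prices:  dates x tickers (daily closes); weights: rebalance dates x tickers.
    """
    # Hold weights between rebalance dates; apply them from the next day.
    w = weights.reindex(prices.index).ffill().fillna(0.0)
    daily_returns = prices.pct_change().fillna(0.0)
    portfolio_returns = (w.shift(1) * daily_returns).sum(axis=1)
    return 100.0 * (1.0 + portfolio_returns).cumprod()  # base level 100

# Toy example: two tickers, one rebalance at the start.
dates = pd.bdate_range("2024-01-01", periods=5)
prices = pd.DataFrame({"AAA": [10.0, 10.1, 10.2, 10.0, 10.3],
                       "BBB": [20.0, 19.8, 20.2, 20.4, 20.1]}, index=dates)
weights = pd.DataFrame({"AAA": [0.6], "BBB": [0.4]}, index=[dates[0]])
print(backtest_index(prices, weights).round(3))
```

The same shape of computation scales to long horizons and many variants, and ports naturally to PySpark when a single machine is no longer enough.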
Required Technical Capabilities
Python Expertise: Strong proficiency in Python application development with emphasis on clean architecture and maintainable design.
Data & Numerical Libraries: Deep experience with Pandas and NumPy; working knowledge of PySpark for distributed workloads.
Financial Computation: Ability to implement portfolio mathematics, weighting algorithms, and time-series transformations.
Config-Driven Systems: Experience building rule-based or metadata-driven processing frameworks.
Database Skills: Strong SQL and experience consuming structured data from Snowflake.
Testing Discipline: Expertise in unit testing, regression testing, and deterministic replay of calculations.
Orchestration Integration: Familiarity with Airflow, Temporal, or similar workflow engines.
Cloud Infrastructure: Solid understanding of AWS ecosystem services (S3, Lambda, IAM) and how they integrate with the Snowflake Data Cloud.
Blue Owls Solutions is looking for a mid-level Azure Data Engineer with approximately 4 years of hands-on experience to join our growing data team. In this role, you will design, build, and maintain scalable data pipelines and architectures that power business-critical analytics and reporting. You'll work closely with cross-functional teams to transform raw data into reliable, high-quality datasets that drive decision-making across the organization.
Required Skills
- 4+ years of professional experience as a Data Engineer or in a similar data-focused role
- Strong proficiency in SQL for data manipulation, querying, and performance optimization
- Hands-on experience with PySpark for large-scale data processing and transformation
- Solid working knowledge of the Microsoft Azure ecosystem (Azure Data Factory, Azure Data Lake, Azure Synapse, etc.)
- Experience with Microsoft Fabric for end-to-end data analytics workflows
- Ability to design and implement robust data architectures including data warehouses, lakehouses, and ETL/ELT frameworks
- Strong coding and scripting skills with Python
- Proven problem-solving ability with a knack for debugging complex data issues and optimizing pipeline performance
- Understanding of data modeling concepts, dimensional modeling, and data governance best practices
Interview Process
- Take-Home Assessment
- 60-Minute Technical Interview
- Culture Fit Round
Preferred Skills & Certifications
- Microsoft Certified: Fabric Analytics Engineer Associate (DP-600)
- Microsoft Certified: Fabric Data Engineer Associate (DP-700)
- Experience with CI/CD practices for data pipelines
- Familiarity with version control systems such as Git
- Exposure to real-time streaming data solutions
- Experience working in Agile or Scrum environments
- Strong communication skills with the ability to translate technical concepts for non-technical stakeholders
What We Offer
- Competitive salary and performance-based bonuses
- Flexible hybrid options
- Opportunities for professional development, training, and certification sponsorship
- A collaborative, innovation-driven team culture
- Paid time off and company holidays
Position Responsibilities:
- Collaborate with the development team to maintain, enhance, and scale the product for enterprise use.
- Design and develop scalable, high-performance solutions using cloud technologies and containerization.
- Contribute to all phases of the development lifecycle, following SOLID principles and best practices.
- Write well-designed, testable, and efficient code with a strong emphasis on Test-Driven Development (TDD), ensuring comprehensive unit, integration, and performance testing.
- Ensure software designs comply with specifications and security best practices.
- Recommend changes to improve application architecture, maintainability, and performance.
- Develop and optimize database queries using T-SQL.
- Prepare and produce software component releases.
- Develop and execute unit, integration, and performance tests.
- Support formal testing cycles and resolve test defects.
AI-Specific Responsibilities:
- Integrate AI-powered tools and frameworks to enhance code quality and development efficiency.
- Utilize AI-driven analytics to identify performance bottlenecks and optimize system performance.
- Implement AI-based security measures to proactively detect and mitigate potential threats.
- Leverage AI for automated testing and continuous integration/continuous deployment (CI/CD) processes.
- Guide the adoption and effective use of AI agents for automating repetitive development, deployment, and testing processes within the engineering team.
Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field.
- Highly proficient in ASP.NET Core (C#) and full-stack development.
- Experience developing REST APIs.
- Proficiency in front-end technologies (JavaScript, HTML, CSS, Bootstrap, and UI frameworks).
- Strong database experience, particularly with T-SQL and relational database design.
- Advanced understanding of object-oriented programming (OOP) and SOLID principles.
- Experience with security best practices in web and API development.
- Knowledge of Agile SCRUM methodology and experience in collaborative environments.
- Experience with Test-Driven Development (TDD).
- Strong analytical skills, problem-solving abilities, and curiosity to explore new technologies.
- Ability to communicate effectively, including explaining technical concepts to non-technical stakeholders.
- High commitment to continuous learning, innovation, and improvement.
AI-Specific Qualifications:
- Proficiency in AI-driven development tools and platforms such as GitHub Copilot in Agentic Mode.
- Knowledge of AI-based security protocols and threat detection systems.
- Experience integrating GenAI or Agentic AI agents into full-stack workflows (e.g., using AI for code reviews, automated bug fixes, or system monitoring).
- Demonstrated proficiency with AI-assisted development tools and prompt engineering for code generation, testing, or documentation.
Job Title: QA Tester – FinTech (Manual + Automation Testing)
Location: Bangalore, India
Job Type: Full-Time
Experience Required: 3 Years
Industry: FinTech / Financial Services
Function: Quality Assurance / Software Testing
About the Role:
We are looking for a skilled QA Tester with 3 years of experience in both manual and automation testing, ideally in the FinTech domain. The candidate will work closely with development and product teams to ensure that our financial applications meet the highest standards of quality, performance, and security.
Key Responsibilities:
- Analyze business and functional requirements for financial products and translate them into test scenarios.
- Design, write, and execute manual test cases for new features, enhancements, and bug fixes.
- Develop and maintain automated test scripts using tools such as Selenium, TestNG, or similar frameworks (see the sketch after this list).
- Conduct API testing using Postman, Rest Assured, or similar tools.
- Perform functional, regression, integration, and system testing across web and mobile platforms.
- Work in an Agile/Scrum environment and actively participate in sprint planning, stand-ups, and retrospectives.
- Log and track defects using JIRA or a similar defect management tool.
- Collaborate with developers, BAs, and DevOps teams to improve quality across the SDLC.
- Ensure test coverage for critical fintech workflows like transactions, KYC, lending, payments, and compliance.
- Assist in setting up CI/CD pipelines for automated test execution using tools like Jenkins, GitLab CI, etc.
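As a flavor of the automation work above, a minimal Selenium + pytest sketch in Python; the URL, element IDs, and expected message are hypothetical.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Chrome()  # assumes a local Chrome/chromedriver setup
    yield d
    d.quit()

def test_login_rejects_bad_password(driver):
    driver.get("https://staging.example.com/login")  # placeholder URL
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("wrong-password")
    driver.find_element(By.ID, "submit").click()
    banner = driver.find_element(By.CSS_SELECTOR, ".error-banner")
    assert "Invalid credentials" in banner.text  # hypothetical error copy
```

The same script structure plugs into a Jenkins or GitLab CI stage for automated execution.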
Required Skills and Experience:
- 3+ years of hands-on experience in manual and automation testing.
- Solid understanding of QA methodologies, STLC, and SDLC.
- Experience in testing FinTech applications such as digital wallets, online banking, investment platforms, etc.
- Strong experience with Selenium WebDriver, TestNG, Postman, and JIRA.
- Knowledge of API testing, including RESTful services.
- Familiarity with SQL to validate data in databases.
- Understanding of CI/CD processes and basic scripting for automation integration.
- Good problem-solving skills and attention to detail.
- Excellent communication and documentation skills.
Preferred Qualifications:
- Exposure to financial compliance and regulatory testing (e.g., PCI DSS, AML/KYC).
- Experience with mobile app testing (iOS/Android).
- Working knowledge of test management tools like TestRail, Zephyr, or Xray.
- Performance testing experience (e.g., JMeter, LoadRunner) is a plus.
- Basic knowledge of version control systems (e.g., Git).
Role & Responsibilities
The company drives large-scale data modernization and AI readiness for global enterprises. We are looking for an experienced Data Modeler to design, standardize, and maintain enterprise data models across our modernization initiatives, ensuring consistency, quality, and business alignment across cloud data platforms.
You will be responsible for translating business requirements and data flows into robust conceptual, logical, and physical data models across multiple domains (Customer, Product, Finance, Supply Chain, etc.), and will work closely with Data Architects, Engineers, and Governance teams to ensure data is structured, traceable, and optimized for analytics and interoperability across platforms like Snowflake, Dremio, and Databricks.
Key Responsibilities-
- Develop conceptual, logical, and physical data models aligned with enterprise architecture standards.
- Engage with Business Stakeholders: Collaborate with business teams, business analysts and SMEs to understand business processes, data lifecycles, and key metrics that drive value and outcomes.
- Value Chain Understanding: Analyze end-to-end customer and product value chains to identify critical data entities, relationships, and dependencies that should be represented in the data model.
- Conceptual and Logical Modeling: Translate business concepts and data requirements into conceptual and logical data models that capture enterprise semantics and support analytical and operational needs.
- Physical Data Modeling: Design and implement physical data models optimized for performance and scalability
- Semantic Layer Design: Create semantic models that enable business access to data via BI tools and data discovery platforms.
- Data Standards and Governance: Ensure models comply with enterprise data standards, naming conventions, lineage tracking, and governance practices.
- Implement naming conventions, data standards, and metadata definitions across all models.
- Collaboration with Data Engineering: Work closely with data engineers to align data pipelines with the logical and physical models, ensuring consistency and accuracy from ingestion to consumption.
- Manage version control, lineage tracking, and change documentation for models.
- Participate in data quality and governance initiatives to ensure trusted and consistent data definitions across domains.
- Create and maintain a business glossary in collaboration with the governance team.
Ideal Candidate
- Strong Enterprise Data Modeller profile (Modern Data Platforms)
- Mandatory (Experience 1) – Must have 7+ years of experience in Data Modeling or Enterprise Data Architecture, with strong hands-on expertise in designing conceptual, logical, and physical data models for enterprise data platforms
- Mandatory (Experience 2) – Must have strong hands-on experience with enterprise data modeling tools such as Erwin, ER/Studio, PowerDesigner, SQLDBM, or similar
- Mandatory (Experience 3) – Must have Deep understanding of dimensional modeling (Kimball / Inmon methodologies), normalization techniques, and schema design for modern data warehouse environments.
- Mandatory (Experience 4) – Proven experience designing data models for modern data platforms such as Snowflake, Databricks, Redshift, Dremio, or similar cloud data warehouse / lakehouse systems.
- Mandatory (Experience 5) – Must have strong SQL expertise and schema design skills, with the ability to validate data model implementations and collaborate closely with data engineering teams
- Mandatory (Education) – Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related technical field.
- Preferred (Experience 1) – Should have familiarity with data governance, metadata management, lineage, and business glossary tools such as Collibra, Alation, or Microsoft Purview.
- Preferred (Experience 2) – Exposure to data integration pipelines and ETL frameworks such as Informatica, DBT, Airflow, or similar tools.
- Preferred (Data Management) – Understanding of master data management (MDM) and reference data management principles.
- Preferred (Domain) – Experience working with high-tech or manufacturing data domains, including customer, product, or supply chain data models
Head of Technical Recruiting
Experience: 8+ years (3+ in leadership)
Location: [Add location]
Description:
We’re looking for a Head of Technical Recruiting with a strong understanding of software development to lead and scale our tech hiring. You will partner with engineering leadership to define hiring needs, build efficient interview processes, and deliver high-quality hires across backend, frontend, data, platform, and product engineering roles.
Key Responsibilities:
Lead end-to-end technical hiring strategy
Partner with CTO/Engineering leaders on workforce planning
Manage and mentor technical recruiters
Design and improve technical interview processes
Drive senior and critical tech hires
Track and improve hiring metrics (time-to-hire, quality, funnel)
Requirements:
6+ years in recruiting, primarily technical hiring
3+ years leading recruiting teams
Strong understanding of software development and tech stacks
Experience hiring across engineering roles (BE/FE/FS/Data/DevOps, etc.)
Strong stakeholder management and communication skills
Nice to Have:
Engineering or technical background
Experience in fast-growing tech companies
Highlights:
- Current location of candidate should be Bangalore
- Total Experience: 6–12 years
- Joining time period: Within 30 days
- GCP BigQuery expert, GCP Certified
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
Job Summary
We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.
Key Responsibilities
ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using GCP services (Cloud Run, Dataflow) and Python.
Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards.
API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
Qualifications and Skills
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
Experience: 6+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.
Core Competencies:
Must Have - SQL, Python, BigQuery (GCP Dataflow / Apache Beam), Google Cloud Storage (GCS); a minimal BigQuery client sketch follows this list
Must Have - GCP Certification
Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
Experience with data validation techniques and tools.
Familiarity with CI/CD practices and the ability to work in an Agile framework.
Strong problem-solving skills and keen attention to detail.
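A minimal sketch of querying BigQuery from Python with the official google-cloud-bigquery client; the project, dataset, and column names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()  # credentials and project resolved from the environment

query = """
    SELECT region, COUNT(*) AS events
    FROM `example-project.analytics.events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY region
    ORDER BY events DESC
"""
for row in client.query(query).result():
    print(row["region"], row["events"])
```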
Full Stack Developer – React + Node.js + SQL + Team Lead + Excellent comms
Job Title: Full Stack Developer (React + Node.js + SQL)
Experience: 7+ years
Job Summary:
We are seeking a highly capable full stack developer proficient in React, Node.js, and SQL. The ideal candidate will be responsible for developing scalable web applications and APIs, integrating with relational databases, and delivering high-quality, maintainable code.
Key Responsibilities:
- Design and develop front-end interfaces using React with Redux/Context API.
- Build RESTful APIs and backend services using Node.js (Express.js).
- Integrate and optimize SQL queries with PostgreSQL, MySQL, or MS SQL Server.
- Ensure responsive design and cross-browser compatibility.
- Collaborate with UI/UX designers, testers, and backend engineers.
- Write unit and integration tests to ensure code quality and reliability.
Required Skills:
- Strong hands-on experience with React.js and Node.js.
- Proficiency in SQL and experience with relational database systems.
- Knowledge of RESTful API design and microservices architecture.
- Familiarity with version control systems (Git), CI/CD pipelines.
- Strong understanding of HTML, CSS, and JavaScript (ES6+).
Good to Have:
- Experience with GraphQL, TypeScript, or NoSQL databases.
- Cloud deployment experience (AWS/Azure).
- Containerization knowledge (Docker/Kubernetes).
- Agile/Scrum project methodology experience.
About Us:
REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned in the US and European markets.
We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI.
Responsibilities:
● Engineering - Take complete ownership of engineering stacks including Data Engineering and MLOps. Define and maintain software systems architecture for high availability 24x7 systems.
● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.
● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime.
Requirements:
● 4-5 years of experience building highly available systems
● 2-3 years experience leading a team of engineers and analysts
● Bachelors or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent
● Proficient in Python programming, with expertise in data engineering and machine learning deployment
● Experience in databases including MySQL and NoSQL
● Experience in developing and maintaining critical and high availability systems will be given strong preference
● Experience in software design using design principles and architectural modeling.
● Experience working with AWS cloud platform.
● Strong analytical and data driven approach to problem solving
Job Title: Tech Developer Intern
Department: Technology / Product Development
Location: Hyderabad
Duration: 6 Months Internship
Experience : 3 - 6 months
Job Overview
We are looking for a motivated Tech Developer Intern who is passionate about software development and eager to gain hands-on experience in building and maintaining technology solutions. The intern will work closely with the technology team to assist in development, testing, and improvement of internal systems and digital platforms.
Key Responsibilities
- Assist in developing, testing, and maintaining software applications and internal tools.
- Support the tech team in debugging, troubleshooting, and resolving technical issues.
- Write clean, efficient, and well-documented code.
- Participate in system improvements, feature development, and product enhancements.
- Assist in database management and integration tasks.
- Collaborate with cross-functional teams such as product, operations, and design.
- Conduct basic testing and quality checks for new features and updates.
- Maintain documentation for code, processes, and system changes.
Skills & Qualifications
- Skills: TypeScript, React, Next.js, Redux, Node.js, SQL/NoSQL
- AI-driven development experience is a plus
- Problem-solving mindset and willingness to learn new technologies.
- Good communication and teamwork skills.
What You Will Gain
- Hands-on experience working on real tech projects.
- Exposure to product development and startup tech environment.
- Mentorship from experienced developers.
- Opportunity to convert into a full-time role based on performance.
Backend – Software Development Engineer II
Experience: 4–7 years
Location: Bangalore
Work Mode - Hybrid
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IOT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate is interested in working in fast-paced, challenging environments, constantly upskilling, learning new technologies, and expanding their domain knowledge into new industries. This candidate needs to be a team player who helps build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications, and building new applications on the cloud for large enterprises and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams, and MongoDB Solutions Architects.
What you will do
- Own backend features and migration workstreams end to end, from understanding the existing codebase and data model to delivering production-ready implementations.
- Build and enhance Java/Spring Boot services used in modernization, migration, and cloud transformation projects.
- Design and implement backend flows that are safe under retries and partial failures, with clear thinking around validation, transaction boundaries, idempotency, and downstream side effects (see the sketch after this list).
- Work across application, database, and deployment layers to improve reliability, maintainability, and operational visibility.
- Model data based on access patterns and business workflows, with sound choices around schema design, indexing, and query performance.
- Investigate production issues using logs, request traces, database state, and service behavior; identify root causes and implement durable fixes.
- Collaborate with internal teams, customer engineering teams, architects, and stakeholders to deliver high-quality solutions on mission-critical projects.
- Write clean, modular, testable code and participate actively in sprint ceremonies, code reviews, design discussions, and release activities.
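The stack here is Java/Spring Boot; purely to illustrate the retry-safety point above, the idempotency-key pattern is sketched in Python. The in-memory store and downstream call are stand-ins; a real service would persist the key and response in the database within the same transaction as the side effect.

```python
processed: dict[str, dict] = {}  # stand-in for a table with a unique idempotency key

def charge_downstream(request: dict) -> dict:
    # Stand-in for the one real side effect (e.g., a payment call).
    return {"status": "charged", "amount": request["amount"]}

def handle_payment(idempotency_key: str, request: dict) -> dict:
    """Safe under client retries: the side effect runs at most once per key."""
    if idempotency_key in processed:
        return processed[idempotency_key]  # replay the recorded response
    result = charge_downstream(request)
    processed[idempotency_key] = result
    return result

first = handle_payment("key-123", {"amount": 500})
retry = handle_payment("key-123", {"amount": 500})
assert first is retry  # the retry did not repeat the charge
```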
What we’re looking for
- 4–7 years of backend engineering experience, with strong hands-on delivery in Java-based systems.
- Solid experience with Java, Spring Boot, and microservice-style backend development.
- Demonstrated ownership of at least one meaningful backend service or feature area in production.
- Strong understanding of backend engineering fundamentals, including service reliability, data consistency, failure handling, and production-grade design considerations.
- Strong database fundamentals, including schema design, query writing, indexing, and performance reasoning.
- Strong depth in MongoDB or a relational database such as Oracle, with working comfort across both styles being a plus.
- Ability to investigate and resolve real production issues across services and databases, including consistency, performance, and reliability problems.
- Hands-on experience with testing frameworks such as JUnit and Mockito.
- Proficiency with Git, including branching, code review workflows, and conflict resolution.
- Strong communication skills and the ability to collaborate effectively with engineers, stakeholders, and customers.
Preferred qualifications
- Experience working on legacy modernization, data/service migrations, or decomposition of existing systems into cleaner service boundaries.
- Exposure to both MongoDB and relational databases, including query tuning, indexing, and production troubleshooting.
- Familiarity with Oracle PL/SQL or migration of logic from database-heavy systems into service-layer Java code.
- Exposure to CI/CD deployment and cloud environments like AWS, Azure, or GCP.
Immediate hiring for Senior Data Engineer
📍 Location: Hyderabad/Bangalore
💼 Experience: 7+ years
🕒 Employment Type: Full-Time
🏢 Work Mode: Hybrid
📅 Notice Period: 0–1 month (serving notice only)
We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.
🔎 Key Responsibilities:
- Data Pipeline Development
- Data Modeling and Architecture
- Data Integration and API Development
- Data Infrastructure Management
- Collaboration and Documentation
🎯 Required Skills:
- Bachelor’s degree in computer science, Engineering, Information Systems, or a related field.
- 7+ years of proven experience in data engineering, software development, or related technical roles.
- 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
- 7+ years of experience with database systems, data modeling, and advanced SQL.
- 7+ years of experience with ETL tools such as SSIS, Snowflake, Databricks, Azure Data Factory, Stored Procedures, etc.
- Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
- 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
- Strong analytical, problem-solving, and debugging skills with high attention to detail.
- Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
- Ability to adapt to rapidly evolving technologies and business requirements.
Strong Senior Backend Engineer profiles
Mandatory (Experience 1) – Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems
Mandatory (Experience 2) – Must have strong backend development experience using one or more frameworks (FastAPI or Django in Python, Spring in Java, Express in Node.js); a minimal FastAPI sketch follows this list.
Mandatory (Experience 3) – Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework
Mandatory (Experience 4) – Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization
Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices
Mandatory (Domain) – Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)
Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
Mandatory (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS) are highly preferred
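As a flavor of the framework expectations above, a minimal FastAPI sketch; the routes, model, and in-memory store are illustrative only.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Invoice(BaseModel):
    customer_id: str
    amount_cents: int

_invoices: dict[int, Invoice] = {}  # in-memory stand-in for a real database

@app.post("/invoices", status_code=201)
def create_invoice(invoice: Invoice) -> dict:
    invoice_id = len(_invoices) + 1
    _invoices[invoice_id] = invoice
    return {"id": invoice_id, **invoice.model_dump()}  # model_dump assumes Pydantic v2

@app.get("/invoices/{invoice_id}")
def get_invoice(invoice_id: int) -> dict:
    if invoice_id not in _invoices:
        raise HTTPException(status_code=404, detail="invoice not found")
    return _invoices[invoice_id].model_dump()
```

Assuming the file is named main.py, it runs locally with `uvicorn main:app --reload`.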

Role Overview
We are looking for a Senior Data Quality Engineer who is passionate about building reliable and scalable data platforms. In this role, you will ensure high-quality, trustworthy data across pipelines and analytics systems by designing robust data ingestion frameworks, implementing data quality checks, and optimizing data transformations.
You will work closely with data engineers, analytics teams, and product stakeholders to ensure data accuracy, consistency, and reliability across the organization.
Key Responsibilities
- Cleanse, normalize, and enhance data quality across operational systems and new data sources flowing through the data platform.
- Design, build, monitor, and maintain ETL/ELT pipelines using Python, SQL, and Airflow.
- Develop and optimize data models, tables, and transformations in Snowflake.
- Build and maintain data ingestion workflows, including API integrations, file ingestion, and database connectors.
- Ensure data reliability, integrity, and performance across pipelines.
- Perform comprehensive data profiling to understand data structures, detect anomalies, and resolve inconsistencies.
- Implement data quality validation frameworks and automated checks across pipelines (a minimal sketch follows this list).
- Use data integration and data quality tools such as Deequ, Great Expectations (GX), Splink, Fivetran, Workato, Informatica, etc., to onboard new data sources.
- Troubleshoot pipeline failures and implement data monitoring and alerting mechanisms.
- Collaborate with engineering, analytics, and product teams in an Agile development environment.
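To make the validation item above concrete, a minimal library-agnostic sketch in pandas; the thresholds and column names are assumptions, and in practice tools like Great Expectations or Deequ express the same checks declaratively.

```python
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return human-readable check failures (an empty list means all checks passed)."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("order_id contains duplicates")
    if df["amount"].lt(0).any():
        failures.append("amount contains negative values")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:  # hypothetical threshold: at most 1% missing
        failures.append(f"customer_id null rate {null_rate:.1%} exceeds 1%")
    return failures

df = pd.DataFrame({"order_id": [1, 2, 2],
                   "amount": [10.0, -5.0, 7.5],
                   "customer_id": ["a", None, "c"]})
for failure in validate_orders(df):
    print("FAILED:", failure)
```

Wired into Airflow, a task like this can fail the pipeline (or raise an alert) before bad data reaches Snowflake.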
Required Technical Skills
Core Technologies
- Strong hands-on experience with SQL
- Python for data transformation and pipeline development
- Workflow orchestration using Apache Airflow
- Experience working with Snowflake data warehouse
Data Engineering Expertise
- Strong understanding of ETL / ELT pipeline design
- Data profiling and data quality validation techniques
- Experience building data ingestion pipelines from APIs, files, and databases
- Data modeling and schema design
Tools & Platforms
- Data Quality Tools: Deequ, Great Expectations (GX), Splink
- Data Integration Tools: Fivetran, Workato, Informatica
- Cloud Platforms: AWS (preferred)
- Version Control & DevOps: Git, CI/CD pipelines
Qualifications
- 5–8 years of experience in Data Quality Engineering / Data Engineering
- Strong expertise in SQL, Python, Airflow, and Snowflake
- Experience working with large-scale datasets and distributed data systems
- Solid understanding of data engineering best practices across the development lifecycle
- Experience working in Agile environments (Scrum, sprint planning, etc.)
- Strong analytical and problem-solving skills
What We Look For
- Passion for data accuracy, reliability, and governance
- Ability to identify and resolve complex data issues
- Strong collaboration skills across data, engineering, and analytics teams
- Ownership mindset and attention to data integrity and performance
Why Join Us
- Opportunity to work on modern data platforms and large-scale datasets
- Collaborate with high-performing data and engineering teams
- Exposure to cloud data architecture and modern data tools
- Competitive compensation and strong career growth opportunities
If you are good at writing complex queries, very good at Python and debugging, adept at understanding complex systems, able to swim through logs to find the exact point where data drops, and have been on the firefighting side addressing bugs in live production systems, send us your resume.
Position Title: Senior Data Engineer (Founding Member) - Insurtech Startup
Location: Hyderabad (Onsite)
Immediate to 15-day joiners preferred
Experience: 5 to 13 years
Role Summary
We are looking for a Senior Data Engineer who will play a foundational role in:
- Client onboarding from a data perspective
- Understanding complex insurance data flows
- Designing secure, scalable ingestion pipelines
- Establishing strong data modeling and governance standards
This role sits at the intersection of technology, data architecture, security, and business onboarding.
Key Responsibilities
- Lead end-to-end data onboarding for new clients and partners, working closely with business and product teams to understand client systems, data formats, and migration constraints
- Define and implement data ingestion strategies supporting multiple sources and formats, including CSV, XML, JSON files, and API-based integrations
- Design, build, and operate robust, scalable ETL/ELT pipelines, supporting both batch and near-real-time data processing
- Handle complex insurance-domain data including Contracts, Claims, Reserves, Cancellations, and Refunds
- Architect ingestion pipelines with security-by-design principles, including secure credential management (keys, secrets, tokens), encryption at rest and in transit, and network-level controls where required
- Enforce role-based and attribute-based access controls, ensuring strict data isolation, tenancy boundaries, and stakeholder-specific access rules
- Design, maintain, and evolve canonical data models that support operational workflows, reporting & analytics, and regulatory/audit requirements
- Define and enforce data governance standards, ensuring compliance with insurance and financial data regulations and consistent definitions of business metrics across stakeholders
- Build and operate data pipelines on a cloud-native platform, leveraging distributed processing frameworks (Spark / PySpark), data lakes, lakehouses, and warehouses
- Implement and manage orchestration, monitoring, alerting, and cost-optimization mechanisms across the data platform
- Contribute to long-term data strategy, platform architecture decisions, and cost-optimization initiatives while maintaining strict security and compliance standards
Required Technical Skills
- Core Stack: Python, Advanced SQL (complex joins, window functions, performance tuning), PySpark
- Platforms: Azure, AWS, Databricks, Snowflake
- ETL / Orchestration: Airflow or similar frameworks (see the DAG sketch after this list)
- Data Modeling: Star/Snowflake schema, dimensional modeling, OLAP/OLTP
- Visualization Exposure: Power BI
- Version Control & CI/CD: GitHub, Azure DevOps, or equivalent
- Integrations: APIs, real-time data streaming, ML model integration exposure
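To illustrate the orchestration line above, a minimal Airflow 2.x TaskFlow DAG sketch; the DAG name and task bodies are placeholders, not the actual pipelines.

```python
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def claims_ingestion():
    @task
    def extract() -> list[dict]:
        # Stand-in for reading CSV/XML/JSON files or an API feed.
        return [{"claim_id": "C-1", "reserve": 1200.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        return [r for r in rows if r["reserve"] >= 0]  # basic sanity filter

    @task
    def load(rows: list[dict]) -> None:
        print(f"loading {len(rows)} rows")  # stand-in for a lakehouse/warehouse write

    load(transform(extract()))

claims_ingestion()
```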
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- 5+ years of experience in data engineering or similar roles
- Strong ability to align technical solutions with business objectives
- Excellent communication and stakeholder management skills
What We Offer
- Direct collaboration with the core US data leadership team
- High ownership and trust to manage the function end-to-end
- Exposure to a global environment with advanced tools and best practices
Job Description:
Position Type: Full-Time Contract (with potential to convert to Permanent)
Location: Remote (Australian Time Zone)
Availability: Immediate Joiners Preferred
About the Role
We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.
The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.
Key Responsibilities
- Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
- Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
- Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
- Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs (see the connector sketch after this list).
- Perform data profiling, data validation, and ensure data quality across systems.
- Work closely with data engineering teams to improve data structures for better reporting efficiency.
- Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
- Support deployment, version control, and documentation of BI solutions.
- Ensure availability of dashboards during Australian business hours.
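For the SQL and pipeline items above, a minimal sketch using the official snowflake-connector-python package; the account, credentials, and table names are placeholders.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",  # placeholder credentials; use a secrets store in practice
    user="REPORTING_USER",
    password="***",
    warehouse="REPORTING_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("""
        SELECT region, SUM(revenue) AS revenue
        FROM sales_daily
        WHERE sale_date >= DATEADD(day, -30, CURRENT_DATE())
        GROUP BY region
    """)
    for region, revenue in cur.fetchall():
        print(region, revenue)
finally:
    conn.close()
```

The resulting aggregates are exactly what a Tableau data source or extract would consume.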
Required Skills & Experience
- 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
- 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
- Advanced knowledge of SQL and performance tuning.
- Strong understanding of data modeling, ETL processes, and cloud data platforms.
- Experience working in fast-paced environments with tight delivery timelines.
- Excellent communication and stakeholder management skills.
- Ability to work independently and deliver high‑quality outputs aligned with business objectives.
Nice-to-Have Skills
- Knowledge of Python or any ETL tool.
- Experience with Snowflake integrations (Fivetran, DBT, Azure/AWS/GCP).
- Tableau Server/Prep experience.
Contract Details
- Full-Time Contract for several months.
- High possibility of conversion to permanent, based on performance.
- Must be available to work in the Australian Time Zone.
- Immediate joiners are highly encouraged.
We are looking for a detail-oriented QA / Software Tester (Fresher) to join our development team. You will be responsible for testing applications, identifying bugs, and ensuring software quality before release. This role is ideal for fresh graduates who want to start a career in software testing and quality assurance.
Key Responsibilities
1. Test web and mobile applications for functionality and usability.
2. Identify, document, and track software defects.
3. Execute manual test cases and report results.
4. Work with developers to fix issues and verify fixes.
5. Perform regression and functional testing.
6. Ensure the product meets quality standards before release.
7. Assist in preparing test documentation and reports.
Required Skills
1. Basic understanding of software testing concepts.
2. Knowledge of manual testing.
3. Basic understanding of SDLC and STLC.
4. Familiarity with web or mobile applications.
5. Basic knowledge of SQL or databases (optional; see the sketch after this list).
6. Good analytical and problem-solving skills.
7. Attention to detail.
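For the optional SQL skill above, a tester's typical use is simple data validation; here is a minimal sketch with hypothetical orders and users tables:

```sql
-- Data-validation check: orders that reference a user_id with no
-- matching row in users would indicate a defect worth logging.
SELECT o.order_id, o.user_id
FROM orders o
LEFT JOIN users u ON u.user_id = o.user_id
WHERE u.user_id IS NULL;
```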
Preferred (Optional)
1. Knowledge of any testing tool (Selenium, JIRA, etc.).
2. Understanding of automation testing basics.
3. Internship or academic project experience.
Location: Remote / Chennai
Experience: 0–2 Year (Freshers Welcome)
Education: B.E / B.Tech / BCA / MCA / Any Computer-related Degree
Apply here: https://connectsblue.com/jobs/738/qa-software-tester-fresher-at-bluepms-software-solutions-pvt-ltd
JOB DESCRIPTION – FULL STACK DEVELOPER
Location: Bangalore
Key Responsibilities:
Establish processes, SLAs, and escalation protocols for the support & maintenance of web applications
Manage stakeholders with effective communication and collaborate with cross-functional teams to address issues and maintain business continuity.
Design, implement, unit test, and build business applications using React, React Native, .NET Core, .NET 8, and Azure/AWS, leveraging an agile methodology and modern tooling such as Agentic AI and GitHub Copilot.
Facilitate scrum ceremonies including sprint planning, retrospectives, reviews, and daily stand-ups
Facilitate discussion, assessment of alternatives or different approaches, decision making, and conflict resolution within the development team
Develop and administer CI/CD pipelines in cloud-hosted Git repositories, and source control artifacts via Git in alignment with common branching strategies and workflows
Assist Software Designer/Implementers with the creation of detailed software design specifications
Participate in the system specification review process to ensure system requirements can be translated into valid software architecture
Integrate internal and external product designs into a cohesive user experience
Identify and keep track of metrics that indicate how software is performing
Handle technical and non-technical queries from the development team and stakeholders
Ensure that all development practices follow best practices and any relevant policies / procedures
Other Duties:
Maintain project reporting, including dashboards, status reports, roadmaps, burn-down, velocity, and resource utilization.
Own the technical solution and ensure all technical aspects are implemented as designed.
Partner with the customer success team and aid in triaging and troubleshooting customer support issues spanning a range of software components, infrastructure, integrations, and services, some of which target 24/7/365 availability
Flexibility to work in rotational shifts
Required Qualifications
Previous experience leading full stack technology projects with scrum teams and stakeholder management
B.Tech or M.Tech in computer science or a related field
3-5 years of experience.
Required Knowledge, Skills and Abilities:
Proficiency in .NET Core/.NET 8, React, React Native, Redux, Material UI, Bootstrap, TypeScript, SCSS, microservices, EF, LINQ, SQL, Azure/AWS, CI/CD, Agile, Agentic AI, and GitHub Copilot (see the SQL sketch after this section)
Azure DevOps, design systems, micro frontends, data science
Stakeholder management & excellent communication skills.
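As a small illustration of the SQL skill in this stack, here is a hedged sketch of the SQL Server paging query that EF Core's Skip()/Take() typically produces behind a REST list endpoint; the dbo.products table is hypothetical:

```sql
-- Page 3 of a product list with a page size of 10.
SELECT product_id, name, price
FROM dbo.products
ORDER BY product_id          -- OFFSET/FETCH requires an ORDER BY
OFFSET 20 ROWS
FETCH NEXT 10 ROWS ONLY;
```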
Must have skills
React - 3 years
React Native - 3 years
Redux - 1 year
Material UI - 1 year
TypeScript - 1 year
Bootstrap - 1 year
Microservices - 2 years
SQL - 1 year
Azure - 1 year
Nice to have skills
.NET Core - 3 years
.NET 8 - 3 years
AWS - 1 year
LINQ - 1 year
Strong Senior Backend Engineer profiles
Mandatory (Experience 1) – Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems
Mandatory (Experience 2) – Must have strong backend development experience using one or more frameworks: FastAPI or Django (Python), Spring (Java), or Express (Node.js).
Mandatory (Experience 3) – Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework
Mandatory (Experience 4) – Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization (see the sketch after this list)
Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices
Mandatory (Domain) – Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)
Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
Mandatory (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS) are highly preferred
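To make the data-modeling expectation concrete, here is a minimal billing-schema sketch in the fintech spirit of this role; the table, column, and index names are hypothetical:

```sql
-- Model invoices for fast per-account lookups; store money as integer
-- cents rather than floats to avoid rounding drift.
CREATE TABLE invoices (
    invoice_id   BIGINT PRIMARY KEY,
    account_id   BIGINT NOT NULL,
    status       VARCHAR(20) NOT NULL,   -- e.g., 'open', 'paid', 'void'
    amount_cents BIGINT NOT NULL,
    created_at   TIMESTAMP NOT NULL
);

-- Composite index matching the hot query path:
-- "open invoices for an account, newest first".
CREATE INDEX idx_invoices_account_status
    ON invoices (account_id, status, created_at DESC);
```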
We are looking for an experienced Data Engineer with strong expertise in AWS, DBT, Databricks, and Apache Airflow to join our growing data engineering team.
Immediate joiners preferred
Role Overview
The ideal candidate will design, develop, and maintain scalable data pipelines and data platforms to support analytics and business intelligence initiatives.
Key Responsibilities
- Design and build scalable data pipelines using AWS, Databricks, DBT, and Airflow.
- Develop and optimize ETL/ELT workflows for large-scale data processing.
- Implement data transformation models using DBT.
- Orchestrate workflows using Apache Airflow.
- Work with Databricks for big data processing and analytics.
- Ensure data quality, reliability, and performance optimization.
- Collaborate with data analysts, engineers, and business teams.
Required Skills
- Strong experience with AWS data services
- Hands-on experience with Databricks
- Experience in DBT (Data Build Tool; see the model sketch after this list)
- Workflow orchestration using Apache Airflow
- Strong SQL and Python skills
- Experience in data warehousing and ETL pipelines
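Since DBT models are plain SQL files templated with Jinja, a minimal sketch of the transformation work above might look like this (the stg_orders source and model name are hypothetical):

```sql
-- models/marts/fct_daily_orders.sql: aggregate a staging model into a
-- daily fact table; DBT resolves {{ ref(...) }} to the upstream relation.
{{ config(materialized='table') }}

SELECT
    order_date,
    COUNT(*)    AS order_count,
    SUM(amount) AS total_amount
FROM {{ ref('stg_orders') }}
GROUP BY order_date
```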
Description
We are currently hiring for the position of Data Scientist / Senior Machine Learning Engineer (6–7 years’ experience).
Please find the detailed Job Description attached for your reference. We are looking for candidates with strong experience in:
- Machine Learning model development
- Scalable data pipeline development (ETL/ELT; see the sketch after this list)
- Python and SQL
- Cloud platforms such as Azure/AWS/Databricks
- ML deployment environments (SageMaker, Azure ML, etc.)
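For the ETL/ELT item above, a typical incremental pattern on these platforms is a SQL MERGE upsert; here is a minimal sketch with hypothetical curated and staging tables:

```sql
-- Upsert the latest batch into a curated table (valid in the
-- Databricks, Snowflake, and Azure Synapse dialects).
MERGE INTO curated.customers AS t
USING staging.customers_batch AS s
    ON t.customer_id = s.customer_id
WHEN MATCHED THEN UPDATE SET
    email      = s.email,
    updated_at = s.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
    VALUES (s.customer_id, s.email, s.updated_at);
```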
Kindly note:
- Location: Pune (Work From Office)
- Immediate joiners preferred
While sharing profiles, please ensure the following details are included:
- Current CTC
- Expected CTC
- Notice Period
- Current Location
- Confirmation on Pune WFO comfort
Must have skills
Machine Learning - 6 years
Python - 6 years
ETL (Extract, Transform, Load) - 6 years
SQL - 6 years
Azure - 6 years

Business Intelligence & Digital Consulting company
Description
JOB DESCRIPTION – SENIOR ANALYST – DATA SCIENTIST
Key Responsibilities
Work with business stakeholders and cross-functional SMEs to deeply understand business context and key business questions
Apply advanced statistical/programming skills in Python and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
Solid understanding of time-series forecasting techniques
Good hands-on skills in both feature engineering and hyperparameter optimization
Able to write clean and tested code that can be maintained by other software engineers
Able to clearly summarize and communicate data analysis assumptions and results
Able to craft effective data pipelines to transform your analyses from offline to production systems
Self-motivated and a proactive problem solver who can work independently and in teams
Connects both externally and internally to understand industry trends, technology advances, and outstanding processes or solutions
Is collaborative and engages both strategically and tactically; able to influence without authority, handle complex issues, and implement positive change
Work on multiple pillars of AI including cognitive engineering, conversational bots, and data science
Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment
Provide guidance and leadership to more junior data scientists, managing processes and flow of work, vetting designs, and mentoring team members to realize their full potential
Lead discussions at peer review and use interpersonal skills to positively influence decision making
Provide subject matter expertise in machine learning techniques, tools, and concepts; make impactful contributions to internal discussions on emerging practices
Facilitate cross-geography sharing of new ideas, learnings, and best practices
What We Are Looking For
Required Qualifications
Master's degree in a quantitative field such as Data Science, Statistics, or Applied Mathematics, or a Bachelor's degree in engineering, computer science, or a related field
4–6 years of total work experience as a data scientist or in an analytical role, with at least 2–3 years of experience in time series forecasting
A combination of business focus, strong analytical and problem-solving skills, and programming knowledge to be able to quickly cycle hypotheses through the discovery phase of a project
Strong experience in Time Series Forecasting and Demand Planning (see the sketch after this list)
Advanced skills with statistical/programming software (e.g., R, Python) and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
Good hands-on skills in both feature engineering and hyperparameter optimization
Experience producing high-quality code, tests, and documentation
Understanding of descriptive and exploratory statistics, predictive modelling, evaluation metrics, decision trees, machine learning algorithms, optimization and forecasting techniques, and/or deep learning methodologies
Proficiency in statistical concepts and ML algorithms
Ability to lead, manage, build, and deliver customer business results through data scientists or a professional services team
Ability to share ideas in a compelling manner, and to clearly summarize and communicate data analysis assumptions and results
Self-motivated and a proactive problem solver who can work independently and in teams
Outstanding verbal and written communication skills with the ability to effectively advocate technical solutions to engineering and business teams
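As a small illustration of the time-series preparation this role centers on, here is a minimal sketch of a 7-day moving-average feature in SQL; the daily_sales table and its columns are hypothetical:

```sql
-- 7-day moving average of units sold per SKU, a common smoothing
-- feature fed into demand-forecasting models.
SELECT
    sku,
    sales_date,
    units_sold,
    AVG(units_sold) OVER (
        PARTITION BY sku
        ORDER BY sales_date
        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
    ) AS units_7d_avg
FROM daily_sales;
```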
Desired Qualifications
Experience working in one or multiple supply chain functions (e.g., procurement, planning, manufacturing, quality, logistics) is strongly preferred
Experience in applying AI/ML within a CPG or Healthcare business environment is strongly preferred
Experience in creating CI/CD pipelines for deployment using Jenkins
Experience implementing an MLOps framework, along with an understanding of data security
Experience implementing ML models
Exposure to visualization packages and the Azure tech stack.
Must have skills
Python - 2 years
Data Science - 4 years
SQL - 2 years
Machine Learning - 2 years
Nice to have skills
Data Analysis - 4 years
Time Series Forecasting - 2 years
Demand Planning - 2 years
Hadoop - 2 years
Statistical concepts - 2 years
Supply chain functions - 2 years