50+ SQL Jobs in India
We are Hiring ASP.NET MVC/Core Developers
Click Here to Apply: https://prishusoft.com/jobs/junior-aspnet-mvccore-professional
Experience Level
- 1–2 years of professional experience in web application development using ASP.NET MVC and ASP.NET Core.
Key Responsibilities
- Develop, maintain, and enhance web applications using ASP.NET MVC and ASP.NET Core.
- Write clean, scalable, and maintainable code following best practices.
- Design, develop, and integrate RESTful APIs with ASP.NET Web API.
- Collaborate with front-end developers and UI/UX designers to deliver exceptional user experiences.
- Work with MSSQL databases, including writing complex T-SQL queries, stored procedures, and optimizing performance.
- Participate in code reviews and contribute to technical discussions, architecture decisions, and performance improvements.
Technical Skills & Expertise
- Strong proficiency in ASP.NET MVC with at least 1 year of project experience.
- Good working knowledge of ASP.NET Core for modern application development.
- Solid skills in C#, JavaScript, and HTML.
- Experience with .NET Framework 4.5+.
- Hands-on experience with ASP.NET Web API development and consumption.
- Expertise in MSSQL (T-SQL, indexing, performance tuning).
Soft Skills
- Strong verbal and written communication skills.
- Collaborative team player with a willingness to share knowledge and contribute to team success.
Preferred / Bonus Skills
- Experience with Angular, React, or Vue.js for dynamic front-end development.
- Exposure to unit testing frameworks (e.g., Jasmine, Karma) for front-end applications.
- Understanding of DevOps practices and CI/CD pipelines.
- Familiarity with TypeScript for scalable JavaScript development.
Work Mode: Full-time On-site / Hybrid (Ahmedabad)
About the Role:
We are looking for a highly skilled Data Engineer with a strong foundation in Power BI, SQL, Python, and Big Data ecosystems to help design, build, and optimize end-to-end data solutions. The ideal candidate is passionate about solving complex data problems, transforming raw data into actionable insights, and contributing to data-driven decision-making across the organization.
Key Responsibilities:
Data Modelling & Visualization
- Build scalable and high-quality data models in Power BI using best practices.
- Define relationships, hierarchies, and measures to support effective storytelling.
- Ensure dashboards meet standards for accuracy, visualization principles, and timeliness.
Data Transformation & ETL
- Perform advanced data transformation using Power Query (M Language) beyond UI-based steps.
- Design and optimize ETL pipelines using SQL, Python, and Big Data tools (a minimal sketch follows this section).
- Manage and process large-scale datasets from various sources and formats.
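Below is a minimal sketch of what one such transformation step might look like in Python with pandas and SQLAlchemy; the connection string and the raw_orders/fct_orders tables are illustrative assumptions, not an actual schema.

```python
# Illustrative ETL step: pull raw rows, clean them with pandas, load a modeled table.
# The DSN and all table/column names are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@host:5432/analytics")  # assumed DSN

raw = pd.read_sql("SELECT order_id, order_ts, amount, region FROM raw_orders", engine)

# Basic cleaning: drop duplicate orders, coerce timestamps, discard bad amounts.
clean = (
    raw.drop_duplicates(subset="order_id")
       .assign(order_ts=lambda df: pd.to_datetime(df["order_ts"], errors="coerce"))
       .dropna(subset=["order_ts"])
       .query("amount > 0")
)

# Load into a reporting table that a Power BI model could read from.
clean.to_sql("fct_orders", engine, if_exists="replace", index=False)
```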
Business Problem Translation
- Collaborate with cross-functional teams to translate complex business problems into scalable, data-centric solutions.
- Decompose business questions into testable hypotheses and identify relevant datasets for validation.
Performance & Troubleshooting
- Continuously optimize performance of dashboards and pipelines for latency, reliability, and scalability.
- Troubleshoot and resolve issues related to data access, quality, security, and latency, adhering to SLAs.
Analytical Storytelling
- Apply analytical thinking to design insightful dashboards—prioritizing clarity and usability over aesthetics.
- Develop data narratives that drive business impact.
Solution Design
- Deliver wireframes, POCs, and final solutions aligned with business requirements and technical feasibility.
Required Skills & Experience:
- 3+ years of experience as a Data Engineer or in a similar data-focused role.
- Strong expertise in Power BI: data modeling, DAX, Power Query (M Language), and visualization best practices.
- Hands-on with Python and SQL for data analysis, automation, and backend data transformation.
- Deep understanding of data storytelling, visual best practices, and dashboard performance tuning.
- Familiarity with DAX Studio and Tabular Editor.
- Experience in handling high-volume data in production environments.
Preferred (Good to Have):
- Exposure to Big Data technologies such as:
- PySpark
- Hadoop
- Hive / HDFS
- Spark Streaming (optional but preferred)
Why Join Us?
- Work with a team that's passionate about data innovation.
- Exposure to modern data stack and tools.
- Flat structure and collaborative culture.
- Opportunity to influence data strategy and architecture decisions.
Database Programmer (SQL & Python)
Experience: 4 – 5 Years
Location: Remote
Employment Type: Full-Time
About the Opportunity
We are a mission-driven HealthTech organization dedicated to bridging the gap in global healthcare equity. By harnessing the power of AI-driven clinical insights and real-world evidence, we help healthcare providers and pharmaceutical companies deliver precision medicine to underrepresented populations.
We are looking for a skilled Database Programmer with a strong blend of SQL expertise and Python automation skills to help us manage, transform, and unlock the value of complex clinical data. This is a fully remote role where your work will directly contribute to improving patient outcomes and making life-saving treatments more affordable and accessible.
Key Responsibilities
- Data Architecture & Management: Design, develop, and maintain robust relational databases to store large-scale, longitudinal patient records and clinical data.
- Complex Querying: Write and optimize sophisticated SQL queries, stored procedures, and triggers to handle deep clinical datasets, ensuring high performance and data integrity.
- Python Automation: Develop Python scripts and ETL pipelines to automate data ingestion, cleaning, and transformation from diverse sources (EHRs, lab reports, and unstructured clinical notes); a minimal sketch follows this list.
- AI Support: Collaborate with Data Scientists to prepare datasets for AI-based analytics, Knowledge Graphs, and predictive modeling.
- Data Standardization: Map and transform clinical data into standardized models (such as HL7, FHIR, or proprietary formats) to ensure interoperability across healthcare ecosystems.
- Security & Compliance: Implement and maintain rigorous data security protocols, ensuring all database activities comply with global healthcare regulations (e.g., HIPAA, GDPR).
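As a rough illustration of the Python automation described above, here is a hedged sketch of a lab-report ingestion step; the CSV layout, connection settings, and lab_results schema (including its unique constraint) are assumptions for the example only.

```python
# Hypothetical ingestion: validate lab-report rows with pandas, upsert to PostgreSQL.
import pandas as pd
import psycopg2

df = pd.read_csv("lab_results.csv", parse_dates=["collected_at"])
df = df.dropna(subset=["patient_id", "loinc_code", "value"])  # reject incomplete rows

conn = psycopg2.connect("dbname=clinical user=etl")  # placeholder connection settings
with conn, conn.cursor() as cur:
    for row in df.itertuples(index=False):
        # ON CONFLICT keeps re-delivered feeds idempotent; it assumes a unique
        # constraint on (patient_id, loinc_code, collected_at).
        cur.execute(
            """
            INSERT INTO lab_results (patient_id, loinc_code, value, collected_at)
            VALUES (%s, %s, %s, %s)
            ON CONFLICT (patient_id, loinc_code, collected_at)
            DO UPDATE SET value = EXCLUDED.value
            """,
            (row.patient_id, row.loinc_code, row.value, row.collected_at),
        )
```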
Required Skills & Qualifications
- Education: Bachelor’s degree in Computer Science, Information Technology, Statistics, or a related field.
- SQL Mastery: 4+ years of experience with relational databases (PostgreSQL, MySQL, or MS SQL Server). You should be comfortable with performance tuning and complex data modeling.
- Python Proficiency: Strong programming skills in Python, particularly for data manipulation (Pandas, NumPy) and database interaction (SQLAlchemy, Psycopg2).
- Healthcare Experience: Familiarity with healthcare data standards (HL7, FHIR) or experience working with Electronic Health Records (EHR) is highly preferred.
- ETL Expertise: Proven track record of building and managing end-to-end data pipelines for structured and unstructured data.
- Analytical Mindset: Ability to troubleshoot complex data issues and translate business requirements into efficient technical solutions.
To process your details, please fill out the Google Form.
About Company (GeniWay)
GeniWay Technologies is pioneering India’s first AI-native platform for personalized learning and career guidance, transforming the way students learn, grow, and determine their future path. Addressing challenges in the K-12 system such as one-size-fits-all teaching and limited career awareness, GeniWay leverages cutting-edge AI to create a tailored educational experience for every student. The core technology includes an AI-powered learning engine, a 24x7 multilingual virtual tutor, and Clario, a psychometrics-backed career guidance system. Aligned with NEP 2020 policies, GeniWay is on a mission to make high-quality learning accessible to every student in India, regardless of their background or region.
What you’ll do
- Build the career assessment backbone: attempt lifecycle (create/resume/submit), timing metadata, partial attempts, idempotent APIs (see the sketch after this list).
- Implement deterministic scoring pipelines with versioning and audit trails (what changed, when, why).
- Own Postgres data modeling: schemas, constraints, migrations, indexes, query performance.
- Create safe, structured GenAI context payloads (controlled vocabulary, safety flags, eval datasets) to power parent/student narratives.
- Raise reliability: tests for edge cases, monitoring, reprocessing/recalculation jobs, safe logging (no PII leakage).
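To make "idempotent APIs" concrete, here is a minimal sketch of a submit endpoint in FastAPI (one of the frameworks named below); the route, models, and in-memory store are hypothetical, and a real implementation would persist to Postgres.

```python
# Sketch of an idempotent "submit attempt" endpoint: replaying the same
# Idempotency-Key header returns the stored result instead of re-scoring.
from fastapi import FastAPI, Header
from pydantic import BaseModel

app = FastAPI()
_submissions: dict[str, dict] = {}  # idempotency_key -> stored result (demo only)

class SubmitPayload(BaseModel):
    attempt_id: str
    answers: dict[str, str]

@app.post("/attempts/submit")
def submit_attempt(payload: SubmitPayload, idempotency_key: str = Header(...)):
    if idempotency_key in _submissions:
        return _submissions[idempotency_key]  # safe replay, no double scoring
    result = {"attempt_id": payload.attempt_id, "status": "scored",
              "score_version": "v1"}  # deterministic scoring would run here
    _submissions[idempotency_key] = result
    return result
```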
Must-have skills
- Backend development in Python (FastAPI/Django/Flask) or Node (NestJS) with production API experience.
- Strong SQL + PostgreSQL fundamentals (transactions, indexes, schema design, migrations).
- Testing discipline: unit + integration tests for logic-heavy code; systematic debugging approach.
- Comfort using AI coding copilots to speed up scaffolding/tests/refactors — while validating correctness.
- Ownership mindset: cares about correctness, data integrity, and reliability.
Good to have
- Experience with rule engines, scoring systems, or audit-heavy domains (fintech, healthcare, compliance).
- Event schemas/telemetry pipelines and observability basics.
- Exposure to RAG/embeddings/vector DBs or prompt evaluation harnesses.
Location: Pune (on-site for first 3 months; hybrid/WFH flexibility thereafter)
Employment Type: Full-time
Experience: 2–3 years (correctness-first; strong learning velocity)
Compensation: Competitive (₹8–10 LPA fixed cash) + ESOP (equity ownership, founding-early employee level)
Joining Timeline: 2–3 weeks / Immediate
Why join (founding team)
- You’ll build core IP: scoring integrity and data foundations that everything else depends on.
- Rare skill-building: reliable systems + GenAI-safe context/evals (not just API calls).
- Meaningful ESOP upside at an early stage.
- High trust, high ownership, fast learning.
- High-impact mission: reduce confusion and conflict in student career decisions; help families make better choices, transform student lives by making great learning personal.
Hiring process (fast)
1. 20-min intro call (fit + expectations).
2. 45–60 min SQL, data modeling, and API deep dive.
3. Practical exercise (2–3 hours max) implementing a small scoring service with tests.
4. Final conversation + offer.
How to apply
Reply with your resume/LinkedIn profile plus one example of a system/feature where you owned data modeling and backend integration (a short paragraph is fine).
Required Skills and Qualifications:
- 2–3 years of professional experience in Python development.
- Strong understanding of object-oriented programming.
- Experience with frameworks such as Django, Flask, or FastAPI.
- Knowledge of REST APIs, JSON, and web integration.
- Familiarity with SQL and database management systems.
- Experience with Git or other version control tools.
- Good problem-solving and debugging skills.
- Strong communication and teamwork abilities.
Position: Insights Manager
Location: Gurugram (Onsite)
Experience Required: 4+ Years
Working Days: 5 Days (Mon to Fri)
About the Role
We are seeking a hands-on Insights Manager to build the analytical backbone that powers decision-making. This role sits at the centre of the data ecosystem, partnering with Category, Commercial, Marketing, Sourcing, Fulfilment, Product, and Growth teams to translate data into insight, automation, and action.
You will design self-running reporting systems, maintain data quality in collaboration with data engineering, and build analytical models that directly improve pricing, customer experience, and operational efficiency. The role requires strong e-commerce domain understanding and the ability to move from data to decisions with speed and precision.
Key Responsibilities
1. Data Platform & Governance
- Partner with data engineering to ensure clean and reliable data across Shopify, GA4, Ad platforms, CRM, and ERP systems
- Define and maintain KPI frameworks (ATC, CVR, AOV, Repeat Rate, Refunds, LTV, CAC, etc.); a sample KPI query follows this section
- Oversee pipeline monitoring, QA checks, and metric documentation
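As an illustration of the KPI work above, a daily CVR/AOV rollup might look like the following; the sessions/orders tables and the connection string are assumptions for the example.

```python
# Illustrative daily KPI query: conversion rate (CVR) and average order value (AOV).
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@host/ecom")  # placeholder DSN

kpis = pd.read_sql(
    """
    SELECT
        s.session_date,
        COUNT(DISTINCT o.order_id)::float / COUNT(DISTINCT s.session_id) AS cvr,
        SUM(o.order_value) / NULLIF(COUNT(DISTINCT o.order_id), 0)       AS aov
    FROM sessions s
    LEFT JOIN orders o ON o.session_id = s.session_id
    GROUP BY s.session_date
    ORDER BY s.session_date
    """,
    engine,
)
print(kpis.tail())
```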
2. Reporting, Dashboards & Automation
- Build automated datamarts and dashboards for business teams
- Integrate APIs and automate data flows across multiple sources
- Create actionable visual stories and executive summaries
- Use AI and automation tools to improve insight delivery speed
3. Decision Models & Applied Analytics
- Build models for pricing, discounting, customer segmentation, inventory planning, delivery SLAs, and recommendations
- Translate analytics outputs into actionable playbooks for internal teams
4. Insights & Actionability
- Diagnose performance shifts and identify root causes
- Deliver weekly and monthly insight-driven recommendations
- Improve decision-making speed and quality across functions
Qualifications & Experience
- 4–7 years of experience in analytics or product insights (e-commerce / D2C / retail)
- Strong SQL and Python skills
- Hands-on experience with GA4, GTM, and dashboarding tools (Looker / Tableau / Power BI)
- Familiarity with CRM platforms like Klaviyo, WebEngage, or MoEngage
- Strong understanding of e-commerce KPIs and customer metrics
- Ability to communicate insights clearly to non-technical stakeholders
What We Offer
- Greenfield opportunity to build the data & insights platform from scratch
- High business impact across multiple functions
- End-to-end exposure from analytics to automation and applied modelling
- Fast-paced, transparent, and collaborative work culture
Company Description
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Role Description
This is a full-time hybrid role for a Java Software Engineer, based in Pune. The Java Software Engineer will be responsible for designing, developing, and maintaining software applications. Key responsibilities include working with microservices architecture, implementing and managing the Spring Framework, and programming in Java. Collaboration with cross-functional teams to define, design, and ship new features is also a key aspect of this role.
Responsibilities:
● Develop and Maintain: Write clean, efficient, and maintainable code for Java-based applications
● Collaborate: Work with cross-functional teams to gather requirements and translate them into technical solutions
● Code Reviews: Participate in code reviews to maintain high-quality standards
● Troubleshooting: Debug and resolve application issues in a timely manner
● Testing: Develop and execute unit and integration tests to ensure software reliability
● Optimize: Identify and address performance bottlenecks to enhance application performance
Qualifications & Skills:
● Strong knowledge of Java, Spring Framework (Spring Boot, Spring MVC), and Hibernate/JPA
● Familiarity with RESTful APIs and web services
● Proficiency in working with relational databases like MySQL or PostgreSQL
● Practical experience with AWS cloud services and building scalable, microservices-based architectures
● Experience with build tools like Maven or Gradle
● Understanding of version control systems, especially Git
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to work effectively in a collaborative team environment
Why Join Us?
● Opportunity to work on cutting-edge technology products
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you
Job Summary
We are looking for an experienced Python DBA with strong expertise in Python scripting and SQL/NoSQL databases. The candidate will be responsible for database administration, automation, performance optimization, and ensuring availability and reliability of database systems.
Key Responsibilities
- Administer and maintain SQL and NoSQL databases
- Develop Python scripts for database automation and monitoring (a sample script follows this list)
- Perform database performance tuning and query optimization
- Manage backups, recovery, replication, and high availability
- Ensure data security, integrity, and compliance
- Troubleshoot and resolve database-related issues
- Collaborate with development and infrastructure teams
- Monitor database health and performance
- Maintain documentation and best practices
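A sample of the monitoring automation mentioned above, sketched for PostgreSQL; the threshold, connection string, and alerting (here just a print) are assumptions.

```python
# Flag long-running PostgreSQL queries via pg_stat_activity.
import psycopg2

LONG_RUNNING_SECONDS = 300  # assumed threshold

conn = psycopg2.connect("dbname=prod user=dba")  # placeholder connection
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT pid, now() - query_start AS runtime, state, left(query, 80)
        FROM pg_stat_activity
        WHERE state <> 'idle'
          AND now() - query_start > make_interval(secs => %s)
        """,
        (LONG_RUNNING_SECONDS,),
    )
    for pid, runtime, state, query_head in cur.fetchall():
        print(f"pid={pid} runtime={runtime} state={state} query={query_head!r}")
```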
Required Skills
- 10+ years of experience in Database Administration
- Strong proficiency in Python
- Experience with SQL databases (PostgreSQL, MySQL, Oracle, SQL Server)
- Experience with NoSQL databases (MongoDB, Cassandra, etc.)
- Strong understanding of indexing, schema design, and performance tuning
- Good analytical and problem-solving skills
Key Responsibilities
- Develop and maintain applications using Java 8/11/17, Spring Boot, and REST APIs.
- Design and implement microservices and backend components.
- Work with SQL/NoSQL databases, API integrations, and performance optimization.
- Collaborate with cross-functional teams and participate in code reviews.
- Deploy applications using CI/CD, Docker, Kubernetes, and cloud platforms (AWS/Azure/GCP).
Skills Required
- Strong in Core Java, OOPS, multithreading, collections.
- Hands-on with Spring Boot, Hibernate/JPA, Microservices.
- Experience with REST APIs, Git, and CI/CD pipelines.
- Knowledge of Docker/Kubernetes and cloud basics.
- Good understanding of database queries and performance tuning.
Nice to Have:
- Experience with messaging systems (Kafka/RabbitMQ).
- Basic frontend understanding (React/Angular).
Must have strong SQL skills (queries, optimization, stored procedures, triggers) and hands-on experience automating processes through SQL.
Looking for candidates with 2+ years of experience who have worked on large datasets (1 crore records or more).
Comfortable handling challenges and breaking down complex data problems.
Must have Advanced Excel skills
Should have 3+ years of relevant experience
Should have Reporting + dashboard creation experience
Should have Database development & maintenance experience
Must have Strong communication for client interactions
Should have Ability to work independently
Willingness to work from client location
Forbes Advisor is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.
We do this by combining data-driven content, rigorous product comparisons, and user-first design, all built on top of a modern, scalable platform. Our teams operate globally and bring deep expertise across journalism, product, performance marketing, and analytics.
The Role
We are hiring a Senior Data Engineer to help design and scale the infrastructure behind our analytics, performance marketing, and experimentation platforms.
This role is ideal for someone who thrives on solving complex data problems, enjoys owning systems end-to-end, and wants to work closely with stakeholders across product, marketing, and analytics.
You’ll build reliable, scalable pipelines and models that support decision-making and automation at every level of the business.
What you’ll do
● Build, maintain, and optimize data pipelines using Spark, Kafka, Airflow, and Python (a minimal DAG sketch follows this list)
● Orchestrate workflows across GCP (GCS, BigQuery, Composer) and AWS-based systems
● Model data using dbt, with an emphasis on quality, reuse, and documentation
● Ingest, clean, and normalize data from third-party sources such as Google Ads, Meta, Taboola, Outbrain, and Google Analytics
● Write high-performance SQL and support analytics and reporting teams in self-serve data access
● Monitor and improve data quality, lineage, and governance across critical workflows
● Collaborate with engineers, analysts, and business partners across the US, UK, and India
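For a sense of shape, a minimal Airflow 2.x DAG of the kind described might look like this; the task commands, schedule, and dbt selector are illustrative assumptions.

```python
# Minimal DAG: ingest a marketing source, then run the downstream dbt models.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="marketing_ingest_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_google_ads",
        bash_command="python ingest_google_ads.py",  # hypothetical ingest script
    )
    model = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --select marketing",  # assumes a 'marketing' selector
    )
    ingest >> model  # run ingestion before modeling
```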
What You Bring
● 4+ years of data engineering experience, ideally in a global, distributed team
● Strong Python development skills and experience
● Expert in SQL for data transformation, analysis, and debugging
● Deep knowledge of Airflow and orchestration best practices
● Proficient in dbt (data modeling, testing, release workflows)
● Experience with GCP (BigQuery, GCS, Composer); AWS familiarity is a plus
● Strong grasp of data governance, observability, and privacy standards
● Excellent written and verbal communication skills
Nice to have
● Experience working with digital marketing and performance data, including:
Google Ads, Meta (Facebook), TikTok, Taboola, Outbrain, Google Analytics (GA4)
● Familiarity with BI tools like Tableau or Looker
● Exposure to attribution models, media mix modeling, or A/B testing infrastructure
● Collaboration experience with data scientists or machine learning workflows
Why Join Us
● Monthly long weekends — every third Friday off
● Wellness reimbursement to support your health and balance
● Paid parental leave
● Remote-first with flexibility and trust
● Work with a world-class data and marketing team inside a globally recognized brand
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
Role Responsibilities:
Your high-level responsibilities will include, but are not limited to, the following:
- Design, develop, and implement modern data pipelines, data models, and ETL/ELT processes.
- Architect and optimize data lake and warehouse solutions using Microsoft Fabric, Databricks, or Snowflake.
- Enable business analytics and self-service reporting through Power BI and other visualization tools.
- Collaborate with data scientists, analysts, and business users to deliver reliable and high-performance data solutions.
- Implement and enforce best practices for data governance, data quality, and security.
- Mentor and guide junior data engineers; establish coding and design standards.
- Evaluate emerging technologies and tools to continuously improve the data ecosystem.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field.
- 4-8 years of experience in data engineering or data platform development
- Strong hands-on experience in SQL, Snowflake, Python, and Airflow
- Solid understanding of data modeling, data governance, security, and CI/CD practices.
Preferred Qualifications:
- Familiarity with data modeling techniques and practices for Power BI.
- Knowledge of Azure Databricks or other data processing frameworks.
- Knowledge of Microsoft Fabric or other Cloud Platforms.
What we need:
- B.Tech in Computer Science or equivalent.
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
Job Details
- Job Title: Java Full Stack Developer
- Industry: Global digital transformation solutions provider
- Domain: Information technology (IT)
- Experience Required: 5-7 years
- Working Mode: 3 days in office, Hybrid model.
- Job Location: Bangalore
- CTC Range: Best in Industry
Job Description:
SDET (Software Development Engineer in Test)
Job Responsibilities:
• Test Automation:
  - Develop, maintain, and execute automated test scripts using test automation frameworks.
  - Design and implement testing tools and frameworks to support automated testing.
• Software Development:
  - Participate in the design and development of software components to improve testability.
  - Write code actively, contribute to the development of tools, and work closely with developers to debug complex issues.
• Quality Assurance:
  - Collaborate with the development team to understand software features and technical implementations.
  - Develop quality assurance standards and ensure adherence to best testing practices.
• Integration Testing:
  - Conduct integration and functional testing to ensure that components work as expected individually and when combined.
• Performance and Scalability Testing:
  - Perform performance and scalability testing to identify bottlenecks and optimize application performance.
• Test Planning and Execution:
  - Create detailed, comprehensive, and well-structured test plans and test cases.
  - Execute manual and/or automated tests and analyze results to ensure product quality.
• Bug Tracking and Resolution:
  - Identify, document, and track software defects using bug tracking tools.
  - Verify fixes and work closely with developers to resolve issues.
• Continuous Improvement:
  - Stay updated on emerging tools and technologies relevant to the SDET role.
  - Constantly look for ways to improve testing processes and frameworks.
Skills and Qualifications:
• Strong programming skills, particularly in languages such as COBOL, JCL, Java, C#, Python, or JavaScript.
• Strong experience in Mainframe environments.
• Experience with test automation tools and frameworks like Selenium, JUnit, TestNG, or Cucumber.
• Excellent problem-solving skills and attention to detail.
• Familiarity with CI/CD tools and practices, such as Jenkins, Git, Docker, etc.
• Good understanding of web technologies and databases is often beneficial.
• Strong communication skills for interfacing with cross-functional teams.
Qualifications
• 5+ years of experience as a software developer, QA Engineer, or SDET.
• 5+ years of hands-on experience with Java or Selenium.
• 5+ years of hands-on experience with Mainframe environments.
• 4+ years designing, implementing, and running test cases.
• 4+ years working with test processes, methodologies, tools, and technology.
• 4+ years performing functional and UI testing and quality reporting.
• 3+ years of technical QA management experience leading onshore and offshore resources.
• Passion for driving best practices in the testing space.
• Thorough understanding of functional, stress, performance, various forms of regression testing, and mobile testing.
• Knowledge of software engineering practices and agile approaches.
• Experience building or improving test automation frameworks.
• Proficiency in CI/CD integration and pipeline development in Jenkins, Spinnaker, or other similar tools.
• Proficiency in UI automation (Serenity/Selenium, Robot, Watir).
• Experience in Gherkin (BDD/TDD).
• Ability to quickly tackle and diagnose issues within the quality assurance environment and communicate that knowledge to a varied audience of technical and non-technical partners.
• Strong desire for establishing and improving product quality.
• Willingness to take challenges head on while being part of a team.
• Ability to work under tight deadlines and within a team environment.
• Experience in test automation using UFT and Selenium.
• UFT/Selenium experience in building object repositories, standard and custom checkpoints, parameterization, reusable functions, recovery scenarios, descriptive programming, and API testing.
• Knowledge of VBScript, C#, Java, HTML, and SQL.
• Experience using Git or other version control systems.
• Experience developing, supporting, and/or testing web applications.
• Understanding of the need for testing of security requirements.
• Ability to understand API JSON and XML formats, with experience using API testing tools like Postman, Swagger, or SoapUI.
• Excellent communication, collaboration, reporting, analytical, and problem-solving skills.
• Solid understanding of the release cycle and QA/testing methodologies.
• ISTQB certification is a plus.
Skills: Python, Mainframe, C#
Notice period - 0 to 15 days only
Job Title
Web Analyst (Google Analytics Specialist)
Company
Suntek AI – AI-driven e-commerce solutions provider helping global retailers streamline operations and accelerate growth.
Location
India — Remote-only
Role Overview
We’re looking for a data-savvy Web Analyst with 2–4 years of hands-on Google Analytics (GA4) experience, preferably gained while working on Shopify or other e-commerce storefronts. You’ll own the end-to-end analytics workflow, from tagging and data quality to reporting, experimentation, and presenting insights that move revenue and retention metrics for Suntek AI and its clients.
Key Responsibilities
- Implement & maintain tracking
  - Configure GA4 properties, data streams, and enhanced-ecommerce events for Shopify and custom stores
  - Deploy and debug tags via Google Tag Manager (GTM) and robust data layers
- Data quality & governance
  - Audit existing implementations, eliminate double-counting, and enforce naming conventions
  - Set up filters, channel groupings, and cross-domain tracking for a single view of the customer
- Analysis & insight generation
  - Build dashboards for traffic, conversion funnels, cohort retention, and LTV
  - Slice data by channel, device, geography, and campaign to uncover growth opportunities
- Experimentation support
  - Partner with Growth and Product teams to design A/B tests; measure lift and statistical significance
- Reporting & storytelling
  - Translate complex datasets into clear, actionable recommendations for stakeholders and clients
  - Present findings in weekly business reviews and ad-hoc deep dives
Required Qualifications
- Experience: 2–4 years working with Google Analytics (UA & GA4) in Shopify or broader e-commerce environments
- Technical: Strong command of GA4 events, GTM, data-layer specs, attribution modeling, and Looker Studio
- Analytical: Proficient in Excel/Sheets; comfortable with SQL basics for ad-hoc queries
- Communication: Ability to distill insights into plain language and persuasive visuals
- Education: Bachelor’s in Analytics, Statistics, Economics, Engineering, or related field (or equivalent experience)
Nice-to-Haves
- Hands-on experience with Shopify Admin/API events, checkout funnels, and app ecosystem analytics
- Familiarity with CRO tools and heat-mapping platforms (Hotjar, Clarity)
- Basic scripting (Python/JavaScript) for data automation
What We Offer
- Growth runway — direct impact on marquee e-commerce brands’ revenue
- Learning budget — GA4 certifications, analytics conferences, and courses covered
- Flexible schedule — remote-first culture with quarterly in-person meetups
About the Role
We are looking for a motivated Full Stack Developer with 2–5 years of hands-on experience in building scalable web applications. You will work closely with senior engineers and product teams to develop new features, improve system performance, and ensure high-quality code delivery.
Responsibilities
- Develop and maintain full-stack applications.
- Implement clean, maintainable, and efficient code.
- Collaborate with designers, product managers, and backend engineers.
- Participate in code reviews and debugging.
- Work with REST APIs/GraphQL.
- Contribute to CI/CD pipelines.
- Ability to work independently as well as within a collaborative team environment.
Required Technical Skills
- Strong knowledge of JavaScript/TypeScript.
- Experience with React.js, Next.js.
- Backend experience with Node.js, Express, NestJS.
- Understanding of SQL/NoSQL databases.
- Experience with Git, APIs, and debugging tools.
- Cloud familiarity (AWS/GCP/Azure).
AI and System Mindset
Experience working with AI-powered systems is a strong plus. Candidates should be comfortable integrating AI agents, third-party APIs, and automation workflows into applications, and should demonstrate curiosity and adaptability toward emerging AI technologies.
Soft Skills
- Strong problem-solving ability.
- Good communication and teamwork.
- Fast learner and adaptable.
Education
Bachelor's degree in Computer Science / Engineering or equivalent.
Job Overview
As a Profile Data Setup Analyst, you will play a key role in configuring, analysing, and managing product data for our customers. You will work closely with internal teams and clients to ensure accurate, optimized, and timely data setup in Windowmaker software. This role is perfect for someone who enjoys problem-solving, working with data, and continuously learning.
Key Responsibilities
• Understand customer product configurations and translate them into structured data using Windowmaker Software.
• Set up and modify profile data including reinforcements, glazing, and accessories, aligned with customer-specific rules and industry practices.
• Analyse data, identify inconsistencies, and ensure high-quality output that supports accurate quoting and manufacturing.
• Collaborate with cross-functional teams (Sales, Software Development, Support) to deliver complete and tested data setups on time.
• Provide training, guidance, and documentation to internal teams and customers as needed.
• Continuously look for process improvements and contribute to knowledge-sharing across the team.
• Support escalated customer cases related to data accuracy or configuration issues.
• Ensure timely delivery of all assigned tasks while maintaining high standards of quality and attention to detail.
Required Qualifications
• 3–5 years of experience in a data-centric role.
• Bachelor’s degree in Engineering (e.g., Computer Science) or a related technical field.
• Experience with product data structures and product lifecycle.
• Strong analytical skills with a keen eye for data accuracy and patterns.
• Ability to break down complex product information into structured data elements.
• Eagerness to learn industry domain knowledge and software capabilities.
• Hands-on experience with Excel, SQL, or other data tools.
• Ability to manage priorities and meet deadlines in a fast-paced environment.
• Excellent written and verbal communication skills.
• A collaborative, growth-oriented mindset.
Nice to Have
• Prior exposure to ERP/CPQ/Manufacturing systems is a plus.
• Knowledge of the window and door (fenestration) industry is an added advantage.
Why Join Us
• Be part of a global product company with a solid industry reputation.
• Work on impactful projects that directly influence customer success.
• Collaborate with a talented, friendly, and supportive team.
• Learn, grow, and make a difference in the digital transformation of the fenestration industry.
About the Role
We're seeking a Python Backend Developer to join our insurtech analytics team. This role focuses on developing backend APIs, automating insurance reporting processes, and supporting data analysis tools. You'll work with insurance data, build REST APIs, and help streamline operational workflows through automation.
Key Responsibilities
- Automate insurance reporting processes including bordereaux, reconciliations, and data extraction from various file formats (a short sketch follows this list)
- Support and maintain interactive dashboards and reporting tools for business stakeholders
- Develop Python scripts and applications for data processing, validation, and transformation
- Develop and maintain backend APIs using FastAPI or Flask
- Perform data analysis and generate insights from insurance datasets
- Automate recurring analytical and reporting tasks
- Work with SQL databases to query, analyze, and extract data
- Collaborate with business users to understand requirements and deliver solutions
- Document code, processes, and create user guides for dashboards and tools
- Support data quality initiatives and implement validation checks
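By way of illustration, a bordereaux reconciliation step could be sketched as below; the file names, the policy_ref join key, and the premium columns are assumptions rather than a real schema.

```python
# Reconcile a broker bordereau against internally booked premiums with pandas.
import pandas as pd

bordereau = pd.read_excel("bordereau_jan.xlsx")   # broker-submitted rows (assumed)
booked = pd.read_csv("booked_premiums_jan.csv")   # internal extract (assumed)

merged = bordereau.merge(
    booked, on="policy_ref", how="outer",
    suffixes=("_bdx", "_booked"), indicator=True,
)

# Rows on one side only, or with a premium mismatch, need investigation.
missing = merged[merged["_merge"] != "both"]
mismatched = merged[
    (merged["_merge"] == "both")
    & ((merged["premium_bdx"] - merged["premium_booked"]).abs() > 0.01)
]
mismatched.to_csv("premium_mismatches.csv", index=False)
print(f"{len(missing)} unmatched rows, {len(mismatched)} premium mismatches")
```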
Requirements
Essential
- 2+ years of Python development experience
- Strong knowledge of Python libraries: Pandas, NumPy for data manipulation
- Experience building web applications or dashboards with Python frameworks
- Knowledge of FastAPI or Flask for building backend APIs and applications
- Proficiency in SQL and working with relational databases
- Experience with data visualization libraries (Matplotlib, Plotly, Seaborn)
- Ability to work with Excel, CSV, and other data file formats
- Strong problem-solving and analytical thinking skills
- Good communication skills to work with non-technical stakeholders
Desirable
- Experience in insurance or financial services industry
- Familiarity with insurance reporting processes (bordereaux, reconciliations, claims data)
- Experience with Azure cloud services (Azure Functions, Blob Storage, SQL Database)
- Experience with version control systems (Git, GitHub, Azure DevOps)
- Experience with API development and RESTful services
Tech Stack
Python 3.x, FastAPI, Flask, Pandas, NumPy, Plotly, Matplotlib, SQL Server, MS Azure, Git, Azure DevOps, REST APIs, Excel/CSV processing libraries
Senior Full Stack Developer – Analytics Dashboard
Job Summary
We are seeking an experienced Full Stack Developer to design and build a scalable, data-driven analytics dashboard platform. The role involves developing a modern web application that integrates with multiple external data sources, processes large datasets, and presents actionable insights through interactive dashboards.
The ideal candidate should be comfortable working across the full stack and have strong experience in building analytical or reporting systems.
Key Responsibilities
- Design and develop a full-stack web application using modern technologies.
- Build scalable backend APIs to handle data ingestion, processing, and storage.
- Develop interactive dashboards and data visualisations for business reporting.
- Implement secure user authentication and role-based access.
- Integrate with third-party APIs using OAuth and REST protocols.
- Design efficient database schemas for analytical workloads.
- Implement background jobs and scheduled tasks for data syncing.
- Ensure performance, scalability, and reliability of the system.
- Write clean, maintainable, and well-documented code.
- Collaborate with product and design teams to translate requirements into features.
Required Technical Skills
Frontend
- Strong experience with React.js
- Experience with Next.js
- Knowledge of modern UI frameworks (Tailwind, MUI, Ant Design, etc.)
- Experience building dashboards using chart libraries (Recharts, Chart.js, D3, etc.)
Backend
- Strong experience with Node.js (Express or NestJS)
- REST and/or GraphQL API development
- Background job systems (cron, queues, schedulers)
- Experience with OAuth-based integrations
Database
- Strong experience with PostgreSQL
- Data modelling and performance optimisation
- Writing complex analytical SQL queries
DevOps / Infrastructure
- Cloud platforms (AWS)
- Docker and basic containerisation
- CI/CD pipelines
- Git-based workflows
Experience & Qualifications
- 5+ years of professional full stack development experience.
- Proven experience building production-grade web applications.
- Prior experience with analytics, dashboards, or data platforms is highly preferred.
- Strong problem-solving and system design skills.
- Comfortable working in a fast-paced, product-oriented environment.
Nice to Have (Bonus Skills)
- Experience with data pipelines or ETL systems.
- Knowledge of Redis or caching systems.
- Experience with SaaS products or B2B platforms.
- Basic understanding of data science or machine learning concepts.
- Familiarity with time-series data and reporting systems.
- Familiarity with the Meta Ads / Google Ads APIs
Soft Skills
- Strong communication skills.
- Ability to work independently and take ownership.
- Attention to detail and focus on code quality.
- Comfortable working with ambiguous requirements.
Ideal Candidate Profile (Summary)
A senior-level full stack engineer who has built complex web applications, understands data-heavy systems, and enjoys creating analytical products with a strong focus on performance, scalability, and user experience.
Employment Type: Full-time, Permanent
Location: Near Bommasandra Metro Station, Bangalore (Work from Office – 5 days/week)
Notice Period: 15 days or less preferred
About the Company:
SimStar Asia Ltd is a joint venture of the SimGems and StarGems Group — a Hong Kong–based multinational organization engaged in the global business of conflict-free, high-value diamonds.
SimStar maintains the highest standards of integrity. Any candidate found engaging in unfair practices at any stage of the interview process will be disqualified and blacklisted.
Experience Required
- 4+ years of relevant professional experience.
Key Responsibilities
- Hands-on backend development using Python (mandatory).
- Write optimized and complex SQL queries; perform query tuning and performance optimization (see the sketch after this list).
- Work extensively with the Odoo framework, including development and deployment.
- Manage deployments using Docker and/or Kubernetes.
- Develop frontend components using OWL.js or any modern JavaScript framework.
- Design scalable systems with a strong foundation in Data Structures, Algorithms, and System Design.
- Handle API integrations and data exchange between systems.
- Participate in technical discussions and architecture decisions.
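For the query-tuning point above, a typical workflow is to capture a plan with EXPLAIN ANALYZE and compare before and after adding an index; this sketch uses assumed connection settings against Odoo's sale_order table (Odoo runs on PostgreSQL).

```python
# Inspect a query plan from Python to spot sequential scans worth indexing.
import psycopg2

conn = psycopg2.connect("dbname=odoo user=dev")  # placeholder connection
with conn, conn.cursor() as cur:
    cur.execute(
        """
        EXPLAIN ANALYZE
        SELECT partner_id, SUM(amount_total)
        FROM sale_order
        WHERE state = 'sale' AND date_order >= %s
        GROUP BY partner_id
        """,
        ("2024-01-01",),
    )
    for (plan_line,) in cur.fetchall():
        print(plan_line)
    # If the plan shows a sequential scan, a composite index may help, e.g.:
    # CREATE INDEX sale_order_state_date_idx ON sale_order (state, date_order);
```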
Interview Expectations
- Candidates must be comfortable writing live code during interviews.
- SQL queries and optimization scenarios will be part of the technical assessment.
Must-Have Skills
- Python backend development
- Advanced SQL
- Odoo Framework & Deployment
- Docker / Kubernetes
- JavaScript frontend (OWL.js preferred)
- System Design fundamentals
- API integration experience
About the role:
We are seeking a highly detail-oriented and experienced Payment Switch Manual Tester to join our Quality Assurance team. The ideal candidate will be responsible for rigorously testing and validating the functionality, reliability, and security of our core payment switch system, ensuring flawless transaction processing and compliance with all industry standards.
Key Responsibilities
- Test Planning & Design: Analyze payment switch requirements, technical specifications, and user stories to create comprehensive test plans, detailed test scenarios, and manual test cases.
- Test Execution: Execute functional, integration, regression, system, and end-to-end testing on the payment switch and related systems
- Transaction Flow Validation: Manually validate various payment transaction lifecycles, including authorization, clearing, settlement, and chargebacks for credit/debit cards, prepaid cards, and other payment methods.
- Defect Management: Identify, document, and track defects and inconsistencies using defect management tools (JIRA) and work closely with development teams to ensure timely resolution.
- Protocol & Scheme Testing: Test and validate messages and protocols (ISO 8583, SWIFT) and ensure compliance with card scheme mandates (Visa, Mastercard, RuPay).
- API Testing: Perform manual testing of APIs (REST/SOAP) related to payment processing, ensuring correct data validation, security, and error handling
- Data Validation: Execute SQL queries for backend database validation to ensure data integrity and consistency across the transaction lifecycle (see the sample query after this list).
- Collaboration: Participate in Agile/Scrum ceremonies, provide testing estimates, and communicate test status and risks to stakeholders.
- Documentation: Prepare and maintain detailed test reports, test summary reports, and other necessary QA documentation.
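As an example of the data validation above, a query like the following checks that every approved authorization settles exactly once; the auth_log/settlement_log schema is a hypothetical stand-in, and sqlite3 is used only to keep the sketch self-contained.

```python
# Consistency check: each approved auth should have exactly one settlement.
import sqlite3  # stand-in engine; production would use the switch's RDBMS

conn = sqlite3.connect("switch_test.db")
rows = conn.execute(
    """
    SELECT a.rrn, a.amount, COUNT(s.rrn) AS settle_count
    FROM auth_log a
    LEFT JOIN settlement_log s
      ON s.rrn = a.rrn AND s.amount = a.amount
    WHERE a.response_code = '00'   -- '00' = approved (ISO 8583 field 39)
    GROUP BY a.rrn, a.amount
    HAVING COUNT(s.rrn) <> 1
    """
).fetchall()
for rrn, amount, n in rows:
    print(f"RRN {rrn}: expected 1 settlement for {amount}, found {n}")
```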
Required Experience & Skills
- 3+ years of proven experience in manual software testing.
- 3+ years of direct experience testing Payment Switch systems and Card Management Systems.
- Strong understanding of the Payments Domain and the end-to-end transaction lifecycle (Authorization, Clearing, Settlement).
- In-depth knowledge of payment industry standards and protocols, such as ISO 8583.
- Proficiency in designing and executing various types of manual tests (Functional, Regression).
- Unix/Linux – Comfortable with command-line tools, log analysis.
- Experience with testing tools for APIs (Postman, SoapUI) and defect tracking (JIRA).
- Solid skills in writing and executing SQL queries for data validation.
- Excellent analytical, problem-solving, and communication skills (written and verbal).
Interview Process -
- Screening
- Virtual L1 interview
- Managerial Round
- HR Discussion
About Snabbit: Snabbit is India’s first Quick-Service App, delivering home services in just 15 minutes through a hyperlocal network of trained and verified professionals. Backed by Nexus Venture Partners (investors in Zepto, Unacademy, and Ultrahuman), Snabbit is redefining convenience in home services with quality and speed at its core. Founded by Aayush Agarwal, former Chief of Staff at Zepto, Snabbit is pioneering the Quick-Commerce revolution in services. In a short period, we’ve completed thousands of jobs with unmatched customer satisfaction and are scaling rapidly.
At Snabbit, we don’t just build products—we craft solutions that transform everyday lives. This is a playground for engineers who love solving complex problems, building systems from the ground up, and working in a fast-paced, ownership-driven environment. You’ll work alongside some of the brightest minds, pushing boundaries and creating meaningful impact at scale.
Responsibilities:
● Design, implement, and maintain backend services and APIs
● Develop and architect complex UI features for iOS and Android apps using Flutter
● Write high-quality, efficient, and maintainable code, adhering to industry best practices.
● Participate in design discussions to develop scalable solutions and implement them.
● Take ownership of feature delivery timelines and coordinate with cross-functional teams
● Troubleshoot and debug issues to ensure smooth system operations.
● Design, develop, and own end-to-end features for in-house software and tools
● Optimize application performance and implement best practices for mobile development
● Deploy and maintain services infrastructure on AWS.
Requirements:
● Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
● Experience:
○ 3–5 years of hands-on experience as a full-stack developer.
○ Expertise in developing backend services and mobile applications.
○ Experience in leading small technical projects or features
○ Proven track record of delivering complex mobile applications to production
● Technical Skills:
○ Strong knowledge of data structures, algorithms, and design patterns.
○ Proficiency in Python, and advanced proficiency in Flutter with a deep understanding of widget lifecycle and state management
○ Proficiency in RESTful APIs and microservices architecture
○ Knowledge of mobile app deployment processes and app store guidelines
○ Familiarity with version control systems (Git) and agile development methodologies
○ Experience with AWS or other relevant cloud technologies
○ Experience with databases (SQL, NoSQL) and data modeling
● Soft Skills:
○ Strong problem-solving and debugging abilities, with the ability to handle complex technical challenges and drive best practices within the team
○ Leadership qualities with the ability to mentor and guide junior developers
○ Strong stakeholder management and client communication skills
○ A passion for learning and staying updated with technology trends.
Experience - 10-20 Yrs
Job Location - CommerZone, Yerwada, Pune
Work Mode - Work from Office
Shifts - General Shift
Work days - 5 days
Qualification - Full-time graduation mandatory
Domain - Payment/Card/Banking/BFSI/ Retail Payments
Job Type - Full Time
Notice period - Immediate or 30 days
Interview Process -
1) Screening
2) Virtual L1 interview
3) Managerial Round Face to Face at Pune Office
4) HR Discussion
Job Description
Job Summary:
The Production/L2 Application Support Manager will be responsible for managing the banking applications that support our payment gateway systems in a production environment. You will oversee the deployment, monitoring, optimization, and maintenance of all application components. You will ensure that our systems run smoothly, meet business and regulatory requirements, and provide high availability for our customers.
Key Responsibilities:
- Manage and optimize the application for the payment gateway systems to ensure high availability, reliability, and scalability.
- Oversee the day-to-day operations of production environments, including managing cloud services (AWS), load balancing, database systems, and monitoring tools.
- Lead a team of application support engineers and administrators, providing technical guidance and support to ensure applications and infrastructure solutions are implemented efficiently and effectively.
- Collaborate with development, security, and product teams to ensure applications support the needs of the business and comply with relevant regulations.
- Monitor application performance and system health using monitoring tools and ensure quick resolution of any performance bottlenecks or system failures.
- Develop and maintain capacity planning, monitoring, and backup strategies to ensure scalability and minimal downtime during peak transaction periods.
- Drive continuous improvement of processes and tools for efficient production/application management.
- Ensure robust security practices are in place across production systems, including compliance with industry standards.
- Conduct incident response, root cause analysis, and post-mortem analysis to prevent recurring issues and improve system performance.
- Oversee regular patching, updates, and version control of production systems to minimize vulnerabilities.
- Develop and maintain application support documentation, including architecture diagrams, processes, and disaster recovery plans.
- Manage and execute on-call duties, ensuring timely resolution of application-related issues and ensuring proper support coverage.
Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- 8+ years of experience managing L2 application support in high-availability, mission-critical environments, ideally within a payment gateway or fintech organization.
- Experience working in L2 production support for Java-based applications.
- Experience with database systems (SQL, NoSQL) and database management, including high availability and disaster recovery strategies.
- Excellent communication and leadership skills, with the ability to collaborate effectively across teams and drive initiatives forward.
- Ability to work well under pressure and in high-stakes situations, ensuring uptime and service continuity.
We are looking for a skilled and motivated Integration Engineer to join our dynamic team in the payment domain. This role involves the seamless integration of payment systems, APIs, and third-party services into our platform, ensuring smooth and secure payment processing. The ideal candidate will bring experience with payment technologies, integration methodologies, and a strong grasp of industry standards.
Key Responsibilities:
- System Integration:
- Design, develop, and maintain integrations between various payment processors, gateways, and internal platforms using RESTful APIs, SOAP, and related technologies.
- Payment Gateway Integration:
- Integrate third-party payment solutions such as Visa, MasterCard, PayPal, Stripe, and others into the platform.
- Troubleshooting & Support:
- Identify and resolve integration issues including transactional failures, connectivity issues, and third-party service disruptions.
- Testing & Validation:
- Conduct end-to-end integration testing to ensure payment system functionality across development, staging, and production environments.
Qualifications:
- Education:
- Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field. Equivalent work experience is also acceptable.
- Experience:
- 3+ years of hands-on experience in integrating payment systems and third-party services.
- Proven experience with payment gateways (e.g., Stripe, Square, PayPal, Adyen) and protocols (e.g., ISO 20022, EMV).
- Familiarity with payment processing systems and industry standards.
Desirable Skills:
- Strong understanding of API security, OAuth, and tokenization practices.
- Experience with PCI-DSS compliance.
- Excellent problem-solving and debugging skills.
- Effective communication and cross-functional collaboration capabilities.
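For a flavor of the integration work described above, here is a minimal, illustrative Python sketch of a tokenized charge call with an OAuth bearer token and an idempotency key. The endpoint, field names, and headers are hypothetical; every real gateway documents its own scheme.

import uuid

import requests

def charge(access_token: str, amount_cents: int, card_token: str) -> dict:
    # Hypothetical gateway endpoint; real integrations use the provider's URL.
    resp = requests.post(
        "https://gateway.example.com/v1/charges",
        headers={
            "Authorization": f"Bearer {access_token}",
            # An idempotency key makes retries after timeouts safe.
            "Idempotency-Key": str(uuid.uuid4()),
        },
        json={"amount": amount_cents, "currency": "INR", "source": card_token},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()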
We are looking for a Python Backend Developer to design, build, and maintain scalable backend services and APIs. The role involves working with modern Python frameworks, databases (SQL and NoSQL), and building well-tested, production-grade systems.
You will collaborate closely with frontend developers, AI/ML engineers, and system architects to deliver reliable and high-performance backend solutions.
Key Responsibilities
- Design, develop, and maintain backend services using Python
- Build and maintain RESTful APIs using FastAPI
- Design efficient data models and queries using MongoDB and SQL databases (PostgreSQL/MySQL)
- Ensure high performance, security, and scalability of backend systems
- Write unit tests, integration tests, and API tests to ensure code reliability
- Debug, troubleshoot, and resolve production issues
- Follow clean code practices, documentation, and version control workflows
- Participate in code reviews and contribute to technical discussions
- Work closely with cross-functional teams to translate requirements into technical solutions
Required Skills & Qualifications
Technical Skills
- Strong proficiency in Python
- Hands-on experience with FastAPI
- Experience with MongoDB (schema design, indexing, aggregation)
- Solid understanding of SQL databases and relational data modelling
- Experience writing and maintaining automated tests
- Unit testing (e.g., pytest)
- API testing
- Understanding of REST API design principles
- Familiarity with Git and collaborative development workflows
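As an illustration of the FastAPI-plus-pytest pattern referenced above, a minimal sketch (all names hypothetical; an in-memory dict stands in for MongoDB):

from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

_DB: dict[int, Item] = {}  # stand-in for MongoDB in this sketch

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> dict:
    if item_id in _DB:
        raise HTTPException(status_code=409, detail="item already exists")
    _DB[item_id] = item
    return {"id": item_id, "name": item.name, "price": item.price}

def test_create_item():  # runnable with pytest
    client = TestClient(app)
    resp = client.post("/items/1", json={"name": "widget", "price": 9.99})
    assert resp.status_code == 200
    assert resp.json()["name"] == "widget"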
Good to Have
- Experience with async programming in Python (async/await)
- Knowledge of ORMs/ODMs (SQLAlchemy, Tortoise, Motor, etc.)
- Basic understanding of authentication & authorisation (JWT, OAuth)
- Exposure to Docker / containerised environments
- Experience working in Agile/Scrum teams
What We Value
- Strong problem-solving and debugging skills
- Attention to detail and commitment to quality
- Ability to write testable, maintainable, and well-documented code
- Ownership mindset and willingness to learn
- Teamwork
What We Offer
- Opportunity to work on real-world, production systems
- Technically challenging problems and ownership of components
- Collaborative engineering culture
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in a minimum of 2 use cases out of recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you okay with the Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
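For context, a toy example of the classical-ML work described above: an imbalanced fraud-style classifier evaluated with PR-AUC (synthetic data; all settings are illustrative, not project code):

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a fraud dataset: roughly 3% positive class.
X, y = make_classification(n_samples=5000, weights=[0.97, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
# PR-AUC (average precision) is the metric called out for imbalanced fraud work.
print("PR-AUC:", average_precision_score(y_te, scores))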
Review Criteria:
- Strong Software Engineer fullstack profile using NodeJS / Python and React
- 6+ YOE in Software Development using Python OR NodeJS (For backend) & React (For frontend)
- Must have strong experience in working on Typescript
- Must have experience in message-based systems like Kafka, RabbitMq, Redis
- Databases - PostgreSQL & NoSQL databases like MongoDB
- Product Companies Only
- Tier 1 Engineering Institutes preferred (IIT, NIT, BITS, IIIT, DTU or equivalent)
Preferred:
- Experience in Fin-Tech, Payment, POS and Retail products is highly preferred
- Experience in mentoring and coaching the team.
Role & Responsibilities:
We are currently seeking a Senior Engineer to join our Financial Services team, contributing to the design and development of scalable systems.
The Ideal Candidate Will Be Able To-
- Take ownership of delivering performant, scalable and high-quality cloud-based software, on both the frontend and backend.
- Mentor team members to develop in line with product requirements.
- Collaborate with Senior Architect for design and technology choices for product development roadmap.
- Do code reviews.
Ideal Candidate:
- Thorough knowledge of developing cloud-based software, including backend APIs and React-based frontends.
- Thorough knowledge of scalable design patterns and message-based systems such as Kafka, RabbitMq, Redis, MongoDB, ORM, SQL etc.
- Experience with AWS services such as S3, IAM, Lambda etc.
- Expert-level coding skills in Python (FastAPI/Django), NodeJs, TypeScript, ReactJs.
- An eye for responsive user design on the frontend.
We are seeking an experienced and highly skilled Java (Fullstack) Engineer to join our team.
The ideal candidate will have a strong background in back-end Java, Spring Boot, and the Spring Framework, as well as frontend JavaScript with React or Angular, with the ability to build scalable, high-performance applications.
Responsibilities
- Develop, test & deploy scalable & robust back-end services using Java & Spring Boot
- Build responsive & user-friendly front-end applications using a modern JavaScript framework such as React or Angular
- Collaborate with architects & team members to design scalable, maintainable & efficient systems.
- Contribute to architectural decisions for microservices, APIs & cloud solutions.
- Implement & maintain RESTful APIs for seamless integration.
- Write clean, efficient & reusable code adhering to best practices
- Conduct code reviews, performance optimizations & debugging
- Work with cross-functional teams, including UX/UI designers, product managers & QA team.
- Mentor junior developers & provide technical guidance.
Skills & Requirements
- Minimum 3 Years of experience in backend/ fullstack development
- Back-end - Core Java/Java 8, Spring Boot, Spring Framework, Microservices, REST APIs, Kafka
- Front-end - JavaScript, HTML, CSS, TypeScript, Angular
- Database - MySQL
Preferred
- Experience with batch writing, application performance tuning, cache security, and web security
- Experience working in fintech, payments, or high-scale production environments
We are seeking an experienced & highly skilled Java Lead to join our team. The ideal candidate will have a strong background in both front-end & back-end technologies, with expertise in Java and Spring.
As a Lead, you will be responsible for overseeing the development team, architecting scalable applications & ensuring best practices in software development. This role requires a hands-on leader with excellent problem-solving abilities & a passion for mentoring junior team members.
Responsibilities
- Lead & mentor a team of developers, providing guidance on coding standards, architecture & best practices
- Architect, design & develop end-to-end Java-based web applications & ensure high performance, security & scalability
- Work closely with cross-functional teams, including product managers, designers & other developers, to ensure alignment on project requirements & deliverables.
- Conduct code reviews & provide constructive feedback to team members to improve code quality & maintain a consistent codebase
- Participate in Agile/Scrum ceremonies such as stand-ups, sprint planning & retrospectives to contribute to the development process.
- Troubleshoot & resolve complex technical issues & ensure timely resolution of bugs & improvements.
- Stay up to date with emerging technologies & industry trends, recommending & implementing improvements to keep our stack modern & effective
Skills & Requirements
- Minimum 8 years of experience in Java development, with at least 2 years in a lead developer role.
- Back-end - Core Java/Java 8, Spring Boot, Spring Framework, Microservices, REST APIs, Kafka
- Database - MySQL
- Must be working in the fintech/ Payments domain
Preferred
- Experience with batch writing, application performance tuning, cache security, and web security
The Opportunity:
As a Technical Support Consultant, you will play a significant role at Performio, providing world-class support to our customers. With our tried and tested onboarding process, you will soon become familiar with the Performio product and company.
You will draw on previous support experience to monitor for new support requests in Zendesk, provide initial triage with 1st and 2nd level support, and ensure the customer is kept up to date and the request is completed in a timely manner.
You will collaborate with other teams to ensure more complex requests are managed efficiently, and will provide feedback to help improve product and solution knowledge as well as processes.
Answers to customer questions that are not in the knowledge base will be reviewed and added to the knowledge base if appropriate. We're looking for someone who thinks ahead, recognising opportunities to help customers help themselves.
You will help out with configuration changes and testing, furthering your knowledge and experience of Performio. You may also be expected to help out with Managed Service, Implementation and Work Order related tasks from time to time.
About Performio:
Performio is the last ICM software you'll ever need. It allows you to manage incentive compensation complexity and change over the long run by combining a structured plan builder and flexible data management, with a partner who will make you a customer for life.
Our people are highly motivated and engaged professionals with a clear set of values and behaviors. We prove these values matter to us by living them each day. This makes Performio both a great place to work and a great company to do business with.
But a great team alone is not sufficient to win. We have solved the fundamental issue widespread in our industry—overly rigid applications that cannot adapt to your needs, or overly flexible ones that become impossible to maintain over time. Only Performio allows you to manage incentive compensation complexity and change over the long run by combining a structured plan builder and flexible data management. The component-based plan builder makes it easier to understand, change, and self-manage than traditional formula or rules-based solutions. Our ability to import data from any source, in any format, and perform in-app data transformations eliminates the pain of external processing and provides end-to-end data visibility. The combination of these two functions allows us to deliver more powerful reporting and insights. And while every vendor says they are a partner, we truly are one. We not only get your implementation right the first time, we enable you and give you the autonomy and control to make changes year after year. And unlike most, we support every part of your unique configuration. Performio is a partner that will make you a customer for life.
We have a global customer base across Australia, Asia, Europe, and the US in 25+ industries that includes many well-known companies like Toll Brothers, Abbott Labs, News Corp, Johnson & Johnson, Nikon, and Uber Freight.
What will you be doing:
● Monitoring and triaging new Support requests submitted by customers using our Zendesk Support Portal
● Providing 1st and 2nd line support for Support requests
● Investigating, reproducing and resolving customer issues within the required Service Level Agreements
● Maintaining our evolving knowledge base
● Clear and concise documentation of root causes and resolutions
● Assisting with the implementation and testing of Change Requests and implementation projects
● As your knowledge of the product grows, making recommendations for solutions based on clients' requests
● Assisting in educating our clients' compensation administrators on applying best practices
What we’re looking for:
● Passion for customer service, with a communication style that can be adapted to suit the audience
● A problem solver with a range of troubleshooting methodologies
● Experience in the Sales Compensation industry
● Familiarity with basic database concepts and spreadsheets, and experience working with large datasets (Excel, relational database tables, SQL, ETL or other types of tools/languages)
● 4+ years of experience in a similar role (experience with ICM software preferred)
● Experience with implementation & support of ICM solutions like SAP Commissions, Varicent, Xactly will be a big plus
● Positive Attitude - optimistic, cares deeply about company and customers
● High Emotional IQ - shows empathy, listens when appropriate, creates healthy conversation dynamics
● Resourceful - has an "I'll figure it out" attitude if something they need doesn't exist
Role Overview:
We are looking for a detail-oriented Quality Assurance (QA) Tester who is passionate about delivering high-quality consumer-facing applications. This role involves manual testing with exposure to automation, API testing, databases, and mobile/web platforms, while working closely with engineering and product teams across the SDLC.
Products:
• Openly – A conversation-first social app focused on meaningful interactions.
• Playroom – Voicechat – A real-time voice chat platform for live community engagement.
• FriendChat – A chatroom-based social app for discovering and connecting with new people.
Key Responsibilities:
• Perform manual testing for Android, web, and native applications.
• Create and execute detailed test scenarios, test cases, and test plans.
• Conduct REST API testing using Postman.
• Validate data using SQL and MongoDB.
• Identify, report, and track defects with clear reproduction steps.
• Support basic automation testing using Selenium (Java) and Appium.
• Perform regression, smoke, sanity, and exploratory testing.
• Conduct risk analysis and highlight quality risks early in the SDLC.
• Collaborate closely with developers and product teams for defect resolution.
• Participate in CI/CD pipelines and support automated test executions.
• Use ADB tools for Android testing across devices and environments.
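For illustration, the kind of REST API check listed above, written here as a small Python test (endpoint and payload are hypothetical; the same assertions are typically expressed in a Postman collection):

import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_login_returns_token():
    resp = requests.post(
        f"{BASE_URL}/v1/login",
        json={"user": "qa_user", "password": "secret"},
        timeout=10,
    )
    # Verify status, content type, and the shape of the response body.
    assert resp.status_code == 200
    assert resp.headers["Content-Type"].startswith("application/json")
    body = resp.json()
    assert body.get("token")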
Required Skills & Technical Expertise:
• Strong knowledge of Manual Testing fundamentals.
• Hands-on experience with Postman and REST APIs.
• Working knowledge of SQL and MongoDB.
• Ability to design effective test scenarios.
• Basic understanding of Automation Testing concepts.
• Familiarity with SDLC and QA methodologies.
• Exposure to Selenium with Java and Appium.
• Understanding of Android, web, and native application testing.
• Experience using proxy tools for debugging and network inspection.
Good to Have:
• Exposure to CI/CD tools and pipelines.
• Hands-on experience with Appium, K6, Kafka, and proxy tools.
• Basic understanding of performance and load testing.
• Awareness of risk-based testing strategies.
Key Traits:
• High attention to detail and quality.
• Strong analytical and problem-solving skills.
• Clear communication and collaboration abilities.
• Eagerness to learn and grow in automation and advanced testing tools.
We are looking for a skilled Data Engineer / Data Warehouse Engineer to design, develop, and maintain scalable data pipelines and enterprise data warehouse solutions. The role involves close collaboration with business stakeholders and BI teams to deliver high-quality data for analytics and reporting.
Key Responsibilities
- Collaborate with business users and stakeholders to understand business processes and data requirements
- Design and implement dimensional data models, including fact and dimension tables
- Identify, design, and implement data transformation and cleansing logic
- Build and maintain scalable, reliable, and high-performance ETL/ELT pipelines
- Extract, transform, and load data from multiple source systems into the Enterprise Data Warehouse
- Develop conceptual, logical, and physical data models, including metadata, data lineage, and technical definitions
- Design, develop, and maintain ETL workflows and mappings using appropriate data load techniques
- Provide high-level design, research, and effort estimates for data integration initiatives
- Provide production support for ETL processes to ensure data availability and SLA adherence
- Analyze and resolve data pipeline and performance issues
- Partner with BI teams to design and develop reports and dashboards while ensuring data integrity and quality
- Translate business requirements into well-defined technical data specifications
- Work with data from ERP, CRM, HRIS, and other transactional systems for analytics and reporting
- Define and document BI usage through use cases, prototypes, testing, and deployment
- Support and enhance data governance and data quality processes
- Identify trends, patterns, anomalies, and data quality issues, and recommend improvements
- Train and support business users, IT analysts, and developers
- Lead and collaborate with teams spread across multiple locations
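To make the fact/dimension vocabulary above concrete, a minimal star-schema sketch (hypothetical tables, run against in-memory SQLite so it executes anywhere):

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE fact_sales (
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    amount       REAL
);
INSERT INTO dim_customer VALUES (1, 'Acme');
INSERT INTO dim_date VALUES (20240101, '2024-01-01');
INSERT INTO fact_sales VALUES (1, 20240101, 125.0);
""")
# A typical analytical query: join the fact table to its dimensions.
row = con.execute("""
SELECT c.name, d.iso_date, SUM(f.amount)
FROM fact_sales AS f
JOIN dim_customer AS c USING (customer_key)
JOIN dim_date AS d USING (date_key)
GROUP BY c.name, d.iso_date
""").fetchone()
print(row)  # ('Acme', '2024-01-01', 125.0)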
Required Skills & Qualifications
- Bachelor’s degree in Computer Science or a related field, or equivalent work experience
- 3+ years of experience in Data Warehousing, Data Engineering, or Data Integration
- Strong expertise in data warehousing concepts, tools, and best practices
- Excellent SQL skills
- Strong knowledge of relational databases such as SQL Server, PostgreSQL, and MySQL
- Hands-on experience with Google Cloud Platform (GCP) services, including:
- BigQuery
- Cloud SQL
- Cloud Composer (Airflow)
- Dataflow
- Dataproc
- Cloud Functions
- Google Cloud Storage (GCS)
- Experience with Informatica PowerExchange for Mainframe, Salesforce, and modern data sources
- Strong experience integrating data using APIs, XML, JSON, and similar formats
- In-depth understanding of OLAP, ETL frameworks, Data Warehousing, and Data Lakes
- Solid understanding of SDLC, Agile, and Scrum methodologies
- Strong problem-solving, multitasking, and organizational skills
- Experience handling large-scale datasets and database design
- Strong verbal and written communication skills
- Experience leading teams across multiple locations
Good to Have
- Experience with SSRS and SSIS
- Exposure to AWS and/or Azure cloud platforms
- Experience working with enterprise BI and analytics tools
Why Join Us
- Opportunity to work on large-scale, enterprise data platforms
- Exposure to modern cloud-native data engineering technologies
- Collaborative environment with strong stakeholder interaction
- Career growth and leadership opportunities
About the Company
SimplyFI Softech India Pvt. Ltd. is a product-led company working across AI, Blockchain, and Cloud. The team builds intelligent platforms for fintech, SaaS, and enterprise use cases, focused on solving real business problems with production-grade systems.
Role Overview
This role is for someone who enjoys working hands-on with data and machine learning models. You’ll support real-world AI use cases end to end, from data prep to model integration, while learning how AI systems are built and deployed in production.
Key Responsibilities
- Design, develop, and deploy machine learning models with guidance from senior engineers
- Work with structured and unstructured datasets for cleaning, preprocessing, and feature engineering
- Implement ML algorithms using Python and standard ML libraries
- Train, test, and evaluate models and track performance metrics
- Assist in integrating AI/ML models into applications and APIs
- Perform basic data analysis and visualization to extract insights
- Participate in code reviews, documentation, and team discussions
- Stay updated on ML, AI, and Generative AI trends
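As a small illustration of the train/evaluate loop above, a hedged sketch on a toy dataset (everything here is illustrative, not project code):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# Five-fold cross-validation gives a performance metric plus its spread.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")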
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, AI, Data Science, or a related field
- Strong foundation in Python
- Clear understanding of core ML concepts: supervised and unsupervised learning
- Hands-on exposure to NumPy, Pandas, and Scikit-learn
- Basic familiarity with TensorFlow or PyTorch
- Understanding of data structures, algorithms, and statistics
- Good analytical thinking and problem-solving skills
- Comfortable working in a fast-moving product environment
Good to Have
- Exposure to NLP, Computer Vision, or Generative AI
- Experience with Jupyter Notebook or Google Colab
- Basic knowledge of SQL or NoSQL databases
- Understanding of REST APIs and model deployment concepts
- Familiarity with Git/GitHub
- AI/ML internships or academic projects
About Upsurge Labs
We're building the infrastructure and products that will shape how human civilization operates in the coming decades. The specifics evolve—the ambition doesn't.
The Role
The way software gets built is undergoing a fundamental shift. AI can now write, test, debug, and ship production-grade systems across web, mobile, embedded, robotics, and infrastructure. The bottleneck is no longer typing code—it's knowing what to build, why, and how the pieces fit together.
We're hiring Systems Engineers: people who can navigate an entire development cycle—from problem definition to production deployment—by directing AI tools and reasoning from first principles. You won't specialize in one stack. You'll operate across all of them.
This role replaces traditional dev teams. You'll work largely autonomously, shipping complete systems that previously required 3-5 specialists.
What You'll Do
- Own entire products and systems end-to-end: architecture, implementation, deployment, iteration
- Work across domains as needed—backend services, frontend interfaces, mobile apps, data pipelines, DevOps, embedded software, robotic systems
- Use AI tools to write, review, test, and debug code at high velocity
- Identify when AI output is wrong, incomplete, or subtly broken—and know how to fix it or when to escalate
- Make architectural decisions: database selection, protocol choices, system boundaries, performance tradeoffs
- Collaborate directly with designers, domain experts, and leadership
- Ship. Constantly.
What You Bring
First-principles thinking
You understand how systems work at a foundational level. When something breaks, you reason backward from the error to potential causes. You know the difference between a network timeout, a malformed query, a race condition, and a misconfigured environment—even if you haven't memorized the fix.
Broad technical fluency
You don't need to be an expert in everything. But you need working knowledge across:
- How web systems work: HTTP, DNS, TLS, REST, WebSockets, authentication flows
- How databases work: relational vs document vs key-value, indexing, query structure, transactions
- How infrastructure works: containers, orchestration, CI/CD, cloud primitives, networking basics
- How frontend works: rendering, state management, browser APIs, responsive design
- How mobile works: native vs cross-platform tradeoffs, app lifecycle, permissions
- How embedded/robotics software works: real-time constraints, sensor integration, communication protocols
You should be able to read code in any mainstream language and understand what it's doing.
AI-native workflow
You've already built real things using AI tools. You know how to prompt effectively, how to structure problems so AI can help, how to validate AI output, and when to step in manually.
High agency
You don't wait for permission or detailed specs. You figure out what needs to happen and make it happen. Ambiguity doesn't paralyze you.
Proof of work
Show us what you've built. Live products, GitHub repos, side projects, internal tools—anything that demonstrates you can ship complete systems.
What We Don't Care About
- Degrees or formal credentials
- Years of experience in a specific language or framework
- Whether you came from a "traditional" engineering path
What You'll Get
- Direct line to the CEO
- Autonomy to own large problem spaces
- A front-row seat to how engineering work is evolving
- Colleagues who ship fast and think clearly

Full‑Stack Engineer (Python/Django & Next.js)
Location: Bangalore
Experience: 2–8 years of hands‑on full‑stack development
We’re looking for a passionate Full‑Stack Engineer to join our team and help build secure, scalable systems that power exceptional customer experiences.
Key Skills -
• Architect and develop secure, scalable applications
• Collaborate closely with product & design teams
• Manage CI/CD pipelines and deployments
• Mentor engineers and enforce coding best practices
What we’re looking for:
• Strong expertise in Python/Django & Next.js/React
• Hands‑on with PostgreSQL, Docker, AWS/GCP
• Experience leading engineering teams
• Excellent problem‑solving & communication skills
If you’re excited about building impactful products and driving engineering excellence, apply now!
The Opportunity
Planview is looking for a passionate Sr Data Scientist to join our team tasked with developing innovative tools for connected work. You are an experienced expert in supporting enterprise applications using Data Analytics, Machine Learning, and Generative AI.
You will use this experience to lead other data scientists and data engineers. You will also effectively engage with product teams to specify, validate, prototype, scale, and deploy features with a consistent customer experience across the Planview product suite.
Responsibilities (What you'll do)
- Enable Data Science features within Planview applications by working in a fast-paced start-up mindset.
- Collaborate closely with product management to enable Data Science features that deliver significant value to customers, ensuring that these features are optimized for operational efficiency.
- Manage every stage of the AI/ML development lifecycle, from initial concept through deployment in a production environment.
- Provide leadership to other Data Scientists by exemplifying exceptional quality in work, nurturing a culture of continuous learning, and offering daily guidance in their research endeavors.
- Effectively communicate ideas drawn from complex data with clarity and insight.
Qualifications (What you'll bring)
- Master’s in Operations Research, Statistics, Computer Science, Data Science, or a related field.
- 8+ years of experience as a data scientist, data engineer, or ML engineer.
- Demonstrable history of bringing Data Science features to Enterprise applications.
- Exceptional Python and SQL coding skills.
- Experience with Optimization, Machine Learning, Generative AI, NLP, Statistics, and Simulation.
- Experience with AWS Data and ML Technologies (Sagemaker, Glue, Athena, Redshift)
Preferred qualifications:
- Experience working with datasets in the domains of project management, software development, and resource planning.
- Experience with common libraries and frameworks in data science (Scikit Learn, TensorFlow, PyTorch).
- Experience with ML platform tools (AWS SageMaker).
- Skilled at working as part of a global, diverse workforce of high-performing individuals.
- AWS Certification is a plus
We are seeking an experienced and highly skilled Java (Fullstack) Engineer to join our team.
The ideal candidate will have a strong background in back-end Java, Spring Boot, and the Spring Framework, as well as frontend JavaScript with React or Angular, with the ability to build scalable, high-performance applications.
Responsibilities
- Develop, test & deploy scalable & robust back-end services using Java & Spring Boot
- Build responsive & user-friendly front-end applications using a modern JavaScript framework such as React or Angular
- Collaborate with architects & team members to design scalable, maintainable & efficient systems.
- Contribute to architectural decisions for microservices, APIs & cloud solutions.
- Implement & maintain RESTful APIs for seamless integration.
- Write clean, efficient & reusable code adhering to best practices
- Conduct code reviews, performance optimizations & debugging
- Work with cross-functional teams, including UX/UI designers, product managers & QA team.
- Mentor junior developers & provide technical guidance.
Skills & Requirements
- Minimum 5 Years of experience in backend/ fullstack development
- Back-end - Core Java/Java 8, Spring Boot, Spring Framework, Microservices, REST APIs, Kafka
- Front-end - JavaScript, HTML, CSS, TypeScript, Angular
- Database - MySQL
Preferred
- Experience with batch writing, application performance tuning, cache security, and web security
- Experience working in fintech, payments, or high-scale production environments
Full Stack Developer
Company: Jupsoft Technologies Pvt. Ltd.
Experience: 2 Years
Salary: ₹30,000 – ₹40,000 per month
Location: Noida
Job Type: Full-Time
Job Description:
Key Responsibilities
- Design, develop, and maintain scalable, production-ready web applications using Next.js (frontend) and Django + Django REST Framework (backend).
- Build, document, and integrate RESTful APIs to enable seamless communication between services.
- Work with SQL-based databases (e.g., MySQL/PostgreSQL/SQL Server) for schema design, optimization, indexing, and performance tuning.
- Implement multi-tenant database architecture to support scalable, secure multi-tenant applications.
- Ensure applications are secure, optimized, and user-friendly with proper implementation of SSR, authentication, authorization, and session management.
- Utilize Tailwind CSS & Shadcn UI for building modern, reusable UI components.
- Integrate and work with Docker for containerization and development workflows.
- Work with Gemini, OpenAPI specifications for API implementation and documentation.
- Build CI/CD pipelines using GitHub Actions and collaborate with DevOps for smooth deployments.
- Manage end-to-end product lifecycle — development, testing, deployment, monitoring, and optimization.
- Troubleshoot, debug, and optimize application performance and reliability.
- Maintain high-quality technical documentation, code readability, and system design clarity.
- Collaborate closely with UI/UX, QA, and product teams to ensure smooth delivery.
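As a hedged sketch of the Django REST Framework plus multi-tenant pattern described above (assumes a configured Django project; all model, field, and attribute names here are hypothetical):

from django.db import models
from rest_framework import serializers, viewsets
from rest_framework.routers import DefaultRouter

class Tenant(models.Model):
    name = models.CharField(max_length=100)

class Invoice(models.Model):
    tenant = models.ForeignKey(Tenant, on_delete=models.CASCADE)
    total = models.DecimalField(max_digits=10, decimal_places=2)

class InvoiceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Invoice
        fields = ["id", "tenant", "total"]

class InvoiceViewSet(viewsets.ModelViewSet):
    serializer_class = InvoiceSerializer

    def get_queryset(self):
        # Multi-tenant isolation: scope every query to the caller's tenant
        # (a tenant_id attribute on the user is an assumption for this sketch).
        return Invoice.objects.filter(tenant_id=self.request.user.tenant_id)

router = DefaultRouter()
router.register(r"invoices", InvoiceViewSet, basename="invoice")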
Required Skills & Qualifications
- Strong hands-on experience with Next.js & React ecosystem.
- Strong backend experience using Django + Django REST Framework (DRF).
- Strong understanding of SQL database systems and query optimization.
- Solid experience working with Docker for production-grade apps.
- Strong proficiency with Tailwind CSS and UI component libraries, especially Shadcn UI.
- Experience with GitHub Actions for CI/CD implementation.
- Strong understanding of REST APIs, OpenAPI, authentication, session management, and state management.
- Experience developing multi-tenant systems.
- Experience making applications production-ready and deploying end-to-end.
- Proficiency with HTML5, CSS3, JavaScript (ES6+), TypeScript.
- Familiarity with version control (Git/GitHub).
- Strong problem-solving, debugging, and analytical skills.
- Ability to write clean, maintainable, scalable code following best practices.
Nice to Have
- Experience with cloud services (AWS / GCP / Azure).
- Experience with WebSockets for real-time communication.
- Basic understanding of DevOps pipelines and monitoring tools.
Additional Attributes
- Strong communication skills and ability to collaborate across teams.
- Passion for learning new technologies and delivering high-quality products.
- Ability to work independently and manage timelines in a fast-paced environment.
Benefits:
- Health insurance
- Paid sick time
- Provident Fund
Work Location: In person
Job Type: Full-time
Job Role: Profile Data Setup Analyst
Job Title: Data Analyst
Location: Vadodara | Department: Customer Service
Experience: 3 - 5 Years
About the Company
We’ve been transforming the window and door industry with intelligent software for over 40 years. Our solutions power manufacturers, dealers, and installers globally, enabling efficiency, accuracy, and growth. We are now looking for curious, data-driven professionals to join our mission of delivering world-class digital solutions to our customers.
Job Overview
As a Profile Data Setup Analyst, you will play a key role in configuring, analysing, and managing product data for our customers. You will work closely with internal teams and clients to ensure accurate, optimized, and timely data setup. This role is perfect for someone who enjoys problem-solving, working with data, and continuously learning.
Key Responsibilities
• Understand customer product configurations and translate them into structured data using Windowmaker Software.
• Set up and modify profile data including reinforcements, glazing, and accessories, aligned with customer-specific rules and industry practices.
• Analyse data, identify inconsistencies, and ensure high-quality output that supports accurate quoting and manufacturing.
• Collaborate with cross-functional teams (Sales, Software Development, Support) to deliver complete and tested data setups on time.
• Provide training, guidance, and documentation to internal teams and customers as needed.
• Continuously look for process improvements and contribute to knowledge-sharing across the team.
• Support escalated customer cases related to data accuracy or configuration issues.
• Ensure timely delivery of all assigned tasks while maintaining high standards of quality and attention to detail.
Required Qualifications
• 3–5 years of experience in a data-centric role.
• Bachelor’s degree in Engineering (e.g., Computer Science) or a related technical field.
• Experience with product data structures and product lifecycle.
• Strong analytical skills with a keen eye for data accuracy and patterns.
• Ability to break down complex product information into structured data elements.
• Eagerness to learn industry domain knowledge and software capabilities.
• Hands-on experience with Excel, SQL, or other data tools.
• Ability to manage priorities and meet deadlines in a fast-paced environment.
• Excellent written and verbal communication skills.
• A collaborative, growth-oriented mindset.
Nice to Have
• Prior exposure to ERP/CPQ/Manufacturing systems is a plus.
• Knowledge of the window and door (fenestration) industry is an added advantage.
Why Join Us
• Be part of a global product company with a solid industry reputation.
• Work on impactful projects that directly influence customer success.
• Collaborate with a talented, friendly, and supportive team.
• Learn, grow, and make a difference in the digital transformation of the fenestration industry.
About Hudson Data
At Hudson Data, we view AI as both an art and a science. Our cross-functional teams — spanning business leaders, data scientists, and engineers — blend AI/ML and Big Data technologies to solve real-world business challenges. We harness predictive analytics to uncover new revenue opportunities, optimize operational efficiency, and enable data-driven transformation for our clients.
Beyond traditional AI/ML consulting, we actively collaborate with academic and industry partners to stay at the forefront of innovation. Alongside delivering projects for Fortune 500 clients, we also develop proprietary AI/ML products addressing diverse industry challenges.
Headquartered in New Delhi, India, with an office in New York, USA, Hudson Data operates globally, driving excellence in data science, analytics, and artificial intelligence.
⸻
About the Role
We are seeking a Data Analyst & Modeling Specialist with a passion for leveraging AI, machine learning, and cloud analytics to improve business processes, enhance decision-making, and drive innovation. You’ll play a key role in transforming raw data into insights, building predictive models, and delivering data-driven strategies that have real business impact.
⸻
Key Responsibilities
1. Data Collection & Management
• Gather and integrate data from multiple sources including databases, APIs, spreadsheets, and cloud warehouses.
• Design and maintain ETL pipelines ensuring data accuracy, scalability, and availability.
• Utilize any major cloud platform (Google Cloud, AWS, or Azure) for data storage, processing, and analytics workflows.
• Collaborate with engineering teams to define data governance, lineage, and security standards.
2. Data Cleaning & Preprocessing
• Clean, transform, and organize large datasets using Python (pandas, NumPy) and SQL.
• Handle missing data, duplicates, and outliers while ensuring consistency and quality.
• Automate data preparation using Linux scripting, Airflow, or cloud-native schedulers.
3. Data Analysis & Insights
• Perform exploratory data analysis (EDA) to identify key trends, correlations, and drivers.
• Apply statistical techniques such as regression, time-series analysis, and hypothesis testing.
• Use Excel (including pivot tables) and BI tools (Tableau, Power BI, Looker, or Google Data Studio) to develop insightful reports and dashboards.
• Present findings and recommendations to cross-functional stakeholders in a clear and actionable manner.
4. Predictive Modeling & Machine Learning
• Build and optimize predictive and classification models using scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, and H2O.ai.
• Perform feature engineering, model tuning, and cross-validation for performance optimization.
• Deploy and manage ML models using Vertex AI (GCP), AWS SageMaker, or Azure ML Studio.
• Continuously monitor, evaluate, and retrain models to ensure business relevance.
5. Reporting & Visualization
• Develop interactive dashboards and automated reports for performance tracking.
• Use pivot tables, KPIs, and data visualizations to simplify complex analytical findings.
• Communicate insights effectively through clear data storytelling.
6. Collaboration & Communication
• Partner with business, engineering, and product teams to define analytical goals and success metrics.
• Translate complex data and model results into actionable insights for decision-makers.
• Advocate for data-driven culture and support data literacy across teams.
7. Continuous Improvement & Innovation
• Stay current with emerging trends in AI, ML, data visualization, and cloud technologies.
• Identify opportunities for process optimization, automation, and innovation.
• Contribute to internal R&D and AI product development initiatives.
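For instance, the cleaning pass described under Data Cleaning & Preprocessing might look like this minimal pandas sketch (toy frame; thresholds are illustrative):

import pandas as pd

df = pd.DataFrame({
    "amount": [10.0, None, 10.0, 1_000_000.0],
    "region": ["N", "N", "N", "S"],
})

df = df.drop_duplicates()                                  # exact duplicates
df["amount"] = df["amount"].fillna(df["amount"].median())  # missing values
# Clip outliers to the 1st-99th percentile band instead of dropping rows.
lo, hi = df["amount"].quantile([0.01, 0.99])
df["amount"] = df["amount"].clip(lo, hi)
print(df)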
⸻
Required Skills & Qualifications
Technical Skills
• Programming: Proficient in Python (pandas, NumPy, scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, H2O.ai).
• Databases & Querying: Advanced SQL skills; experience with BigQuery, Redshift, or Azure Synapse is a plus.
• Cloud Expertise: Hands-on experience with one or more major platforms — Google Cloud, AWS, or Azure.
• Visualization & Reporting: Skilled in Tableau, Power BI, Looker, or Excel (pivot tables, data modeling).
• Data Engineering: Familiarity with ETL tools (Airflow, dbt, or similar).
• Operating Systems: Strong proficiency with Linux/Unix for scripting and automation.
Soft Skills
• Strong analytical, problem-solving, and critical-thinking abilities.
• Excellent communication and presentation skills, including data storytelling.
• Curiosity and creativity in exploring and interpreting data.
• Collaborative mindset, capable of working in cross-functional and fast-paced environments.
⸻
Education & Certifications
• Bachelor’s degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
• Master’s degree in Data Analytics, Machine Learning, or Business Intelligence preferred.
• Relevant certifications are highly valued:
• Google Cloud Professional Data Engineer
• AWS Certified Data Analytics – Specialty
• Microsoft Certified: Azure Data Scientist Associate
• TensorFlow Developer Certificate
⸻
Why Join Hudson Data
At Hudson Data, you’ll be part of a dynamic, innovative, and globally connected team that uses cutting-edge tools — from AI and ML frameworks to cloud-based analytics platforms — to solve meaningful problems. You’ll have the opportunity to grow, experiment, and make a tangible impact in a culture that values creativity, precision, and collaboration.
We are seeking a highly skilled and experienced Python Developer with a strong background in fintech to join our dynamic team. The ideal candidate will have at least 7+ years of professional experience in Python development, with a proven track record of delivering high-quality software solutions in the fintech industry.
Responsibilities:
Design, build, and maintain RESTful APIs using Django and Django Rest Framework.
Integrate AI/ML models into existing applications to enhance functionality and provide data-driven insights.
Collaborate with cross-functional teams, including product managers, designers, and other developers, to define and implement new features and functionalities.
Manage deployment processes, ensuring smooth and efficient delivery of applications.
Implement and maintain payment gateway solutions to facilitate secure transactions.
Conduct code reviews, provide constructive feedback, and mentor junior members of the development team.
Stay up-to-date with emerging technologies and industry trends, and evaluate their potential impact on our products and services.
Maintain clear and comprehensive documentation for all development processes and integrations.
Requirements:
Proficiency in Python and Django/Django Rest Framework.
Experience with REST API development and integration.
Knowledge of AI/ML concepts and practical experience integrating AI/ML models.
Hands-on experience with deployment tools and processes.
Familiarity with payment gateway integration and management.
Strong understanding of database systems (SQL, PostgreSQL, MySQL).
Experience with version control systems (Git).
Strong problem-solving skills and attention to detail.
Excellent communication and teamwork skills.
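One recurring task in payment-gateway work of this kind is webhook verification; here is a hedged sketch (the header name and signing scheme are hypothetical, as each gateway documents its own):

import hashlib
import hmac

def verify_webhook(payload: bytes, signature: str, secret: str) -> bool:
    # Recompute the HMAC over the raw request body and compare in constant time.
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)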
Job Types: Full-time, Permanent
Work Location: In person
AccioJob is conducting a Walk-In Hiring Drive with Xebo.ai for the position of Software Engineer.
To apply, register and select your slot here: https://go.acciojob.com/SMPPbd
Required Skills: DSA, SQL, OOPS, JavaScript, React
Eligibility:
Degree: BTech./BE
Branch: Computer Science/CSE/Other CS related branch, IT
Graduation Year: 2025, 2026
Work Details:
Work Location: Noida (Onsite)
CTC: ₹6 LPA to ₹7 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Noida Centre
Further Rounds (for shortlisted candidates only):
Resume Shortlist, Technical Interview 1, Technical Interview 2, Technical Interview 3, HR Discussion
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/SMPPbd
👇 FAST SLOT BOOKING 👇
[ 📲 DOWNLOAD ACCIOJOB APP ]
About Us
MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more — backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.
We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.
About the Team
As a Lead Data Specialist at MIC Global, you will play a key role in transforming data into actionable insights that inform strategic and operational decisions. You will work closely with Product, Engineering, and Business teams to analyze trends, build dashboards, and ensure that data pipelines and reporting structures are accurate, automated, and scalable.
This is a hands-on, analytical, and technically focused role, ideal for someone experienced in data analytics and engineering practices. You will use SQL, Python, and modern BI tools to interpret large datasets, support pricing models, and help shape the data-driven culture across MIC Global.
Key Roles and Responsibilities
Data Analytics & Insights
- Analyze complex datasets to identify trends, patterns, and insights that support business and product decisions.
- Partner with Product, Operations, and Finance teams to generate actionable intelligence on customer behavior, product performance, and risk modeling.
- Contribute to the development of pricing models, ensuring accuracy and commercial relevance.
- Deliver clear, concise data stories and visualizations that drive executive and operational understanding.
- Develop analytical toolkits for underwriting, pricing and claims
Data Engineering & Pipeline Management
- Design, implement, and maintain reliable data pipelines and ETL workflows.
- Write clean, efficient scripts in Python for data cleaning, transformation, and automation.
- Ensure data quality, integrity, and accessibility across multiple systems and environments.
- Work with Azure data services to store, process, and manage large datasets efficiently.
Business Intelligence & Reporting
- Develop, maintain, and optimize dashboards and reports using Power BI (or similar tools).
- Automate data refreshes and streamline reporting processes for cross-functional teams.
- Track and communicate key business metrics, providing proactive recommendations.
Collaboration & Innovation
- Collaborate with engineers, product managers, and business leads to align analytical outputs with company goals.
- Support the adoption of modern data tools and agentic AI frameworks to improve insight generation and automation.
- Continuously identify opportunities to enhance data-driven decision-making across the organization.
Ideal Candidate Profile
- 10+ years of relevant experience in data analysis or business intelligence, ideally within product-based SaaS, fintech, or insurance environments.
- Proven expertise in SQL for data querying, manipulation, and optimization.
- Hands-on experience with Python for data analytics, automation, and scripting.
- Strong proficiency in Power BI, Tableau, or equivalent BI tools.
- Experience working in Azure or other cloud-based data ecosystems.
- Solid understanding of data modeling, ETL processes, and data governance.
- Ability to translate business questions into technical analysis and communicate findings effectively.
Preferred Attributes
- Experience in insurance or fintech environments, especially operations and claims analytics.
- Exposure to agentic AI and modern data stack tools (e.g., dbt, Snowflake, Databricks).
- Strong attention to detail, analytical curiosity, and business acumen.
- Collaborative mindset with a passion for driving measurable impact through data.
Benefits
- 33 days of paid holiday
- Competitive compensation well above market average
- Work in a high-growth, high-impact environment with passionate, talented peers
- Clear path for personal growth and leadership development
If interested, please share your resume at ayushi.dwivedi at cloudsufi.com
Note - This role is remote but requires a quarterly visit to the Noida office (1 week per quarter); if you are comfortable with that, please share your resume.
Data Engineer
Position Type: Full-time
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services, and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Job Summary
We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.
Key Responsibilities
ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Cloud Platform services (Cloud Run, Dataflow) and Python.
Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards.
API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
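For illustration, a minimal Apache Beam pipeline of the kind run on Dataflow for the ingestion and cleaning work above (paths and columns are hypothetical; this runs locally with the DirectRunner):

import csv

import apache_beam as beam

def parse_row(line: str) -> dict:
    # Assumes a two-column CSV of (name, value) for this sketch.
    name, value = next(csv.reader([line]))
    return {"name": name.strip(), "value": float(value)}

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("input.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(parse_row)
        | "Clean" >> beam.Filter(lambda row: row["value"] >= 0)
        | "Write" >> beam.io.WriteToText("output")
    )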
Qualifications and Skills
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.
Core Competencies:
Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)
Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
Experience with data validation techniques and tools.
Familiarity with CI/CD practices and the ability to work in an Agile framework.
Strong problem-solving skills and keen attention to detail.
Preferred Qualifications:
Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
Familiarity with similar large-scale public dataset integration initiatives.
Experience with multilingual data integration.
If interested, please send your resume to ayushi.dwivedi at cloudsufi.com
The candidate's current location must be Bangalore (as client office visits are required), and the candidate must be open to a 1-week visit to the Noida office each quarter.
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
Job Summary
We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.
Key Responsibilities
ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Cloud Platform services (Cloud Run, Dataflow) and Python.
Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards.
API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
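As a rough illustration of the Dataflow work described above, the following Apache Beam sketch parses delimited records, drops rows with missing values, and prints the result. It runs locally on Beam's DirectRunner; on GCP the same pipeline would be submitted to Dataflow. The record layout is invented, not taken from the project:

```python
# Minimal Apache Beam pipeline sketch; the record layout is illustrative only.
import apache_beam as beam

def parse_row(line: str) -> dict:
    country, year, value = line.split(",")
    return {"country": country, "year": int(year), "value": float(value)}

with beam.Pipeline() as pipeline:  # DirectRunner by default
    (
        pipeline
        | "Read" >> beam.Create(["IN,2020,87.2", "US,2020,91.5", "IN,2021,"])
        | "DropIncomplete" >> beam.Filter(lambda line: line.split(",")[2] != "")
        | "Parse" >> beam.Map(parse_row)
        | "Print" >> beam.Map(print)
    )
```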
About Us
MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more — backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.
We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.
About the Team
We're seeking a mid-level Data Engineer with strong DBA experience to join our insurtech data analytics team. This role focuses on supporting various teams including infrastructure, reporting, and analytics. You'll be responsible for SQL performance optimization, building data pipelines, implementing data quality checks, and helping teams with database-related challenges. You'll work closely with the infrastructure team on production support, assist the reporting team with complex queries, and support the analytics team in building visualizations and dashboards.
Key Roles and Responsibilities
Database Administration & Optimization
- Support infrastructure team with production database issues and troubleshooting
- Debug and resolve SQL performance issues, identify bottlenecks, and optimize queries (a DMV-based tuning sketch follows this list)
- Optimize stored procedures, functions, and views for better performance
- Perform query tuning, index optimization, and execution plan analysis
- Design and develop complex stored procedures, functions, and views
- Support the reporting team with complex SQL queries and database design
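To give a flavor of the SQL performance work above, here is a hedged sketch that surfaces the most CPU-expensive statements from SQL Server's query-stats DMVs via pyodbc; the connection string is a placeholder and would need a real server, driver, and credentials:

```python
# Hedged DBA sketch: top queries by average CPU from SQL Server DMVs.
import pyodbc

TOP_QUERIES_SQL = """
SELECT TOP (5)
    qs.total_worker_time / qs.execution_count AS avg_cpu_us,
    qs.execution_count,
    SUBSTRING(st.text, 1, 200) AS query_snippet
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_us DESC;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=your-server.example.com;"                   # placeholder
    "DATABASE=your-db;UID=your-user;PWD=your-password"  # placeholders
)
for avg_cpu_us, runs, snippet in conn.cursor().execute(TOP_QUERIES_SQL):
    print(f"{avg_cpu_us:>12} µs avg over {runs} runs: {snippet!r}")
conn.close()
```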
Data Engineering & Pipelines
- Design and build ETL/ELT pipelines using Azure Data Factory and Python
- Implement data quality checks and validation rules before data enters pipelines (see the validation sketch after this list)
- Develop data integration solutions to connect various data sources and systems
- Create automated data validation, quality monitoring, and alerting mechanisms
- Develop Python scripts for data processing, transformation, and automation
- Build and maintain data models to support reporting and analytics requirements
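The sketch below shows the kind of pre-load validation rules mentioned above, assuming data staged into a pandas frame; the insurance-flavored column names and rules are assumptions for illustration, not the team's actual checks:

```python
# Illustrative pre-load data quality checks; column names are assumptions.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return human-readable rule violations; an empty list means clean."""
    failures = []
    if df["policy_id"].isna().any():
        failures.append("policy_id contains nulls")
    if df["policy_id"].duplicated().any():
        failures.append("policy_id contains duplicates")
    if (df["premium"] < 0).any():
        failures.append("premium contains negative values")
    return failures

staged = pd.DataFrame({"policy_id": [1, 2, 2], "premium": [120.0, -5.0, 80.0]})
issues = validate(staged)
if issues:
    # Reject the batch before it enters the pipeline.
    raise ValueError("Rejecting batch: " + "; ".join(issues))
```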
Support & Collaboration
- Help data analytics team build visualizations and dashboards by providing data models and queries
- Support reporting team with data extraction, transformation, and complex reporting queries
- Collaborate with development teams to support application database requirements
- Provide technical guidance and best practices for database design and query optimization
Azure & Cloud
- Work with Azure services including Azure SQL Database, Azure Data Factory, Azure Storage, Azure Functions, and Azure ML (a small Blob Storage example follows this list)
- Implement cloud-based data solutions following Azure best practices
- Support cloud database migrations and optimizations
- Work with Agentic AI concepts and tools to build intelligent data solutions
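As one small, hedged example of the Azure work above, this sketch lands a processed file in Blob Storage using the official azure-storage-blob SDK; the connection string, container, and blob path are placeholders:

```python
# Sketch: upload a processed file to Azure Blob Storage (placeholders throughout).
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="curated", blob="claims/2024-06.parquet")

with open("claims-2024-06.parquet", "rb") as fh:
    # overwrite=True keeps pipeline retries idempotent.
    blob.upload_blob(fh, overwrite=True)
```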
Ideal Candidate Profile
Essential
- 5-8 years of experience in data engineering and database administration
- Strong expertise in MS SQL Server (2016+) administration and development
- Proficient in writing complex SQL queries, stored procedures, functions, and views
- Hands-on experience with Microsoft Azure services (Azure SQL Database, Azure Data Factory, Azure Storage)
- Strong Python scripting skills for data processing and automation
- Experience with ETL/ELT design and implementation
- Knowledge of database performance tuning, query optimization, and indexing strategies
- Experience with SQL performance debugging tools (XEvents, Profiler, or similar)
- Understanding of data modeling and dimensional design concepts
- Knowledge of Agile methodology and experience working in Agile teams
- Strong problem-solving and analytical skills
- Understanding of Agentic AI concepts and tools
- Excellent communication skills and ability to work with cross-functional teams
Desirable
- Knowledge of insurance or financial services domain
- Experience with Azure ML and machine learning pipelines
- Experience with Azure DevOps and CI/CD pipelines
- Familiarity with data visualization tools (Power BI, Tableau)
- Experience with NoSQL databases (Cosmos DB, MongoDB)
- Knowledge of Spark, Databricks, or other big data technologies
- Azure certifications (Azure Data Engineer Associate, Azure Database Administrator Associate)
- Experience with version control systems (Git, Azure Repos)
Tech Stack
- MS SQL Server 2016+, Azure SQL Database, Azure Data Factory, Azure ML, Azure Storage, Azure Functions, Python, T-SQL, Stored Procedures, ETL/ELT, SQL Performance Tools (XEvents, Profiler), Agentic AI Tools, Azure DevOps, Power BI, Agile, Git
Benefits
- 33 days of paid holiday
- Competitive compensation well above market average
- Work in a high-growth, high-impact environment with passionate, talented peers
- Clear path for personal growth and leadership development
Job Title : Java Developer
Experience : 2 to 10 Years
Location : Pune (Must be currently in Pune)
Notice Period : Immediate to 15 Days (Serving NP acceptable)
Budget :
- 2 to 3.5 yrs → up to 13 LPA
- 3.5 to 5 yrs → up to 18 LPA
- 5+ yrs → up to 25 LPA
Mandatory Skills : Java 8/17, Spring Boot, REST APIs, Hibernate/JPA, SQL/RDBMS, OOPs, Design Patterns, Git/GitHub, Unit Testing, Microservices (Good Coding Skills Mandatory)
Role Overview :
Hiring multiple Java Developers to build scalable and performance-driven applications. Strong hands-on coding and problem-solving skills required.
Key Responsibilities :
- Develop and maintain Java-based applications & REST services
- Write clean, testable code backed by JUnit unit tests
- Participate in code reviews, debugging & optimization
- Work with SQL databases, CI/CD & version control tools
- Collaborate with cross-functional teams in Agile setups
Good to Have :
- MongoDB, AWS, Docker, Jenkins/GitHub Actions, Prometheus, Grafana, Spring Actuators, Tomcat/JBoss

Position: Full Stack Developer (PHP CodeIgniter)
Company: Mayura Consultancy Services
Experience: 2 years
Location: Bangalore
Skills: HTML, CSS, Bootstrap, JavaScript, AJAX, jQuery, PHP, and CodeIgniter (CI)
Work Location: Work From Home (WFH)
Apply: If the role matches your skill set, please apply using the URL below. Once you complete the application form, we will review your profile.
Website:
https://www.mayuraconsultancy.com/careers/mcs-full-stack-web-developer-opening?r=jlp
Requirements :
- Prior experience in Full Stack Development using PHP Codeigniter
Perks of Working with MCS :
- Contribute to Innovative Solutions: Join a dynamic team at the forefront of software development, contributing to innovative projects and shaping the technological solutions of the organization.
- Work with Clients from across the Globe: Collaborate with clients from around the world, gaining exposure to diverse cultures and industries, and contributing to the development of solutions that address the unique needs and challenges of global businesses.
- Complete Work From Home Opportunity: Enjoy the flexibility of working entirely from the comfort of your home, empowering you to manage your schedule and achieve a better work-life balance while coding innovative solutions for MCS.
- Opportunity to Work on Projects Developing from Scratch: Engage in projects from inception to completion, working on solutions developed from scratch and having the opportunity to make a significant impact on the design, architecture, and functionality of the final product.
- Diverse Projects: Be involved in a variety of development projects, including web applications, mobile apps, e-commerce platforms, and more, allowing you to showcase your versatility as a Full Stack Developer and expand your portfolio.
Joining MCS as a Full Stack Developer opens the door to a world where your technical skills can shine and grow, all while enjoying a supportive and dynamic work environment. We're not just building solutions; we're building the future—and you can be a key part of that journey.
Role Overview
We are looking for a Senior Marketing Analytics professional with strong experience in Marketing Mix Modeling (MMM), Attribution Modeling, and ROI analysis. The role involves working closely with marketing and business leadership to deliver actionable insights that optimize marketing spend and drive business growth.
Key Responsibilities
- Analyze large-scale marketing and customer datasets to deliver actionable business insights.
- Build and maintain Marketing Mix Models (MMM) to measure media effectiveness and optimize marketing investments (see the simplified MMM fragment after this list).
- Design and implement attribution models (multi-touch, incrementality, lift analysis) to evaluate campaign performance.
- Perform ROI, CAC, ROAS, and funnel analysis across marketing channels (a toy calculation appears after the Good to Have list below).
- Write complex SQL queries to extract, combine, and analyze data from multiple sources.
- Use Python for statistical analysis, regression modeling, forecasting, and experimentation.
- Develop and publish Tableau dashboards and automated reports for leadership and stakeholders.
- Work with marketing platforms such as Google Analytics (GA4), Adobe Analytics, Salesforce Marketing Cloud, Marketo, or similar tools.
- Collaborate with cross-functional teams to define KPIs, reporting requirements, and analytics roadmaps.
- Present insights and recommendations clearly to senior leadership and non-technical stakeholders.
- Ensure data accuracy, consistency, and documentation of analytics methodologies.
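To make the MMM responsibility concrete, here is a deliberately simplified fragment: geometric adstock applied to media spend, then an ordinary least squares fit of sales on the transformed series with statsmodels. The data is synthetic, the 0.5 decay rate is assumed rather than estimated, and a production model would add saturation curves, seasonality, and holdout validation:

```python
# Toy MMM fragment: geometric adstock + OLS on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Carry a share of last period's effect into the current period."""
    out = np.zeros_like(spend)
    for t, s in enumerate(spend):
        out[t] = s + (decay * out[t - 1] if t else 0.0)
    return out

weeks = 104
tv = rng.uniform(50, 200, weeks)
search = rng.uniform(20, 100, weeks)
sales = 1000 + 3.0 * adstock(tv, 0.5) + 5.0 * search + rng.normal(0, 50, weeks)

X = sm.add_constant(np.column_stack([adstock(tv, 0.5), search]))
fit = sm.OLS(sales, X).fit()
print(fit.params)  # intercept plus per-channel response coefficients
```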
Required Skills & Qualifications
- 8+ years of experience in analytics, with a strong focus on marketing or digital analytics.
- Hands-on expertise in Marketing Mix Modeling (MMM) and Attribution Modeling.
- Strong proficiency in SQL and Python for data analysis.
- Experience with Tableau for dashboarding and automated reporting.
- Working knowledge of Google Analytics / GA4, Adobe Analytics, and marketing automation or CRM tools.
- Strong understanding of data modeling, reporting, and ROI measurement.
- Excellent stakeholder management, communication, and data storytelling skills.
- Ability to work independently in a fast-paced and ambiguous environment.
Good to Have
- Experience with Power BI / Looker / BigQuery
- Exposure to A/B testing, experimentation, or econometric modeling
- Experience working with large marketing datasets and cloud platforms
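And as a toy version of the ROI/CAC/ROAS analysis listed in the responsibilities, a few derived columns in pandas; every figure here is invented:

```python
# Toy channel economics: ROAS, CAC, and ROI from invented spend figures.
import pandas as pd

perf = pd.DataFrame({
    "channel": ["search", "social", "tv"],
    "spend": [120_000, 80_000, 300_000],
    "attributed_revenue": [480_000, 200_000, 660_000],
    "new_customers": [2_400, 1_600, 3_000],
})
perf["roas"] = perf["attributed_revenue"] / perf["spend"]
perf["cac"] = perf["spend"] / perf["new_customers"]
perf["roi"] = (perf["attributed_revenue"] - perf["spend"]) / perf["spend"]
print(perf)
```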
Position Overview:
As a BI (Business Intelligence) Developer, you will be responsible for designing, developing, and maintaining the business intelligence solutions that support data analysis and reporting. You will collaborate with business stakeholders, analysts, and data engineers to understand requirements and translate them into efficient, effective BI solutions. The role involves working with various data sources, designing data models, assisting with ETL (Extract, Transform, Load) processes, and developing interactive dashboards and reports.
Key Responsibilities:
1. Requirement Gathering: Collaborate with business stakeholders to understand their data analysis and reporting needs. Translate these requirements into technical specifications and develop appropriate BI solutions.
2. Data Modelling: Design and develop data models that effectively represent the underlying business processes and facilitate data analysis and reporting. Ensure data integrity, accuracy, and consistency within the data models.
3. Dashboard and Report Development: Design, develop, and deploy interactive dashboards and reports using Sigma Computing.
4. Data Integration: Integrate data from various systems and sources to provide a comprehensive view of business performance. Ensure data consistency and accuracy across different data sets.
5. Performance Optimization: Identify performance bottlenecks in BI solutions and optimize query performance, data processing, and report rendering. Continuously monitor and fine-tune the performance of BI applications.
6. Data Governance: Ensure compliance with data governance policies and standards. Implement appropriate security measures to protect sensitive data.
7. Documentation and Training: Document technical specifications, data models, ETL processes, and BI solution configurations.
8. Ensure that proposed solutions meet business needs and requirements.
9. Create and own Business/Functional Requirement Documents.
10. Monitor and track project milestones and deliverables.
11. Submit project deliverables, ensuring adherence to quality standards.
Qualifications and Skills:
1. Master's or Bachelor's degree in IT or a relevant field, with a minimum of 2-4 years of experience in Business Analysis or a related area.
2. Proven experience as a BI Developer or in a similar role.
3. Strong analytical and conceptual thinking skills, with demonstrated experience managing projects that implement platform solutions.
4. Excellent planning, organizational, and time management skills.
5. Strong understanding of data warehousing concepts, dimensional modelling, and ETL processes.
6. Proficiency in SQL and Snowflake for data extraction, manipulation, and analysis (a minimal extraction sketch follows this listing).
7. Experience with one or more BI tools such as Sigma Computing.
8. Knowledge of data visualization best practices and the ability to create compelling data visualizations.
9. Solid problem-solving and analytical skills with a detail-oriented mindset.
10. Strong communication and interpersonal skills to collaborate effectively with different stakeholders.
11. Ability to work independently and manage multiple priorities in a fast-paced environment.
12. Knowledge of data governance principles and security best practices.
13. Experience managing implementation projects of platform solutions for U.S. clients is preferable.
14. Exposure to the U.S. debt collection industry is a plus.
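Finally, a minimal sketch of the SQL/Snowflake extraction skill referenced in item 6 above, using the official snowflake-connector-python package; the account credentials, warehouse, and table name are placeholders invented for illustration:

```python
# Sketch: pull a reporting extract from Snowflake (all identifiers are placeholders).
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",    # placeholder
    user="your_user",          # placeholder
    password="your_password",  # placeholder
    warehouse="REPORTING_WH",
    database="ANALYTICS",
    schema="MART",
)
cur = conn.cursor()
cur.execute(
    """
    SELECT region, SUM(amount_collected) AS collected
    FROM fact_collections   -- illustrative table name
    GROUP BY region
    ORDER BY collected DESC
    """
)
for region, collected in cur.fetchall():
    print(region, collected)
cur.close()
conn.close()
```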