50+ SQL Jobs in India
JOB DETAILS:
- Job Title: Lead I - Data Science - Python, Machine Learning, Spark
- Industry: Global Digital Transformation Solutions Provider
- Experience: 5-10 years
- Job Location: Pune
- CTC Range: Best in Industry
JD for Data Scientist
Hands-on experience with data analysis tools:
Proficient in using tools such as Python and R for data manipulation, querying, and analysis.
Skilled in utilizing libraries like Pandas, NumPy, and Scikit-Learn to perform in-depth data analysis and modeling.
Skilled in machine learning and predictive analytics:
Expertise in building, training, and deploying machine learning models using frameworks such as TensorFlow and PyTorch.
Capable of performing tasks like regression, classification, clustering, and recommendation, leading to data-driven predictions and insights.
Expertise in big data technologies:
Proficient in handling large datasets using big data tools such as Spark.
Skilled in employing distributed computing and parallel processing techniques to ensure efficient data processing, storage, and analysis, enabling enterprise-level solutions and informed decision-making.
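As a toy illustration of the regression work described above, here is a minimal sketch using NumPy; the dataset is invented for demonstration:

```python
import numpy as np

# Invented toy dataset: one feature x with a known linear relationship to y
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, size=50)

# Ordinary least squares via a design matrix [x, 1]
X = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]
print(slope, intercept)   # estimates should land close to the true 3.0 and 2.0
```

In practice the same fit would be done through Pandas/Scikit-Learn or Spark MLlib, but the underlying least-squares idea is identical.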
Skills: Python, SQL, Machine Learning, and Deep Learning, with mandatory expertise in Generative AI.
Must-Haves
5–9 years of relevant experience in Python, SQL, Machine Learning, and Deep Learning, with mandatory expertise in Generative AI
Notice Period: Immediate joiners only
Location: Pune
The AI Data Engineer will be responsible for designing, building, and operating scalable data pipelines and curated data assets that power machine learning, generative AI, and intelligent automation solutions in an SLA-driven managed services environment. This role focuses on data ingestion, transformation, governance, and operational reliability across cloud and hybrid environments, enabling use cases such as knowledge retrieval (RAG), conversational AI, predictive analytics, and AI-assisted service management. The ideal candidate combines strong data engineering fundamentals with an understanding of AI workload requirements, including quality, lineage, privacy, and performance.
Key Responsibilities
• Design, build, and operate production-grade data pipelines that support AI/ML and generative AI workloads in managed services environments
• Develop curated, analytics-ready datasets and data products to enable model training, grounding, feature generation, and AI search/retrieval
• Implement data ingestion patterns for structured and unstructured sources (APIs, databases, files, event streams, documents)
• Build and maintain transformation workflows with strong testing and validation
• Enable Retrieval-Augmented Generation (RAG) by preparing document corpora, chunking strategies, metadata enrichment, and vector indexing patterns
• Integrate data pipelines with application services
• Support ITSM and enterprise workflow data needs, including ServiceNow data integration, CMDB/incident data quality improvements, and automation enablement
• Implement observability for data pipelines (monitoring, alerting, SLAs/SLOs) and perform root cause analysis for pipeline failures or data quality incidents
• Apply data governance and security best practices
• Collaborate with ML Engineers, DevOps/SRE, and solution architects to operationalize end-to-end AI solutions
• Contribute to reusable patterns, templates, and standards within the Bell Techlogix AI Center of Excellence
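The RAG-enablement responsibility above (document chunking plus metadata enrichment) can be sketched with a small dependency-free chunker; the field names and chunk sizes below are illustrative assumptions, not a prescribed format:

```python
def chunk_document(text: str, doc_id: str, size: int = 400, overlap: int = 50):
    """Split text into overlapping chunks and attach retrieval metadata.

    size/overlap are character counts here; production pipelines usually
    chunk by tokens and add richer metadata (source, section, access controls).
    """
    chunks = []
    step = size - overlap
    for i, start in enumerate(range(0, max(len(text), 1), step)):
        piece = text[start:start + size]
        if not piece:
            break
        chunks.append({
            "id": f"{doc_id}-{i}",      # stable ID for vector-index upserts
            "text": piece,
            "doc_id": doc_id,
            "offset": start,            # enables citation back to the source
        })
    return chunks

sample = "word " * 300                  # 1500 characters of toy content
print(len(chunk_document(sample, "kb-001")))   # → 5
```

Overlap between adjacent chunks preserves context that would otherwise be cut mid-sentence; the chunk dictionaries are what a pipeline would embed and upsert into a vector index.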
Required Qualifications
• Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent practical experience
• 5+ years of experience in data engineering, analytics engineering, or platform data operations
• Strong proficiency in SQL and Python; experience with data modeling and dimensional concepts
• Hands-on experience with Azure data services (e.g., Data Factory, Synapse, Databricks, Storage, Key Vault) or equivalent cloud tooling
• Experience building reliable pipelines with scheduling, dependency management, and automated testing/validation
• Experience supporting production data platforms with incident management, troubleshooting, and root cause analysis
• Understanding of data security, privacy, and governance principles in enterprise environments
Preferred Qualifications
• Experience enabling AI/ML workloads: feature engineering, training data preparation, and integration with Azure Machine Learning
• Experience with unstructured data processing for generative AI
• Familiarity with vector databases or vector search and RAG patterns
• Experience with event streaming and messaging
• Familiarity with ServiceNow data model and integration patterns (Table API, export, CMDB/ITSM reporting)
• Relevant certifications (Microsoft Azure Data Engineer, Azure AI Engineer, Databricks)
Team: Support Operations — Technical Solutions
Level: IC3 (4–7 years of relevant experience)
Location: India (Remote), IST time zone, with overlap with US East/Central teams
Reports To: Technical Manager
Manages: Not a people-manager role, but a lead role with real technical authority
Employment Type: Full-time
ABOUT DELTEK
Deltek is the leading global provider of software and solutions for project-based businesses — serving government contractors, professional services firms, and architecture & engineering companies. Our products help customers manage the full project lifecycle, from winning work and planning resources to executing delivery and getting paid.
The Support Operations Technical Solutions team sits inside Deltek's Customer Success organization. We build and maintain the internal tooling, integrations, and AI-powered workflows that allow Deltek's support and customer success teams to operate at scale — intelligent case routing, knowledge-base agents, data pipelines between Salesforce, Gainsight, and Oracle Service Cloud, and automation that removes manual work from high-volume support processes.
THE ROLE
We are looking for a Senior System Engineer to take technical ownership of our most complex solutions. This is not a management role — it is a senior individual contributor role with real architectural authority and a multiplier effect on the team around you.
You own problems end-to-end. You design the solution before writing the first line, consider downstream impacts before committing to an approach, and hold the technical bar for the work your team delivers. You are the person a junior engineer turns to when they're stuck, and the person a business stakeholder trusts to tell them whether an idea is feasible and what it will cost to maintain.
In your first year, you can expect to:
- Own the end-to-end design and delivery of major integrations and AI-enabled components from architecture through deployment and post-launch stability
- Lead solution design for the team's most complex problems using PHP, JavaScript, Workato, APIs, and Web Services
- Evaluate technology and platform tradeoffs and make defensible, documented recommendations that balance short-term delivery with long-term maintainability
- Apply AI, automation, and agentic architectures to business problems at production scale — not as experiments, but as shipped systems
- Anticipate performance, operational, and security risks before they reach production; design with those constraints in mind from day one
- Set engineering standards and review the work of IC1/IC2 engineers, making them better through structured feedback and clear design expectations
- Partner directly with CS operations leadership and cross-functional stakeholders to translate ambiguous business needs into concrete technical strategies
This role suits an engineer who is past proving they can build things, and is now focused on building the right things in the right way — and helping others do the same.
WHAT WE'RE LOOKING FOR
Must-Have Technical Skills
- PHP and JavaScript (production depth): You have designed and shipped non-trivial systems in these languages. You understand performance characteristics, know where the footguns are, and write code you'd be comfortable having reviewed by a senior peer.
- Integration architecture: You have designed system-to-system integrations — not just consumed APIs. You understand data flow, transformation logic, error handling, retry strategies, and idempotency.
- AI / LLM applied experience: You have built or led the build of AI-assisted workflows, LLM-based tools, or agentic systems in an operational or product context. You know the difference between a demo and a production-grade AI system.
- Relational databases (query and schema design): You write optimized SQL, design schemas with long-term maintainability in mind, and understand when a query will cause production problems before it does.
- Full-stack troubleshooting at depth: You can diagnose complex, multi-layer issues — across front-end, API, back-end, and database — and trace the root cause without being handed a reproduction case.
- Technical tradeoff analysis: When evaluating tools, platforms, or approaches, you can articulate the tradeoffs clearly — not just pick what you know best — and document the rationale in a way that holds up six months later.
- Agile technical leadership: You have led technical workstreams in a sprint-based environment: broken down epics, written meaningful acceptance criteria, and been accountable for team delivery quality.
- Documentation and design artifacts: You produce architecture diagrams, solution designs, and technical decision records that others can act on — not just notes for yourself.
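The integration-architecture expectations above (retry strategies, idempotency) can be illustrated with a short sketch; the `call_with_retry` helper and its flaky endpoint are hypothetical stand-ins for a real downstream API:

```python
import time

_applied = set()          # idempotency keys of requests already applied

def call_with_retry(send, payload, key, attempts=4, base_delay=0.01):
    """Retry a flaky call with exponential backoff; deduplicate by
    idempotency key so a replayed request is never applied twice."""
    if key in _applied:                    # replay of a completed request: safe no-op
        return "duplicate"
    last_err = None
    for attempt in range(attempts):
        try:
            result = send(payload)
            _applied.add(key)              # record success under the key
            return result
        except ConnectionError as err:     # retry only transient failures
            last_err = err
            time.sleep(base_delay * 2 ** attempt)
    raise last_err

# Hypothetical flaky endpoint: fails twice, then succeeds
calls = {"n": 0}
def flaky_send(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

first = call_with_retry(flaky_send, {"case": 42}, key="case-42")
second = call_with_retry(flaky_send, {"case": 42}, key="case-42")
print(first, second)   # → ok duplicate
```

In a real integration the key set would live in a durable store and the backoff would include jitter, but the shape of the design is the same: retries make a call reliable, idempotency keys make retries safe.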
Must-Have Leadership & Soft Skills
- Technical mentorship: You actively make the engineers around you better. Code reviews are teaching opportunities, not gatekeeping. Design reviews are conversations, not approvals.
- Stakeholder communication: You can translate a technical constraint into a business impact, and a business requirement into a technical specification. You don't hide behind jargon or over-simplify to avoid hard conversations.
- Ownership under ambiguity: When a problem is poorly defined, you ask the right questions to define it — then own the answer. You don't wait for complete requirements before starting to think.
- Proactive risk management: You raise issues before they become incidents. You've learned from production failures and carry those lessons into design decisions.
- Business context awareness: You understand how the systems you build affect end users and business operations. You've made engineering decisions informed by that context, not just by technical preference.
Nice-to-Have Skills
Prioritized by relevance to this team's current and near-term roadmap:
Oracle Service Cloud
Workato / iPaaS
Salesforce
Gainsight
Agentic AI / LLM Ops
Snowflake
Microsoft Power BI
Microsoft Power Apps
Cloud-native development
Experience designing agentic AI systems — not just integrating LLM APIs — is highly relevant to where this team is going. Candidates who have shipped multi-step agent architectures with tool-calling, memory, and guardrails will stand out.
RESPONSIBILITIES
Design & Architecture
- Own end-to-end technical solution design — from requirements through architecture, implementation, and post-launch stability — for the team's most complex initiatives
- Lead solution design using PHP, JavaScript, Workato, APIs, and Web Services; ensure solutions are scalable, maintainable, and aligned with established governance standards
- Evaluate tradeoffs across tools, platforms, and architectural patterns; produce documented recommendations that account for both short-term delivery needs and long-term operational cost
- Anticipate downstream impacts, performance bottlenecks, and operational risk during the design phase — not as an afterthought in retrospect
- Author and maintain Architecture Decision Records (ADRs) and technical design documents for all major solution components
AI, Automation & Integration
- Apply AI, automation, and agentic architectures to complex business problems at production scale — designing for reliability, observability, and graceful failure
- Lead the integration of AI-enabled components (LLM workflows, intelligent routing, agentic tools) into the team's operational platform
- Design and oversee integrations between Deltek's CS platforms (Oracle Service Cloud, Salesforce, Gainsight) and internal data systems, ensuring data integrity, performance, and auditability
- Evaluate new AI frameworks, LLM providers, and automation platforms; provide grounded, implementation-level recommendations rather than theoretical assessments
Technical Leadership & Mentoring
- Serve as the primary technical reviewer for IC1/IC2 engineers — conducting structured code and design reviews that build capability, not just ship code
- Break down complex initiatives into well-scoped workstreams that junior engineers can execute with confidence and appropriate independence
- Establish and enforce engineering standards: code quality, documentation, testing coverage, deployment practices, and incident response
- Identify skill gaps in the team and work with the manager to address them through pairing, documentation, or structured learning
Stakeholder & Cross-functional Engagement
- Translate ambiguous business and operational requirements from CS leadership into concrete technical strategies with clear milestones and measurable outcomes
- Engage directly with senior stakeholders — CS operations leads, product owners, IT — to align on priorities, surface risks, and manage technical expectations
- Represent the technical perspective of the team in cross-functional planning and architecture discussions
Operate & Improve
- Own post-launch stability of solutions you design: monitor, respond to incidents, and drive root-cause resolution rather than symptom fixes
- Drive continuous improvement of the team's delivery practices: identify process friction, propose solutions, and follow through on implementation
- Stay current on AI, automation, and integration technology evolution; bring relevant advances back to the team with a concrete point of view on applicability
QUALIFICATIONS
- Education: Bachelor's degree in Computer Science, Electrical or Electronics Engineering, or a related technical discipline. Equivalent demonstrated experience considered.
- Experience: 4–7 years of hands-on experience in software engineering, systems integration, or closely related work, with at least 2 years at a level where you have owned technical design decisions — not just implemented them.
- Coding evidence: A portfolio, GitHub profile, architecture document, or production system you can speak to in depth. At IC3, we expect you to be able to walk through a non-trivial design decision you made and defend the tradeoffs.
- AI / ML: Practical, production-level experience with LLMs or AI tooling — not just prompt engineering or personal experimentation. Familiarity with frameworks such as LangChain, OpenAI APIs, or similar platforms is a strong plus.
- Collaboration model: Comfortable working as a technical authority in a distributed team. The role requires regular IST overlap with US East/Central stakeholders (approximately 6:30 PM – 10:30 PM IST for at least part of the week).
- Language: Strong written and spoken English. At IC3, much of your influence operates through written design documents, async reviews, and stakeholder communications. Precision in writing matters.
WHAT TO EXPECT WORKING HERE
- Technical authority with real impact — your design decisions ship to production and affect how thousands of Deltek customers experience support
- Exposure to production AI/agentic systems and direct involvement in shaping where the team's AI roadmap goes next
- A team where senior engineers are trusted to lead, not managed step-by-step — you will have autonomy commensurate with your accountability
- Structured growth path: IC3 engineers who demonstrate architectural leadership and cross-functional influence have a clear track toward Staff or Associate Director scope
- Regular 1:1s, design review forums, and a manager who will invest in your growth rather than just your output
Team: Support Operations — Technical Solutions
Level: IC2 (1–3 years of relevant experience)
Location: India (Remote) — IST time zone, with overlap with US East/Central teams
Reports To: Tech Manager
Employment Type: Full-time
THE ROLE
We are looking for a System Engineer (IC2) to join our Technical Solutions team based in India. This is a hands-on engineering role: you will build, integrate, and support the systems that power our customer-facing and internal support operations.
In your first year, you can expect to:
- Build and maintain integrations between support platforms (Oracle Service Cloud, Salesforce, Gainsight) using PHP, JavaScript, and Workato
- Contribute to AI-assisted workflow automation — including LLM-based tools and intelligent routing solutions already in production
- Write and optimize SQL queries against our operational data stores to power dashboards, reports, and automated triggers
- Troubleshoot issues across the full stack (front-end, API layer, back-end logic, and database) and document root-cause findings
- Work in a sprint-based environment alongside engineers, CS operations leads, and product stakeholders across the US and India
This role is well-suited for someone who is early in their career but already has real project or production experience. You will work with guidance from senior engineers while taking genuine ownership of defined workstreams. The expectation is not that you know everything on day one — it is that you are technically curious, structured in your thinking, and driven to ship things that work.
WHAT WE'RE LOOKING FOR
Must-Have Technical Skills
- PHP and JavaScript: Hands-on experience building or maintaining web applications, APIs, or internal tools. You have written code that went somewhere beyond your laptop.
- REST/SOAP APIs and Web Services: You understand how system-to-system data flows work and have built or consumed integrations in a real context.
- Relational databases and SQL: You can write optimized queries, understand joins and indexes, and are comfortable reading a schema you didn't design.
- Full-stack troubleshooting: When something breaks, you know how to methodically trace the issue across front-end, back-end, and database layers — not just escalate it.
- Documentation: You can translate what you built into clear written artifacts — requirements, workflow diagrams, solution designs — that a non-engineer can follow.
- Agile/sprint delivery: You have worked in a structured sprint environment and are comfortable with ceremonies, tickets, and incremental delivery.
Must-Have Soft Skills
- Root-cause orientation: You don't patch symptoms and move on. You want to understand why something broke before deciding how to fix it.
- Self-driven with good judgment: You can manage your own time on a defined problem, identify when you're stuck and need input, and flag risks before they become blockers.
- Clear communicator across audiences: You can explain a technical problem to a non-technical stakeholder and a design decision to a senior engineer — in writing and in a call.
- Collaborative: You work well with people you've never met in person, across time zones, and with stakeholders who don't share your technical background.
Nice-to-Have Skills
The following are not required for the role, but candidates with depth in any of these areas will stand out. Listed in rough order of relevance to this team's current work:
Oracle Service Cloud
Workato / iPaaS
Salesforce
Gainsight
AI / LLM integration
Snowflake
Microsoft Power BI
Microsoft Power Apps
Cloud-native development
Experience with AI tools (GitHub Copilot, LLM APIs, automation agents) used in an operational or product context — not just personal experimentation — is a genuine plus for this team.
RESPONSIBILITIES
At the IC2 level, you will primarily execute within defined frameworks and grow your independent scope over time. The following reflects what you will own and contribute to:
Build & Integrate
- Build and maintain AI-enabled workflows, platform integrations, and internal tools using PHP, JavaScript, Workato, and Web Services
- Develop prototypes and proofs of concept; contribute to production deployments under senior guidance
- Implement and test integrations between Deltek's support platforms and internal data systems
Analyse & Solve
- Break down defined problems into actionable tasks; identify risks, dependencies, and edge cases before they surface in production
- Troubleshoot complex issues across the full stack and document root cause findings clearly
- Investigate stakeholder-reported issues to identify whether the problem is technical, process-related, or both
Operate & Improve
- Follow established governance, architecture, and deployment processes; raise improvement suggestions through proper channels
- Write and maintain documentation for systems, workflows, business rules, and solution designs
- Participate actively in sprint ceremonies; manage your own tasks and flag blockers early
- Demonstrate continuous learning in AI, automation, and integration technologies — this space moves fast and curiosity is part of the job
QUALIFICATIONS
- Education: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related technical discipline. Equivalent practical experience considered.
- Experience: 1–3 years of hands-on experience in software engineering, systems integration, or a closely related field. Internship and co-op experience counts if it involved real production systems.
- Coding: Demonstrable PHP and/or JavaScript experience — a portfolio, GitHub profile, or code sample you can speak to will strengthen your application.
- Collaboration model: Comfortable working remotely with distributed teams. The role requires regular overlap with US East/Central time zones (approximately 6:30 PM – 10:30 PM IST for at least part of the week).
- Language: Strong written and spoken English is essential — much of the collaboration with stakeholders and senior engineers happens asynchronously in writing.
WHAT TO EXPECT WORKING HERE
- A small, technically-focused team where your work is visible and your contributions are directly tied to outcomes customers feel
- Exposure to production AI/LLM systems, not just theoretical discussions about AI
- A culture that values root-cause thinking and good documentation over heroics and quick fixes
- Growth path: engineers who demonstrate technical depth and ownership at IC2 have a clear track toward IC3 (mid-level) scope within 18–24 months
- Regular 1:1s and structured feedback — this team invests in making you better, not just keeping you busy
Actosoft is a software development company that offers complete IT solutions.
We are a collective of focused, energetic, talented, and hardworking professionals who believe in getting things done at the highest level. Our team aims to innovate, be authentic and grow in everything that we do.
The ideal candidate should be familiar with the complete software design life cycle. They should have experience in designing, coding, testing, and consistently managing applications, be comfortable coding in multiple languages, and test their code to maintain high quality.
Job details:
Job Location: Actosoft, Gajera Rd, beside Avalon Business Hub, Katargam, Surat, Gujarat 395004
Experience: 0 to 1 year of experience
Salary: 10,000 to 20,000 (per month)
Job Type: Full-time – Work from Office
Working Schedule:
9:00 am to 6:00 pm (Monday to Friday)
Alternate Saturdays Off.
Job Responsibilities:
● Design, code, test, and manage various applications
● Collaborate with the engineering team and product team to establish the best products
● Follow outlined standards of quality related to coding and systems
● Develop automated tests and conduct performance tuning
● Create and support documentation for all new applications
● Work collaboratively as a team member
Qualifications:
● Bachelor's degree in Computer Science or relevant field, like MCA, BCA, or BE
● Experience developing web-based applications in C#, HTML, VBScript/ASP, and .NET
● Experience working with MS SQL Server and MySQL
● Knowledge of practices and procedures for the full software design life cycle
● Experience working in an agile development environment
Required Skills:
● .NET Framework
● C#
● Microsoft SQL Server
● JavaScript
● jQuery
● ASP.NET MVC
● ASP.NET Web API
● HTML
● WCF Services
● PL/SQL
● Angular
● Entity Framework
● CSS
● Ajax
● XML
Perks and Benefits:
1. Evaluation for Bonus and Promotion every year.
2. Incredible opportunity to diversify your skills by working with experts on unique projects.
Industry
- Computer Software
Employment Type
Full-time
Job Summary
We are looking for a strong QAD Developer to support a US-based client from our Applix offshore delivery center. The role requires a self-driven engineer who can independently handle QAD-related customizations, enhancements, implementation support, troubleshooting, and ongoing production support.
The ideal candidate should be comfortable working directly with functional stakeholders, understanding business requirements, converting them into technical solutions, and supporting deployment and stabilization activities with minimal supervision.
Shift:
- Second shift / US overlap
- Regular working hours will extend up to 11:30 PM IST on certain business days.
Required Skills
- Strong hands-on experience in QAD ERP development and customization
- Good understanding of QAD technical architecture
- Experience in custom development, reports, forms, interfaces, and enhancements
- Good understanding of manufacturing/business process flows in ERP environments
- Ability to troubleshoot production issues independently
- Strong SQL knowledge for data analysis, backend troubleshooting, and query handling
- Experience supporting implementations, rollouts, or enhancement projects
- Good communication skills and ability to interact with US-based teams
Preferred Skills
- Experience in manufacturing industry environments
- Exposure to integrations, EDI, or external system interfaces
- Experience supporting QAD implementations or upgrades
- Familiarity with change management, release processes, and production support practices
Key Traits
- Self-sufficient and proactive
- Able to work with minimal supervision
- Strong ownership mindset
- Comfortable in a client-facing offshore support model
- Able to handle second-shift working hours consistently
- Excellent verbal and written communication skills, with the ability to clearly explain technical issues, progress, risks, and dependencies to US-based client teams
- Proactive ownership mindset, with the ability to independently drive QAD customizations, issue resolution, and implementation tasks from analysis through closure with minimal supervision
Key Responsibilities
- Develop, customize, and support QAD ERP solutions based on business requirements
- Handle QAD-related enhancements, bug fixes, and implementation activities
- Work on forms, reports, custom programs, interfaces, and data handling within the QAD environment
- Analyze functional requirements and convert them into technical design and development tasks
- Support issue investigation, root cause analysis, and defect resolution in production and non-production environments
- Collaborate with client stakeholders, functional teams, and internal delivery teams during requirement clarification, development, testing, and deployment
- Perform unit testing and support SIT/UAT cycles
- Assist in data migration, configuration support, and deployment activities as needed
- Maintain proper technical documentation for customizations, fixes, and implementation changes
- Work independently during offshore support hours and provide timely progress and issue updates
Job Summary
We are looking for a strong SQL Developer to support a US-based client from our Applix offshore delivery center. This role requires a self-sufficient engineer who can independently manage SQL development, database troubleshooting, data fixes, query optimization, backend support for application changes, and customizations and implementations tied to business needs.
The ideal candidate should be comfortable working closely with application teams and business stakeholders to understand data flows, support development needs, and resolve production issues with minimal supervision.
Shift:
- Second shift / US overlap
- Regular working hours will extend up to 11:30 PM IST on certain business days.
Required Skills
- Strong hands-on experience in SQL development
- Strong experience with stored procedures, views, functions, joins, indexing, and performance tuning
- Good experience in data analysis, troubleshooting, and backend support
- Ability to write efficient, scalable, and maintainable SQL code
- Experience supporting production issues and implementing fixes independently
- Good understanding of database design principles and data integrity
- Ability to work with application teams on customization and implementation needs
- Strong communication and problem-solving skills
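The indexing and performance-tuning skills above can be demonstrated with SQLite (Python stdlib), standing in here for the client's actual database; the table and column names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
con.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table or uses an index
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer_id = 7"
before = plan(query)                 # full table scan: every row examined
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)                  # index search: only matching rows touched
print(before)
print(after)
```

Reading the plan before and after adding an index is the same diagnostic loop a SQL developer runs against production slow queries, just with the engine's own EXPLAIN tooling.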
Preferred Skills
- Experience supporting ERP applications, preferably manufacturing-related systems
- Experience with data migration, ETL, reporting, or interface support
- Exposure to QAD or similar ERP environments
- Experience in a client-facing offshore support model
Key Traits
- Self-sufficient and dependable
- Strong analytical mindset
- Able to independently own issues from analysis to closure
- Comfortable working in extended overlap with US teams
- Able to manage priorities with minimal supervision
- Excellent verbal and written communication skills, with the ability to clearly document findings, explain data/database issues, and provide timely updates to US-based client teams
- Strong ownership and proactive follow-through, with the ability to independently analyze, troubleshoot, optimize, and close SQL/data-related issues without constant direction
Key Responsibilities
- Develop, maintain, and optimize SQL queries, stored procedures, functions, views, and backend database objects
- Support application customizations and implementations through database development and data-level troubleshooting
- Analyze and resolve production issues related to data, performance, and SQL logic
- Perform query tuning and performance optimization for existing and new database objects
- Support data extraction, transformation, validation, and migration activities
- Work closely with QAD/application teams to support enhancements, integrations, and issue resolution
- Assist in deployment, testing, and stabilization of new changes
- Perform root cause analysis for database and data-related issues
- Maintain technical documentation for database changes, fixes, and support activities
- Provide reliable offshore support during second shift with timely communication and status updates
Responsibilities:
Use quantitative methods such as business simulations, data mining, modeling, and advanced statistical techniques to solve problems. The Data Scientist contributes by serving as a technical lead for analytics initiatives of low‑to‑medium complexity or business impact and supporting high‑profile, enterprise initiatives such as the Engineered Value Chain.
In this role, you will act as an individual contributor on analytic teams, partnering on cross‑functional projects, and guiding technical delivery. You will also mentor procurement professionals on the technical approaches used to solve problems presented by business units, service organizations, dealers, or customers.
Job duties/Responsibilities include but not limited to:
- Lead and deliver analytics initiatives by defining analytical approaches, building models, and translating insights into business actions for procurement and enterprise stakeholders.
- Develop, train, validate, and monitor predictive models using a broad set of machine learning/statistical methods to support targeted business outcomes.
- Design and implement ETL/data pipelines and integrate data sources to create safe, trusted datasets for reporting and analytics (including Snowflake and SQL-based workflows).
- Build executive-ready dashboards and decision tools (e.g., Power BI) that enable data‑driven leadership decisions.
- Apply data modelling best practices (conceptual, logical, physical models) and support integration/transformation patterns in analytics environments and warehouses.
- Partner cross‑functionally (Procurement, Digital/IT, Finance, operations stakeholders) to deploy analytics solutions into production and ensure adoption.
- Operate with strong data governance and operational rigor, including troubleshooting data issues, managing access/user needs, and supporting reliable analytics operations.
- Use modern engineering practices (e.g., GitLab/DevOps toolchains) to improve repeatability, scalability, and maintainability of analytics solutions.
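As a rough illustration of the ETL/data-pipeline responsibility above, here is a minimal extract-transform-load sketch in plain Python; the rows, column names, and aggregation are invented for the example and stand in for the Snowflake/SQL workflows the role describes:

```python
# Minimal extract-transform-load sketch; source rows and field names are illustrative.
raw = [
    {"part": "A-100", "spend": "1,200.50", "region": "NA"},
    {"part": "B-200", "spend": "850.00", "region": "EU"},
    {"part": "A-100", "spend": "300.00", "region": "NA"},
]

def transform(row):
    # Normalize the spend field from formatted text into a float.
    return {**row, "spend": float(row["spend"].replace(",", ""))}

curated = [transform(r) for r in raw]

# "Load": aggregate into an analytics-ready summary keyed by part.
summary = {}
for r in curated:
    summary[r["part"]] = summary.get(r["part"], 0.0) + r["spend"]

print(summary)  # {'A-100': 1500.5, 'B-200': 850.0}
```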
Must Have Skills
- Strong AI/ML background across model development and validation, including methods such as time series, clustering, tree-based algorithms, generalized linear models, or neural networks.
- Strong SQL + Snowflake proficiency for ETL, transformation, and analytics-ready datasets.
- Experience with cloud solutions, solution integration, IT operations, and data governance.
- Proficiency in Python programming language.
- Proficiency in Prompt Engineering.
- Experience with front-end technologies such as HTML, CSS, and JavaScript.
- Experience with back-end technologies such as Django, Flask, or Node.js.
- Solid grasp of database technologies such as MySQL, PostgreSQL, or MongoDB.
- Creation of CI/CD pipelines for ML algorithms, training, and prediction pipelines
- Proficiency in Machine Learning Operations.
- Work with business stakeholders and data scientists to ensure value realization and ROI on operationalized models
- Strong understanding of statistical analysis and machine learning algorithms.
- Containerization and packaging of ML models
- Experience with data visualization tools such as Power BI.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
- Experience with Large Language Models (LLMs) and Natural Language Processing (NLP) technologies.
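The data-munging point above (cleaning, transformation, normalization) can be made concrete with a tiny min-max normalization helper; the feature values are illustrative:

```python
# Min-max normalization, a common munging step before model training.
def min_max(values):
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0  # guard against a constant column
    return [(v - lo) / span for v in values]

# Illustrative feature column.
ages = [20, 30, 40, 60]
print(min_max(ages))  # [0.0, 0.25, 0.5, 1.0]
```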
Good To Have Skills:
- Experience with cloud-first and agile methodologies.
Required Skills:
- Degree in Computer Science, Business, Mathematics, Economics, Statistics, Engineering, or related field.
- Proficiency in Java 8+.
- Solid understanding of REST APIs (Spring Boot), microservices, databases (SQL/NoSQL), and caching systems like Redis/Aerospike.
- Familiarity with cloud platforms (AWS, GCP, Azure) and DevOps tools (Docker, Kubernetes, CI/CD).
Summary:
Data Engineer/Analytics Engineer with experience in semantic layer modeling using AtScale, building scalable data pipelines, and delivering high-performance analytics solutions on cloud platforms.
⸻
Responsibilities
• Build and maintain ETL/ELT pipelines for large-scale data
• Develop semantic models, cubes, and metrics in AtScale
• Optimize query performance and BI dashboards
• Integrate data platforms (Snowflake, Databricks, BigQuery)
• Collaborate with analysts and business teams
⸻
Skills
• SQL, Python/Scala
• Data modeling (star schema, OLAP)
• AtScale (semantic layer)
• Spark, dbt, Airflow
• BI tools (Tableau, Power BI, Looker)
• AWS / GCP / Azure
⸻
Experience
• 3–8+ years in data/analytics engineering
• Experience with enterprise data platforms and BI systems
Senior Project Owner / Project Manager Technology
Department - Technology / Software Development
Work Mode - Work From Home (WFH), Full Time
Experience - Minimum 10 Years (Development Background)
Time Zone - Candidate should be comfortable working in US time zone overlap and attending client calls accordingly.
ROLE SUMMARY
We are looking for a seasoned Senior Project Owner / Project Manager with a strong development foundation to lead our technology initiatives. This role bridges client management and technical execution: you will own end-to-end delivery of multiple concurrent projects while supporting a high-performing remote team.
KEY RESPONSIBILITIES
Project & Delivery Management
- Own and manage multiple concurrent technology projects from initiation to production release
- Define project scope, timelines, milestones, and resource allocation plans
- Distribute tasks effectively across a team of developers, QA, and support engineers
- Track assigned work daily, follow up on progress, and proactively remove blockers
- Ensure all projects meet deadlines and quality benchmarks without compromise
- Participate actively in production activities and take full accountability for live deployments
US Client Management
- Serve as the Technology single point of contact for all assigned US clients
- Attend and lead client calls that are focused on an ARDEM Technical Solution. This may include discussions related to future clients or existing clients (US time zone overlap required)
- Resolve client queries, manage escalations, and ensure high client satisfaction
- Showcase company-developed applications and software demos confidently to clients
- Translate complex client requirements into clear technical deliverables for the team
Team Leadership
- Lead, mentor, and performance-manage a distributed remote team of technical members
- Foster accountability, ownership, and a high-delivery culture within the team
- Conduct sprint planning, stand-ups, retrospectives, and performance reviews
- Identify skill gaps and work with HR/training teams to bridge them
Process & Operations
- Deeply understand ARDEM's internal processes and align project execution accordingly
- Ensure development standards and best practices are followed across all projects
- Manage crisis situations with composure, identify root causes and drive swift resolution
- Coordinate with cross-functional teams including HR, Operations, Training, and QA
- Maintain project documentation, status reports, and risk registers
REQUIRED EXPERIENCE
- 10+ years of total experience in software development and project management
- 5–7 years of hands-on coding experience in one or more technologies listed below
- 2–3 years in a team management or tech lead role overseeing 5+ members
- Proven experience managing multiple simultaneous projects in a remote/WFH environment
- Prior experience working with US-based clients and a strong understanding of US work culture and expectations
TECHNICAL SKILLS
- Python: scripting, automation, data processing, backend services
- JavaScript / Node.js: server-side development, REST APIs, async workflows
- .NET Core: enterprise application development and service integration
- SQL Databases: query optimization, schema design, stored procedures
- Familiarity with CI/CD pipelines, Git workflows, and deployment processes
- Ability to review code, understand architectural decisions, and guide the team technically
SKILLS & COMPETENCIES
- Exceptional verbal and written communication skills in English; client-facing confidence is a must
- Strong crisis management and conflict resolution ability under tight deadlines
- Highly organized with a structured approach to planning, prioritization, and execution
- Self-driven and accountable; capable of operating independently in a remote environment
- Strong presentation skills; able to demo software to non-technical stakeholders
- Empathetic leadership style with the ability to motivate and align diverse team members
QUALIFICATIONS
- Bachelor's or Master's degree in Computer Science
- PMP Certification: Preferred (candidates without PMP must demonstrate equivalent project management rigor)
- Agile / Scrum certifications (CSM, PMI-ACP) are an added advantage
LOCATION PREFERENCE
- Candidates must be based in a Tier-1 city: Mumbai, Delhi NCR, Bengaluru, Hyderabad, Chennai, Pune, or Kolkata
- This is a full-time Work From Home role: reliable internet, a dedicated workspace, and availability during US business hours are mandatory
ABOUT ARDEM
ARDEM Incorporated is a leading Business Process Outsourcing (BPO) and Automation company serving US-based clients across diverse industries. Our Technology Team builds and maintains in-house applications that power data processing pipelines, automation workflows, internal platforms, and domain-specific training modules, all engineered to deliver operational excellence at scale. To our clients, we provide cloud-based platforms to assist in their day-to-day business analytics. Our cloud services focus on finance, logistics, and utility management.
About the role:
We are looking for a skilled Data Engineer with hands-on expertise in Dagster orchestration, or in GCP with BigQuery and Apache Airflow, along with modern data pipeline development and architecture implementation. The ideal candidate will design, build, and optimize scalable data pipelines with strong SQL proficiency and data modelling expertise.
Key Responsibilities
• Design, develop, and maintain scalable data pipelines using Dagster.
• Build and manage Dagster components such as:
o Ops / Assets
o Schedules
o Sensors
o Jobs
o Resource definitions
• Implement and maintain Medallion Architecture (Bronze, Silver, Gold layers).
• Write optimized and production-grade SQL scripts for transformations and data validation.
• GCP, BigQuery, and Apache Airflow: expertise here is a must if you are not familiar with Dagster and orchestration.
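The Medallion Architecture named above can be sketched in plain Python, independent of Dagster or Airflow; the record shapes and field names here are hypothetical:

```python
# Bronze: raw records as ingested (duplicates and bad values included).
bronze = [
    {"order_id": 1, "qty": "2"},
    {"order_id": 1, "qty": "2"},      # duplicate ingest
    {"order_id": 2, "qty": "oops"},   # malformed value
    {"order_id": 3, "qty": "5"},
]

# Silver: deduplicated, typed, validated records.
seen, silver = set(), []
for r in bronze:
    if r["order_id"] in seen:
        continue
    try:
        qty = int(r["qty"])
    except ValueError:
        continue  # a real pipeline would quarantine malformed rows
    seen.add(r["order_id"])
    silver.append({"order_id": r["order_id"], "qty": qty})

# Gold: business-level aggregate ready for reporting.
gold = {"total_qty": sum(r["qty"] for r in silver)}
print(silver, gold)  # [{'order_id': 1, 'qty': 2}, {'order_id': 3, 'qty': 5}] {'total_qty': 7}
```

In Dagster, each layer would typically become an asset so lineage and freshness are tracked per layer.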
Must Have
• 3+ years of experience in Data Engineering.
• Strong hands-on experience with Dagster and workflow orchestration.
• Strong hands-on experience with GCP, BigQuery, and Apache Airflow.
• Solid understanding of data pipeline design patterns.
• Experience implementing Medallion Architecture.
• Advanced SQL skills (complex joins, CTEs, performance tuning).
• Experience working with GCP cloud data platform.
Why Join Us:
• Collaborative work environment.
• Exposure to modern tools and scalable application architectures.
• Medical cover for employee and eligible dependents.
• Tax beneficial salary structure.
• Comprehensive leave policy
• Competency development training programs.
AccioJob is conducting a Walk-In Hiring Drive with GoComet for the position of Software Engineer.
Apply Now:
https://go.acciojob.com/3v2b64
Required Skills: Intermediate DSA, SQL, Aptitude, REST APIs, React
Eligibility:
Degree: BTech./BE
Branch: Computer Science/CSE/Other CS related branch, IT, Electrical/Other electrical related branches
Graduation Year: 2025, 2026, 2027
Work Details:
Work Location: Bangalore Urban (Onsite)
CTC: ₹10 LPA to ₹12 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Bangalore Centre, AccioJob Chennai Centre, AccioJob Hyderabad Centre, AccioJob Noida Centre, AccioJob Pune Centre
Further Rounds (for shortlisted candidates only):
Resume Evaluation, Coding Assignment, Technical Interview 1, Technical Interview 2, Technical Interview 3
Important Note: Bring your laptop & earphones for the test.
Register here:
https://go.acciojob.com/3v2b64
Who are we ?
Searce means ‘a fine sieve’ & indicates ‘to refine, to analyze, to improve’. It signifies our way of working: To improve to the finest degree of excellence, ‘solving for better’ every time. Searcians are passionate improvers & solvers who love to question the status quo.
The primary purpose of all of us, at Searce, is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, everyday.
Tech Superpowers
End-to-End Ecosystem Thinker: You build modular, reusable data products across ingestion, transformation (ETL/ELT), and consumption layers. You ensure the entire data lifecycle is governed, scalable, and optimized for high-velocity delivery.
The MDS Architect: You reimagine business with the Modern Data Stack (MDS) to deliver Data Mesh implementations and real value. You treat every dataset as a measurable "Data Product" with a clear focus on ROI and time-to-insight.
Distributed Compute & Scale Savant: You craft resilient architectures that survive petabyte-scale volume and data skew without "breaking the bank". You prove your designs with cost-performance benchmarks, not just slideware.
AI-Ready Orchestrator: You engineer the bridge between structured data and unstructured/vector stores. By mastering pipelines for RAG models and GenAI, you turn raw data into the fuel for intelligent, automated workflows.
The Quality Craftsman (Builder @ Heart): You are an outcome-focused leader who lives in the code. From embedding GDPR/PII privacy-by-design to optimizing SQL, Python, and Spark daily, you ensure integrity is baked into every table.
Experience & Relevance
Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads
Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
AI-Native Workflow: You don't just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.
Architectural Portfolio: Evidence of leading 2-3 large-scale transformations, including platform migrations, data lakehouse builds, or real-time analytics architectures.
Mentor & Elevate the Squad: Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling, and ensure the squad adopts modern engineering standards like CI/CD for data.
Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO's business vision into a Lead Engineer's technical specification without losing anything in translation.
The "Solver" Mindset: A track record of solving 'impossible' data problems, whether it's fixing massive data skew, optimizing spiraling cloud costs, or architecting 99.9%-available data services.
6+ years of hands-on development experience and in-depth knowledge of Java, Spring, Spring Boot, and Quarkus; nice to have: front-end technologies like Angular and React JS
● Excellent Engineering skills in designing and implementing scalable solutions
● Good knowledge of CI/CD Pipeline with strong focus on TDD
● Strong communication skills and ownership
● Exposure to Cloud, Kubernetes, Docker, Microservices is highly desired.
● Experience in working on public cloud environments like AWS, Azure, GCP w.r.t. solutions development, deployment & adoption of cloud-based technology components like IaaS / PaaS offerings
● Proficiency in PL/SQL and Database development.
Strong in J2EE & OOP design patterns.
Senior Software Engineer – SQL Server / T-SQL
Chennai | IIT Madras Research Park | Full-Time
About Novacis Digital
Novacis Digital is a product-first technology company building AI-driven platforms and large-scale data systems. Our products process complex, high-volume data to power real-time analytics and GenAI-driven experiences.
We don’t see SQL as “just a database layer” - we treat it as a core compute engine. If you love writing efficient SQL and solving performance problems, this is the role for you.
What You Will Do
· Design and build complex T-SQL stored procedures involving Dynamic SQL, along with views, functions, and triggers
· Implement flexible, metadata-driven query frameworks using sp_executesql and parameterized Dynamic SQL
· Engineer high-performance, set-based queries using CTEs, window functions, temp tables and table variables
· Optimize queries using execution plans, statistics and DMVs
· Refactor inefficient queries and redesign schemas for performance and scalability
· Solve real-world challenges related to locks, blocking, deadlocks and transaction isolation
· Collaborate with application engineers to build reliable, high-performance data access layers
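The parameterized Dynamic SQL pattern described above can be approximated outside SQL Server. This Python/sqlite3 sketch mirrors the sp_executesql discipline: identifiers (which cannot be bound) go through a whitelist, while values always travel as bound parameters; the table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (name TEXT, qty INTEGER)")
conn.executemany("INSERT INTO parts VALUES (?, ?)",
                 [("bolt", 10), ("nut", 25), ("washer", 7)])

# Metadata-driven query: the selected column varies, but values are always
# bound parameters - the same separation sp_executesql enforces in T-SQL.
ALLOWED_COLUMNS = {"name", "qty"}  # identifiers must be whitelisted, not bound

def fetch(column, min_qty):
    if column not in ALLOWED_COLUMNS:
        raise ValueError(f"unknown column: {column}")
    sql = f"SELECT {column} FROM parts WHERE qty >= ? ORDER BY qty"
    return [row[0] for row in conn.execute(sql, (min_qty,))]

print(fetch("name", 10))  # ['bolt', 'nut']
```

In T-SQL the equivalent is building the statement string around fixed, validated identifiers and passing the value list to sp_executesql as declared parameters.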
What We’re Looking For
We’re looking for true SQL engineers — people who think in execution flow, logic and data behavior rather than just syntax.
You should have:
· 4+ years of deep hands-on experience with Microsoft SQL Server & T-SQL
· Strong expertise in:
o Stored Procedures (with Dynamic SQL)
o Views
o Functions
o Triggers
· Strong experience with:
o Dynamic SQL best practices and secure execution patterns
o Indexing strategies and query plan optimization
o Handling parameter sniffing and plan instability
· Strong knowledge of:
o Temp tables vs table variables
o Cardinality estimation
o Cost-based optimization concepts
Nice to Have
· Exposure to GenAI data pipelines or analytical architectures
· Exposure to Graph, Vector, and NoSQL databases
How We Work
· We write production-grade T-SQL
· We value performance, clarity, and correctness
· We invest heavily in query readability and maintainability
· Engineering quality is non-negotiable
Apply Now
If you enjoy designing complex Dynamic SQL-powered stored procedures and tuning systems at scale, we’d like to talk.

Location: PAN India
💼 Employment Type: Full-Time / Contract
👨💻 Experience: 3–6 Years
🔍 Job Overview
We are looking for a talented Automation Test Engineer with strong expertise in Python-based automation, Selenium, and API testing. The ideal candidate will be responsible for building scalable automation frameworks and ensuring high-quality delivery across applications and cloud environments.
🔑 Key Responsibilities
Develop and maintain automation scripts using Python, Selenium, TestNG / Pytest
Perform API testing for RESTful services
Work with AWS services like S3 & API Gateway (basic level)
Conduct database validations using SQL & NoSQL
Integrate automation with CI/CD pipelines (Jenkins, Docker)
Write and maintain test cases, reports, and documentation
Collaborate with cross-functional teams in Agile environments
Debug and resolve automation issues and defects
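A minimal Pytest-style sketch of the API-validation work above; the validator and payload shape are hypothetical, but the test functions follow the plain-assert convention Pytest discovers:

```python
# Hypothetical response validator of the kind an API test suite would exercise.
def validate_order_response(payload):
    errors = []
    if payload.get("status") not in {"ok", "pending"}:
        errors.append("bad status")
    if not isinstance(payload.get("items"), list) or not payload["items"]:
        errors.append("items missing")
    return errors

# Pytest collects functions named test_*; bare asserts double as the report.
def test_valid_payload():
    assert validate_order_response({"status": "ok", "items": [1]}) == []

def test_invalid_payload():
    errors = validate_order_response({"status": "nope", "items": []})
    assert errors == ["bad status", "items missing"]

# Run directly here for illustration; under Pytest these are auto-discovered.
test_valid_payload()
test_invalid_payload()
print("all checks passed")
```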
🛠 Required Skills
Strong experience in Selenium, TestNG / Pytest (Intermediate–Expert)
Proficiency in Python scripting
Experience in RESTful API testing
Knowledge of SQL & NoSQL databases
Hands-on experience with Git (Basic–Intermediate)
Experience with CI/CD tools (Jenkins, Docker)
Basic understanding of AWS (S3, API Gateway)
Scripting knowledge in Shell / Groovy
⭐ Good to Have
Experience in automation framework design
Exposure to cloud-based testing environments
We have an immediate requirement for a Java Developer role in the Pune location. Please find the details below:
Role: Java Developer
Experience: 3–4 Years (Mandatory)
Location: Pune
Joining: Immediate joiners only
Key Responsibilities:
- Develop and maintain scalable and robust J2EE applications
- Follow and implement coding standards within the project
- Integrate with third-party APIs and services
- Work in an Agile environment to design and implement new features
- Support team members in resolving technical issues
- Debug and resolve production issues (code/infrastructure)
- Communicate effectively with team members and product management
Mandatory Skills:
- Strong knowledge of Java and JEE internals (Class Loading, Memory Management, Transaction Management, etc.)
- Expertise in OOPs/OOAD concepts and design patterns
- Hands-on experience with Spring Framework and Web Services
- Basic knowledge of JavaScript, jQuery, AJAX, and DOM
- Good understanding of SQL, relational databases, and ORM (Hibernate/DAO)
- Strong problem-solving skills and communication abilities
Important Note:
- Interview is scheduled for Monday
- Selected candidates are expected to join by Tuesday or Wednesday
Lead Data Engineer
What are we looking for?
A real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
Your Responsibilities
What you will wake up to solve.
- Lead Technical Design & Data Architecture: Architect and lead the end-to-end development of scalable, cloud-native data platforms. You’ll guide the squad on critical architectural decisions—choosing between Batch vs. Streaming or ETL vs. ELT—while remaining 100% hands-on, contributing high-quality, production-grade code.
- Build High-Velocity Data Pipelines: Drive the implementation of robust data transports and ingestion frameworks using Python, SQL, and Spark. You will build integration layers that connect heterogeneous sources (SaaS, RDBMS, NoSQL) into unified, high-availability environments like BigQuery, Snowflake, or Redshift.
- Mentor & Elevate the Squad: Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling (Star/Snowflake schema), and ensure the squad adopts modern engineering standards like CI/CD for data.
- Drive AI-Ready Data Strategy: Be the expert in designing data foundations optimized for AI and Machine Learning. You will champion the use of GCP (Dataflow, Pub/Sub, BigQuery) and AWS (Lambda, Glue, EMR) to create "clean room" environments that fuel advanced analytics and generative AI models.
- Partner with Clients as a Technical DRI: Act as the Directly Responsible Individual for client success. Translate ambiguous business questions into elegant data services, manage project deliverables using Agile methodologies, and ensure that the data provided is accurate, consistent, and mission-critical.
- Troubleshoot & Optimize for Scale: Own the reliability of the reporting layer. You will proactively monitor pipelines, troubleshoot complex transformation bottlenecks, and propose ways to improve platform performance and cost-efficiency.
- Innovate and Build Reusable IP: Spearhead the creation of reusable data frameworks, custom operators, and transformation libraries that accelerate future projects and establish Searce’s unique technical advantage in the market.
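The Batch vs. Streaming trade-off called out in the responsibilities above can be shown with toy numbers: both paths converge on the same aggregate, but streaming folds each event into running state instead of recomputing over the full dataset:

```python
# Batch: recompute the aggregate over the full dataset each run.
events = [4, 7, 1, 9, 3]
batch_total = sum(events)

# Streaming: maintain running state and fold each event in as it arrives.
class RunningTotal:
    def __init__(self):
        self.total = 0
    def ingest(self, value):
        self.total += value
        return self.total

stream = RunningTotal()
for e in events:
    stream.ingest(e)

# Same answer either way; the choice is about latency, cost, and replayability.
print(batch_total, stream.total)  # 24 24
```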
Welcome to Searce
The AI-Native tech consultancy that's rewriting the rules.
Searce is an AI-native, engineering-led, modern tech consultancy that empowers clients to futurify their business by delivering intelligent, impactful, real business outcomes. Searce solvers co-innovate with clients as their trusted transformational partners ensuring sustained competitive advantage. Searce clients realize smarter, faster, better business outcomes delivered by AI-native Searce solver squads.
Functional Skills
the solver personas.
- The Data Architect: This persona deconstructs ambiguous business goals into scalable, elegant data blueprints. They don't just move data; they design the foundation—from schema design to partitioning strategies—that allows data scientists and analysts to thrive, foreseeing technical bottlenecks and making pragmatic trade-offs.
- The Player-Coach: As a hands-on leader, this persona leads from the front by writing exemplary, production-grade SQL and Python while simultaneously mentoring and elevating the skills of the squad. Their success is measured by the team's ability to deliver high-quality, maintainable code and their growth as engineers.
- The Pragmatic Innovator: This individual balances a passion for modern data tech (like Generative AI and Real-time Streaming) with a sharp focus on business outcomes. They champion new tools where they add real value but are disciplined enough to choose stable, cost-effective solutions to meet deadlines and deliver robust products.
- The Client-Facing Technologist: This persona acts as the crucial technical bridge between the data squad and the client. They build trust by listening actively, explaining complex data concepts (like data latency or idempotency) in simple terms, and demonstrating how engineering decisions align with the client’s strategic goals.
- The Quality Craftsman: This individual possesses an unwavering commitment to data integrity and treats data engineering as a craft. They are the guardian of the reporting layer, advocating for robust testing, data validation frameworks, and clean, modular code to ensure the long-term reliability of the data platform.
Experience & Relevance
- Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.
- Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
- AI-Native Workflow: You don’t just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.
- Architectural Portfolio: Evidence of leading 2-3 large-scale transformations—including platform migrations, data lakehouse builds, or real-time analytics architectures.
- Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO’s business vision into a Lead Engineer’s technical specification without losing anything in translation.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
Job Title: Data Analyst (AI/ML Exposure)
Experience: 1–3 Years
Location: Mumbai
Job Description:
We are looking for a Data Analyst with strong experience in data handling, analysis, and visualization, along with exposure to AI/ML concepts. The role involves working with structured and unstructured data (SQL, CSV, JSON), building data pipelines, performing EDA, and deriving actionable insights. Candidates should have hands-on experience with Python (Pandas, NumPy), data visualization tools, and basic knowledge of NLP/LLMs. Exposure to APIs, data-driven applications, and client interaction will be an added advantage.
Skills Required: Python, SQL, Data Analysis, EDA, Visualization, APIs
Apply: Share your resume or connect with us.
Overview
We are looking for a highly skilled Lead Data Engineer with strong expertise in Data Warehousing & Analytics to join our team. The ideal candidate will have extensive experience in designing and managing data solutions, advanced SQL proficiency, and hands-on expertise in Python & Power BI.
Skills : Python, Databricks, SQL
Key Responsibilities:
- Design, develop, and maintain scalable data warehouse solutions.
- Write and optimize complex SQL queries for data extraction, transformation, and reporting.
- Develop and automate data pipelines using Python.
- Work with AWS cloud services for data storage, processing, and analytics.
- Collaborate with cross-functional teams to provide data-driven insights and solutions.
- Ensure data integrity, security, and performance optimization.
Required Skills & Experience:
- Must have 6–10 years of experience in Data Warehousing & Analytics.
- Must have strong experience in Databricks.
- Strong proficiency in writing complex SQL queries with deep understanding of query optimization, stored procedures, and indexing.
- Hands-on experience with Python for data processing and automation.
- Experience working with AWS cloud services.
- Hands-on experience with reporting tools like Power BI or Tableau.
- Ability to work independently and collaborate with teams across different time zones.
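The query-optimization skills called out above can be demonstrated even with the stdlib `sqlite3` module. This is an illustrative sketch only (the role's stack is Databricks on AWS), with an invented `sales` table; it compares query plans before and after adding an index:

```python
import sqlite3

# Illustrative only: the indexing principle is the same on any SQL engine.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("APAC" if i % 2 else "EMEA", i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(amount) FROM sales WHERE region = 'APAC'"

# Without an index the planner must scan the whole table.
plan_before = cur.execute(f"EXPLAIN QUERY PLAN {query}").fetchall()

cur.execute("CREATE INDEX idx_sales_region ON sales (region)")

# With the index, the planner can seek directly to matching rows.
plan_after = cur.execute(f"EXPLAIN QUERY PLAN {query}").fetchall()
print(plan_before, plan_after)
```

The same before/after plan comparison (e.g. `EXPLAIN` in Spark SQL or Databricks) is the usual first step when tuning a slow warehouse query.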
We are looking for a Staff Engineer - PHP to join one of our engineering teams at our office in Hyderabad.
What would you do?
- Design, build, and maintain backend systems and APIs from requirements to production.
- Own feature development, bug fixes, and performance optimizations.
- Ensure code quality, security, testing, and production readiness.
- Collaborate with frontend, product, and QA teams for smooth delivery.
- Diagnose and resolve production issues and drive long-term fixes.
- Contribute to technical discussions and continuously improve engineering practices.
Who Should Apply?
- 4–6 years of hands-on experience in backend development using PHP.
- Strong proficiency with Laravel or similar PHP frameworks, following OOP, MVC, and design patterns.
- Solid experience in RESTful API development and third-party integrations.
- Strong understanding of SQL databases (MySQL/PostgreSQL); NoSQL exposure is a plus.
- Comfortable with Git-based workflows and collaborative development.
- Working knowledge of HTML, CSS, and JavaScript fundamentals.
- Experience with performance optimization, security best practices, and debugging.
- Nice to have: exposure to Docker, CI/CD pipelines, cloud platforms, and automated testing.
Job Title : Senior QA Engineer (Crypto Exchange Platform)
Experience : 2+ Years
Location : Gurugram & Vadodara
Employment Type : Full-Time
About the Company :
We are a fast-growing crypto exchange platform building secure, scalable, and high-performance trading systems with real-time data and wallet infrastructure.
Role Overview :
We are looking for a Senior QA Engineer to ensure the quality, reliability, and security of our platform. You’ll work on web, mobile, and backend systems, focusing on APIs, trading engines, and real-time systems in a fast-paced agile environment.
Mandatory Skills :
2+ years in QA with strong manual & automation testing, experience in Selenium/Cypress/Playwright, API testing (Postman/REST Assured), CI/CD (Jenkins/GitHub Actions), SQL, and real-time/WebSocket testing.
Key Responsibilities :
- Create and execute test plans, cases, and strategies
- Perform functional, regression, integration & API testing
- Build and maintain automation frameworks
- Test trading systems, wallets, and real-time data (WebSockets)
- Track bugs using Jira and collaborate with teams
- Integrate testing into CI/CD pipelines
- Ensure performance, stability, and security
Required Skills :
- Strong experience in automation + functional testing
- Hands-on with Selenium/Cypress/Playwright
- Good knowledge of API testing & microservices
- Experience with CI/CD tools
- Strong SQL & database validation skills
- Understanding of Agile & SDLC
Good to Have :
- Experience in crypto/fintech/trading platforms
- Knowledge of blockchain, wallets, smart contracts
- Performance testing (JMeter, K6)
- Basic security testing knowledge
What We’re Looking For :
- Strong problem-solving skills
- Attention to detail
- Ability to work in a fast-paced environment
- Good communication & ownership mindset
Key Skills:
- ETL Testing (Functional, Regression, Integration)
- IBM DataStage
- SQL (Joins, Subqueries, Aggregations, Data Validation)
- Data Warehousing Concepts (Star/Snowflake Schema)
- Test Case Design & Execution
- Defect Management (JIRA, HP ALM)
- Agile & Waterfall Methodologies
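As a rough illustration of the SQL data-validation skills listed above (joins plus row-count reconciliation), here is a stdlib `sqlite3` sketch; `src_orders`/`tgt_orders` are hypothetical tables standing in for a DataStage source system and the warehouse target:

```python
import sqlite3

# Hypothetical source/target tables; the validation pattern is what matters.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src_orders (order_id INTEGER, amount REAL)")
cur.execute("CREATE TABLE tgt_orders (order_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO src_orders VALUES (?, ?)", [(1, 10.0), (2, 20.0), (3, 30.0)])
cur.executemany("INSERT INTO tgt_orders VALUES (?, ?)", [(1, 10.0), (2, 25.0)])

# Check 1: row-count reconciliation between source and target.
src_count = cur.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
tgt_count = cur.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]

# Check 2: LEFT JOIN to find rows dropped or altered by the ETL job.
mismatches = cur.execute("""
    SELECT s.order_id
    FROM src_orders s
    LEFT JOIN tgt_orders t ON s.order_id = t.order_id
    WHERE t.order_id IS NULL OR s.amount <> t.amount
    ORDER BY s.order_id
""").fetchall()
print(src_count, tgt_count, mismatches)
```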
Job Title : AWS Data Engineer
Experience : 4+ Years
Location : Bengaluru (HSR – Hybrid, 3 Days WFO)
Notice Period : Immediate Joiner
💡 Role Overview :
We are looking for a skilled AWS Data Engineer to design, build, and scale modern data platforms. The role involves working with AWS-native services, Python, Spark, and DBT to deliver secure, scalable, and high-performance data solutions in an Agile environment.
🔥 Mandatory Skills :
Python, SQL, Spark, AWS (S3, Glue, EMR, Redshift, Athena, Lambda), DBT, ETL/ELT pipeline development, Airflow/Step Functions, Data Lake (Parquet/ORC/Iceberg), Terraform & CI/CD, Data Governance & Security
🚀 Key Responsibilities :
- Design, build, and optimize ETL/ELT pipelines using Python, DBT, and AWS services
- Develop and manage scalable data lakes on S3 using formats like Parquet, ORC, and Iceberg
- Build end-to-end data solutions using Glue, EMR, Lambda, Redshift, and Athena
- Implement data governance, security, and metadata management using Glue Data Catalog, Lake Formation, IAM, and KMS
- Orchestrate workflows using Airflow, Step Functions, or AWS-native tools
- Ensure reliability and automation via CloudWatch, CloudTrail, CodePipeline, and Terraform
- Collaborate with data analysts and data scientists to deliver actionable insights
- Work in an Agile environment to deliver high-quality data solutions
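To make the ETL/ELT responsibility concrete, here is a minimal extract-transform-load sketch using only the Python stdlib; in practice these stages would run on the AWS services named above (S3 extract, Glue/DBT transform, Redshift load), and the CSV data is invented:

```python
import csv
import io
import sqlite3

# Extract: read raw records from an in-memory CSV standing in for a source file.
raw_csv = io.StringIO("order_id,amount,currency\n1,10.50,usd\n2,,usd\n3,7.25,USD\n")
rows = list(csv.DictReader(raw_csv))

# Transform: drop records with missing amounts, normalise currency codes.
clean = [
    {"order_id": int(r["order_id"]), "amount": float(r["amount"]), "currency": r["currency"].upper()}
    for r in rows
    if r["amount"]
]

# Load: write the curated records into a "warehouse" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, currency TEXT)")
conn.executemany("INSERT INTO orders VALUES (:order_id, :amount, :currency)", clean)
loaded = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(loaded)
```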
✅ Mandatory Skills :
- Strong Python (including AWS SDKs), SQL, Spark
- Hands-on experience with AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
- Experience with DBT and ETL/ELT pipeline development
- Workflow orchestration using Airflow / Step Functions
- Knowledge of data lake formats (Parquet, ORC, Iceberg)
- Exposure to DevOps practices (Terraform, CI/CD)
- Strong understanding of data governance and security best practices
- Minimum 4–7 years in Data Engineering (3+ years on AWS)
➕ Good to Have :
- Understanding of Data Mesh architecture
- Experience with platforms like Data.World
- Exposure to Hadoop / HDFS ecosystems
🤝 What We’re Looking For :
- Strong problem-solving and analytical skills
- Ability to work in a collaborative, cross-functional environment
- Good communication and stakeholder management skills
- Self-driven and adaptable to fast-paced environments
📝 Interview Process :
- Online Assessment
- Technical Interview
- Fitment Round
- Client Round
Python Developer (Performance Optimization Focus)
Experience: 3–5 Years
Location: Remote (India-based candidates only)
Employment Type: Full-time
Role Overview
We are seeking a Python Developer with a strong focus on performance optimization and system efficiency. In this role, you will identify bottlenecks, enhance system performance, and contribute to building scalable, high-performance applications in a Linux-based environment.
Key Responsibilities
- Analyze and troubleshoot performance bottlenecks in applications and systems
- Optimize code, database queries, and architecture for scalability and speed
- Design, develop, test, and maintain robust Python applications
- Work with large datasets and improve data processing efficiency
- Collaborate with cross-functional teams to improve system reliability and performance
- Monitor system performance and implement proactive improvements
- Write clean, maintainable, and efficient code following best practices
Required Skills & Qualifications
- 3–5 years of hands-on experience in Python development
- Strong expertise in performance tuning and optimization techniques
- Experience with debugging and profiling tools
- Solid understanding of data structures and algorithms
- Experience with REST APIs and backend development
- Strong analytical and problem-solving skills
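One common optimization of the kind this role targets is replacing repeated O(n) list membership tests with O(1) average-case set lookups; the sketch below measures both with the stdlib `timeit` module (sizes are arbitrary):

```python
import timeit

# A classic bottleneck: each `in` test against a list scans the list,
# while a set hashes directly to the element.
haystack_list = list(range(20_000))
haystack_set = set(haystack_list)
needles = range(0, 20_000, 10)

def scan_list():
    return sum(1 for n in needles if n in haystack_list)

def scan_set():
    return sum(1 for n in needles if n in haystack_set)

list_time = timeit.timeit(scan_list, number=5)
set_time = timeit.timeit(scan_set, number=5)
print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
```

Profiling first (e.g. with `cProfile`) tells you which of these hotspots is actually worth converting.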
Linux & System Knowledge (Must-Have)
- Comfortable working in Linux/Unix environments
- Command-line proficiency, including:
- File editing (vi, nano)
- File permissions (chmod, chown)
- File downloads (wget, curl)
- Basic file and directory operations
Basic Python Knowledge (Interview Scope)
- Writing simple scripts and reusable functions
- String manipulation and data handling
- Example task: Count words in a file/string efficiently
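One possible answer to the example interview task above, using `collections.Counter` for a single-pass count:

```python
from collections import Counter

def count_words(text: str) -> Counter:
    # Lowercase then split on whitespace; Counter tallies in one pass.
    return Counter(text.lower().split())

counts = count_words("the quick brown fox jumps over the lazy dog the fox")
print(counts.most_common(2))  # [('the', 3), ('fox', 2)]
```

For a file, `count_words(open(path).read())` works for small inputs; very large files would be counted line by line instead.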
Good to Have
- Familiarity with AI/ML concepts or tools
- Experience optimizing data-intensive or distributed systems
- Exposure to cloud platforms (AWS, GCP, Azure)
Why Join Us
- Work on performance-critical systems with real-world impact
- Fully remote work environment
- Opportunity to work with modern, scalable technologies
- Collaborative, growth-focused team culture
We are looking for a highly skilled QA Automation Engineer with at least 3 years of experience to join our dynamic team in Mumbai. The ideal candidate should be proactive, detail-oriented, and ready to hit the ground running.
Company Name: WeAssemble
Reach Us: www.weassemble.team
Location: One International Centre, Prabhadevi, Mumbai
Working Days: Monday–Friday (Sat & Sun fixed off)
*Key Responsibilities:*
* Design, develop, and execute automated test scripts using industry-standard tools and frameworks.
* Collaborate with developers, business analysts, and product managers to ensure product quality.
* Conduct functional, non-functional, API, and integration testing.
* Implement and maintain automation frameworks.
* Contribute to continuous improvement in QA processes.
*Required Skills & Experience:*
* Strong experience in Playwright with JavaScript.
* API Testing Automation (Postman, REST Assured, or equivalent).
* Hands-on experience with CI/CD pipelines (Jenkins, GitHub Actions, GitLab, or similar).
* Solid understanding of software QA methodologies, tools, and processes.
* Ability to identify, log, and track bugs effectively.
* Strong problem-solving and analytical skills.
*Good to Have:*
* Knowledge of performance testing tools.
* Familiarity with cloud platforms (AWS, Azure, or GCP).
Experience: 1–3 Years
Qualification: B.Tech (Computer Science / IT or related field)
Shift Timing: 5:00 PM – 2:00 AM (Late Evening Shift)
Location: Hyderabad
Job Summary
We are seeking a proactive and detail-oriented Application Support Engineer with 1–3 years of experience in Linux/Windows environments, application servers, and monitoring tools. The candidate will be responsible for ensuring the stability, performance, and availability of applications, along with providing L2/L3 support in a fast-paced production environment.
Key Responsibilities :
- Provide application support and incident management for production systems.
- Monitor system performance using hardware/software monitoring and trending tools.
- Troubleshoot issues in Linux and Windows environments.
- Manage and support Apache and Tomcat servers.
- Analyze logs and debug application/system issues.
- Work on SQL/Oracle databases for query execution, troubleshooting, and performance tuning.
- Handle deployments and support CI/CD pipelines using tools like Docker and Jenkins.
- Ensure SLA adherence and timely resolution of incidents and service requests.
- Coordinate with development, infrastructure, and database teams for issue resolution.
- Maintain documentation for incidents, processes, and knowledge base articles.
- Support SaaS applications hosted in data center environments.
Required Skills :
Strong knowledge of Linux and Windows OS administration
Experience with Apache and Tomcat servers
Hands-on experience with monitoring and alerting tools
Good understanding of log analysis and troubleshooting techniques
Working knowledge of SQL / Oracle databases
Familiarity with Docker and Jenkins (CI/CD pipelines)
Understanding of ITIL processes (Incident, Problem, Change Management)
Knowledge of SaaS applications and data center operations.
Preferred Skills :
Experience with automation/scripting (Shell, Python, etc.)
Exposure to cloud platforms (AWS/Azure/GCP) is a plus
Basic networking knowledge
Soft Skills :
Strong analytical and problem-solving abilities
Good communication skills
Ability to work in night shifts and handle production support
Team player with a proactive attitude
👉 Job Title: Angular Developer
🌟 Experience: 4 Years
💡Location: Pune (Hybrid)
👉 Notice Period :- Immediate Joiners
(Candidates serving notice period are preferred)
💫 Interview Mode :- Walk-in Interview (Baner location)
Job Overview
We are looking for a skilled Angular Developer with 4 years of experience to join our dynamic development team. The ideal candidate will have strong expertise in Angular, JavaScript, and SQL, with the ability to build high-performance, scalable web applications. This is a hybrid role based in Pune.
Key Responsibilities
- Develop and maintain responsive web applications using Angular.
- Write clean, scalable, and efficient JavaScript code.
- Collaborate with cross-functional teams including designers, backend developers, and product managers.
- Integrate frontend applications with backend services and APIs.
- Optimize applications for maximum speed and scalability.
- Troubleshoot, debug, and upgrade existing applications.
- Work with SQL databases for data querying and manipulation.
- Ensure code quality through best practices, code reviews, and testing.
Required Skills & Qualifications
- 4 years of hands-on experience in Angular development.
- Strong proficiency in JavaScript (ES6+).
- Solid understanding of HTML5, CSS3, and responsive design.
- Experience working with SQL databases.
- Familiarity with RESTful APIs and web services.
- Knowledge of version control systems like Git.
- Strong problem-solving and analytical skills.
- Good communication and teamwork abilities.
Hiring: Senior BI Architect
Experience: 5+ Years
Work Mode: Remote
Notice Period: Immediate joiners (or candidates serving notice period)
About the Role
We are looking for a seasoned Senior Business Intelligence Architect who can bridge the gap between deep technical engineering and functional business insight delivery. This role demands an architect who doesn't just build dashboards — but designs enterprise-grade BI ecosystems that drive executive decision-making at scale.
4 Mandatory Skills
These are non-negotiable. Candidates without proficiency in all four will not be considered.
- Tableau — Advanced dashboard design, REST API integration, JavaScript SDK embedding, extract optimization, and CI/CD deployment via Git.
- SQL — Expert-level query tuning, complex joins, incremental refresh strategies, and performance optimization for large-scale data models.
- Power BI — with the ability to architect migration paths or co-existence strategies for self-serve and executive reporting.
- ThoughtSpot — with the ability to architect migration paths or co-existence strategies for self-serve and executive reporting.
Key Responsibilities
- Lead end-to-end design and publishing of sophisticated Tableau and Cognos solutions with a focus on interactivity and executive-grade narrative storytelling.
- Own the full optimization lifecycle — advanced query tuning, incremental extract strategies, and dashboard load-time reduction.
- Architect seamless "Analytics as a Service" integrations by embedding BI content into external applications via Tableau REST APIs and JavaScript SDKs.
- Provide expert guidance on Power BI and ThoughtSpot migration strategies and co-existence models for enterprise reporting.
- Elevate data storytelling through UX principles, accessibility standards, and custom visualizations using D3.js for high-impact mapping and charting.
- Implement CI/CD pipelines for BI releases using Git, ensuring rigorous version control and deployment governance for all dashboard assets.
- Define and track performance analytics and usage metrics to measure dashboard ROI and drive organization-wide adoption.
Required Skills & Qualifications
- 5–10 years of hands-on experience in BI development and architecture.
- Deep expertise in Tableau, Power BI, ThoughtSpot, and SQL.
- Proficiency in Tableau REST API and JavaScript SDK for embedded analytics.
- Strong understanding of DataOps frameworks and scalable data pipeline design.
- Experience with Git-based CI/CD workflows for BI asset management.
- Familiarity with D3.js or other custom visualization libraries.
- Excellent communication skills to translate complex data into compelling business narratives for executive stakeholders.
Key Requirements / Skills
- 6+ years of overall experience in software development with strong expertise in building scalable web applications.
- 2+ years of experience as a Technical Lead, managing development teams and driving project delivery.
- Strong technical decision-making ability, including architecture design, technology selection, and implementation of best practices.
- Front-end expertise: Strong experience in React, JavaScript, TypeScript, and building responsive and user-friendly UI/UX.
- Back-end development: Hands-on experience with Node.js, RESTful APIs, API design, and server-side architecture.
- AI/ML knowledge: Experience in implementing AI/ML models or integrating AI-based solutions to solve business problems.
- Cloud & DevOps exposure: Experience with AWS/Azure, understanding of CI/CD pipelines, and cloud-based deployments.
- Code quality & best practices: Experience in code reviews, Git version control, and ensuring maintainable and secure code.
- Team leadership: Ability to mentor developers, guide technical discussions, and collaborate across teams.
- Strong communication skills to effectively interact with technical and non-technical stakeholders.
- Experience working in high-compliance environments such as healthcare systems is a plus.
Education Qualifications:
- B.Tech/M.Tech in CSE/IT/AI/ML from a good university
We are looking for a skilled Data Engineer / Data Warehouse Engineer to design, develop, and maintain scalable data pipelines and enterprise data warehouse solutions. The role involves close collaboration with business stakeholders and BI teams to deliver high-quality data for analytics and reporting.
Key Responsibilities
- Collaborate with business users and stakeholders to understand business processes and data requirements
- Design and implement dimensional data models, including fact and dimension tables
- Identify, design, and implement data transformation and cleansing logic
- Build and maintain scalable, reliable, and high-performance ETL/ELT pipelines
- Extract, transform, and load data from multiple source systems into the Enterprise Data Warehouse
- Develop conceptual, logical, and physical data models, including metadata, data lineage, and technical definitions
- Design, develop, and maintain ETL workflows and mappings using appropriate data load techniques
- Provide high-level design, research, and effort estimates for data integration initiatives
- Provide production support for ETL processes to ensure data availability and SLA adherence
- Analyze and resolve data pipeline and performance issues
- Partner with BI teams to design and develop reports and dashboards while ensuring data integrity and quality
- Translate business requirements into well-defined technical data specifications
- Work with data from ERP, CRM, HRIS, and other transactional systems for analytics and reporting
- Define and document BI usage through use cases, prototypes, testing, and deployment
- Support and enhance data governance and data quality processes
- Identify trends, patterns, anomalies, and data quality issues, and recommend improvements
- Train and support business users, IT analysts, and developers
- Lead and collaborate with teams spread across multiple locations
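To illustrate the dimensional-modeling responsibility above, here is a toy star schema in stdlib `sqlite3` with one fact table and two dimensions (all table and column names are invented for illustration):

```python
import sqlite3

# Toy star schema: a sales fact keyed to date and product dimensions.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE fact_sales (
        product_key INTEGER REFERENCES dim_product(product_key),
        date_key INTEGER REFERENCES dim_date(date_key),
        quantity INTEGER, revenue REAL
    );
    INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gizmo', 'Hardware');
    INSERT INTO dim_date VALUES (20240101, 2024, 1), (20240201, 2024, 2);
    INSERT INTO fact_sales VALUES (1, 20240101, 3, 30.0), (2, 20240101, 1, 15.0), (1, 20240201, 2, 20.0);
""")

# Typical BI rollup: join the fact to its dimensions and aggregate.
report = cur.execute("""
    SELECT d.month, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product p ON f.product_key = p.product_key
    JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.month, p.category
    ORDER BY d.month
""").fetchall()
print(report)  # [(1, 'Hardware', 45.0), (2, 'Hardware', 20.0)]
```

The same fact/dimension separation is what keeps downstream BI queries simple: every report is a join from the fact to the dimensions it slices by.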
Required Skills & Qualifications
- Bachelor’s degree in Computer Science or a related field, or equivalent work experience
- 3+ years of experience in Data Warehousing, Data Engineering, or Data Integration
- Strong expertise in data warehousing concepts, tools, and best practices
- Excellent SQL skills
- Strong knowledge of relational databases such as SQL Server, PostgreSQL, and MySQL
- Hands-on experience with Google Cloud Platform (GCP) services, including:
- BigQuery
- Cloud SQL
- Cloud Composer (Airflow)
- Dataflow
- Dataproc
- Cloud Functions
- Google Cloud Storage (GCS)
- Experience with Informatica PowerExchange for Mainframe, Salesforce, and modern data sources
- Strong experience integrating data using APIs, XML, JSON, and similar formats
- In-depth understanding of OLAP, ETL frameworks, Data Warehousing, and Data Lakes
- Solid understanding of SDLC, Agile, and Scrum methodologies
- Strong problem-solving, multitasking, and organizational skills
- Experience handling large-scale datasets and database design
- Strong verbal and written communication skills
- Experience leading teams across multiple locations
Good to Have
- Experience with SSRS and SSIS
- Exposure to AWS and/or Azure cloud platforms
- Experience working with enterprise BI and analytics tools
Why Join Us
- Opportunity to work on large-scale, enterprise data platforms
- Exposure to modern cloud-native data engineering technologies
- Collaborative environment with strong stakeholder interaction
- Career growth and leadership opportunities
Essential Functions/Responsibilities
• Provide hands-on development in the application development, unit testing, and rollout of strategic web and mobile initiatives.
• Develop both front-end and back-end for web/mobile applications, working with a hybrid internal/vendor team, to support various lines of business and functional areas.
• Work with Business Owners and Business Analysis teams to create business requirements.
• Document technical requirements and technical specifications for web/mobile applications (and related integrated solutions) and provide technical solutions to support those needs.
• Provide feedback (and approval) on technical designs and methods to support business requirements.
• Effectively communicate relevant project planning and status information to leadership/management.
• Deliver engaging, informative, well-organized demos/presentations that are effectively tailored to the intended audience, as needed.
Job Summary: We are seeking a highly skilled and experienced Senior .NET Developer to join our dynamic development team. The ideal candidate will have a strong background in developing robust, scalable, and high-performance applications using the Microsoft .NET framework, coupled with significant expertise in SQL Server. You will be instrumental in designing, developing, and maintaining complex software solutions, collaborating with cross-functional teams, and mentoring junior developers.
Responsibilities:
- Design, develop, test, deploy, and maintain high-quality, scalable, and secure applications using C#, .NET/.NET Core, and related technologies.
- Lead the development of key modules and features, ensuring adherence to coding standards, best practices, and architectural guidelines.
- Collaborate with product owners, business analysts, and other stakeholders to understand requirements, translate them into technical specifications, and propose effective solutions.
- Develop and optimize complex SQL queries, stored procedures, functions, and database schemas for optimal performance and data integrity.
- Perform code reviews, provide constructive feedback, and ensure the quality and maintainability of the codebase.
- Troubleshoot, debug, and resolve software defects and production issues in a timely manner.
- Actively participate in the entire software development life cycle (SDLC), including requirements gathering, design, development, testing, deployment, and support.
- Mentor and guide junior developers, fostering their growth and ensuring best practices are followed.
- Stay up-to-date with emerging technologies and industry trends, evaluating and recommending new tools and practices to improve development efficiency and product quality.
- Contribute to the continuous improvement of development processes and methodologies.
Required Skills and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- Minimum of 4+ years of professional experience in software development with a strong focus on the Microsoft .NET ecosystem.
- Proficiency in C# and extensive experience with .NET Framework (4.x) and/or .NET Core/.NET 5+.
- Solid understanding and practical experience with ASP.NET MVC and Web API.
- Strong SQL skills with proven experience in designing databases, writing complex queries, stored procedures, functions, and optimizing database performance in Microsoft SQL Server.
- Experience with front-end technologies such as HTML5, CSS3, JavaScript, jQuery, and at least one modern JavaScript framework (e.g., Angular, React, Vue.js) is a plus.
- Familiarity with ORM frameworks such as Entity Framework or Dapper.
- Experience with version control systems, particularly Git.
- Understanding of object-oriented programming (OOP) principles, design patterns, and software development best practices.
- Excellent problem-solving, analytical, and debugging skills.
- Strong communication and interpersonal skills.
About Us
AtDrive Infotech is a forward-thinking IT company delivering innovative and scalable software solutions using modern technologies. We focus on building reliable, secure, and intelligent applications that enhance business performance.
We are looking for a motivated Software Development Engineer (ASP.NET) who is eager to learn, contribute to development projects, and work on modern technologies including AI-integrated applications.
Key Responsibilities
- Assist in designing, developing, and maintaining ASP.NET-based applications.
- Participate in the software development lifecycle, including coding, testing, debugging, and deployment.
- Develop and maintain ASP.NET MVC and Web API applications.
- Work on frontend development using HTML5, CSS3, JavaScript, and jQuery.
- Develop and integrate REST APIs and third-party services.
- Write efficient SQL queries and assist in database development.
- Debug technical issues and optimize application performance.
- Perform real-time testing during development to ensure code quality.
- Collaborate with team members and follow coding standards.
- Support deployment and maintenance activities.
Qualifications
- Bachelor’s degree in Computer Science, IT, or related field.
- 1–2 years of experience in ASP.NET development.
Required Technical Skills
- Good knowledge of C# and .NET Framework.
- Experience with ASP.NET MVC and Web API development.
- Basic knowledge of Web Services and API integration.
- Strong understanding of HTML5 and CSS3.
- Experience with JavaScript and jQuery.
- Familiarity with AJAX-based client-server communication.
- Basic knowledge of SQL Server and Stored Procedures.
- Familiarity with Entity Framework or LINQ.
- Understanding of JSON and XML data formats.
- Basic knowledge of Object-Oriented Programming (OOP) concepts.
- Familiarity with Git or version control systems.
AI Skills (Desired)
- Basic understanding of Artificial Intelligence (AI) concepts.
- Familiarity with using AI-based APIs such as chatbots or automation tools.
- Willingness to learn and work on AI-integrated applications.
Additional Skills (Preferred)
- Exposure to AngularJS or similar frontend frameworks.
- Familiarity with WPF / WinForms concepts.
- Basic knowledge of n-tier architecture.
- Understanding of debugging and performance optimization techniques.
Personal Attributes
- Strong problem-solving and troubleshooting skills.
- Ability to work in a fast-paced development environment.
- Good communication and teamwork skills.
- Self-motivated and eager to learn new technologies.
- Ability to prioritize tasks and meet deadlines.
- Strong ownership and accountability mindset.
Roles Expectations
- Deliver assigned development tasks within deadlines.
- Maintain clean and maintainable code.
- Follow development and quality standards.
- Continuously learn new technologies and frameworks.
- Support senior developers in technical implementations.
Job Details
Job Title: Software Development Engineer – ASP.NET
Job Type: Full-time
Work Location: Remote
Salary Range:
₹25,000 – ₹35,000 per month
(Negotiable based on skills and technical performance)
Benefits
- Paid sick leave
- Paid time off
- Work from home
Schedule
- Day shift
- Monday to Friday (Monday to Saturday during probation)
Software Engineer – EdTech (PHP)
Experience: 3+ Years
Work Mode: Permanent Work From Home
Role Summary
We are seeking a highly skilled software developer with strong experience in EdTech platforms and education ERP systems. The ideal candidate will have expertise in core PHP/Laravel and database technologies, with hands-on experience in building and scaling education-focused modules such as LMS, online examination systems, admissions, and fee management.
This role focuses on developing scalable, secure, and high-performance solutions for schools, colleges, and online learning platforms.
Key Responsibilities
- Design, develop, and maintain Education ERP and EdTech platform modules.
- Build and enhance systems for LMS (Learning Management System), online exams, admissions, fee management, HR, and finance.
- Develop and optimize REST APIs/GraphQL services for seamless integration with web and mobile platforms.
- Ensure high performance, scalability, and security for large-scale student and institutional data.
- Work closely with product, QA, and implementation teams to deliver EdTech features.
- Conduct code reviews, maintain coding standards, and mentor junior developers.
- Continuously improve platform capabilities based on EdTech trends and user needs.
Required Skills & Qualifications
- Strong expertise in Core PHP (Laravel Framework).
- Solid experience with MySQL, MongoDB, PostgreSQL (database design & optimization).
- Understanding of EdTech workflows like student lifecycle, course management, and assessments.
- Frontend basics: JavaScript, jQuery, HTML, CSS (React/Vue is a plus).
- Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email services).
- Familiarity with Git/GitHub, Docker, and CI/CD pipelines.
- Knowledge of cloud platforms (AWS, Azure, GCP) is an advantage.
- Minimum 3+ years of development experience, with at least 2 years in education ERP/EdTech systems.
Preferred Experience
- Prior experience working in EdTech companies or education ERP platforms.
- Deep understanding of LMS, online examination systems, admissions, fees, HR, and finance modules.
- Experience handling high-traffic educational platforms (e.g., exam portals, live classes, student dashboards).
- Exposure to scalable architecture for large student/user bases.
• 3+ years of hands-on experience developing and testing highly scalable software.
• Excellent coding skills in Java 17 or above.
• Very good understanding of any RDBMS and/or messaging queues.
• Proficient in core Java; solid foundation in object-oriented development and design patterns.
• Excellent problem-solving skills and attention to detail.
• Ability to engineer complex features/systems from scratch and drive them to completion.
• Good knowledge of multiple data storage systems.
• Prior experience with microservices and event-driven architecture.
• Experience with Spring Boot and the Spring Security framework.
• Spring WebFlux understanding is desirable.
• Understanding of OWASP Top 10/CWE, DAST, and SAST.
Star Apps is looking for a detail-oriented and technically driven QA Engineer to join our product team in Baner, Pune. In this role, you will move beyond simple bug-finding to focus on the reliability and scalability of our SaaS platform.
The ideal candidate has a "product-first" mindset, having previously worked in SaaS or Product-based environments, and possesses a strong understanding of how data moves through complex systems.
What You’ll Do
- End-to-End Test Engineering: Develop and maintain robust automated test suites across the stack (UI and API) to ensure high-velocity, high-quality releases.
- Deep-Dive Diagnostics: You won’t just report bugs; you will read and interpret logs to perform deep Root Cause Analysis (RCA), helping developers identify exactly where a system is failing.
- Complex System Validation: Test highly interdependent workflows where a change in one module affects several others, ensuring stability across complex business logic.
- SaaS Integration: Partner with Engineering to bake automated tests directly into the CI/CD pipeline.
- Shift-Left Collaboration: Participate in design reviews to provide "testability" feedback before a single line of code is written.
What We’re Looking For
- The "Product" Lens: 1–4 years of experience in a fast-paced SaaS or Product company. You prioritize the user’s journey over just "passing" scripts.
- Backend & API Mastery: Strong experience in Backend testing and API validation using tools like Postman or RestAssured. You should be comfortable testing logic involving queues, webhooks, and third-party integrations.
- Automation First: Proven hands-on experience with modern automation frameworks (e.g., Playwright, Cypress, or Selenium).
- The "Sleuth" Mindset: A natural inclination to solve the why behind a bug, not just the what. You are comfortable navigating server logs to trace errors.
- AI-Forward Thinking: You are already leveraging AI tools (like GitHub Copilot, ChatGPT, or automated test generators) to accelerate your coding, generate test data, or optimize test cases.
- On-site Energy: A desire to work out of our Baner office 5 days a week, contributing to a high-collaboration, face-to-face team environment.
- Education: A degree in Computer Science, IT, or a related technical field.
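Testing logic that involves webhooks, as mentioned above, often comes down to validating request signatures before the business logic runs. A minimal sketch, assuming a hypothetical HMAC-SHA256 signing scheme (the scheme and names are illustrative, not any specific vendor's API):

```python
import hashlib
import hmac
import json

def sign_payload(secret: bytes, body: bytes) -> str:
    # Hypothetical webhook scheme: hex-encoded HMAC-SHA256 of the raw body.
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    # compare_digest guards against timing attacks.
    return hmac.compare_digest(sign_payload(secret, body), signature)

secret = b"test-secret"
body = json.dumps({"event": "order.paid", "id": 42}).encode()
sig = sign_payload(secret, body)

assert verify_webhook(secret, body, sig)                 # genuine payload passes
assert not verify_webhook(secret, body + b"x", sig)      # tampered payload fails
```

A QA suite would typically cover both branches: a correctly signed payload and a tampered or replayed one.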
Preferred Skills
- Data Literacy: Solid understanding of SQL for deep data verification and state checking.
- Infrastructure Awareness: Familiarity with cloud platforms (AWS, Azure, or GCP) and how services interact within them.
- Resilience Testing: Experience testing distributed systems or performance testing basics.
Why Star Apps?
- Impact: Your work directly affects our core product and global customer base.
- Growth: A fast-paced environment where your 1–3 years of experience will quickly scale through mentorship and ownership.
- Location: Work from the heart of Pune's tech hub in Baner.
Data Quality Engineer
Engineering - Hyderabad, Telangana
About Gradera — Digital Twin & Physical AI Platform
At Gradera, we are building a next-generation Digital Twin and Physical AI platform that enables enterprises to model, simulate, and optimize complex real-world systems. Our work brings together strategy, architecture, data, simulation, and experience design to power decision-making across large-scale operational environments such as manufacturing, logistics, and supply chain networks.
This platform-led initiative applies AI-native execution, advanced simulation, and governed orchestration to help organizations test scenarios, predict outcomes, and continuously improve performance. We operate with an enterprise-first mindset, prioritizing reliability, transparency, and measurable business impact as we build intelligent systems that scale beyond a single industry or use case.
Overview
We are seeking a detail-oriented Data Quality Engineer to ensure the integrity, accuracy, and reliability of data powering our digital twin and AI platforms. You will design and implement data quality frameworks, build automated validation pipelines, and establish quality metrics that enable trusted, simulation-ready data products. This role is critical to ensuring that operational decisions and ML models are built on a foundation of high-quality, governed data.
Our core data quality stack includes:
Data Quality Frameworks
- Delta Live Tables expectations for declarative quality enforcement
- Great Expectations for comprehensive data validation
- Databricks data profiling and quality monitoring
Platform & Tools
- Databricks SQL and PySpark for quality checks at scale
- Unity Catalog for lineage tracking and governance compliance
- Python for custom validation logic and anomaly detection
Observability
- Quality metrics dashboards and alerting
- Data profiling and statistical analysis
- Anomaly detection and drift monitoring
Key Responsibilities
- Design and implement data quality frameworks using Delta Live Tables expectations and Great Expectations
- Build automated data validation pipelines that enforce quality standards at ingestion and transformation stages
- Develop data profiling processes to understand data distributions, patterns, and anomalies
- Define and track data quality metrics (completeness, accuracy, consistency, timeliness, validity)
- Implement anomaly detection mechanisms to identify data drift and quality degradation
- Create quality dashboards and alerting systems for proactive issue identification
- Collaborate with data engineers to embed quality checks into ETL/ELT pipelines
- Partner with data architects to establish data quality standards and governance policies
- Investigate and perform root cause analysis for data quality issues
- Document data quality rules, thresholds, and remediation procedures
- Support data certification processes for simulation-ready and ML-ready datasets
- Drive continuous improvement in data quality practices and tooling
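The quality dimensions listed above (completeness, validity, and so on) reduce to small column-level checks that a framework then schedules and reports on. A minimal pure-Python sketch of two of them (not the Great Expectations or Delta Live Tables API; the sensor data is invented):

```python
def completeness(rows, column):
    """Fraction of rows where the column is present and non-null."""
    non_null = sum(1 for r in rows if r.get(column) is not None)
    return non_null / len(rows)

def validity(rows, column, predicate):
    """Fraction of non-null values that satisfy a domain rule."""
    values = [r[column] for r in rows if r.get(column) is not None]
    return sum(1 for v in values if predicate(v)) / len(values)

rows = [
    {"sensor_id": "a1", "temp_c": 21.5},
    {"sensor_id": "a2", "temp_c": None},   # missing reading
    {"sensor_id": "a3", "temp_c": 999.0},  # out-of-range reading
    {"sensor_id": "a4", "temp_c": 19.0},
]

assert completeness(rows, "temp_c") == 0.75
assert abs(validity(rows, "temp_c", lambda t: -40 <= t <= 60) - 2 / 3) < 1e-9
```

In a real pipeline these scores would be compared against thresholds, and a breach would trigger the alerting described above rather than a silent pass.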
Preferred Qualifications
- 6+ years of experience in data engineering or data quality roles, with 3+ years focused on data quality
- Track record of implementing enterprise-scale data quality frameworks
- Experience with Lakehouse architectures (Delta Lake, Iceberg)
- Familiarity with real-time data quality monitoring for streaming pipelines
- Experience working in agile, cross-functional teams
Highly Desirable
- Experience with data quality for digital twin or simulation platforms
- Familiarity with operational state data validation and temporal consistency checks
- Experience with graph data quality validation (Neo4j or similar)
- Exposure to ML data quality (feature validation, training data quality)
- Experience with data observability platforms
- Exposure to industrial domains such as Manufacturing, Logistics, or Transportation is a plus
Location: Hyderabad, Telangana
Department: Engineering
Employment Type: Full-Time
We are looking for a strong Mobile Engineer with backend exposure who can own end-to-end feature development. This is a mobile-heavy fullstack role where you will primarily build scalable mobile applications while contributing to backend services and APIs.
Key Responsibilities
- Design and develop high-quality mobile applications (primary focus)
- Build and integrate RESTful APIs and backend services
- Collaborate with product and design teams to ship features end-to-end
- Ensure performance, scalability, and reliability of mobile apps
- Write clean, maintainable, and testable code
- Participate in architecture discussions and technical decision-making
Must Have Skills
- Strong experience in mobile development (Flutter / React Native / iOS / Android)
- Solid understanding of backend development (Node.js / Java / Python / Go)
- Experience with API design, microservices, and databases
- Good understanding of system design and app performance optimization
- Familiarity with cloud platforms (AWS/GCP)
Good to Have
- Experience working in startup environments
- Exposure to CI/CD pipelines and DevOps practices
- Understanding of real-time systems or scalable architectures
1️⃣ Generative AI System Design
- Architect and implement end-to-end LLM-powered applications
- Build scalable RAG pipelines (chunking, embeddings, hybrid search, reranking)
- Design and implement agent-based workflows (tool calling, multi-step reasoning, orchestration)
- Integrate LLM APIs such as OpenAI and Anthropic, along with open-source models
- Implement structured output validation, grounding strategies, and hallucination mitigation
- Optimize inference cost, latency, and token efficiency
- Design evaluation pipelines for performance, accuracy, and safety
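The retrieval core of a RAG pipeline, as described above, can be sketched end to end: chunk documents, embed the chunks, and rank them against an embedded query. This toy version substitutes a bag-of-words vector for a learned embedding and omits hybrid search and reranking; all text and names are illustrative:

```python
import math
import re
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    # Naive fixed-size word chunking; real pipelines use token- or
    # structure-aware splitters with overlap.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    # Stand-in for a learned embedding: a bag-of-words term-count vector.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

doc = ("Invoices are due within thirty days of issue. "
       "Late payments accrue interest at two percent monthly. "
       "Refund requests must be filed within fourteen days.")
index = [(c, embed(c)) for c in chunk(doc)]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

top = retrieve("when are invoices due")
```

In production the retrieved chunks would be injected into the LLM prompt as grounding context, which is where the hallucination-mitigation work mentioned above comes in.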
2️⃣ Backend & Microservices Engineering
- Design scalable backend systems using Python
- Build REST and async APIs using FastAPI / Django
- Architect and implement microservices with clear service boundaries
- Implement service-to-service communication (REST, gRPC, event-driven messaging)
- Work with message brokers (Kafka / RabbitMQ)
- Optimize database performance (PostgreSQL, MongoDB)
- Implement caching strategies (Redis)
- Build observability: logging, monitoring, distributed tracing
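The Redis caching strategy mentioned above is most often cache-aside: check the cache, fall back to the source of truth on a miss, and store the result with a TTL. A sketch with a plain dict standing in for Redis (names and TTLs are arbitrary):

```python
import time

class CacheAside:
    """Cache-aside with TTL; a dict stands in for Redis here."""

    def __init__(self, loader, ttl_seconds=60):
        self.loader = loader          # fallback to the database on a miss
        self.ttl = ttl_seconds
        self.store = {}               # key -> (value, expiry timestamp)
        self.misses = 0

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]           # hit: serve from cache
        self.misses += 1
        value = self.loader(key)      # miss: load from the source of truth
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

cache = CacheAside(loader=lambda k: f"row-{k}")
assert cache.get("42") == "row-42"    # first read misses and loads
assert cache.get("42") == "row-42"    # second read is served from cache
assert cache.misses == 1
```

The TTL bounds staleness; invalidation on write (deleting the key when the underlying row changes) is the usual companion to this pattern.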
3️⃣ Cloud-Native Architecture & DevOps
- Design and deploy containerized services using Docker
- Orchestrate services using Kubernetes
- Implement CI/CD pipelines
- Ensure system scalability, resilience, and fault tolerance
- Apply distributed systems principles:
- Circuit breakers
- API gateway patterns
- Load balancing
- Horizontal scaling
- Saga patterns
- Zero-downtime deployments
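Of the distributed-systems patterns listed above, the circuit breaker is compact enough to sketch: after repeated failures the breaker opens and fails fast, then allows a probe call after a cooldown. A minimal illustration (thresholds and naming are arbitrary, not any particular library's API):

```python
import time

class CircuitBreaker:
    """Open after N consecutive failures; half-open after a cooldown."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None     # half-open: allow one probe call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0             # any success closes the circuit
        return result

breaker = CircuitBreaker(max_failures=2, reset_after=60)

def flaky():
    raise ConnectionError("downstream unavailable")

for _ in range(2):                    # two failures trip the breaker
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

try:
    breaker.call(flaky)               # now the breaker fails fast
except RuntimeError as e:
    msg = str(e)
```

Failing fast stops a struggling downstream service from being hammered by retries, which is the resilience property the pattern exists to provide.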
Senior Data (Platform) Engineer
Location: Hyderabad | Department: Technology, Data
About the Role
Are you passionate about building reliable, scalable data platforms that make analytics and AI development easier? As a Senior Data Platform Engineer, you will be hands-on in building, operating, and improving our core data platform and AI/LLM enablement tooling.
You’ll focus on infrastructure, orchestration, CI/CD, and reusable frameworks that support analytics engineering and AI-driven use cases. You’ll work closely with Analytics Engineering and Insights teams and support other departments as they integrate with our data systems.
What You'll Do
Data Platform & Infrastructure
- Build, deploy, and operate cloud infrastructure for data and AI workloads using Infrastructure as Code (Terraform).
- Provision and manage cloud resources across development, staging, and production environments.
- Develop and maintain CI/CD pipelines for data transformations, orchestration workflows, and platform services.
- Operate and scale containerized workloads on Kubernetes, including Airflow, internal APIs, and AI/LLM services.
- Troubleshoot and resolve infrastructure, pipeline, and orchestration failures to ensure platform reliability.
- Maintain and support existing ML services and pipelines to ensure stability and reliability (no expectation to design or develop new ML models or training pipelines).
- Continuously monitor and optimize platform performance and cost.
Framework, Tooling and Enablement
- Build and maintain reusable frameworks and patterns for dbt, Airflow, cloud data warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.), and internal data and AI APIs
- Build and support infrastructure and pipelines for AI/LLM-based use cases, including orchestration, integration, and serving.
- Improve developer experience for Analytics Engineering and Insights teams by reducing friction in local development, deployments, and production workflows.
- Create and maintain technical documentation and examples to support self-service analytics and data development.
What You’ll Need
Technical Skills & Experience
- 5+ years of experience in data engineering, platform engineering, or similar hands-on roles.
- Strong programming skills in Python and SQL.
- Hands-on experience with:
- Terraform
- Airflow
- dbt
- Kubernetes
- Cloud platforms (AWS, Google Cloud, or Microsoft Azure)
- CI/CD pipelines (GitHub Actions, GitLab CI, CircleCI, etc.)
- Cloud data warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.)
- Strong understanding of analytical data models and how analytics teams consume data.
- Experience integrating and operating LLM-based pipelines and services (not model training).
Soft Skills & Collaboration
- Strong problem-solving skills and ability to debug complex platform issues.
- Strong preference for declarative development, with the ability to clearly separate what a system should do from how it is implemented.
- Clear communicator who can work effectively with both technical and non-technical stakeholders.
- Pragmatic, ownership-driven mindset with a focus on reliability and simplicity.
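The preference for declarative development noted above can be made concrete with Python's standard-library graphlib: the pipeline spec is pure data (what runs and in what dependency order), while the runner is a separate piece of code (how it runs), which is the same separation dbt and Airflow enforce. Task names here are invented for the example:

```python
from graphlib import TopologicalSorter

# Declarative spec: task -> list of upstream dependencies. This is the
# "what"; nothing here says how or where tasks execute.
pipeline = {
    "extract_orders": [],
    "extract_customers": [],
    "transform_joined": ["extract_orders", "extract_customers"],
    "publish_mart": ["transform_joined"],
}

def run(spec, tasks):
    """The 'how': resolve a valid order, then execute each task."""
    order = list(TopologicalSorter(spec).static_order())
    results = [tasks[name]() for name in order]
    return results, order

log = []
tasks = {name: (lambda n=name: log.append(n) or n) for name in pipeline}
_, order = run(pipeline, tasks)

assert order.index("transform_joined") > order.index("extract_orders")
assert order[-1] == "publish_mart"
```

Because the spec is data, the same runner can be swapped for a parallel or remote executor without touching any pipeline definition.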
Why Join Us?
We welcome people from all backgrounds who seek the opportunity to help build a future where we connect the dots for international property payments. If you have the curiosity, passion, and collaborative spirit, work with us, and let’s move the world of PropTech forward, together.
Redpin, Currencies Direct and TorFX are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, colour, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.
Software Engineer - Lending Platform
2 - 5 years Experience · Seed Stage · On-site preferred · Mumbai
What Neenv Is
Neenv is a fintech platform building channel finance infrastructure for MSME dealer networks in India. We sit between anchor companies and their dealer ecosystems, providing the credit technology layer while lending partners provide the capital.
The platform powers four supply chain finance products: Channel Financing, Working Capital Loans, Factoring, and Supplier Financing. The lending engine is configuration-driven. New products, rate changes, new anchors, new lenders -- config changes only.
What Problems Are We Solving
India runs on dealer networks. Hundreds of thousands of distributors, resellers, and stockists sit inside large corporate supply chains - buying from anchors, selling downstream, keeping markets liquid. These are creditworthy businesses. Their anchor relationships are essentially proof of cash flow. And yet they are chronically underfinanced.
Banks are too slow. Informal credit is expensive. The anchor relationship that makes a dealer viable is invisible to traditional lenders.
We are building the infrastructure to change that. A configuration-driven lending engine for channel finance - powering working capital credit to dealer networks at scale, with the anchor relationship as the underwriting signal.
Who You'll Be Working With
The founding team brings over 50 years of combined banking and channel finance experience. Founders with 25+ years each in client coverage, trade finance, risk management, and SCF sales across Standard Chartered and IDFC First Bank - having collectively managed over $1Bn in channel finance assets with sub-1% delinquency.
The CTO brings solid supply chain finance fintech experience with a product-first, AI-native approach to lending infrastructure.
You are not joining a first-time experiment. You are joining people who have spent careers building exactly what Neenv is now automating.
What Makes Your Role
We have a production lending infrastructure in place. It handles loan origination, repayment waterfalls, interest accrual, payment processing, ledger management, and multi-product configuration. You will own this platform end to end.
Understand the codebase end to end. Drive every config change, every extension, every integration. Be the person who can answer "can the system do X?" without waiting for anyone.
That is the first act.
The second act: we are building AI-native lending workflows. A credit decisioning agent that processes bureau reports, bank statements, GST data, and ITR. A collections agent that automates follow-up and escalation. Ops agents that handle accruals, month-end, lender reporting, and anomaly detection.
You will design this architecture from day one.
What Works Well Here
Someone who gets uncomfortable when they don't fully understand a system. Who reads error logs with curiosity. Who treats financial logic correctness as non-negotiable. Who can hold a product conversation and a technical conversation in the same breath.
If you have built something non-trivial and can explain every decision you made, that is the signal.
What You Need
- PHP and Laravel -- solid working proficiency
- Python -- working proficiency for AI agents, data processing, integrations
- SQL and relational database design -- financial data where a paisa-level rounding error is a production bug
- API design and third-party integration patterns -- REST, webhooks, handling flaky vendor APIs
- LLM and agent workflows -- curiosity or working familiarity. Strong signal if you have built with Claude, GPT, or any agent framework
- Fintech, NBFC, or any domain where data accuracy has real consequences
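The point above about paisa-level rounding can be made concrete: binary floats cannot represent most currency amounts exactly, so financial code typically uses Decimal (or integer paise) with explicit rounding. A minimal sketch with invented figures:

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary floats cannot represent 0.1 or 0.2 exactly, so sums drift.
assert 0.1 + 0.2 != 0.3

def monthly_interest(principal: Decimal, annual_rate: Decimal) -> Decimal:
    """One month of simple interest, rounded to the paisa (2 dp)."""
    raw = principal * annual_rate / Decimal(12)
    return raw.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Construct Decimals from strings, never from floats.
principal = Decimal("99999.99")
interest = monthly_interest(principal, Decimal("0.18"))

assert interest == Decimal("1499.99985").quantize(Decimal("0.01"),
                                                  rounding=ROUND_HALF_UP)
assert interest == Decimal("1500.00")
```

The rounding mode is a product decision (HALF_UP here, but regulators or lenders may mandate otherwise), which is why it is spelled out rather than left to a default.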
What We Are Offering
Fixed salary, competitive for early-stage fintech in Mumbai. Direct founder access. Ownership over a production lending system and the AI layer being built on top. For the right fit, a clear path to owning the entire technical stack as we scale.
We cannot offer a large team, defined career ladders, or a 500-person safety net. We can offer a genuinely hard problem, speed, and the chance to build something that matters from nearly the beginning.
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
We're seeking an experienced Senior Backend Engineer to join our team. As a senior backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop. This includes APIs, databases, and server-side logic.
Responsibilities:
● Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
● Write clean, efficient, and well-documented code that adheres to industry standards and best practices
● Code Quality: Ensure code quality through code reviews, adherence to best practices, and continuous improvement
● Mentorship: Guide and mentor team members, fostering growth and innovation
● Collaboration: Work closely with stakeholders to align technical goals with business objectives
● Problem-Solving: Analyze and resolve technical challenges promptly
● Innovation: Stay updated with the latest technology trends and integrate them into solutions
Requirements:
● 7+ years of experience building scalable and reliable backend systems
● Strong expertise in NodeJS/NestJS, Express, PostgreSQL
● Experience with microservices architecture and distributed systems
● Proficiency in database design (SQL and NoSQL)
● Knowledge of cloud platforms (AWS, Azure, or GCP) and CI/CD pipelines
● Deep understanding of design patterns, data structures, and algorithms
● Hands-on experience with containerization technologies like Docker and orchestration tools like Kubernetes
● Exceptional communication and leadership skills
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to effectively lead and maintain a collaborative team environment
We're seeking an experienced Backend Software Engineer to join our team. As a backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop. This includes APIs, databases, and server-side logic.
Responsibilities:
● Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
● Write clean, efficient, and well-documented code that adheres to industry standards and best practices
● Participate in code reviews and contribute to the improvement of the codebase
● Debug and resolve issues in the existing codebase
● Develop and execute unit tests to ensure high code quality
● Work with DevOps engineers to ensure seamless deployment of software changes
● Monitor application performance, identify bottlenecks, and optimize systems for better scalability and efficiency
● Stay up-to-date with industry trends and emerging technologies; advocate for best practices and new ideas within the team
● Collaborate with cross-functional teams to identify and prioritize project requirements
Requirements:
● 3+ years of experience building scalable and reliable backend systems
● Strong expertise in NodeJS/NestJS, Express, PostgreSQL
● Experience with microservices architecture and distributed systems
● Proficiency in database design (SQL and NoSQL)
● Knowledge of cloud platforms (AWS, Azure, or GCP) and CI/CD pipelines
● Deep understanding of design patterns, data structures, and algorithms
● Hands-on experience with containerization technologies like Docker and orchestration tools like Kubernetes
● Exceptional communication and leadership skills
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to effectively lead and maintain a collaborative team environment
Required Qualifications:
• OOPS - In-depth understanding of Object-Oriented Programming principles
• SOLID - In-depth theoretical and practical knowledge of applying SOLID design principles
• Architectural Patterns - In-depth understanding of design patterns and experience designing and building complex architecture solutions
• React + TypeScript - Extensive hands-on experience in React + TypeScript, building high-performance, complex frontends
• Unit testing - Ability to write unit tests using the Jest framework
• .NET Core Web API - Extensive hands-on experience writing Web APIs using .NET Core. Should have advanced knowledge of middleware, auth flows, etc.
• C# - Solid knowledge of advanced C# language features such as lambda functions, generics, etc.
• Entity Framework - Solid knowledge of Entity Framework database-first and code-first approaches
• Unit testing - Ability to write unit tests using popular mock frameworks and the xUnit framework
• SQL - Advanced - Extensive hands-on experience in Microsoft SQL Server (DDL, DML, aggregates, functions, stored procedures, etc.)
• Query Performance Tuning - Ability to understand query plans and tune complex queries to improve performance
• Core services - Advanced knowledge and hands-on experience building applications hosted in AWS using ECS containers, API Gateway, Lambda, and Postgres/DynamoDB
• IaC - Hands-on experience writing Terraform scripting; CI/CD with GitHub pipelines
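Query performance tuning of the kind described above starts from the query plan. The stack in this role is Microsoft SQL Server, but the principle is easy to demonstrate with SQLite's EXPLAIN QUERY PLAN from Python: an unindexed predicate forces a full scan, and adding a covering index turns it into an index search. All table and index names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable step in column 3.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Without an index, the equality predicate forces a full table scan.
before = plan("SELECT * FROM orders WHERE customer_id = 7")

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan("SELECT * FROM orders WHERE customer_id = 7")

assert "SCAN" in before.upper()
assert "USING INDEX" in after.upper()
```

On SQL Server the equivalent workflow uses the actual execution plan (or SET STATISTICS IO) to spot scans, key lookups, and missing-index warnings; the reasoning is the same.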
Role Overview
We are looking for a Saviynt-focused IAM professional at an architecture/engineering level with deep expertise in Identity Governance and Administration (IGA). The candidate will drive end-to-end Saviynt solution design, implementation, and optimization, ensuring scalable, secure, and compliant identity ecosystems across enterprise environments.
Key Responsibilities
- Saviynt Architecture & Platform Engineering:
- Design and implement scalable Saviynt architecture, including tenant setup, data model design, and performance optimization
- Develop and manage advanced rules, workflows, and business logic within Saviynt
- Drive platform customization, plugin development, and REST/API-based integrations
- IGA Solution Design:
- Architect and implement end-to-end IGA solutions including Access Request System (ARS), SoD (Segregation of Duties), and Certification/Recertification frameworks
- Define RBAC models, entitlement governance strategies, and lifecycle management processes
- Identity Integration & Ecosystem:
- Lead integrations with enterprise applications, directories, and cloud platforms using connectors, APIs, and event-driven mechanisms
- Work closely with cross-functional teams to enable application onboarding and automated provisioning
- AD / Azure AD / Multi-Tenant Expertise:
- Architect identity models across Active Directory (AD) and Azure Active Directory (AAD) environments
- Design group structures, OU strategies, and identity lifecycle flows
- Leverage Multi-Tenant Organization (MTO) capabilities for cross-tenant identity governance
- Governance, Risk & Compliance:
- Implement and optimize SoD policies, access certifications, and audit controls
- Ensure compliance with security standards and regulatory frameworks
- Automation & Optimization:
- Enhance self-service capabilities, workflow automation, and access request efficiencies
- Continuously improve performance, scalability, and operational stability of the Saviynt platform
- Code Quality & Delivery Excellence:
- Maintain high-quality code standards, documentation, and deployment practices
- Support production environments, troubleshoot issues, and ensure platform reliability
Required Skills & Experience
- 8+ years of hands-on experience in Saviynt IGA implementation and engineering
- Strong expertise in: Saviynt EIC platform architecture & configuration; ARS, SoD, Recertification, RBAC; REST APIs, JSON, SQL, and scripting
- Deep understanding of: Active Directory (AD) & Azure AD (AAD); Identity lifecycle management & provisioning workflows
- Experience in enterprise integrations and large-scale deployments
- Exposure to Multi-Tenant Organization (MTO) is a strong plus
Good to Have
- Experience with other IAM tools (e.g., SailPoint, Okta)
- Knowledge of cloud platforms (Azure, AWS)
- Understanding of security frameworks (ISO, SOX, GDPR)
We are looking for a highly skilled Full Stack Developer (MERN Stack) with 3–5 years of experience to join our growing team. You will have the opportunity to work on cutting-edge technology solutions, build products from scratch, and contribute to scalable systems handling large volumes of data.
Key Responsibilities:
- Design, develop, and maintain scalable full-stack applications
- Build responsive and high-performance user interfaces using modern frontend frameworks
- Develop robust backend services and APIs
- Ensure seamless system performance while handling large-scale data without slowdowns
- Collaborate with cross-functional teams (product, design, QA) to meet business goals
- Optimize applications for maximum speed, scalability, and reliability
- Participate in architecture discussions and contribute to technical decisions
Required Skills & Qualifications:
Frontend
- Strong experience in React.js
- Hands-on experience with Next.js (mandatory)
- Good understanding of UI/UX principles and responsive design
Backend
- Solid experience in Node.js
- Experience with Python or Java is a plus
- Strong knowledge of RESTful APIs and microservices architecture
Databases
- Strong experience with SQL (mandatory)
- Experience with MongoDB is a plus
Caching & Messaging
- Experience with at least one: Redis, Kafka, or Cassandra
Other Requirements
- Strong problem-solving and analytical skills
- Ability to work in a fast-paced, collaborative environment
- Good communication and stakeholder management skills
Good to Have:
- Cloud certifications (AWS / Azure / GCP)
- Experience working on high-scale or distributed systems
- Exposure to DevOps practices and CI/CD pipelines
Why Join Us:
- Opportunity to work on cutting-edge tech and greenfield projects
- Ownership and freedom to build solutions from scratch
- Collaborative and growth-focused work environment