
Location: Delhi NCR
Opening: Immediate
Search Context
There are over 1.8 billion non-salaried informal sector workers globally, and roughly 700 million Indians are
not eligible for pension or other social protection benefits. Without an urgent and effective
response to pension exclusion, they face the grim prospect of extreme poverty for over 20
years once they are too old to work.
pinBox is the only global pensionTech committed exclusively to mass-scale digital micropension inclusion among self-employed women and youth. We deploy our white-labelled,
API-enabled pension administration and delivery platform, our unique deployment model and
a simple, intuitive UI/UX to make access to regulated pension, savings and insurance
products easy for non-salaried informal sector workers. We're working actively
with governments, regulators, multilateral aid agencies and leading financial inclusion
stakeholders in Asia and Africa. The pinBox model is already operating in Rwanda, Kenya
and India. We will expand to Bangladesh, Uganda, Chile, Indonesia and Nigeria by 2023.
Governments and pension regulators use our pensionTech to jumpstart digital micropension and insurance inclusion among informal sector workers. Pension funds and insurers
use our pensionTech to build a mass market for their products beyond their traditional agent-led customer base. Banks, MNOs, cooperatives, MFIs, fintech firms and gig platforms use
our plug-and-play pensionTech to instantly offer an integrated social protection solution to
their clients, members and employees without any new investment in IT or capacity
enhancement.
We’ve recently completed our first equity fundraise to enhance our engineering, business
and delivery capacity and embark on the next stage of pinBox pensionTech development
and expansion. By 2025, we aim to enable and assist 100 million excluded individuals to
start saving for their old age in a secure, affordable and well-regulated environment.
pinBox is looking for senior software engineers who are deeply passionate about using IT to
solve difficult, real-life problems at scale across multiple countries.
The Senior Software Engineer will be expected to:
1. Design, code, test, deploy and maintain applications to satisfy business requirements,
2. Plan and implement technical efforts through all phases of the software development
process,
3. Collaborate cross-functionally to make continuous improvements to the pinBox
pensionTech platform,
4. Help drive engineering plans through a broad approach to engineering quality
(consistent and thoughtful patterns, improved observability, unit and integration testing,
etc.),
5. Adhere to national and global architecture standards, risk management and security
policies,
6. Monitor the performance of applications and work with developers to continuously
improve and optimize performance.
The ideal candidate possesses:
1. An undergraduate degree in engineering,
2. At least 6 years’ experience as a software engineer or in a similar role,
3. Experience working with distributed version control systems such as Git or Mercurial,
4. Frontend: Experience with HTML, CSS, Bootstrap, JavaScript and jQuery is necessary.
Experience with React or Angular will be an advantage,
5. Backend: Experience with Django/Python, PostgreSQL or any other RDBMS is
mandatory. Experience with Redis will be an advantage,
6. Experience in working with AWS / Azure / Google Cloud,
7. As our applications use a number of third-party micro-services, experience with REST
APIs is necessary; experience with the Indian digital finance ecosystem (UPI, e-KYC)
will be an advantage,
8. Critical thinking and problem-solving skills, and
9. Excellent teamwork and interpersonal skills, a keen eye for detail and the ability to
function effectively and proactively under tight deadlines.
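Requirement 7 is the crux of the integration work: third-party micro-services fail transiently, so REST clients need retry logic. A minimal sketch in Python, using a hypothetical `flaky_ekyc_lookup` stand-in rather than any real UPI or e-KYC endpoint:

```python
import time

def call_with_retry(fn, retries=3, base_delay=0.01):
    """Call a flaky third-party service, retrying with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # exhausted all retries; surface the error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky_ekyc_lookup():
    # Hypothetical stand-in for an e-KYC micro-service call:
    # fails twice with a connection error, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return {"kyc_status": "verified"}

result = call_with_retry(flaky_ekyc_lookup)
```

In production one would also cap total elapsed time and add jitter, but the pattern above is the core of resilient third-party API integration.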

Similar jobs
At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it: in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform works “magic” on pre-construction workflows, cutting them from weeks to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training about the tech stack, with an opportunity to avail virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best - we are Gold partners of the AWS, Razorpay, and Microsoft startup programs, giving you access to rich talent to discuss and brainstorm ideas with.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore, Remote
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!
🚨 We’re Building a “Top 1% Engineering Org”
We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.
Think:
→ Rewriting legacy systems into AI-native architectures
→ Embedding LLMs + Agentic AI into core workflows
→ Reimagining platforms, infra, and data systems for the next decade
This is the kind of shift you’d expect from Google, Microsoft, or Meta —
except you get to build it from day 0 → scale it globally.
About the Role / Team
We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.
This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.
You will be working on:
- Agentic AI systems & LLM-powered workflows
- Distributed, scalable backend systems
- Enterprise-grade AI platforms
- Automation-first engineering environments
🚀 The Mandate
Own and evolve the technical backbone of an AI-first enterprise platform.
You will define architecture across LLM-powered systems, distributed services, and data platforms — and lead critical transformations from legacy → AI-native systems.
🧩 What You’ll Do
- Architect large-scale distributed systems powering AI-driven workflows
- Lead 0→1 and 1→N platform builds (LLM integrations, agentic systems, orchestration layers)
- Redesign legacy systems into scalable, modular, AI-native architectures
- Drive system design excellence across teams (APIs, infra, observability, reliability)
- Make high-stakes decisions on trade-offs (latency, cost, scalability, model performance)
- Mentor senior engineers and influence engineering culture/org standards
- Partner with product, data, and leadership on long-term technical strategy
🧠 What We’re Looking For
- Proven track record building high-scale backend or platform systems
- Deep expertise in distributed systems, microservices, cloud (AWS/GCP/Azure)
- Strong exposure to data systems, data infrastructure, and real-time architectures
- Experience or strong interest in LLMs, GenAI, or AI system design
- Exceptional system design, abstraction, and problem-solving ability
- High ownership mindset — you think in terms of systems, not tickets
- Strong coding skills in Python / Java / Go / Node.js
- Solid understanding of data structures, system design basics, and backend architecture
- Experience building scalable APIs and services
- Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
- Strong debugging, problem-solving, and ownership mindset
Nice to Have
- Experience integrating LLMs, vector databases, or AI pipelines
- Contributions to architecture at scale
- Experience with Agentic AI / LLM orchestration frameworks
- Background in product engineering or platform companies
- Exposure to global-scale systems (millions of users / high throughput)
🔥 What Sets You Apart
- Built platforms used by millions of users / high-throughput systems
- Experience with event-driven systems, stream processing, or infra platforms
- Prior work on AI/ML platforms, model serving, or intelligent systems
Core AI Backend Engineer – LLM Fine-Tuning
You know that moment when you don’t just debug code — you train a model, fine-tune it, and suddenly it understands your domain better than you expected? That’s the kind of magic we’re looking for.
We’re building something that turns chaotic social video data into crystal-clear business intelligence. Not just another API — but AI-backed architecture fine-tuned to our world. Systems that marketing teams thank you for, because they feel the intelligence, not just the infrastructure.
Either you feel the craft when you read this, or you don’t. This isn’t just another backend role. This is where you bring together scalable systems and cutting-edge LLMs to build something the world hasn’t seen before.
Who We Are
We’re a small, global team that ships fast. Every line of code and every model choice affects millions of video analysis requests.
Our engineers don’t just build APIs – they architect solutions, they optimize at scale, and now they fine-tune models to make AI work in the real world. Our CPTO still codes. Our senior engineers make complexity look effortless. Our backend team sets a standard that makes others ask how we move so fast.
What We Need
We need someone who’s lived both sides of this life:
- Backend excellence: building high-scale, high-performance systems.
- LLM fine-tuning: hands-on with open-source models, not just calling APIs.
Someone who can pick up a requirement at 2pm and by 6pm not only have working endpoints, but also a fine-tuned model running behind them, customized to our use case.
Your Craft
- JavaScript/TypeScript & NodeJS as core backend tools.
- Next.js for full-stack where needed.
- Rust when performance is non-negotiable.
- Golang/Python as comfortable tools of choice.
- MySQL/Postgres/Redis — wielded with intention.
- AWS ecosystem — your playground, not your puzzle.
- LLM/AI integration you’ve actually shipped.
- Open-source LLM fine-tuning experience:
  - Bringing in open-source models (LLaMA, Mistral, Falcon, etc.).
  - Fine-tuning/adapting them for specific domains.
  - Optimizing for inference cost, latency, and accuracy.
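For readers unfamiliar with the fine-tuning side: the low-rank adapter (LoRA) technique behind much open-source fine-tuning reduces, at merge time, to simple linear algebra. A pure-Python sketch of the merge step (real work would use libraries such as PEFT; the matrices here are toy values):

```python
# LoRA-style weight merge in pure Python (no ML libraries).
# Instead of updating the full weight matrix W (d_out x d_in), LoRA trains
# two small matrices B (d_out x r) and A (r x d_in) and adds their scaled
# product as a delta: W' = W + (alpha / r) * B @ A.

def matmul(X, Y):
    """Naive matrix multiply for lists of lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_merge(W, A, B, alpha=1.0):
    """Merge low-rank adapters into the frozen base weights."""
    r = len(A)                       # adapter rank
    delta = matmul(B, A)             # (d_out x r) @ (r x d_in) -> (d_out x d_in)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: 2x2 base weights, rank-1 adapters.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]            # r=1, d_in=2
B = [[0.5], [0.25]]         # d_out=2, r=1
W_merged = lora_merge(W, A, B, alpha=1.0)
```

The point of the technique is that only A and B (far fewer parameters than W) are trained, which is what makes domain adaptation of large open-source models affordable.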
The Reality
The work is beautifully complex. The scale is real and growing. The problems are the kind that wake you up at 3am with solutions.
If you get your energy from building backend systems and adapting LLMs to make them smarter for real-world use, you’ll probably fall in love with what we do. If you’re only interested in APIs without touching models, this won’t be your thing — and that’s completely okay.
How to Apply
If you’re reading this thinking “finally, a team that actually cares about real AI engineering” — we’d love to see something you’ve built.
Not just a resume. Show us your craft:
- An LLM fine-tuning repo.
- A domain-adapted model you worked on.
- A system design where you combined backend and AI.
- Or even a short write-up or voice note explaining what you’ve fine-tuned.
We’re genuinely excited to see what you’ve done and have a meaningful conversation about whether this could be magic for both of us.
- Experience in API development using Java will be a plus.
- Excellent knowledge and experience in writing testable, scalable, flexible, robust and efficient web applications using Java EE 6/7 technologies, specifically Spring Core, Spring Boot, Spring Data, Spring Batch and JPA
- Experience in successfully deploying Java-based applications in production, and an understanding of load balancing, authentication, and fault tolerance with Tomcat.
- Experience in database modeling (MySQL/NoSQL databases such as MongoDB)
- Knowledge of integrating with Ant, Maven, Git and shell scripting.
- Strong backend experience developing a data layer using at least one ORM framework such as Hibernate or JPA.
- Strong RDBMS and SQL skills. Experience in MySQL, Teradata and warehousing databases.
- Experience in analytics frameworks and visualization products.
- Excellent knowledge and experience of Maven, Continuous Integration, and Continuous Delivery with Jenkins.
- Experience with JavaScript frameworks, especially Angular, is a definite plus.
- Java 8, J2EE, Spring Boot, Microservices, Apache Spark, DevOps, Advanced SQL, preferably with expertise in data engineering/data analytics
- ELK (Elasticsearch, Logstash, Kibana) stack, Teradata, any NoSQL database
- Hands-on experience in maintaining products on cloud technologies such as PCF, Azure, Docker and Kubernetes
- NodeJS, Angular 2+, GitLab with CI/CD
- Hands-on experience with Unix servers, shell scripting, large data processing and performance tuning
- Experience working with test automation frameworks such as Selenium, TestNG, Python, Cucumber, Karma and Karate/Jasmine
- Experience using Eclipse, Spring Tool Suite, project build tools (Maven, Gradle, etc.) and JIRA for ALM.
Requirements
- Previous working experience as a MEAN Stack Developer for 1+ years
- BSc degree in Computer Science or similar relevant field
- In-depth knowledge of NodeJS, ExpressJS or Restify
- Experience implementing applications using Angular 1 or React
- Experience creating front end applications using HTML5, Angular, LESS/SASS
- Hands-on experience with JavaScript Development on both client and server-side
- Experience with modern frameworks and design patterns, minimum one-year experience with MEAN Fullstack paradigm
- Knowledge of the following will be considered as an advantage:
- Consumer Web Development Experience for High-Traffic, Public Facing web applications
- Experience with cloud technologies also a plus
- Creating secure RESTful web services in XML and JSON, JavaScript, jQuery
- Continuous integration (Jenkins/Hudson) and version control (SVN, Git).
Benefits of working with Ebizz Family:
- 5 working days
- Paid Overtime
- Team Exposure
- PF and ESIC benefits
- Growth in a short time
- Co-operative Teammates
- Friendly Environment
Project Overview: We are looking for an expert-level Postgres database developer to work on a software application development project for a Fortune 500 US-based telecom client. The application is web-based and used across multiple teams to support their business processes. The developer will be responsible for developing various components of the Postgres database and for light administration of the database.
Key Responsibilities:
- Collaborate with onshore, offshore and other team members to understand the user stories and develop code.
- Develop and execute unit test scripts.
- Collaborate with onshore developers, the product owner and the client team to perform work in an integrated manner.
Professional Attributes:
- Ability to work independently and seek guidance as and when necessary
- Good communication skills
- Flexibility to work in different time zones if necessary
- Good team player
- Mentoring juniors
Experience preferred:
- Extensive experience in Postgres database development (expert level)
- Experience in Postgres administration.
- Must have working experience with GIS data functionality
- Experience handling large datasets (tables with 50-100M rows)
- Preferred – exposure to Azure or AWS
- Must have skillsets for database performance tuning
- Familiarity with web applications
- Ability to work independently with minimal oversight
- Experience working cohesively in integrated teams
- Good interpersonal, communication, documentation and presentation skills.
- Prior experience working in agile environments
- Ability to communicate effectively both orally and in writing with clients, Business Analysts and Developers
- Strong analytical, problem-solving and conceptual skills
- Excellent organizational skills; attention to detail
- Ability to resolve project issues effectively and efficiently
- Ability to prioritize workload and consistently meet deadlines
- Experience working with onshore-offshore model
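The performance-tuning skill set above often comes down to reading query plans before and after adding an index. A self-contained sketch (using SQLite from the Python standard library as a stand-in for Postgres, where the equivalent tool is EXPLAIN ANALYZE; the `circuits` table is hypothetical):

```python
import sqlite3

# SQLite stands in for Postgres purely so this sketch is self-contained;
# the principle (index the filter column, then inspect the plan) carries over.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE circuits (id INTEGER PRIMARY KEY, region TEXT, capacity INTEGER)")
cur.executemany("INSERT INTO circuits (region, capacity) VALUES (?, ?)",
                [(f"region-{i % 50}", i) for i in range(1000)])

# Without an index: the planner must scan the whole table.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM circuits WHERE region = 'region-7'").fetchall()

cur.execute("CREATE INDEX idx_circuits_region ON circuits(region)")

# With the index: the planner switches to an index search.
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM circuits WHERE region = 'region-7'").fetchall()
```

On large tables (the 50-100M-row range this role mentions) the difference between these two plans is the difference between milliseconds and minutes.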
Role: Backend Developer
Experience: 2+ years
Qualification: BE Computer Engineering/MCA
Programming language: Python, with the Django framework
Responsibilities
- Building REST APIs and services in the Django framework
- Building reusable code and libraries for future use
- Optimization of the application for maximum speed and scalability
- Implementation of security and data protection
- Design and implementation of the database schema
- Design and implementation of data storage solutions
- Implementing CI/CD pipeline
- Proficient knowledge of a back-end programming language Python
- Hands-on experience with Python Frameworks like Django
- Proficient knowledge of MySQL, PostgreSQL
- Creating database schemas that represent and support business processes (Relational & NoSQL)
- Understanding of queueing systems like Redis/AWS SQS
- User authentication and authorization between multiple systems, servers, and environments
- Data migration, transformation, and scripting
- Understanding differences between multiple delivery platforms such as mobile vs desktop, and optimizing output to match the specific platform
- Proficient understanding of code versioning tools, such as Git
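On the security and authentication responsibilities above: a Django project would normally rely on the framework's built-in auth, but the underlying idea, salted and iterated password hashing with constant-time comparison, can be sketched with the standard library alone:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a salted PBKDF2 digest; never store the raw password."""
    salt = salt or os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=100_000):
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)

salt, digest = hash_password("s3cret")
ok = verify_password("s3cret", salt, digest)
bad = verify_password("wrong", salt, digest)
```

The constant-time comparison matters: a naive `==` on digests can leak timing information to an attacker probing the authentication endpoint.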
Brownie Points
- Experience in Docker
- Hands-on knowledge of implementing CI/CD pipelines
- Experience in managing applications on AWS
- Management of hosting environment, including database administration and scaling application to support load changes
- Product wide application-level thinking on API and data modeling
- 10+ years of experience and expertise in database and systems architecture design, development and implementation;
- Expertise and experience in data structures, indexing, query, and retrieval;
- Excellent data management skills including ensuring data integrity, data access, security and archiving procedures;
- Knowledge of the language technology and video extraction research domains to be able to converse fluently with the research communities to transform research requirements into concrete formalisms.
- Experience in cross-platform development for multiple variants of Unix and Linux, including 32- and 64-bit
- Experience with NoSQL, SQL databases, statistics and algorithms
- Strong oral and written communication skills
- Design, implement and test novel database architecture designs to accommodate the multimodal data types used in Client managed technology evaluations
- Administer and monitor database and address data security solutions as applicable
- Design and develop efficient techniques for fast, optimized data access and transfer of distributed or networked database systems
- Design, implement and test novel data structures, data search, query and retrieval solutions to enable access to and processing of the multimodal data types used in Client managed technology evaluations
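The data search, query and retrieval work described above rests on structures like the inverted index, which maps each token to the documents containing it. A minimal sketch (the documents and whitespace tokenization are illustrative only):

```python
from collections import defaultdict

def build_index(docs):
    """Map each token to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, *terms):
    """Return documents containing ALL query terms (AND semantics)."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

# Toy corpus of multimodal-ish document descriptions.
docs = {
    1: "speech audio extraction",
    2: "video extraction pipeline",
    3: "audio pipeline",
}
idx = build_index(docs)
hits = search(idx, "extraction")
```

Production systems add ranking, stemming and compressed postings lists on top, but intersecting postings sets is still the core retrieval step.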
Candidates must have experience with start-up, product-based companies.
The opportunity is with a client in the e-mobility domain.
