
IntelliMatch Developer/Lead
at a leading MNC providing services to top banking companies
Responsibilities:
- Thorough understanding of reconciliation and hands-on development experience with IntelliMatch 9.2.7 or higher
- Comprehensive understanding of the SDLC for developing new reconciliations in IntelliMatch: involvement across all stages of the software development life cycle, including business requirement analysis, data mapping, validating data against requirements, build, unit testing, peer reviews, systems integration, and user acceptance testing.
- Analyze and develop test strategies, design test plans, and generate and execute test cases.
- Perform enhancements: analyze the existing setup, understand the requirement through proper impact analysis, provide estimates, and deliver quality code.
- Work independently, liaising with different teams to resolve dependencies on time; foresee issues/risks and flag them to the team lead/project manager.
- Guide the support team.
- Deep knowledge of IntelliMatch database linkage and functionality, necessary for timely resolution of user queries.
- Propose improvements related to the development and support activity.
- Develop stored procedures to build reports for end users using SQL Server 2016.

We are seeking enthusiastic and motivated fresh graduates with a strong foundation in programming, primarily in Python, and basic knowledge of Java, C#, or JavaScript. This role offers hands-on experience in developing applications, writing clean code, and collaborating on real-world projects under expert guidance.
Key Responsibilities
• Develop and maintain applications using Python as the primary language.
• Assist in coding, debugging, and testing software modules in Java, C#, or JavaScript as needed.
• Collaborate with senior developers to learn best practices and contribute to project deliverables.
• Write clean, efficient, and well-documented code.
• Participate in code reviews and follow standard development processes.
• Continuously learn and adapt to new technologies and frameworks.
Core Expectations
• Eagerness to Learn: Open to acquiring new programming skills and frameworks.
• Adaptability: Ability to work across multiple languages and environments.
• Problem-Solving: Strong analytical skills to troubleshoot and debug issues.
• Team Collaboration: Work effectively with peers and seniors.
• Professionalism: Good communication skills and a positive attitude.
Qualifications
• Bachelor’s degree in Computer Science, IT, or related field.
• Strong understanding of Python (OOP, data structures, basic frameworks like Flask/Django).
• Basic knowledge of Java, C#, or JavaScript.
• Familiarity with version control systems (Git).
• Understanding of databases (SQL/NoSQL) is a plus.
NOTE: A laptop with high-speed internet is mandatory.
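The qualifications above ask for Python OOP, data structures, and clean, well-documented code. A minimal sketch of what that looks like in practice, using entirely hypothetical names (`Task`, `TaskBoard`) for illustration:

```python
from dataclasses import dataclass


@dataclass
class Task:
    """A unit of work in a toy to-do tracker (illustrative names only)."""
    title: str
    done: bool = False


class TaskBoard:
    """Minimal OOP example: a list of tasks encapsulated behind clear methods."""

    def __init__(self) -> None:
        self._tasks: list[Task] = []

    def add(self, title: str) -> Task:
        task = Task(title)
        self._tasks.append(task)
        return task

    def complete(self, title: str) -> bool:
        # Returns True if a matching open task was found and marked done.
        for task in self._tasks:
            if task.title == title and not task.done:
                task.done = True
                return True
        return False

    def pending(self) -> list[str]:
        return [t.title for t in self._tasks if not t.done]
```

Usage: `board = TaskBoard(); board.add("write docs"); board.complete("write docs")` marks the task done, and `board.pending()` lists only open titles. Type hints, a dataclass, and short docstrings are the kind of "clean, well-documented code" the posting expects.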
Python Developer - AI/ML
Your Responsibilities
- Develop, train, and optimize ML models using PyTorch, TensorFlow, and Keras.
- Build end-to-end LLM and RAG pipelines using LangChain and LangGraph.
- Work with LLM APIs (OpenAI, Anthropic Claude, Azure OpenAI) and implement prompt engineering strategies.
- Utilize Hugging Face Transformers for model fine-tuning and deployment.
- Integrate embedding models for semantic search and retrieval systems.
- Work with transformer-based architectures (BERT, GPT, LLaMA, Mistral) for production use cases.
- Implement LLM evaluation frameworks (RAGAS, LangSmith) and performance optimization.
- Design and maintain Python microservices using FastAPI with REST/GraphQL APIs.
- Implement real-time communication with FastAPI WebSockets.
- Implement pgvector for embedding storage and similarity search with efficient indexing strategies.
- Integrate vector databases (pgvector, Pinecone, Weaviate, FAISS, Milvus) for retrieval pipelines.
- Containerize AI services with Docker and deploy on Kubernetes (EKS/GKE/AKS).
- Configure AWS infrastructure (EC2, S3, RDS, SageMaker, Lambda, CloudWatch) for AI/ML workloads.
- Version ML experiments using MLflow, Weights & Biases, or Neptune.
- Deploy models using serving frameworks (TorchServe, BentoML, TensorFlow Serving).
- Implement model monitoring, drift detection, and automated retraining pipelines.
- Build CI/CD pipelines for automated testing and deployment with ≥80% test coverage (pytest).
- Follow security best practices for AI systems (prompt injection prevention, data privacy, API key management).
- Participate in code reviews, tech talks, and AI learning sessions.
- Follow Agile/Scrum methodologies and Git best practices.
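Several of the responsibilities above (embedding models, pgvector, semantic search) rest on one idea: rank stored embeddings by cosine similarity to a query embedding. In production this ranking happens inside pgvector or a vector database; the following is a pure-Python sketch with toy vectors, and the names (`top_k`, the sample documents) are purely illustrative:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot(a, b) / (|a| * |b|). Vector stores typically
    rank by this (or by the equivalent distance, 1 - similarity)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def top_k(query: list[float], docs: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k document ids most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
    return ranked[:k]
```

With toy 3-dimensional "embeddings" `{"a": [1, 0, 0], "b": [0.9, 0.1, 0], "c": [0, 1, 0]}` and query `[1, 0, 0]`, `top_k` returns `["a", "b"]`. Real systems replace the brute-force sort with an approximate index (HNSW/IVF), which is what the "efficient indexing strategies" bullet refers to.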
Required Qualifications
- Bachelor's or Master's degree in Computer Science, AI/ML, or related field.
- 2–5 years of Python development experience (Python 3.9+) with strong AI/ML background.
- Hands-on experience with LangChain and LangGraph for building LLM-powered workflows and RAG systems.
- Deep learning experience with PyTorch or TensorFlow.
- Experience with Hugging Face Transformers and model fine-tuning.
- Proficiency with LLM APIs (OpenAI, Anthropic, Azure OpenAI) and prompt engineering.
- Strong experience with the FastAPI framework.
- Proficiency in PostgreSQL with pgvector extension for embedding storage and similarity search.
- Experience with vector databases (pgvector, Pinecone, Weaviate, FAISS, or Milvus).
- Experience with model versioning tools (MLflow, Weights & Biases, or Neptune).
- Hands-on with Docker, Kubernetes basics, and AWS cloud services.
- Skilled in Git workflows, automated testing (pytest), and CI/CD practices.
- Understanding of security principles for AI systems.
- Excellent communication and analytical thinking.
Nice to Have
- Experience with multiple vector databases (Pinecone, Weaviate, FAISS, Milvus).
- Knowledge of advanced LLM fine-tuning (LoRA, QLoRA, PEFT) and RLHF.
- Experience with model serving frameworks and distributed training.
- Familiarity with workflow orchestration tools (Airflow, Prefect, Dagster).
- Knowledge of quantization and model compression techniques.
- Experience with infrastructure as code (Terraform, CloudFormation).
- Familiarity with data versioning tools (DVC) and AutoML.
- Experience with Streamlit or Gradio for ML demos.
- Background in statistics, optimization, or applied mathematics.
- Contributions to AI/ML or LangChain/LangGraph open-source projects.
We’re hiring a remote, contract-based Backend & Infrastructure Engineer who can build and run production systems end-to-end.
You will build and scale high-throughput backend services in Golang and Python, operate ClickHouse-powered analytics at scale, manage Linux servers for maximum uptime, scalability, and reliability, and drive cost efficiency as a core engineering discipline across the entire stack.
What You Will Do:
Backend Development (Golang & Python)
- Design and maintain high-throughput RESTful/gRPC APIs — primarily Golang, Python for tooling and supporting services
- Architect for horizontal scalability, fault tolerance, and low-latency at scale
- Implement caching (Redis/Memcached), rate limiting, efficient serialization, and CI/CD pipelines
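The rate-limiting bullet above is commonly implemented as a token bucket. The posting's primary language is Golang, so treat this Python version as a language-agnostic sketch only; the class name and clock injection are illustrative choices, not a prescribed design:

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: refills `rate` tokens per second,
    up to `capacity`. A common scheme behind per-client API limits."""

    def __init__(self, rate: float, capacity: float, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.now = now          # injectable clock, eases testing
        self.last = now()

    def allow(self, cost: float = 1.0) -> bool:
        t = self.now()
        # Refill proportionally to elapsed time, clamped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A bucket with `rate=1, capacity=2` admits two back-to-back requests, rejects a third, and admits again after a second of refill. At scale this state usually lives in Redis (per the caching bullet) rather than in process memory, so limits hold across instances.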
Scalable Architecture & System Design
- Design and evolve distributed, resilient backend architecture that scales without proportional cost increase
- Make deliberate trade-offs (CAP, cost vs. performance) and design multi-region HA with automated failover
ClickHouse & Analytical Data Infrastructure
- Deploy, tune, and operate ClickHouse clusters for real-time analytics and high-cardinality OLAP workloads
- Design optimal table engines, partition strategies, materialized views, and query patterns
- Manage cluster scaling, replication, schema migrations, and upstream/downstream integrations
Cost Efficiency & Cost Optimization
- Own cost optimization end-to-end: right-sizing, reserved/spot capacity, storage tiering, query optimization, compression, batching
- Build cost dashboards, budgets, and alerts; drive a culture of cost-aware engineering
Linux Server Management & Infrastructure
- Administer and harden Linux servers (Ubuntu, Debian, CentOS/RHEL) — patching, security, SSH, firewalls
- Manage VPS/bare-metal provisioning, capacity planning, and containerized workloads (Docker, Kubernetes/Nomad)
- Implement Infrastructure-as-Code (Terraform/Pulumi); optionally manage AWS/GCP as needed
Data, Storage & Scheduling
- Optimize SQL schemas and queries (PostgreSQL, MySQL); manage data archival, cold storage, and lifecycle policies
- Build and maintain cron jobs, scheduled tasks, and batch processing systems
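The batch-processing bullet above usually comes down to one pattern: process a large dataset in fixed-size chunks so each archival or cleanup pass commits in bounded units instead of one huge transaction. A minimal sketch (the helper name `batched` is illustrative):

```python
from itertools import islice
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")


def batched(items: Iterable[T], size: int) -> Iterator[list[T]]:
    """Yield consecutive chunks of at most `size` items; the final
    chunk may be shorter. Works on any iterable, including DB cursors."""
    it = iter(items)
    while chunk := list(islice(it, size)):
        yield chunk
```

Usage: `for chunk in batched(rows, 1000): archive(chunk)` inside a cron job keeps memory and transaction size bounded regardless of table size.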
Uptime, Reliability & Observability
- Own system uptime: zero-downtime deployments, health checks, self-healing infra, SLOs/SLIs
- Build observability stacks (Prometheus, Grafana, Datadog, OpenTelemetry); structured logging, distributed tracing, alerting
- Drive incident response, root cause analysis, and post-mortems
Required Qualifications:
Must-Have (Critical)
- Deep proficiency in Golang (primary) and Python
- Proven ability to design and build scalable, distributed architectures
- Production experience deploying and operating ClickHouse at scale
- Track record of driving measurable cost efficiency and cost optimization
- 5+ years in backend engineering and infrastructure roles
Also Required
- Strong Linux server administration (Ubuntu, Debian, CentOS/RHEL) — comfortable living in the terminal
- Proven uptime and reliability track record across production infrastructure
- Strong SQL (PostgreSQL, MySQL); experience with high-throughput APIs (10K+ RPS)
- VPS/bare-metal provisioning, Docker, Kubernetes/Nomad, IaC (Terraform/Pulumi)
- Observability tooling (Prometheus, Grafana, Datadog, OpenTelemetry)
- Cron jobs, batch processing, data archival, cold storage management
- Networking fundamentals (DNS, TCP/IP, load balancing, TLS)
Nice to Have
- AWS, GCP, or other major cloud provider experience
- Message queues / event streaming (Kafka, RabbitMQ, SQS/SNS)
- Data pipelines (Airflow, dbt); FinOps practices
- Open-source contributions; compliance background (SOC 2, HIPAA, GDPR)
What We Offer
- Remote, contractual role
- Flexible time zones (overlap for standups + incident coverage)
- Competitive contract compensation + equity
- Long-term engagement opportunity based on performance
Job description
🔧 Key Responsibilities:
- Design and implement robust backend services using Node.js.
- Develop and maintain RESTful APIs to support front-end applications and third-party integrations
- Manage and optimize SQL/NoSQL databases (e.g., PostgreSQL, MongoDB, Snowflake)
- Collaborate with front-end developers to ensure seamless integration and data flow
- Implement caching, logging, and monitoring strategies for performance and reliability
- Ensure application security, scalability, and maintainability
- Participate in code reviews, architecture discussions, and agile ceremonies
✅ Required Skills:
- Proficiency in backend programming languages (Node.js, Java, .NET Core)
- Experience with API development and tools like Postman, Swagger
- Strong understanding of database design and query optimization
- Familiarity with microservices architecture and containerization (Docker, Kubernetes)
- Knowledge of cloud platforms (Azure, AWS) and CI/CD pipelines.
Key Skills:
- Programming Languages: C# or VB.NET
- Server-Side Technologies: ASP.NET / MVC
- Front-End Technologies: HTML5, ES5/ES6 JavaScript, CSS, jQuery, Bootstrap, AJAX, WebSockets
- Database: SQL Server
Required Skills: Angular 11/12, .NET Framework, .NET Core, Web APIs, web security, microservices, event-driven architecture, clean code and 12-factor principles, Azure PaaS services experience, public-facing web application development, web analytics, Bootstrap v5, Angular Material, jQuery, HTML/CSS, SQL Server, Transact-SQL, Azure SQL
Experience working with software design, the software development life cycle, and development methodologies and implementation
Experience working with product systems design principles
Experience working with appropriate programming languages, operating systems, hardware, and software
Experience working with company application development policies and procedures
Experience working with company software and hardware products and related business issues that may impact overall business plans
- Complete experience in the Spring Framework
- Proficient knowledge of SQL and NoSQL is a must
- Hands-on experience in designing and developing applications using Java EE platforms
- Designing and developing high-volume, low-latency applications for mission-critical systems and delivering high availability and performance
- Contributing in all phases of the development lifecycle
- Writing well designed, testable, efficient code
- Excellent Communication Skills
- Willingness to own a responsibility
- Ability to work in a team as well as an individual
- Ability to work under pressure and maintain deadlines
- Good to have worked end-to-end on projects
- Well versed in Core Java, OOPs concepts, collections, multi-threading, concurrency, lambdas, and streams.
- Hands-on knowledge of Spring Core, MVC, JPA, Security, transaction
- Working knowledge of REST API designing as well as development, using Spring.
- Exposure to Spring Boot, Docker, Kubernetes, OpenShift for the microservices environment.
- Savvy with SQL and database concepts.
- Ability to use frameworks like JUnit, Mockito, etc., for implementing unit testing.
- Sound understanding of code versioning tools, such as Git/Bitbucket, with Maven.
- Should have worked in a CI/CD environment with TeamCity/Jenkins.
We are seeking talented, motivated engineers who will be part of a dynamic global team delivering and supporting technology infrastructure to meet the growth needs of the business.
As a Product Support Engineer, you will collaborate with the Engineering, Product, and Support teams to ensure the designed product and service is fully operational, with streamlined processes and procedures for addressing reported bugs and anomalies. Product Support Engineers take ownership of resolving product issues through their life cycle and communicating with multiple stakeholders.
This is a programming role that requires a good understanding of Java along with solid debugging skills, so complex workflows can be debugged and solved. You will own the code you push to production.
To be successful, you must be an excellent team player and a self-motivated person who can carry out duties with minimal supervision.
Skills you need to have:
- A good understanding of SQL
- An understanding of Java and Java design patterns
- L4 support
- Good debugging and problem-solving skills
- Excellent communication
Bonus Skills:
- An understanding of Spring Boot and ORMs
The position requires the individual to develop high-volume, low-latency applications for data analytics for large consumer-product corporations. The candidate will also contribute to all phases of the development lifecycle, write well-designed, testable, efficient code, and ensure designs follow specifications.
An ideal candidate will be/have:
• Strong experience in Python/Java.
• Familiarity with test-driven development and continuous integration.
• Strong knowledge of, and hands-on experience with, code development tools (Eclipse, Git, Jenkins, unit testing frameworks).
• Familiarity with software development methodologies such as Agile.
• Ability to write complex SQL.
• Desire to learn and develop new tools and techniques and share them with the team.
• Knowledge of cloud would be a plus.
• Ability to design software modules.
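The test-driven development expectation above can be illustrated with a small, entirely hypothetical example: the assertions are written first and drive the implementation (`moving_average` and its test exist only for this sketch):

```python
def moving_average(values: list[float], window: int) -> list[float]:
    """Sliding-window mean over `values`; written test-first."""
    if window <= 0:
        raise ValueError("window must be positive")
    out = []
    for i in range(len(values) - window + 1):
        out.append(sum(values[i:i + window]) / window)
    return out


# In TDD style, a test like this is written before the function body
# and initially fails, then drives the implementation until it passes:
def test_moving_average():
    assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
    assert moving_average([5], 1) == [5.0]
```

Run with `pytest` (named in several of the postings above), the test file doubles as executable documentation of the function's contract.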
