50+ AWS (Amazon Web Services) Jobs in Bangalore (Bengaluru)
Required Skills & Experience
- Must have 8+ years of relevant experience in Java design and development.
- Extensive experience working on solution design and API design.
- Experience in Java development at an enterprise level (Spring Boot, Java 17+, Spring Security, Microservices, Spring).
- Extensive work experience in monolithic applications using Spring.
- Extensive experience leading API development and integration (REST/JSON).
- Extensive work experience using Apache Camel.
- In-depth technical knowledge of database systems (Oracle, SQL Server).
- Ability to refactor and optimize existing code for performance, readability, and maintainability.
- Experience working with Continuous Delivery/Continuous Integration (CI/CD) pipelines.
- Experience in container platforms (Docker, OpenShift, Kubernetes).
- DevOps knowledge including:
- Configuring continuous integration, deployment, and delivery tools like Jenkins or Codefresh
- Container-based development using Docker, Kubernetes, and OpenShift
- Instrumenting monitoring and logging of applications
Requirements
- 6–12 years of backend development experience.
- Strong expertise in Java 11+, Spring Boot, REST APIs, AWS.
- Solid experience with distributed, high-volume systems.
- Strong knowledge of RDBMS (e.g., MySQL, Oracle) and NoSQL databases (e.g., DynamoDB, MongoDB, Cassandra).
- Hands-on with CI/CD (Jenkins) and caching technologies (Redis or similar).
- Strong debugging and system troubleshooting skills.
- Experience in payments systems is a must.
Seeking an experienced AWS Migration Engineer with 7+ years of hands-on experience to lead cloud migration projects, assess legacy systems, and ensure seamless transitions to AWS infrastructure. The role focuses on strategy, execution, optimization, and minimizing downtime during migrations.
Key Responsibilities:
- Conduct assessments of on-premises and legacy systems for AWS migration feasibility.
- Design and execute migration strategies using AWS Migration Hub, DMS, and Server Migration Service (see the Python sketch after this list).
- Plan and implement lift-and-shift, re-platforming, and refactoring approaches.
- Optimize workloads post-migration for cost, performance, and security.
- Collaborate with stakeholders to define migration roadmaps and timelines.
- Perform data migration, application re-architecture, and hybrid cloud setups.
- Monitor migration progress, troubleshoot issues, and ensure business continuity.
- Document processes and provide post-migration support and training.
- Manage and troubleshoot Kubernetes/EKS networking components including VPC CNI, Service Mesh, Ingress controllers, and Network Policies.
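Illustrative only: a minimal boto3 sketch of starting a DMS replication task as part of a migration runbook of the kind described above. The task ARN and region are placeholders, not values from this posting.

```python
import boto3

# Hypothetical ARN and region; substitute the replication task actually created in DMS.
REPLICATION_TASK_ARN = "arn:aws:dms:ap-south-1:123456789012:task:EXAMPLE"

dms = boto3.client("dms", region_name="ap-south-1")

# Kick off a full-load-and-CDC replication task.
dms.start_replication_task(
    ReplicationTaskArn=REPLICATION_TASK_ARN,
    StartReplicationTaskType="start-replication",
)

# Block until DMS reports the task as running.
waiter = dms.get_waiter("replication_task_running")
waiter.wait(Filters=[{"Name": "replication-task-arn", "Values": [REPLICATION_TASK_ARN]}])
print("Replication task is running")
```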
Required Qualifications:
- 7+ years of IT experience, with minimum 4 years focused on AWS migrations.
- AWS Certified Solutions Architect or Migration Specialty certification preferred.
- Expertise in AWS services: EC2, S3, RDS, VPC, Direct Connect, DMS, SMS.
- Strong knowledge of cloud migration tools and frameworks (AWS MGN, Snowball).
- Experience with infrastructure as code (CloudFormation, Terraform).
- Proficiency in scripting (Python, PowerShell) and automation.
- Familiarity with security best practices (IAM, encryption, compliance).
- Hands-on experience with Kubernetes/EKS networking components and best practices.
Preferred Skills:
- Experience with hybrid/multi-cloud environments.
- Knowledge of DevOps tools (Jenkins, GitLab CI/CD).
- Excellent problem-solving and communication skills.
Designation: Software Development Team Lead (Full-Stack)
Location: Bangalore/Bhopal
Package: 8 LPA to 15 LPA
Job Type: Full Time
Experience: 6 to 10+ years
Job Title: Software Development Team Lead (Full-Stack)
We are seeking an experienced Software Development Team Lead with strong capabilities in handling frontend, backend, and full-stack development teams across multiple technologies. The ideal candidate will have hands-on experience in Python, Next.js, and other modern tech stacks, along with the ability to guide a diverse development team, ensure high-quality delivery, and drive end-to-end project execution.
Key Responsibilities
Team Leadership & Multi-Tech Management
- Lead and manage a team of developers working across frontend, backend, and full-stack technologies.
- Provide technical direction, conduct code reviews, and mentor team members.
- Allocate tasks based on strengths (UI, backend, APIs, DevOps, etc.) and ensure balanced workload.
- Foster a collaborative, innovative, and high-performance engineering culture.
Full-Stack Technical Contribution
- Work hands-on with Python (backend) and Next.js/React (frontend).
- Guide teams on best practices across UI development, API design, database architecture, and deployment.
- Ensure scalable, secure, and maintainable code across all layers of the product.
- Troubleshoot complex issues across frontend, backend, microservices, and integrations.
Project Execution & Delivery
- Manage end-to-end project lifecycle—from planning to development, testing, and deployment.
- Coordinate with Product, QA, UX/UI, DevOps, and Management teams.
- Drive sprint planning, task estimation, and timeline adherence.
- Improve delivery speed and quality through automation, CI/CD, and structured workflows.
Cross-Functional Collaboration
- Translate business requirements into technical specifications.
- Communicate project updates, challenges, and solutions to stakeholders.
- Collaborate with designers, product managers, and other engineering units.
Process Improvement
- Define and implement coding standards, architecture guidelines, and development processes.
- Introduce new technologies and best practices for continuous improvement.
- Promote efficient workflows, documentation, and team knowledge-sharing.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 6–10+ years of strong software development experience.
- Proven experience leading full-stack development teams.
- Hands-on expertise in:
- Backend: Python (Django, Flask, FastAPI), API development
- Frontend: Next.js, React, JavaScript/TypeScript
- Databases: SQL/NoSQL
- Ability to manage teams working on multiple technologies (frontend, backend, APIs, DevOps).
- Experience with cloud services (AWS/Azure/GCP).
- Strong knowledge of CI/CD, Git workflows, containers (Docker), and microservices.
- Excellent communication, leadership, and problem-solving skills.
Preferred Qualifications
- Exposure to other technologies/frameworks (Node.js, Angular, Java, .NET, PHP, etc.)
- Experience managing cross-functional engineering teams (QA, DevOps, UI/UX).
- Understanding of scalable architectures, system design, and performance optimization.
Requirements:
- Strong experience in Java and J2EE technologies in a cloud-based environment.
- Expert knowledge of JPA, Hibernate, JDBC, SQL, Spring, JUnit, JSON, and REST/JSON web services.
- Strong knowledge in Java Design Patterns.
- Development and implementation of features in any Cloud platform products and technologies.
- Experience developing applications with Agile team methodologies preferred.
- Strong Object-Oriented design skills and understanding of MVC.
- Sufficient experience with Git to organize a large software project with multiple developers to include branching, tagging and merging.
Desired Skills:
- Experience in Azure cloud (PaaS) with Java is a plus.
- Strong business application design skills.
- Excellent communications and interpersonal skills.
- Strong debugging skills.
- Highly proficient in standard Java development tools (Eclipse, Maven, etc.)
- A strong interest in building security into applications from the initial design.
- Experience creating technical project documentation and task time estimates.
- In-depth knowledge of at least one high-level programming language
- Understanding of core AWS services, their uses, and basic AWS architecture best practices
- Proficiency in developing, deploying, and debugging cloud-based applications using AWS
- Ability to use the AWS service APIs, AWS CLI, and SDKs to write applications
- Ability to identify key features of AWS services
- Understanding of the AWS shared responsibility model
- Understanding of application lifecycle management
- Ability to use a CI/CD pipeline to deploy applications on AWS
- Ability to use or interact with AWS services
- Ability to apply a basic understanding of cloud-native applications to write code
- Ability to write code using AWS security best practices (e.g., not using secret and access keys in the code, instead using IAM roles); a sketch follows this list
- Ability to author, maintain, and debug code modules on AWS
- Proficiency writing code for serverless applications
- Understanding of the use of containers in the development process
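A minimal sketch of the IAM-role practice called out above: boto3 resolves temporary credentials from the execution role (EC2 instance profile, ECS task role, or Lambda role) through its default credential chain, so no access keys appear in the code. The bucket name is a placeholder.

```python
import boto3

# No aws_access_key_id / aws_secret_access_key passed in: boto3's default
# credential chain picks up temporary credentials from the attached IAM role
# (instance profile, ECS task role, or Lambda execution role).
s3 = boto3.client("s3")

def list_keys(bucket: str, prefix: str = "") -> list[str]:
    """Return object keys under a prefix, using role-based credentials."""
    paginator = s3.get_paginator("list_objects_v2")
    keys = []
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys

if __name__ == "__main__":
    print(list_keys("example-bucket"))  # placeholder bucket name
```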
Review Criteria
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred
- Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Job Specific Criteria
- CV Attachment is mandatory
- How many years of experience do you have with Dremio?
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS); an illustrative sketch follows this list.
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
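As a hedged illustration of the integration work above, a sketch of querying Dremio from Python over Arrow Flight using pyarrow; the host, port, credentials, dataset path, and the non-TLS scheme are assumptions and will differ by deployment.

```python
from pyarrow import flight

# Assumed connection details; Dremio's Arrow Flight endpoint is commonly exposed on port 32010.
client = flight.FlightClient("grpc+tcp://dremio.example.internal:32010")

# Basic auth returns a bearer-token header pair to attach to subsequent calls.
token_pair = client.authenticate_basic_token("analyst", "password")  # placeholder credentials
options = flight.FlightCallOptions(headers=[token_pair])

# Placeholder dataset path and query.
sql = "SELECT region, SUM(amount) AS total FROM sales.curated.orders GROUP BY region"
descriptor = flight.FlightDescriptor.for_command(sql)

info = client.get_flight_info(descriptor, options)
reader = client.do_get(info.endpoints[0].ticket, options)
table = reader.read_all()  # pyarrow.Table
print(table.num_rows, table.column_names)
```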
Ideal Candidate
- Bachelor’s or Master’s in Computer Science, Information Systems, or a related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Senior Python Django Developer
Experience: Back-end development: 6 years (Required)
Location: Bangalore/ Bhopal
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
Requirements:
- 6 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy (example below, after this list).
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
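Illustrative of the TDD expectation above: a small pytest sketch around a hypothetical fee-calculation helper. The function name, rate, and rounding rule are invented for the example, not taken from this posting.

```python
# fees.py -- hypothetical helper used only to illustrate the TDD style expected.
from decimal import Decimal

def transaction_fee(amount: Decimal, rate: Decimal = Decimal("0.02")) -> Decimal:
    """Return the fee for a payment, rounded to 2 decimal places."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return (amount * rate).quantize(Decimal("0.01"))


# test_fees.py -- tests written first, then the implementation made to pass.
import pytest

def test_fee_is_two_percent_rounded():
    assert transaction_fee(Decimal("1000.00")) == Decimal("20.00")

def test_negative_amount_rejected():
    with pytest.raises(ValueError):
        transaction_fee(Decimal("-1"))
```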
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
Key Responsibilities:
- Application Development: Design and implement both client-side and server-side architecture using JavaScript frameworks and back-end technologies like Golang.
- Database Management: Develop and maintain relational and non-relational databases (MySQL, PostgreSQL, MongoDB) and optimize database queries and schema design.
- API Development: Build and maintain RESTful APIs and/or GraphQL services to integrate with front-end applications and third-party services.
- Code Quality & Performance: Write clean, maintainable code and implement best practices for scalability, performance, and security.
- Testing & Debugging: Perform testing and debugging to ensure the stability and reliability of applications across different environments and devices.
- Collaboration: Work closely with product managers, designers, and DevOps engineers to deliver features aligned with business goals.
- Documentation: Create and maintain documentation for code, systems, and application architecture to ensure knowledge transfer and team alignment.
Requirements:
- Experience: 1+ years in backend development in a microservices ecosystem, with proven experience in front-end and back-end frameworks.
- 1+ years of experience with Golang is mandatory.
- Problem-Solving & DSA: Strong analytical skills and attention to detail.
- Front-End Skills: Proficiency in JavaScript and modern front-end frameworks (React, Angular, Vue.js) and familiarity with HTML/CSS.
- Back-End Skills: Experience with server-side languages and frameworks like Node.js, Express, Python or GoLang.
- Database Knowledge: Strong knowledge of relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB).
- API Development: Hands-on experience with RESTful API design and integration, with a plus for GraphQL.
- DevOps Understanding: Familiarity with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes) is a bonus.
- Soft Skills: Excellent problem-solving skills, teamwork, and strong communication abilities.
Nice-to-Have:
- UI/UX Sensibility: Understanding of responsive design and user experience principles.
- CI/CD Knowledge: Familiarity with CI/CD tools and workflows (Jenkins, GitLab CI).
- Security Awareness: Basic understanding of web security standards and best practices.
Like us, you'll be deeply committed to delivering impactful outcomes for customers.
- 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
- Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
- Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
- Experience writing batch/cron jobs using Python and Shell scripting.
- Experience in web application development using JavaScript and JavaScript libraries.
- Have a basic understanding of Typescript, JavaScript, HTML, CSS, JSON and REST based applications.
- Experience/Familiarity with RDBMS and NoSQL Database technologies like MySQL, MongoDB, Redis, ElasticSearch and other similar databases.
- Understanding of code versioning tools such as Git.
- Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
- Experienced in JS-based build/package tools like Grunt, Gulp, Bower, Webpack.
3-5 years of experience as full stack developer with essential requirements on the following technologies: FastAPI, JavaScript, React.js-Redux, Node.js, Next.js, MongoDB, Python, Microservices, Docker, and MLOps.
Experience in Cloud Architecture using Kubernetes (K8s), Google Kubernetes Engine, Authentication and Authorisation Tools, DevOps Tools and Scalable and Secure Cloud Hosting is a significant plus.
Ability to manage a hosting environment, ability to scale applications to handle the load changes, knowledge of accessibility and security compliance.
Testing of API endpoints.
Ability to code and create functional web applications and optimise them to improve response time and efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, and table partitioning.
Expert knowledge of Python and corresponding frameworks with their best practices, expert knowledge of relational databases, NoSQL.
Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.
Must be conversant with Agile software development methodology. Must be able to write technical documents, coordinate with test teams. Proficiency using Git version control.
Please note that salary will be based on experience.
Job Title: Full Stack Engineer
Location: Bengaluru (Indiranagar) – Work From Office (5 Days)
Job Summary
We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.
Responsibilities
- Design, develop, and maintain scalable full-stack applications.
- Build responsive, high-performance UIs using Typescript & Next.js.
- Develop backend services and APIs using Python (FastAPI/Django); see the sketch following this list.
- Work closely with product, design, and business teams to translate requirements into intuitive solutions.
- Contribute to architecture discussions and drive technical best practices.
- Own features end-to-end — design, development, testing, deployment, and monitoring.
- Ensure robust security, code quality, and performance optimization.
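A minimal FastAPI sketch of the kind of backend API work described above; the routes, model, and in-memory store are illustrative assumptions, not part of the product.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Illustrative in-memory store; a real service would use PostgreSQL/MongoDB.
_accounts: dict[int, dict] = {1: {"id": 1, "name": "Demo", "balance": 0.0}}

class Deposit(BaseModel):
    amount: float

@app.get("/accounts/{account_id}")
def get_account(account_id: int):
    account = _accounts.get(account_id)
    if account is None:
        raise HTTPException(status_code=404, detail="account not found")
    return account

@app.post("/accounts/{account_id}/deposits")
def create_deposit(account_id: int, deposit: Deposit):
    account = _accounts.get(account_id)
    if account is None:
        raise HTTPException(status_code=404, detail="account not found")
    account["balance"] += deposit.amount
    return account
```

Assuming the file is saved as main.py, this runs locally with `uvicorn main:app --reload`.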
Tech Stack
Frontend: Typescript, Next.js, React, Tailwind CSS
Backend: Python, FastAPI, Django
Databases: PostgreSQL, MongoDB, Redis
Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD
Other Tools: Git, GitHub, Elasticsearch, Observability tools
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience.
- Strong expertise in either frontend (Typescript/Next.js) or backend (Python/FastAPI/Django) with familiarity in both.
- Experience building RESTful services and microservices.
- Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
- Strong debugging, problem-solving, and optimization skills.
- Ability to thrive in fast-paced, high-ownership startup environments.
Good-to-Have:
- Exposure to Docker, Kubernetes, and observability tools.
- Experience with message queues or event-driven architecture.
Perks & Benefits
- Upskilling support – courses, tools & learning resources.
- Fun team outings, hackathons, demos & engagement initiatives.
- Flexible Work-from-Home: 12 WFH days every 6 months.
- Menstrual WFH: up to 3 days per month.
- Mobility benefits: relocation support & travel allowance.
- Parental support: maternity, paternity & adoption leave.
Job Title : Full Stack Engineer (Python + React.js/Next.js)
Experience : 1 to 6+ Years
Location : Bengaluru (Indiranagar)
Employment : Full-Time
Working Days : 5 Days WFO
Notice Period : Immediate to 30 Days
Role Overview :
We are seeking Full Stack Engineers to build scalable, high-performance fintech products.
You will work on both frontend (Typescript/Next.js) and backend (Python/FastAPI/Django), owning features end-to-end and contributing to architecture, performance, and product innovation.
Main Tech Stack :
Frontend : Typescript, Next.js, React
Backend : Python, FastAPI, Django
Database : PostgreSQL, MongoDB, Redis
Cloud : AWS/GCP, Docker, Kubernetes
Tools : Git, GitHub, CI/CD, Elasticsearch
Key Responsibilities :
- Develop full-stack applications with clean, scalable code.
- Build fast, responsive UIs using Typescript, Next.js, React.
- Develop backend APIs using Python, FastAPI, Django.
- Collaborate with product/design to implement solutions.
- Own development lifecycle: design → build → deploy → monitor.
- Ensure performance, reliability, and security.
Requirements :
Must-Have :
- 1–6+ years of full-stack experience.
- Product-based company background.
- Strong DSA + problem-solving skills.
- Proficiency in either frontend or backend with familiarity in both.
- Hands-on experience with APIs, microservices, Git, CI/CD, cloud.
- Strong communication & ownership mindset.
Good-to-Have :
- Experience with containers, system design, observability tools.
Interview Process :
- Coding Round : DSA + problem solving
- System Design : LLD + HLD, scalability, microservices
- CTO Round : Technical deep dive + cultural fit
Planview is seeking a passionate Sr Software Engineer I to lead the development of internal AI tools and connectors, enabling seamless integration with internal and third-party data sources. This role will drive internal AI enablement and productivity across engineering and customer teams by consulting with business stakeholders, setting technical direction, and delivering scalable solutions.
Responsibilities:
- Work with business stakeholders to enable successful AI adoption.
- Develop connectors leveraging MCP or third-party APIs to enable new integrations (see the example after this list).
- Prioritize and execute integrations with internal and external data platforms.
- Collaborate with other engineers to expand AI capabilities.
- Establish and monitor uptime metrics, set up alerts, and follow a proactive maintenance schedule.
- Exposure to operations, including Docker-based and serverless deployments and troubleshooting.
- Work with DevOps engineers to manage and deploy new tools as required.
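A minimal, generic connector sketch showing the retry/backoff pattern such third-party API integrations usually need; the base URL, endpoint, and token are placeholders, and this is not tied to any specific MCP SDK.

```python
import time
import requests

class ThirdPartyConnector:
    """Thin wrapper around a hypothetical third-party REST API with retry/backoff."""

    def __init__(self, base_url: str, token: str, max_retries: int = 3):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {token}"
        self.max_retries = max_retries

    def fetch(self, path: str, **params) -> dict:
        url = f"{self.base_url}/{path.lstrip('/')}"
        for attempt in range(1, self.max_retries + 1):
            resp = self.session.get(url, params=params, timeout=10)
            if resp.status_code < 500:
                resp.raise_for_status()
                return resp.json()
            # Retry server-side errors with exponential backoff.
            time.sleep(2 ** attempt)
        resp.raise_for_status()
        return {}

# Example usage with placeholder values:
# connector = ThirdPartyConnector("https://api.example.com", token="...")
# items = connector.fetch("v1/items", updated_since="2024-01-01")
```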
Required Qualifications:
- Bachelor’s degree in Computer Science, Data Science, or a related field.
- 4+ years of experience in infrastructure engineering, data integration, or AI operations.
- Strong Python coding skills.
- Experience configuring and scaling infrastructure for large user bases.
- Proficiency with monitoring tools, alerting systems, and maintenance best practices.
- Hands-on experience with containerized and serverless deployments.
- Ability to code connectors using MCP or third-party APIs.
- Strong troubleshooting and support skills.
Preferred Qualifications:
- Experience with building RAG knowledge bases, MCP Servers, and API integration patterns. Experience leveraging AI (LLMs) to boost productivity and streamline workflows.
- Exposure to working with business stakeholders to drive AI adoption and feature expansion. Familiarity with MCP server support and resilient feature design.
- Skilled at working as part of a global, diverse workforce.
- AWS Certification is a plus.
Role Summary:
We are seeking experienced Application Support Engineers to join our client-facing support team. The ideal candidate will be the first point of contact for client issues, ensuring timely resolution, clear communication, and high customer satisfaction in a fast-paced trading environment.
Key Responsibilities:
• Act as the primary contact for clients reporting issues related to trading applications and platforms.
• Log, track, and monitor issues using internal tools and ensure resolution within defined TAT (Turnaround Time).
• Liaise with development, QA, infrastructure, and other internal teams to drive issue resolution.
• Provide clear and timely updates to clients and stakeholders regarding issue status and resolution.
• Maintain comprehensive logs of incidents, escalations, and fixes for future reference and audits.
• Offer appropriate and effective resolutions for client queries on functionality, performance, and usage.
• Communicate proactively with clients about upcoming product features, enhancements, or changes.
• Build and maintain strong relationships with clients through regular, value-added interactions.
• Collaborate in conducting UAT, release validations, and production deployment verifications.
• Assist in root cause analysis and post-incident reviews to prevent recurrences.
Required Skills & Qualifications:
• Bachelor's degree in Computer Science, IT, or related field.
• 2+ years in Application/Technical Support, preferably in the broking/trading domain.
• Sound understanding of capital markets – Equity, F&O, Currency, Commodities.
• Strong technical troubleshooting skills – Linux/Unix, SQL, log analysis.
• Familiarity with trading systems, RMS, OMS, APIs (REST/FIX), and order lifecycle.
• Excellent communication and interpersonal skills for effective client interaction.
• Ability to work under pressure during trading hours and manage multiple priorities.
• Customer-centric mindset with a focus on relationship building and problem-solving.
Nice to Have:
• Exposure to broking platforms like NOW, NEST, ODIN, or custom-built trading tools.
• Experience interacting with exchanges (NSE, BSE, MCX) or clearing corporations.
• Knowledge of scripting (Shell/Python) and basic networking is a plus (a small example follows below).
• Familiarity with cloud environments (AWS/Azure) and monitoring tools
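As a sketch of the scripting and log-analysis skills above, a short Python snippet that counts error lines per order ID in an application log; the log path and line format are assumptions.

```python
import re
from collections import Counter

LOG_PATH = "/var/log/trading/app.log"  # placeholder path
# Assumed line format: "2024-05-02 09:15:01 ERROR order_id=AB123 msg=..."
ERROR_RE = re.compile(r"\bERROR\b.*?order_id=(\S+)")

def error_counts(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = ERROR_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for order_id, n in error_counts(LOG_PATH).most_common(10):
        print(f"{order_id}: {n} errors")
```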
Job Title: Practice Head - Cloud Business Development
Experience: 10 - 13 Years
Location: Bangalore
Territory Focus: India, MENA, and SEA
About Pacewisdom Solutions:
Pacewisdom is a deep-tech product engineering and consulting firm. As an AWS Advanced Tier Partner, our Cloud Center of Excellence is a high-growth vertical specialized in Migration, Modernization, and Cloud Strategy.
We are seeking a hands-on sales leader to drive our cloud business expansion across Indian and international markets.
Role Overview
As Practice Head, you will spearhead new business acquisition for the Cloud practice. This is a hunter role focused on securing contracts for Cloud Migration, Modernization, and Managed Services. The role offers the opportunity to build a dedicated sales team under your leadership within the first 12-18 months.
Key Responsibilities
• Identify and penetrate high-potential enterprise and mid-market accounts across diversified verticals in India, MENA, and South East Asia.
• Drive the full sales cycle for cloud transformation deals, ensuring a healthy balance of Consulting, Implementation, and recurring Managed Services revenue.
• Leverage our established collaboration with the AWS Partner Network and commercial distributors to drive co-selling models to accelerate market access and utilize funding programs like MAP/OLA for deal closure.
• Execute an active field sales strategy by frequently visiting AWS and partner offices to build personal trust with Account Managers and representing the company at offline industry events.
• Move beyond transactional sales to close high-value engagements by conducting strategic discussions with C-level executives regarding Total Cost of Ownership, compliance, and modernization architecture.
• Build a predictable sales pipeline to meet aggressive growth targets, with a specific focus on executing larger ticket-size projects rather than smaller ad-hoc tasks.
Candidate Requirements
• Total IT sales experience of 10 to 13 years, with the last 5 years strictly focused on Cloud Services sales, preferably with exposure to the AWS ecosystem.
• Demonstrated history of closing individual deals valued over 1 Cr annually and experience managing or contributing to an annual portfolio revenue of 10-20 Cr.
• Hands-on experience in structuring profitable deals using cloud partner incentives and navigating cross-border sales in emerging markets like MENA or SEA.
• Ability to articulate technical differentiators to a non-technical audience and commercial value to technical stakeholders without constantly relying on presales support.
Compensation & Growth
• Competitive market standard fixed compensation with an aggressive, performance based incentive structure directly linked to deal closures, recurring revenue, and partner funding optimization.
• Backed by strong operational support and direct access to the AWS partner ecosystem.
• Clear roadmap to scale the vertical, with the mandate to hire and groom a reporting sales team upon achieving initial annual targets.
About the Company:
Pace Wisdom Solutions is a deep-tech Product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore. We specialize in designing and developing bespoke software solutions that cater to solving niche business problems.
We engage with our clients at various stages:
• Right from the idea stage to scope out business requirements.
• Design & architect the right solution and define tangible milestones.
• Setup dedicated and on-demand tech teams for agile delivery.
• Take accountability for successful deployments to ensure efficient go-to-market Implementations.
Pace Wisdom has been working with Fortune 500 Enterprises and growth-stage startups/SMEs since 2012. We also work as an extended Tech team and at times we have played the role of a Virtual CTO too. We believe in building lasting relationships and providing value-add every time and going beyond business.
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.
Key Responsibilities
CI/CD and Infrastructure Automation
- Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
- Automate deployments using tools such as Terraform, Helm, and Kubernetes
- Improve build and release processes to support high-performance and low-latency trading applications
- Work efficiently with Linux/Unix environments
Cloud and On-Prem Infrastructure Management
- Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
- Ensure system reliability, scalability, and high availability
- Implement Infrastructure as Code (IaC) to standardize and streamline deployments
Performance Monitoring and Optimization
- Monitor system performance and latency using Prometheus, Grafana, and the ELK stack (a sketch follows this subsection)
- Implement proactive alerting and fault detection to ensure system stability
- Troubleshoot and optimize system components for maximum efficiency
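To illustrate the monitoring bullet above, a minimal sketch using the prometheus_client library to expose a custom latency metric that Prometheus can scrape and Grafana can chart; the metric name, port, and simulated workload are assumptions.

```python
import random
import time

from prometheus_client import Histogram, start_http_server

# Hypothetical metric name; scraped by Prometheus and graphed in Grafana.
ORDER_LATENCY = Histogram(
    "order_processing_seconds",
    "Time spent processing an order",
)

@ORDER_LATENCY.time()
def process_order() -> None:
    time.sleep(random.uniform(0.001, 0.01))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(9100)  # metrics exposed at http://localhost:9100/metrics
    while True:
        process_order()
```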
Security and Compliance
- Apply DevSecOps principles to ensure secure deployment and access management
- Maintain compliance with financial industry regulations such as SEBI
- Conduct vulnerability assessments and maintain logging and audit controls
Required Skills and Qualifications
- 2+ years of experience as a DevOps Engineer in a software or trading environment
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
- Proficiency in cloud platforms such as AWS and GCP
- Hands-on experience with Docker and Kubernetes
- Experience with Terraform or CloudFormation for IaC
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
- Familiarity with Prometheus, Grafana, and ELK stack
- Proficiency in scripting using Python, Bash, or Go
- Solid understanding of security best practices including IAM, encryption, and network policies
Good to Have (Optional)
- Experience with low-latency trading infrastructure or real-time market data systems
- Knowledge of high-frequency trading environments
- Exposure to FIX protocol, FPGA, or network optimization techniques
- Familiarity with Redis or Nginx for real-time data handling
Why Join Us?
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
About Company:
HealthAsyst® is an IT service and product company. It is a leading provider of IT services to some of the largest healthcare IT vendors in the United States. We bring the value of cutting-edge technology through our deep expertise in product engineering, custom software development, testing, large scale healthcare IT implementation and integrations, on-going maintenance and support, BI & Analytics, and remote monitoring platforms.
As a true partner, we help our customers navigate a complex regulatory landscape, deal with cost pressures, and offer high-quality services. As a healthcare transformation agent, we enable innovation in technology and accelerate problem-solving while delivering unmatched cost benefits to healthcare technology companies.
Founded : 1999
Location : Anjaneya Techno Park, HAL Old Airport Road, Bangalore.
Products : CheckinAsyst, RadAsyst
IT Services : Product Engineering, Custom Development, QA & Testing, Integration, Maintenance, Managed Services.
Position Overview:
We are seeking a highly skilled and motivated Associate Architect for Web Applications to join our dynamic product development team. The ideal candidate will have a strong background in web application architecture, design, and development, along with a passion for staying up to date with the latest industry trends and technologies. As an Associate Architect, you will collaborate with cross-functional teams, mentor junior developers, and play a critical role in shaping the technical direction of our web applications
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
- Overall 9 to 15 years of experience.
- Proven experience (8 years) in designing and developing web applications using modern web technologies and frameworks (e.g., JavaScript, React, jQuery Mobile, MVC, and ASP.NET).
- Strong understanding of software architecture principles, design patterns, and best practices.
- Demonstrated experience in mentoring and leading development teams.
- Proficiency in database design
- Excellent problem-solving skills and the ability to tackle complex technical challenges.
- Familiarity with cloud platforms (Azure) and containerization technologies (Docker) is a plus.
- Effective communication skills and the ability to collaborate with both technical and non-technical stakeholders.
- Up-to-date knowledge of industry trends, emerging technologies, and best practices in web application development.
Key Responsibilities:
Web Application Architecture:
- Collaborate with stakeholders, including business analysts and Solutions, to understand product requirements and translate them into scalable and efficient web applications using the right architectural designs.
- Design architectural patterns, system components, and data models to ensure a robust and maintainable application structure.
- Evaluate and recommend appropriate technology stacks, frameworks, and tools to achieve project goals.
Technical Leadership and Mentorship:
- Provide technical guidance and mentorship to junior developers, fostering their growth and professional development.
- Lead code reviews, architectural discussions, and brainstorming sessions; ensure optimized, scalable code and high-quality, well-architected solutions.
- Share best practices and coding standards with the development team to ensure consistent and efficient coding practices.
Development and Coding:
- Participate in hands-on development of components and features, ensuring code quality, performance, and security.
- Collaborate with front-end and back-end developers, drawing on full-stack development experience to ensure integrations are done well within the technical stack and with partner systems.
- Troubleshoot complex technical issues, provide workarounds, and contribute to debugging efforts.
Performance and Scalability:
- Optimize application performance by analysing and addressing bottlenecks, ensuring responsive and efficient user experiences.
- Design and implement strategies for horizontal and vertical scalability to support increasing user loads and data volumes.
Collaboration and Communication:
- Work closely with cross-functional teams, including architects, QA engineers, business analysts, and automation engineers, to deliver features on time.
- Effectively communicate technical concepts to non-technical stakeholders, contributing to project planning, progress tracking, and decision-making.
Job Description: Python Engineer
Role Summary
We are looking for a talented Python Engineer to design, develop, and maintain high-quality backend applications and automation solutions. The ideal candidate should have strong programming skills, familiarity with modern development practices, and the ability to work in a fast-paced, collaborative environment.
Key Responsibilities:
Python Development & Automation
- Design, develop, and maintain Python scripts, tools, and automation frameworks.
- Build automation for operational tasks such as deployment, monitoring, system checks, and maintenance.
- Write clean, modular, and well-documented Python code following best practices.
- Develop APIs, CLI tools, or microservices when required (see the CLI sketch after this list).
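A minimal sketch of the kind of Python CLI automation described above: a disk-usage check that could be wired into monitoring or cron; the default threshold and exit-code convention are illustrative choices.

```python
import argparse
import shutil
import sys

def check_disk(path: str, threshold: float) -> bool:
    """Return True if usage on `path` is below the threshold percentage."""
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    print(f"{path}: {used_pct:.1f}% used")
    return used_pct < threshold

def main() -> int:
    parser = argparse.ArgumentParser(description="Simple disk usage check")
    parser.add_argument("paths", nargs="+", help="mount points to check")
    parser.add_argument("--threshold", type=float, default=85.0,
                        help="alert when usage exceeds this percentage")
    args = parser.parse_args()
    ok = all(check_disk(p, args.threshold) for p in args.paths)
    return 0 if ok else 1

if __name__ == "__main__":
    sys.exit(main())
```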
Linux Systems Engineering
- Manage, configure, and troubleshoot Linux environments (RHEL, CentOS, Ubuntu).
- Perform system performance tuning, log analysis, and root-cause diagnostics.
- Work with system services, processes, networking, file systems, and security controls.
- Implement shell scripting (bash) alongside Python for system-level automation.
CI/CD & Infrastructure Support
- Support integration of Python automation into CI/CD pipelines (Jenkins).
- Participate in build and release processes for infrastructure components.
- Ensure automation aligns with established infrastructure standards and governance.
- Use Bash scripting together with Python to improve automation efficiency.
Cloud & DevOps Collaboration (if applicable)
- Collaborate with Cloud/DevOps engineers on automation for AWS or other cloud platforms.
- Integrate Python tools with configuration management tools such as Chef or Ansible, or with Terraform modules.
- Contribute to containerization efforts (Docker, Kubernetes) leveraging Python automation.
Senior Software Engineer
Location: Hyderabad, India
Who We Are:
Since our inception back in 2006, Navitas has grown to be an industry leader in the digital transformation space, and we’ve served as trusted advisors supporting our client base within the commercial, federal, and state and local markets.
What We Do:
At our very core, we’re a group of problem solvers providing our award-winning technology solutions to drive digital acceleration for our customers! With proven solutions, award-winning technologies, and a team of expert problem solvers, Navitas has consistently empowered customers to use technology as a competitive advantage and deliver cutting-edge transformative solutions.
What You’ll Do:
Build, Innovate, and Own:
- Design, develop, and maintain high-performance microservices in a modern .NET/C# environment.
- Architect and optimize data pipelines and storage solutions that power our AI-driven products.
- Collaborate closely with AI and data teams to bring machine learning models into production systems.
- Build integrations with external services and APIs to enable scalable, interoperable solutions.
- Ensure robust security, scalability, and observability across distributed systems.
- Stay ahead of the curve — evaluating emerging technologies and contributing to architectural decisions for our next-gen platform.
Responsibilities will include but are not limited to:
- Provide technical guidance and code reviews that raise the bar for quality and performance.
- Help create a growth-minded engineering culture that encourages experimentation, learning, and accountability.
What You’ll Need:
- Bachelor’s degree in Computer Science or equivalent practical experience.
- 8+ years of professional experience, including 5+ years designing and maintaining scalable backend systems using C#/.NET and microservices architecture.
- Strong experience with SQL and NoSQL data stores.
- Solid hands-on knowledge of cloud platforms (AWS, GCP, or Azure).
- Proven ability to design for performance, reliability, and security in data-intensive systems.
- Excellent communication skills and ability to work effectively in a global, cross-functional environment.
Set Yourself Apart With:
- Startup experience - specifically in building product from 0-1
- Exposure to AI/ML-powered systems, data engineering, or large-scale data processing.
- Experience in healthcare or fintech domains.
- Familiarity with modern DevOps practices, CI/CD pipelines, and containerization (Docker/Kubernetes).
Equal Employer/Veterans/Disabled
Navitas Business Consulting is an affirmative action and equal opportunity employer. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact Navitas Human Resources.
Navitas is an equal opportunity employer. We provide employment and opportunities for advancement, compensation, training, and growth according to individual merit, without regard to race, color, religion, sex (including pregnancy), national origin, sexual orientation, gender identity or expression, marital status, age, genetic information, disability, veteran or military status, or any other characteristic protected under applicable Federal, state, or local law. Our goal is for each staff member to have the opportunity to grow to the limits of their abilities and to achieve personal and organizational objectives. We will support positive programs for equal treatment of all staff and full utilization of all qualified employees at all levels within Navitas.
Job Summary:
We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.
Responsibilities:
- Deploy, configure, and troubleshoot various infrastructure and application environments
- Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
- Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
- Collaborate with application teams on infrastructure design and issues
- Architect solutions that optimally meet business needs
- Implement CI/CD pipelines and automate deployment processes
- Disaster recovery and infrastructure restoration
- Restore/recovery operations from backups (see the sketch after this list)
- Automate routine tasks
- Execute company initiatives in the infrastructure space
- Expertise with observability tools like ELK, Prometheus, Grafana, and Loki
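A minimal boto3 sketch of the backup automation mentioned above: snapshotting every EBS volume that carries a given tag. The tag key/value and region are assumptions for illustration only.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region

def snapshot_tagged_volumes(tag_key: str = "Backup", tag_value: str = "daily") -> list[str]:
    """Create snapshots for volumes tagged Backup=daily and return the snapshot IDs."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
    )["Volumes"]
    snapshot_ids = []
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"Automated backup of {vol['VolumeId']}",
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids

if __name__ == "__main__":
    print(snapshot_tagged_volumes())
```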
Qualifications:
- Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
- Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
- Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
- Experience in architecting solutions that optimally meet business needs
- Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
- Strong understanding of system concepts like high availability, scalability, and redundancy
- Ability to work with application teams on infrastructure design and issues
- Excellent problem-solving and troubleshooting skills
- Experience with automation of routine tasks
- Good communication and interpersonal skills
Education and Experience:
- Bachelor's degree in Computer Science or a related field
- 5 to 10 years of experience as a DevOps Engineer or in a related role
- Experience with observability tools like ELK, Prometheus, Grafana
Working Conditions:
The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.
Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.
Role Summary
Our CloudOps/DevOps teams are distributed across India, Canada, and Israel.
As a Manager, you will lead teams of Engineers and champion configuration management, cloud technologies, and continuous improvement. The role involves close collaboration with global leaders to ensure our applications, infrastructure, and processes remain scalable, secure, and supportable. You will work closely with Engineers across Dev, DevOps, and DBOps to design and implement solutions that improve customer value, reduce costs, and eliminate toil.
Key Responsibilities
- Guide the professional development of Engineers and support teams in meeting business objectives
- Collaborate with leaders in Israel on priorities, architecture, delivery, and product management
- Build secure, scalable, and self-healing systems
- Manage and optimize deployment pipelines
- Triage and remediate production issues
- Participate in on-call escalations
Key Qualifications
- Bachelor’s in CS or equivalent experience
- 3+ years managing Engineering teams
- 8+ years as a Site Reliability or Platform Engineer
- 5+ years administering Linux and Windows environments
- 3+ years programming/scripting (Python, JavaScript, PowerShell)
- Strong experience with OS internals, virtualization, storage, networking, and firewalls
- Experience maintaining On-Prem (90%) and Cloud (10%) environments (AWS, GCP, Azure)
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.); see the Lambda sketch after this list.
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL and SQL, including query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
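A minimal sketch of a Python Lambda handler of the sort this role would deploy: it writes the incoming payload to S3 using the function's execution role. The bucket name, environment variable, and event shape are assumptions.

```python
import json
import os
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")  # credentials come from the Lambda execution role
BUCKET = os.environ.get("PAYLOAD_BUCKET", "example-payload-bucket")  # placeholder

def handler(event, context):
    """Persist the incoming event to S3 and return the object key."""
    key = f"events/{datetime.now(timezone.utc).isoformat()}.json"
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(event).encode("utf-8"),
        ContentType="application/json",
    )
    return {"statusCode": 200, "body": json.dumps({"key": key})}
```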
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines, to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, NewRelic or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, Github Actions or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two

is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value.
Required Skills
• 12+ years of proven experience in designing large-scale enterprise systems and distributed
architectures.
• Strong expertise in Azure, AWS, Python, Docker, LangChain, Solution Architecture, C#, .Net
• Frontend technologies like React, Angular, and ASP.Net MVC
• Deep knowledge of architecture frameworks (TOGAF).
• Understanding of security principles, identity management, and data protection.
• Experience with solution architecture methodologies and documentation standards
• Deep understanding of databases (SQL and NoSQL), RESTful APIs, and message brokers.
• Excellent communication, leadership, and stakeholder management skills.
Type: Client-Facing Technical Architecture, Infrastructure Solutioning & Domain Consulting (India + International Markets)
Role Overview
Tradelab is seeking a senior Solution Architect who can interact with both Indian and international clients (Dubai, Singapore, London, US), helping them understand our trading systems, OMS/RMS/CMS stack, HFT platforms, feed systems, and Matching Engine. The architect will design scalable, secure, and ultra-low-latency deployments tailored to global forex markets, brokers, prop firms, liquidity providers, and market makers.
Key Responsibilities
1. Client Engagement (India + International Markets)
- Engage with brokers, prop trading firms, liquidity providers, and financial institutions across India, Dubai, Singapore, and global hubs.
- Explain Tradelab’s capabilities, architecture, and deployment options.
- Understand region-specific latency expectations, connectivity options, and regulatory constraints.
2. Requirement Gathering & Solutioning
- Capture client needs, throughput, order concurrency, tick volumes, and market data handling.
- Assess infra readiness (cloud/on-prem/colo).
- Propose architecture aligned with forex markets.
3. Global Architecture & Deployment Design
- Design multi-region infrastructure using AWS/Azure/GCP.
- Architect low-latency routing between India–Singapore–Dubai.
- Support deployments in DCs like Equinix SG1/DX1.
4. Networking & Security Architecture
- Architect multicast/unicast feeds, VPNs, IPSec tunnels, BGP routes.
- Implement network hardening, segmentation, WAF/firewall rules.
5. DevOps, Cloud Engineering & Scalability
- Build CI/CD pipelines, Kubernetes autoscaling, cost-optimized AWS multi-region deployments.
- Design global failover models.
6. BFSI & Trading Domain Expertise
- Indian broking, international forex, LP aggregation, HFT.
- OMS/RMS, risk engines, LP connectivity, and matching engines.
7. Latency, Performance & Capacity Planning
- Benchmark and optimize cross-region latency.
- Tune performance for high tick volumes and volatility bursts.
8. Documentation & Consulting
- Prepare HLDs, LLDs, SOWs, cost sheets, and deployment playbooks.
Required Skills
- AWS: EC2, VPC, EKS, NLB, MSK/Kafka, IAM, Global Accelerator.
- DevOps: Kubernetes, Docker, Helm, Terraform.
- Networking: IPSec, GRE, VPN, BGP, multicast (PIM/IGMP).
- Message buses: Kafka, RabbitMQ, Redis Streams (see the short producer sketch below).
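As a small illustration of the message-bus item above, here is a hedged kafka-python producer sketch that publishes tick data; the broker address, topic name, and payload shape are assumptions rather than details of the actual platform.

```python
import json
import time

from kafka import KafkaProducer  # kafka-python

# Assumed local broker; in practice this would be the MSK bootstrap string.
producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_tick(symbol: str, price: float) -> None:
    """Send one market-data tick to a hypothetical 'ticks' topic."""
    producer.send("ticks", {"symbol": symbol, "price": price, "ts": time.time()})

publish_tick("EURUSD", 1.0842)
producer.flush()  # block until buffered messages are delivered
```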
Domain Skills
- Deep Broking Domain Understanding.
- Indian broking + global forex/CFD.
- FIX protocol, LP integration, market data feeds.
- Regulations: SEBI, DFSA, MAS, ESMA.
Soft Skills
- Excellent communication and client-facing ability.
- Strong presales and solutioning mindset.
Preferred Qualifications
- B.Tech/BE/M.Tech in CS or equivalent.
- AWS Architect Professional, CCNP, CKA.
Why Join Us?
- Experience in colocation/global trading infra.
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
We are Looking for "IoT Migration Architect (Azure to AWS)"- Contract to Hire role.
"IoT Migration Architect (Azure to AWS)" – Role 1
Salary: between 28 LPA and 33 LPA (fixed)
We have other positions as well in IoT.
- IoT Solutions Engineer - Role 2
- IoT Architect – 8+ Yrs - Role 3
Design end-to-end IoT architecture, define strategy, and integrate hardware, software, and cloud components.
Skills: Cloud platforms, AWS IoT, Azure IoT, networking protocols.
Experience in Large Scale IoT Deployment.
Contract to Hire role.
Location – Pune/Hyderabad/Chennai/ Bangalore
Work Mode: Hybrid, 2-3 days per week from the office.
Duration: Long term, with potential for full-time conversion based on performance and business needs.
Notice period we can consider: 15-25 days (not more than that).
Client Company – One of the leading technology consulting firms.
Payroll Company – One of the leading IT services & staffing companies (with a presence in India, UK, Europe, Australia, New Zealand, US, Canada, Singapore, Indonesia, and the Middle East).
Highlights of this role:
• It’s a long-term role.
• High possibility of conversion within 6 months or after 6 months (if you perform well).
• Interview: 2 rounds in total (both virtual), but one face-to-face meeting is mandatory at any location - Pune/Hyderabad/Bangalore/Chennai.
Points to remember:
1. You should have valid experience and relieving letters from all your past employers.
2. Must be available to join within 15 days.
3. Must be ready to work 2-3 days a week from the client office.
4. Must have a continuous PF service history for the last 4 years.
What we offer during the role:
- Competitive salary
- Flexible working hours and hybrid work mode.
- Potential for full-time conversion, including comprehensive benefits, PF, gratuity, paid leave, paid holidays (as per client), health insurance, and Form 16.
How to apply:
- Please fill in the summary sheet given below.
- Please provide your UAN service history.
- Attach a latest photo.
IoT Migration Architect (Azure to AWS) - Job Description
Job Title: IoT Migration Architect (Azure to AWS)
Experience Range: 10+ Years
Role Summary
The IoT Migration Architect is a senior-level technical expert responsible for providing architecture leadership, design, and hands-on execution for migrating complex Internet of Things (IoT) applications and platforms from Microsoft Azure to Amazon Web Services (AWS). This role requires deep expertise in both Azure IoT and the entire AWS IoT ecosystem, ensuring a seamless, secure, scalable, and cost-optimized transition with minimal business disruption.
Required Technical Skills & Qualifications
10+ years of progressive experience in IT architecture, with a minimum of 4+ years focused on IoT Solution Architecture and Cloud Migrations.
Deep, hands-on expertise in the AWS IoT ecosystem, including design, implementation, and operations (AWS IoT Core, Greengrass, Device Management, etc.).
Strong, hands-on experience with Azure IoT services, specifically Azure IoT Hub, IoT Edge, and related data/compute services (e.g., Azure Stream Analytics, Azure Functions).
Proven experience in cloud-to-cloud migration projects, specifically moving enterprise-grade applications and data, with a focus on the unique challenges of IoT device and data plane migration.
Proficiency with IoT protocols such as MQTT, AMQP, HTTPS, and securing device communication (X.509); a minimal connection sketch follows this section.
Expertise in Cloud-Native Architecture principles, microservices, containerization (Docker/Kubernetes/EKS), and Serverless technologies (AWS Lambda).
Solid experience with CI/CD pipelines and DevOps practices in a cloud environment (e.g., Jenkins, AWS Code Pipeline, GitHub Actions).
Strong knowledge of database technologies, both relational (e.g., RDS) and NoSQL (e.g., DynamoDB).
Certifications Preferred: AWS Certified Solutions Architect (Professional level highly desired), or other relevant AWS/Azure certifications.
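To make the device-communication requirement concrete (see the protocols line above), here is a minimal paho-mqtt sketch that connects to an AWS IoT Core endpoint over TLS with X.509 certificates and publishes one telemetry message. The endpoint, certificate paths, and topic are placeholders, not values from this engagement.

```python
import json
import ssl

import paho.mqtt.client as mqtt

# Placeholder endpoint and certificate paths; substitute your account's values.
ENDPOINT = "example-ats.iot.ap-south-1.amazonaws.com"

# paho-mqtt 1.x constructor; 2.x additionally takes a CallbackAPIVersion argument.
client = mqtt.Client(client_id="device-001")
client.tls_set(
    ca_certs="AmazonRootCA1.pem",
    certfile="device.pem.crt",
    keyfile="private.pem.key",
    tls_version=ssl.PROTOCOL_TLSv1_2,
)
client.connect(ENDPOINT, port=8883)
client.loop_start()

# Publish one telemetry reading on a hypothetical topic, QoS 1.
client.publish("devices/device-001/telemetry", json.dumps({"temp_c": 23.5}), qos=1)

client.loop_stop()
client.disconnect()
```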
Your full name (please write your complete name) –
Contact NO –
Alternate Contact No-
Email ID –
Alternate Email ID-
Total Experience –
Experience in IoT –
Experience in AWS IoT-
Experience in Azure IoT –
Experience in Kubernetes –
Experience in Docker –
Experience in EKS-
Do you have a valid passport –
Current CTC –
Expected CTC –
What is your notice period in your current company –
Are you currently working or not –
If not working, when did you leave your last company –
Current location –
Preferred Location –
It’s a Contract to Hire role; are you OK with that –
Highest Qualification –
Current Employer (Payroll Company Name)
Previous Employer (Payroll Company Name)-
2nd Previous Employer (Payroll Company Name) –
3rd Previous Employer (Payroll Company Name)-
Are you holding any Offer –
Are you Expecting any offer -
Are you open to considering a Contract to Hire role (it’s a C2H role) –
Is PF deduction happening in your current company –
Did PF deduction happen at your 2nd last employer –
Did PF deduction happen at your 3rd last employer –
Latest Photo –
UAN Service History -
Shantpriya Chandra
Director & Head of Recruitment.
Harel Consulting India Pvt Ltd
https://www.linkedin.com/in/shantpriya/
www.harel-consulting.com
We're looking for an experienced Full-Stack Engineer who can architect and build AI-powered agent systems from the ground up. You'll work across the entire stack—from designing scalable backend services and LLM orchestration pipelines to creating frontend interfaces for agent interactions through widgets, bots, plugins, and browser extensions.
You should be fluent in modern backend technologies, AI/LLM integration patterns, and frontend development, with strong systems design thinking and the ability to navigate the complexities of building reliable AI applications.
Note: This is an on-site, 6-day-a-week role. We are in a critical product development phase where the speed of iteration directly determines market success. At this early stage, speed of execution and clarity of thought are our strongest moats, and we are doubling down on both as we build through our 0→1 journey.
WHAT YOU BRING:
You take ownership of complex technical challenges end to end, from system architecture to deployment, and thrive in a lean team where every person is a builder. You maintain a strong bias for action, moving quickly to prototype and validate AI agent capabilities while building production-grade systems. You consistently deliver reliable, scalable solutions that leverage AI effectively — whether it's designing robust prompt chains, implementing RAG systems, building conversational interfaces, or creating seamless browser extensions.
You earn trust through technical depth, reliable execution, and the ability to bridge AI capabilities with practical business needs. Above all, you are obsessed with building intelligent systems that actually work. You think deeply about system reliability, performance, cost optimization, and you're motivated by creating AI experiences that deliver real value to our enterprise customers.
WHAT YOU WILL DO:
Your primary responsibility (95% of your time) will be designing and building AI agent systems across the full stack. Specifically, you will:
- Architect and implement scalable backend services for AI agent orchestration, including LLM integration, prompt management, context handling, and conversation state management.
- Design and build robust AI pipelines — implementing RAG systems, agent workflows, tool calling, and chain-of-thought reasoning patterns (see the small retrieval sketch after this list).
- Develop frontend interfaces for AI interactions including embeddable widgets, Chrome extensions, chat interfaces, and integration plugins for third-party platforms.
- Optimize LLM operations — managing token usage, implementing caching strategies, handling rate limits, and building evaluation frameworks for agent performance.
- Build observability and monitoring systems for AI agents, including prompt versioning, conversation analytics, and quality assurance pipelines.
- Collaborate on system design decisions around AI infrastructure, model selection, vector databases, and real-time agent capabilities.
- Stay current with AI/LLM developments and pragmatically adopt new techniques (function calling, multi-agent systems, advanced prompting strategies) where they add value.
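Because the responsibilities above centre on RAG and LLM orchestration (see the pipelines item), here is a deliberately tiny retrieval-augmented generation sketch using the OpenAI Python client and an in-memory cosine-similarity lookup. The model names, documents, and prompt are assumptions; a real system would use a proper vector database and evaluation harness.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

DOCS = [
    "Refunds are processed within 5 business days.",
    "Enterprise plans include SSO and audit logs.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

DOC_VECS = embed(DOCS)

def answer(question: str) -> str:
    """Retrieve the closest document by cosine similarity, then ask the LLM."""
    q = embed([question])[0]
    sims = DOC_VECS @ q / (np.linalg.norm(DOC_VECS, axis=1) * np.linalg.norm(q))
    context = DOCS[int(np.argmax(sims))]
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": f"Answer using only this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("How long do refunds take?"))
```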
BASIC QUALIFICATIONS:
- 4–6 years of full-stack development experience, with at least 1 year working with LLMs and AI systems.
- Strong backend engineering skills: proficiency in Node.js, Python, or similar; experience with API design, database systems, and distributed architectures.
- Hands-on AI/LLM experience: prompt engineering, working with OpenAI/Anthropic/Google APIs, implementing RAG, managing context windows, and optimizing for latency/cost.
- Frontend development capabilities: JavaScript/TypeScript, React or Vue, browser extension development, and building embeddable widgets.
- Systems design thinking: ability to architect scalable, fault-tolerant systems that handle the unique challenges of AI applications (non-determinism, latency, cost).
- Experience with AI operations: prompt versioning, A/B testing for prompts, monitoring agent behavior, and implementing guardrails.
- Understanding of vector databases, embedding models, and semantic search implementations.
- Comfortable working in fast-moving, startup-style environments with high ownership.
PREFERRED QUALIFICATIONS:
- Experience with advanced LLM techniques: fine-tuning, function calling, agent frameworks (LangChain, LlamaIndex, AutoGPT patterns).
- Familiarity with ML ops tools and practices for production AI systems.
- Prior work on conversational AI, chatbots, or virtual assistants at scale.
- Experience with real-time systems, WebSockets, and streaming responses.
- Knowledge of browser automation, web scraping, or RPA technologies.
- Experience with multi-tenant SaaS architectures and enterprise security requirements.
- Contributions to open-source AI/LLM projects or published work in the field.
WHAT WE OFFER:
- Competitive salary + meaningful equity.
- High ownership and the opportunity to shape product direction.
- Direct impact on cutting-edge AI product development.
- A collaborative team that values clarity, autonomy, and velocity.
Job Description.
1. Cloud experience (Any cloud is fine although AWS is preferred. If non-AWS cloud, then the experience should reflect familiarity with the cloud's common services)
2. Good grasp of scripting (Linux for sure, i.e., bash/sh/zsh, etc.; Windows: nice to have)
3. Python or Java or JS basic knowledge (Python Preferred)
4. Monitoring tools (a small Prometheus client sketch follows this list)
5. Alerting tools
6. Logging tools
7. CICD
8. Docker/containers/(k8s/terraform nice to have)
9. Experience working on distributed applications with multiple services
10. Incident management
11. DB experience in terms of basic queries
12. Understanding of performance analysis of applications
13. Idea about data pipelines would be nice to have
14. Snowflake querying knowledge: nice to have
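Referring back to the monitoring and alerting items above, here is a minimal sketch using prometheus_client to expose application metrics for scraping; the metric names, port, and simulated queue depth are assumptions.

```python
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Items currently waiting in the queue")

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        REQUESTS.inc()
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real measurement
        time.sleep(5)
```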
The person should be able to:
Monitor system issues
Create strategies to detect and address issues
Implement automated systems to troubleshoot and resolve issues.
Write and review post-mortems
Manage infrastructure for multiple product teams
Collaborate with product engineering teams to ensure best practices are being followed
Role Description
- Develop the tech stack of Pieworks to achieve the flywheel in the most efficient manner
- Focus on standardising the code, making it more modular to enable quick updates
- Integrating with various APIs to provide a seamless solution to all stakeholders
- Build a robust node-based information tracking & flow to capitalize on degrees of separation between members and candidates
- Bring in new design ideas to make the UI stunning and the UX functional, i.e., one-click actionable, as much as possible
Mandatory Criteria
- Ability to code in Java
- Ability to Scale on AWS
- Product Thinking
- Passion for Automation
If interested kindly share your updated resume at 82008 31681
Job Description.
1. Cloud experience (Any cloud is fine although AWS is preferred. If non-AWS cloud, then the experience should reflect familiarity with the cloud's common services)
2. Good grasp of scripting (Linux for sure, i.e., bash/sh/zsh, etc.; Windows: nice to have)
3. Python or Java or JS basic knowledge (Python Preferred)
4. Monitoring tools
5. Alerting tools
6. Logging tools
7. CICD
8. Docker/containers/(k8s/terraform nice to have)
9. Experience working on distributed applications with multiple services
10. Incident management
11. DB experience in terms of basic queries
12. Understanding of performance analysis of applications
13. Idea about data pipelines would be nice to have
14. Snowflake querying knowledge: nice to have
The person should be able to:
Monitor system issues
Create strategies to detect and address issues
Implement automated systems to troubleshoot and resolve issues.
Write and review post-mortems
Manage infrastructure for multiple product teams
Collaborate with product engineering teams to ensure best practices are being followed.
🔧 Key Skills
- Strong expertise in Python (3.x)
- Experience with Django / Flask / FastAPI
- Good understanding of Microservices & RESTful API development
- Proficiency in MySQL/PostgreSQL – queries, stored procedures, optimization
- Solid grip on Data Structures & Algorithms (DSA)
- Comfortable working with Linux & Windows environments
- Hands-on experience with Git, CI/CD (Jenkins/GitHub Actions)
- Familiarity with Docker / Kubernetes is a plus
Role: Lead Software Engineer (Backend)
Salary: INR 28L to INR 40L per annum
Performance Bonus: Up to 10% of the base salary can be added
Location: Hulimavu, Bangalore, India
Experience: 6-10 years
About AbleCredit:
AbleCredit has built a foundational AI platform to help BFSI enterprises reduce OPEX by up to 70% by powering workflows for onboarding, claims, credit, and collections. Our GenAI model achieves over 95% accuracy in understanding Indian dialects and excels in financial analysis.
The company was founded in June 2023 by Utkarsh Apoorva (IIT Delhi, built Reshamandi, Guitarstreet, Edulabs); Harshad Saykhedkar (IITB, ex-AI Lead at Slack); and Ashwini Prabhu (IIML, co-founder of Mythiksha, ex-Product Head at Reshamandi, HandyTrain).
What Work You’ll Do
- Build best-in-class AI systems that enterprises can trust, where reliability and explainability are not optional.
- Operate in founder mode — build, patch, or fork, whatever it takes to ship today, not next week.
- Work at the frontier of AI x Systems — making AI models behave predictably to solve real, enterprise-grade problems.
- Own end-to-end feature delivery — from requirement scoping to design, development, testing, deployment, and post-release optimization.
- Design and implement complex, distributed systems that support large-scale workflows and integrations for enterprise clients.
- Operate with full technical ownership — make architectural decisions, review code, and mentor junior engineers to maintain quality and velocity.
- Build scalable, event-driven services leveraging AWS Lambda, SQS/SNS, and modern asynchronous patterns (see the handler sketch after this list).
- Work with cross-functional teams to design robust notification systems, third-party integrations, and data pipelines that meet enterprise reliability and security standards.
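As a small illustration of the event-driven item above, here is a hedged AWS Lambda handler for an SQS event source with partial batch responses; the message shape and downstream call are assumptions, and the event source mapping must have ReportBatchItemFailures enabled for the return value to matter.

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    """Process a batch of SQS records delivered to Lambda.

    Returning failed message IDs in 'batchItemFailures' lets Lambda retry
    only those messages instead of the whole batch.
    """
    failures = []
    for record in event.get("Records", []):
        try:
            payload = json.loads(record["body"])  # assumed JSON message body
            logger.info("processing notification %s", payload.get("id"))
            # ... call the (hypothetical) downstream notification service here ...
        except Exception:
            logger.exception("failed record %s", record["messageId"])
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```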
The Skills You Have..
- Strong background as an Individual Contributor — capable of owning systems from concept to production without heavy oversight.
- Expertise in system design, scalability, and fault-tolerant architecture.
- Proficiency in Node.js (bonus) or another backend language such as Go, Java, or Python.
- Deep understanding of SQL (PostgreSQL/MySQL) and NoSQL (MongoDB/DynamoDB) systems.
- Hands-on experience with AWS services — Lambda, API Gateway, S3, CloudWatch, ECS/EKS, and event-based systems.
- Experience in designing and scaling notification systems and third-party API integrations.
- Proficiency in event-driven architectures and multi-threading/concurrency models.
- Strong understanding of data modeling, security practices, and performance optimization.
- Familiarity with CI/CD pipelines, automated testing, and monitoring tools.
- Strong debugging, performance tuning, and code review skills.
What You Should Have Done in the Past
- Delivered multiple complex backend systems or microservices from scratch in a production environment.
- Led system design discussions and guided teams on performance, reliability, and scalability trade-offs.
- Mentored SDE-1 and SDE-2 engineers, enabling them to deliver features independently.
- Owned incident response and root cause analysis for production systems.
- (Bonus) Built or contributed to serverless systems using AWS Lambda, with clear metrics on uptime, throughput, and cost-efficiency.
Highlights:
- PTO & Holidays
- Opportunity to work with a core Gen AI startup.
- Flexible hours and an extremely positive work environment
Software Development Engineer III (Frontend)
About the company:
At WizCommerce, we’re building the AI Operating System for Wholesale Distribution — transforming how manufacturers, wholesalers, and distributors sell, serve, and scale.
With a growing customer base across North America, WizCommerce helps B2B businesses move beyond disconnected systems and manual processes with an integrated, AI-powered platform.
Our platform brings together everything a wholesale business needs to sell smarter and faster. With WizCommerce, businesses can:
- Take orders easily — whether at a trade show, during customer visits, or online.
- Save hours of manual work by letting AI handle repetitive tasks like order entry or creating product content.
- Offer a modern shopping experience through their own branded online store.
- Access real-time insights on what’s selling, which customers to focus on, and where new opportunities lie.
The wholesale industry is at a turning point — outdated systems and offline workflows can no longer keep up. WizCommerce brings the speed, intelligence, and design quality of modern consumer experiences to the B2B world, helping companies operate more efficiently and profitably.
Backed by leading global investors including Peak XV Partners (formerly Sequoia Capital India), Z47 (formerly Matrix Partners), Blume Ventures, and Alpha Wave Global, we’re rapidly scaling and redefining how wholesale and distribution businesses sell and grow.
If you want to be part of a fast-growing team that’s disrupting a $20 trillion global industry, WizCommerce is the place to be.
Read more about us in Economic Times, The Morning Star, YourStory, or on our website!
Founders:
Divyaanshu Makkar (Co-founder, CEO)
Vikas Garg (Co-founder, CCO)
Job Description:
Role & Responsibilities:
- Design, develop, and maintain complex web applications using ReactJS, and relevant web technologies.
- Work closely with Product Managers, Designers, and other stakeholders to understand requirements and translate them into technical specifications and deliverables.
- Take ownership of technical decisions, code reviews, and ensure best practices are followed in the team.
- Provide technical leadership and mentorship to junior developers, promoting their professional growth and skill development.
- Collaborate with cross-functional teams to integrate web applications with other systems and platforms.
- Stay up-to-date with emerging trends and technologies in web development to drive continuous improvement and innovation.
- Contribute to the design and architecture of the frontend codebase, ensuring high-quality, maintainable, and scalable code.
Requirements:
- Bachelor’s degree in Computer Science or a related field.
- 5-7 years of experience in frontend development using ReactJS, Redux, and related web technologies.
- Strong understanding of web development concepts, including HTML, CSS, JavaScript, and responsive design principles.
- Experience with modern web development frameworks and tools such as ReactJS, Redux, Webpack, and Babel.
- Experience working in an Agile development environment and delivering software in a timely and efficient manner.
- Strong verbal and written communication skills, with the ability to effectively collaborate with cross-functional teams and stakeholders.
- Ability to take ownership of projects, prioritize tasks, and meet deadlines.
- Experience with backend development and AWS is a plus.
Benefits:
- Opportunity to work in a fast-paced, growing B2B SaaS company.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Growth and professional development opportunities.
- Flexible working hours to accommodate your schedule.
Compensation: Best in the industry
Role location: Bengaluru/Gurugram
Website Link: https://www.wizcommerce.com/
Job Title: Infrastructure Engineer
Experience: 4.5+ years
Location: Bangalore
Employment Type: Full-Time
Joining: Immediate Joiner Preferred
💼 Job Summary
We are looking for a skilled Infrastructure Engineer to manage, maintain, and enhance our on-premise and cloud-based systems. The ideal candidate will have strong experience in server administration, virtualization, hybrid cloud environments, and infrastructure automation. This role requires hands-on expertise, strong troubleshooting ability, and the capability to collaborate with cross-functional teams.
Roles & Responsibilities
- Install, configure, and manage Windows and Linux servers.
- Maintain and administer Active Directory, DNS, DHCP, and file servers.
- Manage virtualization platforms such as VMware or Hyper-V.
- Monitor system performance, logs, and uptime to ensure high availability.
- Provide L2/L3 support, diagnose issues, and maintain detailed technical documentation.
- Deploy and manage cloud servers and resources in AWS, Azure, or Google Cloud.
- Design, build, and maintain hybrid environments (on-premises + cloud).
- Administer data storage systems and implement/test backup & disaster recovery plans.
- Handle cloud services such as cloud storage, networking, and identity (IAM, Azure AD).
- Ensure compliance with security standards like ISO, SOC, GDPR, PCI DSS.
- Integrate and manage monitoring and alerting tools.
- Support CI/CD pipelines and automation for infrastructure deployments.
- Collaborate with Developers, DevOps, and Network teams for seamless system integration.
- Troubleshoot and resolve complex infrastructure & system-level issues.
Key Skills Required
- Windows Server & Linux Administration
- VMware / Hyper-V / Virtualization technologies
- Active Directory, DNS, DHCP administration
- Knowledge of CI/CD and Infrastructure as Code
- Hands-on experience in AWS, Azure, or GCP
- Experience with cloud migration and hybrid cloud setups
- Proficiency in backup, replication, and disaster recovery tools
- Familiarity with automation tools (Terraform, Ansible, etc. preferred)
- Strong troubleshooting and documentation skills
- Understanding of networking concepts (TCP/IP, VPNs, firewalls, routing) is an added advantage
Role Overview
We are seeking a skilled Java Developer with a strong background in building scalable, high-quality, and high-performance digital applications on the Java technology stack. This role is critical for developing microservice architectures and managing data with distributed databases and GraphQL interfaces.
Skills:
Java, Cloud (AWS/GCP/Azure), NoSQL, Docker, containerization
Primary Responsibilities:
- Design and develop scalable services/microservices using Java/Node and MVC architecture, ensuring clean, performant, and maintainable code.
- Implement GraphQL APIs to enhance the functionality and performance of applications.
- Work with Cassandra and other distributed database systems to design robust, scalable database schemas that support business processes.
- Design and Develop functionality/application for given requirements by focusing on Functional, Non Functional and Maintenance needs.
- Collaborate within team and with cross functional teams to effectively implement, deploy and monitor applications.
- Document and Improve existing processes/tools.
- Support and Troubleshoot production incidents with a sense of urgency by understanding customer impact.
- Proficient in developing applications and web services, as well as cloud-native apps, using MVC frameworks like Spring Boot and REST APIs.
- Thorough understanding and hands-on experience with containerization and orchestration technologies like Docker, Kubernetes, etc.
- Strong background in working with cloud platforms, especially GCP
- Demonstrated expertise in building and deploying services using CI/CD pipelines, leveraging tools like GitHub, CircleCI, Jenkins, and GitLab.
- Comprehensive knowledge of distributed database designs.
- Experience in building observability into applications with OTel or Prometheus is a plus.
- Experience working in NodeJS is a plus.
Soft Skills Required:
- Should be able to work independently in highly cross functional projects/environment.
- Team player who pays attention to detail and has a Team win mindset.
Key Responsibilities
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS); a small object-storage read sketch follows this list.
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
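To ground the object-storage item above, here is a small pyarrow sketch that reads a Parquet dataset directly from S3 with column projection and a pushed-down filter, the same kind of access a Dremio semantic layer would federate; the bucket path, region, and column names are placeholders.

```python
import pyarrow.dataset as ds
from pyarrow import fs

# Placeholder bucket/prefix; assumes AWS credentials are available to pyarrow.
s3 = fs.S3FileSystem(region="us-east-1")
dataset = ds.dataset("example-lake/curated/orders/", format="parquet", filesystem=s3)

# Project only the needed columns and filter rows, then materialize a table.
table = dataset.to_table(
    columns=["order_id", "amount", "order_date"],
    filter=ds.field("amount") > 1000,
)
print(table.num_rows)
```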
Qualifications
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 10+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Our Mission
To make video as accessible to machines as text and voice are today.
At lookup, we believe the world's most valuable asset is trapped. Video is everywhere, but it's unsearchable—a black box of insight that no one can open, or at least not affordably. We’re changing that. We're building the search engine for the visual world, so anyone can find or do anything with video just by asking.
Text is queryable. Voice is transcribed. Video, the largest and richest data source of all, is still a black box. A computer can't understand it, and so its value remains trapped.
Our mission at lookup is to fix this.
About the Role
We are looking for founding Backend Engineers to build a highly performant, reliable, and scalable API platform that makes enterprise video knowledge readily available for video search, summarization, and natural‑language Q&A. You will partner closely with our ML team working on vision‑language models to productionize research and deliver fast, trustworthy APIs for customers.
Examples of technical challenges you will work on include: distributed video storage, a unified application framework and data model for indexing large video libraries, low‑latency clip retrieval, vector search at scale, and end‑to‑end build, test, deploy, and observability in cloud environments.
What You’ll Do:
- Design and build robust backend services and APIs (REST, gRPC) for vector search, video summarization, and video Q&A (see the minimal endpoint sketch after this list).
- Own API performance and reliability, including low‑latency retrieval, pagination, rate limiting, and backwards‑compatible versioning.
- Design schemas and tune queries in Postgres, and integrate with unstructured storage.
- Implement observability across metrics, logs, and traces. Set error budgets and SLOs.
- Write clear design docs and ship high‑quality, well‑tested code.
- Collaborate with ML engineers to integrate and productionize VLMs and retrieval pipelines.
- Take ownership of architecture from inception to production launch.
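Since FastAPI appears in the stated stack and the first item above asks for search APIs with pagination, here is a minimal hedged FastAPI sketch with offset/limit pagination over a stubbed clip-search function; the route shape, response model, and stub are assumptions.

```python
from fastapi import FastAPI, Query
from pydantic import BaseModel

app = FastAPI()

class Clip(BaseModel):
    video_id: str
    start_s: float
    end_s: float
    score: float

def search_clips(query: str) -> list[Clip]:
    """Stub for the real vector-search call (e.g., a Weaviate query)."""
    return [Clip(video_id="demo", start_s=12.0, end_s=18.5, score=0.91)]

@app.get("/v1/search", response_model=list[Clip])
def search(
    q: str = Query(..., min_length=1),
    limit: int = Query(10, ge=1, le=100),
    offset: int = Query(0, ge=0),
):
    results = search_clips(q)
    return results[offset : offset + limit]
```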
Who You Are:
- 3+ years of professional experience in backend development.
- Proven experience building and scaling polished WebSocket, gRPC, and REST APIs.
- Exposure to distributed systems and container orchestration (Docker and Kubernetes).
- Hands‑on experience with AWS.
- Strong knowledge of SQL (Postgres) and NoSQL (e.g., Cassandra), including schema design, query optimization, and scaling.
- Familiarity with our stack is a plus, but not mandatory: Python (FastAPI), Celery, Kafka, Postgres, Redis, Weaviate, React.
- Ability to diagnose complex issues, identify root causes, and implement effective fixes.
- Comfortable working in a fast‑paced startup environment.
Nice to have:
- Hands-on work with LLM agents, vector embeddings, or RAG applications.
- Building video streaming pipelines and storage systems at scale (FFmpeg, RTSP, WebRTC).
- Proficiency with modern frontend frameworks (React, TypeScript, Tailwind CSS) and responsive UI design.
Location & Culture
- Full-time, in-office role in Bangalore (we’re building fast and hands-on).
- Must be comfortable with a high-paced environment and collaboration across PST time zones for our US customers and investors.
- Expect startup speed — daily founder syncs, rapid design-to-prototype cycles, and a culture of deep ownership.
Why You’ll Love This Role
- Work on the frontier of video understanding and real-world AI — products that can redefine trust and automation.
- Build core APIs that make video queryable and power real customer use.
- Own systems end to end: performance, reliability, and developer experience.
- Work closely with founders and collaborate in person in Bangalore.
- Competitive salary with meaningful early equity.
You will be responsible for building a highly scalable, extensible, and robust application. This position reports to the Engineering Manager.
Responsibilities:
- Align Sigmoid with key Client initiatives
- Interface daily with customers across leading Fortune 500 companies to understand strategic requirements
- Ability to understand business requirements and tie them to technology solutions
- Open to work from client location as per the demand of the project / customer.
- Facilitate in Technical Aspects
- Develop and evolve highly scalable and fault-tolerant distributed components using Java technologies.
- Excellent experience in Application development and support, integration development and quality assurance.
- Provide technical leadership and manage it on a day-to-day basis
- Interface daily with customers across leading Fortune 500 companies to understand strategic requirements
- Stay up-to-date on the latest technology to ensure the greatest ROI for customer & Sigmoid
- Hands-on coder with a good understanding of enterprise-level code.
- Design and implement APIs, abstractions and integration patterns to solve challenging distributed computing problems
- Experience in defining technical requirements, data extraction, data transformation, automating jobs, productionizing jobs, and exploring new big data technologies within a Parallel Processing environment
- Culture
- Must be a strategic thinker with the ability to think unconventionally / out of the box.
- Analytical and solution-driven orientation.
- Raw intellect, talent and energy are critical.
- Entrepreneurial and agile: understands the demands of a private, high-growth company.
- Ability to be both a leader and hands on "doer".
Qualifications:
- 3-5 year track record of relevant work experience; a degree in Computer Science or a related technical discipline is required
- Experience in development of enterprise-scale applications and capability in developing frameworks, design patterns, etc. Should be able to understand and tackle technical challenges and propose comprehensive solutions.
- Experience with functional and object-oriented programming; Java (preferred) or Python is a must.
- Hands-on knowledge of MapReduce, Hadoop, PySpark, HBase, and Elasticsearch.
- Development and support experience in Big Data domain
- Experience with database modelling and development, data mining and warehousing.
- Unit, Integration and User Acceptance Testing.
- Effective communication skills (both written and verbal)
- Ability to collaborate with a diverse set of engineers, data scientists and product managers
- Comfort in a fast-paced start-up environment.
Preferred Qualification:
- Experience in Agile methodology.
- Proficient with SQL and its variation among popular databases.
- Experience working with large, complex data sets from a variety of sources.
Domain - Credit risk / Fintech
Roles and Responsibilities:
1. Development, validation and monitoring of Application and Behaviour scorecards for the retail loan portfolio (a toy scorecard sketch follows this list)
2. Improvement of collection efficiency through advanced analytics
3. Development and deployment of fraud scorecards
4. Upsell / cross-sell strategy implementation using analytics
5. Create modern data pipelines and processing using AWS PaaS components (Glue, SageMaker Studio, etc.)
6. Deploying software using CI/CD tools such as Azure DevOps, Jenkins, etc.
7. Experience with API tools such as REST, Swagger, and Postman
8. Model deployment in AWS and management of the production environment
9. Team player who can work with cross-functional teams to gather data and derive insights
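Since item 1 above concerns application and behaviour scorecards, here is a toy scikit-learn sketch that fits a logistic regression and maps predicted default probability onto a points-style score. The features, labels, and scaling constants are purely illustrative, not a production scorecard methodology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy applicant features: [monthly_income_k, utilisation_pct, days_past_due]
X = np.array([[55, 20, 0], [18, 85, 30], [40, 40, 5], [12, 95, 60]])
y = np.array([0, 1, 0, 1])  # 1 = defaulted (illustrative labels)

model = LogisticRegression().fit(X, y)

def score(applicant: list[float], base: int = 600, pdo: int = 20) -> float:
    """Map default probability to a score where higher means lower risk."""
    p = model.predict_proba([applicant])[0, 1]
    odds = (1 - p) / max(p, 1e-9)
    return base + pdo * float(np.log2(odds))

print(round(score([35, 50, 10]), 1))
```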
Mandatory Technical skill set :
1. Previous experience in scorecard development and credit risk strategy development
2. Python and Jenkins
3. Logistic regression, Scorecard, ML and neural networks
4. Statistical analysis and A/B testing
5. AWS SageMaker, S3, EC2, Docker
6. REST API, Swagger and Postman
7. Excel
8. SQL
9. Visualisation tools such as Redash / Grafana
10. Bitbucket, GitHub, and versioning tools
Senior Python Django Developer
Experience: Back-end development: 6 years (Required)
Location: Bangalore/ Bhopal
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc. (see the small model sketch after this list).
- Contribute to technical decision-making and mentor junior developers.
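As a small illustration of the payments-domain responsibility above, here is a hedged Django model sketch for recording transactions idempotently; the field names, status values, and app layout are assumptions rather than an actual schema.

```python
from django.db import models

class Transaction(models.Model):
    """One payment leg; the unique idempotency key guards against double-processing."""

    class Status(models.TextChoices):
        PENDING = "PENDING"
        SETTLED = "SETTLED"
        FAILED = "FAILED"

    idempotency_key = models.CharField(max_length=64, unique=True)
    wallet_id = models.CharField(max_length=36, db_index=True)
    amount = models.DecimalField(max_digits=12, decimal_places=2)
    currency = models.CharField(max_length=3, default="INR")
    status = models.CharField(max_length=10, choices=Status.choices, default=Status.PENDING)
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        indexes = [models.Index(fields=["wallet_id", "created_at"])]
```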
Requirements:
- 6 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
Job Types: Full-time, Permanent
Schedule:
- Day shift
Supplemental Pay:
- Performance bonus
- Yearly bonus
Ability to commute/relocate:
- JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)
Junior DevOps Engineer
Experience: 2–3 years
About Us
We are a fast-growing fintech/trading company focused on building scalable, high-performance systems for financial markets. Our technology stack powers real-time trading, risk management, and analytics platforms. We are looking for a motivated Junior DevOps Engineer to join our dynamic team and help us maintain and improve our infrastructure.
Key Responsibilities
- Support deployment, monitoring, and maintenance of trading and fintech applications.
- Automate infrastructure provisioning and deployment pipelines using tools like Ansible, Terraform, or similar.
- Collaborate with development and operations teams to ensure high availability, reliability, and security of systems.
- Troubleshoot and resolve production issues in a fast-paced environment.
- Implement and maintain CI/CD pipelines for continuous integration and delivery.
- Monitor system performance and optimize infrastructure for scalability and cost-efficiency.
- Assist in maintaining compliance with financial industry standards and security best practices.
Required Skills
- 2–3 years of hands-on experience in DevOps or related roles.
- Proficiency in Linux/Unix environments.
- Experience with containerization (Docker) and orchestration (Kubernetes).
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Working knowledge of scripting languages (Bash, Python).
- Experience with configuration management tools (Ansible, Puppet, Chef).
- Understanding of networking concepts and security practices.
- Exposure to monitoring tools (Prometheus, Grafana, ELK stack).
- Basic understanding of CI/CD tools (Jenkins, GitLab CI, GitHub Actions).
Preferred Skills
- Experience in fintech, trading, or financial services.
- Knowledge of high-frequency trading systems or low-latency environments.
- Familiarity with financial data protocols and APIs.
- Understanding of regulatory requirements in financial technology.
What We Offer
- Opportunity to work on cutting-edge fintech/trading platforms.
- Collaborative and learning-focused environment.
- Competitive salary and benefits.
- Career growth in a rapidly expanding domain.
Job Summary :
We are looking for a proactive and skilled Senior DevOps Engineer to join our team and play a key role in building, managing, and scaling infrastructure for high-performance systems. The ideal candidate will have hands-on experience with Kubernetes, Docker, Python scripting, cloud platforms, and DevOps practices around CI/CD, monitoring, and incident response.
Key Responsibilities :
- Design, build, and maintain scalable, reliable, and secure infrastructure on cloud platforms (AWS, GCP, or Azure).
- Implement Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or similar.
- Manage Kubernetes clusters, configure namespaces, services, deployments, and autoscaling.
CI/CD & Release Management :
- Build and optimize CI/CD pipelines for automated testing, building, and deployment of services.
- Collaborate with developers to ensure smooth and frequent deployments to production.
- Manage versioning and rollback strategies for critical deployments.
Containerization & Orchestration using Kubernetes :
- Containerize applications using Docker, and manage them using Kubernetes.
- Write automation scripts using Python or Shell for infrastructure tasks, monitoring, and deployment flows.
- Develop utilities and tools to enhance operational efficiency and reliability.
Monitoring & Incident Management :
- Analyze system performance and implement infrastructure scaling strategies based on load and usage trends.
- Optimize application and system performance through proactive monitoring and configuration tuning.
Desired Skills and Experience :
- Experience required: 8+ years.
- Hands-on experience with cloud services like AWS, EKS, etc.
- Ability to design a good cloud solution.
- Strong Linux troubleshooting, Shell Scripting, Kubernetes, Docker, Ansible, Jenkins Skills.
- Design and implement the CI/CD pipeline following the best industry practices using open-source tools.
- Use knowledge and research to constantly modernize our applications and infrastructure stacks.
- Be a team player and strong problem-solver to work with a diverse team.
- Good communication skills.
About Borderless Access
Borderless Access is a company that believes in fostering a culture of innovation and collaboration to build and deliver digital-first products for market research methodologies. This enables our customers to stay ahead of their competition.
We are committed to becoming the global leader in providing innovative digital offerings for consumers backed by advanced analytics, AI, ML, and cutting-edge technological capabilities.
Our Borderless Product Innovation and Operations team is dedicated to creating a top-tier market research platform that will drive our organization's growth. To achieve this, we're embracing modern technologies and a cutting-edge tech stack for faster, higher-quality product development.
The Product Development team is the core of our strategy, fostering collaboration and efficiency. If you're passionate about innovation and eager to contribute to our rapidly evolving market research domain, we invite you to join our team.
Key Responsibilities
- Lead, mentor, and grow a cross-functional team of engineers.
- Foster a culture of collaboration, accountability, and continuous learning.
- Oversee the design and development of robust platform architecture with a focus on scalability, security, and maintainability.
- Establish and enforce engineering best practices including code reviews, unit testing, and CI/CD pipelines.
- Promote clean, maintainable, and well-documented code across the team.
- Lead architectural discussions and technical decision-making, with clear and concise documentation for software components and systems.
- Collaborate with Product, Design, and other stakeholders to define and prioritize platform features.
- Track and report on key performance indicators (KPIs) such as velocity, code quality, deployment frequency, and incident response times.
- Ensure timely delivery of high-quality software aligned with business goals.
- Work closely with DevOps to ensure platform reliability, scalability, and observability.
- Conduct regular 1:1s, performance reviews, and career development planning.
- Conduct code reviews and provide constructive feedback to ensure code quality and maintainability.
- Participate in the entire software development lifecycle, from requirements gathering to deployment and maintenance.
Added Responsibilities
- Defining and adhering to the development process.
- Taking part in regular external audits and maintaining artifacts.
- Identify opportunities for automation to reduce repetitive tasks.
- Mentor and coach team members across teams.
- Continuously optimize application performance and scalability.
- Collaborate with the Marketing team to understand different user journeys.
Growth and Development
The following are some of the growth and development activities that you can look forward to at Borderless Access as an Engineering Manager:
- Develop leadership skills – Enhance your leadership abilities through workshops or coaching from Senior Leadership and Executive Leadership.
- Foster innovation – Become part of a culture of innovation and experimentation within the product development and operations team.
- Drive business objectives – Become part of defining and taking actions to meet the business objectives.
About You
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in software development.
- Experience with microservices architecture and container orchestration.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Solid understanding of data structures, algorithms, and software design patterns.
- Solid understanding of enterprise system architecture patterns.
- Experience in managing a small to medium-sized team with varied experiences.
- Strong proficiency in back-end development, including programming languages like Python, Java, or Node.js, and frameworks like Spring or Express.
- Strong proficiency in front-end development, including HTML, CSS, JavaScript, and popular frameworks like React or Angular.
- Experience with databases (e.g., MySQL, PostgreSQL, MongoDB).
- Experience with cloud platforms AWS, Azure, or GCP (preferred is Azure).
- Knowledge of containerization technologies Docker and Kubernetes.
Python Backend Developer
We are seeking a skilled Python Backend Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be on developing server-side logic to ensure high performance and responsiveness to requests from the front end. You will also be responsible for integrating front-end elements built by your coworkers into the application, as well as managing AWS resources.
Roles & Responsibilities
- Develop and maintain scalable, secure, and robust backend services using Python
- Design and implement RESTful APIs and/or GraphQL endpoints
- Integrate user-facing elements developed by front-end developers with server-side logic
- Write reusable, testable, and efficient code
- Optimize components for maximum performance and scalability
- Collaborate with front-end developers, DevOps engineers, and other team members
- Troubleshoot and debug applications
- Implement data storage solutions (e.g., PostgreSQL, MySQL, MongoDB)
- Ensure security and data protection
Mandatory Technical Skill Set
- Implementing optimal data storage (e.g., PostgreSQL, MySQL, MongoDB, S3)
- Python backend development experience
- Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions
- Implemented and managed containerization platforms such as Docker and orchestration tools like Kubernetes
- Previous hands-on experience in:
- EC2, S3, ECS, EMR, VPC, Subnets, SQS, CloudWatch, CloudTrail, Lambda, SageMaker, RDS, SES, SNS, IAM, AWS Backup, AWS WAF
- SQL
Designation: Senior Python Django Developer
Position: Senior Python Developer
Job Types: Full-time, Permanent
Pay: Up to ₹800,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Bhopal Indrapuri (MP) And Bangalore JP Nagar
Experience: Back-end development: 4 years (Required)
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
Requirements:
- 4 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
Job Details
- Job Title: Lead II - Software Engineering- AI, NLP, Python, Data science
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 7-9 years
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Job Description:
Role Proficiency:
Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions. Account for others' developmental activities and assist the Project Manager in day-to-day project execution.
Additional Comments:
Mandatory Skills: Data Science
Skills to Evaluate: AI, Gen AI, RAG, Data Science
Experience: 8 to 10 Years
Location: Bengaluru
Job Description
Job Title: AI Engineer
Mandatory Skills: Artificial Intelligence, Natural Language Processing, Python, Data Science
Position: AI Engineer – LLM & RAG Specialization
Company Name: Sony India Software Centre
About the role:
We are seeking a highly skilled AI Engineer with 8-10 years of experience to join our innovation-driven team. This role focuses on the design, development, and deployment of advanced enterprise-scale Large Language Models (eLLM) and Retrieval Augmented Generation (RAG) solutions. You will work on end-to-end AI pipelines, from data processing to cloud deployment, delivering impactful solutions that enhance Sony’s products and services.
Key Responsibilities:
- Design, implement, and optimize LLM-powered applications, ensuring high performance and scalability for enterprise use cases.
- Develop and maintain RAG pipelines, including vector database integration (e.g., Pinecone, Weaviate, FAISS) and embedding model optimization (a minimal sketch follows this description).
- Deploy, monitor, and maintain AI/ML models in production, ensuring reliability, security, and compliance.
- Collaborate with product, research, and engineering teams to integrate AI solutions into existing applications and workflows.
- Research and evaluate the latest LLM and AI advancements, recommending tools and architectures for continuous improvement.
- Preprocess, clean, and engineer features from large datasets to improve model accuracy and efficiency.
- Conduct code reviews and enforce AI/ML engineering best practices.
- Document architecture, pipelines, and results; present findings to both technical and business stakeholders.
Job Description:
- 8-10 years of professional experience in AI/ML engineering, with at least 4+ years in LLM development and deployment.
- Proven expertise in RAG architectures, vector databases, and embedding models.
- Strong proficiency in Python; familiarity with Java, R, or other relevant languages is a plus.
- Experience with AI/ML frameworks (PyTorch, TensorFlow, etc.) and relevant deployment tools.
- Hands-on experience with cloud-based AI platforms such as AWS SageMaker, AWS Q Business, AWS Bedrock, or Azure Machine Learning.
- Experience in designing, developing, and deploying Agentic AI systems, with a focus on creating autonomous agents that can reason, plan, and execute tasks to achieve specific goals.
- Understanding of security concepts in AI systems, including vulnerabilities and mitigation strategies.
- Solid knowledge of data processing, feature engineering, and working with large-scale datasets.
- Experience in designing and implementing AI-native applications and agentic workflows using the Model Context Protocol (MCP) is nice to have.
- Strong problem-solving skills, analytical thinking, and attention to detail.
- Excellent communication skills with the ability to explain complex AI concepts to diverse audiences.
Day-to-day responsibilities:
- Design and deploy AI-driven solutions to address specific security challenges, such as threat detection, vulnerability prioritization, and security automation.
- Optimize LLM-based models for various security use cases, including chatbot development for security awareness or automated incident response.
- Implement and manage RAG pipelines for enhanced LLM performance.
- Integrate AI models with existing security tools, including Endpoint Detection and Response (EDR), Threat and Vulnerability Management (TVM) platforms, and Data Science/Analytics platforms. This will involve working with APIs and understanding data flows.
- Develop and implement metrics to evaluate the performance of AI models.
- Monitor deployed models for accuracy and performance, and retrain as needed.
- Adhere to security best practices and ensure that all AI solutions are developed and deployed securely, considering data privacy and compliance requirements.
- Work closely with other team members to understand security requirements and translate them into AI-driven solutions.
- Communicate effectively with stakeholders, including senior management, to present project updates and findings.
- Stay up to date with the latest advancements in AI/ML and security, and identify opportunities to leverage new technologies to improve our security posture.
- Maintain thorough documentation of AI models, code, and processes.
What We Offer:
- Opportunity to work on cutting-edge LLM and RAG projects with global impact.
- A collaborative environment fostering innovation, research, and skill growth.
- Competitive salary, comprehensive benefits, and flexible work arrangements.
- The chance to shape AI-powered features in Sony’s next-generation products.
- Ability to function in an environment where the team is virtual and geographically dispersed.
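To make the RAG-pipeline responsibility above concrete, here is a minimal retrieval sketch, assuming FAISS as the vector index and a placeholder `embed()` function standing in for whichever embedding model the team actually uses; it illustrates the general embed-index-retrieve-prompt pattern, not Sony's implementation.

```python
# Minimal RAG retrieval sketch: embed documents, index them with FAISS,
# and pull the top-k passages to ground an LLM prompt.
import numpy as np
import faiss  # pip install faiss-cpu

def embed(texts):
    # Placeholder: returns fixed pseudo-random vectors so the sketch runs.
    # Replace with a real embedding model (SageMaker/Bedrock endpoint,
    # sentence-transformers, etc.).
    rng = np.random.default_rng(0)
    return rng.random((len(texts), 384), dtype=np.float32)

documents = [
    "Reset a locked account via the self-service portal.",
    "Production models are monitored for drift every 24 hours.",
    "Vulnerability scans run weekly on all EC2 instances.",
]

doc_vectors = embed(documents)
index = faiss.IndexFlatL2(doc_vectors.shape[1])  # exact L2 search
index.add(doc_vectors)

question = "How often are models checked for drift?"
_, ids = index.search(embed([question]), 2)       # top-2 nearest documents
context = "\n".join(documents[i] for i in ids[0])

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the LLM
```

The same shape generalizes to managed vector databases such as Pinecone or Weaviate: swap the FAISS index for the service's upsert/query calls while keeping the embed-retrieve-prompt flow.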
Education Qualification: Graduate
Skills: AI, NLP, Python, Data science
Must-Haves
Skills
AI, NLP, Python, Data science
NP: Immediate – 30 Days
Job Details
- Job Title: Lead I - Software Engineering - Java, J2EE, Spring
- Industry: Technology
- Domain: Information Technology (IT)
- Experience Required: 6-12 years
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Job Description:
Role Summary:
We are looking for an experienced Senior Java Developer with expertise in building robust, scalable web applications using Java/J2EE, Spring Boot, REST APIs, and modern microservices architectures. The ideal candidate will be skilled in both back-end and middleware technologies, with strong experience in cloud platforms (AWS), and capable of mentoring junior developers while contributing to high-impact enterprise projects.
The developer will be responsible for full-cycle application development: from interpreting specifications and writing clean, reusable code, to testing, integration, and deployment. You will also work closely with customers and project teams to understand requirements and deliver solutions that optimize cost, performance, and maintainability.
Key Responsibilities:
Application Development & Delivery
- Design, code, debug, test, and document Java-based web applications aligned with design specifications.
- Build scalable and secure microservices using Spring Boot and RESTful APIs.
- Optimize application performance, maintainability, and reusability by using proven design patterns.
- Handle complex data structures and develop multi-threaded, performance-optimized applications.
- Ensure code quality through TDD (JUnit) and best practices.
Cloud & DevOps
- Develop and deploy applications on AWS Cloud Services: EC2, S3, DynamoDB, SNS, SES, etc.
- Leverage containerization tools like Docker and orchestration using Kubernetes.
Integration & Configuration
- Integrate with various databases (PostgreSQL, MySQL, Oracle, NoSQL).
- Configure development environments and CI/CD pipelines as per project needs.
- Follow configuration management processes and ensure compliance.
Testing & Quality Assurance
- Review and create unit test cases, scenarios, and support UAT phases.
- Perform defect root cause analysis (RCA) and proactively implement quality improvements.
Documentation
- Create and review technical documents: HLD, LLD, SAD, user stories, design docs, test cases, and release notes.
- Contribute to project knowledge bases and code repositories.
Team & Project Management
- Mentor team members; conduct code and design reviews.
- Assist Project Manager in effort estimation, planning, and task allocation.
- Set and review FAST goals for yourself and your team; provide regular performance feedback.
Customer Interaction
- Engage with customers to clarify requirements and present technical solutions.
- Conduct product demos and design walkthroughs.
- Interface with customer architects for design finalization.
Key Skills & Tools:
Core Technologies:
- Java/J2EE, Spring Boot, REST APIs
- Object-Oriented Programming (OOP), Design Patterns, Domain-Driven Design (DDD)
- Multithreading, Data Structures, TDD using JUnit
Web & Data Technologies:
- JSON, XML, AJAX, Web Services
- Database Technologies: PostgreSQL, MySQL, Oracle, NoSQL (e.g., DynamoDB)
- Persistence Frameworks: Hibernate, JPA
Cloud & DevOps:
- AWS: S3, EC2, DynamoDB, SNS, SES
- Version Control & Containerization: GitHub, Docker, Kubernetes
Agile & Development Practices:
- Agile methodologies: Scrum or Kanban
- CI/CD concepts
- IDEs: Eclipse, IntelliJ, or equivalent
Expected Outcomes:
- Timely delivery of high-quality code and application components
- Improved performance, cost-efficiency, and maintainability of applications
- High customer satisfaction through accurate requirement translation and delivery
- Team productivity through effective mentoring and collaboration
- Minimal post-production defects and technical issues
Performance Indicators:
- Adherence to coding standards and engineering practices
- On-time project delivery and milestone completion
- Reduction in defect count and issue recurrence
- Knowledge contributions to project and organizational repositories
- Completion of mandatory compliance and technology/domain certifications
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- Relevant certifications (e.g., AWS Certified Developer, Oracle Certified, Scrum Master)
Soft Skills:
- Strong analytical and problem-solving mindset
- Excellent communication and presentation skills
- Team leadership and mentorship abilities
- High accountability and ability to work under pressure
- Positive team dynamics and proactive collaboration
Skills
Java, J2EE, Spring
Must-Haves
Java, J2EE, Spring
Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
NP: Immediate – 30 Days
About Corridor Platforms
Corridor Platforms is a leader in next-generation risk decisioning and responsible AI governance, empowering banks and lenders to build transparent, compliant, and data-driven solutions. Our platforms combine advanced analytics, real-time data integration, and GenAI to support complex financial decision workflows for regulated industries.
Role Overview
As a Backend Engineer at Corridor Platforms, you will:
- Architect, develop, and maintain backend components for our Risk Decisioning Platform.
- Build and orchestrate scalable backend services that automate, optimize, and monitor high-value credit and risk decisions in real time.
- Integrate with ORM layers – such as SQLAlchemy – and multiple RDBMS solutions (Postgres, MySQL, Oracle, MSSQL, etc.) to ensure data integrity, scalability, and compliance (a minimal sketch follows this overview).
- Collaborate closely with the Product, Data Science, and QA teams to create extensible APIs, workflow automation, and AI governance features.
- Architect workflows for privacy, auditability, versioned traceability, and role-based access control, ensuring adherence to regulatory frameworks.
- Take ownership from requirements to deployment, seeing your code deliver real impact in the lives of customers and end users.
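As a rough illustration of the ORM-backed persistence mentioned above, the sketch below defines a SQLAlchemy model and writes one record; the `credit_decisions` table and its columns are invented for the example, not Corridor's schema, and SQLite is used only so the snippet runs anywhere.

```python
# Minimal sketch of ORM-backed persistence with SQLAlchemy.
from sqlalchemy import create_engine, Column, Integer, String, Float, DateTime, func
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class CreditDecision(Base):
    __tablename__ = "credit_decisions"
    id = Column(Integer, primary_key=True)
    applicant_id = Column(String(64), nullable=False, index=True)
    score = Column(Float, nullable=False)
    outcome = Column(String(16), nullable=False)               # e.g. "approve" / "decline"
    created_at = Column(DateTime, server_default=func.now())   # audit timestamp

# Swapping the URL (postgresql://, mysql://, oracle://, mssql+pyodbc://)
# targets a different RDBMS without changing the model code.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

with Session() as session:
    session.add(CreditDecision(applicant_id="A-1001", score=712.5, outcome="approve"))
    session.commit()
    latest = session.query(CreditDecision).order_by(CreditDecision.id.desc()).first()
    print(latest.applicant_id, latest.outcome)
```

Auditability and versioned traceability would layer on top of models like this, for example via history tables or additional audit columns.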
Technical Skills
- Languages: Python 3.9+, SQL, JavaScript/TypeScript, Angular
- Frameworks: Flask, SQLAlchemy, Celery, Marshmallow, Apache Spark
- Databases: PostgreSQL, Oracle, SQL Server, Redis
- Tools: pytest, Docker, Git, Nx
- Cloud: Experience with AWS, Azure, or GCP preferred
- Monitoring: Familiarity with OpenTelemetry and logging frameworks
Why Join Us?
- Cutting-Edge Tech: Work hands-on with the latest AI, cloud-native workflows, and big data tools—all within a single compliant platform.
- End-to-End Impact: Contribute to mission-critical backend systems, from core data models to live production decision services.
- Innovation at Scale: Engineer solutions that process vast data volumes, helping financial institutions innovate safely and effectively.
- Mission-Driven: Join a passionate team advancing fair, transparent, and compliant risk decisioning at the forefront of fintech and AI governance.
What We’re Looking For
- Proficiency in Python, SQLAlchemy (or similar ORM), and SQL databases.
- Experience developing and maintaining scalable backend services, including API, data orchestration, ML workflows, and workflow automation.
- Solid understanding of data modeling, distributed systems, and backend architecture for regulated environments.
- Curiosity and drive to work at the intersection of AI/ML, fintech, and regulatory technology.
- Experience mentoring and guiding junior developers.
Ready to build backends that shape the future of decision intelligence and responsible AI?
Apply now and become part of the innovation at Corridor Platforms!
MUST-HAVES:
- LLM Integration & Prompt Engineering
- Context & Knowledge Base Design.
- Experience running LLM evals
NOTICE PERIOD: Immediate – 30 Days
SKILLS: LLM, AI, PROMPT ENGINEERING
NICE TO HAVES:
- Data Literacy & Modelling Awareness
- Familiarity with Databricks, AWS, and ChatGPT environments
ROLE PROFICIENCY:
Role Scope / Deliverables:
- Serve as the link between business intelligence, data engineering, and AI application teams, ensuring the Large Language Model (LLM) interacts effectively with the modeled dataset.
- Define and curate the context and knowledge base that enables GPT to provide accurate, relevant, and compliant business insights.
- Collaborate with Data Analysts and System SMEs to identify, structure, and tag data elements that feed the LLM environment.
- Design, test, and refine prompt strategies and context frameworks that align GPT outputs with business objectives.
- Conduct evaluation and performance testing (evals) to validate LLM responses for accuracy, completeness, and relevance.
- Partner with IT and governance stakeholders to ensure secure, ethical, and controlled AI behavior within enterprise boundaries.
KEY DELIVERABLES:
- LLM Interaction Design Framework: Documentation of how GPT connects to the modeled dataset, including context injection, prompt templates, and retrieval logic.
- Knowledge Base Configuration: Curated and structured domain knowledge to enable precise and useful GPT responses (e.g., commercial definitions, data context, business rules).
- Evaluation Scripts & Test Results: Defined eval sets, scoring criteria, and output analysis to measure GPT accuracy and quality over time (see the sketch after this list).
- Prompt Library & Usage Guidelines: Standardized prompts and design patterns to ensure consistent business interactions and outcomes.
- AI Performance Dashboard / Reporting: Visualizations or reports summarizing GPT response quality, usage trends, and continuous improvement metrics.
- Governance & Compliance Documentation: Inputs to data security, bias prevention, and responsible AI practices in collaboration with IT and compliance teams.
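To show how the interaction-framework, eval, and prompt-library deliverables fit together, here is a minimal sketch: a prompt template with context injection, a hypothetical `ask_llm()` stand-in for the deployed GPT endpoint, and a crude keyword-based scoring rule. The eval case and scoring criteria are illustrative only, not this role's actual assets.

```python
# Minimal sketch of context injection plus a keyword-based eval.
PROMPT_TEMPLATE = (
    "You are a commercial-insights assistant.\n"
    "Use only the context below to answer.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\nAnswer:"
)

def ask_llm(prompt):
    # Placeholder: call the real model here (Databricks, ChatGPT, etc.).
    return "Q3 revenue grew 12% year over year in the EMEA region."

EVAL_SET = [
    {
        "question": "How did EMEA revenue change in Q3?",
        "context": "EMEA Q3 revenue grew 12% year over year.",
        "expected_keywords": ["12%", "EMEA"],
    },
]

def score(answer, expected_keywords):
    # Fraction of expected keywords present in the answer (a crude accuracy proxy).
    hits = sum(1 for kw in expected_keywords if kw.lower() in answer.lower())
    return hits / len(expected_keywords)

results = []
for case in EVAL_SET:
    prompt = PROMPT_TEMPLATE.format(context=case["context"], question=case["question"])
    results.append(score(ask_llm(prompt), case["expected_keywords"]))

print(f"mean eval score: {sum(results) / len(results):.2f}")  # feeds the AI performance dashboard
```

In practice the scoring rule would be replaced by richer criteria (semantic similarity, rubric-based grading, human review), but the eval-set-plus-metric loop is the shape the deliverables describe.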
KEY SKILLS:
Technical & Analytical Skills:
- LLM Integration & Prompt Engineering – Understanding of how GPT models interact with structured and unstructured data to generate business-relevant insights.
- Context & Knowledge Base Design – Skilled in curating, structuring, and managing contextual data to optimize GPT accuracy and reliability.
- Evaluation & Testing Methods – Experience running LLM evals, defining scoring criteria, and assessing model quality across use cases.
- Data Literacy & Modeling Awareness – Familiar with relational and analytical data models to ensure alignment between data structures and AI responses.
- Familiarity with Databricks, AWS, and ChatGPT Environments – Capable of working in cloud-based analytics and AI environments for development, testing, and deployment.
- Scripting & Query Skills (e.g., SQL, Python) – Ability to extract, transform, and validate data for model training and evaluation workflows.
Business & Collaboration Skills:
- Cross-Functional Collaboration – Works effectively with business, data, and IT teams to align GPT capabilities with business objectives.
- Analytical Thinking & Problem Solving – Evaluates LLM outputs critically, identifies improvement opportunities, and translates findings into actionable refinements.
- Commercial Context Awareness – Understands how sales and marketing intelligence data should be represented and leveraged by GPT.
- Governance & Responsible AI Mindset – Applies enterprise AI standards for data security, privacy, and ethical use.
- Communication & Documentation – Clearly articulates AI logic, context structures, and testing results for both technical and non-technical audiences.