50+ AWS (Amazon Web Services) Jobs in Bangalore (Bengaluru)
Job Summary:
We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.
Responsibilities:
- Deploy, configure, and troubleshoot various infrastructure and application environments
- Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
- Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
- Collaborate with application teams on infrastructure design and issues
- Architect solutions that optimally meet business needs
- Implement CI/CD pipelines and automate deployment processes
- Perform disaster recovery and infrastructure restoration
- Perform restore/recovery operations from backups
- Automate routine tasks (a small sketch follows this list)
- Execute company initiatives in the infrastructure space
- Work with observability tools like ELK, Prometheus, Grafana, and Loki
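As a rough illustration of the routine-task automation mentioned above, here is a minimal sketch (not the team's actual tooling) that uses boto3 to stop running EC2 instances carrying an assumed Environment=dev tag; the tag, region, and trigger are placeholders.

```python
"""Minimal sketch: stop running EC2 instances tagged Environment=dev after hours.

Assumes AWS credentials are already configured (env vars, profile, or instance role);
the tag key/value and region are placeholders.
"""
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

def stop_tagged_dev_instances() -> list[str]:
    # Find running instances carrying the (assumed) Environment=dev tag.
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]
    if instance_ids:
        # Stop them in one call; EC2 completes the state transition asynchronously.
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Stopped:", stop_tagged_dev_instances())
```

Wired to a cron job or a scheduled Lambda, the same pattern covers most tag-driven start/stop or snapshot chores.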
Qualifications:
- Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
- Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
- Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
- Experience in architecting solutions that optimally meet business needs
- Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
- Strong understanding of system concepts like high availability, scalability, and redundancy
- Ability to work with application teams on infrastructure design and issues
- Excellent problem-solving and troubleshooting skills
- Experience with automation of routine tasks
- Good communication and interpersonal skills
Education and Experience:
- Bachelor's degree in Computer Science or a related field
- 5 to 10 years of experience as a DevOps Engineer or in a related role
- Experience with observability tools like ELK, Prometheus, Grafana
Working Conditions:
The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.
Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.
Role Summary
Our CloudOps/DevOps teams are distributed across India, Canada, and Israel.
As a Manager, you will lead teams of Engineers and champion configuration management, cloud technologies, and continuous improvement. The role involves close collaboration with global leaders to ensure our applications, infrastructure, and processes remain scalable, secure, and supportable. You will work closely with Engineers across Dev, DevOps, and DBOps to design and implement solutions that improve customer value, reduce costs, and eliminate toil.
Key Responsibilities
- Guide the professional development of Engineers and support teams in meeting business objectives
- Collaborate with leaders in Israel on priorities, architecture, delivery, and product management
- Build secure, scalable, and self-healing systems
- Manage and optimize deployment pipelines
- Triage and remediate production issues
- Participate in on-call escalations
Key Qualifications
- Bachelor’s in CS or equivalent experience
- 3+ years managing Engineering teams
- 8+ years as a Site Reliability or Platform Engineer
- 5+ years administering Linux and Windows environments
- 3+ years programming/scripting (Python, JavaScript, PowerShell)
- Strong experience with OS internals, virtualization, storage, networking, and firewalls
- Experience maintaining On-Prem (90%) and Cloud (10%) environments (AWS, GCP, Azure)
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.); a small sketch follows this list.
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL, including SQL query writing and optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
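To make one of the AWS-facing responsibilities above concrete, here is a minimal, hedged sketch of a Python backend helper that stores a generated report in S3 and returns a time-limited download link; the bucket name and key are placeholders, not the company's real setup.

```python
"""Minimal sketch: upload a report to S3 and return a time-limited download link.
Bucket name and key are placeholders; credentials come from the environment."""
import boto3

s3 = boto3.client("s3")
BUCKET = "example-reports-bucket"  # placeholder

def publish_report(local_path: str, key: str, expires_seconds: int = 3600) -> str:
    # Upload the file, then hand back a presigned GET URL that expires.
    s3.upload_file(local_path, BUCKET, key)
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires_seconds,
    )
```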
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic, or Datadog (a small instrumentation sketch follows this list).
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
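As a small, hedged illustration of the monitoring responsibility above (not a prescribed implementation), the sketch below exposes request counters and latency histograms from a Python service with prometheus_client so Prometheus can scrape them; the port and metric names are arbitrary choices.

```python
"""Minimal sketch: expose Prometheus metrics from a Python service.
Port and metric names are illustrative, not a required convention."""
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request() -> None:
    # Simulate some work and record a success.
    time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.labels(status="200").inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```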
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can contact us directly at 93161 20132.

is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value.
Required Skills
• 12+ years of proven experience in designing large-scale enterprise systems and distributed architectures.
• Strong expertise in Azure, AWS, Python, Docker, LangChain, solution architecture, C#, and .NET.
• Frontend technologies like React, Angular, and ASP.NET MVC.
• Deep knowledge of architecture frameworks (TOGAF).
• Understanding of security principles, identity management, and data protection.
• Experience with solution architecture methodologies and documentation standards
• Deep understanding of databases (SQL and NoSQL), RESTful APIs, and message brokers.
• Excellent communication, leadership, and stakeholder management skills.
Type: Client-Facing Technical Architecture, Infrastructure Solutioning & Domain Consulting (India + International Markets)
Role Overview
Tradelab is seeking a senior Solution Architect who can interact with both Indian and international clients (Dubai, Singapore, London, US), helping them understand our trading systems, OMS/RMS/CMS stack, HFT platforms, feed systems, and Matching Engine. The architect will design scalable, secure, and ultra-low-latency deployments tailored to global forex markets, brokers, prop firms, liquidity providers, and market makers.
Key Responsibilities
1. Client Engagement (India + International Markets)
- Engage with brokers, prop trading firms, liquidity providers, and financial institutions across India, Dubai, Singapore, and global hubs.
- Explain Tradelab’s capabilities, architecture, and deployment options.
- Understand region-specific latency expectations, connectivity options, and regulatory constraints.
2. Requirement Gathering & Solutioning
- Capture client needs, throughput, order concurrency, tick volumes, and market data handling.
- Assess infra readiness (cloud/on-prem/colo).
- Propose architecture aligned with forex markets.
3. Global Architecture & Deployment Design
- Design multi-region infrastructure using AWS/Azure/GCP.
- Architect low-latency routing between India–Singapore–Dubai.
- Support deployments in DCs like Equinix SG1/DX1.
4. Networking & Security Architecture
- Architect multicast/unicast feeds, VPNs, IPSec tunnels, BGP routes.
- Implement network hardening, segmentation, WAF/firewall rules.
5. DevOps, Cloud Engineering & Scalability
- Build CI/CD pipelines, Kubernetes autoscaling, cost-optimized AWS multi-region deployments.
- Design global failover models.
6. BFSI & Trading Domain Expertise
- Indian broking, international forex, LP aggregation, HFT.
- OMS/RMS, risk engines, LP connectivity, and matching engines.
7. Latency, Performance & Capacity Planning
- Benchmark and optimize cross-region latency.
- Tune performance for high tick volumes and volatility bursts.
8. Documentation & Consulting
- Prepare HLDs, LLDs, SOWs, cost sheets, and deployment playbooks.
Required Skills
- AWS: EC2, VPC, EKS, NLB, MSK/Kafka, IAM, Global Accelerator.
- DevOps: Kubernetes, Docker, Helm, Terraform.
- Networking: IPSec, GRE, VPN, BGP, multicast (PIM/IGMP).
- Message buses: Kafka, RabbitMQ, Redis Streams.
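For the message-bus item just above, here is a minimal redis-py sketch of publishing and reading market ticks on a Redis Stream; the host, stream name, and field names are placeholders rather than Tradelab's actual schema.

```python
"""Minimal sketch: publish and read market ticks on a Redis Stream.
Host, stream name, and field names are placeholders."""
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
STREAM = "ticks:EURUSD"  # placeholder stream name

def publish_tick(bid: float, ask: float) -> str:
    # XADD appends an entry and returns its auto-generated ID.
    return r.xadd(STREAM, {"bid": bid, "ask": ask})

def read_latest(count: int = 10):
    # XREVRANGE returns the most recent entries first.
    return r.xrevrange(STREAM, count=count)

if __name__ == "__main__":
    publish_tick(1.0842, 1.0844)
    print(read_latest())
```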
Domain Skills
- Deep Broking Domain Understanding.
- Indian broking + global forex/CFD.
- FIX protocol, LP integration, market data feeds.
- Regulations: SEBI, DFSA, MAS, ESMA.
Soft Skills
- Excellent communication and client-facing ability.
- Strong presales and solutioning mindset.
Preferred Qualifications
- B.Tech/BE/M.Tech in CS or equivalent.
- AWS Architect Professional, CCNP, CKA.
Why Join Us?
- Experience in colocation/global trading infra.
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
We are looking for an "IoT Migration Architect (Azure to AWS)" for a contract-to-hire role.
"IoT Migration Architect (Azure to AWS)" – Role 1
Salary: 28-33 LPA (fixed)
We have other positions in IoT as well:
- IoT Solutions Engineer – Role 2
- IoT Architect (8+ years) – Role 3
Responsibilities include designing end-to-end IoT architecture, defining strategy, and integrating hardware, software, and cloud components.
Skills: cloud platforms, AWS IoT, Azure IoT, networking protocols.
Experience in large-scale IoT deployments.
This is a contract-to-hire role.
Location: Pune/Hyderabad/Chennai/Bangalore
Work mode: hybrid, 2-3 days from the office per week.
Duration: long term, with potential for full-time conversion based on performance and business needs.
Notice period considered: 15-25 days (not more than that).
Client company: one of the leading technology consulting firms.
Payroll company: one of the leading IT services & staffing companies (with a presence in India, the UK, Europe, Australia, New Zealand, the US, Canada, Singapore, Indonesia, and the Middle East).
Highlights of this role:
• It's a long-term role.
• High possibility of conversion within or after 6 months, if you perform well.
• Interview: two rounds in total (both virtual), but one face-to-face meeting is mandatory at any of these locations: Pune/Hyderabad/Bangalore/Chennai.
Points to remember:
1. You should have valid experience and relieving letters from all your past employers.
2. You must be available to join within 15 days.
3. You must be ready to work 2-3 days per week from the client office.
4. You must have a continuous PF service history for the last 4 years.
What we offer during the role:
- Competitive salary
- Flexible working hours and hybrid work mode
- Potential for full-time conversion, including comprehensive benefits: PF, gratuity, paid leave, paid holidays (as per client), health insurance, and Form 16
How to apply:
- Please fill in the summary sheet given below
- Please provide your UAN service history
- Provide your latest photo
IoT Migration Architect (Azure to AWS) - Job Description
Job Title: IoT Migration Architect (Azure to AWS)
Experience Range: 10+ Years
Role Summary
The IoT Migration Architect is a senior-level technical expert responsible for providing architecture leadership, design, and hands-on execution for migrating complex Internet of Things (IoT) applications and platforms from Microsoft Azure to Amazon Web Services (AWS). This role requires deep expertise in both Azure IoT and the entire AWS IoT ecosystem, ensuring a seamless, secure, scalable, and cost-optimized transition with minimal business disruption.
Required Technical Skills & Qualifications
10+ years of progressive experience in IT architecture, with a minimum of 4+ years focused on IoT Solution Architecture and Cloud Migrations.
Deep, hands-on expertise in the AWS IoT ecosystem, including design, implementation, and operations (AWS IoT Core, Greengrass, Device Management, etc.).
Strong, hands-on experience with Azure IoT services, specifically Azure IoT Hub, IoT Edge, and related data/compute services (e.g., Azure Stream Analytics, Azure Functions).
Proven experience in cloud-to-cloud migration projects, specifically moving enterprise-grade applications and data, with a focus on the unique challenges of IoT device and data plane migration.
Proficiency with IoT protocols such as MQTT, AMQP, and HTTPS, and with securing device communication (X.509); a small publish sketch follows this list.
Expertise in Cloud-Native Architecture principles, microservices, containerization (Docker/Kubernetes/EKS), and Serverless technologies (AWS Lambda).
Solid experience with CI/CD pipelines and DevOps practices in a cloud environment (e.g., Jenkins, AWS Code Pipeline, GitHub Actions).
Strong knowledge of database technologies, both relational (e.g., RDS) and NoSQL (e.g., DynamoDB).
Certifications Preferred: AWS Certified Solutions Architect (Professional level highly desired), or other relevant AWS/Azure certifications.
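To ground the MQTT/X.509 requirement noted earlier in this list, below is a minimal paho-mqtt sketch (1.x-style constructor) that publishes device telemetry over mutual TLS, the pattern AWS IoT Core expects; the endpoint, topic, and certificate paths are placeholders.

```python
"""Minimal sketch: publish device telemetry over MQTT with mutual-TLS (X.509) auth.
Endpoint, topic, and certificate paths are placeholders; paho-mqtt 1.x constructor style."""
import json
import ssl

import paho.mqtt.client as mqtt

ENDPOINT = "example-ats.iot.ap-south-1.amazonaws.com"  # placeholder broker endpoint
TOPIC = "devices/pump-01/telemetry"                    # placeholder topic

client = mqtt.Client(client_id="pump-01")
# Present the device certificate and key; verify the broker against the root CA.
client.tls_set(
    ca_certs="AmazonRootCA1.pem",
    certfile="device.pem.crt",
    keyfile="device.private.key",
    tls_version=ssl.PROTOCOL_TLSv1_2,
)
client.connect(ENDPOINT, port=8883)
client.loop_start()
client.publish(TOPIC, json.dumps({"temperature_c": 41.7, "rpm": 1450}), qos=1)
client.loop_stop()
```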
Summary sheet (please fill in):
Your full name (please write your name in full) –
Contact no. –
Alternate contact no. –
Email ID –
Alternate email ID –
Total experience –
Experience in IoT –
Experience in AWS IoT –
Experience in Azure IoT –
Experience in Kubernetes –
Experience in Docker –
Experience in EKS –
Do you have a valid passport? –
Current CTC –
Expected CTC –
What is your notice period in your current company? –
Are you currently working? –
If not working, when did you leave your last company? –
Current location –
Preferred location –
This is a contract-to-hire role; are you OK with that? –
Highest qualification –
Current employer (payroll company name) –
Previous employer (payroll company name) –
2nd previous employer (payroll company name) –
3rd previous employer (payroll company name) –
Are you holding any offer? –
Are you expecting any offer? –
Are you open to considering a contract-to-hire (C2H) role? –
Is PF deduction happening in your current company? –
Did PF deduction happen at your 2nd-last employer? –
Did PF deduction happen at your 3rd-last employer? –
Latest photo –
UAN service history –
Shantpriya Chandra
Director & Head of Recruitment.
Harel Consulting India Pvt Ltd
https://www.linkedin.com/in/shantpriya/
www.harel-consulting.com
We're looking for an experienced Full-Stack Engineer who can architect and build AI-powered agent systems from the ground up. You'll work across the entire stack—from designing scalable backend services and LLM orchestration pipelines to creating frontend interfaces for agent interactions through widgets, bots, plugins, and browser extensions.
You should be fluent in modern backend technologies, AI/LLM integration patterns, and frontend development, with strong systems design thinking and the ability to navigate the complexities of building reliable AI applications.
Note: This is an on-site, 6-day-a-week role. We are in a critical product development phase where the speed of iteration directly determines market success. At this early stage, speed of execution and clarity of thought are our strongest moats, and we are doubling down on both as we build through our 0→1 journey.
WHAT YOU BRING:
You take ownership of complex technical challenges end to end, from system architecture to deployment, and thrive in a lean team where every person is a builder. You maintain a strong bias for action, moving quickly to prototype and validate AI agent capabilities while building production-grade systems. You consistently deliver reliable, scalable solutions that leverage AI effectively — whether it's designing robust prompt chains, implementing RAG systems, building conversational interfaces, or creating seamless browser extensions.
You earn trust through technical depth, reliable execution, and the ability to bridge AI capabilities with practical business needs. Above all, you are obsessed with building intelligent systems that actually work. You think deeply about system reliability, performance, cost optimization, and you're motivated by creating AI experiences that deliver real value to our enterprise customers.
WHAT YOU WILL DO:
Your primary responsibility (95% of your time) will be designing and building AI agent systems across the full stack. Specifically, you will:
- Architect and implement scalable backend services for AI agent orchestration, including LLM integration, prompt management, context handling, and conversation state management.
- Design and build robust AI pipelines: implementing RAG systems, agent workflows, tool calling, and chain-of-thought reasoning patterns (a minimal retrieval sketch follows this list).
- Develop frontend interfaces for AI interactions including embeddable widgets, Chrome extensions, chat interfaces, and integration plugins for third-party platforms.
- Optimize LLM operations — managing token usage, implementing caching strategies, handling rate limits, and building evaluation frameworks for agent performance.
- Build observability and monitoring systems for AI agents, including prompt versioning, conversation analytics, and quality assurance pipelines.
- Collaborate on system design decisions around AI infrastructure, model selection, vector databases, and real-time agent capabilities.
- Stay current with AI/LLM developments and pragmatically adopt new techniques (function calling, multi-agent systems, advanced prompting strategies) where they add value.
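As a toy illustration of the RAG responsibility flagged above, the sketch below shows only the retrieval step: rank chunks by cosine similarity against a query embedding and assemble a grounded prompt. The embed() function is a stand-in for a real embedding model or API, not part of any specific product.

```python
"""Toy sketch of the retrieval step in a RAG pipeline.
embed() is a placeholder for a real embedding model/API call."""
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: deterministic pseudo-embedding keyed on the text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)

def top_k_chunks(query: str, chunks: list[str], k: int = 3) -> list[str]:
    q = embed(query)
    scored = []
    for chunk in chunks:
        v = embed(chunk)
        score = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((score, chunk))
    return [c for _, c in sorted(scored, reverse=True)[:k]]

def build_prompt(query: str, chunks: list[str]) -> str:
    context = "\n---\n".join(top_k_chunks(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    docs = ["Refunds are processed in 5 days.", "Support hours are 9 to 5 IST."]
    print(build_prompt("When are refunds processed?", docs))
```

In production the toy embed() and linear scan would be replaced by a real embedding model and a vector database, but the retrieve-then-prompt shape stays the same.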
BASIC QUALIFICATIONS:
- 4–6 years of full-stack development experience, with at least 1 year working with LLMs and AI systems.
- Strong backend engineering skills: proficiency in Node.js, Python, or similar; experience with API design, database systems, and distributed architectures.
- Hands-on AI/LLM experience: prompt engineering, working with OpenAI/Anthropic/Google APIs, implementing RAG, managing context windows, and optimizing for latency/cost.
- Frontend development capabilities: JavaScript/TypeScript, React or Vue, browser extension development, and building embeddable widgets.
- Systems design thinking: ability to architect scalable, fault-tolerant systems that handle the unique challenges of AI applications (non-determinism, latency, cost).
- Experience with AI operations: prompt versioning, A/B testing for prompts, monitoring agent behavior, and implementing guardrails.
- Understanding of vector databases, embedding models, and semantic search implementations.
- Comfortable working in fast-moving, startup-style environments with high ownership.
PREFERRED QUALIFICATIONS:
- Experience with advanced LLM techniques: fine-tuning, function calling, agent frameworks (LangChain, LlamaIndex, AutoGPT patterns).
- Familiarity with ML ops tools and practices for production AI systems.
- Prior work on conversational AI, chatbots, or virtual assistants at scale.
- Experience with real-time systems, WebSockets, and streaming responses.
- Knowledge of browser automation, web scraping, or RPA technologies.
- Experience with multi-tenant SaaS architectures and enterprise security requirements.
- Contributions to open-source AI/LLM projects or published work in the field.
WHAT WE OFFER:
- Competitive salary + meaningful equity.
- High ownership and the opportunity to shape product direction.
- Direct impact on cutting-edge AI product development.
- A collaborative team that values clarity, autonomy, and velocity.
About the Role:
We are looking for an experienced AWS Engineer with strong expertise in cloud infrastructure design and backend development. The ideal candidate will be hands-on with AWS native services, microservices architecture, and API development, with a preference for those having experience in Go (Golang).
Key Responsibilities:
- Design, build, and maintain scalable, cloud-native applications on AWS
- Develop and manage microservices and containerized deployments using Docker and ECS/Fargate
- Implement and manage AWS services including Fargate, ECS, Lambda, Aurora (PostgreSQL), S3, CloudFront, and DMS
- Build and maintain RESTful APIs (OpenAPI/Swagger) and SOAP/XML services
- Ensure security, performance, and high availability of deployed applications
- Monitor and troubleshoot production issues using CloudWatch and CloudTrail
- Collaborate with cross-functional teams to deliver robust, efficient, and secure solutions
Requirements:
- 6+ years of software development experience with a focus on AWS-based solutions
- Strong hands-on expertise with AWS cloud services and microservices architecture
- Experience in API design and development (REST/SOAP)
- Go (Golang) experience preferred; other backend languages acceptable
- Proficiency with Docker, CI/CD pipelines, and container orchestration
- Strong knowledge of PostgreSQL/Aurora schema design and optimization
- Familiarity with AWS security best practices, IAM, and OAuth 2.0
Preferred:
- AWS Certifications (Solutions Architect / Developer Associate)
- Strong problem-solving, communication, and collaboration skills
About us
At Basal, we focus on building intelligent AI products for real-world business problems. Our product, Desible.ai, is an AI Voice Agent that talks, thinks, and acts like humans - automating conversations across voice, WhatsApp, email, and SMS. Desible.ai helps enterprises handle millions of calls every month, from lead qualification to reminders to follow-ups.
We’re a growing, high-impact team of engineers, designers, and AI specialists combining speech tech, large language models, and real-time infrastructure to power intelligent voice automation at scale.
What You’ll Do
- Build and scale Desible.ai’s internal and client-facing web platforms end-to-end
- Design APIs and data pipelines integrating with real-time voice and communication systems
- Implement modular, well-tested frontend and backend features using modern frameworks
- Collaborate closely with AI, infra, and product teams to build reliable, usable tools
- Ensure high performance and fault tolerance for live conversational workloads
- Contribute to deployment and DevOps pipelines (CI/CD, cloud setup, monitoring)
What We’re Looking For
- 3–5 years of strong full-stack development experience
- Hands-on experience with Node.js, React, Next.js, MongoDB, PostgreSQL/MySQL
- Strong understanding of APIs, Websockets, cloud deployments (AWS/GCP/Azure)
- Good grasp of system design, scalability, and microservices
- Comfortable owning complete features — from backend logic to frontend delivery
- Bonus: experience in real-time communication, telephony, or AI-driven systems
Why Join Us
- Build real products that talk to thousands of people daily
- Be part of a small, high-ownership engineering team where ideas ship fast
- Work directly with founders and the core AI team - your code will define Desible.ai
- Competitive pay, flexible work setup, and genuine scope for growth
Job Description.
1. Cloud experience (Any cloud is fine although AWS is preferred. If non-AWS cloud, then the experience should reflect familiarity with the cloud's common services)
2. Good grasp of scripting (in Linux for sure, i.e., bash/sh/zsh, etc.; Windows: nice to have)
3. Python or Java or JS basic knowledge (Python Preferred)
4. Monitoring tools
5. Alerting tools
6. Logging tools
7. CICD
8. Docker/containers/(k8s/terraform nice to have)
9. Experience working on distributed applications with multiple services
10. Incident management
11. DB experience in terms of basic queries
12. Understanding of performance analysis of applications
13. Idea about data pipelines would be nice to have
14. Snowflake querying knowledge: nice to have
The person should be able to:
- Monitor system issues
- Create strategies to detect and address issues
- Implement automated systems to troubleshoot and resolve issues
- Write and review post-mortems
- Manage infrastructure for multiple product teams
- Collaborate with product engineering teams to ensure best practices are being followed
Role Description
- Develop the tech stack of Pieworks to achieve the flywheel in the most efficient manner
- Focus on standardising the code and making it more modular to enable quick updates
- Integrate with various APIs to provide seamless solutions to all stakeholders
- Build robust node-based information tracking and flow to capitalize on degrees of separation between members and candidates (a small sketch follows this list)
- Bring in new design ideas to make the UI stunning and the UX functional, i.e., one-click actionable as much as possible
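The degrees-of-separation idea referenced above boils down to shortest-path search over a people graph; here is a minimal breadth-first-search sketch with an invented toy network, purely for illustration.

```python
"""Toy sketch: degrees of separation as breadth-first search over a people graph.
The graph and names are invented for illustration."""
from collections import deque

def degrees_of_separation(graph: dict[str, list[str]], src: str, dst: str) -> int:
    # Returns the minimum number of hops between src and dst, or -1 if unreachable.
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, depth = queue.popleft()
        for neighbour in graph.get(node, []):
            if neighbour == dst:
                return depth + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, depth + 1))
    return -1

if __name__ == "__main__":
    network = {"member_a": ["member_b"], "member_b": ["candidate_x"]}
    print(degrees_of_separation(network, "member_a", "candidate_x"))  # 2
```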
Mandatory Criteria
- Ability to code in Java
- Ability to Scale on AWS
- Product Thinking
- Passion for Automation
If interested, kindly share your updated resume at 82008 31681.
🔧 Key Skills
- Strong expertise in Python (3.x)
- Experience with Django / Flask / FastAPI
- Good understanding of Microservices & RESTful API development
- Proficiency in MySQL/PostgreSQL – queries, stored procedures, optimization
- Solid grip on Data Structures & Algorithms (DSA)
- Comfortable working with Linux & Windows environments
- Hands-on experience with Git, CI/CD (Jenkins/GitHub Actions)
- Familiarity with Docker / Kubernetes is a plus
Role: Lead Software Engineer (Backend)
Salary: INR 28L to INR 40L per annum
Performance Bonus: Up to 10% of the base salary can be added
Location: Hulimavu, Bangalore, India
Experience: 6-10 years
About AbleCredit:
AbleCredit has built a foundational AI platform to help BFSI enterprises reduce OPEX by up to 70% by powering workflows for onboarding, claims, credit, and collections. Our GenAI model achieves over 95% accuracy in understanding Indian dialects and excels in financial analysis.
The company was founded in June 2023 by Utkarsh Apoorva (IIT Delhi, built Reshamandi, Guitarstreet, Edulabs); Harshad Saykhedkar (IITB, ex-AI Lead at Slack); and Ashwini Prabhu (IIML, co-founder of Mythiksha, ex-Product Head at Reshamandi, HandyTrain).
What Work You’ll Do
- Build best-in-class AI systems that enterprises can trust, where reliability and explainability are not optional.
- Operate in founder mode — build, patch, or fork, whatever it takes to ship today, not next week.
- Work at the frontier of AI x Systems — making AI models behave predictably to solve real, enterprise-grade problems.
- Own end-to-end feature delivery — from requirement scoping to design, development, testing, deployment, and post-release optimization.
- Design and implement complex, distributed systems that support large-scale workflows and integrations for enterprise clients.
- Operate with full technical ownership — make architectural decisions, review code, and mentor junior engineers to maintain quality and velocity.
- Build scalable, event-driven services leveraging AWS Lambda, SQS/SNS, and modern asynchronous patterns (a small handler sketch follows this list).
- Work with cross-functional teams to design robust notification systems, third-party integrations, and data pipelines that meet enterprise reliability and security standards.
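As a hedged sketch of the event-driven pattern mentioned above (not AbleCredit's actual code), the snippet below shows an AWS Lambda handler consuming an SQS batch and fanning out a notification via SNS; the topic ARN, environment variable, and message shape are assumptions.

```python
"""Minimal sketch: SQS-triggered AWS Lambda handler that publishes to SNS.
Topic ARN, env var name, and message fields are placeholders."""
import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ.get(
    "NOTIFY_TOPIC_ARN",
    "arn:aws:sns:ap-south-1:123456789012:notify",  # placeholder
)

def handler(event, context):
    # SQS delivers a batch of records; process each one independently.
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(record["body"])
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Workflow update",
            Message=json.dumps({"case_id": payload.get("case_id"), "status": "processed"}),
        )
    return {"processed": len(records)}
```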
The Skills You Have..
- Strong background as an Individual Contributor — capable of owning systems from concept to production without heavy oversight.
- Expertise in system design, scalability, and fault-tolerant architecture.
- Proficiency in Node.js (bonus) or another backend language such as Go, Java, or Python.
- Deep understanding of SQL (PostgreSQL/MySQL) and NoSQL (MongoDB/DynamoDB) systems.
- Hands-on experience with AWS services — Lambda, API Gateway, S3, CloudWatch, ECS/EKS, and event-based systems.
- Experience in designing and scaling notification systems and third-party API integrations.
- Proficiency in event-driven architectures and multi-threading/concurrency models.
- Strong understanding of data modeling, security practices, and performance optimization.
- Familiarity with CI/CD pipelines, automated testing, and monitoring tools.
- Strong debugging, performance tuning, and code review skills.
What You Should Have Done in the Past
- Delivered multiple complex backend systems or microservices from scratch in a production environment.
- Led system design discussions and guided teams on performance, reliability, and scalability trade-offs.
- Mentored SDE-1 and SDE-2 engineers, enabling them to deliver features independently.
- Owned incident response and root cause analysis for production systems.
- (Bonus) Built or contributed to serverless systems using AWS Lambda, with clear metrics on uptime, throughput, and cost-efficiency.
Highlights:
- PTO & Holidays
- Opportunity to work with a core Gen AI startup.
- Flexible hours and an extremely positive work environment
Software Development Engineer III (Frontend)
About the company:
At WizCommerce, we’re building the AI Operating System for Wholesale Distribution — transforming how manufacturers, wholesalers, and distributors sell, serve, and scale.
With a growing customer base across North America, WizCommerce helps B2B businesses move beyond disconnected systems and manual processes with an integrated, AI-powered platform.
Our platform brings together everything a wholesale business needs to sell smarter and faster. With WizCommerce, businesses can:
- Take orders easily — whether at a trade show, during customer visits, or online.
- Save hours of manual work by letting AI handle repetitive tasks like order entry or creating product content.
- Offer a modern shopping experience through their own branded online store.
- Access real-time insights on what’s selling, which customers to focus on, and where new opportunities lie.
The wholesale industry is at a turning point — outdated systems and offline workflows can no longer keep up. WizCommerce brings the speed, intelligence, and design quality of modern consumer experiences to the B2B world, helping companies operate more efficiently and profitably.
Backed by leading global investors including Peak XV Partners (formerly Sequoia Capital India), Z47 (formerly Matrix Partners), Blume Ventures, and Alpha Wave Global, we’re rapidly scaling and redefining how wholesale and distribution businesses sell and grow.
If you want to be part of a fast-growing team that’s disrupting a $20 trillion global industry, WizCommerce is the place to be.
Read more about us in Economic Times, The Morning Star, YourStory, or on our website!
Founders:
Divyaanshu Makkar (Co-founder, CEO)
Vikas Garg (Co-founder, CCO)
Job Description:
Role & Responsibilities:
- Design, develop, and maintain complex web applications using ReactJS, and relevant web technologies.
- Work closely with Product Managers, Designers, and other stakeholders to understand requirements and translate them into technical specifications and deliverables.
- Take ownership of technical decisions, code reviews, and ensure best practices are followed in the team.
- Provide technical leadership and mentorship to junior developers, promoting their professional growth and skill development.
- Collaborate with cross-functional teams to integrate web applications with other systems and platforms.
- Stay up-to-date with emerging trends and technologies in web development to drive continuous improvement and innovation.
- Contribute to the design and architecture of the frontend codebase, ensuring high-quality, maintainable, and scalable code.
Requirements:
- Bachelor’s degree in Computer Science or a related field.
- 5-7 years of experience in frontend development using ReactJS, Redux, and related web technologies.
- Strong understanding of web development concepts, including HTML, CSS, JavaScript, and responsive design principles.
- Experience with modern web development frameworks and tools such as ReactJS, Redux, Webpack, and Babel.
- Experience working in an Agile development environment and delivering software in a timely and efficient manner.
- Strong verbal and written communication skills, with the ability to effectively collaborate with cross-functional teams and stakeholders.
- Ability to take ownership of projects, prioritize tasks, and meet deadlines.
- Experience with backend development and AWS is a plus.
Benefits:
- Opportunity to work in a fast-paced, growing B2B SaaS company.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Growth and professional development opportunities.
- Flexible working hours to accommodate your schedule.
Compensation: Best in the industry
Role location: Bengaluru/Gurugram
Website Link: https://www.wizcommerce.com/
Job Title: Infrastructure Engineer
Experience: 4.5+ years
Location: Bangalore
Employment Type: Full-Time
Joining: Immediate Joiner Preferred
💼 Job Summary
We are looking for a skilled Infrastructure Engineer to manage, maintain, and enhance our on-premise and cloud-based systems. The ideal candidate will have strong experience in server administration, virtualization, hybrid cloud environments, and infrastructure automation. This role requires hands-on expertise, strong troubleshooting ability, and the capability to collaborate with cross-functional teams.
Roles & Responsibilities
- Install, configure, and manage Windows and Linux servers.
- Maintain and administer Active Directory, DNS, DHCP, and file servers.
- Manage virtualization platforms such as VMware or Hyper-V.
- Monitor system performance, logs, and uptime to ensure high availability.
- Provide L2/L3 support, diagnose issues, and maintain detailed technical documentation.
- Deploy and manage cloud servers and resources in AWS, Azure, or Google Cloud.
- Design, build, and maintain hybrid environments (on-premises + cloud).
- Administer data storage systems and implement/test backup & disaster recovery plans.
- Handle cloud services such as cloud storage, networking, and identity (IAM, Azure AD).
- Ensure compliance with security standards like ISO, SOC, GDPR, PCI DSS.
- Integrate and manage monitoring and alerting tools.
- Support CI/CD pipelines and automation for infrastructure deployments.
- Collaborate with Developers, DevOps, and Network teams for seamless system integration.
- Troubleshoot and resolve complex infrastructure & system-level issues.
Key Skills Required
- Windows Server & Linux Administration
- VMware / Hyper-V / Virtualization technologies
- Active Directory, DNS, DHCP administration
- Knowledge of CI/CD and Infrastructure as Code
- Hands-on experience in AWS, Azure, or GCP
- Experience with cloud migration and hybrid cloud setups
- Proficiency in backup, replication, and disaster recovery tools
- Familiarity with automation tools (Terraform, Ansible, etc. preferred)
- Strong troubleshooting and documentation skills
- Understanding of networking concepts (TCP/IP, VPNs, firewalls, routing) is an added advantage
Key Responsibilities
- Full Stack Development: Design, develop, test, and deploy robust, scalable applications using Java (Spring Boot) on the backend and Angular on the frontend.
- Backend & API: Build and maintain efficient, reusable, and reliable Java code. Design and implement RESTful APIs to support frontend applications and third-party integrations.
- Frontend Development: Develop modern, responsive, and intuitive user interfaces using Angular, TypeScript, HTML5, and CSS3.
- AWS Cloud Services: Leverage AWS services for application deployment, monitoring, and management. This includes working with services such as EC2, S3, RDS, Lambda, EKS/ECS, and CloudWatch.
- Database Management: Design and optimize database schemas (e.g., PostgreSQL, MySQL, or NoSQL) and write complex queries. Integrate with data persistence layers using JPA, Hibernate, or other ORMs.
- CI/CD & DevOps: Participate in and help improve our CI/CD pipelines for automated builds, testing, and deployments (e.g., using Jenkins, GitLab CI, or AWS CodePipeline).
- Code Quality: Write high-quality, clean, and maintainable code. Champion best practices in software development, including code reviews, unit testing, and integration testing.
- Mentorship: Provide technical guidance and mentorship to junior and mid-level developers.
- Collaboration: Work closely with product owners and stakeholders to understand requirements and translate them into technical specifications and solutions.
Required Skills and Qualifications
Experience:
- A total of 6-8 years of professional software development experience.
- At least 4-5 years of strong, hands-on experience with Java and the Spring ecosystem (Spring Boot, Spring MVC, Spring Data).
Technical Skills:
- Java/Backend:
- Proficient in Java 8 or higher.
- Deep understanding of Spring Boot, Spring Security, and REST API design.
- Experience with JPA/Hibernate and working with relational databases (e.g., PostgreSQL, MySQL).
- Strong knowledge of build tools like Maven or Gradle.
- Frontend:
- 2+ years of hands-on experience with modern Angular (Angular 8+).
- Proficiency in TypeScript, HTML5, CSS3, and SCSS/SASS.
- Experience consuming RESTful APIs from an Angular application.
- AWS:
- Demonstrable experience with core AWS services (S3, EC2, RDS).
- Hands-on experience deploying and managing applications on AWS.
- Familiarity with containerization (Docker, Kubernetes) and serverless (Lambda) is a significant plus.
- General:
- Strong understanding of software development life cycle (SDLC) and Agile methodologies.
- Proficient with Git and Git-based workflows.
- Excellent problem-solving, analytical, and communication skills.
Role Overview
We are seeking a skilled Java Developer with a strong background in building scalable, high-quality, and high-performance digital applications on the Java technology stack. This role is critical for developing microservice architectures and managing data with distributed databases and GraphQL interfaces.
Skills:
Java, GCP or any other cloud platform, NoSQL, Docker, containerization
Primary Responsibilities:
- Design and develop scalable services/microservices using Java/Node and MVC architecture, ensuring clean, performant, and maintainable code.
- Implement GraphQL APIs to enhance the functionality and performance of applications.
- Work with Cassandra and other distributed database systems to design robust, scalable database schemas that support business processes.
- Design and develop functionality/applications for given requirements, focusing on functional, non-functional, and maintenance needs.
- Collaborate within the team and with cross-functional teams to effectively implement, deploy, and monitor applications.
- Document and improve existing processes/tools.
- Support and troubleshoot production incidents with a sense of urgency, understanding customer impact.
- Proficient in developing applications and web services, as well as cloud-native apps, using MVC frameworks like Spring Boot and REST APIs.
- Thorough understanding and hands-on experience with containerization and orchestration technologies like Docker, Kubernetes, etc.
- Strong background in working with cloud platforms, especially GCP
- Demonstrated expertise in building and deploying services using CI/CD pipelines, leveraging tools like GitHub, CircleCI, Jenkins, and GitLab.
- Comprehensive knowledge of distributed database designs.
- Experience in building observability into applications with OTel or Prometheus is a plus.
- Experience working in NodeJS is a plus.
Soft Skills Required:
- Should be able to work independently in highly cross functional projects/environment.
- Team player who pays attention to detail and has a Team win mindset.
Key Responsibilities
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
Qualifications
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 10+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Our Mission
To make video as accessible to machines as text and voice are today.
At lookup, we believe the world's most valuable asset is trapped. Video is everywhere, but it's unsearchable: a black box of insight that no one can open, or at least not open affordably. We're changing that. We're building the search engine for the visual world, so anyone can find or do anything with video just by asking.
Text is queryable. Voice is transcribed. Video, the largest and richest data source of all, is still a black box. A computer can't understand it, and so its value remains trapped.
Our mission at lookup is to fix this.
About the Role
We are looking for founding Backend Engineers to build a highly performant, reliable, and scalable API platform that makes enterprise video knowledge readily available for video search, summarization, and natural‑language Q&A. You will partner closely with our ML team working on vision‑language models to productionize research and deliver fast, trustworthy APIs for customers.
Examples of technical challenges you will work on include: distributed video storage, a unified application framework and data model for indexing large video libraries, low‑latency clip retrieval, vector search at scale, and end‑to‑end build, test, deploy, and observability in cloud environments.
What You’ll Do:
- Design and build robust backend services and APIs (REST, gRPC) for vector search, video summarization, and video Q&A.
- Own API performance and reliability, including low-latency retrieval, pagination, rate limiting, and backwards-compatible versioning (a small endpoint sketch follows this list).
- Design schemas and tune queries in Postgres, and integrate with unstructured storage.
- Implement observability across metrics, logs, and traces. Set error budgets and SLOs.
- Write clear design docs and ship high‑quality, well‑tested code.
- Collaborate with ML engineers to integrate and productionize VLMs and retrieval pipelines.
- Take ownership of architecture from inception to production launch.
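To illustrate the pagination point flagged above, here is a minimal FastAPI sketch of a paginated clip-search endpoint (FastAPI appears in the stack listed below); the route, in-memory data, and response shape are illustrative assumptions, with a real implementation backed by Postgres or a vector store.

```python
"""Minimal sketch: a paginated search endpoint in FastAPI.
The in-memory CLIPS list stands in for a real vector-search backend."""
from fastapi import FastAPI, Query

app = FastAPI()

# Placeholder data; a real implementation would query Postgres / a vector store.
CLIPS = [{"id": i, "title": f"clip-{i}"} for i in range(100)]

@app.get("/v1/clips/search")
def search_clips(
    q: str = Query(..., min_length=1),
    limit: int = Query(20, ge=1, le=100),
    offset: int = Query(0, ge=0),
):
    matches = [c for c in CLIPS if q in c["title"]]
    page = matches[offset : offset + limit]
    # next_offset lets clients page through results without recomputing counts.
    next_offset = offset + limit if offset + limit < len(matches) else None
    return {"items": page, "total": len(matches), "next_offset": next_offset}
```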
Who You Are:
- 3+ years of professional experience in backend development.
- Proven experience building and scaling polished WebSocket, gRPC, and REST APIs.
- Exposure to distributed systems and container orchestration (Docker and Kubernetes).
- Hands‑on experience with AWS.
- Strong knowledge of SQL (Postgres) and NoSQL (e.g., Cassandra), including schema design, query optimization, and scaling.
- Familiarity with our stack is a plus, but not mandatory: Python (FastAPI), Celery, Kafka, Postgres, Redis, Weaviate, React.
- Ability to diagnose complex issues, identify root causes, and implement effective fixes.
- Comfortable working in a fast‑paced startup environment.
Nice to have:
- Hands-on work with LLM agents, vector embeddings, or RAG applications.
- Building video streaming pipelines and storage systems at scale (FFmpeg, RTSP, WebRTC).
- Proficiency with modern frontend frameworks (React, TypeScript, Tailwind CSS) and responsive UI design.
Location & Culture
- Full-time, in-office role in Bangalore (we’re building fast and hands-on).
- Must be comfortable with a high-paced environment and collaboration across PST time zones for our US customers and investors.
- Expect startup speed — daily founder syncs, rapid design-to-prototype cycles, and a culture of deep ownership.
Why You’ll Love This Role
- Work on the frontier of video understanding and real-world AI — products that can redefine trust and automation.
- Build core APIs that make video queryable and power real customer use.
- Own systems end to end: performance, reliability, and developer experience.
- Work closely with founders and collaborate in person in Bangalore.
- Competitive salary with meaningful early equity.
You will be responsible for building a highly scalable, extensible, and robust application. This position reports to the Engineering Manager.
Responsibilities:
- Align Sigmoid with key Client initiatives
- Interface daily with customers across leading Fortune 500 companies to understand strategic requirements
- Ability to understand business requirements and tie them to technology solutions
- Open to working from the client location as per the demands of the project/customer.
- Facilitate in Technical Aspects
- Develop and evolve highly scalable and fault-tolerant distributed components using Java technologies.
- Excellent experience in Application development and support, integration development and quality assurance.
- Provide technical leadership and manage it on a day-to-day basis
- Interface daily with customers across leading Fortune 500 companies to understand strategic requirements
- Stay up-to-date on the latest technology to ensure the greatest ROI for customer & Sigmoid
- Hands-on coder with a good understanding of enterprise-level code.
- Design and implement APIs, abstractions and integration patterns to solve challenging distributed computing problems
- Experience in defining technical requirements, data extraction, data transformation, automating jobs, productionizing jobs, and exploring new big data technologies within a Parallel Processing environment
- Culture
- Must be a strategic thinker with the ability to think unconventionally / out of the box.
- Analytical and solution driven orientation.
- Raw intellect, talent and energy are critical.
- Entrepreneurial and agile: understands the demands of a private, high-growth company.
- Ability to be both a leader and hands on "doer".
Qualifications:
- 3-5 years' track record of relevant work experience; a degree in Computer Science or a related technical discipline is required
- Experience in the development of enterprise-scale applications; capable of developing frameworks, design patterns, etc. Should be able to understand and tackle technical challenges and propose comprehensive solutions.
- Experience with functional and object-oriented programming; Java (preferred) or Python is a must.
- Hands-on knowledge of MapReduce, Hadoop, PySpark, HBase, and Elasticsearch (a small PySpark sketch follows this list).
- Development and support experience in Big Data domain
- Experience with database modelling and development, data mining and warehousing.
- Unit, Integration and User Acceptance Testing.
- Effective communication skills (both written and verbal)
- Ability to collaborate with a diverse set of engineers, data scientists and product managers
- Comfort in a fast-paced start-up environment.
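For the PySpark item flagged above, a minimal batch-aggregation sketch is shown below; the input path and column names are placeholders, not a real Sigmoid pipeline.

```python
"""Minimal PySpark sketch: a batch aggregation over order records.
Input path and column names are placeholders."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-order-rollup").getOrCreate()

orders = spark.read.json("s3a://example-bucket/orders/2024-06-01/")  # placeholder path
daily_totals = (
    orders
    .groupBy("customer_id")
    .agg(F.count(F.lit(1)).alias("order_count"), F.sum("amount").alias("total_amount"))
    .orderBy(F.desc("total_amount"))
)
daily_totals.show(20)
spark.stop()
```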
Preferred Qualification:
- Experience in Agile methodology.
- Proficient with SQL and its variation among popular databases.
- Experience working with large, complex data sets from a variety of sources.
Domain - Credit risk / Fintech
Roles and Responsibilities:
1. Development, validation, and monitoring of application and behaviour scorecards for the retail loan portfolio
2. Improvement of collection efficiency through advanced analytics
3. Development and deployment of fraud scorecards
4. Upsell/cross-sell strategy implementation using analytics
5. Creating modern data pipelines and processing using AWS PaaS components (Glue, SageMaker Studio, etc.)
6. Deploying software using CI/CD tools such as Azure DevOps, Jenkins, etc.
7. Experience with API tools such as REST, Swagger, and Postman
8. Model deployment in AWS and management of the production environment
9. Team player who can work with cross-functional teams to gather data and derive insights
Mandatory Technical skill set :
1. Previous experience in scorecard development and credit risk strategy development
2. Python and Jenkins
3. Logistic regression, scorecards, ML, and neural networks (a small scoring sketch follows this list)
4. Statistical analysis and A/B testing
5. AWS SageMaker, S3, EC2, Docker
6. REST API, Swagger, and Postman
7. Excel
8. SQL
9. Visualisation tools such as Redash/Grafana
10. Bitbucket, GitHub, and versioning tools
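As a toy illustration of item 3 above, the sketch below fits a logistic regression on made-up data and converts its predicted odds into a scorecard score using standard points-to-double-the-odds (PDO) scaling; the features, base score, base odds, and PDO values are assumptions.

```python
"""Toy sketch: logistic regression probability -> scorecard points via PDO scaling.
Features, base score, base odds, and PDO are illustrative assumptions."""
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: two features (e.g., utilisation, delinquency count); 1 = default.
X = np.array([[0.2, 0], [0.9, 3], [0.4, 1], [0.8, 2], [0.1, 0], [0.7, 4]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

def score(features, base_score=600, base_odds=50, pdo=20):
    # Convert predicted good/bad odds to points: doubling the odds adds `pdo` points.
    p_bad = model.predict_proba([features])[0, 1]
    odds_good = (1 - p_bad) / p_bad
    factor = pdo / np.log(2)
    offset = base_score - factor * np.log(base_odds)
    return round(offset + factor * np.log(odds_good))

print(score([0.3, 0]))  # higher score = lower predicted risk
```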
Senior Python Django Developer
Experience: Back-end development: 6 years (Required)
Location: Bangalore/ Bhopal
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
Requirements:
- 6 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy (a short pytest illustration follows this list).
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
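As a generic illustration of the TDD expectation above (not this company's code), the sketch below test-drives a small settlement-reconciliation helper with pytest; the function name and record fields are hypothetical.

```python
# test_reconciliation.py -- illustrative TDD-style test for a hypothetical reconciliation helper.
from decimal import Decimal


def reconcile(ledger_entries, gateway_entries):
    """Return transaction ids whose amounts differ between ledger and gateway."""
    gateway = {e["txn_id"]: Decimal(e["amount"]) for e in gateway_entries}
    mismatches = []
    for entry in ledger_entries:
        expected = gateway.get(entry["txn_id"])
        if expected is None or expected != Decimal(entry["amount"]):
            mismatches.append(entry["txn_id"])
    return mismatches


def test_matching_entries_produce_no_mismatches():
    ledger = [{"txn_id": "T1", "amount": "100.00"}]
    gateway = [{"txn_id": "T1", "amount": "100.00"}]
    assert reconcile(ledger, gateway) == []


def test_amount_mismatch_and_missing_txn_are_flagged():
    ledger = [{"txn_id": "T1", "amount": "100.00"}, {"txn_id": "T2", "amount": "50.00"}]
    gateway = [{"txn_id": "T1", "amount": "99.00"}]
    assert reconcile(ledger, gateway) == ["T1", "T2"]
```

Run with `pytest -q`; in a TDD flow the tests would be written before the helper itself.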
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
Job Types: Full-time, Permanent
Schedule:
- Day shift
Supplemental Pay:
- Performance bonus
- Yearly bonus
Ability to commute/relocate:
- JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)
Junior DevOps Engineer
Experience: 2–3 years
About Us
We are a fast-growing fintech/trading company focused on building scalable, high-performance systems for financial markets. Our technology stack powers real-time trading, risk management, and analytics platforms. We are looking for a motivated Junior DevOps Engineer to join our dynamic team and help us maintain and improve our infrastructure.
Key Responsibilities
- Support deployment, monitoring, and maintenance of trading and fintech applications.
- Automate infrastructure provisioning and deployment pipelines using tools like Ansible, Terraform, or similar.
- Collaborate with development and operations teams to ensure high availability, reliability, and security of systems.
- Troubleshoot and resolve production issues in a fast-paced environment.
- Implement and maintain CI/CD pipelines for continuous integration and delivery.
- Monitor system performance and optimize infrastructure for scalability and cost-efficiency.
- Assist in maintaining compliance with financial industry standards and security best practices.
Required Skills
- 2–3 years of hands-on experience in DevOps or related roles.
- Proficiency in Linux/Unix environments.
- Experience with containerization (Docker) and orchestration (Kubernetes).
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Working knowledge of scripting languages (Bash, Python).
- Experience with configuration management tools (Ansible, Puppet, Chef).
- Understanding of networking concepts and security practices.
- Exposure to monitoring tools (Prometheus, Grafana, ELK stack); a minimal instrumentation sketch follows this list.
- Basic understanding of CI/CD tools (Jenkins, GitLab CI, GitHub Actions).
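Purely to illustrate the monitoring exposure mentioned above, here is a minimal Python sketch using the prometheus_client library to expose request metrics that Prometheus can scrape and Grafana can chart; the metric names and port are arbitrary choices, not part of this role's stack.

```python
# Minimal metrics-exposure sketch using prometheus_client (names and port are arbitrary).
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("demo_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("demo_request_latency_seconds", "Request latency in seconds")


def handle_request():
    with LATENCY.time():                       # record how long the simulated work takes
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    status = "200" if random.random() > 0.05 else "500"
    REQUESTS.labels(status=status).inc()


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```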
Preferred Skills
- Experience in fintech, trading, or financial services.
- Knowledge of high-frequency trading systems or low-latency environments.
- Familiarity with financial data protocols and APIs.
- Understanding of regulatory requirements in financial technology.
What We Offer
- Opportunity to work on cutting-edge fintech/trading platforms.
- Collaborative and learning-focused environment.
- Competitive salary and benefits.
- Career growth in a rapidly expanding domain.
Job Summary :
We are looking for a proactive and skilled Senior DevOps Engineer to join our team and play a key role in building, managing, and scaling infrastructure for high-performance systems. The ideal candidate will have hands-on experience with Kubernetes, Docker, Python scripting, cloud platforms, and DevOps practices around CI/CD, monitoring, and incident response.
Key Responsibilities :
- Design, build, and maintain scalable, reliable, and secure infrastructure on cloud platforms (AWS, GCP, or Azure).
- Implement Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or similar.
- Manage Kubernetes clusters; configure namespaces, services, deployments, and autoscaling.
CI/CD & Release Management :
- Build and optimize CI/CD pipelines for automated testing, building, and deployment of services.
- Collaborate with developers to ensure smooth and frequent deployments to production.
- Manage versioning and rollback strategies for critical deployments.
Containerization & Orchestration using Kubernetes :
- Containerize applications using Docker, and manage them using Kubernetes.
- Write automation scripts using Python or Shell for infrastructure tasks, monitoring, and deployment flows.
- Develop utilities and tools to enhance operational efficiency and reliability.
Monitoring & Incident Management :
- Analyze system performance and implement infrastructure scaling strategies based on load and usage trends.
- Optimize application and system performance through proactive monitoring and configuration tuning.
Desired Skills and Experience :
- Experience Required - 8+ yrs.
- Hands-on experience with cloud services like AWS, EKS, etc.
- Ability to design a good cloud solution.
- Strong Linux troubleshooting, Shell Scripting, Kubernetes, Docker, Ansible, Jenkins Skills.
- Design and implement the CI/CD pipeline following the best industry practices using open-source tools.
- Use knowledge and research to constantly modernize our applications and infrastructure stacks.
- Be a team player and strong problem-solver to work with a diverse team.
- Good communication skills.
About Borderless Access
Borderless Access is a company that believes in fostering a culture of innovation and collaboration to build and deliver digital-first products for market research methodologies. This enables our customers to stay ahead of their competition.
We are committed to becoming the global leader in providing innovative digital offerings for consumers backed by advanced analytics, AI, ML, and cutting-edge technological capabilities.
Our Borderless Product Innovation and Operations team is dedicated to creating a top-tier market research platform that will drive our organization's growth. To achieve this, we're embracing modern technologies and a cutting-edge tech stack for faster, higher-quality product development.
The Product Development team is the core of our strategy, fostering collaboration and efficiency. If you're passionate about innovation and eager to contribute to our rapidly evolving market research domain, we invite you to join our team.
Key Responsibilities
- Lead, mentor, and grow a cross-functional team of specialized engineers.
- Foster a culture of collaboration, accountability, and continuous learning.
- Oversee the design and development of robust platform architecture with a focus on scalability, security, and maintainability.
- Establish and enforce engineering best practices including code reviews, unit testing, and CI/CD pipelines.
- Promote clean, maintainable, and well-documented code across the team.
- Lead architectural discussions and technical decision-making, with clear and concise documentation for software components and systems.
- Collaborate with Product, Design, and other stakeholders to define and prioritize platform features.
- Track and report on key performance indicators (KPIs) such as velocity, code quality, deployment frequency, and incident response times.
- Ensure timely delivery of high-quality software aligned with business goals.
- Work closely with DevOps to ensure platform reliability, scalability, and observability.
- Conduct regular 1:1s, performance reviews, and career development planning.
- Conduct code reviews and provide constructive feedback to ensure code quality and maintainability.
- Participate in the entire software development lifecycle, from requirements gathering to deployment and maintenance.
Added Responsibilities
- Defining and adhering to the development process.
- Taking part in regular external audits and maintaining artifacts.
- Identify opportunities for automation to reduce repetitive tasks.
- Mentor and coach team members across teams.
- Continuously optimize application performance and scalability.
- Collaborate with the Marketing team to understand different user journeys.
Growth and Development
The following are some of the growth and development activities that you can look forward to at Borderless Access as an Engineering Manager:
- Develop leadership skills – Enhance your leadership abilities through workshops or coaching from Senior Leadership and Executive Leadership.
- Foster innovation – Become part of a culture of innovation and experimentation within the product development and operations team.
- Drive business objectives – Become part of defining and taking actions to meet the business objectives.
About You
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in software development.
- Experience with microservices architecture and container orchestration.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Solid understanding of data structures, algorithms, and software design patterns.
- Solid understanding of enterprise system architecture patterns.
- Experience in managing a small to medium-sized team with varied experiences.
- Strong proficiency in back-end development, including programming languages like Python, Java, or Node.js, and frameworks like Spring or Express.
- Strong proficiency in front-end development, including HTML, CSS, JavaScript, and popular frameworks like React or Angular.
- Experience with databases (e.g., MySQL, PostgreSQL, MongoDB).
- Experience with cloud platforms (AWS, Azure, or GCP; Azure preferred).
- Knowledge of containerization technologies such as Docker and Kubernetes.
Python Backend Developer
We are seeking a skilled Python Backend Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be on developing server-side logic to ensure high performance and responsiveness to requests from the front end. You will also be responsible for integrating front-end elements built by your coworkers into the application, as well as managing AWS resources.
Roles & Responsibilities
- Develop and maintain scalable, secure, and robust backend services using Python
- Design and implement RESTful APIs and/or GraphQL endpoints
- Integrate user-facing elements developed by front-end developers with server-side logic
- Write reusable, testable, and efficient code
- Optimize components for maximum performance and scalability
- Collaborate with front-end developers, DevOps engineers, and other team members
- Troubleshoot and debug applications
- Implement data storage solutions (e.g., PostgreSQL, MySQL, MongoDB)
- Ensure security and data protection
Mandatory Technical Skill Set
- Implementing optimal data storage (e.g., PostgreSQL, MySQL, MongoDB, S3)
- Python backend development experience
- Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions
- Implemented and managed containerization platforms such as Docker and orchestration tools like Kubernetes
- Previous hands-on experience in:
- EC2, S3, ECS, EMR, VPC, Subnets, SQS, CloudWatch, CloudTrail, Lambda, SageMaker, RDS, SES, SNS, IAM, Backup, AWS WAF (an illustrative boto3 sketch follows this list)
- SQL
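As a small, hypothetical illustration of the AWS hands-on experience listed above (the bucket, key, and queue URL below are made up), this boto3 sketch reads an object from S3 and pushes a notification to SQS.

```python
# Illustrative boto3 usage only; bucket, key, and queue URL are hypothetical.
import json

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")


def process_upload(bucket: str, key: str, queue_url: str) -> None:
    """Download an object from S3 and notify a downstream worker via SQS."""
    obj = s3.get_object(Bucket=bucket, Key=key)
    payload = obj["Body"].read()
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"key": key, "size_bytes": len(payload)}),
    )


if __name__ == "__main__":
    process_upload(
        "example-bucket",
        "incoming/report.csv",
        "https://sqs.ap-south-1.amazonaws.com/123456789012/example-queue",
    )
```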
Designation: Senior Python Django Developer
Position: Senior Python Developer
Job Types: Full-time, Permanent
Pay: Up to ₹800,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Bhopal Indrapuri (MP) And Bangalore JP Nagar
Experience: Back-end development: 4 years (Required)
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
Requirements:
- 4 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
Job Details
- Job Title: Lead II - Software Engineering- AI, NLP, Python, Data science
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 7-9 years
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Job Description:
Role Proficiency:
Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions. Account for others' development activities and assist the Project Manager in day-to-day project execution.
Additional Comments:
Mandatory Skills: Data Science
Skills to Evaluate: AI, Gen AI, RAG, Data Science
Experience: 8 to 10 Years
Location: Bengaluru
Job Description
Job Title: AI Engineer
Mandatory Skills: Artificial Intelligence, Natural Language Processing, Python, Data Science
Position: AI Engineer – LLM & RAG Specialization
Company Name: Sony India Software Centre
About the role:
We are seeking a highly skilled AI Engineer with 8-10 years of experience to join our innovation-driven team. This role focuses on the design, development, and deployment of advanced enterprise-scale Large Language Models (eLLM) and Retrieval Augmented Generation (RAG) solutions. You will work on end-to-end AI pipelines, from data processing to cloud deployment, delivering impactful solutions that enhance Sony’s products and services.
Key Responsibilities:
- Design, implement, and optimize LLM-powered applications, ensuring high performance and scalability for enterprise use cases.
- Develop and maintain RAG pipelines, including vector database integration (e.g., Pinecone, Weaviate, FAISS) and embedding model optimization (see the retrieval sketch below).
- Deploy, monitor, and maintain AI/ML models in production, ensuring reliability, security, and compliance.
- Collaborate with product, research, and engineering teams to integrate AI solutions into existing applications and workflows.
- Research and evaluate the latest LLM and AI advancements, recommending tools and architectures for continuous improvement.
- Preprocess, clean, and engineer features from large datasets to improve model accuracy and efficiency.
- Conduct code reviews and enforce AI/ML engineering best practices.
- Document architecture, pipelines, and results; present findings to both technical and business stakeholders.
Required Experience:
- 8-10 years of professional experience in AI/ML engineering, with at least 4+ years in LLM development and deployment.
- Proven expertise in RAG architectures, vector databases, and embedding models.
- Strong proficiency in Python; familiarity with Java, R, or other relevant languages is a plus.
- Experience with AI/ML frameworks (PyTorch, TensorFlow, etc.) and relevant deployment tools.
- Hands-on experience with cloud-based AI platforms such as AWS SageMaker, AWS Q Business, AWS Bedrock, or Azure Machine Learning.
- Experience in designing, developing, and deploying Agentic AI systems, with a focus on creating autonomous agents that can reason, plan, and execute tasks to achieve specific goals.
- Understanding of security concepts in AI systems, including vulnerabilities and mitigation strategies.
- Solid knowledge of data processing, feature engineering, and working with large-scale datasets.
- Experience in designing and implementing AI-native applications and agentic workflows using the Model Context Protocol (MCP) is nice to have.
- Strong problem-solving skills, analytical thinking, and attention to detail.
- Excellent communication skills with the ability to explain complex AI concepts to diverse audiences.
Day-to-day responsibilities:
- Design and deploy AI-driven solutions to address specific security challenges, such as threat detection, vulnerability prioritization, and security automation.
- Optimize LLM-based models for various security use cases, including chatbot development for security awareness or automated incident response.
- Implement and manage RAG pipelines for enhanced LLM performance.
- Integrate AI models with existing security tools, including Endpoint Detection and Response (EDR), Threat and Vulnerability Management (TVM) platforms, and Data Science/Analytics platforms; this involves working with APIs and understanding data flows.
- Develop and implement metrics to evaluate the performance of AI models.
- Monitor deployed models for accuracy and performance, and retrain as needed.
- Adhere to security best practices and ensure that all AI solutions are developed and deployed securely, considering data privacy and compliance requirements.
- Work closely with other team members to understand security requirements and translate them into AI-driven solutions.
- Communicate effectively with stakeholders, including senior management, to present project updates and findings.
- Stay up to date with the latest advancements in AI/ML and security, and identify opportunities to leverage new technologies to improve our security posture.
- Maintain thorough documentation of AI models, code, and processes.
What We Offer:
- Opportunity to work on cutting-edge LLM and RAG projects with global impact.
- A collaborative environment fostering innovation, research, and skill growth.
- Competitive salary, comprehensive benefits, and flexible work arrangements.
- The chance to shape AI-powered features in Sony’s next-generation products.
- The ability to function in an environment where the team is virtual and geographically dispersed.
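The RAG responsibilities above reduce to a retrieve-then-generate loop. Below is a deliberately simplified, vendor-neutral Python sketch of that loop: embed documents, select the closest ones to a query by cosine similarity, and pass them to the model as context. The embed() and call_llm() functions are placeholders, not a specific Pinecone/Weaviate/FAISS or Bedrock/SageMaker API.

```python
# Simplified RAG sketch; embed() and call_llm() are placeholders, not a vendor API.
import numpy as np


def embed(texts):
    """Placeholder embedding: returns random vectors; a real pipeline calls an embedding model."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.normal(size=(len(texts), 384))


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a Bedrock, SageMaker, or Azure ML endpoint)."""
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"


def retrieve(query: str, docs: list[str], doc_vecs: np.ndarray, k: int = 3) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    q = embed([query])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [docs[i] for i in np.argsort(-sims)[:k]]


docs = ["Doc about warranty policy.", "Doc about refunds.", "Doc about shipping times."]
doc_vecs = embed(docs)
question = "How do refunds work?"
context = "\n".join(retrieve(question, docs, doc_vecs, k=2))
print(call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```

A production pipeline would swap the placeholders for a real embedding model, a vector store, and the chosen LLM endpoint, plus the evaluation and monitoring steps listed in the responsibilities.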
Education Qualification: Graduate
Skills: AI, NLP, Python, Data science
Must-Haves
Skills
AI, NLP, Python, Data science
NP: Immediate – 30 Days
Job Details
- Job Title: Lead I - Software Engineering - Java, J2EE, Spring
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 6-12 years
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Job Description:
Role Summary:
We are looking for an experienced Senior Java Developer with expertise in building robust, scalable web applications using Java/J2EE, Spring Boot, REST APIs, and modern microservices architectures. The ideal candidate will be skilled in both back-end and middleware technologies, with strong experience in cloud platforms (AWS), and capable of mentoring junior developers while contributing to high-impact enterprise projects.
The developer will be responsible for full-cycle application development: from interpreting specifications and writing clean, reusable code, to testing, integration, and deployment. You will also work closely with customers and project teams to understand requirements and deliver solutions that optimize cost, performance, and maintainability.
Key Responsibilities:
Application Development & Delivery
- Design, code, debug, test, and document Java-based web applications aligned with design specifications.
- Build scalable and secure microservices using Spring Boot and RESTful APIs.
- Optimize application performance, maintainability, and reusability by using proven design patterns.
- Handle complex data structures and develop multi-threaded, performance-optimized applications.
- Ensure code quality through TDD (JUnit) and best practices.
Cloud & DevOps
- Develop and deploy applications on AWS Cloud Services: EC2, S3, DynamoDB, SNS, SES, etc.
- Leverage containerization tools like Docker and orchestration using Kubernetes.
Integration & Configuration
- Integrate with various databases (PostgreSQL, MySQL, Oracle, NoSQL).
- Configure development environments and CI/CD pipelines as per project needs.
- Follow configuration management processes and ensure compliance.
Testing & Quality Assurance
- Review and create unit test cases, scenarios, and support UAT phases.
- Perform defect root cause analysis (RCA) and proactively implement quality improvements.
Documentation
- Create and review technical documents: HLD, LLD, SAD, user stories, design docs, test cases, and release notes.
- Contribute to project knowledge bases and code repositories.
Team & Project Management
- Mentor team members; conduct code and design reviews.
- Assist Project Manager in effort estimation, planning, and task allocation.
- Set and review FAST goals for yourself and your team; provide regular performance feedback.
Customer Interaction
- Engage with customers to clarify requirements and present technical solutions.
- Conduct product demos and design walkthroughs.
- Interface with customer architects for design finalization.
Key Skills & Tools:
Core Technologies:
- Java/J2EE, Spring Boot, REST APIs
- Object-Oriented Programming (OOP), Design Patterns, Domain-Driven Design (DDD)
- Multithreading, Data Structures, TDD using JUnit
Web & Data Technologies:
- JSON, XML, AJAX, Web Services
- Database Technologies: PostgreSQL, MySQL, Oracle, NoSQL (e.g., DynamoDB)
- Persistence Frameworks: Hibernate, JPA
Cloud & DevOps:
- AWS: S3, EC2, DynamoDB, SNS, SES
- Version Control & Containerization: GitHub, Docker, Kubernetes
Agile & Development Practices:
- Agile methodologies: Scrum or Kanban
- CI/CD concepts
- IDEs: Eclipse, IntelliJ, or equivalent
Expected Outcomes:
- Timely delivery of high-quality code and application components
- Improved performance, cost-efficiency, and maintainability of applications
- High customer satisfaction through accurate requirement translation and delivery
- Team productivity through effective mentoring and collaboration
- Minimal post-production defects and technical issues
Performance Indicators:
- Adherence to coding standards and engineering practices
- On-time project delivery and milestone completion
- Reduction in defect count and issue recurrence
- Knowledge contributions to project and organizational repositories
- Completion of mandatory compliance and technology/domain certifications
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- Relevant certifications (e.g., AWS Certified Developer, Oracle Certified, Scrum Master)
Soft Skills:
- Strong analytical and problem-solving mindset
- Excellent communication and presentation skills
- Team leadership and mentorship abilities
- High accountability and ability to work under pressure
- Positive team dynamics and proactive collaboration
Skills
Java, J2EE, Spring
Must-Haves
Java, J2EE, Spring
Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
NP: Immediate – 30 Days
About Corridor Platforms
Corridor Platforms is a leader in next-generation risk decisioning and responsible AI governance, empowering banks and lenders to build transparent, compliant, and data-driven solutions. Our platforms combine advanced analytics, real-time data integration, and GenAI to support complex financial decision workflows for regulated industries.
Role Overview
As a Backend Engineer at Corridor Platforms, you will:
- Architect, develop, and maintain backend components for our Risk Decisioning Platform.
- Build and orchestrate scalable backend services that automate, optimize, and monitor high-value credit and risk decisions in real time.
- Integrate with ORM layers – such as SQLAlchemy – and multi-RDBMS solutions (Postgres, MySQL, Oracle, MSSQL, etc.) to ensure data integrity, scalability, and compliance.
- Collaborate closely with Product Team, Data Scientists, QA Teams to create extensible APIs, workflow automation, and AI governance features.
- Architect workflows for privacy, auditability, versioned traceability, and role-based access control, ensuring adherence to regulatory frameworks.
- Take ownership from requirements to deployment, seeing your code deliver real impact in the lives of customers and end users.
Technical Skills
- Languages: Python 3.9+, SQL, JavaScript/TypeScript, Angular
- Frameworks: Flask, SQLAlchemy, Celery, Marshmallow, Apache Spark (a minimal Flask + SQLAlchemy sketch follows this list)
- Databases: PostgreSQL, Oracle, SQL Server, Redis
- Tools: pytest, Docker, Git, Nx
- Cloud: Experience with AWS, Azure, or GCP preferred
- Monitoring: Familiarity with OpenTelemetry and logging frameworks
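As a small sketch of the Flask + SQLAlchemy part of the stack named above (using the Flask-SQLAlchemy extension for brevity; the model, route, and database URI are invented for illustration and are not Corridor's actual schema):

```python
# Minimal Flask + SQLAlchemy sketch; model, route, and DB URI are illustrative only.
from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///:memory:"  # stand-in for Postgres/Oracle/MSSQL
db = SQLAlchemy(app)


class Decision(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    applicant_id = db.Column(db.String(64), nullable=False)
    outcome = db.Column(db.String(16), nullable=False)


@app.post("/decisions")
def create_decision():
    payload = request.get_json()
    decision = Decision(applicant_id=payload["applicant_id"], outcome=payload["outcome"])
    db.session.add(decision)
    db.session.commit()
    return jsonify({"id": decision.id}), 201


if __name__ == "__main__":
    with app.app_context():
        db.create_all()
    app.run(debug=True)
```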
Why Join Us?
- Cutting-Edge Tech: Work hands-on with the latest AI, cloud-native workflows, and big data tools—all within a single compliant platform.
- End-to-End Impact: Contribute to mission-critical backend systems, from core data models to live production decision services.
- Innovation at Scale: Engineer solutions that process vast data volumes, helping financial institutions innovate safely and effectively.
- Mission-Driven: Join a passionate team advancing fair, transparent, and compliant risk decisioning at the forefront of fintech and AI governance.
What We’re Looking For
- Proficiency in Python, SQLAlchemy (or similar ORM), and SQL databases.
- Experience developing and maintaining scalable backend services, including API, data orchestration, ML workflows, and workflow automation.
- Solid understanding of data modeling, distributed systems, and backend architecture for regulated environments.
- Curiosity and drive to work at the intersection of AI/ML, fintech, and regulatory technology.
- Experience mentoring and guiding junior developers.
Ready to build backends that shape the future of decision intelligence and responsible AI?
Apply now and become part of the innovation at Corridor Platforms!
MUST-HAVES:
- LLM, AI, Prompt Engineering
- LLM Integration & Prompt Engineering
- Context & Knowledge Base Design
- Experience running LLM evals
NOTICE PERIOD: Immediate – 30 Days
SKILLS: LLM, AI, PROMPT ENGINEERING
NICE TO HAVES:
- Data Literacy & Modelling Awareness
- Familiarity with Databricks, AWS, and ChatGPT Environments
ROLE PROFICIENCY:
Role Scope / Deliverables:
- Serve as the link between business intelligence, data engineering, and AI application teams, ensuring the Large Language Model (LLM) interacts effectively with the modeled dataset.
- Define and curate the context and knowledge base that enables GPT to provide accurate, relevant, and compliant business insights.
- Collaborate with Data Analysts and System SMEs to identify, structure, and tag data elements that feed the LLM environment.
- Design, test, and refine prompt strategies and context frameworks that align GPT outputs with business objectives.
- Conduct evaluation and performance testing (evals) to validate LLM responses for accuracy, completeness, and relevance.
- Partner with IT and governance stakeholders to ensure secure, ethical, and controlled AI behavior within enterprise boundaries.
KEY DELIVERABLES:
- LLM Interaction Design Framework: Documentation of how GPT connects to the modeled dataset, including context injection, prompt templates, and retrieval logic.
- Knowledge Base Configuration: Curated and structured domain knowledge to enable precise and useful GPT responses (e.g., commercial definitions, data context, business rules).
- Evaluation Scripts & Test Results: Defined eval sets, scoring criteria, and output analysis to measure GPT accuracy and quality over time.
- Prompt Library & Usage Guidelines: Standardized prompts and design patterns to ensure consistent business interactions and outcomes.
- AI Performance Dashboard / Reporting: Visualizations or reports summarizing GPT response quality, usage trends, and continuous improvement metrics.
- Governance & Compliance Documentation: Inputs to data security, bias prevention, and responsible AI practices in collaboration with IT and compliance teams.
KEY SKILLS:
Technical & Analytical Skills:
- LLM Integration & Prompt Engineering – Understanding of how GPT models interact with structured and unstructured data to generate business-relevant insights.
- Context & Knowledge Base Design – Skilled in curating, structuring, and managing contextual data to optimize GPT accuracy and reliability.
- Evaluation & Testing Methods – Experience running LLM evals, defining scoring criteria, and assessing model quality across use cases (a minimal eval-harness sketch follows this list).
- Data Literacy & Modeling Awareness – Familiar with relational and analytical data models to ensure alignment between data structures and AI responses.
- Familiarity with Databricks, AWS, and ChatGPT Environments – Capable of working in cloud-based analytics and AI environments for development, testing, and deployment.
- Scripting & Query Skills (e.g., SQL, Python) – Ability to extract, transform, and validate data for model training and evaluation workflows.
Business & Collaboration Skills:
- Cross-Functional Collaboration – Works effectively with business, data, and IT teams to align GPT capabilities with business objectives.
- Analytical Thinking & Problem Solving – Evaluates LLM outputs critically, identifies improvement opportunities, and translates findings into actionable refinements.
- Commercial Context Awareness – Understands how sales and marketing intelligence data should be represented and leveraged by GPT.
- Governance & Responsible AI Mindset – Applies enterprise AI standards for data security, privacy, and ethical use.
- Communication & Documentation – Clearly articulates AI logic, context structures, and testing results for both technical and non-technical audiences.
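Purely as an illustration of the eval responsibility noted above, here is a minimal harness sketch: call_llm() stands in for whichever GPT endpoint is actually in use, and the eval set and scoring rule are invented for demonstration.

```python
# Minimal LLM eval-harness sketch; call_llm(), the eval set, and the scoring rule are placeholders.
def call_llm(prompt: str) -> str:
    """Stand-in for the real GPT/LLM call used in production."""
    return "Gross margin is revenue minus cost of goods sold."


EVAL_SET = [
    {"prompt": "Define gross margin in one sentence.",
     "must_contain": ["revenue", "cost of goods sold"]},
    {"prompt": "Which region had the highest Q3 sales?",
     "must_contain": ["region"]},
]


def score(answer: str, must_contain: list[str]) -> float:
    """Fraction of required terms present in the answer (a deliberately crude metric)."""
    hits = sum(term.lower() in answer.lower() for term in must_contain)
    return hits / len(must_contain)


if __name__ == "__main__":
    results = [(case["prompt"], score(call_llm(case["prompt"]), case["must_contain"]))
               for case in EVAL_SET]
    for prompt, s in results:
        print(f"{s:.2f}  {prompt}")
    print("mean score:", sum(s for _, s in results) / len(results))
```

Real eval sets would track scores over time and feed the AI performance dashboard described in the deliverables.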
To design, develop, and maintain highly scalable, secure, and efficient backend systems that power core business applications. The Senior Engineer – Backend will be responsible for architecting APIs, optimizing data flow, and ensuring system reliability and performance. This role will collaborate closely with frontend, DevOps, and product teams to deliver robust solutions that enable seamless user experiences and support organizational growth through clean, maintainable, and well-tested code.
Responsibilities:
• Design, develop, and maintain robust and scalable backend services using Node.js.
• Collaborate with front-end developers and product managers to define and implement API specifications.
• Optimize application performance and scalability by identifying bottlenecks and proposing solutions.
• Write clean, maintainable, and efficient code, and conduct code reviews to ensure quality standards.
• Develop unit tests and maintain code coverage to ensure high quality.
• Document architectural solutions and system designs to ensure clarity and maintainability.
• Troubleshoot and resolve issues in development, testing, and production environments.
• Stay up to date with emerging technologies and industry trends to continuously improve our tech stack.
• Mentor and guide junior engineers, fostering a culture of learning and growth.
Key Skills and Qualifications:
• Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
• 7+ years of experience in backend development with a focus on Node.js and JavaScript.
• Strong understanding of RESTful APIs and microservices architecture.
• Proficiency in database technologies (SQL and NoSQL, such as DynamoDB, MongoDB, PostgreSQL, etc.).
• Familiarity with containerization and orchestration technologies (Docker, Kubernetes).
• Knowledge of cloud platforms (AWS) and deployment best practices.
• Excellent problem-solving skills and the ability to work in a fast-paced environment.
• Strong communication and teamwork skills.
Good to have:
• Experience with front-end frameworks (e.g. Angular, React, Vue.js).
• Understanding of HTML, CSS, and JavaScript.
• Familiarity with responsive design and user experience principles.
Who we are: My AI Client is building the foundational platform for the "agentic economy," moving beyond simple chatbots to create an ecosystem for autonomous AI agents. The aim is to provide tools for developers to launch, manage, and monetize AI agents as "digital coworkers."
The Challenge
The current AI stack is fragmented, leading to issues with multimodal data, silent webhook failures, unpredictable token usage, and nascent agent-to-agent collaboration. My AI Client is building a unified, robust backend to resolve these issues for the developer community.
Your Mission
As a foundational member of the backend team, you will architect core systems, focusing on:
- Agent Nervous System: Designing agent-to-agent messaging, lifecycle management, and high-concurrency, low-latency communication.
- Multimodal Chaos Taming: Engineering systems to process and understand real-time images, audio, video, and text.
- Bulletproof Systems: Developing secure, observable webhook systems with robust billing, metering, and real-time payment pipelines.
What You'll Bring
- My AI Client seeks an experienced engineer comfortable with complex systems and ambiguity.
Core Experience:
● Typically 3 to 5 years of experience in backend engineering roles.
● Expertise in Python, especially with async frameworks like FastAPI.
● Strong command of Docker and cloud deployment (AWS, Cloud Run, or similar).
● Proven experience designing and building microservice or agent-based architectures.
Specialized Experience (Ideal):
- Real-Time Systems: Experience with real-time media transmission like WebRTC, WebSockets and ways to process them.
- Scalable Systems: Experience in building scalable, fault-tolerant systems with a strong understanding of observability, monitoring, and alerting best practices.
- Reliable Webhooks: Knowledge of scalable webhook infrastructure with retry logic, backoffs, and security (a small verification sketch follows this list).
- Data Processing: Experience with multimodal data (e.g., OCR, audio transcription, video chunking with FFmpeg/OpenCV).
- Payments & Metering: Familiarity with usage-based billing systems or token-based ledgers.
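As a small sketch of the webhook-security point above (the route, header name, and secret are hypothetical, not the client's actual API), here is a FastAPI endpoint that verifies an HMAC signature before accepting a delivery:

```python
# Illustrative FastAPI webhook receiver with HMAC verification; names are hypothetical.
import hashlib
import hmac

from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
WEBHOOK_SECRET = b"replace-me"  # would come from a secrets manager in production


@app.post("/webhooks/agent-events")
async def receive_event(request: Request, x_signature: str = Header(...)):
    body = await request.body()
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, x_signature):
        raise HTTPException(status_code=401, detail="invalid signature")
    # Hand off quickly (e.g., enqueue for retry-safe processing) and acknowledge.
    return {"status": "accepted"}
```

Retries, backoff, and idempotent processing would sit behind the enqueue step rather than in the request handler.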
Your Impact
- The systems designed by this role will form the foundation for:
- Thousands of AI agents for major partners across chat, video, and APIs.
- A new creator economy enabling developers to earn revenue through agents.
- The overall speed, security, and scalability of my client’s AI platform.
Why Join Us?
- Opportunity to solve hard problems with clean, scalable code.
- Small, fast-paced team with high ownership and zero micromanagement.
- Belief in platform engineering as a craft and care for developer experience.
- Conviction that AI agents are the future, and a desire to build their powering platform.
- Dynamic, collaborative work environment in Bengaluru with a hybrid setup (2 days per week from the office).
- Meaningful equity in a growing, well-backed company.
- Direct work with founders and engineers from top AI companies.
- A real voice in architectural and product decisions.
- Opportunity to solve cutting-edge problems with no legacy code.
Ready to Build the Future?
My AI Client is building the core platform for the next software paradigm. Interested candidates are encouraged to apply with their GitHub, resume, or anything that showcases their thinking.
Like us, you'll be deeply committed to delivering impactful outcomes for customers.
- 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
- Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
- Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
- Experience writing batch/cron jobs using Python and Shell scripting.
- Experience in web application development using JavaScript and JavaScript libraries.
- Basic understanding of TypeScript, JavaScript, HTML, CSS, JSON and REST-based applications.
- Experience/Familiarity with RDBMS and NoSQL Database technologies like MySQL, MongoDB, Redis, ElasticSearch and other similar databases.
- Understanding of code versioning tools such as Git.
- Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
- Experience with JS-based build/package tools like Grunt, Gulp, Bower, Webpack.
Key Responsibilities:
- Application Development: Design and implement both client-side and server-side architecture using JavaScript frameworks and back-end technologies like Golang.
- Database Management: Develop and maintain relational and non-relational databases (MySQL, PostgreSQL, MongoDB) and optimize database queries and schema design.
- API Development: Build and maintain RESTful APIs and/or GraphQL services to integrate with front-end applications and third-party services.
- Code Quality & Performance: Write clean, maintainable code and implement best practices for scalability, performance, and security.
- Testing & Debugging: Perform testing and debugging to ensure the stability and reliability of applications across different environments and devices.
- Collaboration: Work closely with product managers, designers, and DevOps engineers to deliver features aligned with business goals.
- Documentation: Create and maintain documentation for code, systems, and application architecture to ensure knowledge transfer and team alignment.
Requirements:
- Experience: 1+ years in backend development in a microservices ecosystem, with proven experience in front-end and back-end frameworks.
- 1+ years of experience with Golang is mandatory.
- Problem-Solving & DSA: Strong analytical skills and attention to detail.
- Front-End Skills: Proficiency in JavaScript and modern front-end frameworks (React, Angular, Vue.js) and familiarity with HTML/CSS.
- Back-End Skills: Experience with server-side languages and frameworks like Node.js, Express, Python or GoLang.
- Database Knowledge: Strong knowledge of relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB).
- API Development: Hands-on experience with RESTful API design and integration, with GraphQL as a plus.
- DevOps Understanding: Familiarity with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes) is a bonus.
- Soft Skills: Excellent problem-solving skills, teamwork, and strong communication abilities.
Nice-to-Have:
- UI/UX Sensibility: Understanding of responsive design and user experience principles.
- CI/CD Knowledge: Familiarity with CI/CD tools and workflows (Jenkins, GitLab CI).
- Security Awareness: Basic understanding of web security standards and best practices.
Job Role
The role is for an SAP UI5 Consultant responsible for developing and enhancing web applications using SAP UI5 and related technologies.
Responsibilities
- Develop Fiori-like web applications based on SAPUI5.
- Implement REST Web services and enhancements to SAP Fiori apps.
- Work with Web Technologies including HTML5, CSS3, and JavaScript.
- Participate in knowledge transfers and evaluate new technologies.
- Understand and implement software architecture for enterprise applications.
Qualifications
The candidate should have a BA/BE/BTech qualification with excellent verbal and written communication skills in English. Ability to work flexible hours is necessary.
🚀 Hiring: QA Engineer (Manual + Automation)
⭐ Experience: 3+ Years
📍 Location: Bangalore
⭐ Work Mode:- Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
💫 About the Role:
We’re looking for a skilled QA Engineer. You’ll ensure product quality through manual and automated testing across web, mobile, and APIs — working with tools and technologies like Postman, Playwright, Appium, Rest Assured, GCP/AWS, and React/Next.js.
Key Responsibilities:
✅ Develop & maintain automated tests using Cucumber, Playwright, pytest, etc. (a short Playwright sketch follows this section).
✅ Perform API testing using Postman.
✅ Work on cloud platforms (GCP/AWS) and CI/CD (Jenkins).
✅ Test web & mobile apps (Appium, BrowserStack, LambdaTest).
✅ Collaborate with developers to ensure seamless releases.
Must-Have Skills:
✅ API Testing (Postman)
✅ Cloud (GCP / AWS)
✅ Frontend understanding (React / Next.js)
✅ Strong SQL & Git skills
✅ Familiarity with OpenAI APIs
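For illustration of the automation expectation noted above (the URL and expected title below are placeholders, not the product under test), here is a minimal Playwright-for-Python smoke check:

```python
# Minimal Playwright (Python) smoke test; URL and expected title are placeholders.
from playwright.sync_api import sync_playwright


def test_homepage_loads():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com")
        assert "Example Domain" in page.title()
        browser.close()


if __name__ == "__main__":
    test_homepage_loads()
    print("smoke test passed")
```

The same structure scales to pytest suites wired into a Jenkins or GitHub Actions pipeline, as the responsibilities list describes.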
Role: Full-Time
Work Location: Bangalore (Client Location – LeadSquared)
Address: 2nd & 3rd Floor, Omega, Embassy Tech Square, Marathahalli - Sarjapur Outer Ring Rd, Kaverappa Layout, Kadubeesanahalli, Bellandur, Bengaluru, Karnataka – 560103
Interview Process: Test and Technical Discussion (Face-to-Face at Client Location)
Work Mode: 4 Days Work from Office
Preference: Local Bangalore Candidates
Responsibilities & Skills Required
- 4–6 years of experience in building Web Applications & APIs
- Proficient in C#
- Web Framework: React.js
- API Framework: .NET Core
- Database: MySQL or SQL Server
- Strong knowledge of multi-threading and asynchronous programming
- Experience with Cloud Platforms: AWS, GCP, or Azure
- Strong SQL programming skills with experience in handling large datasets (millions of records)
- Ability to write clean, maintainable, and scalable code
- Sound understanding of scalable web application design principles
🚀 Your Role at a Glance
We are hiring a Senior Staff Backend Engineer – Site Reliability for our client (Code Name: SORIN), a global leader building high-scale observability platforms. In this high-impact leadership role, you’ll architect, scale, and optimise the systems that drive how enterprises monitor their distributed applications. You’ll collaborate across teams, mentor engineers, and shape the technical direction of mission-critical backend services in a modern cloud-native environment.
This role is ideal for experienced backend engineers who thrive in distributed systems, care deeply about system performance and observability, and want to influence large-scale technical decisions that span multiple teams.
🏢 About the Client (Code Name: SORIN)
Our client is a global enterprise software company focused on simplifying IT management through powerful, secure, and scalable platforms. With a strong commitment to innovation and customer-centricity, they help organisations accelerate digital transformation across observability, incident response, and performance monitoring. You’ll join a team of passionate engineers working on critical systems trusted by Fortune 500 companies and growing SaaS-first businesses alike.
🔧 Key Responsibilities
● Collaborate with software engineering teams to define infrastructure requirements, drive best practices in reliability, monitoring, incident response, and automation, ensuring seamless integration and optimal performance of applications and systems
● Lead and mentor a team of SREs, providing technical guidance and support to ensure the ongoing reliability and performance of our systems
● Play a key role in driving the automation, tools, and observability initiatives, assuming ownership of designing and implementing scalable and efficient solutions.
● Leading the response to production incidents, conducting comprehensive learning reviews, driving continuous improvement initiatives, and actively participating in an on-call rotation, fostering a culture of learning, resilience, and ongoing enhancement within our systems.
● Establish and drive operations performance through SLOs
● Demonstrate proficiency in technical skills, exhibit an expert-level understanding of relevant technologies and tools, and use this knowledge to mentor and support team members, helping them improve their skills and succeed in their roles.
✅ Required Skills & Experience
● At least 10+ years of experience designing, building & maintaining SaaS environments
● 8+ years of experience designing, building & maintaining AWS/AZURE infrastructure with Terraform
● 5+ years of experience building and running Kubernetes clusters
● Strong experience working with data platform infrastructure.
● Experience with coding in a high-level programming language like Python, Go (Golang). Knowledge of writing shell scripts, SQL queries.
● Experience with observability (monitoring – logging, tracing, metrics)
● Experience with SQL/NoSQL database technologies.
● Experience with GitOps and CI/CD processes is a plus
● Experience with security operations – security policies, infrastructure, key management, setup of encryption at rest and transport.
● Experience in mentoring and fostering the professional development of team members, promoting a culture of continuous learning and collaboration.
🌟 Nice to Have
● Strong customer orientation
● Excellent interpersonal and organisational skills
● Attention to detail and focus on quality
● Strong communication skills to effectively liaise with both technical and non-technical staff
● Ability to act decisively and work well under pressure
● Must be a collaborative problem solver
● Strong bias for ownership and action
💡 Why Join This Role
● Architect foundational systems for one of the world’s most trusted observability platforms
● Join a fast-paced, mission-driven team with global reach
● Collaborate with senior leaders and contribute to technology direction across the company
● Drive innovation in how modern engineering teams monitor and respond to incidents
● Competitive compensation, strong engineering culture, and direct influence on product direction
Apply Here: https://airtable.com/app0U8lKNDT4KiEnD/pagKI67YRVCs8Yrrn/form?prefill_Applied+Position=recOhfqwuEbUIAsCb
Join the MachineHack - AIM Media House Team
At MachineHack, we craft AI-powered platforms and products that push the boundaries of learning and innovation. From MachineHack (the hackathon hub) to CloudLab (hands-on AI playground) to Datalyze (data-driven insights), plus countless experimental AI projects, we’re always building, always shipping.
We’re not just coding software, we’re creating experiences where people can learn, compete, and create. If you’re passionate about solving problems, experimenting with AI, and vibing with a team that learns every day, you’ll feel right at home here.
Why Work With Us?
● Be part of a fast-moving AI-first team where new ideas turn into products quickly.
● Work with modern tech stacks across AI, backend, frontend, and DevOps.
● Collaborate with engineers who are learners first: humble, supportive, and curious.
● Opportunity to contribute to high-impact platforms used by thousands in the AI/ML community.
● A culture that values learning > titles and team vibe > ego.
Our Tech Playground
We don’t shy away from complexity. We thrive in it. Here’s what we love working with:
● Backend & AI: Node.js, Python, TypeScript, Low-Level System Design, DSA, Prompt Engineering
● Frontend: React, Next.js, TypeScript
● DevOps & Cloud: AWS (EC2, Lambda, Terraform, CI/CD), Docker, Kubernetes, Helm Charts
● Testing & Quality: Playwright, Selenium, Unit Testing frameworks
● Infra & Systems: Container orchestration, Virtual Machines, AWS Infra Deployment
● Bonus Points: ML/AI model experience, guiding juniors, proven UX understanding
What Makes You a Good Fit
● 2–6 years of experience in software development
● Strong grounding in JS, Python, Node, and cloud-native tools
● Comfortable with system design, backend architecture, and team collaboration
● Can mentor juniors and handle multiple projects at once
● Fluent in English and able to communicate ideas clearly
● Humble, supportive, approachable: like a teammate and a friend
● Excited to learn, build, and experiment in AI projects
● Bonus: Some ML exposure (but not mandatory)
Position: Lead Software Engineer
Practice: Development
Reporting To: Project Manager
Experience: 8 - 10 Years
Job Summary
We are seeking a highly skilled Lead Software Engineer to lead a team of software developers in designing, developing, testing, and maintaining high-quality software applications. The ideal candidate will provide technical guidance, collaborate with cross-functional teams, and ensure that software solutions align with business objectives and user requirements
Key Responsibilities
Technical Leadership
- Lead, mentor, and guide a team of developers, providing technical direction, coaching, and performance feedback.
- Define and implement architectural designs for scalable, robust, and maintainable applications.
- Ensure adherence to coding standards, quality assurance practices, and performance optimization.
Full-Stack Development
- Design and develop responsive and user-friendly interfaces using React.js and reusable component architecture.
- Build and maintain backend services using Node.js, including RESTful APIs, business logic, and data integrations.
- Implement real-time features and integrate third-party APIs for enhanced functionality.
Project Management & Collaboration
- Collaborate with product managers, designers, and stakeholders to translate requirements into effective technical solutions.
- Participate in design and technical discussions, evaluating alternatives and mitigating potential risks.
- Oversee the end-to-end software development lifecycle, from requirement analysis to deployment and post-release support.
Quality Assurance & DevOps
- Conduct regular code reviews to ensure clean, maintainable, and well-tested code.
- Identify and resolve complex technical issues and performance bottlenecks.
- Contribute to cloud deployment strategies, CI/CD pipelines, and containerization practices.
Innovation & Continuous Improvement
- Stay updated with emerging technologies and frameworks in full-stack development.
- Recommend process improvements and technical upgrades to enhance system performance and team efficiency.
Mandatory Skills
Frontend: JavaScript, React.js, Redux, HTML5, CSS3
Backend: Node.js, Express.js, REST API design
Database: PostgreSQL, MongoDB, or other relational/non-relational databases
DevOps & Cloud: Familiarity with AWS, Azure, GCP, CI/CD pipelines
Version Control: Git and branching strategies
Architecture: Scalable design principles, microservices architecture
Desired Skills
- Experience with Docker, Kubernetes, and containerized deployments.
- Exposure to Agile/Scrum development methodologies.
- Familiarity with performance monitoring and application observability tools.
- Strong analytical and problem-solving capabilities.
- Excellent communication and leadership skills to collaborate effectively with stakeholders.
Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field.
- Proven experience as a full-stack developer, with significant hands-on experience in React.js and Node.js.
- Demonstrated experience in leading and managing software development teams.
- Solid understanding of software development methodologies and best practices.
- Passion for innovation, continuous learning, and delivering high-quality solutions.
Company Name – Wissen Technology
Location: Pune / Bangalore / Mumbai (based on candidate preference)
Work mode: Hybrid
Experience: 5+ years
Job Description
Wissen Technology is seeking an experienced C# .NET Developer to build and maintain applications related to streaming market data. This role involves developing message-based C#/.NET applications to process, normalize, and summarize large volumes of market data efficiently. The candidate should have a strong foundation in Microsoft .NET technologies and experience working with message-driven, event-based architecture. Knowledge of capital markets and equity market data is highly desirable.
Responsibilities
- Design, develop, and maintain message-based C#/.NET applications for processing real-time and batch market data feeds.
- Build robust routines to download and process data from AWS S3 buckets on a frequent schedule.
- Implement daily data summarization and data normalization routines.
- Collaborate with business analysts, data providers, and other developers to deliver high-quality, scalable market data solutions.
- Troubleshoot and optimize market data pipelines to ensure low latency and high reliability.
- Contribute to documentation, code reviews, and team knowledge sharing.
Required Skills and Experience
- 5+ years of professional experience programming in C# and Microsoft .NET framework.
- Strong understanding of message-based and real-time programming architectures.
- Experience working with AWS services, specifically S3, for data retrieval and processing.
- Experience with SQL and Microsoft SQL Server.
- Familiarity with Equity market data, FX, Futures & Options, and capital markets concepts.
- Excellent interpersonal and communication skills.
- Highly motivated, curious, and analytical mindset with the ability to work well both independently and in a team environment.
Education
- Bachelor’s degree in Computer Science, Engineering, or a related technical field.

One of the reputed clients in India
Our client is looking to hire a Databricks Admin immediately.
This is PAN-India bulk hiring.
Minimum of 6-8+ years with Databricks, PySpark/Python, and AWS.
AWS experience is a must.
A notice period of 15-30 days is preferred.
Share profiles at hr@etpspl.com.
Please share our email with friends or colleagues who are looking for a job.
Job Description:
We’re looking for a Software Engineer who’s passionate about building scalable, high-performance applications using Java and Kotlin on AWS. You’ll collaborate closely with cross-functional teams — including DevOps, QA, and Product — to design, develop, and deliver robust software solutions.
Our culture values collaboration, ownership, and continuous learning. We embrace modern technologies to solve real-world problems and continuously evolve our platform to meet changing business needs.
Key Responsibilities
- Develop and enhance features to collect, process, and deliver user-generated content.
- Collaborate with engineers, product managers, and designers to build end-to-end solutions.
- Write clean, maintainable, and efficient code following best practices.
- Participate in design discussions, code reviews, and technical brainstorming sessions.
- Identify and fix performance bottlenecks and technical issues.
- Contribute to CI/CD pipelines, infrastructure automation, and developer tooling initiatives.
- Maintain and improve application reliability, scalability, and security.
Required Skills & Experience
- 4+ years of full-stack development experience using Java/Kotlin and AWS.
- Hands-on experience in API development for front-end applications.
- Strong understanding of relational databases (PostgreSQL, MySQL) and handling large datasets.
- Experience working with CI/CD tools (CircleCI, GitHub Actions, Drone).
- Experience with Infrastructure as Code using Terraform.
- Exposure to event-driven architectures (SQS, SNS, Kafka, Kinesis, Pub/Sub) and idempotent patterns.
Nice-to-Have Skills
- Familiarity with automated database migration tools (Liquibase, Flyway).
- Experience with cloud storage (AWS S3, GCP Cloud Storage, Azure Blob) or document data stores (DynamoDB, MongoDB).
- Experience containerizing applications using Docker and deploying to ECS or Kubernetes.
- Proficiency with Git and collaborative development workflows.
Job Title: Mid-Level .NET Developer (Agile/SCRUM)
Location: Mohali (PTP) or anywhere else
Night shift from 6:30 PM to 3:30 AM IST
Experience: 5 years
Job Summary:
We are seeking a proactive and detail-oriented Mid-Level .NET Developer to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining high-quality applications using Microsoft technologies with a strong emphasis on .NET Core, C#, Web API, and modern front-end frameworks. You will collaborate with cross-functional teams in an Agile/SCRUM environment and participate in the full software development lifecycle—from requirements gathering to deployment—while ensuring adherence to best coding and delivery practices.
Key Responsibilities:
- Design, develop, and maintain applications using C#, .NET, .NET Core, MVC, and databases such as SQL Server, PostgreSQL, and MongoDB.
- Create responsive and interactive user interfaces using JavaScript, TypeScript, Angular, HTML, and CSS (see the sketch after this list).
- Develop and integrate RESTful APIs for multi-tier, distributed systems.
- Participate actively in Agile/SCRUM ceremonies, including sprint planning, daily stand-ups, and retrospectives.
- Write clean, efficient, and maintainable code following industry best practices.
- Conduct code reviews to ensure high-quality and consistent deliverables.
- Assist in configuring and maintaining CI/CD pipelines (Jenkins or similar tools).
- Troubleshoot, debug, and resolve application issues effectively.
- Collaborate with QA and product teams to validate requirements and ensure smooth delivery.
- Support release planning and deployment activities.
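For illustration only, a minimal standalone Angular component in TypeScript showing the kind of interactive UI work described in the responsibilities above. The selector, template, and field names are hypothetical.

```typescript
import { Component } from "@angular/core";
import { bootstrapApplication } from "@angular/platform-browser";
import { CommonModule } from "@angular/common";
import { FormsModule } from "@angular/forms";

// Standalone component: no NgModule needed, imports are declared inline.
@Component({
  selector: "app-greeting",
  standalone: true,
  imports: [CommonModule, FormsModule],
  template: `
    <input [(ngModel)]="name" placeholder="Your name" />
    <p *ngIf="name">Hello, {{ name }}!</p>
  `,
})
export class GreetingComponent {
  name = "";
}

// Bootstrap the component directly as the application root.
bootstrapApplication(GreetingComponent);
```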
Required Skills & Qualifications:
- 4–6 years of professional experience in .NET development.
- Strong proficiency in C#, .NET Core, MVC, and relational databases such as SQL Server.
- Working knowledge of NoSQL databases like MongoDB.
- Solid understanding of JavaScript/TypeScript and the Angular framework.
- Experience in developing and integrating RESTful APIs.
- Familiarity with Agile/SCRUM methodologies.
- Basic knowledge of CI/CD pipelines and Git version control.
- Hands-on experience with AWS cloud services.
- Strong analytical, problem-solving, and debugging skills.
- Excellent communication and collaboration skills.
Preferred / Nice-to-Have Skills:
- Advanced experience with AWS services.
- Knowledge of Kubernetes or other container orchestration platforms.
- Familiarity with IIS web server configuration and management.
- Experience in the healthcare domain.
- Exposure to AI-assisted code development tools (e.g., GitHub Copilot, ChatGPT).
- Experience with application security and code quality tools such as Snyk or SonarQube.
- Strong understanding of SOLID principles and clean architecture patterns.
Technical Proficiencies:
- ASP.NET Core, ASP.NET MVC
- C#, Entity Framework, Razor Pages
- SQL Server, MongoDB
- REST API, jQuery, AJAX
- HTML, CSS, JavaScript, TypeScript, Angular
- Azure Services, Azure Functions, AWS
- Visual Studio
- CI/CD, Git
We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front end and back end of our application, focusing on creating a seamless user experience supported by robust and scalable cloud infrastructure.
Key Responsibilities
● Develop and maintain user-facing features using React.js and TypeScript.
● Write clean, efficient, and well-documented JavaScript/TypeScript code.
● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.
● Contribute to the design, implementation, and maintenance of our databases.
● Collaborate with senior developers and product managers to deliver high-quality software.
● Troubleshoot and debug issues across the full stack.
● Participate in code reviews to maintain code quality and share knowledge.
Qualifications
● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
● 5+ years of professional experience in web development.
● Proficiency in JavaScript and/or TypeScript.
● Proficiency in Golang and Python.
● Hands-on experience with the React.js library for building user interfaces.
● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation); see the sketch after this list.
● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).
● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.
● Strong problem-solving skills and a willingness to learn.
● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.
● Knowledge of CI/CD pipelines and automated testing.
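For illustration only, a minimal AWS CDK (v2) stack in TypeScript of the kind referenced in the qualifications above. The stack, bucket, and table names are hypothetical, and a real environment would add IAM policies, networking, and environment-specific configuration.

```typescript
import { App, Stack, StackProps, RemovalPolicy } from "aws-cdk-lib";
import { aws_s3 as s3, aws_dynamodb as dynamodb } from "aws-cdk-lib";
import { Construct } from "constructs";

class AppInfraStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Versioned S3 bucket for assets or uploads.
    new s3.Bucket(this, "AssetsBucket", {
      versioned: true,
      removalPolicy: RemovalPolicy.DESTROY,
    });

    // On-demand DynamoDB table keyed by a single partition key.
    new dynamodb.Table(this, "ItemsTable", {
      partitionKey: { name: "pk", type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });
  }
}

const app = new App();
new AppInfraStack(app, "AppInfraStack");
```

Such a stack would be synthesized and deployed with the usual cdk synth and cdk deploy workflow.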
Role: Data Scientist (Python + R Expertise)
Exp: 8–12 years
CTC: up to 30 LPA
Required Skills & Qualifications:
- 8–12 years of hands-on experience as a Data Scientist or in a similar analytical role.
- Strong expertise in Python and R for data analysis, modeling, and visualization.
- Proficiency in machine learning frameworks (scikit-learn, TensorFlow, PyTorch, caret, etc.).
- Strong understanding of statistical modeling, hypothesis testing, regression, and classification techniques.
- Experience with SQL and working with large-scale structured and unstructured data.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and deployment practices (Docker, MLflow).
- Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
- Experience with NLP, time series forecasting, or deep learning projects.
- Exposure to data visualization tools (Tableau, Power BI, or R Shiny).
- Experience working in product or data-driven organizations.
- Knowledge of MLOps and model lifecycle management is a plus.
If interested, kindly share your updated resume on 82008 31681.