50+ Python Jobs in Bangalore (Bengaluru) | Python Job openings in Bangalore (Bengaluru)
Apply to 50+ Python Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.



Quidcash is seeking a skilled Backend Developer to architect, build, and optimize mission-critical financial systems. You'll leverage your expertise in JavaScript, Python, and OOP to develop scalable backend services that power our fintech/lending solutions. This role offers the chance to solve complex technical challenges, integrate cutting-edge technologies, and directly impact the future of financial services for Indian SMEs.
If you are a leader who thrives on technical challenges, loves building high-performing teams, and is excited by the potential of AI/ML in fintech, we want to hear from you!
What You'll Do:
Design & Development: Build scalable backend services using JavaScript (Node.js) and Python, adhering to OOP principles and microservices architecture.
Fintech Integration: Develop secure APIs (REST/gRPC) for financial workflows (e.g., payments, transactions, data processing) and ensure compliance with regulations (PCI-DSS, GDPR).
System Optimization: Enhance performance, reliability, and scalability of cloud-native applications on AWS.
Collaboration: Partner with frontend, data, and product teams to deliver end-to-end features in Agile/Scrum cycles.
Quality Assurance: Implement automated testing (unit/integration), CI/CD pipelines, and DevOps practices.
Technical Innovation: Contribute to architectural decisions and explore AI/ML integration opportunities in financial products.
What You'll Bring (Must-Haves):
Experience:
3–5 years of backend development with JavaScript (Node.js) and Python.
Proven experience applying OOP principles, design patterns, and microservices.
Background in fintech, banking, or financial systems (e.g., payment gateways, risk engines, transactional platforms).
Technical Acumen:
Languages/Frameworks:
JavaScript (Node.js + Express.js/Fastify)
Python (Django/Flask/FastAPI)
Databases: SQL (PostgreSQL/MySQL) and/or NoSQL (MongoDB/Redis).
Cloud & DevOps: AWS/GCP/Azure, Docker, Kubernetes, CI/CD tools (Jenkins/GitLab).
Financial Tech: API security (OAuth2/JWT), message queues (Kafka/RabbitMQ), and knowledge of financial protocols (e.g., ISO 20022).
Mindset:
Problem-solver with a passion for clean, testable code and continuous improvement.
Adaptability in fast-paced environments and commitment to deadlines.
Collaborative spirit with strong communication skills.
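The API-security requirement above (OAuth2/JWT) centres on token verification. As a rough stdlib-only sketch of what checking an HS256-signed JWT involves; a real service would use a vetted library such as PyJWT and also validate claims like `exp` and `aud`:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWTs use URL-safe base64 with the padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{_b64url(json.dumps(header).encode())}.{_b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes):
    """Return the payload dict if the signature checks out, else None."""
    try:
        signing_input, sig_b64 = token.rsplit(".", 1)
        expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
        # Re-pad, then compare in constant time to avoid timing side channels
        padded = sig_b64 + "=" * (-len(sig_b64) % 4)
        if not hmac.compare_digest(expected, base64.urlsafe_b64decode(padded)):
            return None
        payload_b64 = signing_input.split(".")[1]
        return json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))
    except (ValueError, json.JSONDecodeError):
        return None
```

The constant-time comparison (`hmac.compare_digest`) is the important detail: a naive `==` on signatures leaks timing information.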
Why Join Quidcash?
Impact: Play a pivotal role in shaping a product that directly impacts Indian SMEs' business growth.
Innovation: Work with cutting-edge technologies, including AI/ML, in a forward-thinking environment.
Growth: Opportunities for professional development and career advancement in a growing company.
Culture: Be part of a collaborative, supportive, and brilliant team that values every contribution.
Benefits: Competitive salary, a comprehensive benefits package, and the chance to be part of the next fintech evolution.
If you are interested, please share your profile at smitha@quidcash.in.

Key Responsibilities
Data Architecture & Pipeline Development
- Design, implement, and optimize ETL/ELT pipelines using Azure Data Factory, Databricks, and Synapse Analytics.
- Integrate structured, semi-structured, and unstructured data from multiple sources.
Data Storage & Management
- Develop and maintain Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake solutions.
- Ensure proper indexing, partitioning, and storage optimization for performance.
Data Governance & Security
- Implement role-based access control, data encryption, and compliance with GDPR/CCPA.
- Ensure metadata management and data lineage tracking with Azure Purview or similar tools.
Collaboration & Stakeholder Engagement
- Work closely with BI developers, analysts, and business teams to translate requirements into data solutions.
- Provide technical guidance and best practices for data integration and transformation.
Monitoring & Optimization
- Set up monitoring and alerting for data pipelines.
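In practice these pipelines run on Azure Data Factory or Databricks, but the extract-transform-load shape is the same everywhere. A minimal pure-Python sketch with hypothetical order data, using in-memory sqlite3 as a stand-in for Azure SQL:

```python
import csv
import io
import sqlite3

RAW_CSV = """order_id,amount,currency
1,100.50,INR
2,,INR
3,250.00,USD
"""  # hypothetical source extract; row 2 has a missing amount

def extract(text):
    # Parse the raw feed into dict rows
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Drop rows with missing amounts and normalise types
    return [
        (int(r["order_id"]), float(r["amount"]), r["currency"])
        for r in rows
        if r["amount"]
    ]

def load(rows):
    # Load the cleaned rows into the target table
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL, currency TEXT)")
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    con.commit()
    return con

con = load(transform(extract(RAW_CSV)))
```

Keeping extract, transform, and load as separate functions is what makes each stage independently testable, which is the property orchestration tools build on.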
About HelloRamp.ai
HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.
What You’ll Work On
- Develop and optimize Computer Vision pipelines for large-scale media creation.
- Implement NeRF-based systems for high-quality 3D reconstruction.
- Build and fine-tune AI video generation models using state-of-the-art techniques.
- Optimize AI inference for production (CUDA, TensorRT, ONNX).
- Collaborate with the engineering team to integrate AI features into scalable cloud systems.
- Research latest AI/CV advancements and bring them into production.
Skills & Experience
- Strong Python programming skills.
- Deep expertise in Computer Vision and Machine Learning.
- Hands-on with PyTorch/TensorFlow.
- Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
- Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
- GPU programming and optimization skills.
Nice to Have
- Knowledge of Three.js or WebGL for rendering AI outputs on the web.
- Familiarity with FFmpeg and video processing pipelines.
- Experience in cloud-based GPU environments (AWS/GCP).
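The 3D graphics concepts listed above connect directly: a depth map back-projects into a point cloud through the pinhole camera model. A pure-Python sketch (production pipelines would vectorise this with NumPy or PyTorch; the intrinsics fx, fy, cx, cy are assumed known):

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a 2-D depth map (row-major list of lists, metres)
    into camera-space 3-D points using a pinhole camera model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip invalid / missing depth
                continue
            # Invert the projection: u = fx * x / z + cx, v = fy * y / z + cy
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```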
Why Join Us?
- Work on cutting-edge AI and Computer Vision projects with global impact.
- Join a small, high-ownership team where your work matters.
- Opportunity to experiment, publish, and contribute to open-source.
- Competitive pay and flexible work setup.

Quidcash seeks a versatile full-stack developer to build transformative fintech applications from end to end. You'll leverage Flutter for frontend development and JavaScript/Python for backend systems to create seamless, high-performance solutions for Indian SMEs. This role blends UI craftsmanship with backend logic, offering the chance to architect responsive web/mobile experiences while integrating financial workflows and AI-driven features. If you excel at turning complex requirements into intuitive interfaces, thrive in full-lifecycle development, and are passionate about fintech innovation, join us!
What You’ll Do:
Full-stack Development:
Design and build responsive cross-platform applications using Flutter (Dart) for web and native mobile apps.
Develop robust backend services with JavaScript (Node.js) and Python, applying OOP principles and RESTful/gRPC APIs.
Integrations:
Implement secure financial features (e.g., payment processing, dashboards, transaction workflows) with regulatory compliance.
Connect frontend UIs to backend systems (databases, cloud APIs, AI/ML models).
System Architecture: Architect scalable solutions using microservices, state management (Provider/Bloc), and cloud patterns (AWS/GCP).
Collaboration & Delivery:
Partner with product, UX, and QA teams in Agile/Scrum cycles to ship features from concept to production.
Quality & Innovation:
Enforce testing (unit/widget/integration), CI/CD pipelines, and DevOps practices.
Explore AI/ML integration for data-driven UI/UX enhancements.
What You’ll Bring (Must-Haves):
Experience:
3–5 years in full-stack development, including:
Flutter (Dart) for cross-platform apps (iOS, Android, Web).
JavaScript (Node.js + React/Express) and Python (Django/Flask).
Experience with OOP, design patterns, and full SDLC in Agile environments.
Technical Acumen:
Frontend:
Flutter (state management, animations, custom widgets).
HTML/CSS, responsive design, and performance optimization.
Backend:
Node.js/Python frameworks, API design, and database integration (SQL/NoSQL).
Tools & Practices:
Cloud platforms (AWS/GCP/Azure), Docker, CI/CD (Jenkins/GitHub Actions).
Git, testing suites (Jest/Pytest, Flutter Test), and financial security standards.
Mindset:
User-centric approach with a passion for intuitive, accessible UI/UX.
Ability to bridge technical gaps between frontend and backend teams.
Agile problem-solver thriving in fast-paced fintech environments.
Why Join Quidcash?
Impact: Play a pivotal role in shaping a product that directly impacts Indian SMEs' business growth.
Innovation: Work with cutting-edge technologies, including AI/ML, in a forward-thinking environment.
Growth: Opportunities for professional development and career advancement in a growing company.
Culture: Be part of a collaborative, supportive, and brilliant team that values every contribution.
Benefits: Competitive salary, a comprehensive benefits package, and the chance to be part of the next fintech evolution.


Job Description: Software Engineer - Backend ( 3-5 Years)
Location: Bangalore
WHO WE ARE:
TIFIN is a fintech platform backed by industry leaders including JP Morgan, Morningstar, Broadridge, Hamilton Lane, Franklin Templeton, Motive Partners and a who's who of the financial services industry. We are creating engaging wealth experiences to better financial lives through AI and investment-intelligence-powered personalization. We are working to change the world of wealth in the ways that personalization has changed the world of movies, music and more, but with the added responsibility of delivering better wealth outcomes.
We use design and behavioral thinking to enable engaging experiences through software and application programming interfaces (APIs). We use investment science and intelligence to build algorithmic engines inside the software and APIs to enable better investor outcomes.
In a world where every individual is unique, we match them to financial advice and investments with a recognition of their distinct needs and goals across our investment marketplace and our advice and planning divisions.
OUR VALUES: Go with your GUT
●Grow at the Edge: We embrace personal growth by stepping out of our comfort zones to discover our genius zones, driven by self-awareness and integrity. No excuses.
●Understanding through Listening and Speaking the Truth: Transparency, radical candor, and authenticity define our communication. We challenge ideas, but once decisions are made, we commit fully.
●I Win for Teamwin: We operate within our genius zones, taking ownership of our work and inspiring our team with energy and attitude to win together.
Responsibilities:
• Contribute to the entire implementation process, including driving the definition of improvements based on business needs and architecture.
• Review code for quality and implementation of best practices.
• Promote coding, testing, and deployment best practices through hands-on research and demonstration.
• Write testable code that enables extremely high levels of code coverage.
• Review frameworks and design principles for their suitability in the project context.
• Identify an opportunity, lay out a rational plan for pursuing it, and see it through to completion.
Requirements:
• Engineering graduate with 3+ years of experience in software product development.
• Proficient in Python, Django, Pandas, GitHub, and AWS.
• Good knowledge of PostgreSQL and MongoDB.
• Strong Experience in designing REST APIs.
• Experience with working on scalable interactive web applications.
• A clear understanding of software design constructs and their implementation.
• Understanding of the threading limitations of Python and multi-process architecture.
• Familiarity with some ORM (Object Relational Mapper) libraries.
• Good understanding of Test Driven Development.
• Experience with unit and integration testing.
• Exposure to the finance domain preferred.
• Strong written and oral communication skills.
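The requirement above about Python's threading limitations refers to the GIL: CPython runs only one thread of Python bytecode at a time, so CPU-bound work needs a multi-process architecture to scale across cores. A small sketch of the same job dispatched either way; the thread pool gives correct results but no CPU parallelism, while a process pool sidesteps the GIL because each worker has its own interpreter:

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_bound(n):
    # Pure-Python arithmetic holds the GIL, so threads cannot run it in parallel
    return sum(i * i for i in range(n))

def run(executor_cls, inputs):
    # Same call shape for ThreadPoolExecutor (GIL-bound for CPU work) and
    # ProcessPoolExecutor (one interpreter, and one GIL, per worker process)
    with executor_cls(max_workers=4) as ex:
        return list(ex.map(cpu_bound, inputs))
```

Threads remain the right tool for I/O-bound work (network calls, disk), where the GIL is released while waiting.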

Job Type : Contract
Location : Bangalore
Experience : 5+ years
The role focuses on cloud security engineering with a strong emphasis on GCP, while also covering AWS and Azure.
Required Skills:
- 5+ years of experience in software and/or cloud platform engineering, particularly focused on GCP environment.
- Knowledge of the Shared Responsibility Model; keen understanding of the security risks inherent in hosting cloud-based applications and data.
- Experience developing across the security assurance lifecycle (including prevent, detect, respond, and remediate controls).
- Experience in configuring Public Cloud native security tooling and capabilities, with a focus on Google Cloud Organizational policies/constraints, VPC SC, IAM policies, and GCP APIs.
- Experience with Cloud Security Posture Management (CSPM) 3rd Party tools such as Wiz, Prisma, Check Point CloudGuard, etc.
- Experience in Policy-as-code (Rego) and OPA platform.
- Experience solutioning and configuring event-driven serverless-based security controls in Azure, including but not limited to technologies such as Azure Function, Automation Runbook, AWS Lambda and Google Cloud Functions.
- Deep understanding of DevOps processes and workflows.
- Working knowledge of the Secure SDLC process
- Experience with Infrastructure as Code (IaC) tooling, preferably Terraform.
- Familiarity with Logging and data pipeline concepts and architectures in cloud.
- Strong in scripting languages such as PowerShell or Python or Bash or Go.
- Knowledge of Agile best practices and methodologies
- Experience creating technical architecture documentation.
- Excellent communication, written and interpersonal skills.
- Practical experience in designing and configuring CICD pipelines. Practical experience in GitHub Actions and Jenkins.
- Experience in ITSM.
- Ability to articulate complex technical concepts to non-technical stakeholders.
- Experience with risk control frameworks and engagements with risk and regulatory functions
- Experience in the financial industry would be a plus.
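The policy-as-code requirement above would normally be written in Rego and evaluated by OPA; as an illustration of the same deny-rule shape, here is a Python analogue over a hypothetical storage-bucket config (the field names are invented for the example):

```python
def deny_public_bucket(resource):
    """Return a list of violation messages for a bucket config.
    Mirrors the deny-rule pattern used in OPA/Rego policies: each
    failed check appends a message, and an empty list means 'allow'."""
    violations = []
    if resource.get("public_access", False):
        violations.append("bucket must not allow public access")
    if not resource.get("encryption", {}).get("enabled", False):
        violations.append("bucket must have encryption enabled")
    return violations
```

Wired into a CI/CD pipeline, such checks gate infrastructure changes before Terraform applies them.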

Position Overview:
We are looking for a talented and self-motivated Back-end Developer to join our development team. The ideal candidate will be responsible for writing clean, efficient, and maintainable code that enhances workflow organization and automates various internal processes within the organization. The role involves continuous improvement of our software solutions to save man-hours and ensure overall organizational efficiency. Key tasks include development, testing, debugging, troubleshooting, and maintenance of both new and existing programs.
Key responsibilities:
1) Software Development: Write clean, efficient, and maintainable code/flows that automate internal processes and improve workflow efficiency.
2) Testing and Maintenance: Perform testing, debugging, troubleshooting, and daily maintenance of created or integrated programs.
3) Adherence to Standards: Follow preferred development methodologies and adhere to organizational development standards.
4) Teamwork: Work closely with other team members to ensure the successful implementation of projects. Maintain clear and concise documentation of code, APIs, and software components to aid knowledge sharing and future development.
5) Stay Current: Keep up to date with the latest developments in the Python and RPA ecosystems and engage in software engineering best practices.

Position Overview:
We are seeking a skilled Software Developer with a focus on Front-End Development with Strong proficiency in HTML, JavaScript, CSS, Sass, Bootstrap and modern JavaScript frameworks including ReactJS and jQuery to join our team. The successful candidate will be responsible for writing clean, efficient, and maintainable code that enhances workflow organization and automates various internal processes within the organization. The role involves continuous improvement of our software solutions to save man-hours and ensure overall organizational efficiency. Key tasks include development, testing, debugging, troubleshooting, and maintenance of both new and existing programs.
Key responsibilities:
1) Software Development: Write clean, efficient, and maintainable code/flows that automate internal processes and improve workflow efficiency.
2) Testing and Maintenance: Perform testing, debugging, troubleshooting, and daily maintenance of created or integrated programs.
3) UI/UX Development and Design: Design and develop intuitive and visually appealing user interfaces in the software.
4) Web Services Integration: Integrate UI with web services to ensure seamless functionality.
5) Adherence to Standards: Follow preferred development methodologies and adhere to organizational development standards.
6) Collaboration: Work closely with other team members to ensure the successful implementation of projects. Maintain clear and concise documentation of code, APIs, and software components to aid knowledge sharing and future development.
7) Stay Current: Keep up to date with the latest developments in the front-end development ecosystem and engage in software engineering best practices.

Sr. Software Engineer
Role expects you to
• Design, code, test, debug, deploy Web App and Mobile Apps
• Collaborate with SMEs to resolve Technical Issues and Achieve Goals
• Lead projects and teams
• Strong problem-solving skills
• Openness to learning new technologies
Qualification
▪ 4+ years of IT Development Experience
▪ Comprehensive Programming Experience in Web and Mobile app development using MEAN or MERN stack
▪ Knowledge of Python, Ruby
▪ Knowledge of RDBMS like MS SQL, PostgreSQL, MySQL
▪ Understanding of Dev-Ops Tools like git, Jenkins, Azure Dev-Ops
▪ Exposure to Cloud technologies, Agile development processes, and estimation techniques
▪ Understanding of End to End Development process

Role Overview
We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.
You’ll also play a mentorship role and help establish strong engineering practices across our data projects.
Key Responsibilities
- Design and develop large-scale, distributed data pipelines (batch and streaming)
- Implement scalable data models, warehouses/lakehouses, and data lakes
- Translate business requirements into technical data solutions
- Optimize data pipelines for performance and reliability
- Ensure code is clean, modular, tested, and documented
- Contribute to architecture, tooling decisions, and platform setup
- Review code/design and mentor junior engineers
Must-Have Skills
- Strong programming skills in Python and advanced SQL
- Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing
- Hands-on experience with frameworks like Apache Spark, Flink, etc.
- Experience with orchestration tools like Airflow
- Familiarity with CI/CD pipelines and Git
- Ability to debug and scale data pipelines in production
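The stream-processing skill above is usually exercised through engines like Spark Structured Streaming or Flink, but the core idea is small. A pure-Python sketch of tumbling-window aggregation over (timestamp, key) events:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp_seconds, key) events into fixed, non-overlapping
    windows and count events per key per window: the tumbling-window
    pattern from Flink / Spark Structured Streaming."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Every event falls into exactly one window, keyed by its start time
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}
```

Real engines add what this sketch omits: watermarks for late-arriving events and incremental state that survives restarts.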
Preferred Skills
- Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
- Exposure to Databricks, dbt, or similar tools
- Understanding of data governance, quality frameworks, and observability
- Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus
What We’re Looking For
- Problem-solver with strong analytical skills and attention to detail
- Fast learner who can adapt across tools, tech stacks, and domains
- Comfortable working in fast-paced, client-facing environments
- Willingness to travel within India when required

Role Overview
As an AI/ML Engineer at Hotelzify, you will be at the forefront of designing and deploying AI components that power our conversational agent (voice + chat). You’ll be responsible for NLP/NLU pipelines, real-time inference optimization, dialogue management, retrieval-augmented generation (RAG), and LLM integration, helping the agent understand user intents, answer questions, and complete bookings live.
Key Responsibilities
- Design and implement NLP/NLU models to understand real-time user intents from text and voice.
- Build and fine-tune LLM-based conversational flows using RAG, prompt engineering, and retrieval mechanisms.
- Integrate external tools for hotel availability, pricing APIs, CRM data, and transactional workflows.
- Develop efficient real-time inference pipelines with latency under 300ms for voice and chat.
- Collaborate with frontend/backend teams to ensure seamless LLM API orchestration.
- Optimize prompt logic, dialogue memory, and fallback strategies for natural conversations.
- Conduct A/B experiments and continuous learning pipelines for feedback-driven improvement.
- Use vector databases (e.g., FAISS, Pinecone, Weaviate) for retrieval over hotel-related data.
- Work on voice-specific challenges: STT (Speech-to-Text), TTS, and intent detection over audio streams.
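The retrieval step behind RAG, as described above, reduces to ranking documents by vector similarity to the query. Production systems use learned embeddings with FAISS or Pinecone; this toy sketch substitutes bag-of-words counts (the hotel snippets are invented) to show the ranking mechanics:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term counts (real systems use learned vectors)
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank all docs against the query; a vector DB does this at scale with ANN indexes
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

The retrieved passages would then be stuffed into the LLM prompt as grounding context, which is the "augmented generation" half of RAG.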
Tech Stack
- Languages: Python, Node.js
- AI/ML: LangChain, Transformers (Hugging Face), OpenAI APIs, LlamaIndex, RAG, Whisper, NVIDIA NeMo
- Infra: AWS (EC2, RDS, EKS, Lambda), Docker, Redis
- Databases: PostgreSQL, MongoDB, Pinecone / Weaviate / Qdrant
- Voice APIs: Plivo, Twilio, Google Speech, AssemblyAI
What We’re Looking For
- 3+ years in AI/ML/NLP-focused roles, preferably in production environments.
- Strong understanding of modern LLM pipelines, LangChain/RAG architectures.
- Experience building or integrating real-time conversational AI systems.
- Comfortable with voice-based systems: STT, TTS, and real-time latency tuning.
- Hands-on experience with fine-tuning or prompt-tuning transformer models.
- Bonus: Experience working in travel, hospitality, or e-commerce domains.
Nice to Have
- Prior work with agents that use [tool calling] / [function calling] paradigms.
- Knowledge of reinforcement learning for dialogue optimization (e.g., RLHF).
- Experience deploying on GPU-based infrastructure (e.g., AWS EC2 with NVIDIA).
Why Join Us?
- Work on a real product used by thousands of guests every day.
- Build India’s first real-time AI agent for hotel sales.
- Flexible work environment with deep ownership and autonomy.
- Get to experiment and deploy bleeding-edge ML/AI tech in production.

Job Title: Sr Dev Ops Engineer
Location: Bengaluru- India (Hybrid work type)
Reports to: Sr. Engineering Manager
About Our Client :
We are a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our Client's IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision-making and enable intelligent supply chains without the costly infrastructure.
About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you.
What You'll Do 🛠️
- Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
- Billing & Cost Optimization: Monitor and optimize cloud spending.
- Containerization & Orchestration: Deploy and manage applications and orchestrate them.
- Database Management: Deploy, manage, and optimize database instances and their lifecycles.
- Authentication Solutions: Implement and manage authentication systems.
- Backup & Recovery: Implement robust backup and disaster recovery strategies, including Kubernetes cluster and database backups.
- Monitoring & Alerting: Set up and maintain robust systems using tools for application and infrastructure health and integrate with billing dashboards.
- Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
- Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
- Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks.
What You'll Bring 💼
- Minimum of 4 years of experience in a DevOps or SRE role.
- Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
- Solid understanding of Linux fundamentals and command-line tools.
- Extensive experience with CI/CD tools, especially GitLab CI.
- Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
- Proven experience deploying and managing microservices.
- Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
- Experience with Identity and Access management solutions like Keycloak.
- Experience implementing backup and recovery solutions.
- Familiarity with optimizing scaling, ideally with Karpenter.
- Proficiency in scripting (Python, Bash).
- Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
- Excellent problem-solving and communication skills.
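The backup-and-recovery responsibility above always involves a retention policy deciding which snapshots to keep. As a hypothetical scripting example (real setups typically delegate this to snapshot lifecycle tooling), a minimal keep-last-N-days rule:

```python
from datetime import date, timedelta

def snapshots_to_delete(snapshot_dates, keep_daily=7):
    """Given snapshot dates, keep the most recent `keep_daily` days and
    return the rest, newest first, as deletion candidates."""
    ordered = sorted(set(snapshot_dates), reverse=True)
    return ordered[keep_daily:]
```

Grandfather-father-son schemes extend the same idea with weekly and monthly tiers.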
Bonus Points ➕
- Basic understanding of MQTT or general IoT concepts and protocols.
- Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
- Knowledge of specific AWS services relevant to application stacks.
- Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
- AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).
Why this role:
•You will help build the company from the ground up—shaping our culture and having an impact from Day 1 as part of the foundational team.

Senior Backend Engineer
As a senior backend engineer you will lead the development of our loan origination system and its interface with our loan management system. You will have an opportunity to contribute to the development of the overall technology strategy of the company.
Requirements
- 0–3 years of experience in Python
- Experience with Shell Script and Linux, including testing
Additional qualifications
- Experience working in the banking or financial services industry
- DevOps experience will be preferred


Company Description
Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the untrodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where the idea is only as good as the returns it fetches for our clients.
Key Responsibilities:
- Design and implement advanced AI/ML models and algorithms to address real-world challenges.
- Analyze large and complex datasets to derive actionable insights and train predictive models.
- Build and deploy scalable, production-ready AI solutions on cloud platforms such as AWS, Azure, or GCP.
- Collaborate closely with cross-functional teams, including data engineers, product managers, and software developers, to integrate AI solutions into business workflows.
- Continuously monitor and optimize model performance, ensuring scalability, robustness, and reliability.
- Stay abreast of the latest advancements in AI, ML, and Generative AI technologies, and proactively apply them where applicable.
- Implement MLOps best practices using tools such as MLflow, Docker, and CI/CD pipelines.
- Work with Large Language Models (LLMs) like GPT and LLaMA, and develop Retrieval-Augmented Generation (RAG) pipelines when needed.
Required Skills:
- Strong programming skills in Python (preferred); experience with R or Java is also valuable.
- Proficiency with machine learning libraries and frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Hands-on experience with cloud platforms like AWS, Azure, or GCP.
- Solid foundation in data structures, algorithms, statistics, and machine learning principles.
- Familiarity with MLOps tools and practices, including MLflow, Docker, and Kubernetes.
- Proven experience in deploying and maintaining AI/ML models in production environments.
- Exposure to Large Language Models (LLMs), Generative AI, and vector databases is a strong plus.

Job Title: Python Developer
Location: Bangalore
Experience: 5–7 Years
Employment Type: Full-Time
Job Description:
We are seeking an experienced Python Developer with strong proficiency in data analysis tools and PySpark, along with a solid understanding of SQL syntax. The ideal candidate will work on large-scale data processing and analysis tasks within a fast-paced environment.
Key Requirements:
Python: Hands-on experience with Python, specifically in data analysis using libraries such as pandas, numpy, etc.
PySpark: Proficiency in writing efficient PySpark code for distributed data processing.
SQL: Strong knowledge of SQL syntax and experience in writing optimized queries.
Ability to work independently and collaborate effectively with cross-functional teams.
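To make the SQL side of this role concrete, here is a minimal sketch using Python's standard-library sqlite3 (the table, columns, and values are purely illustrative): indexing the filtered column lets the query planner avoid a full table scan, which is the kind of query optimization the posting asks for.

```python
import sqlite3

# Hypothetical orders table; names and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, city TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (city, amount) VALUES (?, ?)",
    [("Bangalore", 120.0), ("Bangalore", 80.0), ("Mumbai", 200.0)],
)
# An index on the filter column lets the planner avoid a full table scan.
conn.execute("CREATE INDEX idx_orders_city ON orders (city)")

total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE city = ?", ("Bangalore",)
).fetchone()[0]
print(total)  # 200.0
```

The same idea carries over to PostgreSQL; in PySpark the analogous levers are partitioning and predicate pushdown rather than a B-tree index.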

Job Title: Full-Stack developer
Location: Bengaluru, India
Experience: 5 to 8+ Years
Employment Type: Full-time
Company Overview:
IAI Solution Pvt Ltd (www.iaisolution.com) operates at the edge of applied AI, where foundational research meets real-world deployment. We craft intelligent systems that think in teams, adapt with context, and deliver actionable insight across domains. We are seeking a Full Stack Developer who thrives in high-velocity environments, enjoys technical problem-solving, and is passionate about building scalable and impactful systems.
Position Summary :
We’re hiring a Full Stack Developer with strong experience in Python, React.js, and Next.js, capable of handling end-to-end development. The ideal candidate should have hands-on exposure to FastAPI, Django, Node.js, and cloud platforms like Azure or AWS. Familiarity with Docker, Kubernetes, Terraform, CI/CD tools, and databases like PostgreSQL, MongoDB, and Redis is required. This role demands building and deploying scalable systems in a fast-paced, agile environment.
Experience in a start-up environment is preferred, where agility, ownership, and cross-functional collaboration are key.
Key Responsibilities
- Develop and maintain end-to-end web applications, including frontend interfaces and backend services.
- Build responsive and scalable UIs using React.js and Next.js.
- Design and implement robust backend APIs using Python, FastAPI, Django, or Node.js.
- Work with cloud platforms such as Azure (preferred) or AWS for application deployment and scaling.
- Manage DevOps tasks, including containerization with Docker, orchestration with Kubernetes, and infrastructure as code with Terraform.
- Set up and maintain CI/CD pipelines using tools like GitHub Actions or Azure DevOps.
- Design and optimize database schemas using PostgreSQL, MongoDB, and Redis.
- Collaborate with cross-functional teams in an agile environment to deliver high-quality features on time.
- Troubleshoot, debug, and improve application performance and security.
- Take full ownership of assigned modules/features and contribute to technical planning and architecture discussions.
Must-Have Qualifications
- Strong hands-on experience with Python and at least one backend framework such as FastAPI, Django, or Flask, or with Node.js.
- Proficiency in frontend development using React.js and Next.js
- Experience in building and consuming RESTful APIs
- Solid understanding of database design and queries using PostgreSQL, MongoDB, and Redis
- Practical experience with cloud platforms, preferably Azure, or AWS
- Familiarity with containerization and orchestration tools like Docker and Kubernetes
- Working knowledge of Infrastructure as Code (IaC) using Terraform
- Experience with CI/CD pipelines using GitHub Actions or Azure DevOps
- Ability to work in an agile development environment with cross-functional teams
- Strong problem-solving, debugging, and communication skills
- Start-up experience preferred – ability to manage ambiguity, rapid iterations, and hands-on leadership.
Technical Stack
- Frontend: React.js, Next.js
- Backend: Python, FastAPI, Django, Spring Boot, Node.js
- DevOps & Cloud: Azure (preferred), AWS, Docker, Kubernetes, Terraform
- CI/CD: GitHub Actions, Azure DevOps
- Databases: PostgreSQL, MongoDB, Redis
Perks & Benefits
- Competitive compensation with performance incentives
- High-impact role in a product-driven, fast-moving environment
- Opportunity to lead mission-critical software and AI initiatives
- Flexible work culture, learning support, and health benefits

We are seeking a highly skilled and motivated Python Developer with hands-on experience in AWS cloud services (Lambda, API Gateway, EC2), microservices architecture, PostgreSQL, and Docker. The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable backend services and APIs, with a strong emphasis on cloud-native solutions and containerized environments.
Key Responsibilities:
- Develop and maintain scalable backend services using Python (Flask, FastAPI, or Django).
- Design and deploy serverless applications using AWS Lambda and API Gateway.
- Build and manage RESTful APIs and microservices.
- Implement CI/CD pipelines for efficient and secure deployments.
- Work with Docker to containerize applications and manage container lifecycles.
- Develop and manage infrastructure on AWS (including EC2, IAM, S3, and other related services).
- Design efficient database schemas and write optimized SQL queries for PostgreSQL.
- Collaborate with DevOps, front-end developers, and product managers for end-to-end delivery.
- Write unit, integration, and performance tests to ensure code reliability and robustness.
- Monitor, troubleshoot, and optimize application performance in production environments.
Required Skills:
- Strong proficiency in Python and Python-based web frameworks.
- Experience with AWS services: Lambda, API Gateway, EC2, S3, CloudWatch.
- Sound knowledge of microservices architecture and asynchronous programming.
- Proficiency with PostgreSQL, including schema design and query optimization.
- Hands-on experience with Docker and containerized deployments.
- Understanding of CI/CD practices and tools like GitHub Actions, Jenkins, or CodePipeline.
- Familiarity with API documentation tools (Swagger/OpenAPI).
- Version control with Git.
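As a rough illustration of the serverless portion of this stack, the sketch below shows a minimal AWS Lambda handler behind an API Gateway proxy integration. The event shape follows the standard proxy format; the payload and names are invented for the example.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    Expects a JSON body like {"name": "..."} and returns the standard
    statusCode/headers/body response shape.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake API Gateway event (context is unused here).
resp = lambda_handler({"body": json.dumps({"name": "dev"})}, None)
print(resp["statusCode"])  # 200
```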

Backend Engineer - Python
Location
Bangalore, India
Experience Required
2-3 years minimum
Job Overview
We are seeking a skilled Backend Engineer with expertise in Python to join our engineering team. The ideal candidate will have hands-on experience building and maintaining enterprise-level, scalable backend systems.
Key Requirements
Technical Skills
• CS fundamentals are a must (CN, DBMS, OS, System Design, OOP)
• Python Expertise: Advanced proficiency in Python with deep understanding of frameworks like Django, FastAPI, or Flask
• Database Management: Experience with PostgreSQL, MySQL, MongoDB, and database optimization
• API Development: Strong experience in designing and implementing RESTful APIs and GraphQL
• Cloud Platforms: Hands-on experience with AWS, GCP, or Azure services
• Containerization: Proficiency with Docker and Kubernetes
• Message Queues: Experience with Redis, RabbitMQ, or Apache Kafka
• Version Control: Advanced Git workflows and collaboration
Experience Requirements
• Minimum 2-3 years of backend development experience
• Proven track record of working on enterprise-level applications
• Experience building scalable systems handling high traffic loads
• Background in microservices architecture and distributed systems
• Experience with CI/CD pipelines and DevOps practices
Responsibilities
• Design, develop, and maintain robust backend services and APIs
• Optimize application performance and scalability
• Collaborate with frontend teams and product managers
• Implement security best practices and data protection measures
• Write comprehensive tests and maintain code quality
• Participate in code reviews and architectural discussions
• Monitor system performance and troubleshoot production issues
Preferred Qualifications
• Knowledge of caching strategies (Redis, Memcached)
• Understanding of software architecture patterns
• Experience with Agile/Scrum methodologies
• Open source contributions or personal projects
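On the "caching strategies" point above, the sketch below reduces the TTL semantics that Redis and Memcached provide to a tiny in-process class. It is purely illustrative; a real service would call an external cache so entries are shared across instances.

```python
import time

class TTLCache:
    """Tiny in-process TTL cache; a stand-in for the expiry semantics
    Redis/Memcached provide out of process (illustrative only)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy expiry, like Redis TTL semantics
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:42", {"name": "Asha"})
print(cache.get("user:42"))  # {'name': 'Asha'}
time.sleep(0.06)
print(cache.get("user:42"))  # None (expired)
```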

Location: Hybrid/ Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
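The retrieval step of the RAG pipeline described above reduces to nearest-neighbour search under a similarity metric such as cosine. Below is a toy sketch with hand-made 3-dimensional "embeddings"; in the real pipeline the vectors come from an embedding model and live in a vector store such as MongoDB Atlas or Pinecone.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy document embeddings (invented for the example).
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.05]

# Retrieve the most similar document to feed into the LLM prompt as context.
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # refund policy
```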
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.

Job Description:
Title: Python AWS Developer with API
Tech Stack: AWS API Gateway, Lambda, Oracle RDS, SQL & database management, OOP principles, JavaScript, Object-Relational Mapper (ORM), Git, Docker, Java dependency management, CI/CD, AWS cloud & S3, Secrets Manager, Python, API frameworks; well-versed in front-end and back-end programming (Python).
Responsibilities:
· Build high-performance APIs using AWS services and Python: Python coding, debugging programs, and integrating the app with third-party web services.
· Troubleshoot and debug non-prod defects; back-end development and API work, with a main focus on coding and monitoring applications.
· Design core application logic.
· Support dependent teams in UAT testing and perform functional application testing, including Postman testing.

🔍 Job Description:
We are looking for an experienced and highly skilled Technical Lead to guide the development and enhancement of a large-scale Data Observability solution built on AWS. This platform is pivotal in delivering monitoring, reporting, and actionable insights across the client's data landscape.
The Technical Lead will drive end-to-end feature delivery, mentor junior engineers, and uphold engineering best practices. The position reports to the Programme Technical Lead / Architect and involves close collaboration to align on platform vision, technical priorities, and success KPIs.
🎯 Key Responsibilities:
- Lead the design, development, and delivery of features for the data observability solution.
- Mentor and guide junior engineers, promoting technical growth and engineering excellence.
- Collaborate with the architect to align on platform roadmap, vision, and success metrics.
- Ensure high quality, scalability, and performance in data engineering solutions.
- Contribute to code reviews, architecture discussions, and operational readiness.
🔧 Primary Must-Have Skills (Non-Negotiable):
- 5+ years in Data Engineering or Software Engineering roles.
- 3+ years in a technical team or squad leadership capacity.
- Deep expertise in AWS Data Services: Glue, EMR, Kinesis, Lambda, Athena, S3.
- Advanced programming experience with PySpark, Python, and SQL.
- Proven experience in building scalable, production-grade data pipelines on cloud platforms.


Backend Engineer - Python
Location: Bangalore, India
Experience Required: 2-3 years minimum
About Us:
At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.
Job Overview
We are seeking a skilled Backend Engineer with expertise in Python to join our engineering team. The ideal candidate will have hands-on experience building and maintaining enterprise-level, scalable backend systems.
Key Requirements
Technical Skills
- Python Expertise: Advanced proficiency in Python with deep understanding of frameworks like Django, FastAPI, or Flask
- Database Management: Experience with PostgreSQL, MySQL, MongoDB, and database optimization
- API Development: Strong experience in designing and implementing RESTful APIs and GraphQL
- Cloud Platforms: Hands-on experience with AWS, GCP, or Azure services
- Containerization: Proficiency with Docker and Kubernetes
- Message Queues: Experience with Redis, RabbitMQ, or Apache Kafka
- Version Control: Advanced Git workflows and collaboration
Experience Requirements
- Minimum 2-3 years of backend development experience
- Proven track record of working on enterprise-level applications
- Experience building scalable systems handling high traffic loads
- Background in microservices architecture and distributed systems
- Experience with CI/CD pipelines and DevOps practices
Responsibilities
- Design, develop, and maintain robust backend services and APIs
- Optimize application performance and scalability
- Collaborate with frontend teams and product managers
- Implement security best practices and data protection measures
- Write comprehensive tests and maintain code quality
- Participate in code reviews and architectural discussions
- Monitor system performance and troubleshoot production issues
Preferred Qualifications
- Knowledge of caching strategies (Redis, Memcached)
- Understanding of software architecture patterns
- Experience with Agile/Scrum methodologies
- Open source contributions or personal projects


Required skills / knowledge
• Experienced in software development and proficient in one or more of the following
programming languages: Python, Java, and/or Perl.
• Experience with SDLC (software development lifecycle) best practices on large software
development projects is a must.
• Good knowledge of Computer Science fundamentals.
• Good organizational and English communication skills; able to prioritize multiple projects and
objectives.
Preferred skills / knowledge:
• Knowledge of storage technologies and disciplines, namely Fibre Channel, iSCSI, SCSI, NFS, CIFS,
POSIX, Object Storage, SAS, SATA, flash (NVMe, SSD, etc.), RAID, Erasure Coding,
Distributed/Scale-out Storage, file systems, and high-availability methods, plus working knowledge
of databases, is strongly preferred. Understanding of networking protocols and connectivity is
preferred.
• System Architecture experience within enterprise UNIX (RHEL)/Windows Server environments,
container environments (Red Hat OpenShift/Kubernetes/Docker, etc.), and cloud environments
(Azure/AWS/Google) is highly desirable but not required.
• Understanding of Client/Server, Scaleout architectures, virtualization, performance and capacity
management strongly preferred.
• Good knowledge and experience of using Linux provisioning and system configuration
management tools such as Ansible, Puppet, Salt Stack, or Chef strongly preferred. Experience in
automation of a large-scale Linux deployment is preferred.
• Effective troubleshooting skills across O/S, network and storage.
• Knowledge in the following vendor products is preferred but not required: NetApp
7mode/cDOT/Engenio, IBM Spectrum Scale (aka GPFS), HDS Storage Arrays/HCP, EMC Atmos,
Brocade SAN/FOS, Veritas Storage Foundation Suite


- 5+ years of experience
- Flask API and REST API development experience
- Proficiency in Python programming
- Basic knowledge of front-end development
- Basic knowledge of data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (web scraping)
- Knowledge of SQL and NoSQL databases
- Familiarity with RESTful APIs
- Familiarity with Cloud (Azure /AWS) technologies
About Eazeebox
Eazeebox is India’s first B2B Quick Commerce platform for home electrical goods. We empower electrical retailers with access to 100+ brands, flexible credit options, and 4-hour delivery—making supply chains faster, smarter, and more efficient. Our tech-driven approach enables sub-3 hour inventory-aware fulfilment across micro-markets, with a goal of scaling to 50+ orders/day per store.
About the Role
We’re looking for a DevOps Engineer to help scale and stabilize the cloud-native backbone that powers Eazeebox. You’ll play a critical role in ensuring our microservices architecture remains reliable, responsive, and performant—especially during peak retailer ordering windows. This is a high-ownership role for an "all-rounder" who is passionate about
designing scalable architectures, writing robust code, and ensuring seamless deployments and operations.
What You'll Be Doing
As a critical member of our small, dedicated team, you will take on a versatile role encompassing development, infrastructure, and operations.
Cloud & DevOps Ownership
- Architect and implement containerized services on AWS (S3, EC2, ECS, ECR, CodeBuild, Lambda, Fargate, RDS, CloudWatch) under secure IAM policies.
- Take ownership of CI/CD pipelines, optimizing and managing GitHub Actions workflows.
- Configure and manage microservice versioning and CI/CD deployments.
- Implement secrets rotation and IP-based request rate limiting for enhanced security.
- Configure auto-scaling instances and Kubernetes for high-workload microservices to ensure performance and cost efficiency.
- Hands-on experience with Docker and Kubernetes/EKS fundamentals.
Backend & API Design
- Design, build, and maintain scalable REST/OpenAPI services in Django (DRF), WebSocket implementations, and asynchronous microservices in FastAPI.
- Model relational data in PostgreSQL 17 and optimize with Redis for caching and pub/sub.
- Orchestrate background tasks using Celery or RQ with Redis Streams or Amazon SQS.
- Collaborate closely with the frontend team (React/React Native) to define and build robust APIs.
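The background-task bullet above follows the same enqueue/worker pattern whatever the broker. Here is a minimal in-process stand-in using only the standard library; Celery or RQ with Redis Streams or SQS move the queue out of process and across machines, but the shape is the same.

```python
import queue
import threading

# In-process task queue; a toy stand-in for a Celery/RQ broker.
tasks = queue.Queue()
results = []

def worker():
    """Pull (func, args) jobs off the queue until the sentinel arrives."""
    while True:
        job = tasks.get()
        if job is None:  # sentinel: shut the worker down
            break
        func, args = job
        results.append(func(*args))
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()

# Enqueue two hypothetical "generate invoice" jobs, then the sentinel.
tasks.put((lambda order_id: f"invoice for order {order_id}", (101,)))
tasks.put((lambda order_id: f"invoice for order {order_id}", (102,)))
tasks.put(None)
t.join()
print(results)  # ['invoice for order 101', 'invoice for order 102']
```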
Testing & Observability
- Ensure code quality via comprehensive testing using Pytest, React Testing Library, and Playwright.
- Instrument applications with CloudWatch metrics, contributing to our observability strategy.
- Maintain a Git-centric development workflow, including branching strategies and pull-request discipline.
Qualifications & Skills
Must-Have
- Experience: 2-4 years of hands-on experience delivering production-level full-stack applications with a strong emphasis on backend and DevOps.
- Backend Expertise: Proficiency in Python, with strong command of Django or FastAPI, including async Python patterns and REST best practices.
- Database Skills: Strong SQL skills with PostgreSQL; practical experience with Redis for caching and messaging.
- Cloud & DevOps Mastery: Hands-on experience with Docker and Kubernetes/EKS fundamentals.
- AWS Proficiency: Experience deploying and managing services on AWS (EC2, S3, RDS, Lambda, ECS Fargate, ECR, SQS).
- CI/CD: Deep experience with GitHub Actions or similar platforms, including semantic-release, Blue-Green Deployments, and artifact signing.
- Automation: Fluency in Python/Bash or Go for automation scripts; comfort with YAML.
- Ownership Mindset: Entrepreneurial spirit, strong sense of ownership, and ability to deliver at scale.
- Communication: Excellent written and verbal communication skills; comfortable in async and distributed team environments.
Nice-to-Have
- Frontend Familiarity: Exposure to React with Redux Toolkit and React Native.
- Event Streaming: Experience with Kafka or Amazon EventBridge.
- Serverless: Knowledge of AWS Lambda, Step Functions, CloudFront Functions, or Cloudflare Workers.
- Observability: Familiarity with Datadog, Posthog, Prometheus/Grafana/Loki.
- Emerging Tech: Interest in GraphQL (Apollo Federation) or generative AI frameworks (Amazon Bedrock, LangChain) and AI/ML.
Key Responsibilities
- Architectural Leadership: Design and lead the technical strategy for migrating our platform from a monolithic to a microservices architecture.
- System Design: Translate product requirements into scalable, secure, and reliable system designs.
- Backend Development: Build and maintain core backend services using Python (Django/FastAPI).
- CI/CD & Deployment: Own and manage CI/CD pipelines for multiple services using GitHub Actions, AWS CodeBuild, and automated deployments.
- Infrastructure & Operations: Deploy production-grade microservices using Docker, Kubernetes, and AWS EKS.
- FinOps & Performance: Drive cloud cost optimization and implement auto-scaling for performance and cost-efficiency.
- Security & Observability: Implement security, monitoring, and compliance using tools like Prometheus, Grafana, Datadog, Posthog, and Loki to ensure 99.99% uptime.
- Collaboration: Work with product and development teams to align technical strategy with business growth plans.

Mode of Hire: Permanent
Required Skills Set (Mandatory): Linux, Shell Scripting, Python, AWS, Security best practices, Git
Desired Skills (Good if you have): Ansible, Terraform
Job Responsibilities
- Design, develop, and maintain deployment pipelines and automation tooling to improve platform efficiency, scalability, and reliability.
- Manage infrastructure and services in production AWS environments.
- Drive platform improvements with a focus on security, scalability, and operational excellence.
- Collaborate with engineering teams to enhance development tooling, streamline access workflows, and improve platform usability through feedback.
- Mentor junior engineers and help foster a culture of high-quality engineering and knowledge sharing.
Job Requirements
- Strong foundational understanding of Linux systems.
- Cloud experience (e.g., AWS) with strong problem-solving in cloud-native environments.
- Proven track record of delivering robust, well-documented, and secure automation solutions.
- Comfortable owning end-to-end delivery of infrastructure components and tooling.
Preferred Qualifications
- Advanced system and cloud optimization skills.
- Prior experience in platform teams or DevOps roles at product-focused startups.
- Demonstrated contributions to internal tooling, open-source, or automation projects.

About Us:
At Vahan.ai, we are building India’s first AI-powered recruitment marketplace for India’s 300-million-strong blue-collar workforce, opening doors to economic opportunities and brighter futures.
Already India’s largest recruitment platform, Vahan.ai is supported by marquee investors like Khosla Ventures, Bharti Airtel, Vijay Shekhar Sharma (CEO, Paytm), and leading executives from Google and Facebook. Our customers include names like Swiggy, Zomato, Rapido, Zepto, and many more. We leverage cutting-edge technology and AI to recruit for the workforces of some of the most recognized companies in the country.
Our vision is ambitious: to become the go-to platform for blue-collar professionals worldwide, empowering them with not just earning opportunities but also the tools, benefits, and support they need to thrive. We aim to impact over a billion lives worldwide, creating a future where everyone has access to economic prosperity. If our vision excites you, Vahan.ai might just be your next adventure. We’re on the hunt for driven individuals who love tackling big challenges. If this sounds like your kind of journey, dive into the details and see where you can make your mark.
What you will be doing:
- Stay at the Cutting Edge: Continuously research, evaluate, and implement the latest AI technologies and state-of-the-art algorithms to ensure our platform remains innovative and effective.
- Develop and Optimize: Design, develop, and maintain AI systems and applications that enhance our recruitment marketplace.
- Prototype and Document: Create proof-of-concept models, write technical specifications, and document AI systems and processes.
- Test, Debug and Optimize: Conduct thorough testing of AI models, debug issues, and optimize for performance and scalability.
- Collaborate: Work closely with cross-functional teams, including data scientists, software engineers, and product managers.
- Own and Deliver: Take ownership of major product areas end-to-end, becoming a subject matter expert in your domain.
- Communicate Effectively: Translate complex technical concepts into clear, understandable terms for diverse stakeholders.
- Deploy and Scale: Implement production-ready AI solutions that can scale to meet our growing user base.
You will thrive in this role if you:
- Stay current with the latest AI developments and demonstrate ability to quickly adapt and implement cutting-edge technologies
- Have 2+ years of experience in AI projects with strong proficiency in Python and frameworks like PyTorch, Hugging Face, and LangChain
- Possess strong skills in testing, debugging, and optimizing AI models for performance and scalability
- Can communicate complex technical concepts in an easy-to-understand manner and adapt quickly to changing priorities
- Excel at creating prototypes and documenting AI systems, including writing comprehensive technical specifications
- Demonstrate strong familiarity with implementing state-of-the-art AI technologies and algorithms
- Show ownership capabilities to handle large product areas end-to-end and become an SME in your domain
- Have experience with voice data and conversational bots
- Possess creative problem-solving skills with the ability to find scrappy solutions under tight timelines
- Have expertise in Large Language Models (LLMs), evals and fine-tuning techniques, Retrieval-Augmented Generation (RAG), and AI agents.
At Vahan.ai, you’ll have the opportunity to make a real impact in a sector that touches millions of lives. We’re committed to not only advancing the livelihoods of our workforce but also, in taking care of the people who make this mission possible. Here’s what we offer:
- Unlimited PTO: Trust and flexibility to manage your time in the way that works best for you.
- Comprehensive Medical Insurance: We’ve got you covered with plans designed to support you and your loved ones.
- Monthly Wellness Leaves: Regular time off to recharge and focus on what matters most.
- Competitive Pay: Your contributions are recognized and rewarded with a compensation package that reflects your impact.
Join us, and be part of something bigger—where your work drives real, positive change in the world.

What We’re Looking For:
- Strong experience in Python (5+ years).
- Hands-on experience with any database (SQL or NoSQL).
- Experience with frameworks like Flask, FastAPI, or Django.
- Knowledge of ORMs, API development, and unit testing
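On the unit-testing requirement, here is a minimal unittest sketch; the apply_discount helper is hypothetical, invented for the example.

```python
import unittest

def apply_discount(amount, percent):
    """Hypothetical business helper: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(amount * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically instead of via `python -m unittest`.
suite = unittest.TestLoader().loadTestsFromTestCase(ApplyDiscountTests)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
print(outcome.wasSuccessful())  # True
```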

Technical Expertise
- Advanced proficiency in Python
- Expertise in Deep Learning Frameworks: PyTorch and TensorFlow
- Experience with Computer Vision Models:
- YOLO (Object Detection)
- UNet, Mask R-CNN (Segmentation)
- Deep SORT (Object Tracking)
Real-Time & Deployment Skills
- Real-time video analytics and inference optimization
- Model pipeline development using:
- Docker
- Git
- MLflow or similar tools
- Image processing proficiency: OpenCV, NumPy
- Deployment experience on Linux-based GPU systems and edge devices (Jetson Nano, Google Coral, etc.)
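Much of the detection and tracking work listed above rests on one small computation: Intersection-over-Union between bounding boxes, used in YOLO post-processing (non-maximum suppression) and in Deep SORT association. A plain-Python sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-Union for axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x5 region: 25 / (100 + 100 - 25).
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 0.14285714285714285
```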
Professional Background
- Minimum 4+ years of experience in AI/ML, with a strong focus on Computer Vision and System-Level Design
- Educational qualification: B.E./B.Tech/M.Tech in Computer Science, Electrical Engineering, or a related field
- Strong project portfolio or experience in production-level deployments

Role: Python Developer
Location: Hyderabad, Bengaluru
Experience: 6+ Years
Skills needed:
Python developer with 5+ years of experience designing, developing, and maintaining scalable applications, with a strong focus on API integration. Must demonstrate proficiency in RESTful API consumption, third-party service integration, and troubleshooting API-related issues.


Location: Bangalore/ Mangalore
Experience required: 2-6 years.
Key skills: Python, Django, Flask, FastAPI
We are seeking a skilled Python Developer with 2–6 years of experience who can contribute as an individual performer while also supporting technical decision-making and mentoring junior developers. The role involves designing and building scalable backend systems using Django/Flask, FastAPI, and collaborating closely with cross-functional teams to deliver high-quality software solutions.
Responsibilities:
• Develop robust, scalable, and efficient backend applications using Python (Django/Flask, FastAPI).
• Build and maintain RESTful APIs that are secure, performant, and easy to integrate.
• Collaborate with cross-functional teams to deliver seamless and impactful software solutions.
• Participate actively in all phases of the software development life cycle: requirements gathering, design, development, testing, deployment, and maintenance.
• Write clean, maintainable, and well-documented code that meets industry best practices.
• Troubleshoot, debug, and optimize existing systems for performance and scalability.
• Contribute ideas for continuous improvement in development processes and team culture.
Requirements:
• 2–6 years of hands-on development experience in Python, with proficiency in frameworks like Django/Flask, FastAPI.
• Strong understanding of OOP concepts, design principles, and design patterns.
• Solid experience working with databases.
• Good knowledge of designing and consuming RESTful APIs.
• Comfortable working with version control systems like Git and collaborating in code reviews.
• Exposure to cloud platforms (AWS, Azure, or GCP) is an added advantage.
• Familiarity with Docker and containerized application development is a plus.
• Understanding of CI/CD pipelines is desirable.
• Analytical mindset with strong problem-solving skills.
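As a small illustration of the design-pattern requirement above, here is the strategy pattern in Python (the notifier classes are invented for the example): callers depend on an abstraction, so new implementations plug in without modifying existing code.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Strategy interface: callers depend on this abstraction,
    not on a concrete channel."""

    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    def send(self, message):
        return f"email: {message}"

class SmsNotifier(Notifier):
    def send(self, message):
        return f"sms: {message}"

def notify_user(notifier: Notifier, message: str) -> str:
    # New channels plug in without touching this function (open/closed).
    return notifier.send(message)

print(notify_user(EmailNotifier(), "build passed"))  # email: build passed
print(notify_user(SmsNotifier(), "build passed"))    # sms: build passed
```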
About the Company:
Pace Wisdom Solutions is a deep-tech Product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore. We specialize in designing and developing bespoke software solutions that cater to solving niche business problems.
We engage with our clients at various stages:
• Right from the idea stage to scope out business requirements.
• Design & architect the right solution and define tangible milestones.
• Set up dedicated and on-demand tech teams for agile delivery.
• Take accountability for successful deployments to ensure efficient go-to-market Implementations.
Pace Wisdom has been working with Fortune 500 enterprises and growth-stage startups/SMEs since 2012. We also work as an extended tech team, and at times we have played the role of a virtual CTO. We believe in building lasting relationships, providing value-add every time, and going beyond business.

Job Summary:
We are looking for a skilled and motivated Python AWS Engineer to join our team. The ideal candidate will have strong experience in backend development using Python, cloud infrastructure on AWS, and building serverless or microservices-based architectures. You will work closely with cross-functional teams to design, develop, deploy, and maintain scalable and secure applications in the cloud.
Key Responsibilities:
- Develop and maintain backend applications using Python and frameworks like Django or Flask
- Design and implement serverless solutions using AWS Lambda, API Gateway, and other AWS services
- Develop data processing pipelines using services such as AWS Glue, Step Functions, S3, DynamoDB, and RDS
- Write clean, efficient, and testable code following best practices
- Implement CI/CD pipelines using tools like CodePipeline, GitHub Actions, or Jenkins
- Monitor and optimize system performance and troubleshoot production issues
- Collaborate with DevOps and front-end teams to integrate APIs and cloud-native services
- Maintain and improve application security and compliance with industry standards
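As a rough illustration of the Lambda/API Gateway work described above, here is a minimal handler sketch. The event and response shapes follow the API Gateway proxy integration; the field names in the request body are illustrative.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    Parses the JSON request body, builds a greeting, and returns the
    proxy-integration response shape (statusCode / headers / body).
    """
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake event (no AWS account needed):
resp = lambda_handler({"body": json.dumps({"name": "dev"})}, None)
print(resp["statusCode"], resp["body"])
```

Keeping the handler a plain function like this is what makes the "write clean, testable code" bullet practical: it can be unit-tested locally before any deployment.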
Required Skills:
- Strong programming skills in Python
- Solid understanding of AWS cloud services (Lambda, S3, EC2, DynamoDB, RDS, IAM, API Gateway, CloudWatch, etc.)
- Experience with infrastructure as code (e.g., CloudFormation, Terraform, or AWS CDK)
- Good understanding of RESTful API design and microservices architecture
- Hands-on experience with CI/CD, Git, and version control systems
- Familiarity with containerization (Docker, ECS, or EKS) is a plus
- Strong problem-solving and communication skills
Preferred Qualifications:
- Experience with PySpark, Pandas, or data engineering tools
- Working knowledge of Django, Flask, or other Python frameworks
- AWS Certification (e.g., AWS Certified Developer – Associate) is a plus
Educational Qualification:
- Bachelor's or Master’s degree in Computer Science, Engineering, or related field

Role Overview
We are seeking a highly skilled and experienced Senior AI Engineer with deep expertise in computer vision and architectural design. The ideal candidate will lead the development of robust, scalable AI systems, drive architectural decisions, and contribute significantly to the deployment of real-time video analytics, multi-model systems, and intelligent automation solutions.
Key Responsibilities
Design and lead the architecture of complex AI systems in the domain of computer vision and real-time inference.
Build and deploy models for object detection, image segmentation, classification, and tracking.
Mentor and guide junior engineers on deep learning best practices and scalable software engineering.
Drive end-to-end ML pipelines: from data ingestion and augmentation to training, deployment, and monitoring.
Work with YOLO-based and transformer-based models for industrial use-cases.
Lead integration of AI systems into production with hardware, backend, and DevOps teams.
Develop automated benchmarking, annotation, and evaluation tools.
Ensure maintainability, scalability, and reproducibility of models through version control, CI/CD, and containerization.
Required Skills
Advanced proficiency in Python and deep learning frameworks (PyTorch, TensorFlow).
Strong experience with YOLO, segmentation networks (UNet, Mask R-CNN), and tracking (Deep SORT).
Sound understanding of real-time video analytics and inference optimization.
Hands-on experience designing model pipelines using Docker, Git, MLflow, or similar tools.
Familiarity with OpenCV, NumPy, and image processing techniques.
Proficiency in deploying models on Linux systems with GPU or edge devices (Jetson, Coral)
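Detection and tracking pipelines like those above lean heavily on intersection-over-union (IoU) matching between predicted and reference boxes; here is a minimal pure-Python sketch, with boxes in the common (x1, y1, x2, y2) convention.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes don't overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# A detection is typically counted as a match when IoU exceeds a
# threshold such as 0.5:
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

The same scalar underlies benchmark metrics (mAP) and the association step in trackers like Deep SORT.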
Good to Have
Experience with multi-model orchestration, streaming inference (DeepStream), or virtual camera inputs.
Exposure to production-level MLOps practices.
Knowledge of cloud-based deployment on AWS, GCP, or DigitalOcean.
Familiarity with synthetic data generation, augmentation libraries, and 3D modeling tools.
Publications, patents, or open-source contributions in the AI/ML space.
Qualifications
B.E./B.Tech/M.Tech in Computer Science, Electrical Engineering, or related field.
4+ years of proven experience in AI/ML with a focus on computer vision and system-level design.
Strong portfolio or demonstrable projects in production environments.

Role Overview:
We are seeking a Senior Software Engineer (SSE) with strong expertise in Kafka, Python, and Azure Databricks to lead and contribute to our healthcare data engineering initiatives. This role is pivotal in building scalable, real-time data pipelines and processing large-scale healthcare datasets in a secure and compliant cloud environment.
The ideal candidate will have a solid background in real-time streaming, big data processing, and cloud platforms, along with strong leadership and stakeholder engagement capabilities.
Key Responsibilities:
- Design and develop scalable real-time data streaming solutions using Apache Kafka and Python.
- Architect and implement ETL/ELT pipelines using Azure Databricks for both structured and unstructured healthcare data.
- Optimize and maintain Kafka applications, Python scripts, and Databricks workflows to ensure performance and reliability.
- Ensure data integrity, security, and compliance with healthcare standards such as HIPAA and HITRUST.
- Collaborate with data scientists, analysts, and business stakeholders to gather requirements and translate them into robust data solutions.
- Mentor junior engineers, perform code reviews, and promote engineering best practices.
- Stay current with evolving technologies in cloud, big data, and healthcare data standards.
- Contribute to the development of CI/CD pipelines and containerized environments (Docker, Kubernetes).
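A recurring pattern in pipelines like these is pseudonymising protected health information (PHI) before records leave a trusted boundary (for example, before publishing to a Kafka topic). The sketch below uses a salted hash; the field list and salt handling are illustrative, not a complete HIPAA de-identification scheme.

```python
import hashlib

PHI_FIELDS = {"patient_name", "ssn", "mrn"}  # illustrative PHI field list

def pseudonymise(record: dict, salt: str) -> dict:
    """Replace PHI fields with a salted SHA-256 token before the record
    is published downstream. Non-PHI fields pass through unchanged."""
    out = {}
    for key, value in record.items():
        if key in PHI_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:16]  # deterministic per salt, so joins still work
        else:
            out[key] = value
    return out

rec = {"mrn": "A123", "diagnosis_code": "E11.9"}
masked = pseudonymise(rec, salt="per-environment-secret")
print(masked["diagnosis_code"], masked["mrn"] != "A123")
```

Because the token is deterministic for a given salt, downstream joins and aggregations keep working while the raw identifier never reaches the analytics layer.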
Required Skills & Qualifications:
- 4+ years of hands-on experience in data engineering roles.
- Strong proficiency in Kafka (including Kafka Streams, Kafka Connect, Schema Registry).
- Proficient in Python for data processing and automation.
- Experience with Azure Databricks (or readiness to ramp up quickly).
- Solid understanding of cloud platforms, with a preference for Azure (AWS/GCP is a plus).
- Strong knowledge of SQL and NoSQL databases; data modeling for large-scale systems.
- Familiarity with containerization tools like Docker and orchestration using Kubernetes.
- Exposure to CI/CD pipelines for data applications.
- Prior experience with healthcare datasets (EHR, HL7, FHIR, claims data) is highly desirable.
- Excellent problem-solving abilities and a proactive mindset.
- Strong communication and interpersonal skills to work in cross-functional teams.

Background
Fisdom is a leading digital wealth management platform. The Fisdom platform (mobile apps and web apps) gives consumers access to a wide bouquet of financial solutions – investments, savings, and protection (with many more in the pipeline). Fisdom blends cutting-edge technology with conventional financial wisdom, awesome UX, and friendly customer service to make financial products simpler and more accessible to millions of Indians. We are growing and constantly looking for high performers to participate in our growth story. We have recently been certified as a Great Place to Work. For more info, visit www.fisdom.com.
Objectives of this Role
Improve, execute, and effectively communicate significant analyses that identify opportunities across the business
Participate in meetings with management, assessing and addressing issues, and identifying and implementing improvements to operations
Provide strong and timely financial and business analytic decision support to various organizational stakeholders
Responsibilities
Interpret data, analyze results using analytics, research methodologies, and statistical techniques
Develop and implement data analyses, leveraging data collection and other strategies that ensure statistical efficiency and quality
Prepare and summarize weekly, monthly, and periodic results for use by various key stakeholders
Conduct the full lifecycle of analytics projects, from requirements documentation to design and execution, including pulling, manipulating, and exporting data
Evaluate key performance indicators, provide ongoing reports, and recommend business plan updates
Skills and Qualifications
Bachelor’s degree, preferably in computer science, mathematics, or economics
Advanced analytical skills with experience collecting, organizing, analyzing, and disseminating information with accuracy
The ability to present findings in a polished way
Proficiency with statistics and dataset analytics (using SQL, Python, Excel)
Entrepreneurial mindset, with an innovative approach to business planning
Relevant industry experience of 2–6 years; more than 4 years of Python experience is a must
Preferable: product startups, fintech
Why join us and where?
We have an excellent work culture and an opportunity to be a part of a growing organization with immense learning possibilities. You have an opportunity to build a brand from scratch. All of this, along with top-of-the-line remuneration and challenging work. You will be based out of Bangalore.

JOB REQUIREMENT:
Wissen Technology is now hiring an Azure Data Engineer with 7+ years of relevant experience.
We are solving complex technical problems in the financial industry and need talented software engineers to join our mission and be a part of a global software development team. A brilliant opportunity to become a part of a highly motivated and expert team, which has made a mark as a high-end technical consultant.
Required Skills:
· 6+ years of being a practitioner in data engineering or a related field.
· Proficiency in programming skills in Python
· Experience with data processing frameworks like Apache Spark or Hadoop.
· Experience working on Snowflake and Databricks.
· Familiarity with cloud platforms (AWS, Azure) and their data services.
· Experience with data warehousing concepts and technologies.
· Experience with message queues and streaming platforms (e.g., Kafka).
· Excellent communication and collaboration skills.
· Ability to work independently and as part of a geographically distributed team.

Job Title: Full Stack Developer (MERN + Python) / Senior MERN Developer
Location: Bangalore
Job Type: Full-time
Experience: 4–8 years
About Miror
Miror is a pioneering FemTech company transforming how midlife women navigate perimenopause and menopause. We offer medically-backed solutions, expert consultations, community engagement, and wellness products to empower women in their health journey. Join us to make a meaningful difference through technology.
Role Overview
We are seeking a passionate and experienced Full Stack Developer skilled in MERN stack and Python (Django/Flask) to build and scale high-impact features across our web and mobile platforms. You will collaborate with cross-functional teams to deliver seamless user experiences and robust backend systems.
Key Responsibilities
· Design, develop, and maintain scalable web applications using MySQL/Postgres, MongoDB, Express.js, React.js, and Node.js
· Build and manage RESTful APIs and microservices using Python (Django/Flask/FastAPI)
· Integrate with third-party platforms like OpenAI, WhatsApp APIs (Whapi), Interakt, and Zoho
· Optimize performance across the frontend and backend
· Collaborate with product managers, designers, and other developers to deliver high-quality features
· Ensure security, scalability, and maintainability of code
· Write clean, reusable, and well-documented code
· Contribute to DevOps, CI/CD, and server deployment workflows (AWS/Lightsail)
· Participate in code reviews and mentor junior developers if needed
Required Skills
· Strong experience with MERN Stack: MongoDB, Express.js, React.js, Node.js
· Proficiency in Python and web frameworks like Django, Flask, or FastAPI
· Experience working with REST APIs, JWT/Auth, and WebSockets
· Good understanding of frontend design systems, state management (Redux/Context), and responsive UI
· Familiarity with database design and queries (MongoDB, PostgreSQL/MySQL)
· Experience with Git, Docker, and deployment pipelines
· Comfortable working in Linux-based environments (e.g., Ubuntu on AWS)
Bonus Skills
· Experience with AI integrations (e.g., OpenAI, LangChain)
· Familiarity with WooCommerce, WordPress APIs
· Experience in chatbot development or WhatsApp API integration
Who You Are
· You are a problem-solver with a product-first mindset
· You care about user experience and performance
· You enjoy working in a fast-paced, collaborative environment
· You have a growth mindset and are open to learning new technologies
Why Join Us?
· Work at the intersection of healthcare, community, and technology
· Directly impact the lives of women across India and beyond
· Flexible work environment and collaborative team
· Opportunity to grow with a purpose-driven startup
If you are interested, please apply here and drop me a message on CutShort.
Employment type: Contract basis
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using PySpark and distributed computing frameworks.
- Implement ETL processes and integrate data from structured and unstructured sources into cloud data warehouses.
- Work across Azure or AWS cloud ecosystems to deploy and manage big data workflows.
- Optimize performance of SQL queries and develop stored procedures for data transformation and analytics.
- Collaborate with Data Scientists, Analysts, and Business teams to ensure reliable data availability and quality.
- Maintain documentation and implement best practices for data architecture, governance, and security.
⚙️ Required Skills
- Programming: Proficient in PySpark, Python, and SQL.
- Cloud Platforms: Hands-on experience with Azure Data Factory, Databricks, or AWS Glue/Redshift.
- Data Engineering Tools: Familiarity with Apache Spark, Kafka, Airflow, or similar tools.
- Data Warehousing: Strong knowledge of designing and working with data warehouses like Snowflake, BigQuery, Synapse, or Redshift.
- Data Modeling: Experience in dimensional modeling, star/snowflake schema, and data lake architecture.
- CI/CD & Version Control: Exposure to Git, Terraform, or other DevOps tools is a plus.
🧰 Preferred Qualifications
- Bachelor's or Master's in Computer Science, Engineering, or related field.
- Certifications in Azure/AWS are highly desirable.
- Knowledge of business intelligence tools (Power BI, Tableau) is a bonus.

Supply Wisdom: Full Stack Developer
Location: Hybrid Position based in Bangalore
Reporting to: Tech Lead Manager
Supply Wisdom is a global leader in transformative risk intelligence, offering real-time insights to drive business growth, reduce costs, enhance security and compliance, and identify revenue opportunities. Our AI-based SaaS products cover various risk domains, including financial, cyber, operational, ESG, and compliance. With a diverse workforce that is 57% female, our clients include Fortune 100 and Global 2000 firms in sectors like financial services, insurance, healthcare, and technology.
Objective: We are seeking a skilled Full Stack Developer to design and build scalable software solutions. You will be part of a cross-functional team responsible for the full software development life cycle, from conception to deployment.
As a Full Stack Developer, you should be proficient in both front-end and back-end technologies, development frameworks, and third-party libraries. We’re looking for a team player with strong problem-solving abilities, attention to visual design, and a focus on utility. Familiarity with Agile methodologies, including Scrum and Kanban, is essential.
Responsibilities
- Collaborate with the development team and product manager to ideate software solutions.
- Write effective and secure REST APIs.
- Integrate third-party libraries for product enhancement.
- Design and implement client-side and server-side architecture.
- Work with data scientists and analysts to enhance software using RPA and AI/ML techniques.
- Develop and manage well-functioning databases and applications.
- Ensure software responsiveness and efficiency through testing.
- Troubleshoot, debug, and upgrade software as needed.
- Implement security and data protection settings.
- Create features and applications with mobile-responsive design.
- Write clear, maintainable technical documentation.
- Build front-end applications with appealing, responsive visual design.
Requirements
- Degree in Computer Science (or related field) with 4+ years of hands-on experience in Python development, with strong expertise in the Django framework and Django REST Framework (DRF).
- Proven experience in designing and building RESTful APIs, with a solid understanding of API versioning, authentication (JWT/OAuth2), and best practices.
- Experience with relational databases such as PostgreSQL or MySQL; familiarity with query optimization and database migrations.
- Basic front-end development skills using HTML, CSS, and JavaScript; experience with any JavaScript framework (like React or Next Js) is a plus.
- Good understanding of Object-Oriented Programming (OOP) and design patterns in Python.
- Familiarity with Git and collaborative development workflows (e.g., GitHub, GitLab).
- Knowledge of Docker, CI/CD pipelines.
- Hands-on experience with AWS services, Nginx web server, RabbitMQ (or similar message brokers), event handling, and synchronization.
- Familiarity with Postgres, SSO implementation (desirable), and integration of third-party libraries.
- Experience with unit testing, debugging, and code reviews.
- Experience using tools like Jira and Confluence.
- Ability to work in Agile/Scrum teams with good communication and problem-solving skills.
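The JWT authentication mentioned in the API requirements above reduces to signing and verifying a compact three-part token. The sketch below implements HS256 with only the standard library to make the mechanics visible; a production Django/DRF service should use a vetted library such as PyJWT or djangorestframework-simplejwt instead.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWT compact serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Build a compact HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: str) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"sub": "user-42"}, secret="dev-secret")
print(verify_jwt(token, "dev-secret"), verify_jwt(token, "wrong"))  # True False
```

Note the constant-time comparison: plain `==` on signatures can leak timing information, which is exactly the kind of detail code reviews in this role would look for.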
Our Commitment to You:
We offer a competitive salary and generous benefits. In addition, we offer a vibrant work environment, a global team filled with passionate and fun-loving people coming from diverse cultures and backgrounds.
If you are looking to make an impact in delivering market-leading risk management solutions, empowering our clients, and making the world a better place, then Supply Wisdom is the place for you.
You can learn more at supplywisdom.com and on LinkedIn.
1. Software Development Engineer - Salesforce
What we ask for
We are looking for strong engineers to build best-in-class systems for commercial & wholesale banking at the Bank, using Salesforce Service Cloud. We seek experienced developers who bring a deep understanding of Salesforce development practices, patterns, anti-patterns, governor limits, and the sharing & security model, which will allow us to architect & develop robust applications.
You will work closely with business and product teams to build applications that provide end users with an intuitive, clean, minimalist, easy-to-navigate experience.
Develop systems by applying software development principles and clean-code practices so they are scalable, secure, highly resilient, and low-latency.
Should be open to working in a start-up environment and have the confidence to deal with complex issues, keeping focus on solutions and project objectives as your guiding North Star.
Technical Skills:
● Strong hands-on frontend development using JavaScript and LWC
● Expertise in backend development using Apex, Flows, Async Apex
● Understanding of Database concepts: SOQL, SOSL and SQL
● Hands-on experience in API integration using SOAP, REST API, graphql
● Experience with ETL tools , Data migration, and Data governance
● Experience with Apex Design Patterns, Integration Patterns, and the Apex testing framework
● Follow an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, Bitbucket
● Should have worked with at least one programming language (Java, Python, C++) and have a good understanding of data structures
Preferred qualifications
● Graduate degree in engineering
● Experience developing with India stack
● Experience in fintech or banking domain

Hybrid work mode
(Azure) EDW: Experience working in loading star-schema data warehouses using framework architectures, including experience loading Type 2 dimensions. Ingesting data from various sources (structured and semi-structured); hands-on experience ingesting via APIs into lakehouse architectures.
Key Skills: Azure Databricks, Azure Data Factory, Azure Data Lake Gen 2 Storage, SQL (expert), Python (intermediate), Azure cloud services knowledge, data analysis (SQL), data warehousing, documentation (BRD, FRD, user story creation).
A BIT ABOUT US
Appknox is a leading mobile application security platform that helps enterprises automate security testing across their mobile apps, APIs, and DevSecOps pipelines. Trusted by global banks, fintechs, and government agencies, we enable secure mobile experiences with speed and confidence.
About the Role:
We're looking for a Jr. Technical Support Engineer to join our global support team and provide world-class assistance to customers in the US time zones from 8pm to 5am IST. You will troubleshoot, triage, and resolve technical issues related to Appknox’s mobile app security platform, working closely with Engineering, Product, and Customer Success teams.
Key Responsibilities:
- Respond to customer issues via email, chat, and voice/voip calls during US business hours.
- Diagnose, replicate, and resolve issues related to DAST, SAST, and API security modules.
- Troubleshoot integration issues across CI/CD pipelines, API connections, SDKs, and mobile app builds.
- Document known issues and solutions in the internal knowledge base and help center.
- Escalate critical bugs to engineering with full context, reproduction steps, and logs.
- Guide customers on secure implementation best practices and platform usage.
- Collaborate with product and QA teams to suggest feature improvements based on customer feedback.
- Participate in on-call support rotations if needed.
Requirements:
- 1–4 years of experience in technical support, delivery, or QA roles at a SaaS or cybersecurity company.
- Excellent communication and documentation skills in English.
- Comfortable working independently and handling complex technical conversations with customers.
- Basic understanding of mobile platforms (Android, iOS), REST APIs, Networking Architecture, and security concepts (OWASP, CVEs, etc.).
- Familiarity with command-line tools, mobile build systems (Gradle/Xcode), and HTTP proxies (Burp).
- Ability to work full-time within US time zones. Ensure that you have a stable internet connection and workstation.
Good to have skills:
- Experience working in a product-led cybersecurity company.
- Knowledge of scripting languages (Python, Bash) or log analysis tools.
- Familiarity with CI/CD tools (Jenkins, GitHub Actions, GitLab CI) is a plus.
- Familiarity with ticketing and support tools like Freshdesk, Jira, Postman, and Slack.
Compensation
- As per Industry Standards
Interview Process:
- Application: Submit your resume and complete your application via our job portal.
- Screening: We'll review your background and fit, and typically invite you for a profile evaluation call on CutShort (15 mins).
- Assignment Round: You'll receive a real-world take-home task to complete within 48 hours.
- Panel Interview: Meet with a cross-functional interview panel to assess technical skills, problem-solving, and collaboration.
- Stakeholder Interview: A focused discussion with the Director to evaluate strategic alignment and high-level fit.
- HR Round: A final chat to discuss cultural fit, compensation, notice period, and next steps.
Personality Traits We Admire:
- A confident and dynamic working persona that can bring fun to the team; a sense of humour is an added advantage.
- A great attitude to ask questions, learn, and suggest process improvements.
- Attention to detail, helping identify edge cases.
- Highly motivated, coming up with fresh ideas and perspectives to help us move towards our goals faster.
- Follows timelines and shows absolute commitment to deadlines.
Why Join Us:
- Freedom & Responsibility: If you are a person who enjoys challenging work & pushing your boundaries, then this is the right place for you. We appreciate new ideas & ownership as well as flexibility with working hours.
- Great Salary & Equity: We keep up with the market standards & provide pay packages considering updated standards. Also as Appknox continues to grow, you’ll have a great opportunity to earn more & grow with us. Moreover, we also provide equity options for our top performers.
- Holistic Growth: We foster a culture of continuous learning and take a much more holistic approach to train and develop our assets: the employees. We shall also support you all on that journey of yours.
- Transparency: Being a part of a start-up is an amazing experience, one of the reasons being open communication & transparency at multiple levels. Working with Appknox will give you the opportunity to experience it all first-hand.

Job Overview
We are seeking an agile AI Engineer with a strong focus on both AI engineering and SaaS product development in a 0-1 product environment. This role is perfect for a candidate skilled in building and iterating quickly, embracing a fail-fast approach to bring innovative AI solutions to market rapidly. You will be responsible for designing, developing, and deploying SaaS products using advanced Large Language Models (LLMs) such as Meta, Azure OpenAI, Claude, and Mistral, while ensuring a secure, scalable, and high-performance architecture. Your ability to adapt, iterate, and deliver in fast-paced environments is critical.
Responsibilities
Lead the design, development, and deployment of SaaS products leveraging LLMs, including platforms like Meta, Azure OpenAI, Claude, and Mistral.
Support the product lifecycle, from conceptualization to deployment, ensuring seamless integration of AI models with business requirements and user needs.
Build secure, scalable, and efficient SaaS products that embody robust data management and comply with security and governance standards.
Collaborate closely with product management and other stakeholders to align AI-driven SaaS solutions with business strategies and customer expectations.
Fine-tune AI models using custom instructions to tailor them to specific use cases, and optimize performance through techniques like quantization and model tuning.
Architect AI deployment strategies using cloud-agnostic platforms (AWS, Azure, Google Cloud), ensuring cost optimization while maintaining performance and scalability.
Apply retrieval-augmented generation (RAG) techniques to build AI models that provide contextually accurate and relevant outputs.
Build the integration of APIs and third-party services into the SaaS ecosystem, ensuring a robust and flexible product architecture.
Monitor product performance post-launch, iterating on and improving models and infrastructure to enhance user experience and scalability.
Stay current with AI advancements, SaaS development trends, and cloud technology to apply innovative solutions in product development.
Qualifications
Bachelor’s degree or equivalent in Information Systems, Computer Science, or related fields.
6+ years of experience in product development, with at least 2 years focused on AI-based SaaS products.
Demonstrated experience in leading the development of SaaS products, from ideation to deployment, with a focus on AI-driven features.
Hands-on experience with LLMs (Meta, Azure OpenAI, Claude, Mistral) and SaaS platforms.
Proven ability to build secure, scalable, and compliant SaaS solutions, integrating AI with cloud-based services (AWS, Azure, Google Cloud).
Strong experience with RAG techniques and fine-tuning AI models for business-specific needs.
Proficiency in AI engineering, including machine learning algorithms, deep learning architectures (e.g., CNNs, RNNs, Transformers), and integrating models into SaaS environments.
Solid understanding of SaaS product lifecycle management, including customer-focused design, product-market fit, and post-launch optimization.
Excellent communication and collaboration skills, with the ability to work cross-functionally and drive SaaS product success.
Knowledge of cost-optimized AI deployment and cloud infrastructure, focusing on scalability and performance.

Job Title: Site Reliability Engineer (SRE)
Experience: 4+ Years
Work Location: Bangalore / Chennai / Pune / Gurgaon
Work Mode: Hybrid or Onsite (based on project need)
Domain Preference: Candidates with past experience working in shoe/footwear retail brands (e.g., Nike, Adidas, Puma) are highly preferred.
🛠️ Key Responsibilities
- Design, implement, and manage scalable, reliable, and secure infrastructure on AWS.
- Develop and maintain Python-based automation scripts for deployment, monitoring, and alerting.
- Monitor system performance, uptime, and overall health using tools like Prometheus, Grafana, or Datadog.
- Handle incident response, root cause analysis, and ensure proactive remediation of production issues.
- Define and implement Service Level Objectives (SLOs) and Error Budgets in alignment with business requirements.
- Build tools to improve system reliability, automate manual tasks, and enforce infrastructure consistency.
- Collaborate with development and DevOps teams to ensure robust CI/CD pipelines and safe deployments.
- Conduct chaos testing and participate in on-call rotations to maintain 24/7 application availability.
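The SLO and error-budget work listed above ultimately reduces to simple arithmetic over request counts; here is a minimal sketch (the 99.9% target and the traffic numbers are illustrative).

```python
def error_budget_remaining(slo_target: float,
                           total_requests: int,
                           failed_requests: int) -> float:
    """Fraction of the error budget still unspent for a request-based SLO.

    slo_target: e.g. 0.999 means at most 0.1% of requests may fail.
    Returns 1.0 when no budget is spent; 0.0 or below when exhausted.
    """
    budget = (1 - slo_target) * total_requests  # failures allowed this window
    if budget == 0:
        return 0.0 if failed_requests else 1.0
    return 1 - failed_requests / budget

# 1M requests against a 99.9% SLO allow 1,000 failures;
# 250 failures spend a quarter of the budget:
print(error_budget_remaining(0.999, 1_000_000, 250))  # ≈ 0.75
```

Teams typically freeze risky deploys when this value approaches zero, which is how the error budget ties release pace to reliability.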
✅ Must-Have Skills
- 4+ years of experience in Site Reliability Engineering or DevOps with a focus on reliability, monitoring, and automation.
- Strong programming skills in Python (mandatory).
- Hands-on experience with AWS cloud services (EC2, S3, Lambda, ECS/EKS, CloudWatch, etc.).
- Expertise in monitoring and alerting tools like Prometheus, Grafana, Datadog, CloudWatch, etc.
- Strong background in Linux-based systems and shell scripting.
- Experience implementing infrastructure as code using tools like Terraform or CloudFormation.
- Deep understanding of incident management, SLOs/SLIs, and postmortem practices.
- Prior working experience in footwear/retail brands such as Nike or similar is highly preferred.

🛠️ Key Responsibilities
- Design, build, and maintain scalable data pipelines using Python and Apache Spark (PySpark or Scala APIs)
- Develop and optimize ETL processes for batch and real-time data ingestion
- Collaborate with data scientists, analysts, and DevOps teams to support data-driven solutions
- Ensure data quality, integrity, and governance across all stages of the data lifecycle
- Implement data validation, monitoring, and alerting mechanisms for production pipelines
- Work with cloud platforms (AWS, GCP, or Azure) and tools like Airflow, Kafka, and Delta Lake
- Participate in code reviews, performance tuning, and documentation
🎓 Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- 3–6 years of experience in data engineering with a focus on Python and Spark
- Experience with distributed computing and handling large-scale datasets (10TB+)
- Familiarity with data security, PII handling, and compliance standards is a plus

Design, develop and maintain robust test automation frameworks for financial applications
Create detailed test plans, test cases, and test scripts based on business requirements and user stories
Execute functional, regression, integration, and API testing with a focus on financial data integrity
Validate complex financial calculations, transaction processing, and reporting functionalities
Collaborate with Business Analysts and development teams to understand requirements and ensure complete test coverage
Implement automated testing solutions within CI/CD pipelines for continuous delivery
Perform data validation testing against financial databases and data warehouses
Identify, document, and track defects through resolution using defect management tools
Verify compliance with financial regulations and industry standards

About the Role
We are looking for a Python Developer with expertise in data synchronization (ETL & Reverse ETL), automation workflows, AI functionality, and connectivity to work directly with a customer in Peliqan. In this role, you will be responsible for building seamless integrations, enabling AI-driven functionality, and ensuring data flows smoothly across various systems.
Key Responsibilities
- Build and maintain data sync pipelines (ETL & Reverse ETL) to ensure seamless data transfer between platforms.
- Develop automation workflows to streamline processes and improve operational efficiency.
- Implement AI-driven functionality, including AI-powered analytics, automation, and decision-making capabilities.
- Build and enhance connectivity between different data sources, APIs, and enterprise applications.
- Work closely with the customer to understand their technical needs and design tailored solutions in Peliqan.
- Optimize performance of data integrations and troubleshoot issues as they arise.
- Ensure security and compliance in data handling and integrations.
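The sync-pipeline responsibilities above reduce to an extract-transform-load loop. The sketch below is illustrative only: the `extract`, `transform`, and `load` callables stand in for whatever source API, mapping logic, and destination connector a real Peliqan integration would use, and the batching size is an assumption.

```python
def sync(extract, transform, load, batch_size=100):
    """Pull records from `extract`, apply `transform`, and push them
    to `load` in batches. Returns the number of records moved.

    Batching keeps memory bounded and reduces round trips to the
    destination system -- the core shape of both ETL and Reverse ETL.
    """
    batch = []
    moved = 0
    for record in extract():
        batch.append(transform(record))
        if len(batch) >= batch_size:
            load(batch)
            moved += len(batch)
            batch = []
    if batch:  # flush the final partial batch
        load(batch)
        moved += len(batch)
    return moved
```

For Reverse ETL the direction simply flips: `extract` reads from the warehouse and `load` writes back into an operational tool such as a CRM, with the same batching and error-handling concerns.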
Requirements
- Strong experience in Python and related libraries for data processing & automation.
- Expertise in ETL, Reverse ETL, and workflow automation tools.
- Experience working with APIs, data connectors, and integrations across various platforms.
- Familiarity with AI & machine learning concepts and their practical application in automation.
- Hands-on experience with Peliqan or similar integration/data automation platforms is a plus.
- Strong problem-solving skills and the ability to work directly with customers to define and implement solutions.
- Excellent communication and collaboration skills.
Preferred Qualifications
- Experience in SQL, NoSQL databases, and cloud platforms (AWS, GCP, Azure).
- Knowledge of data governance, security best practices, and performance optimization.
- Prior experience in customer-facing engineering roles.
If you’re a Python & Integration Engineer who loves working on cutting-edge AI, automation, and data connectivity projects, we’d love to hear from you!

The Assistant Professor in CSE will teach undergraduate and graduate courses, conduct independent and collaborative research, mentor students, and contribute to departmental and institutional service.


About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups, reshaping the tech education landscape by bridging the gap between industry needs and student readiness. With prestigious recognitions such as Technology Pioneer 2024 by the World Economic Forum and Forbes India 30 Under 30, NxtWave’s impact continues to grow rapidly across India.
Our flagship on-campus initiative, NxtWave Institute of Advanced Technologies (NIAT), offers a cutting-edge 4-year Computer Science program designed to groom the next generation of tech leaders, located in Hyderabad’s global tech corridor.
About the Role
As a PhD-level Software Development Instructor, you will play a critical role in building India’s most advanced undergraduate tech education ecosystem. You’ll be mentoring bright young minds through a curriculum that fuses rigorous academic principles with real-world software engineering practices. This is a high-impact leadership role that combines teaching, mentorship, research alignment, and curriculum innovation.
Key Responsibilities
- Deliver high-quality classroom instruction in programming, software engineering, and emerging technologies.
- Integrate research-backed pedagogy and industry-relevant practices into classroom delivery.
- Mentor students in academic, career, and project development goals.
- Take ownership of curriculum planning, enhancement, and delivery aligned with academic and industry excellence.
- Drive research-led content development, and contribute to innovation in teaching methodologies.
- Support capstone projects, hackathons, and collaborative research opportunities with industry.
- Foster a high-performance learning environment in classes of 70–100 students.
- Collaborate with cross-functional teams for continuous student development and program quality.
- Actively participate in faculty training, peer reviews, and academic audits.
Eligibility & Requirements
- Ph.D. in Computer Science, IT, or a closely related field from a recognized university.
- Strong academic and research orientation, preferably with publications or project contributions.
- Prior experience in teaching/training/mentoring at the undergraduate/postgraduate level is preferred.
- A deep commitment to education, student success, and continuous improvement.
Must-Have Skills
- Expertise in Python, Java, JavaScript, and advanced programming paradigms.
- Strong foundation in Data Structures, Algorithms, OOP, and Software Engineering principles.
- Excellent communication, classroom delivery, and presentation skills.
- Familiarity with academic content tools like Google Slides, Sheets, Docs.
- Passion for educating, mentoring, and shaping future developers.
Good to Have
- Industry experience or consulting background in software development or research-based roles.
- Proficiency in version control systems (e.g., Git) and agile methodologies.
- Understanding of AI/ML, Cloud Computing, DevOps, Web or Mobile Development.
- A drive to innovate in teaching, curriculum design, and student engagement.
Why Join Us?
- Be at the forefront of shaping India’s tech education revolution.
- Work alongside IIT/IISc alumni, ex-Amazon engineers, and passionate educators.
- Competitive compensation with strong growth potential.
- Create impact at scale by mentoring hundreds of future-ready tech leaders.