50+ Python Jobs in Bangalore (Bengaluru) | Python Job openings in Bangalore (Bengaluru)
Apply to 50+ Python Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.



In this role, you will:
- Own the development of secure backend services and responsive frontend experiences end-to-end using Go, React, and event-driven architectures
- Design and optimize database systems that handle complex manufacturing data (a mix of 3D and 2D) with PostgreSQL and AWS RDS
- Build security-first systems with proper authentication, authorization, and secret management practices
- Architect event-driven systems that process real-time manufacturing data and enable seamless workflows
- Work directly with engineers to understand their daily challenges and build tools that meaningfully improve their operations
- Ensure all systems are production-ready with comprehensive monitoring, logging, and CI/CD pipelines
Your background looks something like:
- 3-7+ years of engineering experience building full-stack applications (ideally in Go and React)
- Strong experience with event-driven architectures, message queues, and distributed systems
- Proven track record managing production databases including schema design and performance optimization
- Deep understanding of security best practices and production deployment strategies
- Experience with cloud platforms like AWS and containerization technologies
- Familiarity with self-hosting (we work with on-premises deployments a lot)
- Flexibility to work across multiple languages (Go, Python, React, and potentially Rust)
- [BONUS] You have worked with Electron and are familiar with deploying desktop apps on Windows
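As a quick illustration of the event-driven pattern this role centers on, here is a minimal in-process pub/sub sketch (in Python for brevity; the topic names and payloads are invented for illustration):

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal synchronous pub/sub bus illustrating event-driven design."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        # Fan the event out to every handler registered for the topic.
        for handler in self._subscribers[topic]:
            handler(payload)

# Hypothetical manufacturing example: one sensor event, two consumers.
bus = EventBus()
log: list[str] = []
bus.subscribe("part.measured", lambda e: log.append(f"stored {e['part_id']}"))
bus.subscribe("part.measured", lambda e: log.append(f"alerted {e['part_id']}"))
bus.publish("part.measured", {"part_id": "P-42", "diameter_mm": 19.98})
```

Real systems replace the in-process bus with a broker (Kafka, SNS/SQS, NATS), but the producer/consumer decoupling is the same.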


Required Skills & Qualifications
- 4+ years of professional experience in backend development with Python.
- Strong hands-on experience with FastAPI (or Flask/Django with migration experience).
- Familiarity with asynchronous programming in Python.
- Working knowledge of version control systems (Git).
- Good problem-solving and debugging skills.
- Strong communication and collaboration abilities.
- Solid background in backend development, RESTful API design, and scalable application development.
Shift: Night shift, 6:30 PM to 3:30 AM IST
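As a sketch of the asynchronous-programming requirement above, stdlib asyncio shows the core idea that frameworks like FastAPI build on (the lookup function here is a made-up stand-in for a real non-blocking I/O call):

```python
import asyncio

async def fetch_user(user_id: int) -> dict:
    # Stand-in for a non-blocking DB or HTTP call.
    await asyncio.sleep(0.01)
    return {"id": user_id, "name": f"user-{user_id}"}

async def main() -> list[dict]:
    # Run all three lookups concurrently instead of one after another.
    return list(await asyncio.gather(*(fetch_user(i) for i in range(3))))

users = asyncio.run(main())
```

`asyncio.gather` preserves argument order, so the results come back as `user-0`, `user-1`, `user-2` even though the calls overlap in time.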

Role Overview
Are you passionate about revolutionizing the world of cloud platforms and eager to make a lasting impact on the industry? Do you thrive on the challenge of crafting large-scale cloud services that further simplify cloud adoption for other businesses? Are you driven to create impeccably designed APIs that will delight thousands of developers and turn them into loyal advocates? Or maybe you're excited by the prospect of engineering a near-real-time, millisecond-latency event processing pipeline?
MontyCloud is seeking an experienced Principal Platform Engineer with extensive knowledge of SaaS platforms, AWS cloud, and a proven track record of building and shipping highly scalable SaaS platforms. The ideal candidate will have strong system architecture and design skills, as well as exceptional programming and platform development experience.
If you are a driven, innovative, and experienced engineer who is passionate about cloud technologies and looking to make a significant impact, we want to hear from you. Apply today to join our growing team!
Key Responsibilities:
- Lead the design and implementation of our Cloud Management Platform, ensuring its scalability, reliability, and performance.
- Collaborate with cross-functional teams to define system architecture, develop innovative solutions, and drive continuous improvements.
- Utilize your expertise in AWS and other cloud technologies to build and maintain cloud-native components and services.
- Serve as a technical mentor for the engineering team, fostering a culture of learning, collaboration, and innovation.
- Drive the adoption of best practices, design patterns, and emerging technologies in the cloud domain.
- Actively participate in code and design reviews, providing constructive feedback to team members.
- Ensure the successful delivery of high-quality software by defining and implementing effective development processes and methodologies.
- Communicate complex technical concepts effectively via well written technical documents and knowledge sessions to both technical and non-technical stakeholders.
Must Have Skills:
- Experience in software development, with a focus on SaaS platforms and AWS cloud technologies.
- Proven experience in designing, building, and maintaining highly scalable, performant, and resilient systems.
- Expertise in serverless architecture, event-driven design, distributed systems, and event streams.
- Experience in designing, building and shipping low-latency and high-performance APIs.
- Experience building with services such as Kafka, OpenSearch/Elasticsearch, AWS Kinesis, or Azure Streams.
- Strong programming skills in languages such as Python, Java, or Go.
- Experience working with infrastructure as code tools, such as Terraform or CloudFormation.
- Experience in building AI/ML systems and familiarity with Large Language Models is nice to have.
- Excellent verbal and written communication skills, with the ability to articulate complex concepts to diverse audiences.
- Demonstrated leadership experience, with the ability to inspire and motivate a team.
Good to Have
- Familiarity with containerization technologies, such as Docker and Kubernetes.
Experience:
- 12-15 years of experience in software development, with a focus on SaaS platforms and AWS cloud technologies.
- 8+ years of experience in building and shipping SaaS platforms
- 4+ years of experience in building platforms using Serverless Architecture
- 8+ years of experience in building cloud-native applications on AWS or Azure
- 8+ years of experience in building applications using either Test-Driven Development (TDD) or Behavior-Driven Development (BDD) methodologies.
Education
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Role Overview:
We are seeking a seasoned Lead Platform Engineer with a strong background in platform development and a proven track record of leading technology design and teams. The ideal candidate will have at least 8 years of overall experience, with a minimum of 5 years in relevant roles. This position entails owning module design and spearheading the implementation process alongside a team of talented platform engineers.
At MontyCloud, you'll have the opportunity to work on cutting-edge technology, shaping the future of our cloud management platform with your expertise. If you're passionate about building scalable, efficient, and innovative cloud solutions, we'd love to have you on our team.
Responsibilities:
- Lead the design and architecture of robust, scalable platform modules, ensuring alignment with business objectives and technical standards.
- Drive the implementation of platform solutions, collaborating closely with platform engineers and cross-functional teams to achieve project milestones.
- Mentor and guide a team of platform engineers, fostering an environment of growth and continuous improvement.
- Stay abreast of emerging technologies and industry trends, incorporating them into the platform to enhance functionality and user experience.
- Ensure the reliability and security of the platform through comprehensive testing and adherence to best practices.
- Collaborate with senior leadership to set technical strategy and goals for the platform engineering team.
Must have skills:
- Expertise in Python programming, with a solid foundation in writing clean, efficient, and scalable code.
- Proven experience in serverless application development, designing and implementing microservices, and working within event-driven architectures.
- Demonstrated experience in building and shipping high-quality SaaS platforms/applications on AWS, showcasing a portfolio of successful deployments.
- Comprehensive understanding of cloud computing concepts, AWS architectural best practices, and familiarity with a range of AWS services, including but not limited to Lambda, RDS, DynamoDB, and API Gateway.
- Exceptional problem-solving skills, with a proven ability to optimize complex systems for efficiency and scalability.
- Excellent communication skills, with a track record of effective collaboration with team members and successful engagement with stakeholders across various levels.
- Previous experience leading technology design and engineering teams, with a focus on mentoring, guiding, and driving the team towards achieving project milestones and technical excellence.
Good to Have skills:
- AWS Certified Solutions Architect, AWS Certified Developer, or other relevant cloud development certifications.
- Experience with the AWS Boto3 SDK for Python.
- Exposure to other cloud platforms such as Azure or GCP.
- Knowledge of containerization and orchestration technologies, such as Docker and Kubernetes.
Experience:
- Total Experience: 8+ years of experience in Software or Platform Engineering
- Minimum 5 years of experience in roles directly relevant to platform development and team leadership
- Minimum 4 years of experience in developing applications with Python
- Minimum 3 years of experience in serverless application development
- Minimum 3 years of experience in building event-driven architectures
Education:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Skills :
Expertise in Selenium WebDriver for web application testing, design patterns including the Page Object Model (POM), and cross-browser automation.
Experience with different test automation approaches (including BDD, hybrid, and keyword-driven).
Proficiency in Java or Python for automation scripting and framework development.
Experience with API testing frameworks such as REST Assured, Postman, or SoapUI, and proficiency in testing RESTful and SOAP web services.
Experience integrating test automation with DevOps pipelines and CI/CD.
Knowledge of other testing frameworks and tools is an advantage (Katalon, Cypress, Protractor, etc.).
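The Page Object Model mentioned above can be sketched without a real browser; this illustrative snippet swaps a Selenium WebDriver for a fake driver so the pattern itself is visible (all locators and class names are invented):

```python
class LoginPage:
    """POM: locators and page actions live with the page, not the test."""
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver  # a selenium.webdriver instance in real use

    def login(self, user: str, password: str) -> None:
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# Fake driver used only to demonstrate the pattern without a browser.
class _FakeElement:
    def __init__(self, calls): self.calls = calls
    def send_keys(self, text): self.calls.append(("keys", text))
    def click(self): self.calls.append(("click",))

class _FakeDriver:
    def __init__(self): self.calls = []
    def find_element(self, by, value): return _FakeElement(self.calls)

driver = _FakeDriver()
LoginPage(driver).login("alice", "s3cret")
```

Because tests talk to `LoginPage` rather than raw locators, a UI change only touches one class instead of every test.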

Role: DevOps Engineer
Exp: 4 - 7 Years
CTC: up to 28 LPA
Key Responsibilities
• Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)
• Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)
• Develop and maintain CI/CD pipelines for multiple services and environments
• Manage infrastructure as code using tools like Terraform and/or Pulumi
• Automate operations with Python and shell scripting for deployment, monitoring, and maintenance
• Ensure high availability and performance of production systems and troubleshoot incidents effectively
• Monitor system metrics and implement observability best practices using tools like Prometheus, Grafana, ELK, etc.
• Collaborate with development, security, and product teams to align infrastructure with business needs
• Apply best practices in cloud networking, Linux administration, and configuration management
• Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)
• Participate in on-call rotations and incident response activities
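One small example of the Python operations automation this role involves: a retry-with-exponential-backoff health check, a common pattern in deployment and monitoring scripts (the probe here is simulated rather than a real HTTP call):

```python
import time

def wait_until_healthy(probe, retries: int = 5, base_delay: float = 0.01) -> bool:
    """Retry a health probe with exponential backoff; return True once it passes."""
    for attempt in range(retries):
        if probe():
            return True
        time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
    return False

# Simulated service that becomes healthy on the third probe.
state = {"checks": 0}
def probe() -> bool:
    state["checks"] += 1
    return state["checks"] >= 3

ok = wait_until_healthy(probe)
```

In production the probe would hit a `/healthz` endpoint, and the delays would be seconds rather than milliseconds.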
If interested, kindly share your updated resume on 82008 31681.

Role Details:
We are seeking a highly skilled and experienced Automation Engineer to join our dynamic team. You will play a key role in designing, implementing, and maintaining our automation testing framework, with a primary focus on Selenium, Python, and BDD.
Position: Automation Engineer
Must-Have Skills & Qualifications:
- Bachelor’s degree in Engineering (Computer Science, IT, or related field)
- Hands-on experience with Selenium using Python and BDD framework
- Strong foundation in Manual Testing
Good-to-Have Skills:
- Allure
- Boto3
- Appium
Benefits:
- Competitive salary & comprehensive benefits package
- Work with cutting-edge technologies & industry-leading experts
- Flexible hybrid work environment
- Professional development & continuous learning opportunities
- Dynamic, collaborative culture with career growth paths

Role Overview
We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.
You’ll also play a mentorship role and help establish strong engineering practices across our data projects.
Key Responsibilities
- Design and develop large-scale, distributed data pipelines (batch and streaming)
- Implement scalable data models, warehouses/lakehouses, and data lakes
- Translate business requirements into technical data solutions
- Optimize data pipelines for performance and reliability
- Ensure code is clean, modular, tested, and documented
- Contribute to architecture, tooling decisions, and platform setup
- Review code/design and mentor junior engineers
Must-Have Skills
- Strong programming skills in Python and advanced SQL
- Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing
- Hands-on experience with frameworks like Apache Spark, Flink, etc.
- Experience with orchestration tools like Airflow
- Familiarity with CI/CD pipelines and Git
- Ability to debug and scale data pipelines in production
Preferred Skills
- Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
- Exposure to Databricks, dbt, or similar tools
- Understanding of data governance, quality frameworks, and observability
- Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus
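The core idea behind orchestration tools like Airflow, running tasks in dependency order, can be sketched with the standard library alone (task names are illustrative):

```python
from graphlib import TopologicalSorter

# task -> set of upstream dependencies, much like an Airflow DAG definition.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

# static_order yields a valid execution order: dependencies always run first.
order = list(TopologicalSorter(dag).static_order())
```

Airflow adds scheduling, retries, and observability on top, but a DAG of tasks resolved by topological sort is the foundation.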
What We’re Looking For
- Problem-solver with strong analytical skills and attention to detail
- Fast learner who can adapt across tools, tech stacks, and domains
- Comfortable working in fast-paced, client-facing environments
- Willingness to travel within India when required

Job Summary:
We are looking for a highly skilled and experienced Data Engineer with deep expertise in Airflow, dbt, Python, and Snowflake. The ideal candidate will be responsible for designing, building, and managing scalable data pipelines and transformation frameworks to enable robust data workflows across the organization.
Key Responsibilities:
- Design and implement scalable ETL/ELT pipelines using Apache Airflow for orchestration.
- Develop modular and maintainable data transformation models using dbt.
- Write high-performance data processing scripts and automation using Python.
- Build and maintain data models and pipelines on Snowflake.
- Collaborate with data analysts, data scientists, and business teams to deliver clean, reliable, and timely data.
- Monitor and optimize pipeline performance and troubleshoot issues proactively.
- Follow best practices in version control, testing, and CI/CD for data projects.
Must-Have Skills:
- Strong hands-on experience with Apache Airflow for scheduling and orchestrating data workflows.
- Proficiency in dbt (data build tool) for building scalable and testable data models.
- Expert-level skills in Python for data processing and automation.
- Solid experience with Snowflake, including SQL performance tuning, data modeling, and warehouse management.
- Strong understanding of data engineering best practices including modularity, testing, and deployment.
Good to Have:
- Experience working with cloud platforms (AWS/GCP/Azure).
- Familiarity with CI/CD pipelines for data (e.g., GitHub Actions, GitLab CI).
- Exposure to modern data stack tools (e.g., Fivetran, Stitch, Looker).
- Knowledge of data security and governance best practices.
Note : One face-to-face (F2F) round is mandatory, and as per the process, you will need to visit the office for this.
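As a toy illustration of the ELT pattern described above, the snippet below uses stdlib sqlite3 as a stand-in for Snowflake and materializes a SELECT into a table, roughly the way a dbt model does (table names and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "Load" raw rows first (the EL of ELT)...
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 120.0, "paid"), (2, 80.0, "refunded"), (3, 45.5, "paid")],
)
# ...then "transform" in the warehouse: a dbt-style model is essentially a
# SELECT that gets materialized as a table or view.
conn.execute(
    "CREATE TABLE fct_paid_orders AS "
    "SELECT id, amount FROM raw_orders WHERE status = 'paid'"
)
total = conn.execute("SELECT SUM(amount) FROM fct_paid_orders").fetchone()[0]
```

dbt's value-add over this raw SQL is dependency resolution between models, testing, and documentation, not the transformation itself.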

We’re looking for a passionate Data & Automation Engineer to join our team and assist in managing and processing large volumes of structured and unstructured data. You'll work closely with our engineering and product teams to extract, transform, and load (ETL) data, automate data workflows, and format data for different use cases.
Key Responsibilities:
- Write efficient scripts using Python and Node.js to process and manipulate data
- Scrape and extract data from public and private sources (APIs, websites, files)
- Format and clean raw datasets for consistency and usability
- Upload data to various databases, including MongoDB and other storage solutions
- Create and maintain data pipelines and automation scripts
- Document processes, scripts, and schema changes clearly
- Collaborate with backend and product teams to support data-related needs
Skills Required:
- Proficiency in Python (especially for data manipulation using libraries like pandas, requests, etc.)
- Experience with Node.js for backend tasks or scripting
- Familiarity with MongoDB and understanding of NoSQL databases
- Basic knowledge of web scraping tools (e.g., BeautifulSoup, Puppeteer, or Cheerio)
- Understanding of JSON, APIs, and data formatting best practices
- Attention to detail, debugging skills, and a data-driven mindset
Good to Have:
- Experience with data visualization or reporting tools
- Knowledge of other databases like PostgreSQL or Redis
- Familiarity with version control (Git) and working in agile teams
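A small example of the "format and clean raw datasets" responsibility above, using only the standard library (field names and cleaning rules are illustrative):

```python
import json

def clean_records(raw: list[dict]) -> list[dict]:
    """Normalize messy scraped rows: trim strings, coerce types, drop empties."""
    cleaned = []
    for row in raw:
        name = (row.get("name") or "").strip()
        if not name:
            continue  # skip rows with no usable name
        cleaned.append({
            "name": name.title(),
            # Scraped prices often arrive as strings with thousands separators.
            "price": float(str(row.get("price", "0")).replace(",", "")),
        })
    return cleaned

raw = json.loads('[{"name": "  widget a ", "price": "1,299.50"}, {"name": ""}]')
rows = clean_records(raw)
```

In practice pandas would handle larger datasets, but the normalize-validate-drop flow is the same.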


Looking for a Senior Full-stack Engineer (Backend Heavy) with 5+ years of experience for a research-focused, product-based, US-based startup.
AI Assistant for Research using state-of-the-art language models (AI Agent for Academic Research)
At SciSpace, we're using language models to automate and streamline research workflows from start to finish. And the best part? We're already making waves in the industry, with a whopping 5 million users on board as of November 2025! Our users love us too, with a 40% MoM retention rate and 10% of them using our app more than once a week! We're growing by more than 50% every month, all thanks to our awesome users spreading the word (see it yourself on Twitter). And with almost weekly feature launches since our inception, we're constantly pushing the boundaries of what's possible. Our team of experts in design, frontend, fullstack engineering, and machine learning is already in place, but we're always on the lookout for new talent to help us take things to the next level. Our user base is super engaged and always eager to provide feedback, making SciSpace one of the most advanced applications of language models out there.
We are looking for insatiably curious, always-learning SDE-2 Full-stack Engineers. You could get a chance to work on the most important and challenging problems at scale.
Responsibilities:
* Manage products as part of the SciSpace product suite.
* Partner with product owners in designing software that becomes part of researchers’ lives
* Model real-world scenarios into code that can build the SciSpace platform
* Test code that you write and continuously improve practices at SciSpace
* Arrive at technology decisions after extensive debates with other engineers
* Manage large projects from conceptualisation, all the way through deployments
* Evolve an ecosystem of tools and libraries that make it possible for SciSpace to provide reliable, always-on, performant services to our users
* Partner with other engineers in developing an architecture that is resilient to changes in product requirements and usage
* Work on the user-interface side and deliver a snappy, enjoyable experience to your users
Our Ideal Candidate would have:
* Strong grasp of one high-level language like Python, JavaScript, etc.
* Strong grasp of front-end HTML/CSS, non-trivial browser-side JavaScript
* General awareness of SQL and database design concepts
* Solid understanding of testing fundamentals
* Strong communication skills
* Prior experience in managing and executing technology products.
Bonus:
* Prior experience working with high-volume, always-available web applications
* Experience in ElasticSearch.
* Experience in distributed systems.
* Experience working with a Start-up is a plus point.

What You’ll Do:
- Architect & build our core backend using modern microservices patterns
- Develop intelligent AI/ML-driven systems for financial document processing at scale
- Own database design (SQL + NoSQL) for speed, reliability, and compliance
- Integrate vector search, caching layers, and pipelines to power real-time insights
- Ensure security, compliance, and data privacy at every layer of the stack
- Collaborate directly with founders to translate vision into shippable features
- Set engineering standards & culture for future hires
What You Bring:
Core Skills:
- Deep expertise in Python (Django, FastAPI, or Flask)
- Strong experience in SQL & NoSQL database architecture
- Hands-on with vector databases and caching systems (e.g., Redis)
- Proven track record building scalable microservices
- Strong grounding in security best practices for sensitive data
Experience:
- 1+ years building production-grade backend systems
- History of owning technical decisions that impacted product direction
- Ability to solve complex, high-scale technical problems
Bonus Points For:
- Experience building LLM-powered applications at scale
- Background in enterprise SaaS, or financial software
- Early-stage startup experience
- Familiarity with financial reporting/accounting concepts
Why Join Us:
- Founding team equity with significant upside
- Direct influence on product architecture & company direction
- Work with cutting-edge AI/ML tech on real-world financial data
- Backed by top-tier VC
- Join at ground zero and help shape our engineering culture

About the Role :
We are seeking an experienced Python Backend Lead to design, develop, and optimize scalable backend solutions.
The role involves working with large-scale data, building efficient APIs, integrating middleware solutions, and ensuring high performance and reliability.
You will lead a team of developers while also contributing hands-on to coding, design, and architecture.
Mandatory Skills : Python (Pandas, NumPy, Matplotlib, Plotly), FastAPI/Flask, SQL & NoSQL (MongoDB, CRDB, Postgres), middleware tools (MuleSoft/BizTalk), CI/CD, RESTful APIs, OOP, OOD, DS & Algo, Design Patterns.
Key Responsibilities :
1. Lead backend development projects using Python (FastAPI/Flask).
2. Design, build, and maintain scalable APIs and microservices.
3. Work with SQL and NoSQL databases (MongoDB, CRDB, Postgres).
4. Implement and optimize middleware integrations (Mulesoft, BizTalk).
5. Ensure smooth deployment using CI/CD pipelines.
6. Apply Object-Oriented Programming (OOP), Design Patterns, and Data Structures & Algorithms to deliver high-quality solutions.
7. Collaborate with cross-functional teams (frontend, DevOps, product) to deliver business objectives.
8. Mentor and guide junior developers, ensuring adherence to best practices and coding standards.
Required Skills :
1. Strong proficiency in Python with hands-on experience in Pandas, NumPy, Matplotlib, Plotly.
2. Expertise in the FastAPI / Flask frameworks.
3. Solid knowledge of SQL & NoSQL databases (MongoDB, CRDB, Postgres).
4. Experience with middleware tools such as Mulesoft or BizTalk.
5. Proficiency in RESTful APIs, Web Services, and CI/CD pipelines.
6. Strong understanding of OOP, OOD, Design Patterns, and DS & Algo.
7. Excellent problem-solving, debugging, and optimization skills.
8. Prior experience in leading teams is highly desirable.
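As one illustration of the design-pattern fluency asked for above, here is a minimal strategy-pattern sketch in Python (the pricing rules are invented for illustration):

```python
from typing import Callable

# Strategy pattern: swap the pricing rule without touching the caller.
PricingStrategy = Callable[[float], float]

def standard(amount: float) -> float:
    return amount

def bulk_discount(amount: float) -> float:
    # 10% off orders of 1000 or more.
    return amount * 0.9 if amount >= 1000 else amount

def quote(amount: float, strategy: PricingStrategy = standard) -> float:
    # The caller depends only on the strategy's signature, not its logic.
    return round(strategy(amount), 2)
```

Because strategies are plain callables, adding a new rule means writing one function; no existing code changes.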

Job Title : Python Backend Lead / Senior Python Developer
Experience : 6 to 10 Years
Location : Bangalore (CV Raman Nagar)
Openings : 8
Interview Rounds : 1 Virtual + 1 In-Person (Face-to-Face with Client)
Note : Only local Bangalore candidates will be considered
Senior Python Django Developer
Experience: Back-end development: 6 years (Required)
Location: Bangalore/ Bhopal
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
Requirements:
- 6 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
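A toy sketch of the reconciliation work mentioned in the responsibilities, matching internal ledger entries against payment-gateway records (all record shapes and values are invented):

```python
def reconcile(ledger: list[dict], gateway: list[dict]) -> dict:
    """Match ledger entries to gateway records by transaction id and amount."""
    gw = {(t["txn_id"], t["amount"]) for t in gateway}
    matched = [t for t in ledger if (t["txn_id"], t["amount"]) in gw]
    unmatched = [t for t in ledger if (t["txn_id"], t["amount"]) not in gw]
    return {"matched": matched, "unmatched": unmatched}

ledger = [{"txn_id": "T1", "amount": 500}, {"txn_id": "T2", "amount": 250}]
gateway = [{"txn_id": "T1", "amount": 500}, {"txn_id": "T2", "amount": 200}]
result = reconcile(ledger, gateway)
```

Here T2 lands in `unmatched` because the amounts disagree, exactly the kind of discrepancy a settlement job escalates for investigation.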
Job Types: Full-time, Permanent
Schedule:
- Day shift
Supplemental Pay:
- Performance bonus
- Yearly bonus
Ability to commute/relocate:
- JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)


- 4+ years of experience
- Proficiency in Python programming
- Experience with Python service development (REST APIs / Flask)
- Basic knowledge of front-end development
- Basic knowledge of data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (web scraping)
- Knowledge of SQL and NoSQL databases
- Familiarity with cloud (Azure/AWS) technologies



Title: Quantitative Developer
Location : Mumbai
Candidates preferred with Master's
Who We Are
At Dolat Capital, we are a collective of traders, puzzle solvers, and tech enthusiasts passionate about decoding the intricacies of financial markets. From navigating volatile trading conditions with precision to continuously refining cutting-edge technologies and quantitative strategies, our work thrives at the intersection of finance and engineering.
We operate a robust, ultra-low latency infrastructure built for market-making and active trading across Equities, Futures, and Options—with some of the highest fill rates in the industry. If you're excited by technology, trading, and critical thinking, this is the place to evolve your skills into world class capabilities.
What You Will Do
This role offers a unique opportunity to work across both quantitative development and high frequency trading. You'll engineer trading systems, design and implement algorithmic strategies, and directly participate in live trading execution and strategy enhancement.
1. Quantitative Strategy & Trading Execution
- Design, implement, and optimize quantitative strategies for trading derivatives, index options, and ETFs
- Trade across options, equities, and futures, using proprietary HFT platforms
- Monitor and manage PnL performance, targeting Sharpe ratios of 6+
- Stay proactive in identifying market opportunities and inefficiencies in real-time HFT environments
- Analyze market behavior, particularly in APAC indices, to adjust models and positions dynamically
2. Trading Systems Development
- Build and enhance low-latency, high-throughput trading systems
- Develop tools to simulate trading strategies and access historical market data
- Design performance-optimized data structures and algorithms for fast execution
- Implement real-time risk management and performance tracking systems
3. Algorithmic and Quantitative Analysis
- Collaborate with researchers and traders to integrate strategies into live environments
- Use statistical methods and data-driven analysis to validate and refine models
- Work with large-scale HFT tick data using Python / C++
4. AI/ML Integration
- Develop and train AI/ML models for market prediction, signal detection, and strategy enhancement
- Analyze large datasets to detect patterns and alpha signals
5. System & Network Optimization
- Optimize distributed and concurrent systems for high-transaction throughput
- Enhance platform performance through network and systems programming
- Utilize deep knowledge of TCP/UDP and network protocols
6. Collaboration & Mentorship
- Collaborate cross-functionally with traders, engineers, and data scientists
- Represent Dolat in campus recruitment and industry events as a technical mentor
What We Are Looking For:
- Strong foundation in data structures, algorithms, and object-oriented programming (C++).
- Experience with AI/ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Hands-on experience in systems programming within a Linux environment.
- Proficient, hands-on programming in Python/C++.
- Familiarity with distributed computing and high-concurrency systems.
- Knowledge of network programming, including TCP/UDP protocols.
- Strong analytical and problem-solving skills.
- A passion for technology-driven solutions in the financial markets.

- Demonstrated experience as a Python developer
- Good understanding and practical experience with Python frameworks including Django, Flask, and Bottle
- Proficient with Amazon Web Services and experienced in working with APIs
- Solid understanding of SQL databases such as MySQL
- Experience and knowledge of JavaScript is a benefit
- Highly skilled with attention to detail
- Good mentoring and leadership abilities
- Excellent communication skills
- Ability to prioritize and manage own workload
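As a rough illustration of the framework-plus-database work this role describes, the core of an API handler is a query plus JSON serialization. The sketch below uses only the standard library (`sqlite3` standing in for MySQL) so it runs anywhere; the table and handler names are hypothetical, and a real service would sit behind Flask or Django routing:

```python
import json
import sqlite3

def get_users(conn):
    """Hypothetical API handler body: query a users table and
    serialize the rows to a JSON response payload."""
    rows = conn.execute("SELECT id, name FROM users ORDER BY id").fetchall()
    return json.dumps([{"id": r[0], "name": r[1]} for r in rows])

# In-memory database standing in for a production MySQL instance
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("asha",), ("ravi",)])
payload = get_users(conn)
```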
We are looking for a highly skilled and experienced Senior IT Infrastructure Automation Engineer to join our dynamic team. This critical role involves designing, developing, and implementing automation solutions that streamline IT operations, enhance reliability, and improve scalability across on-premises, cloud, and hybrid environments. You will lead initiatives that reduce manual effort, increase consistency, and support critical infrastructure systems while collaborating closely with cross-functional teams.
Must Have Skills:
Scripting Languages:
PowerShell (advanced modules)
Python (advanced scripting & modules)
Bash (for Linux automation)
Configuration & Orchestration:
Strong expertise in Ansible
Operating Systems:
Windows Server (advanced)
Linux (Ubuntu, CentOS)
Virtualization:
VMware (advanced knowledge)
Nutanix (working knowledge)
Integration: REST APIs for system integrations
Good to Have Skills:
DevOps tools: Terraform, Kubernetes
Cloud Platforms: AWS, Azure
Citrix Cloud & NetScaler experience
Commvault backup automation
Endpoint management with BigFix, JAMF
Networking: routing, firewalls, ACLs
Database administration and scripting
Basic web development (Ruby, PHP, Python, JavaScript)
Job Description:
Automation Strategy & Development: Design and implement automation workflows for routine IT operations across Windows and Linux environments (e.g., user management, patching, service configuration). Create and manage scripts for: Active Directory (user/group management, GPOs, security settings), DNS & DHCP management. Develop custom automation tools using Python, PowerShell, and Bash to support various infrastructure needs.
Virtualization & Cloud Automation: Automate provisioning, scaling, and monitoring for VMware and Nutanix environments. Develop automation for Citrix Cloud and NetScaler for tasks like app publishing and load balancing. Integrate and automate cloud components (AWS or Azure) to support a hybrid infrastructure.
Monitoring & Backup Reliability: Implement automated LogicMonitor solutions for proactive issue detection. Build automation scripts for Commvault to manage backup and recovery, including reporting. Create monitoring and alerting tools using Python, PowerShell, or Bash.
Integration & Tooling: Use REST APIs to integrate third-party platforms (e.g., ServiceNow, LogicMonitor, NetScaler). Leverage Ansible for configuration management across Windows and Linux. (Preferred) Use DevOps tools such as Terraform and Kubernetes for provisioning and orchestration.
Endpoint Management & Security: Automate endpoint patching, software deployment, and asset tracking using tools like BigFix, JAMF, and Active Directory. Collaborate with cybersecurity teams to automate threat response and enforce compliance standards.
Documentation & Collaboration: Maintain clear and accurate documentation for all automation solutions, procedures, and systems. Keep the CMDB up to date for server and workstation deployments. Provide guidance and support to other administrators using automation tools. Record project and support activities using ServiceNow or similar platforms. Participate in regular team and stakeholder meetings to share updates and gather feedback.
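Much of the REST-API integration work above reduces to building well-formed, auditable request payloads before handing them to an authenticated HTTP client. A minimal sketch, assuming a hypothetical user-provisioning endpoint (the field names are illustrative, not any real ServiceNow or LogicMonitor schema):

```python
import json

def build_user_payload(username, groups, enabled=True):
    """Assemble a JSON body for a hypothetical user-provisioning API call."""
    return json.dumps({
        "user": username,
        "memberOf": sorted(groups),  # deterministic ordering for audits/diffs
        "enabled": enabled,
    }, sort_keys=True)

body = build_user_payload("jdoe", {"vpn-users", "linux-admins"})
# In production this body would be POSTed via an authenticated HTTP session.
```

Sorting keys and group lists keeps payloads diffable, which matters when automation runs are reviewed against a CMDB.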


We are seeking a highly skilled Qt/QML Engineer to design and develop advanced GUIs for aerospace applications. The role requires working closely with system architects, avionics software engineers, and mission systems experts to create reliable, intuitive, and real-time UIs for mission-critical systems such as UAV ground control stations and cockpit displays.
Key Responsibilities
- Design, develop, and maintain high-performance UI applications using Qt/QML (Qt Quick, QML, C++).
- Translate system requirements into responsive, interactive, and user-friendly interfaces.
- Integrate UI components with real-time data streams from avionics systems, UAVs, or mission control software.
- Collaborate with aerospace engineers to ensure compliance with DO-178C or MIL-STD guidelines where applicable.
- Optimise application performance for low-latency visualisation in mission-critical environments.
- Implement data visualisation (raster and vector maps, telemetry, flight parameters, mission planning overlays).
- Write clean, testable, and maintainable code while adhering to aerospace software standards.
- Work with cross-functional teams (system engineers, hardware engineers, test teams) to validate UI against operational requirements.
- Support debugging, simulation, and testing activities, including hardware-in-the-loop (HIL) setups.
Required Qualifications
- Bachelor’s / Master’s degree in Computer Science, Software Engineering, or related field.
- 1-3 years of experience in developing Qt/QML-based applications (Qt Quick, QML, Qt Widgets).
- Strong proficiency in C++ (11/14/17) and object-oriented programming.
- Experience integrating UI with real-time data sources (TCP/IP, UDP, serial, CAN, DDS, etc.).
- Knowledge of multithreading, performance optimisation, and memory management.
- Familiarity with aerospace/automotive domain software practices or mission-critical systems.
- Good understanding of UX principles for operator consoles and mission planning systems.
- Strong problem-solving, debugging, and communication skills.
Desirable Skills
- Experience with GIS/Mapping libraries (OpenSceneGraph, Cesium, Marble, etc.).
- Knowledge of OpenGL, Vulkan, or 3D visualisation frameworks.
- Exposure to DO-178C or aerospace software compliance.
- Familiarity with UAV ground control software (QGroundControl, Mission Planner, etc.) or similar mission systems.
- Experience with Linux and cross-platform development (Windows/Linux).
- Scripting knowledge in Python for tooling and automation.
- Background in defence, aerospace, automotive or embedded systems domain.
What We Offer
- Opportunity to work on cutting-edge aerospace and defence technologies.
- Collaborative and innovation-driven work culture.
- Exposure to real-world avionics and mission systems.
- Growth opportunities in autonomy, AI/ML for aerospace, and avionics UI systems.

Description
Collaborate closely with various stakeholders to gain a deep understanding of requirements and use cases. Develop detailed test plans, tools, and utilities to comprehensively test RtBrick products and features. Work closely with developers to review functional and design specifications, ensuring a thorough testing perspective. Create detailed feature test plans, design test bed configurations, and establish complex test setups based on project requirements. Develop Python and robot scripts for automated testing. Assist development engineers in diagnosing product defects, and actively participate in customer calls to troubleshoot issues, gather data, and communicate resolutions and fixes. Join our team and be at the forefront of innovation in technology.
Requirements
- Strong testing experience in any of the Layer-3 unicast routing protocols (e.g. OSPF, BGP, IS-IS), MPLS signalling protocols (e.g. RSVP, LDP), Layer-3 VPNs, Layer-2 VPNs, VPLS, Multicast VPN, or EVPN
- Hands-on experience with scripting languages or Python programming to test system/application software (SWIG)
- Ability to scope and develop test cases for a given requirement including Scale/Performance testing in a distributed asynchronous environment
- Experience with "Robot Framework" for automation, RESTful API is a plus
- EC/IS/CS degree with a networking background and 2-6 years of related experience is required
- Strong written and verbal communication skills
- Able to plan and execute tasks with minimal supervision
- Team-player, can-do attitude, will work well in a group environment while being able to contribute well on an individual basis
Responsibilities
- Collaborate closely with stakeholders, gaining insights into their unique requirements and use cases.
- Engineer comprehensive test plans, craft specialized tools and utilities for in-depth feature assessments.
- Provide a critical testing perspective by thoroughly evaluating documents like functional specs and design specs.
- Create exhaustive feature test plans and innovative test bed designs tailored to project needs.
- Configure intricate test environments, aligning them with project-specific parameters.
- Develop Python and Robot Framework scripts, automating key testing processes.
- Aid development engineers in diagnosing and resolving product defects.
- Engage in customer calls, actively participating in issue resolution and effectively communicating fixes.
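Test utilities of the kind described above often start as small Python helpers that are later wrapped as Robot Framework keywords. A minimal sketch of a convergence check (the prefixes are illustrative; a real harness would pull the received table from the device under test):

```python
def missing_routes(expected, received):
    """Return the expected prefixes absent from the received routing
    table, e.g. to verify protocol convergence after a scale test."""
    return sorted(set(expected) - set(received))

expected = ["10.0.0.0/24", "10.0.1.0/24", "192.168.1.0/24"]
received = ["10.0.0.0/24", "192.168.1.0/24"]
gaps = missing_routes(expected, received)  # prefixes still missing
```

Keeping the comparison logic in plain Python makes it reusable from both ad-hoc scripts and Robot Framework keyword libraries.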
Benefits
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Why Join RtBrick
Enjoy the excitement of a start-up without the risk!
We're revolutionizing the Internet's backbone by using cutting-edge software development techniques. The internet and, more specifically, broadband networks are among the world's most critical technologies, which billions of people rely on every day. RtBrick is transforming the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure and distributed edge network functions. This shift mirrors transformations seen in computing and cloud technologies, marking the most profound change in networking since the inception of IP technology.
We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a devops philosophy, and warehouse scale tools to drive innovation.
And although RtBrick is a young innovative company, RtBrick stands on solid financial ground: we are already cash-flow positive, backed by major telco investors like Swisscom Ventures and T-Capital, and our solutions are actively deployed by Tier-1 telcos including Deutsche Telekom (Europe's largest carrier), Regional ISPs and City ISPs—with expanding operations across Europe, North America and Asia.
Joining RtBrick offers you the unique thrill of a startup environment, coupled with the security that comes from working in a business with substantial market presence and significant revenue streams.
We'd love you to come and join us so why don't you embrace the opportunity to be part of a team that's not just participating in the market but actively shaping the future of telecommunications worldwide.

Role: DevOps Engineer
Exp: 4 - 7 Years
CTC: up to 28 LPA
Key Responsibilities
• Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)
• Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)
• Develop and maintain CI/CD pipelines for multiple services and environments
• Manage infrastructure as code using tools like Terraform and/or Pulumi
• Automate operations with Python and shell scripting for deployment, monitoring, and maintenance
• Ensure high availability and performance of production systems and troubleshoot incidents effectively
• Monitor system metrics and implement observability best practices using tools like Prometheus, Grafana, ELK, etc.
• Collaborate with development, security, and product teams to align infrastructure with business needs
• Apply best practices in cloud networking, Linux administration, and configuration management
• Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)
• Participate in on-call rotations and incident response activities
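The observability work above usually involves percentile-based latency checks against an SLO. A minimal nearest-rank sketch (the latency samples are hypothetical; in practice these would come from Prometheus or a similar metrics store):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile, e.g. for p95 latency SLO checks."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical request latencies in milliseconds
latencies = [12, 15, 11, 200, 14, 13, 16, 12, 15, 14]
p95 = percentile(latencies, 95)
```

Note how a single outlier dominates the p95 here, which is why percentiles rather than means drive alerting thresholds.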


We are seeking a skilled and detail-oriented SRE Release Engineer to lead and streamline the CI/CD pipeline for our C and Python codebase. You will be responsible for coordinating, automating, and validating biweekly production releases, ensuring operational stability, high deployment velocity, and system reliability.
Requirements
● Bachelor’s degree in Computer Science, Engineering, or related field.
● 3+ years in SRE, DevOps, or release engineering roles.
● Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
● Experience automating deployments for C and Python applications.
● Strong understanding of Git version control, merge/rebase strategies, tagging, and submodules (if used).
● Familiarity with containerization (Docker) and deployment orchestration (e.g.,
Kubernetes, Ansible, or Terraform).
● Solid scripting experience (Python, Bash, or similar).
● Understanding of observability, monitoring, and incident response tooling (e.g.,Prometheus, Grafana, ELK, Sentry).
Preferred Skills
● Experience with release coordination in data networking environments
● Familiarity with build tools like Make, CMake, or Bazel.
● Exposure to artifact management systems (e.g., Artifactory, Nexus).
● Experience deploying to Linux production systems with service uptime guarantees.
Responsibilities
● Own the release process: Plan, coordinate, and execute biweekly software releases across multiple services.
● Automate release pipelines: Build and maintain CI/CD workflows using tools such as GitHub Actions, Jenkins, or GitLab CI.
● Version control: Manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
● Integrate testing frameworks: Ensure automated test coverage (unit, integration,regression) is enforced pre-release.
● Release validation: Develop pre-release verification tools/scripts to validate build integrity and backward compatibility.
● Deployment strategy: Implement and refine blue/green, rolling, or canary deployments in staging and production environments.
● Incident readiness: Partner with SREs to ensure rollback strategies, monitoring, and alerting are release-aware.
● Collaboration: Work closely with developers, QA, and product teams to align on release timelines and feature readiness.
Success Metrics
● Achieve >95% release success rate with minimal hotfix rollbacks.
● Reduce mean release deployment time by 30% within 6 months.
● Maintain a weekly release readiness report with zero critical blockers.
● Enable full traceability of builds from commit to deployment.
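Release versioning of the kind described above often needs a small helper to pick the current release tag out of a repository's tag list. A sketch, assuming a `vMAJOR.MINOR.PATCH` tagging scheme (the scheme and tag names are illustrative):

```python
import re

def latest_release_tag(tags):
    """Pick the highest semantic-version release tag (vMAJOR.MINOR.PATCH),
    ignoring pre-release and non-version tags."""
    pattern = re.compile(r"^v(\d+)\.(\d+)\.(\d+)$")
    versions = []
    for tag in tags:
        m = pattern.match(tag)
        if m:
            versions.append(tuple(int(x) for x in m.groups()))
    return "v%d.%d.%d" % max(versions) if versions else None

tag = latest_release_tag(["v1.9.0", "v1.10.2", "v1.10.2-rc1", "nightly"])
```

Comparing numeric tuples rather than strings avoids the classic bug where "v1.9.0" sorts above "v1.10.2" lexicographically.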

Role Description:
As a Senior Data Science and Modeling Specialist at Incedo, you will be responsible for developing and deploying predictive models and machine learning algorithms to support business decision-making. You will work with data scientists, data engineers, and business analysts to understand business requirements and develop data-driven solutions. You will be skilled in programming languages such as Python or R and have experience in data science tools such as TensorFlow or Keras. You will be responsible for ensuring that models are accurate, efficient, and scalable.
Roles & Responsibilities:
- Developing and implementing machine learning models and algorithms to solve complex business problems
- Conducting data analysis and modeling using statistical and data analysis tools
- Collaborating with other teams to ensure the consistency and integrity of data
- Providing guidance and mentorship to junior data science and modeling specialists
- Presenting findings and recommendations to stakeholders
Technical Skills
Skills Requirements:
- Proficiency in statistical analysis techniques such as regression analysis, hypothesis testing, or time-series analysis.
- Knowledge of machine learning algorithms and techniques such as supervised learning, unsupervised learning, or reinforcement learning.
- Experience with data wrangling and data cleaning techniques using tools such as Python, R, or SQL.
- Understanding of big data technologies such as Hadoop, Spark, or Hive.
- Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner.
- Must understand the company's long-term vision and align with it.
- Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team.
Qualifications
- 4-6 years of work experience in relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred

Responsibilities :
- Design and develop user-friendly web interfaces using HTML, CSS, and JavaScript.
- Utilize modern frontend frameworks and libraries such as React, Angular, or Vue.js to build dynamic and responsive web applications.
- Develop and maintain server-side logic using programming languages such as Java, Python, Ruby, Node.js, or PHP.
- Build and manage APIs for seamless communication between the frontend and backend systems.
- Integrate third-party services and APIs to enhance application functionality.
- Implement CI/CD pipelines to automate testing, integration, and deployment processes.
- Monitor and optimize the performance of web applications to ensure a high-quality user experience.
- Stay up-to-date with emerging technologies and industry trends to continuously improve development processes and application performance.
Qualifications :
- Bachelor's/Master's in Computer Science or related subjects, or hands-on experience demonstrating a working understanding of software applications.
- Knowledge of building applications that can be deployed in a cloud environment or are cloud native applications.
- Strong expertise in building backend applications using Java/C#/Python with demonstrable experience in using frameworks such as Spring/Vertx/.Net/FastAPI.
- Deep understanding of enterprise design patterns, API development and integration and Test-Driven Development (TDD)
- Working knowledge in building applications that leverage databases such as PostgreSQL, MySQL, MongoDB, Neo4J or storage technologies such as AWS S3, Azure Blob Storage.
- Hands-on experience in building enterprise applications adhering to their needs of security and reliability.
- Hands-on experience building applications using one of the major cloud providers (AWS, Azure, GCP).
- Working knowledge of CI/CD tools for application integration and deployment.
- Working knowledge of using reliability tools to monitor the performance of the application.


As an L3 Data Scientist, you’ll work alongside experienced engineers and data scientists to solve real-world problems using machine learning (ML) and generative AI (GenAI). Beyond classical data science tasks, you’ll contribute to building and fine-tuning large language model (LLM)-based applications, such as chatbots, copilots, and automation workflows.
Key Responsibilities
- Collaborate with business stakeholders to translate problem statements into data science tasks.
- Perform data collection, cleaning, feature engineering, and exploratory data analysis (EDA).
- Build and evaluate ML models using Python and libraries such as scikit-learn and XGBoost.
- Support the development of LLM-powered workflows like RAG (Retrieval-Augmented Generation), prompt engineering, and fine-tuning for use cases including summarization, Q&A, and task automation.
- Contribute to GenAI application development using frameworks like LangChain, OpenAI APIs, or similar ecosystems.
- Work with engineers to integrate models into applications, build/test APIs, and monitor performance post-deployment.
- Maintain reproducible notebooks, pipelines, and documentation for ML and LLM experiments.
- Stay updated on advancements in ML, NLP, and GenAI, and share insights with the team.
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- 6–9 years of experience in data science, ML, or AI (projects and internships included).
- Proficiency in Python with experience in libraries like pandas, NumPy, scikit-learn, and matplotlib.
- Basic exposure to LLMs (e.g., OpenAI, Cohere, Mistral, Hugging Face) or a strong interest with the ability to learn quickly.
- Familiarity with SQL and structured data handling.
- Understanding of NLP fundamentals and vector-based retrieval techniques (a plus).
- Strong communication, problem-solving skills, and a proactive attitude.
Nice-to-Have (Not Mandatory)
- Experience with GenAI prototyping using LangChain, LlamaIndex, or similar frameworks.
- Knowledge of REST APIs and model integration into backend systems.
- Familiarity with cloud platforms (AWS/GCP/Azure), Docker, or Git.
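At its core, the retrieval step of the RAG workflows mentioned above is nearest-neighbor search over embeddings. A toy sketch of ranking documents by cosine similarity (the 3-d "embeddings" and document IDs are made up; a real pipeline would use an embedding model and a vector store):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, doc_vecs, k=1):
    """Return the IDs of the k documents most similar to the query."""
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

docs = {"refund-policy": [0.9, 0.1, 0.0],
        "shipping-faq": [0.1, 0.8, 0.1]}
best = top_k([0.85, 0.2, 0.0], docs)
```

The retrieved passages are then placed into the LLM prompt, which is the "augmented generation" half of RAG.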


As an L1/L2 Data Scientist, you’ll work alongside experienced engineers and data scientists to solve real-world problems using machine learning (ML) and generative AI (GenAI). Beyond classical data science tasks, you’ll contribute to building and fine-tuning large language model (LLM)-based applications, such as chatbots, copilots, and automation workflows.
Key Responsibilities
- Collaborate with business stakeholders to translate problem statements into data science tasks.
- Perform data collection, cleaning, feature engineering, and exploratory data analysis (EDA).
- Build and evaluate ML models using Python and libraries such as scikit-learn and XGBoost.
- Support the development of LLM-powered workflows like RAG (Retrieval-Augmented Generation), prompt engineering, and fine-tuning for use cases including summarization, Q&A, and task automation.
- Contribute to GenAI application development using frameworks like LangChain, OpenAI APIs, or similar ecosystems.
- Work with engineers to integrate models into applications, build/test APIs, and monitor performance post-deployment.
- Maintain reproducible notebooks, pipelines, and documentation for ML and LLM experiments.
- Stay updated on advancements in ML, NLP, and GenAI, and share insights with the team.
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- 2.5–5 years of experience in data science, ML, or AI (projects and internships included).
- Proficiency in Python with experience in libraries like pandas, NumPy, scikit-learn, and matplotlib.
- Basic exposure to LLMs (e.g., OpenAI, Cohere, Mistral, Hugging Face) or strong interest with the ability to learn quickly.
- Familiarity with SQL and structured data handling.
- Understanding of NLP fundamentals and vector-based retrieval techniques (a plus).
- Strong communication, problem-solving skills, and a proactive attitude.
Nice-to-Have (Not Mandatory)
- Experience with GenAI prototyping using LangChain, LlamaIndex, or similar frameworks.
- Knowledge of REST APIs and model integration into backend systems.
- Familiarity with cloud platforms (AWS/GCP/Azure), Docker, or Git.
Role Overview:
We’re looking for an exceptionally passionate, logical, and smart Backend Developer to join our core tech team. This role goes beyond writing code: you’ll help shape the architecture, lead the entire backend team if needed, and be deeply involved in designing scalable systems almost from scratch.
This is a high-impact opportunity to work directly with the founders and play a pivotal role in building the core product. If you’re looking to grow alongside a fast-growing startup, take complete ownership, and influence the direction of the technology and product, this role is made for you.
Why Clink?
Clink is a fast-growing product startup building innovative solutions in the food-tech space. We’re on a mission to revolutionize how restaurants connect with customers and manage offers seamlessly. Our platform is a customer-facing app that needs to scale rapidly as we grow. We also aim to leverage Generative AI to enhance user experiences and drive personalization at scale.
Key Responsibilities:
- Design, develop, and completely own high-performance backend systems.
- Architect scalable, secure, and efficient system designs.
- Own database schema design and optimization for performance and reliability.
- Collaborate closely with frontend engineers, product managers, and designers.
- Guide and mentor junior team members.
- Explore and experiment with Generative AI capabilities for product innovation.
- Participate in code reviews and ensure high engineering standards.
Must-Have Skills:
- 2–5 years of experience in backend development at a product-based company.
- Strong expertise in database design and system architecture.
- Hands-on experience building multiple production-grade applications.
- Solid programming fundamentals and logical problem-solving skills.
- Experience with Python or Ruby on Rails (one is mandatory).
- Experience integrating third-party APIs and services.
Good-to-Have Skills:
- Familiarity with Generative AI tools, APIs, or projects.
- Contributions to open-source projects or personal side projects.
- Exposure to frontend basics (React, Vue, or similar) is a plus.
- Exposure to containerization, cloud deployment, or CI/CD pipelines.
What We’re Looking For:
- Extremely high aptitude and ability to solve tough technical problems.
- Passion for building products from scratch and shipping fast.
- A hacker mindset — someone who builds cool stuff even in their spare time.
- Team player who can lead when required and work independently when needed.

Development and Customization:
Build and customize Frappe modules to meet business requirements.
Develop new functionalities and troubleshoot issues in ERPNext applications.
Integrate third-party APIs for seamless interoperability.
Technical Support:
Provide technical support to end-users and resolve system issues.
Maintain technical documentation for implementations.
Collaboration:
Work with teams to gather requirements and recommend solutions.
Participate in code reviews for quality standards.
Continuous Improvement:
Stay updated with Frappe developments and optimize application performance.
Skills Required:
Proficiency in Python, JavaScript, and relational databases.
Knowledge of Frappe/ERPNext framework and object-oriented programming.
Experience with Git for version control.
Strong analytical skills

Job Title: Backend Developer
Experience: 2–7 Years
Location: On-site – Bangalore
Employment Type: Full-Time
Company: Pepsales AI (Multiplicity Technologies Inc.)
About Pepsales
Pepsales AI is a real-time sales enablement and conversation intelligence platform built for B2B SaaS sales teams. It empowers sellers across every stage of the sales cycle—before, during, and after discovery calls—by providing actionable insights that move deals forward.
- For Account Executives: Pepsales AI transforms the discovery process, ensuring objective deal qualification and frictionless handoffs to solution engineers—enabling AEs to focus on winning, not chasing.
- For Solution Engineers and Consultants: It elevates demo experiences by delivering real-time buyer context and actionable insights, ensuring every interaction is highly personalized and impactful.
- For Sales Leaders: It provides enterprise-grade intelligence across forecasting, pipeline health, team performance, coaching, and the authentic voice of the customer—empowering data-driven decision-making at scale.
With Pepsales AI, sales teams run sharper meetings, accelerate deal cycles, and close with confidence.
Role Overview
We’re seeking a passionate Backend Developer to join our fast-paced team in Bangalore and help build and scale the core systems powering Pepsales. In this full-time, on-site role, you’ll work on high-impact features that directly influence product innovation, customer success, and business growth.
You’ll collaborate closely with the founding team and leadership, gaining end-to-end ownership and the chance to bring bold, innovative ideas to life in a rapidly scaling startup environment.
Key Responsibilities
- Design, develop, and maintain scalable backend systems and microservices.
- Write clean, efficient, and well-documented code in Python.
- Build and optimize RESTful APIs and WebSocket services for high performance.
- Manage and optimize MongoDB databases for speed and scalability.
- Deploy and maintain containerized applications using Docker.
- Work extensively with AWS services (EC2, ALB, S3, Route 53) for robust cloud infrastructure.
- Implement and maintain CI/CD pipelines for smooth and automated deployments.
- Collaborate closely with frontend engineers, product managers, and leadership on architecture and feature planning.
- Participate in sprint planning, technical discussions, and code reviews.
- Take full ownership of features, embracing uncertainty with a problem-solving mindset.
Required Skills & Qualifications
- 2–7 years of backend development experience with a proven track record of building scalable systems.
- Strong proficiency in Python and its ecosystem.
- Hands-on experience with MongoDB, Docker, and Microservice Architecture.
- Practical experience with AWS services (EC2, ALB, S3, Route 53).
- Familiarity with CI/CD tools and deployment best practices.
- Strong understanding of REST API design principles and WebSocket communication.
- Excellent knowledge of data structures, algorithms, and performance optimization.
- Strong communication skills and ability to work in a collaborative, fast-paced environment.
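The REST API design principles and performance focus called out above can be sketched with a small, framework-agnostic example: cursor-based pagination, a common pattern for fast list endpoints backed by stores like MongoDB. The `fetch_page` function, document shape, and in-memory collection are hypothetical illustrations, not part of this role's codebase:

```python
import base64
import json

# In-memory stand-in for a MongoDB collection, sorted by _id.
DOCS = [{"_id": i, "name": f"deal-{i}"} for i in range(1, 26)]

def encode_cursor(last_id):
    """Opaque cursor so clients cannot depend on internal IDs."""
    return base64.urlsafe_b64encode(json.dumps({"last_id": last_id}).encode()).decode()

def decode_cursor(cursor):
    return json.loads(base64.urlsafe_b64decode(cursor))["last_id"]

def fetch_page(cursor=None, limit=10):
    """Return one page of documents plus a cursor for the next page."""
    last_id = decode_cursor(cursor) if cursor else 0
    page = [d for d in DOCS if d["_id"] > last_id][:limit]
    # A short page means we have reached the end of the collection.
    next_cursor = encode_cursor(page[-1]["_id"]) if len(page) == limit else None
    return {"items": page, "next_cursor": next_cursor}

first = fetch_page(limit=10)                          # documents 1-10
second = fetch_page(first["next_cursor"], limit=10)   # documents 11-20
```

Cursor pagination scales better than offset-based `skip`/`limit` on large collections, since each page is an indexed range scan rather than a walk past skipped rows.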
What We Value
- Excitement to work on cutting-edge technology and platforms, helping redefine how businesses engage and convert customers.
- Ability to thrive in a dynamic startup environment that values diversity, rapid innovation, and a growth mindset: adapting to change, challenging the status quo, and making a real impact.
- Passion for ownership beyond coding, contributing to product strategy and innovation.
Why Join Pepsales?
- Direct access to founders and a voice in high-impact decisions.
- Opportunity to shape a next-gen AI SaaS product transforming sales worldwide.
- Exposure to cutting-edge technologies in a rapidly growing startup.
- Ownership-driven culture with fast career growth and learning opportunities.
- A collaborative, innovation-driven environment that values creativity and problem-solving.



Lead Drone Software Engineer
About the Role
We are hiring a Lead Drone Software Engineer to architect and develop embedded flight software, autonomy algorithms, and AI-powered navigation systems. You will build the intelligence that powers our drones.
Responsibilities
- Develop flight control algorithms using PX4/ArduPilot/ROS.
- Build real-time embedded systems for obstacle avoidance, GPS-denied navigation, and autonomy.
- Implement sensor fusion algorithms (LiDAR, IMU, GPS, vision-based SLAM).
- Develop AI/ML and computer vision systems for object detection and terrain mapping.
- Create ground control systems (GCS) for mission planning and telemetry.
- Integrate cloud-based platforms for fleet management and data analytics.
- Ensure regulatory compliance (geo-fencing, fail-safe, BVLOS).
Qualifications
- B.E./M.Tech in Computer Science, Robotics, or related field.
- 4–5+ years in embedded software, robotics, or UAV development.
- Strong programming in C/C++, Python.
- Hands-on with ROS, PX4, ArduPilot, MAVLink protocols.
- Experience in AI/ML frameworks (TensorFlow, PyTorch, OpenCV).
- Knowledge of networking, cloud APIs, and cybersecurity for UAVs.
- Prior UAV/robotics product experience highly preferred.

Salary (Lacs): Up to 22 LPA
Required Qualifications
• 4–7 years of total experience, with a minimum of 4 years in a full-time DevOps role
• Hands-on experience with major cloud platforms (GCP, AWS, Azure, OCI); experience with more than one is a plus
• Proficient in Kubernetes administration and container technologies (Docker, containerd)
• Strong Linux fundamentals
• Scripting skills in Python and shell scripting
• Knowledge of infrastructure as code with hands-on experience in Terraform and/or Pulumi (mandatory)
• Experience in maintaining and troubleshooting production environments
• Solid understanding of CI/CD concepts with hands-on experience in tools like Jenkins, GitLab CI, GitHub Actions, ArgoCD, Devtron, GCP Cloud Build, or Bitbucket Pipelines
If interested, kindly share your updated resume at 82008 31681

We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.
In this role, you’ll:
- Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
- Mentor junior engineers and bring engineering discipline into our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
- Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Collaborate with stakeholders to translate business requirements into technical solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architectural discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python and advanced SQL expertise.
- Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Hands-on with distributed data processing frameworks (Apache Spark, Flink, or similar).
- Experience with orchestration tools like Airflow (or similar).
- Familiarity with CI/CD pipelines and Git.
- Ability to debug, optimize, and scale data pipelines in production.
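The ETL/ELT concepts in the list above can be sketched in plain Python: extract, transform, and load as composable, streaming-friendly steps. In practice the same shape is expressed as Spark transformations or Airflow tasks; the record schema and row format here are invented purely for illustration:

```python
from datetime import datetime, timezone

def extract(raw_rows):
    """Extract: parse raw CSV-like rows into dicts."""
    for row in raw_rows:
        order_id, amount, ts = row.split(",")
        yield {"order_id": order_id, "amount": amount, "ts": ts}

def transform(records):
    """Transform: cast types, drop bad rows, add a derived column."""
    for r in records:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would route this to a dead-letter sink
        yield {
            "order_id": r["order_id"],
            "amount": amount,
            "ts": datetime.fromtimestamp(int(r["ts"]), tz=timezone.utc).isoformat(),
            "is_large": amount >= 1000,
        }

def load(records, sink):
    """Load: append to the target store (a list standing in for a table)."""
    n = 0
    for r in records:
        sink.append(r)
        n += 1
    return n

warehouse = []
raw = ["A1,250.0,1700000000", "A2,not-a-number,1700000060", "A3,1500,1700000120"]
loaded = load(transform(extract(raw)), warehouse)  # generators keep memory flat
```

Because each stage is a generator, rows flow through one at a time, which is the same backpressure-friendly shape that distributed engines apply at partition scale.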
Good to Have
- Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, quality frameworks, and observability.
- Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements, as needed.

We are looking for a detail-oriented QA Engineer with 2+ years of experience in software testing, especially with a strong focus on automation using Python and Pytest. The ideal candidate will work closely with the development and DevOps teams to ensure the quality, stability, and performance of our applications across different environments.
Key Responsibilities:
- Design, develop, and maintain automated test suites using Python and Pytest
- Write clear, concise, and comprehensive test plans and test cases
- Perform functional, regression, integration, and API testing
- Collaborate with developers to understand requirements and implement test strategies
- Identify, log, and track bugs in a structured way using bug tracking tools (e.g., Jira)
- Work with CI/CD pipelines to integrate automated testing
- Analyze test results and provide meaningful reports to stakeholders
- Continuously improve test coverage and testing frameworks
Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or related field
- 2+ years of professional experience in software QA or test automation
- Strong hands-on experience with Python and Pytest
- Solid understanding of software testing methodologies (Agile, Scrum)
- Experience with REST API testing using tools like Postman or through code
- Familiarity with version control systems like Git
- Understanding of CI/CD tools (e.g., Bitbucket, Jenkins)
- Excellent problem-solving and analytical skills
- Strong written and verbal communication skills
Good to Have:
- Experience with Selenium or Playwright for UI test automation
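For the Python and Pytest requirement above, a minimal sketch of what an automated test might look like. The `create_user` function and its validation rule are hypothetical stand-ins for an application under test; Pytest collects any function named `test_*` and treats bare `assert` statements as test assertions, producing detailed failure diffs:

```python
# Application code under test (hypothetical).
def create_user(payload):
    """Validate a signup payload and return the stored record."""
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    return {"email": payload["email"], "active": True}

# Pytest discovers test_* functions automatically; no boilerplate class needed.
def test_create_user_returns_active_record():
    user = create_user({"email": "qa@example.com"})
    assert user == {"email": "qa@example.com", "active": True}

def test_create_user_rejects_bad_email():
    try:
        create_user({"email": "not-an-email"})
    except ValueError as exc:
        assert "invalid email" in str(exc)
    else:
        raise AssertionError("expected ValueError")
```

Running `pytest -q` in the project directory would discover and execute both tests; when Pytest itself is importable, `pytest.raises(ValueError)` is the idiomatic replacement for the try/except pattern shown.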

Skills
- Strong analytical thinking and problem-solving abilities.
- Solid understanding of system design, data structures, and algorithms.
- Proven experience in designing and implementing scalable architectures and robust design patterns.
- Ability to deliver end-to-end features and enhancements with minimal supervision.
- Proficient in debugging and resolving complex production issues and identifying root causes quickly.
- Ability to quickly learn and integrate new systems into existing platforms.
Programming Languages
- Strong expertise in Java or Python.
Database Skills
- Proficiency in SQL, with hands-on experience in PostgreSQL and MySQL.
- Experience with NoSQL technologies, including MongoDB and Redis.


At Yugen, we're building Canso, an AI-agent-driven fraud investigation platform for financial organisations. We're currently in the 0-1 stage, and we're looking for curious, high-agency engineers to join us on this journey to shape a category-defining product with global impact.
You should be eager to learn, prefer a fast-paced environment, and be excited about getting agentic systems to work in production. More importantly, no matter how tough the challenge, you're someone who's never afraid to show up.
Responsibilities
- Develop, test, and maintain scalable backend systems (APIs, Pipelines and AI applications)
- Work with other team members (Backend, ML, Data Engineers, Product & Design) to ship features quickly
- Learn how to debug & resolve production issues to ensure system uptime
- Participate in architectural & system design discussions
- Explore new technologies and frameworks & come up with suggestions to improve existing systems
Requirements
- Strong programming skills in one or more of: Python, Go, TypeScript or similar
- Strong understanding of software development principles (OOP, REST APIs, etc.) and tooling such as Git
- Familiarity with databases (SQL or NoSQL)
- A deep sense of curiosity and a knack for experimentation with AI applications
- You're proactive and love getting things done regardless of external factors
- Strong communication skills and a very good eye for detail
Must Have
- Excellent coding skills with strong problem-solving abilities
- Meaningful contributions to open-source projects
- Strong proof of building and deep focus. For example:
- Active participation in multiple hackathons, with top-3 finishes or notable wins
- At least 1 self-driven portfolio project that has been maintained over 4 months. GitHub repos with frequent commits showing continuous progress would be great.

Job Role:
The role is for an SAP UI5 Consultant responsible for developing and enhancing web applications using SAP UI5 and related technologies.
Responsibilities:
- Develop Fiori-like web applications based on SAPUI5.
- Implement REST Web services and enhancements to SAP Fiori apps.
- Work with Web Technologies including HTML5, CSS3, and JavaScript.
- Participate in knowledge transfers and evaluate new technologies.
- Understand and implement software architecture for enterprise applications.
Qualifications:
The candidate should have a BA/BE/BTech qualification with excellent verbal and written communication skills in English. Ability to work flexible hours is necessary.



Job Summary:
Experienced Full-Stack Developer with expertise in Angular (TypeScript) for front-end development and Python Flask for back-end API development. Strong background in Microsoft SQL Server, authentication using Azure AD (MSAL), and implementing efficient API integrations. Skilled in unit testing, debugging, and optimizing performance.
Key Skills:
• Front-End: Angular, TypeScript, PrimeNG, RxJS, State Management, React JS
• Back-End: Python Flask, SQLAlchemy, RESTful API Development
• Database: Microsoft SQL Server (SQL, Joins, Query Optimization)
• Authentication: Azure AD, MSAL, JWT-based authentication
• DevOps & Deployment: Git, CI/CD (Azure DevOps, GitHub Actions)
• Additional: data validation, pagination, performance tuning



Proven experience as a Data Scientist or similar role, with at least 4 years of relevant experience and 6-8 years of total experience.
- Technical expertise regarding data models, database design and development, data mining, and segmentation techniques
- Strong knowledge of and experience with reporting packages (e.g., Business Objects), databases, and programming in ETL frameworks
- Experience with data movement and management in the cloud using Azure and/or AWS features
- Hands-on experience with data visualization tools (Power BI preferred)
- Solid understanding of machine learning
- Knowledge of data management and visualization techniques
- A knack for statistical analysis and predictive modeling
- Good knowledge of Python and MATLAB
- Experience with SQL and NoSQL databases, including the ability to write complex queries and procedures

Job Title: Technical Manager – Digital & Emerging Tech
Location: Kempegowda International Airport, Bengaluru
Department: ICT / Architecture & Digital Solutions
Experience: 12+ years
About the Role:
We’re looking for an experienced Technical Manager to lead innovative digital initiatives at Bangalore Airport. You will identify, validate, and implement emerging technologies such as AI/ML, IoT, robotics, biometrics, and more — from proof-of-concept to full-scale deployment. This role demands strong project management, vendor coordination, and technical expertise to deliver impactful solutions aligned with our business goals.
Key Responsibilities:
- Lead end-to-end project management of IT and digital innovation initiatives.
- Research, evaluate, and implement emerging technologies relevant to smart airports.
- Develop RFPs, evaluate vendor proposals, and manage partner relationships.
- Conduct and manage Proof of Concepts (PoCs) with measurable success criteria.
- Define and validate target architectures in line with best practices and security standards.
- Oversee application development, ensuring technical, UX, and security compliance.
- Collaborate closely with internal teams and external vendors to meet project goals.
Requirements:
- MCA / B.E. / B.Tech in IT or related field.
- 12+ years of experience in technology leadership with focus on innovation.
- Hands-on exposure to emerging technologies (AI, Blockchain, IoT, AR/VR, Cloud).
- Experience with AWS/Azure, agile project management, and vendor management.
- Strong understanding of cybersecurity, DevSecOps, and IT infrastructure.
- Excellent stakeholder management and documentation skills.
Behavioral Competencies:
- Strategic Leadership – Proficient
- Change Influencer & Innovation Mindset – Advanced
- Customer Centricity, Execution Excellence, Collaboration – Expert
Why Join Us:
Work at one of India’s premier airports, driving future-ready digital transformation that enhances passenger experience and operational efficiency.

Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
Requirements:
- 6 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
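The reconciliation responsibility mentioned above can be illustrated with a toy two-way reconciliation: matching internal ledger entries against a gateway settlement report by transaction ID and amount. The field names and record shapes are invented for illustration only:

```python
def reconcile(ledger, settlement):
    """Match records by txn_id; flag amount mismatches and one-sided entries."""
    ledger_by_id = {r["txn_id"]: r for r in ledger}
    settled_by_id = {r["txn_id"]: r for r in settlement}

    matched, mismatched = [], []
    for txn_id, entry in ledger_by_id.items():
        other = settled_by_id.get(txn_id)
        if other is None:
            continue  # counted below as missing_in_settlement
        if entry["amount"] == other["amount"]:
            matched.append(txn_id)
        else:
            mismatched.append(txn_id)

    return {
        "matched": sorted(matched),
        "amount_mismatch": sorted(mismatched),
        "missing_in_settlement": sorted(ledger_by_id.keys() - settled_by_id.keys()),
        "missing_in_ledger": sorted(settled_by_id.keys() - ledger_by_id.keys()),
    }

ledger = [{"txn_id": "T1", "amount": 100},
          {"txn_id": "T2", "amount": 250},
          {"txn_id": "T3", "amount": 75}]
settlement = [{"txn_id": "T1", "amount": 100},
              {"txn_id": "T2", "amount": 240},
              {"txn_id": "T4", "amount": 60}]
report = reconcile(ledger, settlement)
```

A production reconciliation job would run this shape as a set-based SQL or batch query over millions of rows, with integer minor-units (paise) rather than floats for amounts.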

Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue
- Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
- Perform data wrangling, cleansing, and transformation using Python and SQL
- Collaborate with data scientists to integrate Generative AI models into analytics workflows
- Build dashboards and reports to visualize insights using tools like Power BI or Tableau
- Ensure data quality, governance, and security across all data assets
- Optimize performance of data pipelines and troubleshoot bottlenecks
- Work closely with stakeholders to understand data requirements and deliver actionable insights
🧪 Required Skills
- Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
- Big Data: Databricks, Apache Spark, PySpark
- Programming: Python, SQL
- Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
- Analytics: Data Modeling, Visualization, BI Reporting
- Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
- DevOps (Bonus): Git, Jenkins, Terraform, Docker
📚 Qualifications
- Bachelor's or Master’s degree in Computer Science, Data Science, or related field
- 3+ years of experience in data engineering or data analytics
- Hands-on experience with Databricks, PySpark, and AWS
- Familiarity with Generative AI tools and frameworks is a strong plus
- Strong problem-solving and communication skills
🌟 Preferred Traits
- Analytical mindset with attention to detail
- Passion for data and emerging technologies
- Ability to work independently and in cross-functional teams
- Eagerness to learn and adapt in a fast-paced environment
EDI Developer / Map Conversion Specialist
Role Summary:
Responsible for converting 441 existing EDI maps into the PortPro-compatible format and testing them for 147 customer configurations.
Key Responsibilities:
- Analyze existing EDI maps in Profit Tools.
- Convert, reconfigure, or rebuild maps for PortPro.
- Ensure accuracy in mapping and transformation logic.
- Unit test and debug EDI transactions.
- Support system integration and UAT phases.
Skills Required:
- Proficiency in EDI standards (X12, EDIFACT) and transaction sets.
- Hands-on experience in EDI mapping tools.
- Familiarity with both Profit Tools and PortPro data structures.
- SQL and XML/JSON data handling skills.
- Experience with scripting for automation (Python, Shell scripting preferred).
- Strong troubleshooting and debugging skills.
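As a hedged sketch of the X12 skills listed above: X12 interchanges are segment-oriented, with segments conventionally terminated by `~` and elements separated by `*`. A minimal parser that splits an interchange into segments and elements (a real map must read the actual delimiters from the ISA header rather than assume these defaults, and the sample segments below are illustrative):

```python
def parse_x12(raw, segment_term="~", element_sep="*"):
    """Split an X12 interchange into a list of segments, each a list of elements."""
    segments = []
    for seg in raw.strip().split(segment_term):
        seg = seg.strip()
        if seg:  # skip trailing empty chunk after the final terminator
            segments.append(seg.split(element_sep))
    return segments

# A toy transaction-set fragment (ST header ... SE trailer).
sample = "ST*204*0001~B2**SCAC**12345**CC~SE*3*0001~"
segments = parse_x12(sample)
# segments[0] is the ST segment: ['ST', '204', '0001']
```

Conversion work like the Profit Tools-to-PortPro migration described above typically layers mapping logic on top of this kind of segment/element model, translating element positions between the two systems' expected structures.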

Job Title: Senior Data Engineer
Location: Bangalore | Hybrid
Company: krtrimaIQ Cognitive Solutions
Role Overview:
As a Senior Data Engineer, you will design, build, and optimize robust data foundations and end-to-end solutions to unlock maximum value from data across the organization. You will play a key role in fostering data-driven thinking, not only within the IT function but also among broader business stakeholders. You will serve as a technology and subject matter expert, providing mentorship to junior engineers and translating the company’s vision and Data Strategy into actionable, high-impact IT solutions.
Key Responsibilities:
- Design, develop, and implement scalable data solutions to support business objectives and drive digital transformation.
- Serve as a subject matter expert in data engineering, providing guidance and mentorship to junior team members.
- Enable and promote data-driven culture throughout the organization, engaging both technical and business stakeholders.
- Lead the design and delivery of Data Foundation initiatives, ensuring adoption and value realization across business units.
- Collaborate with business and IT teams to capture requirements, design optimal data models, and deliver high-value insights.
- Manage and drive change management, incident management, and problem management processes related to data platforms.
- Present technical reports and actionable insights to stakeholders and leadership teams, acting as the expert in Data Analysis and Design.
- Continuously improve efficiency and effectiveness of solution delivery, driving down costs and reducing implementation times.
- Contribute to organizational knowledge-sharing and capability building (e.g., Centers of Excellence, Communities of Practice).
- Champion best practices in code quality, DevOps, CI/CD, and data governance throughout the solution lifecycle.
Key Characteristics:
- Technology expert with a passion for continuous learning and exploring multiple perspectives.
- Deep expertise in the data engineering/technology domain, with hands-on experience across the full data stack.
- Excellent communicator, able to bridge the gap between technical teams and business stakeholders.
- Trusted leader, respected across levels for subject matter expertise and collaborative approach.
Mandatory Skills & Experience:
- Mastery in public cloud platforms: AWS, Azure, SAP
- Mastery in ELT (Extract, Load, Transform) operations
- Advanced data modeling expertise for enterprise data platforms
Hands-on skills:
- Data Integration & Ingestion
- Data Manipulation and Processing
- Source/version control and DevOps tools: GitHub, GitHub Actions, Azure DevOps
- Data engineering/data platform tools: Azure Data Factory, Databricks, SQL Database, Synapse Analytics, Stream Analytics, AWS Glue, Apache Airflow, AWS Kinesis, Amazon Redshift, SonarQube, PyTest
- Experience building scalable and reliable data pipelines for analytics and other business applications
Optional/Preferred Skills:
- Project management experience, especially running or contributing to Scrum teams
- Experience working with BPC (Business Planning and Consolidation), Planning tools
- Exposure to working with external partners in the technology ecosystem and vendor management
What We Offer:
- Opportunity to leverage cutting-edge technologies in a high-impact, global business environment
- Collaborative, growth-oriented culture with strong community and knowledge-sharing
- Chance to influence and drive key data initiatives across the organization


Quidcash is seeking a skilled Backend Developer to architect, build, and optimize mission-critical financial systems. You’ll leverage your expertise in JavaScript, Python, and OOP to develop scalable backend services that power our fintech/lending solutions. This role offers the chance to solve complex technical challenges, integrate cutting-edge technologies, and directly impact the future of financial services for Indian SMEs.
If you are a leader who thrives on technical challenges, loves building high-performing teams, and is excited by the potential of AI/ML in fintech, we want to hear from you!
What You'll Do:
Design & Development: Build scalable backend services using JavaScript (Node.js) and Python, adhering to OOP principles and microservices architecture.
Fintech Integration: Develop secure APIs (REST/gRPC) for financial workflows (e.g., payments, transactions, data processing) and ensure compliance with regulations (PCI-DSS, GDPR).
System Optimization: Enhance performance, reliability, and scalability of cloud-native applications on AWS.
Collaboration: Partner with frontend, data, and product teams to deliver end-to-end features in Agile/Scrum cycles.
Quality Assurance: Implement automated testing (unit/integration), CI/CD pipelines, and DevOps practices.
Technical Innovation: Contribute to architectural decisions and explore AI/ML integration opportunities in financial products.
What You'll Bring (Must-Haves):
Experience:
3–5 years of backend development with JavaScript (Node.js) and Python.
Proven experience applying OOP principles, design patterns, and microservices.
Background in fintech, banking, or financial systems (e.g., payment gateways, risk engines, transactional platforms).
Technical Acumen:
Languages/Frameworks:
JavaScript (Node.js + Express.js/Fastify)
Python (Django/Flask/FastAPI)
Databases: SQL (PostgreSQL/MySQL) and/or NoSQL (MongoDB/Redis).
Cloud & DevOps: AWS/GCP/Azure, Docker, Kubernetes, CI/CD tools (Jenkins/GitLab).
Financial Tech: API security (OAuth2/JWT), message queues (Kafka/RabbitMQ), and knowledge of financial protocols (e.g., ISO 20022).
Mindset:
Problem-solver with a passion for clean, testable code and continuous improvement.
Adaptability in fast-paced environments and commitment to deadlines.
Collaborative spirit with strong communication skills.
Why Join Quidcash?
Impact: Play a pivotal role in shaping a product that directly impacts Indian SMEs' business growth.
Innovation: Work with cutting-edge technologies, including AI/ML, in a forward-thinking environment.
Growth: Opportunities for professional development and career advancement in a growing company.
Culture: Be part of a collaborative, supportive, and brilliant team that values every contribution.
Benefits: Competitive salary and a comprehensive benefits package; be part of the next fintech evolution.
If you are interested, please share your profile to smitha@quidcash.in

Key Responsibilities
- Data Architecture & Pipeline Development
- Design, implement, and optimize ETL/ELT pipelines using Azure Data Factory, Databricks, and Synapse Analytics.
- Integrate structured, semi-structured, and unstructured data from multiple sources.
- Data Storage & Management
- Develop and maintain Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake solutions.
- Ensure proper indexing, partitioning, and storage optimization for performance.
- Data Governance & Security
- Implement role-based access control, data encryption, and compliance with GDPR/CCPA.
- Ensure metadata management and data lineage tracking with Azure Purview or similar tools.
- Collaboration & Stakeholder Engagement
- Work closely with BI developers, analysts, and business teams to translate requirements into data solutions.
- Provide technical guidance and best practices for data integration and transformation.
- Monitoring & Optimization
- Set up monitoring and alerting for data pipelines.
About HelloRamp.ai
HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.
What You’ll Work On
- Develop and optimize Computer Vision pipelines for large-scale media creation.
- Implement NeRF-based systems for high-quality 3D reconstruction.
- Build and fine-tune AI video generation models using state-of-the-art techniques.
- Optimize AI inference for production (CUDA, TensorRT, ONNX).
- Collaborate with the engineering team to integrate AI features into scalable cloud systems.
- Research latest AI/CV advancements and bring them into production.
Skills & Experience
- Strong Python programming skills.
- Deep expertise in Computer Vision and Machine Learning.
- Hands-on with PyTorch/TensorFlow.
- Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
- Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
- GPU programming and optimization skills.
Nice to Have
- Knowledge of Three.js or WebGL for rendering AI outputs on the web.
- Familiarity with FFmpeg and video processing pipelines.
- Experience in cloud-based GPU environments (AWS/GCP).
Why Join Us?
- Work on cutting-edge AI and Computer Vision projects with global impact.
- Join a small, high-ownership team where your work matters.
- Opportunity to experiment, publish, and contribute to open-source.
- Competitive pay and flexible work setup.

Quidcash seeks a versatile full-stack developer to build transformative fintech applications from end to end. You'll leverage Flutter for frontend development and JavaScript/Python for backend systems to create seamless, high-performance solutions for Indian SMEs. This role blends UI craftsmanship with backend logic, offering the chance to architect responsive web/mobile experiences while integrating financial workflows and AI-driven features. If you excel at turning complex requirements into intuitive interfaces, thrive in full-lifecycle development, and are passionate about fintech innovation, join us!
What You’ll Do:
Full-stack Development:
Design and build responsive cross-platform applications using Flutter (Dart) for web and mobile native app development.
Develop robust backend services with JavaScript (Node.js) and Python, applying OOP principles and RESTful/gRPC APIs.
Integrations:
Implement secure financial features (e.g., payment processing, dashboards, transaction workflows) with regulatory compliance.
Connect frontend UIs to backend systems (databases, cloud APIs, AI/ML models).
System Architecture: Architect scalable solutions using microservices, state management (Provider/Bloc), and cloud patterns (AWS/GCP).
Collaboration & Delivery:
Partner with product, UX, and QA teams in Agile/Scrum cycles to ship features from concept to production.
Quality & Innovation:
Enforce testing (unit/widget/integration), CI/CD pipelines, and DevOps practices.
Explore AI/ML integration for data-driven UI/UX enhancements.
What You’ll Bring (Must-Haves):
Experience:
3–5 years in full-stack development, including:
Flutter (Dart) for cross-platform apps (iOS, Android, Web).
JavaScript (Node.js + React/Express) and Python (Django/Flask).
Experience with OOP, design patterns, and full SDLC in Agile environments.
Technical Acumen:
Frontend:
Flutter (state management, animations, custom widgets).
HTML/CSS, responsive design, and performance optimization.
Backend:
Node.js/Python frameworks, API design, and database integration (SQL/NoSQL).
Tools & Practices:
Cloud platforms (AWS/GCP/Azure), Docker, CI/CD (Jenkins/GitHub Actions).
Git, testing suites (Jest/Pytest, Flutter Test), and financial security standards.
Mindset:
User-centric approach with a passion for intuitive, accessible UI/UX.
Ability to bridge technical gaps between frontend and backend teams.
Agile problem-solver thriving in fast-paced fintech environments.
Why Join Quidcash?
Impact: Play a pivotal role in shaping a product that directly impacts Indian SMEs' business growth.
Innovation: Work with cutting-edge technologies, including AI/ML, in a forward-thinking environment.
Growth: Opportunities for professional development and career advancement in a growing company.
Culture: Be part of a collaborative, supportive, and brilliant team that values every contribution.
Benefits: Competitive salary and a comprehensive benefits package; be part of the next fintech evolution.


Job Description: Software Engineer - Backend (3-5 Years)
Location: Bangalore
WHO WE ARE:
TIFIN is a fintech platform backed by industry leaders including JP Morgan, Morningstar, Broadridge, Hamilton Lane, Franklin Templeton, Motive Partners, and a who’s who of the financial services industry. We are creating engaging wealth experiences to better financial lives through AI and investment-intelligence-powered personalization. We are working to change the world of wealth in the ways that personalization has changed the world of movies, music, and more, but with the added responsibility of delivering better wealth outcomes.
We use design and behavioral thinking to enable engaging experiences through software and application programming interfaces (APIs). We use investment science and intelligence to build algorithmic engines inside the software and APIs to enable better investor outcomes.
In a world where every individual is unique, we match them to financial advice and investments with a recognition of their distinct needs and goals across our investment marketplace and our advice and planning divisions.
OUR VALUES: Go with your GUT
● Grow at the Edge: We embrace personal growth by stepping out of our comfort zones to discover our genius zones, driven by self-awareness and integrity. No excuses.
● Understanding through Listening and Speaking the Truth: Transparency, radical candor, and authenticity define our communication. We challenge ideas, but once decisions are made, we commit fully.
● I Win for Teamwin: We operate within our genius zones, taking ownership of our work and inspiring our team with energy and attitude to win together.
Responsibilities:
• Contribute to the entire implementation process including driving the definition of improvements based on business needs and architectural improvements.
• Review code for quality and implementation of best practices.
• Promote coding, testing, and deployment best practices through hands-on research and demonstration.
• Write testable code that enables extremely high levels of code coverage.
• Review frameworks and design principles for suitability in the project context.
• Identify opportunities, lay out a rational plan for pursuing them, and see them through to completion.
Requirements:
• Engineering graduate with 3+ years of experience in software product development.
• Proficient in Python, Django, Pandas, GitHub, and AWS.
• Good knowledge of PostgreSQL and MongoDB.
• Strong experience designing REST APIs.
• Experience with working on scalable interactive web applications.
• A clear understanding of software design constructs and their implementation.
• Understanding of the threading limitations of Python and multi-process architecture.
• Familiarity with some ORM (Object Relational Mapper) libraries.
• Good understanding of Test Driven Development.
• Experience with unit and integration testing.
• Exposure to the finance domain preferred.
• Strong written and oral communication skills.
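The requirement above about Python's threading limitations refers to CPython's Global Interpreter Lock (GIL): only one thread executes Python bytecode at a time, so CPU-bound work does not speed up with threads and is typically moved into a multi-process architecture instead. A minimal sketch of that idea, with an illustrative prime-counting workload:

```python
# CPython's GIL serializes bytecode execution across threads, so CPU-bound
# work gains nothing from threading. Separate processes each get their own
# interpreter (and GIL), so they can run truly in parallel.
from concurrent.futures import ProcessPoolExecutor


def count_primes(limit: int) -> int:
    """CPU-bound task: naive count of primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count


def run_in_processes(limits):
    # Each chunk of work runs in its own process, sidestepping the GIL.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(count_primes, limits))


if __name__ == "__main__":
    print(run_in_processes([10_000, 10_000]))  # [1229, 1229]
```

For I/O-bound work (network calls, file reads), threads or asyncio remain the usual choice, since the GIL is released while waiting on I/O.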

Job Type: Contract
Location: Bangalore
Experience: 5+ years
The role focuses on cloud security engineering with a strong emphasis on GCP, while also covering AWS and Azure.
Required Skills:
- 5+ years of experience in software and/or cloud platform engineering, with a particular focus on GCP environments.
- Knowledge of the Shared Responsibility Model; keen understanding of the security risks inherent in hosting cloud-based applications and data.
- Experience developing across the security assurance lifecycle (including prevent, detect, respond, and remediate controls).
- Experience configuring public-cloud-native security tooling and capabilities, with a focus on Google Cloud organizational policies/constraints, VPC Service Controls (VPC SC), IAM policies, and GCP APIs.
- Experience with Cloud Security Posture Management (CSPM) 3rd Party tools such as Wiz, Prisma, Check Point CloudGuard, etc.
- Experience with policy-as-code (Rego) and the OPA platform.
- Experience solutioning and configuring event-driven, serverless security controls across cloud platforms, including technologies such as Azure Functions, Automation Runbooks, AWS Lambda, and Google Cloud Functions.
- Deep understanding of DevOps processes and workflows.
- Working knowledge of the Secure SDLC process.
- Experience with Infrastructure as Code (IaC) tooling, preferably Terraform.
- Familiarity with Logging and data pipeline concepts and architectures in cloud.
- Strong in scripting languages such as PowerShell, Python, Bash, or Go.
- Knowledge of Agile best practices and methodologies.
- Experience creating technical architecture documentation.
- Excellent written communication and interpersonal skills.
- Practical experience designing and configuring CI/CD pipelines, including GitHub Actions and Jenkins.
- Experience in ITSM.
- Ability to articulate complex technical concepts to non-technical stakeholders.
- Experience with risk control frameworks and engagements with risk and regulatory functions.
- Experience in the financial industry would be a plus.
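The "event-driven serverless security controls" requirement above usually means a function triggered by a platform event (for example, a Pub/Sub message carrying a resource change) that inspects the change and raises findings. The sketch below is illustrative only: the payload shape is a simplified assumption, not the exact GCP audit-log schema, and `detect_public_grants` is a hypothetical helper.

```python
# Sketch of an event-driven security control: a Pub/Sub-style handler that
# flags public IAM grants on a storage bucket. The event payload shape is a
# simplified assumption for illustration, not a real GCP audit-log schema.
import base64
import json

PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}


def detect_public_grants(event: dict) -> list:
    """Decode a Pub/Sub-style event and return any public IAM grants found."""
    # Pub/Sub triggers deliver the message body as base64-encoded data.
    payload = json.loads(base64.b64decode(event["data"]))
    findings = []
    for binding in payload.get("bindings", []):
        for member in binding.get("members", []):
            if member in PUBLIC_MEMBERS:
                findings.append(f"{binding['role']} granted to {member}")
    return findings


# Example event, shaped like a Pub/Sub trigger payload (hypothetical data).
policy = {"bindings": [
    {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
    {"role": "roles/storage.admin", "members": ["user:alice@example.com"]},
]}
event = {"data": base64.b64encode(json.dumps(policy).encode())}
print(detect_public_grants(event))  # ['roles/storage.objectViewer granted to allUsers']
```

In a real deployment, the "respond" half of the control would then remove the offending binding or page an on-call engineer rather than just returning findings.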