50+ Python Jobs in India
Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

🚀 Why join Bound AI (OIP Insurtech)?
We build real-world AI workflows that transform insurance operations—from underwriting to policy issuance. You’ll join a fast-growing, global team of engineers and innovators tackling the toughest problems in document intelligence and agent orchestration. We move fast, ship impact, and value autonomy over bureaucracy.
🧭 What You'll Be Doing
- Design and deliver end‑to‑end AI solutions: from intake of SOVs, loss runs, and documents to deployed intelligent agent workflows.
- Collaborate closely with stakeholders (product, operations, engineering) to architect scalable ML & GenAI systems that solve real insurance challenges.
- Translate business needs into architecture diagrams, data flows, and system integrations.
- Choose and integrate components such as RAG pipelines, LLM orchestration (LangChain, DSPy), vector databases, and MLOps tooling.
- Oversee technical proof-of-concepts, pilot projects, and production rollout strategies.
- Establish governance and best practices for model lifecycle, monitoring, error handling, and versioning.
- Act as a trusted advisor and technical leader—mentor engineers and evangelize design principles across teams.
🎯 What We’re Looking For
- 6+ years of experience delivering technical solutions in machine learning, AI engineering or solution architecture.
- Proven track record leading design, deployment, and integration of GenAI-based systems (LLM tuning, RAG, multi-agent orchestration).
- Fluency with Python production code, cloud platforms (AWS, GCP, Azure), and container orchestration tools.
- Excellent communication skills—able to bridge technical strategy and business outcomes with clarity.
- Startup mindset—resourceful, proactive, and hands‑on when needed.
- Bonus: experience with insurance-specific workflows or document intelligence domains (SOVs, loss runs, ACORD forms).
🛠️ Core Skills & Tools
- Foundation models, LLM pipelines, and vector-based retrieval (embedding search, RAG).
- Architecture modeling and integration: APIs, microservices, orchestration frameworks (LangChain, Haystack, DSPy).
- MLOps: CI/CD for models, monitoring, feedback loops, and retraining pipelines.
- Data engineering: preprocessing, structured/unstructured data integration, pipelines.
- Infrastructure: Kubernetes, Docker, cloud deployment, serverless components.
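Several of the tools above (RAG pipelines, vector databases, embedding search) share one core operation: rank stored chunks by embedding similarity to a query. Below is a minimal, framework-free sketch of that retrieval step, assuming embeddings are already computed (real systems use an embedding model and a vector database; the three-dimensional vectors here are toy values):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    # Rank documents by similarity to the query embedding and return
    # the top-k chunks, which a RAG pipeline would feed to the LLM.
    scored = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in scored[:k]]

# Toy "embeddings" for illustration only.
corpus = [
    {"text": "SOV: schedule of values for insured properties", "vec": [0.9, 0.1, 0.0]},
    {"text": "Loss run: claim history report",                 "vec": [0.1, 0.9, 0.0]},
    {"text": "Office lunch menu",                              "vec": [0.0, 0.0, 1.0]},
]
print(retrieve([0.8, 0.2, 0.0], corpus, k=1))
```

A production pipeline replaces the list scan with an approximate nearest-neighbor index, but the ranking idea is the same.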
📈 Why This Role Matters
As an AI Solution Architect, you’ll shape the blueprint for how AI transforms insurance workflows—aligning product strategy, operational impact, and technical scalability. You're not just writing code; you’re orchestrating systems that make labor-intensive processes smarter, faster, and more transparent.

Job Role: Sr. Data Engineer
Location: Navrangpura, Ahmedabad
WORK FROM OFFICE - 5 DAYS A WEEK (UK Shift)
Job Description:
• 5+ years of core experience in Python and data engineering.
• Must have experience with Azure Data Factory and Databricks.
• Exposure to Python data-processing libraries such as NumPy, pandas, Beautiful Soup, Selenium, pdfplumber, and Requests.
• Proficient in SQL programming.
• Knowledge of DevOps practices and tools such as CI/CD, Jenkins, and Git.
• Experience working with Azure Databricks.
• Able to coordinate with teams across multiple locations and time zones.
• Strong interpersonal and communication skills, with an ability to lead a team and keep them motivated.
Mandatory Skills: Data Engineer - Azure Data Factory, Databricks, Python, SQL/MySQL/PostgreSQL
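This posting pairs Python with SQL day to day. As an illustration of that combination, here is a self-contained sketch that lands extracted records into a SQL table with an upsert so pipeline re-runs stay idempotent (the table and column names are invented for the example):

```python
import sqlite3

# Hypothetical extracted records; the third row is a duplicate from a re-run.
rows = [
    ("INV-001", "2024-01-05", 120.50),
    ("INV-002", "2024-01-06", 80.00),
    ("INV-001", "2024-01-05", 120.50),
]

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE invoices (
        invoice_id TEXT PRIMARY KEY,
        invoice_date TEXT,
        amount REAL
    )
""")
# Upsert on the natural key so re-processing the same file is harmless.
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?) "
    "ON CONFLICT(invoice_id) DO UPDATE SET amount = excluded.amount",
    rows,
)
conn.commit()
count, total = conn.execute("SELECT COUNT(*), SUM(amount) FROM invoices").fetchone()
print(count, total)  # duplicates collapsed to 2 rows
```

The same pattern (MERGE/upsert keyed on a natural key) carries over to Databricks and PostgreSQL.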

Role Details:
We are seeking a highly skilled and experienced Automation Engineer to join our dynamic team. You will play a key role in designing, implementing, and maintaining our automation testing framework, with a primary focus on Selenium, Python, and BDD.
Position: Automation Engineer
Must-Have Skills & Qualifications:
- Bachelor’s degree in Engineering (Computer Science, IT, or related field)
- Hands-on experience with Selenium using Python and a BDD framework
- Strong foundation in Manual Testing
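Since Selenium-with-Python and a BDD framework are the core stack here, it may help to see how BDD binds plain-English steps to Python code. The sketch below is a framework-free illustration of that binding mechanism; behave and pytest-bdd do this for you, and all names here are made up:

```python
import re

STEPS = []

def step(pattern):
    # Register a step implementation keyed by a regex, the way BDD
    # frameworks bind Gherkin lines to Python functions.
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

def run_step(line, ctx):
    # Find the first registered pattern matching the English step
    # and call its implementation with the captured groups.
    for pattern, fn in STEPS:
        m = pattern.fullmatch(line)
        if m:
            return fn(ctx, *m.groups())
    raise LookupError(f"No step matches: {line!r}")

@step(r'the user enters "(\w+)" as username')
def enter_username(ctx, name):
    ctx["username"] = name

@step(r"the login button is clicked")
def click_login(ctx):
    # A real step would drive Selenium here; we simulate the outcome.
    ctx["logged_in"] = ctx.get("username") == "admin"

ctx = {}
run_step('the user enters "admin" as username', ctx)
run_step("the login button is clicked", ctx)
print(ctx["logged_in"])  # True
```

In a real suite the step bodies would call Selenium WebDriver methods; the regex-to-function dispatch is the part the framework owns.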
Good-to-Have Skills:
- Allure
- Boto3
- Appium
Benefits:
- Competitive salary & comprehensive benefits package
- Work with cutting-edge technologies & industry-leading experts
- Flexible hybrid work environment
- Professional development & continuous learning opportunities
- Dynamic, collaborative culture with career growth paths

About Certa
Certa is a leading innovator in the no-code SaaS workflow space, powering the full lifecycle for suppliers, partners, and third parties. From onboarding and risk assessment to contract management and ongoing monitoring, Certa enables businesses with automation, collaborative workflows, and continuously updated insights. Join us in our mission to revolutionize third-party management!
What You'll Do
- Partner closely with Customer Success Managers to understand client workflows, identify quality gaps, and ensure smooth solution delivery.
- Design, implement, and execute both manual and automated tests for client-facing workflows across our web platform.
- Write robust and maintainable test scripts using Python (Selenium) to validate workflows, integrations, and configurations.
- Own test planning for client-specific features, including writing clear test cases and sanity scenarios — even in the absence of detailed specs.
- Collaborate with Product, Engineering, and Customer Success teams to reproduce client-reported issues, root-cause them, and verify fixes.
- Lead or contribute to exploratory testing, regression cycles, and release validations before client rollouts.
- Proactively identify gaps, edge cases, and risks in client implementations and communicate them effectively to stakeholders.
- Act as a client-facing QA representative during solution validation, ensuring confidence in delivery and post-deployment success.
What We're Looking For
- 3–5 years of experience in Software QA (manual + automation), ideally with exposure to client-facing or Customer Success workflows.
- Strong understanding of core QA principles (priority vs. severity, regression vs. sanity, risk-based testing).
- Hands-on experience writing automation test scripts with Python (Selenium).
- Experience with modern automation frameworks (Playwright + TypeScript or equivalent) is a strong plus.
- Familiarity with SaaS workflows, integrations, or APIs (JSON, REST, etc.).
- Excellent communication skills — able to interface directly with clients, translate feedback into testable requirements, and clearly articulate risks/solutions.
- Proactive, curious, and comfortable navigating ambiguity when working on client-specific use cases.
Good to Have
- Previous experience in a Customer Success, Professional Services, or client-facing QA role.
- Experience with CI/CD pipelines, BDD/TDD frameworks, and test data management.
- Knowledge of security testing, performance testing, or accessibility testing.
- Familiarity with no-code platforms or workflow automation tools.
Perks
- Best-in-class compensation
- Fully remote work
- Flexible schedules
- Engineering-first, high-ownership culture
- Massive learning and growth opportunities
- Paid vacation, comprehensive health coverage, maternity leave
- Yearly offsite, quarterly hacker house
- Workstation setup allowance
- Latest tech tools and hardware
- A collaborative and high-trust team environment

Role Overview
We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.
You’ll also play a mentorship role and help establish strong engineering practices across our data projects.
Key Responsibilities
- Design and develop large-scale, distributed data pipelines (batch and streaming)
- Implement scalable data models, warehouses/lakehouses, and data lakes
- Translate business requirements into technical data solutions
- Optimize data pipelines for performance and reliability
- Ensure code is clean, modular, tested, and documented
- Contribute to architecture, tooling decisions, and platform setup
- Review code/design and mentor junior engineers
Must-Have Skills
- Strong programming skills in Python and advanced SQL
- Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing
- Hands-on experience with frameworks like Apache Spark, Flink, etc.
- Experience with orchestration tools like Airflow
- Familiarity with CI/CD pipelines and Git
- Ability to debug and scale data pipelines in production
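One of the must-haves above is stream processing. Stripped of Spark or Flink specifics, the core of a tumbling-window aggregation is just bucketing events into fixed time windows; a minimal sketch (the event data is made up):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    # Assign each (timestamp, key) event to a fixed-size window and
    # count events per (window, key): the essence of a tumbling-window
    # aggregation as found in Spark Structured Streaming or Flink.
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (30, "click"), (61, "click"), (65, "view")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (60, 'click'): 1, (60, 'view'): 1}
```

Real engines add watermarks and late-event handling on top, but the bucketing step is the same.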
Preferred Skills
- Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
- Exposure to Databricks, dbt, or similar tools
- Understanding of data governance, quality frameworks, and observability
- Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus
What We’re Looking For
- Problem-solver with strong analytical skills and attention to detail
- Fast learner who can adapt across tools, tech stacks, and domains
- Comfortable working in fast-paced, client-facing environments
- Willingness to travel within India when required

About the Role
We are looking for a skilled and dedicated FreeSWITCH Engineer with hands-on experience in VoIP systems. You will play a key role in developing, configuring, and maintaining scalable and reliable FreeSWITCH-based voice infrastructures. This is a 100% remote opportunity, giving you the flexibility to work from anywhere while collaborating with a global team.
Key Responsibilities
- Design, deploy, and maintain FreeSWITCH servers and related VoIP infrastructure.
- Troubleshoot and resolve FreeSWITCH and VoIP-related issues.
- Develop custom dial plans, modules, and call routing logic.
- Work with SIP, RTP, and related VoIP protocols.
- Monitor system performance and ensure high availability.
- Collaborate with development, network, and support teams to optimize voice systems.
- Document configurations, workflows, and system changes.
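For readers unfamiliar with the dial plans mentioned above: FreeSWITCH expresses call routing as XML extensions whose conditions match fields like the dialed number and then fire actions. An illustrative fragment (the extension name and gateway are hypothetical):

```xml
<!-- Match any 10-digit destination and bridge it out through a SIP gateway -->
<extension name="outbound_pstn">
  <condition field="destination_number" expression="^(\d{10})$">
    <action application="set" data="hangup_after_bridge=true"/>
    <action application="bridge" data="sofia/gateway/carrier_a/$1"/>
  </condition>
</extension>
```

Regex capture groups from the condition ($1 here) are available to actions, which is how custom routing logic stays declarative.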
Requirements
- Minimum 3 years of hands-on experience with FreeSWITCH in a production environment.
- Strong understanding of VoIP technologies and SIP protocol.
- Experience with Linux system administration.
- Familiarity with scripting languages (Bash, Python, Lua).
- Ability to work independently in a remote setup.
- Strong problem-solving and analytical skills.
Preferred Skills
- Experience with other VoIP platforms (Asterisk, Kamailio, OpenSIPS).
- Knowledge of WebRTC, RTP engines, or media servers.
- Exposure to monitoring tools (Grafana, Prometheus, Nagios).
- Familiarity with APIs and backend integration.
Why Join Us?
- 100% remote – work from anywhere.
- Collaborative and supportive team environment.
- Opportunity to work on innovative VoIP solutions at scale.

Job Summary:
We are looking for a highly skilled and experienced Data Engineer with deep expertise in Airflow, dbt, Python, and Snowflake. The ideal candidate will be responsible for designing, building, and managing scalable data pipelines and transformation frameworks to enable robust data workflows across the organization.
Key Responsibilities:
- Design and implement scalable ETL/ELT pipelines using Apache Airflow for orchestration.
- Develop modular and maintainable data transformation models using dbt.
- Write high-performance data processing scripts and automation using Python.
- Build and maintain data models and pipelines on Snowflake.
- Collaborate with data analysts, data scientists, and business teams to deliver clean, reliable, and timely data.
- Monitor and optimize pipeline performance and troubleshoot issues proactively.
- Follow best practices in version control, testing, and CI/CD for data projects.
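Airflow, named throughout this posting, models a pipeline as a DAG of tasks executed in dependency order. The sketch below illustrates only that ordering idea using the standard library, not the real Airflow API (task names are invented):

```python
from graphlib import TopologicalSorter

# Each key lists the upstream tasks it depends on
# (extract -> transform -> quality_check -> load).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"transform", "quality_check"},
}

def run(dag):
    executed = []
    # static_order() yields tasks so that every task appears only
    # after all of its upstream dependencies, like a scheduler would.
    for task in TopologicalSorter(dag).static_order():
        # In Airflow this is where an operator's execute() would run.
        executed.append(task)
    return executed

order = run(dag)
print(order)
```

In Airflow the same graph would be declared with operators and `>>` dependencies, and the scheduler handles retries, backfills, and parallelism.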
Must-Have Skills:
- Strong hands-on experience with Apache Airflow for scheduling and orchestrating data workflows.
- Proficiency in dbt (data build tool) for building scalable and testable data models.
- Expert-level skills in Python for data processing and automation.
- Solid experience with Snowflake, including SQL performance tuning, data modeling, and warehouse management.
- Strong understanding of data engineering best practices including modularity, testing, and deployment.
Good to Have:
- Experience working with cloud platforms (AWS/GCP/Azure).
- Familiarity with CI/CD pipelines for data (e.g., GitHub Actions, GitLab CI).
- Exposure to modern data stack tools (e.g., Fivetran, Stitch, Looker).
- Knowledge of data security and governance best practices.
Note : One face-to-face (F2F) round is mandatory, and as per the process, you will need to visit the office for this.

NOTE- This is a contractual role for a period of 3-6 months.
Responsibilities:
● Set up and maintain CI/CD pipelines across services and environments
● Monitor system health and set up alerts/logs for performance & errors
● Work closely with backend/frontend teams to improve deployment velocity
● Manage cloud environments (staging, production) with cost and reliability in mind
● Ensure secure access, role policies, and audit logging
● Contribute to internal tooling, CLI automation, and dev workflow improvements
Must-Haves:
● 2–3 years of hands-on experience in DevOps, SRE, or Platform Engineering
● Experience with Docker, CI/CD (especially GitHub Actions), and cloud providers (AWS/GCP)
● Proficiency in writing scripts (Bash, Python) for automation
● Good understanding of system monitoring, logs, and alerting
● Strong debugging skills, ownership mindset, and clear documentation habits
● Experience with infrastructure monitoring tools such as Grafana dashboards

We’re looking for a passionate Data & Automation Engineer to join our team and assist in managing and processing large volumes of structured and unstructured data. You'll work closely with our engineering and product teams to extract, transform, and load (ETL) data, automate data workflows, and format data for different use cases.
Key Responsibilities:
- Write efficient scripts using Python and Node.js to process and manipulate data
- Scrape and extract data from public and private sources (APIs, websites, files)
- Format and clean raw datasets for consistency and usability
- Upload data to various databases, including MongoDB and other storage solutions
- Create and maintain data pipelines and automation scripts
- Document processes, scripts, and schema changes clearly
- Collaborate with backend and product teams to support data-related needs
Skills Required:
- Proficiency in Python (especially for data manipulation using libraries like pandas, requests, etc.)
- Experience with Node.js for backend tasks or scripting
- Familiarity with MongoDB and understanding of NoSQL databases
- Basic knowledge of web scraping tools (e.g., BeautifulSoup, Puppeteer, or Cheerio)
- Understanding of JSON, APIs, and data formatting best practices
- Attention to detail, debugging skills, and a data-driven mindset
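The skills list above mentions web scraping tools like BeautifulSoup; the underlying job is walking an HTML tree and pulling out attributes. Here is a standard-library-only sketch of that extraction (in practice BeautifulSoup or Cheerio would be used; the HTML is a made-up fragment):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    # Collect every href from <a> tags, the kind of extraction a
    # library like BeautifulSoup makes a one-liner; shown here with
    # only the standard library for illustration.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

html = '<ul><li><a href="/jobs/1">Data Engineer</a></li><li><a href="/jobs/2">QA</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/jobs/1', '/jobs/2']
```

The cleaned links or fields would then flow into the formatting and MongoDB upload steps described above.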
Good to Have:
- Experience with data visualization or reporting tools
- Knowledge of other databases like PostgreSQL or Redis
- Familiarity with version control (Git) and working in agile teams


About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League alumni, ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians, including PhDs from top engineering and research institutes such as IITs, CERN, IISc, and UZH. Our team includes academicians, IBM Research Fellows, and former founders.
Work you’ll do
As a Data Engineer, you will work on data architecture, large-scale processing systems, and data flow management. You will build and maintain optimal data architecture and data pipelines, assemble large, complex data sets, and ensure that data is readily available to data scientists, analysts, and other users. In close collaboration with ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade solutions that directly impact business outcomes. Ultimately, you will be responsible for developing and implementing systems that optimize the organization’s data use and data quality.
Responsibilities
- Create and maintain optimal data architecture and data pipelines on cloud infrastructure (such as AWS/ Azure/ GCP)
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements
- Build the pipeline infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Support development of analytics that utilize the data pipeline to provide actionable insights into key business metrics
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
Who you are
You are a passionate and results-oriented engineer who understands the importance of data architecture and data quality to impact solution development, enhance products, and ultimately improve business applications. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You have experience in developing and deploying data pipelines to support real-world applications. You have a good understanding of data structures and are excellent at writing clean, efficient code to extract, create and manage large data sets for analytical uses. You have the ability to conduct regular testing and debugging to ensure optimal data pipeline performance. You are excited at the possibility of contributing to intelligent applications that can directly impact business services and make a positive difference to users.
Skills & Requirements
- 3+ years of hands-on experience as a data engineer, data architect or similar role, with a good understanding of data structures and data engineering.
- Solid knowledge of cloud infra and data-related services on AWS (EC2, EMR, RDS, Redshift) and/ or Azure.
- Advanced knowledge of SQL, including writing complex queries, stored procedures, views, etc.
- Strong experience with data pipeline and workflow management tools (such as Luigi, Airflow).
- Experience with common relational SQL, NoSQL and Graph databases.
- Strong experience with scripting languages: Python, PySpark, Scala, etc.
- Practical experience with basic DevOps concepts: CI/CD, containerization (Docker, Kubernetes), etc
- Experience with big data tools (Spark, Kafka, etc) and stream processing.
- Excellent communication skills to collaborate with colleagues from both technical and business backgrounds, discuss and convey ideas and findings effectively.
- Ability to analyze complex problems, think critically for troubleshooting and develop robust data solutions.
- Ability to identify and tackle issues efficiently and proactively, conduct thorough research and collaborate to find long-term, scalable solutions.
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.


Looking for a Senior Full-stack Engineer (backend-heavy) with 5+ years of experience for a research-focused, product-based, US-based startup.
AI Assistant for Research using state-of-the-art language models (AI Agent for Academic Research)
At SciSpace, we're using language models to automate and streamline research workflows from start to finish. And the best part? We're already making waves in the industry, with a whopping 5 million users on board as of November 2025! Our users love us too, with a 40% MoM retention rate and 10% of them using our app more than once a week! We're growing by more than 50% every month, all thanks to our awesome users spreading the word (see it yourself on Twitter). And with almost weekly feature launches since our inception, we're constantly pushing the boundaries of what's possible. Our team of experts in design, frontend, full-stack engineering, and machine learning is already in place, but we're always on the lookout for new talent to help us take things to the next level. Our user base is super engaged and always eager to provide feedback, making SciSpace one of the most advanced applications of language models out there.
We are looking for insatiably curious, always learning SDE 2 Fullstack Engineers. You could get a chance to work on the most important and challenging problems at scale.
Responsibilities:
* Manage products within the SciSpace product suite
* Partner with product owners in designing software that becomes part of researchers’ lives
* Model real-world scenarios into code that can build the SciSpace platform
* Test code that you write and continuously improve practices at SciSpace
* Arrive at technology decisions after extensive debates with other engineers
* Manage large projects from conceptualisation, all the way through deployments
* Evolve an ecosystem of tools and libraries that make it possible for SciSpace to provide reliable, always-on, performant services to our users
* Partner with other engineers in developing an architecture that is resilient to changes in product requirements and usage
* Work on the user-interface side and deliver a snappy, enjoyable experience to your users
Our Ideal Candidate Would Have:
* A strong grasp of one high-level language like Python, JavaScript, etc.
* A strong grasp of front-end HTML/CSS and non-trivial browser-side JavaScript
* General awareness of SQL and database design concepts
* A solid understanding of testing fundamentals
* Strong communication skills
* Prior experience in managing and executing technology products
Bonus:
* Prior experience working with high-volume, always-available web applications
* Experience with Elasticsearch.
* Experience with distributed systems.
* Start-up experience is a plus.

AccioJob is conducting a Walk-In Hiring Drive with Sceniuz IT Pvt. Ltd. for the position of Data Analyst.
To apply, register and select your slot here: https://go.acciojob.com/43pq6c
Required Skills: Excel, Python, MySQL, Power BI
Eligibility:
- Degree: BTech./BE, MTech./ME, BCA, MCA, BSc., MSc
- Branch: All
- Graduation Year: All
Work Details:
- Work Location: Mumbai (Onsite)
- CTC: 3 LPA to 6 LPA
Evaluation Process:
Round 1: Offline Assessment at Lokmanya Tilak College of Engineering
Further Rounds (for shortlisted candidates only):
Technical Interview 1
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/43pq6c

AccioJob is conducting a Walk-In Hiring Drive with Sceniuz IT Pvt. Ltd. for the position of Data Engineer.
To apply, register and select your slot here: https://go.acciojob.com/Rt7CmZ
Required Skills: Python, SQL, Azure
Eligibility:
- Degree: BTech./BE, MTech./ME, BCA, MCA, BSc., MSc
- Branch: All
- Graduation Year: All
Work Details:
- Work Location: Mumbai (Onsite)
- CTC: 3 LPA to 6 LPA
Evaluation Process:
Round 1: Offline Assessment at Lokmanya Tilak College of Engineering
Further Rounds (for shortlisted candidates only):
Technical Interview 1
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/Rt7CmZ

We are seeking a highly skilled React JS Developer with exceptional DOM manipulation expertise and real-time data handling experience to join our team. You'll be building and optimizing high-performance user interfaces for stock market trading applications where milliseconds matter and data flows continuously.
The ideal candidate thrives in fast-paced environments, understands the intricacies of browser performance, and has hands-on experience with WebSockets and real-time data streaming architectures.
Key Responsibilities
Core Development
- Advanced DOM Operations: Implement complex, performance-optimized DOM manipulations for real-time trading interfaces
- Real-time Data Management: Build robust WebSocket connections and handle high-frequency data streams with minimal latency
- Performance Engineering: Create lightning-fast, scalable front-end applications that process thousands of market updates per second
- Custom Component Architecture: Design and build reusable, high-performance React components optimized for trading workflows
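This role is React/JavaScript, but the trick behind handling "thousands of market updates per second" is language-agnostic: coalesce ticks between render frames so the DOM is touched once per frame with only the latest value per symbol. A small Python sketch of that coalescing step (symbols and prices are invented):

```python
def coalesce_ticks(ticks):
    # High-frequency feeds emit many updates per symbol between UI
    # frames; rendering every one would thrash the DOM. Keep only the
    # latest tick per symbol, then apply the batch once per frame,
    # the same coalescing a requestAnimationFrame loop does in JS.
    latest = {}
    for symbol, price in ticks:
        latest[symbol] = price  # later ticks overwrite earlier ones
    return latest

burst = [("AAPL", 189.10), ("MSFT", 402.5), ("AAPL", 189.15), ("AAPL", 189.12)]
print(coalesce_ticks(burst))  # {'AAPL': 189.12, 'MSFT': 402.5}
```

In a React app this would live between the WebSocket `onmessage` handler and a per-frame flush into state, capping render frequency regardless of feed rate.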
Collaboration & Integration
- Work closely with traders, quants, and backend developers to translate complex trading requirements into intuitive interfaces
- Collaborate with UX/UI designers and product managers to create responsive, trader-focused experiences
- Integrate with real-time market data APIs and trading execution systems
Technical Excellence
- Implement sophisticated data visualizations and interactive charts using libraries like Chart.js, TradingView, or custom D3.js solutions
- Ensure cross-browser compatibility and responsiveness across multiple devices and screen sizes
- Debug and resolve complex performance issues, particularly in real-time data processing and rendering
- Maintain high-quality code through reviews, testing, and comprehensive documentation
Required Skills & Experience
React & JavaScript Mastery
- 5+ years of professional React.js development with deep understanding of React internals, hooks, and advanced patterns
- Expert-level JavaScript (ES6+) with strong proficiency in asynchronous programming, closures, and memory management
- Advanced HTML5 & CSS3 skills with focus on performance and cross-browser compatibility
Real-time & Performance Expertise
- Proven experience with WebSockets and real-time data streaming protocols
- Strong DOM manipulation skills - direct DOM access, virtual scrolling, efficient updates, and performance optimization
- RESTful API integration with experience in handling high-frequency data feeds
- Browser performance optimization - understanding of rendering pipeline, memory management, and profiling tools
Development Tools & Practices
- Proficiency with modern build tools: Webpack, Babel, Vite, or similar
- Experience with Git version control and collaborative development workflows
- Agile/Scrum development environment experience
- Understanding of testing frameworks (Jest, React Testing Library)
Financial Data Visualization
- Experience with financial charting libraries: Chart.js, TradingView, D3.js, or custom visualization solutions
- Understanding of market data structures, order books, and trading terminology
- Knowledge of data streaming optimization techniques for financial applications
Nice-to-Have Skills
Domain Expertise
- Prior experience in stock market, trading, or financial services - understanding of trading workflows, order management, risk systems
- Algorithmic trading knowledge or exposure to quantitative trading systems
- Financial market understanding - equities, derivatives, commodities
Technical Plus Points
- Backend development experience with GoLang, Python, or Node.js
- Database knowledge: SQL, NoSQL, time-series databases (InfluxDB, TimescaleDB)
- Cloud platform experience: AWS, Azure, GCP for deploying scalable applications
- Message queue systems: Redis, RabbitMQ, Kafka, NATS for real-time data processing
- Microservices architecture understanding and API design principles
Advanced Skills
- Service Worker implementation for offline-first applications
- Progressive Web App (PWA) development
- Mobile-first responsive design expertise
Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent professional experience)
- 5+ years of professional React.js development with demonstrable experience in performance-critical applications
- Portfolio or examples of complex real-time applications you've built
- Financial services experience strongly preferred
Why You'll Love Working Here
We're a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity.
What We Offer
💰 Competitive salary – Get paid what you're worth
🌴 Generous paid time off – Recharge and come back sharper
🌍 Work with the best – Collaborate with top-tier global talent
✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings
🎯 Performance rewards – Multiple bonuses for those who go above and beyond
🏥 Health covered – Comprehensive insurance so you're always protected
⚡ Fun, not just work – On-site sports, games, and a lively workspace
🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers
📚 Annual Education Stipend – Take any external course, bootcamp, or certification that makes you better at your craft
🏋️ Stay fit – Gym memberships with equal employer contribution to keep you at your best
🚚 Relocation support – Smooth move? We've got your back
🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting
We work hard, play hard, and grow together. Join us.

We are seeking an experienced Operations Lead to drive operational excellence and lead a dynamic team in our fast-paced environment. The ideal candidate will combine strong technical expertise in Python with proven leadership capabilities to optimize processes, ensure system reliability, and deliver results.
Key Responsibilities
- Team & stakeholder leadership - Lead 3-4 operations professionals and work cross-functionally with developers, system administrators, quants, and traders
- DevOps automation & deployment - Develop deployment pipelines, automate configuration management, and build Python-based tools for operational processes and system optimization
- Technical excellence & standards - Drive code reviews, establish development standards, ensure regional consistency with DevOps practices, and maintain technical documentation
- System operations & performance - Monitor and optimize system performance for high availability, scalability, and security while managing day-to-day operations
- Incident management & troubleshooting - Coordinate incident response, resolve infrastructure and deployment issues, and implement automated solutions to prevent recurring problems
- Strategic technical leadership - Make infrastructure decisions, identify operational requirements, design scalable architecture, and stay current with industry best practices
- Reporting & continuous improvement - Report on operational metrics and KPIs to senior leadership while actively contributing to DevOps process improvements
Qualifications and Experience
- Bachelor's degree in Computer Science, Engineering, or related technical field
- At least 5 years of proven experience as a Software Engineer, including at least 2 years as a DevOps Engineer or in a similar role, working with complex software projects and environments.
- Excellent knowledge of cloud technologies, containers, and orchestration.
- Proficiency in scripting and programming languages such as Python and Bash.
- Experience with Linux operating systems and command-line tools.
- Proficient in using Git for version control.
Good to Have
- Experience with Nagios or similar monitoring and alerting systems
- Backend and/or frontend development experience for operational tooling
- Previous experience working in a trading firm or financial services environment
- Knowledge of database management and SQL
- Familiarity with cloud platforms (AWS, Azure, GCP)
- Experience with DevOps practices and CI/CD pipelines
- Understanding of network protocols and system administration
Why You’ll Love Working Here
We’re a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity.
Here’s what we offer:
💰 Competitive salary – Get paid what you’re worth.
🌴 Generous paid time off – Recharge and come back sharper.
🌍 Work with the best – Collaborate with top-tier global talent.
✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings.
🎯 Performance rewards – Multiple bonuses for those who go above and beyond.
🏥 Health covered – Comprehensive insurance so you’re always protected.
⚡ Fun, not just work – On-site sports, games, and a lively workspace.
🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers.
📚 Annual Education Stipend – Take any external course, bootcamp, or certification that makes you better at your craft.
🏋️ Stay fit – Gym memberships with equal employer contribution to keep you at your best.
🚚 Relocation support – Smooth move? We’ve got your back.
🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting.
We work hard, play hard, and grow together. Join us.
(P.S. We hire for talent, not pedigree—but if you’ve worked at a top tech co or fintech startup, we’d love to hear how you’ve shipped great products.)


Job Title: Python Developer (Full Time)
Location: Hyderabad (Onsite)
Interview: Virtual rounds, with a face-to-face final round
Experience Required: 4 + Years
Working Days: 5 Days
About the Role
We are seeking a highly skilled Lead Python Developer with a strong background in building scalable and secure applications. The ideal candidate will have hands-on expertise in Python frameworks, API integrations, and modern application architectures. This role requires a tech leader who can balance innovation, performance, and compliance while driving successful project delivery.
Key Responsibilities
- Application Development
- Architect and develop robust, high-performance applications using Django, Flask, and FastAPI.
- API Integration
- Design and implement seamless integration with third-party APIs (including travel-related APIs, payment gateways, and external service providers).
- Data Management
- Develop and optimize ETL pipelines for structured and unstructured data using data lakes and distributed storage solutions.
- Microservices Architecture
- Build modular, scalable applications using microservices principles for independent deployment and high availability.
- Performance Optimization
- Enhance application performance through load balancing, caching, and query optimization to deliver superior user experiences.
- Security & Compliance
- Apply secure coding practices, implement data encryption, and ensure compliance with industry security and privacy standards (e.g., PCI DSS, GDPR).
- Automation & Deployment
- Utilize CI/CD pipelines, Docker/Kubernetes, and monitoring tools for automated testing, deployment, and production monitoring.
- Collaboration
- Partner with front-end developers, product managers, and stakeholders to deliver user-centric, business-aligned solutions.
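As a small illustration of the caching work the Performance Optimization bullet describes, here is a minimal sketch using Python's standard-library `functools.lru_cache`; the route names and the fare function are hypothetical stand-ins for a slow database or API call:

```python
from functools import lru_cache

# Hypothetical expensive lookup standing in for a slow query or remote call.
@lru_cache(maxsize=1024)
def get_fare_quote(route: str) -> float:
    # Simulate a costly computation.
    return sum(ord(c) for c in route) / 100.0

# The first call computes; repeated calls for the same route hit the cache.
quote1 = get_fare_quote("DEL-BOM")
quote2 = get_fare_quote("DEL-BOM")
info = get_fare_quote.cache_info()
```

In production the same idea usually moves out of process (e.g., Redis), but the access pattern is identical.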
Requirements
Education
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Technical Expertise
- 4+ years of hands-on experience with Python frameworks (Django, Flask, FastAPI).
- Proficiency in RESTful APIs, GraphQL, and asynchronous programming.
- Strong knowledge of SQL/NoSQL databases (PostgreSQL, MongoDB) and big data tools (Spark, Kafka).
- Familiarity with Kibana, Grafana, Prometheus for monitoring and visualization.
- Experience with AWS, Azure, or Google Cloud, containerization (Docker, Kubernetes), and CI/CD tools (Jenkins, GitLab CI).
- Working knowledge of testing tools: PyTest, Selenium, SonarQube.
- Experience with API integrations, booking flows, and payment gateway integrations (travel domain knowledge is a plus, but not mandatory).
Soft Skills
- Strong problem-solving and analytical skills.
- Excellent communication, presentation, and teamwork abilities.
- Proactive, ownership-driven mindset with the ability to perform under pressure.


- Python & ML Libraries: Strong proficiency in Python and experience with ML libraries (TensorFlow, PyTorch, scikit-learn)
- ML & DS: Understanding of ML modeling concepts and evaluation metrics
- Containerization & Orchestration: Hands-on experience with Docker and Kubernetes for deploying ML models
- CI/CD Pipelines: Experience building automated CI/CD pipelines using tools like Jenkins, GitLab CI, GitHub Actions
- Cloud: Experience with AWS
- Workflow Automation: Kubeflow/Airflow/MLflow
- Model Monitoring: Familiarity with monitoring tools and techniques for ML models (e.g., Prometheus, Grafana, Seldon, Evidently AI)
- Version Control: Experience with Git for managing code and model versioning
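To make the evaluation-metrics requirement concrete, here is a minimal sketch of accuracy, precision, and recall computed by hand in plain Python; the labels are hypothetical, and in practice you would use `sklearn.metrics`:

```python
# Toy classification metrics, computed by hand for illustration.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if (tp + fn) else 0.0

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
```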

Job Summary:
We are seeking a skilled and motivated Python Django Developer with experience in building high-performance APIs using Django Ninja.
The ideal candidate will have a strong background in web development, API design, and backend systems.
Experience with IX-API and internet exchange operations is a plus.
You will play a key role in developing scalable, secure, and efficient backend services that support our network infrastructure and service delivery.
Key Responsibilities:
Design, develop, and maintain backend services using Python Django and Django Ninja.
Build and document RESTful APIs for internal and external integrations.
Collaborate with frontend, DevOps, and network engineering teams to deliver end-to-end solutions.
Ensure API implementations follow industry standards and best practices.
Optimize performance and scalability of backend systems.
Troubleshoot and resolve issues related to API functionality and performance.
Participate in code reviews, testing, and deployment processes.
Maintain clear and comprehensive documentation for APIs and workflows.
Required Skills & Qualifications:
Proven experience with Python Django and Django Ninja for API development.
Strong understanding of RESTful API design, JSON, and OpenAPI specifications.
Proficiency in Python and familiarity with asynchronous programming.
Experience with CI/CD tools (e.g., Jenkins, GitLab CI).
Knowledge of relational databases (e.g., PostgreSQL, MySQL).
Familiarity with version control systems (e.g., Git).
Excellent problem-solving and communication skills.
Preferred Qualifications:
Experience with IX-API development and integration.
Understanding of internet exchange operations and BGP routing.
Exposure to network automation tools (e.g., Ansible, Terraform).
Familiarity with containerization and orchestration tools (Docker, Kubernetes).
Experience with cloud platforms (AWS, Azure, GCP).
Contributions to open-source projects or community involvement.
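Django Ninja expresses request validation through typed schemas; the pattern can be sketched with standard-library dataclasses alone. The field names (`peer_asn`, `speed_mbps`) and the validator are hypothetical, not part of any real IX-API schema:

```python
from dataclasses import dataclass

@dataclass
class PortRequest:
    peer_asn: int
    speed_mbps: int

def validate_port_request(payload: dict) -> PortRequest:
    # Reject unknown/missing fields and type mismatches, as a schema layer would.
    allowed = {"peer_asn": int, "speed_mbps": int}
    if set(payload) != set(allowed):
        raise ValueError("unexpected or missing fields")
    for key, typ in allowed.items():
        if not isinstance(payload[key], typ):
            raise ValueError(f"{key} must be {typ.__name__}")
    return PortRequest(**payload)

req = validate_port_request({"peer_asn": 64512, "speed_mbps": 10000})
```

In Django Ninja itself, declaring the schema as the view's parameter type gives you this validation plus OpenAPI documentation for free.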

What You’ll Do:
- Architect & build our core backend using modern microservices patterns
- Develop intelligent AI/ML-driven systems for financial document processing at scale
- Own database design (SQL + NoSQL) for speed, reliability, and compliance
- Integrate vector search, caching layers, and pipelines to power real-time insights
- Ensure security, compliance, and data privacy at every layer of the stack
- Collaborate directly with founders to translate vision into shippable features
- Set engineering standards & culture for future hires
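The vector-search bullet above boils down to ranking stored embeddings by similarity to a query. A toy sketch with hypothetical embeddings, using only the standard library (a real system would delegate this to a vector database):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical document embeddings keyed by document type.
index = {
    "invoice": [0.9, 0.1, 0.0],
    "balance_sheet": [0.1, 0.9, 0.2],
}

def nearest(query):
    return max(index, key=lambda k: cosine(query, index[k]))

best = nearest([0.85, 0.15, 0.05])
```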
What You Bring:
Core Skills:
- Deep expertise in Python (Django, FastAPI, or Flask)
- Strong experience in SQL & NoSQL database architecture
- Hands-on with vector databases and caching systems (e.g., Redis)
- Proven track record building scalable microservices
- Strong grounding in security best practices for sensitive data
Experience:
- 1+ years building production-grade backend systems
- History of owning technical decisions that impacted product direction
- Ability to solve complex, high-scale technical problems
Bonus Points For:
- Experience building LLM-powered applications at scale
- Background in enterprise SaaS, or financial software
- Early-stage startup experience
- Familiarity with financial reporting/accounting concepts
Why Join Us:
- Founding team equity with significant upside
- Direct influence on product architecture & company direction
- Work with cutting-edge AI/ML tech on real-world financial data
- Backed by top-tier VC
- Join at ground zero and help shape our engineering culture

About Us :
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values :
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement :
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace.
Role : Senior AI Engineer
Location : Noida, Delhi/NCR
Role Overview :
As a Senior Data Scientist / AI Engineer, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs).
You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.
Key Responsibilities :
- Architect & Develop AI Solutions : Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
- Build AI Infrastructure : Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability.
- Lead AI Research & Innovation : Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
- Solve Business Problems : Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
- End-to-End Project Ownership : Take ownership of the entire lifecycle of AI projects, from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
- Team Leadership & Mentorship : Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
- Cross-Functional Collaboration : Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.
Required Skills and Qualifications :
- Master's (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
- 6+ years of professional experience in a Data Scientist, AI Engineer, or related role.
- Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
- Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
- Proven experience in developing and deploying scalable systems on cloud platforms, particularly AWS. Experience with GCS is a plus.
- Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
- Experience with containerization technologies, specifically Docker.
- Solid understanding of software engineering principles and experience building APIs and microservices.
Preferred Qualifications :
- A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus.
- Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
- Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
- Proven ability to lead technical teams and mentor other engineers.
- Experience developing custom tools or packages for data science workflows.


Core Skills to Look For:
- Frontend: React (major), HTML, CSS, JavaScript, TypeScript
- Backend: Python, C#, .NET, Node.js
- Databases: MySQL, MongoDB, Redis, Vector DB (Weaviate a plus)
- APIs: RESTful API design & integration
- DevOps / Deployment: Docker, Kubernetes
- Other: Full stack development, scalable web apps, problem-solving, ownership, team collaboration.

Job Description: Data Engineer
Location: Ahmedabad
Experience: 1+
Employment Type: Full-Time
We are looking for a highly motivated and experienced Data Engineer to join our team. As a Data Engineer, you will play a critical role in designing, building, and optimizing data pipelines that ensure the availability, reliability, and performance of our data infrastructure. You will collaborate closely with data scientists, analysts, and cross-functional teams to provide timely and efficient data solutions.
Responsibilities
● Design and optimize data pipelines for various data sources
● Design and implement efficient data storage and retrieval mechanisms
● Develop data modelling solutions and data validation mechanisms
● Troubleshoot data-related issues and recommend process improvements
● Collaborate with data scientists and stakeholders to provide data-driven insights and solutions
● Coach and mentor junior data engineers in the team
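The pipeline responsibilities above follow the classic extract-transform-load shape, which can be sketched in plain Python; the record fields are hypothetical, and production pipelines would run the same stages on Spark or under Airflow:

```python
def extract(rows):
    # Source records, e.g. raw CSV rows already parsed into dicts.
    yield from rows

def transform(records):
    for r in records:
        if r.get("amount") is None:
            continue  # drop incomplete records (simple validation step)
        yield {"city": r["city"].strip().title(), "amount": float(r["amount"])}

def load(records):
    # Aggregate into a target structure standing in for a warehouse table.
    totals = {}
    for r in records:
        totals[r["city"]] = totals.get(r["city"], 0.0) + r["amount"]
    return totals

raw = [
    {"city": " ahmedabad ", "amount": "120.5"},
    {"city": "Ahmedabad", "amount": "79.5"},
    {"city": "Surat", "amount": None},
]
totals = load(transform(extract(raw)))
```

Using generators keeps each stage streaming, so the pipeline never materializes the full dataset in memory.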
Skills Required:
● Minimum 1 year of experience in data engineering or related field
● Proficient in designing and optimizing data pipelines and data modeling
● Strong programming expertise in Python
● Hands-on experience with big data technologies such as Hadoop, Spark, and Hive
● Extensive experience with cloud data services such as AWS, Azure, and GCP
● Advanced knowledge of database technologies like SQL, NoSQL, and data warehousing
● Knowledge of distributed computing and storage systems
● Familiarity with DevOps practices; Power Automate and Microsoft Fabric will be an added advantage
● Strong analytical and problem-solving skills with outstanding communication and collaboration abilities
Qualifications
Bachelor's degree in Computer Science, Data Science, or a Computer related field


We are seeking Senior Backend Engineers who thrive in agile, fast-moving teams and can turn bold ideas into impactful products. You will play a critical role in developing high-quality backend systems, solving complex problems independently, and shaping next-gen solutions that drive our company’s mission forward. You should excel in building scalable, performant systems and have a deep understanding of design and architecture patterns.
What You’ll Do
- Developing all server-side logic, defining and maintaining the central database, and ensuring high performance and responsiveness to requests from the front-end.
- Writing clean, high-quality, high-performance, maintainable code.
- Owning the entire lifecycle of stories: development, test, production, and subsequent fixes and improvements.
- Solving complex technical problems.
- Developing a highly scalable and performant backend based on event-driven architecture.
- Building robust, secure and scalable microservices.
- Splitting features into tasks and owning the backend architecture and its evolution.
- Performing an objective analysis of the problem statement and coming up with an unbiased technical solution before writing a single line of code.
- Coordinating cross-functionally to ensure the project meets business objectives and compliance standards. This includes collaborating with QAs, PMs to ensure timely delivery of high-quality apps.
- Participating in, designing and driving the code review process.
- Implementing RESTful services with a metric-driven API Gateway.
- Ensuring sub-second server response and implementing relational, document, key, object or graph data-stores, index stores and messaging stores as needed.
- Tracking defects and working with business owners and users to triage bugs and manage backlog.
- Taking ownership to run and maintain Cloud infrastructure.
- Evaluating relevant technologies, influencing and driving architecture and design discussions.
- Owning and delivering end-to-end backend features, ensuring high performance, reliability, and scalability.
- Collaborating with product stakeholders to deliver impactful solutions aligned with business objectives.
- Writing clean, efficient, maintainable code using best practices and participating in code reviews.
- Building reusable services and contributing to core engineering libraries.
- Documenting and demonstrating solutions through documentation, DFDs, and code comments.
- Solving complex technical problems with objectivity before writing code.
- Taking ownership of backend systems, including APIs, data pipelines, and infrastructure.
- Working closely with Product to help drive KPIs and align engineering goals with business outcomes.
- Staying agile and adapting quickly to evolving requirements, scope, and priorities.
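The event-driven architecture mentioned above can be reduced to a publish/subscribe core. A toy in-process event bus, with hypothetical topic and handler names (real systems would put Kafka or a message broker between publisher and subscriber):

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process pub/sub: handlers register per topic."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler registered for the topic.
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()
audit_log = []
bus.subscribe("order.created", lambda e: audit_log.append(e["id"]))
bus.publish("order.created", {"id": 42})
```

Decoupling producers from consumers this way is what lets individual microservices scale and fail independently.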
What You Bring
- 3+ years of experience in software development with strong expertise in Golang.
- Overall 6+ years of experience in software development with a strong base and framework expertise in Java/Go/Python.
- Basic working knowledge of frontend technologies.
- Experience with frameworks like Spring Boot, Quarkus, and Gin/Mux.
- Experience in working with microservice architectures, transactional systems, and distributed environments.
- Experience building RESTful APIs with monitoring, fault tolerance, and metrics (with something like New Relic).
- Experience with MySQL, PostgreSQL, NoSQL (Cassandra, Redis, DynamoDB).
- Proficient in OOP, SQL, Design Patterns with Data modeling experience in Relational databases.
- Proficient in building GraphQL APIs.
- Proficient with Continuous Integration (CI), Continuous Deployment (CD) and version control (Git).
- Well-versed with TDD and Test Engineering and Automation.
- Excellent attention to detail.
- Outstanding written and verbal communication skills.
- Experience mentoring a team of 2–3 engineers.
- A self-starter who can work well with minimal-to-no guidance in a fluid environment.
- Excited by challenges surrounding the development of highly scalable & distributed systems.
- Agile and able to adapt quickly to changing requirements, scope, and priorities.
- Experienced in working on massively large-scale data systems in production environments.
Bonus Points For
- Open-source contributions, side-projects, blog posts and YT tech videos.
- Experience deploying serverless applications to GCP.
- Experience in Cloud Run, Cloud Pub/Sub, Cloud Tasks, Kubernetes, Cloud Vision.
- Experience with AWS stack.
- Machine learning experience.
What You Get
- Best in class salary: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.
About Us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.
We are Proximity - a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product and design teams.

About the Role :
We are seeking an experienced Python Backend Lead to design, develop, and optimize scalable backend solutions.
The role involves working with large-scale data, building efficient APIs, integrating middleware solutions, and ensuring high performance and reliability.
You will lead a team of developers while also contributing hands-on to coding, design, and architecture.
Mandatory Skills : Python (Pandas, NumPy, Matplotlib, Plotly), FastAPI/FlaskAPI, SQL & NoSQL (MongoDB, CRDB, Postgres), Middleware tools (Mulesoft/BizTalk), CI/CD, RESTful APIs, OOP, OOD, DS & Algo, Design Patterns.
Key Responsibilities :
1. Lead backend development projects using Python (FastAPI/FlaskAPI).
2. Design, build, and maintain scalable APIs and microservices.
3. Work with SQL and NoSQL databases (MongoDB, CRDB, Postgres).
4. Implement and optimize middleware integrations (Mulesoft, BizTalk).
5. Ensure smooth deployment using CI/CD pipelines.
6. Apply Object-Oriented Programming (OOP), Design Patterns, and Data Structures & Algorithms to deliver high-quality solutions.
7. Collaborate with cross-functional teams (frontend, DevOps, product) to deliver business objectives.
8. Mentor and guide junior developers, ensuring adherence to best practices and coding standards.
Required Skills :
1. Strong proficiency in Python with hands-on experience in Pandas, NumPy, Matplotlib, Plotly.
2. Expertise in FastAPI / FlaskAPI frameworks.
3. Solid knowledge of SQL & NoSQL databases (MongoDB, CRDB, Postgres).
4. Experience with middleware tools such as Mulesoft or BizTalk.
5. Proficiency in RESTful APIs, Web Services, and CI/CD pipelines.
6. Strong understanding of OOP, OOD, Design Patterns, and DS & Algo.
7. Excellent problem-solving, debugging, and optimization skills.
8. Prior experience in leading teams is highly desirable.
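As one concrete instance of the design patterns this role expects, here is a minimal Strategy pattern sketch; the pricing rules and numbers are purely hypothetical:

```python
from abc import ABC, abstractmethod

class PricingStrategy(ABC):
    """Interface for interchangeable pricing algorithms."""

    @abstractmethod
    def price(self, base: float) -> float: ...

class FlatDiscount(PricingStrategy):
    def price(self, base: float) -> float:
        return base - 10.0

class PercentDiscount(PricingStrategy):
    def price(self, base: float) -> float:
        return base * 0.85

def checkout(base: float, strategy: PricingStrategy) -> float:
    # The caller picks the behavior; checkout stays unchanged.
    return round(strategy.price(base), 2)

flat = checkout(100.0, FlatDiscount())
pct = checkout(100.0, PercentDiscount())
```

Swapping strategies without touching `checkout` is the point: new pricing rules become new classes, not new branches.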

Job Title : Python Backend Lead / Senior Python Developer
Experience : 6 to 10 Years
Location : Bangalore (CV Raman Nagar)
Openings : 8
Interview Rounds : 1 Virtual + 1 In-Person (Face-to-Face with Client)
Note : Only local Bangalore candidates will be considered
About the Role :
We are seeking an experienced Python Backend Lead / Senior Python Developer to design, develop, and optimize scalable backend solutions.
The role involves working with large-scale data, building efficient APIs, integrating middleware solutions, and ensuring high performance and reliability.
You will lead a team of developers while also contributing hands-on to coding, design, and architecture.
Mandatory Skills : Python (Pandas, NumPy, Matplotlib, Plotly), FastAPI/FlaskAPI, SQL & NoSQL (MongoDB, CRDB, Postgres), Middleware tools (Mulesoft/BizTalk), CI/CD, RESTful APIs, OOP, OOD, DS & Algo, Design Patterns.
Key Responsibilities :
- Lead backend development projects using Python (FastAPI/FlaskAPI).
- Design, build, and maintain scalable APIs and microservices.
- Work with SQL and NoSQL databases (MongoDB, CRDB, Postgres).
- Implement and optimize middleware integrations (Mulesoft, BizTalk).
- Ensure smooth deployment using CI/CD pipelines.
- Apply Object-Oriented Programming (OOP), Design Patterns, and Data Structures & Algorithms to deliver high-quality solutions.
- Collaborate with cross-functional teams (frontend, DevOps, product) to deliver business objectives.
- Mentor and guide junior developers, ensuring adherence to best practices and coding standards.
Required Skills :
- Strong proficiency in Python with hands-on experience in Pandas, NumPy, Matplotlib, Plotly.
- Expertise in FastAPI / FlaskAPI frameworks.
- Solid knowledge of SQL & NoSQL databases (MongoDB, CRDB, Postgres).
- Experience with middleware tools such as Mulesoft or BizTalk.
- Proficiency in RESTful APIs, Web Services, and CI/CD pipelines.
- Strong understanding of OOP, OOD, Design Patterns, and DS & Algo.
- Excellent problem-solving, debugging, and optimization skills.
- Prior experience in leading teams is highly desirable.

Position: Junior AI Research Engineer (Generative AI)
Location: Noida
Company: CodeFire Technologies Pvt. Ltd.
About the Role:
We are looking for a sharp and motivated Junior AI Research Engineer to join our team and work on cutting-edge Generative AI projects. If you're from a premier institute (IIT/NIT/IIIT) and love solving complex problems, this is your chance to work hands-on with large language models, GenAI tools, and cutting-edge AI solutions.
What You'll Do:
1. Understand nuances of multiple LLMs and use them in developing applications
2. Explore different techniques of prompt engineering and measure its impact on solutions
3. Look out for latest research and keep yourself updated
4. Be part of core group that leads Gen AI practice in the company
What We're Looking For:
1. Recent graduate (any branch) from IIT/NIT/IIIT
2. Strong analytical and logical reasoning
3. Research oriented mindset
4. Passionate to be on the forefront of GenAI revolution
5. Hands-on experience (projects, papers, GitHub, etc.) in AI
6. Worked with Python, PyTorch/TensorFlow, LangChain, or OpenAI APIs
Why Join Us:
1. Work on meaningful, cutting edge, real-world GenAI applications
2. Mentorship from experienced tech leaders
3. Flexible and innovation-driven culture
4. Exposure to early-stage AI product building

Senior Python Django Developer
Experience: Back-end development: 6 years (Required)
Location: Bangalore/ Bhopal
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
Requirements:
- 6 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
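The reconciliation work mentioned above is, at its core, matching internal ledger entries against gateway settlement records. A simplified sketch with hypothetical transaction data:

```python
# Hypothetical internal ledger and gateway settlement file.
ledger = [
    {"txn_id": "T1", "amount": 500},
    {"txn_id": "T2", "amount": 750},
    {"txn_id": "T3", "amount": 120},
]
settlement = [
    {"txn_id": "T1", "amount": 500},
    {"txn_id": "T2", "amount": 700},  # amount mismatch
]

def reconcile(ledger, settlement):
    # Classify each ledger entry: matched, amount mismatch, or unsettled.
    settled = {s["txn_id"]: s["amount"] for s in settlement}
    matched, mismatched, missing = [], [], []
    for entry in ledger:
        tid = entry["txn_id"]
        if tid not in settled:
            missing.append(tid)
        elif settled[tid] != entry["amount"]:
            mismatched.append(tid)
        else:
            matched.append(tid)
    return matched, mismatched, missing

matched, mismatched, missing = reconcile(ledger, settlement)
```

Real settlement files add currencies, fees, and partial captures, but the classification step stays the same.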
Job Types: Full-time, Permanent
Schedule:
- Day shift
Supplemental Pay:
- Performance bonus
- Yearly bonus
Ability to commute/relocate:
- JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)


- 4+ years of experience
- Proficiency in Python programming.
- Experience with Python service development (REST API/Flask).
- Basic knowledge of front-end development.
- Basic knowledge of data manipulation and analysis libraries.
- Code versioning and collaboration (Git).
- Knowledge of libraries for extracting data from websites.
- Knowledge of SQL and NoSQL databases.
- Familiarity with cloud (Azure/AWS) technologies.

Job Description:
- 4+ years of experience in a Data Engineer role,
- Experience with object-oriented/functional scripting languages: Python, Scala, Golang, Java, etc.
- Experience with big data tools such as Spark, Hadoop, Kafka, Airflow, and Hive
- Experience with Streaming data: Spark/Kinesis/Kafka/Pubsub/Event Hub
- Experience with GCP/Azure data factory/AWS
- Strong in SQL Scripting
- Experience with ETL tools
- Knowledge of Snowflake Data Warehouse
- Knowledge of orchestration frameworks: Airflow/Luigi
- Good to have knowledge of Data Quality Management frameworks
- Good to have knowledge of Master Data Management
- Self-learning abilities are a must
- Familiarity with upcoming new technologies is a strong plus.
- Should have a bachelor's degree in big data analytics, computer engineering, or a related field
Personal Competency:
- Strong communication skills are a MUST
- Self-motivated, detail-oriented
- Strong organizational skills
- Ability to prioritize workloads and meet deadlines
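The ETL experience this listing asks for reduces to an extract-transform-load pass that an orchestrator such as Airflow would schedule. A toy sketch using an in-memory SQLite sink; the `sales` schema and cleaning rules are illustrative, not from the posting:

```python
import sqlite3

def run_etl(rows):
    """Extract -> transform -> load into SQLite, then aggregate downstream."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    # Transform: drop malformed records and normalise region names.
    clean = [(r["region"].strip().upper(), float(r["amount"]))
             for r in rows if r.get("amount") is not None]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", clean)
    # A typical aggregation a downstream task or warehouse query would run.
    return dict(conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"))
```

In production the same shape appears with Spark DataFrames and a Snowflake sink instead of SQLite.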



Title: Quantitative Developer
Location: Mumbai
Candidates with a Master's degree are preferred.
Who We Are
At Dolat Capital, we are a collective of traders, puzzle solvers, and tech enthusiasts passionate about decoding the intricacies of financial markets. From navigating volatile trading conditions with precision to continuously refining cutting-edge technologies and quantitative strategies, our work thrives at the intersection of finance and engineering.
We operate a robust, ultra-low-latency infrastructure built for market-making and active trading across Equities, Futures, and Options—with some of the highest fill rates in the industry. If you're excited by technology, trading, and critical thinking, this is the place to evolve your skills into world-class capabilities.
What You Will Do
This role offers a unique opportunity to work across both quantitative development and high-frequency trading. You'll engineer trading systems, design and implement algorithmic strategies, and participate directly in live trading execution and strategy enhancement.
1. Quantitative Strategy & Trading Execution
- Design, implement, and optimize quantitative strategies for trading derivatives, index options, and ETFs
- Trade across options, equities, and futures, using proprietary HFT platforms
- Monitor and manage PnL performance, targeting Sharpe ratios of 6+
- Stay proactive in identifying market opportunities and inefficiencies in real-time HFT environments
- Analyze market behavior, particularly in APAC indices, to adjust models and positions dynamically
2. Trading Systems Development
- Build and enhance low-latency, high-throughput trading systems
- Develop tools to simulate trading strategies and access historical market data
- Design performance-optimized data structures and algorithms for fast execution
- Implement real-time risk management and performance tracking systems
3. Algorithmic and Quantitative Analysis
- Collaborate with researchers and traders to integrate strategies into live environments
- Use statistical methods and data-driven analysis to validate and refine models
- Work with large-scale HFT tick data using Python / C++
4. AI/ML Integration
- Develop and train AI/ML models for market prediction, signal detection, and strategy enhancement
- Analyze large datasets to detect patterns and alpha signals
5. System & Network Optimization
- Optimize distributed and concurrent systems for high-transaction throughput
- Enhance platform performance through network and systems programming
- Utilize deep knowledge of TCP/UDP and network protocols
6. Collaboration & Mentorship
- Collaborate cross-functionally with traders, engineers, and data scientists
- Represent Dolat in campus recruitment and industry events as a technical mentor
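The PnL targets above are expressed as Sharpe ratios of 6+. As a reminder of what that metric measures, here is a small sketch of an annualised Sharpe computation from daily returns; the 252-trading-day convention and the helper name are assumptions, not Dolat's actual methodology:

```python
import math
import statistics

def annualised_sharpe(daily_returns, risk_free_daily=0.0, periods=252):
    """Annualised Sharpe ratio: mean excess return over its volatility,
    scaled by the square root of the number of trading periods per year."""
    excess = [r - risk_free_daily for r in daily_returns]
    mean = statistics.mean(excess)
    stdev = statistics.stdev(excess)  # sample standard deviation
    return (mean / stdev) * math.sqrt(periods)
```

A Sharpe of 6+ therefore implies very consistent daily PnL relative to its variance, which is characteristic of high-frequency market-making rather than directional trading.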
What We Are Looking For:
- Strong foundation in data structures, algorithms, and object-oriented programming (C++).
- Experience with AI/ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Hands-on experience in systems programming within a Linux environment.
- Proficient, hands-on programming in Python/C++.
- Familiarity with distributed computing and high-concurrency systems.
- Knowledge of network programming, including TCP/UDP protocols.
- Strong analytical and problem-solving skills.
- A passion for technology-driven solutions in the financial markets.

- Demonstrated experience as a Python developer
- Good understanding and practical experience with Python frameworks including Django, Flask, and Bottle
- Proficient with Amazon Web Services and experienced in working with APIs
- Solid understanding of SQL databases, including MySQL
- Experience and knowledge of JavaScript is a benefit
- Highly skilled with attention to detail
- Good mentoring and leadership abilities
- Excellent communication skills
- Ability to prioritize and manage own workload

About I Vision Infotech
I Vision Infotech is a leading IT company in India offering innovative and cost-effective web and mobile solutions. Since 2011, we've served global clients across the USA, UK, Australia, Malaysia, and Canada. Our core services include Web Development, E-commerce, and Mobile App Development for Android, iOS, and cross-platform technologies.
About the Training Program: Python Job-Oriented Training
This Paid Python Job-Oriented Training Program is designed for students, freshers, and career switchers who want to build a strong career in Python Development, Data Analysis, or Backend Web Development. You’ll gain hands-on experience by working on real-time industry projects and learning from experienced professionals.
What You Will Learn
Core Python Programming
- Python Basics (Data Types, Loops, Conditions, Functions)
- Object-Oriented Programming (OOP)
- File Handling, Error Handling, Modules
Web Development (Python + Django / Flask)
- Web App Development with Django
- Template Integration (HTML/CSS)
- REST API Development
- CRUD Operations & Authentication
- Deployment on Hosting Platforms
Database Integration
- MySQL / SQLite / PostgreSQL
- ORM (Object Relational Mapping) in Django
Data Analysis (Optional Track)
- Pandas, NumPy, Matplotlib
- Basic Data Cleaning & Visualization
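The optional data-analysis track teaches cleaning with pandas; the underlying idea (coerce values, drop malformed rows, summarise) can be sketched with the standard library alone. `clean_and_summarise` is a hypothetical helper for illustration, not part of the course material:

```python
import statistics

def clean_and_summarise(records):
    """Basic data cleaning: coerce numeric strings, drop rows that fail,
    then report simple summary statistics."""
    values = []
    for raw in records:
        try:
            values.append(float(raw))
        except (TypeError, ValueError):
            continue  # drop malformed entries, as pandas dropna()/to_numeric would
    return {"n": len(values),
            "mean": statistics.mean(values),
            "stdev": statistics.pstdev(values)}
```

With pandas the same pass becomes `pd.to_numeric(s, errors="coerce").dropna().describe()`.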
Tools & Technologies
- Python 3.x, Django / Flask
- Git & GitHub
- Postman (for API Testing)
- VS Code / PyCharm
- Cloud Deployment (optional)
Training Highlights
- 100% Practical & Hands-on Learning
- Real-Time Projects & Assignments
- Git & Version Control Exposure
- Internship Completion Certificate (3 Months)
- Resume + GitHub Portfolio Support
- Interview Preparation & Placement Assistance
Eligibility Criteria
- BCA / MCA / B.Sc IT / M.Sc IT / Diploma / B.E / B.Tech
- No prior experience required – just basic computer knowledge
- Strong interest in Python programming / backend / data
Important Notes
- This is a Paid Training Program with a job-oriented structure.
- Training Fees to be paid before the batch starts.
- Limited seats; strictly first-come, first-served.
- Only for serious learners who want to build a tech career.
- No time-pass inquiries, please.
Job Title : Senior Security Engineer – ServiceNow Security & Threat Modelling
Experience : 6+ Years
Location : Remote
Type : Contract
Job Summary :
We’re looking for a Senior Security Engineer to strengthen our ServiceNow ecosystem with security-by-design.
You will lead threat modelling, perform security design reviews, embed security in SDLC, and ensure risks are mitigated across applications and integrations.
Mandatory Skills :
ServiceNow platform security, threat modelling (STRIDE/PASTA), SAST/DAST (Checkmarx/Veracode/Burp/ZAP), API security, OAuth/SAML/SSO, secure CI/CD, JavaScript/Python.
Key Responsibilities :
- Drive threat modelling, design reviews, and risk assessments.
- Implement & manage SAST/DAST, secure CI/CD pipelines, and automated scans.
- Collaborate with Dev/DevOps teams to instill secure coding practices.
- Document findings, conduct vendor reviews & support ITIL-driven processes.
- Mentor teams on modern security tools and emerging threats.
Required Skills :
- Strong expertise in ServiceNow platform security & architecture.
- Hands-on with threat modelling (STRIDE/PASTA, attack trees).
- Experience using SAST/DAST tools (Checkmarx, Veracode, Burp Suite, OWASP ZAP).
- Proficiency in API & Web Security, OAuth, SAML, SSO.
- Knowledge of secure CI/CD pipelines & automation.
- Strong coding skills in JavaScript/Python.
- Excellent troubleshooting & analytical abilities for distributed systems.
Nice-to-Have :
- Certifications: CISSP, CEH, OSCP, CSSLP, ServiceNow Specialist.
- Knowledge of cloud security (AWS/Azure/GCP) & compliance frameworks (ISO, SOC2, GDPR).
- Experience with incident response, forensics, SIEM tools.
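API security with shared secrets, one of the mandatory skills above, typically means HMAC request signatures verified in constant time. A minimal sketch; the key and helper names are illustrative, not from any ServiceNow API:

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # hypothetical key; in practice, pulled from a vault

def sign(payload: bytes) -> str:
    """Compute an HMAC-SHA256 signature over a request body."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest avoids the timing side channels a security review would flag
    return hmac.compare_digest(sign(payload), signature)
```

The same pattern underpins webhook verification in most SaaS integrations, which is why SAST/DAST tooling checks for naive `==` comparisons here.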
We are looking for a highly skilled and experienced Senior IT Infrastructure Automation Engineer to join our dynamic team. This critical role involves designing, developing, and implementing automation solutions that streamline IT operations, enhance reliability, and improve scalability across on-premises, cloud, and hybrid environments. Lead initiatives that reduce manual effort, increase consistency, and support critical infrastructure systems while collaborating closely with cross-functional teams.
Must Have Skills:
Scripting Languages:
PowerShell (advanced modules)
Python (advanced scripting & modules)
Bash (for Linux automation)
Configuration & Orchestration:
Strong expertise in Ansible
Operating Systems:
Windows Server (advanced)
Linux (Ubuntu, CentOS)
Virtualization:
VMware (advanced knowledge)
Nutanix (working knowledge)
Integration: REST APIs for system integrations
Good to Have Skills:
DevOps tools: Terraform, Kubernetes
Cloud Platforms: AWS, Azure
Citrix Cloud & NetScaler experience
Commvault backup automation
Endpoint management with BigFix, JAMF
Networking: routing, firewalls, ACLs
Database administration and scripting
Basic web development (Ruby, PHP, Python, JavaScript)
Job Description:
Automation Strategy & Development: Design and implement automation workflows for routine IT operations across Windows and Linux environments (e.g., user management, patching, service configuration). Create and manage scripts for: Active Directory (user/group management, GPOs, security settings), DNS & DHCP management. Develop custom automation tools using Python, PowerShell, and Bash to support various infrastructure needs.
Virtualization & Cloud Automation: Automate provisioning, scaling, and monitoring for VMware and Nutanix environments. Develop automation for Citrix Cloud and NetScaler for tasks like app publishing and load balancing. Integrate and automate cloud components (AWS or Azure) to support a hybrid infrastructure.
Monitoring & Backup Reliability: Implement automated LogicMonitor solutions for proactive issue detection. Build automation scripts for Commvault to manage backup and recovery, including reporting. Create monitoring and alerting tools using Python, PowerShell, or Bash.
Integration & Tooling: Use REST APIs to integrate third-party platforms (e.g., ServiceNow, LogicMonitor, NetScaler). Leverage Ansible for configuration management across Windows and Linux. (Preferred) Use DevOps tools such as Terraform and Kubernetes for provisioning and orchestration.
Endpoint Management & Security: Automate endpoint patching, software deployment, and asset tracking using tools like BigFix, JAMF, and Active Directory. Collaborate with cybersecurity teams to automate threat response and enforce compliance standards.
Documentation & Collaboration: Maintain clear and accurate documentation for all automation solutions, procedures, and systems. Keep the CMDB up to date for server and workstation deployments. Provide guidance and support to other administrators using automation tools. Record project and support activities using ServiceNow or similar platforms. Participate in regular team and stakeholder meetings to share updates and gather feedback.
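Automation workflows that integrate third-party platforms over REST, as described above, need to tolerate transient failures. A small retry sketch with a fixed back-off; `with_retries` is a hypothetical helper, not part of Ansible or any tool named in the listing:

```python
import time

def with_retries(task, attempts=3, delay=0.01):
    """Run an automation step, retrying transient failures a fixed number
    of times before surfacing the last error."""
    last_error = None
    for _ in range(attempts):
        try:
            return task()
        except Exception as exc:  # in production, catch narrower error types
            last_error = exc
            time.sleep(delay)
    raise last_error
```

Wrapping each API call this way keeps a long provisioning workflow from failing on a single dropped connection.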


Experience Required: 2-4 Years
No. of vacancies: 1
Job Type: Full Time
Vacancy Role: WFO
Job Category: Development
Job Description
We are looking for a highly skilled and experienced Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with experience in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF) and a solid understanding of machine learning concepts and their practical applications are essential.
Roles & Responsibilities
- Develop and maintain web applications using Django and Flask frameworks.
- Design and implement RESTful APIs using Django Rest Framework (DRF).
- Deploy, manage, and optimize applications on AWS.
- Develop and maintain APIs for AI/ML models and integrate them into existing systems.
- Create and deploy scalable AI and ML models using Python.
- Ensure the scalability, performance, and reliability of applications.
- Write clean, maintainable, and efficient code following best practices.
- Perform code reviews and provide constructive feedback to peers.
- Troubleshoot and debug applications, identifying and fixing issues in a timely manner.
- Stay up-to-date with the latest industry trends and technologies to ensure our applications remain current and competitive.
Qualifications
- Candidates should have a minimum of 2 years of experience.
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Proficient in Python with a strong understanding of its ecosystem.
- Extensive experience with Django and Flask frameworks.
- Hands-on experience with AWS services, including but not limited to EC2, S3, RDS, Lambda, and CloudFormation.
- Strong knowledge of Django Rest Framework (DRF) for building APIs.
- Experience with machine learning libraries and frameworks, such as scikit-learn, TensorFlow, or PyTorch.
- Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Familiarity with front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.
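The role above asks for APIs that serve ML models. Stripped of frameworks, a deployed model is just a fitted function behind an endpoint; here is a from-scratch ordinary-least-squares sketch of that idea (in practice scikit-learn or TensorFlow would supply the model, and DRF the endpoint):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def predict(model, x):
    """The part an API view would call per request."""
    a, b = model
    return a * x + b
```

The train-once, predict-per-request split is what the "develop and maintain APIs for AI/ML models" bullet amounts to architecturally.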


About the Job
Data Scientist
Experience: 3–5 Years
Salary: Competitive
Preferred Notice Period: Immediate to 30 Days
Opportunity Type: Remote (Global)
Placement Type: Freelance/Contract
(Note: This is a requirement for one of TalentLo’s Clients)
About TalentLo
TalentLo is a revolutionary talent platform connecting exceptional tech professionals with high-quality clients worldwide. We’re building a carefully curated pool of skilled experts to match with companies actively seeking specialized talent for impactful projects.
Role Overview
We’re seeking experienced Data Scientists with 3–5 years of professional experience to analyze large datasets, design data-driven solutions, and build predictive models that support critical business decisions. This is a freelance/contract opportunity where you’ll collaborate with global teams on challenging, high-impact projects.
Responsibilities
- Collect, clean, and preprocess structured and unstructured data
- Build and evaluate statistical models and machine learning algorithms
- Translate business problems into data science solutions
- Perform exploratory data analysis (EDA) and identify key insights
- Communicate findings through visualizations, reports, and presentations
- Collaborate with engineers, analysts, and stakeholders to deploy solutions
- Ensure scalability and performance of data science workflows
Requirements
- 3–5 years of professional experience in Data Science
- Strong proficiency in Python (pandas, NumPy, scikit-learn)
- Solid understanding of statistics, probability, and hypothesis testing
- Experience in SQL and working with relational databases
- Hands-on experience with machine learning techniques (classification, regression, clustering, etc.)
- Strong ability to visualize and communicate insights (Tableau, Power BI, or matplotlib/seaborn)
- Familiarity with cloud platforms (AWS, GCP, Azure) is a plus
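Hypothesis testing, listed in the requirements above, often starts with a one-sample t statistic. A stdlib-only sketch (in practice `scipy.stats.ttest_1samp` would supply this along with the p-value):

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """t statistic for H0: population mean == mu0,
    i.e. (sample mean - mu0) over the standard error of the mean."""
    n = len(sample)
    return ((statistics.mean(sample) - mu0)
            / (statistics.stdev(sample) / math.sqrt(n)))
```

Comparing the statistic against the t distribution with n - 1 degrees of freedom then gives the p-value used to accept or reject H0.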
How to Apply
- Create your profile on TalentLo’s platform → https://www.talentlo.com/signup
- Submit your GitHub, portfolio, or sample projects
- Take the required assessment and get qualified
- Get shortlisted & connect with the client
✨ If you’re ready to solve complex business problems with data, collaborate with global teams, and grow your career as a Data Scientist — apply today!


We are seeking a highly skilled Qt/QML Engineer to design and develop advanced GUIs for aerospace applications. The role requires working closely with system architects, avionics software engineers, and mission systems experts to create reliable, intuitive, real-time UIs for mission-critical systems such as UAV ground control stations and cockpit displays.
Key Responsibilities
- Design, develop, and maintain high-performance UI applications using Qt/QML (Qt Quick, QML, C++).
- Translate system requirements into responsive, interactive, and user-friendly interfaces.
- Integrate UI components with real-time data streams from avionics systems, UAVs, or mission control software.
- Collaborate with aerospace engineers to ensure compliance with DO-178C, or MIL-STD guidelines where applicable.
- Optimise application performance for low-latency visualisation in mission-critical environments.
- Implement data visualisation (raster and vector maps, telemetry, flight parameters, mission planning overlays).
- Write clean, testable, and maintainable code while adhering to aerospace software standards.
- Work with cross-functional teams (system engineers, hardware engineers, test teams) to validate UI against operational requirements.
- Support debugging, simulation, and testing activities, including hardware-in-the-loop (HIL) setups.
Required Qualifications
- Bachelor’s / Master’s degree in Computer Science, Software Engineering, or related field.
- 1-3 years of experience in developing Qt/QML-based applications (Qt Quick, QML, Qt Widgets).
- Strong proficiency in C++ (11/14/17) and object-oriented programming.
- Experience integrating UI with real-time data sources (TCP/IP, UDP, serial, CAN, DDS, etc.).
- Knowledge of multithreading, performance optimisation, and memory management.
- Familiarity with aerospace/automotive domain software practices or mission-critical systems.
- Good understanding of UX principles for operator consoles and mission planning systems.
- Strong problem-solving, debugging, and communication skills.
Desirable Skills
- Experience with GIS/Mapping libraries (OpenSceneGraph, Cesium, Marble, etc.).
- Knowledge of OpenGL, Vulkan, or 3D visualisation frameworks.
- Exposure to DO-178C or aerospace software compliance.
- Familiarity with UAV ground control software (QGroundControl, Mission Planner, etc.) or similar mission systems.
- Experience with Linux and cross-platform development (Windows/Linux).
- Scripting knowledge in Python for tooling and automation.
- Background in defence, aerospace, automotive or embedded systems domain.
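Integrating UIs with real-time telemetry over UDP or serial, as the qualifications above require, usually begins with a fixed binary frame layout. A Python tooling sketch using `struct`; the frame fields here are hypothetical, not from any avionics standard:

```python
import struct

# Hypothetical telemetry frame: latitude and longitude as float64,
# altitude in metres as float32, and a 16-bit status word, big-endian.
FRAME = struct.Struct(">ddfH")

def pack_frame(lat, lon, alt, status):
    """Serialise one telemetry sample into its wire format."""
    return FRAME.pack(lat, lon, alt, status)

def unpack_frame(data):
    """Decode a received frame into the fields a UI widget would bind to."""
    lat, lon, alt, status = FRAME.unpack(data)
    return {"lat": lat, "lon": lon, "alt": alt, "status": status}
```

On the Qt side, the same decode typically lives in a C++ worker thread that emits signals to QML, keeping parsing off the render thread.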
What We Offer
- Opportunity to work on cutting-edge aerospace and defence technologies.
- Collaborative and innovation-driven work culture.
- Exposure to real-world avionics and mission systems.
- Growth opportunities in autonomy, AI/ML for aerospace, and avionics UI systems.

CTC: up to 20 LPA
Exp: 4 to 7 Years
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field
- 4+ years of experience in software development
- Strong proficiency in Java with deep understanding of web technology stack
- Hands-on experience developing applications with Spring Boot framework
- Solid understanding of Python programming language with practical Flask framework experience
- Working knowledge of NATS server for messaging and streaming data
- Experience deploying and managing applications in Kubernetes
- Understanding of microservices architecture and RESTful API design
- Familiarity with containerization technologies (Docker)
- Experience with version control systems (Git)
Skills & Competencies
- Java (Spring Boot, Spring Cloud, Spring Security)
- Python (Flask, SQLAlchemy, REST APIs)
- NATS messaging patterns (pub/sub, request/reply, queue groups)
- Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
- Web technologies (HTTP, REST, WebSocket, gRPC)
- Container orchestration and management
- Soft skills: problem-solving and analytical thinking
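The NATS messaging patterns listed above (pub/sub, request/reply, queue groups) can be illustrated without a broker. A toy in-process bus mimicking subject-based publish/subscribe; the real NATS client is asynchronous and networked, so this shows only the pattern:

```python
from collections import defaultdict

class Bus:
    """A toy in-process message bus with NATS-style subject pub/sub."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, subject, handler):
        """Register a handler for every message on a subject."""
        self.subs[subject].append(handler)

    def publish(self, subject, msg):
        """Fan a message out to all subscribers of the subject."""
        for handler in self.subs.get(subject, []):
            handler(msg)
```

Queue groups extend this by delivering each message to only one handler in a named group, which is how NATS load-balances workers.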
We are looking for an experienced Cloud & DevOps Engineer to join our growing team. The ideal candidate should have hands-on expertise in cloud platforms, automation, CI/CD, and container orchestration. You will be responsible for building scalable and secure infrastructure, optimizing deployments, and ensuring system reliability in a fast-paced environment.
Responsibilities
- Design, deploy, and manage applications on AWS / GCP.
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI/CD.
- Manage containerized workloads with Docker & Kubernetes.
- Implement Infrastructure as Code (IaC) using Terraform.
- Automate infrastructure and operational tasks using Python/Shell scripts.
- Set up monitoring & logging (Prometheus, Grafana, CloudWatch, ELK).
- Ensure security, scalability, and high availability of systems.
- Collaborate with development and QA teams in an Agile/DevOps environment.
Required Skills
- AWS, GCP (cloud platforms)
- Terraform (IaC)
- Docker, Kubernetes (containers & orchestration)
- Python, Bash (scripting & automation)
- CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD)
- Monitoring & Logging (Prometheus, Grafana, CloudWatch)
- Strong Linux/Unix administration
Preferred Skills (Good to Have)
- Cloud certifications (AWS, Azure, or GCP).
- Knowledge of serverless computing (AWS Lambda, Cloud Run).
- Experience with DevSecOps and cloud security practices.

**Company Overview**
We are a VC-backed fintech startup developing an innovative online trading platform. As we scale, we're seeking a skilled Backend Engineer with expertise in Python to join our growing team and help build a robust, scalable infrastructure for our cutting-edge trading application.
We're based out of the UK and have our engineering team in India.
**Job Description**
We are looking for a backend developer who specialises in Python. Your role will focus on developing and maintaining the server-side logic, optimising performance, and ensuring seamless integration with the frontend. You’ll work closely with the engineering and product teams to deliver a high-quality, secure, and scalable platform.
**Responsibilities**
1. Develop and maintain server-side logic using Python
2. Design and implement APIs for seamless integration with frontend components
3. Optimise backend performance and scalability for high traffic and large data loads
4. Build and maintain databases, ensuring security, data integrity, and optimal performance
5. Collaborate with frontend engineers to ensure smooth integration between backend and frontend systems
6. Troubleshoot, debug, and optimise backend infrastructure
7. Implement data protection, security protocols, and authentication mechanisms (e.g., JWT)
8. Maintain and enhance real-time communication systems using WebSockets or similar protocols
**Required Skills**
1. Strong proficiency in Python and related technologies, knowledge of databases and SQL, and experience with web frameworks like Django or FastAPI
2. Strong analytical and troubleshooting skills, and the ability to solve problems
3. Good understanding of OOP, task brokering services, queues, and Redis
4. Familiarity with RESTful API design and integration
5. Strong understanding of database management (e.g., PostgreSQL, Redis) and caching strategies
6. Familiarity with modern authentication and authorization mechanisms (e.g., JWT, OAuth)
7. Proficiency in working with cloud hosting services (AWS, Google Cloud, etc.)
8. Experience with containerization and orchestration tools (Docker, Kubernetes)
9. Knowledge of real-time communication protocols (e.g., WebSockets, TCP, SSE)
10. Strong understanding of security best practices for server-side applications
11. Experience with version control (Git) and CI/CD pipelines
12. Minimum of 2-3.5 years of experience building scalable backend systems
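The caching strategies mentioned in the skills above commonly mean per-key TTL expiry, the pattern Redis serves in production. An in-memory sketch; `TTLCache` is illustrative, not a real Redis client:

```python
import time

class TTLCache:
    """An in-memory cache with per-key expiry, mirroring Redis SET ... EX."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}

    def set(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        item = self.store.get(key)
        if item is None:
            return default
        value, expires = item
        if time.monotonic() >= expires:
            del self.store[key]  # lazy eviction on read
            return default
        return value
```

For a trading platform, short TTLs like this bound how stale a cached quote or session lookup can be.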
**Perks**
1. Work From Anywhere Flexibility
2. Unlimited Leaves policy*
3. Competitive salary and unlimited growth opportunities
4. Insights on how HFT is done using cutting edge technology
If you’re passionate about building scalable, high-performance backend systems and want to be part of a cutting-edge fintech startup, we’d love to hear from you!

Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location- Pune/ Chennai
Job Type- Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- Minimum of 4+ years of experience in a web-application-centric, client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Able to understand and comprehend integrated infrastructure platforms, with experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
- Minimum of 4 years of development experience in one of these high-level languages: Python, Java, or Go.
- Bachelor’s (B.E., B.Tech) or Master’s degree (M.E., M.Tech., MCA) in Computer Science, Computer Engineering, or equivalent.
- 2 years of development experience in a public cloud environment using Kubernetes, etc. (Google Cloud and/or AWS).
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest observability platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.

Description
Collaborate closely with various stakeholders to gain a deep understanding of requirements and use cases. Develop detailed test plans, tools, and utilities to comprehensively test RtBrick products and features. Work closely with developers to review functional and design specifications, ensuring a thorough testing perspective. Create detailed feature test plans, design test bed configurations, and establish complex test setups based on project requirements. Develop Python and robot scripts for automated testing. Assist development engineers in diagnosing product defects, and actively participate in customer calls to troubleshoot issues, gather data, and communicate resolutions and fixes. Join our team and be at the forefront of innovation in technology.
Requirements
- Strong testing experience in any of the Layer-3 unicast routing protocols (e.g. OSPF, BGP, IS-IS), MPLS signalling protocols (e.g. RSVP, LDP), Layer-3 VPNs, Layer-2 VPNs, VPLS, Multicast VPN, or EVPN
- Hands-on experience with scripting languages or Python programming to test system/application software (SWIG)
- Ability to scope and develop test cases for a given requirement, including scale/performance testing in a distributed asynchronous environment
- Experience with Robot Framework for automation; RESTful API experience is a plus
- EC, IS, or CS degree with a networking background and 2-6 years of related experience is required
- Strong written and verbal communication skills
- Able to plan and execute tasks with minimal supervision
- Team-player, can-do attitude, will work well in a group environment while being able to contribute well on an individual basis
Responsibilities
- Collaborate closely with stakeholders, gaining insights into their unique requirements and use cases.
- Engineer comprehensive test plans, craft specialized tools and utilities for in-depth feature assessments.
- Provide a critical testing perspective by thoroughly evaluating documents like functional specs and design specs.
- Create exhaustive feature test plans and innovative test bed designs tailored to project needs.
- Configure intricate test environments, aligning them with project-specific parameters.
- Develop Python and robot scripts, automating key testing processes.
- Aid development engineers in diagnosing and resolving product defects.
- Engage in customer calls, actively participating in issue resolution and effectively communicating fixes.
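Scoping test cases for routing protocols, as the responsibilities above describe, often starts with a small, testable selection rule. A hypothetical sketch; the administrative-distance values are illustrative, and real RtBrick testing would drive this through Robot Framework rather than bare asserts:

```python
def best_route(routes):
    """Pick the preferred route: lowest administrative distance wins,
    with the protocol metric as the tie-breaker."""
    return min(routes, key=lambda r: (r["ad"], r["metric"]))
```

A feature test plan would then enumerate cases around this rule: equal-AD tie-breaks, route withdrawal, and scale runs with thousands of candidate routes.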
Benefits
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Why Join RtBrick
Enjoy the excitement of a start-up without the risk!
We're revolutionizing the Internet's backbone by using cutting-edge software development techniques. The internet and, more specifically, broadband networks are among the world's most critical technologies, which billions of people rely on every day. RtBrick is revolutionizing the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure and distributed edge network functions. This shift mirrors transformations seen in computing and cloud technologies, marking the most profound change in networking since the inception of IP technology.
We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a devops philosophy, and warehouse scale tools to drive innovation.
And although RtBrick is a young innovative company, RtBrick stands on solid financial ground: we are already cash-flow positive, backed by major telco investors like Swisscom Ventures and T-Capital, and our solutions are actively deployed by Tier-1 telcos including Deutsche Telekom (Europe's largest carrier), Regional ISPs and City ISPs—with expanding operations across Europe, North America and Asia.
Joining RtBrick offers you the unique thrill of a startup environment, coupled with the security that comes from working in a business with substantial market presence and significant revenue streams.
We'd love you to join us, so embrace the opportunity to be part of a team that's not just participating in the market but actively shaping the future of telecommunications worldwide.

We are seeking a Full Stack Developer with exceptional communication skills to collaborate daily with our international clients in the US and Australia. This role requires not only technical expertise but also the ability to clearly articulate ideas, gather requirements, and maintain strong client relationships. Communication is the top priority.
The ideal candidate is passionate about technology, eager to learn and adapt to new stacks, and capable of delivering scalable, high-quality solutions across the stack.
Key Responsibilities
- Client Communication: Act as a daily point of contact for clients (US & Australia), ensuring smooth collaboration and requirement gathering.
- Backend Development:
- Design and implement REST APIs and GraphQL endpoints.
- Integrate secure authentication methods including OAuth, Passwordless, and Signature-based authentication.
- Build scalable backend services with Node.js and serverless frameworks.
- Frontend Development:
- Develop responsive, mobile-friendly UIs using React and Tailwind CSS.
- Ensure cross-browser and cross-device compatibility.
- Database Management:
- Work with relational (RDBMS) and NoSQL databases such as MongoDB and DynamoDB.
- Cloud & DevOps:
- Deploy applications on AWS / GCP / Azure (knowledge of at least one required).
- Work with CI/CD pipelines, monitoring, and deployment automation.
- Quality Assurance:
- Write and maintain unit tests to ensure high code quality.
- Participate in code reviews and follow best practices.
- Continuous Learning:
- Stay updated on the latest technologies and bring innovative solutions to the team.
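The responsibilities above mention signature-based authentication alongside OAuth and passwordless flows. As a minimal sketch of the idea (in Python, which this posting lists as a nice-to-have for scripting), here is an HMAC-SHA256 signing and verification pair; the function names and payload are illustrative, not from any specific framework:

```python
import hashlib
import hmac


def sign_request(secret: str, body: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the raw request body."""
    return hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()


def verify_signature(secret: str, body: bytes, received_sig: str) -> bool:
    """Recompute and compare in constant time to resist timing attacks."""
    expected = sign_request(secret, body)
    return hmac.compare_digest(expected, received_sig)


sig = sign_request("shared-secret", b'{"order": 42}')
print(verify_signature("shared-secret", b'{"order": 42}', sig))   # True
print(verify_signature("shared-secret", b'{"order": 43}', sig))   # False
```

The same pattern (server recomputes the signature from a shared secret and the raw body) underlies most webhook signature schemes.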
Must-Have Skills
- Excellent communication skills (verbal & written) for daily client interaction.
- 2+ years of experience in full-stack development.
- Proficiency in Node.js and React.
- Strong knowledge of REST API and GraphQL development.
- Experience with OAuth, Passwordless, and Signature-based authentication methods.
- Database expertise across RDBMS and NoSQL stores (MongoDB, DynamoDB).
- Experience with Serverless Framework.
- Strong frontend skills: React, Tailwind CSS, responsive design.
Nice-to-Have Skills
- Familiarity with Python for backend or scripting.
- Cloud experience with AWS, GCP, or Azure.
- Knowledge of DevOps practices and CI/CD pipelines.
- Experience with unit testing frameworks and TDD.
Who You Are
- A confident communicator who can manage client conversations independently.
- Passionate about learning and experimenting with new technologies.
- Detail-oriented and committed to delivering high-quality software.
- A collaborative team player who thrives in dynamic environments.


Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 7+ years of progressive experience with back-end development in a client-server application development environment focused on systems management, systems monitoring, and performance management software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- 6+ years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of these languages is required.
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest observability platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.


Required Skills & Qualifications
- 3 to 5 years of professional experience as a Full Stack Developer.
- Strong expertise in front-end technologies: HTML, CSS, JavaScript, React (preferred), Angular, or Vue.js.
- Hands-on experience with back-end technologies: Python, C#, .NET, Node.js.
- Strong programming skills in TypeScript.
- Proven experience in database design and modeling using SQL (MySQL) and NoSQL (MongoDB, Redis).
- Familiarity with Vector Databases (e.g., Weaviate) is a strong plus.
- Experience with RESTful APIs and scalable web application architecture.
- Exposure to Docker and Kubernetes for containerization and deployment.
- Solid understanding of software engineering principles, best practices, and design patterns.
- Strong problem-solving skills with the ability to step out of comfort zones and learn new technologies.
- Excellent communication and collaboration skills.
- Strong execution discipline with complete ownership of deliverables.

We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Experience with Apache Airflow for orchestrating complex data workflows.
- Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines & Microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.


At Palcode.ai, We're on a mission to fix the massive inefficiencies in pre-construction. Think about it - in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform turns pre-construction workflows that once took weeks into hours. It's not just about AI - it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
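To give a flavor of the kind of Python service code involved, here is a deliberately toy sketch of a document-processing step. `BidDocument` and `extract_line_items` are hypothetical names, and the string parsing stands in for what would really be an AI model call:

```python
from dataclasses import dataclass


@dataclass
class BidDocument:
    doc_id: str
    text: str


def extract_line_items(doc: BidDocument) -> list[dict]:
    """Toy parser: each 'item: price' line becomes a record.
    In a real service this step would invoke an ML model."""
    items = []
    for line in doc.text.splitlines():
        if ":" in line:
            name, price = line.split(":", 1)
            items.append({"item": name.strip(), "price": float(price)})
    return items


doc = BidDocument("bid-001", "concrete: 1200.50\nsteel: 800")
print(extract_line_items(doc))
# [{'item': 'concrete', 'price': 1200.5}, {'item': 'steel', 'price': 800.0}]
```

Keeping the parsing logic in a small, pure function like this is what makes the "clean, tested, production-ready code" requirement practical: the model-facing step can be swapped out while the tests stay the same.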
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training on the tech stack, with the option of virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best - we are Gold partners of AWS, Razorpay, and Microsoft startup programs, giving you access to experienced people to discuss and brainstorm ideas with.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore, Remote
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!
We’re seeking a highly skilled, execution-focused Data Scientist with 4–10 years of experience to join our team. This role demands hands-on expertise in fine-tuning and deploying generative AI models across image, video, and audio domains — with a special focus on lip-sync, character consistency, and automated quality evaluation frameworks. You will be expected to run rapid experiments, test architectural variations, and deliver working model iterations quickly in a high-velocity R&D environment.
Responsibilities
- Run end-to-end fine-tuning experiments on state-of-the-art models (Flux family, LoRA, diffusion-based architectures, context-based composition).
- Develop and optimize generative AI models for audio generation and lip-sync, ensuring high fidelity and natural delivery.
- Extend current language models to support regional Indian languages beyond US/UK English for audio and content generation.
- Enable emotional delivery in generated audio (shouting, crying, whispering) to enhance realism.
- Integrate and synchronize background scores seamlessly with generated video content.
- Work towards achieving video quality standards comparable to Veo3/Sora.
- Ensure consistency in scenes and character generation across multiple outputs.
- Design and implement automated, objective evaluation frameworks to replace subjective human review of cover images, video frames, and audio clips. Implement scoring systems that standardize quality checks before editorial approval.
- Run comparative tests across multiple model architectures to evaluate trade-offs in quality, speed, and efficiency.
- Drive initiatives independently, showcasing high agency and accountability. Utilize strong first-principle thinking to tackle complex challenges.
- Apply a research-first approach with rapid experimentation in the fast-evolving Generative AI space.
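The scoring systems described above could start from something as simple as a weighted quality gate. This Python sketch uses hypothetical metric names and weights; real scores would come from learned or perceptual metrics, not be hand-set:

```python
def aggregate_score(metric_scores: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Weighted mean of per-metric quality scores, each in [0, 1]."""
    total_w = sum(weights.values())
    return sum(metric_scores[m] * w for m, w in weights.items()) / total_w


def passes_gate(metric_scores: dict[str, float],
                weights: dict[str, float],
                threshold: float = 0.8) -> bool:
    """Gate an asset for editorial approval on its aggregate score."""
    return aggregate_score(metric_scores, weights) >= threshold


# Hypothetical per-asset metric scores (0 = worst, 1 = best)
scores = {"lip_sync": 0.9, "character_consistency": 0.85, "audio_fidelity": 0.7}
weights = {"lip_sync": 2.0, "character_consistency": 1.0, "audio_fidelity": 1.0}
print(passes_gate(scores, weights))  # weighted mean 0.8375 -> True
```

The point of the gate is standardization: every asset is scored on the same axes with the same weights, so "pass" means the same thing across reviewers and model versions.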
Requirements
- 4-10 years of experience in Data Science, with a strong focus on Generative AI.
- Familiarity with state-of-the-art models in generative AI (e.g., Flux, diffusion models, GANs).
- Proven expertise in developing and deploying models for audio and video generation.
- Demonstrated experience with natural language processing (NLP), especially for regional language adaptation.
- Experience with model fine-tuning and optimization techniques.
- Hands-on exposure to ML deployment pipelines (FastAPI or equivalent).
- Strong programming skills in Python and relevant deep learning frameworks (e.g., TensorFlow, PyTorch).
- Experience in designing and implementing automated evaluation metrics for generative content.
- A portfolio or demonstrable experience in projects related to content generation, lip-sync, or emotional AI is a plus.
- Exceptional problem-solving skills and a proactive approach to research and experimentation.
Benefits
- Best in class salary: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.
About us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.
Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Role: DevOps Engineer
Exp: 4 - 7 Years
CTC: up to 28 LPA
Key Responsibilities
• Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)
• Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)
• Develop and maintain CI/CD pipelines for multiple services and environments
• Manage infrastructure as code using tools like Terraform and/or Pulumi
• Automate operations with Python and shell scripting for deployment, monitoring, and maintenance
• Ensure high availability and performance of production systems and troubleshoot incidents effectively
• Monitor system metrics and implement observability best practices using tools like Prometheus, Grafana, ELK, etc.
• Collaborate with development, security, and product teams to align infrastructure with business needs
• Apply best practices in cloud networking, Linux administration, and configuration management
• Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)
• Participate in on-call rotations and incident response activities
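As a small example of the Python-based operations automation mentioned above, here is a hedged sketch of a rolling-deploy batch planner; the function and host names are illustrative, not from any specific toolchain:

```python
def rolling_batches(instances: list[str], max_unavailable: int) -> list[list[str]]:
    """Split a fleet into deploy batches so that at most
    `max_unavailable` instances are out of service at any time."""
    if max_unavailable < 1:
        raise ValueError("max_unavailable must be >= 1")
    return [instances[i:i + max_unavailable]
            for i in range(0, len(instances), max_unavailable)]


hosts = [f"web-{n}" for n in range(5)]
print(rolling_batches(hosts, 2))
# [['web-0', 'web-1'], ['web-2', 'web-3'], ['web-4']]
```

In practice the batch size would be derived from the service's availability budget, and each batch would be drained, updated, and health-checked before the next begins.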


We are seeking a skilled and detail-oriented SRE Release Engineer to lead and streamline the CI/CD pipeline for our C and Python codebase. You will be responsible for coordinating, automating, and validating biweekly production releases, ensuring operational stability, high deployment velocity, and system reliability.
Requirements
● Bachelor’s degree in Computer Science, Engineering, or related field.
● 3+ years in SRE, DevOps, or release engineering roles.
● Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
● Experience automating deployments for C and Python applications.
● Strong understanding of Git version control, merge/rebase strategies, tagging, and submodules (if used).
● Familiarity with containerization (Docker) and deployment orchestration (e.g., Kubernetes, Ansible, or Terraform).
● Solid scripting experience (Python, Bash, or similar).
● Understanding of observability, monitoring, and incident response tooling (e.g., Prometheus, Grafana, ELK, Sentry).
Preferred Skills
● Experience with release coordination in data networking environments
● Familiarity with build tools like Make, CMake, or Bazel.
● Exposure to artifact management systems (e.g., Artifactory, Nexus).
● Experience deploying to Linux production systems with service uptime guarantees.
Responsibilities
● Own the release process: Plan, coordinate, and execute biweekly software releases across multiple services.
● Automate release pipelines: Build and maintain CI/CD workflows using tools such as GitHub Actions, Jenkins, or GitLab CI.
● Version control: Manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
● Integrate testing frameworks: Ensure automated test coverage (unit, integration, regression) is enforced pre-release.
● Release validation: Develop pre-release verification tools/scripts to validate build integrity and backward compatibility.
● Deployment strategy: Implement and refine blue/green, rolling, or canary deployments in staging and production environments.
● Incident readiness: Partner with SREs to ensure rollback strategies, monitoring, and alerting are release-aware.
● Collaboration: Work closely with developers, QA, and product teams to align on release timelines and feature readiness.
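The release-validation and traceability responsibilities above can be sketched as a small Python pre-release check: a semver tag validator plus an artifact digest that ties a build to its deployment. The names and tag convention are illustrative, assuming `vMAJOR.MINOR.PATCH` release tags:

```python
import hashlib
import re

SEMVER = re.compile(r"^v(\d+)\.(\d+)\.(\d+)$")


def validate_tag(tag: str) -> tuple[int, int, int]:
    """Reject anything that is not a vMAJOR.MINOR.PATCH release tag."""
    m = SEMVER.match(tag)
    if not m:
        raise ValueError(f"not a release tag: {tag!r}")
    major, minor, patch = (int(g) for g in m.groups())
    return (major, minor, patch)


def artifact_digest(data: bytes) -> str:
    """SHA-256 digest recorded alongside the build for traceability."""
    return hashlib.sha256(data).hexdigest()


print(validate_tag("v1.4.2"))  # (1, 4, 2)
```

Recording the digest at build time and re-checking it at deploy time is one simple way to get the commit-to-deployment traceability listed in the success metrics.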
Success Metrics
● Achieve >95% release success rate with minimal hotfix rollbacks.
● Reduce mean release deployment time by 30% within 6 months.
● Maintain a weekly release readiness report with zero critical blockers.
● Enable full traceability of builds from commit to deployment.
Benefits
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Why Join RtBrick
Enjoy the excitement of a start-up without the risk!
We're revolutionizing the Internet's backbone by using cutting-edge software development techniques. The internet, and broadband networks in particular, are among the world's most critical technologies, relied on by billions of people every day. RtBrick is revolutionizing the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure and distributed edge network functions. This shift mirrors transformations seen in computing and cloud technologies, marking the most profound change in networking since the inception of IP technology.
We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a devops philosophy, and warehouse scale tools to drive innovation.
And although RtBrick is a young innovative company, RtBrick stands on solid financial ground: we are already cash-flow positive, backed by major telco investors like Swisscom Ventures and T-Capital, and our solutions are actively deployed by Tier-1 telcos including Deutsche Telekom (Europe's largest carrier), Regional ISPs and City ISPs—with expanding operations across Europe, North America and Asia.
Joining RtBrick offers you the unique thrill of a startup environment, coupled with the security that comes from working in a business with substantial market presence and significant revenue streams.
We'd love you to join us, so embrace the opportunity to be part of a team that's not just participating in the market but actively shaping the future of telecommunications worldwide.