50+ Python Jobs in Bangalore (Bengaluru)
Apply to 50+ Python Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.
We are looking for a Junior Full Stack Developer to join our growing engineering team and contribute to building high-quality software solutions. In this role, you will support the entire development lifecycle—from design to deployment—while working closely with product managers and senior engineers.
If you have a passion for technology, enjoy learning new tools, and thrive in a collaborative environment, we’d love to hear from you.
Current Technology Stack
- Backend: FastAPI (active), PHP (legacy), Java (legacy)
- Frontend: Svelte, TypeScript, JavaScript
Key Responsibilities
- Collaborate with development teams and product managers to ideate and deliver software solutions
- Assist in designing client-side and server-side architecture
- Contribute to building intuitive and visually appealing user interfaces
- Support database design and application development
- Help develop and maintain APIs
- Participate in testing to ensure performance, scalability, and responsiveness
- Assist in troubleshooting, debugging, and enhancing existing systems
- Support security and data-protection initiatives
- Contribute to mobile-responsive feature development
- Help maintain technical documentation
Candidate Requirements
Education
- B.Tech / BE in Computer Science, Statistics, or a related field
Location
- Bangalore
Role-Based Skills
- Exposure to web application development
- Familiarity with common technology stacks
- Basic knowledge of front-end technologies such as HTML, CSS, JavaScript, XML, and jQuery
- Working understanding of back-end languages such as Java, Python, or PHP
- Familiarity with JavaScript frameworks/libraries like Angular, React, Svelte, or Node.js
- Awareness of databases such as PostgreSQL, MySQL, or MongoDB
- Basic understanding of web servers (e.g., Apache) and UI/UX principles
Behavioral Skills
- Strong communication and teamwork abilities
- High attention to detail
- Good organizational skills
- Analytical and problem-solving mindset
We are looking for a Full Stack Developer to build scalable software solutions and contribute across the entire software development lifecycle—from conception to deployment.
You will work closely with cross-functional teams and should be comfortable with both front-end and back-end technologies, modern frameworks, and third-party libraries. If you enjoy building visually appealing, functional applications and thrive in Agile environments, we’d love to connect.
Current Technologies Used
- Backend: FastAPI (active), PHP (legacy), Java (legacy)
- Frontend: Svelte, TypeScript, JavaScript
Experience with Python and PHP is a plus, but not mandatory.
Role Responsibilities
- Collaborate with development teams and product managers to ideate software solutions
- Design client-side and server-side architecture
- Build visually appealing front-end applications
- Develop and manage efficient databases and applications
- Write effective and scalable APIs
- Test software for responsiveness and performance
- Troubleshoot, debug, and upgrade systems
- Implement security and data-protection measures
- Build mobile-responsive features and applications
- Create and maintain technical documentation
Candidate Requirements:
Education
- B.Tech / BE in Computer Science, Statistics, or a relevant field
Experience
- 2–4 years as a Full Stack Developer or in a similar role
Location
- Bangalore (Hybrid)
Skill Set – Role Based
- Experience building web applications
- Familiarity with common technology stacks
- Knowledge of front-end languages and libraries:
- HTML, CSS, JavaScript, XML, jQuery
- Knowledge of back-end languages and frameworks:
- Java, Python, PHP
- Angular, React, Svelte, Node.js
- Familiarity with:
- Databases: PostgreSQL, MySQL, MongoDB
- Web servers: Apache
- UI/UX principles
Skill Set – Behavioural
- Excellent communication and teamwork skills
- Strong attention to detail
- Good organizational skills
- Analytical mindset
Job Description:
Test Design & Execution
Design and execute detailed, well-structured test plans, test cases, and test scenarios to ensure high-quality product releases.
Automation Development
Develop and maintain automated test scripts for functional and regression testing using tools such as Selenium, Cypress, or Playwright.
Defect Management
Identify, log, and track defects through to resolution using tools like Jira, ensuring minimal impact on production releases.
API & Backend Testing
Conduct API testing using Postman, perform backend validation, and execute database testing using SQL/Oracle.
Collaboration
Work closely with developers, product managers, and UX designers in an Agile/Scrum environment to embed quality across the SDLC.
CI/CD Integration
Integrate automated test suites into CI/CD pipelines using platforms such as Jenkins or Azure DevOps.
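To make the automation-development and CI/CD items above concrete, here is a minimal sketch of an automated UI check using Playwright's Python sync API (one of the tools named above). The URL, selectors, and credentials are hypothetical placeholders, not a reference to any real application.

```python
# A minimal, hypothetical UI regression test using Playwright's sync API.
from playwright.sync_api import sync_playwright

def test_login_flow():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com/login")       # hypothetical URL
        page.fill("#username", "qa_user")            # hypothetical selectors
        page.fill("#password", "not-a-real-secret")
        page.click("button[type=submit]")
        assert "dashboard" in page.url               # expected post-login route
        browser.close()
```

In a Jenkins or Azure DevOps pipeline, a suite of such tests would typically run headlessly on every build, failing the pipeline when a regression is detected.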
Required Skills & Experience
- At least 2 years of experience in Software Quality Assurance or Automation Testing.
- Hands-on experience with Selenium WebDriver, Cypress, or Playwright.
- Proficiency in at least one programming/scripting language: Java, Python, or JavaScript.
- Strong experience in functional, regression, integration, and UI testing.
- Solid understanding of SQL for data validation and backend testing.
- Familiarity with Git for version control, Jira for defect tracking, and Postman for API testing.
Desirable Skills
- Experience in mobile application testing (Android/iOS).
- Exposure to performance testing tools such as JMeter.
- Experience working with cloud platforms like AWS or Azure.

Mid-Size Product Engineering Services Company
This role will report to the Chief Technology Officer
You Will Be Responsible For
* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.
* Leading a team in building a high-performing and scalable SaaS product.
* Conducting code reviews to maintain code quality and follow best practices
* Developing the DevOps practice by promoting automation, including asset creation, enterprise strategy definition, and team training
* Developing and building microservices leveraging cloud services
* Working on application security aspects
* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.
* Creating a culture of innovation that enables the continued growth of individuals and the company
* Working closely with Product and Business teams to build winning solutions
* Leading talent management, including hiring, developing, and retaining a world-class team
Ideal Profile
* You possess a degree in Engineering or a related field and have at least 20 years of experience as a Software Engineer, including 10+ years leading teams and at least 4 years building a SaaS/fintech platform.
* Proficiency in MERN / Java / Full Stack.
* You have led a team in optimizing the performance and scalability of a product
* You have extensive experience with DevOps environments and CI/CD practices and can train teams.
* You're a hands-on leader, visionary, and problem solver with a passion for excellence.
* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.
What's on Offer?
* Exciting opportunity to drive the Engineering efforts of a reputed organisation
* Work alongside & learn from best-in-class talent
* Competitive compensation + ESOPs
Introduction
About Us:
Mercari is a Japan-based C2C marketplace company founded in 2013 with the mission to “Create value in a global marketplace where anyone can buy & sell.” From being Japan's first tech unicorn before its IPO in 2018, we have come a long way toward becoming a global player, and we continue to work diligently on our transformation journey with a strong focus on our mission.
Since its inception, Mercari Group has worked to grow its services, investing in both our people and technology. Over time Mercari has expanded from being the top player in the C2C marketplace in Japan to new geographies like the U.S. We have also successfully launched new businesses such as Merpay, which is a mobile payment service platform with a vision to create a society where anyone can realize their dreams through a new ecosystem centered not only on payment service but also on credit. Today, Mercari Group is made up of multiple subsidiary businesses including logistics, B2C platform, blockchain, and sports team management.
For our services to be utilized by people worldwide, however, there is still a mountain of work ahead of us. This endeavor naturally requires the capabilities of the best talent and minds, and that is exactly why we launched the India Center of Excellence. With your help, we will continue to take on the world stage and strive to grow into a successful global tech company.
Our Culture:
To achieve our mission at Mercari, our organization and each of our employees share the same values and perspectives. Our individual guidelines for action are defined by our four values: Go Bold, All for One, Be a Pro and Move Fast. Our organization is also shaped by our four foundations: Sustainability, Diversity & Inclusion, Trust & Openness, and Well-being for Performance. Regardless of how big Mercari gets, the culture will remain essential to achieving our mission and something we want to preserve throughout our organization. We invite you to read the Mercari Culture Doc which summarizes the behaviors and mindset shared by Mercari and its employees. We continue to build an environment where all of our members of diverse backgrounds are accepted and recognized, and where they can thrive while holding dear to Mercari’s culture.
Work Responsibilities
- Machine learning engineers in the Recommendation domain build and maintain machine learning systems, such as recommender systems, that power the functions and services of the Mercari marketplace app, leveraging the necessary infrastructure and company-wide platform tools.
- Mercari is actively applying advanced machine learning technology to provide a more convenient, safer, and more enjoyable marketplace. Machine learning engineers use the cloud and Kubernetes to operate and improve machine learning systems.
Bold Challenges
- We are looking for people who are interested in our services, mission, and values, and want to work where engineers can go bold, use the latest technology, make autonomous decisions, and take on challenges at a rapid pace.
- Develop and optimize machine learning algorithms and models to enhance the recommendation system and improve users' discovery experience (a toy sketch follows this list)
- Collaborate with cross-functional teams and product stakeholders to gather requirements, design solutions, and implement features that improve user engagement
- Conduct data analysis and experimentation with large-scale data sets to identify patterns, trends, and insights that drive the refinement of recommendation algorithms
- Utilize machine learning frameworks and libraries to deploy scalable and efficient recommendation solutions.
- Monitor system performance and conduct A/B testing to evaluate the effectiveness of features.
- Continuously research and stay updated on advancements in AI/machine learning techniques and recommend innovative approaches to enhance recommendation capabilities.
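As a toy illustration of the recommender-system work described in this list, here is a minimal item-item collaborative filtering sketch. The interaction matrix is made up; a production system at this scale would use far richer models and infrastructure.

```python
# Toy item-item collaborative filtering on a synthetic user-item matrix.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows = users, columns = items; 1 = an interaction (view/like/purchase).
interactions = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 1, 1],
])

item_sim = cosine_similarity(interactions.T)   # item-to-item similarity
user_history = interactions[0]                 # user 0's interactions
scores = item_sim @ user_history               # score items via similar items
scores[user_history > 0] = -np.inf             # mask items already seen
print("recommend item:", int(np.argmax(scores)))
```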
Minimum Requirements:
- 5–9 years of professional experience in the end-to-end development of large-scale ML systems in production
- Demonstrated experience developing and delivering end-to-end machine learning solutions, from experimentation to model deployment, including backend engineering and MLOps, in large-scale production systems.
- Experience using common machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn, NumPy, pandas)
- Deep understanding of machine learning and software engineering fundamentals
- Basic knowledge and skills related to monitoring systems, logging, and common operations in a production environment
- Communication skills to carry out projects in collaboration with multiple teams and stakeholders
Preferred skills:
- Experience developing Recommender systems utilizing large-scale data sets
- Basic knowledge of enterprise search systems and related stacks (e.g. ELK)
- Functional development and bug fixing skills necessary to improve system performance and reliability
- Experience with technology such as Docker and Kubernetes
- Experience with cloud platforms (AWS, GCP, Microsoft Azure, etc.)
- Microservice development and operation experience with Docker and Kubernetes
- Utilizing deep learning models/LLMs in production
- Publications at top-tier peer-reviewed conferences or journals
Employment Status
Full-time
Office
Bangalore
Hybrid workstyle
- We believe in high performance and professionalism. We work from the office two days a week and from home three days a week
- To build a strong & highly-engaged organization in India, we highly encourage everyone to work from our Bangalore office, especially during the initial office setup phase
- We will continue to review and update the policy to address future organizational needs
Work Hours
- Full flextime (no core time)
*Working hours are flexible outside of common team meetings
Media
Owned Media
- Mercari Engineering Portal
- AI at Mercari portal
- Mercan - Introduces the people that make Mercari
- Mercari US Blog
Related Articles
- Development Platforms and Platformers: On Rising to the Global Standard Ken Wakasa, Mercari CTO | mercan
- “I'm Not a Talented Engineer” Insists the Member-Turned-Manager Revamping Our Internal CS Tool | mercan
- Personalize to globalize: How Mercari is reshaping their app, their company, and the world | mercan
- The Providers of the Safe and Secure Mercari Experience: The TnS Team, Introduced by Its Members! | mercan
Lead Data Engineer
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
Your Responsibilities
what you will wake up to solve.
- Lead Technical Design & Data Architecture: Architect and lead the end-to-end development of scalable, cloud-native data platforms. You’ll guide the squad on critical architectural decisions—choosing between Batch vs. Streaming or ETL vs. ELT—while remaining 100% hands-on, contributing high-quality, production-grade code.
- Build High-Velocity Data Pipelines: Drive the implementation of robust data transports and ingestion frameworks using Python, SQL, and Spark. You will build integration layers that connect heterogeneous sources (SaaS, RDBMS, NoSQL) into unified, high-availability environments like BigQuery, Snowflake, or Redshift. (A brief sketch follows this list.)
- Mentor & Elevate the Squad: Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling (Star/Snowflake schema), and ensure the squad adopts modern engineering standards like CI/CD for data.
- Drive AI-Ready Data Strategy: Be the expert in designing data foundations optimized for AI and Machine Learning. You will champion the use of GCP (Dataflow, Pub/Sub, BigQuery) and AWS (Lambda, Glue, EMR) to create "clean room" environments that fuel advanced analytics and generative AI models.
- Partner with Clients as a Technical DRI: Act as the Directly Responsible Individual for client success. Translate ambiguous business questions into elegant data services, manage project deliverables using Agile methodologies, and ensure that the data provided is accurate, consistent, and mission-critical.
- Troubleshoot & Optimize for Scale: Own the reliability of the reporting layer. You will proactively monitor pipelines, troubleshoot complex transformation bottlenecks, and propose ways to improve platform performance and cost-efficiency.
- Innovate and Build Reusable IP: Spearhead the creation of reusable data frameworks, custom operators, and transformation libraries that accelerate future projects and establish Searce’s unique technical advantage in the market.
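Under stated assumptions (hypothetical bucket paths, schema, and deduplication key), here is a brief PySpark sketch of the kind of batch ingestion-and-transform pipeline this role drives:

```python
# A minimal, hypothetical batch pipeline: ingest raw JSON, clean it, and
# write a partitioned, analytics-ready Parquet table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/orders/")    # hypothetical source
clean = (
    raw.dropDuplicates(["order_id"])                        # hypothetical key
       .withColumn("order_date", F.to_date("created_at"))
       .filter(F.col("amount") > 0)
)
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"                   # hypothetical sink
)
```

The curated layer could then be loaded into BigQuery, Snowflake, or Redshift, as the pipeline item above describes.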
Welcome to Searce
The AI-Native tech consultancy that's rewriting the rules.
Searce is an AI-native, engineering-led, modern tech consultancy that empowers clients to futurify their business by delivering intelligent, impactful, real business outcomes. Searce solvers co-innovate with clients as their trusted transformational partners ensuring sustained competitive advantage. Searce clients realize smarter, faster, better business outcomes delivered by AI-native Searce solver squads.
Functional Skills
the solver personas.
- The Data Architect: This persona deconstructs ambiguous business goals into scalable, elegant data blueprints. They don't just move data; they design the foundation—from schema design to partitioning strategies—that allows data scientists and analysts to thrive, foreseeing technical bottlenecks and making pragmatic trade-offs.
- The Player-Coach: As a hands-on leader, this persona leads from the front by writing exemplary, production-grade SQL and Python while simultaneously mentoring and elevating the skills of the squad. Their success is measured by the team's ability to deliver high-quality, maintainable code and their growth as engineers.
- The Pragmatic Innovator: This individual balances a passion for modern data tech (like Generative AI and Real-time Streaming) with a sharp focus on business outcomes. They champion new tools where they add real value but are disciplined enough to choose stable, cost-effective solutions to meet deadlines and deliver robust products.
- The Client-Facing Technologist: This persona acts as the crucial technical bridge between the data squad and the client. They build trust by listening actively, explaining complex data concepts (like data latency or idempotency) in simple terms, and demonstrating how engineering decisions align with the client’s strategic goals.
- The Quality Craftsman: This individual possesses an unwavering commitment to data integrity and treats data engineering as a craft. They are the guardian of the reporting layer, advocating for robust testing, data validation frameworks, and clean, modular code to ensure the long-term reliability of the data platform.
Experience & Relevance
- Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.
- Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
- AI-Native Workflow: You don’t just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.
- Architectural Portfolio: Evidence of leading 2-3 large-scale transformations—including platform migrations, data lakehouse builds, or real-time analytics architectures.
- Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO’s business vision into a Lead Engineer’s technical specification without losing anything in translation.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
Director - Data engineering
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
Your Responsibilities
what you will wake up to solve.
1. Delivery & Tactical Rigor
- Methodology Implementation: Implement and manage a unified, 'DataOps-First' methodology for data engineering delivery (ETL/ELT pipelines, Data Modeling, MLOps, Data Governance) within assigned business units. This ensures predictable outcomes and trusted data integrity by reducing architecture variability at the project level.
- Operational Stewardship: Drive initiatives to optimize team utilization and enhance operational efficiency within the practice. You manage the commercial success of your squads, ensuring data delivery models (from migration to modern data stack implementation) are executed profitably, scalably, and cost-effectively.
Execution & Technical Resolution
- Technical Escalation: Serve as the primary escalation point for delivery issues, personally leading the resolution of complex data integration bottlenecks and pipeline failures to protect client timelines and data reliability standards.
Quality Enforcement
- Quality Oversight: Execute and monitor technical data quality standards, ensuring engineering teams adhere to strict policies regarding data lineage, automated quality checks (observability), security/privacy compliance (GDPR/CCPA/PII), and active catalog management.
2. Strategic Growth & Practice Scaling
- Talent & Scaling Execution: Execute the strategy for data engineering talent acquisition and development within your business units. Implement objective metrics to assess and grow the 'Data-Native' DNA of your teams, ensuring squads are consistently equipped to handle petabyte-scale environments and high-impact delivery.
- Offerings Alignment: Drive the adoption of standardized regional offerings (e.g., Modern Data Platform, Data Mesh, Lakehouse Implementation). Ensure your teams leverage the profitable frameworks defined by the practice to accelerate time-to-insight and eliminate architectural fragmentation in client environments.
- Innovation & IP Development: Lead the practical integration of Vector Databases and LLM-ready architectures into project delivery. Champion the hands-on development of IP and reusable accelerators (e.g., automated ingestion engines) that improve delivery speed and enhance data availability across your portfolio.
3. Leadership & Unit Management
- Unit Leadership: Directly lead, mentor, and manage the Engineering Managers and Lead Architects within your business unit. Hold your teams accountable for project-level operational consistency, technical talent development, and strict adherence to the practice's data governance standards.
- Stakeholder Communication: Clearly articulate the business unit’s operational performance, technical quality metrics, and delivery progress to the C-suite Stakeholders and regional client leadership, bridging the gap between technical execution and business value.
- Ecosystem Alignment: Maintain strong technical relationships with key partner contacts (Snowflake, Databricks, AWS/GCP). Align team delivery capabilities with current product roadmaps and ensure squad-level participation in training, certifications, and partner-led enablement opportunities.
Welcome to Searce
The ‘process-first’, AI-native modern tech consultancy that's rewriting the rules.
We don’t do traditional.
As an engineering-led consultancy, we are dedicated to relentlessly improving real business outcomes. Our solvers co-innovate with clients to futurify operations and make processes smarter, faster & better.
Functional Skills
1. Delivery Management & Operational Excellence
- Methodology Execution: Expert capability in implementing and enforcing a unified delivery methodology (DataOps, Agile, Mesh Principles) within specific business units. Proven track record of auditing squad-level adherence to ensure consistency across the project lifecycle.
- Operational Performance: High proficiency in managing day-to-day operational metrics, including squad utilization, resource forecasting, and productivity tracking. Skilled at optimizing team performance to meet profitability and efficiency targets.
- SOW & Risk Mitigation: Proven experience in operationalizing Statement of Work (SOW) requirements and identifying technical delivery risks early. Expert at mitigating scope creep and data-specific bottlenecks (e.g., latency, ingestion gaps) before they impact client outcomes.
- Technical Escalation Leadership: Demonstrated ability to lead "war room" efforts to resolve complex pipeline failures or data integrity issues. Skilled at providing clear, rapid remediation plans and communicating technical status directly to regional stakeholders.
2. Architectural Implementation & Technical Oversight
- Modern Stack Proficiency: Deep, hands-on expertise in implementing Cloud-Native architectures (Lakehouse, Data Mesh, MPP) on Snowflake, Databricks, or hyperscalers. Ability to conduct deep-dive architectural reviews and course-correct design decisions at the squad level to ensure scalability.
- Operationalizing Governance: Proven experience in embedding data quality and observability (completeness, freshness, accuracy) directly into the CI/CD pipeline. Responsible for technical enforcement of regulatory compliance (GDPR/PII) and maintaining the integrity of data catalogs across active projects.
- Applied Domain Expertise: Practical experience leading the delivery of high-growth solutions, specifically Generative AI infrastructure (RAG, Vector DBs), Real-Time Streaming, and large-scale platform migrations with a focus on zero-downtime execution.
- DataOps & Engineering Standards: Expert-level mastery of DataOps, including the setup and management of orchestration frameworks (Airflow, Dagster) and Infrastructure as Code (IaC). You ensure that automation is a baseline requirement, not an afterthought, for all delivery teams.
3. Unit Management & Commercial Execution
- Unit & Team Management: Proven success in leading and mentoring Engineering Managers and Lead Architects. Responsible for the operational metrics, technical output, and career development of the business unit's talent pool.
- Offerings Implementation & Scoping: Expertise in translating service offerings (e.g., Data Maturity Assessments, Lakehouse Builds) into accurate project scopes, technical estimates, and resource plans to ensure delivery is both profitable and competitive.
- Talent Growth & Mentorship: Functional ability to implement growth frameworks for data engineering roles. Focus on hands-on coaching and scaling high-performance technical talent to meet the demands of complex, petabyte-scale environments.
- Partner Enablement: Functional competence in managing regional technical relationships with major partners (Snowflake, Databricks, GCP/AWS). Drives squad-level certifications, joint technical enablement, and alignment with partner product roadmaps.
Tech Superpowers
- Modern Data Architect – Reimagines business with the Modern Data Stack (MDS) to deliver data mesh implementations, insights, & real value to clients.
- End-to-End Ecosystem Thinker – Builds modular, reusable data products across ingestion, transformation (ETL/ELT), governance, and consumption layers.
- Distributed Compute Savant – Crafts resilient, high-throughput architectures that survive petabyte-scale volume and data skew without breaking the bank.
- Governance & Integrity Guardian – Embeds data quality, complete lineage, and privacy-by-design (GDPR/PII) into every table, view, and pipeline.
- AI-Ready Orchestrator – Engineers pipelines that bridge structured data with Unstructured/Vector stores, powering RAG models and Generative AI workflows.
- Product-Minded Strategist – Balances architectural purity with time-to-insight; treats every dataset as a measurable "Data Product" with clear ROI.
- Pragmatic Stack Curator – Chooses the simplest tools that compound reliability; fluent in SQL, Python, Spark, dbt, and Cloud Warehouses.
- Builder @ Heart – Writes, reviews, and optimizes queries daily; proves architectures with cost-performance benchmarks, not slideware. Business-first, data-second, outcome focused technology leader.
Experience & Relevance
- Executive Experience: 10+ years of progressive experience in data engineering and analytics, with at least 3 years in a Senior Manager or Director-level role managing multiple technical teams and owning significant operational and efficiency metrics for a large data service line.
- Delivery Standardization: Demonstrated success in defining and implementing globally consistent, repeatable delivery methodologies (DataOps/Agile Data Warehousing) across diverse teams.
- Architectural Depth: Must retain deep, current expertise in Modern Data Stack architectures (Lakehouse, MPP, Mesh) and maintain the ability to personally validate high-level architectural and data pipeline design decisions.
- Operational Leadership: Proven expertise in managing and scaling large professional services organizations, with a demonstrated ability to optimize utilization, resource allocation, and operational expense.
- Domain Expertise: Strong background in Enterprise Data Platforms, Applied AI/ML, Generative AI integration, or large-scale Cloud Data Migration.
- Communication: Exceptional executive-level presentation and negotiation skills, particularly in communicating complex operational, data quality, and governance metrics to C-level stakeholders.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
Role: Sr. Azure Data Engineer
Experience: 8–10 Years
Work Timings: 1:30 PM – 10:30 PM IST
Location: Bellandur, Bengaluru (Work from Office)
Company: Chevron
Employment Type: 6–12 months contract
Role Overview
We are seeking an experienced Senior Data Engineer to design and deliver scalable cloud data solutions on Azure. The ideal candidate will have strong expertise in Databricks, PySpark, and modern data architectures, with exposure to energy domain standards like OSDU.
Key Responsibilities
- Architect and design robust Azure-based data solutions using Databricks, ADLS, and PaaS services
- Define and implement scalable data Lakehouse architectures aligned with OSDU standards
- Build and manage end-to-end data pipelines for batch and real-time processing using PySpark (a brief sketch follows this list)
- Establish data governance frameworks including metadata, lineage, security, and access control
- Implement DevOps best practices (CI/CD, Azure Pipelines, GitHub, automated deployments)
- Collaborate with stakeholders to translate business needs into technical solutions
- Develop and maintain architecture documentation, solution patterns, and standards
- Provide technical leadership and mentorship to engineering teams
- Optimize solutions for performance, cost, reliability, and security
- Ensure alignment with enterprise architecture and compliance standards
- Drive adoption of modular and reusable cloud data components
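As a rough sketch of the real-time leg of such pipelines, assuming hypothetical ADLS mount paths and a toy schema, a PySpark Structured Streaming job on Databricks might look like this:

```python
# A minimal, hypothetical streaming aggregation with PySpark Structured Streaming.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sensor_stream").getOrCreate()

stream = (
    spark.readStream.format("json")
         .schema("device STRING, reading DOUBLE, ts TIMESTAMP")
         .load("/mnt/landing/sensors/")               # hypothetical ADLS mount
)
agg = (
    stream.withWatermark("ts", "10 minutes")          # tolerate late events
          .groupBy(F.window("ts", "5 minutes"), "device")
          .agg(F.avg("reading").alias("avg_reading"))
)
query = (
    agg.writeStream.outputMode("append")
       .format("delta")                               # Delta Lake sink (Databricks)
       .option("checkpointLocation", "/mnt/checkpoints/sensors/")
       .start("/mnt/curated/sensor_aggregates/")
)
# query.awaitTermination()  # block here in a real job
```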
Required Skills & Qualifications
Core Technical Skills
- Azure Databricks, Apache Spark (PySpark), Delta Lake, Unity Catalog
- Azure Data Lake Storage (ADLS), Azure Data Factory, Synapse Analytics
- Strong experience in Python-based data engineering
- Data pipeline development (batch + real-time)
Architecture & Advanced Skills
- Data Lakehouse architecture and distributed systems
- Microservices, APIs, and integration frameworks
- OSDU (Open Subsurface Data Universe) or similar energy data models
DevOps & Tools
- CI/CD tools: Azure Pipelines, GitHub Actions
- Infrastructure as Code: Terraform or similar
Other Skills
- Data governance, security, compliance, and cost optimization
- Strong analytical and problem-solving skills
- Excellent communication and stakeholder management
About Us:
We turn customer challenges into growth opportunities.
Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.
We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.
Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using their deep technology expertise and leveraging strategic partnerships with top-tier technology partners.
Experience Range: 4-8 Years
Role: Full Stack Developer
Duties:
As a Full Stack Engineer, you will work in small teams in a highly collaborative way, use the latest technologies, and enjoy seeing the direct impact of your work. Our highly skilled system architects and development managers configure software packages and build custom applications, creating the foundation for rapid and cost-effective implementation of systems that maximize value from day one. Our development teams are small and flexible and employ agile methodologies to quickly provide our consultants with the solutions they need. We combine the latest open-source technologies with traditional enterprise software products.
The Role:
We create both rapid prototypes, usually in 2 to 3 weeks, as well as full-scale applications typically within 2 to 3 months, by working collaboratively and iteratively through design and development to deliver fully functioning web-based and mobile applications that meet business goals. Our Front-End Developers contribute to the architecture across the technology stack, from database to native apps.
Skills:
5–9 years of experience, with a proven record of hands-on software development in at least one of the following languages: Java, C#, C/C++, Python, JavaScript, or Ruby, plus modern frontend proficiency in React and TypeScript. Demonstrated ownership of delivering end-to-end solutions (from design through production support), with strong proactivity in identifying opportunities, anticipating risks, and driving improvements without waiting for direction.
Significant experience designing, implementing, and operating Web Services and APIs (REST, SOAP, RPC, RMI) including API monitoring/observability and performance tuning. Solid understanding of network communication protocols (HTTP, TCP/IP, UDP, SMTP, DNS) and distributed system behaviors.
Capable of applying best coding practices, design patterns, and evaluating tradeoffs in complex, microservices-based architectures. Well versed in cloud computing (AWS), automated testing, CI/CD, and DevOps tooling; comfortable owning reliability, scalability, and operational excellence. Bonus: hands-on knowledge of Terraform (infrastructure as code).
Experience with relational data stores (MySQL, SQL Server, Oracle) and non-relational technologies, with strong proficiency in MongoDB (schema design, indexing, performance optimization), plus exposure to Elasticsearch, Cassandra, and related ecosystems. Strong professional experience with frameworks such as Node.js, AngularJS, Spring, Guice, and expertise building mobile, responsive/adaptive applications.
First-hand understanding of Agile development methodologies, with a commitment to engineering excellence (e.g., DRY, TDD, CI) and pragmatic delivery.
Non-Technical: First and foremost, passionate about technology, especially AI and emerging/disruptive technologies, and excited about translating innovation into real product impact. Strong command of English (verbal and written), excellent interpersonal skills, and a highly collaborative mindset, able to partner effectively across engineering, product, design, and stakeholders. Sound problem-solving ability to quickly process complex information and communicate it clearly and simply. Demonstrated leadership/mentorship, accountability, and a self-starter attitude suited to environments that foster entrepreneurial thinking.
What We Offer
- Professional Development and Mentorship.
- Hybrid work mode with a remote-friendly workplace (Great Place To Work Certified six times in a row).
- Health and Family Insurance.
- 40+ days of leave per year, along with maternity and paternity leave.
- Wellness, meditation, and counselling sessions.
Job Title : AWS Data Engineer
Experience : 4+ Years
Location : Bengaluru (HSR – Hybrid, 3 Days WFO)
Notice Period : Immediate Joiner
💡 Role Overview :
We are looking for a skilled AWS Data Engineer to design, build, and scale modern data platforms. The role involves working with AWS-native services, Python, Spark, and DBT to deliver secure, scalable, and high-performance data solutions in an Agile environment.
🔥 Mandatory Skills :
Python, SQL, Spark, AWS (S3, Glue, EMR, Redshift, Athena, Lambda), DBT, ETL/ELT pipeline development, Airflow/Step Functions, Data Lake (Parquet/ORC/Iceberg), Terraform & CI/CD, Data Governance & Security
🚀 Key Responsibilities :
- Design, build, and optimize ETL/ELT pipelines using Python, DBT, and AWS services
- Develop and manage scalable data lakes on S3 using formats like Parquet, ORC, and Iceberg
- Build end-to-end data solutions using Glue, EMR, Lambda, Redshift, and Athena
- Implement data governance, security, and metadata management using Glue Data Catalog, Lake Formation, IAM, and KMS
- Orchestrate workflows using Airflow, Step Functions, or AWS-native tools (a brief sketch follows this list)
- Ensure reliability and automation via CloudWatch, CloudTrail, CodePipeline, and Terraform
- Collaborate with data analysts and data scientists to deliver actionable insights
- Work in an Agile environment to deliver high-quality data solutions
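To illustrate the orchestration responsibility flagged above, here is a minimal Airflow DAG sketch; the DAG id, schedule, and task bodies are hypothetical placeholders (Step Functions would express the same flow as a state machine).

```python
# A minimal, hypothetical Airflow DAG with two dependent tasks.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from S3")       # placeholder for real extract logic

def transform():
    print("run DBT / Spark transforms")   # placeholder for real transform logic

with DAG(
    dag_id="daily_orders_pipeline",       # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                    # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
):
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform              # run transform only after extract succeeds
```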
✅ Mandatory Skills :
- Strong Python (including AWS SDKs), SQL, Spark
- Hands-on experience with AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
- Experience with DBT and ETL/ELT pipeline development
- Workflow orchestration using Airflow / Step Functions
- Knowledge of data lake formats (Parquet, ORC, Iceberg)
- Exposure to DevOps practices (Terraform, CI/CD)
- Strong understanding of data governance and security best practices
- 4–7 years in Data Engineering (3+ years on AWS)
➕ Good to Have :
- Understanding of Data Mesh architecture
- Experience with platforms like Data.World
- Exposure to Hadoop / HDFS ecosystems
🤝 What We’re Looking For :
- Strong problem-solving and analytical skills
- Ability to work in a collaborative, cross-functional environment
- Good communication and stakeholder management skills
- Self-driven and adaptable to fast-paced environments
📝 Interview Process :
- Online Assessment
- Technical Interview
- Fitment Round
- Client Round
Job Title : Azure Data Scientist (AI/ML)
Experience : 5 to 10 Years
Location : Bengaluru
Work Mode : Hybrid (4 Days WFO, Tue to Fri – Non-Negotiable)
Notice Period : Immediate Joiner
💡 Role Overview :
We are looking for a highly skilled Azure Data Scientist with strong expertise in AI/ML, Python, and cloud-based data platforms. The role involves building scalable ML solutions, working on GenAI & RAG use cases, and delivering business impact through data-driven insights.
🔥 Mandatory Skills :
Python, Azure Machine Learning, Databricks, AI/ML model development (5+ yrs), Statistics & Probability, EDA & Data Modeling, Machine Learning algorithms, GenAI/RAG experience
✅ Key Responsibilities :
- Design, develop, and deploy AI/ML models to solve complex business problems
- Perform Exploratory Data Analysis (EDA) for data cleaning, discovery, and insights
- Build and optimize ML pipelines using Azure Machine Learning & Databricks (a toy sketch follows this list)
- Work on GenAI applications, RAG implementations, and advanced analytics solutions
- Collaborate with data engineers, business stakeholders, and domain experts
- Translate complex data into actionable business insights
- Manage model lifecycle (development, validation, deployment, monitoring)
- Communicate model outputs and insights to technical & non-technical stakeholders
- Drive innovation and contribute to AI/ML best practices and strategy
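As a toy illustration of the model-development work above, here is a scikit-learn pipeline with a held-out validation split. The data is synthetic and the model deliberately simple; real Azure ML or Databricks pipelines wrap the same idea in managed experiments and deployments.

```python
# A toy train/validate workflow with a scikit-learn pipeline on synthetic data.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))                 # synthetic features
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # synthetic label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),              # preprocessing step
    ("clf", LogisticRegression()),            # simple baseline model
])
model.fit(X_tr, y_tr)
print("validation accuracy:", accuracy_score(y_te, model.predict(X_te)))
```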
🧠 Required Skills (Must Have) :
- Strong experience in Python (ML/AI development)
- Hands-on with Azure Machine Learning & Databricks
- Deep understanding of Mathematics, Probability, and Statistics
- Expertise in Machine Learning & Data Science methodologies
- Experience in EDA, data visualization, and model development
- Exposure to GenAI, RAG, and ML application development
- Minimum 5+ years of experience in AI/ML model development
- Strong problem-solving and analytical skills
➕ Good to Have :
- Experience with MLOps practices
- Domain knowledge in Energy / Oil & Gas value chain
- Experience in data visualization tools
- Team collaboration or mentoring experience
🤝 What We’re Looking For :
- Strong communication & stakeholder management skills
- Ability to work in a cross-functional, global team environment
- Self-driven, adaptable, and innovation-focused mindset
📝 Interview Process :
- Geektrust Assessment (Assemble)
- Technical Interview
- Fitment Round
- Client Round

A leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage
Responsibilities:
- Lead architecture, technical decisions, and ensure code quality, scalability, and performance
- Develop backend systems using Python & SQL; build APIs and optimize databases
- Work with frontend (React/Angular) and API-driven architectures
- Integrate AI/ML models and support analytics/LLM-based solutions
- Manage cloud deployments (Azure/AWS) and implement CI/CD practices
- Ensure system reliability, monitoring, and production readiness
- Mentor team members, conduct reviews, and collaborate with cross-functional teams
Key Responsibilities:
- Lead and mentor a team of Java and Python developers, providing technical guidance and fostering a culture of continuous learning and improvement.
- Oversee the design, development, and implementation of high-performance, scalable, and secure software solutions for the financial services industry.
- Collaborate with product managers and architects to translate business requirements into technical specifications and ensure alignment with overall product strategy.
- Drive the adoption of best practices in software development, including code reviews, testing, and continuous integration/continuous deployment (CI/CD).
- Manage project timelines and resources effectively, ensuring on-time and within-budget delivery of projects.
- Identify and mitigate technical risks, proactively addressing potential issues and ensuring the stability and reliability of our platforms.
- Stay abreast of emerging technologies and trends in Java, Python, and related fields, and evaluate their potential application to our products and services.
- Contribute to the development of technical documentation and training materials.
Required Skillset:
- Demonstrated expertise in Java and Python development, with a strong understanding of object-oriented principles, design patterns, and data structures.
- Proven ability to lead and mentor a team of software engineers, fostering a collaborative and high-performing environment.
- Experience in designing and developing scalable, high-performance, and secure software solutions.
- Strong understanding of software development methodologies, including Agile and Waterfall.
- Excellent communication, interpersonal, and problem-solving skills.
- Ability to work effectively in a fast-paced, dynamic environment.
- Bachelor's or Master's degree in Computer Science or a related field.
- Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
Responsible for developing, enhancing, modifying, and maintaining chatbot applications in the Global Markets environment. The role involves designing, coding, testing, debugging, and documenting conversational AI solutions, along with supporting activities aligned to the corporate systems architecture.
You will work closely with business partners to understand requirements, analyze data, and deliver optimal, market-ready conversational AI and automation solutions.
Key Responsibilities
- Design, develop, test, debug, and maintain chatbot and virtual agent applications
- Collaborate with business stakeholders to define and translate requirements into technical solutions
- Analyze large volumes of conversational data to improve chatbot accuracy and performance
- Develop automation workflows for data handling and refinement
- Train and optimize chatbots using historical chat logs and user-generated content
- Ensure solutions align with enterprise architecture and best practices
- Document solutions, workflows, and technical designs clearly
Required Skills
- Hands-on experience in developing virtual agents (chatbots/voicebots) and Natural Language Processing (NLP)
- Experience with one or more AI/NLP platforms such as:
- Dialogflow, Amazon Lex, Alexa, Rasa, LUIS, Kore.AI
- Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, Converse.ai
- Strong programming knowledge in Python, JavaScript, or Node.js
- Experience training chatbots using historical conversations or large-scale text datasets
- Practical knowledge of:
- Formal syntax and semantics
- Corpus analysis
- Dialogue management
- Strong written communication skills
- Strong problem-solving ability and willingness to learn emerging technologies
Nice-to-Have Skills
- Understanding of conversational UI and voice-based processing (Text-to-Speech, Speech-to-Text)
- Experience building voice apps for Amazon Alexa or Google Home
- Experience with Test-Driven Development (TDD) and Agile methodologies
- Ability to design and implement end-to-end pipelines for AI-based conversational applications
- Experience in text mining, hypothesis generation, and historical data analysis
- Strong knowledge of regular expressions for data cleaning and preprocessing
- Understanding of API integrations, SSO, and token-based authentication
- Experience writing unit test cases as per project standards
- Knowledge of HTTP, REST APIs, sockets, and web services
- Ability to perform keyword and topic extraction from chat logs
- Experience training and tuning topic modeling algorithms such as LDA and NMF (a brief sketch follows this list)
- Understanding of classical Machine Learning algorithms and appropriate evaluation metrics
- Experience with NLP frameworks such as NLTK and spaCy
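For the topic-extraction item above, here is a small sketch using scikit-learn's LDA on a handful of made-up chat snippets; real chat-log corpora would need far more preprocessing and tuning.

```python
# Toy topic extraction from chat logs with Latent Dirichlet Allocation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

chats = [                                   # made-up chat snippets
    "I cannot log in to my account",
    "password reset link not working",
    "when will my order arrive",
    "order delayed and tracking not updated",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(chats)              # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-3:]]
    print(f"topic {i}: {top_terms}")
```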

We are looking for an experienced Python Developer with 5–6 years of hands-on experience in designing, developing, and maintaining scalable backend applications and APIs. The ideal candidate should have strong expertise in Python, backend frameworks, databases, and cloud/deployment practices. The candidate should be capable of working in a fast-paced environment and collaborating with cross-functional teams to deliver high-quality software solutions.
Key Responsibilities
- Design, develop, test, and maintain robust and scalable Python-based applications.
- Build and integrate RESTful APIs and backend services (a brief sketch follows this list).
- Work on server-side logic, database integration, and performance optimization.
- Collaborate with frontend developers, QA teams, DevOps, and product teams for end-to-end delivery.
- Write reusable, testable, and efficient code following best practices.
- Debug, troubleshoot, and resolve production issues.
- Participate in code reviews, technical design discussions, and architecture planning.
- Optimize applications for maximum speed, scalability, and reliability.
- Implement security and data protection measures.
- Work with CI/CD pipelines and deployment processes.
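As a minimal sketch of the REST API work described above, assuming FastAPI (one of the frameworks listed below) and a toy in-memory store:

```python
# A minimal, hypothetical CRUD-style API with FastAPI and Pydantic.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

_items: dict[int, Item] = {}          # toy in-memory store, not production storage

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> Item:
    if item_id in _items:
        raise HTTPException(status_code=409, detail="item already exists")
    _items[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="item not found")
    return _items[item_id]
```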
Required Skills
- 5–6 years of hands-on Python development experience.
- Hands-on experience with Python frameworks such as:
- Django
- Flask
- FastAPI
- Strong understanding of OOP, data structures, and algorithms.
- Experience in building and consuming REST APIs.
- Good knowledge of SQL and relational databases like:
- MySQL
- PostgreSQL
- Experience with NoSQL databases like:
- MongoDB
- Redis (preferred)
- Knowledge of ORM frameworks such as SQLAlchemy or Django ORM (a brief sketch follows this list).
- Familiarity with Git/GitHub/GitLab version control.
- Understanding of unit testing, debugging, and code quality practices.
- Experience in working with Linux/Unix environments.
- Knowledge of Docker, containerization, and deployment concepts.
- Exposure to cloud platforms like AWS / Azure / GCP is preferred.
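And for the ORM item above, a small SQLAlchemy 2.0-style sketch; it uses an in-memory SQLite database so it runs standalone, with a hypothetical table and row.

```python
# A self-contained SQLAlchemy 2.0-style ORM example (in-memory SQLite).
from sqlalchemy import create_engine, select, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"                        # hypothetical table
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(50))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="asha"))                 # hypothetical row
    session.commit()
    user = session.scalars(select(User).where(User.name == "asha")).one()
    print(user.id, user.name)
```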
Preferred / Good to Have Skills
- Experience in microservices architecture.
- Knowledge of Celery, asynchronous processing, and message queues like:
- RabbitMQ
- Kafka
- Familiarity with CI/CD pipelines.
- Experience in writing clean architecture and scalable backend systems.
- Exposure to DevOps practices is a plus.
- Experience in Agile/Scrum methodology.
About the Role
Qiro is building the infrastructure powering the next generation of underwriting, credit analytics, and tokenized private credit markets.
We are looking for a Tech Lead — Credit & Blockchain Infrastructure to lead the architecture and execution of our core systems — spanning underwriting engines, credit lifecycle workflows, and blockchain-integrated capital markets infrastructure.
This is not a feature delivery role. This is a system ownership role.
You will be hands-on while leading a growing engineering team in a fast-moving, in-office environment.
What You’ll Own
- Define and evolve the long-term technical vision for Qiro’s programmable credit infrastructure — architecting cohesive systems that unify underwriting engines, credit lifecycle workflows, and tokenized capital markets.
- Own the end-to-end architecture of scalable backend platforms (Python and/or TypeScript), establishing clear boundaries between risk logic, platform APIs, and smart contract integrations while ensuring scalability, auditability, and extensibility.
- Build and standardize configurable underwriting and credit lifecycle systems — from onboarding and drawdown orchestration to repayment waterfalls and early closures — ensuring deterministic, traceable financial state transitions at institutional scale. (A toy sketch follows this list.)
- Set integration and infrastructure standards across API contracts, data models, validation layers, and event-driven architectures, enabling reliable synchronization between off-chain services and on-chain contracts.
- Architect secure and resilient blockchain integrations, including wallet interactions, capital flow coordination, and observable on-chain/off-chain state reconciliation.
- Lead high-impact, cross-product initiatives from RFC and system design through production launch — validating architectural decisions, aligning stakeholders, and delivering measurable improvements in reliability, performance, and developer velocity.
- Elevate reliability and operational excellence by defining SLOs, strengthening CI/CD and observability practices, reducing latency, and minimizing systemic risk in financial workflows.
- Build and scale the engineering organization — mentoring senior engineers, shaping hiring standards, driving architecture reviews, and fostering a culture of ownership, craftsmanship, and first-principles thinking.
- Partner closely with Product, Design, Security, and Operations to translate complex lending and capital market mechanics into simple, robust platform primitives.
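To make "deterministic, traceable financial state transitions" concrete, here is a toy sketch of an auditable state machine for a simplified loan lifecycle. The states, events, and transition table are illustrative assumptions, not Qiro's actual model.

```python
# A toy, auditable state machine for a simplified loan lifecycle.
from enum import Enum

class LoanState(Enum):
    ONBOARDED = "onboarded"
    DRAWN = "drawn"
    REPAID = "repaid"

# event -> (required current state, next state); illustrative assumptions only
TRANSITIONS = {
    "drawdown": (LoanState.ONBOARDED, LoanState.DRAWN),
    "repayment": (LoanState.DRAWN, LoanState.REPAID),
}

def apply_event(state, event, audit_log):
    """Apply an event deterministically, recording it for traceability."""
    src, dst = TRANSITIONS[event]
    if state is not src:
        raise ValueError(f"illegal transition: {event!r} from {state}")
    audit_log.append((state.value, event, dst.value))   # audit trail entry
    return dst

log = []
state = apply_event(LoanState.ONBOARDED, "drawdown", log)
state = apply_event(state, "repayment", log)
print(log)  # [('onboarded', 'drawdown', 'drawn'), ('drawn', 'repayment', 'repaid')]
```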
Who You Are
- 6-8+ years of engineering experience, with 3+ years in technical leadership roles.
- Strong backend architecture experience in Python and/or TypeScript.
- Comfortable designing distributed systems and financial workflows.
- Experience building fintech, lending, underwriting, trading, or blockchain-integrated systems.
- Strong understanding of API design, state management, and data modeling.
- Able to navigate ambiguity and build 0→1 infrastructure.
- Hands-on builder who leads by writing production-grade code.
We Value
- Experience with underwriting engines or policy-driven decision systems.
- Exposure to smart contracts and blockchain integrations.
- Familiarity with PostgreSQL and event-driven architectures.
- Experience in early-stage or high-growth startups.
- Strong product thinking and ability to translate complex financial logic into scalable systems.
Why Join Qiro
- Lead the architecture of a programmable credit infrastructure platform.
- Join the founding technical leadership team.
- High autonomy and ownership — your decisions shape the company.
- In-office collaboration in Bangalore for speed and iteration.
- Competitive compensation and meaningful equity.
Our Culture
We operate with:
- First-principles thinking
- Technical craftsmanship
- High ownership
- Fast execution with long-term architectural discipline
Company Description
Euphoric Thought Technologies Pvt. Ltd. provides modern technology solutions with a focus on performance and results for organizations. We are committed to creating a better future by acting differently, thinking carefully, and always being enthusiastic. Euphoric offers services in Product Development, Cloud Management and Consulting, DevOps, ML/AI, ServiceNow Integration, Blockchain Development, and Data Analytics.
Role Description
This is a full-time, on-site role in Bengaluru for a Senior Python Developer at Euphoric Thought Technologies Pvt. Ltd. The developer will be responsible for back-end and front-end web development, software development, and full-stack development, using Cascading Style Sheets (CSS) to build effective and efficient applications.
Qualifications
- Back-End Web Development and Full-Stack Development skills
- Front-End Development and Software Development skills
- Proficiency in Cascading Style Sheets (CSS)
- Experience with Python, Django, and Flask frameworks
- Strong problem-solving and analytical skills
- Ability to work collaboratively in a team environment
- Bachelor's or Master's degree in Computer Science or relevant field
- Agile Methodologies: Proven experience working in agile teams, demonstrating the application of agile principles with lean thinking.
- Front-End: ReactJS skills
- Data Engineering: Useful experience blending data engineering with core software engineering.
- Additional Programming Skills: Desirable experience with other programming languages (C++, .NET) and frameworks.
- CI/CD Tools: Familiarity with GitHub Actions is a plus.
- Cloud Platforms: Experience with cloud platforms (e.g., Azure, AWS) and containerization technologies (e.g., Docker, Kubernetes).
- Code Optimization: Proficient in profiling and optimizing Python code.
We're looking for an experienced Senior Python Developer to join our team. You will lead and contribute to Python-based software projects, ensuring code quality and efficiency.
Senior Python Developer Job Responsibilities
- Design and Development: Senior Python Developers are in charge of creating Python-based applications and systems. Their code is the foundation of all software projects, ensuring functionality and performance.
- Leadership & Mentorship: Senior Developers frequently take on leadership positions, guiding and mentoring junior developers. They share technical expertise and ensure the team adheres to best practices.
- Collaboration: Working collaboratively with cross-functional groups is an important element of this role. Senior Developers help define project demands and specifications, ensuring that software meets business objectives.
- Code Quality Assurance: A Senior Python Developer's role includes code reviews. They ensure code quality, suggest areas for development, and ensure best practices are followed.
- Troubleshooting and Debugging: Senior Python Developers are in charge of finding and resolving code bugs. Their strong problem-solving abilities are put to use as they troubleshoot and debug software to ensure its flawless operation.
- Staying Informed: It is critical to stay current with the newest trends and standards in Python development. Senior Developers ought to be knowledgeable about new technologies and tools.
- Performance Optimization: They are in charge of optimization and testing to ensure that software is functional and operates smoothly.
- Documentation: Proper code and technical specifications documentation is required to ensure that the development process is open and readily available to the team.
Senior Python Developer Requirements and Skills
- Educational Background: A bachelor's or master's degree in computer science or a related field is a good starting point for this position.
- Experience: 6+ years of proven experience as a Python Developer is required. A strong project portfolio demonstrates expertise and capability.
- Python Proficiency: A strong understanding of Python and its associated libraries is required. It is critical to have a thorough understanding of Python's capabilities and limitations.
- Web Frameworks: Knowledge of web frameworks such as Django or Flask is advantageous because it speeds up web application development.
- Database Knowledge: Understanding of relational and non-relational databases is frequently required. Understanding how to work with databases is essential for developing reliable software.
- Front-End Skills: Being familiar with front-end technologies such as HTML, CSS, and JavaScript can be a valuable addition to the skill set of a Senior Python Developer, particularly when working on web applications.
- Version Control: Working knowledge of source control systems such as Git is frequently required, as it aids in code integrity and collaboration.
- Problem-Solving Skills: Strong skills in problem-solving and attention to detail are required. Senior Python developers must be able to effectively identify and resolve issues.
- Communication and Collaboration: Effective communication and collaboration with team members and stakeholders are critical to the success of projects.
- Leadership Experience: Prior leadership or mentorship experience is a significant asset. The ability to mentor and lead junior developers is frequently required.
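As a minimal illustration of the web-framework skills listed above, here is a tiny Flask sketch; the route and payload are purely illustrative and are not part of Euphoric's actual codebase.

```python
# Minimal Flask service sketch; the endpoint and response are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/health")
def health():
    # A simple health-check endpoint of the kind most services expose.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(debug=True)  # development server only; a production setup would use gunicorn/uwsgi
```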
Role Overview: We are seeking a Research Engineer to develop AI-driven solutions at the intersection of energy, climate science, and artificial intelligence. You will play a pivotal role in product development, leveraging data science and machine learning to solve engineering challenges.
Responsibilities:
- Develop and deploy data-driven solutions for energy and power market applications.
- Analyze large, diverse data sets: meteorological data, local sensor readings (wind, solar, consumption), and imagery (e.g., satellite images).
- Solve core engineering problems with AI/ML techniques, domain expertise, and advanced modeling.
- Design and build scalable pipelines for training, testing, and deploying models.
- Communicate complex ideas effectively to both technical and non-technical stakeholders.
- Collaborate across teams to drive product development and ensure impactful outcomes.
Expectations:
- Ability to move from broad vision to technical solutions.
- Ownership mindset: accountability for effort and outcomes.
- Integrity, transparency, and teamwork.
Requirements:
- Strong analytical skills with a data-driven and scientific approach.
- Proficiency in Python, capable of handling both structured and unstructured data.
- Prior experience (industry or academia) building machine learning or deep learning based AI solutions.
- Prior experience in LLM & GenAI is a plus.
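As a toy illustration of the data-driven modelling described above, here is a short scikit-learn sketch that predicts power output from synthetic weather features; real work would use actual sensor and meteorological feeds rather than generated data.

```python
# Toy forecasting sketch: predict power output from synthetic weather features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
wind_speed = rng.uniform(0, 25, size=2000)      # m/s, synthetic
temperature = rng.uniform(-5, 40, size=2000)    # degrees C, synthetic
power = 0.5 * wind_speed**3 * 1e-2 + rng.normal(0, 1, size=2000)  # invented target

X = np.column_stack([wind_speed, temperature])
X_train, X_test, y_train, y_test = train_test_split(X, power, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```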
Job Summary
We are seeking a skilled Python Platform Developer to join our engineering team. You will be responsible for building, optimizing, and maintaining the core backend infrastructure and internal platforms that power our applications. The ideal candidate will build scalable API architectures, enhance data security, and implement automation to improve developer productivity.
Key Responsibilities
- Platform Development: Design, develop, and maintain robust and scalable backend services, API frameworks, and shared libraries using Python.
- Infrastructure Automation: Build and maintain tools for infrastructure automation using technologies such as AWS (Lambda, EC2, S3), Docker, and Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
- Performance Optimization: Improve system performance, enable low-latency API interactions, and optimize data storage solutions.
- CI/CD Optimization: Develop, maintain, and improve automated testing and continuous integration/continuous deployment (CI/CD) pipelines.
- Collaboration: Work closely with product engineers, DevOps, and frontend developers to define requirements and deliver reliable infrastructure solutions.
- Security & Monitoring: Implement strong security protocols and monitoring solutions (e.g., Prometheus, Datadog) to ensure platform reliability.
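As an illustration of the platform and async work described above, here is a minimal FastAPI sketch; the route, model, and in-memory store are hypothetical placeholders rather than the actual API.

```python
# Minimal FastAPI service sketch; route names and the fake data store are illustrative.
import asyncio
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="platform-api")

class Item(BaseModel):
    id: int
    name: str

FAKE_DB = {1: Item(id=1, name="example")}  # stand-in for a real database or cache

@app.get("/items/{item_id}", response_model=Item)
async def read_item(item_id: int) -> Item:
    await asyncio.sleep(0)  # placeholder for a real async DB/cache call
    item = FAKE_DB.get(item_id)
    if item is None:
        raise HTTPException(status_code=404, detail="item not found")
    return item

# Run locally (assuming the file is app.py and uvicorn is installed):
#   uvicorn app:app --reload
```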
Required Skills and Qualifications
- Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
- Experience: 3–5+ years of experience in software development with a heavy focus on Python.
- Core Python: Deep understanding of Python 3.x, object-oriented programming (OOP), and asynchronous programming (e.g., asyncio).
- Frameworks: Hands-on experience with web frameworks like FastAPI, Django, or Flask.
- Cloud Platforms: Experience with AWS or GCP services.
- Tools: Proficient with Git, Docker, and CI/CD pipelines.
- Database: Strong knowledge of SQL and database management.
Preferred Skills
- Experience with serverless architectures.
- Knowledge of Kubernetes.
- Experience in a DevOps or Site Reliability Engineering (SRE) role.
DevOps Engineer
Location: Bangalore office
About Peliqan
Peliqan is an all-in-one data platform combining ELT/ETL pipelines, a built-in data warehouse, SQL and low-code Python transformations, reverse ETL, and AI-powered data activation. We connect 250+ data sources and serve enterprise teams, consultants, and SaaS companies. SOC 2 Type II certified and GDPR compliant.
The Role
Own and evolve the infrastructure powering Peliqan's multi-tenant data platform. You'll manage Kubernetes clusters, cloud resources, CI/CD pipelines, and monitoring — keeping everything reliable, secure, and scalable. You'll be the go-to person for infrastructure support across the engineering team.
Responsibilities
- Manage and optimise Kubernetes clusters running production workloads — data pipelines, APIs, and customer-facing services.
- Maintain Docker-based local development environments for the engineering team.
- Administer cloud infrastructure on AWS and Google Cloud (compute, storage, networking, managed databases).
- Build and maintain CI/CD pipelines for automated testing, building, and deploying across staging and production.
- Set up and manage monitoring, alerting, and logging for platform health and incident response.
- Manage release processes — deployments, rollbacks, and release strategies.
- Maintain infrastructure-as-code using Helm charts.
- Support security hardening and compliance efforts (SOC 2, GDPR).
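To illustrate the kind of Python tooling this role might build around the responsibilities above, here is a small sketch using the official Kubernetes Python client to list pods that are not healthy; cluster access via a local kubeconfig is assumed, and the script itself is hypothetical.

```python
# Small ops/tooling sketch: list pods that are not Running or Succeeded.
from kubernetes import client, config

def unhealthy_pods() -> list[str]:
    config.load_kube_config()  # inside a cluster you would use config.load_incluster_config()
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            bad.append(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
    return bad

if __name__ == "__main__":
    for line in unhealthy_pods():
        print(line)
```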
Requirements
- 3+ years in a DevOps, SRE, or Infrastructure Engineering role.
- Strong hands-on experience with Kubernetes and Helm charts.
- Deep familiarity with Docker for containerisation and local dev workflows.
- Production experience with AWS and/or Google Cloud.
- Proficiency in Python and Bash scripting for automation and tooling.
- Solid grasp of DevOps principles: infrastructure-as-code, GitOps, observability, continuous delivery.
- Experience with CI/CD platforms (GitHub Actions, GitLab CI, or similar).
Nice to Have
- Experience supporting multi-tenant SaaS platforms or data infrastructure at scale.
- Knowledge of PostgreSQL, MySQL, or cloud-managed database administration.
- Exposure to security compliance frameworks (SOC 2, ISO 27001, GDPR).
QA Tester
Location: Bangalore office
About Peliqan
Peliqan is an all-in-one data platform combining ELT/ETL pipelines, a built-in data warehouse, SQL and low-code Python transformations, reverse ETL, and AI-powered data activation. We connect 250+ data sources and serve enterprise teams, consultants, and SaaS companies. SOC 2 Type II certified and GDPR compliant.
The Role
Own quality end-to-end across Peliqan's platform — from the frontend UI and data apps to backend pipelines, connectors, and APIs. You'll design test strategies, build automated test suites, and work closely with developers to ship reliable software. Comfort using AI tools to accelerate test creation is essential.
Responsibilities
- Design and maintain test plans covering manual and automated testing for features, regressions, and releases.
- Write unit tests for the frontend (Jest) and backend (pytest).
- Build and extend end-to-end test suites using Playwright across critical user flows — connector setup, data transformations, data app publishing, API creation.
- Use AI tools (Copilot, Claude, etc.) to rapidly generate and refine test cases and test data.
- Triage, document, and track defects with clear reproduction steps.
- Integrate automated tests into CI/CD pipelines.
- Conduct exploratory testing across data pipelines, the query engine, and user-facing interfaces.
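To illustrate the Playwright work above, here is a minimal end-to-end test sketch in Python; the URL, selectors, and credentials are placeholders rather than real Peliqan details.

```python
# Sketch of a Playwright end-to-end login test; selectors and URL are placeholders.
from playwright.sync_api import sync_playwright

def test_login_flow():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://app.example.com/login")            # placeholder URL
        page.fill("input[name='email']", "qa@example.com")     # placeholder credentials
        page.fill("input[name='password']", "not-a-real-password")
        page.click("button[type='submit']")
        page.wait_for_selector("text=Dashboard")                # wait for post-login UI
        assert "Dashboard" in page.content()
        browser.close()
```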
Requirements
- 2+ years in a QA or SDET role, ideally in SaaS or data products.
- Hands-on experience with Jest, pytest, and Playwright.
- Comfortable with Python for scripting and test automation.
- Demonstrated use of AI tools to craft and scale test suites.
- Familiarity with CI/CD pipelines and REST API testing.
- Strong analytical mindset and clear written communication.
Nice to Have
- Experience testing ETL/ELT pipelines or database-heavy applications.
- Familiarity with SQL and data validation testing.
- Exposure to Docker and containerised environments.
Role & Responsibilities
As a Junior Full Stack Engineer at , you will work on product features end-to-end - from the data model to the pixel on the screen.
You will work alongside senior engineers, learn to bridge engineering and product, and contribute to translating complex business workflows into interfaces that enterprise teams actually use. You will own smaller features and grow into full vertical ownership over time.
- Work on product features end-to-end - complex configuration UIs, workflow builders, and operational dashboards for enterprise users
- Contribute to customer-facing surfaces - SDKs, embeddable flows, and APIs that directly shape how clients experience
- Learn and apply frontend architecture best practices - component structure, state management, performance optimisation, and accessibility
- Build backend APIs and data models under senior guidance - developing end-to-end ownership over time
- Debug issues across the full stack - learn to trace problems from symptom to root cause
- Collaborate with design, product, and customer success - hear feedback from real users and let it shape what you build
- Partner with the GenAI engineer to surface AI capabilities through clean, well-designed product interfaces
- Participate in code and design reviews to learn and grow
Ideal Candidate
- Strong Full Stack Engineer profiles
- Mandatory (Experience 1) – Must have 1+ years of hands-on full stack engineering experience (avoid frontend-heavy only profiles, backend heavy will work)
- Mandatory (Experience 2) – Must have strong backend engineering experience using Python, including designing and owning APIs, services, and data models in production environments
- Mandatory (Experience 3) – Must have strong frontend development experience using React (or equivalent), including component architecture, state management, and building production-grade user interfaces
- Mandatory (Experience 4) – Must have end-to-end ownership experience, building and shipping features across the full stack (UI + API + database) without clear handoff boundaries
- Mandatory (Experience 5) – Must have strong web fundamentals, including understanding of browser rendering, performance optimization, and accessibility best practices
- Mandatory (Experience 6) – Must be able to demonstrate solving complex problems on both frontend and backend, clearly articulating trade-offs, decisions, and outcomes
- Mandatory (Experience 7) – Must have solid experience with databases (SQL/NoSQL), including schema design, query optimization, and handling performance bottlenecks
- Mandatory (Company) - Top Product Companies, (preferred early-stage startups with Seed to Series C/D with fast-paced shipping culture)
- Mandatory (Education) - Tier-1 institutes (IITs, BITS, IIITs), Can be skipped if from Top notch product companies
- Preferred (Skill 1) – Strong proficiency in TypeScript, with experience in type-safe architecture and scalable frontend/backend systems
- Preferred (Skill 2) – Experience with testing frameworks such as Cypress / Playwright and strong unit testing practices
- Preferred (Skill 3) – Experience working with cloud platforms (AWS preferred)
Role Summary:
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3-8 years of prior experience in data engineering, with a strong background in working on modern data platforms. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Responsibilities:
• Design, develop, and maintain data pipelines using Databricks (PySpark / Spark SQL)
• Build and manage data pipelines across Bronze, Silver, and Gold layers using Delta Lake
• Implement ETL/ELT workflows for batch and near real-time processing
• Work with Databricks Workflows for orchestration and job scheduling
• Leverage Unity Catalog for data governance, access control, and metadata management
• Optimize Spark jobs, cluster configurations, and cost efficiency
• Collaborate with business and analytics teams to translate requirements into scalable data models
• Integrate data from multiple sources (APIs, databases, cloud storage)
• Ensure data quality, validation, and observability across pipelines
• Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring
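As a rough illustration of the Bronze-to-Silver work described above, here is a small PySpark/Delta sketch; it assumes a Databricks notebook where `spark` is already provided, and the paths and column names are hypothetical.

```python
# Sketch of a Bronze -> Silver step on Databricks. Assumes a notebook where
# `spark` is already available; paths and columns below are placeholders.
from pyspark.sql import functions as F

bronze_path = "/mnt/lake/bronze/orders"   # raw, append-only ingested data
silver_path = "/mnt/lake/silver/orders"   # cleaned, deduplicated data

bronze_df = spark.read.format("delta").load(bronze_path)

silver_df = (
    bronze_df
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)
)

(silver_df.write
    .format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .save(silver_path))
```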
Qualifications:
• Bachelor’s degree in Computer Science, Engineering, or a related field.
• Overall 3+ years of prior experience in data engineering, with a focus on designing and building data pipelines
• Hands-on experience with Databricks platform and ecosystem
• Strong proficiency in Python (PySpark) and SQL
• Experience working with Delta Lake (ACID transactions, time travel, schema evolution)
• Good understanding of data warehousing concepts and dimensional modeling
• Familiarity with Unity Catalog (data governance, RBAC, lineage basics)
• Understanding of Spark performance tuning and optimization techniques
• Experience with cloud platforms (AWS / Azure / GCP)
• Working knowledge of Git and CI/CD practices
• Familiarity with implementing CI/CD processes or other orchestration tools is a plus.
Product Engineer (Full Stack) – AI & Healthcare
About Us
We are building a next-generation computational model of human biology to predict, prevent, and cure diseases at their source. By combining real-world biological data, diagnostics, and advanced modeling, we aim to make biology computable.
Our platform integrates at-home diagnostics, biomarker tracking, and lifestyle data (wearables, sleep, nutrition) to create a continuous, connected view of human health.
Role Overview
We are looking for a Product Engineer who combines strong engineering skills with product intuition and design sensibility. You will build high-quality, performant software that serves as the interface between individuals and their biology.
This is a high-ownership role where you will work closely with design and product teams to ship impactful features end-to-end.
Key Responsibilities
- Build and ship scalable, reliable full-stack systems
- Develop intuitive, high-quality user interfaces for complex biological data
- Collaborate closely with design to deliver polished user experiences
- Own features end-to-end: from ideation to production
- Optimize performance, responsiveness, and usability
- Work in a fast-paced, AI-first environment
Required Skills & Qualifications
- 2–4 years of experience building and shipping production systems
- Strong proficiency in full-stack development
- Experience with modern frontend frameworks (React preferred)
- Solid understanding of backend systems and APIs
- Strong product sense and attention to UI/UX detail
- Ability to work independently and make decisions quickly
Preferred Qualifications (Good to Have)
- Experience in startup environments
- Strong side projects or open-source contributions
- Experience with product analytics tools
- Exposure to AI/ML or agent-based systems
Tech Stack
- Frontend: TypeScript, React 18, Vite, Tailwind, TanStack, Radix
- Mobile: React Native, Nativewind
- Backend: Python (FastAPI, Pydantic)
- Database: Supabase, PostgreSQL
- AI/Tools: Braintrust, LiteLLM
- Analytics: Mixpanel, Amplitude, FullStory
Compensation & Benefits
- CTC: ₹30L – ₹35L (Base) + ESOPs
- Location: HSR Layout, Bengaluru (In-person)
- Work Schedule: 6 days/week (Mon–Sat)
- Joining: Immediate
Perks:
- Sponsored healthy meals (lunch & dinner)
- Gym subscription
- Learning & development budget
- Freedom to use tools/tech of your choice
How We Work
- AI-first mindset
- Founder-mode ownership
- Speed over process
- High trust, high autonomy
- Focus on learning velocity
Why Join Us?
- Work on one of the hardest problems in human history
- Build products at the intersection of AI, healthcare, and design
- Direct impact on improving human health outcomes
- High-growth, high-learning environment
Strong Junior GenAI / AI Backend Engineer Profiles
Mandatory (Experience 1) – Must have 1+ years of full-time experience in software development using LLMs (OpenAI / Gemini / similar) in projects (internship / full-time / strong personal projects)
Mandatory (Experience 2) – Must have strong coding skills in Python and hands-on backend development experience (FastAPI / Django preferred)
Mandatory (Experience 3) – Must have built or contributed to AI/LLM-based applications, such as chatbots, copilots, document processing tools, etc.
Mandatory (Experience 4) – Must have basic understanding of RAG concepts (embeddings, vector DBs, retrieval)
Mandatory (Experience 5) – Must have experience building APIs or backend services and integrating with external systems
Mandatory (Experience 6) – Must have AI/LLM projects clearly mentioned in CV (with what was built, not just tools used)
Mandatory (Experience 7) – Must have worked with modern development tools (Git, APIs, basic cloud exposure)
Mandatory (Tech Stack) – Strong in Python + basic AI/LLM ecosystem
Mandatory (Company) – Product companies / Funded startups (Series A / B / high-growth environments)
Mandatory (Education) - Tier-1 institutes (IITs, BITS, IIITs), Can be skipped if from Top notch product companies
Mandatory (Exclusion) - Avoid candidates who are prompt engineers only, who come from a pure Data Science / ML theory background without backend coding, or who are frontend-heavy engineers.
Strong Senior GenAI / AI Backend Engineer Profiles
Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production
Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems
Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects
Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines
Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases
Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)
Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation
Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects
Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations
Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking
Mandatory (Company) – Product companies / startups, preferably Series A to Series D
Mandatory (Note) - Candidate's overall experience should not be more than 7 years
Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks
Mandatory (Exclusion) – Reject profiles that are only Prompt Engineers, Data Scientists, or Frontend Engineers without strong backend + system building experience
We are looking for a strong Mobile Engineer with backend exposure who can own end-to-end feature development. This is a mobile-heavy fullstack role where you will primarily build scalable mobile applications while contributing to backend services and APIs.
Key Responsibilities
- Design and develop high-quality mobile applications (primary focus)
- Build and integrate RESTful APIs and backend services
- Collaborate with product and design teams to ship features end-to-end
- Ensure performance, scalability, and reliability of mobile apps
- Write clean, maintainable, and testable code
- Participate in architecture discussions and technical decision-making
Must Have Skills
- Strong experience in mobile development (Flutter / React Native / iOS / Android)
- Solid understanding of backend development (Node.js / Java / Python / Go)
- Experience with API design, microservices, and databases
- Good understanding of system design and app performance optimization
- Familiarity with cloud platforms (AWS/GCP)
Good to Have
- Experience working in startup environments
- Exposure to CI/CD pipelines and DevOps practices
- Understanding of real-time systems or scalable architectures
Generative AI System Design
- Architect and implement end-to-end LLM-powered applications
- Build scalable RAG pipelines (chunking, embeddings, hybrid search, reranking)
- Design and implement agent-based workflows (tool calling, multi-step reasoning, orchestration)
- Integrate LLM APIs such as OpenAI and Anthropic, along with open-source models
- Implement structured output validation, grounding strategies, and hallucination mitigation
- Optimize inference cost, latency, and token efficiency
- Design evaluation pipelines for performance, accuracy, and safety
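To make the RAG items above concrete, here is a minimal, framework-agnostic Python sketch of the retrieval step; the embed() function is a toy stand-in for a real embedding model, and the chunk size and names are illustrative only.

```python
# Toy sketch of the retrieval step of a RAG pipeline. embed() is a deterministic
# stand-in for a real embedding model (e.g., an OpenAI or sentence-transformers model).
import numpy as np

def chunk(text: str, size: int = 200) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(texts: list[str], dim: int = 64) -> np.ndarray:
    """Toy 'embedding' used only so the example runs end to end."""
    vecs = np.stack([
        np.random.default_rng(abs(hash(t)) % (2**32)).standard_normal(dim)
        for t in texts
    ])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def retrieve(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Rank chunks by cosine similarity to the query embedding."""
    doc_vecs = embed(chunks)
    q_vec = embed([query])[0]
    scores = doc_vecs @ q_vec          # cosine similarity (vectors are normalised)
    best = np.argsort(scores)[::-1][:top_k]
    return [chunks[i] for i in best]

document = "..."  # source text to index
context = retrieve("What does the refund policy say?", chunk(document))
# `context` would then be placed into the LLM prompt alongside the user question.
```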
2️⃣ Backend & Microservices Engineering
- Design scalable backend systems using Python
- Build REST and async APIs using FastAPI / Django
- Architect and implement microservices with clear service boundaries
- Implement service-to-service communication (REST, gRPC, event-driven messaging)
- Work with message brokers (Kafka / RabbitMQ)
- Optimize database performance (PostgreSQL, MongoDB)
- Implement caching strategies (Redis)
- Build observability: logging, monitoring, distributed tracing
3️⃣ Cloud-Native Architecture & DevOps
- Design and deploy containerized services using Docker
- Orchestrate services using Kubernetes
- Implement CI/CD pipelines
- Ensure system scalability, resilience, and fault tolerance
- Apply distributed systems principles:
- Circuit breakers
- API gateway patterns
- Load balancing
- Horizontal scaling
- Saga patterns
- Zero-downtime deployments
Job Description
Experience: 3–6 Years
Location: Bangalore (Should be willing to travel to Indonesia, 6 months-1 year)
Role Overview
We are looking for a skilled Agentic Engineer / Data Scientist with hands-on experience in designing, architecting, and developing advanced agentic systems and intelligent solutions. The ideal candidate will have a strong foundation in modern AI frameworks and experience building scalable, production-grade AI systems.
Key Responsibilities
- Design and develop agentic systems and AI-driven solutions
- Architect scalable workflows for multi-agent systems
- Implement and optimize RAG (Retrieval-Augmented Generation) pipelines
- Work with vector embeddings and vector databases
- Build and manage agent orchestration frameworks
- Develop end-to-end AI workflows, including data ingestion, processing, and inference
- Collaborate with cross-functional teams to deliver AI-powered solutions
- Continuously evaluate and integrate emerging AI tools and frameworks
Required Skills & Qualifications
- 3–6 years of experience in Data Science, Machine Learning, or AI Engineering
- Strong experience in agent engineering and agent-based architectures
- Hands-on experience with:
- RAG pipelines
- Vector embeddings and semantic search
- Multi-agent orchestration and workflow design
- Proficiency in Python and relevant ML/AI libraries
- Experience with:
- LangChain (mandatory)
- LangGraph (mandatory)
- Solid understanding of LLMs and prompt engineering
- Strong problem-solving and system design skills
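To make the agent-orchestration requirements above concrete without assuming a specific framework API, here is a toy tool-calling loop in plain Python; fake_llm() is a hard-coded stand-in for a real LLM call, and production systems would use LangChain or LangGraph for this.

```python
# Framework-agnostic toy sketch of a tool-calling agent loop.
# fake_llm() only exists so the example runs; it is not a real model call.
import json

TOOLS = {
    "get_weather": lambda city: {"city": city, "forecast": "sunny"},
    "add": lambda a, b: a + b,
}

def fake_llm(messages: list[dict]) -> dict:
    """Pretend the model first decides to call a tool, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Bengaluru"}}
    return {"answer": "It looks sunny in Bengaluru."}

def run_agent(user_query: str) -> str:
    messages = [{"role": "user", "content": user_query}]
    for _ in range(5):                                   # cap the number of steps
        decision = fake_llm(messages)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "Stopped: too many steps."

print(run_agent("What's the weather in Bengaluru?"))
```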
Strong Senior Full Stack Engineer profiles
Mandatory (Experience 1) – Must have 4+ years of hands-on full stack engineering experience (avoid frontend-heavy only profiles, backend heavy will work)
Mandatory (Experience 2) – Must have strong backend engineering experience using Python, including designing and owning APIs, services, and data models in production environments
Mandatory (Experience 3) – Must have strong frontend development experience using React (or equivalent), including component architecture, state management, and building production-grade user interfaces
Mandatory (Experience 4) – Must have end-to-end ownership experience, building and shipping features across the full stack (UI + API + database) without clear handoff boundaries
Mandatory (Experience 5) – Must have strong web fundamentals, including understanding of browser rendering, performance optimization, and accessibility best practices
Mandatory (Experience 6) – Must be able to demonstrate solving complex problems on both frontend and backend, clearly articulating trade-offs, decisions, and outcomes
Mandatory (Experience 7) – Must have solid experience with databases (SQL/NoSQL), including schema design, query optimization, and handling performance bottlenecks
Mandatory (Company) - Must have worked in product-based companies, (preferred early-stage startups with Seed to Series C/D with fast-paced shipping culture)
Mandatory (Education) - Strong CS fundamentals required (CS degree or equivalent). Candidates from Tier-1 institutes (IITs, BITS, IIITs) are preferred but not mandatory

About the Role
We are looking for a highly skilled Data Scientist with strong expertise in Machine Learning, MLOps, and Generative AI. The ideal candidate will have hands-on experience in building scalable ML models, deploying them in production, and working with modern AI frameworks, including GenAI technologies.
Key Responsibilities
· Design, develop, and deploy machine learning models for real-world business problems
· Work on end-to-end ML lifecycle: data preprocessing, model building, evaluation, deployment, and monitoring
· Implement and manage MLOps pipelines for scalable and reproducible workflows
· Utilize tools like MLflow for experiment tracking, model versioning, and lifecycle management
· Develop and integrate Generative AI (GenAI) solutions such as LLM-based applications
· Collaborate with cross-functional teams (engineering, product, business) to translate requirements into AI solutions
· Optimize model performance and ensure production stability
· Stay updated with the latest advancements in AI/ML and GenAI ecosystems
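As a small illustration of the MLflow-based tracking mentioned above, here is a hedged sketch; the model, parameters, and metric are illustrative only, and mlflow plus scikit-learn are assumed to be installed.

```python
# Minimal MLflow experiment-tracking sketch with an illustrative sklearn model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)                                   # experiment parameters
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")                    # versioned model artifact
```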
Required Skills & Qualifications
· 4+ years of experience in Data Science / Machine Learning
· Strong programming skills in Python
· Hands-on experience with ML modeling techniques (supervised, unsupervised, NLP, etc.)
· Solid understanding of MLOps practices and tools
· Experience with MLflow or similar model lifecycle tools
· Practical experience in Generative AI (GenAI), including working with LLMs
· Experience with libraries/frameworks like Scikit-learn, TensorFlow, PyTorch
· Strong understanding of data structures, algorithms, and statistics
· Experience with cloud platforms (AWS/GCP/Azure) is a plus
Good to Have
· Experience with LLM fine-tuning, prompt engineering, or RAG pipelines
· Exposure to Docker, Kubernetes, and CI/CD pipelines
· Knowledge of data engineering workflows
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that entrust us with their most critical infrastructure and operations. We're bootstrapped, profitable, and scaling rapidly by consistently solving real, impactful problems.
What We Value
- Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
- High Velocity: We move fast, iterate faster, and amplify our impact, always prioritizing quality over speed.
Who we seek
We are looking for a Fullstack Developer Intern to join our Engineering team. You'll build and improve internal products. This is a hands-on internship focused on learning by shipping. Your ultimate goal will be to build highly responsive and innovative AI-based software solutions that meet our business needs.
We're looking for individuals who genuinely care, ship fast, and are driven to make a significant impact.
🌏 Job Location: Bengaluru (Work From Office)
What You Will Be Doing
- Build user-facing features using Next.js and TypeScript.
- Convert designs into responsive UI using Tailwind CSS and reusable components.
- Work with APIs to integrate frontend with backend services.
- Implement common product workflows: authentication, forms, dashboards, tables, and navigation.
- Fix bugs, write clean code, and improve performance.
- Collaborate in a PR-based workflow on GitHub.
- Write and maintain documentation for the features you ship.
- Learn and apply best practices: component structure, state management, error handling, accessibility basics.
What We’re Looking For
- Basic to intermediate experience with JavaScript and NextJS.
- Familiarity with TypeScript basics.
- Comfortable with HTML/CSS and responsive design, Tailwind CSS is a plus.
- Understanding of how APIs work and how to consume them from the frontend.
- Strong Git knowledge.
- Strong learning mindset, ownership, and attention to detail.
Benefits
- Work directly with founders and the leadership team.
- Drive projects that create real business impact, not busywork.
- Gain practical skills that traditional education misses.
- Experience rapid growth as you tackle meaningful challenges.
- Fuel your career journey with continuous learning and advancement paths.
- Thrive in a workplace where collaboration powers innovation daily.
Description
We are seeking a skilled and detail-oriented Software Developer to automate our internal workflows and develop internal tools used by our development team.
We follow these practices: unit testing, continuous integration (CI), continuous deployment (CD), and DevOps.
We have codebases in Go, Java, Python, Vue.js, and Bash, and we support the development team that develops C code.
You should enjoy challenges, exploring new fields, and finding solutions to problems.
You will be responsible for coordinating, automating, and validating internal workflows and for ensuring operational stability and system reliability.
Requirements
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 2+ years in professional software development
- Solid understanding of software development patterns like SOLID, GoF or similar.
- Experience automating deployments for different kinds of applications.
- Strong understanding of Git version control, merge/rebase strategies, tagging.
- Familiarity with containerization (Docker) and deployment orchestration (e.g., docker compose).
- Solid scripting experience (bash, or similar).
- Understanding of observability, monitoring, and probing tooling (e.g., Prometheus, Grafana, blackbox exporter).
Preferred Skills
- Experience in SRE
- Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
- Familiarity with build tools like Make, CMake, or similar.
- Exposure to artifact management systems (e.g., aptly, Artifactory, Nexus).
- Experience deploying to Linux production systems with service uptime guarantees.
Responsibilities
- Develop new services needed by the SRE, Field, or Development teams, adopting unit testing, agile, and clean-code practices.
- Drive the CI/CD pipeline and maintain the workflows, using tools such as GitLab and Jenkins.
- Deploy the services and implement and refine the automation for different environments.
- Operate the services that the SRE team has developed.
- Automate release pipelines: Build and maintain CI/CD workflows using tools such as Jenkins and GitLab.
- Version control: Manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
- Collaboration: Work closely with developers, QA, and product teams to align on release timelines and feature readiness.
Success Metrics
- Achieve >99% service uptime with minimal rollbacks.
- Deliver on time and hold to timelines.
Benefits
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Why Join RtBrick
Enjoy the excitement of a start-up without the risk!
We're revolutionizing the Internet's backbone by using cutting-edge software development techniques. The internet and, more specifically, broadband networks are among the world's most critical technologies, relied on by billions of people every day. RtBrick is revolutionizing the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure and distributed edge network functions. This shift mirrors transformations seen in computing and cloud technologies, marking the most profound change in networking since the inception of IP technology.
We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a devops philosophy, and warehouse scale tools to drive innovation.
And although RtBrick is a young innovative company, RtBrick stands on solid financial ground: we are already cash-flow positive, backed by major telco investors like Swisscom Ventures and T-Capital, and our solutions are actively deployed by Tier-1 telcos including Deutsche Telekom (Europe's largest carrier), Regional ISPs and City ISPs—with expanding operations across Europe, North America and Asia.
Joining RtBrick offers you the unique thrill of a startup environment, coupled with the security that comes from working in a business with substantial market presence and significant revenue streams.
We'd love for you to join us, so embrace the opportunity to be part of a team that's not just participating in the market but actively shaping the future of telecommunications worldwide.
🚀 Job Title : Gen AI Engineer
Experience : 3 to 5 Years
Location : Bengaluru (MG Road – Prestige Building)
Work Mode : Hybrid (3 Days WFO)
Open Positions : 2
Notice Period : Immediate to 15–20 Days Preferred
🎯 Role Overview :
We are looking for a Gen AI Engineer with hands-on experience in building and deploying LLM powered applications.
You will work on cutting-edge AI solutions, including real-world enterprise use cases and next-generation internal products.
🛠 Mandatory Skills :
- Strong proficiency in Python.
- Hands-on experience with LLMs & GenAI frameworks (LangChain, LlamaIndex, Semantic Kernel).
- Experience in prompt engineering and system design.
- Knowledge of vector databases & embeddings.
- Experience integrating GenAI solutions into production systems.
- Understanding of REST APIs, async processing, and streaming responses.
⚡ AI / ML Knowledge :
- Strong understanding of NLP & transformer-based models.
- Familiarity with fine-tuning, embeddings, and inference patterns.
- Knowledge of GenAI evaluation metrics (accuracy, relevance, grounding).
☁ Infrastructure & Tooling :
- Experience with Cloud platforms (AWS / Azure / GCP).
- Familiarity with Docker & Kubernetes (good to have).
- Exposure to CI/CD pipelines and MLOps practices.
🌟 Nice to Have :
- Experience with multimodal models (vision, OCR, speech).
- Knowledge of AIOps, RCA, observability, enterprise workflows.
- Experience building AI agents & orchestration layers.
- Understanding of AI governance, safety, and responsible AI.
💼 Key Responsibilities :
- Design, develop, and deploy GenAI applications using LLMs (OpenAI, Azure OpenAI, open-source models).
- Build RAG pipelines using vector databases (FAISS, Pinecone, Weaviate, Chroma).
- Develop high-quality prompts, system instructions, and structured outputs (JSON, function calling).
- Integrate GenAI capabilities into backend systems via APIs & microservices.
- Optimize models for performance, latency, cost, and accuracy.
- Implement evaluation frameworks (hallucination detection, confidence scoring).
- Ensure data security, privacy, and compliance.
- Collaborate with product, design, and domain teams.
- Document architecture, prompts, and best practices.
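As a concrete, minimal example of the vector-database work listed above, here is a FAISS similarity-search sketch; random vectors stand in for real embeddings produced by an LLM embedding model.

```python
# Minimal FAISS similarity-search sketch; random vectors stand in for real embeddings.
import numpy as np
import faiss

dim = 384                                   # typical sentence-embedding size
rng = np.random.default_rng(0)

doc_vectors = rng.standard_normal((1000, dim)).astype("float32")   # "document" embeddings
index = faiss.IndexFlatL2(dim)              # exact L2 search; use IVF/HNSW variants at scale
index.add(doc_vectors)

query = rng.standard_normal((1, dim)).astype("float32")            # "query" embedding
distances, ids = index.search(query, 5)
print(ids[0])                               # indices of the 5 nearest documents
```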
💼🤝 Interview Process :
1. Geektrust Assessment (AI Agent-based evaluation)
2. Final Interview with Founder (45 mins)
Job Responsibilities :
- Work closely with product managers and other cross-functional teams to help define, scope, and deliver world-class products and high-quality features addressing key user needs.
- Translate requirements into system architecture and implement code while considering performance issues of dealing with billions of rows of data and serving millions of API requests every hour.
- Take full ownership of the software development lifecycle from requirement to release.
- Write and maintain clear technical documentation, enabling other engineers to step in and deliver efficiently.
- Embrace design and code reviews to deliver quality code.
- Play a key role in taking Trendlyne to the next level as a world-class engineering team
- Develop and iterate on best practices for the development team, ensuring adherence through code reviews.
- As part of the core team, you will be working on cutting-edge technologies like AI products, online backtesting, data visualization, and machine learning.
- Develop and maintain scalable, robust backend systems using Python and Django framework.
- Maintain a proficient understanding of the performance of web and mobile applications.
- Mentor junior developers and foster skill development within the team.
Job Requirements :
- 1+ years of experience with Python and Django.
- Strong understanding of relational databases like PostgreSQL or MySQL and Redis.
- (Optional) : Experience with web front-end technologies such as JavaScript, HTML, and CSS
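To illustrate the kind of Redis-backed serving work implied by the requirements above, here is a small read-through cache sketch; the key names, TTL, and the compute_scores_from_db() helper are hypothetical stand-ins, and a local Redis at the default port is assumed.

```python
# Sketch of a simple read-through cache for high-volume API responses.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def compute_scores_from_db(symbol: str) -> dict:
    # Stub standing in for the real PostgreSQL/analytics query.
    return {"symbol": symbol, "momentum": 72, "value": 55}

def get_stock_scores(symbol: str) -> dict:
    cache_key = f"scores:{symbol}"
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)            # cache hit: skip the expensive query

    scores = compute_scores_from_db(symbol)
    r.setex(cache_key, 300, json.dumps(scores))  # cache for 5 minutes
    return scores
```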
Who are we :
Trendlyne is a Series-A products startup in the financial markets space with cutting-edge analytics products aimed at businesses in stock markets and mutual funds.
Our founders are IIT + IIM graduates, with strong tech, analytics, and marketing experience. We have top finance and management experts on the Board of Directors.
What do we do :
We build powerful analytics products in the stock market space that are best in class. Organic growth in B2B and B2C products has already made the company profitable. We serve 900 million+ API requests every month to B2B customers. Trendlyne analytics deals with hundreds of millions of rows of data to generate insights, scores, and visualizations that are an industry benchmark.

Job Responsibilities :
- Work closely with product managers and other cross-functional teams to help define, scope, and deliver world-class products and high-quality features addressing key user needs.
- Translate requirements into system architecture and implement code while considering performance issues of dealing with billions of rows of data and serving millions of API requests every hour.
- Take full ownership of the software development lifecycle from requirement to release.
- Write and maintain clear technical documentation, enabling other engineers to step in and deliver efficiently.
- Embrace design and code reviews to deliver quality code.
- Play a key role in taking Trendlyne to the next level as a world-class engineering team
- Develop and iterate on best practices for the development team, ensuring adherence through code reviews.
- As part of the core team, you will be working on cutting-edge technologies like AI products, online backtesting, data visualization, and machine learning.
- Develop and maintain scalable, robust backend systems using Python and Django framework.
- Maintain a proficient understanding of the performance of web and mobile applications.
- Mentor junior developers and foster skill development within the team.
Job Requirements :
- 4+ years of experience with Python and Django.
- Strong understanding of relational databases like PostgreSQL or MySQL and Redis.
- (Optional) : Experience with web front-end technologies such as JavaScript, HTML, and CSS
Who are we :
Trendlyne is a Series-A products startup in the financial markets space with cutting-edge analytics products aimed at businesses in stock markets and mutual funds.
Our founders are IIT + IIM graduates, with strong tech, analytics, and marketing experience. We have top finance and management experts on the Board of Directors.
What do we do :
We build powerful analytics products in the stock market space that are best in class. Organic growth in B2B and B2C products has already made the company profitable. We serve 900 million+ API requests every month to B2B customers. Trendlyne analytics deals with hundreds of millions of rows of data to generate insights, scores, and visualizations that are an industry benchmark.
Role- Data Analyst
Experience- 2 to 5 years
Location-Bangalore
Job Role-
● Experience: Minimum of 2+ years of professional experience in a data-heavy environment (E-commerce or Fintech experience is a plus).
● SQL Mastery: Exceptional ability to write complex joins, window functions, analytical functions, and CTEs. Experience with high-scale databases (e.g., BigQuery, Hive, or Postgres).
● Scripting: Functional knowledge of Python for data manipulation (Pandas, NumPy) and basic automation scripts.
● Systems Thinking: Ability to understand upstream data flows and how they impact downstream reporting.
● Problem-Solving: A "detective" mindset—you enjoy digging into a Rs 600Cr discrepancy until you find the root cause.
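To give a concrete flavour of the SQL and Python skills listed above, here is a small, self-contained sketch combining a CTE with a window function, run against an in-memory SQLite database purely so the example executes anywhere; the table and figures are invented.

```python
# Self-contained illustration of a CTE plus a window function, driven from Python.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'acme', 120.0), (2, 'acme', 90.0), (3, 'globex', 300.0), (4, 'globex', 50.0);
""")

query = """
WITH ranked AS (
    SELECT
        customer,
        amount,
        ROW_NUMBER() OVER (PARTITION BY customer ORDER BY amount DESC) AS rn,
        SUM(amount)  OVER (PARTITION BY customer)                      AS customer_total
    FROM orders
)
SELECT customer, amount, customer_total
FROM ranked
WHERE rn = 1;   -- largest order per customer
"""

for row in conn.execute(query):
    print(row)   # e.g. ('acme', 120.0, 210.0) and ('globex', 300.0, 350.0)
```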
Role & Responsibilities:
We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows
- Work with Dataform or DBT to implement transformation logic and data models
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
- Apply data governance and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
Ideal Candidate:
- Strong Data Engineer Profile
- Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
- Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
- Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
- Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
- Must have strong SQL skills with experience in writing complex queries and optimizing performance.
- Must have programming experience in Python and/or SQL for data processing.
- Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
- Exposure to data migration projects and/or data mesh architecture concepts.
- Experience with Spark / PySpark or large-scale data processing frameworks.
- Experience working in product-based companies or data-driven environments.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
NOTE:
- An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates will be expected to be available on these interview dates. Only immediate joiners will be considered.
About Us:
REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned into US and European markets.
We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI.
Responsibilities:
● Engineering - Take complete ownership of engineering stacks. Define and maintain software systems architecture for high availability 24x7 systems.
● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.
● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime.
Requirements:
● Bachelor's or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent
● Proficient in Python programming along with frameworks like Django, FastAPI, or Flask, and Java frameworks like Spring, Hibernate, and Spring Boot
● Ability to debug and resolve technical issues that arise during development or after deployment at various stages.
● Experience in databases including MySQL and NoSQL
● Experience in designing, developing and maintaining high availability systems.
● Experience in MVC pattern, Tomcat, Git, and Jira.
● Experience working with AWS cloud platform.
● 4-5 years of experience building highly available systems
● 2-3 years experience leading a team of engineers and analysts
● Strong analytical and data driven approach to problem solving
Senior Data Engineer (Azure Databricks)
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Databricks and PySpark
- Work extensively with PySpark notebooks within Databricks for data processing and transformation
- Build and optimize batch data processing workflows
- Develop and manage data integrations using Azure Functions and Logic Apps
- Write efficient and optimized SQL queries for data extraction and transformation
Required Skills:
- Strong hands-on experience with Azure Databricks, PySpark, and SQL
- Experience working with batch processing frameworks
- Proficiency in building and managing data pipelines in Azure ecosystem
Good to Have:
- Experience with Python
Mandatory Requirement:
- Candidate must have hands-on experience working with PySpark notebooks in Databricks
ROLE SUMMARY
The Senior Python Developer designs, builds, and improves Python and Django applications. The role includes developing end‑to‑end integrations using REST and SOAP services and delivering reliable, scalable solutions through hands‑on coding and data transformation work. The developer works closely with Business Analysts, architects, and other teams to ensure technical solutions support business needs. Key responsibilities also include improving SQL performance, taking part in code reviews, supporting DevOps workflows with Git and Azure DevOps, and helping integrate GenAI features—such as GPT models, embeddings, and agent‑based tools—into enterprise applications.
ROLE RESPONSIBILITIES
- Design and develop Python and Django applications that are scalable, secure, and maintainable.
- Implement UI components using CSS, Bootstrap, jQuery, or similar technologies as needed.
- Develop integrations with internal and external systems using REST, SOAP, and WSDL‑based services.
- Create and optimize SQL queries, database structures, and data access logic to support application features.
- Work with Business Analysts and stakeholders to translate functional requirements into technical specifications and solutions.
- Implement accurate data mappings and transformations in accordance with business and technical requirements.
- Contribute to code reviews, follow established coding standards, and ensure high‑quality deliverables.
- Support the implementation and maintenance of DevOps pipelines using Git and Azure DevOps.
- Contribute to the integration of GenAI capabilities—including GPT models, embeddings, and agent‑based components—into enterprise applications.
- Troubleshoot issues across the application stack and collaborate closely with peers to resolve technical challenges.
TECHNICAL QUALIFICATIONS
- 7+ years of hands‑on experience with Python and Django, including complex application development.
- 5+ years of experience with SQL development, optimization, and database design.
- At least 1-2 years of applied experience with GenAI technologies (GPT models, embeddings, agents, etc.).
- Deep expertise in application architecture, system integration, and service‑oriented design.
- Strong experience with DevOps tools and practices, including Git, Azure DevOps, CI/CD pipelines, and automated deployments.
- Advanced understanding of REST, SOAP, WSDL, and large‑scale service integrations.
GENERAL QUALIFICATIONS
- Exceptional verbal and written communication skills.
- Strong analytical, problem‑solving, and architectural reasoning abilities.
- Demonstrated leadership experience with the ability to guide and mentor technical teams.
- Proven ability to work effectively in fast‑paced, collaborative environments.
EDUCATION REQUIREMENTS
- Bachelor’s degree in Computer Science, MIS, or a related field.
- Advanced certifications in Python, cloud technologies, or GenAI are preferred but not required.
About Shopflo
At Shopflo, we're trying to change the way consumers experience brands and businesses. Our first product was a cart and checkout platform for e-commerce that allowed marketers to personalise discounts, rewards, and payments. We are currently also working on a new product that takes it a notch higher by unlocking enterprise-grade personalization for all consumer tech businesses.
Team & Company
Shopflo was founded by three co-founders:
- Ankit Bansal (ex-IIT Kharagpur, Oracle, Gupshup)
- Ishan Rakshit (ex-IIT Bombay, Parthenon, Elevation Capital)
- Priy Ranjan (ex-IIT Madras, McKinsey, Elevation Capital)
We’re a fast-growing team of ~50 people, based in HSR Layout, Bengaluru. We raised a $3.8M seed round from Tiger Global, TQ Ventures.
What you will do
- Design and develop microservices that can work in a large-scale multi-tenant environment.
- Explore design implications and work towards an appropriate balance between functionality, performance, and maintainability.
- Work with a cross-discipline team spanning Design, Product, Data Science, and Analytics.
- Deploy and maintain the application in a secured AWS environment.
- Take ownership from the ideation phase to deployment and maintenance.
- Participate actively in the hiring process to bring world-class programmers into the team.
You should apply if you have:
- 2-4 years of experience in server-side development
- Strong programming skills in Java, Python, Node or Golang
- Hands-on experience in API development and frameworks such as Spring, Node, or Django.
- Good Understanding of SQL and NoSQL databases.
- Experience in test-driven development (writing unit tests and API tests).
- Understanding of basic cloud computing concepts and experience using any of the major cloud service providers (AWS/GCP/Azure).
- Ability to build and deploy the application in a containerized environment.
- Understanding of application logging and monitoring systems like Prometheus or Kibana.
- B.E./B.Tech/M.E./M.Tech/M.S. from a reputed university with a good academic record.
- Curiosity to explore cutting-edge technologies and bake them into the products.
- Zeal and drive to take end-to-end ownership.
Job Title: Senior Linux Kernel Engineer
Experience: 5–10 Years
Location: Bangalore / Chennai
Domain: Enterprise Linux / Kernel Development
Job Summary
We are seeking a highly skilled Senior Linux Kernel Engineer with deep expertise in kernel development, debugging, and performance optimization. The role involves working on enterprise-grade Linux distributions, kernel lifecycle management, security patching, and low-level hardware integration.
Key Responsibilities
1. Kernel Lifecycle & Maintenance
- Lead kernel upgrade strategies (e.g., LTS migrations such as 5.15 → 6.x) while ensuring stability and compatibility.
- Perform patch porting across kernel versions, resolving API and dependency conflicts.
- Track and mitigate security vulnerabilities by monitoring CVEs and upstream sources (e.g., LKML).
- Backport critical fixes to production kernels without impacting system stability.
2. Debugging & System Stability
- Act as an escalation point for kernel panics and system crashes.
- Perform post-mortem analysis using kdump, crash, and gdb.
- Debug early boot issues (UEFI, initramfs, kernel initialization).
- Conduct performance analysis using eBPF, ftrace, and perf to optimize system behavior.
3. Driver Development & Hardware Integration
- Design, develop, and maintain device drivers (network, storage, GPU, or character devices).
- Work closely with hardware through DMA, interrupts (MSI-X), and register-level programming.
- Maintain out-of-tree drivers using DKMS or similar frameworks.
- Ensure compatibility of drivers across kernel updates.
Required Technical Skills
- Programming: Strong expertise in C (mandatory) and C++
- Kernel Internals: Deep understanding of:
- Virtual File System (VFS)
- Memory Management (MMU, Paging)
- Process Scheduler
- Linux Networking Stack
- Debugging Tools:
- kdump, crash, gdb
- kprobes, trace-cmd, ftrace
- perf, valgrind
- Hardware debugging tools (JTAG, Serial Console)
- Build Systems:
- Kbuild, Makefiles
- Kernel packaging (RPM/Debian)
- Security:
- Experience with CVE patching and backporting
- Knowledge of SELinux/AppArmor
- Kernel hardening (FIPS, KSPP)
Preferred Skills
- Experience contributing to open-source kernel projects
- Familiarity with Linux Kernel Mailing List (LKML) workflows
- Exposure to enterprise Linux distributions (RHEL, Ubuntu, SUSE)
- Experience with performance tuning and system optimization at scale
Key Screening Areas
1. Core Programming (C Language)
- Must have strong hands-on experience in C programming
- Comfortable with pointers, memory management, and low-level concepts
2. Kernel Internals Expertise
- Should have worked in at least one subsystem:
  - VFS / File Systems
  - Memory Management
  - Scheduler / Networking
3. Debugging & Crash Analysis
- Experience handling kernel panics
- Hands-on with vmcore analysis tools
4. Security & Patching
- Understanding of CVE fixes and backporting
5. Driver Development
- Experience in writing or maintaining device drivers
6. Performance & Advanced Debugging
- Exposure to eBPF, ftrace, perf
7. Hardware-Level Understanding
- Knowledge of DMA, interrupts, hardware interaction
Soft Skills
- Strong analytical and problem-solving abilities
- Excellent communication skills
- Ability to work independently and in collaborative environments
- Quick learner with adaptability to new technologies
Job Title: Cloud Development & Linux Debugging Engineer
Experience: 5–10 Years
Location: Bangalore / Chennai
Job Summary
We are looking for an experienced Cloud Development & Linux Debugging Engineer with strong expertise in Linux internals, system-level programming, and cloud technologies. The ideal candidate will have hands-on experience in developing, debugging, and optimizing Linux-based systems along with exposure to DevOps tools and containerized environments.
Key Responsibilities
- Develop and debug software at the Linux system level (kernel/user space).
- Work on Linux internals, low-level system components, and performance optimization.
- Design, develop, and maintain applications using Python and C/C++.
- Troubleshoot complex issues in Linux and cloud-based environments.
- Collaborate with cross-functional teams in an Agile/Scrum environment.
- Contribute to automation and infrastructure using DevOps tools.
- Work with containerized and cloud platforms such as Kubernetes and OpenStack (see the sketch after this list).
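As a small, hedged illustration of the Kubernetes side of this work (the posting does not prescribe a specific client), the official `kubernetes` Python package can list pods across namespaces, assuming it is installed and a valid kubeconfig points at a reachable cluster:

```python
# Minimal sketch: list pods across namespaces with the official
# `kubernetes` Python client. Assumes `pip install kubernetes` and a
# valid kubeconfig (e.g. ~/.kube/config) for a reachable cluster.
from kubernetes import client, config

def list_pods() -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    list_pods()
```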
Required Skills
- Strong experience in Linux software development (Linux internals, system-level programming).
- Proficiency in Python and C/C++.
- Solid debugging and analytical skills.
- Hands-on experience with Ansible, Puppet, and DevOps practices.
- Experience working with OpenStack and Kubernetes.
- Good understanding of Agile/Scrum methodologies.
- Excellent communication and teamwork skills.
Preferred Skills (Good to Have)
- Experience with Go / Golang and Go templating.
- Knowledge of Kubernetes Operators and Helm.
- Exposure to containerization technologies (Docker, Kubernetes).
- Contributions to open-source projects.
- Experience with cloud-native architectures.
Qualifications
- Bachelor’s/Master’s degree in Computer Science, Engineering, or related field.
- Self-driven individual with a strong learning mindset.
- Ability to work independently and in collaborative team environments.
About Evatt AI
Evatt AI is a scale-up on a mission to make advanced legal reasoning and document understanding accessible through natural language. Over the past two years we’ve combined vector search and large language models to give lawyers instant access to case law and legislation. We’re entering a new expansion phase: building an all-in-one legal workplace platform that unifies a searchable casebase (like AustLII/Jade.io), agentic workflows, practice management tools, and Microsoft Word integrations, delivering AI assistance analogous to Harvey with the casebase power of Lexis AI+. To achieve this, we’re growing our development team in Bangalore and seeking a Head of Engineering who can own the technical vision and build out our team in India.
Requirements:
- Bachelor's degree in engineering with a specialization in computer science or a related field.
- 5+ years of experience as a software engineer in a product development setting.
- Love of technology and experience with one or more programming languages, such as Python or Go.
- Experience in full-stack development, including designing APIs and integration patterns, implementing security, and implementing frameworks for unit and end-to-end testing.
- Experience with microservices architecture.
- Experience in one or more frameworks like FastAPI, Spring, gRPC, Flask, etc. (a minimal FastAPI sketch follows this list).
- Extensive experience in a test-driven development environment.
- Understanding of CI/CD practices, including code check-in policies, automated unit tests, automated code deployments, etc.
- Ability to grasp new technologies and use them effectively to create industrial-strength software.
- Good communication skills. You can communicate well in the English language with product managers, your team members, and external stakeholders to understand their needs and convey yours in a clear, precise manner, verbally or in writing.
- Strong collaboration skills. You have demonstrated the ability to work with both senior and junior technical professionals and get work done. You quickly earn the trust of the people you work with. People enjoy and have fun working with you.
- Deadline-oriented. You understand that deadlines are meant to be met. Challenges will surface, and obstacles and roadblocks will cause delays, but you plan for them in advance and still ship your features on time to meet your commitments.
- Bias for action. Your default setting is to take action and not wait for things to happen. You love to learn about new technologies and advancements in the software industry.
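To ground the framework requirement above, here is a minimal FastAPI service of the kind a candidate should be comfortable writing; the `Item` model and routes are purely illustrative, not part of the actual product.

```python
# Minimal FastAPI sketch: two typed endpoints with validation.
# The Item model and routes are illustrative examples only.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

ITEMS: dict[int, Item] = {}

@app.post("/items/{item_id}")
def put_item(item_id: int, item: Item) -> Item:
    if item_id in ITEMS:
        raise HTTPException(status_code=409, detail="item already exists")
    ITEMS[item_id] = item
    return item

@app.get("/items/{item_id}")
def get_item(item_id: int) -> Item:
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="item not found")
    return ITEMS[item_id]

# Run with: uvicorn main:app --reload
```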
Benefits at 314e Corporation:
- Medical Benefits
- Office Game space
- Referral Program
- Holiday parties
Role Overview
We are looking for an Automated QA Test Engineer (3–4 years of experience) to design and implement automated testing frameworks that ensure the quality and reliability of Hosted.ai’s core platform. The ideal candidate will have hands-on experience with Pytest, Python scripting, and test automation systems, along with the ability to architect test harnesses, plan test coverage, and triage bugs effectively.
Key Responsibilities
- Design and develop automated test frameworks and test harness logic (a minimal Pytest harness sketch follows this list).
- Implement end-to-end, integration, and regression tests using Pytest and Python.
- Define and execute test coverage plans for critical components of the platform.
- Conduct bug analysis, triage, and root cause identification in collaboration with engineering teams.
- Ensure tests are reliable, repeatable, and integrated into CI/CD pipelines.
- Continuously improve test automation practices and tooling.
- Document test strategies, results, and defect reports clearly.
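A taste of the harness work described above: a small conftest-style fixture that provides a fake platform client, and a regression test built on it. The names (`PlatformClient`, `base_url`) are invented for illustration; a real harness would target Hosted.ai’s actual APIs.

```python
# conftest.py — sketch of test-harness logic built on Pytest fixtures.
# `PlatformClient` is a hypothetical stand-in for a real API client.
import pytest

class PlatformClient:
    """Tiny in-memory stand-in for the platform under test."""
    def __init__(self, base_url: str):
        self.base_url = base_url
        self._vms: dict[str, str] = {}

    def create_vm(self, name: str) -> str:
        self._vms[name] = "running"
        return name

    def status(self, name: str) -> str:
        return self._vms.get(name, "absent")

@pytest.fixture
def platform():
    """Fresh client per test: keeps tests repeatable and isolated."""
    client = PlatformClient(base_url="http://localhost:8080")
    yield client
    client._vms.clear()  # teardown

# test_vm_lifecycle.py
def test_vm_is_running_after_create(platform):
    platform.create_vm("qa-vm-1")
    assert platform.status("qa-vm-1") == "running"

def test_unknown_vm_reports_absent(platform):
    assert platform.status("nope") == "absent"
```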
Requirements
- 3–4 years of experience in software QA, with a focus on test automation.
- Strong background in manual testing.
- Strong hands-on experience with Pytest for UI and end-to-end testing.
- Proficiency in Python coding for test development and scripting.
- Experience architecting test harnesses and automation frameworks.
- Familiarity with CI/CD pipelines and version control systems (Git).
- Solid understanding of QA methodologies, test planning, and coverage strategies.
- Strong debugging, analytical, and problem-solving skills.
Nice to Have
- Experience testing distributed systems, APIs, or cloud-native platforms.
- Exposure to performance/load testing tools.
- Familiarity with Kubernetes, containers, or GPU-based workloads.
🤖 Data Scientist – Frontier AI for Data Platforms & Distributed Systems (4–8 Years)
Experience: 4–8 Years
Location: Bengaluru (On-site / Hybrid)
Company: Publicly Listed, Global Product Platform
🧠 About the Mission
We are building a Top 1% AI-Native Engineering & Data Organization — from first principles.
This is not incremental improvement.
This is a full-stack transformation of a large-scale enterprise into an AI-native data platform company.
We are re-architecting:
- Legacy systems → AI-native architectures
- Static pipelines → autonomous, self-healing systems
- Data platforms → intelligent, learning systems
- Software workflows → agentic execution layers
This is the kind of shift you would expect from companies like Google or Microsoft, except here you will build it from day zero and scale it globally.
🧠 The Opportunity: This role sits at the intersection of three high-impact domains:
1. Frontier AI Systems: Large Language Models (LLMs), Small Language Models (SLMs), and Agentic AI
2. Data Platforms: Warehouses, Lakehouses, Streaming Systems, Query Engines
3. Distributed Systems: High-throughput, low-latency, multi-region infrastructure
We are building systems where:
- Data platforms optimize themselves using ML/LLMs
- Pipelines are autonomous, self-healing, and adaptive
- Queries are generated, optimized, and executed intelligently
- Infrastructure learns from usage and evolves continuously
In short: AI as the control plane for data infrastructure.
🧩 What You’ll Work On
You will design and build AI-native systems deeply embedded inside data infrastructure.
1. AI-Native Data Platforms
- Build LLM-powered interfaces: natural language → SQL / pipelines / transformations (a hedged sketch follows this list)
- Design semantic data layers: embeddings, vector search, knowledge graphs
- Develop AI copilots for data engineers, analysts, and platform users
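To make the first bullet concrete, a deliberately simplified sketch of a natural-language-to-SQL layer. `llm_complete` is a placeholder for whatever LLM client the platform standardizes on, and the schema prompt is illustrative; a production version would add query validation, allow-listing, and execution sandboxing.

```python
# Sketch: natural language -> SQL via an LLM, with a basic guardrail.
# `llm_complete(prompt) -> str` is a placeholder for a real LLM client call.
import re

SCHEMA = """
tables:
  orders(id INT, customer_id INT, total NUMERIC, created_at TIMESTAMP)
  customers(id INT, name TEXT, region TEXT)
"""

def nl_to_sql(question: str, llm_complete) -> str:
    prompt = (
        "You translate questions into a single read-only SQL query.\n"
        f"Schema:\n{SCHEMA}\n"
        f"Question: {question}\nSQL:"
    )
    sql = llm_complete(prompt).strip().rstrip(";")
    # Guardrail: only allow SELECT statements through to the engine.
    if not re.match(r"(?is)^\s*select\b", sql):
        raise ValueError(f"refusing non-SELECT statement: {sql!r}")
    return sql

# Usage (with any callable LLM backend):
#   sql = nl_to_sql("Total revenue by region last month?", my_llm_client)
#   rows = warehouse.execute(sql)   # hypothetical execution layer
```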
2. Autonomous Data Pipelines
- Build self-healing ETL/ELT systems using AI agents
- Create pipelines that detect anomalies in real time, automatically debug failures, and dynamically optimize transformations (a self-healing-step sketch follows this list)
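A toy illustration of the self-healing idea: a pipeline step that retries with exponential backoff and hands persistent failures to a diagnosis hook (here just a stub) before escalating. A real system would route that hook to an AI agent or runbook.

```python
# Sketch: a self-healing pipeline step. `diagnose` is a stub standing in
# for an AI agent or runbook that proposes a fix for the observed failure.
import time

def diagnose(step_name: str, error: Exception) -> bool:
    """Placeholder: return True if a remediation was applied."""
    print(f"[heal] {step_name} failed with {error!r}; attempting remediation")
    return False  # no automatic fix in this sketch

def run_with_healing(step, name: str, retries: int = 3, base_delay: float = 1.0):
    for attempt in range(1, retries + 1):
        try:
            return step()
        except Exception as exc:
            healed = diagnose(name, exc)
            if attempt == retries and not healed:
                raise  # escalate to on-call / orchestrator
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

# Usage (step and partition name are hypothetical):
#   run_with_healing(lambda: load_partition("2024-01-01"), name="load_orders")
```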
3. Intelligent Query & Compute Optimization
- Apply ML/LLMs to query planning and execution, cost-based optimization using learned models, and workload prediction and scheduling
- Build systems that learn from query patterns and continuously improve performance and cost efficiency (a learned cost-model sketch follows this list)
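As a minimal example of cost-based optimization with a learned model: fit a regressor that predicts query latency from coarse plan features, then use it to choose between candidate plans. The features and training data below are synthetic placeholders.

```python
# Sketch: a learned cost model. Train on (plan features) -> latency, then
# pick the cheaper of two candidate plans. All data below is synthetic.
from sklearn.ensemble import GradientBoostingRegressor

# Features: [rows_scanned, num_joins, uses_index (0 or 1)]
X = [
    [1_000_000, 2, 0],
    [1_000_000, 2, 1],
    [50_000, 1, 1],
    [5_000_000, 3, 0],
    [5_000_000, 3, 1],
    [50_000, 1, 0],
]
y = [12.0, 4.5, 0.3, 95.0, 30.0, 0.9]  # observed latency in seconds

model = GradientBoostingRegressor().fit(X, y)

def pick_plan(candidates: dict[str, list[float]]) -> str:
    """Return the candidate plan with the lowest predicted latency."""
    return min(candidates, key=lambda name: model.predict([candidates[name]])[0])

print(pick_plan({
    "seq_scan_plan":   [2_000_000, 2, 0],
    "index_join_plan": [2_000_000, 2, 1],
}))
```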
4. Distributed Data + AI Infrastructure
- Architect systems operating at billions of events per day and petabyte-scale data
- Work with:
  - Distributed compute engines (Spark / Flink / Ray class systems)
  - Streaming systems (Kafka-class infra)
  - Vector databases and hybrid retrieval systems
5. Learning Systems & Feedback Loops
- Build closed-loop AI systems: execution → feedback → model updates
- Develop:
  - Continual learning pipelines
  - Online learning systems for infra optimization
  - Experimentation frameworks (A/B tests, bandits, eval pipelines; a minimal bandit sketch follows this list)
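For the experimentation bullet, a minimal epsilon-greedy bandit of the sort such frameworks build on. The arms here are hypothetical pipeline configurations; real rewards would come from execution metrics such as negative latency.

```python
# Sketch: epsilon-greedy bandit choosing between pipeline configurations.
# Arms and rewards are hypothetical; real rewards = execution metrics.
import random

class EpsilonGreedy:
    def __init__(self, arms: list[str], epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}  # running mean reward per arm

    def select(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, arm: str, reward: float) -> None:
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n  # incremental mean

bandit = EpsilonGreedy(["plan_a", "plan_b", "plan_c"])
for _ in range(100):
    arm = bandit.select()
    # Simulated reward; plan_b is genuinely best in this toy setup.
    reward = {"plan_a": 0.5, "plan_b": 0.8, "plan_c": 0.3}[arm] + random.gauss(0, 0.05)
    bandit.update(arm, reward)

print(max(bandit.values, key=bandit.values.get))  # most likely "plan_b"
```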
6. LLM & Agentic Systems (Infra-Aware)
- Build agents that understand data systems
- Enable:
  - Autonomous pipeline debugging
  - Root cause analysis for infra failures
  - Intelligent orchestration of data workflows
🧠 What We’re Looking For
Core Foundations
- Strong grounding in:
  - Machine Learning, Deep Learning, NLP
  - Statistics, optimization, probabilistic systems
  - Distributed systems fundamentals
- Deep understanding of:
  - Transformer architectures
  - Modern LLM ecosystems
Hands-On Expertise
- Experience building:
  - LLM / GenAI systems (RAG, fine-tuning, embeddings)
  - Data platforms (warehouse, lake, lakehouse architectures)
  - Distributed pipelines and compute systems
- Strong programming skills:
  - Python (ML/AI stack)
  - SQL (deep understanding: query planning, optimization mindset)
Systems Thinking (Critical)
You think in systems, not components.
- Built or worked on:
  - Large-scale data pipelines
  - High-throughput distributed systems
  - Low-latency, high-concurrency architectures
- Understand:
  - Query optimization and execution
  - Data partitioning, indexing, caching
  - Trade-offs in distributed systems
🔥 What Sets You Apart (Top 1%)
- Built AI-powered data platforms or infra systems in production
- Designed or contributed to:
  - Query engines / optimizers
  - Data observability / lineage systems
  - AI-driven infra or AIOps platforms
- Experience with:
  - Multi-modal AI (logs, metrics, traces, text)
  - Agentic AI systems
  - Autonomous infrastructure
- Worked on systems at scale comparable to:
  - Google (BigQuery-like systems)
  - Meta (real-time analytics infra)
  - Snowflake / Databricks (lakehouse architectures)
🧬 Ideal Background (Not Mandatory)
We often see strong candidates from:
- Data infrastructure or platform engineering teams
- AI-first startups or research-driven environments
- High-scale product companies
Experience building:
- Internal platforms used by 1000s of engineers
- Systems serving millions of users / high-throughput workloads
- Multi-region, distributed cloud systems
🧠 The Kind of Problems You’ll Solve
- Can LLMs replace traditional query optimizers?
- How do we build self-healing data pipelines at scale?
- Can data systems learn from every query and improve automatically?
- How do we embed reasoning and planning into infrastructure layers?
- What does a fully autonomous data platform look like?
Backgrounds We Commonly See (But Not Limited To)
Our team often includes engineers from top-tier institutions with strong research or product backgrounds, including:
- Leading engineering schools in India and globally
- Engineers with experience at top product companies, AI startups, or research-driven environments
That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.