50+ Remote Python Jobs in India
About SkillSecureX
SkillSecureX is a technology-focused platform dedicated to providing practical learning opportunities and industry-oriented exposure in Data Science, Artificial Intelligence, Machine Learning, Web Development, and emerging technologies. Our goal is to help students and freshers gain real-world experience through project-based internships and hands-on learning.
About the Internship
We are looking for enthusiastic and motivated candidates for the role of Data Science with AI Intern. This internship is designed for students, freshers, and aspiring data professionals who want to build practical skills in Data Science, Artificial Intelligence, data analysis, and machine learning technologies.
Interns will work on real-world datasets, AI-based projects, and practical assignments while learning industry-relevant tools and workflows under mentorship.
Roles & Responsibilities
• Work on data collection, cleaning, and preprocessing
• Analyze datasets and generate meaningful insights
• Assist in developing AI and Machine Learning models
• Create reports, dashboards, and data visualizations
• Work with Python libraries and data science tools
• Participate in project discussions and team collaboration
• Support research and development activities related to AI solutions
Required Skills
• Basic understanding of Python programming
• Interest in Data Science and Artificial Intelligence
• Familiarity with Excel, statistics, or data analysis concepts is a plus
• Knowledge of Machine Learning basics is beneficial
• Strong analytical and problem-solving skills
• Willingness to learn and work on practical projects
Eligibility
• Students pursuing graduation or post-graduation
• Freshers interested in Data Science and AI
• Candidates looking to gain practical industry experience
Perks & Benefits
• Internship Completion Certificate
• Hands-on experience on practical projects
• Flexible remote working environment
• Mentorship and industry-oriented learning
• Opportunity to strengthen resume and technical skills
Mactores is a trusted leader in providing modern data platform solutions to businesses. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing end-to-end data solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward through digital transformation via assessments, migration, or modernization.
We are seeking a highly skilled and innovative Generative AI Engineer to join our team. In this role, you will develop and deploy cutting-edge generative AI models to solve real-world problems. You will work on building models that generate content, understand complex data, and collaborate closely with cross-functional teams to implement AI-powered solutions.
What will you do?
- Design and implement generative AI solutions using large language models (LLMs).
- Apply prompt engineering techniques and build scalable Retrieval-Augmented Generation (RAG) systems.
- Fine-tune and optimize models for performance, cost, and reliability.
- Leverage AWS services such as Bedrock, SageMaker, and Lambda for deployment and inference.
- Develop APIs and backend components for production-grade AI applications.
- Implement observability, performance monitoring, and security best practices.
- Drive responsible AI adoption through evaluation, bias detection, and compliance.
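The RAG systems mentioned above follow a common pattern: retrieve the most relevant documents for a query, then ground the LLM prompt in that context. A minimal sketch, using a toy bag-of-words similarity as a stand-in for a real embedding model (all names and documents here are hypothetical):

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top-k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # Ground the prompt in retrieved context (the "augmented" in RAG).
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Bedrock provides managed access to foundation models on AWS.",
    "SageMaker endpoints serve models for real-time inference.",
    "Lambda runs event-driven functions without managing servers.",
]
print(build_prompt("How do I serve a model for inference?", docs))
```

In production the `embed` function would call an embedding model and `retrieve` would query a vector database, but the retrieve-then-ground shape stays the same.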
What are we looking for?
- 3+ years of experience in Python with strong software engineering fundamentals.
- Hands-on experience with LLMs and prompt engineering strategies.
- Experience designing RAG pipelines and working with vector databases.
- Proficiency in model fine-tuning (e.g., LoRA) and embedding-based systems.
- Experience with cloud platforms and deploying AI models in production.
- Strong debugging, optimization, and problem-solving skills.
- Clear and effective technical communication.
- Production-first mindset with attention to cost, reliability, and performance.
Preferred Qualifications
- Practical experience with frameworks like LangChain or LlamaIndex.
- Exposure to multi-modal AI systems.
- Familiarity with ML/MLOps and large-scale deployment practices.
- Experience supporting systems at high request volumes.
Inflection.io is a venture-backed B2B marketing automation company, enabling companies to communicate with their customers and prospects from one platform. We're used by leading SaaS companies like Sauce Labs, Sigma Computing, BILL, Mural, and Elastic, many of which pay more than $100K/yr (about ₹1 crore).
And… it's working! We have world-class stats: our largest deal is over ₹3 crore, we have a 5-star rating on G2 and over 100% NRR, and we constantly break sales and customer records. We've raised $14M in total since 2021, with $7.6M of fresh funding in 2024, giving us many years of runway.
However, we're still in startup mode with approximately 30 employees, and we're looking for the next SDE3 to help propel Inflection forward. Do you want to join a fast-growing startup that is aiming to build a very large company?
Key Responsibilities:
- Lead the design, development, and deployment of complex software systems and applications.
- Collaborate with engineers and product managers to define and implement innovative solutions
- Provide technical leadership and mentorship to junior engineers, promoting best practices and fostering a culture of continuous improvement.
- Write clean, maintainable and efficient code, ensuring high performance and scalability of the software.
- Conduct code reviews and provide constructive feedback to ensure code quality and adherence to coding standards.
- Troubleshoot and resolve complex technical issues, optimizing system performance and reliability.
- Stay updated with the latest industry trends and technologies, evaluating their potential for adoption in our projects.
- Participate in the full software development lifecycle, from requirements gathering to deployment and monitoring.
Qualifications:
- 5+ years of professional software development experience, with a strong focus on backend development.
- Proficiency in one or more programming languages such as Java, Python, Golang or C#
- Strong understanding of database systems, both relational (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Cassandra).
- Hands-on experience with message brokers such as Kafka, RabbitMQ, or Amazon SQS.
- Experience with cloud platforms (AWS or Azure or Google Cloud) and containerization technologies (Docker, Kubernetes).
- Proven track record of designing and implementing scalable, high-performance systems.
- Excellent problem-solving skills and the ability to think critically and creatively.
- Strong communication and collaboration skills, with the ability to work effectively in a fast-paced, team-oriented environment.
About ZYSYGY
ZYSYGY is building Germany's zero-fee payment network. Merchants pay zero transaction fees. We route payments directly over SEPA Instant, bypassing Visa, Mastercard, and the entire card network chain. No hardware. No card reader. Just a phone.
We are building the first company to combine a fully software-based merchant POS with a consumer super-app for everyday financial life: payments, transport, utilities, government services, all in one place. Think what UPI did for India. We are doing it for Germany, from the ground up, on banking rails.
Role Overview
You will join the core engineering team as a founding engineer. You will own the backend payment engine end to end: transaction flows, wallet management, SEPA Instant settlement, KYB/KYC integration, recurring mandates, refund logic, and the compliance export layer (DATEV, Kassenbuch, GoBD, fiskaltrust). On mobile, you will contribute to the React Native consumer and merchant apps alongside our mobile engineer.
You will own what you build from development through deployment, monitoring, and production reliability. The architecture decisions you make in the first six months will run in production for years.
What you will do
Design and build the backend payment engine: transaction flows, wallet management, SEPA Instant settlement, offline payment authorization, and recurring mandate billing.
Own KYB/KYC integration and the compliance export layer: DATEV, Kassenbuch, GoBD, and fiskaltrust for KassenSichV.
Build REST APIs for financial-grade reliability and high-throughput transaction processing, including full R-transaction handling and reconciliation pipelines.
Contribute to the React Native consumer and merchant mobile apps alongside our mobile engineer.
Own the full lifecycle of what you build, from development through deployment, monitoring, and continuous improvement.
Integrate AI tooling into your development workflow as a matter of course: code generation, automated testing, and intelligent review.
Who you are
Meaningful hands-on experience building payment systems, fintech infrastructure, or financial APIs.
You have designed a ledger. You understand double-entry bookkeeping, immutable transaction records, and why you never update a financial row.
You have solved the partial-failure problem: money left one wallet and did not arrive in the other. You know how to detect it, resolve it, and make sure it does not happen again.
You understand idempotency at a design level. You have built systems that survive client retries, duplicate webhooks, and network drops without double-charging anyone.
You have built or reasoned about reconciliation: detecting discrepancies between your internal ledger and an external settlement file, and resolving them reliably at scale.
You understand KYC and KYB state machines, what happens when verification status changes mid-transaction, and what a regulator expects from your audit trail.
You are comfortable with SQL databases and own the full lifecycle of your code from development through deployment.
You can communicate technical decisions clearly to non-technical stakeholders. At this stage of the company, that matters.
You are a continuous learner. Payments infrastructure is a deep domain and you treat it that way.
You are comfortable with ambiguity and early-stage risk. There is no playbook yet. You will help write it.
Strong academic background preferred. Demonstrated engineering ability matters more than institution.
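The ledger and idempotency points above fit together in one design: append-only double-entry rows keyed by an idempotency key, so a retried request cannot double-charge. A minimal sketch using SQLite as a stand-in for a real ledger database (schema and names are illustrative, not ZYSYGY's actual design):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entries (
    idempotency_key TEXT NOT NULL,
    account TEXT NOT NULL,
    amount_cents INTEGER NOT NULL,          -- signed: debit negative, credit positive
    PRIMARY KEY (idempotency_key, account)  -- replays collide here
);
""")

def transfer(key, src, dst, amount_cents):
    # Both legs commit atomically; a retried request with the same
    # idempotency key hits the primary key and is ignored, so no client
    # retry, duplicate webhook, or network drop can double-charge.
    try:
        with conn:
            conn.execute("INSERT INTO entries VALUES (?, ?, ?)", (key, src, -amount_cents))
            conn.execute("INSERT INTO entries VALUES (?, ?, ?)", (key, dst, amount_cents))
    except sqlite3.IntegrityError:
        pass  # already applied: idempotent no-op

def balance(account):
    row = conn.execute(
        "SELECT COALESCE(SUM(amount_cents), 0) FROM entries WHERE account = ?", (account,)
    ).fetchone()
    return row[0]

transfer("req-001", "wallet:alice", "wallet:bob", 500)
transfer("req-001", "wallet:alice", "wallet:bob", 500)  # retried request: no effect
```

Note that rows are only ever inserted, never updated: balances are derived by summing entries, which is what makes the audit trail immutable and reconciliation against an external settlement file tractable.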
Relocation
Relocation to Munich, Germany, is on the table for the right candidate, including visa support.
Connect with the Founder
You can also connect with me on LinkedIn at www.linkedin.com/in/shabbir-maimoon
Sr. DE / Data Engineer (Healthcare Data & SQL Expert)
Experience Level: 5–7 Years
Focus: Database Design, Advanced SQL, ETL/ELT Pipelines, and Healthcare Interoperability.
Summary
We are looking for a highly skilled Senior Data Engineer to join our healthcare data team. This role is perfect for a technical powerhouse who excels at building robust data pipelines and deeply understands database internals. You will be responsible for designing schemas, writing complex stored procedures, and optimizing SQL performance to handle clinical and claims data at scale. You will bridge the gap between raw data ingestion and high-performance analytics, ensuring all solutions meet HIPAA and FHIR standards.
What You’ll Do
1. Advanced SQL & Database Development
- Schema Design: Design and implement relational schemas (MSSQL, PostgreSQL, Oracle) ensuring data integrity through constraints, triggers, and normalized structures.
- Programmability: Write and maintain sophisticated Stored Procedures, Functions, and Views to handle complex business logic within the database layer.
- Performance Tuning: Own query optimization. You should be the expert in reading EXPLAIN/ANALYZE plans, implementing advanced indexing strategies (Clustered, Non-Clustered, Columnstore), and managing partitioning.
- Data Modeling: Build and manage dimensional models (Star/Snowflake) and implement Slowly Changing Dimensions (SCD Types 1, 2, and 4).
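Of the SCD variants listed above, Type 2 is the one that preserves full history: the current row is expired and a new versioned row is appended. A minimal in-memory sketch of that upsert logic (field names are illustrative; a warehouse implementation would do this in SQL or dbt):

```python
from datetime import date

def scd2_upsert(history, key, attrs, as_of):
    # SCD Type 2: close the current row for this key when its attributes
    # change, and append a new current row, preserving full history.
    current = next(
        (r for r in history if r["key"] == key and r["end_date"] is None), None
    )
    if current is not None:
        if all(current.get(k) == v for k, v in attrs.items()):
            return  # no change: keep the existing current row
        current["end_date"] = as_of  # expire the old version
    history.append({"key": key, **attrs, "start_date": as_of, "end_date": None})

patients = []
scd2_upsert(patients, "P001", {"plan": "HMO"}, date(2024, 1, 1))
scd2_upsert(patients, "P001", {"plan": "PPO"}, date(2024, 6, 1))  # plan change
```

After the second call the table holds two rows for P001: the expired HMO row (with an `end_date`) and the current PPO row, so point-in-time queries remain possible.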
2. Data Engineering & Ingestion
- Pipeline Development: Build and operate scalable ETL/ELT pipelines using Python and SQL to ingest data from EHRs, REST APIs, and flat files.
- Orchestration: Use Apache Airflow to schedule jobs, manage dependencies, and implement robust retry/alerting logic.
- API Integration: Develop Python-based ingestion frameworks that handle OAuth, pagination, and throttling for third-party healthcare data partners.
3. Healthcare Interoperability & Compliance
- Standards: Map complex clinical data to HL7 FHIR resources and curated analytic layers.
- Security: Implement "Privacy by Design" by enforcing HIPAA safeguards, including encryption at rest, access controls, and PII/PHI de-identification.
4. Operational Excellence
- CI/CD: Use GitHub and automated pipelines to deploy database changes and data code.
- Observability: Implement data quality tests (using tools like dbt or custom Python/SQL checks) to monitor freshness and accuracy.
What You’ll Bring
- Experience: 5–7 years of professional data engineering experience, with a heavy emphasis on backend database development.
- The SQL Expert Toolkit:
- Expert SQL: Window functions, CTEs, recursive queries, and set-based transformations.
- DB Internals: Deep knowledge of MSSQL, PostgreSQL, or Oracle. You should understand how the engine stores and retrieves data.
- Optimization: Proven track record of turning "slow" queries into high-performance assets via indexing and refactoring.
- The Engineering Toolkit:
- Python: Intermediate to advanced (Pandas/Polars, Requests, SQLAlchemy, or PySpark).
- Orchestration: Practical experience with Airflow (or Prefect/Dagster).
- Legacy/Cloud mix: Proficiency in SSIS/SSMA or PowerShell is a plus for migrating legacy workloads to modern platforms.
- The Domain Knowledge: Familiarity with FHIR/HL7 and an understanding of the importance of data governance in a regulated environment.
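The "Expert SQL" bullets above (window functions, CTEs, set-based transformations) can be illustrated with a classic pattern: latest record per entity, done in one set-based query rather than a loop. A small sketch runnable via Python's built-in sqlite3 (assuming an SQLite build with window-function support, standard since 3.25; table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claims (patient_id TEXT, claim_date TEXT, amount REAL);
INSERT INTO claims VALUES
  ('P1', '2024-01-05', 120.0),
  ('P1', '2024-02-10', 80.0),
  ('P2', '2024-01-20', 300.0);
""")

# CTE + window function: latest claim per patient, set-based (no loops).
rows = conn.execute("""
WITH ranked AS (
  SELECT patient_id, claim_date, amount,
         ROW_NUMBER() OVER (
           PARTITION BY patient_id ORDER BY claim_date DESC
         ) AS rn
  FROM claims
)
SELECT patient_id, claim_date, amount FROM ranked WHERE rn = 1
ORDER BY patient_id
""").fetchall()

print(rows)
```

The same shape transfers directly to MSSQL, PostgreSQL, and Oracle; on large tables an index on `(patient_id, claim_date)` is what turns this from a scan into an efficient seek.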
Technical "Must-Haves" for the Interview
- Ability to whiteboard a complex Database Schema from scratch.
- Ability to debug a long-running SQL query and explain the IO/CPU trade-offs of different index types.
- Experience handling JSON/BSON data types within a relational database context.
Nice to Have
- Experience with NoSQL systems like MongoDB or Elasticsearch.
- Cloud experience (Azure, AWS, or GCP) specifically regarding managed SQL services.
- Knowledge of dbt (data build tool) for managing transformations in the warehouse.
About the Role:
We are building AI-native infrastructure for a high-performance equity trading firm from the ground up. This is a greenfield opportunity with significant ownership. You will architect and ship the full stack: frontend dashboards, backend systems, and AI pipelines that directly power trading operations.
This is not a maintenance role. You will make real architectural decisions from day one and see your work have immediate impact.
What You Will Build
• AI pipelines for trade signal generation, risk analysis, and portfolio insights
• Backend systems for real-time data ingestion, processing, and storage at scale
• Internal trading dashboards and analyst-facing tools with fast, responsive UIs
• LLM-powered workflows integrated with proprietary equity data and research corpora
• Evaluation, monitoring, and observability for AI features running in production
Requirements
• 2 to 3+ years of end-to-end software development experience with shipped AI features in production
• Full stack ownership: comfortable across React or similar frontends, Python or Node backends, and cloud infrastructure
• Hands-on experience with LLM APIs, RAG pipelines, or ML model serving in real products
• Strong engineering fundamentals: system design, data modeling, API design, and testing
• Ability to make architectural decisions independently and build from zero without hand-holding
• Clear written and verbal communication; able to work closely with non-technical stakeholders
Nice to Have
• Familiarity with equity markets, financial data feeds, or trading systems
• Experience with time-series databases or market data infrastructure
• Prompt engineering, LLM evaluation, and output reliability practices
• Vector databases and embedding pipeline design
• Low-latency backend system design
• Prior startup or small-team experience where you owned multiple layers
Why Join
• Greenfield build: no legacy code, no backlog of tech debt
• Direct impact on live trading operations from day one
• Competitive compensation package
• Relocation support available for the right candidate
• Dubai-based, one of the world’s leading financial hubs with a fast-growing tech ecosystem
How to Apply
Apply using: https://loopx.redstring.co.in/neuralis-private-limited/job/6a01d01a03b64ed111206470
Key Responsibilities
Platform Build & Architecture
• Refactor an existing Python-based prototype into a modular, production-grade platform
• Define clear service boundaries (API layer, orchestration, agent runtime, data access)
• Build reusable components that allow extension without exposing core engine logic
Agent Framework & Orchestration
• Design and implement frameworks for AI agents and reporting workflows
• Build orchestration for multi-step execution (deterministic + AI-driven)
• Ensure outputs are traceable, auditable, and suitable for financial reporting
Developer Enablement
• Enable internal/client developers to:
o build and deploy reporting agents
o reuse approved components
o access platform capabilities via APIs (without direct code access)
• Implement access controls and abstraction layers
Full Stack Development
• Lead development across:
o Backend: Python, FastAPI
o Frontend: Next.js
o Real-time: WebSockets
• Build simple internal interfaces for:
o job execution
o monitoring
o output review
Code Governance & DevOps
• Own development workflows in Azure DevOps: branching strategy, PRs, code reviews, merges
• Set up CI/CD pipelines, environments (dev/test/prod), and release processes
• Ensure code quality, testing, and maintainability standards
Team Leadership (Near-term)
• Act as the technical anchor offshore
• Mentor and guide future hires as the team scales
• Establish best practices across code, documentation, and delivery
Required Skills
• Strong experience in Python backend development (FastAPI or similar)
• Experience with React / Next.js
• Familiarity with WebSockets or real-time systems
• Experience building APIs and scalable backend systems
• Hands-on experience with Azure DevOps (repos, pipelines, PR workflows)
• Understanding of modular architecture, access control, and system design
• Ability to operate in an early-stage, fast-evolving environment
We are looking for a talented and driven Data Scientist to join our growing Analytics team in India. In this role, you will work at the intersection of advanced machine learning, scalable MLOps infrastructure, and domain-specific healthcare analytics. You will collaborate closely with cross-functional teams to build, deploy, and maintain production-grade ML models that drive real-world impact in clinical trials and healthcare operations.
KEY RESPONSIBILITIES
End-to-End ML Development
• Design, build, and optimize predictive models across the full ML lifecycle—from data ingestion to model serving.
• Conduct rigorous Exploratory Data Analysis (EDA) to surface insights and drive feature engineering decisions.
• Validate model performance using appropriate statistical techniques and domain knowledge.
MLOps & Production Deployment
• Deploy, monitor, and maintain production-grade ML models using Databricks MLFlow endpoints and Unity Catalog.
• Implement CI/CD pipelines for model versioning, experiment tracking, and automated retraining.
• Ensure model reliability, observability, and performance in live production environments.
Language Models & LLM Applications
• Apply transformer-based models (BERT, ClinicalBERT, Trial2Vec) for NLP tasks including classification, NER, and information extraction.
• Build and maintain vector similarity search pipelines for semantic retrieval and recommendation use cases.
• Fine-tune pre-trained models for domain-specific applications in clinical and healthcare contexts.
• Support exploratory work around LLM integration and prompt engineering for internal tooling.
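The vector similarity search pipelines described above reduce to one core operation: ranking stored embeddings by cosine similarity to a query embedding. A minimal sketch with hand-made vectors standing in for real encoder output (document IDs and values are hypothetical):

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k(query_vec, index, k=3):
    # index: list of (doc_id, embedding) pairs, e.g. from a BERT-family
    # encoder; a vector DB such as FAISS replaces this linear scan at scale.
    scored = [(doc_id, cosine(query_vec, emb)) for doc_id, emb in index]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

index = [
    ("trial-A", [0.9, 0.1, 0.0]),
    ("trial-B", [0.1, 0.9, 0.2]),
    ("trial-C", [0.8, 0.2, 0.1]),
]
print(top_k([1.0, 0.1, 0.0], index, k=2))
```

Real pipelines swap the linear scan for an approximate nearest-neighbor index (FAISS, Pinecone, Weaviate), but the scoring and ranking semantics are the same.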
Domain-Driven Analytics
• Apply advanced analytics within complex healthcare and clinical trial datasets—including patient records, trial protocols, and adverse event data.
• Translate ambiguous business problems into structured analytical frameworks with measurable outcomes.
• Partner with domain experts, product managers, and engineering teams to deliver data-driven solutions.
REQUIRED QUALIFICATIONS
Education
• Bachelor’s or Master’s degree in Computer Science, Statistics, Mathematics, Bioinformatics, or a closely related field.
Experience
• 2–4 years of hands-on experience in a data science or machine learning role.
• Demonstrable experience deploying ML models in production environments (not just prototyping).
Technical Skills
• Strong proficiency in Python (pandas, NumPy, scikit-learn, PyTorch / TensorFlow).
• Experience with Databricks, MLFlow (experiment tracking, model registry, endpoints), and Unity Catalog.
• Hands-on experience with BERT-family models and Hugging Face Transformers library.
• Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embedding-based retrieval.
• Solid understanding of SQL and working with large structured/unstructured datasets.
• Exposure to cloud platforms (AWS / GCP / Azure) and distributed computing frameworks (Spark).
GOOD TO HAVE
• Prior experience with clinical trial data standards (CDISC, CDASH, SDTM) or healthcare ontologies (SNOMED, ICD-10).
• Familiarity with Trial2Vec or similar trial-to-vector embedding approaches.
• Experience with LLM fine-tuning, RAG pipelines, or prompt engineering in a production setting.
• Knowledge of regulatory and compliance considerations in healthcare AI (e.g., FDA guidelines, HIPAA).
• Contributions to open-source ML projects or published research.
THIS ROLE IS NOT FOR YOU IF…
• You have strong SQL/BI skills but limited hands-on ML modelling experience — or you’ve built models only in notebooks without ever deploying them to production.
• Your LLM exposure is limited to API calls and prompt engineering — with no experience fine-tuning models, working with embeddings, or building vector search pipelines.
7th Unit is building a wheeled upper-body humanoid robot for industrial deployment in Central Asia and Eastern Europe. This is a founding role: you own the perception architecture from day one.
What you'll do
- Build and own the full perception stack for v1 — object detection, depth estimation, workspace understanding
- Integrate perception with manipulation and motion planning systems
- Deploy and optimize perception pipelines on edge hardware (Jetson Orin or equivalent)
- Work directly on the physical robot — debugging real hardware, not simulations
- Develop the data pipeline for continuous improvement of perception performance
What we need
- 3+ years building perception systems for real robotic platforms — not research only
- Strong foundation in computer vision, 3D data processing, and sensor fusion
- Proficiency in Python and C++, hands-on experience with ROS2
- Experience deploying perception on edge compute hardware
- You have debugged a perception system on a physical robot under real operating conditions
Structure
- Position starts Q3 2026
- Equity negotiated directly, founding engineer package
- Remote during pre-build phase. In-person required from prototype assembly onward
7th Unit is building a wheeled upper-body humanoid robot for industrial deployment in Central Asia and Eastern Europe. This is a founding role: you own the hardware architecture from day one.
What you'll do
- Define arm and actuator architecture for v1 — degrees of freedom, actuator selection, mechanical design
- Design and build the gripper system optimized for our first deployment object set
- Develop the hot-swap battery system enabling continuous multi-shift operation
- Own structural design with field serviceability and manufacturing scale in mind
- Integrate with wheeled base platform sourced from established robotics component suppliers
- Deliver a working manipulation prototype within the first 90 days
What we need
- 3+ years designing and building physical robotic systems that operated outside a lab
- Hands-on experience with actuator selection, motor control integration, and real-world mechanical failure modes
- You have personally assembled and debugged a working robotic system end-to-end
- Comfortable making architecture decisions under uncertainty and iterating fast
- Experience with off-the-shelf component integration is a strong plus
Structure
- Position starts Q3 2026
- Equity negotiated directly, founding engineer package
- Remote during pre-build phase. In-person required from prototype assembly onward
About the Internship
The Nexora Group is looking for motivated and enthusiastic interns for the role of Data Science with AI Intern. This internship provides practical exposure to data analysis, machine learning, AI tools, and real-world datasets through guided mentorship and project-based learning.
Interns will gain hands-on experience working on live tasks related to data processing, visualization, predictive analytics, and AI-driven solutions.
Responsibilities
- Assist in collecting, cleaning, and analyzing datasets.
- Work on data visualization and reporting tasks.
- Support machine learning and AI model development activities.
- Participate in research and implementation of AI-based solutions.
- Perform data preprocessing and feature engineering tasks.
- Collaborate with mentors and team members on project assignments.
- Test, evaluate, and improve model performance.
- Maintain project documentation and reports.
Required Skills
- Basic understanding of Python or any programming language.
- Interest in Data Science, Analytics, and AI technologies.
- Knowledge of statistics, data handling, or logical reasoning is a plus.
- Good analytical and problem-solving skills.
- Ability to learn and work collaboratively in a team environment.
Preferred Skills (Optional)
- Familiarity with Python libraries such as Pandas, NumPy, or Matplotlib.
- Basic understanding of Machine Learning concepts.
- Knowledge of SQL, Excel, or data visualization tools.
- Interest in Artificial Intelligence and automation technologies.
Who Can Apply
- Students pursuing B.Tech, BCA, MCA, B.Sc IT, M.Tech, or related fields.
- Freshers seeking practical exposure in Data Science and AI.
- Candidates passionate about analytics, AI, and emerging technologies.
Perks & Benefits
- Hands-on experience with real-world datasets and projects.
- Mentorship from experienced professionals.
- Internship Completion Certificate.
- Letter of Recommendation based on performance.
- Flexible learning and working environment.
- Opportunity to build practical technical skills.
- Top performers may receive future opportunities based on project requirements.
Job Title: Data Architect (Azure)
Location: Remote
Full-time
Role Description
We are looking for a seasoned data leader to design, build, and own enterprise-scale data platforms on Azure. This role goes beyond development — it requires end-to-end accountability for architecture, data pipelines, transformation frameworks, and production readiness.
You will act as the critical link between business stakeholders, data engineering teams, and analytics functions, ensuring scalable and high-performance data solutions are delivered and maintained.
Key Responsibilities:
- Design and implement robust data pipelines using Azure Data Factory (ADF), including integration with REST APIs and external data sources
- Build scalable data transformation workflows using Databricks (PySpark), handling complex and nested JSON datasets
- Architect and implement Delta Lake-based data platforms, including fact and dimension models (star schema)
- Define and enforce best practices for data modeling, performance optimization, and cost efficiency
- Own end-to-end data platform lifecycle — from architecture and deployment to monitoring and operational support
- Establish production readiness frameworks, including logging, alerting, and data quality checks
- Collaborate closely with business and analytics teams to translate requirements into scalable technical solutions
- Mentor engineering teams and drive architectural governance across projects
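Handling complex nested JSON, as the responsibilities above require, usually means flattening nested structures into columns before modeling facts and dimensions. A plain-Python sketch of that flattening step, standing in for what a PySpark `select`/`explode` would do in Databricks (the record is made up, and arrays are left out for brevity):

```python
def flatten(obj, prefix=""):
    # Recursively flatten nested JSON objects into dot-separated columns,
    # mirroring the column names a PySpark select on a struct would produce.
    out = {}
    for key, value in obj.items():
        col = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, col + "."))
        else:
            out[col] = value
    return out

record = {
    "order_id": 42,
    "customer": {"id": "C9", "address": {"city": "Pune"}},
    "total": 199.5,
}
print(flatten(record))
```

A production pipeline would additionally explode arrays into rows (one fact row per element) before loading the Delta Lake fact and dimension tables.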
Required Experience & Skills:
• Experience building pipelines with Azure Data Factory
• Experience connecting to REST API sources using Azure Data Factory
• Experience building transformations with Databricks using PySpark
• Experience handling complex nested JSON files using PySpark
• Experience designing dimensional models/star schema
• Experience implementing facts and dimension tables in Databricks Delta Lake
• Around 15-20 years of solid experience in building, managing, and optimizing enterprise data platforms with at least 5 years in Azure cloud data services
• Act as a bridge between business, data engineering, and analytics teams to ensure requirements are clearly understood and implemented correctly
• Own end-to-end production readiness of the data platform, including architectural design, deployment patterns, monitoring strategy and operational support.

Nasdaq Listed AI-driven Content/Knowledge Management Company
Role: Senior Technical Consultant
Schedule: Fully remote, but requires working in the EDT time zone (second shift, until 12:30 AM IST)
Benefits: Cell-Phone Reimbursement, Food Allowance, Internet Allowance, Commute Allowance
Company:
Upland Software (Nasdaq listed) is a leader in AI-powered knowledge and content management software. Our solutions help enterprises unlock critical knowledge, automate content workflows, and drive measurable ROI—enhancing customer and employee experiences while supporting regulatory compliance. More than 1,100 enterprise customers rely on Upland to solve complex challenges and provide a trusted path for AI adoption.
Job Description
Opportunity Summary:
We are seeking an experienced Senior Technical Consultant to join our India-based Center of Excellence team. This role demands both technical depth and customer presence—someone who thrives under pressure, communicates clearly, and solves complex integration and troubleshooting challenges in real time, often while collaborating directly with customers on live calls.
What would you do?
· Customer Engagement & Solutioning
o Lead and participate in technical meetings with enterprise customers, including troubleshooting sessions and go-live support calls.
o Translate complex technical details into language understandable by both technical and non-technical stakeholders.
o Calmly manage tense situations and help de-escalate when technical issues or deadlines create pressure.
· Technical Implementation & Integration
o Configure, integrate, and optimize BA Insight connectors with systems such as SharePoint, FileNet, iManage, and other enterprise repositories.
o Utilize REST APIs and scripting (C#, PowerShell, or similar) to implement or extend integration logic.
o Troubleshoot complex issues across multi-tier environments including application servers, connectors, indexing services, and authentication systems.
o Support orchestration pipelines that prepare data (chunking, metadata tagging, indexing) for LLM/AI consumption.
· Cross-Functional Collaboration
o Work closely with Project Managers, Solutions Consultants, and Support to ensure project milestones are met.
o Partner with Product and R&D to identify defects, suggest enhancements, and test new connector functionality.
o Support Sales and Customer Success teams in technical scoping discussions and feasibility assessments.
· Documentation & Process Discipline
o Maintain detailed technical documentation and logs of customer work.
o Track time and deliverables accurately using internal project management and time-tracking tools.
o Follow established PS methodology and internal QA/SDLC standards.
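The data-preparation work mentioned above (chunking, metadata tagging, indexing for LLM consumption) can be sketched in a few lines. This is an illustrative stand-alone sketch, not BA Insight's actual pipeline; every name in it is hypothetical:

```python
# Illustrative sketch only (hypothetical names, not BA Insight's pipeline):
# split a document into overlapping chunks and tag each with metadata
# so downstream indexing and LLM retrieval can trace content to its source.

def chunk_document(doc_id, text, source, chunk_size=200, overlap=50):
    """Return overlapping text chunks, each carrying indexing metadata."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append({
            "doc_id": doc_id,
            "source": source,            # e.g. "SharePoint", "iManage"
            "chunk_index": len(chunks),
            "offset": start,
            "text": text[start:end],
        })
        if end == len(text):
            break
        start = end - overlap            # overlap preserves context across boundaries
    return chunks

chunks = chunk_document("doc-1", "x" * 500, "SharePoint")
```

The overlap keeps sentences that straddle a chunk boundary recoverable from at least one chunk, at the cost of some duplicated index volume.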
What are we looking for?
· Minimum 4–6 years of experience in a technical consulting, support engineering, or implementation role within a SaaS or enterprise software company.
· Experience with enterprise content management or search-based applications (e.g., SharePoint, iManage, OpenText, Documentum, Elastic, Solr) strongly preferred.
· Prior exposure to AI or data orchestration workflows a plus (chunking, vectorization, or integration with LLMs).
Primary Skills:
The candidate must possess the following primary skills:
Core Engineering & Development
· Strong proficiency in scripting and automation (Python, PowerShell, etc.)
· Solid experience with APIs, integrations, and data pipelines
· Ability to design and implement scalable, reusable solutions
Enterprise Search (Hands-On)
· Practical experience with:
o Indexing pipelines and ingestion workflows
o Metadata modeling and enrichment strategies
o Relevance tuning and search optimization
· Ability to diagnose issues related to:
o Missing content
o Poor ranking
o Incorrect search results
AI & Copilot Troubleshooting (Critical Skill)
· Hands-on experience or strong capability to troubleshoot:
o Microsoft Copilot / AI search issues
o Data grounding problems (AI not finding the right content)
o Permissions and access-related issues
o Microsoft Graph / Search indexing gaps
· Strong understanding of:
o Retrieval-Augmented Generation (RAG) concepts (high level)
o How AI depends on data quality, structure, and access
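As a toy illustration of that last point, the sketch below filters retrieval candidates by access control before ranking them. The names are hypothetical and this is not the Microsoft Graph API; it only shows why content a user cannot see can never ground an AI answer, one common cause of "AI not finding the right content":

```python
# Toy retriever (hypothetical names; not Microsoft Graph / Copilot code):
# grounding depends on both content quality and the caller's access rights.

def retrieve(query, index, user_groups):
    """Rank chunks the user is allowed to see by simple keyword overlap."""
    terms = set(query.lower().split())
    results = []
    for item in index:
        # Access check first: invisible content can never ground an answer.
        if not set(item["acl"]) & set(user_groups):
            continue
        score = len(terms & set(item["text"].lower().split()))
        if score:
            results.append((score, item))
    return [item for _, item in sorted(results, key=lambda r: -r[0])]

index = [
    {"text": "expense policy for travel reimbursement", "acl": ["finance"]},
    {"text": "travel booking guide for all staff", "acl": ["everyone"]},
]
# A user outside "finance" never sees the policy document, so an AI answer
# about reimbursement amounts would have nothing to ground itself on.
hits = retrieve("travel reimbursement policy", index, ["everyone"])
```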
Enterprise Platform Experience
· Hands-on experience with one or more:
o Microsoft ecosystem (SharePoint, OneDrive, Graph, Copilot)
o Salesforce, ServiceNow, or similar enterprise systems
· Understanding of authentication (SSO, OAuth) and access models
Advanced Troubleshooting & Problem Solving
· Ability to diagnose cross-system issues (AI + search + data source)
· Strong root cause analysis skills across:
o Data
o Integration
o Platform configuration
Delivery & Ownership
· Ability to work independently on deployments and complex issues
· Experience in customer-facing or delivery environments
· Capability to guide/mentor junior engineers
Soft Skills
· Fast learner capable of mastering complex products and customer environments quickly
· Self-starter who can manage multiple projects with minimal supervision
· Team-oriented mindset with strong empathy for customers and colleagues
· Passionate about connecting enterprise data with AI to drive intelligent outcomes
· Exceptional troubleshooting and analytical abilities
· Passionate about delivering an amazing customer experience
· Open to changing their mind, and able to change the minds of others
· Writes clearly and concisely
· Capable of working without a company office, with a fully remote team
Growth Skills
· Possesses a good work ethic; a self-starter with a desire to grow
· Always looking for better ways to get the job done
Work Schedule
· As mentioned above.
· Flexibility to alternate schedules with other CoE Project Managers to ensure consistent U.S. coverage.
Qualification
Bachelor’s degree or technical institute degree/certificate in Computer Science, Information Systems, or another related field, or an equivalent combination of knowledge and experience. This role requires regular overlap with multiple time zones for planning meetings, status updates, and similar sessions; the duration of these overlaps can change depending on the type of meeting. Upland India offers flexibility in managing your working hours to support your work-life balance. You can find out more about this during your interview conversation.
About BA Insight
Upland BA Insight is an enterprise search and AI orchestration platform that connects knowledge across systems like SharePoint, FileNet, iManage, Jira, and Documentum. Our technology enables intelligent search and data orchestration pipelines that prepare content for use with AI models such as ChatGPT, Microsoft Copilot, and Azure OpenAI.
Company Description
Eassy Onboard LLP is a team of Databricks Certified Data Engineers committed to empowering businesses through data-driven solutions. Specializing in automated workflows, scalable architectures, optimized data pipelines, and AI solutions, we help organizations reduce manual effort, optimize costs, and achieve reliable insights. We ensure secure data operations with robust validation processes and strong data integrity. As an Employer of Record (EOR), we assist global companies in hiring top Indian talent, managing payroll, compliance, and regulatory requirements. Our mission is to accelerate enterprise transformation and enable companies to build future-ready, compliant teams.
Role Description
We are seeking a Senior Data Engineer with deep expertise in Spark/PySpark/SQL to join our data team.
This is a hands-on technical role for someone passionate about building scalable data systems, mentoring engineers, and shaping data strategy.
You will architect systems that power high-performance data processing, enable advanced analytics, and accelerate AI initiatives.
What You'll Do
- Design and evolve scalable, distributed data infrastructure across cloud platforms including GCP and AWS.
- Build and maintain real-time and batch data processing pipelines supporting AI/ML workloads, consumer applications, and analytics.
- Develop and manage integrations with third-party e-commerce platforms to expand the data ecosystem.
- Ensure data availability, reliability, and quality through monitoring and automated auditing.
- Partner with engineering, AI, and product teams on data solutions for business-critical needs.
- Mentor and support data engineers, establishing best practices and code quality standards.
Qualifications
- Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
- 5+ years of software development and data engineering experience with ownership of production-grade data infrastructure.
- Deep expertise scaling Spark, PySpark, and SQL in production, including Databricks or DataProc on GCP.
- Strong understanding of distributed computing and modern data modeling for scalable systems.
- Proficient in Python with experience implementing software engineering best practices.
- Hands-on experience with both relational and NoSQL systems including MySQL, MongoDB, and Elasticsearch.
- Strong communicator with experience influencing cross-functional stakeholders.
Nice to Have
- Experience with job orchestration and containerization tools such as Airflow and Docker.
- Experience working with vector stores and knowledge graphs.
- Experience working in early-stage, high-growth environments.
- Familiarity with MLOps pipelines and integrating ML models into data workflows.
- A proactive, problem-solving mindset with a passion for innovative solutions.

It’s a global digital engineering and technology company (MNC).
Job Details:
- Role: Senior Staff Engineer
- Experience: 7.5-9 Years
- Employment Type: Full-time
- Work Mode: Remote
Job Description
REQUIREMENTS:
- Strong hands-on experience in Java and Python
- Expertise in Microsoft Azure AI/ML services
- Experience with LLM application frameworks (LangChain, LangGraph, or similar)
- Strong experience in API development and system integration
- Experience building backend systems and scalable architectures
- Solid understanding of data structures, system design, and distributed systems
- Familiarity with cloud-native deployments, CI/CD, and observability tools
- Experience with LLM tools/providers and AI-assisted development (Good to Have)
- Strong problem-solving and communication skills
RESPONSIBILITIES:
- Design and develop autonomous AI agents capable of multi-step reasoning and decision-making
- Build and orchestrate agent workflows using modern frameworks (LangChain, LangGraph, etc.)
- Integrate AI agents with APIs, databases, and SaaS platforms for end-to-end automation
- Develop prompt engineering strategies, memory architectures, and tool integrations
- Deploy, monitor, and maintain AI agents in production environments
- Optimize agents for performance, scalability, latency, and cost efficiency
- Debug and improve agent behavior using testing, logging, and feedback loops
- Collaborate with cross-functional teams to embed AI solutions into business workflows
- Write clean, scalable, and production-ready backend code
- Stay updated with emerging AI/LLM trends and agent frameworks
Qualifications
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field

Role Overview
We are looking for passionate and driven interns across multiple technology domains including Frontend Development, Backend Development, DevOps, AI/ML, and Data Engineering. This internship offers hands-on experience in real-world projects, collaboration with cross-functional teams, and exposure to modern tools and technologies.
Domains & Responsibilities
Frontend Development
- Build responsive and user-friendly web interfaces
- Translate UI/UX designs into functional applications
- Optimize performance and ensure cross-browser compatibility
Backend Development
- Develop APIs and server-side logic
- Work with databases and data storage solutions
- Ensure application security and performance
DevOps
- Assist in CI/CD pipeline setup and automation
- Manage deployments and cloud infrastructure
- Monitor system performance and reliability
About Simbian
Simbian is at the forefront of cybersecurity innovation, leveraging purpose-built AI Agents to deliver 10x security outcomes for global enterprises and MSSPs. Our platform autonomously investigates and responds to alerts, freeing security teams from repetitive tasks. Simbian combines privacy-first technology, proven integration with 70+ enterprise tools, and rapid deployment for measurable value.
Role Overview
We are seeking a collaborative, innovative DevOps Engineer passionate about enabling secure, scalable operations for cutting-edge cybersecurity products. Join our team during a period of high growth and help architect the future of agentic AI security platforms.
Key Responsibilities
• Kubernetes Management:
o Manage and maintain production-grade Kubernetes clusters across multiple cloud providers (AWS is essential, Azure is valuable, GCP is a plus).
o Deploy, upgrade, troubleshoot, and scale stateful and stateless workloads (NGINX, Postgres, MongoDB, OpenCTI, OpenSearch, Kafka, Hadoop, Fluentd) in Kubernetes.
• Cloud Operations:
o Operate and optimize cloud environments, with strong expertise in AWS (AWS Certified Solutions Architect Professional or equivalent Azure cert preferred).
o Design, deploy, and manage infrastructure on AWS and Azure (GCP optional).
• SQL Database Management:
o Administer SQL databases, ideally Postgres, on Kubernetes clusters or cloud VMs.
o Perform routine maintenance, backups, upgrades, monitoring, and optimization.
• Infrastructure as Code:
o Build, install, upgrade, and maintain Helm charts with expertise.
o Use and understand Ansible for cloud automation (AWS/Azure), and Terraform for infrastructure provisioning.
• Monitoring, Logging, Observability:
o Implement and manage logging and metrics stacks using OpenSearch/Elasticsearch, Prometheus, Grafana, Thanos or similar open source tools.
• Programming & Scripting:
o Develop automation scripts in Bash (proficient with control structures).
o Produce scripts or microservices in Node.js (preferred) or Python/Django (bonus).
• CI/CD:
o Build and maintain CI/CD pipelines preferably using GitHub Actions (Jenkins or equivalent is acceptable).
• Containerization:
o Create, manage, and troubleshoot Docker/Podman containers, images, volumes, and use Docker Compose for local development.
• Customer-Facing On-Prem Deployments (Bonus):
o Install, configure, and support Kubernetes on customer premises.
o Demonstrate ownership, initiative, and strong customer communication skills.
o Solid knowledge of Linux administration, networking, and cloud environments.
What You’ll Bring:
• 4+ years’ experience in DevOps, SRE, or Production Engineering.
• Mastery of Kubernetes, AWS, infrastructure automation, and database management.
• Strong collaborative, curious, and growth-driven mindset.
• Ability to challenge ideas, drive innovation, and embrace rapid change.
• Excellent communication for technical customer interactions.
Why Join Simbian?
• Work with pioneering agentic AI security—impact global security teams.
• Shape infrastructure for privacy-first technology in a high-growth startup.
• Enjoy a dynamic remote-first work culture with opportunities for ownership and advancement.
Lead / Sr. Data Engineer (Architect & Engineering Owner)
The Role
We are seeking a Lead Data Engineer who operates at the intersection of high-scale engineering and enterprise architecture. In this role, you will "own" our healthcare data platform end-to-end. You aren't just building pipelines; you are designing the blueprint for how clinical, claims, and sales data flow through our ecosystem. You will bridge the gap between legacy systems (MSSQL/Oracle) and modern cloud warehouses (Snowflake/Redshift/Databricks), ensuring our data is governed, HIPAA-compliant, and optimized for advanced analytics.
What You’ll Do
1. Architecture & Strategic Leadership
- Design the Blueprint: Own the enterprise data architecture (Staging, Integration, Warehouse, and Semantic layers). Define the evolution from monolithic databases to scalable cloud-hosted analytics.
- Modeling Mastery: Lead the design of complex Dimensional Models (Star/Snowflake) and implement advanced Slowly Changing Dimension (SCD) strategies to track historical clinical events.
- Set the Standard: Establish coding, version control (GitHub), and CI/CD standards. Conduct design reviews and mentor a team of engineers to move from "task-takers" to "system-builders."
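For context on the SCD strategies mentioned above: a Type 2 update closes out the current row and appends a new version whenever a tracked attribute changes, preserving full history. A minimal Python sketch (illustrative only; function and field names are hypothetical, not this team's actual framework):

```python
# Minimal SCD Type 2 sketch (hypothetical names; illustrative only):
# expire the active row when attributes change, then append a new version.
from datetime import date

def apply_scd2(dimension, key, new_attrs, today):
    """Version rows by key: close the open row on change, append the new state."""
    for row in dimension:
        if row["key"] == key and row["end_date"] is None:
            if row["attrs"] == new_attrs:
                return dimension             # nothing changed: keep current version
            row["end_date"] = today          # expire the old version
            break
    dimension.append({"key": key, "attrs": new_attrs,
                      "start_date": today, "end_date": None})
    return dimension

dim = [{"key": "patient-1", "attrs": {"plan": "basic"},
        "start_date": date(2023, 1, 1), "end_date": None}]
apply_scd2(dim, "patient-1", {"plan": "premium"}, date(2024, 6, 1))
```

Every historical state stays queryable by filtering on the start/end dates, which is why SCD2 is a common choice for tracking clinical events over time.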
2. Advanced Data Engineering (Hands-on)
- Modern ELT/ETL: Build and orchestrate production-grade pipelines using Python, Airflow, and dbt. Manage automated ingestion via Fivetran or custom-built frameworks for APIs and EHRs.
- Multi-Engine Expertise: Operate seamlessly across PostgreSQL, MSSQL, and Oracle, while optimizing petabyte-scale cloud warehouses like Snowflake or Redshift.
- Performance Tuning: Own query optimization. You should be the expert at using EXPLAIN/ANALYZE, partitioning, and indexing to reduce compute costs and latency.
- Quality & Reconciliation: Design robust validation frameworks to ensure data integrity—essential for healthcare compliance and clinical trust.
3. Healthcare Interoperability & Governance
- Data Standards: Map diverse datasets (EHR, API, Flat Files) to HL7 FHIR resources and curated analytic layers.
- Privacy by Design: Embed HIPAA Security Rule safeguards (encryption, audit trails, and access controls) directly into the code and infrastructure.
- Interoperability: Handle complex semi-structured data (JSON/XML) from third-party partners and EMR systems.
What You’ll Bring
- Experience: 8–12+ years in Data Engineering/Architecture. You should have a track record of leading technical projects or mentoring teams.
- The "Hybrid" Stack:
- Expert SQL/PL-SQL: Deep experience with performance tuning in relational environments (Oracle/MSSQL).
- Modern Tools: Practical experience with Snowflake/Redshift, dbt, and Airflow.
- Programming: High proficiency in Python (Pandas, PySpark) or Java/Scala for custom ETL routines.
- Architectural Depth: Clear understanding of SDLC, Agile (Scrum), and Data Modeling frameworks.
- Healthcare Domain: Exposure to pharmaceutical or clinical data (Life Sciences, EMR, or Claims) is highly preferred.
- Soft Skills: The ability to translate "clinical business needs" into "technical runbooks" and communicate effectively with stakeholders.
Nice to Have
- AI/ML Integration: Experience supporting Data Science teams with feature extraction and model deployment (SageMaker/Azure ML).
- Advanced Tooling: Familiarity with NoSQL (MongoDB), search engines (Elasticsearch), or niche ETL tools (Talend/Informatica) for migration purposes.
- Cloud Infrastructure: Hands-on experience with AWS Glue, Lambda, or Azure Data Factory.
About FloBiz
Website: https://flobiz.in/
FloBiz is India's first neobusiness platform, revolutionizing the way Small and Medium-sized Enterprises (SMEs) operate in India. Our mission is to digitize 65 million MSMEs in the country, and we are well on our way to achieving this goal. Our flagship product, myBillBook, has already empowered over 10 million businesses across 2000+ towns with its billing, accounting, inventory management, and payment collection solutions. With over $25 billion in annual transactions, we are proud to be a rapidly growing tech startup serving the needs of SMBs in India.
Our Flagship Product: myBillBook
myBillBook is India's leading GST billing & accounting software with mobile, web app & native desktop offerings and runs on Android as well as iOS. myBillBook has been designed to aid SMB owners to conduct their operations from anywhere and anytime and provides a secure platform for business owners to record transactions & track business performance on the go. It is an ideal software for GST registered businesses where invoicing is one of the core business activities. Also, businesses looking to digitise their operations to understand their financial position better can use this software. It helps them create bills (GST & non-GST), record purchases & expenses, manage inventory and track payables/receivables directly from their mobile phones or computers. Also, the app generates 25 critical business reports that help business owners make effective business decisions. myBillBook is currently available in English, Hindi, Gujarati & Tamil.
Currently, the app has been downloaded by over 6.5M SMBs across the country, with over 10x growth in user base in the last 12 months alone. Even at this pace of adoption, myBillBook continues to be the highest-rated application in its category on the Google Play Store.
Key Responsibilities :
• Design, develop, maintain and optimise complex, scalable and distributed systems capable of handling large-scale datasets and high-throughput workloads.
• Optimise performance, reliability and availability across the whole system.
• Write clean, efficient, and maintainable code in multiple programming languages as needed.
• Contribute to architectural decisions and help improve engineering best practices.
• Work with a builder mindset, contribute and collaborate across cross-functional teams, to unblock and accelerate delivery. No role silos.
• Actively mentor juniors through code reviews, design discussions and pairing.
• Leverage LLM-assisted tools (Claude, Cursor, LLM-powered code review and testing) to accelerate development velocity and improve code quality.
• Build and evolve the platform to be LLM-ready - design APIs, data pipelines and system interfaces that enable seamless LLM integration and automation.
Required Qualifications :
• 3-5 years of experience in back-end software development focusing on large-scale distributed systems.
• BE/B.Tech in Computer Science or a related technical field (or equivalent practical experience).
• Strong software development skills in one or more languages such as Java / Ruby on Rails / Python.
• Working experience with SQL and NoSQL databases (e.g. PostgreSQL, MongoDB) with ability to design effective schema and perform various optimisations for large-scale data.
• Deep understanding of system design principles and best practices for building scalable and resilient systems with microservices.
• Excellent problem-solving with experience in incident management, monitoring, alerting, and root cause analysis.
• Experience with event-driven architectures (Kafka, SQS, RabbitMQ, or similar).
• Experience in building intelligent AI agents and systems powered by Large Language Models.
• Hands-on experience with cloud platforms like AWS or Google Cloud Platform.
• Deep understanding of software development best practices, patterns and code reviews.
• Effective communication skills to coordinate with cross-functional teams during large-scale projects.
Perks & Benefits:
• Competitive salary with performance-linked rewards and recognitions.
• Extensive medical insurance that looks out for our employees and their dependants. We'll love you and take care of you; that's our promise.
• FloBiz Academy: helps you learn and enhance your skills.
• A reward system that celebrates hard work, milestones, and performance throughout the year.
• A cool work-from-home setup that makes you feel right at home. An environment so comfortable that you won't miss your home.
Location: Remote (WFH), 5 days working
About Us
We believe the future of software development is AI-native — where engineers operate at a higher level of abstraction and quality remains non-negotiable.
Incubyte is a software craft consultancy where the “how” of building software matters as much as the “what”.
We partner with companies of all sizes, from helping enterprises build, scale, and modernize to helping early-stage founders bring their ideas to life.
Our engineers operate in an AI-native development model, using AI as a collaborator across the SDLC to accelerate development while upholding the discipline of software craftsmanship. Guided by Software Craftsmanship and Extreme Programming practices, we build reliable, maintainable, and scalable systems with speed, without compromising quality. If this way of building software resonates with you, we’d like to talk.
Our Guiding Principles
These principles define how we work at Incubyte. They are non-negotiable.
Relentless Pursuit of Quality with Pragmatism
We build high-quality systems without losing sight of delivery.
Extreme Ownership
We take responsibility end-to-end for decisions, execution, and outcomes.
Proactive Collaboration
We collaborate closely, challenge each other, and solve problems together.
Active Pursuit of Mastery
We continuously improve our craft and raise our bar.
Invite, Give, and Act on Feedback
We seek, give, and act on feedback to get better every day.
Ensuring Client Success
We act as trusted partners and focus on real outcomes, not just output.
Job Description
This is a remote position.
Experience Level
This role is ideal for engineers with 3-5 years of experience and a strong background in building secure, scalable platforms.
We are looking for hands-on DevOps and Backend Engineers with real-world experience in application/feature development, system design, testing practices such as TDD, full-stack development, handling production incidents, distributed systems, and modern infrastructure challenges.
What You’ll Do as a Software Craftsperson
- Design and document real-world DevOps and backend scenarios based on production incidents such as outages, scaling challenges, and secure deployments
- Translate real engineering experiences into benchmark tasks that contribute to training next-generation AI systems
- Contribute to building secure, scalable, Kubernetes-native architectures across modern infrastructure environments
- Work across critical engineering domains including CI/CD pipelines, observability, identity & access management, infrastructure-as-code, and backend services
- Collaborate with internal teams to design and simulate realistic engineering workflows and system behaviors
- Apply practical engineering judgment to model distributed systems challenges and improve system resilience and reliability
Requirements
What You’ll Bring
3-5 years of experience in DevOps and Backend Engineering with a strong foundation in building secure, scalable systems.
Strong hands-on expertise in DevOps and backend technologies (Node.js/Java/Go/Python) including:
- Kubernetes, Terraform, and CI/CD pipelines
- Tools such as k9s, k3s (GitLab CI preferred)
- Backend technologies such as Go, Python, or Java
- Experience with Docker, gRPC, and Kubernetes-native services
Demonstrated experience working with secure, offline or air-gapped deployments (highly preferred)
Familiarity with distributed systems and backend architecture, with exposure to ML or distributed pipelines being a plus.
Hands-on experience across multiple core functional areas, with exposure to at least five of the following:
- Identity & Access Management
- Observability (Prometheus + Grafana)
- CI/CD Pipelines
- Keycloak
- GitLab CI
- Terraform OSS
- Kubernetes ecosystem tools
Strong problem-solving ability with real-world experience in handling production systems, incidents, and infrastructure challenges
Ability to work across multiple layers of the stack, from infrastructure to backend services, while ensuring scalability, reliability, and security
Benefits
Life at Incubyte
We are a remote-first company with structured flexibility. Teams commit to shared rhythms during core hours, ensuring smooth collaboration while maintaining autonomy. Twice a year, we come together in person for a co-working sprint and once a year for a retreat - with all travel expenses covered.
Our environment is built for crafters: experimenting with real-world systems, solving complex infrastructure challenges, and contributing to cutting-edge AI initiatives. We are all lifelong learners, and our work is our passion.
Perks
- Dedicated learning & development budget
- Sponsorship for conference talks
- Comprehensive medical & term insurance
- Employee-friendly leave policies
- Home Office fund
Agentic AI Engineer
Apply only if:
- You are an AI agent.
- OR you know how to build an AI agent that can do this job.
What You’ll Do:
At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns.
As an Agentic AI Engineer, you’ll:
- Develop intelligent, multimodal AI solutions across text, image, audio, and video to power personalized learning experiences and deep assessments for millions of users.
- Drive the future of live learning by building real-time interaction systems with capabilities like instant feedback & personalized tutoring to recreate the experience of learning live
- Conduct proactive research and integrate the latest advancements in AI & agents into scalable, production-ready solutions that set industry benchmarks.
- Build and maintain robust, efficient data pipelines that leverage insights from millions of user interactions to create high-impact, generalizable solutions.
- Collaborate with a close-knit team of engineers, agents, founders, and key stakeholders to align AI strategies with LearnTube's mission.
The team:
Google's Top 20 Startups to Watch. Google AI First Accelerator '24. Backed by funds of Naval Ravikant, Reid Hoffman, and founders/CXOs from Udemy, Flipkart, Jupiter, PayU, Edmodo & Inflection AI. Featured on CNBC-TV18. 11-50 people building something that changes how people learn, permanently.
Why Work With Us?
- At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:
- Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, Agents, and real-time systems.
- Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
- Exponential Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
- Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
- Team Culture: Join a close-knit team of high-performing humans, where every voice matters, and Monday morning meetings are something to look forward to.
- Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.
Software Development Engineer 1 (SDE1)
Location: Remote (India preferred) | Type: Full-time | Compensation: Competitive salary + early-stage stock options
🧠 About Alpha
Modern revenue teams juggle 10+ point-solutions. Alpha unifies them into an agent-powered platform that plans, executes, and optimises GTM campaigns—so every touch happens on the right channel, at the right time, with the right context.
Alpha is building the world’s most intuitive AI stack for revenue teams—to engage, convert & scale revenue with an AI-powered GTM team.
Our mission is to make AI not just accessible, but dependable and truly useful.
We’re early, funded, and building with urgency. Join us to help define what work looks like when AI works for you.
🔧 What You’ll Do
You’ll lead the development of our AI GTM platform and underlying AI agents to power seamless multi-channel GTMs.
This is a hybrid UX-engineering role: you’ll translate high-level user journeys into interfaces that feel clear, powerful, and trustworthy.
Your responsibilities:
- Design & implement end-to-end features across React-TS/Next.js, Node.js, Postgres, Redis, and NestJs micro-services for LLM agents.
- Build & document scalable GraphQL / REST APIs that expose our data model (Company, Person, Campaign, Sequence, Asset, ActivityRecord, InferenceSnippet).
- Integrate third-party APIs (CRM, email, ads, CMS) and maintain data sync reliability > 98 %.
- Implement the dynamic agent flow builder with configurable steps, HITL checkpoints, and audit trails.
- Instrument product analytics, error tracking, and CI pipelines for fast feedback and safe releases.
- Work directly with the founder on product scoping, technical roadmap, and hiring pipeline.
✅ What We’re Looking For
- 1–3 years experience building polished web apps (React, Vue, or similar)
- Strong eye for design fidelity, UX decisions, and motion
- Experience integrating frontend with backend APIs and managing state
- Experience with visual builders, workflow editors, or schema UIs is a big plus
- You love taking complex systems and making them feel simple
💎 What You’ll Get
- Competitive salary + high-leverage early equity
- Ownership of user experience at the most critical phase
- A tight feedback loop with real users from Day 1
- Freedom to shape UI decisions, patterns, and performance for the long haul
Role Overview:
As a Backend Developer at LearnTube.ai, you will ship the backbone that powers 2.3 million learners in 64 countries—owning APIs that crunch 1 billion learning events & the AI that supports it with <200 ms latency.
Skip the wait and get noticed faster by completing our AI-powered screening. Click this link to start your quick interview. It only takes a few minutes and could be your shortcut to landing the job: https://bit.ly/LT_Python
What You'll Do:
At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns. As a Backend Engineer, your roles and responsibilities will include:
- Ship Micro-services – Build FastAPI services that handle ≈ 800 req/s today and will triple within a year (sub-200 ms p95).
- Power Real-Time Learning – Drive the quiz-scoring & AI-tutor engines that crunch millions of events daily.
- Design for Scale & Safety – Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch.
- Deploy Globally – Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes.
- Automate Releases – GitLab CI/CD + blue-green / canary = multiple safe prod deploys each week.
- Own Reliability – Instrument with Prometheus / Grafana, chase 99.9 % uptime, trim infra spend.
- Expose Gen-AI at Scale – Publish LLM inference & vector-search endpoints in partnership with the AI team.
- Ship Fast, Learn Fast – Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod in < 2 weeks.
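A note on the sub-200 ms p95 targets above: p95 means 95% of requests complete at or below that latency. A minimal sketch of the nearest-rank calculation (illustrative only; the posting says production monitoring uses Prometheus/Grafana, not code like this):

```python
# Illustrative nearest-rank percentile over latency samples (hypothetical code;
# production systems would read this from Prometheus histograms instead).
import math

def percentile(samples_ms, pct):
    """Smallest sample value that at least pct percent of samples are <= to."""
    ordered = sorted(samples_ms)
    rank = math.ceil(pct / 100 * len(ordered))   # 1-based nearest rank
    return ordered[rank - 1]

latencies = [120, 95, 180, 210, 150, 130, 110, 160, 140, 190]
p95 = percentile(latencies, 95)   # 210 ms here: this sample set would breach a 200 ms budget
```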
What makes you a great fit?
Must-Haves:
- 3+ yrs Python back-end experience (FastAPI)
- Strong with Docker & container orchestration
- Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS) or GCP (GKE / Compute) in production
- SQL/NoSQL (Postgres, MongoDB) + You’ve built systems from scratch & have solid system-design fundamentals
Nice-to-Haves
- Kubernetes (k8s) at scale, Terraform
- Experience with AI/ML inference services (LLMs, vector DBs)
- Go / Rust for high-perf services
- Observability: Prometheus, Grafana, OpenTelemetry
About Us:
At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with:
- AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback.
- Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.
Meet the Founders:
LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes. We’re proud to be recognised by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.
Why Work With Us?
At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:
- Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
- Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
- Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
- Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
- Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to.
- Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.
About Us
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT.’ We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming Practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.
We strive to:
- Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
- Challenge offshore development stereotypes by delivering exceptional quality, and proving the value of craftsmanship.
- Empower clients to deliver value quickly and frequently to their end users.
- Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
- Raise the bar of software craft by setting a new standard for the community.
Job Description
This is a remote position.
Our Core Values
- Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
- Extreme Ownership: We own our work and its outcomes fully.
- Proactive Collaboration: Teamwork elevates us all.
- Pursuit of Mastery: Continuous growth drives us.
- Effective Feedback: Honest, constructive feedback fosters improvement.
- Client Success: Our clients’ success is our success.
Experience Level
This role is ideal for engineers with 6+ years of hands-on software development experience, particularly in Python and ReactJs at scale.
Role Overview
If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality-obsessed as you are? Read on!
What You'll Do
- Write Tests First: Start by writing tests to ensure code quality
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement
- AI-First Development Focus
- Leverage AI tools like GitHub Copilot, Cursor, Augment, Claude Code, etc., to accelerate development and automate repetitive tasks.
- Use AI to detect potential bugs, code smells, and performance bottlenecks early in the development process.
- Apply prompt engineering techniques to get the best results from AI coding assistants.
- Evaluate AI generated code/tools for correctness, performance, and security before merging.
- Continuously explore, stay ahead by experimenting and integrating new AI powered tools and workflows as they emerge.
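The "Write Tests First" practice above can be sketched in a few lines. The feature (a `slugify` helper) is hypothetical, chosen only to show the shape of the workflow: the test exists before the implementation, and the implementation is the minimum needed to make it pass:

```python
# Step 1: the test, written before any implementation exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Python 3.12  ") == "python-3-12"

# Step 2: the minimal implementation that makes the test pass.
import re

def slugify(title: str) -> str:
    """Lowercase the title, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()
print("tests passed")
```

Each subsequent behaviour (Unicode titles, length limits) would start the same way: a failing test first, then code.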
Requirements
What We're Looking For
- Proficiency in some or all of the following: ReactJS, JavaScript, Object Oriented Programming in JS
- 6+ years of Object-Oriented Programming with Python or equivalent
- 6+ years of experience working with relational (SQL) databases
- 6+ years of experience using Git to contribute code as part of a team of Software Craftspeople
- AI Skills & Mindset
- Power user of AI assisted coding tools (e.g., GitHub Copilot, Cursor, Augment, Claude Code).
- Strong prompt engineering skills to effectively guide AI in crafting relevant, high-quality code.
- Ability to critically evaluate AI generated code for logic, maintainability, performance, and security.
- Curiosity and adaptability to quickly learn and apply new AI tools and workflows.
- AI evaluation mindset balancing AI speed with human judgment for robust solutions.
Benefits
What We Offer
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
- And More: Extra perks to support your well-being and professional growth.
Work Environment
- Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility — while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the right balance between freedom and responsibility, we deliver the high-quality standards our customers recognise us by. With asynchronous tools and a push for active participation, we foster a vibrant, hands-on environment where each team member's engagement and contributions drive impactful results.
- Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
- Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.
Incubyte is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Senior Data Engineer (Databricks, BigQuery, Snowflake)
Experience: 8+ Years in Data Engineering
Location: Remote | Onsite (Noida, Gurgaon, Pune, Nagpur, Jaipur, Gandhinagar)
Budget: Open / Competitive
Job Summary:
We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable data solutions that support advanced analytics and machine learning initiatives. You will lead the development of reliable, high-performance data systems and collaborate closely with data scientists to enable data-driven decision-making.
In this role, we expect a forward-thinking professional who utilizes AI-augmented development tools (such as Cursor, Windsurf, or GitHub Copilot) to increase engineering velocity and maintain high code standards in a modern enterprise environment.
Key Responsibilities:
- Scalable Pipelines: Design, develop, and optimize end-to-end data pipelines using SQL, Python, and PySpark.
- ETL/ELT Workflows: Build and maintain workflows to transform raw data into structured, analytics-ready datasets.
- ML Integration: Partner with data scientists to deploy and integrate machine learning models into production environments.
- Cloud Infrastructure: Manage and scale data infrastructure within AWS and Azure ecosystems.
- Data Warehousing: Utilize Databricks and Snowflake for big data processing and enterprise warehousing.
- Automation & IaC: Implement workflow orchestration using Apache Airflow and manage infrastructure as code via Terraform.
- Performance Tuning: Optimize data storage, retrieval, and system performance across data warehouse platforms.
- Governance & Compliance: Ensure data quality and security using tools like Unity Catalog or Hive Metastore.
- AI-Augmented Development: Integrate AI tools and LLM APIs into data pipelines and use AI IDEs to streamline debugging and documentation.
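The "raw data into structured, analytics-ready datasets" step above is, at its core, a transform with a data-quality gate. A toy version in plain Python (field names are illustrative; at the scale this role describes, the same shape would be a PySpark or Databricks job):

```python
# Raw ingested records: strings, mixed casing, and an incomplete row.
raw = [
    {"order_id": "A1", "amount": "19.99", "country": "in"},
    {"order_id": "A2", "amount": "5.00", "country": "IN"},
    {"order_id": "A3", "amount": None, "country": "us"},  # fails quality gate
]

def transform(rows):
    """Normalise types and casing; drop rows that fail the quality gate."""
    out = []
    for r in rows:
        if r["amount"] is None:  # data-quality rule: amount is required
            continue
        out.append({
            "order_id": r["order_id"],
            "amount": round(float(r["amount"]), 2),
            "country": r["country"].upper(),
        })
    return out

clean = transform(raw)
print(clean)
```

In production the quality rules would be declarative (e.g. enforced via expectations in the warehouse) and the job orchestrated by Airflow rather than called inline.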
Technical Requirements:
- Experience: 8+ years of core Data Engineering experience in large-scale enterprise or consulting environments.
- Languages: Expert proficiency in SQL and Python for complex data processing.
- Big Data: Hands-on experience with PySpark and large-scale distributed computing.
- Architecture: Strong understanding of ETL frameworks, data pipeline architecture, and data warehousing best practices.
- Cloud Platforms: Deep working knowledge of AWS and Azure.
- Modern Tooling: Proven experience with Databricks, Snowflake, and Apache Airflow.
- Infrastructure: Experience with Terraform or similar IaC tools for scalable deployments.
- AI Competency: Proficiency in using AI IDEs (Cursor/Windsurf) and integrating AI/ML models into production data flows.
Preferred Qualifications:
- Exposure to data governance and cataloging tools (e.g., Unity Catalog).
- Knowledge of performance tuning for massive-scale big data systems.
- Familiarity with real-time data processing frameworks.
- Experience in digital transformation and sustainability-focused data projects.
About TIFIN:
TIFIN is a cutting-edge fintech platform transforming financial lives through AI and investment intelligence. Backed by industry leaders like JP Morgan and Morningstar, we're dedicated to personalizing wealth experiences, akin to how AI has revolutionized entertainment, but with the critical responsibility of delivering superior financial outcomes. We blend design and behavioral science with investment intelligence to create engaging software and APIs that empower better investor outcomes. Our mission is to recognize each individual's unique needs and goals, matching them to tailored financial advice and investments across our marketplace and various divisions.
Our Values: Go with your GUT
- Grow at the Edge: We embrace personal growth, stepping out of comfort zones, and putting ego aside to unlock genius. We operate with self-awareness and integrity, striving for excellence without excuses.
- Understanding through Listening and Speaking the Truth: Transparency is key. We communicate with radical candor, authenticity, and precision to foster shared understanding. We challenge ideas, but once a decision is made, we commit fully.
- Win for Teamwin: We thrive in our genius zones and take full ownership of our work. We inspire each other with energy and attitude, collaborating seamlessly to achieve collective success.
The Opportunity:
TIFIN is seeking a highly skilled and experienced LLM Engineer to join our innovative, remote-first team. This is a unique opportunity to shape the future of personalized financial experiences by leveraging your expertise in Large Language Models (LLMs) and Generative AI. As an early-stage startup, we're looking for an independent contributor and leader who is ready to build systems from the ground up and own outcomes.
What You'll Do:
- Collaborate closely with design and product teams to craft intuitive and engaging conversational AI experiences for our users.
- Work autonomously to deliver high-quality features, taking full ownership of project outcomes.
- Analyze and leverage our extensive data to create highly personalized experiences.
- Fine-tune LLMs with proprietary data to enhance model performance and relevance for our specific use cases.
- Implement various RAG (Retrieval-Augmented Generation) approaches to augment LLMs with relevant, up-to-date, and domain-specific information.
- Act as both a technical leader and an individual contributor, embodying the startup mentality of doing "whatever it takes" to succeed.
- Design and set up new workflows, systems, and tools from scratch, with support from the wider team.
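The RAG approaches mentioned above follow one common loop: retrieve relevant context, then assemble it into the prompt sent to the LLM. This is a deliberately tiny sketch — the corpus is made up, and word-overlap ranking stands in for the vector search a real system would use:

```python
# Toy RAG loop: retrieve the best-matching snippet, then build the prompt.
corpus = [
    "Index funds track a market index at low cost.",
    "A 401(k) is a US employer-sponsored retirement plan.",
    "Diversification spreads risk across asset classes.",
]

def retrieve(query: str, docs, k: int = 1):
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs) -> str:
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is a 401(k) retirement plan?", corpus)
print(prompt)
```

Swapping the retriever for embedding similarity over a vector database, and the print for an LLM API call, gives the production shape without changing the loop.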
What You'll Bring:
- 8+ years of professional experience in software engineering or a related field.
- Proven experience working with Large Language Models (LLMs) and Generative AI technologies.
- Demonstrated experience in building and deploying conversational bots.
- Hands-on experience with fine-tuning machine learning models, specifically LLMs.
- Proficiency in utilizing RAG-based approaches for LLM augmentation.
- A strong understanding of financial concepts and investing is a significant plus, though not strictly required.
- Ability to thrive in a fast-paced, startup environment, with a proactive and problem-solving mindset.
- Excellent communication skills and the ability to articulate complex technical concepts clearly.
Our Benefits Package Includes:
- Competitive salary with performance-linked variable compensation.
- Comprehensive medical insurance.
- Tax-saving benefits.
- Flexible Paid Time Off (PTO) policy and company-paid holidays.
- Generous Parental Leave: 6 months paid maternity leave, 2 weeks paid paternity leave.
TIFIN is an equal-opportunity employer, valuing diverse talents and perspectives. We encourage all qualified applicants to apply, regardless of background.
Budget: 35 LPA to 45 LPA
Work schedule is Mon to Fri, 3:30am to 12:30pm IST
Key Responsibilities:
- Design, develop, and deploy computer vision and machine learning models for analyzing visual and document-based data.
- Build pipelines that convert unstructured visual inputs into structured and usable information.
- Develop and evaluate models for tasks such as object detection, segmentation, document parsing, and image understanding.
- Apply OCR and related techniques to extract meaningful information from complex documents and imagery.
- Work with large datasets and build efficient training and evaluation pipelines.
- Handle real-world visual datasets that may contain noise, inconsistencies, incomplete information, or varying formats.
- Experiment with different approaches to solve challenging computer vision problems and evaluate tradeoffs between accuracy, performance, and complexity.
- Collaborate with product and engineering teams to integrate machine learning models into scalable production systems.
- Continuously improve model performance, accuracy, and robustness in real-world environments.
- Stay up to date with the latest developments in AI and computer vision and apply relevant techniques where appropriate.
- Actively leverage modern AI tools and frameworks to accelerate experimentation, development, and engineering workflows.
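One concrete piece of the model-evaluation work above: object-detection predictions are matched to ground truth via Intersection-over-Union (IoU). A minimal stdlib implementation, with boxes as `(x1, y1, x2, y2)` corner coordinates (a common but not universal convention):

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

pred = (0, 0, 10, 10)
gold = (5, 5, 15, 15)
print(iou(pred, gold))  # 25 / (100 + 100 - 25) ≈ 0.143
```

A detection pipeline then counts a prediction as correct when its IoU with a same-class ground-truth box exceeds a threshold (0.5 is the classic choice).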
Requirements:
- 5+ years of hands-on experience building and deploying machine learning models, particularly in Computer Vision or document understanding.
- Strong proficiency in Python for machine learning and data processing.
- Hands-on experience with modern ML frameworks such as PyTorch and libraries in the Hugging Face ecosystem.
- Experience with computer vision tooling such as OpenCV.
- Experience with common ML and data science libraries such as scikit-learn, NumPy, and Pandas.
- Experience developing models for tasks such as segmentation, object detection, or document analysis.
- Experience working with large image datasets and building training pipelines.
- Solid understanding of model evaluation, data preprocessing, and performance optimization.
- Strong problem-solving skills and ability to work in a fast-paced product environment.
- Ability to collaborate effectively with cross-functional engineering and product teams.
- The candidate should be based in India
- Willing to work remotely full-time
Preferred Qualifications:
- Experience with TensorFlow or other deep learning frameworks.
- Experience working with OCR pipelines or document analysis systems.
- Experience deploying machine learning models in production environments.
- Experience with containerized deployments such as Docker or Kubernetes.
- Experience working with complex technical documents, diagrams, or structured visual data.
- Familiarity with spatial or geometry-related data problems.
- Experience with libraries such as Detectron2, MMDetection, or similar.
- Familiarity with frameworks used to integrate modern AI models into applications (e.g., LangChain or similar tooling).
- Contributions to open-source ML or computer vision projects are a plus.
Additional Information:
- The problems we work on involve complex visual and document-based data, so we value engineers who enjoy tackling challenging technical problems and experimenting with different approaches to reach practical solutions.
- Candidates are required to include links to relevant projects, GitHub repositories, research work, or examples of machine learning systems they have built.
Benefits:
- Flexible remote work opportunities with career development opportunities
- Engagement with a supportive and collaborative global team
- Competitive market based salary
Job Title: Python Development Intern
Company: Honeybee Digital
Location: Remote
Internship Duration: 3 Months
Job Type: Internship
Working Hours
- Full-time: 9:00 AM – 6:00 PM
- Part-time: 9:00 AM – 1:00 PM / 1:00 PM – 6:00 PM
Note: Internship certificate will be provided only after successful completion of the internship duration.
About the Role
We are looking for a passionate and motivated Python Development Intern who is eager to gain hands-on experience in real-world projects. This role is ideal for candidates interested in backend development, automation, data handling, and API integration.
Key Responsibilities
- Assist in developing applications using Python
- Work on data handling, automation scripts, and backend logic
- Support API development and integration
- Assist in web scraping and data processing tasks
- Debug, test, and optimize existing code
- Collaborate with development and data teams
- Document code and maintain project updates
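The web-scraping and data-processing tasks above boil down to parsing structure out of HTML. A standard-library-only sketch — the markup and class name are made up for illustration, and a real task would fetch pages (respecting robots.txt) before parsing:

```python
from html.parser import HTMLParser

class ItemParser(HTMLParser):
    """Collect the text of <li class="item"> elements."""
    def __init__(self):
        super().__init__()
        self.in_item = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "li" and ("class", "item") in attrs:
            self.in_item = True

    def handle_data(self, data):
        if self.in_item and data.strip():
            self.items.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_item = False

html = '<ul><li class="item">Alpha</li><li class="item">Beta</li><li>skip</li></ul>'
p = ItemParser()
p.feed(html)
print(p.items)  # ['Alpha', 'Beta']
```

Libraries like BeautifulSoup make this terser, but the state-machine idea is the same.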
Requirements
- Basic knowledge of Python programming
- Understanding of data structures and logic building
- Familiarity with libraries such as Pandas, NumPy (preferred)
- Basic understanding of APIs and web frameworks (Flask/Django is a plus)
- Problem-solving mindset and willingness to learn
- Ability to work independently and meet deadlines
Skills You Will Gain
- Hands-on experience in Python development and real projects
- Exposure to automation, FastAPI, and backend systems
- Practical knowledge of data processing and scripting
- Debugging and optimization techniques
- Experience working in a professional development environment
Who Can Apply
- Students pursuing Computer Science, IT, Data Science, or related fields
- Freshers interested in Python development and backend roles
- Candidates looking to build a strong technical portfolio
About Certa
Certa is a leading innovator in the no-code SaaS workflow space, powering the full lifecycle for suppliers, partners, and third parties. From onboarding and risk assessment to contract management and ongoing monitoring, Certa enables businesses with automation, collaborative workflows, and continuously updated insights. Join us in our mission to revolutionize third-party management!
What You'll Do
- Partner closely with Customer Success Managers to understand client workflows, identify quality gaps, and ensure smooth solution delivery.
- Design, implement, and execute both manual and automated tests for client-facing workflows across our web platform.
- Write robust and maintainable test scripts using Python (Selenium) to validate workflows, integrations, and configurations.
- Own test planning for client-specific features, including writing clear test cases and sanity scenarios — even in the absence of detailed specs.
- Collaborate with Product, Engineering, and Customer Success teams to reproduce client-reported issues, root-cause them, and verify fixes.
- Lead or contribute to exploratory testing, regression cycles, and release validations before client rollouts.
- Proactively identify gaps, edge cases, and risks in client implementations and communicate them effectively to stakeholders.
- Act as a client-facing QA representative during solution validation, ensuring confidence in delivery and post-deployment success.
What We're Looking For
- 3–5 years of experience in Software QA (manual + automation), ideally with exposure to client-facing or Customer Success workflows.
- Strong understanding of core QA principles (priority vs. severity, regression vs. sanity, risk-based testing).
- Hands-on experience writing automation test scripts with Python (Selenium).
- Experience with modern automation frameworks (Playwright + TypeScript or equivalent) is a strong plus.
- Familiarity with SaaS workflows, integrations, or APIs (JSON, REST, etc.).
- Excellent communication skills — able to interface directly with clients, translate feedback into testable requirements, and clearly articulate risks/solutions.
- Proactive, curious, and comfortable navigating ambiguity when working on client-specific use cases.
Good to Have
- Previous experience in a Customer Success, Professional Services, or client-facing QA role.
- Experience with CI/CD pipelines, BDD/TDD frameworks, and test data management.
- Knowledge of security testing, performance testing, or accessibility testing.
- Familiarity with no-code platforms or workflow automation tools.
Perks
- Best-in-class compensation
- Fully remote work
- Flexible schedules
- Engineering-first, high-ownership culture
- Massive learning and growth opportunities
- Paid vacation, comprehensive health coverage, maternity leave
- Yearly offsite, quarterly hacker house
- Workstation setup allowance
- Latest tech tools and hardware
- A collaborative and high-trust team environment
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote
Experience:
3+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI.
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma)
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 3 years of professional experience in backend development.
- Available for full-time engagement.
- Please do not apply if you are currently engaged in other projects; we require dedicated availability.
About Us
Simbian® is building an Agentic AI platform for cybersecurity. Founded by repeat successful security founders, we have gathered an excellent cohort of employees, partners, and customers. Our mission is to solve security using AI, and our core values are excellence, replication, and intellectual honesty.
Our promise is to make Simbian the best workplace of your career and we believe a small group of thoughtful passionate people can make all the positive difference in the world. To fuel our fast growth, we are seeking an exceptional candidate who shares our core values of excellence (being the world's best at our craft), replication (share your best ideas with others), and intellectual honesty (tell the truth even if it's bitter).
Our AI Agents automate security operations and provide our customers 10x leverage. Our customers include some of the world's largest companies.
Our initial use cases include:
- SOC alert triage and investigation
- Prioritization and classification of vulnerabilities
- AI-based threat hunting
As an Engineering Manager, you will lead a pod of highly skilled engineers responsible for building critical components of Simbian’s platform—from scalable backend services and data pipelines to integrations with security tools and novel AI-driven investigation engines. You’ll be responsible for driving execution, mentoring engineers, and shaping technical direction while working closely with product, AI/ML, and security teams.
This role is ideal for a hands-on leader who thrives in startup environments, is comfortable balancing execution with strategy, and can guide engineers to build reliable, secure, and scalable systems.
Responsibilities
• Lead and mentor a pod of backend, frontend, or platform engineers (depending on pod assignment, e.g., Integrations, Investigation Infra, Threat Hunting).
• Drive delivery of product and platform features aligned to quarterly OKRs
• Establish engineering best practices for code quality, observability, security, and reliability
• Collaborate with product managers and security SMEs to define technical scope, execution plans, and delivery timelines.
• Provide technical guidance on architecture decisions across areas such as:
  - Scalable microservices
  - Security product integrations (EDR, SIEM, CNAPP, etc.)
  - Data pipelines (historical and real-time event ingestion)
  - AI/ML systems for reasoning and automation
• Recruit, develop, and retain top engineering talent.
• Ensure pods maintain a high bar for innovation, execution, and collaboration.
Requirements
• 12+ years of professional software engineering experience in the security domain, with at least 3+ years leading or managing engineering teams.
• Strong background in building scalable backend systems (Python, Go, or Java preferred).
• Experience with cloud-native architectures (Kubernetes, Postgres, vector databases, OpenSearch, etc.).
• Familiarity with data pipelines (ETL/ELT, orchestration frameworks like Dagster/Airflow, streaming systems).
• Exposure to security products and data (SIEM, EDR, CNAPP, vulnerability management) is a strong plus.
• Track record of leading pods/teams to deliver complex technical projects with measurable outcomes.
• Strong communication skills, with the ability to work cross-functionally with product, AI/ML, and security teams.
• Startup mindset: bias for execution, ability to operate with ambiguity, and eagerness to wear multiple hats.
Nice to Have
• Experience with AI/ML pipelines, LLM integration, or security-focused AI applications.
• Knowledge of SOC processes, MITRE ATT&CK, or incident response workflows.
• Contributions to open-source projects in data, security, or AI.
• Previous experience scaling teams at an early-stage startup.
Benefits
• Competitive salary commensurate with experience
• Generous early-stage equity with significant upside potential
• Annual performance bonuses tied to company and individual goals
Budget: under 90L annually
About AIVOA
AIVOA is building an AI-native Supply Chain Operating System for Life Sciences companies (API & FDF manufacturers).
We are creating an intelligent control layer that connects procurement, production, compliance, and logistics — enabling faster decisions, automation, and real-time visibility across operations.
About the Role
We are looking for a highly driven fresher to join as an AI Engineer and work on building AI-native systems from scratch.
This is a full-stack engineering role where you will:
- Build backend systems using Python (FastAPI)
- Develop frontend interfaces using React + Vite
- Work on AI-powered workflows and automation systems
You will directly contribute to building real-world systems used in regulated industries.
What You’ll Work On
- Backend APIs using FastAPI (Python)
- Frontend applications using React + Vite
- AI-assisted workflows (automation, decision systems)
- Integrating APIs, databases, and AI tools
- Building end-to-end product features (not isolated tasks)
Required Skills
- Strong basics in Python
- Basic understanding of React
- Understanding of APIs and how systems connect
- Basic SQL knowledge
- Strong problem-solving mindset
Good to Have (Optional)
- FastAPI exposure
- React project experience
- Git/GitHub
- Interest in AI tools (ChatGPT, Copilot, etc.)
Who Should Apply
- Freshers serious about becoming AI / Full Stack Engineers
- Builders (projects > certificates)
- People who can learn fast and execute
- Candidates who want startup experience and real ownership
Growth
- Work directly with founders and domain experts
- Build real AI systems
- Fast growth based on performance
Senior Project Owner / Project Manager Technology
Department - Technology / Software Development
Work Mode - Work From Home (WFH), Full Time
Experience - Minimum 10 Years (Development Background)
Time Zone - Candidate should be comfortable working in US time zone overlap and attending client calls accordingly.
ROLE SUMMARY
We are looking for a seasoned Senior Project Owner / Project Manager with a strong development foundation to lead our technology initiatives. This role bridges client management and technical execution: you will own end-to-end delivery of multiple concurrent projects while supporting a high-performing remote team.
KEY RESPONSIBILITIES
Project & Delivery Management
- Own and manage multiple concurrent technology projects from initiation to production release
- Define project scope, timelines, milestones, and resource allocation plans
- Distribute tasks effectively across a team of developers, QA, and support engineers
- Track assigned work daily, follow up on progress, and proactively remove blockers
- Ensure all projects meet deadlines and quality benchmarks without compromise
- Participate actively in production activities and take full accountability for live deployments
US Client Management
- Serve as the Technology single point of contact for all assigned US clients
- Attend and lead client calls focused on ARDEM technical solutions, including discussions related to prospective and existing clients (US time zone overlap required)
- Resolve client queries, manage escalations, and ensure high client satisfaction
- Showcase company-developed applications and software demos confidently to clients
- Translate complex client requirements into clear technical deliverables for the team
Team Leadership
- Lead, mentor, and performance-manage a distributed remote team of technical members
- Foster accountability, ownership, and a high-delivery culture within the team
- Conduct sprint planning, stand-ups, retrospectives, and performance reviews
- Identify skill gaps and work with HR/training teams to bridge them
Process & Operations
- Deeply understand ARDEM's internal processes and align project execution accordingly
- Ensure development standards and best practices are followed across all projects
- Manage crisis situations with composure, identify root causes and drive swift resolution
- Coordinate with cross-functional teams including HR, Operations, Training, and QA
- Maintain project documentation, status reports, and risk registers
REQUIRED EXPERIENCE
- 10+ years of total experience in software development and project management
- 5–7 years of hands-on coding experience in one or more technologies listed below
- 2–3 years in a team management or tech lead role overseeing 5+ members
- Proven experience managing multiple simultaneous projects in a remote/WFH environment
- Prior experience working with US-based clients and a strong understanding of US work culture and expectations
TECHNICAL SKILLS
- Python: scripting, automation, data processing, backend services
- JavaScript / Node.js: server-side development, REST APIs, async workflows
- .NET Core: enterprise application development and service integration
- SQL Databases: query optimization, schema design, stored procedures
- Familiarity with CI/CD pipelines, Git workflows, and deployment processes
- Ability to review code, understand architectural decisions, and guide the team technically
SKILLS & COMPETENCIES
- Exceptional verbal and written communication skills in English; client-facing confidence is a must
- Strong crisis management and conflict resolution ability under tight deadlines
- Highly organized with a structured approach to planning, prioritization, and execution
- Self-driven and accountable; capable of operating independently in a remote environment
- Strong presentation skills; able to demo software to non-technical stakeholders
- Empathetic leadership style with the ability to motivate and align diverse team members
QUALIFICATIONS
- Bachelor's or Master's degree in Computer Science
- PMP Certification: Preferred (candidates without PMP must demonstrate equivalent project management rigor)
- Agile / Scrum certifications (CSM, PMI-ACP) are an added advantage
LOCATION PREFERENCE
- Candidates must be based in a Tier-1 city: Mumbai, Delhi NCR, Bengaluru, Hyderabad, Chennai, Pune, or Kolkata
- This is a full-time Work From Home role: reliable internet, a dedicated workspace, and availability during US business hours are mandatory
ABOUT ARDEM
ARDEM Incorporated is a leading Business Process Outsourcing (BPO) and Automation company serving US-based clients across diverse industries. Our Technology Team builds and maintains in-house applications that power data processing pipelines, automation workflows, internal platforms, and domain-specific training modules, all engineered to deliver operational excellence at scale. To our clients, we provide cloud-based platforms to assist in their day-to-day business analytics. Our cloud services focus on finance, logistics, and utility management.
About the Role
Join the Blockchain Backend Infrastructure team and play a central role in building and maintaining a leading blockchain management platform. You'll be responsible for building cutting-edge blockchain infrastructure while implementing high-throughput, real-time, scalable software solutions.
As a Blockchain Engineer, you will be instrumental in the research and integration of blockchain technologies into the platform. Your responsibilities will include collaborating closely with foundations and developers to gain a deep understanding of blockchain protocols and on-chain projects, then applying that knowledge to implement new features within the platform.
You will focus equally on external protocol integration patterns and internal wallet infrastructure. This role serves as a technical bridge between raw on-chain capabilities and the wallet features delivered to customers.
What You'll Do
- Implement modern backend applications and infrastructure in a microservices architecture, using the latest technologies and development practices.
- Deep dive into the latest blockchain technology and become an expert in the fundamentals, protocols, and features of the chains we support.
- Collaborate effectively with developers, engineers, and other roles while demonstrating strong independent problem-solving abilities.
- Contribute to production reliability through on-call participation, incident response, and post-incident follow-ups.
What You'll Bring
- 5+ years of backend development experience in modern languages (Go, Python, JavaScript/TypeScript).
- 3+ years of hands-on blockchain development experience.
- Experience working on high-scale distributed systems.
- Understanding of microservices architecture and API design.
- Knowledge of consensus mechanisms, cryptographic primitives, and distributed systems.
- Strong problem-solving skills, attention to detail, and a collaborative mindset.
Preferred
- Experience building blockchain solutions for enterprise or institutional use cases.
- Understanding of security best practices for smart contracts and blockchain systems.
- Demonstrated ability to apply AI tools in day-to-day development.
- Understanding of MPC, multi-signature wallets, or other advanced cryptographic techniques.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience with Docker, Kubernetes, and Helm.
- Location: EU preferred, or availability to travel to one of our dev hubs in Europe once per quarter.
Job role: Systems Engineer (L2)
Location: Remote/Bengaluru
Experience: 3-6 years
About the Role:
We are looking for a Systems Engineer (L2) to join our growing infrastructure team. You will be responsible for managing, optimizing, and scaling our cloud communication platform that handles billions of messages and voice calls annually.
Key Responsibilities:
— Design, deploy, and maintain scalable cloud infrastructure — AWS/GCP/Azure.
— Manage and optimize networking components — routers, switches, firewalls, load balancers.
— Handle incident response — monitor systems, identify issues, resolve production problems.
— Implement DevOps best practices — CI/CD pipelines, automation, containerization.
— Collaborate with backend and product teams on system architecture.
— Performance tuning — ensure high availability and reliability of the platform.
— Security management — implement security protocols and compliance standards.
Required Skills:
Technical:
- Linux/Unix administration — strong fundamentals
- Networking — TCP/IP, DNS, BGP, VoIP protocols
- Cloud platforms — AWS/GCP/Azure — minimum 2 years
- DevOps tools — Docker, Kubernetes, Jenkins, CI/CD
- Monitoring tools — Grafana, Prometheus, Kibana, Datadog
- Scripting — Python, Bash, Shell
- Databases — MySQL, PostgreSQL, Redis
Soft skills:
- Strong problem-solving under pressure
- Good communication — English written and verbal
- Team player — collaborative mindset
Good to Have:
- Experience in telecom/CPaaS/cloud communications industry
- Knowledge of VoIP, SIP, RTP protocols
- AI/ML operations experience
- CCNA/AWS certifications

Mid Size Product Engineering Services Company
This role will report to the Chief Technology Officer
You Will Be Responsible For
* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.
* Leading a team in building a high-performing and scalable SaaS product.
* Conducting code reviews to maintain code quality and follow best practices
* Developing DevOps practices to promote automation, including asset creation, enterprise strategy definition, and team training
* Developing and building microservices leveraging cloud services
* Working on application security aspects
* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.
* Creating a culture of innovation that enables the continued growth of individuals and the company
* Working closely with Product and Business teams to build winning solutions
* Leading talent management, including hiring, developing, and retaining a world-class team
Ideal Profile
* You possess a Degree in Engineering or a related field and have at least 20 years of experience as a Software Engineer, with 10+ years of experience leading teams and at least 4 years of experience in building a SaaS / Fintech platform.
* Proficiency in MERN / Java / Full Stack.
* You have led a team in optimizing the performance and scalability of a product.
* You have extensive experience with DevOps environments and CI/CD practices and can train teams.
* You're a hands-on leader, visionary, and problem solver with a passion for excellence.
* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.
What's on Offer?
* Exciting opportunity to drive the Engineering efforts of a reputed organisation
* Work alongside & learn from best in class talent
* Competitive compensation + ESOPs
Job Title: Data Analyst (AI/ML Exposure)
Experience: 1–3 Years
Location: Mumbai
Job Description:
We are looking for a Data Analyst with strong experience in data handling, analysis, and visualization, along with exposure to AI/ML concepts. The role involves working with structured and unstructured data (SQL, CSV, JSON), building data pipelines, performing EDA, and deriving actionable insights. Candidates should have hands-on experience with Python (Pandas, NumPy), data visualization tools, and basic knowledge of NLP/LLMs. Exposure to APIs, data-driven applications, and client interaction will be an added advantage.
Skills Required: Python, SQL, Data Analysis, EDA, Visualization, APIs
Apply: Share your resume or connect with us.
Job Title: AI Architecture Intern
Company: PGAGI Consultancy Pvt. Ltd.
Location: Remote
Employment Type: Internship
Position Overview
We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
Duration: 6 months
Key Responsibilities:
- AI System Architecture Design: Collaborate with the technical team to design robust, scalable, and high-performance AI system architectures aligned with client requirements.
- Client-Focused Solutions: Analyze and interpret client needs to ensure architectural solutions meet expectations while introducing innovation and efficiency.
- Methodology Development: Assist in the formulation and implementation of best practices, methodologies, and frameworks for sustainable AI system development.
- Technology Stack Selection: Support the evaluation and selection of appropriate tools, technologies, and frameworks tailored to project objectives and future scalability.
- Team Collaboration & Learning: Work alongside experienced AI professionals, contributing to projects while enhancing your knowledge through hands-on involvement.
Requirements:
- Strong understanding of AI concepts, machine learning algorithms, and data structures.
- Familiarity with AI development frameworks (e.g., TensorFlow, PyTorch, Keras).
- Proficiency in programming languages such as Python, Java, or C++.
- Demonstrated interest in system architecture, design thinking, and scalable solutions.
- Up-to-date knowledge of AI trends, tools, and technologies.
- Ability to work independently and collaboratively in a remote team environment
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
Compensation:
- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.
- Stipend: Base is INR 8,000 and can increase up to INR 20,000 depending on performance metrics.
After completion of the internship period, there is a chance to get a full-time opportunity as an AI/ML engineer (Up to 12 LPA).
Preferred Experience:
- Prior experience in roles such as AI Solution Architect, ML Architect, Data Science Architect, or AI/ML intern.
- Exposure to AI-driven startups or fast-paced technology environments.
- Proven ability to operate in dynamic roles requiring agility, adaptability, and initiative.
What You'll Do
- Build and maintain web & backend systems using Python & Node.js
- Create custom workflows and automations
- Do code reviews, fix bugs, manage databases
- Work with teams to understand and deliver solutions
- Write clean, well-documented code
- Mentor junior developers
What We Need
- 2–6 years of software development experience
- Strong in Python, Node.js & REST APIs
- Experience with workflow/automation tools
- Self-driven, good communicator, team player
Perks of This Role
- Lead your own projects
- Mentor junior devs
- Direct access to stakeholders & leadership
You will own the end-to-end implementation and operation of AI-powered outbound campaigns for our clients. That means taking a client brief, understanding their target market, building the systems that research and engage prospects, and making sure those systems run reliably without hand-holding.
This is not a "connect two Zapier steps and call it automation" kind of role. You will be designing multi step workflows where AI agents research companies, enrich data through APIs, personalize messaging intelligently, and deliver outputs into client tools. Each campaign is a custom system with moving parts that need to work together cleanly.
What your weeks will look like:
You will onboard new clients, understand their ICP and outreach goals, then build and deploy the technical infrastructure to execute those campaigns. You will monitor live campaigns, troubleshoot when something breaks, and optimize for better results over time. You will hop on video calls with clients when needed, but the bulk of your time is building and maintaining systems that work.
Specifically, you will:
Build and manage complex n8n workflows that pull data from multiple sources, enrich it through APIs and AI, and deliver personalized outputs. Design Airtable bases that structure client data, automate processing, and integrate with external tools. Set up and manage email infrastructure: domains, deliverability, sending sequences. Use AI tools (Claude, GPT) to build research and personalization layers into client workflows. Handle client onboarding, ongoing communication, and technical troubleshooting. Own your campaigns. When something breaks at 2 PM on a Tuesday, you fix it. When a client asks why response rates dropped, you investigate and have an answer.
Who This Is For
You are someone who figures things out. You read documentation, test until it works, and do not give up when the first approach fails. You have strong technical intuition even if you are not a traditional developer. You understand how APIs work, how data flows between systems, and how to debug when something is not behaving as expected.
You follow AI developments closely. Not casually. You know the practical performance difference between Claude Opus 4.6 & GPT 5.4 🙃, you have opinions on which tools are overhyped, and you have probably built something with AI that you are proud of, even if it was just for yourself.
You are hungry. Not in a cliché motivational poster way. You genuinely want to get better at what you do, you take ownership of your work, and you do not need someone checking in on you every few hours to make sure you are making progress.
Core skills (non negotiable):
- n8n: You have built workflows and understand how nodes, data flow, and error handling work.
- AI tools: Regular, meaningful use of Claude or ChatGPT. You know how to prompt effectively and understand the limitations.
- Technical aptitude: You pick up new tools fast and figure things out from documentation, not tutorials.
- English proficiency: Written and spoken. You will be communicating with international clients.
Great to have (you can learn these on the job):
Experience with cold email and outbound systems (Smartlead, Instantly, or similar). Understanding of email deliverability (SPF, DKIM, domain setup). API integration and webhook experience. Data enrichment workflows using tools like Apollo, web scraping, or similar.
Work Setup
Fully remote. Work from anywhere. 5-day week. You manage your own schedule as long as the work gets done and you are available for client calls when needed.
Why This Role Is Different
You are not joining a company to work a traditional tech job that AI threatens to make obsolete; you will be working with AI all day long and deploying AI-powered outbound systems for clients. You are building and running AI infrastructure for organizations that most people only see on TV. The problems you solve are genuinely novel. There is no tutorial for most of what we do. You will learn faster here in 3 months than you would in years at most places, because we operate at the intersection of AI, automation, and high-stakes client work.
If you are the kind of person who gets excited about building systems that actually work in the real world, not just demos, this is your role.
How to Apply
If you have a portfolio of n8n workflows, Airtable bases, or any AI projects you have built, include links. We value what you have actually built over what is listed on your resume. Practical proof of work is valued 100x more than just writing cool things in your application.
AVOID WRITING USING AI ANYWHERE IN YOUR APPLICATION. WE WORK WITH AI ALL DAY. ANY APPLICATIONS WRITTEN USING AI WILL NOT BE READ AND WILL BE REMOVED BY OUR AI QUALIFIER AGENT ITSELF 🙂
Summary
We are looking for a motivated Odoo Developer to design, develop, and maintain ERP solutions on both Odoo Community and Enterprise editions. The ideal candidate will have strong Python skills, practical experience with the Odoo framework, and the ability to deliver scalable, customized modules that align with business requirements. Compensation will be offered as a 25% to 50% hike on the candidate’s last drawn salary, based on experience and skill set.
Key Responsibilities
- Develop, customize, and maintain Odoo ERP modules for both Community and Enterprise editions.
- Create new custom modules and enhance existing ones to extend system functionality.
- Write clean, efficient, and well-documented Python code following Odoo development standards.
- Troubleshoot, debug, and resolve technical issues to ensure optimal system performance.
- Collaborate with functional consultants and business stakeholders to deliver scalable ERP solutions.
- Design and implement integrations between Odoo and third-party systems such as APIs, payment gateways, CRM tools, and other business applications.
- Optimize database queries and improve system performance.
- Participate in code reviews, testing, and deployment processes.
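Odoo itself runs on PostgreSQL, but the database-optimization point above comes down to a general idea: filtering a large table on an unindexed column forces a full scan, while an index lets the engine seek directly to matching rows. A minimal sketch of that difference, using the standard-library sqlite3 module (the `sale_order` table and column names are illustrative, not Odoo's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sale_order (id INTEGER PRIMARY KEY, partner_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO sale_order (partner_id, amount) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

# Without an index, filtering on partner_id forces a full-table scan
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM sale_order WHERE partner_id = 7"
).fetchone()[-1]

# An index on the filtered column lets the engine seek straight to matches
conn.execute("CREATE INDEX idx_sale_order_partner ON sale_order (partner_id)")
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM sale_order WHERE partner_id = 7"
).fetchone()[-1]

print(before)  # plan detail mentions a scan of sale_order
print(after)   # plan detail mentions idx_sale_order_partner
```

The same reasoning applies when adding indexes to custom Odoo model fields that are frequently searched or filtered.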
Required Skills & Experience
- Minimum 3 years of experience in Odoo development (Community and/or Enterprise editions).
- Strong proficiency in Python and understanding of the Odoo framework.
- Experience with PostgreSQL and database design concepts.
- Knowledge of Odoo ORM, QWeb, XML, and JavaScript.
- Hands-on experience developing and customizing Odoo modules.
- Familiarity with REST APIs and third-party integrations.
- Good debugging and problem-solving skills.
- Understanding of Git or other version control systems.
- Ability to work independently and in a team environment.
Preferred Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Experience working with both Odoo Community and Enterprise editions.
- Exposure to Odoo.sh or cloud deployment environments.
- Basic understanding of business processes such as Accounting, Sales, Inventory, or HR in ERP systems.
- Experience in Agile development methodologies is a plus.
Note
This is an immediate full-time remote requirement. Candidates who are passionate about ERP development and can work with both Odoo Community and Enterprise editions are encouraged to apply.
Python Developer (Performance Optimization Focus)
Experience: 3–5 Years
Location: Remote (India-based candidates only)
Employment Type: Full-time
Role Overview
We are seeking a Python Developer with a strong focus on performance optimization and system efficiency. In this role, you will identify bottlenecks, enhance system performance, and contribute to building scalable, high-performance applications in a Linux-based environment.
Key Responsibilities
- Analyze and troubleshoot performance bottlenecks in applications and systems
- Optimize code, database queries, and architecture for scalability and speed
- Design, develop, test, and maintain robust Python applications
- Work with large datasets and improve data processing efficiency
- Collaborate with cross-functional teams to improve system reliability and performance
- Monitor system performance and implement proactive improvements
- Write clean, maintainable, and efficient code following best practices
Required Skills & Qualifications
- 3–5 years of hands-on experience in Python development
- Strong expertise in performance tuning and optimization techniques
- Experience with debugging and profiling tools
- Solid understanding of data structures and algorithms
- Experience with REST APIs and backend development
- Strong analytical and problem-solving skills
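As a rough illustration of the profiling workflow this role centers on, Python's built-in cProfile can show where time is spent before you optimize; the two functions below are made-up examples, not the company's code:

```python
import cProfile

def slow_sum(n: int) -> int:
    # Deliberately naive: materializes a full intermediate list before summing
    return sum([i * i for i in range(n)])

def fast_sum(n: int) -> int:
    # Generator expression streams values without building the list
    return sum(i * i for i in range(n))

# Both give the same answer; the profile report shows where the time goes
assert slow_sum(10_000) == fast_sum(10_000)
cProfile.runctx("slow_sum(200_000)", globals(), {})  # prints a per-call timing report
```

The habit the role asks for is exactly this loop: measure first, change the hot path, then measure again to confirm the improvement.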
Linux & System Knowledge (Must-Have)
- Comfortable working in Linux/Unix environments
- Command-line proficiency, including:
- File editing (vi, nano)
- File permissions (chmod, chown)
- File downloads (wget, curl)
- Basic file and directory operations
Basic Python Knowledge (Interview Scope)
- Writing simple scripts and reusable functions
- String manipulation and data handling
- Example task: Count words in a file/string efficiently
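A minimal sketch of what the example interview task above might look like using only the standard library (the function name and exact expected behavior are assumptions for illustration):

```python
import re
from collections import Counter

def count_words(text: str) -> Counter:
    """Count word frequencies in a string, case-insensitively."""
    # \w+ grabs runs of letters/digits/underscores; lower() folds case
    return Counter(re.findall(r"\w+", text.lower()))

counts = count_words("The quick brown fox jumps over the lazy dog. The end.")
print(counts["the"])         # 3
print(sum(counts.values()))  # 11 words total
```

Counter runs in a single pass over the input, which is the kind of efficiency the interview scope hints at; for a large file you would feed it line by line instead of reading the whole file into memory.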
Good to Have
- Familiarity with AI/ML concepts or tools
- Experience optimizing data-intensive or distributed systems
- Exposure to cloud platforms (AWS, GCP, Azure)
Why Join Us
- Work on performance-critical systems with real-world impact
- Fully remote work environment
- Opportunity to work with modern, scalable technologies
- Collaborative, growth-focused team culture
Brikito — Lead Full-Stack Developer
Job Description
About Brikito
Brikito is an early-stage PropTech startup building a construction management platform for SME developers and contractors. The founder has 7+ years of hands-on construction experience and an MBA from Warwick Business School. We have initial funding, a domain (brikito.com), wireframes ready, and active customer validation underway. We need our first technical leader to take this from wireframes to a live product.
This is a ground-floor opportunity. You will be the first technical hire — the person who makes every architecture decision and writes the first line of production code.
The Role
Title: CTO / Lead Full-Stack Developer (title depends on experience and equity arrangement)
Location: India (remote OK, occasional visits to Chennai office and overseas office planning to set up in Singapore or Dubai)
Type: Full-time
Compensation: ₹1,00,000–₹2,50,000/month + meaningful equity (0.5%–5% depending on role level, vesting over 4 years with a cliff)
Start Date: May 2026
Reports to: Founder/CEO
What You Will Do
Months 1–3: Build the MVP
- Own all technical decisions — architecture, tech stack, database design, hosting
- Build and ship a working MVP with 3 core features: project dashboard, billing/invoicing, and indent/procurement management
- Set up CI/CD pipeline, staging, and production environments
- Integrate payment gateway (Razorpay for India)
- Build both web and mobile-responsive interfaces
- Ship the MVP within 12 weeks
Months 3–6: Iterate and Scale
- Onboard beta users and fix bugs based on real usage
- Build features based on customer feedback (not assumptions)
- Integrate AI capabilities where they add clear user value (e.g., auto-generated progress reports)
- Hire and manage 1–2 junior developers as the team grows
- Set up monitoring, error tracking, and basic analytics
Months 6–12: Lead the Technical Team
- Grow the engineering team to 4–6 people
- Establish code review processes, documentation standards, and sprint rhythms
- Own the technical roadmap alongside the founder
- Participate in investor conversations as the technical co-founder (if CTO-level)
- Make build-vs-buy decisions for new features
Required Skills
Must Have
- 7+ years of professional software development experience
- Strong proficiency in React or Next.js (frontend)
- Strong proficiency in Node.js (backend) — Express, Nest.js, or similar
- PostgreSQL or MySQL — database design, query optimisation, migrations
- REST API design — clean, well-documented APIs
- Cloud deployment — AWS (EC2, RDS, S3) or GCP equivalent
- Expertise in AI tools and integrations - Anthropic, OpenAI, Perplexity, etc.
- Git — clean branching, PR-based workflow
- Has shipped at least one product that real users used — not just academic or internal tools
- Comfortable working independently — no one will tell you what to do step by step
Strongly Preferred
- Previous experience at a startup (Series A or earlier)
- Experience building SaaS or B2B products
- Experience with mobile development (React Native or Flutter)
- Experience integrating payment gateways (Razorpay, Stripe)
- Experience with third-party API integrations (OpenAI, Twilio, etc.)
- Understanding of CI/CD pipelines (GitHub Actions, Docker)
- Basic understanding of construction, real estate, or field operations (not required, but a plus)
Nice to Have
- Experience with TypeScript
- Experience with real-time features (WebSockets, push notifications)
- Familiarity with Figma (to translate wireframes into UI)
- Experience hiring and mentoring junior developers
- Open source contributions or a personal project portfolio
What We Are NOT Looking For
- Someone who needs detailed specifications for every task — we move fast and figure things out together
- Someone who only wants to code and not think about the product — you will be in customer calls and strategy discussions
- Someone who optimises for perfect code over shipping — we ship first, refactor later
- Someone looking for a stable corporate job — this is a startup with all the chaos and excitement that comes with it
What You Get
- Equity ownership in an early-stage company with a large addressable market ($14.9B global construction SaaS)
- Founding team credit — you will be recognised as a technical co-founder if you take the CTO role
- Direct impact — every line of code you write will be used by real customers within weeks
- Technical freedom — you choose the stack, the tools, the architecture
- A founder who understands the domain — you will never have to guess what contractors need because the CEO has built construction projects himself
- Growth path — as we raise funding and scale, you grow into VP Engineering or CTO of a funded company
How to Apply
Send the following:
- A short note (5–10 lines) on why this role interests you and what you'd bring
- Your LinkedIn profile or resume
- One link to something you've built — a live product, a GitHub repo, an app, anything that shows your work
- Your availability — when can you start?
We will respond within 48 hours. The process is:
- 30-minute video call with the founder
- Small paid technical task (8 hours of work, ₹5,000 paid regardless of outcome)
- Final conversation about role, equity, and start date
- Offer within 1 week of first call
Questions?
DM the founder on LinkedIn: https://www.linkedin.com/in/aashiqahamed/
This is not a job posting from HR. This is a founder looking for his first technical partner. If this excites you, reach out.
Strong Software Engineer fullstack profile using NodeJS / Python and React
Mandatory (Experience) - Must have 6+ YOE in Software Development using Python OR NodeJS (For backend) & React (For frontend)
Mandatory (Core Skills 1): Must have strong experience in working on Typescript
Mandatory (Core Skills 2): Must have experience in message-based systems like Kafka, RabbitMQ, Redis
Mandatory (Core Skills 3): Databases - PostgreSQL & NoSQL databases like MongoDB
Mandatory (Company) - Product Companies Only
Mandatory (Education) - B.Tech or dual degree (B.Tech and M.Tech, or Integrated M.Sc/MS) from Tier 1 Engineering Institutes (Top 7 IITs, Top 5 NITs, IIIT Bangalore, IIIT Hyderabad, IIIT Allahabad, MNNIT, IIT Dhanbad, BITS Pilani). Candidates from other institutions will not be considered unless they come from top-tier product companies
Mandatory (Note) : This role is a hybrid role (2 days WFO)
Job Title: Full Stack Engineer (Django + Next.js)
We’re looking for a Full Stack Engineer with strong backend fundamentals and solid frontend experience to build scalable web products and APIs.
Must-Have
• Django + DRF (2+ years): Models, serializers, services, API views, migrations, query optimization (select_related / prefetch_related), transaction.atomic, custom managers
• Next.js + React (2+ years): App Router, SSR, client components, dynamic imports, useQuery, responsive UIs with Tailwind
• REST APIs: Auth, permissions, pagination, error handling, CORS, JWT flows
• PostgreSQL: Schema design, indexes, constraints, JSON fields, raw SQL when needed
• Celery / async tasks: Retry logic, idempotency, task chaining
• Git: Clean commits, branching, PR workflow
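The query-optimization point above (`select_related` / `prefetch_related`) comes down to avoiding N+1 queries: one query for a list of rows, then one extra query per row for a related record. A framework-free sketch of that idea with the standard-library sqlite3 module, assuming a hypothetical author/book schema (Django's `select_related` generates the equivalent JOIN for you):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, author_id INTEGER);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO book VALUES (1, 'X', 1), (2, 'Y', 1), (3, 'Z', 2);
""")

# N+1 pattern: one query for books, then one more query per book for its author
books = conn.execute("SELECT id, title, author_id FROM book").fetchall()
n_plus_one = [
    (title, conn.execute("SELECT name FROM author WHERE id = ?", (aid,)).fetchone()[0])
    for _, title, aid in books
]  # 1 + len(books) round trips to the database

# select_related-style: a single JOIN fetches the same data in one query
joined = conn.execute(
    "SELECT book.title, author.name FROM book JOIN author ON author.id = book.author_id"
).fetchall()

assert sorted(n_plus_one) == sorted(joined)  # same data, far fewer queries
```

`prefetch_related` solves the reverse (one-to-many) case with a second batched `IN (...)` query instead of a JOIN, but the goal is the same: a constant number of queries regardless of row count.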
Good to Have
• AI / LLM integrations
• AWS S3 and presigned uploads
• Multi-tenancy
• WebRTC / MediaRecorder
• Docker
• Testing with pytest / Django TestCase / factory_boy
We’re looking for someone who can independently own features end-to-end and write clean, scalable code.
Description :
Job Title : Python Engineer - AI Agents & Code Optimization
Experience : 2+ Years
Employment Type : Full-time
Location : Remote
About the Role :
We are looking for a hands-on Software Engineer to build and improve AI agents that work directly on our production code.
Your core responsibility will be to design and evolve a specialized AI agent that deeply understands our codebase and actively helps make it faster, cleaner, simpler, and cheaper to maintain.
This is not a research role. This is real work on real systems with real business impact.
How We Work :
- Business impact first : Cheaper, Faster, Better
- Simple beats complex always
- Small changes, shipped fast
- You own your work end-to-end
- First question is always : Do we even need this?
- Flat team, zero micromanagement
- Decisions can change; adaptability matters
- No long PRDs : one clear goal → discuss → execute
- Ship, measure, improve, repeat
What You Will Do:
- Build and use AI agents to optimize, refactor, and remove code
- Feed logs, metrics, and performance data back into AI agents
- Profile applications and identify performance bottlenecks
- Optimize SQL queries and database usage
- Improve deployment pipelines and release processes
- Continuously improve internal AI tooling
- Work closely with infrastructure and production systems
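Profiling work of the kind listed above usually starts with the standard library. A minimal sketch using cProfile and pstats; the deliberately slow function is invented for illustration:

```python
import cProfile
import io
import pstats

def slow_concat(n: int) -> str:
    # Deliberately quadratic: repeated string concatenation copies the buffer.
    out = ""
    for i in range(n):
        out += str(i)
    return out

profiler = cProfile.Profile()
profiler.enable()
slow_concat(20_000)
profiler.disable()

# Show the five entries with the highest cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

In practice the same report points straight at the bottleneck; here the fix would be `"".join(...)` instead of `+=` in a loop.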
Tech You Should Be Comfortable With:
You don't need to be an expert in everything, but you should be comfortable working with:
- Linux CLI (Required)
- Python
- PHP
- SQL (MySQL or MariaDB)
- Shell scripting
- Large Language Models (LLMs)
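On the SQL side, much of the optimization work is reading query plans before and after adding an index. A sketch using stdlib sqlite3 rather than MySQL/MariaDB (where the equivalent is `EXPLAIN`); the table and index names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql: str) -> str:
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r) for r in rows)

query = "SELECT total FROM orders WHERE customer_id = 7"
before_plan = plan(query)  # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after_plan = plan(query)   # search via idx_orders_customer

print("before:", before_plan)
print("after: ", after_plan)
```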
What We're Looking For:
- 2+ years of software engineering experience (or strong hands-on projects)
- Solid understanding of performance optimization
- Experience cleaning up legacy or messy codebases
- Practical profiling and debugging skills
- Comfortable working close to infrastructure and deployments
- Automation-first mindset
- Ability to explain technical decisions clearly and simply in English
Nice to Have:
- Experience building AI agents
- Exposure to large or long-running systems
- CI/CD or deployment automation experience
When You Join:
- Career Growth: You are expected to grow into a tech lead, entrepreneur, or highly skilled specialist
- Bleeding-Edge Tech: Hands-on experience with alpha/beta software, cutting-edge infrastructure, and top-tier hardware
- Global Exposure: Work with a global team and directly with C-level leadership
- Real Impact: Your code directly solves real user problems and moves the company forward
About the Role
Pendo is looking for a Senior Engineering Manager to lead teams building core product capabilities across Analytics, Guides, and Platform services. These are the systems that power how hundreds of millions of end users experience the software.
In this role, you will drive execution against business objectives, direct complex initiatives from kickoff through delivery, and build a team that operates with clarity and focus. You will set clear expectations, delegate effectively, and partner closely with product, design, and senior engineering leadership to keep teams aligned and moving. You default toward action, push teams to deliver value daily, and actively use AI tools as part of how you work.
If you're energized by directing high-impact teams, developing strong engineers, and building a culture where craft and velocity coexist, this role is a great fit.
What You'll Do
Team Leadership & Hiring
- Create an environment where engineers are encouraged to take risks, experiment, and challenge the status quo.
- Lead, mentor, and grow a team of engineers through clear expectations, coaching, and timely feedback.
- Own hiring end-to-end, partnering with recruiting to attract and close top engineering talent.
- Build an inclusive, high-performing team culture grounded in ownership, accountability, and continuous improvement.
Delivery & Execution
- Maintain a high bar for velocity, predictability, and quality.
- Own team execution against product and engineering goals.
- Partner with Product and Design to define roadmaps, scope work, and deliver high-quality outcomes.
- Identify and remove blockers, manage risks, and ensure strong planning and prioritization.
Technical Leadership
- Guide technical direction in partnership with senior engineers and tech leads.
- Shape architecture that drives delivery speed while preserving quality, reliability, and adaptability.
Cross-Functional Collaboration
- Work closely with product, design, infrastructure, and other engineering teams to deliver cohesive customer experiences.
- Align team priorities with broader organizational goals and strategy.
Operational Excellence
- Drive improvements in system reliability, performance, and scalability.
- Establish strong practices around monitoring, incident response, and continuous improvement.
What We're Looking For
- 8+ years of experience in software engineering.
- 3+ years of experience managing and growing engineering teams.
- Proven track record of hiring and building high-performing teams.
- Experience delivering complex, cross-functional initiatives in a product-driven environment.
- Strong technical foundation in backend, distributed systems, or full-stack development.
- Proven ability to lead teams through ambiguity and change while maintaining execution.
- Actively uses AI tools in day-to-day work and helps drive adoption across teams.
- Strong communication, organizational, and stakeholder management skills.
Nice to Have
- Experience working on analytics products, user-facing SaaS platforms, or data-intensive systems.
- Experience managing teams across both frontend and backend domains.
- Familiarity with modern cloud environments and scalable architectures.
- Experience working in distributed teams across multiple time zones.
Strong Full stack/Backend engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (AI): Must have hands-on experience using AI tools (e.g., Claude, Cursor, GitHub Copilot, Codeium, Deepdcode) for coding
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, Auto Scaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company): Product companies (preferably top product companies, AI-native companies, B2B SaaS)
Mandatory (Stability): Must have at least 2 years of experience in each previous company (if shorter, a clear reason is expected)
Mandatory (Note): Candidates who have owned end-to-end product development or worked on app development projects during their graduation will be highly preferred.
Mandatory (Note 2): The role offers a mix of work setups, including remote, Mumbai (in-office), and Bangalore (in-office) opportunities
Key Responsibilities
AI Architecture & Solution Design
- Design end-to-end AI solution architectures, including:
- Generative AI and LLM-based systems
- Retrieval-Augmented Generation (RAG) pipelines
- Agentic and multi-agent workflows
- Define reference architectures and best practices for AI-enabled features within enterprise products.
- Ensure AI solutions integrate seamlessly with existing applications, data, and cloud architectures.
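The retrieval step of a RAG pipeline can be sketched without any framework. Here a toy bag-of-words cosine similarity stands in for real embeddings; the documents and query are invented, and a production system would use an embedding model and a vector store:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Toy stand-in for an embedding: term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

documents = [
    "invoices are stored in the billing service",
    "user sessions expire after thirty minutes",
    "the billing service exposes a REST API",
]

def retrieve(query: str, k: int = 2) -> list:
    q = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

context = retrieve("how does the billing service work")
# The retrieved context is then prepended to the LLM prompt ("augmentation").
prompt = "Answer using only this context:\n" + "\n".join(context)
print(context)
```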
AI Integration & MCP Servers
- Design and implement Model Context Protocol (MCP) servers to securely expose tools, APIs, and data to AI agents.
- Define standards for tool interfaces, access control, auditing, and safety guardrails.
- Enable product teams to onboard AI tools and capabilities using reusable, scalable integration patterns.
Agentic AI & Workflow Enablement
- Architect AI-driven workflows that support collaboration between humans and AI agents.
- Design AI-to-AI (A2A) and AI-to-system interaction patterns.
- Ensure agent behaviors are deterministic, explainable, and aligned with enterprise requirements.
Hands-On Development & Prototyping
- Build proofs-of-concept and production-ready implementations using Python and/or TypeScript.
- Rapidly validate ideas from ideation to deployment.
- Establish reusable frameworks, libraries, and CI/CD pipelines for AI development.
AI Governance, Quality & Safety
- Implement guardrails to minimize hallucinations, unsafe actions, and data leakage.
- Define evaluation and monitoring strategies for AI systems, including prompt regression and RAG accuracy checks.
- Ensure AI solutions comply with enterprise security, privacy, and governance standards.
Developer Enablement & Collaboration
- Partner with Product, Engineering, QE, Performance, and Security teams to deliver AI capabilities.
- Mentor teams on AI design patterns, tooling, and best practices.
- Contribute to internal AI communities through demos, documentation, and knowledge sharing.
Qualifications:
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- Demonstrated expertise in cloud‑native system design, distributed architectures, and enterprise‑scale integrations.
- Proven ability to architect and implement AI-enabled systems, including integrating Large Language Models (LLMs) into production-grade software.
- Strong ownership of architectural decisions, technical direction, and solution delivery across complex, cross-functional initiatives.
- Hands-on experience applying security, observability, and automation best practices within enterprise environments.
- 6–10 years of experience in software architecture and distributed systems.
- 5+ years of experience building Generative AI or LLM-based solutions.
- Practical experience designing and implementing:
- Retrieval-Augmented Generation (RAG) architectures
- Agentic AI systems
- Tool-calling frameworks and AI integration layers
- Proficiency in Python and/or .Net/TypeScript/Node.js.
- Experience working with major cloud platforms such as Azure, AWS, or Google Cloud Platform (GCP).
Preferred Qualifications
- Experience with OpenAI, Azure OpenAI, Anthropic, or similar LLM platforms.
- Familiarity with Model Context Protocol (MCP) or equivalent AI tool-integration frameworks.
- Experience applying AI engineering practices beyond prototyping, including evaluation, reliability, and scalability considerations.
- Ability to translate ambiguous business problems into clear technical architecture and execution plans.
- History of influencing technical standards and mentoring senior engineers or architects.
- Experience with vector databases, embeddings, and retrieval optimisation.
- Experience building AI-enabled developer tooling and CI/CD pipelines.
- Prior experience in enterprise SaaS environments.
Overview:
We're looking for a Full Stack Developer with strong backend expertise who can build, manage, and scale AI-driven products end to end. You'll play a critical role in designing scalable architectures, optimizing performance and cost, and building robust AI and agentic systems.
Responsibilities
1. Architect and build scalable backend systems using FastAPI, PostgreSQL, and Redis.
2. Design, develop, and maintain AI-driven applications, integrating multiple LLMs, APIs, and agentic frameworks.
3. Implement vector databases (pgvector, Qdrant, etc.) for RAG and AI memory systems.
4. Orchestrate multi-agent AI systems with LangChain/LangGraph, including function calling, agent collaboration, and monitoring.
5. Build and integrate RESTful APIs for frontend and external use.
6. Manage DevOps workflows, including CI/CD, cloud deployments (AWS/GCP), server scaling, and logging/monitoring (Sentry).
7. Optimize application cost, latency, and reliability, balancing speed with LLM call efficiency and caching strategies.
8. Collaborate with product, design, and AI teams to translate business requirements into high-performing tech.
9. Maintain documentation and ensure code quality with tests, reviews, and an async-first architecture.
10. Contribute to frontend development (React + TypeScript) when necessary, ensuring seamless API integration and data visualization.
Requirements
Core Skills
• Strong proficiency in Python and FastAPI.
• Experience with PostgreSQL (including pgvector) and SQLAlchemy (async).
• Solid understanding of Redis, RQ (Redis Queue), and caching mechanisms.
• Proven experience integrating LLMs and AI APIs (OpenAI, Anthropic, etc.).
• Hands-on experience with LangChain / LangGraph, RAG pipelines, and agent orchestration.
• Experience working with cloud platforms (AWS / GCP) and managing file storage (S3).
• Familiarity with frontend stacks (React, TypeScript, Tailwind, Zustand).
• Working knowledge of DevOps: Docker, CI/CD pipelines, deployment automation, and observability tools (Sentry, Mixpanel, Clarity).
Bonus / Nice to Have
• Experience building agent monitoring dashboards or AI workflows.
• Prior experience in startup or product-based environments.
• Understanding of LLM cost optimization, token management, and function calling orchestration.
• Familiarity with external API integrations like BrightData, Hunter.io, Adzuna, and Serper.
• Experience building scalable AI products (e.g., chatbots, AI copilots, data agents, or automation tools).
Mindset
• Startup-ready: comfortable working in fast-paced, ambiguous environments.
• Deep curiosity about AI systems and automation.
• Strong sense of ownership and accountability for shipped products.
• Pragmatic and cost-conscious in architectural decisions.
• Excellent communication and documentation skills.