50+ Remote Python Jobs in India
Job Title: Python Development Intern
Company: Honeybee Digital
Location: Remote
Internship Duration: 3 Months
Job Type: Internship
Working Hours
- Full-time: 9:00 AM – 6:00 PM
- Part-time: 9:00 AM – 1:00 PM / 1:00 PM – 6:00 PM
Note: An internship certificate will be provided only on successful completion of the internship.
About the Role
We are looking for a passionate and motivated Python Development Intern who is eager to gain hands-on experience in real-world projects. This role is ideal for candidates interested in backend development, automation, data handling, and API integration.
Key Responsibilities
- Assist in developing applications using Python
- Work on data handling, automation scripts, and backend logic
- Support API development and integration
- Assist in web scraping and data processing tasks
- Debug, test, and optimize existing code
- Collaborate with development and data teams
- Document code and maintain project updates
Requirements
- Basic knowledge of Python programming
- Understanding of data structures and logic building
- Familiarity with libraries such as Pandas, NumPy (preferred)
- Basic understanding of APIs and web frameworks (Flask/Django is a plus)
- Problem-solving mindset and willingness to learn
- Ability to work independently and meet deadlines
Skills You Will Gain
- Hands-on experience in Python development and real projects
- Exposure to automation, FastAPI, and backend systems
- Practical knowledge of data processing and scripting
- Debugging and optimization techniques
- Experience working in a professional development environment
Who Can Apply
- Students pursuing Computer Science, IT, Data Science, or related fields
- Freshers interested in Python development and backend roles
- Candidates looking to build a strong technical portfolio
About Certa
Certa is a leading innovator in the no-code SaaS workflow space, powering the full lifecycle for suppliers, partners, and third parties. From onboarding and risk assessment to contract management and ongoing monitoring, Certa enables businesses with automation, collaborative workflows, and continuously updated insights. Join us in our mission to revolutionize third-party management!
What You'll Do
- Partner closely with Customer Success Managers to understand client workflows, identify quality gaps, and ensure smooth solution delivery.
- Design, implement, and execute both manual and automated tests for client-facing workflows across our web platform.
- Write robust and maintainable test scripts using Python (Selenium) to validate workflows, integrations, and configurations.
- Own test planning for client-specific features, including writing clear test cases and sanity scenarios — even in the absence of detailed specs.
- Collaborate with Product, Engineering, and Customer Success teams to reproduce client-reported issues, root-cause them, and verify fixes.
- Lead or contribute to exploratory testing, regression cycles, and release validations before client rollouts.
- Proactively identify gaps, edge cases, and risks in client implementations and communicate them effectively to stakeholders.
- Act as a client-facing QA representative during solution validation, ensuring confidence in delivery and post-deployment success.
What We're Looking For
- 3–5 years of experience in Software QA (manual + automation), ideally with exposure to client-facing or Customer Success workflows.
- Strong understanding of core QA principles (priority vs. severity, regression vs. sanity, risk-based testing).
- Hands-on experience writing automation test scripts with Python (Selenium).
- Experience with modern automation frameworks (Playwright + TypeScript or equivalent) is a strong plus.
- Familiarity with SaaS workflows, integrations, or APIs (JSON, REST, etc.).
- Excellent communication skills — able to interface directly with clients, translate feedback into testable requirements, and clearly articulate risks/solutions.
- Proactive, curious, and comfortable navigating ambiguity when working on client-specific use cases.
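The risk-based testing mentioned above can be sketched in a few lines of Python. This is a minimal, illustrative example only; the `TestCase` fields and the likelihood/impact scales are assumptions, not Certa's actual QA framework:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_likelihood: int  # 1 (rare) .. 5 (frequent), estimated from defect history
    business_impact: int     # 1 (cosmetic) .. 5 (blocks a client workflow)

    @property
    def risk(self) -> int:
        # Classic risk-based testing heuristic: risk = likelihood x impact
        return self.failure_likelihood * self.business_impact

def prioritize(cases: list[TestCase]) -> list[TestCase]:
    """Order test cases so the riskiest run first in a time-boxed regression cycle."""
    return sorted(cases, key=lambda c: c.risk, reverse=True)

cases = [
    TestCase("login sanity", 2, 5),
    TestCase("report export", 4, 3),
    TestCase("tooltip copy", 3, 1),
]
print([c.name for c in prioritize(cases)])  # riskiest first
```

When a regression window is too short to run everything, ordering by a simple risk score like this makes the trade-off explicit and defensible to stakeholders.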
Good to Have
- Previous experience in a Customer Success, Professional Services, or client-facing QA role.
- Experience with CI/CD pipelines, BDD/TDD frameworks, and test data management.
- Knowledge of security testing, performance testing, or accessibility testing.
- Familiarity with no-code platforms or workflow automation tools.
Perks
- Best-in-class compensation
- Fully remote work
- Flexible schedules
- Engineering-first, high-ownership culture
- Massive learning and growth opportunities
- Paid vacation, comprehensive health coverage, maternity leave
- Yearly offsite, quarterly hacker house
- Workstation setup allowance
- Latest tech tools and hardware
- A collaborative and high-trust team environment
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote
Experience:
3+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI.
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma)
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 3 years of professional experience in backend development.
- Available for full-time engagement.
- Please refrain from applying if you are currently engaged in other projects; we require dedicated availability.
Job Title: Snowflake Platform Administrator
Duration: 6-12 months contract (could be extended upon performance)
Mode: Remote
About the Role
We are looking for a Snowflake Administrator to join our Snowflake Center of Excellence (COE) to manage, secure, and optimize the enterprise Snowflake data platform. The role will focus on platform administration, security governance, and automation while enabling data engineering, analytics, and business teams to effectively leverage Snowflake capabilities.
Key Responsibilities
• Administer and maintain the Snowflake platform, including warehouses, databases, schemas, users, roles, and resource monitors.
• Implement and manage Snowflake security and access governance, including RBAC, network policies, and network rules.
• Manage identity and access integration with Azure Active Directory (Azure AD), including role mapping with Azure AD groups.
• Monitor platform performance, usage, and cost to ensure efficient and reliable operations.
• Manage key Snowflake capabilities including data sharing (consumer and provider), cloning, data recovery, integrations (storage/API/notification), and performance optimization.
• Develop automation scripts using SQL and Python for administrative and operational tasks.
• Create and maintain CI/CD workflows using GitHub Actions for Snowflake deployments.
• Collaborate with data engineers, analysts, and architects to ensure secure and scalable data platform usage.
• Stay up to date with Snowflake product releases, new features, and platform best practices, and proactively evaluate their applicability to the organization.
• Contribute to standards, best practices, and governance frameworks within the Snowflake COE.
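The SQL/Python automation described above might look like the following sketch, which generates Snowflake RBAC grant statements for a typical read-only role. Role, warehouse, database, and schema names are placeholders, and actually executing the statements (e.g. via the Snowflake Python connector) is omitted here:

```python
def rbac_grants(role: str, warehouse: str, database: str, schema: str) -> list[str]:
    """Generate the grant statements for a typical read-only Snowflake role."""
    fq_schema = f"{database}.{schema}"
    return [
        f"CREATE ROLE IF NOT EXISTS {role};",
        f"GRANT USAGE ON WAREHOUSE {warehouse} TO ROLE {role};",
        f"GRANT USAGE ON DATABASE {database} TO ROLE {role};",
        f"GRANT USAGE ON SCHEMA {fq_schema} TO ROLE {role};",
        f"GRANT SELECT ON ALL TABLES IN SCHEMA {fq_schema} TO ROLE {role};",
        # Future grants keep newly created tables covered without re-running grants
        f"GRANT SELECT ON FUTURE TABLES IN SCHEMA {fq_schema} TO ROLE {role};",
    ]

for stmt in rbac_grants("ANALYTICS_READER", "WH_XS", "SALES", "PUBLIC"):
    print(stmt)
```

Generating grants from a template like this keeps role provisioning consistent and reviewable, and the same script can be wired into a GitHub Actions workflow for deployment.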
General Business
• Explore opportunities to leverage AI to improve platform automation and productivity.
Required Experience & Skills:
• 5-8 years of relevant experience in Snowflake administration and platform management.
• Solid understanding of Snowflake architecture, security, features, and performance optimization.
• Experience implementing RBAC, network policies, and network rules in Snowflake.
• Experience integrating Snowflake with Azure AD for role and access management via AD groups.
• Proficiency in SQL and Python scripting.
• Experience with GitHub and GitHub Actions/workflow creation.
• Strong analytical and problem-solving skills.
• Functional domain: FMCH (Fast Moving Consumer Health).
Preferred Additional Skills:
• AI enthusiasm and automation expertise
• Understanding of modern data architectures, including data lakes and real-time processing
• Familiarity with BI tools such as Power BI, Tableau, and Looker
Education & Languages
• Bachelor’s degree in Computer Science, Information Technology, or a similar quantitative field of study.
• Fluent in English.
• Able to function effectively within teams of varied cultural backgrounds and areas of expertise.
About Us
Simbian® is building an Agentic AI platform for cybersecurity. Founded by repeat successful security founders, we have gathered an excellent cohort of employees, partners, and customers. Our mission is to solve security using AI, and our core values are excellence, replication, and intellectual honesty.
Our promise is to make Simbian the best workplace of your career and we believe a small group of thoughtful passionate people can make all the positive difference in the world. To fuel our fast growth, we are seeking an exceptional candidate who shares our core values of excellence (being the world's best at our craft), replication (share your best ideas with others), and intellectual honesty (tell the truth even if it's bitter).
Our AI Agents automate security operations and provide our customers 10x leverage. Our customers include some of the world's largest companies.
Our initial use cases include:
• SOC alert triage and investigation
• Prioritization and classification of vulnerabilities
• AI-based threat hunting
As an Engineering Manager, you will lead a pod of highly skilled engineers responsible for building critical components of Simbian’s platform—from scalable backend services and data pipelines to integrations with security tools and novel AI-driven investigation engines. You’ll be responsible for driving execution, mentoring engineers, and shaping technical direction while working closely with product, AI/ML, and security teams.
This role is ideal for a hands-on leader who thrives in startup environments, is comfortable balancing execution with strategy, and can guide engineers to build reliable, secure, and scalable systems.
Responsibilities
• Lead and mentor a pod of backend, frontend, or platform engineers (depending on pod assignment: e.g., Integrations, Investigation Infra, Threat Hunting, etc.).
• Drive delivery of product and platform features aligned to quarterly OKRs
• Establish engineering best practices for code quality, observability, security, and reliability
• Collaborate with product managers and security SMEs to define technical scope, execution plans, and delivery timelines.
• Provide technical guidance in architecture decisions across areas such as:
  1. Scalable microservices
  2. Security product integrations (EDR, SIEM, CNAPP, etc.)
  3. Data pipelines (historical + real-time event ingestion)
  4. AI/ML systems for reasoning and automation
• Recruit, develop, and retain top engineering talent.
• Ensure pods maintain a high bar for innovation, execution, and collaboration.
Requirements
• 12+ years of professional software engineering experience in the security domain, with at least 3 years leading or managing engineering teams.
• Strong background in building scalable backend systems (Python, Go, or Java preferred).
• Experience with cloud-native architectures (Kubernetes, Postgres, vector databases, OpenSearch, etc.).
• Familiarity with data pipelines (ETL/ELT, orchestration frameworks like Dagster/Airflow, streaming systems).
• Exposure to security products and data (SIEM, EDR, CNAPP, vulnerability management) is a strong plus.
• Track record of leading pods/teams to deliver complex technical projects with measurable outcomes.
• Strong communication skills, with the ability to work cross-functionally with product, AI/ML, and security teams.
• Startup mindset: bias for execution, ability to operate with ambiguity, and eagerness to wear multiple hats.
Nice to Have
• Experience with AI/ML pipelines, LLM integration, or security-focused AI applications.
• Knowledge of SOC processes, MITRE ATT&CK, or incident response workflows.
• Contributions to open-source projects in data, security, or AI.
• Previous experience scaling teams at an early-stage startup.
Benefits
• Competitive salary commensurate with experience
• Generous early-stage equity with significant upside potential
• Annual performance bonuses tied to company and individual goals
Budget: under 90L annually
About koolio.ai
Website: www.koolio.ai
Koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Internship Position
We are looking for a motivated full-time Software Development Intern to join our innovative team. As an intern at koolio.ai, you’ll have the opportunity to work on a next-gen AI-powered platform and gain hands-on experience developing and optimizing backend systems that power our platform. This internship is ideal for students or recent graduates who are passionate about backend technologies and eager to learn in a dynamic, fast-paced startup environment.
Key Responsibilities:
- Assist in the development and maintenance of backend systems and APIs.
- Write reusable, testable, and efficient code to support scalable web applications.
- Work with cloud services and server-side technologies to manage data and optimize performance.
- Collaborate with cross-functional teams to integrate frontend features with backend logic.
- Collaborate with the product and design teams to develop and implement user-friendly web interfaces
- Ensure responsive design and optimize web pages for performance and scalability across devices
- Debug and resolve issues, improving the overall user experience on the platform and ensuring reliability
- Assist in integrating APIs and frontend services with the backend
- Stay up-to-date with the latest trends and suggest improvements to enhance the platform’s functionality
Requirements and Skills:
- Education: Currently pursuing or recently completed a degree in Computer Science, Engineering, or a related field.
- Technical Skills:
- Good understanding of server-side technologies like Python
- Familiarity with REST APIs and database systems (e.g., MySQL, PostgreSQL, or NoSQL databases).
- Exposure to cloud platforms like AWS, Google Cloud, or Azure is a plus.
- Knowledge of version control systems such as Git.
- Strong proficiency in JavaScript frameworks like ReactJS or Redux
- Proficiency in frontend languages such as HTML, CSS, and JavaScript (ES6+)
- Soft Skills:
- Eagerness to learn and adapt in a fast-paced environment.
- Strong problem-solving and critical-thinking skills.
- Effective communication and teamwork capabilities.
- Other Skills: Familiarity with CI/CD pipelines and basic knowledge of containerization (e.g., Docker) is a bonus.
Why Join Us?
- Gain real-world experience working on a cutting-edge platform.
- Work alongside a talented and passionate team committed to innovation.
- Receive mentorship and guidance from industry experts.
- Opportunity to transition to a full-time role based on performance and company needs.
This internship is an excellent opportunity to kickstart your career in software development, build critical skills, and contribute to a product that has a real-world impact.
About AIVOA
AIVOA is building an AI-native Supply Chain Operating System for Life Sciences companies (API & FDF manufacturers).
We are creating an intelligent control layer that connects procurement, production, compliance, and logistics — enabling faster decisions, automation, and real-time visibility across operations.
About the Role
We are looking for a highly driven fresher to join as an AI Engineer and work on building AI-native systems from scratch.
This is a full-stack engineering role where you will:
- Build backend systems using Python (FastAPI)
- Develop frontend interfaces using React + Vite
- Work on AI-powered workflows and automation systems
You will directly contribute to building real-world systems used in regulated industries.
What You’ll Work On
- Backend APIs using FastAPI (Python)
- Frontend applications using React + Vite
- AI-assisted workflows (automation, decision systems)
- Integrating APIs, databases, and AI tools
- Building end-to-end product features (not isolated tasks)
Required Skills
- Strong basics in Python
- Basic understanding of React
- Understanding of APIs and how systems connect
- Basic SQL knowledge
- Strong problem-solving mindset
Good to Have (Optional)
- FastAPI exposure
- React project experience
- Git/GitHub
- Interest in AI tools (ChatGPT, Copilot, etc.)
Who Should Apply
- Freshers serious about becoming AI / Full Stack Engineers
- Builders (projects > certificates)
- People who can learn fast and execute
- Candidates who want startup experience and real ownership
Growth
- Work directly with founders and domain experts
- Build real AI systems
- Fast growth based on performance
What are we looking for?
- You have a good understanding and work experience in AKS, Kubernetes, and EKS.
- You are able to manage multi region clusters for disaster recovery.
- You have a good understanding of AWS stack.
- You have production-level experience with Kubernetes.
- You are comfortable coding/programming and can do so whenever required.
- You have worked with programmable infrastructure in some way: built a CI/CD pipeline, provisioned infrastructure programmatically, or provisioned monitoring and logging infrastructure for large sets of machines.
- You love automating things, sometimes even things that seem impossible to automate - for example, one of our engineers used Ansible to set up his Ubuntu workstation and runs a playbook every time something has to be installed.
- You don’t throw around words such as “high availability” or “resilient systems” without understanding at least their basics, because you know such terms are easy to say but take a fair amount of work to build in practice.
- You love coaching people - about the 12-factor apps or the latest tool that reduced your time of doing a task by X times and so on. You lead by example when it comes to technical work and community.
- You understand the areas you have worked on very well but, you are curious about many systems that you may not have worked on and want to fiddle with them.
- You know that understanding applications and the runtime technologies gives you a better perspective - you never looked at them as two different things.
What you will be learning and doing?
- You will be working with customers trying to transform their applications and adopt cloud-native technologies. The technologies used will be Kubernetes, Prometheus, Service Mesh, Distributed tracing and public cloud technologies or on-premise infrastructure.
- The problems and solutions in this space are continuously evolving, but fundamentally you will be solving problems with the simplest, most scalable automation.
- You will be building open source tools for problems that you think are common across customers and industry. No one ever benefited from re-inventing the wheel, did they?
- You will be hacking on open source projects, understanding their capabilities and limitations, and applying the right tool for the right job.
- You will be educating the customers - from their operations engineers to developers on scalable ways to build and operate applications in modern cloud-native infrastructure.
Senior Project Owner / Project Manager Technology
Department - Technology / Software Development
Work Mode - Work From Home (WFH), Full Time
Experience - Minimum 10 Years (Development Background)
Time Zone - Candidate should be comfortable working in US time zone overlap and attending client calls accordingly.
ROLE SUMMARY
We are looking for a seasoned Senior Project Owner / Project Manager with a strong development foundation to lead our technology initiatives. This role bridges client management and technical execution: you will own end-to-end delivery of multiple concurrent projects while supporting a high-performing remote team.
KEY RESPONSIBILITIES
Project & Delivery Management
- Own and manage multiple concurrent technology projects from initiation to production release
- Define project scope, timelines, milestones, and resource allocation plans
- Distribute tasks effectively across a team of developers, QA, and support engineers
- Track assigned work daily, follow up on progress, and proactively remove blockers
- Ensure all projects meet deadlines and quality benchmarks without compromise
- Participate actively in production activities and take full accountability for live deployments
US Client Management
- Serve as the Technology single point of contact for all assigned US clients
- Attend and lead client calls focused on an ARDEM Technical Solution, including discussions related to future or existing clients (US time zone overlap required)
- Resolve client queries, manage escalations, and ensure high client satisfaction
- Showcase company-developed applications and software demos confidently to clients
- Translate complex client requirements into clear technical deliverables for the team
Team Leadership
- Lead, mentor, and performance-manage a distributed remote team of technical members
- Foster accountability, ownership, and a high-delivery culture within the team
- Conduct sprint planning, stand-ups, retrospectives, and performance reviews
- Identify skill gaps and work with HR/training teams to bridge them
Process & Operations
- Deeply understand ARDEM's internal processes and align project execution accordingly
- Ensure development standards and best practices are followed across all projects
- Manage crisis situations with composure, identify root causes and drive swift resolution
- Coordinate with cross-functional teams including HR, Operations, Training, and QA
- Maintain project documentation, status reports, and risk registers
REQUIRED EXPERIENCE
- 10+ years of total experience in software development and project management
- 5–7 years of hands-on coding experience in one or more technologies listed below
- 2–3 years in a team management or tech lead role overseeing 5+ members
- Proven experience managing multiple simultaneous projects in a remote/WFH environment
- Prior experience working with US-based clients and a strong understanding of US work culture and expectations
TECHNICAL SKILLS
- Python: scripting, automation, data processing, backend services
- JavaScript / Node.js: server-side development, REST APIs, async workflows
- .NET Core: enterprise application development and service integration
- SQL Databases: query optimization, schema design, stored procedures
- Familiarity with CI/CD pipelines, Git workflows, and deployment processes
- Ability to review code, understand architectural decisions, and guide the team technically
SKILLS & COMPETENCIES
- Exceptional verbal and written communication skills in English; client-facing confidence is a must
- Strong crisis management and conflict resolution ability under tight deadlines
- Highly organized with a structured approach to planning, prioritization, and execution
- Self-driven and accountable, capable of operating independently in a remote environment
- Strong presentation skills; able to demo software to non-technical stakeholders
- Empathetic leadership style with the ability to motivate and align diverse team members
QUALIFICATIONS
- Bachelor's or Master's degree in Computer Science
- PMP Certification: Preferred (candidates without PMP must demonstrate equivalent project management rigor)
- Agile / Scrum certifications (CSM, PMI-ACP) are an added advantage
LOCATION PREFERENCE
- Candidates must be based in a Tier-1 city: Mumbai, Delhi NCR, Bengaluru, Hyderabad, Chennai, Pune, or Kolkata
- This is a full-time Work From Home role: reliable internet, a dedicated workspace, and availability during US business hours are mandatory
ABOUT ARDEM
ARDEM Incorporated is a leading Business Process Outsourcing (BPO) and Automation company serving US-based clients across diverse industries. Our Technology Team builds and maintains in-house applications that power data processing pipelines, automation workflows, internal platforms, and domain-specific training modules, all engineered to deliver operational excellence at scale. To our clients, we provide cloud-based platforms to assist in their day-to-day business analytics. Our cloud services focus on finance, logistics, and utility management.
About the Role
Join the Blockchain Backend Infrastructure team and play a central role in building and maintaining a leading blockchain management platform. You'll be responsible for building cutting-edge blockchain infrastructure while implementing high-throughput, real-time, scalable software solutions.
As a Blockchain Engineer, you will be instrumental in the research and integration of blockchain technologies into the platform. Your responsibilities will include collaborating closely with foundations and developers to gain a deep understanding of blockchain protocols and on-chain projects, then applying that knowledge to implement new features within the platform.
You will focus equally on external protocol integration patterns and internal wallet infrastructure. This role serves as a technical bridge between raw on-chain capabilities and the wallet features delivered to customers.
What You'll Do
- Implement modern backend applications and infrastructure in a microservices architecture, using the latest technologies and development practices.
- Deep dive into the latest blockchain technology and become an expert in the fundamentals, protocols, and features of the chains we support.
- Collaborate effectively with developers, engineers, and other roles while demonstrating strong independent problem-solving abilities.
- Contribute to production reliability through on-call participation, incident response, and post-incident follow-ups.
What You'll Bring
- 5+ years of backend development experience in modern languages (Go, Python, JavaScript/TypeScript).
- 3+ years of hands-on blockchain development experience.
- Experience working on high-scale distributed systems.
- Understanding of microservices architecture and API design.
- Knowledge of consensus mechanisms, cryptographic primitives, and distributed systems.
- Strong problem-solving skills, attention to detail, and a collaborative mindset.
Preferred
- Experience building blockchain solutions for enterprise or institutional use cases.
- Understanding of security best practices for smart contracts and blockchain systems.
- Demonstrated ability to apply AI tools in day-to-day development.
- Understanding of MPC, multi-signature wallets, or other advanced cryptographic techniques.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience with Docker, Kubernetes, and Helm.
Location:
- EU preferred, or availability to travel to one of our dev hubs in Europe once per quarter.
Job role: Systems Engineer (L2)
Location: Remote/Bengaluru
Experience: 3-6 years
About the Role:
We are looking for a Systems Engineer (L2) to join our growing infrastructure team. You will be responsible for managing, optimizing, and scaling our cloud communication platform that handles billions of messages and voice calls annually.
Key Responsibilities:
— Design, deploy, and maintain scalable cloud infrastructure — AWS/GCP/Azure.
— Manage and optimize networking components — routers, switches, firewalls, load balancers.
— Handle incident response — monitor systems, identify issues, resolve production problems.
— Implement DevOps best practices — CI/CD pipelines, automation, containerization.
— Collaborate with backend and product teams on system architecture.
— Performance tuning — ensure high availability and reliability of the platform.
— Security management — implement security protocols and compliance standards.
Required Skills:
Technical:
- Linux/Unix administration — strong fundamentals
- Networking — TCP/IP, DNS, BGP, VoIP protocols
- Cloud platforms — AWS/GCP/Azure — minimum 2 years
- DevOps tools — Docker, Kubernetes, Jenkins, CI/CD
- Monitoring tools — Grafana, Prometheus, Kibana, Datadog
- Scripting — Python, Bash, Shell
- Databases — MySQL, PostgreSQL, Redis
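A taste of the Python scripting side of incident response listed above: a minimal log triage sketch that surfaces the noisiest failing service. The log format, service names, and sample lines are invented for illustration:

```python
import re
from collections import Counter

# Assumed format: "<date> <time> <LEVEL> <service>: <message>"
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<service>\S+): (?P<msg>.*)$")

def error_counts(lines: list[str]) -> Counter:
    """Count ERROR lines per service so the noisiest component surfaces first."""
    counts: Counter = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts

sample = [
    "2024-05-01 10:00:01 INFO sms-gw: delivered id=42",
    "2024-05-01 10:00:02 ERROR voice-api: SIP timeout upstream",
    "2024-05-01 10:00:03 ERROR voice-api: SIP timeout upstream",
    "2024-05-01 10:00:04 ERROR sms-gw: provider 5xx",
]
print(error_counts(sample).most_common(1))  # → [('voice-api', 2)]
```

In practice a script like this would read from log files or a monitoring API rather than a hard-coded list, but the pattern (parse, filter, aggregate, rank) is the core of quick incident triage.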
Soft skills:
- Strong problem-solving under pressure
- Good communication — English written and verbal
- Team player — collaborative mindset
Good to Have:
- Experience in telecom/CPaaS/cloud communications industry
- Knowledge of VoIP, SIP, RTP protocols
- AI/ML operations experience
- CCNA/AWS certifications

Mid Size Product Engineering Services Company
This role will report to the Chief Technology Officer
You Will Be Responsible For
* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.
* Leading a team in building a high-performing and scalable SaaS product.
* Conducting code reviews to maintain code quality and follow best practices
* DevOps practice development on promoting automation, including asset creation, enterprise strategy definition, and training teams
* Developing and building microservices leveraging cloud services
* Working on application security aspects
* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.
* Creating a culture of innovation that enables the continued growth of individuals and the company
* Working closely with Product and Business teams to build winning solutions
* Leading talent management, including hiring, developing, and retaining a world-class team
Ideal Profile
* You possess a Degree in Engineering or a related field and have at least 20 years of experience as a Software Engineer, including 10+ years leading teams and at least 4 years building a SaaS / Fintech platform.
* Proficiency in MERN / Java / Full Stack.
* You have led a team in optimizing the performance and scalability of a product
* You have extensive experience with DevOps environments and CI/CD practices and can train teams.
* You're a hands-on leader, visionary, and problem solver with a passion for excellence.
* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.
What's on Offer?
* Exciting opportunity to drive the Engineering efforts of a reputed organisation
* Work alongside & learn from best in class talent
* Competitive compensation + ESOPs
Technology/Technical Team Lead
Department - Technology / Software Development
Work Mode - Work From Home (WFH), Full Time
Experience - Minimum 10 Years (Development Background)
Location - Tier-1 Cities Only (Mumbai, Delhi, Bengaluru, Hyderabad, Chennai, Pune, Kolkata)
Time Zone - Candidate should be comfortable working in US time zone overlap and attending client calls accordingly.
ABOUT ARDEM
ARDEM Incorporated is a leading Business Process Outsourcing (BPO) and Automation company serving US-based clients across diverse industries. Our Technology Team builds and maintains in-house applications that power data processing pipelines, automation workflows, internal platforms, and domain-specific training modules, all engineered to deliver operational excellence at scale. To our clients, we provide cloud-based platforms to assist in their day-to-day business analytics. Our cloud services focus on finance, logistics, and utility management.
ROLE SUMMARY
We are looking for a seasoned Technology/Technical Team Lead with a strong development foundation to lead our technology initiatives. This role bridges client management and technical execution: you will own end-to-end delivery of multiple concurrent projects while supporting a high-performing remote team.
KEY RESPONSIBILITIES
Project & Delivery Management
- Own and manage multiple concurrent technology projects from initiation to production release
- Define project scope, timelines, milestones, and resource allocation plans
- Distribute tasks effectively across a team of developers, QA, and support engineers
- Track assigned work daily, follow up on progress, and proactively remove blockers
- Ensure all projects meet deadlines and quality benchmarks without compromise
- Participate actively in production activities and take full accountability for live deployments
US Client Management
- Serve as the Technology single point of contact for all assigned US clients
- Attend and lead client calls focused on an ARDEM Technical Solution, which may include discussions related to future or existing clients (US time zone overlap required)
- Resolve client queries, manage escalations, and ensure high client satisfaction
- Showcase company-developed applications and software demos confidently to clients
- Translate complex client requirements into clear technical deliverables for the team
Team Leadership
- Lead, mentor, and performance-manage a distributed remote team of technical members
- Foster accountability, ownership, and a high-delivery culture within the team
- Conduct sprint planning, stand-ups, retrospectives, and performance reviews
- Identify skill gaps and work with HR/training teams to bridge them
Process & Operations
- Deeply understand ARDEM's internal processes and align project execution accordingly
- Ensure development standards and best practices are followed across all projects
- Manage crisis situations with composure, identify root causes and drive swift resolution
- Coordinate with cross-functional teams including HR, Operations, Training, and QA
- Maintain project documentation, status reports, and risk registers
REQUIRED EXPERIENCE
- 10+ years of total experience in software development and project management
- 5–7 years of hands-on coding experience in one or more technologies listed below
- 2–3 years in a team management or tech lead role overseeing 5+ members
- Proven experience managing multiple simultaneous projects in a remote/WFH environment
- Prior experience working with US-based clients, with a strong understanding of US work culture and expectations
TECHNICAL SKILLS
- Python: scripting, automation, data processing, backend services
- JavaScript / Node.js: server-side development, REST APIs, async workflows
- .NET Core: enterprise application development and service integration
- SQL Databases: query optimization, schema design, stored procedures
- Familiarity with CI/CD pipelines, Git workflows, and deployment processes
- Ability to review code, understand architectural decisions, and guide the team technically
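The query-optimization skill listed above can be made concrete with Python's built-in sqlite3 module. The sketch below (the table, columns, and index name are hypothetical) shows how adding an index changes the query plan from a full scan to an index search; the exact wording varies between production engines, but the principle is the same:

```python
import sqlite3

# In-memory database with a hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index, the planner scans every row.
before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(before[0][-1])  # e.g. "SCAN orders"

# With an index on the filtered column, the planner switches to an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(plan[0][-1])  # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

Reading query plans before and after a change is the habit the role is asking for; the same workflow applies to `EXPLAIN ANALYZE` in PostgreSQL or MySQL.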
SKILLS & COMPETENCIES
- Exceptional verbal and written communication skills in English; client-facing confidence is a must
- Strong crisis management and conflict resolution ability under tight deadlines
- Highly organized with a structured approach to planning, prioritization, and execution
- Self-driven and accountable, capable of operating independently in a remote environment
- Strong presentation skills; able to demo software to non-technical stakeholders
- Empathetic leadership style with the ability to motivate and align diverse team members
QUALIFICATIONS
- Bachelor's or Master's degree in Computer Science
- PMP Certification: Preferred (candidates without PMP must demonstrate equivalent project management rigor)
- Agile / Scrum certifications (CSM, PMI-ACP) are an added advantage
LOCATION PREFERENCE
- Candidates must be based in a Tier-1 city: Mumbai, Delhi NCR, Bengaluru, Hyderabad, Chennai, Pune, or Kolkata
- This is a full-time Work From Home role: reliable internet, a dedicated workspace, and availability during US business hours are mandatory
Job Title: Data Analyst (AI/ML Exposure)
Experience: 1–3 Years
Location: Mumbai
Job Description:
We are looking for a Data Analyst with strong experience in data handling, analysis, and visualization, along with exposure to AI/ML concepts. The role involves working with structured and unstructured data (SQL, CSV, JSON), building data pipelines, performing EDA, and deriving actionable insights. Candidates should have hands-on experience with Python (Pandas, NumPy), data visualization tools, and basic knowledge of NLP/LLMs. Exposure to APIs, data-driven applications, and client interaction will be an added advantage.
Skills Required: Python, SQL, Data Analysis, EDA, Visualization, APIs
Apply: Share your resume or connect with us.
Job Title: AI Architecture Intern
Company: PGAGI Consultancy Pvt. Ltd.
Location: Remote
Employment Type: Internship
Position Overview
We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
Duration: 6 months
Key Responsibilities:
- AI System Architecture Design: Collaborate with the technical team to design robust, scalable, and high-performance AI system architectures aligned with client requirements.
- Client-Focused Solutions: Analyze and interpret client needs to ensure architectural solutions meet expectations while introducing innovation and efficiency.
- Methodology Development: Assist in the formulation and implementation of best practices, methodologies, and frameworks for sustainable AI system development.
- Technology Stack Selection: Support the evaluation and selection of appropriate tools, technologies, and frameworks tailored to project objectives and future scalability.
- Team Collaboration & Learning: Work alongside experienced AI professionals, contributing to projects while enhancing your knowledge through hands-on involvement.
Requirements:
- Strong understanding of AI concepts, machine learning algorithms, and data structures.
- Familiarity with AI development frameworks (e.g., TensorFlow, PyTorch, Keras).
- Proficiency in programming languages such as Python, Java, or C++.
- Demonstrated interest in system architecture, design thinking, and scalable solutions.
- Up-to-date knowledge of AI trends, tools, and technologies.
- Ability to work independently and collaboratively in a remote team environment
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
Compensation:
- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.
- Stipend: Base is INR 8,000 and can increase up to INR 20,000 depending on performance metrics.
After completion of the internship period, there is a chance to get a full-time opportunity as an AI/ML engineer (Up to 12 LPA).
Preferred Experience:
- Prior experience in roles such as AI Solution Architect, ML Architect, Data Science Architect, or AI/ML intern.
- Exposure to AI-driven startups or fast-paced technology environments.
- Proven ability to operate in dynamic roles requiring agility, adaptability, and initiative.
Join our team as a Data Engineer (ETL & Migration) and be a key contributor in our dynamic, technology-driven environment. Your expertise in building complex data pipelines, migrating and transforming data between systems, and collaborating with stakeholders, project managers, and Data and Business analysts will drive impactful solutions and help tackle exciting data challenges.
What We’re Looking For:
- Minimum 3 years of experience as a Data Engineer, ETL Engineer, or Data Migration Engineer
- Expertise in database design and management, including SQL databases such as SQL Server
- Strong command of Python or similar programming languages for building custom pipelines, data cleaning, transformations, and automation
- Hands-on experience designing and implementing ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes for data integration
- Proficiency in Data validation and Reconciliation
- Hands-on experience working with APIs (Python, Postman)
- Strong knowledge of Data modeling principles and techniques
- Experience with Version control systems (Git)
- Experience with Cloud platforms (AWS, Azure, or Google Cloud)
- Familiarity with PowerShell or Bash scripting is a plus
What You’ll Be Doing:
- Build robust ETL data pipelines using SQL, APIs, and custom import logic
- Create data mappings between source and target systems within the transformation layer
- Develop testing processes to prevent data loss or data corruption
- Work with both relational and non-relational data to achieve full data mapping
- Collaborate with Business and Data analysts to ensure data quality and proper business logic
- Work closely with stakeholders to ensure on-time delivery
- Collaborate with an agile delivery team by working on backlog items and priorities
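The pipeline work above can be sketched in miniature with the standard library alone (the table, column names, and cleaning rules are hypothetical); a real pipeline would add API extraction, validation, reconciliation, and orchestration on top:

```python
import sqlite3

def run_etl(source_rows):
    """Tiny extract-transform-load: clean raw rows, then load them into a target table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, email TEXT)")

    # Transform: normalize whitespace and case, drop rows without an email.
    cleaned = [
        (name.strip().title(), email.strip().lower())
        for name, email in source_rows
        if email and email.strip()
    ]

    # Load: parameterized bulk insert into the target system.
    conn.executemany("INSERT INTO customers (name, email) VALUES (?, ?)", cleaned)
    conn.commit()
    return conn.execute("SELECT name, email FROM customers ORDER BY name").fetchall()

# Extract step stubbed with in-memory rows; in practice this pulls from an API or source DB.
rows = run_etl([("  ada LOVELACE ", "ADA@example.com"), ("ghost", "")])
print(rows)  # [('Ada Lovelace', 'ada@example.com')]
```

The point of the sketch is the separation of concerns: extraction is replaceable, transformation is pure and testable, and loading is a parameterized bulk write.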
What You'll Do
- Build and maintain web & backend systems using Python & Node.js
- Create custom workflows and automations
- Do code reviews, fix bugs, manage databases
- Work with teams to understand and deliver solutions
- Write clean, well-documented code
- Mentor junior developers
What We Need
- 2–6 years of software development experience
- Strong in Python, Node.js & REST APIs
- Experience with workflow/automation tools
- Self-driven, good communicator, team player
Perks of This Role
- Lead your own projects
- Mentor junior devs
- Direct access to stakeholders & leadership
You will own the end to end implementation and operation of AI powered outbound campaigns for our clients. That means taking a client brief, understanding their target market, building the systems that research and engage prospects, and making sure those systems run reliably without hand holding.
This is not a "connect two Zapier steps and call it automation" kind of role. You will be designing multi step workflows where AI agents research companies, enrich data through APIs, personalize messaging intelligently, and deliver outputs into client tools. Each campaign is a custom system with moving parts that need to work together cleanly.
What your weeks will look like:
You will onboard new clients, understand their ICP and outreach goals, then build and deploy the technical infrastructure to execute those campaigns. You will monitor live campaigns, troubleshoot when something breaks, and optimize for better results over time. You will hop on video calls with clients when needed, but the bulk of your time is building and maintaining systems that work.
Specifically, you will:
- Build and manage complex n8n workflows that pull data from multiple sources, enrich it through APIs and AI, and deliver personalized outputs.
- Design Airtable bases that structure client data, automate processing, and integrate with external tools.
- Set up and manage email infrastructure: domains, deliverability, sending sequences.
- Use AI tools (Claude, GPT) to build research and personalization layers into client workflows.
- Handle client onboarding, ongoing communication, and technical troubleshooting.
- Own your campaigns. When something breaks at 2 PM on a Tuesday, you fix it. When a client asks why response rates dropped, you investigate and have an answer.
Who This Is For
You are someone who figures things out. You read documentation, test until it works, and do not give up when the first approach fails. You have strong technical intuition even if you are not a traditional developer. You understand how APIs work, how data flows between systems, and how to debug when something is not behaving as expected.
You follow AI developments closely. Not casually. You know the practical performance difference between Claude Opus 4.6 & GPT 5.4 🙃, you have opinions on which tools are overhyped, and you have probably built something with AI that you are proud of, even if it was just for yourself.
You are hungry. Not in a cliché motivational poster way. You genuinely want to get better at what you do, you take ownership of your work, and you do not need someone checking in on you every few hours to make sure you are making progress.
Core skills (non negotiable):
- n8n: You have built workflows and understand how nodes, data flow, and error handling work.
- AI tools: Regular, meaningful use of Claude or ChatGPT. You know how to prompt effectively and understand the limitations.
- Technical aptitude: You pick up new tools fast and figure things out from documentation, not tutorials.
- English proficiency: Written and spoken. You will be communicating with international clients.
Great to have (you can learn these on the job):
Experience with cold email and outbound systems (Smartlead, Instantly, or similar). Understanding of email deliverability (SPF, DKIM, domain setup). API integration and webhook experience. Data enrichment workflows using tools like Apollo, web scraping, or similar.
Work Setup
Fully remote. Work from anywhere. 5 day week. You manage your own schedule as long as the work gets done and you are available for client calls when needed.
Why This Role Is Different
You are not joining a company to do a traditional tech job that AI may soon make obsolete; you will be working with AI all day long and deploying AI-powered outbound systems for clients. You are building and running AI infrastructure for organizations that most people only see on TV. The problems you solve are genuinely novel. There is no tutorial for most of what we do. You will learn faster here in 3 months than you would in years at most places, because we operate at the intersection of AI, automation, and high-stakes client work.
If you are the kind of person who gets excited about building systems that actually work in the real world, not just demos, this is your role.
How to Apply
If you have a portfolio of n8n workflows, Airtable bases, or any AI projects you have built, include links. We value what you have actually built over what is listed on your resume. Practical proof of work is valued 100x more than just writing cool things in your application.
AVOID WRITING USING AI ANYWHERE IN YOUR APPLICATION. WE WORK WITH AI ALL DAY. ANY APPLICATIONS WRITTEN USING AI WILL NOT BE READ AND WILL BE REMOVED BY OUR AI QUALIFIER AGENT ITSELF 🙂
Summary
We are looking for a motivated Odoo Developer to design, develop, and maintain ERP solutions on both Odoo Community and Enterprise editions. The ideal candidate will have strong Python skills, practical experience with the Odoo framework, and the ability to deliver scalable, customized modules that align with business requirements. Compensation will be offered as a 25% to 50% hike on the candidate’s last drawn salary, based on experience and skill set.
Key Responsibilities
- Develop, customize, and maintain Odoo ERP modules for both Community and Enterprise editions.
- Create new custom modules and enhance existing ones to extend system functionality.
- Write clean, efficient, and well-documented Python code following Odoo development standards.
- Troubleshoot, debug, and resolve technical issues to ensure optimal system performance.
- Collaborate with functional consultants and business stakeholders to deliver scalable ERP solutions.
- Design and implement integrations between Odoo and third-party systems such as APIs, payment gateways, CRM tools, and other business applications.
- Optimize database queries and improve system performance.
- Participate in code reviews, testing, and deployment processes.
Required Skills & Experience
- Minimum 3 years of experience in Odoo development (Community and/or Enterprise editions).
- Strong proficiency in Python and understanding of the Odoo framework.
- Experience with PostgreSQL and database design concepts.
- Knowledge of Odoo ORM, QWeb, XML, and JavaScript.
- Hands-on experience developing and customizing Odoo modules.
- Familiarity with REST APIs and third-party integrations.
- Good debugging and problem-solving skills.
- Understanding of Git or other version control systems.
- Ability to work independently and in a team environment.
Preferred Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Experience working with both Odoo Community and Enterprise editions.
- Exposure to Odoo.sh or cloud deployment environments.
- Basic understanding of business processes such as Accounting, Sales, Inventory, or HR in ERP systems.
- Experience in Agile development methodologies is a plus.
Note
This is an immediate full-time remote requirement. Candidates who are passionate about ERP development and can work with both Odoo Community and Enterprise editions are encouraged to apply.
Python Developer (Performance Optimization Focus)
Experience: 3–5 Years
Location: Remote (India-based candidates only)
Employment Type: Full-time
Role Overview
We are seeking a Python Developer with a strong focus on performance optimization and system efficiency. In this role, you will identify bottlenecks, enhance system performance, and contribute to building scalable, high-performance applications in a Linux-based environment.
Key Responsibilities
- Analyze and troubleshoot performance bottlenecks in applications and systems
- Optimize code, database queries, and architecture for scalability and speed
- Design, develop, test, and maintain robust Python applications
- Work with large datasets and improve data processing efficiency
- Collaborate with cross-functional teams to improve system reliability and performance
- Monitor system performance and implement proactive improvements
- Write clean, maintainable, and efficient code following best practices
Required Skills & Qualifications
- 3–5 years of hands-on experience in Python development
- Strong expertise in performance tuning and optimization techniques
- Experience with debugging and profiling tools
- Solid understanding of data structures and algorithms
- Experience with REST APIs and backend development
- Strong analytical and problem-solving skills
Linux & System Knowledge (Must-Have)
- Comfortable working in Linux/Unix environments
- Command-line proficiency, including:
  - File editing (vi, nano)
  - File permissions (chmod, chown)
  - File downloads (wget, curl)
  - Basic file and directory operations
Basic Python Knowledge (Interview Scope)
- Writing simple scripts and reusable functions
- String manipulation and data handling
- Example task: Count words in a file/string efficiently
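A minimal sketch of the example task above (the tokenization rule is an assumption; an interviewer may define "word" differently, e.g. stripping punctuation):

```python
from collections import Counter

def count_words(text):
    """Return total word count and per-word frequencies, case-insensitively."""
    words = text.lower().split()  # whitespace tokenization; single pass, O(n)
    return len(words), Counter(words)

total, freq = count_words("the quick brown fox jumps over the lazy dog the end")
print(total)                # 11
print(freq.most_common(1))  # [('the', 3)]
```

For a file rather than a string, the same function works on `open(path).read()`, or line by line with a running `Counter` to keep memory flat on large files.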
Good to Have
- Familiarity with AI/ML concepts or tools
- Experience optimizing data-intensive or distributed systems
- Exposure to cloud platforms (AWS, GCP, Azure)
Why Join Us
- Work on performance-critical systems with real-world impact
- Fully remote work environment
- Opportunity to work with modern, scalable technologies
- Collaborative, growth-focused team culture
Brikito — Lead Full-Stack Developer
Job Description
About Brikito
Brikito is an early-stage PropTech startup building a construction management platform for SME developers and contractors. The founder has 7+ years of hands-on construction experience and an MBA from Warwick Business School. We have initial funding, a domain (brikito.com), wireframes ready, and active customer validation underway. We need our first technical leader to take this from wireframes to a live product.
This is a ground-floor opportunity. You will be the first technical hire — the person who makes every architecture decision and writes the first line of production code.
The Role
Title: CTO / Lead Full-Stack Developer (title depends on experience and equity arrangement)
Location: India (remote OK; occasional visits to the Chennai office and to an overseas office planned for Singapore or Dubai)
Type: Full-time
Compensation: ₹1,00,000–₹2,50,000/month + meaningful equity (0.5%–5% depending on role level, vesting over 4 years with a cliff)
Start Date: May 2026
Reports to: Founder/CEO
What You Will Do
Months 1–3: Build the MVP
- Own all technical decisions — architecture, tech stack, database design, hosting
- Build and ship a working MVP with 3 core features: project dashboard, billing/invoicing, and indent/procurement management
- Set up CI/CD pipeline, staging, and production environments
- Integrate payment gateway (Razorpay for India)
- Build both web and mobile-responsive interfaces
- Ship the MVP within 12 weeks
Months 3–6: Iterate and Scale
- Onboard beta users and fix bugs based on real usage
- Build features based on customer feedback (not assumptions)
- Integrate AI capabilities where they add clear user value (e.g., auto-generated progress reports)
- Hire and manage 1–2 junior developers as the team grows
- Set up monitoring, error tracking, and basic analytics
Months 6–12: Lead the Technical Team
- Grow the engineering team to 4–6 people
- Establish code review processes, documentation standards, and sprint rhythms
- Own the technical roadmap alongside the founder
- Participate in investor conversations as the technical co-founder (if CTO-level)
- Make build-vs-buy decisions for new features
Required Skills
Must Have
- 7+ years of professional software development experience
- Strong proficiency in React or Next.js (frontend)
- Strong proficiency in Node.js (backend) — Express, Nest.js, or similar
- PostgreSQL or MySQL — database design, query optimisation, migrations
- REST API design — clean, well-documented APIs
- Cloud deployment — AWS (EC2, RDS, S3) or GCP equivalent
- Expertise in AI tools and integrations - Anthropic, OpenAI, Perplexity, etc.
- Git — clean branching, PR-based workflow
- Has shipped at least one product that real users used — not just academic or internal tools
- Comfortable working independently — no one will tell you what to do step by step
Strongly Preferred
- Previous experience at a startup (Series A or earlier)
- Experience building SaaS or B2B products
- Experience with mobile development (React Native or Flutter)
- Experience integrating payment gateways (Razorpay, Stripe)
- Experience with third-party API integrations (OpenAI, Twilio, etc.)
- Understanding of CI/CD pipelines (GitHub Actions, Docker)
- Basic understanding of construction, real estate, or field operations (not required, but a plus)
Nice to Have
- Experience with TypeScript
- Experience with real-time features (WebSockets, push notifications)
- Familiarity with Figma (to translate wireframes into UI)
- Experience hiring and mentoring junior developers
- Open source contributions or a personal project portfolio
What We Are NOT Looking For
- Someone who needs detailed specifications for every task — we move fast and figure things out together
- Someone who only wants to code and not think about the product — you will be in customer calls and strategy discussions
- Someone who optimises for perfect code over shipping — we ship first, refactor later
- Someone looking for a stable corporate job — this is a startup with all the chaos and excitement that comes with it
What You Get
- Equity ownership in an early-stage company with a large addressable market ($14.9B global construction SaaS)
- Founding team credit — you will be recognised as a technical co-founder if you take the CTO role
- Direct impact — every line of code you write will be used by real customers within weeks
- Technical freedom — you choose the stack, the tools, the architecture
- A founder who understands the domain — you will never have to guess what contractors need because the CEO has built construction projects himself
- Growth path — as we raise funding and scale, you grow into VP Engineering or CTO of a funded company
How to Apply
Send the following:
- A short note (5–10 lines) on why this role interests you and what you'd bring
- Your LinkedIn profile or resume
- One link to something you've built — a live product, a GitHub repo, an app, anything that shows your work
- Your availability — when can you start?
We will respond within 48 hours. The process is:
- 30-minute video call with the founder
- Small paid technical task (8 hours of work, ₹5,000 paid regardless of outcome)
- Final conversation about role, equity, and start date
- Offer within 1 week of first call
Questions?
DM the founder on LinkedIn: https://www.linkedin.com/in/aashiqahamed/
This is not a job posting from HR. This is a founder looking for his first technical partner. If this excites you, reach out.
Strong full-stack Software Engineer profile using Node.js / Python and React
Mandatory (Experience) - Must have 6+ YOE in Software Development using Python OR NodeJS (For backend) & React (For frontend)
Mandatory (Core Skills 1): Must have strong experience in working on Typescript
Mandatory (Core Skills 2): Must have experience in message-based systems such as Kafka, RabbitMQ, or Redis
Mandatory (Core Skills 3): Databases - PostgreSQL & NoSQL databases like MongoDB
Mandatory (Company) - Product Companies Only
Mandatory (Education) - B.Tech or Dual degree (Btech and Mtech or Integrated Msc/MS) from Tier 1 Engineering Institutes (Top 7 IITs, Top 5 NITs, IIIT Bangalore, IIIT Hyderabad, IIIT Allahabad, MNNIT, IIT Dhanbad, BITS Pilani). Candidates from other institutions will not be considered unless they come from top-tier product companies
Mandatory (Note) : This role is a hybrid role (2 days WFO)
About the Role
We are looking for a high-calibre Senior Full Stack Engineer to join a product-focused team, building and iterating on modern applications in a fast-paced environment.
This role goes beyond traditional full-stack development. It is suited for engineers who combine strong technical fundamentals with product thinking, high ownership, and the ability to move quickly while maintaining quality. You will work across the stack, prototype rapidly, and leverage AI tools as a core part of your daily workflow.
The ideal candidate is an independent thinker who can operate with minimal direction, challenge assumptions (including AI-generated outputs), and deliver end-to-end solutions. This is a highly visible role requiring strong communication skills and the ability to engage confidently with senior stakeholders.
Responsibilities
- Design, build, and ship scalable full-stack applications across backend and frontend systems
- Take ownership of features end-to-end, from ideation to production deployment
- Prototype quickly and iterate based on product and user feedback
- Use AI tools (e.g., Copilot, ChatGPT, Cursor, Claude) to accelerate development while applying sound engineering judgment
- Evaluate and improve AI-generated code, ensuring quality, performance, and maintainability
- Contribute to system design, architecture, and technical decision-making
- Work across backend, frontend, and infrastructure layers as needed
- Collaborate with product stakeholders to define requirements and make informed trade-offs
- Identify gaps, inefficiencies, or product issues and proactively suggest improvements.
- Maintain high standards of code quality, testing, and performance
Requirements
- Strong academic background from top-tier engineering institutions (e.g., IITs, IISc, IIITs, BITS, top NITs, or equivalent)
- 6–10+ years of experience in software engineering, with strong full-stack exposure.
- Strong backend engineering experience (Ruby on Rails preferred, or Python, Go, Rust with equivalent depth)
- Solid frontend development experience with modern frameworks (e.g., React or similar).
- Strong understanding of system design, APIs, and scalable architecture
- Proven ability to build and ship production-grade applications end-to-end
- Demonstrated product mindset with the ability to think beyond implementation
- Experience working in product-driven environments with high ownership
- Hands-on experience using AI tools (e.g., Copilot, ChatGPT, Cursor, Claude) in day-to-day development
- Ability to critically evaluate AI-generated output and apply sound engineering judgment
- Strong communication skills with the ability to articulate technical decisions clearly
- High level of autonomy, ownership, and problem-solving capability
Nice to Have
- Experience working in high-growth startups or product-led companies
- Experience contributing across DevOps or infrastructure
- Strong track record of ownership and impact in previous roles
- Exposure to fast-paced, high-performance engineering cultures
Job Title: Full Stack Engineer (Django + Next.js)
We’re looking for a Full Stack Engineer with strong backend fundamentals and solid frontend experience to build scalable web products and APIs.
Must-Have
• Django + DRF (2+ years): Models, serializers, services, API views, migrations, query optimization (select_related / prefetch_related), transaction.atomic, custom managers
• Next.js + React (2+ years): App Router, SSR, client components, dynamic imports, useQuery, responsive UIs with Tailwind
• REST APIs: Auth, permissions, pagination, error handling, CORS, JWT flows
• PostgreSQL: Schema design, indexes, constraints, JSON fields, raw SQL when needed
• Celery / async tasks: Retry logic, idempotency, task chaining
• Git: Clean commits, branching, PR workflow
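Celery itself is not shown here, but the retry-and-idempotency bullet above can be illustrated in plain Python. The function names, the ledger, and the backoff policy are hypothetical; in Celery the equivalents are `autoretry_for` / `retry_backoff` on the task, with the idempotency ledger kept in a database or cache:

```python
import time

_processed = set()  # idempotency ledger; in production this lives in a DB or cache

def charge_invoice(invoice_id):
    """Idempotent task body: a repeat delivery of the same invoice is a no-op."""
    if invoice_id in _processed:
        return "skipped"
    _processed.add(invoice_id)
    return "charged"

def run_with_retry(task, *args, max_retries=3, base_delay=0.01):
    """Retry with exponential backoff, roughly what Celery's autoretry provides."""
    for attempt in range(max_retries + 1):
        try:
            return task(*args)
        except Exception:
            if attempt == max_retries:
                raise  # exhausted: surface the failure to the caller/queue
            time.sleep(base_delay * 2 ** attempt)

print(run_with_retry(charge_invoice, "INV-1"))  # charged
print(run_with_retry(charge_invoice, "INV-1"))  # skipped — safe to redeliver
```

The two pieces belong together: retries make duplicate deliveries inevitable, so every retried task body must be safe to run twice.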
Good to Have
• AI / LLM integrations
• AWS S3 and presigned uploads
• Multi-tenancy
• WebRTC / MediaRecorder
• Docker
• Testing with pytest / Django TestCase / factory_boy
We’re looking for someone who can independently own features end-to-end and write clean, scalable code.
Description :
Job Title : Python Engineer- AI Agents & Code Optimization
Experience : 2+ Years
Employment Type : Full-time
Location : Remote
About the Role :
We are looking for a hands-on Software Engineer to build and improve AI agents that work directly on our production code.
Your core responsibility will be to design and evolve a specialized AI agent that deeply understands our codebase and actively helps make it faster, cleaner, simpler, and cheaper to maintain.
This is not a research role. This is real work on real systems with real business impact.
How We Work :
- Business impact first : Cheaper, Faster, Better
- Simple beats complex always
- Small changes, shipped fast
- You own your work end-to-end
- First question is always : Do we even need this?
- Flat team, zero micromanagement
- Decisions can change; adaptability matters
- No long PRDs: one clear goal → discuss → execute
- Ship, measure, improve, repeat
What You Will Do :
- Build and use AI agents to optimize, refactor, and remove code
- Feed logs, metrics, and performance data back into AI agents
- Profile applications and identify performance bottlenecks
- Optimize SQL queries and database usage
- Improve deployment pipelines and release processes
- Continuously improve internal AI tooling
- Work closely with infrastructure and production systems
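For the profiling responsibility above, a minimal stdlib sketch of surfacing a hotspot with `cProfile` (the function names are illustrative, not from any real codebase):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # deliberately quadratic hotspot
    total = 0
    for i in range(n):
        for j in range(i):
            total += j
    return total

def handler():
    return slow_sum(300)

profiler = cProfile.Profile()
profiler.enable()
handler()
profiler.disable()

# Rank functions by cumulative time; the hotspot surfaces at the top.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report.splitlines()[0])  # summary line: total calls and elapsed time
```

The same report, fed back to an AI agent as text, is exactly the kind of performance data the role describes wiring into optimization loops.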
Tech You Should Be Comfortable With :
You don't need to be an expert in everything, but you should be comfortable working with :
- Linux CLI (Required)
- Python
- PHP
- SQL (MySQL or MariaDB)
- Shell scripting
- Large Language Models (LLMs)
What We're Looking For :
- 2+ years of software engineering experience (or strong hands-on projects)
- Solid understanding of performance optimization
- Experience cleaning up legacy or messy codebases
- Practical profiling and debugging skills
- Comfortable working close to infrastructure and deployments
- Automation-first mindset
- Ability to explain technical decisions clearly and simply in English
Nice to Have :
- Experience building AI agents
- Exposure to large or long-running systems
- CI/CD or deployment automation experience
When You Join :
- Career Growth : You are expected to grow into a tech lead, entrepreneur, or highly skilled specialist
- Bleeding-Edge Tech : Hands-on experience with alpha/beta software, cutting-edge infrastructure, and top-tier hardware
- Global Exposure : Work with a global team and directly with C-level leadership
- Real Impact : Your code directly solves real user problems and moves the company forward
About the Role
Pendo is looking for a Senior Engineering Manager to lead teams building core product capabilities across Analytics, Guides, and Platform services. These are the systems that power how hundreds of millions of end users experience the software.
In this role, you will drive execution against business objectives, direct complex initiatives from kickoff through delivery, and build a team that operates with clarity and focus. You will set clear expectations, delegate effectively, and partner closely with product, design, and senior engineering leadership to keep teams aligned and moving. You default toward action, push teams to deliver value daily, and actively use AI tools as part of how you work.
If you're energized by directing high-impact teams, developing strong engineers, and building a culture where craft and velocity coexist, this role is a great fit.
What You'll Do
Team Leadership & Hiring
- Create an environment where engineers are encouraged to take risks, experiment, and challenge the status quo.
- Lead, mentor, and grow a team of engineers through clear expectations, coaching, and timely feedback.
- Own hiring end-to-end, partnering with recruiting to attract and close top engineering talent.
- Build an inclusive, high-performing team culture grounded in ownership, accountability, and continuous improvement.
Delivery & Execution
- Maintain a high bar for velocity, predictability, and quality.
- Own team execution against product and engineering goals.
- Partner with Product and Design to define roadmaps, scope work, and deliver high-quality outcomes.
- Identify and remove blockers, manage risks, and ensure strong planning and prioritization.
Technical Leadership
- Guide technical direction in partnership with senior engineers and tech leads.
- Shape architecture that drives delivery speed while preserving quality, reliability, and adaptability.
Cross-Functional Collaboration
- Work closely with product, design, infrastructure, and other engineering teams to deliver cohesive customer experiences.
- Align team priorities with broader organizational goals and strategy.
Operational Excellence
- Drive improvements in system reliability, performance, and scalability.
- Establish strong practices around monitoring, incident response, and continuous improvement.
What We're Looking For
- 8+ years of experience in software engineering.
- 3+ years of experience managing and growing engineering teams.
- Proven track record of hiring and building high-performing teams.
- Experience delivering complex, cross-functional initiatives in a product-driven environment.
- Strong technical foundation in backend, distributed systems, or full-stack development.
- Proven ability to lead teams through ambiguity and change while maintaining execution.
- Actively uses AI tools in day-to-day work and helps drive adoption across teams.
- Strong communication, organizational, and stakeholder management skills.
Nice to Have
- Experience working on analytics products, user-facing SaaS platforms, or data-intensive systems.
- Experience managing teams across both frontend and backend domains.
- Familiarity with modern cloud environments and scalable architectures.
- Experience working in distributed teams across multiple time zones.
About Us
We believe the future of software development is AI-native — where engineers operate at a higher level of abstraction and quality remains non-negotiable.
Incubyte is a software craft consultancy where the “how” of building software matters as much as the “what”.
We partner with companies of all sizes, from helping enterprises build, scale, and modernize to helping early-stage founders bring their ideas to life.
Our engineers operate in an AI-native development model, using AI as a collaborator across the SDLC to accelerate development while upholding the discipline of software craftsmanship. Guided by Software Craftsmanship and Extreme Programming practices, we build reliable, maintainable, and scalable systems with speed, without compromising quality. If this way of building software resonates with you, we’d like to talk.
Our Guiding Principles
These principles define how we work at Incubyte. They are non-negotiable.
Relentless Pursuit of Quality with Pragmatism
We build high-quality systems without losing sight of delivery.
Extreme Ownership
We take responsibility end-to-end for decisions, execution, and outcomes.
Proactive Collaboration
We collaborate closely, challenge each other, and solve problems together.
Active Pursuit of Mastery
We continuously improve our craft and raise our bar.
Invite, Give, and Act on Feedback
We seek, give, and act on feedback to get better every day.
Ensuring Client Success
We act as trusted partners and focus on real outcomes, not just output.
Job Description
This is a remote position.
Experience Level
This role is ideal for engineers with 3–15 years of experience and a strong background in building secure, scalable platforms.
We are looking for hands-on DevOps and Backend Engineers with real-world experience in handling production incidents, distributed systems, and modern infrastructure challenges.
What You’ll Do as a Software Craftsperson
- Design and document real-world DevOps and backend scenarios based on production incidents such as outages, scaling challenges, and secure deployments
- Translate real engineering experiences into benchmark tasks that contribute to training next-generation AI systems
- Contribute to building secure, scalable, Kubernetes-native architectures across modern infrastructure environments
- Work across critical engineering domains including CI/CD pipelines, observability, identity & access management, infrastructure-as-code, and backend services
- Collaborate with internal teams to design and simulate realistic engineering workflows and system behaviors
- Apply practical engineering judgment to model distributed systems challenges and improve system resilience and reliability
Requirements
What You’ll Bring
5–15 years of experience in DevOps and Backend Engineering with a strong foundation in building secure, scalable systems.
Strong hands-on expertise in DevOps and backend technologies including:
- Kubernetes, Terraform, and CI/CD pipelines
- Tools such as k9s, k3s (GitLab CI preferred)
- Backend technologies such as Go, Python, or Java
- Experience with Docker, gRPC, and Kubernetes-native services
Demonstrated experience working with secure, offline or air-gapped deployments (highly preferred)
Familiarity with distributed systems and backend architecture, with exposure to ML or distributed pipelines being a plus.
Hands-on experience across multiple core functional areas, with exposure to at least five of the following:
- Identity & Access Management
- Observability (Prometheus + Grafana)
- CI/CD Pipelines
- Keycloak
- GitLab CI
- Terraform OSS
- Kubernetes ecosystem tools
Strong problem-solving ability with real-world experience in handling production systems, incidents, and infrastructure challenges
Ability to work across multiple layers of the stack, from infrastructure to backend services, while ensuring scalability, reliability, and security
Benefits
Life at Incubyte
We are a remote-first company with structured flexibility. Teams commit to shared rhythms during core hours, ensuring smooth collaboration while maintaining autonomy. Twice a year, we come together in person for a co-working sprint and once a year for a retreat - with all travel expenses covered.
Our environment is built for crafters: experimenting with real-world systems, solving complex infrastructure challenges, and contributing to cutting-edge AI initiatives. We are all lifelong learners, and our work is our passion.
Perks
Dedicated learning & development budget
Sponsorship for conference talks
Comprehensive medical & term insurance
Employee-friendly leave policies
Home Office fund
Strong Full stack/Backend engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (AI): Must have hands-on experience using AI coding tools (e.g., Claude, Cursor, GitHub Copilot, Codeium, Deepdcode).
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company) : Product companies (Preferably Top product companies, AI native companies, B2B SaaS)
Mandatory (Stability): Must have at least 2 years of experience in each of the previous companies (if less experience, a proper reason is required)
Mandatory (Note): Candidates who have owned end-to-end product development or worked on app development projects during their graduation will be highly preferred.
Mandatory (Note 2): The role offers a mix of work setups, including remote, Mumbai (in-office), and Bangalore (in-office) opportunities
Key Responsibilities
AI Architecture & Solution Design
- Design end-to-end AI solution architectures, including:
- Generative AI and LLM-based systems
- Retrieval-Augmented Generation (RAG) pipelines
- Agentic and multi-agent workflows
- Define reference architectures and best practices for AI-enabled features within enterprise products.
- Ensure AI solutions integrate seamlessly with existing applications, data, and cloud architectures.
AI Integration & MCP Servers
- Design and implement Model Context Protocol (MCP) servers to securely expose tools, APIs, and data to AI agents.
- Define standards for tool interfaces, access control, auditing, and safety guardrails.
- Enable product teams to onboard AI tools and capabilities using reusable, scalable integration patterns.
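The MCP responsibilities above boil down to three things: declared tool interfaces, access control, and auditing. This is not the MCP protocol itself — just a toy in-process registry (all names hypothetical) showing the shape of those guardrails:

```python
class ToolRegistry:
    """Toy sketch of an MCP-style tool server: registered interfaces,
    per-role access control, and an audit log. Not the real protocol."""

    def __init__(self):
        self.tools = {}       # name -> (callable, allowed agent roles)
        self.audit_log = []

    def register(self, name, func, allowed_roles):
        self.tools[name] = (func, set(allowed_roles))

    def call(self, agent_role, name, **kwargs):
        func, allowed = self.tools[name]
        permitted = agent_role in allowed
        # every attempt is audited, including denied ones
        self.audit_log.append(
            {"agent": agent_role, "tool": name, "args": kwargs, "allowed": permitted}
        )
        if not permitted:
            raise PermissionError(f"{agent_role} may not call {name}")
        return func(**kwargs)

registry = ToolRegistry()
registry.register(
    "lookup_invoice",
    lambda invoice_id: {"id": invoice_id, "total": 120},  # stand-in for a real API
    allowed_roles=["billing_agent"],
)

print(registry.call("billing_agent", "lookup_invoice", invoice_id="INV-7"))
```

Logging denied calls alongside permitted ones is what makes the audit trail useful for the safety reviews the role mentions.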
Agentic AI & Workflow Enablement
- Architect AI-driven workflows that support collaboration between humans and AI agents.
- Design AI-to-AI (A2A) and AI-to-system interaction patterns.
- Ensure agent behaviors are deterministic, explainable, and aligned with enterprise requirements.
Hands-On Development & Prototyping
- Build proofs-of-concept and production-ready implementations using Python and/or TypeScript.
- Rapidly validate ideas from ideation to deployment.
- Establish reusable frameworks, libraries, and CI/CD pipelines for AI development.
AI Governance, Quality & Safety
- Implement guardrails to minimize hallucinations, unsafe actions, and data leakage.
- Define evaluation and monitoring strategies for AI systems, including prompt regression and RAG accuracy checks.
- Ensure AI solutions comply with enterprise security, privacy, and governance standards.
Developer Enablement & Collaboration
- Partner with Product, Engineering, QE, Performance, and Security teams to deliver AI capabilities.
- Mentor teams on AI design patterns, tooling, and best practices.
- Contribute to internal AI communities through demos, documentation, and knowledge sharing.
Qualifications :
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- Demonstrated expertise in cloud‑native system design, distributed architectures, and enterprise‑scale integrations.
- Proven ability to architect and implement AI-enabled systems, including integrating Large Language Models (LLMs) into production-grade software.
- Strong ownership of architectural decisions, technical direction, and solution delivery across complex, cross-functional initiatives.
- Hands-on experience applying security, observability, and automation best practices within enterprise environments.
- 6–10 years of experience in software architecture and distributed systems.
- 5+ years of experience building Generative AI or LLM-based solutions.
- Practical experience designing and implementing:
- Retrieval-Augmented Generation (RAG) architectures
- Agentic AI systems
- Tool-calling frameworks and AI integration layers
- Proficiency in Python and/or .Net/TypeScript/Node.js.
- Experience working with major cloud platforms such as Azure, AWS, or Google Cloud Platform (GCP).
Preferred Qualifications
- Experience with OpenAI, Azure OpenAI, Anthropic, or similar LLM platforms.
- Familiarity with Model Context Protocol (MCP) or equivalent AI tool-integration frameworks.
- Experience applying AI engineering practices beyond prototyping, including evaluation, reliability, and scalability considerations.
- Ability to translate ambiguous business problems into clear technical architecture and execution plans.
- History of influencing technical standards and mentoring senior engineers or architects.
- Experience with vector databases, embeddings, and retrieval optimisation.
- Experience building AI-enabled developer tooling and CI/CD pipelines.
- Prior experience in enterprise SaaS environments.
Overview:
We're looking for a Full Stack Developer with strong backend expertise who can build, manage, and scale AI-driven products end to end. You'll play a critical role in designing scalable architectures, optimizing performance and cost, and building robust AI and agentic systems.
Responsibilities
1. Architect and build scalable backend systems using FastAPI, PostgreSQL, and Redis.
2. Design, develop, and maintain AI-driven applications, integrating multiple LLMs, APIs, and agentic frameworks.
3. Implement vector databases (pgvector, Qdrant, etc.) for RAG and AI memory systems.
4. Orchestrate multi-agent AI systems with LangChain/LangGraph, including function calling, agent collaboration, and monitoring.
5. Build and integrate RESTful APIs for frontend and external use.
6. Manage DevOps workflows, including CI/CD, cloud deployments (AWS/GCP), server scaling, and logging/monitoring (Sentry).
7. Optimize application cost, latency, and reliability, balancing speed with LLM call efficiency and caching strategies.
8. Collaborate with product, design, and AI teams to translate business requirements into high-performing tech.
9. Maintain documentation and ensure code quality with tests, reviews, and async-first architecture.
10. Contribute to frontend development (React + TypeScript) when necessary, ensuring seamless API integration and data visualization.
Requirements
Core Skills
• Strong proficiency in Python and FastAPI.
• Experience with PostgreSQL (including pgvector) and SQLAlchemy (async).
• Solid understanding of Redis, RQ (Redis Queue), and caching mechanisms.
• Proven experience integrating LLMs and AI APIs (OpenAI, Anthropic, etc.).
• Hands-on experience with LangChain / LangGraph, RAG pipelines, and agent orchestration.
• Experience working with cloud platforms (AWS / GCP) and managing file storage (S3).
• Familiarity with frontend stacks (React, TypeScript, Tailwind, Zustand).
• Working knowledge of DevOps: Docker, CI/CD pipelines, deployment automation, and observability tools (Sentry, Mixpanel, Clarity).
Bonus / Nice to Have
• Experience building agent monitoring dashboards or AI workflows.
• Prior experience in startup or product-based environments.
• Understanding of LLM cost optimization, token management, and function calling orchestration.
• Familiarity with external API integrations like BrightData, Hunter.io, Adzuna, and Serper.
• Experience building scalable AI products (e.g., chatbots, AI copilots, data agents, or automation tools).
Mindset
• Startup-ready: comfortable working in fast-paced, ambiguous environments.
• Deep curiosity about AI systems and automation.
• Strong sense of ownership and accountability for shipped products.
• Pragmatic and cost-conscious in architectural decisions.
• Excellent communication and documentation skills.
Description
Company is a fast-growing company founded by former Google Cloud leaders, architects, and engineers. We are seeking candidates with significant experience in Google Cloud to join our team. Our engagements aim to eliminate obstacles, reduce risk, and accelerate timelines for customers transitioning to Google and seeking assistance with data and application modernization. We embed within customer teams to provide strategic guidance, facilitate technology decisions, and execute projects in a collaborative, co-development style.
As a member of our Cloud Engineering team, you will be working with fast-paced innovative companies, leveraging Cloud as the key driver of their transformation. Our clients will look to you as their trusted advisor, someone they can rely on and who will be there to help them along their Google Cloud journey. You will be expected to work a large spectrum of technology and tools including public cloud platforms, AI and LLMs, Kubernetes, data processing systems, databases, and more.
What you will do...
- Working with our clients to understand their requirements and technical challenges. Using this input you will develop a technical design for a solution and communicate the value of your solution to the client team.
- You will work to develop delivery estimates and an estimated project plan.
- You will act as the lead technical member of the implementation project team. You are responsible for making the key technical decisions and keeping delivery on track. You should be able to unblock the team when things are stuck.
- Utilize a broad range of technologies such as Kubernetes, AI, and Large Language Models (LLMs), to develop scalable and efficient cloud applications.
- Stay abreast of industry trends and new technologies to drive continuous improvement in cloud solutions and practices.
- Work closely with cross-functional teams to deliver end-to-end cloud solutions, from conceptualization to deployment and maintenance.
- Engage in problem-solving and troubleshooting to address complex technical challenges in a cloud environment.
What we need...
- 5+ years of experience working in a Software Engineering capacity
- Excellent knowledge and experience with Python, and preferably additional languages such as Go
- Strong critical thinking skills, and a bias towards problem solving
- Familiarity with implementing microservice architectures
- Fundamental skills with Kubernetes. You should be familiar with packaging and deploying your applications to k8s
- Experience building applications that work with data, databases, and other parts of the data ecosystem is preferred
- Familiarity with Generative AI workflows, frameworks like Langchain, and experience with Streamlit are all highly desirable, but at a minimum you should have a willingness to learn
- Experience deploying production workloads on the public cloud - either GCP or AWS
- Experience using CI/CD tools such as GitHub Actions, GitLab, etc
- Able to work with new tools and technologies where you may not have prior experience
- Comfortable with being on video in meetings internally and with clients
- Strong English communications skills
We are a fully remote company and offer competitive compensation and benefits.
We are looking for a skilled ML Engineer with 3–5 years of experience in building and deploying production-grade AI solutions, particularly around LLMs, RAG systems, and agentic AI frameworks. The role involves designing end-to-end ML architectures, optimizing models at scale, and delivering client-ready AI solutions. You will collaborate closely with stakeholders, mentor junior engineers, and drive AI projects from experimentation to production.
What will you need to be successful in this role?
Core Technical Skills
• Strong hands-on experience with Python for ML/AI (NumPy, Pandas, Scikit-learn, PyTorch/TensorFlow)
• Proven experience deploying production LLM applications with 1M+ tokens processed
• Advanced prompt engineering expertise including ReAct, meta-prompting, and function calling
• Production experience with RAG systems including hybrid search and re-ranking
• Deep understanding of embedding models and vector databases at scale
• Experience with agentic AI frameworks (LangGraph, CrewAI, or AutoGen)
• Strong knowledge of LLM evaluation frameworks (RAGAS, LLM-as-judge patterns)
• Experience implementing multi-agent systems and orchestration
• Proficiency with cloud ML platforms (AWS SageMaker, Azure ML, or Vertex AI)
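The hybrid search bullet above combines lexical and vector retrieval before re-ranking. A dependency-free sketch of the idea (the 2-D "embeddings" and documents are invented for illustration; production systems would use real embedding models and a reranker):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def keyword_score(query, text):
    # fraction of query terms present in the document (crude lexical signal)
    q = set(query.lower().split())
    return len(q & set(text.lower().split())) / len(q)

def hybrid_search(query, query_vec, docs, alpha=0.5):
    """Blend lexical and vector scores, then return documents re-ranked
    by the combined score (highest first)."""
    scored = []
    for text, vec in docs:
        score = alpha * keyword_score(query, text) + (1 - alpha) * cosine(query_vec, vec)
        scored.append((score, text))
    return [text for _, text in sorted(scored, reverse=True)]

docs = [
    ("refund policy for orders", [0.9, 0.1]),
    ("shipping times by region", [0.2, 0.8]),
]
ranked = hybrid_search("refund policy", [1.0, 0.0], docs)
print(ranked[0])  # → "refund policy for orders"
```

`alpha` is the knob interviewers usually probe on: pure vector search misses exact-term matches (IDs, product names), pure keyword search misses paraphrases, and the blend covers both.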
Advanced Capabilities
• Experience with model fine-tuning (LoRA, QLoRA, PEFT, instruction tuning)
• Knowledge of knowledge graphs and graph-based RAG implementations
• Understanding of model hosting, inference optimization, and cost management
• Experience with MLOps pipelines, CI/CD for ML, and model versioning
• Ability to architect end-to-end ML solutions from data ingestion to deployment
• Experience with data pipelines and ETL for ML workflows
• Proficiency in containerization and orchestration (Docker, Kubernetes)
Client Engagement & Delivery
• Experience presenting technical solutions to clients and stakeholders
• Ability to translate business requirements into technical ML solutions
• Track record of delivering client POCs and production implementations
• Experience creating technical documentation and implementation guides
Good to have
• Experience hosting private LLMs (7B-13B models on-premises or cloud)
• Knowledge of graph databases (Neo4j) and graph neural networks
• Experience with streaming and real-time ML inference
• Published research papers or contributions to open-source ML projects
• DeepLearning.AI certifications in Agentic AI, RAG, or Finetuning
• AWS/Azure ML certifications or working towards them
Competencies
• Excellent verbal and written communication skills
• Strong mentoring ability for junior ML engineers
• Self-driven with ability to work independently on complex problems
• Excellent problem-solving skills with systematic debugging approach
• Proactive ownership of projects from ideation to deployment
• Ability to stay current with rapidly evolving AI/ML landscape
• Excellent academic record – B.E./B.Tech, MCA
The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.
What You’ll Own
- Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
- Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
- The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
- Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
- Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.
The Stack You’ll Command
- Languages: Expert-level SQL (CTE, Window Functions, Tuning) and Production Python.
- Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
- Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
- Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
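The CTE and window-function expertise listed above can be illustrated with stdlib `sqlite3` (the `lab_results` table is a made-up example, not a real schema): a classic "latest result per patient" query, the kind of set-based transformation the role treats as a gold standard.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE lab_results (patient_id INTEGER, taken_on TEXT, value REAL);
    INSERT INTO lab_results VALUES
        (1, '2024-01-05', 5.2),
        (1, '2024-03-09', 4.8),
        (2, '2024-02-11', 6.1);
""")
# ROW_NUMBER() inside a CTE picks each patient's most recent row
# without a correlated subquery or a self-join.
rows = conn.execute("""
    WITH ranked AS (
        SELECT patient_id, taken_on, value,
               ROW_NUMBER() OVER (
                   PARTITION BY patient_id ORDER BY taken_on DESC
               ) AS rn
        FROM lab_results
    )
    SELECT patient_id, value FROM ranked WHERE rn = 1
    ORDER BY patient_id
""").fetchall()
print(rows)  # → [(1, 4.8), (2, 6.1)]
```

The same pattern scales to the SCD and dimensional-modeling work described above: partition by the business key, order by effective date, keep `rn = 1`.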
Who You Are
- Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
- Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
- Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
- Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
Bonus Points for:
- Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
- Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
- Search Experts: Experience with near-real-time indexing via Elasticsearch.
To process your resume for the next process, please fill out the Google form with your updated resume.
Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7
Details: https://forms.gle/FGgkmQvLnS8tJqo5A

AI & ML (45 Days – Live Hands‑On)
Program Fee: ₹25,000
Duration: 45 Days
Mode: Hybrid (Online Sessions + Live Lab Access)
Eligibility: Freshers, Final‑Year Students, and Career Switchers
About the Internship
This intensive 45‑day internship program is designed for freshers who want to build strong, industry‑relevant skills in Cloud Infrastructure, Cybersecurity, and AI/ML model development. The program offers live production hands‑on training, allowing interns to work on real-world architectures, security workflows, and AI/ML deployments.
Participants will receive end‑to‑end exposure to modern cloud platforms, DevOps practices, security operations, and machine learning deployment, making them job‑ready for roles like:
- Cloud/Infra Engineer
- DevOps Engineer
- Security Analyst
- AI/ML Engineer
- Site Reliability Engineer (SRE)
Key Highlights
- 45 days of practical, mentor-led training
- Live production-style projects and deployments
- Hands‑on experience with:
- AWS / Azure Cloud
- Terraform, CI/CD, Docker, Kubernetes
- Security Hardening & IAM
- Python ML pipelines & model deployment
- Architect & deploy real systems using best practices
- Build portfolio-ready projects
- Receive an industry-recognized Internship Certificate
What You Will Learn
1️⃣ Cloud Infrastructure & DevOps
- Cloud fundamentals (AWS/Azure/GCP)
- Linux administration & scripting
- VPC, Subnets, Routing, NAT, Firewalls
- EC2 provisioning & autoscaling
- Load balancing & High Availability
- Terraform for Infrastructure as Code
- CI/CD pipelines using Jenkins / GitHub Actions
- Docker containerization & Kubernetes basics
2️⃣ Cybersecurity & Cloud Security
- IAM roles, policies, access control
- Server security & hardening
- SSL/TLS, encryption, key management
- Secure VPC & subnet design
- Threat detection & logging
- Secrets management
- Network segmentation & firewall best practices
3️⃣ AI & Machine Learning
- Python for ML
- Supervised and unsupervised algorithms
- Data preprocessing & model training
- Model evaluation & optimization
- Build ML inference APIs using FastAPI/Flask
- Containerize and deploy ML models to cloud
- Integrate monitoring for ML workflows
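The train/evaluate loop in the curriculum above can be sketched without any ML library — here as a toy nearest-centroid classifier on an invented 2-D dataset (real coursework would use scikit-learn, but the steps are the same: fit, predict, score):

```python
import math

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def nearest_centroid_fit(X, y):
    """Training step: compute one centroid per class label."""
    classes = {}
    for features, label in zip(X, y):
        classes.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in classes.items()}

def predict(model, point):
    # inference step: pick the class whose centroid is closest
    return min(model, key=lambda lbl: math.dist(model[lbl], point))

def accuracy(model, X, y):
    # evaluation step: fraction of correct predictions
    hits = sum(predict(model, p) == t for p, t in zip(X, y))
    return hits / len(y)

# Toy dataset: two well-separated clusters.
X_train = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]]
y_train = ["low", "low", "high", "high"]
model = nearest_centroid_fit(X_train, y_train)

print([predict(model, p) for p in [[0.0, 0.0], [1.1, 1.1]]])  # → ['low', 'high']
print(accuracy(model, X_train, y_train))                      # → 1.0
```

Wrapping `predict` in a FastAPI/Flask route, as the curriculum's next bullet describes, turns this into the inference API interns deploy.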
✅ Live Hands‑On Projects
Interns will work on real-world, production-grade projects such as:
- Deploy a secure 3‑tier web application on cloud
- Automate infra provisioning using Terraform
- Build CI/CD pipelines for automated deployments
- Harden servers & configure security groups, IAM
- Develop and deploy an ML model as a cloud API
- Create monitoring dashboards with Prometheus/Grafana
- End-to-end system deployment with logging and alerting
Each intern will complete a Capstone Project and present it during the final evaluation.
✅ Internship Deliverables
- Internship Completion Certificate
- 3+ Production‑level projects
- GitHub portfolio with all code and deployments
- Cloud & ML documentation
- Resume enhancement and guidance
- Career mentoring + interview preparation
✅ Who Should Apply
This internship is ideal for:
- Fresh graduates
- Final‑year engineering or IT students
- BSc, BCA, MCA, B.Tech learners
- Professionals switching careers to Cloud/DevOps/AI
- Anyone seeking hands‑on, real‑time industry experience
✅ Program Fee
₹25,000/- (includes training, labs, live‑project access, certificate, and mentorship)
✅ Certificate Provided
All participants will receive a verified Internship Certificate, including:
- Candidate Name
- Internship Duration & Dates
- Skills Covered
- Project Evaluation Score
- Authorized Signatory & Company Seal
Strong Senior Backend Engineer profiles
Mandatory (Experience 1) – Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems
Mandatory (Experience 2) – Must have strong backend development experience using one or more frameworks (FastAPI/Django for Python, Spring for Java, or Express for Node.js).
Mandatory (Experience 3) – Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework
Mandatory (Experience 4) – Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization
Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices
Mandatory (Domain) – Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)
Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
Mandatory (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS) are highly preferred
We are Hookux, a forward-thinking company seeking a skilled Full Stack Developer to join our team. You will work on a variety of exciting projects that require problem-solving, innovation, and scalability. One such project is a stock market and crypto investing simulation platform that teaches children financial skills through gamified competition.
Key Responsibilities:
- Develop and maintain robust, scalable, and efficient front-end and back-end systems.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Design and implement API endpoints and server-side logic.
- Work closely with the design and product teams to ensure the technical feasibility of UI/UX designs.
- Optimize the application for maximum speed and scalability.
- Write well-documented, clean code.
- Troubleshoot and debug applications.
- Stay up-to-date with emerging technologies and industry trends.
Technical Skills & Experience:
- Proficient in JavaScript/TypeScript, with expertise in React.js for front-end development.
- Strong experience with Node.js, Express.js, or other backend technologies.
- Familiarity with database technologies such as MongoDB, PostgreSQL, or MySQL.
- Experience with RESTful APIs and third-party integrations.
- Knowledge of cloud platforms like AWS, Azure, or Google Cloud.
- Proficient in version control (e.g., Git) and collaboration tools.
- Experience with agile methodologies and continuous integration/deployment (CI/CD).
Bonus Skills:
- Experience with React Native for mobile app development.
- Familiarity with blockchain technology or cryptocurrency-related platforms.
- Experience with containerization (e.g., Docker, Kubernetes).
- Knowledge of testing frameworks and tools.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- years of experience in full stack development.
- Ability to manage multiple priorities and work independently as well as in a team environment.
Benefits:
- Competitive salary and performance bonuses.
- Opportunities for career growth and learning.
- Flexible working hours and remote working options.
Mail your CV and portfolio to hr@hookux.com
Job Title: Software Developer (Contractor)
Location: Remote, Up to 1-year contract
Compensation: Hourly
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
Qualifications:
· Completed a B.S. or higher degree in Computer Science or Computer Engineering.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
AI Lead (Backend Systems & Architecture)
This is not a feature-delivery role. This is an architecture, ownership, and AI systems leadership role.
At Techjays, we build production-grade AI platforms for global clients. We are looking for an AI Lead with strong backend engineering expertise—someone who can design, scale, and take complete ownership of intelligent systems end-to-end.
You will operate at the intersection of backend engineering, distributed systems, and applied AI, driving both technical direction and execution.
What You’ll Do
- Architect and scale backend systems powering AI-driven applications
- Design and implement AI workflows such as RAG pipelines, agents, and LLM integrations
- Own systems end-to-end: architecture, development, deployment, and scaling
- Build reliable, high-performance distributed systems
- Integrate and optimize LLMs (Claude, GPT, etc.) for real-world use cases
- Lead backend and AI initiatives with strong technical ownership
- Ensure performance, scalability, observability, and cost efficiency
- Mentor engineers and raise the technical bar across teams
- Collaborate with product and AI teams to build AI-native solutions
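The retrieval step of a RAG pipeline mentioned above can be sketched with toy bag-of-words vectors standing in for real embeddings. This is a hedged illustration: the documents, the `embed` stand-in, and the cosine-similarity ranking are all assumptions, and no specific vector database or LLM API is implied:

```python
# Toy RAG retrieval sketch: rank documents by cosine similarity of
# bag-of-words vectors; the top hit would be injected into an LLM prompt.
import math
from collections import Counter

def embed(text):
    """Stand-in for a real embedding model: word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "invoices are billed monthly",
    "the api gateway handles authentication",
    "vector databases store embeddings",
]
top = retrieve("how does authentication work in the api", docs)
# top[0] is the gateway/authentication document
```

In production the count vectors become dense embeddings in a vector store (Pinecone, FAISS, Weaviate), but the retrieve-then-generate shape of the pipeline is unchanged.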
What We’re Looking For
- Proven experience in architecting and scaling backend systems end-to-end
- Strong expertise in Python (Django / Flask / FastAPI)
- Deep understanding of distributed systems and system design
- Hands-on experience with AWS or GCP in production environments
- Solid experience working with LLMs (Claude, GPT, etc.)
- Strong knowledge of:
- Retrieval-Augmented Generation (RAG)
- Vector databases (Pinecone, FAISS, Weaviate, etc.)
- Experience in building and managing microservices architectures
- Ability to lead teams, mentor engineers, and drive technical excellence
- Strong problem-solving skills with an ownership mindset
Nice to Have
- Experience building AI agents or autonomous systems
- Familiarity with real-time data systems or streaming (Kafka, etc.)
- Understanding of MLOps and AI system lifecycle
- Experience optimizing AI systems for latency, cost, and scalability
Who You Are
- You think in systems, not just features
- You take full ownership of what you build
- You are comfortable in fast-moving, ambiguous environments
- You stay updated with the latest advancements in AI and backend technologies
This role is ideal for someone who wants to lead, build, and scale AI-powered backend systems in production while driving real-world impact.
Job Title: Software Developer
Location: Remote
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
Qualifications:
· Completed a B.S. or higher degree in Computer Science or Computer Engineering.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
Description
Join company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of company's mission to streamline hiring solutions.
Requirements:
- 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
- Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
- Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
- Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
- Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
- Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.
Roles and Responsibilities:
- Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
- Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
- Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
- Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
- Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
- Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
- Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
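The "robust logging" part of the monitoring responsibility above is often implemented as structured (JSON) log lines that downstream alerting can parse. As a hedged, standard-library-only sketch (logger name and fields are illustrative):

```python
# Structured-logging sketch: emit each log record as one JSON object so
# log aggregators and alerting rules can filter on fields, not text.
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

logger = logging.getLogger("orders")       # illustrative service name
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.warning("payment retry scheduled")  # emits one JSON line to stderr
```

Production setups usually add timestamps, trace IDs, and exception info to the payload, and ship the lines to a collector rather than stderr.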
Budget:
- Job Type: payroll
- Experience Range: 1–15 years
Senior Quality Engineer – AI Products
Full-time
Remote
Requirements
● 3-7 years of experience in software quality engineering, preferably in SaaS environments with a platform or infrastructure focus.
● Strong demonstrated experience testing distributed systems, APIs, data pipelines, or cloud-based infrastructure.
● Experience designing and executing test plans for AI/ML systems, data pipelines, or shared platform services.
● Familiarity with AI/LLM infrastructure concepts such as retrieval-augmented generation (RAG), vector search, model routing, and observability.
● Strong demonstrated proficiency in Linux distributions and CLI-based testing, including log file analysis and other troubleshooting tasks.
● Experience with AWS or other major cloud platforms.
● Basic Python/Shell scripting knowledge with ability to edit existing scripts and create new automation for pipeline validation.
● Advanced skills with API and SQL testing methodologies.
● Familiarity with test management tools such as TestRail; experience with Qase is a plus.
● Demonstrated experience leveraging Version Control Systems with a focus on GitHub.
● Experience with testing tools: Jira, Sentry, DataDog.
● Strong understanding of Agile/Scrum methodologies.
● Proven track record of mentoring junior engineers and contributing to process improvements.
● Excellent analytical and problem-solving abilities.
● Strong communication skills with ability to present to both technical and non-technical stakeholders.
● Proficiency in English (C1-C2 level).
● Most importantly: The courage to be vocal about quality concerns, platform risks, and testing impediments.
Preferred Qualifications
● Experience with AI/ML evaluation frameworks or tools (e.g., LLM-as-judge, Ragas, custom eval harnesses).
● Hands-on experience with document parsing, OCR, or unstructured data pipelines.
● Experience with observability tooling (e.g., Datadog, Grafana, OpenTelemetry) from a QA perspective.
● Experience testing SaaS products in regulated industries (such as PCI-compliant environments).
● Basic understanding of containerization, Kubernetes, and CI/CD pipelines (Jenkins, CircleCI).
● Experience with microservice architectures and distributed systems.
● Knowledge of basic non-functional testing (security, performance) with emphasis on AI-specific concerns.
● Background in security or compliance testing for AI systems.
● Certifications such as ISTQB or CSTE.
● Experience working in legal technology, fintech, or professional services software.
● Familiarity with AI-assisted testing tools and leveraging LLMs as a productivity-boosting tool.
● Experience evaluating and implementing new QE tools and processes
About The Nexora Group Inc.
The Nexora Group Inc. is a technology-driven organization focused on building intelligent digital solutions using modern software engineering and artificial intelligence technologies. Our teams work on projects involving data-driven applications, automation systems, and AI-powered tools designed to solve real-world business challenges.
We are looking for motivated and enthusiastic Python Developer Interns with an interest in Artificial Intelligence who want to gain practical experience working on live development projects.
Internship Responsibilities
- Assist in developing backend applications using Python
- Work on AI-related modules such as machine learning models, data processing pipelines, and automation tools
- Write clean, scalable, and well-documented code
- Support the development of APIs and backend services
- Participate in debugging, testing, and performance optimization
- Collaborate with development teams on project tasks and deliverables
- Contribute to research and implementation of AI/ML solutions
Required Skills
- Basic understanding of Python programming
- Familiarity with data structures and algorithms
- Interest in Artificial Intelligence and Machine Learning
- Basic knowledge of NumPy, Pandas, or similar Python libraries
- Understanding of REST APIs is a plus
- Strong problem-solving skills
- Ability to learn quickly and work in a collaborative environment
Preferred Qualifications
- Students or recent graduates in Computer Science, IT, Data Science, or related fields
- Basic knowledge of Machine Learning concepts
- Experience with Git or version control systems is beneficial
- Familiarity with Flask, Django, or FastAPI is a plus
What Interns Will Gain
- Hands-on experience working on real-world development projects
- Exposure to AI and machine learning development workflows
- Mentorship from experienced developers
- Opportunity to build a strong portfolio with practical project experience
- Internship completion certificate based on performance and participation
About the Internship
The Nexora Group Inc. is looking for enthusiastic and motivated interns who want to build practical experience in Data Science and Artificial Intelligence. This internship is designed to provide hands-on exposure to real-world datasets, machine learning techniques, and AI-driven problem solving.
Interns will work closely with our technical team to analyze data, build predictive models, and explore AI tools that support data-driven decision-making.
Key Responsibilities
- Collect, clean, and preprocess structured and unstructured datasets
- Perform exploratory data analysis (EDA) to identify trends and patterns
- Develop machine learning models using Python-based libraries
- Assist in building AI-powered data analysis workflows
- Create dashboards, reports, and visualizations to communicate insights
- Work with tools such as Python, Pandas, NumPy, and visualization libraries
- Collaborate with team members on real-world data science projects
- Document project findings and maintain clear technical reports
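The exploratory-analysis step above can be illustrated with nothing but the standard library. This is a hedged sketch on a made-up dataset (the readings and the 2-sigma threshold are illustrative, not from any real project):

```python
# Minimal EDA sketch: summary statistics plus a simple 2-sigma outlier
# check on a toy dataset, using only the standard library.
import statistics

readings = [10.1, 9.8, 10.3, 9.9, 10.0, 25.0]  # toy sensor data

mean = statistics.mean(readings)
sigma = statistics.pstdev(readings)  # population standard deviation
outliers = [x for x in readings if abs(x - mean) > 2 * sigma]
# the 25.0 reading falls outside mean ± 2*sigma and is flagged
```

In practice interns would do this with Pandas (`describe()`, boxplots in Matplotlib), but the reasoning, summarize first, then hunt for anomalies, is the same.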
Required Skills
- Basic knowledge of Python programming
- Understanding of data analysis and statistics
- Familiarity with Machine Learning concepts
- Knowledge of libraries such as Pandas, NumPy, Matplotlib, or Scikit-learn
- Strong analytical and problem-solving skills
- Good communication and documentation skills
Preferred Qualifications
- Students or recent graduates in Computer Science, Data Science, Statistics, Mathematics, or related fields
- Basic understanding of Artificial Intelligence concepts
- Familiarity with Jupyter Notebook or Google Colab
- Interest in working with real-world datasets and analytics tools
What You Will Gain
- Hands-on experience with Data Science and AI projects
- Mentorship from experienced professionals
- Internship completion certificate
- Opportunity to build portfolio projects
- Exposure to real-world industry workflows
Highlights - Candidate's current location should be Bangalore
Total Experience: 6–12 years
Joining Period: within 30 days
GCP BigQuery expert, GCP certified
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
Job Summary
We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.
Key Responsibilities
ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python.
Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards.
API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
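The validation and quality-assurance responsibility above typically starts with row-level schema and range checks before statistical anomaly detection. As a hedged sketch (the field names, types, and year bounds below are illustrative, not the project's real schema):

```python
# Hedged sketch of row-level schema + range checks for an ETL step.
# REQUIRED maps each expected field to its expected Python type.

REQUIRED = {"entity_id": str, "year": int, "value": float}

def validate(row):
    """Return a list of validation errors for one ingested row."""
    errors = []
    for field, ftype in REQUIRED.items():
        if field not in row:
            errors.append(f"missing field: {field}")
        elif not isinstance(row[field], ftype):
            errors.append(f"bad type for {field}: {type(row[field]).__name__}")
    if not errors and not (1900 <= row["year"] <= 2100):
        errors.append(f"year out of range: {row['year']}")
    return errors

good = {"entity_id": "country/IND", "year": 2023, "value": 1.4e9}
bad = {"entity_id": "country/IND", "year": "2023"}
# validate(good) is empty; validate(bad) reports the missing 'value'
# field and the string-typed 'year'
```

At pipeline scale these checks run inside the Dataflow/Beam job, with failures routed to a dead-letter sink for triage rather than raised inline.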
Qualifications and Skills
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
Experience: 6+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.
Core Competencies:
Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)
Must Have - GCP Certification
Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
Experience with data validation techniques and tools.
Familiarity with CI/CD practices and the ability to work in an Agile framework.
Strong problem-solving skills and keen attention to detail.
Strong Full-Stack/Backend Engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (AI): Must have hands-on experience using AI coding tools (e.g., Claude, Cursor, GitHub Copilot, Codeium, Deepdcode)
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company): Product companies (B2B SaaS preferred)
Role Overview
We are looking for a QA Automation Engineer who can leverage AI-driven testing approaches to improve automation coverage, test reliability, and data generation.
The ideal candidate should have strong experience in backend-heavy automation testing, modern automation frameworks, and using AI tools to generate test cases, maintain test scripts, and create synthetic data for testing.
Key Responsibilities
- Design and develop automated test frameworks for backend and API-heavy applications.
- Use AI tools to generate test scripts from requirements (e.g., Gherkin/Cucumber-based test generation).
- Implement and maintain self-healing test automation frameworks that adapt to UI changes.
- Develop automated tests using Playwright, Appium, and other modern automation tools.
- Create synthetic test data using AI while ensuring PII compliance.
- Perform backend stress testing and API validation.
- Work closely with engineering teams to ensure product quality and release readiness.
- Continuously improve test coverage, test reliability, and automation efficiency.
Must-Have Skills
- 4+ years of experience in QA Automation
- Strong experience in automation testing frameworks
- Hands-on experience with Playwright for web automation
- Experience with Appium for mobile automation
- Proficiency in Python for test scripting and data generation
- Experience writing BDD-style test cases (Gherkin / Cucumber)
- Experience in API testing and backend automation
- Familiarity with AI-assisted test generation tools
- Strong knowledge of CI/CD pipelines and automated testing workflows
Relevant Skills
- Backend automation testing
- Test automation frameworks design
- AI-assisted test generation
- Synthetic test data generation
- Performance and stress testing
- API testing tools (Postman, REST clients)
- Test reporting and debugging
- Version control using Git
AI & Automation Expertise
- Using AI tools to generate test cases from requirements
- Experience with self-healing test automation frameworks such as Mabl or Testim
- Using AI to generate synthetic financial datasets for testing
- Testing AI-powered applications or AI features
Tools & Technologies
- Playwright
- Appium
- Python
- Cucumber / Gherkin
- CI/CD tools
- Git
Strong Plus
- Experience working in the Finance / FinTech sector
- Experience testing AI-powered applications
- Experience working closely with AI engineering teams
A LITTLE BIT ABOUT THE COMPANY:
Established in 2017, Fountane Inc is one part a Digital Product Studio that specializes in building superior product experiences, and one part a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we've created half a dozen multi-million-valuation companies in the US, and a handful of sister ventures for large corporations including Target, US Ventures, and Imprint Engine.
We’re a team of 100 strong from around the world, radically open-minded, committed to excellence, respecting one another, and pushing our boundaries further than ever before.
We are looking for passionate and motivated Developers to join our growing technical team. The ideal candidate should have strong foundational knowledge in Python/Django or React with Django and be eager to work on real-time web development projects.
Open Positions:
Python Django Developer
React + Django Developer
Key Responsibilities:
- Develop, test, and maintain scalable web applications.
- Write clean, efficient, and reusable code using Django and/or React.
- Collaborate with UI/UX designers and backend developers to implement new features.
- Debug, troubleshoot, and optimize application performance.
- Participate in code reviews and contribute to team discussions.
- Stay updated with the latest web development trends and technologies.
Requirements:
- Basic to strong knowledge of Python and Django framework.
- Familiarity with React.js (for React + Django role).
- Understanding of REST APIs and database concepts.
- Knowledge of HTML, CSS, and JavaScript.
- Strong problem-solving and logical thinking skills.
- Good communication and teamwork abilities.
- Freshers and career restart candidates are welcome to apply.
More Info:
Company: Altos Technologies
Website: www.altostechnologies.in
Job Type: Permanent Job
Industry: IT / Web Development
Function: Software Development
Employment Type: Full-time
Location: Kochi & Chennai
We're hiring a Python Developer in Jaipur.
Not looking for someone who can recite design patterns. Looking for someone who can open a Django codebase, figure out what's broken, and fix it by end of day. 3–4 years of experience. Django / Flask / FastAPI. REST APIs. PostgreSQL. If you've maintained production code (not just built tutorial projects) — this is your role.
Full-time | Jaipur | Industry-standard pay | Small team = real ownership