50+ Remote Python Jobs in India


About Us: Certa is an emerging leader in the fast-growing Enterprise Workflow Automation industry with advanced “no-code” SaaS workflow solutions. Our platform addresses the entire lifecycle for suppliers, third parties, and partners, covering onboarding, risk assessment, contract lifecycle management, and ongoing monitoring. Certa offers the most automated and “ridiculously” configurable solutions, disrupting the customer/counterparty KYC & AML space. The Certa platform brings business functions like Procurement, Sales, Compliance, Legal, InfoSec, and Privacy together via an easy collaborative workflow, automated risk scoring, and ongoing monitoring for key ‘shifts in circumstance’. Our data-agnostic, open-API platform ensures that clients can take a best-in-class approach when leveraging any of our 80+ (and growing) existing data and tech partner integrations. As a result, Certa enables clients to onboard third parties and KYC customers faster and with less effort, eliminates swivel-chair syndrome, and maintains a constantly updated, searchable knowledge repository of all records.
Certa’s clients range from the largest & leading global firms in their Industry (Retail, Aerospace, Payments, Consulting, Ridesharing, and Commercial Data) to mid-stage start-ups.
Responsibilities:
As a Solutions Engineer at our technology product company, you will play a critical role in ensuring the successful integration and customisation of our product offerings for clients. Your primary responsibilities will involve configuring our software solutions to meet each client's unique requirements and business use cases. Additionally, you will be heavily involved in API integrations to enable seamless data flow and connectivity between our products and various client systems.
- Client Requirement Analysis: Collaborate with the sales and client-facing teams to understand client needs, business use cases, and specific requirements for implementing our technology products.
- Product Configuration: Utilize your technical expertise to configure and customise our software solutions according to the identified client needs and business use cases. This may involve setting up workflows, defining data structures, and enabling specific features or functionalities.
- API Integration: Work closely with the development and engineering teams to design, implement, and manage API integrations with external systems, ensuring smooth data exchange and interoperability.
- Solution Design: Participate in solution design discussions with clients and internal stakeholders, providing valuable insights and recommendations based on your understanding of the technology and the business domain.
- Troubleshooting: Identify and resolve configuration-related issues and challenges that arise during the implementation and integration process, ensuring the smooth functioning of the product.
- Documentation: Create and maintain detailed documentation of configurations, customisations, and integration processes to facilitate knowledge sharing within the organisation and with clients.
- Quality Assurance: Conduct thorough unit testing of configurations and integrations to verify that they meet the defined requirements and perform as expected.
- Client Support: Provide support and guidance to clients during the onboarding and post-implementation phases, assisting them with any questions or concerns related to configuration and integration.
- Continuous Improvement: Stay up-to-date with the latest product features, industry trends, and best practices in configuration and integration, and proactively suggest improvements to enhance the overall efficiency and effectiveness of the process.
- Cross-Functional Collaboration: Work closely with different teams, including product management, engineering, marketing, and sales, to align product development with business goals and customer needs.
- Product Launch and Support: Assist in the product launch by providing technical support, conducting training sessions, and addressing customer inquiries. Collaborate with customer support teams to troubleshoot and resolve complex technical issues.
Requirements :
- 3-5 years in a similar capacity with a proven track record of implementation excellence working with medium to large enterprise customers.
- Strong analytical skills with the ability to grasp complex business use cases and translate them into technical solutions.
- Bachelor's degree required, preferably in Engineering or equivalent.
- Practical experience with ERP integrations, process documentation, and requirements-gathering tools like Miro or Visio is a plus.
- Proficiency in API integration and understanding of RESTful APIs and web services.
- Technical expertise in relevant programming languages and platforms related to the technology product.
- Exceptional communication skills to interact with clients, understand their requirements, and explain technical concepts clearly and concisely.
- Results-oriented and inherently curious mindset capable of influencing internal and external partners to drive priorities and outcomes.
- Independent operator capable of taking limited direction and determining the best course of action.
- Excellent communication, presentation, negotiation, and interpersonal skills.
- Ability to create structure in ambiguous situations and design effective processes.
- Experience with JSON and SaaS Products is a plus.
- Location: Hires Remotely Everywhere
- Job Type: Full Time
- Experience: 3 - 5 years
- Languages: Excellent command of the English Language
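Since the role centers on RESTful API integrations and JSON, here is a minimal sketch of an authenticated JSON POST using only the Python standard library. The endpoint, payload, and bearer-token scheme are hypothetical stand-ins, not Certa's actual API.

```python
import json
import urllib.request

def build_json_request(url, payload, token):
    """Build an authenticated JSON POST for a hypothetical client-system API."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # assumes bearer-token auth
        },
    )

def post_json(url, payload, token, timeout=30):
    """Send the request and decode the JSON response body."""
    req = build_json_request(url, payload, token)
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```

Splitting request construction from transport keeps the integration logic easy to unit-test without hitting a live system.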

About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision-making capabilities to drive real-time business insights. Built from the ground up using modern technologies, Hypersonix simplifies data consumption for customers across various industry verticals. We are seeking a well-rounded, hands-on product leader to help manage key capabilities and features in our platform.
Position Overview
We are seeking a highly skilled Web Scraping Architect to join our team. The successful candidate will be responsible for designing, implementing, and maintaining web scraping processes to gather data from various online sources efficiently and accurately. You will play a crucial role in collecting data for competitor analysis and other business intelligence purposes.
Responsibilities
- Scalability/Performance: Lead and provide expertise in scraping e-commerce marketplaces at scale.
- Data Source Identification: Identify relevant websites and online sources from which data needs to be scraped. Collaborate with the team to understand data requirements and objectives.
- Web Scraping Design: Develop and implement effective web scraping strategies to extract data from targeted websites. This includes selecting appropriate tools, libraries, or frameworks for the task.
- Data Extraction: Create and maintain web scraping scripts or programs to extract the required data. Ensure the code is optimized, reliable, and can handle changes in the website's structure.
- Data Cleansing and Validation: Cleanse and validate the collected data to eliminate errors, inconsistencies, and duplicates. Ensure data integrity and accuracy throughout the process.
- Monitoring and Maintenance: Continuously monitor and maintain the web scraping processes. Address any issues that arise due to website changes, data format modifications, or anti-scraping mechanisms.
- Scalability and Performance: Optimize web scraping procedures for efficiency and scalability, especially when dealing with a large volume of data or multiple data sources.
- Compliance and Legal Considerations: Stay up-to-date with legal and ethical considerations related to web scraping, including website terms of service, copyright, and privacy regulations.
- Documentation: Maintain detailed documentation of web scraping processes, data sources, and methodologies. Create clear and concise instructions for others to follow.
- Collaboration: Collaborate with other teams such as data analysts, developers, and business stakeholders to understand data requirements and deliver insights effectively.
- Security: Implement security measures to ensure the confidentiality and protection of sensitive data throughout the scraping process.
Requirements
- Proven experience of 7+ years as a Web Scraping Specialist or similar role, with a track record of successful web scraping projects
- Expertise in handling dynamic content, user-agent rotation, bypassing CAPTCHAs, rate limits, and use of proxy services
- Knowledge of browser fingerprinting
- Leadership experience
- Proficiency in Python and in web scraping libraries and frameworks such as BeautifulSoup, Scrapy, or Selenium
- Strong knowledge of HTML, CSS, XPath, and other web technologies relevant to web scraping and coding
- Knowledge and experience in best-in-class data storage and retrieval for large volumes of scraped data
- Understanding of web scraping best practices, including handling dynamic content, user-agent rotation, and IP address management
- Attention to detail and ability to handle and process large volumes of data accurately
- Familiarity with data cleansing techniques and data validation processes
- Good communication skills and ability to collaborate effectively with cross-functional teams
- Knowledge of web scraping ethics, legal considerations, and compliance with website terms of service
- Strong problem-solving skills and adaptability to changing web environments
Preferred Qualifications
- Bachelor’s degree in Computer Science, Data Science, Information Technology, or related fields
- Experience with cloud-based solutions and distributed web scraping systems
- Familiarity with APIs and data extraction from non-public sources
- Knowledge of machine learning techniques for data extraction and natural language processing is desired but not mandatory
- Prior experience in handling large-scale data projects and working with big data frameworks
- Understanding of various data formats such as JSON, XML, CSV, etc.
- Experience with version control systems like Git

About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built ground up with new age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.
About the Role
We are looking for talented and driven Data Engineers at various levels to work with customers to build data warehouses, analytical dashboards, and ML capabilities as per customer needs.
Roles and Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements; should write complex queries in an optimized way
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader
Requirements
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- 7+ years of experience in a Data Engineer role, with a graduate degree in Computer Science or Information Technology, or a completed MCA.
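As a small illustration of the "complex queries in an optimized way" requirement, the sketch below uses SQLite (standard library) with a hypothetical `events` table: a window function picks the latest event per user in one pass, where a correlated subquery would rescan the table per row.

```python
import sqlite3

# Toy events table standing in for a real warehouse fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INT, ts TEXT, amount REAL);
    INSERT INTO events VALUES
        (1, '2024-01-01', 10), (1, '2024-01-02', 20), (2, '2024-01-01', 5);
""")

# Latest event per user via ROW_NUMBER(): single scan, no per-row subquery.
LATEST_PER_USER = """
    SELECT user_id, ts, amount FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY user_id ORDER BY ts DESC) AS rn
        FROM events
    ) WHERE rn = 1
    ORDER BY user_id
"""
rows = conn.execute(LATEST_PER_USER).fetchall()
```

The same pattern carries over to warehouse engines like Redshift or Athena, which the AWS 'big data' stack above implies.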


About the CryptoXpress Partner Program
Earn lifetime income by liking posts; posting memes, art, and simple threads; engaging on Twitter, Quora, Reddit, or Instagram; referring signups; and earning commission from transactions such as flights, hotels, trades, and gift cards.
(Apply link at the bottom)
More Details:
- Student Partner Program - https://cryptoxpress.com/student-partner-program
- Ambassador Program - https://cryptoxpressambassadors.com
CryptoXpress has built two powerful tracks to help students gain experience, earn income, and launch real careers:
🌱 Growth Partner: Bring in new users, grow the network, and earn lifetime income from your referrals' transactions like trades, investments, flight/hotel/gift card purchases.
🎯 CX Ambassador: Complete creative tasks, support the brand, and get paid by liking posts, creating simple threads, memes, art, sharing your experience, and engaging on Twitter, Quora, Reddit, or Instagram.
Participants will be rewarded with payments, internship certificates, mentorship, certified Web3 learning and career opportunities.
About the Role
CryptoXpress is looking for a skilled Backend Engineer to build the core logic powering our Partner Program reward engines, task pipelines, and content validation systems. Your work will directly impact how we scale fair, fast, and fraud-proof systems for global Student Partners and CX Ambassadors.
Key Responsibilities
- Design APIs to handle submission, review, and payout logic
- Develop XP, karma, and level-up algorithms with fraud resistance
- Create content verification checkpoints (e.g., metadata checks, submission throttles)
- Handle rate limits, caching, retries, and fallback for reward processing
- Collaborate with AI and frontend engineers for seamless data flow
- Debug reward or submission logic
- Fix issues in task flows or XP systems
- Patch verification bugs or payout edge cases
- Optimize performance and API stability
Skills & Qualifications
- Proficient in Node.js, Python (Flask/FastAPI), or Go
- Solid understanding of PostgreSQL, Firebase, or equivalent databases
- Strong grasp of authentication, role-based permissions, and API security
- Bonus: Experience with reward engines, affiliate logic, or task-based platforms
- Bonus: Familiarity with moderation tooling or content scoring
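A minimal sketch of the XP-with-fraud-resistance idea described above: award points per action but throttle repeats inside a cooldown window. All names, values, and the level curve are hypothetical, not CryptoXpress's actual reward engine.

```python
import time

def award_xp(profile, action, xp_table, now=None, cooldown_s=60):
    """Grant XP for an action unless it repeated within the cooldown window."""
    now = time.time() if now is None else now
    last = profile["last_action"].get(action, float("-inf"))
    if now - last < cooldown_s:
        return 0  # throttled: a first line of defense against spam farming
    profile["last_action"][action] = now
    gained = xp_table.get(action, 0)
    profile["xp"] += gained
    profile["level"] = 1 + profile["xp"] // 100  # hypothetical level curve
    return gained
```

A production system would layer on device fingerprinting, content checks, and async review queues; the cooldown is only the cheapest guard.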
Join us and play a key role in driving the growth of CryptoXpress in the cryptocurrency space!
Tips for Application Success
- Please fill out the application below
- Explore CryptoXpress before applying, take 2 minutes to download and try the app so you understand what we're building
- Show your enthusiasm for crypto, travel, and digital innovation
- Mention any self-learning initiatives or personal crypto experiments
- Be honest about what you don't know - we value growth mindsets
How to Apply:
Interested candidates must complete the application form at


We’re building a powerful, AI-driven communication platform — a next-generation alternative to RingCentral or 8x8 — powered by OpenAI, LangChain, and SIP/WebRTC. We're looking for a Full-Stack Software Developer who’s passionate about building real-time, AI-enabled voice infrastructure and who’s excited to work in a fast-moving, founder-led environment.
This is an opportunity to build from scratch, take ownership of core systems, and innovate on the edge of VoIP + AI.
What You’ll Do
- Design and build AI-driven voice and messaging features (e.g. smart IVRs, call transcription, virtual agents)
- Develop backend services using Python, Node.js, or Golang
- Integrate OpenAI, Whisper, and LangChain with real-time VoIP systems like Twilio, SIP, or WebRTC
- Create scalable APIs, handle call logic, and build AI pipelines
- Collaborate with the founder and early team on product strategy and infrastructure
- Participate in occasional in-person strategy meetings (Delhi, Bangalore, or nearby)
Must-Have Skills
- Strong programming experience in Python, Node.js, or Go
- Hands-on experience with VoIP/SIP, WebRTC, or tools like Twilio, Asterisk, Plivo
- Experience integrating with LLM APIs, OpenAI, or speech-to-text models
- Solid understanding of backend design, Docker, Redis, PostgreSQL
- Ability to work independently and deliver production-grade code
Nice to Have
- Familiarity with LangChain or agent-based AI systems
- Knowledge of call routing logic, STUN/TURN, or media servers (e.g. FreeSWITCH)
- Interest in building scalable cloud-first SaaS products
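Before any AI layer, "call logic" like the smart IVRs mentioned above reduces to a state machine over DTMF digits. This toy sketch (menu contents hypothetical) shows the shape; in practice the transitions would be driven by SIP/WebRTC events or Twilio webhooks.

```python
# A toy IVR menu: states and the DTMF digits that move between them.
MENU = {
    "root": {"prompt": "Press 1 for sales, 2 for support.",
             "1": "sales", "2": "support"},
    "sales": {"prompt": "Connecting you to sales."},
    "support": {"prompt": "Connecting you to support."},
}

def ivr_step(state, digit):
    """Advance the call to the next menu state; invalid digits re-prompt."""
    return MENU[state].get(digit, state)
```

An LLM-backed "smart" IVR would swap the digit lookup for an intent classifier while keeping the same state-machine skeleton.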
Work Setup
- 🏠 Remote work (India-based, must be reachable for meetings)
- 🕐 Full-time role
- 💼 Direct collaboration with founder (technical)
- 🧘 Flexible hours, strong ownership culture

Azure Data Engineer
Job Summary: Seeking an experienced Azure Data Engineer (8+ years) with expertise in designing and implementing scalable data solutions within the Azure ecosystem. The role involves managing data pipelines, storage, analytics, and optimization to support business intelligence and reporting.
Key Responsibilities:
- Develop and optimize data pipelines & ETL processes using Azure Data Factory (ADF), Databricks, PySpark.
- Manage Azure Data Lake and integrate with Azure Synapse Analytics for scalable storage and analytics.
- Design data solutions, optimize SQL queries, and implement governance best practices.
- Support BI development and reporting needs.
- Implement CI/CD pipelines for data engineering solutions.
Mandatory Skills:
- 8+ years in Azure Data Engineering
- Expertise in SQL and a programming language (preferably Python)
- Strong proficiency in ADF, Databricks, Azure Data Lake, Azure Synapse Analytics, and PySpark
- Solid understanding of data warehousing concepts and ETL processes
- Experience with Apache Spark or similar tools
Preferred Skills:
- Experience with Microsoft Fabric (MS Fabric)
- Familiarity with Power BI for data visualization
- Domain expertise in Finance, Procurement, or Human Capital

Senior Generative AI Engineer
Job Id: QX016
About Us:
QX Impact was launched with a mission to make AI accessible and affordable, and to deliver AI products and solutions at scale for enterprises by bringing the power of Data, AI, and Engineering to drive digital transformation. We believe that without insights, businesses will continue to struggle to understand their customers, and may even lose them; without insights, businesses won't be able to deliver differentiated products and services; and without insights, businesses can't achieve the new level of “Operational Excellence” that is crucial to remaining competitive, meeting rising customer expectations, expanding markets, and digitalizing.
Job Summary:
We seek a highly experienced Senior Generative AI Engineer to focus on the development, implementation, and engineering of Gen AI applications using the latest LLMs and frameworks. This role requires hands-on expertise in Python programming, cloud platforms, and advanced AI techniques, along with additional skills in front-end technologies, data modernization, and API integration. The Senior Gen AI Engineer will be responsible for building applications from the ground up, ensuring robust, scalable, and efficient solutions.
Responsibilities:
· Build GenAI solutions such as virtual assistants, data augmentation, automated insights, and predictive analytics.
· Design, develop, and fine-tune generative AI models (GANs, VAEs, Transformers).
· Handle data preprocessing, augmentation, and synthetic data generation.
· Work with NLP, text generation, and contextual comprehension tasks.
· Develop backend services using Python or .NET for LLM-powered applications.
· Build and deploy AI applications on cloud platforms (Azure, AWS, GCP).
· Optimize AI pipelines and ensure scalability.
· Stay updated with advancements in AI and ML.
Skills & Requirements:
- Strong knowledge of machine learning, deep learning, and NLP.
- Proficiency in Python, TensorFlow, PyTorch, and Keras.
- Experience with cloud services, containerization (Docker, Kubernetes), and AI model deployment.
- Understanding of LLMs, embeddings, and retrieval-augmented generation (RAG).
- Ability to work independently and as part of a team.
- Bachelor’s degree in Computer Science, Mathematics, Engineering, or a related field.
- 6+ years of experience in Gen AI or related roles.
- Experience with AI/ML model integration into data pipelines.
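The retrieval half of RAG mentioned above reduces to ranking stored chunk embeddings by similarity to a query embedding. This dependency-free sketch uses toy 2-D vectors as stand-ins; a real system would use an embedding model plus a vector database such as FAISS or Pinecone from the list below.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, corpus, k=2):
    """Return the text of the k corpus chunks most similar to the query."""
    ranked = sorted(corpus, key=lambda c: cosine(query_vec, c["vec"]),
                    reverse=True)
    return [c["text"] for c in ranked[:k]]
```

The retrieved chunks would then be stuffed into the LLM prompt as grounding context, which is the "augmented generation" half of RAG.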
Core Competencies for Generative AI Engineers:
1. Programming & Software Development
a. Python – Proficiency in writing efficient and scalable code, with strong knowledge of NumPy, Pandas, TensorFlow, PyTorch, and Scikit-learn.
b. LLM Frameworks – Experience with Hugging Face Transformers, LangChain, OpenAI API, and similar tools for building and deploying large language models.
c. API development and integration using FastAPI, Flask, Django, RESTful APIs, or WebSockets.
d. Knowledge of Version Control, containerization, CI/CD Pipelines and Unit Testing.
2. Vector Database & Cloud AI Solutions
a. Pinecone, FAISS, ChromaDB, Neo4j
b. Azure Redis/ Cognitive Search
c. Azure OpenAI Service
d. Azure ML Studio Models
e. AWS (Relevant Services)
3. Data Engineering & Processing
- Handling large-scale structured & unstructured datasets.
- Proficiency in SQL, NoSQL (PostgreSQL, MongoDB), Spark, and Hadoop.
- Feature engineering and data augmentation techniques.
4. NLP & Computer Vision
- NLP: Tokenization, embeddings (Word2Vec, BERT, T5, LLaMA).
- CV: Image generation using GANs, VAEs, Stable Diffusion.
- Document Embedding – Experience with vector databases (FAISS, ChromaDB, Pinecone) and embedding models (BGE, OpenAI, SentenceTransformers).
- Text Summarization – Knowledge of extractive and abstractive summarization techniques using models like T5, BART, and Pegasus.
- Named Entity Recognition (NER) – Experience in fine-tuning NER models and using pre-trained models from SpaCy, NLTK, or Hugging Face.
- Document Parsing & Classification – Hands-on experience with OCR (Tesseract, Azure Form Recognizer), NLP-based document classifiers, and tools like LayoutLM, PDFMiner.
5. Model Deployment & Optimization
- Model compression (quantization, pruning, distillation).
- Deployment using Azure CI/CD, ONNX, TensorRT, OpenVINO on AWS, GCP.
- Model monitoring (MLflow, Weights & Biases) and automated workflows (Azure Pipeline).
- API integration with front-end applications.
6. AI Ethics & Responsible AI
- Bias detection, interpretability (SHAP, LIME), and security (adversarial attacks).
7. Mathematics & Statistics
- Linear Algebra, Probability, and Optimization (Gradient Descent, Regularization, etc.).
8. Machine Learning & Deep Learning
a. Expertise in supervised, unsupervised, and reinforcement learning.
b. Proficiency in TensorFlow, PyTorch, and JAX.
c. Experience with Transformers, GANs, VAEs, Diffusion Models, and LLMs (GPT, BERT, T5).
Personal Attributes:
- Strong problem-solving skills with a passion for data architecture.
- Excellent communication skills with the ability to explain complex data concepts to non-technical stakeholders.
- Highly collaborative, capable of working with cross-functional teams.
- Ability to thrive in a fast-paced, agile environment while managing multiple priorities effectively.
Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.
Ready to make an impact? Apply today and become part of the QX impact team!


Role Overview
We are seeking a skilled Odoo Consultant with Python development expertise to support the design, development, and implementation of Odoo-based business solutions for our clients. The consultant will work on module customization, backend logic, API integrations, and configuration of business workflows using the Odoo framework.
Key Responsibilities
● Customize and extend Odoo modules based on client requirements
● Develop backend logic using Python and the Odoo ORM
● Configure business workflows, access rights, and approval processes
● Create and update views using XML and QWeb for reports and screens
● Integrate third-party systems using Odoo APIs (REST, XML-RPC)
● Participate in client discussions and translate business needs into technical solutions
● Support testing, deployment, and user training as required
Required Skills
● Strong knowledge of Python and Odoo framework (v12 and above)
● Experience working with Odoo models, workflows, and security rules
● Good understanding of XML, QWeb, and PostgreSQL
● Experience in developing or integrating APIs
● Familiarity with Git and basic Linux server operations
● Good communication and documentation skills
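The XML-RPC integration skill above maps to Odoo's external API, driven here with the Python standard library. The connection details are hypothetical placeholders, and `sale.order` is used only as a familiar example model.

```python
import xmlrpc.client

# Hypothetical connection details; replace with your instance's values.
URL, DB, USER, API_KEY = "https://example-odoo.test", "mydb", "admin", "secret"

def connect():
    """Authenticate against Odoo's external XML-RPC API."""
    common = xmlrpc.client.ServerProxy(f"{URL}/xmlrpc/2/common")
    uid = common.authenticate(DB, USER, API_KEY, {})
    models_proxy = xmlrpc.client.ServerProxy(f"{URL}/xmlrpc/2/object")
    return models_proxy, uid

def fetch_confirmed_orders(models_proxy, uid, limit=10):
    """search_read confirmed sale orders, returning name and total only."""
    return models_proxy.execute_kw(
        DB, uid, API_KEY,
        "sale.order", "search_read",
        [[["state", "=", "sale"]]],
        {"fields": ["name", "amount_total"], "limit": limit},
    )
```

Passing the proxy in as an argument keeps the query logic testable against a stub, without a live Odoo server.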
Preferred Qualifications
● Experience in implementing Odoo for industries such as manufacturing, retail, financial services, or real estate
● Ability to work independently and manage project timelines
● Bachelor’s degree in Computer Science, Engineering, or related field

We are looking for an experienced and detail-oriented Senior Performance Testing Engineer to join our QA team. The ideal candidate will be responsible for designing, developing, and executing scalable and reliable performance testing strategies. You will lead performance engineering initiatives using tools like Locust, Python, Docker, Kubernetes, and cloud-native environments (AWS), ensuring our systems meet performance SLAs under real-world usage patterns.
Key Responsibilities
- Develop, enhance, and maintain Locust performance scripts using Python
- Design realistic performance scenarios simulating real-world traffic and usage patterns
- Parameterize and modularize scripts for robustness and reusability
- Execute performance tests in containerized environments using Docker and Kubernetes
- Manage performance test execution on Kubernetes clusters
- Integrate performance tests into CI/CD pipelines in collaboration with DevOps and Development teams
- Analyze performance test results, including throughput, latency, response time, and error rates
- Identify performance bottlenecks, conduct root cause analysis, and suggest optimizations
- Work with AWS (or other cloud platforms) to deploy, scale, and monitor tests in cloud-native environments
- Write and optimize complex SQL queries, stored procedures, and perform DB performance testing
- Work with SQL Server extensively; familiarity with Postgres is a plus
- Develop and maintain performance testing strategies and test plans
- Define and track KPIs, SLAs, workload models, and success criteria
- Guide the team on best practices and promote a performance engineering mindset
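The results-analysis step above (throughput, latency, error rates) can be sketched as a small reducer over raw response times; field names and the nearest-rank p95 choice are illustrative, and a Locust run would feed this from its stats output.

```python
import statistics

def summarize(samples_ms, window_s, errors=0):
    """Reduce raw response times (ms) to the KPIs a perf report tracks."""
    n = len(samples_ms)
    ordered = sorted(samples_ms)
    p95_idx = max(0, (95 * n + 99) // 100 - 1)  # nearest-rank percentile
    return {
        "requests": n,
        "throughput_rps": n / window_s,        # requests per second
        "mean_ms": statistics.fmean(samples_ms),
        "p95_ms": ordered[p95_idx],
        "error_rate": errors / n,
    }
```

Comparing these numbers against the SLAs and workload models defined in the test plan is what turns a test run into a pass/fail verdict.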
Must-Have Qualifications
- Proven hands-on experience with Locust and Python for performance testing
- Working knowledge of microservices architecture
- Hands-on with Kubernetes and Docker, especially in the context of running Locust at scale
- Experience integrating performance tests in CI/CD pipelines
- Strong experience with AWS or similar cloud platforms for deploying and scaling tests
- Solid understanding of SQL Server, including tuning stored procedures and query optimization
- Strong experience in performance test planning, execution, and analysis
Good-to-Have Skills
- Exposure to Postgres DB
- Familiarity with observability tools like Prometheus, Grafana, CloudWatch, and Datadog
- Basic knowledge of APM (Application Performance Monitoring) tools


We are seeking a visionary and hands-on AI/ML and Chatbot Lead to spearhead the design, development, and deployment of enterprise-wide Conversational and Generative AI solutions. This role will be instrumental in establishing and scaling our AI Lab function, defining chatbot and multimodal AI strategies, and delivering intelligent automation solutions that enhance user engagement and operational efficiency.
Key Responsibilities
- Strategy & Leadership
- Define and lead the enterprise-wide strategy for Conversational AI, Multimodal AI, and Large Language Models (LLMs).
- Establish and scale an AI/Chatbot Lab, with a clear roadmap for innovation across in-app, generative, and conversational AI use cases.
- Lead, mentor, and scale a high-performing team of AI/ML engineers and chatbot developers.
- Architecture & Development
- Architect scalable AI/ML systems encompassing presentation, orchestration, AI, and data layers.
- Build multi-turn, memory-aware conversations using frameworks like LangChain or Semantic Kernel.
- Integrate chatbots with enterprise platforms such as Salesforce, NetSuite, Slack, and custom applications via APIs/webhooks.
- Solution Delivery
- Collaborate with business stakeholders to assess needs, conduct ROI analyses, and deliver high-impact AI solutions.
- Identify and implement agentic AI capabilities and SaaS optimization opportunities.
- Deliver POCs, pilots, and MVPs, owning the full design, development, and deployment lifecycle.
- Monitoring & Governance
- Implement and monitor chatbot KPIs using tools like Kibana, Grafana, and custom dashboards.
- Champion ethical AI practices, ensuring compliance with governance, data privacy, and security standards.
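The "multi-turn, memory-aware conversations" responsibility above boils down to managing a message history with a sliding window, which frameworks like LangChain or Semantic Kernel wrap for you. This framework-free sketch stubs the LLM as a plain callable; message shape and window policy are illustrative.

```python
def chat_turn(history, user_msg, llm, max_turns=5):
    """One exchange with a sliding memory window.

    `history` is a list of {"role", "content"} dicts starting with a system
    prompt; `llm` is any callable mapping a message list to reply text.
    """
    history.append({"role": "user", "content": user_msg})
    # Keep the system prompt plus only the most recent turns.
    window = history[:1] + history[1:][-(2 * max_turns - 1):]
    reply = llm(window)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Production memory strategies add summarization of evicted turns and retrieval over past conversations, but the windowing skeleton stays the same.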
Must-Have Skills
- Experience & Leadership
- 10+ years of experience in AI/ML with demonstrable success in chatbot, conversational AI, and generative AI implementations.
- Proven experience in building and operationalizing AI/Chatbot architecture frameworks across enterprises.
- Technical Expertise
- Programming: Python
- AI/ML Frameworks & Libraries: LangChain, ElasticSearch, spaCy, NLTK, Hugging Face
- LLMs & NLP: GPT, BERT, RAG, prompt engineering, PEFT
- Chatbot Platforms: Azure OpenAI, Microsoft Bot Framework, CLU, CQA
- AI Deployment & Monitoring at Scale
- Conversational AI Integration: APIs, webhooks
- Infrastructure & Platforms
- Cloud: AWS, Azure, GCP
- Containerization: Docker, Kubernetes
- Vector Databases: Pinecone, Weaviate, Qdrant
- Technologies: Semantic search, knowledge graphs, intelligent document processing
- Soft Skills
- Strong leadership and team management
- Excellent communication and documentation
- Deep understanding of AI governance, compliance, and ethical AI practices
Good-to-Have Skills
- Familiarity with tools like Glean, Perplexity.ai, Rasa, XGBoost
- Experience integrating with Salesforce, NetSuite, and understanding of Customer Success domain
- Knowledge of RPA tools like UiPath and its AI Center

About Us:
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews - all while improving their SEO.
But that's just the beginning.
We're also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We're a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems - you'll fit right in.
What You'll Do:
• Collaborate with product managers, designers, and other devs to ideate, build, and ship high-impact features
• Own full-stack development using Node.js, Next.js, and React.js
• Build fast, responsive front-ends with pixel-perfect execution
• Design and manage scalable back-end systems with MySQL/PostgreSQL
• Troubleshoot and resolve issues from live deployments with Ops team
• Contribute to documentation, internal tools, and process improvement
• Work on our generative AI tools and help scale Convertlens
What You Bring:
• 2+ years of experience in a product/startup environment
• Strong foundation in Node.js, Next.js, and React.js
• Solid understanding of relational databases (MySQL, PostgreSQL)
• Fluency in modern JavaScript and the HTTP/REST ecosystem
• Comfortable with HTML, CSS, Git, and version control workflows
• Bonus: experience with Python or interest in working on AI-powered systems
• Great communication skills and a love for collaboration
• A builder mindset - scrappy, curious, and ready to ship
Perks & Culture:
• Flexible work setup: remote-first for most, hybrid if you're in Delhi NCR
• A high-growth, high-impact environment where your code goes live fast
• Opportunities to work with cutting-edge tech, including generative AI
• Small team, big vision: your work truly matters here
Join Us
If you're excited about building meaningful tech in a fast-moving startup, let's talk.

Job Description For Associate Database Engineer (PostgreSQL)
Job Title: Associate Database Engineer (PostgreSQL)
Company: Mydbops
About us:
As a seasoned industry leader for 8 years in open-source database management, we specialize in providing
unparalleled solutions and services for MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. At
Mydbops, we are committed to providing exceptional service and building lasting relationships with our customers.
Mydbops takes pride in being a PCI DSS-certified and ISO-certified company, reflecting our unwavering
commitment to maintaining the highest security and operational excellence standards.
Position Overview:
An Associate Database Engineer is responsible for the administration and monitoring of database systems and is
available to work in shifts.
Responsibilities
● Managing and maintaining various customer database environments.
● Proactively monitoring database performance using internal tools and metrics.
● Implementing backup and recovery procedures.
● Ensuring data security and integrity.
● Troubleshooting database issues with a focus on internal diagnostics.
● Assisting with capacity planning and system upgrades.
● This role requires a solid understanding of database management systems, proficiency in using internal tools for performance monitoring, and flexibility to work in various shifts to ensure continuous database support.
Requirements
● Good knowledge of Linux OS and its tools
● Strong expertise in PostgreSQL database administration
● Proficient in SQL and at least one scripting language (Python, Bash)
● Hands-on experience with database backups, recovery, upgrades, replication and clustering
● Troubleshooting of database issues
● Familiarity with Cloud (AWS/GCP)
● Working knowledge of AWS RDS, Aurora, CloudSQL
● Strong communication skills
● Ability to work effectively in a team environment
Preferred Qualifications:
● B.Tech/M.Tech or any equivalent degree
● Deeper understanding of databases and Linux troubleshooting
● Working knowledge of upgrades and availability solutions
● Working knowledge of backup tools like pgBackRest/Barman
● Good knowledge of query optimisation and index types
● Experience with database monitoring and management tools.
● Certifications on PostgreSQL or related technologies are a plus
● Prior experience in customer support or technical operations
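As a small illustration of the query-optimisation work mentioned above, here is a toy triage script: it takes rows shaped like a pg_stat_statements export (query text, call count, total execution time in milliseconds) and surfaces the queries with the worst mean latency. The sample data is invented; in practice you would query the pg_stat_statements view directly.

```python
def slowest_queries(stats, top_n=1):
    """Rank query stats by mean execution time (total time / calls).
    Each row: (query_text, calls, total_exec_time_ms)."""
    ranked = sorted(stats, key=lambda row: row[2] / row[1], reverse=True)
    return [row[0] for row in ranked[:top_n]]

sample = [
    ("SELECT * FROM orders WHERE id = %s", 10_000, 1_500.0),    # 0.15 ms mean
    ("SELECT * FROM orders WHERE email = %s", 200, 9_000.0),    # 45 ms mean
    ("UPDATE sessions SET last_seen = now()", 5_000, 2_500.0),  # 0.5 ms mean
]
# The 45 ms query is the optimisation candidate (missing index on email?).
print(slowest_queries(sample))  # → ['SELECT * FROM orders WHERE email = %s']
```

From there, EXPLAIN ANALYZE on the flagged query tells you whether an index or a rewrite is the right fix.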
Why Join Us:
● Opportunity to work in a dynamic and growing industry.
● Learning and development opportunities to enhance your career.
● A collaborative work environment with a supportive team.
Job Details:
● Job Type: Full-time
● Work Mode: Work From Home
● Experience: 1-3 years


Role: Data Scientist
Location: Bangalore (Remote)
Experience: 4 - 15 years
Skills Required - Radiology (visual images and text), classical ML models, multimodal LLMs, primarily Generative AI, prompt engineering, large language models, speech & text domain AI, Python coding, AI skills, real-world evidence, healthcare domain
JOB DESCRIPTION
We are seeking an experienced Data Scientist with a proven track record in Machine Learning, Deep Learning, and a demonstrated focus on Large Language Models (LLMs) to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that drive real-world impact to patients and healthcare providers.
Responsibilities
• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama2, Mistral, etc.) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering. Experience in other transformer-based NLP models such as BERT, etc. will be an added advantage.
• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning
• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.
• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models)
• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions
• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
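The experimentation-and-evaluation responsibility above often starts with something as simple as normalized exact-match accuracy before reaching for a framework like TruLens. A minimal sketch (the predictions and references are made-up examples):

```python
def exact_match(prediction, reference):
    """Case- and whitespace-insensitive exact match between two strings."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(prediction) == norm(reference)

def evaluate(predictions, references):
    """Fraction of predictions that exactly match their references."""
    hits = sum(exact_match(p, r) for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["Stage II", "metformin", "unknown"]
refs = ["stage ii", "Metformin", "aspirin"]
print(round(evaluate(preds, refs), 2))  # → 0.67
```

Real LLM evaluation layers fuzzier metrics (F1, ROUGE, LLM-as-judge) on top, but a deterministic baseline like this keeps the experimentation framework honest.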
Qualifications Required
• Doctoral or master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field
• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models
• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
• Experience working with cloud-based platforms (AWS, GCP, Azure)
Additional Skills
• Excellent problem-solving and analytical abilities
• Strong communication skills, both written and verbal
• Ability to thrive in a collaborative and fast-paced environment


Role Characteristics:
The Analytics team provides analytical support to multiple stakeholders (Product, Engineering, Business Development, Ad Operations) by developing scalable analytical solutions, identifying problems, defining KPIs and monitoring them to measure the impact/success of product improvements and changes, and streamlining processes. This will be an exciting and challenging role that will enable you to work with large data sets, expose you to cutting-edge analytical techniques, let you work with the latest AWS analytics infrastructure (Redshift, S3, Athena), and give you experience in using location data to drive businesses. Working in a dynamic start-up environment will give you significant opportunities for growth within the organization. A successful applicant will be passionate about technology and about developing a deep understanding of human behavior in the real world. They will also have excellent communication skills, be able to synthesize and present complex information, and be a fast learner.
You Will:
- Perform root cause analysis with minimum guidance to figure out reasons for sudden changes/abnormalities in metrics
- Understand the objective/business context of various tasks and seek clarity by collaborating with different stakeholders (like Product, Engineering)
- Derive insights and put them together to build a story that solves a given problem
- Suggest ways for process improvements in terms of script optimization, automating repetitive tasks
- Create and automate reports and dashboards through Python to track key metrics based on given requirements
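Root cause analysis of a sudden metric change usually begins by flagging which points are abnormal. A minimal z-score filter using only the Python standard library (the signup numbers are invented):

```python
import statistics

def flag_anomalies(series, z_threshold=3.0):
    """Indices of points whose z-score exceeds the threshold -- a first-pass
    filter before digging into the root cause of a metric shift."""
    mean = statistics.mean(series)
    stdev = statistics.stdev(series)
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > z_threshold]

daily_signups = [102, 98, 105, 99, 101, 97, 30, 103]  # day 6 looks broken
print(flag_anomalies(daily_signups, z_threshold=2.0))  # → [6]
```

Once a day is flagged, the analysis moves to slicing that day by platform, geography, or release version to isolate the cause.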
Technical Skills (Must have)
- B.Tech degree in Computer Science, Statistics, Mathematics, Economics or related fields
- 4-6 years of experience in working with data and conducting statistical and/or numerical analysis
- Ability to write SQL code
- Scripting/automation using Python
- Hands-on experience with a data visualisation tool like Looker/Tableau/QuickSight
- Basic to advanced understanding of statistics
Other Skills (Must have)
- Be willing and able to quickly learn about new businesses, database technologies and analysis techniques
- Strong oral and written communication
- Understanding of patterns/trends and the ability to draw insights from them
Preferred Qualifications (Nice to have)
- Experience working with large datasets
- Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3)
- Hands-on experience with AWS services like Lambda, Step Functions, Glue, and EMR, plus exposure to PySpark
What we offer
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave- Maternity and Paternity
- Flexible Time Offs (Earned Leaves, Sick Leaves, Birthday leave, Bereavement leave & Company Holidays)
- In Office Daily Catered Lunch
- Fully stocked snacks/beverages
- Health cover for any hospitalization. Covers both nuclear family and parents
- Tele-med for free doctor consultation, discounts on health checkups and medicines
- Wellness/Gym Reimbursement
- Pet Expense Reimbursement
- Childcare Expenses and reimbursements
- Employee assistance program
- Employee referral program
- Education reimbursement program
- Skill development program
- Cell phone reimbursement (Mobile Subsidy program)
- Internet reimbursement
- Birthday treat reimbursement
- Employee Provident Fund Scheme offering different tax saving options such as VPF and employee and employer contribution up to 12% Basic
- Creche reimbursement
- Co-working space reimbursement
- NPS employer match
- Meal card for tax benefit
- Special benefits on salary account
We are an equal opportunity employer and value diversity, inclusion and equity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Must-Have Skills & Qualifications:
- Bachelor's degree in Engineering (Computer Science, IT, or related field)
- 5–6 years of experience in manual testing of web and mobile applications
- Working knowledge of test automation tools: Selenium
- Experience with API testing using tools like Postman or equivalent
- Experience with BDD
- Strong understanding of test planning, test case design, and defect tracking processes
- Experience leading QA for projects and production releases
- Familiarity with Agile/Scrum methodologies
- Effective collaboration skills – able to work with cross-functional teams and contribute to automation efforts as needed
Good-to-Have Skills:
- Familiarity with CI/CD pipelines and version control tools (Git, Jenkins)
- Exposure to performance or security testing

Who we are
CoinCROWD is building the future of crypto spending with CROWD Wallet, a secure, user-friendly wallet designed for seamless cryptocurrency transactions in the real world. As the crypto landscape continues to evolve, CoinCROWD is at the forefront, enabling everyday consumers to use digital currencies like never before.
We’re not just another blockchain startup—we’re a team of innovators, dreamers, and tech geeks who believe in making crypto fun, easy, and accessible to everyone. If you love solving complex problems and cracking blockchain puzzles while sharing memes with your team, you’ll fit right in!
What you’ll be doing
As a key member of our engineering team, you will be responsible for designing, developing, and maintaining robust, scalable, and high-performance backend systems that support CoinCROWD’s innovative products.
You will be responsible for...
• Designing and implementing blockchain-based applications using QuickNode, Web3Auth, and Python or Node.js.
• Developing and maintaining smart contracts and decentralized applications (dApps) with a focus on security and scalability.
• Integrating blockchain solutions with CROWD Wallet and other financial systems.
• Collaborating with frontend developers, product managers, and other stakeholders to deliver seamless crypto transactions.
• Ensuring high availability and performance of blockchain infrastructure.
• Managing on-chain and off-chain data synchronization.
• Researching and implementing emerging blockchain technologies to improve CoinCROWD’s ecosystem.
• Troubleshooting and resolving blockchain-related issues in a timely manner.
• Ensuring compliance with blockchain security best practices and industry regulations.
• Contributing to architectural decisions and best practices for blockchain development.
What we need from you
We're looking for dynamic, self-motivated individuals who are excited about shaping the future of crypto spending:
• This will be a Full-time role.
• May require occasional travel for conferences or team meetups (Yes, we love those!).
• Bonus points if you can beat our CTO in a game of chess or share an awesome crypto meme!
• Location - WFH initially
What skills & experience you’ll bring to us
• 3 to 10 years of experience in software development, with at least 3 years in blockchain development.
• Strong knowledge of QuickNode and Web3Auth for blockchain infrastructure and authentication.
• Proficiency in Python or Node.js for backend development.
• Experience in developing, deploying, and auditing smart contracts (Solidity, Rust, or equivalent).
• Hands-on experience with Ethereum, Polygon, or other EVM-compatible blockchains.
• Familiarity with DeFi protocols, NFT standards (ERC-721, ERC-1155), and Web3.js/Ethers.js.
• Understanding of blockchain security best practices and cryptographic principles.
• Experience working with RESTful APIs, GraphQL, and microservices architecture.
• Strong problem-solving skills and ability to work in a fast-paced startup environment.
• Excellent communication skills and ability to collaborate effectively with cross-functional teams.
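As a small illustration of the on-chain arithmetic this role involves: an Ethereum transaction fee is gas used times gas price, denominated in wei (10^18 wei per ETH, 10^9 wei per gwei). A simplified sketch that ignores EIP-1559 priority tips:

```python
WEI_PER_ETH = 10**18
WEI_PER_GWEI = 10**9

def tx_fee_eth(gas_used, gas_price_gwei):
    """Transaction fee in ETH: gas used * gas price (priority tips ignored)."""
    fee_wei = gas_used * gas_price_gwei * WEI_PER_GWEI
    return fee_wei / WEI_PER_ETH

# A plain ETH transfer consumes 21,000 gas.
print(tx_fee_eth(21_000, gas_price_gwei=30))  # → 0.00063
```

In a real backend you would read gas usage from the transaction receipt (for example via web3.py) rather than hard-coding it.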


Role Description
This is a full-time, remote role for a Frappe and ERPNext Developer. The Developer will be responsible for designing, developing, and maintaining Frappe and ERPNext applications. Daily tasks include customizing modules, integrating third-party systems, troubleshooting and resolving software issues, and working closely with cross-functional teams to enhance system efficiency and user experience.
Qualifications
- Proficiency in Frappe and ERPNext development
- Experience with Python, JavaScript, and web technologies
- Understanding of ERP workflows and business processes
- Skills in database management (MySQL, PostgreSQL)
- Strong problem-solving and troubleshooting abilities
- Ability to work independently and remotely
- Excellent communication and teamwork skills
- Bachelor's degree in Computer Science, Information Technology, or related field
- Experience in the healthcare industry is a plus
- Experience customising the Frappe CRM



Title - Sr Software Engineer
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Engineering and Technology team builds best-in-class solutions to delight customers and meet their business needs. We are laser-focused on software design, development, innovation and quality. Our team of experts has the talent, skills and values to deliver products and services that are easy to use, reliable, sustainable and competitive. If you're looking for a safe environment where ideas are welcome, growth is supported and questions are encouraged – consider joining us as we explore the limitless opportunities of the software industry.
External Job Title :
Sr Software Engineer
Position Responsibilities :
- Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
- Develop scalable, performant APIs for Deltek products
- Accountability for the successful implementation of the requirements by the team.
- Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
- Undertake analysis, design, coding and testing activities of complex modules
- Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
- Participate in code reviews and provide mentorship to junior developers.
- Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React.
- Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
- Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can design architecture of complex features with multiple components.
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 4-6 years of experience; sound programming skills in Python, the .NET platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (like PostgreSQL)
- Experience in backend development and Apache Airflow (or equivalent framework).
- Ability to build APIs and optimize SQL queries with performance in mind.
- Experience with Agile Development
- Experience in writing and maintaining unit tests and using testing frameworks is desirable
- Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
- Strong desire to continually improve knowledge and skills through personal development activities and apply their knowledge and skills to continuous software improvement.
- The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Familiarity with version control systems like Git.
- Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.

Key Responsibilities
- Experience working with Python, LLMs, deep learning, NLP, etc.
- Utilize GitHub for version control, including pushing and pulling code updates.
- Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models.
- Engage in prompt engineering and the fine-tuning process of AI models.
Requirements
- Proficiency in Python programming.
- Experience with GitHub and version control workflows.
- Familiarity with AI platforms such as Hugging Face and OpenAI.
- Understanding of prompt engineering and model fine-tuning.
- Excellent problem-solving abilities and a keen interest in AI technology.
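Much of day-to-day prompt engineering is disciplined string assembly before anything reaches a model API. A toy few-shot prompt builder (the template format is illustrative; real chat APIs such as OpenAI's take structured message lists instead of one string):

```python
def build_prompt(system, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{system}\n\n{shots}\n\nQ: {query}\nA:"

prompt = build_prompt(
    system="Answer in one word.",
    examples=[("Capital of France?", "Paris")],
    query="Capital of Japan?",
)
print(prompt)
```

Keeping the template in one tested function makes it easy to version prompts and compare variants during fine-tuning experiments.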
Job Title: Full-Stack Web Engineer
Location: Remote-first (UK/EU time zone overlap preferred)
Engagement: Contract or permanent
About Us
We’re a fast-growing construction-tech company building a worker-facing web platform that simplifies how site operatives present their skills and how employers verify them. The product combines multilingual data capture, document scanning and AI-driven validation—all delivered through a modern, mobile-friendly web experience.
What You’ll Do
- Own end-to-end development of new features across front-end and back-end.
- Translate wireframes into responsive components and hook them to robust REST / WebSocket APIs.
- Integrate third-party AI and OCR services to process user input and documents.
- Implement secure authentication, data storage and error handling in a cloud environment (Azure preferred).
- Optimise performance for low-bandwidth mobile users and enable offline access via PWA features.
- Contribute to code reviews, automated testing and CI/CD pipelines.
- Collaborate daily with a product lead, UX designer and AI engineer; take initiative in shaping technical decisions.
Core Skills & Experience
- Front-end: React (or similar), TypeScript/JavaScript, responsive design, service-worker/PWA know-how
- Back-end: Node.js or Python frameworks, RESTful API design, WebSockets
- Databases: SQL (PostgreSQL or similar); comfortable with JSON/JSONB fields
- Cloud & DevOps: experience deploying to a major cloud (Azure/AWS/GCP), container builds, CI/CD
- Testing & Quality: unit/integration testing, performance profiling, accessibility awareness
- Collaboration: proven track record shipping in small, cross-functional teams
Nice-to-Haves
- Camera / microphone capture in the browser.
- Prior work with OpenAI or other NLP services.
- Multilingual or right-to-left interface experience.
- Domain knowledge in recruitment, HR-tech, or construction.


Who We Are
Studio Management (studiomgmt.co) is a uniquely positioned organization combining venture capital, hedge fund investments, and startup incubation. Our portfolio includes successful ventures like Sentieo (acquired by AlphaSense for $185 million), as well as innovative products such as Emailzap (emailzap.co) and Mindful Minutes for Toddlers. We’re expanding our team to continue launching products at the forefront of technology, and we’re looking for an Engineering Lead who shares our passion for building the “next big thing.”
The Role
We are seeking a hands-on Engineering Lead to guide our product development efforts across multiple high-impact ventures. You will own the overall technical vision, mentor a remote team of engineers, and spearhead the creation of new-age products in a fast-paced startup environment. This is a strategic, influential role that requires a blend of technical prowess, leadership, and a keen interest in building products from zero to one.
Responsibilities
- Technical Leadership: Define and drive the architectural roadmap for new and existing products, ensuring high-quality code, scalability, and reliability.
- Mentorship & Team Building: Hire, lead, and develop a team of engineers. Foster a culture of continuous learning, ownership, and collaboration.
- Product Innovation: Work closely with product managers, designers, and stakeholders to conceptualize, build, and iterate on cutting-edge, user-centric solutions.
- Hands-On Development: Write efficient, maintainable code and perform thorough code reviews, setting the standard for engineering excellence.
- Cross-Functional Collaboration: Partner with different functions (product, design, marketing) to ensure alignment on requirements, timelines, and deliverables.
- Process Optimization: Establish best practices and processes that improve development speed, code quality, and overall team productivity.
- Continuous Improvement: Champion performance optimizations, new technologies, and modern frameworks to keep the tech stack fresh and competitive.
What We’re Looking For
- 4+ Years of Engineering Experience: A proven track record of designing and delivering high-impact software products.
- Technical Mastery: Expertise in a full-stack environment—HTML, CSS, JavaScript (React/React Native), Python (Django), and AWS. Strong computer science fundamentals, including data structures and system design.
- Leadership & Communication: Demonstrated ability to mentor team members, influence technical decisions, and articulate complex concepts clearly.
- Entrepreneurial Mindset: Passion for building new-age products, thriving in ambiguity, and rapidly iterating to find product-market fit.
- Problem Solver: Adept at breaking down complex challenges into scalable, efficient solutions.
- Ownership Mentality: Self-driven individual who takes full responsibility for project outcomes and team success.
- Adaptability: Comfort working in an environment where priorities can shift quickly, and opportunities for innovation abound.
Why Join Us
- High-Impact Work: Drive the technical direction of multiple ventures, shaping the future of new products from day one.
- Innovation Culture: Operate in a remote-first, collaborative environment that encourages bold thinking and rapid experimentation.
- Growth & Autonomy: Enjoy opportunities for both leadership advancement and deepening your technical skillset.
- Global Team: Work alongside a diverse group of talented professionals who share a passion for pushing boundaries.
- Competitive Benefits: Receive market-leading compensation and benefits in a role that rewards both individual and team success.



Position Responsibilities :
- Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
- Develop scalable, performant APIs for Deltek products
- Accountability for the successful implementation of the requirements by the team.
- Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
- Undertake analysis, design, coding and testing activities of complex modules
- Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
- Participate in code reviews and provide mentorship to junior developers.
- Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React, and suggest optimisations based on them
- Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
- Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can design architecture of complex features with multiple components.
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 8-10 years of experience; sound programming skills in Python, the .NET platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (like PostgreSQL)
- Experience in backend development and Apache Airflow (or equivalent framework).
- Ability to build APIs and optimize SQL queries with performance in mind.
- Experience with Agile Development
- Experience in writing and maintaining unit tests and using testing frameworks is desirable
- Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
- Strong desire to continually improve knowledge and skills through personal development activities and apply their knowledge and skills to continuous software improvement.
- The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Familiarity with version control systems like Git.
- Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.

About Asha Health
Asha Health helps medical practices launch their own AI clinics. We're backed by Y Combinator and recently raised an oversubscribed seed round from some of the best investors in Silicon Valley. Our team includes AI product leaders from companies like Google, physician executives from major health systems, and more.
About the Role
We're looking for an exceptional engineer to join our engineering team (currently 6 teammates). We're looking for someone who is an all-rounder, but has particularly exceptional backend engineering skills.
In this role, you will have the opportunity to build state-of-the-art AI agents, and learn what it takes to build an industry-leading multimodal, multi-agent suite.
Responsibilities
You'll wear many hats. Your responsibilities will fall into 3 categories:
Full-Stack Engineering (80% backend, 20% frontend)
- Lead the team in designing scalable architecture to support performant web applications.
- Develop features end-to-end for our web applications (TypeScript, Node.js, Python, etc.).
AI Engineering
- Develop AI agents with a high bar for reliability and performance.
- Build SOTA LLM-powered tools for providers, practices, and patients.
- Architect our data annotation, fine-tuning, and RLHF workflows.
Product Management
- Propose, scope, and prioritize new feature ideas. Yes, engineers on our team get to be leaders and propose new features themselves!
- Lead the team in building best-in-class user experiences.
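One concrete slice of the data-annotation workflow mentioned under AI Engineering is checking inter-annotator agreement before fine-tuning on the labels. A minimal Cohen's kappa sketch in pure Python (the annotator labels are invented):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    p_expected = sum(counts_a[label] * counts_b[label]
                     for label in counts_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

annotator_1 = ["yes", "yes", "no", "yes", "no", "no"]
annotator_2 = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohens_kappa(annotator_1, annotator_2), 2))  # → 0.33
```

A low kappa like this one is a signal to tighten the labeling guidelines before any fine-tuning run consumes the data.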
Requirements
You do not need AI experience to apply for this role. While we prefer candidates with some AI experience, we have previously hired engineers without any who demonstrated that they are very fast learners.
We prefer candidates who have worked as a founding engineer at an early stage startup (Seed or Preseed) or a Senior Software Engineer at a Series A or B startup.
Perks of Working at Asha Health
#1 Build cool stuff: work on the latest, cutting-edge tech (build frontier AI agents with technologies that evolve every 2 weeks).
#2 Surround yourself with top talent: our team includes senior AI product leaders from companies like Google, experienced physician executives, and top 1% engineering talent (the best of the best).
#3 Rocketship trajectory: we get more customer interest than we have time to onboard, it's a good problem to have :)


We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
Duration: 6 months
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
After completing the internship period, there is a chance to convert to a full-time AI/ML Engineer role (up to INR 12 LPA).
Compensation:
- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.
- Stipend: base of INR 8,000 per month, which can increase up to INR 20,000 depending on performance metrics.
Key Responsibilities
- Work with Python, LLMs, deep learning, NLP, and related technologies.
- Utilize GitHub for version control, including pushing and pulling code updates.
- Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models.
- Engage in prompt engineering and the fine-tuning process of AI models.
Requirements
- Proficiency in Python programming.
- Experience with GitHub and version control workflows.
- Familiarity with AI platforms such as Hugging Face and OpenAI.
- Understanding of prompt engineering and model fine-tuning.
- Excellent problem-solving abilities and a keen interest in AI technology.
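As a rough illustration of the prompt-engineering skill mentioned above, here is a minimal sketch of assembling a few-shot prompt in plain Python; the sentiment task, examples, and template format are invented for illustration, not part of any particular platform:

```python
# Minimal few-shot prompt assembly -- a common prompt-engineering pattern.
# The sentiment task and labelled examples are illustrative placeholders.

FEW_SHOT_EXAMPLES = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked within a week.", "negative"),
]

def build_prompt(query: str) -> str:
    """Compose a few-shot classification prompt from labelled examples."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # Leave the final label blank for the model to complete.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_prompt("Great value for the price.")
```

The resulting string would then be sent to a hosted model (e.g., via the OpenAI or Hugging Face APIs) for completion.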
To apply, click the link below and submit the assignment.

💼 Job Title: Full Stack Developer (*Fresher/experienced*)
🏢 Company: SDS Softwares
💻 Location: Work from Home
💸 Salary range: ₹7,000 - ₹18,000 per month (based on knowledge and interview)
🕛 Shift Timings: 12 PM to 9 PM
About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.
⚜️ Key Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain high-quality web applications (frontend + backend).
- Troubleshoot and debug applications to ensure peak performance.
- Participate in code reviews and contribute to the team’s knowledge base.
⚜️ Required Skills:
- Proficiency in HTML, CSS, JavaScript, React.js for front-end development. ✅
- Understanding of server-side languages such as Node.js, Python, or PHP. ✅
- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL. ✅
- Basic knowledge of version control systems, particularly Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and a team-oriented mindset.
💠 Qualifications:
- Recent graduates or individuals with internship experience (6 months to 1.5 years) in software development.
- Must have a personal laptop and stable internet connection.
- Ability to join immediately is preferred.
If you are passionate about coding and eager to learn, we would love to hear from you. 👍

Job Title: IT and Cybersecurity Network Backend Engineer (Remote)
Job Summary:
Join Springer Capital’s elite tech team to architect and fortify our digital infrastructure, ensuring robust, secure, and scalable backend systems that power cutting‑edge investment solutions.
Job Description:
Founded in 2015, Springer Capital is a technology-forward asset management and investment firm that redefines financial strategies through innovative digital solutions. We identify high-potential opportunities and leverage advanced technology to drive value, transforming traditional investment paradigms. Our culture is built on agility, creative problem-solving, and a relentless pursuit of excellence.
Job Highlights:
As an IT and Cybersecurity Network Backend Engineer, you will play a central role in designing, developing, and securing our backend systems. You’ll be responsible for creating bulletproof server architectures and integrating sophisticated cybersecurity measures to ensure our digital assets remain secure, reliable, and scalable—all while working fully remotely.
Responsibilities:
- Backend Architecture & Security:
- Design, develop, and maintain high-performance backend systems and RESTful APIs using technologies such as Python, Node.js, or Java.
- Implement advanced cybersecurity protocols including encryption, multi-factor authentication, and anomaly detection to safeguard our infrastructure.
- Network Infrastructure Management:
- Architect secure cloud and hybrid network solutions to protect sensitive data and ensure uninterrupted service.
- Develop robust logging, monitoring, and compliance mechanisms.
- Collaborative Innovation:
- Partner with cross-functional teams (DevOps, frontend, and product managers) to integrate security seamlessly into every layer of our technology stack.
- Participate in regular security audits, agile sprints, and technical reviews.
- Continuous Improvement:
- Keep abreast of emerging technologies and cybersecurity threats, proposing and implementing innovative solutions to maintain system integrity.
What We Offer:
- Advanced Learning & Mentorship: Work side-by-side with industry experts who will empower you to push the boundaries of cybersecurity and backend engineering.
- Impactful Work: Engage in projects that directly influence the security and scalability of our revolutionary digital investment strategies.
- Dynamic, Remote Culture: Thrive in a flexible, remote-first environment that champions creativity, collaboration, and work-life balance.
- Career Growth: Unlock long-term career advancement opportunities in a forward-thinking organization that values innovation and initiative.
Requirements:
- Degree (or current enrollment) in Computer Science, Cybersecurity, or a related field.
- Proficiency in at least one backend programming language (Python, Node.js, or Java) and hands-on experience with RESTful API design.
- Solid understanding of network security principles and experience implementing cybersecurity best practices.
- Passionate about designing secure systems, solving complex technical challenges, and staying ahead of industry trends.
- Strong analytical and communication skills, with the ability to work effectively in a collaborative, fast-paced environment.
About Springer Capital:
At Springer Capital, we blend financial expertise with digital innovation to shape tomorrow’s investment landscape. Our relentless drive to merge technology and asset management has positioned us as leaders in transforming traditional finance into dynamic, tech-enabled ventures.
Location: Global (Remote)
Job Type: Full-time
Pay: $50 USD per month
Work Location: Remote
Embark on your next challenge with Springer Capital—where your technical prowess and dedication to security help safeguard the future of digital investments.

Job Title : Technical Architect
Experience : 8 to 12+ Years
Location : Trivandrum / Kochi / Remote
Work Mode : Remote flexibility available
Notice Period : Immediate to max 15 days (30 days with negotiation possible)
Summary :
We are looking for a highly skilled Technical Architect with expertise in Java Full Stack development, cloud architecture, and modern frontend frameworks (Angular). This is a client-facing, hands-on leadership role, ideal for technologists who enjoy designing scalable, high-performance, cloud-native enterprise solutions.
🛠 Key Responsibilities :
- Architect scalable and high-performance enterprise applications.
- Hands-on involvement in system design, development, and deployment.
- Guide and mentor development teams in architecture and best practices.
- Collaborate with stakeholders and clients to gather and refine requirements.
- Evaluate tools, processes, and drive strategic technical decisions.
- Design microservices-based solutions deployed over cloud platforms (AWS/Azure/GCP).
✅ Mandatory Skills :
- Backend : Java, Spring Boot, Python
- Frontend : Angular (at least 2 years of recent hands-on experience)
- Cloud : AWS / Azure / GCP
- Architecture : Microservices, EAI, MVC, Enterprise Design Patterns
- Data : SQL / NoSQL, Data Modeling
- Other : Client handling, team mentoring, strong communication skills
➕ Nice to Have Skills :
- Mobile technologies (Native / Hybrid / Cross-platform)
- DevOps & Docker-based deployment
- Application Security (OWASP, PCI DSS)
- TOGAF familiarity
- Test-Driven Development (TDD)
- Analytics / BI / ML / AI exposure
- Domain knowledge in Financial Services or Payments
- 3rd-party integration tools (e.g., MuleSoft, BizTalk)
⚠️ Important Notes :
- Only candidates from outside Hyderabad/Telangana and non-JNTU graduates will be considered.
- Candidates must be serving notice or joinable within 30 days.
- Client-facing experience is mandatory.
- Java Full Stack candidates are highly preferred.
🧭 Interview Process :
- Technical Assessment
- Two Rounds – Technical Interviews
- Final Round


What You’ll Be Doing:
● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
● Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
● Work with machine learning, data, and analytics experts to drive innovation, accuracy and greater functionality in our data system.
Qualifications:
● Bachelor's degree in Engineering, Computer Science, or relevant field.
● 10+ years of relevant and recent experience in a Data Engineer role.
● 5+ years of recent experience with Apache Spark and a solid understanding of the fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Strong coding skills with Scala, Python, Java and/or other languages and the ability to quickly switch between them with ease.
● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Experience working with data stored in many formats including Delta Tables, Parquet, CSV and JSON.
● Comfortable working in a linux shell environment and writing scripts as needed.
● Comfortable working in an Agile environment
● Machine Learning knowledge is a plus.
● Must be capable of working independently and delivering stable, efficient and reliable software.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic environment
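The extraction, transformation, and loading work described above can be sketched at toy scale with nothing but the standard library; the column names and the cleaning rule below are invented for illustration (a production pipeline would use Spark, Delta tables, and the other Big Data technologies listed):

```python
import csv
import io
import json

# Toy ETL: extract rows from CSV, transform (filter + type-cast), load as JSON.
# Field names and the "drop negative amounts" rule are illustrative placeholders.
raw_csv = "user_id,amount\n1,10.50\n2,-3.00\n3,7.25\n"

def etl(source: str) -> str:
    rows = csv.DictReader(io.StringIO(source))           # extract
    cleaned = [
        {"user_id": int(r["user_id"]), "amount": float(r["amount"])}
        for r in rows
        if float(r["amount"]) > 0                        # transform: drop refunds
    ]
    return json.dumps(cleaned)                           # load (serialize)

result = json.loads(etl(raw_csv))
```

The same extract/transform/load shape carries over to Spark, where `DictReader` becomes a DataFrame reader and the list comprehension becomes a `filter`/`select` chain.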
EMPLOYMENT TYPE: Full-Time, Permanent
LOCATION: Remote (Pan India)
SHIFT TIMINGS: 2:00 PM - 11:00 PM IST




Job description
Job Title: AI-Driven Data Science Automation Intern – Machine Learning Research Specialist
Location: Remote (Global)
Compensation: $50 USD per month
Company: Meta2 Labs
www.meta2labs.com
About Meta2 Labs:
Meta2 Labs is a next-gen innovation studio building products, platforms, and experiences at the convergence of AI, Web3, and immersive technologies. We are a lean, mission-driven collective of creators, engineers, designers, and futurists working to shape the internet of tomorrow. We believe the next wave of value will come from decentralized, intelligent, and user-owned digital ecosystems—and we’re building toward that vision.
As we scale our roadmap and ecosystem, we're looking for a driven, aligned, and entrepreneurial AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join us on this journey.
The Opportunity:
We’re seeking a part-time AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join Meta2 Labs at a critical early stage. This is a high-impact role designed for someone who shares our vision and wants to actively shape the future of tech. You’ll be an equal voice at the table and help drive the direction of our ventures, partnerships, and product strategies.
Responsibilities:
- Collaborate on the vision, strategy, and execution across Meta2 Labs' portfolio and initiatives.
- Drive innovation in areas such as AI applications, Web3 infrastructure, and experiential product design.
- Contribute to go-to-market strategies, business development, and partnership opportunities.
- Help shape company culture, structure, and team expansion.
- Be a thought partner and problem-solver in all key strategic discussions.
- Lead or support verticals based on your domain expertise (e.g., product, technology, growth, design, etc.).
- Act as a representative and evangelist for Meta2 Labs in public or partner-facing contexts.
Ideal Profile:
- Passion for emerging technologies (AI, Web3, XR, etc.).
- Comfortable operating in ambiguity and working lean.
- Strong strategic thinking, communication, and collaboration skills.
- Open to wearing multiple hats and learning as you build.
- Driven by purpose and eager to gain experience in a cutting-edge tech environment.
Commitment:
- Flexible, part-time involvement.
- Remote-first and async-friendly culture.
Why Join Meta2 Labs:
- Join a purpose-led studio at the frontier of tech innovation.
- Help build impactful ventures with real-world value and long-term potential.
- Shape your own role, focus, and future within a decentralized, founder-friendly structure.
- Be part of a collaborative, intellectually curious, and builder-centric culture.
Job Types: Part-time, Internship
Pay: $50 USD per month
Work Location: Remote
Job Types: Full-time, Part-time, Internship
Contract length: 3 months
Pay: Up to ₹5,000.00 per month
Benefits:
- Flexible schedule
- Health insurance
- Work from home
Work Location: Remote


Responsibilities
Develop and maintain web and backend components using Python, Node.js, and Zoho tools
Design and implement custom workflows and automations in Zoho
Perform code reviews to maintain quality standards and best practices
Debug and resolve technical issues promptly
Collaborate with teams to gather and analyze requirements for effective solutions
Write clean, maintainable, and well-documented code
Manage and optimize databases to support changing business needs
Contribute individually while mentoring and supporting team members
Adapt quickly to a fast-paced environment and meet expectations within the first month
Leadership Opportunities
Lead and mentor junior developers in the team
Drive projects independently while collaborating with the broader team
Act as a technical liaison between the team and stakeholders to deliver effective solutions
Selection Process
1. HR Screening: Review of qualifications and experience
2. Online Technical Assessment: Test coding and problem-solving skills
3. Technical Interview: Assess expertise in web development, Python, Node.js, APIs, and Zoho
4. Leadership Evaluation: Evaluate team collaboration and leadership abilities
5. Management Interview: Discuss cultural fit and career opportunities
6. Offer Discussion: Finalize compensation and role specifics
Experience Required
2–5 years of relevant experience as a Software Developer
Proven ability to work as a self-starter and contribute individually
Strong technical and interpersonal skills to support team members effectively


Apply only if:
- You are an AI agent.
- OR you know how to build an AI agent that can do this job.
What You’ll Do: At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns. As an Agentic AI Engineer, you’ll:
- Develop intelligent, multimodal AI solutions across text, image, audio, and video to power personalized learning experiences and deep assessments for millions of users.
- Drive the future of live learning by building real-time interaction systems with capabilities like instant feedback, assistance, and personalized tutoring.
- Conduct proactive research and integrate the latest advancements in AI & agents into scalable, production-ready solutions that set industry benchmarks.
- Build and maintain robust, efficient data pipelines that leverage insights from millions of user interactions to create high-impact, generalizable solutions.
- Collaborate with a close-knit team of engineers, agents, founders, and key stakeholders to align AI strategies with LearnTube's mission.
About Us: At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with:
- AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback.
- Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.
Meet the Founders: LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes.
We’re proud to be recognized by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.
Why Work With Us? At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:
- Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
- Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
- Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
- Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
- Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to.
- Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.

Senior Data Analyst – Power BI, GCP, Python & SQL
Job Summary
We are looking for a Senior Data Analyst with strong expertise in Power BI, Google Cloud Platform (GCP), Python, and SQL to design data models, automate analytics workflows, and deliver business intelligence that drives strategic decisions. The ideal candidate is a problem-solver who can work with complex datasets in the cloud, build intuitive dashboards, and code custom analytics using Python and SQL.
Key Responsibilities
* Develop advanced Power BI dashboards and reports based on structured and semi-structured data from BigQuery and other GCP sources.
* Write and optimize complex SQL queries (BigQuery SQL) for reporting and data modeling.
* Use Python to automate data preparation tasks, build reusable analytics scripts, and support ad hoc data requests.
* Partner with data engineers and stakeholders to define metrics, build ETL pipelines, and create scalable data models.
* Design and implement star/snowflake schema models and DAX measures in Power BI.
* Maintain data integrity, monitor performance, and ensure security best practices across all reporting systems.
* Drive initiatives around data quality, governance, and cost optimization on GCP.
* Mentor junior analysts and actively contribute to analytics strategy and roadmap.
Must-Have Skills
* Expert-level SQL: Hands-on experience writing complex queries in BigQuery, optimizing joins, window functions, CTEs.
* Proficiency in Python: Data wrangling, Pandas, NumPy, automation scripts, API consumption, etc.
* Power BI expertise: Building dashboards, using DAX, Power Query (M), custom visuals, report performance tuning.
* GCP hands-on experience: Especially with BigQuery, Cloud Storage, and optionally Cloud Composer or Dataflow.
* Strong understanding of data modeling, ETL pipelines, and analytics workflows.
* Excellent communication skills and the ability to explain data insights to non-technical audiences.
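To make the CTE and window-function requirement concrete, here is a minimal sketch using SQLite (standing in for BigQuery; the `sales` table and its columns are made up for the example, and BigQuery's Standard SQL syntax for CTEs and `RANK() OVER` is essentially the same):

```python
import sqlite3

# Illustrative CTE + window function. SQLite stands in for BigQuery here;
# the sales table and regions are invented example data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("north", 50.0), ("south", 80.0)],
)

query = """
WITH regional AS (                                  -- CTE: pre-aggregate per region
    SELECT region, SUM(amount) AS total FROM sales GROUP BY region
)
SELECT region, total,
       RANK() OVER (ORDER BY total DESC) AS rnk     -- window function: rank regions
FROM regional
ORDER BY rnk
"""
rows = conn.execute(query).fetchall()
```

The CTE keeps the aggregation readable and reusable, while the window function ranks without collapsing rows, which is exactly the kind of query optimization work this role involves.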
Preferred Qualifications
* Experience in version control (Git) and working in CI/CD environments.
* Google Professional Data Engineer
* PL-300: Microsoft Power BI Data Analyst Associate


Full Stack Engineer
Location: Remote (India preferred) · Type: Full-time · Comp: Competitive salary + early-stage stock
About Alpha
Alpha is building the simplest way for anyone to create AI agents that actually get work done. Our platform turns messy prompt chaining, data schemas, and multi-tool logic into a clean, no-code experience. We’re backed, funded, and racing toward our v1 launch. Join us on the ground floor and shape the architecture, the product, and the culture.
The Role
We’re hiring two versatile full-stack engineers. One will lean infra/back-end, the other front-end/LLM integration, but both will ship vertical slices end-to-end.
You will:
- Design and build the agent-execution runtime (LLMs, tools, schemas).
- Stand up secure VPC deployments with Docker, Terraform, and AWS or GCP.
- Build REST/GraphQL APIs, queues, Postgres/Redis layers, and observability.
- Create a React/Next.js visual workflow editor with drag-and-drop blocks.
- Build the Prompt Composer UI, live testing mode, and cost dashboard.
- Integrate native tools: search, browser, CRM, payments, messaging, and more.
- Ship fast—design, code, test, launch—and own quality (no separate QA team).
- Talk to early users and fold feedback into weekly releases.
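As a sketch of what the tool-dispatch layer of an agent-execution runtime might look like (the registry pattern, the `add` tool, and its required-keys schema are all hypothetical illustrations, not Alpha's actual design):

```python
# Minimal tool registry + dispatcher: the core pattern behind an
# agent-execution runtime (LLMs emit tool calls; the runtime validates
# them against a schema and executes). Tool names/schemas are invented.
TOOLS = {}

def tool(name, schema):
    """Register a callable under a name with a simple required-keys schema."""
    def register(fn):
        TOOLS[name] = (fn, schema)
        return fn
    return register

@tool("add", schema={"a", "b"})
def add(args):
    return args["a"] + args["b"]

def dispatch(call: dict):
    """Validate a {'tool': ..., 'args': ...} call and execute it."""
    fn, schema = TOOLS[call["tool"]]
    missing = schema - call["args"].keys()
    if missing:
        raise ValueError(f"missing args: {missing}")
    return fn(call["args"])

result = dispatch({"tool": "add", "args": {"a": 2, "b": 3}})
```

A production runtime would swap the bare set for JSON Schema validation and route calls to real integrations (search, browser, CRM), but the validate-then-dispatch shape stays the same.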
What We’re Looking For
- 3–6 years building production web apps at startup pace.
- Strong TypeScript + Node.js or Python.
- Solid React/Next.js and modern state management.
- Comfort with AWS or GCP, Docker, and CI/CD.
- Bias for ownership from design to deploy.
Nice but not required: Terraform or CDK, IAM/VPC networking, vector DBs or RAG pipelines, LLM API experience, React-Flow or other canvas libs, GraphQL or event streaming, prior dev-platform work.
We don’t expect every box ticked—show us you learn fast and ship.
What You’ll Get
• Meaningful equity at the earliest stage.
• A green-field codebase you can architect the right way.
• Direct access to the founder—instant decisions, no red tape.
• Real customers from day one; your code goes live, not to backlog.
• Stipend for hardware, LLM credits, and professional growth.
Come build the future of work—where AI agents handle the busywork and people do the thinking.


Job Title: Full Stack Developer
Experience- 3+ years
Location: Remote
Notice Period: Immediate Joiner Preferred
Job Description:
Responsibilities:
- Design, develop, and maintain full-stack applications using Python (FastAPI), Node.js, and React.
- Create and optimize RESTful APIs and backend services.
- Work with PostgreSQL to design efficient data models and write complex SQL queries.
- Develop responsive and interactive front-end components using React.
- Collaborate with team members using Git for version control and code reviews.
- Debug, troubleshoot, and enhance existing applications for better performance and user experience.
Requirements:
- 3 years of experience in full-stack development or a similar role.
- Strong proficiency in Python with hands-on experience in FastAPI.
- Solid front-end development experience using React.
- Experience developing backend services with Node.js.
- Proficiency in PostgreSQL, including writing efficient SQL queries.
- Familiarity with Git for version control and collaborative development.
- Good communication and problem-solving skills.


Job Title: Full-Stack Developer
Location: Bangalore/Remote
Type: Full-time
About Eblity:
Eblity’s mission is to empower educators and parents to help children facing challenges.
Over 50% of children in mainstream schools face academic or behavioural challenges, most of which go unnoticed and underserved. By providing the right support at the right time, we could make a world of difference to these children.
We serve a community of over 200,000 educators and parents and over 3,000 schools.
If you are purpose-driven and want to use your skills in technology to create a positive impact for children facing challenges and their families, we encourage you to apply.
Join us in shaping the future of inclusive education and empowering learners of all abilities.
Role Overview:
As a full-stack developer, you will lead the development of critical applications.
These applications enable services for parents of children facing various challenges such as Autism, ADHD and Learning Disabilities, and for experts who can make a significant difference in these children’s lives.
You will be part of a small, highly motivated team who are constantly working to improve outcomes for children facing challenges like Learning Disabilities, ADHD, Autism, Speech Disorders, etc.
Job Description:
We are seeking a talented and proactive Full Stack Developer with hands-on experience in the React / Python / Postgres stack, leveraging Cursor and Replit for full-stack development. As part of our product development team, you will work on building responsive, scalable, and user-friendly web applications, utilizing both front-end and back-end technologies. Your expertise with Cursor as an AI agent-based development platform and Replit will be crucial for streamlining development processes and accelerating product timelines.
Responsibilities:
- Design, develop, and maintain front-end web applications using React, ensuring a responsive, intuitive, and high-performance user experience.
- Build and optimize the back-end using FastAPI or Flask and PostgreSQL, ensuring scalability, performance, and maintainability.
- Leverage Replit for full-stack development, deploying applications, managing cloud resources, and streamlining collaboration across team members.
- Utilize Cursor, an AI agent-based development platform, to enhance application development, automate processes, and optimize workflows through AI-driven code generation, data management, and integration.
- Collaborate with cross-functional teams (back-end developers, designers, and product managers) to gather requirements, design solutions, and implement them seamlessly across the front-end and back-end.
- Design and implement PostgreSQL database schemas, writing optimized queries to ensure efficient data retrieval and integrity.
- Integrate RESTful APIs and third-party services across the React front-end and FastAPI/Flask/PostgreSQL back-end, ensuring smooth data flow.
- Implement and optimize reusable React components and FastAPI/Flask functions to improve code maintainability and application performance.
- Conduct thorough testing, including unit, integration, and UI testing, to ensure application stability and reliability.
- Optimize both front-end and back-end applications for maximum speed and scalability, while resolving performance issues in both custom code and integrated services.
- Stay up-to-date with emerging technologies to continuously improve the quality and efficiency of our solutions.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 2+ years of experience in React development, with strong knowledge of component-based architecture, state management, and front-end best practices.
- Proven experience in Python development, with expertise in building web applications using frameworks like FastAPI or Flask.
- Solid experience working with PostgreSQL, including designing database schemas, writing optimized queries, and ensuring efficient data retrieval.
- Experience with Cursor, an AI agent-based development platform, to enhance full-stack development through AI-driven code generation, data management, and automation.
- Experience with Replit for full-stack development, deploying applications, and collaborating within cloud-based environments.
- Experience working with RESTful APIs, including their integration into both front-end and back-end systems.
- Familiarity with development tools and frameworks such as Git, Node.js, and Nginx.
- Strong problem-solving skills, a keen attention to detail, and the ability to work independently or within a collaborative team environment.
- Excellent communication skills to effectively collaborate with team members and stakeholders.
Nice-to-Have:
- Experience with other front-end frameworks (e.g., Vue, Angular).
- Familiarity with Agile methodologies and project management tools like Jira.
- Understanding of cloud technologies and experience deploying applications to platforms like AWS or Google Cloud.
- Knowledge of additional back-end technologies or frameworks (e.g., FastAPI).
What We Offer:
- A collaborative and inclusive work environment that values every team member’s input.
- Opportunities to work on innovative projects using Cursor and Replit for full-stack development.
- Competitive salary and comprehensive benefits package.
- Flexible working hours and potential for remote work options.
Location: Remote
If you're passionate about full-stack development and leveraging AI-driven platforms like Cursor and Replit to build scalable solutions, apply today to join our forward-thinking team!

Position: Software Development Engineer - 2
Location: Bangalore/Remote
Role Overview
We are looking for a Software Development Engineer - 2 with 4-6 years of experience who is passionate about writing clean, scalable code and enjoys solving complex backend challenges. As part of the engineering team, you’ll work on designing, developing, and maintaining backend services, primarily in Python, with exposure to other backend technologies like Node.js and Go. You'll contribute to our microservices architecture, APIs, and cloud-native solutions, ensuring security and performance at scale.
Responsibilities
- Write, test, and maintain scalable and efficient backend code in Python (FastAPI or similar frameworks).
- Collaborate with cross-functional teams to design and implement APIs and microservices.
- Ensure code quality by writing and reviewing test cases, and conducting code reviews.
- Handle bug fixing and troubleshooting for backend systems as needed.
- Build and optimize backend systems to positively impact business outcomes.
- Design and implement cloud-native solutions with a focus on performance and security.
- Monitor system health and continuously improve performance and reliability.
- Contribute to process and code improvements, focusing on best practices.
Must-Have Technical Skills
- 4-6 years of experience working with Python (preferably FastAPI or other frameworks).
- Strong understanding of OOP principles and best coding practices.
- Experience in designing and releasing production APIs.
- Proficiency in RDBMS and NoSQL databases.
- Familiarity with microservices and event-driven architecture.
- Experience in cloud-native application development (SaaS).
- Knowledge of cloud services such as GCP, AWS, or Azure.
- Strong focus on security in design and coding practices.
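The event-driven architecture mentioned above can be sketched in-process with a minimal publish/subscribe bus; the `order.created` event and its handlers are illustrative inventions, not part of any specific framework:

```python
from collections import defaultdict

# Minimal in-process pub/sub bus illustrating event-driven decoupling:
# publishers never know who consumes an event. Event names are invented.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event: str, handler):
        self._subscribers[event].append(handler)

    def publish(self, event: str, payload: dict):
        for handler in self._subscribers[event]:
            handler(payload)

bus = EventBus()
audit_log = []

# Two independent consumers react to the same event.
bus.subscribe("order.created", lambda p: audit_log.append(("audit", p["id"])))
bus.subscribe("order.created", lambda p: audit_log.append(("email", p["id"])))
bus.publish("order.created", {"id": 42})
```

In a microservices deployment the in-memory dict would be replaced by a broker (e.g., Pub/Sub, SQS, or Kafka), but the subscribe/publish contract is the same.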
Good-to-Have Skills
- Experience in building and maintaining CI/CD pipelines.
- Hands-on experience with NoSQL DBs.
- Exposure to working in a cloud environment and familiarity with infrastructure management.
- Aggressive problem diagnosis and creative problem-solving skills.
- Excellent communication skills for collaborating with global teams.
About Us
Scrut Automation is an information security and compliance monitoring platform, aimed at helping small and medium cloud-native enterprises develop and maintain a robust security posture, and comply with various infosec standards such as SOC 2, ISO 27001, GDPR, and the like with ease. With the help of the Scrut platform, customers reduce their manual effort for security and compliance tasks by 70%, and build real-time visibility of their security posture.
Founded by IIT/ISB/McKinsey alumni, the founding team has over 15 years of combined Infosec experience. Scrut is built out of India for the world, with customers across India, APAC, North America, Europe and the Middle East. Scrut is backed by Lightspeed Ventures, MassMutual Ventures and Endiya Partners, along with prominent angels from the global SaaS community.
Why should this job excite you?
- Flat-hierarchy, performance-driven culture
- Rapid growth and learning opportunities
- Comprehensive medical insurance coverage
- A high-performing action-oriented team
- Competitive package, benefits and employee-friendly work culture
Note: Due to a high volume of applications, only shortlisted candidates will be contacted. Thank you for your understanding.


POSITION:
Senior Data Engineer
The Senior Data Engineer will be responsible for building and extending our data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys working with big data and building systems from the ground up.
You will collaborate with our software engineers, database architects, data analysts and data scientists to ensure our data delivery architecture is consistent throughout the platform. You must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.
What You’ll Be Doing:
● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
● Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
● Work with machine learning, data, and analytics experts to drive innovation, accuracy and greater functionality in our data system.
Qualifications:
● Bachelor's degree in Engineering, Computer Science, or a relevant field.
● 10+ years of relevant and recent experience in a Data Engineer role.
● 5+ years recent experience with Apache Spark and solid understanding of the fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Strong coding skills with Scala, Python, Java and/or other languages and the ability to quickly switch between them with ease.
● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Experience working with data stored in many formats including Delta Tables, Parquet, CSV and JSON.
● Comfortable working in a linux shell environment and writing scripts as needed.
● Comfortable working in an Agile environment
● Machine Learning knowledge is a plus.
● Must be capable of working independently and delivering stable, efficient, and reliable software.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic environment.
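The "many formats" qualification above can be sketched with standard-library tools alone (Delta Tables and Parquet need Spark or pyarrow, so this toy example sticks to CSV and JSON; the data is made up for illustration):

```python
import csv
import io
import json

# Toy records in two common interchange formats.
csv_text = "id,city\n1,Pune\n2,Goa\n"
json_text = '[{"id": 3, "city": "Delhi"}]'

# Parse both sources into one uniform list of dicts, as a pipeline would.
rows = list(csv.DictReader(io.StringIO(csv_text)))
rows += json.loads(json_text)

# Normalize types: DictReader yields strings, while JSON preserves ints.
for row in rows:
    row["id"] = int(row["id"])

cities = [r["city"] for r in rows]  # ['Pune', 'Goa', 'Delhi']
```

In a real Spark job the same normalization step matters: schemas inferred from CSV, JSON, and Parquet rarely agree until coerced explicitly.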
REPORTING: This position will report to our CEO or any other Lead as assigned by Management.
EMPLOYMENT TYPE: Full-Time, Permanent
LOCATION: Remote (Pan India)
SHIFT TIMINGS: 2:00 pm - 11:00 pm IST
WHO WE ARE:
SalesIntel is the top revenue intelligence platform on the market. Our combination of automation and researchers allows us to reach 95% data accuracy for all our published contact data, while continuing to scale up our number of contacts. We currently have more than 5 million human-verified contacts, another 70 million plus machine-processed contacts, and the highest number of direct-dial contacts in the industry. We guarantee our accuracy with our well-trained research team that re-verifies every direct-dial number, email, and contact every 90 days. With the most comprehensive contact and company data and our excellent customer service, SalesIntel has the best B2B data available. For more information, please visit – www.salesintel.io
WHAT WE OFFER: SalesIntel’s workplace is all about diversity. Different countries and cultures are represented in our workforce. We are growing at a fast pace and our work environment is constantly evolving with changing times. We motivate our team to better themselves by offering all the good stuff you’d expect like Holidays, Paid Leaves, Bonuses, Incentives, Medical Policy and company paid Training Programs.
SalesIntel is an Equal Opportunity Employer. We prohibit discrimination and harassment of any type and offer equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.


Knowledge of the Gen AI technology ecosystem, including top-tier LLMs, prompt engineering, development frameworks such as LlamaIndex and LangChain, LLM fine-tuning, and experience architecting RAG and other LLM-based solutions for enterprise use cases.
1. Strong proficiency in programming languages like Python and SQL.
2. 3+ years of experience in predictive/prescriptive analytics, including Machine Learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks such as regression, classification, ensemble models, RNN, LSTM, and GRU.
3. 2+ years of experience in NLP, text analytics, Document AI, OCR, sentiment analysis, entity recognition, and topic modeling.
4. Proficiency in LangChain and open LLM frameworks for summarization, classification, named entity recognition, and question answering.
5. Proficiency in generative techniques (prompt engineering, vector DBs) and LLMs such as OpenAI, Azure OpenAI, and open-source LLMs; LlamaIndex experience will be important.
6. Hands-on experience in GenAI technology areas including RAG architecture, fine-tuning techniques, inferencing frameworks, etc.
7. Familiarity with big data technologies/frameworks.
8. Sound knowledge of Microsoft Azure.
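A toy sketch of the retrieval half of a RAG pipeline, using bag-of-words overlap in place of a real embedding model and vector DB (all names and documents here are illustrative, not from the posting):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in for a real embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by similarity to the query; the top-k become LLM context."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Invoice processing and OCR for scanned documents",
    "Entity recognition and sentiment analysis for reviews",
]
# The entity-recognition document ranks first for this query.
top = retrieve("entity recognition for text reviews", docs)
```

A production RAG system swaps `embed` for a learned embedding model and `retrieve` for a vector-DB query, then feeds the top-k passages into the LLM prompt.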

TL;DR
Founding Software Engineer (Next.js / React / TypeScript) — ₹17,000–₹24,000 net ₹/mo — 100% remote (India) — ~40 h/wk — green-field stack, total autonomy, ship every week. If you can own the full lifecycle and prove impact every Friday, apply.
🏢 Mega Style Apartments
We rent beautifully furnished 1- to 4-bedroom flats that feel like home but run like a hotel—so travellers can land, unlock the door, and live like locals from hour one. Tech is now the growth engine, and you’ll be employee #1 in engineering, laying the cornerstone for a tech platform that will redefine the premium furnished apartment experience.
✨ Why This Role Rocks
💡 Green-field Everything
Choose the stack, CI, even the linter.
🎯 Visible Impact & Ambition
Every deploy reaches real guests this week. Lay rails for ML that can boost revenue 20%.
⏱️ Radical Autonomy
Plan sprints, own deploys; no committees.
- Direct line to decision-makers → zero red tape
- Modern DX: Next.js & React (latest stable), Tailwind, Prisma/Drizzle, Vercel, optional AI copilots – building mostly server-rendered, edge-ready flows.
- Async-first, with structured weekly 1-on-1s to ensure you’re supported, not micromanaged.
- Unmatched Career Acceleration: Build an entire tech foundation from zero, making decisions that will define your trajectory and our company's success.
🗓️ Your Daily Rhythm
- Morning: Check metrics, pick highest-impact task
- Day: Build → ship → measure
- Evening: 10-line WhatsApp update (done, next, blockers)
- Friday: Live demo of working software (no mock-ups)
📈 Success Milestones
- Week 1: First feature in production
- Month 1: Automation that saves ≥10 h/week for ops
- Month 3: Core platform stable; conversion up, load times down (aiming for <1s LCP); ready for future ML pricing (stretch goal: +20% revenue within 12 months).
🔑 What You’ll Own
- Ship guest-facing features with Next.js (App Router / RSC / Server Actions).
- Automate ops—dashboards & LLM helpers that delete busy-work.
- Full lifecycle: idea → spec → code → deploy → measure → iterate.
- Set up CI/CD & observability on Vercel; a dedicated half-day refactor slot each sprint keeps tech-debt low.
- Optimise for outcomes—conversion, CWV, security, reliability; laying the groundwork for future capabilities in dynamic pricing and guest personalization.
Prototype > promise. Results > hours-in-chair.
💻 Must-Have Skills
Frontend Focus:
- Next.js (App Router/RSC/Server Actions)
- React (latest stable), TypeScript
- Tailwind CSS + shadcn/ui
- State mgmt (TanStack Query / Zustand / Jotai)
Backend & DevOps Focus:
- Node.js APIs, Prisma/Drizzle ORM
- Solid SQL schema design (e.g., PostgreSQL)
- Auth.js / Better-Auth, web security best practices
- GitHub Flow, automated tests, CI, Vercel deploys
- Excellent English; explain trade-offs to non-tech peers
- Self-starter—comfortable as the engineer (for now)
🌱 Nice-to-Haves (Learn Here or Teach Us)
A/B testing & CRO, Python/basic ML, ETL pipelines, Advanced SEO & CWV, Payment APIs (Stripe, Merchant Warrior), n8n automation
🎁 Perks & Benefits
- 100% remote anywhere in 🇮🇳
- Flexible hours (~40 h/wk)
- 12 paid days off (holiday + sick)
- ₹1,700/mo health insurance reimbursement (post-probation)
- Performance bonuses for measurable wins
- 6-month paid probation → permanent role & full benefits (this is a full-time employment role)
- Blank-canvas stack—your decisions live on
- Equity is not offered at this time; we compensate via performance bonuses and a clear path for growth, with future leadership opportunities as the company and engineering team scales.
⏩ Hiring Process (7–10 Days, Fast & Fair)
All stages are async & remote.
- Apply: 5-min form + short quiz (approx. 15 min total)
- Test 1: TypeScript & logic (1 h)
- Test 2: Next.js / React / Node / SQL deep-dive (1 h)
- Final: AI Video interview (1 h)
🚫 Who Shouldn’t Apply
- Need daily hand-holding
- Prefer consensus to decisions
- Chase perfect code over shipped value
- “Move fast & learn” culture feels scary
🚀 Ready to Own the Stack?
If you read this and thought “Finally—no bureaucracy,” and you're ready to set the technical standard for a growing company, show us something you’ve built and apply here →

At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it - in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform works "magic" on pre-construction workflows, cutting them from weeks to hours. It's not just about AI - it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training on the tech stack, with the opportunity to take virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best - we are Gold partners of the AWS, Razorpay, and Microsoft startup programs, with access to rich talent to discuss and brainstorm ideas with.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore, Remote
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!


Job Description:
Interviews will be scheduled within two days.
We are seeking a highly skilled Scala Developer to join our team on an immediate basis. The ideal candidate will work remotely and collaborate with a US-based client, so excellent communication skills are essential.
Key Responsibilities:
Develop scalable and high-performance applications using Scala.
Collaborate with cross-functional teams to understand requirements and deliver quality solutions.
Write clean, maintainable, and testable code.
Optimize application performance and troubleshoot issues.
Participate in code reviews and ensure adherence to best practices.
Required Skills:
Strong experience in Scala development.
Solid understanding of functional programming principles.
Experience with frameworks like Akka, Play, or Spark is a plus.
Good knowledge of REST APIs, microservices architecture, and concurrency.
Familiarity with CI/CD, Git, and Agile methodologies.
Roles & Responsibilities
- Develop and maintain scalable backend services using Scala.
- Design and integrate RESTful APIs and microservices.
- Collaborate with cross-functional teams to deliver technical solutions.
- Write clean, efficient, and testable code.
- Participate in code reviews and ensure code quality.
- Troubleshoot issues and optimize performance.
- Stay updated on Scala and backend development best practices.
Immediate joiners preferred.


Title: Senior Software Engineer – Python (Remote: Africa, India, Portugal)
Experience: 9 to 12 Years
INR: 40-50 LPA
Location Requirement: Candidates must be based in Africa, India, or Portugal. Applicants outside these regions will not be considered.
Must-Have Qualifications:
- 8+ years in software development with expertise in Python
- Kubernetes experience is important
- Strong understanding of async frameworks (e.g., asyncio)
- Experience with FastAPI, Flask, or Django for microservices
- Proficiency with Docker and Kubernetes/AWS ECS
- Familiarity with AWS, Azure, or GCP and IaC tools (CDK, Terraform)
- Knowledge of SQL and NoSQL databases (PostgreSQL, Cassandra, DynamoDB)
- Exposure to GenAI tools and LLM APIs (e.g., LangChain)
- CI/CD and DevOps best practices
- Strong communication and mentorship skills
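The async-frameworks bullet above can be sketched with a minimal `asyncio` example (service names and delays are hypothetical stand-ins for real I/O calls):

```python
import asyncio

async def fetch(service: str, delay: float) -> str:
    """Stand-in for an async HTTP or DB call; sleeps instead of doing I/O."""
    await asyncio.sleep(delay)
    return f"{service}: ok"

async def main() -> list[str]:
    # Run the three "calls" concurrently; total wall time is roughly
    # the slowest call, not the sum of all three.
    return await asyncio.gather(
        fetch("users", 0.01),
        fetch("orders", 0.02),
        fetch("billing", 0.01),
    )

results = asyncio.run(main())  # ['users: ok', 'orders: ok', 'billing: ok']
```

FastAPI builds directly on this model: `async def` route handlers let one worker interleave many in-flight requests while each awaits its downstream calls.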


Title: Data Engineer II (Remote – India/Portugal)
Exp: 4-8 years
CTC: Up to 30 LPA
Required Skills & Experience:
- 4+ years in data engineering or backend software development
- AI/ML experience is important
- Expert in SQL and data modeling
- Strong Python, Java, or Scala coding skills
- Experience with Snowflake, Databricks, AWS (S3, Lambda)
- Background in relational and NoSQL databases (e.g., Postgres)
- Familiar with Linux shell and systems administration
- Solid grasp of data warehouse concepts and real-time processing
- Excellent troubleshooting, documentation, and QA mindset
If interested, kindly share your updated CV to 82008 31681

As a Senior Backend & Infrastructure Engineer, you will take ownership of backend systems and cloud infrastructure. You’ll work closely with our CTO and cross-functional teams (hardware, AI, frontend) to design scalable, fault-tolerant architectures and ensure reliable deployment pipelines.
What You’ll Do :
- Backend Development: Maintain and evolve our Node.js (TypeScript) and Python backend services with a focus on performance and scalability.
- Cloud Infrastructure: Manage our infrastructure on GCP and Firebase (Auth, Firestore, Storage, Functions, AppEngine, PubSub, Cloud Tasks).
- Database Management: Handle Firestore and other NoSQL DBs. Lead database schema design and migration strategies.
- Pipelines & Automation: Build robust real-time and batch data pipelines. Automate CI/CD and testing for backend and frontend services.
- Monitoring & Uptime: Deploy tools for observability (logging, alerts, debugging). Ensure 99.9% uptime of critical services.
- Dev Environments: Set up and manage developer and staging environments across teams.
- Quality & Security: Drive code reviews, implement backend best practices, and enforce security standards.
- Collaboration: Partner with other engineers (AI, frontend, hardware) to integrate backend capabilities seamlessly into our global system.
Must-Haves :
- 5+ years of experience in backend development and cloud infrastructure.
- Strong expertise in Node.js (TypeScript) and/or Python.
- Advanced skills in NoSQL databases (Firestore, MongoDB, DynamoDB...).
- Deep understanding of cloud platforms, preferably GCP and Firebase.
- Hands-on experience with CI/CD, DevOps tools, and automation.
- Solid knowledge of distributed systems and performance tuning.
- Experience setting up and managing development & staging environments.
- Proficiency in English and remote communication.
Good to have :
- Event-driven architecture experience (e.g., Pub/Sub, MQTT).
- Familiarity with observability tools (Prometheus, Grafana, Google Monitoring).
- Previous work on large-scale SaaS products.
- Knowledge of telecommunication protocols (MQTT, WebSockets, SNMP).
- Experience with edge computing on Nvidia Jetson devices.
What We Offer :
- Competitive salary for the Indian market (depending on experience).
- Remote-first culture with async-friendly communication.
- Autonomy and responsibility from day one.
- A modern stack and a fast-moving team working on cutting-edge AI and cloud infrastructure.
- A mission-driven company tackling real-world environmental challenges.

Location: Remote (India only)
About Certa At Certa, we’re revolutionizing process automation for top-tier companies, including Fortune 500 and Fortune 1000 leaders, from the heart of Silicon Valley. Our mission? Simplifying complexity through cutting-edge SaaS solutions. Join our thriving, global team and become a key player in a startup environment that champions innovation, continuous learning, and unlimited growth. We offer a fully remote, flexible workspace that empowers you to excel.
Role Overview
Ready to elevate your DevOps career by shaping the backbone of a fast-growing SaaS platform? As a Senior DevOps Engineer at Certa, you’ll lead the charge in building, automating, and optimizing our cloud infrastructure. Beyond infrastructure management, you’ll actively contribute with a product-focused mindset, understanding customer requirements, collaborating closely with product and engineering teams, and ensuring our AWS-based platform consistently meets user needs and business goals.
What You’ll Do
- Own SaaS Infrastructure: Design, architect, and maintain robust, scalable AWS infrastructure, enhancing platform stability, security, and performance.
- Orchestrate with Kubernetes: Utilize your advanced Kubernetes expertise to manage and scale containerized deployments efficiently and reliably.
- Collaborate on Enterprise Architecture: Align infrastructure strategies with enterprise architectural standards, partnering closely with architects to build integrated solutions.
- Drive Observability: Implement and evolve sophisticated monitoring and observability solutions (DataDog, ELK Stack, AWS CloudWatch) to proactively detect, troubleshoot, and resolve system anomalies.
- Lead Automation Initiatives: Champion an automation-first mindset across the organization, streamlining development, deployment, and operational workflows.
- Implement Infrastructure as Code (IaC): Master Terraform to build repeatable, maintainable cloud infrastructure automation.
- Optimize CI/CD Pipelines: Refine and manage continuous integration and deployment processes (currently GitHub Actions, transitioning to CircleCI), enhancing efficiency and reliability.
- Enable GitOps with ArgoCD: Deliver seamless GitOps-driven application deployments, ensuring accuracy and consistency in Kubernetes environments.
- Advocate for Best Practices: Continuously promote and enforce industry-standard DevOps practices, ensuring consistent, secure, and efficient operational outcomes.
- Innovate and Improve: Constantly evaluate and enhance current DevOps processes, tooling, and methodologies to maintain cutting-edge efficiency.
- Product Mindset: Actively engage with product and engineering teams, bringing infrastructure expertise to product discussions, understanding customer needs, and helping prioritize infrastructure improvements that directly benefit users and business objectives.
What You Bring
- Hands-On Experience: 3-5 years in DevOps roles, ideally within fast-paced SaaS environments.
- Kubernetes Mastery: Advanced knowledge and practical experience managing Kubernetes clusters and container orchestration.
- AWS Excellence: Comprehensive expertise across AWS services, infrastructure management, and security.
- IaC Competence: Demonstrated skill in Terraform for infrastructure automation and management.
- CI/CD Acumen: Proven proficiency managing pipelines with GitHub Actions; familiarity with CircleCI highly advantageous.
- GitOps Knowledge: Experience with ArgoCD for effective continuous deployment and operations.
- Observability Skills: Strong capabilities deploying and managing monitoring solutions such as DataDog, ELK, and AWS CloudWatch.
- Python Automation: Solid scripting and automation skills using Python.
- Architectural Awareness: Understanding of enterprise architecture frameworks and alignment practices.
- Proactive Problem-Solving: Exceptional analytical and troubleshooting skills, adept at swiftly addressing complex technical challenges.
- Effective Communication: Strong interpersonal and collaborative skills, essential for remote, distributed teamwork.
- Product Focus: Ability and willingness to understand customer requirements, prioritize tasks that enhance product value, and proactively suggest infrastructure improvements driven by user needs.
- Startup Mindset (Bonus): Prior experience or enthusiasm for dynamic startup cultures is a distinct advantage.
Why Join Us
- Compensation: Top-tier salary and exceptional benefits.
- Work-Life Flexibility: Fully remote, flexible scheduling.
- Growth Opportunities: Accelerate your career in a company poised for significant growth.
- Innovative Culture: Engineering-centric, innovation-driven work environment.
- Team Events: Annual offsites and quarterly Hackerhouse.
- Wellness & Family: Comprehensive healthcare and parental leave.
- Workspace: Premium workstation setup allowance, providing the tech you need to succeed.


Duration: 6 months with possible extension
Location: Remote
Notice Period: Immediate Joiner Preferred
Experience: 4-6 Years
Requirements:
- B Tech/M Tech in Computer Science or equivalent from a reputed college, with a minimum of 4-6 years of experience in a Product Development Company
- Sound knowledge and application of algorithms and data structures, with their space and time complexities
- Strong design skills involving data modeling and low-level class design
- Good knowledge of object-oriented programming and design patterns
- Proficiency in Python, Java, and Golang
- Follow industry coding standards and be responsible for writing maintainable/scalable/efficient code to solve business problems
- Hands-on experience of working with Databases and the Linux/Unix platform
- Follow SDLC in an agile environment and collaborate with multiple cross-functional teams to drive deliveries
- Strong technical aptitude and good knowledge of CS fundamentals
What will you get to do here?
- Coming up with best practices to help the team achieve their technical tasks and continually thrive in improving the technology of the product/team.
- Driving the adoption of best practices & regular Participation in code reviews, design reviews, and architecture discussions.
- Experiment with new & relevant technologies and tools, and drive adoption while measuring yourself on the impact you can create.
- Implementation of long-term technology vision for your team.
- Creating architectures & designs for new solutions around existing/new areas
- Decide on technology & tool choices for your team & be responsible for them.

Pattern Agentix (patternagentix.com) is seeking a computational biologist to assume the role of Lead AI Researcher, creating advanced multi-agent AI systems that leverage cutting-edge AI research and Retrieval-Augmented Generation (RAG) techniques. The ideal candidate should have a strong academic and research background in AI, demonstrated through published research papers and open-source contributions (e.g., GitHub).
Exposure to, or some background in, bioinformatics is a compulsory requirement. The candidate will apply computational techniques, mathematical models, and computer science skills to analyze and interpret complex biological data.
Required Skills & Experience
Master’s or Ph.D. in AI, Machine Learning, Computer Science, or a related field with some exposure to bioinformatics.
Strong AI research background, demonstrated through peer-reviewed publications in top-tier AI/ML conferences or journals (e.g., NeurIPS, ICML, AAAI, CVPR, ACL, etc.).
Proficiency in Python and experience with AI/ML frameworks (e.g., PyTorch, TensorFlow).
Experience with multi-agent AI systems and their architectural design.
Project Scope
The project involves developing a sophisticated multi-agent system that automates hypothesis generation in biomedical research using large biomedical research data sets.
We are open on compensation models, but compensation will be aligned to local norms. We would consider part-time or full-time.

Role: GCP Data Engineer
Notice Period: Immediate Joiners
Experience: 5+ years
Location: Remote
Company: Deqode
About Deqode
At Deqode, we work with next-gen technologies to help businesses solve complex data challenges. Our collaborative teams build reliable, scalable systems that power smarter decisions and real-time analytics.
Key Responsibilities
- Build and maintain scalable, automated data pipelines using Python, PySpark, and SQL.
- Work on cloud-native data infrastructure using Google Cloud Platform (BigQuery, Cloud Storage, Dataflow).
- Implement clean, reusable transformations using DBT and Databricks.
- Design and schedule workflows using Apache Airflow.
- Collaborate with data scientists and analysts to ensure downstream data usability.
- Optimize pipelines and systems for performance and cost-efficiency.
- Follow best software engineering practices: version control, unit testing, code reviews, CI/CD.
- Manage and troubleshoot data workflows in Linux environments.
- Apply data governance and access control via Unity Catalog or similar tools.
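The pipeline shape described in the responsibilities above can be sketched with standard-library Python only (the real stack would use PySpark, DBT, and Airflow; the function names, record fields, and the list standing in for a warehouse are all illustrative):

```python
import json

def extract(raw: str) -> list[dict]:
    """Pull records from a source (a JSON string standing in for Cloud Storage)."""
    return json.loads(raw)

def transform(records: list[dict]) -> list[dict]:
    """Clean and reshape: drop rows missing an amount, convert rupees to paise."""
    return [
        {"id": r["id"], "amount_paise": int(round(r["amount"] * 100))}
        for r in records
        if r.get("amount") is not None
    ]

def load(records: list[dict], sink: list) -> int:
    """Write to a sink (stand-in for BigQuery); return row count for monitoring."""
    sink.extend(records)
    return len(records)

raw = '[{"id": 1, "amount": 9.5}, {"id": 2, "amount": null}]'
warehouse: list[dict] = []
loaded = load(transform(extract(raw)), warehouse)  # loaded == 1
```

Keeping extract/transform/load as separate pure functions is what makes the unit testing and code-review practices in the last bullet practical: each stage can be verified in isolation before it is wired into an Airflow DAG.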
Required Skills & Experience
- Strong hands-on experience with PySpark, Spark SQL, and Databricks.
- Solid understanding of GCP services (BigQuery, Cloud Functions, Dataflow, Cloud Storage).
- Proficiency in Python for scripting and automation.
- Expertise in SQL and data modeling.
- Experience with DBT for data transformations.
- Working knowledge of Airflow for workflow orchestration.
- Comfortable with Linux-based systems for deployment and troubleshooting.
- Familiar with Git for version control and collaborative development.
- Understanding of data pipeline optimization, monitoring, and debugging.


Join CD Edverse, an innovative EdTech app, as AI Specialist! Develop a deep research tool to generate comprehensive courses and enhance AI mentors. Must have strong Python, NLP, and API integration skills. Be part of transforming education! Apply now.
Position Responsibilities :
- Work with product managers to understand the business workflows/requirements, identify gaps in needs, and propose relevant technical solutions
- Design, implement, and tune changes to the product that work within the time-tracking/project-management environment
- Understand and be sensitive to customer requirements so you can offer alternative solutions
- Keep pace with product releases
- Work within Deltek-Replicon's software development process, expectations and quality initiatives
- Work to accurately evaluate risk and estimate software development tasks
- Strive to continually improve technical and developmental skills
Qualifications :
- Bachelor of Computer Science, Computer Engineering, or related field.
- 4+ years of software development experience (Core: Python v2.7 or higher).
- Strong Data structures, algorithm design, problem-solving, and Quantitative analysis skills.
- Knowledge of how to use microservices and APIs in code.
- TDD unit test framework knowledge (preferably Python).
- Well-versed in basic and advanced Git concepts and their respective commands, and able to handle merge conflicts.
- Basic knowledge of web development technologies, with hands-on experience in at least one web development framework.
- Working knowledge of SQL queries.
- Basic working knowledge of a project management tool such as Jira.
- Good to have: knowledge of Ember.js, C#, and the .NET framework.
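The TDD bullet above can be illustrated with Python's built-in unittest framework; `slugify` is a hypothetical function under test, not part of any product mentioned here:

```python
import unittest

def slugify(title: str) -> str:
    """Hypothetical function under test: lowercase, hyphen-separated slug."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    # In TDD these tests are written first; slugify is then made to pass them.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  A   B "), "a-b")

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

`result.wasSuccessful()` reports whether all tests passed; in a red-green-refactor loop you would see it fail before `slugify` exists, then pass once implemented.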