
- Develop chatbots and voice assistants on various platforms for diverse business use-cases
- Work on a chatbot framework/architecture using an open-source tool or library
- Implement Natural Language Processing (NLP) for chatbots
- Integrate chatbots with management dashboards and CRMs
- Resolve complex technical design issues by analyzing logs, debugging code, and identifying technical issues/challenges/bugs in the process
- Deploy applications using CI/CD tools
- Design and build highly scalable AI and ML solutions
- Understand business requirements and translate them into technical requirements
- Stay open-minded, flexible, and willing to adapt to changing situations
- Work independently as well as on a team, and learn from colleagues
- Adapt readily to a dynamic start-up environment
- Experience with multi-lingual bots (preferred)
- Experience with bot-to-human escalation
- Optimize applications for maximum speed and scalability
- Come up with new approaches and ideas to improve the current performance of chatbots across multiple domains and build a highly personalized user experience
QUALIFICATIONS: B.Tech/B.E./M.Tech or a related technical discipline from reputed universities
SKILLS REQUIRED :
- Minimum 3+ years of experience in chatbot development using the Rasa open-source framework.
- Hands-on experience building and deploying chatbots.
- Experience in Conversational AI platforms for enterprises using ML and Deep Learning.
- Experience integrating both text-to-speech and speech-to-text transformations.
- Should have a good understanding of various Chatbot frameworks/platforms/libraries.
- Build and evolve/train the NLP platform from natural language text data gathered from users daily.
- Code using primarily Python.
- Experience with bots for platforms like Facebook Messenger, Slack, Twitter, WhatsApp, etc.
- Knowledge of digital assistants such as Amazon Alexa, Google Assistant, etc.
- Experience applying different NLP techniques to problems such as text classification, text summarization, question answering, information retrieval, knowledge extraction, and conversational bot design, potentially with both traditional and Deep Learning techniques
- NLP skills/tools: HMM, MEMM, P/LSA, CRF, LDA, Semantic Hashing, Word2Vec, Seq2Seq, spaCy, NLTK, Gensim, CoreNLP, NLU, NLG, etc.
- Should be familiar with these terms: tokenization, n-grams, stemming, lemmatization, part-of-speech tagging, entity resolution, ontology, lexicology, phonetics, intents, entities, and context (see the spaCy sketch after this list).
- Knowledge of SQL and NoSQL Databases such as MySQL, MongoDB, Cassandra, Redis, PostgreSQL
- Experience with working on public cloud services such as Digital Ocean, AWS, Azure, or GCP.
- Knowledge of Linux shell commands.
- Integration with Chat/Social software like Facebook Messenger, Twitter, SMS.
- Integration with Enterprise systems like Microsoft Dynamics CRM, Salesforce, Zendesk, Zoho, etc.
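As a quick, illustrative demonstration of several terms above (tokenization, lemmatization, part-of-speech tagging, entities), a minimal spaCy sketch, assuming the en_core_web_sm model has been downloaded:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes: python -m spacy download en_core_web_sm
doc = nlp("Book a table in Mumbai for Friday evening.")

for token in doc:
    # token.lemma_ is the lemmatized form; token.pos_ the part-of-speech tag
    print(token.text, token.lemma_, token.pos_)

for ent in doc.ents:
    # Named entities resolved by the statistical model (e.g. GPE, DATE)
    print(ent.text, ent.label_)
```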
MUST HAVE :
- Strong foundation in the Python programming language.
- Experience with various chatbot frameworks, especially Rasa and Dialogflow (a custom-action sketch follows this list).
- Strong understanding of other AI tools and applications like TensorFlow, spaCy, and Google Cloud ML is a big plus.
- Experience with RESTful services.
- Good understanding of HTTPS and Enterprise security.
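For concreteness, a custom action in Rasa (the framework required above) might be sketched as follows; the action name and slot are illustrative, and the rasa-sdk package is assumed:

```python
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

class ActionCheckOrderStatus(Action):
    """Illustrative custom action: reply using a slot value."""

    def name(self) -> Text:
        return "action_check_order_status"

    def run(self, dispatcher: CollectingDispatcher,
            tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        order_id = tracker.get_slot("order_id")  # slot assumed to exist in domain.yml
        dispatcher.utter_message(text=f"Order {order_id} is being processed.")
        return []
```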

Similar jobs
Job Title: Senior Python Developer – Product Engineering
Location: Pune, India
Experience Required: 3 to 7 Years
Employment Type: Full-time
Employment Agreement: Minimum 3 years (At the completion of 3 years, One Time Commitment Bonus will be applicable based on performance)
🏢 About Our Client
Our client is a leading enterprise cybersecurity company offering an integrated platform for Digital Rights Management (DRM), Enterprise File Sync and Share (EFSS), and Content-Aware Data Protection (CDP). With patented technologies for secure file sharing, endpoint encryption, and real-time policy enforcement, the platform helps organizations maintain control over sensitive data, even after it leaves the enterprise perimeter.
🎯 Role Overview
We are looking for a skilled Python Developer with a strong product mindset and experience building scalable, secure, and performance-critical systems. You will join our core engineering team to enhance backend services powering DRM enforcement, file tracking, audit logging, and file sync engines.
This is a hands-on role for someone who thrives in a product-first, security-driven environment and wants to build technologies that handle terabytes of enterprise data across thousands of endpoints.
🛠️ Key Responsibilities
● Develop and enhance server-side services for DRM policy enforcement, file synchronization, data leak protection, and endpoint telemetry.
● Build Python-based backend APIs and services that interact with file systems, agent software, and enterprise infrastructure.
● Work on delta sync, file versioning, audit trails, and secure content preview/rendering services.
● Implement secure file handling, encryption workflows, and token-based access controls across modules (see the sketch after this list).
● Collaborate with DevOps to optimize scalability, performance, and availability of core services across hybrid deployments (on-prem/cloud).
● Debug and maintain production-level services; drive incident resolution and performance optimization.
● Integrate with 3rd-party platforms such as LDAP, AD, DLP, CASB, and SIEM systems.
● Participate in code reviews, architecture planning, and mentoring junior developers.
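As a minimal sketch of the encryption workflows mentioned above (not the client's actual implementation), symmetric file encryption with the cryptography library could look like this:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in production, load from a key store instead
fernet = Fernet(key)

plaintext = b"confidential contract"
token = fernet.encrypt(plaintext)    # authenticated encryption (AES-CBC + HMAC)
assert fernet.decrypt(token) == plaintext
```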
📌 Required Skills & Experience
● 3+ years of professional experience with Python 3.x, preferably in enterprise or security domains.
● Strong understanding of multithreading, file I/O, inter-process communication, and low-level system APIs.
● Expertise in building RESTful APIs, schedulers, workers (Celery; see the sketch after this list), and microservices.
● Solid experience with encryption libraries (OpenSSL, cryptography.io) and secure coding practices.
● Hands-on experience with PostgreSQL, Redis, SQLite, or other transactional and cache stores.
● Familiarity with Linux internals, filesystem hooks, journaling/logging systems, and OS-level operations.
● Experience with source control (Git), containerization (Docker/K8s), and CI/CD.
● Proven ability to write clean, modular, testable, and scalable code for production environments.
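As a small sketch of the Celery worker pattern listed above, using Redis as the broker; the broker URL and task body are placeholders:

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # placeholder broker URL

@app.task
def scan_file(path: str) -> str:
    # Placeholder body: e.g. a DRM policy check or an audit-log write
    return f"scanned {path}"

# Enqueue from application code:  scan_file.delay("/tmp/report.pdf")
# Run the worker with:            celery -A tasks worker --loglevel=info
```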
➕ Preferred/Bonus Skills
● Experience in EFSS, DRM, endpoint DLP, or enterprise content security platforms.
● Knowledge of file diffing algorithms (rsync, delta encoding; see the toy sketch after this list) or document watermarking.
● Prior experience with agent-based software (Windows/Linux), desktop sync tools, or version control systems.
● Exposure to compliance frameworks (e.g., DPDP Act, GDPR, RBI-CSF) is a plus.
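As a toy illustration of the rsync-style delta encoding mentioned in the list above, the sketch below computes weak per-block checksums so unchanged blocks can be matched between two file versions; the block size and modulus follow rsync's convention, but the code is illustrative only:

```python
M = 1 << 16  # modulus used by rsync's weak checksum

def weak_checksum(block: bytes) -> int:
    """rsync-style weak checksum: cheap to recompute as the window rolls."""
    a = sum(block) % M
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % M
    return (b << 16) | a

def block_signatures(data: bytes, size: int = 4096):
    return [weak_checksum(data[i:i + size]) for i in range(0, len(data), size)]

old = b"A" * 8192
new = b"A" * 4096 + b"B" * 4096
# Matching signatures identify blocks that need no re-transfer.
print([s1 == s2 for s1, s2 in zip(block_signatures(old), block_signatures(new))])
```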
🌟 What We Offer
● Work on a patented and mission-critical enterprise cybersecurity platform
● Join a fast-paced team focused on innovation, security, and customer success
● Hybrid work flexibility with competitive compensation and growth opportunities
● Direct impact on product roadmap, architecture, and IP development
Role & Responsibilities:
We are seeking a Software Developer with 2-10 years' experience and strong foundations in Python, databases, and AI technologies. The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows. This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.
Key Responsibilities
• Develop, test, and maintain Python-based applications and APIs.
• Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
• Work with JSON-based data structures for request/response handling.
• Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines (a combined prompt/JSON/PostgreSQL sketch follows this list).
• Collaborate with the product and AI teams to implement new features.
• Debug, troubleshoot, and optimize performance of applications and workflows.
• Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
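As a loose end-to-end sketch of the workflow above (prompt an LLM, parse its JSON reply, persist to PostgreSQL), assuming the openai and psycopg2 client libraries; the model name, table, and connection string are placeholders:

```python
import json
import psycopg2                      # assumed Postgres driver
from openai import OpenAI            # assumed LLM client; reads OPENAI_API_KEY

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",             # placeholder model name
    messages=[{"role": "user",
               "content": "Summarize this complaint as JSON with keys "
                          "'category' and 'urgency': water supply outage"}],
    response_format={"type": "json_object"},  # ask for a JSON reply
)
record = json.loads(resp.choices[0].message.content)

conn = psycopg2.connect("dbname=app user=app")   # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute(
        "INSERT INTO tickets (category, urgency) VALUES (%s, %s)",
        (record["category"], record["urgency"]),
    )
```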
Required Skills & Qualifications
• Strong knowledge of Python (scripting, APIs, data handling).
• Basic understanding of Large Language Models (LLMs) and prompt engineering techniques.
• Experience with JSON data parsing and transformations.
• Familiarity with PostgreSQL or other relational databases.
• Ability to write clean, maintainable, and well-documented code.
• Strong problem-solving skills and eagerness to learn.
• Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
Nice-to-Have (Preferred)
• Exposure to AI/ML frameworks (e.g., LangChain, Hugging Face, OpenAI APIs).
• Experience working in startups or fast-paced environments.
• Familiarity with version control (Git/GitHub) and cloud platforms (AWS, GCP, or Azure).
What We Offer
• The opportunity to define the future of GovTech through AI-powered solutions.
• A strategic leadership role in a fast-scaling startup with direct impact on product direction and market success.
• Collaborative and innovative environment with cross-functional exposure.
• Growth opportunities backed by a strong leadership team.
• Remote flexibility and work-life balance.
About Synorus
Synorus is building a next-generation ecosystem of AI-first products. Our flagship legal-AI platform LexVault is redefining legal research, drafting, knowledge retrieval, and case intelligence using domain-tuned LLMs, private RAG pipelines, and secure reasoning systems.
If you are passionate about AI, legaltech, and training high-performance models — this internship will put you on the front line of innovation.
Role Overview
We are seeking passionate AI/LLM Engineering Interns who can:
- Fine-tune LLMs for legal domain use-cases
- Train and experiment with open-source foundation models
- Work with large datasets efficiently
- Build RAG pipelines and text-processing frameworks
- Run model training workflows on Google Colab / Kaggle / Cloud GPUs
This is a hands-on engineering and research internship — you will work directly with senior founders & technical leadership.
Key Responsibilities
- Fine-tune transformer-based models (Llama, Mistral, Gemma, etc.)
- Build and preprocess legal datasets at scale
- Develop efficient inference & training pipelines
- Evaluate models for accuracy, hallucinations, and trustworthiness
- Implement RAG architectures (vector DBs + embeddings; see the sketch after this list)
- Work with GPU environments (Colab/Kaggle/Cloud)
- Contribute to model improvements, prompt engineering & safety tuning
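As a minimal sketch of the RAG building blocks listed above (vector DB + embeddings), the snippet below indexes two passages in Chroma and retrieves the most relevant one; the collection name and documents are illustrative, and Chroma's built-in default embedding model is assumed:

```python
import chromadb  # assumed vector-DB client

client = chromadb.Client()                 # in-memory instance for the sketch
col = client.create_collection("case_law")  # illustrative collection name

# Index a few passages; Chroma embeds them with its default model.
col.add(
    ids=["s1", "s2"],
    documents=["Section 420 IPC deals with cheating.",
               "Section 302 IPC deals with punishment for murder."],
)

# Retrieve the most relevant passage to ground an LLM prompt.
hits = col.query(query_texts=["What covers cheating?"], n_results=1)
print(hits["documents"][0][0])
```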
Must-Have Skills
- Strong knowledge of Python & PyTorch
- Understanding of LLMs, Transformers, Tokenization
- Hands-on experience with HuggingFace Transformers
- Familiarity with LoRA/QLoRA and PEFT training (see the sketch after this list)
- Data wrangling: Pandas, NumPy, tokenizers
- Ability to handle multi-GB datasets efficiently
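A minimal LoRA setup along the lines of the PEFT item above might look like the following; the base model and hyperparameters are placeholders, and the transformers and peft libraries are assumed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"          # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters to the attention projections only;
# the frozen base weights stay untouched, so training stays cheap.
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,  # illustrative hyperparameters
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()           # typically <1% of total weights
```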
Bonus Skills
(Not mandatory, but a strong plus)
- Experience with RAG / vector DBs (Chroma, Qdrant, LanceDB)
- Familiarity with vLLM, llama.cpp, GGUF
- Worked on summarization, Q&A or document-AI projects
- Knowledge of legal texts (Indian laws/case-law/statutes)
- Open-source contributions or research work
What You Will Gain
- Real-world training on LLM fine-tuning & legal AI
- Exposure to production-grade AI pipelines
- Direct mentorship from engineering leadership
- Research + industry project portfolio
- Letter of experience + potential full-time offer
Ideal Candidate
- You experiment with models on weekends
- You love pushing GPUs to their limits
- You prefer research + implementation over theory alone
- You want to build AI that matters — not just demos
Location - Remote
Stipend - 5K - 10K
Responsibilities:
• Collaborate with cross-functional teams, including front-end developers, product managers, and designers, to understand project requirements and translate them into technical specifications.
• Design and develop server-side logic, APIs, and database schema to support the functionality and performance requirements of our SaaS platform.
• Write clean, modular, and well-documented code using any relevant programming language, preferably Java with Spring Boot.
• Optimize the backend systems for maximum speed and scalability, ensuring high performance and responsiveness of the application.
• Implement data storage solutions using PostgreSQL or other relational databases, ensuring data integrity and security.
• Conduct thorough testing and debugging to identify and resolve any issues or bugs in the backend code.
• Stay up-to-date with emerging technologies, industry trends, and best practices in backend development and contribute to the continuous improvement of our development processes.
Requirements:
• Proven work experience as a Backend Developer or similar role, with a focus on server-side development.
• Proficiency in working with relational databases, particularly PostgreSQL, and writing efficient SQL queries.
• Familiarity with SaaS concepts and architecture.
• Experience with API design and development, including RESTful APIs.
• Solid understanding of software development principles, including object-oriented programming, design patterns, and data structures.
• Experience with version control systems, such as Git.
• Strong problem-solving and analytical skills, with keen attention to detail.
• Excellent communication and teamwork skills, with the ability to collaborate effectively with cross-functional teams.
• Bachelor's degree in Computer Science, Engineering, or a related field is preferred, but not mandatory
- Experience in designing scalable micro-services required
- Sound knowledge of Python and Django; familiarity with Linux and Git
- Deep understanding of how RESTful APIs work (a minimal endpoint sketch follows this list)
- Familiarity with HTML/CSS and templating systems, Redis, RabbitMQ, NGINX preferred
- Bonus: preliminary knowledge of any one of these languages: Golang / JavaScript / Lua
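As a small illustration of the Python/Django and REST items above, a minimal Django REST Framework endpoint might look like the following; the view name and route are illustrative, and djangorestframework is assumed to be installed and configured:

```python
# views.py - a minimal DRF endpoint (illustrative)
from rest_framework.views import APIView
from rest_framework.response import Response

class HealthView(APIView):
    def get(self, request):
        # Wire up in urls.py with: path("health/", HealthView.as_view())
        return Response({"status": "ok"})
```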
Sizzle is an exciting new startup that's changing the world of gaming. At Sizzle, we're building AI to automate gaming highlights, directly from Twitch and YouTube streams. We're looking for a superstar Python expert to help develop and deploy our AI pipeline. The main task will be deploying models and algorithms developed by our AI team and keeping the daily production pipeline running. Our pipeline is centered around several microservices, all written in Python, that coordinate their actions through a database.
We're looking for developers with deep experience in Python, including profiling and improving the performance of production code, multiprocessing/multithreading, and managing a pipeline that is constantly running. AI/ML experience is a plus but not necessary, as is experience with AWS, Docker, and CI/CD practices. If you are a gamer or streamer, or enjoy watching video games and streams, that is also definitely a plus :-)
You will be responsible for:
- Building Python scripts to deploy our AI components into the pipeline and production
- Developing logic to ensure multiple different AI components work together seamlessly through a microservices architecture
- Managing our daily pipeline on both on-premise servers and AWS
- Working closely with the AI engineering, backend and frontend teams
You should have the following qualities:
- Deep expertise in Python, including:
  - Multiprocessing / multithreaded applications
  - Class-based inheritance and modules
  - DB integration, including pymongo and SQLAlchemy (we have MongoDB and PostgreSQL databases on our backend)
- Understanding of Python performance bottlenecks, and how to profile and improve the performance of production code, including:
  - Optimal multithreading / multiprocessing strategies
  - Memory bottlenecks and other bottlenecks encountered with large datasets and use of NumPy / OpenCV / image processing
- Experience in creating soft real-time processing tasks is a plus
- Expertise in Docker-based virtualization, including:
  - Creating and maintaining custom Docker images
  - Deploying Docker images on cloud and on-premise services
- Experience with maintaining cloud applications in AWS environments
- Experience in deploying machine learning algorithms into production (e.g. PyTorch, TensorFlow, OpenCV, etc.) is a plus
- Experience with image processing in Python is a plus (e.g. OpenCV, Pillow, etc.)
- Experience with running Nvidia GPU / CUDA-based tasks is a plus (Nvidia Triton, MLflow)
- Knowledge of video file formats (mp4, mov, avi, etc.), encoding, compression, and using ffmpeg to perform common video processing tasks is a plus (see the parallel-ffmpeg sketch after this list)
- Excitement about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Ideally a gamer or someone interested in watching gaming content online
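As a loose sketch of the multiprocessing-plus-ffmpeg work described above (not Sizzle's actual pipeline), the snippet below fans highlight-clip extraction out over a process pool; the file names and timestamps are placeholders, and the ffmpeg binary is assumed to be on PATH:

```python
import subprocess
from multiprocessing import Pool

def extract_clip(args):
    src, start, duration, out = args
    # -ss before -i seeks quickly; -c copy avoids re-encoding the streams.
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-i", src,
         "-t", str(duration), "-c", "copy", out],
        check=True, capture_output=True,
    )
    return out

if __name__ == "__main__":
    jobs = [("stream.mp4", 120, 30, "clip_1.mp4"),   # placeholder inputs
            ("stream.mp4", 480, 30, "clip_2.mp4")]
    with Pool(processes=2) as pool:
        print(pool.map(extract_clip, jobs))          # clips extracted in parallel
```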
Seniority: We are looking for a mid-to-senior-level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 4 years to 8 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Sizzle works with thousands of gaming streamers to automatically create highlights and social content for them. Sizzle is available at www.sizzle.gg.
lives of API developers and consumers easier. If you love thinking big and delving deep and enjoy envisioning truly elegant solutions, this role is definitely for you.
What you will be Doing
- You will abstract away complex data interactions with easy-to-use APIs that will power several mobile and web applications.
- You will also own, scale, and maintain the computational and storage infrastructure for the various micro-services and long-running jobs, designed and implemented by you and the team.
- We will look to you to make key decisions on the technology stack, architecture, networking, and security. We love working with bleeding-edge technology, especially if it improves the malleability and simplicity of our deliverables.
What you need
- The ideal Backend Engineers are polyglots who are fluent in HTTP and core CS concepts such as algorithms, data structures, and programming paradigms, and always pick the right tools for the right job.
- They have a keen eye for common security vulnerabilities and how to act on them (for example, DDoS attacks, SQL injection, etc.).
- They understand what it takes to work in a startup environment and know when to trade performance for simplicity.
- They fail fast, learn faster, and execute on time.
- Strong communication skills, get-things-done attitude, and empathy
- Strong sense of ownership, drive and obsessive attention to detail.
- Comfortable with iterative development practices and code reviews
- Previous experience as part of a product-oriented team is a plus
Technical Skillsets:
- Languages/environments: Node.js + JavaScript, Golang, TypeScript + Node.js, Clojure/Haskell/F#/Scala
- Frameworks: Koa, Express, Play
- Asynchronous programming frameworks: Akka, Node.js, Tornado
- Databases: MongoDB, PostgreSQL, Bigtable, Dynamo
- Queues: Apache Kafka, NATS, RabbitMQ, ZeroMQ
- Functional programming, FRP (functional reactive programming)
- Microservices, multi-tenant, distributed systems, distributed computing, event sourcing
Good to have Skillsets:
- Clojure/Haskell/F#/Scala
- Apache Kafka, NATS
- Functional programming, FRP (functional reactive programming)
- Event sourcing
- Koa
Why should you consider this role seriously?
- We have an audacious vision of helping companies fight counterfeiting and manage their supply chain more efficiently
- We have built a product and solved problems for some of the largest brands in the country, and tested the platform at scale (with our tags present in over 50 million products already). We have plans to grow 10x in the next year
- Ownership of key problems. Fast-paced environment
- We are a well-balanced team of experienced entrepreneurs and are backed by top investors across India and Silicon Valley (Venture Highway, Startup Buddy, etc.)
- Competitive market salary
- Opportunity to work directly with the CEO, COO, and CTO of the company
- A chance to interact with top-notch executives from multiple industries
- Open vacation policy (and we really mean it!)
- Open pantry
- As a team, we love to travel :) with an off-site every quarter
We are looking for a full-time remote Senior Backend Developer who has worked with big data and stream processing, to solve big technical challenges at scale that will reshape the healthcare industry for generations. You will get the opportunity to be involved in big data engineering, novel machine learning pipelines, and highly scalable backend development. The successful candidate will work in a team of highly skilled and experienced developers, data scientists, and our CTO.
Job Requirements
1) Writing well-tested, readable Python code that is capable of processing large volumes of data
2) Experience with cloud platforms such as GCP, Azure, or AWS is essential
3) The ability to work to project deadlines efficiently and with minimum guidance
4) A positive attitude and love of working within a global distributed team
Skills
1) Highly proficient working with Python
2) Comfort working with large data sets and high-velocity data streams
3) Experienced with microservices and backend services
4) Good relational and NoSQL database working knowledge
5) An interest in the healthcare and medical sectors
6) Technical degree with a minimum of 2+ years of backend, data-heavy development or data engineering experience in Python
7) ETL/ELT experience (desirable)
8) Apache Spark and big data pipelines, and stream data processing, e.g. Kafka, Flink, Kinesis, Event Hub (desirable; see the consumer sketch after this list)
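As a rough sketch of the stream-processing item above (point 8), the snippet below consumes JSON events from a Kafka topic using the kafka-python client; the topic name and broker address are placeholders:

```python
import json
from kafka import KafkaConsumer  # assumed kafka-python client

consumer = KafkaConsumer(
    "patient-events",                        # placeholder topic name
    bootstrap_servers="localhost:9092",      # placeholder broker address
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    auto_offset_reset="earliest",
)

for msg in consumer:
    event = msg.value
    # Downstream: validate, enrich, and route each event.
    print(event.get("patient_id"), event.get("event_type"))
```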
We are looking for a Node.js Developer who is proficient in writing APIs, working with data, and using AWS, and who is capable of applying algorithms, mainly machine learning-based, to solve problems and create or modify features for our students. Your primary focus will be the development of all server-side logic, definition and maintenance of the central database, and ensuring high performance and responsiveness to requests from the front-end. You will also be responsible for integrating the front-end elements built by your co-workers into the application. Therefore, a basic understanding of front-end technologies is necessary as well.
Responsibilities
- Integration of user-facing elements developed by front-end developers with server-side logic
- Writing reusable, testable, and efficient code
- Design and implementation of low-latency, high-availability, and performant applications
- Implementation of security and data protection
- Use of algorithms to drive data analytics and features.
- Ability to use AWS to solve scale issues.
Apply only if you can attend a face-to-face interview in Bangalore.
