
• As a Python full-stack developer, your role will involve designing, developing, and deploying full-stack applications for artificial-intelligence projects, with a focus on low latency and scalability.
• You will also need to optimize applications for better performance and for a large number of concurrent users.
• We are looking for a strong technologist who cares about doing things the right way rather than just doing them, and who thrives in a complex and challenging environment.
Who are we looking for?
• Bachelor's/Master's in Computer Science or equivalent, with at least 3 years of professional experience.
• Solid understanding of design patterns, data structures, and advanced programming techniques
• As an Engineer in our team, you will design, code, test, and debug quality software programs.
• Strong software design and architectural skills in object-oriented and functional programming styles.
• Python, Celery, RabbitMQ, Kafka, Multithreading, Async, Microservices, Docker, Kubernetes.
• Experience in working with Machine Learning Pipelines
• Experience in React.js.
• Experience in Celery, RabbitMQ/Kafka.
• Experience in Unit Testing Tools.
• Experience working with SQL & NoSQL databases such as MySQL and MongoDB.
• Exposure to cloud technologies.
• Demonstrated ability to work in a fast-paced, hyper-growth environment where requirements are constantly changing.
• Nice to have: Experience developing products containing machine learning use cases.
• Familiar with agile techniques like code reviews, pair programming, collective code ownership, clean code, TDD and refactoring.
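Since the stack above emphasizes async handling for low-latency, high-concurrency workloads, here is a minimal sketch using Python's stdlib asyncio; the endpoint names and delays are purely illustrative:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulate a slow downstream call (e.g. a model-serving endpoint).
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main() -> list[str]:
    # Run the calls concurrently instead of sequentially,
    # so total latency is ~max(delays), not their sum.
    return await asyncio.gather(
        fetch("a", 0.01), fetch("b", 0.01), fetch("c", 0.01)
    )

results = asyncio.run(main())
print(results)
```

In a real service the same pattern applies to database queries or microservice calls; Celery/RabbitMQ would handle work that is too slow even for async request handling.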

Job Title: GenAI Intern (Python) - 3 Months Internship (Unpaid)
Location: Ahmedabad (On-Site)
Duration: 3 Months
Stipend: Unpaid Internship
Company: Softcolon Technologies
About the Internship:
Softcolon Technologies is seeking a dedicated GenAI Intern who is eager to delve into real-world AI applications. This internship provides hands-on experience in Generative AI development, focusing on RAG systems and AI Agents. It is an ideal opportunity for individuals looking to enhance their skills in Python-based AI development through practical project involvement.
Eligibility:
- Freshers or currently pursuing BE (IT/CE) or related field
- Strong interest in Generative AI and real-world AI product development
Required Skills (Must Have):
- Basic knowledge of Python
- Basic understanding of Python frameworks such as FastAPI and Django
- Familiarity with APIs and JSON
- Submission of resume, GitHub Profile/Project Portfolio, and any AI/Python project links
What You Will Learn (Internship Goals):
You will gain hands-on experience in:
- Fundamentals of Generative AI (GenAI)
- Building RAG (Retrieval-Augmented Generation) applications
- Working with Vector Databases and embeddings
- Creating AI Agents using Python
- Integrating LLMs such as OpenAI (GPT models), Claude, Gemini
- Prompt Engineering + AI workflow automation
- Building production-ready APIs using FastAPI
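The RAG flow listed above (embed → search → respond) can be sketched end-to-end with a toy bag-of-words "embedding" and cosine similarity; a real system would use a learned embedding model and a vector database, and the sample documents here are made up:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "FastAPI builds production ready APIs",
    "Vector databases store embeddings",
    "Prompt engineering improves LLM output",
]

def retrieve(query: str) -> str:
    # Rank documents by similarity and return the best match;
    # in RAG, this match is then passed to the LLM as context.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

best = retrieve("store embeddings in a vector database")
print(best)
```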
Responsibilities:
- Assist in developing GenAI-based applications using Python
- Support RAG pipeline implementation (embedding + search + response)
- Work on API integrations with OpenAI/Claude/Gemini
- Assist in building backend services using FastAPI
- Maintain project documentation and GitHub updates
- Collaborate with team members for tasks and daily progress updates
Selection Process:
- Resume + GitHub portfolio screening
- Short technical discussion (Python + basics of APIs)
- Final selection by the team
Why Join Us?
- Practical experience in GenAI through real projects
- Mentorship from experienced developers
- Opportunity to work on portfolio-level projects
- Certificate + recommendation (based on performance)
- Potential for a paid role post-internship (based on performance)
How to Apply:
Share your resume and GitHub portfolio link via:
We are looking for a Senior Backend Engineer to build and operate the core AI/ML-backed systems that power large-scale, consumer-facing products. You will work on production-grade AI runtimes, retrieval systems, and ML-adjacent backend infrastructure, making pragmatic tradeoffs across quality, latency, reliability, and cost.
This role is not an entry point into AI/ML. You are expected to already have hands-on experience shipping ML-backed backend systems in production.
At Proximity, you won’t just build APIs - you’ll own critical backend systems end-to-end, collaborate closely with Applied ML and Product teams, and help define the foundations that power intelligent experiences at scale.
Responsibilities -
- Own and deliver end-to-end backend systems for AI product runtime, including orchestration, request lifecycle management, state/session handling, and policy enforcement.
- Design and implement retrieval and memory primitives end-to-end — document ingestion, chunking strategies, embeddings generation, indexing, vector/hybrid search, re-ranking, caching, freshness, and deletion semantics.
- Productionize ML workflows and interfaces, including feature and metadata services, online/offline parity, model integration contracts, and evaluation instrumentation.
- Drive performance, reliability, and cost optimization, owning P50/P95 latency, throughput, cache hit rates, token and inference costs, and infrastructure efficiency.
- Build observability by default, including structured logs, metrics, distributed tracing, guardrail signals, failure taxonomies, and reliable fallback paths.
- Collaborate closely with Applied ML teams on model routing, prompt and tool schemas, evaluation datasets, and release safety gates.
- Write clean, testable, and maintainable backend code, contributing to design reviews, code reviews, and operational best practices.
- Take systems from design → build → deploy → operate, including on-call ownership and incident response.
- Continuously identify bottlenecks and failure modes in AI-backed systems and proactively improve system robustness.
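Chunking, one of the retrieval primitives named above, is often a sliding window over tokens with overlap so that context spanning a boundary lands in two adjacent chunks; the window and overlap sizes below are illustrative, not a recommendation:

```python
def chunk(tokens: list[str], size: int, overlap: int) -> list[list[str]]:
    # Slide a fixed-size window with overlap (assumes overlap < size).
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break
    return chunks

tokens = [f"t{i}" for i in range(10)]
chunks = chunk(tokens, size=4, overlap=1)
print(chunks)
```

Production chunkers usually split on semantic boundaries (sentences, sections) rather than raw token counts, but the overlap idea is the same.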
Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 6–10 years of experience building backend systems in production, with 2–3+ years working on ML/AI-backed products such as search, recommendations, ranking, RAG pipelines, or AI assistants.
- Strong practical understanding of ML system fundamentals, including embeddings, vector similarity, reranking, retrieval quality, and evaluation metrics (precision/recall, nDCG, MRR).
- Proven experience implementing or operating RAG pipelines, covering ingestion, chunking, indexing, query understanding, hybrid retrieval, and rerankers.
- Solid distributed systems fundamentals, including API design, idempotency, concurrency, retries, circuit breakers, rate limiting, and multi-tenant reliability.
- Experience with common ML/AI platform components, such as feature stores, metadata systems, streaming or batch pipelines, offline evaluation jobs, and A/B measurement hooks.
- Strong proficiency in backend programming languages and frameworks (e.g., Go, Java, Python, or similar) and API development.
- Ability to work independently, take ownership of complex systems, and collaborate effectively with cross-functional teams.
- Strong problem-solving, communication, and system-design skills.
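Of the evaluation metrics named above, MRR (mean reciprocal rank) is the simplest to compute once ranked results are available; a minimal sketch with made-up document IDs:

```python
def mrr(ranked_results: list[list[str]], relevant: list[str]) -> float:
    # Mean Reciprocal Rank: average of 1/rank of the first
    # relevant item in each query's ranked result list.
    total = 0.0
    for results, rel in zip(ranked_results, relevant):
        for rank, doc in enumerate(results, start=1):
            if doc == rel:
                total += 1.0 / rank
                break
    return total / len(ranked_results)

# Two queries: the relevant doc appears at rank 1 and rank 2.
score = mrr([["d1", "d2"], ["d3", "d1"]], ["d1", "d1"])
print(score)  # (1/1 + 1/2) / 2 = 0.75
```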
Desired Skills -
- Experience with agentic runtimes, including tool-calling or function-calling patterns, structured outputs, and production guardrails.
- Hands-on exposure to vector and hybrid retrieval stacks such as FAISS, Milvus, Pinecone, or Elasticsearch.
- Experience running systems on Kubernetes, with strong knowledge of observability stacks like OpenTelemetry, Prometheus, Grafana, and distributed tracing.
- Familiarity with privacy, security, and data governance considerations for user and model data.
Benefits
- Best in class compensation: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet engineers, designers, and product leaders — and learn from experts across domains.
- Keep on learning with a world-class team: work on real, production AI systems at scale, challenge yourself daily, and grow alongside some of the best minds in the industry.
Location: Chennai
Job Description:
As an L3 analyst, you will play a key role within the Application Management team, leading services for its customers. This service is responsible for the smooth daily operational running of the Client platform, both in the cloud and on-premises. The Client application is business critical and centrally manages data for trading, settlement, risk, client, and regulatory reporting systems. As an L3 analyst, you will be responsible for supporting complex infrastructure and application issues raised by L1/L2 support teams and business analysts. You will also work on client requests for enhancements and fixes, maintain custom code, and follow best practices such as Agile and DevOps.
This is a technical role requiring good knowledge of SQL/PLSQL and Core Java, along with strong debugging skills while supporting client implementations. You should have a basic understanding of Unix systems and commands. The ability to troubleshoot issues, assist users, and help write specifications is essential. This position also works with product management to improve the software engineering processes and practices associated with continuously building, deploying, and updating software and environments.
- Accountability and primary responsibility/duties:
- A typical day looks like this:
- 40% support issues.
- 40% configuration/development work.
- 20% continuous improvement, automation, CI/CD, and so on.
- Activities would range across the lifecycle from discovery to post-implementation support.
- Understanding the client's requirements, replicating the problem, and providing a solution.
- Understanding and developing new requirements coming in from BAs (Business Analysts) and customers.
- Updating existing functionality based on client needs.
- Developing changes that enhance the product and/or fix production issues.
- Preparing product releases of software components (Java/SQL-PLSQL).
- Contributing to all phases of the development cycle (development, deployment, testing, peer review).
- Creating technical documentation for all changes made.
Required Qualifications/Knowledge/Skills
- Bachelor’s degree in Computer Science or related field
- Strong basics and working experience in the following technologies:
- Core Java.
- SQL / PLSQL coding.
- Object Oriented Programming concepts and data structures.
- Working knowledge on Unix platform
- Working knowledge on XSLT and XML handling
- Working knowledge of app & web servers (JBoss, WebLogic, WebSphere) and debugging skills.
- Advantageous to have:
- Working understanding of CI/CD and DevOps technologies.
- Queuing technologies including Kafka, MQ, Solace.
- Scripting, including Python, Unix shell, and Java.
- Hands-on knowledge of DevOps processes and tools.
- Good interpersonal and communication skills.
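Given the XML-handling and Python-scripting requirements above, here is a minimal sketch using the stdlib `xml.etree.ElementTree`; the trade document shape is invented for illustration and is not the Client platform's actual schema:

```python
import xml.etree.ElementTree as ET

xml_doc = """
<trades>
  <trade id="T1"><amount>100</amount></trade>
  <trade id="T2"><amount>250</amount></trade>
</trades>
"""

root = ET.fromstring(xml_doc)
# Extract each trade's id attribute and amount text into a dict.
amounts = {t.get("id"): int(t.findtext("amount")) for t in root.findall("trade")}
print(amounts)
```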
- Work on a chatbot framework/architecture using an open-source tool or library
- Implement Natural Language Processing (NLP) for chatbots
- Integration of chatbots with Management Dashboards and CRMs
- Resolve complex technical design issues by analyzing the logs, debugging code, and identifying technical issues/challenges/bugs in the process
- Deploy applications using CI/CD tools
- Designing and building highly scalable AI and ML solutions
- Ability to understand business requirements and translate them into technical requirements
- Open-minded, flexible, and willing to adapt to changing situations
- Ability to work independently as well as on a team and learn from colleagues
- High adaptability in a dynamic start-up environment.
- Experience with bot multi-lingual utilization (preferred)
- Experience with bot human escalation
- Ability to optimize applications for maximum speed and scalability
- Come up with new approaches and ideas to improve the current performance of Chatbots across multiple domains and build a highly personalized user experience.
QUALIFICATIONS: B.Tech/B.E./M.Tech or a related technical discipline from reputed universities
SKILLS REQUIRED :
- Minimum 3 years of experience in chatbot development using the Rasa open-source framework.
- Hands-on experience building and deploying chatbots.
- Experience in Conversational AI platforms for enterprises using ML and Deep Learning.
- Experience incorporating both text-to-speech and speech-to-text transformations.
- Should have a good understanding of various Chatbot frameworks/platforms/libraries.
- Build and evolve/train the NLP platform from natural language text data being gathered from users on a daily basis.
- Code using primarily Python.
- Experience with bots for platforms like Facebook Messenger, Slack, Twitter, WhatsApp, etc.
- Knowledge of digital assistants such as Amazon Alexa, Google Assistant, etc.
- Experience applying different NLP techniques to problems such as text classification, text summarization, question answering, information retrieval, knowledge extraction, and conversational bot design, potentially with both traditional and deep learning techniques.
- NLP skills/tools: NLP, HMM, MEMM, P/LSA, CRF, LDA, Semantic Hashing, Word2Vec, Seq2Seq, spaCy, NLTK, Gensim, CoreNLP, NLU, NLG, etc.
- Should be familiar with these terms: tokenization, n-grams, stemming, lemmatization, part-of-speech tagging, entity resolution, ontology, lexicology, phonetics, intents, entities, and context.
- Knowledge of SQL and NoSQL Databases such as MySQL, MongoDB, Cassandra, Redis, PostgreSQL
- Experience with working on public cloud services such as Digital Ocean, AWS, Azure, or GCP.
- Knowledge of Linux shell commands.
- Integration with Chat/Social software like Facebook Messenger, Twitter, SMS.
- Integration with Enterprise systems like Microsoft Dynamics CRM, Salesforce, Zendesk, Zoho, etc.
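Two of the NLP basics listed above, tokenization and n-grams, can be sketched with the stdlib alone; a production bot would use spaCy or NLTK tokenizers, and the sample utterance is invented:

```python
import re

def tokenize(text: str) -> list[str]:
    # Lowercase word tokenizer over letters, digits, and apostrophes.
    return re.findall(r"[a-z0-9']+", text.lower())

def ngrams(tokens: list[str], n: int) -> list[tuple[str, ...]]:
    # Contiguous n-token windows, e.g. bigrams as intent features.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = tokenize("Book a table for two")
bigrams = ngrams(tokens, 2)
print(tokens, bigrams)
```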
MUST HAVE :
- Strong foundation in the Python programming language.
- Experience with various chatbot frameworks especially Rasa and Dialogflow.
- Strong understanding of other AI tools and applications like TensorFlow, spaCy, and Google Cloud ML is a big plus.
- Experience with RESTful services.
- Good understanding of HTTPS and Enterprise security.
Company Introduction –
- Information Security & Data Analytics Series A funded company.
- Working in cutting-edge technologies: using AI for predictive intelligence and facial biometrics.
- Among Top 5 Cyber excellence companies globally (Holger Schulze awards)
- Bronze award for best startup of the year (Indian Express IT awards), only cyber Security Company in top 3.
- More than 100 clients in India.
Job Description:-
Job Title: Python Developer
Key Requirements:-
- Mine data from structured and unstructured data sources.
- Extract data (text, images, and videos) from multiple documents in different formats.
- Extract information and intelligence from data.
- Extract data based on regular expressions.
- Collect data from structured RDBMS databases.
- Work closely with Project/Business/Research teams to provide mined data/intelligence for analysis.
- Should have strong exposure to core Python skills like multiprocessing, multithreading, file handling, and data structures such as JSON, DataFrames, and user-defined data structures.
- Should have excellent knowledge of classes, file handling, and memory manipulation.
- Strong knowledge of Python.
- Strong exposure to front-end languages like CSS, JavaScript, Ajax, etc.
- Should have exposure to requests, Frontera, scrapy-cluster, Elasticsearch, and distributed computing tools like Kafka, HBase, Redis, Zookeeper, and REST APIs.
- Should be familiar with *nix development environment.
- Knowledge of Django will be added advantage.
- Excellent knowledge on Web Crawling/Web scraping.
- Should have used scraping modules like Selenium, Scrapy, and Beautiful Soup.
- Experience with text processing.
- Basics of databases. Good troubleshooting and debugging skills.
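Regex-based extraction, as listed above, can be sketched with the stdlib `re` module; the patterns and sample text below are illustrative and deliberately simplified (real-world email matching needs a more careful pattern):

```python
import re

text = "Contact: alice@example.com, bob@example.org. Invoice #4521 dated 2024-03-15."

# Simple email pattern: local part, "@", domain, dot, alphabetic TLD.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}", text)
# ISO-style dates: YYYY-MM-DD.
dates = re.findall(r"\d{4}-\d{2}-\d{2}", text)
print(emails, dates)
```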
Experience: 1–4 years
Education
B.Tech, MCA, Computer Engineering
