
Job Title: Python Developer (FastAPI)
Experience Required: 4+ years
Location: Pune, Bangalore, Hyderabad, Mumbai, Panchkula, Mohali
Shift: Night Shift, 6:30 PM to 3:30 AM IST
About the Role
We are seeking an experienced Python Developer with strong expertise in FastAPI to join our engineering team. The ideal candidate should have a solid background in backend development, RESTful API design, and scalable application development.
Required Skills & Qualifications
· 4+ years of professional experience in backend development with Python.
· Strong hands-on experience with FastAPI (or Flask/Django with migration experience).
· Familiarity with asynchronous programming in Python.
· Working knowledge of version control systems (Git).
· Good problem-solving and debugging skills.
· Strong communication and collaboration abilities.
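The asynchronous-programming requirement above can be illustrated with a minimal stdlib sketch (coroutine names are invented for illustration); FastAPI endpoints rely on the same async/await model to serve concurrent I/O-bound requests:

```python
import asyncio

# Hypothetical I/O-bound calls, e.g. two downstream service requests.
async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.01)  # stands in for a network round-trip
    return {"id": user_id, "name": f"user-{user_id}"}

async def fetch_orders(user_id: int) -> list:
    await asyncio.sleep(0.01)
    return [{"user_id": user_id, "order": n} for n in range(2)]

async def get_profile(user_id: int) -> dict:
    # Run both calls concurrently instead of sequentially --
    # the core benefit an async FastAPI endpoint gets from await.
    user, orders = await asyncio.gather(fetch_user(user_id), fetch_orders(user_id))
    return {**user, "orders": orders}

profile = asyncio.run(get_profile(7))
print(profile["name"], len(profile["orders"]))
```

In a FastAPI service the `get_profile` coroutine would be the body of an `async def` path operation; the event loop overlaps the waits instead of blocking a thread per request.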

About Hunarstreet Technologies Pvt Ltd
At Hunarstreet Technologies Pvt Ltd, we specialize in delivering India’s fastest hiring solutions, tailored to meet the unique needs of businesses across various industries. Our mission is to connect companies with exceptional talent, enabling them to achieve their growth and operational goals swiftly and efficiently.
We achieve an 87% relevancy rate in matching candidates to job positions and a 62% success rate in closing positions shared with us.
Similar jobs
Role: Technical Co-Founder
Experience: 3+ years (Mandatory)
Compensation: Equity Only (No Salary)
Requirements:
Strong full-stack development skills
Experience building web applications from scratch
Able to manage complete tech independently
Startup mindset & ownership attitude
Job Title: AI Engineer - NLP/LLM Data Product Engineer
Location: Chennai, TN - Hybrid
Duration: Full time
About the Role:
We are growing our Data Science and Data Engineering team and are looking for an experienced AI Engineer specializing in creating GenAI LLM solutions. This position involves collaborating with clients and their teams, discovering gaps for automation using AI, designing customized AI solutions, and implementing technologies to streamline data entry processes within the healthcare sector.
Responsibilities:
· Conduct detailed consultations with clients' functional teams to understand client requirements; one use case relates to handwritten medical records.
· Analyze existing data entry workflows and propose automation opportunities.
Design:
· Design tailored AI-driven solutions for the extraction and digitization of information from handwritten medical records.
· Collaborate with clients to define project scopes and objectives.
Technology Selection:
· Evaluate and recommend AI technologies, focusing on NLP, LLM and machine learning.
· Ensure seamless integration with existing systems and workflows.
Prototyping and Testing:
· Develop prototypes and proof-of-concept models to demonstrate the feasibility of proposed solutions.
· Conduct rigorous testing to ensure accuracy and reliability.
Implementation and Integration:
· Work closely with clients and IT teams to integrate AI solutions effectively.
· Provide technical support during the implementation phase.
Training and Documentation:
· Develop training materials for end-users and support staff.
· Create comprehensive documentation for implemented solutions.
Continuous Improvement:
· Monitor and optimize the performance of deployed solutions.
· Identify opportunities for further automation and improvement.
Qualifications:
· Advanced degree in Computer Science, Artificial Intelligence, or a related field (Master's or PhD required).
· Proven experience in developing and implementing AI solutions for data entry automation.
· Expertise in NLP, LLM and other machine-learning techniques.
· Strong programming skills, especially in Python.
· Familiarity with healthcare data privacy and regulatory requirements.
Additional Qualifications (great to have):
An ideal candidate will have expertise in the most current LLM/NLP models, particularly in the extraction of data from clinical reports, lab reports, and radiology reports. The ideal candidate should have a deep understanding of EMR/EHR applications and patient-related data.
Designation: Python Developer
Experienced in AI/ML
Location: Turbhe, Navi Mumbai
CTC: 6-12 LPA
Years of Experience: 2-5 years
At Arcitech.ai, we’re redefining the future with AI-powered software solutions across education, recruitment, marketplaces, and beyond. We’re looking for a Python Developer passionate about AI/ML, who’s ready to work on scalable, cloud-native platforms and help build the next generation of intelligent, LLM-driven products.
💼 Your Responsibilities
AI/ML Engineering
- Develop, train, and optimize ML models using PyTorch/TensorFlow/Keras.
- Build end-to-end LLM and RAG (Retrieval-Augmented Generation) pipelines using LangChain.
- Collaborate with data scientists to convert prototypes into production-grade AI applications.
- Integrate NLP, Computer Vision, and Recommendation Systems into scalable products.
- Work with transformer-based architectures (BERT, GPT, LLaMA, etc.) for real-world AI use cases.
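The RAG pipelines named above reduce to: retrieve the most relevant context for a query, then augment the LLM prompt with it. A rough plain-Python sketch of the retrieval step (LangChain would supply real embeddings and retrievers; the word-overlap scorer and document store here are toy stand-ins):

```python
from collections import Counter

# Toy document store; in a real pipeline these would be embedded chunks
# in a vector database.
DOCS = [
    "FastAPI is a Python web framework for building APIs",
    "LLaMA and GPT are transformer-based language models",
    "Kubernetes schedules containers across a cluster",
]

def score(query: str, doc: str) -> int:
    # Word-overlap stand-in for vector similarity.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 1) -> list:
    return sorted(DOCS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Augment the prompt with retrieved context before calling the LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("which language models are transformer based?")
print(prompt)
```

A production version swaps `score` for embedding similarity and sends `prompt` to the model; the retrieve-then-augment shape is the same.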
Backend & Systems Development
- Design, develop, and maintain robust Python microservices with REST/GraphQL APIs.
- Implement real-time communication with Django Channels/WebSockets.
- Containerize AI services with Docker and deploy on Kubernetes (EKS/GKE/AKS).
- Configure and manage AWS (EC2, S3, RDS, SageMaker, CloudWatch) for AI/ML workloads.
Reliability & Automation
- Develop background task queues with Celery, ensuring smart retries and monitoring.
- Implement CI/CD pipelines for automated model training, testing, and deployment.
- Write automated unit & integration tests (pytest/unittest) with ≥80% coverage.
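The "smart retries" point above can be sketched in plain Python (Celery exposes the same idea through task `max_retries` and `retry(countdown=...)`; this stdlib decorator is illustrative only, with invented names):

```python
import functools
import time

def retry(max_retries: int = 3, base_delay: float = 0.01):
    """Retry a flaky callable with exponential backoff, Celery-task style."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_retries:
                        raise  # retries exhausted: surface to monitoring
                    time.sleep(base_delay * 2 ** attempt)  # 0.01s, 0.02s, 0.04s...
        return wrapper
    return decorator

calls = {"n": 0}

@retry(max_retries=3)
def flaky_send():
    # Hypothetical task that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "sent"

print(flaky_send(), calls["n"])  # succeeds on the third attempt
```

The exponential delay keeps retries from hammering a struggling downstream service; a Celery deployment would add jitter and route exhausted tasks to a dead-letter queue for monitoring.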
Collaboration
- Contribute to MLOps best practices and mentor peers in LangChain/AI integration.
- Participate in tech talks, code reviews, and AI learning sessions within the team.
🎓 Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, AI/ML, or related field.
- 2–5 years of experience in Python development with strong AI/ML exposure.
- Hands-on experience with LangChain for building LLM-powered workflows and RAG systems.
- Deep learning experience with PyTorch or TensorFlow.
- Experience deploying ML models and LLM apps into production systems.
- Familiarity with REST/GraphQL APIs and cloud platforms (AWS/Azure/GCP).
- Skilled in Git workflows, automated testing, and CI/CD practices.
🌟 Nice to Have
- Experience with vector databases (Pinecone, Weaviate, FAISS, Milvus) for retrieval pipelines.
- Knowledge of LLM fine-tuning, prompt engineering, and evaluation frameworks.
- Familiarity with Airflow/Prefect/Dagster for data and model pipelines.
- Background in statistics, optimization, or applied mathematics.
- Contributions to AI/ML or LangChain open-source projects.
- Experience with model monitoring and drift detection in production.
🎁 Why Join Us
- Competitive compensation and benefits 💰
- Work on cutting-edge LLM and AI/ML applications 🤖
- A collaborative, innovation-driven work culture 📚
- Opportunities to grow into AI/ML leadership roles 🚀
About OptimizeGEO
OptimizeGEO.ai is our flagship product that helps brands stay visible and discoverable in AI-powered answers. Unlike traditional SEO, which optimizes for keywords and rankings, OptimizeGEO operationalizes GEO principles, ensuring brands are mentioned, cited, and trusted by generative systems (ChatGPT, Gemini, Perplexity, Claude, etc.) and answer engines (featured snippets, voice search, and AI answer boxes).
Founded by industry veterans Kirthiga Reddy (ex-Meta, MD Facebook India) and Saurabh Doshi (ex-Meta, ex-Viacom), the company is backed by Micron Ventures, Better Capital, FalconX, and leading angels including Randi Zuckerberg, Vani Kola, and Harsh Jain.
Role Overview
We are hiring a Senior Backend Engineer to build the data and services layer that powers OptimizeGEO's analytics, scoring, and reporting. This role partners closely with our GEO/AEO domain experts and data teams to translate framework gap analysis, share-of-voice, entity/knowledge-graph coverage, and trust signals into scalable backend systems and APIs.
You will design secure, reliable, and observable services that ingest heterogeneous web and third-party data, compute metrics, and surface actionable insights to customers via dashboards and reports.
Key Responsibilities
- Own backend services for data ingestion, processing, and aggregation across crawlers, public APIs, search consoles, analytics tools, and third-party datasets.
- Operationalize GEO/AEO metrics (visibility scores, coverage maps, entity health, citation/trust signals, competitor benchmarks) as versioned, testable algorithms.
- Data scraping using various tools, and working on volume estimates for accurate consumer insights for brands.
- Design & implement APIs for internal use (data science, frontend) and external consumption (partner/export endpoints), with clear SLAs and quotas.
- Data pipelines & orchestration: batch and incremental jobs, queueing, retries/backoff, idempotency, and cost-aware scaling.
- Storage & modeling: choose fit-for-purpose datastores (OLTP/OLAP), schema design, indexing/partitioning, lineage, and retention.
- Observability & reliability: logging, tracing, metrics, alerting; SLOs for freshness and accuracy; incident response playbooks.
- Security & compliance: authN/authZ, secrets management, encryption, PII governance, vendor integrations.
- Collaborate cross-functionally with domain experts to convert research into productized features and executive-grade reports.
Required Qualifications (Must Have)
- Familiarity with technical SEO artifacts: schema.org/structured data, E-E-A-T, entity/knowledge-graph concepts, and crawl budgets.
- Exposure to AEO/GEO and how LLMs weigh sources, citations, and trust; awareness of hallucination risks and mitigation.
- Experience integrating SEO/analytics tools (Google Search Console, Ahrefs, SEMrush, Similarweb, Screaming Frog) and interpreting their data models.
- Background in digital PR/reputation signals and local/international SEO considerations.
- Comfort working with analysts to co-define KPIs and build executive-level reporting.
Expected Qualifications
- 4+ years of experience building backend systems in production (startups or high-growth product teams preferred).
- Proficiency in one or more of: Python, Node.js/TypeScript, Go, or Java.
- Experience with cloud platforms (AWS/GCP/Azure) and containerized deployment (Docker, Kubernetes).
- Hands-on with data pipelines (Airflow/Prefect, Kafka/PubSub, Spark/Flink or equivalent) and REST/GraphQL API design.
- Strong grounding in systems design, scalability, reliability, and cost/performance trade-offs.
Tooling & Stack (Illustrative)
- Runtime: Python/TypeScript/Go
- Data: Postgres/BigQuery + object storage (S3/GCS)
- Pipelines: Airflow/Prefect, Kafka/PubSub
- Infra: AWS/GCP, Docker, Kubernetes, Terraform
- Observability: OpenTelemetry, Prometheus/Grafana, ELK/Cloud Logging
- Collab: GitHub, Linear/Jira, Notion, Looker/Metabase
Working Model
- Hybrid-remote within India with limited periodic in-person collaboration
- Startup velocity with pragmatic processes; bias to shipping, measurement, and iteration.
Equal Opportunity
OptimizeGEO is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Exp: 7-10 Years
CTC: up to 35 LPA
Skills:
- 6–10 years DevOps / SRE / Cloud Infrastructure experience
- Expert-level Kubernetes (networking, security, scaling, controllers)
- Terraform Infrastructure-as-Code mastery
- Hands-on Kafka production experience
- AWS cloud architecture and networking expertise
- Strong scripting in Python, Go, or Bash
- GitOps and CI/CD tooling experience
Key Responsibilities:
- Design highly available, secure cloud infrastructure supporting distributed microservices at scale
- Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
- Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
- Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
- Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
- Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
- Ensure production-ready disaster recovery and business continuity testing
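The p95/p99 monitoring mentioned above reduces to a percentile computation over latency samples. A minimal sketch using the nearest-rank method (the sample numbers are invented):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Invented latency samples in milliseconds: mostly fast, a few slow, one outlier.
latencies = [12.0] * 90 + [40.0] * 9 + [900.0]

p95 = percentile(latencies, 95)
p99 = percentile(latencies, 99)
print(p95, p99)  # 40.0 40.0 -- the 900 ms outlier only surfaces at p100
```

This is why SLOs target p95/p99 rather than the mean: a single extreme outlier barely moves these percentiles, while a mean would jump from ~14 ms to ~23 ms.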
If interested, kindly share your updated resume at 82008 31681.
Software engineers are the lifeblood of impress.ai. They build the software that powers our platform, the dashboard that recruiters around the world use, and all the other cool things we build and release. We are looking to expand our team with highly skilled backend engineers. As backend engineers, you don’t just build backend APIs and architect databases, you help bring to production the AI prototypes our Analytics/Data team builds, and you ensure that the cloud infrastructure, DevOps, and CI/CD processes that keep us ticking are optimal.
The Job:
The ideal candidate should have a few years of experience under their belt and the technical skill, competencies, and maturity necessary to execute projects independently with minimal supervision. They should also be able to architect engineering solutions that satisfy the business requirements with minimal input from senior software engineers.
At impress.ai our mission is to make hiring fairer for all applicants. We combine I/O Psychology with AI to create an application screening process that gives an opportunity to all candidates to undergo a structured interview. impress.ai has consciously used it to ensure that people were chosen based on their talent, knowledge, and capabilities as opposed to their gender, race, or name.
Responsibilities:
- Execute full software development life cycle (SDLC)
- Write well-designed, testable code
- Produce specifications and determine operational feasibility
- Build and integrate new software components into the Impress Platform
- Develop software verification plans and quality assurance procedures
- Document and maintain software functionality
- Troubleshoot, debug, and upgrade existing systems
- Deploy programs and evaluate user feedback
- Comply with project plans and industry standards
- Develop flowcharts, layouts, and documentation to identify requirements and solutions
You Bring to the Table:
- Proven work experience as a Software Engineer or Software Developer
- Proficiency in software engineering tools
- The ability to develop software in the Django framework (Python) is necessary for the role.
- Excellent knowledge of relational databases, SQL, and ORM technologies
- Ability to document requirements and specifications
- A BSc degree in Computer Science, Engineering, or a relevant field is preferred but not necessary.
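The relational-database bullet above can be illustrated with a stdlib sqlite3 sketch (an ORM such as Django's generates similar SQL under the hood; the table and column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE candidate (id INTEGER PRIMARY KEY, name TEXT, score REAL)")
conn.executemany(
    "INSERT INTO candidate (name, score) VALUES (?, ?)",
    [("asha", 88.0), ("ben", 72.5), ("chen", 91.0)],
)

# Parameterized query; a rough Django ORM equivalent would be
# Candidate.objects.filter(score__gte=80).order_by("-score")
rows = conn.execute(
    "SELECT name, score FROM candidate WHERE score >= ? ORDER BY score DESC", (80,)
).fetchall()
print(rows)
```

The placeholders (`?`) keep the query safe from SQL injection, which is the same guarantee an ORM's queryset API provides by construction.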
Our Benefits:
- Work with cutting-edge technologies like Machine Learning, AI, and NLP and learn from the experts in their fields in a fast-growing international SaaS startup. As a young business, we have a strong culture of learning and development. Join our discussions, brown bag sessions, and research-oriented sessions.
- A work environment where you are given the freedom to develop to your full potential and become a trusted member of the team.
- Opportunity to contribute to the success of a fast-growing, market-leading product.
- Work is important, and so is your personal well-being. The work culture at impress.ai is designed to ensure a healthy balance between the two.
Diversity and Inclusion are more than just words for us. We are committed to providing a respectful, safe, and inclusive workplace. Diversity at impress.ai means fostering a workplace in which individual differences are recognized, appreciated, and respected in ways that fully develop and utilize each person’s talents and strengths. We pride ourselves on working with the best and we know our company runs on the hard work and dedication of our talented team members. Besides having employee-friendly policies and benefit schemes, impress.ai assures unbiased pay purely based on performance.
About Company
Espressif Systems (688018) is a public multinational, fabless semiconductor company established in 2008, with headquarters in Shanghai and offices in Greater China, India, and Europe. We have a passionate team of engineers and scientists from all over the world, focused on developing cutting-edge WiFi-and-Bluetooth, low-power IoT solutions. We have created the popular ESP8266 and ESP32 series of chips, modules, and development boards. By leveraging wireless computing, we provide green, versatile, and cost-effective chipsets. We have always been committed to offering IoT solutions that are secure, robust, and power-efficient. By open-sourcing our technology, we aim to enable developers to use Espressif’s technology globally and build smart connected devices. In July 2019, Espressif made its Initial Public Offering on the Sci-Tech Innovation Board (STAR) of the Shanghai Stock Exchange (SSE).
Espressif has a technology center in Pune. The focus is on embedded software engineering and IoT solutions for our growing customers.
About the Role
Espressif's RainMaker (https://rainmaker.espressif.com/) is a paradigm-shifting IoT cloud platform that provides seamless connectivity between IoT devices and mobile apps, voice assistants, and other services. It is designed with scalability, security, reliability, and operational cost at the center. We are looking for senior cloud engineers who can contribute significantly to this platform through architecture, design, and implementation. It is highly desirable that the candidate has prior experience working on large-scale cloud product development and understands the responsibilities and challenges well. Strong hands-on experience in writing code in Go, Java, or Python is a must.
This is an individual contributor role.
Minimum Qualifications
- BE/B.Tech in Computer Science with 5-10 years of experience.
- Strong Computer Science fundamentals.
- Extensive programming experience in one of these programming languages (Java, Go, Python) is a must.
- Good working experience with any of the cloud platforms: AWS, Azure, Google Cloud Platform.
- Certification in any of these cloud platforms will be an added advantage.
- Good experience in the development of RESTful APIs, handling the security and performance aspects.
- Strong debugging and troubleshooting skills.
- Experience working with RDBMS or any NoSQL database like DynamoDB, MySQL, Oracle.
- Working knowledge of CI/CD tools (Maven/Gradle, Jenkins) and experience in a Linux (or Unix) based environment.
Desired Qualifications
- Exposure to serverless computing frameworks like AWS Lambda, Google Cloud Functions, Azure Functions.
- Some exposure to front-end development tools: HTML5, CSS, JavaScript, React.js/Angular.js.
- Working knowledge of Docker, Jenkins.
- Prior experience working in the IoT domain will be an added advantage.
What to expect from our interview process
- The first step is to email your resume or apply to the relevant open position, along with a sample of something you have worked on, such as a public GitHub repo or side project.
- Next, after shortlisting your profile, a recruiter will get in touch with you via a mechanism that works for you, e.g. email or phone. This will be a short chat to learn more about your background and interests, to share more about the job and Espressif, and to answer any initial questions you have.
- Successful candidates will then be invited to 2 to 3 rounds of technical interviews, based on feedback from the previous round.
- Finally, successful candidates will have interviews with HR.

What you offer us
- Ability to provide technical solutions and support that foster collaboration and innovation.
- Ability to balance a variety of technical needs and priorities according to Espressif's growing needs.
What we offer
- An open-minded, collaborative culture of enthusiastic technologists.
- Competitive salary
- 100% company paid medical/dental/vision/life coverage
- Frequent training by experienced colleagues and chances to take international trips, attend exhibitions, technical meetups, and seminars.
Experience working on Waterfall or Agile (Agile model preferred). Solid understanding of Python scripting and/or frameworks like Django, Flask.
Fulfil’s software engineers develop the next-generation technologies that change how millions of customer orders are fulfilled by merchants. Our products need to handle information at massive scale. We're looking for engineers who bring fresh ideas from all areas into our technology.
As a senior software engineer, you will work on our Python-based ORM and applications that scale to handle millions of transactions every hour. This is mission-critical software, and your primary focus will be building robust and scalable solutions that are easy to maintain.
In this role, you will be collaborating closely with the rest of the team working on different layers of infrastructure in an international environment. Therefore, a commitment to collaborative problem solving, sophisticated design, and quality product are important.
What You’ll Do:
- Own definition and implementation of API interfaces (REST and GraphQL). We take pride in our 100% open API with over 600 endpoints.
- Implement simple solutions to complex business logic that enables our merchants to manage financials, orders and shipments across millions of transactions.
- Build reusable components and packages for future use.
- Translate specs and user stories into reviewable, test covered patches.
- Peer review code and refactor existing code.
- Integrate with our eCommerce partners (Shopify, BigCommerce, Amazon), shipping partners (UPS, USPS, FedEx, DHL) and EDI.
- Manage Kubernetes and Docker based global deployment of our infrastructure.
Requirements We’re Looking for Someone With:
- Experience working with ORMs like SQLAlchemy or Django (2-3 years)
- Experience with SQL and databases (Postgres preferred)
- Experience in developing large server side applications and microservices
- Ability to create high quality code
- Experience with python testing tools (pytest) and test automation
- Familiarity with code versioning tools like GIT
- Strong sense of ownership and leadership quality
- Experienced in the tools of our web stack: Python, Celery, Postgres, Redis, RabbitMQ
Nice to Haves:
- Prior experience at a growth stage Internet/Software company
- Experience with ReactJS, Google Cloud, Heroku
- Cloud deployment and scaling experience
Role – Technical Architect
Job Description
ZipLoan is looking for a strong technology leader in Software Product Engineering, with 12+ years of technical experience and hands-on experience in software product development and ownership. You will head our Platform team, which will be tasked with building ZipLoan's platform layer, along with a set of engineers reporting to you.
Role:
- Understand the business end-to-end in order to drive a use-case driven architecture.
- Identify parts of the code-base which can be made reusable as modules or services.
- Propose architecture improvements to provide reliability and robustness at scale.
- Provide a roadmap for evolution of the technology ecosystem.
- Identify and erase technical debt.
- Provide consultancy to engineering teams on specific design challenges.
- Propose engineering best practices and help teams in adopting them.
Desired Profile:
- Minimum 12 years of experience in core software development.
- Hands-on experience building consumer Web/mobile apps at scale.
- Strong exposure to open-source technology: Python and other languages, Linux, SQL and NoSQL databases, web development frameworks.
- Strong architecture skills.
- Ability to mentor engineering team members effectively.
- Preferably a strong full-stack engineer or at least strong backend skills with some front-end skill.













