

VMax eSolutions India Pvt Ltd
https://vmaxindia.com
Jobs at VMax eSolutions India Pvt Ltd
The recruiter has not been active on this job recently. You may apply but please expect a delayed response.
We are seeking an experienced AI Architect to design, build, and scale production-ready AI voice conversation agents deployed locally (on-prem / edge / private cloud) and optimized for GPU-accelerated, high-throughput environments.
You will own the end-to-end architecture of real-time voice systems, including speech recognition, LLM orchestration, dialog management, speech synthesis, and low-latency streaming pipelines—designed for reliability, scalability, and cost efficiency.
This role is highly hands-on and strategic, bridging research, engineering, and production infrastructure.
Key Responsibilities
Architecture & System Design
- Design low-latency, real-time voice agent architectures for local/on-prem deployment
- Define scalable architectures for ASR → LLM → TTS pipelines
- Optimize systems for GPU utilization, concurrency, and throughput
- Architect fault-tolerant, production-grade voice systems (HA, monitoring, recovery)
Voice & Conversational AI
- Design and integrate:
  - Automatic Speech Recognition (ASR)
  - Natural Language Understanding / LLMs
  - Dialogue management & conversation state
  - Text-to-Speech (TTS)
- Build streaming voice pipelines with sub-second response times
- Enable multi-turn, interruptible, natural conversations
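The ASR → LLM → TTS responsibilities above can be pictured as one streaming pipeline. Below is a minimal illustrative sketch in pure Python asyncio; the `asr_stream`, `llm_reply`, and `tts_stream` coroutines are hypothetical stand-ins for real networked backends, not part of any specific product stack:

```python
import asyncio

# Hypothetical stubs -- real backends would stream partial results
# over gRPC or websockets instead of echoing the input.
async def asr_stream(audio_chunks):
    """Emit an interim transcript per audio chunk, then a final one."""
    text = ""
    for chunk in audio_chunks:
        await asyncio.sleep(0)           # yield control, as a network call would
        text += chunk
        yield {"text": text, "final": False}
    yield {"text": text, "final": True}

async def llm_reply(transcript):
    await asyncio.sleep(0)
    return f"You said: {transcript}"

async def tts_stream(reply):
    """Stream synthesized 'audio' word by word so playback starts early."""
    for word in reply.split():
        await asyncio.sleep(0)
        yield word

async def handle_turn(audio_chunks):
    """One conversational turn: streaming ASR -> LLM -> streaming TTS."""
    transcript = ""
    async for partial in asr_stream(audio_chunks):
        transcript = partial["text"]
        if partial["final"]:
            break
    reply = await llm_reply(transcript)
    audio_out = [w async for w in tts_stream(reply)]
    return transcript, audio_out
```

In a production system each stage runs concurrently and the LLM/TTS stages can be cancelled mid-stream when the caller barges in; this sketch only shows the data flow of a single turn.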
Model & Inference Engineering
- Deploy and optimize local LLMs and speech models (quantization, batching, caching)
- Select and fine-tune open-source models for voice use cases
- Implement efficient inference using TensorRT, ONNX, CUDA, vLLM, Triton, or similar
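Much of the throughput gain from servers like Triton and vLLM comes from dynamic batching: queuing requests briefly so one forward pass serves many. A deliberately simplified, single-threaded sketch of that idea follows; the `MicroBatcher` class and its `infer_fn` signature are hypothetical, and real servers also bound the wait time, not just the batch size:

```python
from collections import deque

class MicroBatcher:
    """Toy dynamic batcher: drain queued prompts in fixed-size batches
    so one (simulated) batched forward pass serves several requests."""

    def __init__(self, infer_fn, max_batch=8):
        self.infer_fn = infer_fn      # fn(list[str]) -> list[str], batched
        self.max_batch = max_batch
        self.queue = deque()

    def submit(self, prompt):
        self.queue.append(prompt)

    def flush(self):
        """Run all queued prompts in batches; results stay in order."""
        results = []
        while self.queue:
            size = min(self.max_batch, len(self.queue))
            batch = [self.queue.popleft() for _ in range(size)]
            results.extend(self.infer_fn(batch))
        return results
```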
Infrastructure & Production
- Design GPU-based inference clusters (bare metal or Kubernetes)
- Implement autoscaling, load balancing, and GPU scheduling
- Establish monitoring, logging, and performance metrics for voice agents
- Ensure security, privacy, and data isolation for local deployments
Leadership & Collaboration
- Set architectural standards and best practices
- Mentor ML and platform engineers
- Collaborate with product, infra, and applied research teams
- Drive decisions from prototype → production → scale
Required Qualifications
Technical Skills
- 7+ years in software / ML systems engineering
- 3+ years designing production AI systems
- Strong experience with real-time voice or conversational AI systems
- Deep understanding of LLMs, ASR, and TTS pipelines
- Hands-on experience with GPU inference optimization
- Strong Python and/or C++ background
- Experience with Linux, Docker, Kubernetes
AI & ML Expertise
- Experience deploying open-source LLMs locally
- Knowledge of model optimization:
  - Quantization
  - Batching
  - Streaming inference
- Familiarity with voice models (e.g., Whisper-like ASR, neural TTS)
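As a concrete instance of the quantization item above: symmetric int8 per-tensor quantization stores each weight as an 8-bit integer plus one shared scale. A minimal sketch of the arithmetic, for illustration only (production runtimes typically quantize per-channel or per-group):

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats to [-127, 127]
    using a single per-tensor scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error grows with the tensor's range."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.0, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```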
Systems & Scaling
- Experience with high-QPS, low-latency systems
- Knowledge of distributed systems and microservices
- Understanding of edge or on-prem AI deployments
Preferred Qualifications
- Experience building AI voice agents or call automation systems
- Background in speech processing or audio ML
- Experience with telephony, WebRTC, SIP, or streaming audio
- Familiarity with Triton Inference Server / vLLM
- Prior experience as Tech Lead or Principal Engineer
What We Offer
- Opportunity to architect state-of-the-art AI voice systems
- Work on real-world, high-scale production deployments
- Competitive compensation and equity (if applicable)
- High ownership and technical influence
- Collaboration with top-tier AI and infrastructure talent
Company Description
VMax e-Solutions India Private Limited, based in Hyderabad, is a dynamic organization specializing in Open Source ERP Product Development and Mobility Solutions. As an ISO 9001:2015 and ISO 27001:2013 certified company, VMax is dedicated to delivering tailor-made and scalable products, with a strong focus on e-Governance projects across multiple states in India. The company's innovative technologies aim to solve real-life problems and enhance the daily services accessed by millions of citizens. With a culture of continuous learning and growth, VMax provides its team members opportunities to develop expertise, take ownership, and grow their careers through challenging and impactful work.
About the Role
We’re hiring a Senior Data Scientist with deep real-time voice AI experience and strong backend engineering skills.
You’ll own and scale our end-to-end voice agent pipeline that powers AI SDRs, customer support agents, and internal automation agents on calls. This is a hands-on, highly technical role where you’ll design and optimize low-latency, high-reliability voice systems.
You’ll work closely with our founders, product, and platform teams, with significant ownership over architecture and benchmarks.
What You’ll Do
1. Own the voice stack end-to-end – from telephony / WebRTC entrypoints to STT, turn-taking, LLM reasoning, and TTS back to the caller.
2. Design for real-time – architect and optimize streaming pipelines for sub-second latency, barge-in, interruptions, and graceful recovery on bad networks.
3. Integrate and tune models – evaluate, select, and integrate STT/TTS/LLM/VAD providers (and self-hosted models) for different use-cases, balancing quality, speed, and cost.
4. Build orchestration & tooling – implement agent orchestration logic, evaluation frameworks, call simulators, and dashboards for latency, quality, and reliability.
5. Harden for production – ensure high availability, observability, and robust fault-tolerance for thousands of concurrent calls in customer VPCs.
6. Shape the voice roadmap – influence how voice fits into our broader Agentic OS vision (simulation, analytics, multi-agent collaboration, etc.).
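For the evaluation and dashboard work in item 4, turn latency is usually tracked as percentiles rather than averages, since a good mean can hide a bad tail. A tiny nearest-rank percentile sketch (the sample latencies below are invented for illustration):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile -- sufficient for a latency dashboard."""
    ranked = sorted(samples)
    k = max(0, math.ceil(pct / 100 * len(ranked)) - 1)
    return ranked[k]

# Hypothetical end-to-end turn latencies for one batch of test calls, in ms.
turn_latencies_ms = [180, 220, 250, 300, 410, 95, 640, 210, 230, 275]
p50 = percentile(turn_latencies_ms, 50)
p95 = percentile(turn_latencies_ms, 95)
```

Here the median sits near 230 ms while the 95th percentile is dominated by the 640 ms outlier, which is exactly the kind of tail a sub-second latency budget has to police.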
You’re a Great Fit If You Have
1. 6+ years of software engineering experience (backend or full-stack) in production systems.
2. Strong experience building real-time voice agents or similar systems using:
   - STT / ASR (e.g. Whisper, Deepgram, AssemblyAI, AWS Transcribe, GCP Speech)
   - TTS (e.g. ElevenLabs, PlayHT, AWS Polly, Azure Neural TTS)
   - VAD / turn-taking and streaming audio pipelines
   - LLMs (e.g. OpenAI, Anthropic, Gemini, local models)
3. Proven track record designing and operating low-latency, high-throughput streaming systems (WebRTC, gRPC, websockets, Kafka, etc.).
4. Hands-on experience integrating ML models into live, user-facing applications with real-time inference & monitoring.
5. Solid backend skills with Python and TypeScript/Node.js; strong fundamentals in distributed systems, concurrency, and performance optimization.
6. Experience with cloud infrastructure – especially AWS (EKS, ECS, Lambda, SQS/Kafka, API Gateway, load balancers).
7. Comfortable working in Kubernetes / Docker environments, including logging, metrics, and alerting.
8. Startup DNA – at least 2 years in an early or mid-stage startup where you shipped fast, owned outcomes, and worked close to the customer.
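The VAD / turn-taking bullet above can be illustrated with a deliberately simplified energy-based detector. Production systems use trained VAD models (e.g. Silero VAD or WebRTC VAD), but the "N consecutive silent frames ends the turn" control flow is similar; all names and thresholds here are illustrative:

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame (floats in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def detect_turn_end(frames, threshold=0.02, silence_frames=3):
    """Return the index of the frame that closes the turn, i.e. the
    last of `silence_frames` consecutive low-energy frames, or None
    if the speaker never stops."""
    silent = 0
    for i, frame in enumerate(frames):
        if rms(frame) < threshold:
            silent += 1
            if silent >= silence_frames:
                return i
        else:
            silent = 0                  # speech resumed, reset the counter
    return None
```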
Nice to Have
1. Experience self-hosting AI models (ASR / TTS / LLMs) and optimizing them for latency, cost, and reliability.
2. Telephony integration experience (e.g. Twilio, Vonage, Aircall, SignalWire, or similar).
3. Experience with evaluation frameworks for conversational agents (call quality scoring, hallucination checks, compliance rules, etc.).
4. Background in speech processing, signal processing, or dialog systems.
5. Experience deploying into enterprise VPC / on-prem environments and working with security/compliance constraints.