

Role Overview
We are seeking a highly skilled and experienced Senior AI Engineer with deep expertise in computer vision and architectural design. The ideal candidate will lead the development of robust, scalable AI systems, drive architectural decisions, and contribute significantly to the deployment of real-time video analytics, multi-model systems, and intelligent automation solutions.
Key Responsibilities
Design and lead the architecture of complex AI systems in the domain of computer vision and real-time inference.
Build and deploy models for object detection, image segmentation, classification, and tracking.
Mentor and guide junior engineers on deep learning best practices and scalable software engineering.
Drive end-to-end ML pipelines: from data ingestion and augmentation to training, deployment, and monitoring.
Work with YOLO-based and transformer-based models for industrial use cases.
Lead integration of AI systems into production with hardware, backend, and DevOps teams.
Develop automated benchmarking, annotation, and evaluation tools.
Ensure maintainability, scalability, and reproducibility of models through version control, CI/CD, and containerization.
Required Skills
Advanced proficiency in Python and deep learning frameworks (PyTorch, TensorFlow).
Strong experience with YOLO, segmentation networks (U-Net, Mask R-CNN), and tracking algorithms (Deep SORT).
Sound understanding of real-time video analytics and inference optimization.
Hands-on experience designing model pipelines using Docker, Git, MLflow, or similar tools.
Familiarity with OpenCV, NumPy, and image processing techniques.
Proficiency in deploying models on Linux systems with GPUs or edge devices (Jetson, Coral).
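As a concrete illustration of the detection and tracking fundamentals listed above, here is a minimal sketch of the intersection-over-union (IoU) metric that underpins detection evaluation and Deep SORT-style track association. The function name and the (x1, y1, x2, y2) box format are illustrative assumptions, not a prescribed interface; real detector and tracker outputs vary.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) in pixel coordinates -- an assumed
    format for illustration; YOLO and tracker outputs differ.
    """
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An IoU threshold around 0.5 is a common (but tunable) cutoff for counting a detection as a match.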
Good to Have
Experience with multi-model orchestration, streaming inference (DeepStream), or virtual camera inputs.
Exposure to production-level MLOps practices.
Knowledge of cloud-based deployment on AWS, GCP, or DigitalOcean.
Familiarity with synthetic data generation, augmentation libraries, and 3D modeling tools.
Publications, patents, or open-source contributions in the AI/ML space.
Qualifications
B.E./B.Tech/M.Tech in Computer Science, Electrical Engineering, or related field.
4+ years of proven experience in AI/ML with a focus on computer vision and system-level design.
Strong portfolio or demonstrable projects in production environments.
Technology: Node.js, DynamoDB / MongoDB
Roles:
- Design and implement backend services.
- Redesign existing architecture where needed.
- Design and implement applications using MVC and microservice patterns.
- 9+ years of experience developing service-based applications using Node.js.
- Expert-level skills in developing web applications using JavaScript, CSS and HTML5.
- Experience working on teams that practice BDD (Behavior-Driven Development).
- Understanding of micro-service architecture and RESTful API integration patterns.
- Experience using Node.js for automation and leveraging NPM for package management.
- Solid Object-Oriented design experience, and creating and leveraging design patterns.
- Experience working in a DevOps/Continuous Delivery environment and associated toolsets (e.g., Jenkins, Puppet).
Desired/Preferred Qualifications:
- Bachelor's degree or equivalent experience
- Strong problem solving and conceptual thinking abilities
- Desire to work in a collaborative, fast-paced, start-up like environment
- Experience leveraging Node.js frameworks such as Express.
- Experience with distributed source control management, e.g., Git.

What the role needs
● Review the current DevOps infrastructure and redefine the code-merging strategy per product rollout objectives
● Define a deployment-frequency strategy based on the product roadmap and ongoing product-market-fit tweaks and changes
● Architect benchmark Docker configurations based on the planned stack
● Establish uniformity of environments from developer machines to the multiple production environments
● Plan & execute test automation infrastructure
● Setup automated stress testing environment
● Plan and execute logging and stack-trace tooling
● Review DevOps orchestration tools & choices
● Coordinate with external data centers and AWS for provisioning, outages, or maintenance
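The test-automation and stress-testing items above can be sketched as a minimal concurrent load driver. `run_load` and its parameters are illustrative names, not a prescribed tool; a real harness would wrap HTTP calls to the service under test rather than an arbitrary callable.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(target, requests=100, workers=10):
    """Fire `requests` calls at `target` across `workers` threads
    and report latency percentiles.

    `target` is any zero-argument callable -- in a real setup it
    would wrap a request to the service under test.
    """
    def one_call(_):
        t0 = time.perf_counter()
        target()
        return time.perf_counter() - t0

    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(one_call, range(requests)))

    return {
        "count": len(latencies),
        "p50": latencies[len(latencies) // 2],
        "p95": latencies[int(len(latencies) * 0.95)],
    }
```

In practice the p95/p99 figures, not the average, drive capacity and deploy-frequency decisions, which is why the sketch reports percentiles.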
Requirements
● Extensive experience with AWS cloud infrastructure deployment and monitoring
● Advanced proficiency in programming languages such as Python and Go, including writing production code and scripts
● Experience with infrastructure-as-code and DevOps management tools (Terraform, Packer) for asset management, monitoring, infrastructure cost estimation, and infrastructure version management
● Experience configuring and managing data stores such as MySQL, MongoDB, Elasticsearch, Redis, Cassandra, and Hadoop
● Experience with network, infrastructure and OWASP security standards
● Experience with web server configurations (Nginx, HAProxy), SSL configuration on AWS, and management of sub-domain-based product rollouts for clients
● Experience deploying and monitoring event-streaming and messaging technologies: Kafka, RabbitMQ, NATS.io, Socket.IO
● Understanding of and experience with disaster-recovery plan execution
● Work with other senior team members to devise and execute strategies for data backup and storage
● Stay aware of current CVEs, potential attack vectors, and vulnerabilities, and apply patches as soon as possible
● Handle incident response, troubleshooting, and fixes for various services
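For the incident-response and resilience items above, one common building block is retrying transient failures with exponential backoff. This is a minimal sketch under assumed names and defaults; a zero-argument callable stands in for any flaky operation (an API call, a provisioning step), and production code would typically add jitter and narrower exception handling.

```python
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.1):
    """Retry a flaky zero-argument callable with exponential backoff.

    Names and defaults are illustrative. Re-raises the last error
    if every attempt fails; the delay doubles after each failure.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** attempt))
```

Backoff keeps automated remediation from hammering an already-degraded dependency, which matters during the outages and maintenance windows mentioned above.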



