
About the Role
We are looking for a passionate AI Engineer Intern (B.Tech, M.Tech / M.S. or equivalent) with strong foundations in Artificial Intelligence, Computer Vision, and Deep Learning to join our R&D team.
You will help us build and train realistic face-swap and deepfake video models, powering the next generation of AI-driven video synthesis technology.
This is a remote, individual-contributor role offering exposure to cutting-edge AI model development in a startup-like environment.
Key Responsibilities
- Research, implement, and fine-tune face-swap / deepfake architectures (e.g., FaceSwap, SimSwap, DeepFaceLab, LatentSync, Wav2Lip).
- Train and optimize models for realistic facial reenactment and temporal consistency.
- Work with GANs, VAEs, and diffusion models for video synthesis.
- Handle dataset creation, cleaning, and augmentation for face-video tasks.
- Collaborate with the AI core team to deploy trained models in production environments.
- Maintain clean, modular, and reproducible pipelines using Git and experiment-tracking tools.
Required Qualifications
- B.Tech, M.Tech / M.S. (or equivalent) in AI / ML / Computer Vision / Deep Learning.
- Certifications in AI or Deep Learning (DeepLearning.AI, NVIDIA DLI, Coursera, etc.).
- Proficiency in PyTorch or TensorFlow, OpenCV, FFmpeg.
- Understanding of CNNs, Autoencoders, GANs, Diffusion Models.
- Familiarity with datasets like CelebA, VoxCeleb, FFHQ, DFDC, etc.
- Good grasp of data preprocessing, model evaluation, and performance tuning.
Preferred Skills
- Prior hands-on experience with face-swap or lip-sync frameworks.
- Exposure to 3D morphable models, NeRF, motion transfer, or facial landmark tracking.
- Knowledge of multi-GPU training and model optimization.
- Familiarity with Rust / Python backend integration for inference pipelines.
What We Offer
- Work directly on production-grade AI video synthesis systems.
- Remote-first, flexible working hours.
- Mentorship from senior AI researchers and engineers.
- Opportunity to transition into a full-time role upon outstanding performance.
Location: Remote | Stipend: ₹10,000/month | Duration: 3–6 months

About Synorus
About Techjays
At Techjays, we build production-grade AI platforms for global clients. We operate at the intersection of backend engineering, distributed systems, and applied AI — delivering secure, scalable, and enterprise-ready intelligent systems. Our team has built and scaled products at Google, Akamai, NetApp, ADP, Cognizant, and Capgemini.
About the Role
This is not a feature-delivery role. We are looking for an AI Lead who can architect, own, and scale intelligent backend systems end-to-end. You will drive both technical direction and execution — working across LLM integrations, RAG pipelines, agentic AI workflows, and cloud-native backend systems for global clients.
What You'll Do
- Architect and scale backend systems powering AI-driven applications
- Design and implement RAG pipelines, AI agents, and LLM integrations
- Own systems end-to-end — from architecture to deployment and scaling
- Integrate and optimize LLMs (Claude, GPT, Gemini) for real-world production use cases
- Build high-performance distributed systems with observability and cost efficiency
- Lead backend and AI initiatives with strong technical ownership
- Mentor engineers and raise the technical bar across teams
- Collaborate with product and AI teams to deliver AI-native solutions
What We're Looking For
- 6–10 years of strong backend engineering experience
- Hands-on expertise in Python (FastAPI / Django / Flask)
- Deep understanding of Generative AI and LLM-based systems
- Strong experience with RAG pipelines and Vector Databases (Pinecone, FAISS, ChromaDB, Weaviate)
- Solid knowledge of Agentic AI — building autonomous agents and multi-agent workflows
- Proficiency in AWS or GCP in production environments
- Experience with distributed systems, microservices, and system design
- Strong grasp of Data Structures, Algorithms, and Design Patterns
- Familiarity with WebSockets, Git, Linux/Unix, and CI/CD
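To illustrate the retrieval step at the heart of the RAG pipelines mentioned above, here is a toy sketch in pure Python. It is not tied to any of the vector databases listed (Pinecone, FAISS, ChromaDB, Weaviate); a production system would use learned embeddings and one of those stores, while this uses a hypothetical bag-of-words "embedding" purely for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real RAG pipeline would use a learned embedding model instead.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # The "R" in RAG: rank documents by similarity to the query, keep top k.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Vector databases store embeddings for similarity search.",
    "Kafka handles real-time event streaming.",
    "RAG augments an LLM prompt with retrieved context.",
]
# The "AG": splice the retrieved context into the prompt sent to the LLM.
question = "How does retrieval augment an LLM?"
context = retrieve(question, docs, k=1)[0]
prompt = f"Context: {context}\n\nQuestion: {question}"
```

The same shape scales up directly: swap `embed` for a model call and the sorted list for an approximate-nearest-neighbor index, and the retrieve-then-prompt flow is unchanged.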
Nice to Have
- Experience with Anthropic Claude API and Claude Code
- Familiarity with real-time data systems or streaming (Kafka, etc.)
- MLOps and AI system lifecycle experience
- Optimizing AI systems for latency, cost, and scalability
Who You Are
- You think in systems, not just features
- You take full ownership of what you build
- You are comfortable navigating fast-moving, ambiguous environments
- You stay updated with the latest in Generative AI and backend technologies
- Strong communicator who can collaborate across teams and global clients
What We Offer
- Competitive compensation (Best in Industry)
- Work on production-grade AI systems used by global clients
- Exposure to cutting-edge AI tools and frameworks
- A culture that values clarity, integrity, and continuous growth
Job description
Job Overview:
The position requires an experienced, ambitious candidate who is passionate about technology and self-driven. We offer a challenging workplace that welcomes innovative ideas and provides growth opportunities and a positive environment for accomplishing goals. Our purpose is to create abundance for everyone we touch.
Job Description:
- Experience with open-source platforms in designing/developing web-based applications.
- Strong knowledge of Python, including application/package/module development, tuning, and debugging tools.
- Proficient understanding of Python identifiers, reserved words, basic operators, variable types, and user-defined exception handling and their usage.
- In-depth knowledge of Python dictionaries and the default modules included in Python (string, datetime, numbers, and other commonly used functionality).
- Proficient understanding of sequences and their differences (tuples vs. lists).
- File manipulation using Python (create, edit, delete, and view files).
- Ability to organize code logically, following procedures that are well defined, documented, and testable.
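The sequence distinction and file-manipulation points above boil down to behavior like the following minimal sketch (the file path is an arbitrary example):

```python
import os
import tempfile

# Tuples are immutable sequences; lists are mutable.
point = (1, 2)
try:
    point[0] = 9              # raises TypeError: tuples cannot be modified in place
except TypeError:
    tuple_is_immutable = True

coords = [1, 2]
coords[0] = 9                 # fine: lists support in-place assignment
coords.append(3)              # and in-place growth

# Create, edit (append to), view, and delete a file.
path = os.path.join(tempfile.gettempdir(), "demo.txt")   # arbitrary example path
with open(path, "w") as f:    # "w" creates (or overwrites) the file
    f.write("hello\n")
with open(path, "a") as f:    # "a" appends, i.e. edits without overwriting
    f.write("world\n")
with open(path) as f:         # default mode "r" reads (views) the file
    contents = f.read()
os.remove(path)               # delete
```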
Skills Required:
- Strong understanding of OOP and design patterns (code-design skills in Python-based object-oriented and functional programming).
- Developing web applications with Python; hands-on experience using MVC frameworks like Django.
- Exposure to code versioning systems such as Atlassian Bitbucket.
- In-depth knowledge of JavaScript and jQuery.
- Excellent written and verbal skills with demonstrated interpersonal and organizational abilities.
- Developing core infrastructure in Python and Django.
- Developing models and business logic (e.g., transactions, payments, diet plans, search).
- Architecting servers and services that enable new product features.
- Building out newly enabled product features.
- Monitoring system uptime and errors to drive us toward a high-performing and reliable product.
- Taking ownership and understanding the need for code quality, elegance, and robust infrastructure.
- Working collaboratively on a software development team.
- Building scalable web applications.
Skills:
- Minimum 4 years of industry or open-source experience.
- Proficient in at least one OO language: Python (preferred) / Golang / Java.
- Writing high-performance, reliable and maintainable code.
- Good knowledge of database structures, theories, principles, and practices.
- Experience working with AWS components (EC2, S3, RDS, SQS, ECS, Lambda).
The candidate should have strong knowledge of:
1. Core Java
2. JSP
3. Spring Framework
4. Spring Boot
5. SOAP and REST web services
6. Application security
The candidate should have 4 to 6 years of work experience in Java and the above-mentioned technologies, good Oracle Database knowledge, and good communication skills.
Work experience in the BFSI domain will be an added advantage.
This role is based in Mumbai and requires availability to work from the office.
Responsibilities:
● Lead the design and development of sophisticated, high-availability, and secure server-side applications with a primary focus on Golang.
● Collaborate with cross-functional teams to understand requirements, architect solutions, and deliver high-quality software products.
● Mentor and guide junior engineers, sharing your engineering expertise and best practices to foster skill development within the team.
● Analyze and optimize the performance, scalability, and reliability of existing Golang applications, making strategic improvements where necessary.
● Design and implement automated unit and integration tests to ensure code quality, maintainability, and stability.
● Stay up to date with the latest advancements in software technologies, recommending their adoption when appropriate.
● Champion code reviews, architectural discussions, and technical documentation to maintain high development standards.
● Troubleshoot and resolve complex issues, providing innovative solutions to overcome challenges.
● Contribute to the recruitment and hiring process by participating in interviews, evaluating candidates, and providing input on hiring decisions.
Requirements
● Bachelor's or Master's degree in Computer Science or a related field.
● 3+ years of experience in software development, with substantial experience in Golang and cloud infrastructure.
● Expert-level proficiency in designing and developing high-performance, concurrent applications with Golang.
● Experience with distributed systems, microservices architecture, and containerization (e.g., Docker, Kubernetes).
● Solid knowledge of software testing methodologies and tools, including unit and integration testing for Golang applications.
● Demonstrated ability to lead projects, collaborate effectively with teams, and mentor junior engineers.
● Excellent problem-solving and analytical skills, with the ability to tackle complex technical challenges.
● Prior experience in the FinTech domain would be an added advantage.
Amagi helps bring entertainment to hundreds of millions of consumers, leading the transformation in media consumption. We believe in a connected ecosystem bringing content owners, distribution platforms, consumers, and advertisers together to create great experiences.
Amagi grew by 136% last year and is on its way to doubling again this year. The market leader in FAST (Free Ad-supported Streaming TV), it delivers more than 500 media brands to 1,500+ endpoints and is growing exponentially.
We are looking for Software Engineers to join our engineering team. You will work with a team of engineers building cutting-edge, next-generation media technology software components using the latest cloud tech stacks.
Key responsibilities include (but are not limited to):
● Design and write code with cutting-edge technologies to improve the availability, scalability, latency, and efficiency of Amagi products
● Participate in code and design reviews to maintain our high development standards
● Engage in service capacity and demand planning, software performance analysis, tuning and optimization
● Collaborate with product teams to define and prototype feature specifications
● Work closely with Platform Engineering team in building and scaling back-end services as well as performing root cause analysis investigations
● Design, build, analyze and fix large-scale systems
● Learn full stack performance tuning and optimization
● Debug and modify complex, production software
You will excel in this role if you have:
● A bachelor's/master's degree in Computer Science, with 2 to 6 years of experience in building highly available and scalable products.
● Worked in product software development teams, taking individual module-level responsibility and taking the product to production/customer deployments.
● A love of writing code in one or more of Python, Golang, or RoR.
● Worked on building back-end systems around DBMSs, caches, NoSQL, and web and app servers.
● A passion for algorithms, design patterns, open-source technologies, and good software design in general.
● Desirably, prior experience working on any of the public cloud infrastructures.
● Managing and guiding a team of junior developers for timely delivery of product milestones
● Optimization of the application for maximum speed and scalability
● Implementation of security and data protection
● Design and implementation of data storage solutions
● Design & Build: Designing and developing high-volume, low-latency applications for mission-critical systems, delivering high availability and performance.
● Collaborate: Collaborating within your product streams and team to bring best practices and leverage a world-class tech stack.
● Measurable Outcome: You will need to set quantifiable objectives that encapsulate the quality attributes of a system; the fitness of the application is measured against these marks.
● DevOps: You will need to set up all the essentials (tracking/alerting) to make sure the infrastructure/software you build works as expected.
● Design and development of our REST APIs
● Help maintain code quality, architecture, and automation
Required Knowledge and Skills
● 3–5 years of experience working in backend development technologies and DevOps
● Highly curious and ready to dive into complex technical challenges
● Proficiency in development and scripting in Python and the Django/Flask frameworks
● Database design and management, including staying up on the latest practices; bonus points for MySQL and MongoDB
● User authentication and authorization between multiple systems, servers, and environments
● Integration of multiple data sources and databases into one system
● Management of the hosting environment, including database administration and scaling an application to support load changes
● Setup and administration of backups
● Understanding the differences between multiple delivery platforms, such as mobile vs. desktop, and optimizing output to match the specific platform
● Creating database schemas that represent and support business processes
● Implementing automated testing platforms and unit tests
● Understanding of session management in a distributed server environment
● Server management and deployment for the relevant environment
● Appreciation for clean and well-documented code
● Hands-on experience with architecture and structural design patterns
● Expertise in designing, developing, deploying, and integrating RESTful APIs
● Ability to understand business requirements and translate them into technical requirements
● A knack for benchmarking and optimization
● Proficient understanding of code versioning tools, such as Git
Personality
● Excellent communication skills: written, verbal, and presentation.
● You should be a team player.
● You should approach problem-solving positively, with a structured thought process.
● Interest in working in a fast-paced start-up environment with a large amount of learning.
● Good understanding of different frameworks, and the ability to pick up new technologies with ease.
● You should be agile enough to recognize the need for new technologies/frameworks and learn them for better product performance.
- Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field. Relevant experience of at least 3 years in lieu of the above if from a different stream of education.
- Well-versed in, with 3+ years of hands-on demonstrable experience in:
  ▪ Stream and batch Big Data pipeline processing using Apache Spark and/or Apache Flink
  ▪ Distributed cloud-native computing, including serverless functions
  ▪ Relational, object-store, document, graph, etc. database design and implementation
  ▪ Microservices architecture; API modeling, design, and programming
- 3+ years of hands-on development experience in Apache Spark using Scala and/or Java.
- Ability to write executable code for services using Spark RDD, Spark SQL, Structured Streaming, Spark MLlib, etc., with a deep technical understanding of the Spark processing framework.
- In-depth knowledge of standard programming languages such as Scala and/or Java.
- 3+ years of hands-on development experience in one or more libraries and frameworks such as Apache Kafka, Akka, Apache Storm, Apache NiFi, ZooKeeper, and the Hadoop ecosystem (i.e., HDFS, YARN, MapReduce, Oozie, and Hive); extra points if you can demonstrate your knowledge with working examples.
- 3+ years of hands-on development experience in one or more relational and NoSQL datastores such as PostgreSQL, Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc.
- Practical knowledge of distributed systems, including partitioning, bucketing, the CAP theorem, replication, horizontal scaling, etc.
- Passion for distilling large volumes of data and analyzing performance, scalability, and capacity issues in Big Data platforms.
- Ability to clearly distinguish between system and Spark job performance, and to perform Spark performance tuning and resource optimization.
- Perform benchmarking/stress tests and document best practices for different applications.
- Proactively work with tenants on improving overall performance and ensure the system is resilient and scalable.
- Good understanding of virtualization and containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
- Well-versed, with demonstrable working experience, in API management, API gateways, service mesh, identity and access management, and data protection and encryption.
- Hands-on working experience with DevOps tools and platforms, viz. Jira, Git, Jenkins, code quality and security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, etc.
- Well-versed in AWS, Azure, and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS, Azure, and/or Google Cloud in any of the categories: Compute or Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity, & Compliance (or equivalent demonstrable cloud-platform experience).
- Good understanding of storage, networks, and storage-networking basics, which will enable you to work in a cloud environment.
- Good understanding of network, data, and application security basics, which will enable you to work in a cloud as well as a business applications / API services environment.
- B.Tech/BE or M.Tech/ME in Computer Science or equivalent from a reputed college.
- 7+ years of experience in building large-scale applications.
- Strong problem-solving skills and a solid grasp of data structures and algorithms.
- Experience with distributed systems handling large amounts of data.
- Excellent coding skills in Java / Python / Node / Go.
- Very good understanding of web technologies.
- Very good understanding of any RDBMS and/or messaging.
