
About the Role
We are looking for a passionate AI Engineer Intern (B.Tech, M.Tech / M.S. or equivalent) with strong foundations in Artificial Intelligence, Computer Vision, and Deep Learning to join our R&D team.
You will help us build and train realistic face-swap and deepfake video models, powering the next generation of AI-driven video synthesis technology.
This is a remote, individual-contributor role offering exposure to cutting-edge AI model development in a startup-like environment.
Key Responsibilities
- Research, implement, and fine-tune face-swap / deepfake architectures (e.g., FaceSwap, SimSwap, DeepFaceLab, LatentSync, Wav2Lip).
- Train and optimize models for realistic facial reenactment and temporal consistency.
- Work with GANs, VAEs, and diffusion models for video synthesis.
- Handle dataset creation, cleaning, and augmentation for face-video tasks.
- Collaborate with the AI core team to deploy trained models in production environments.
- Maintain clean, modular, and reproducible pipelines using Git and experiment-tracking tools.
Required Qualifications
- B.Tech, M.Tech / M.S. (or equivalent) in AI / ML / Computer Vision / Deep Learning.
- Certifications in AI or Deep Learning (DeepLearning.AI, NVIDIA DLI, Coursera, etc.).
- Proficiency in PyTorch or TensorFlow, OpenCV, FFmpeg.
- Understanding of CNNs, Autoencoders, GANs, Diffusion Models.
- Familiarity with datasets like CelebA, VoxCeleb, FFHQ, DFDC, etc.
- Good grasp of data preprocessing, model evaluation, and performance tuning.
Preferred Skills
- Prior hands-on experience with face-swap or lip-sync frameworks.
- Exposure to 3D morphable models, NeRF, motion transfer, or facial landmark tracking.
- Knowledge of multi-GPU training and model optimization.
- Familiarity with Rust / Python backend integration for inference pipelines.
What We Offer
- Work directly on production-grade AI video synthesis systems.
- Remote-first, flexible working hours.
- Mentorship from senior AI researchers and engineers.
- Opportunity to transition into a full-time role upon outstanding performance.
Location: Remote | Stipend: ₹10,000/month | Duration: 3–6 months

At BigThinkCode, our technology solves complex problems. We are looking for a talented engineer to join our technology team in Chennai.
This is an opportunity to join a growing team and make a substantial impact at BigThinkCode. We have a challenging workplace where we welcome innovative ideas and talent, and offer growth opportunities and a positive environment.
The job description is below for your reference; if you are interested, please share your profile so we can connect and discuss.
Company: BigThinkCode Technologies
URL: https://www.bigthinkcode.com/
Job Role: Software Engineer / Senior Software Engineer
Experience: 2 - 5 years
Work location: Chennai
Work Mode: Hybrid
Joining time: Immediate – 4 weeks
Responsibilities
- Build and enhance backend features as part of the tech team.
- Take ownership of features end-to-end in a fast-paced product/startup environment.
- Collaborate with managers, designers, and engineers to deliver user-facing functionality.
- Design and implement scalable REST APIs and supporting backend systems.
- Write clean, reusable, well-tested code and contribute to internal libraries.
- Participate in requirement discussions and translate business needs into technical tasks.
- Support the technical roadmap through architectural input and continuous improvement.
Required Skills:
- Strong understanding of Algorithms, Data Structures, and OOP principles.
- Experience integrating with third-party systems (payment/SMS APIs, mapping services, etc.).
- Proficiency in Python and experience with at least one framework (Flask / Django / FastAPI).
- Hands-on experience with design patterns, debugging, and unit testing (pytest/unittest).
- Working knowledge of relational or NoSQL databases and ORMs (SQLAlchemy / Django ORM).
- Familiarity with asynchronous programming (async/await, FastAPI async).
- Experience with caching mechanisms (Redis); a minimal sketch follows this list.
- Ability to perform code reviews and maintain code quality.
- Exposure to cloud platforms (AWS/Azure/GCP) and containerization (Docker).
- Experience with CI/CD pipelines.
- Basic understanding of message brokers (RabbitMQ / Kafka / Redis streams).
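To make the asynchronous programming and Redis caching items above more concrete, here is a minimal, hypothetical sketch; the endpoint, module, and cache-key names are illustrative assumptions, not part of any existing codebase:

```python
# Hypothetical sketch only: an async FastAPI endpoint with Redis caching,
# illustrating the async/await and caching skills listed above.
import json

from fastapi import FastAPI
from redis import asyncio as aioredis

app = FastAPI()
cache = aioredis.from_url("redis://localhost:6379", decode_responses=True)

async def load_product_from_db(product_id: int) -> dict:
    # Placeholder for a real database query (e.g., via SQLAlchemy / Django ORM).
    return {"id": product_id, "name": f"product-{product_id}"}

@app.get("/products/{product_id}")
async def get_product(product_id: int) -> dict:
    cache_key = f"product:{product_id}"
    cached = await cache.get(cache_key)  # non-blocking Redis read
    if cached is not None:
        return json.loads(cached)
    product = await load_product_from_db(product_id)
    await cache.set(cache_key, json.dumps(product), ex=300)  # expire after 5 minutes
    return product
```

With a local Redis instance running, a route like this can be served with uvicorn and exercised in tests.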
Benefits:
- Medical cover for employee and eligible dependents.
- Tax beneficial salary structure.
- Comprehensive leave policy.
- Competency development training programs.
What we are looking for
- You must have experience in Python, including development of microservices using Flask/FastAPI/Sanic.
- You have knowledge of at least one Python web framework.
- You have experience in NoSQL databases such as MongoDB.
- You are familiar with relational databases and unstructured data.
- You are familiar with ORM (Object Relational Mapper) libraries.
- You have prior exposure to code versioning tools such as Git.
- You understand the fundamental design principles behind a scalable application.
Good to have
- Experience in cloud services such as AWS
- Knowledge of user authentication and authorization between multiple systems, servers, and environments
- Understanding of threading concepts in Python and multi-process architecture
What you'll do
- Create database schemas that represent and support business processes.
- Integrate multiple data sources and databases into one system.
- Develop and integrate RESTful APIs and services for business use cases.
- Perform unit and integration tests on developed modules (a minimal pytest sketch follows this list).
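As a loose illustration of the unit/integration testing expectation above, here is a small, hypothetical pytest sketch (the route, parameters, and test names are invented for illustration) that exercises a FastAPI endpoint through its built-in TestClient:

```python
# Hypothetical sketch only: pytest tests for a tiny FastAPI route,
# illustrating unit/integration testing of developed modules.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/orders/{order_id}/total")
def order_total(order_id: int, quantity: int = 1, unit_price: float = 0.0) -> dict:
    # Trivial pricing rule used only to have something to test.
    return {"order_id": order_id, "total": round(quantity * unit_price, 2)}

client = TestClient(app)

def test_order_total_happy_path():
    response = client.get("/orders/7/total", params={"quantity": 3, "unit_price": 9.99})
    assert response.status_code == 200
    assert response.json() == {"order_id": 7, "total": 29.97}

def test_order_total_defaults_to_zero():
    response = client.get("/orders/7/total")
    assert response.status_code == 200
    assert response.json()["total"] == 0.0
```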
About the Role
As a result of our rapid growth, we are looking for a Java Backend Engineer to join our existing Cloud Engineering team and take the lead in the design and development of several key initiatives of our existing Miko3 product line as well as our new product development initiatives.
Responsibilities
- Designing, developing and maintaining core system features, services and engines
- Collaborating with a cross-functional team of backend, mobile application, AI, signal processing, and robotics engineers, along with the design, content, and linguistics teams, to realize the requirements of a conversational social robotics platform; this includes investigating design approaches, prototyping new technology, and evaluating technical feasibility
- Ensuring the backend infrastructure is optimized for scale and responsiveness
- Ensuring best practices in design, development, security, monitoring, logging, and DevOps are followed throughout the execution of the project
- Introducing new ideas, products, and features by keeping track of the latest developments and industry trends
- Operating in an Agile/Scrum environment to deliver high-quality software against aggressive schedules
Requirements
- Proficiency in distributed application development lifecycle (concepts of authentication/authorization, security, session management, load balancing, API gateway), programming techniques and tools (application of tested, proven development paradigms)
- Proficiency in working on Linux-based operating systems.
- Working knowledge of container orchestration platforms like Kubernetes
- Proficiency in at least one server-side programming language like Java. Additional languages like Python and PHP are a plus
- Proficiency in at least one server-side framework like Servlets, Spring, or Java Spark (Java).
- Proficient in using ORM/data access frameworks like Hibernate or JPA with Spring or other server-side frameworks.
- Proficiency in at least one data serialization framework: Apache Thrift, Google Protocol Buffers, Apache Avro, Gson, Jackson, etc.
- Proficiency in at least one inter-process communication framework: WebSockets, RPC, message queues, custom HTTP libraries/frameworks (KryoNet, RxJava), etc.
- Proficiency in multithreaded programming and Concurrency concepts (Threads, Thread Pools, Futures, asynchronous programming).
- Experience defining system architectures and exploring technical feasibility tradeoffs (architecture, design patterns, reliability and scaling)
- Experience developing cloud software services and an understanding of design for scalability, performance and reliability
- Good understanding of networking and communication protocols, and proficiency in identifying CPU, memory, and I/O bottlenecks and solving read- and write-heavy workloads.
- Proficiency in the concepts of monolithic and microservice architectural paradigms.
- Proficiency in working on at least one cloud hosting platform like Amazon AWS, Google Cloud, Azure, etc.
- Proficiency in at least one SQL, NoSQL, or graph database like MySQL, MongoDB, OrientDB
- Proficiency in at least one testing framework or tool: JMeter, Locust, Taurus
- Proficiency in at least one RPC communication framework: Apache Thrift, gRPC is an added plus
- Proficiency in asynchronous libraries (RxJava) and frameworks (Akka, Play, Vert.x) is an added plus
- Proficiency in functional programming languages (Scala) is an added plus
- Proficiency in working with NoSQL/graph databases is an added plus
- Proficient understanding of code versioning tools, such as Git is an added plus
- Working knowledge of tools for server and application metrics, logging, and monitoring (Monit, ELK, Graylog) is an added plus
- Working knowledge of DevOps automation and configuration management utilities like Ansible, Salt, Puppet is an added plus
- Working knowledge of DevOps containerization technologies like Docker, LXD is an added plus
CricStox is a Pune startup building a trading solution in the realm of gametech x fintech.
We intend to build a sport-agnostic platform to allow trading in stocks of sportspersons under any sport through our mobile & web-based applications.
We're currently hiring a Backend Cloud Engineer who will gather and refine specifications and requirements based on technical needs and implement them using software development best practices.
Responsibilities?
● Mainly, but not limited to, maintaining, expanding, and scaling our microservices/app/site.
● Integrate data from various back-end services and databases.
● Always be plugged into emerging technologies/industry trends and apply them to operations and activities.
● Comfortably work and thrive in a fast-paced environment, learn rapidly, and master diverse web technologies and techniques.
● Juggle multiple tasks within the constraints of timelines and budgets with business acumen.
What skills do I need?
● Excellent programming skills in JavaScript or TypeScript.
● Excellent programming skills in Node.js with the NestJS framework or equivalent.
● A solid understanding of how web applications work, including security, session management, and best development practices.
● Good working knowledge and experience of how AWS cloud infrastructure works, including services like API Gateway, Cognito, S3, EC2, RDS, SNS, MSK, EKS, is a MUST.
● Solid understanding of distributed event streaming technologies like Kafka is a MUST.
● Solid understanding of microservices communication using the Saga design pattern is a MUST.
● Adequate knowledge of database systems, OOP, and web application development.
● Adequate knowledge to create well-designed, testable, efficient APIs using tools like Swagger (or equivalent).
● Good functional understanding of ORMs like Prisma (or equivalent).
● Good functional understanding of containerising applications using Docker.
● Good functional understanding of how a distributed microservice architecture works.
● Basic understanding of setting up GitHub CI/CD pipelines to automate building Docker images, pushing them to AWS ECR, and deploying to the cluster.
● Proficient understanding of code versioning tools, such as Git (or equivalent).
● Hands-on experience with network diagnostics, monitoring and network analytics tools.
● Aggressive problem diagnosis and creative problem-solving skills.
- Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field. Relevant experience of at least 3 years in lieu of the above if from a different stream of education.
- Well-versed in, and with 3+ years of hands-on, demonstrable experience in:
  ▪ Stream & batch Big Data pipeline processing using Apache Spark and/or Apache Flink.
  ▪ Distributed cloud-native computing, including serverless functions.
  ▪ Relational, object store, document, graph, etc. database design & implementation.
  ▪ Microservices architecture, API modeling, design, & programming.
- 3+ years of hands-on development experience in Apache Spark using Scala and/or Java.
- Ability to write executable code for services using Spark RDD, Spark SQL, Structured Streaming, Spark MLlib, etc., with a deep technical understanding of the Spark processing framework.
- In-depth knowledge of standard programming languages such as Scala and/or Java.
- 3+ years of hands-on development experience in one or more libraries & frameworks such as Apache Kafka, Akka, Apache Storm, Apache NiFi, ZooKeeper, the Hadoop ecosystem (i.e., HDFS, YARN, MapReduce, Oozie & Hive), etc.; extra points if you can demonstrate your knowledge with working examples.
- 3+ years of hands-on development experience in one or more relational and NoSQL datastores such as PostgreSQL, Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc.
- Practical knowledge of distributed systems involving partitioning, bucketing, the CAP theorem, replication, horizontal scaling, etc.
- Passion for distilling large volumes of data and analyzing performance, scalability, and capacity issues in Big Data platforms.
- Ability to clearly distinguish between system and Spark job performance, and to perform Spark performance tuning and resource optimization.
- Perform benchmarking/stress tests and document best practices for different applications.
- Proactively work with tenants on improving overall performance and ensure the system is resilient and scalable.
- Good understanding of virtualization & containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
- Well-versed with demonstrable working experience in API management, API gateways, service mesh, identity & access management, and data protection & encryption.
- Hands-on, demonstrable working experience with DevOps tools and platforms, viz. Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, etc.
- Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of the categories: Compute or Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity, & Compliance (or equivalent demonstrable cloud platform experience).
- Good understanding of storage, networks, and storage networking basics, which will enable you to work in a cloud environment.
- Good understanding of network, data, and application security basics, which will enable you to work in a cloud as well as a business applications / API services environment.

Job Role – Technical Lead – Back End
Work Location - Bengaluru
Job Description –
Requirements
• 5+ years of experience in product companies
• Experience in software development and coding in multiple languages
• Experience in both front-end and back-end development with mastery in back-end.
• Excellent knowledge of software and application design and architecture
• Understanding of software quality assurance principles
• Has managed at least 4-5 engineers.
• High quality organisational and leadership skills
• Outstanding communication and presentation abilities
If you think you are a fit, please reply with your interest and we will take your candidature forward to the next level of evaluation. The following information would be appreciated:
Current CTC:
Expected CTC:
Notice Period:
Updated resume: please attach
