





We are building an advanced, AI-driven multi-agent software system designed to revolutionize task automation and code generation. This is a futuristic AI platform capable of:
✅ Real-time self-coding based on tasks
✅ Autonomous multi-agent collaboration
✅ AI-powered decision-making
✅ Cross-platform compatibility (Desktop, Web, Mobile)
We are hiring a highly skilled **AI Engineer & Full-Stack Developer** based in India, with a strong background in AI/ML, multi-agent architecture, and scalable, production-grade software development.
### Responsibilities:
- Build and maintain a multi-agent AI system (AutoGPT, BabyAGI, MetaGPT concepts)
- Integrate large language models (GPT-4o, Claude, open-source LLMs)
- Develop full-stack components (Backend: Python, FastAPI/Flask, Frontend: React/Next.js)
- Work on real-time task execution pipelines
- Build cross-platform apps using Electron or Flutter
- Implement Redis, Vector databases, scalable APIs
- Guide the architecture of autonomous, self-coding AI systems
### Must-Have Skills:
- Python (advanced, AI applications)
- AI/ML experience, including multi-agent orchestration
- LLM integration knowledge
- Full-stack development: React or Next.js
- Redis, Vector Databases (e.g., Pinecone, FAISS)
- Real-time applications (websockets, event-driven)
- Cloud deployment (AWS, GCP)
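To illustrate the "real-time, event-driven" pattern named in the skills above, here is a minimal sketch using only the Python standard library; the event names and functions are invented for demonstration, not part of any actual system:

```python
import asyncio

# Minimal event-driven pipeline: a producer publishes events onto a queue,
# and a consumer reacts to each event as it arrives.
async def producer(queue: asyncio.Queue, events: list) -> None:
    for event in events:
        await queue.put(event)          # publish an event
    await queue.put(None)               # sentinel: no more events

async def consumer(queue: asyncio.Queue, handled: list) -> None:
    while True:
        event = await queue.get()       # wait for the next event
        if event is None:
            break
        handled.append(f"handled:{event}")

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    handled: list = []
    # Run producer and consumer concurrently, as a websocket-backed
    # task pipeline would.
    await asyncio.gather(
        producer(queue, ["task_created", "task_done"]),
        consumer(queue, handled),
    )
    return handled

if __name__ == "__main__":
    print(asyncio.run(main()))
```

In a production system the in-process queue would typically be replaced by a broker such as Redis, with the same publish/consume shape.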
### Good to Have:
- Experience with code-generation AI models (Codex, GPT-4o coding abilities)
- Microservices and secure system design
- Knowledge of AI for workflow automation and productivity tools
Join us to work on cutting-edge AI technology that builds the future of autonomous software.


Requirements:
● B.Tech/Masters in Mathematics, Statistics, Computer Science or another quantitative field
● 2–5 years of work experience in the ML domain
● Hands-on coding experience in Python
● Experience in machine learning techniques such as Regression, Classification, Predictive Modeling, Clustering, Deep Learning, and NLP
● Working knowledge of TensorFlow/PyTorch
Optional Add-ons:
● Experience with distributed computing frameworks: MapReduce, Hadoop, Spark, etc.
● Experience with databases: MongoDB

Designation: Graphics and Simulation Engineer
Experience: 3-15 Yrs
Position Type: Full Time
Position Location: Hyderabad
Description:
We are looking for engineers to work on applied research problems related to computer graphics in the autonomous driving of electric tractors. The team works towards creating a universe of farm environments in which tractors can drive around, for the purposes of simulation, synthetic data generation for deep-learning training, simulation of edge cases, and modelling physics.
Technical Skills:
● Background in OpenGL, OpenCL, graphics algorithms and optimization is necessary.
● Solid theoretical background in computational geometry and computer graphics is desired. Deep learning background is optional.
● Experience in two view and multi-view geometry.
● Necessary Skills: Python, C++, Boost, OpenGL, OpenCL, Unity3D/Unreal, WebGL, CUDA.
● For freshers, academic experience in graphics is also preferred.
● Experienced candidates in Computer Graphics with no prior Deep Learning experience willing to apply their knowledge to vision problems are also encouraged to apply.
● Software development experience on low-power embedded platforms is a plus.
Responsibilities:
● Apply engineering principles with a clear understanding of data structures and algorithms.
● Ability to understand, optimize and debug imaging algorithms.
● Ability to drive a project from conception to completion, from research papers to code, with a disciplined approach to software development on the Linux platform.
● Demonstrate outstanding ability to perform innovative and significant research in the form of technical papers, thesis, or patents.
● Optimize runtime performance of designed models.
● Deploy models to production and monitor performance and debug inaccuracies and exceptions.
● Communicate and collaborate with team members in India and abroad for the fulfillment of your duties and organizational objectives.
● Thrive in a fast-paced environment and have the ability to own the project end to end with minimum hand holding
● Learn & adapt new technologies & skillsets
● Work on projects independently with timely delivery & defect free approach.
● A thesis focusing on the above skill set may be given more preference.
Sr Product Manager / Lead Product Manager – Data Platform
Description:
At Amagi, we are looking for a product leader to build a world-class big data and analytics platform that helps our teams make data-driven decisions and accelerates our business outcomes.
We are looking for someone who is innovative and experienced in end-to-end product management to drive our long-term data and analytics strategy.
The ideal candidate would be responsible for owning product roadmap and KPIs, driving product operational tasks to ensure configurable and scalable solutions.
Primary Responsibilities:
- Lead the product requirements to build real time, highly scalable, low latency data platform.
- Author PRD and define the strategic roadmap.
- Lead the Product design, MVPs and POCs and fast track deliveries
- Collaborate with various functions and design the most effective solutions.
- Understand customer needs and define the data solutions and insights to drive business outcomes
- Define key product performance metrics to drive business and customer outcomes
Basic Qualification
- 12+ years of overall SDLC experience, with 7+ years of product management experience in a fast-paced company.
- Proven experience delivering large scale highly available big data processing systems.
- Knowledge of data pipeline design, data transformation and integration methodologies.
- Technically savvy, with experience in big data systems such as AWS Redshift, Athena, Kafka and related technologies
- Demonstrate collaborative approach and ability to work with distributed, cross functional teams.
- Experience in taking products through full life cycle, from proposal to launch
- Strategic thinking capabilities to define the product roadmap and right prioritization of the backlog in line with the long-term vision
- Strong communication and stakeholder management skills with the ability to coordinate across a diverse group of technical and non-technical stakeholders.
- Ability to deal with ambiguity and use data to solve ambiguous problems
- Technical ability to understand and discuss software architecture, product integration, non-functional requirements etc. with the Engineering team.
Preferred Qualification:
- Technically savvy with good understanding of cloud application development.
- Software development experience building Enterprise platforms
- Experience in third-party vendor assessments
- Experience working with AI, ML, big data and analytics tools
- Understanding of regulations such as data privacy, data security and governance

• Have a good understanding of both the AEC/O industry and the 3D visualization industry.
• Good knowledge of 3D modeling software such as 3ds Max, Maya, Rhino, Revit and Blender.
• Experience with cloud collaboration tools such as BIM 360 and ACC.
• VR/AR experience is a plus; for example, Unreal with C++ or Unity3D with C#.
• Strong understanding of visual programming plugins: Dynamo, Grasshopper, etc.
• Knowledge of batch rendering and how to set up a render farm.
• Experience in developing tools and scripts to standardize the visualization workflow and shorten timelines by automating redundant tasks.
• Ability to draft wireframes quickly before proceeding with production.



Responsibilities
- Build and mentor the platform team at Checko.
- Own the design, development, testing, deployment, and craftsmanship of the team’s infrastructure and systems capable of handling massive amounts of requests with high reliability and scalability
- Leverage the deep and broad technical expertise to mentor engineers and provide leadership on resolving complex technology issues
- Entrepreneurial and out-of-box thinking essential for a technology startup
- Guide the team for unit-test code for robustness, including edge cases, usability, and general reliability
Requirements
- Must have experience in the design, development, testing, and deployment of systems capable of handling massive amounts of requests with high reliability and scalability
- Must have strong command in writing production-level code in Java or Python including skills in debugging, performance analysis/optimization and memory usage optimization
- Must have worked with real-time web/mobile applications and event-driven architectures
- Must have experience working with relational and non-relational databases and understanding their data models and performance tradeoffs.
- Must have solid engineering principles and a clear understanding of data structures and algorithms
- Should have knowledge of service-oriented architecture, caching techniques, micro-services, and distributed systems
- Should have basic understanding of C++/reactJS/Angular/Node
Desired Skills and Experience
Algorithms, debugging, performance optimization on low-end processors, data structures, REST, service-oriented architecture.
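As a small illustration of the caching techniques listed above, a hedged stdlib-only sketch (the function and values are chosen for demonstration only):

```python
from functools import lru_cache

# Memoize an expensive pure function so repeated calls are served from an
# in-process cache instead of being recomputed.
@lru_cache(maxsize=128)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))                  # each subproblem is computed once
print(fib.cache_info())         # hits/misses show the cache at work
```

The same idea scales out to shared caches (e.g., Redis) when multiple services need the memoized results.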


Below is the skill set needed for the immediate Lanner board bring-up requirements:
- Good understanding of Linux kernel modules, with familiarity of kernel debugging, good shell scripting
- 4 to 6 years of experience
- Knowledge of board bring-up, bootloaders (U-Boot/BIOS/GRUB2, etc.) and device drivers
- Strong C programming, with knowledge of networking protocols (L2/L3, TCP/UDP, HTTPS, etc.)
This is the white-box variant we would be using:
https://www.lannerinc.com/products/network-appliances/x86-desktop-network-appliances/nca-1515



along with metrics to track their progress
- Managing available resources such as hardware, data, and personnel so that deadlines are met
- Analysing the ML algorithms that could be used to solve a given problem and ranking them by their success probability
- Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
- Verifying data quality, and/or ensuring it via data cleaning
- Supervising the data acquisition process if more data is needed
- Defining validation strategies
- Defining the pre-processing or feature engineering to be done on a given dataset
- Defining data augmentation pipelines
- Training models and tuning their hyperparameters
- Analysing the errors of the model and designing strategies to overcome them
- Deploying models to production
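The workflow above (verify data quality, define a validation strategy, train, analyse errors) can be sketched in minimal stdlib-only Python; the dataset and model here are invented purely for illustration:

```python
import random

# Toy end-to-end sketch: clean data, hold out a validation split,
# fit a 1-D least-squares line, and measure validation error.
def clean(rows):
    # "Verifying data quality": drop rows with missing values.
    return [(x, y) for x, y in rows if x is not None and y is not None]

def split(rows, val_frac=0.25, seed=0):
    # "Defining validation strategies": shuffled hold-out split.
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    n_val = int(len(rows) * val_frac)
    return rows[n_val:], rows[:n_val]

def fit(rows):
    # "Training models": closed-form simple linear regression, y = a*x + b.
    n = len(rows)
    mx = sum(x for x, _ in rows) / n
    my = sum(y for _, y in rows) / n
    a = sum((x - mx) * (y - my) for x, y in rows) / sum(
        (x - mx) ** 2 for x, _ in rows
    )
    return a, my - a * mx

def mse(model, rows):
    # "Analysing the errors of the model" on held-out data.
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in rows) / len(rows)

data = [(x, 2 * x + 1) for x in range(20)] + [(None, 5)]
train, val = split(clean(data))
model = fit(train)
print(model, mse(model, val))
```

Real pipelines swap the closed-form fit for TensorFlow/PyTorch models, but the clean/split/fit/evaluate shape is the same.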



