2+ RDF Jobs in India

Pattern Agentix is seeking a skilled engineer to design and develop a comprehensive knowledge graph that spans multiple biological domains—including molecular biology, biochemistry, biophysics, immunology, virology, pharmacology, and computational biology. The goal is to integrate heterogeneous datasets (ontologies, public databases, scientific literature) into a scalable, semantically rich graph that supports hypothesis generation and interdisciplinary research.
Responsibilities:
• Architecture & Design:
- Develop a scalable knowledge graph architecture using graph databases
- Design data models to capture entities (e.g., genes, proteins, chemical compounds, immune markers) and their relationships.
• Data Integration & Curation:
- Integrate data from domain-specific ontologies (e.g., Gene Ontology, ChEBI) and public repositories (e.g., UniProt, PDB).
- Build API connectors and automation pipelines for data ingestion.
- Leverage NLP tools to extract relationships from scientific literature with validation by domain experts.
• Collaboration & Documentation:
- Work closely with interdisciplinary teams and domain experts to ensure semantic accuracy.
- Document data models, integration methods, and curation guidelines.
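To make the data-modeling responsibility above concrete, here is a minimal sketch of capturing entities (genes, proteins, chemical compounds) and their relationships as RDF-style subject–predicate–object triples, using plain Python tuples and a pattern-match query. All identifiers and the namespace are hypothetical examples, not actual ontology terms.

```python
# Hypothetical namespace for illustration only.
EX = "http://example.org/bio/"

# A tiny triple store: each entry is (subject, predicate, object).
triples = {
    (EX + "TP53",     EX + "type",      EX + "Gene"),
    (EX + "P53",      EX + "type",      EX + "Protein"),
    (EX + "TP53",     EX + "encodes",   EX + "P53"),
    (EX + "Nutlin-3", EX + "type",      EX + "ChemicalCompound"),
    (EX + "Nutlin-3", EX + "modulates", EX + "P53"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Everything connected to the P53 protein:
for s, p, o in match(o=EX + "P53"):
    print(s.rsplit("/", 1)[-1], "--", p.rsplit("/", 1)[-1], "--> P53")
```

In a production system the same triples would live in a graph database or RDF store and be queried with SPARQL; the wildcard `match` above mirrors the shape of a basic graph pattern.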
Requirements:
• Proven experience in knowledge graph development and graph databases.
• Strong programming skills (Python, Java, or similar) and experience with API development.
• Familiarity with semantic web technologies (e.g., RDF) and ontology management tools.
• Solid background in bioinformatics, computational biology, or related disciplines.
• Demonstrated ability to work with heterogeneous data sources and interdisciplinary projects.
Contract
We are hiring on a part-time or full-time contract basis, with compensation aligned to local norms.
The Knowledge Graph Architect is responsible for designing, developing, and implementing knowledge graph technologies to enhance organizational data understanding and decision-making capabilities. This role involves collaborating with data scientists, engineers, and business stakeholders to integrate complex data into accessible and insightful knowledge graphs.
Work you’ll do
1. Design and develop scalable and efficient knowledge graph architectures.
2. Implement knowledge graph integration with existing data systems and business processes.
3. Lead the ontology design, data modeling, and schema development for knowledge representation.
4. Collaborate with IT and business units to understand data needs and deliver comprehensive knowledge graph solutions.
5. Manage the lifecycle of knowledge graph data, including quality, consistency, and updates.
6. Provide expertise in semantic technologies and machine learning to enhance data interconnectivity and retrieval.
7. Develop and maintain documentation and specifications for system architectures and designs.
8. Stay updated with the latest industry trends in knowledge graph technologies and data management.
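Points 3 and 5 above (schema development and managing data quality) can be illustrated together with a small sketch: validating incoming triples against a simple declared schema before they enter the graph. The schema, type assignments, and predicate names below are hypothetical examples, not part of any actual posting or ontology.

```python
# Hypothetical schema: predicate -> (expected subject type, expected object type).
SCHEMA = {
    "encodes":   ("Gene", "Protein"),
    "modulates": ("ChemicalCompound", "Protein"),
}

# Hypothetical type assignments for known entities.
TYPES = {"TP53": "Gene", "P53": "Protein", "Nutlin-3": "ChemicalCompound"}

def validate(triple):
    """Return a list of schema violations for one (subject, predicate, object) triple."""
    s, p, o = triple
    if p not in SCHEMA:
        return [f"unknown predicate: {p}"]
    domain, range_ = SCHEMA[p]
    errors = []
    if TYPES.get(s) != domain:
        errors.append(f"subject {s} is not a {domain}")
    if TYPES.get(o) != range_:
        errors.append(f"object {o} is not a {range_}")
    return errors

print(validate(("TP53", "encodes", "P53")))   # a valid triple yields no errors
print(validate(("P53", "encodes", "TP53")))   # domain and range both violated
```

Real deployments would express such constraints in OWL or SHACL and enforce them in the triple store itself; the point here is only the shape of the quality gate in the data lifecycle.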
The Team
Innovation & Technology (I&T) anticipates how technology will shape the future and begins building those capabilities and practices today. I&T drives the ideation, incubation, and scaling of hybrid businesses and tech-enabled offerings across a prioritized offering portfolio and industry interactions.
It drives cultural and capability transformation from solely services-based businesses to hybrid businesses. While others bet on the future, I&T builds it with you.
I&T encompasses many teams—dreamers, designers, builders—and partners with the business to bring a unique POV to deliver services and products for clients.
Qualifications and Experience
Required:
1. Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
2. 6-10 years of professional experience in data engineering, with proven experience in designing and implementing knowledge graph systems.
3. Strong understanding of semantic web technologies (RDF, SPARQL, OWL, GraphQL, etc.).
4. Experience with graph databases such as Neo4j, Amazon Neptune, or others.
5. Proficiency in programming languages relevant to data management (e.g., Python, Java, JavaScript).
6. Excellent analytical and problem-solving abilities.
7. Strong communication and collaboration skills to work effectively across teams.
Preferred:
1. Experience with machine learning and natural language processing.
2. Experience with Industry 4.0 technologies and principles.
3. Prior exposure to cloud platforms and services like AWS, Azure, or Google Cloud.
4. Experience with containerization technologies like Docker and Kubernetes.