11+ RDF Jobs in Pune | RDF Job openings in Pune
The Knowledge Graph Architect is responsible for designing, developing, and implementing knowledge graph technologies to enhance organizational data understanding and decision-making capabilities. This role involves collaborating with data scientists, engineers, and business stakeholders to integrate complex data into accessible and insightful knowledge graphs.
Work you’ll do
1. Design and develop scalable and efficient knowledge graph architectures.
2. Implement knowledge graph integration with existing data systems and business processes.
3. Lead the ontology design, data modeling, and schema development for knowledge representation.
4. Collaborate with IT and business units to understand data needs and deliver comprehensive knowledge graph solutions.
5. Manage the lifecycle of knowledge graph data, including quality, consistency, and updates.
6. Provide expertise in semantic technologies and machine learning to enhance data interconnectivity and retrieval.
7. Develop and maintain documentation and specifications for system architectures and designs.
8. Stay updated with the latest industry trends in knowledge graph technologies and data management.
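The ontology-design and data-modeling work in points 1–3 can be pictured with a toy triple store. This is an illustrative pure-Python sketch with made-up entity names, not a production approach; real knowledge graph systems use an RDF store or a graph database such as Neo4j or Amazon Neptune, queried via SPARQL or Cypher.

```python
# Minimal in-memory triple store illustrating RDF-style
# (subject, predicate, object) modeling and pattern matching.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def match(self, subject=None, predicate=None, obj=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return [
            (s, p, o) for (s, p, o) in self.triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)
            and (obj is None or o == obj)
        ]

store = TripleStore()
store.add("Alice", "worksFor", "Acme")
store.add("Acme", "locatedIn", "Pune")
store.add("Bob", "worksFor", "Acme")

# Who works for Acme?
# Roughly equivalent to SPARQL: SELECT ?s WHERE { ?s :worksFor :Acme }
employees = sorted(s for s, _, _ in store.match(predicate="worksFor", obj="Acme"))
print(employees)  # ['Alice', 'Bob']
```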
The Team
Innovation & Technology (I&T) anticipates how technology will shape the future and begins building those capabilities and practices today. I&T drives the ideation, incubation, and scaling of hybrid businesses and tech-enabled offerings across a prioritized offering portfolio and industry interactions.
It drives the cultural and capability transformation from solely services-based businesses to hybrid businesses. While others bet on the future, I&T builds it with you.
I&T encompasses many teams of dreamers, designers, and builders, and partners with the business to bring a unique point of view to delivering services and products for clients.
Qualifications and Experience
Required:
1. Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
2. 6-10 years of professional experience in data engineering, with proven experience designing and implementing knowledge graph systems.
3. Strong understanding of semantic web technologies (RDF, SPARQL, OWL, GraphQL, etc.).
4. Experience with graph databases such as Neo4j, Amazon Neptune, or others.
5. Proficiency in programming languages relevant to data management (e.g., Python, Java, JavaScript).
6. Excellent analytical and problem-solving abilities.
7. Strong communication and collaboration skills to work effectively across teams.
Preferred:
1. Experience with machine learning and natural language processing.
2. Experience with Industry 4.0 technologies and principles.
3. Prior exposure to cloud platforms and services like AWS, Azure, or Google Cloud.
4. Experience with containerization technologies like Docker and Kubernetes.
🚀 Hiring: Data Engineer ( Azure ) at Deqode
⭐ Experience: 5+ Years
📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Delhi, Bengaluru
⭐ Work Mode: Hybrid
⏱️ Notice Period: Immediate
(Only immediate joiners and candidates currently serving their notice period)
⭐ Hiring: Databricks Data Engineer – Lakeflow | Streaming | DBSQL | Data Intelligence
We are looking for a Databricks Data Engineer ( Azure ) to build reliable, scalable, and governed data pipelines powering analytics, operational reporting, and the Data Intelligence Layer.
🔹 Key Responsibilities
✅ Build optimized batch pipelines using Delta Lake (partitioning, OPTIMIZE, Z-ORDER, VACUUM)
✅ Implement incremental ingestion using Databricks Autoloader with schema evolution & checkpointing
✅ Develop Structured Streaming pipelines with watermarking, late data handling & restart safety
✅ Implement declarative pipelines using Lakeflow
✅ Design idempotent, replayable pipelines with safe backfills
✅ Optimize Spark workloads (AQE, skew handling, shuffle & join tuning)
✅ Build curated datasets for Databricks SQL (DBSQL), dashboards & downstream applications
✅ Package and deploy using Databricks Repos & Asset Bundles (CI/CD)
✅ Ensure governance using Unity Catalog and embedded data quality checks
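The "idempotent, replayable pipelines with safe backfills" point can be sketched without Spark. Below is a hedged pure-Python illustration of checkpointing plus keyed upserts, assuming a simple list-of-dicts source; on Databricks itself these guarantees would come from Autoloader checkpoints and Delta Lake MERGE, not hand-rolled code.

```python
# Conceptual sketch of an idempotent, replayable pipeline: re-running from an
# earlier (or empty) checkpoint converges on the same target state.

def run_pipeline(source, target, checkpoint):
    """Process records past the checkpoint; upserts keyed by id are idempotent."""
    last = checkpoint.get("offset", 0)
    for offset, record in enumerate(source):
        if offset < last:
            continue                      # already processed; safe on replay
        target[record["id"]] = record     # upsert: replay overwrites, never duplicates
        checkpoint["offset"] = offset + 1
    return target

source = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 1, "v": "a2"}]
target, checkpoint = {}, {}
run_pipeline(source, target, checkpoint)

# Full backfill/replay from scratch yields the exact same final state.
rerun = run_pipeline(source, {}, {})
print(rerun == target)       # True
print(target[1]["v"])        # a2  (latest version of id 1 wins)
```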
✅ Mandatory Skills (Must Have)
👉 Databricks & Delta Lake (Advanced Optimization & Performance Tuning)
👉 Structured Streaming & Autoloader Implementation
👉 Databricks SQL (DBSQL) & Data Modeling for Analytics
Job Title: AI Architect
Location: Pune (Onsite)
Experience: 6+ Years
Employment Type: Full-Time
Working requirements:
Engagement Commitment: 3 months to start, to ensure the engagement is progressing well.
Deliverable: A Minimum Viable Product already exists; enhancement is required.
Preferred Working Hours: California time (12.5-13.5 hours behind IST, depending on daylight saving), so that the US and India teams can overlap and huddle easily.
About the Role:
We are seeking an experienced and visionary AI Architect to lead the design, development, and deployment of cutting-edge AI/ML solutions. The ideal candidate will have a strong foundation in artificial intelligence, machine learning, data engineering, and cloud technologies, with the ability to architect scalable and high-performing systems.
Key Responsibilities:
- Design and implement end-to-end AI/ML architectures for large-scale enterprise applications.
- Lead AI strategy, frameworks, and roadmaps in collaboration with product and engineering teams.
- Guide the development of machine learning models (e.g., NLP, Computer Vision, Predictive Analytics).
- Define best practices for model training, validation, deployment, monitoring, and versioning.
- Collaborate with data engineers to build and maintain robust data pipelines.
- Choose the right AI tools, libraries, and platforms based on project needs (e.g., TensorFlow, PyTorch, Hugging Face).
- Work with cloud platforms (AWS, Azure, GCP) to deploy and manage AI models and services.
- Ensure AI/ML solutions comply with data privacy, governance, and ethical standards.
- Mentor junior AI engineers and data scientists.
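The model versioning and monitoring responsibilities above can be sketched as bookkeeping logic. This is a hedged, hypothetical pure-Python illustration; production MLOps setups would use a platform such as MLflow or SageMaker Model Registry rather than code like this, and the metric values are invented.

```python
# Toy model registry (versioning) plus a drift alert (monitoring) sketch.
import statistics

class ModelRegistry:
    def __init__(self):
        self.versions = []          # list of {"version": n, "metrics": {...}}

    def register(self, metrics):
        version = len(self.versions) + 1
        self.versions.append({"version": version, "metrics": metrics})
        return version

    def best(self, metric="accuracy"):
        return max(self.versions, key=lambda v: v["metrics"][metric])

def drift_alert(baseline_scores, live_scores, threshold=0.05):
    """Flag the model if live accuracy drops more than `threshold` below baseline."""
    return statistics.mean(baseline_scores) - statistics.mean(live_scores) > threshold

registry = ModelRegistry()
registry.register({"accuracy": 0.91})
registry.register({"accuracy": 0.94})
print(registry.best()["version"])                  # 2
print(drift_alert([0.94, 0.93], [0.85, 0.84]))     # True -> consider retrain/rollback
```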
Required Skills & Qualifications:
- Bachelor's or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
- 6+ years of experience in AI/ML, with at least 2+ years in an architecture or lead role.
- Strong experience with AI/ML frameworks: TensorFlow, PyTorch, Scikit-learn, etc.
- Deep understanding of LLMs, transformers, GPT models, and fine-tuning techniques.
- Proficiency in Python and data processing libraries (Pandas, NumPy, etc.).
- Experience with cloud-based AI services (AWS SageMaker, Azure ML, Vertex AI, etc.).
- Knowledge of MLOps practices, CI/CD for models, and model monitoring.
- Familiarity with data lakehouse architecture, real-time inference, and APIs.
- Strong communication and leadership skills.
Preferred Qualifications:
- Experience with generative AI applications and prompt engineering.
- Knowledge of reinforcement learning or federated learning.
- Publications or contributions to open-source AI projects.
- AI certifications from cloud providers (AWS, Azure, GCP).
Why Join Us?
- Work on transformative AI projects across industries.
- Collaborate with a passionate and innovative team.
- Flexible work environment with remote/hybrid options.
- Continuous learning, upskilling, and growth opportunities.
Job Description:
- Extensive experience in Appian BPM application development
- Knowledge of Appian architecture and its object best practices
- Participate in analysis, design, and new development of Appian based applications
- Team leadership and provide technical leadership to Scrum teams
- Must be able to multi-task, work in a fast-paced environment, and resolve problems faced by the team
- Build applications: interfaces, process flows, expressions, data types, sites, and integrations
- Proficient with SQL queries and with accessing data present in DB tables and views
- Experience in Analysis, Designing process models, Records, Reports, SAIL, forms, gateways, smart services, integration services and web services
- Experience working with different Appian Object types, query rules, constant rules and expression rules
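The "SQL queries and accessing data present in DB tables and views" skill can be shown with a tiny standard-library example. SQLite is used here purely as a stand-in; the Appian role would target whichever enterprise database backs the application, and the table and data are hypothetical.

```python
# Querying tables and views: views are queried exactly like tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cases (id INTEGER PRIMARY KEY, status TEXT, owner TEXT);
    INSERT INTO cases VALUES (1, 'OPEN', 'asha'), (2, 'CLOSED', 'ravi'),
                             (3, 'OPEN', 'asha');
    CREATE VIEW open_cases AS SELECT * FROM cases WHERE status = 'OPEN';
""")

rows = conn.execute(
    "SELECT owner, COUNT(*) FROM open_cases GROUP BY owner"
).fetchall()
print(rows)  # [('asha', 2)]
```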
Qualifications
- At least 6 years of experience implementing BPM solutions using Appian 19.x or higher
- Over 8 years implementing IT solutions using BPM or integration technologies
- Certification mandatory: L1 and L2
- Experience in Scrum/Agile methodologies on enterprise-level application development projects
- Good understanding of database concepts and strong working knowledge of at least one major database (e.g., Oracle, SQL Server, MySQL)
- Appian BPM application development on version 19.x or higher
- Experience with integrations using web services (e.g., XML, REST, WSDL, SOAP, API, JDBC, JMS)
- Good leadership skills and the ability to technically lead a team of software engineers
- Experience working in Agile/Scrum teams
- Good communication skills
Job Title: Lead Data Engineer
📍 Location: Pune
🧾 Experience: 10+ Years
💰 Budget: Up to 1.7 LPM
Responsibilities
- Collaborate with Data & ETL teams to review, optimize, and scale data architectures within Snowflake.
- Design, develop, and maintain efficient ETL/ELT pipelines and robust data models.
- Optimize SQL queries for performance and cost efficiency.
- Ensure data quality, reliability, and security across pipelines and datasets.
- Implement Snowflake best practices for performance, scaling, and governance.
- Participate in code reviews, knowledge sharing, and mentoring within the data engineering team.
- Support BI and analytics initiatives by enabling high-quality, well-modeled datasets.
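The "data quality, reliability, and security across pipelines" responsibility often takes the shape of a quality gate that runs before data is published. A hedged plain-Python sketch follows; against Snowflake this would typically be SQL, stored procedures, or a tool like dbt, and the field names here are invented.

```python
# ELT step with an embedded data-quality gate: validate before publishing.

def quality_check(rows, required=("id", "amount")):
    """Reject the batch if any row is missing required fields or has a null value."""
    for row in rows:
        if any(field not in row or row[field] is None for field in required):
            return False
    return True

def load(rows, target):
    if not quality_check(rows):
        raise ValueError("batch failed data-quality checks; not loaded")
    target.extend(rows)

warehouse = []
load([{"id": 1, "amount": 100}, {"id": 2, "amount": 250}], warehouse)
print(len(warehouse))  # 2

try:
    load([{"id": None, "amount": 50}], warehouse)   # bad batch is rejected
except ValueError:
    print("bad batch rejected")
print(len(warehouse))  # still 2 - nothing partial was loaded
```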
Role - Principal Engineer | React.js
Skill Set - Programming
- React.js with Redux
- HTML5/CSS3
- JavaScript
- MobX (optional)
- Storybook (optional)
Unit Test Cases: Jest & Enzyme
Code Quality:
- SonarQube Knowledge
- Strong hold on unit-testing frameworks and writing effective unit tests
Other Important Skills:
- Engineering & Design Skills
- Good analytical & problem solving skills
- Ability to ship features end to end without much guidance
- Experienced with Agile methodologies
Soft Skills
- Good communication skills
- Zeal to learn new technologies and methodologies
We are looking for a highly skilled computer programmer who is comfortable with both front and back-end programming.
Full-stack developers are responsible for developing and designing front-end web architecture, ensuring the responsiveness of applications, and working alongside graphic designers for web design features, among other duties.
Full-stack developers will be required to see out a project from conception to the final product, requiring good organizational skills and attention to detail.
Full Stack Developer Responsibilities:
- Developing front-end website architecture.
- Designing user interactions on web pages.
- Developing back-end website applications.
- Creating servers and databases for functionality.
- Ensuring cross-platform optimization for mobile phones.
- Ensuring responsiveness of applications.
- Working alongside graphic designers for web design features.
- Seeing through a project from conception to finished product.
- Designing and developing APIs.
- Meeting both technical and consumer needs.
- Staying abreast of developments in web applications and programming languages.
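The "Designing and developing APIs" responsibility can be pictured as a handler mapping a request to a status and a JSON body. This is a framework-agnostic, hypothetical sketch in Python; a real stack (for example Ruby on Rails, as this role lists) wires routing, an ORM, and serialization around the same shape.

```python
# Minimal API endpoint as a pure function: request in, (status, JSON body) out.
import json

USERS = {1: {"id": 1, "name": "Asha"}}   # stand-in for a database table

def get_user(user_id):
    """Handles GET /users/<id> with consistent error handling."""
    user = USERS.get(user_id)
    if user is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(user)

status, body = get_user(1)
print(status, json.loads(body)["name"])   # 200 Asha
print(get_user(99)[0])                    # 404
```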
Full Stack Developer Requirements:
- Degree in computer science.
- Strong organizational and project management skills.
- Proficiency with fundamental front-end languages such as HTML, CSS, and JavaScript.
- Familiarity with JavaScript frameworks such as React.js.
- Proficiency with Ruby on Rails.
- Familiarity with database technology such as MySQL or Oracle or PostgreSQL.
- Excellent verbal communication skills.
- Good problem-solving skills.
- Attention to detail.
- 5 working days
- Flexible timings
- Work from home granted
Position Summary
DevOps is a Department of Horizontal Digital, within which we have 3 different practices.
- Cloud Engineering
- Build and Release
- Managed Services
This opportunity is for a Cloud Engineering role, for someone who also has experience with infrastructure migrations. It is a fully hands-on job focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead; alongside that, you are expected to work on projects building out Sitecore infrastructure from scratch.
We are a Sitecore Platinum Partner, and the majority of our infrastructure work is for Sitecore.
Sitecore is a .NET-based enterprise web CMS that can be deployed on-premises or on IaaS, PaaS, and containers.
So most of our DevOps work today is planning, architecting, and deploying infrastructure for Sitecore.
Key Responsibilities:
- This role includes ownership of the technical, commercial, and service elements related to cloud migrations and infrastructure deployments.
- The selected candidate will ensure high customer satisfaction while delivering infrastructure and migration projects.
- Expect to work across multiple projects in parallel; a fully flexible approach to working hours is required.
- Keep up to date with the rapid technological advancements and developments taking place in the industry.
- Hands-on knowledge of Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps, and CI/CD pipelines is expected.
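The Infrastructure as Code expectation above rests on a declarative idea: describe the desired state and let tooling compute the changes, rather than scripting imperative steps. Below is a toy, hypothetical Python illustration of that diff-and-plan model; real IaC tools (Terraform, ARM/Bicep) do this against actual cloud APIs, and the resource names are made up.

```python
# Diff desired state against actual state and emit a plan, the core loop
# behind "terraform plan"-style workflows.

def plan(desired, actual):
    """Return create/update/delete actions that converge actual onto desired."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return sorted(actions)

desired = {"vnet-prod": {"cidr": "10.0.0.0/16"}, "aks-prod": {"nodes": 3}}
actual  = {"vnet-prod": {"cidr": "10.1.0.0/16"}, "vm-legacy": {"size": "B2s"}}
print(plan(desired, actual))
# [('create', 'aks-prod'), ('delete', 'vm-legacy'), ('update', 'vnet-prod')]
```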
Requirements:
- Bachelor’s degree in computer science or equivalent qualification.
- Total work experience of 6 to 8 Years.
- Total migration experience of 4 to 6 Years.
- Multiple Cloud Background (Azure/AWS/GCP)
- Implementation knowledge of VMs and VNets
- Know-how of Cloud Readiness and Assessment
- Good Understanding of 6 R's of Migration.
- Detailed understanding of the cloud offerings
- Ability to Assess and perform discovery independently for any cloud migration.
- Working Exp. on Containers and Kubernetes.
- Good knowledge of Azure Site Recovery, Azure Migrate, and CloudEndure
- Understanding of vSphere and Hyper-V virtualization.
- Working experience with Active Directory.
- Working experience with AWS CloudFormation/Terraform templates.
- Working experience with VPN, ExpressRoute, peering, Network Security Groups, route tables, NAT Gateway, etc.
- Experience working with CI/CD tools like Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps, and GitHub Actions.
- High-availability and disaster-recovery implementations, taking RTO and RPO aspects into consideration.
- Candidates with AWS/Azure/GCP Certifications will be preferred.





