MuseRAG++: A Deep Retrieval-Augmented Generation Framework for Semantic Interaction and Multi-Modal Reasoning in Virtual Museums


Abstract

Virtual museums offer new opportunities for cultural heritage engagement by enabling interactive, personalized experiences beyond physical constraints. However, existing dialogue systems struggle to provide semantically adaptive, factually grounded, and multimodal interactions, often suffering from shallow user intent understanding, limited retrieval capabilities, and unverifiable response generation. To address these challenges, we propose MuseRAG++, a unified retrieval-augmented framework that integrates deep user intent modeling, a hybrid sparse-dense retrieval pipeline spanning text, images, and structured metadata, and a provenance-aware generation module that explicitly grounds responses in retrieved evidence. Unlike evaluations that rely solely on benchmark datasets, we assess MuseRAG++ through a qualitative user study within a functional virtual museum prototype, focusing on engagement, usability, and trustworthiness. Experimental results demonstrate substantial improvements in retrieval and generation metrics over strong baselines, while user evaluations confirm enhanced factual accuracy and interpretability.
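A hybrid sparse-dense retrieval pipeline typically merges the rankings produced by a keyword (sparse) retriever and an embedding (dense) retriever. The abstract does not specify the fusion method used in MuseRAG++, so the sketch below illustrates one common choice, reciprocal rank fusion (RRF); all document identifiers and the `rrf_fuse` helper are hypothetical.

```python
# Hypothetical sketch of hybrid sparse-dense retrieval via reciprocal
# rank fusion (RRF). The paper names a hybrid pipeline but not this
# exact fusion scheme; document ids below are illustrative only.

def rrf_fuse(rankings, k=60):
    """Combine several ranked lists of document ids into one ranking.

    Each ranking lists doc ids best-first; RRF scores a document as the
    sum of 1 / (k + rank) over every list in which it appears.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort documents by fused score, highest first.
    return sorted(scores, key=scores.get, reverse=True)

# A sparse (keyword) and a dense (embedding) retriever return different
# orderings over museum-exhibit documents; fusion reconciles them so
# that items ranked well by both retrievers rise to the top.
sparse_hits = ["vase_ming", "scroll_tang", "mask_nuo"]
dense_hits = ["scroll_tang", "vase_ming", "bronze_shang"]
fused = rrf_fuse([sparse_hits, dense_hits])
```

Documents appearing near the top of both lists (`vase_ming`, `scroll_tang` here) accumulate the highest fused scores, which is the behavior a hybrid pipeline relies on to combine lexical precision with semantic recall.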
