BEE: Belief-Value-Aligned, Explainable, and Extensible Cognitive Framework for Conversational Agents
Chen Yang, M. Gross, R. Wampfler
Proceedings of the 25th International Conference on Intelligent Virtual Agents (IVA) (Berlin, Germany, September 16-19, 2025), pp. 1-9
Abstract
Recent advances in large language models have enabled virtual agents to exhibit increasingly believable social behaviors. However, creating social agents that remain consistent with a defined profile and can explain their reasoning remains challenging. We introduce a cognitive framework designed to address these gaps. Our framework features a graph-based memory module ("concept pool") and a "decision-making process" inspired by human cognition. The concept pool stores an agent's beliefs, values, and background stories for context-dependent retrieval. The decision-making process draws on the concept pool to produce belief-value-aligned responses from the virtual agent, along with intuitive, human-readable explanations of its reasoning. To evaluate the effectiveness of our framework, we created two virtual agents based on historical figures and compared them to baseline agents. Our evaluation combined quantitative assessments of belief-value alignment with a user study (n=48) examining explainable agency. Results show that our framework yields model-agnostic improvements in belief-value alignment and produces more detailed, relevant, and understandable explanations. By grounding virtual agent behavior in structured memories and cognitive principles, our framework offers a compelling step toward more coherent and socially intelligent virtual agents.