Interactive Embodied Agents for Cultural Heritage and Archaeological Presentations

Francisco José Serón Arbeloa, S. Baldassarri, E. Cerezo


This paper presents Maxine, a powerful engine for developing applications with embodied animated agents. The engine, built on open-source libraries, enables multimodal real-time interaction with the user via text, voice, images and gestures. Maxine virtual agents can establish emotional communication with the user through their facial expressions and the modulation of their voice, adapting their answers to the information gathered by the system: the noise level in the room, the observer's position, the observer's emotional state, etc. Moreover, the user's emotions are captured and taken into account through image analysis. To date, Maxine virtual agents have been used as virtual presenters for Cultural Heritage and Archaeological shows.


Multimodal interaction; Virtual agent; Ambient intelligence; Virtual worlds; Cultural heritage; Archaeology






This journal is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Universitat Politècnica de València

Official journal of Spanish Society of Virtual Archaeology

e-ISSN: 1989-9947