Comparing Emotion Models Used in Agent Architectures

A Cognition Briefing

Contributed by: Stefan Rank, Austrian Research Institute for Artificial Intelligence

Introduction
Research on affective control architectures for situated agents is fragmented into the modelling of aspects of emotion for specific purposes. The variety of posited functionalities of emotional processes in humans contributes to this fragmentation, as do the varied aims of modelling and application. These range from simulations for psychological experiments (e.g. [WehrleScherer2001]), through improving human-machine interaction (e.g. with character-based interfaces [EggesETAL2004] [PrendingerIshizuka2005]), to fundamental improvements in the coordination of single- and multi-agent behaviour in dynamic and unpredictable environments [StallerPetta2001] [OliveiraSarmento2003] [DelgadoMataAylett2005].

In the attempt to operationalise emotion mechanisms, the design of affective architectures can in principle feed back to the underlying psychological theory by revealing implicit assumptions that need to be explicated for implementation. For many of these projects, however, it is hard to determine which specific roles of affect are actually targeted, and to discriminate their contributions from side-effects of the implementation and of the particular physical embodiment. This in turn makes it difficult to compare specific designs to other approaches in similar application domains.

For this reason, we relate the functionality of emotional mechanisms in situated agent architectures to classes of application scenarios: while certain requirements are applicable to all affective architectures (e.g., design parsimony, or the requirements of flexible combination and integration of functional competences), most of them are rooted in the target scenarios of use. A scenario-based characterisation thus improves comparability by making applicable criteria explicit.

The ideas mentioned here have also been put forward in [RankPetta2006].

Scenarios for Affect
Scenario-based design and evaluation is an established concept in usability engineering [Nielsen1993] [Cooper1999]: specific user stories allow for iterative refinement of the actual requirements of a system. A scenario is a prototypical description of one user achieving one specific goal, and thereby focuses on specific functionalities and a certain depth of the system. The analogy we use for affective architectures consists in regarding human affective behaviour as the system; a scenario is then the part that an agent architecture should be able to reproduce. We thus use scenarios to capture the emotional potential of an envisioned use of a system: those characteristics that help constrain which kinds of emotional phenomena can occur, and which cannot (and therefore can only be simulated or portrayed). Note that in most of today's applications of character-based interfaces, the expressed emotions are only portrayed, i.e. the simplified interaction could not support a deeper model of emotion. This section provides details about the characteristics we propose.

The basic characteristics of an application scenario are the motivation for building the system, its purpose, and the details of a possible deployment. Motivations may range from specific data about humans or animals that the architecture should model (e.g. [GratchMarsella2004]) to explicit hypotheses and open empirical questions that need to be tested, and might also include specific engineering goals, e.g. the improvement of behaviour selection in robots [MoshkinaArkin2003]. A characterisation of the purpose of the system positions it along a spectrum between real-world applications and the creation of virtual entities that can be used in controlled experiments for the scientific validation of (e.g. psychological) theories. This empirical and scientific context is closely connected to the envisioned mode of evaluation, possibly including explicit performance functions, but also less concrete design criteria that the system should meet at a social level. As to the details of deployment, a crucial point is the characterisation of the system's interaction qualities: this includes the user interface as well as the interaction between the agents and their environment. The user interface can be regarded as a special case on the spectrum of agent-environment interaction that ranges from sequenced binary decisions and sensations (e.g., in the Prisoner's Dilemma) to the complexity of human interaction in the real world. User interfaces can range from the relatively minimalistic, as in virtual worlds, to the complexity potentially required in robotic applications. In the case of affective interactions, ontologies derived from folk psychology or a particular emotion theory are needed to describe the interactions possible in a scenario.
We emphasise that these characterisations are mostly formulated from an external perspective: the objects, properties, relations, and processes described do not imply their actual use or reification in architectures tackling the scenario (symbolic models that directly operationalise folk-psychological terms are one possibility, but usually lack grounding [Norling2004]). Interactions can be described informally as typical scenario scripts that illustrate the possible activities, including tool use and social relations, as well as the utilisation of second-order resources, complemented by negative scripts that explicate interactions falling outside a given scenario. A more formal description of interaction qualities could also include all possible agent tasks; agent-local performance measures (e.g., the amount of collected resources per time in a simulation); the average number of conflicting long-term or short-term tasks; and further qualitative behavioural criteria such as coherence, variedness, or believability in virtual character applications. Even though hard to quantify, the latter often form an essential part of scenario descriptions. Another part of a scenario description is the characterisation of the environment as presented to the agent. This comprises the intrinsic limitations, dynamics, and regularities of the interactions. For simulations, this includes properties such as being time-stepped or asynchronous, with the implied differences for the possible interactions and mechanisms (cf. the well-known PEAS characterisation in [RussellNorvig2003]). It is apparent that the interface to its world differs substantially between a robotic agent and a virtual one, but even in simulated environments a range of sensorimotor interactions is possible, including simple choices, artificial-life simulations, and simulated physics.
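As a minimal illustration of such an agent-local performance measure (our own hypothetical example, not taken from any of the cited systems), resources collected per time step in a time-stepped simulation could be computed as:

```python
def resources_per_timestep(collected):
    """Agent-local performance measure: mean resources collected per time step.

    `collected` lists the amount gathered in each step of a time-stepped run.
    """
    if not collected:
        return 0.0
    return sum(collected) / len(collected)

# e.g. an agent that gathered 2, 0, 3 and 1 units over four steps
print(resources_per_timestep([2, 0, 3, 1]))  # 1.5
```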
The scenario should also specify the numbers of agents and agent types (including interacting humans), both as typical values and as hard or practical limits. For practical reasons, references to related scenarios are also helpful.
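To illustrate, the scenario characteristics enumerated above could be collected in a simple structured record. The following sketch and its field names are our own hypothetical grouping, not an established schema; the example values loosely evoke a flocking simulation in the spirit of [DelgadoMataAylett2005], not its actual parameters:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScenarioDescription:
    """Hypothetical record grouping the scenario characteristics discussed above."""
    motivation: str                 # data to model, hypotheses, engineering goals
    purpose: str                    # real-world application vs. controlled experiment
    evaluation: List[str]           # performance functions, social design criteria
    interaction_scripts: List[str]  # typical activities, incl. tool use, social relations
    negative_scripts: List[str]     # interactions explicitly outside the scenario
    performance_measures: List[str] # e.g. resources collected per time step
    environment: str                # time-stepped vs. asynchronous, dynamics, regularities
    n_agents_typical: int = 1
    n_agents_limit: int = 1
    agent_types: List[str] = field(default_factory=list)
    related_scenarios: List[str] = field(default_factory=list)

# Illustrative instance: fear-modulated flocking in an artificial-life simulation
flocking = ScenarioDescription(
    motivation="improve action selection under threat",
    purpose="controlled simulation experiment",
    evaluation=["survival rate", "behavioural coherence"],
    interaction_scripts=["graze", "flee from predator"],
    negative_scripts=["verbal communication"],
    performance_measures=["food intake per time step"],
    environment="time-stepped artificial-life simulation",
    n_agents_typical=30,
    n_agents_limit=200,
    agent_types=["prey", "predator"],
)
```

Such a record makes the applicable comparison criteria explicit: two architectures targeting scenarios with overlapping records can be compared on the measures both records list.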

Our overall tenet is that architectures targeting related scenarios will benefit from analyses framed by scenario characteristics. This is especially true for target scenarios calling for several emotional competencies.

References

  • [Cooper1999] Alan Cooper. The Inmates Are Running The Asylum. SAMS publishing, 1999.
  • [DelgadoMataAylett2005] Carlos Delgado-Mata and Ruth Aylett. Having it both ways—the impact of fear on eating and fleeing in virtual flocking animals. In Bryson J.J. et al., eds., Modelling Natural Action Selection, AISB, 2005.
  • [EggesETAL2004] Arjan Egges, Sumedha Kshirsagar, Nadia Magnenat-Thalmann. Generic personality and emotion simulation for conversational agents. Computer Animation and Virtual Worlds, 15(1):1–13, 2004.
  • [GratchMarsella2004] Jonathan Gratch and Stacy Marsella. Evaluating a General Model of Emotional Appraisal and Coping. In Hudlicka E., Canamero L., eds., Architectures for Modeling Emotion: Cross-Disciplinary Foundations, Papers from the 2004 AAAI Spring Symposium, AAAI Press, Menlo Park, CA, USA, pp.52–59, 2004.
  • [MoshkinaArkin2003] Lilia Moshkina and Ronald C. Arkin. On TAMEing Robots. In Proc. IEEE International Conference on Systems, Man and Cybernetics, October 5–8, 2003, Hyatt Regency, Washington D.C., USA, 2003.
  • [Nielsen1993] Jakob Nielsen. Usability Engineering. Academic Press, 1993.
  • [Norling2004] Emma Norling. Folk Psychology for Human Modelling: Extending the BDI Paradigm. In Jennings N. et al., eds., Proc. of the third International Joint conference on Autonomous agents and multiagent systems, 19–23 July 2004, New York City, NY, USA, IEEE Computer Society Press, Washington D.C., USA, Vol.1, pp.202–209, 2004.
  • [OliveiraSarmento2003] Eugenio Oliveira and Luís Sarmento. Emotional advantage for adaptability and autonomy. In Rosenschein J.S. et al., eds., Proc. of the second international joint conference on Autonomous agents and multiagent systems, 14–18 July 2003, Melbourne, Australia, ACM Press, New York, NY, USA, pp.305–312, 2003.
  • [PrendingerIshizuka2005] Helmut Prendinger and Mitsuru Ishizuka. The Empathic Companion: A Character-based Interface that Addresses Users’ Affective States. Applied Artificial Intelligence, Educational Agents and (e-)Learning, 19(3–4):267–285, 2005.
  • [RankPetta2006] Stefan Rank and Paolo Petta. Comparability is Key to Assess Affective Architectures. In Cybernetics and Systems 2006, Proceedings of the EMCSR, pp.643–648, 2006.
  • [RussellNorvig2003] Stuart Russell and Peter Norvig. Artificial Intelligence—A Modern Approach. 2nd edition, Pearson Education, London, 2003.
  • [StallerPetta2001] Alexander Staller and Paolo Petta. Introducing Emotions into the Computational Study of Social Norms: A First Evaluation. Journal of Artificial Societies and Social Simulation 4(1), 2001.
  • [WehrleScherer2001] Thomas Wehrle and Klaus R. Scherer. Toward Computational Modeling of Appraisal Theories. In Scherer K.R. et al., eds., Appraisal Processes in Emotion: Theory, Methods, Research, Oxford University Press, Oxford New York, pp.350–365, 2001.