


A Cognition Briefing
Contributed by: Stefan Rank, Austrian Research Institute for Artificial Intelligence
Introduction
In the attempt to operationalise emotion mechanisms, the design of affective architectures can, in principle, feed back to the underlying psychological theory by revealing implicit assumptions that need to be made explicit for implementation. For many of these projects, however, it is hard to determine which specific roles of affect are actually targeted, and to distinguish their contributions from side effects of the implementation and of the particular physical embodiment. This in turn makes it difficult to compare specific designs to other approaches in similar application domains. For this reason, we relate the functionality of emotional mechanisms in situated agent architectures to classes of application scenarios: while certain requirements apply to all affective architectures (e.g., design parsimony, or the flexible combination and integration of functional competences), most of them are rooted in the target scenarios of use. A scenario-based characterisation thus improves comparability by making the applicable criteria explicit. The ideas presented here have also been put forward in [RankPetta2006].
Scenarios for Affect
The basic characteristics of an application scenario are the motivation for building the system, its purpose, and the details of a possible deployment. Motivations may range from specific data about humans or animals that the architecture should model (e.g. [GratchMarsella2004]) to explicit hypotheses and open empirical questions that need to be tested, and might also include specific engineering goals, e.g. the improvement of behaviour selection in robots [MoshkinaArkin2003]. A characterisation of the purpose of the system positions it along a spectrum between real-world applications and the creation of virtual entities that can be used in controlled experiments for the scientific validation of (e.g. psychological) theories. This empirical and scientific context is closely connected to the envisioned mode of evaluation, which may include explicit performance functions as well as less concrete design criteria that the system should meet at a social level.

As to the details of deployment, a crucial point is the characterisation of the system's interaction qualities: this includes the user interface as well as the interaction between the agents and their environment. The user interface can be regarded as a special case on the spectrum of agent-environment interaction that ranges from sequenced binary decisions and sensations (e.g., in the Prisoner's Dilemma) to the complexity of human interaction in the real world. User interfaces can be relatively minimalistic, as in virtual worlds, or as complex as those potentially required in robotic applications. In the case of affective interactions, ontologies derived from folk psychology or a particular emotion theory are needed to describe the interactions possible in a scenario. We emphasise that these characterisations are mostly formulated from an external perspective: the objects, properties, relations, and processes described do not imply their actual use or reification in architectures tackling the scenario (symbolic models that directly operationalise folk-psychological terms are one possibility, but usually lack grounding [Norling2004]).

Interactions can be described informally as typical scenario scripts that illustrate the possible activities, including tool use and social relations, as well as the utilisation of second-order resources, complemented by negative scripts that explicate interactions falling outside a given scenario. A more formal description of interaction qualities could also include all possible agent tasks; agent-local performance measures (e.g., the amount of resources collected per unit of time in a simulation); the average number of conflicting long-term or short-term tasks; and further qualitative behavioural criteria such as coherence, variedness, or believability in virtual character applications. Even though hard to quantify, the latter often form an essential part of scenario descriptions.

Another part of a scenario description is the characterisation of the environment as presented to the agent. This comprises the intrinsic limitations, dynamics, and regularities of the interactions. For simulations, it includes properties such as being time-stepped or asynchronous, with the implied differences for the possible interactions and mechanisms (cf. the well-known PEAS characterisation in [RussellNorvig2003]).
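As a purely illustrative sketch, such a characterisation could also be made machine-readable. The following Python fragment shows one possible schema covering motivation, purpose, evaluation, interaction scripts, qualitative criteria, and environment properties such as the time-stepped/asynchronous distinction and the PEAS elements; all class and field names are our own assumptions, not part of any cited framework.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Purpose(Enum):
    """Position on the spectrum between real-world use and controlled experiments."""
    REAL_WORLD_APPLICATION = "real-world application"
    CONTROLLED_EXPERIMENT = "controlled experiment / theory validation"


@dataclass
class EnvironmentCharacterisation:
    """Environment as presented to the agent, loosely following the PEAS scheme."""
    performance_measure: str       # e.g. "resources collected per unit of time"
    dynamics: str                  # intrinsic limitations, dynamics, regularities
    actuators: List[str]           # available effectors / action repertoire
    sensors: List[str]             # available sensations
    time_stepped: bool = True      # time-stepped vs. asynchronous simulation


@dataclass
class ScenarioDescription:
    """Illustrative schema for a scenario-based characterisation (names are hypothetical)."""
    motivation: str                # data to be modelled, hypotheses, or engineering goals
    purpose: Purpose
    evaluation: List[str]          # performance functions and social-level design criteria
    interaction_scripts: List[str] = field(default_factory=list)   # typical scenario scripts
    negative_scripts: List[str] = field(default_factory=list)      # interactions outside the scenario
    qualitative_criteria: List[str] = field(default_factory=list)  # coherence, variedness, believability
    environment: Optional[EnvironmentCharacterisation] = None
```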
The interface between an agent and its world obviously differs substantially between a robotic agent and a virtual one, but even in simulated environments a range of sensorimotor interactions is possible, including simple choices, artificial-life simulations, and simulated physics. The scenario should also specify the numbers of agents and agent types (including interacting humans), in terms of typical values as well as hard or practical limits. For practical reasons, references to related scenarios are also helpful. Our overall tenet is that architectures targeting related scenarios will benefit from analyses framed by scenario characteristics. This is especially true for target scenarios calling for several emotional competencies.
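Continuing the hypothetical sketch above (and reusing its imports and definitions), agent numbers and types as well as references to related scenarios could be recorded in the same spirit. The instantiation below describes a simple simulated foraging scenario and is, again, only an assumed example rather than a description of any existing system.

```python
@dataclass
class AgentPopulation:
    """Numbers and types of agents in the scenario, including interacting humans."""
    agent_type: str                         # e.g. "virtual character", "robot", "human user"
    typical_count: int
    practical_limit: Optional[int] = None   # hard or practical upper bound, if any


# Assumed example: a simple simulated foraging scenario.
scenario = ScenarioDescription(
    motivation="test whether an appraisal mechanism improves behaviour selection",
    purpose=Purpose.CONTROLLED_EXPERIMENT,
    evaluation=["resources collected per unit of time", "behavioural coherence"],
    interaction_scripts=["forage for food", "avoid predators", "share food with others"],
    negative_scripts=["verbal negotiation with human users"],
    qualitative_criteria=["coherence", "variedness"],
    environment=EnvironmentCharacterisation(
        performance_measure="resources collected per unit of time",
        dynamics="grid world with regenerating resources and roaming predators",
        actuators=["move", "pick up", "drop"],
        sensors=["neighbouring grid cells", "own energy level"],
        time_stepped=True,
    ),
)

population = [
    AgentPopulation(agent_type="foraging agent", typical_count=20, practical_limit=200),
    AgentPopulation(agent_type="human observer", typical_count=1),
]
related_scenarios = ["predator-prey artificial-life simulations"]
```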
References