Extended Cognition: On the Boundaries of Cognitive Systems

A Cognition Briefing

Contributed by: Marcin Milkowski, Polish Academy of Sciences

Introduction
The claim that cognitive systems can extend into the environment so as to enhance their cognitive capacities opposes the classical tradition of research on cognitive systems, as represented by Jerry Fodor’s principle of methodological solipsism (Fodor 1980). Extended cognition presupposes that cognitive systems are embodied and embedded in their cognitive niche. It is a strong form of externalism: the environment of the cognitive system is not only important in explaining cognitive processes; it is an active part of these processes. This is why the proponents of the extended mind thesis, Andy Clark and David Chalmers (Clark & Chalmers 1998), call their claim 'active externalism'. In what follows, I take their claim to concern not only natural minds but all kinds of cognitive systems, natural or artificial.

To some extent, extended cognition is not a revolutionary change in the view of cognitive systems. Researchers from different traditions have long remarked on the special use of cognitive tools that become extensions of the body (see esp. Hall 1969:3-4), and on the fact that some extensions, such as language and writing, dramatically expand possible experience and overall cognitive abilities. Even one of the founding fathers of the classical view of cognitive systems, Herbert Simon, noted that agents can use their environment as memory. According to Simon, ants use features of their environment as a kind of memory that extends far beyond their tiny bodies:

An ant, viewed as a behaving system, is quite simple. The apparent complexity of its behavior over time is largely a reflection of the complexity of the environment in which it finds itself (Simon 1996:52).

It is important to note that extended cognition is not simply cognition that physically involves something external to the agent. The fact that agents use tools cognitively is far from controversial; the core claim of extended cognition is a different one, namely that cognition is literally distributed in space beyond the agent. This can take several forms, such as:

  • social cognitive phenomena, as observed in natural cognition
  • cognitive hybrids, such as implants or other enhancements to the basic agent structure
  • indirect instrumental cognition, such as scientific observation using specially developed tools
  • historical cognition, based on remnants of past events that become part of the cognitive agent's history

In such cases, various forms of interaction between the extensions and the core agent are at play. To count as extensions, these interactions must be robust: the parts of the cognitive system that constitute its extensions must interact with the core agent relatively frequently and must be indispensable for some forms of cognition. More often than not, such interactions are already quite well described in cultural anthropology, field linguistics, the sociology of knowledge, and other empirically oriented human sciences. In this way, extended cognition reintegrates the embedded and embodied accounts with social, cultural, and historical research on the one hand, and with ongoing work on cyborgs, insect-based robots (such as RoboRoach), and other technological enhancements to human and artificial cognition on the other.

How far can agents extend?
Opponents of Clark & Chalmers' view have raised several objections (see Menary 2005):

  • It is hard to find a principled way of delineating the boundaries of cognitive systems on this view. For some, this is not a bug but a feature of the account. Yet if there is no entity without identity, and the identity of extended systems is unclear, it can be argued that positing such systems should be avoided.
  • Cyborg cases and social cases are different. The former involve intervention in the architecture of the agent and count as genuine extension, whereas socially shared cognitive processes involve tools rather than extended parts of agents. These remain tools because they do not change the agent's basic architecture; rather, it is that architecture which makes acting with them possible.

It can be argued that both kinds of objection concern conceptual or terminological problems. For practical purposes, what matters is that extended systems be built to interact with other systems, to use external processes, and to be situated in their niches.

Building extended systems
For practical reasons, a fruitful strategy might be to build a very simple artificial agent that is adapted to its cognitive niche and able to use it actively. This approach does not require building huge databases for inference in the classical, cognitivist paradigm, so it is less resource-intensive. On the other hand, it is still far from clear that artificial systems extended into the environment will tackle classical problems of AI any better, for example by being better chess players. Embedded and situated systems are not meant to be general-purpose, so this is to be expected. We do know that some embedded, general-purpose cognitive systems exist, namely human beings, and that they evolved from simpler ones, so extended cognition, being biologically inspired, should eventually work. But it could turn out that it works only on an evolutionary timescale, requiring thousands of years of artificial selection...
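
To make the contrast with the database-driven, cognitivist approach concrete, the sketch below shows an agent in the spirit of Simon's ant: it keeps no internal map of where it has been and instead offloads that "memory" onto marks it leaves in a toy grid world. This is a minimal illustration only; the GridWorld and MarkingAgent classes and the marking scheme are hypothetical, not drawn from any particular system or library.

<pre>
import random

class GridWorld:
    """A toy niche: a grid of cells that the agent can mark and read back."""
    def __init__(self, size):
        self.size = size
        self.marks = {}          # (x, y) -> mark left by the agent

    def read(self, pos):
        return self.marks.get(pos)

    def write(self, pos, mark):
        self.marks[pos] = mark   # the environment, not the agent, stores this

class MarkingAgent:
    """An agent with no internal map: it marks visited cells in the world
    and consults those marks when deciding where to move next."""
    def __init__(self, world):
        self.world = world
        self.pos = (0, 0)

    def step(self):
        # Record the visit in the environment rather than in internal state.
        self.world.write(self.pos, "visited")
        # Prefer unmarked neighbours; the world's marks do the remembering.
        x, y = self.pos
        neighbours = [(x + dx, y + dy)
                      for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                      if 0 <= x + dx < self.world.size and 0 <= y + dy < self.world.size]
        unvisited = [p for p in neighbours if self.world.read(p) is None]
        self.pos = random.choice(unvisited or neighbours)

if __name__ == "__main__":
    world = GridWorld(size=5)
    agent = MarkingAgent(world)
    for _ in range(30):
        agent.step()
    print(f"Cells covered without an internal map: {len(world.marks)}")
</pre>

The point of the sketch is architectural rather than algorithmic: the exploration "memory" lives entirely in the environment, and the agent's internal state is reduced to its current position, which is the sense in which the niche becomes part of the cognitive process.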

The best way of finding out whether something works is to try it, so the extended cognition paradigm must simply be put into practice.

References
Andy Clark & David Chalmers (1998), “The Extended Mind,” ''Analysis'' 58: 10-23.

Jerry Fodor (1980), “Methodological Solipsism Considered as a Research Strategy in Cognitive Science,” ''Behavioral and Brain Sciences'', 3: 63-73.

Richard Menary (ed.) (2005), ''The Extended Mind'', Ashgate.

Herbert Simon (1996), ''The Sciences of the Artificial'', 3rd ed., MIT Press.

Edward T. Hall (1969), ''The Hidden Dimension'', Anchor Books.