Cognitivism

A Cognition Briefing

Contributed by: Brendan Wallace, University of Glasgow

Introduction
Cognitivism, the philosophy or theory of cognition that was dominant in psychology (roughly) between the late 1960s and the mid 1980s, has a number of immediate intellectual antecedents. Some of these were: Alan Turing's work on the theory of computation (specifically his definition of a ‘Turing Machine’) and on machine intelligence (Turing, 1950), the Transformational Grammar of Noam Chomsky (Chomsky, 1966), and Newell and Simon's work on artificial intelligence (Newell and Simon, 1972). Surprisingly little of this early work was actually done by psychologists, for reasons that will be discussed below.

The essence of cognitivism is the idea that cognition (i.e. ‘thinking’, whether ‘conscious’ or ‘unconscious’) is the product of (or ‘is’) algorithm-like rules, processed in ‘internal’ mental space. Some (not all) cognitivists go on to argue that these rules are actually identical to (rather than merely similar to) the algorithms used by digital computers. Although the immediate creators of cognitivism were active in the 1950s and 1960s, as Hubert Dreyfus has pointed out, the idea that cognition (and, therefore, rationality) consists of rule following is really a very old one: it can be traced back to Plato (Dreyfus and Dreyfus, 2002). Normally, cognitivists go on to argue that the key way in which ‘thoughts’ or ‘cognitions’ are produced is by rules acting on representations.

Cognitivists were encouraged in this view of cognition by the contemporary work of Shannon, which laid the foundations of modern-day information science (Shannon, 1948). In the 1940s and 1950s Shannon wrote a number of seminal papers which provided a neutral and objective way of defining ‘information’ (he achieved this by mathematising the concept and excluding any discussion of meaning). Given the power of this new concept, it was an obvious temptation for psychologists to see whether Shannon’s concept of information could be adapted to a psychological context (Wallace et al., 2007). And so, in the 1950s and 1960s, psychologists such as Berlyne and Hick attempted to quantify, for example, the amount of information that human beings accessed from a visual scene, and to make objective predictions as to how much information a human’s visual system could ‘process’ (Berlyne, 1960; Hick, 1952). These early psychologists were behaviourists, a point which is sometimes missed, and their efforts foundered because human beings do not process information in Shannon’s sense: they understand meaningful information, which is not the same as Shannon’s definition (indeed, it is the opposite: to repeat, Shannon could only provide an objective definition of information by eliminating meaning). Nonetheless the concept of information was incredibly successful in one specific area: it was useful for assessing how much information (and this really was the word in Shannon’s sense) digital computers could process. Given this success, Berlyne and Hick’s failure was overlooked, and psychologists began to posit the idea that human beings process information in the same sense as digital computers do (Tallis, 2004).
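
To make the contrast concrete: Shannon’s measure assigns an amount of information (in bits) to a choice among alternatives purely on the basis of their probabilities, with no reference to what those alternatives mean, and Hick’s finding was that choice reaction time grows roughly linearly with that quantity. The short sketch below (in Python, using invented probabilities and timing constants rather than figures from the papers cited) illustrates both points.

import math

def shannon_information_bits(probabilities):
    # Shannon's measure of the information carried by a choice among
    # alternatives: H = -sum(p * log2(p)). Meaning plays no role, only probability.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four equally likely alternatives carry log2(4) = 2 bits, regardless of
# whether they are letters, lights on a panel, or anything else.
print(shannon_information_bits([0.25, 0.25, 0.25, 0.25]))  # -> 2.0

# Hick's law, roughly: choice reaction time rises linearly with the bits
# transmitted. The constants below are invented for illustration, not Hick's own.
a, b = 0.2, 0.15  # hypothetical intercept (seconds) and seconds per bit
print(a + b * shannon_information_bits([0.25, 0.25, 0.25, 0.25]))  # -> 0.5

The point of the sketch is that nothing in the calculation depends on what the alternatives mean; that is precisely what made the measure tractable for engineering, and precisely what made it a poor model of human understanding.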

Now it can be seen why the immediate forefathers of cognitivism were not psychologists. Cognitivism was adopted as an ‘as if’ theory. ‘Let us proceed’ (thought psychologists in the 1960s) ‘as if human cognition was digital information processing (i.e. of the sort carried out by digital computers) and see where it gets us’.

The ontological status of this claim is unclear, although it should not be forgotten that many psychologists (and, indeed, philosophers) genuinely and literally believed in the 1950s and 1960s that the human brain really was an information-processing digital computer, indistinguishable in principle from the standard desktop PC with which we are all now familiar. This theory, the Computational Theory of Mind (CTM), is not as fashionable as it once was, however, and nowadays cognitivists tend to argue that the digital computer is merely a useful metaphor for the brain.

But there are a number of problems with this claim. It is of course true that there are similarities between digital computers and the human brain: but this is because there are similarities between all objects in the world and all other objects. After all, like digital computers (and buses and rocks and cups and tables), the human brain is a material object; the brain is about the same size as a desktop PC; and of course a digital computer can be programmed to mimic some human behaviours, in the same way that it can be programmed to mimic the behaviour of many other objects, including animals, plants and buses.

What is needed, therefore, to prove the effectiveness of the metaphor is to show that in some non-trivial way the digital computer is like a human brain. And here of course we run into a fundamental problem: since we don’t, currently, know how the brain works, it is not clear how we can make that claim. The ‘brain = digital computer’ idea is therefore very different from the claim that (for example) ‘water = H2O’, where the ontological status of both terms (on either side of the ‘equals’ sign) is fairly clear, although in the heat of the argument many cognitivists blur this distinction.

Given that the metaphor idea is a tricky one to pin down, in an attempt to bring clarity to the debate we have an alternative: to backtrack thirty or forty years and look at the arguments of those who claimed that the human brain really is a digital computer. This has the advantage of being a clear scientific claim (the idea that the brain is merely metaphorically a computer may well be unfalsifiable in Popper’s sense, which is perhaps one of its attractions). However, this latter claim (i.e. the CTM) involves claiming that, like a digital computer, the brain works via algorithms; that it has a memory store similar to (or identical to) a computer’s memory store; that it does not need a ‘body’ (as digital computers don’t); that it does not need a ‘society’ (as digital computers don’t); that it functions in a determinist manner (as digital computers do); and so on. It would also mean (since this is the way ‘abstract’ models of computers work) that human beings cognise, fundamentally, using the basic laws of logic.

Empirical Evidence
How does the empirical evidence stack up for these claims? Over the last 40 years we have accumulated a huge amount of data on human behaviour, most of it devastating for the CTM. To take the most basic claim first: it does not seem to be true that human beings cognise via algorithms, or algorithm-like rules. Work following in the tradition of Activity Theory, and in the ‘ecological’ schools discussed below, suggests instead that cognition is neither context-free (cf situatedness) nor unembodied (cf embodiment).

Computer as Metaphor
This brings us back, therefore, to the idea that the brain is metaphorically like a digital computer and, to repeat, this is the claim that most modern cognitive psychologists are happiest with (or at least were, until cognitivism began to go out of fashion in the 1990s).

But in claiming that there are non-trivial comparisons to be made between the human brain and a digital computer we have the problem of defining the phrase ‘non-trivial’. Human beings are good at some things (walking, talking, making conversation, building digital computers and so on). Digital computers are good at other things (retrieving large amounts of data, running computer programs written in advance by human beings, and so forth).

Digital computers can mimic human beings in trivial ways: i.e. in the very few ways in which human beings are a bit like digital computers. In order to make the claim that digital computers are, in some non-trivial ways, like human brains, what we would expect is for the digital computer to mimic things that human beings are good at. And this is why artificial intelligence has been so important in the story of cognitive science. If computers could perform tasks previously thought of as ‘uniquely human’, this would render the whole debate moot: who cares (it could be claimed) whether or not computers are ‘really’ like humans? If computers mimic human behaviour with a reasonable degree of precision (for example by passing the Turing Test), there would clearly be significant enough similarities, and the metaphor would therefore justify itself (whether or not one might see it as being ‘true’ in some absolute sense). Indeed, the whole point of the Turing Test was to make this argument: when one could no longer tell the difference between the behaviour of a digital computer and that of a human being, then, self-evidently, there would be non-trivial similarities between computers and humans, and the usefulness of the metaphor would have been demonstrated, regardless of whether brains were, in some Platonic sense, ‘really’ digital computers.

And in the early days, this promise looked as if it was going to be fulfilled. This made the stagnation of GOFAI (‘Good Old-Fashioned Artificial Intelligence’) in the 1970s, and its ultimate collapse in the 1990s, particularly hard to bear. Indeed, it might well be argued that cognitivists are still living in ‘denial’ about the collapse of GOFAI and what it means for their theory of cognition.

Collapse of GOFAI
Before discussing the collapse of GOFAI it would be wise to remind ourselves of just how optimistic the early pioneers were that the problems they were attempting to solve were not only solvable but would be solved relatively soon.

1965, Herbert Simon: “Machines will be capable, within twenty years, of doing any work a man can do.”

1967, Marvin Minsky: “Within a generation ... the problem of creating ‘artificial intelligence’ will substantially be solved.”

Given that these predictions have not come true, and that there is no particular reason (apart from blind faith) to think that they are going to come true any time soon, we should look at the reasons why they failed.

GOFAI ran into many problems; what follows is not an exhaustive list.

The Qualification Problem
Whereas digital computers use deterministic algorithms (and, according to cognitivism, so do human brains), human beings in fact do not. Instead, they use rough or fuzzy heuristics, always with room for exceptions. For example, when people talk about a ‘bird’ they mean, very roughly, a thing with feathers that flies. But of course not all birds fly (penguins and ostriches, for example, do not). Teaching digital computers to use rules that are constantly ‘qualified’ has proved a much harder task than was originally anticipated.
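
What this looks like in practice can be sketched in a few lines of code (Python; the example and its predicates are invented for illustration, not drawn from any of the systems discussed). The ‘rule’ about birds never stabilises, because every formulation turns out to need a further qualification:

def can_fly(bird):
    # A 'rule' about birds as a rule-based system might encode it. Every clause
    # below is a qualification added after the previous version of the rule was
    # found to have exceptions, and the list of exceptions never ends.
    if bird in {"penguin", "ostrich", "emu", "kiwi"}:
        return False                      # flightless species
    if bird.endswith("(injured wing)"):
        return False                      # injured birds
    if bird.endswith("(chick)"):
        return False                      # too young to fly
    # ...dead birds, caged birds, oiled seabirds, and so on, indefinitely.
    return True

print(can_fly("sparrow"))           # True
print(can_fly("penguin"))           # False
print(can_fly("sparrow (chick)"))   # False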

The Frame Problem
It proved difficult to specify, in a rule-bound fashion, precisely what the basic context (or ‘frame’) is within which rules have to be interpreted. For example, in any given situation (which is normally dynamic: constantly changing) we need a series of meta-rules to specify which rules are relevant; but these in turn need further rules to specify them, and so on, leading to an infinite regress.
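
The regress can be made concrete with a toy example (Python; entirely illustrative, and deliberately simple-minded). To decide whether a rule applies, the system consults a higher-level rule about relevance, but whether that meta-rule is itself relevant raises exactly the same question one level up, and so on:

def is_relevant(rule, situation, depth=0, max_depth=20):
    # Toy illustration of the regress: deciding whether a rule is relevant in a
    # given situation requires a meta-rule about relevance, whose own relevance
    # requires a meta-meta-rule, and so on. The depth cap exists only so that the
    # sketch terminates; the regress itself has no principled stopping point.
    if depth >= max_depth:
        return None  # no answer was ever reached
    meta_rule = f"rule-about-the-relevance-of({rule})"
    return is_relevant(meta_rule, situation, depth + 1, max_depth)

print(is_relevant("if-the-kettle-boils-then-make-tea", situation="kitchen"))
# -> None: the chain of meta-rules never bottoms out of its own accord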

The Ramification Problem
Take Asimov’s famous Rules (or Laws) of Robotics. For example, the First Law: ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm.’ But in an infinite universe, how could one calculate whether or not one’s actions would allow a human being (or anyone) to come to harm? If I buy you an alcoholic drink, the drink may get you drunk and you may come to harm. Even if it doesn’t, one could argue that alcohol has a deleterious effect on health. On the other hand, if I don’t buy you a drink, then perhaps you will buy it yourself (perhaps when you can’t afford it), or perhaps you will ask someone else to buy one, who might take offence, who might start a fight… and so on. Essentially, the Ramification Problem seems to imply that only God (who presumably has infinite data stores and can see the consequences of all things) could ever be conscious, if one assumes that cognition is rule following.
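
The difficulty can be put in crude numerical terms (Python; the numbers are invented purely to show the shape of the problem). If every event opens up even a handful of possible downstream consequences, and each of those opens up a handful more, the space that a rule-follower would have to search before acting grows exponentially:

# Purely illustrative arithmetic: suppose each event has just 5 possible
# downstream consequences, and we try to look only 20 steps ahead.
branching_factor = 5
steps_ahead = 20
outcomes_to_check = branching_factor ** steps_ahead
print(f"{outcomes_to_check:,}")  # 95,367,431,640,625 outcomes for a single action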

As a result of all these problems, GOFAI had more or less ground to a halt by the 1990s. This was a disaster for the cognitivist orthodoxy. If the logical arguments for the truth of the CTM were all flawed (as argued above), then all cognitivists were left with was the claim that digital computers are at least metaphorically like brains. But here the proof of the pudding is in the eating: given that digital computers do not really seem to be able to mimic any high-level human behaviour, the metaphor seems to be reduced to the trivial observation that some things are a bit like other things, and the whole cognitivist enterprise breaks down.

Conclusion
Luckily, new theoretical approaches had become available by this time. The work of Brooks in robotics had demonstrated that ‘rational’ thought (i.e. rule following) is not necessary for complex behaviours. Moreover, more ‘ecological’ studies of human behaviour (e.g. the work of Barker and Gibson) have provided huge amounts of empirical data and more realistic models of human behaviour than were ever produced by the cognitivists.

At the time of writing the field of psychology is in upheaval, with no dominant paradigm having emerged to challenge cognitivism. Suffice it to say that there are three major aspects of human cognition that any ‘follow-up’ theory to cognitivism is going to have to explain before it can begin to be taken seriously.

Situatedness
Working in the ‘ecological’ and ‘environmental’ schools of psychology (associated with J.J. Gibson and Roger Barker, respectively), psychologists have produced large amounts of data indicating that the behaviour of human beings is radically ‘situated’: which is to say that behaviour (and therefore, presumably, cognition) is context-specific, not context-free. There would seem to be a dynamic (‘dialectical’) relationship between the human being and the ‘external’ environment. Indeed, it is not at all clear that one can meaningfully use the word ‘external’ when talking about human beings (cf the Extended Mind Thesis).

Sociality
Work by philosophers such as the later Wittgenstein, and by psychologists in the Activity Theory tradition (most famously Vygotsky, but also Luria and Leont’ev), has argued, and demonstrated empirically, that human beings are intrinsically social beings (as opposed to being essentially individuals who happen to find themselves in a society). This would seem to suggest that the basic assumption of the Turing Test (i.e. that a single artificial agent could be conscious) is simply false: consciousness and cognition can only emerge from a language-bound community of such agents (cf Voloshinov and Bakhtin for numerous discussions of what ‘language-bound’ might mean in this context).

Embodiment
Work by Mark Johnson and George Lakoff has indicated that the basic forms of cognition are structured and constrained in a non-trivial manner by our embodied selves. Again, this suggests that to cognise, ‘one’ must have a body, and that a disembodied conscious agent (for example, HAL in 2001) is simply not possible.

Finally, one might note that the first two of these concepts emphasise something else that cognitivism ignored: the essentially dynamic nature of human cognition (as opposed to the idea of static, rule-following, ‘module-bound’ human agents). This fits in well with new thinking in neuropsychology about neuroplasticity and neurogenesis. Twenty-first-century artificial intelligence will have to create not only embodied (i.e. robotic) and social artificial agents, but also agents that in some shape or form manage to recreate themselves as they learn, which is itself a dynamic, not a passive, process. Perhaps the very new sciences of evolutionary computation, developmental robotics and evolutionary robotics will go some way towards solving these problems by providing a new paradigm or model for psychology, to replace the now intellectually bankrupt cognitivism.

Bibliography
Berlyne, D. (1960). Conflict, Arousal and Curiosity. New York: McGraw Hill.

Chomsky, N. (1966). Cartesian Linguistics. New York: Harper and Row.

Dreyfus, H. and Dreyfus, S. (2002). 'From Socrates to Expert Systems'. Philosophy, 24, 1.

Hick, W.E. (1952). 'On the Rate of Gain of Information'. Quarterly Journal of Experimental Psychology, 4, 11.

Newell, A. and Simon, H.A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.

Riesbeck, C. and Schank, R. (1989). Inside Case-Based Reasoning. New York: Lawrence Erlbaum.

Shannon, C. (1948). 'A Mathematical Theory of Communication'. Bell System Technical Journal, 27, pp. 379-423 and 623-656.

Tallis, R. (2004). Why the Mind is not a Computer. London: Imprint Academic.

Turing, A.M. (1950). 'Computing Machinery and Intelligence'. Mind, 59, 433-460.

Wallace, B., Ross, A. and Davies, J.B. (2003). 'Information Processing Models: Benefits and Limitations'. In P.T. McCabe (ed.), Contemporary Ergonomics 2003. London: Taylor & Francis, pp. 543-548.

Wallace, B., Ross, A., Davies, J. and Anderson, T. (2007). The Mind, the Body and the World. London: Imprint Academic.