A Cognition Briefing
Contributed by: Brendan Wallace, University of Glasgow
Introduction
The essence of cognitivism is the idea that cognition (i.e. ‘thinking’, whether ‘conscious’ or ‘unconscious’) is the product of (or ‘is’) algorithm-like rules, processed in ‘internal’ mental space. Some (not all) cognitivists go on to argue that these rules are actually identical to (rather than merely similar to) the algorithms used by digital computers. Although the immediate creators of cognitivism were active in the 1950s and 1960s, as Hubert Dreyfus has pointed out, the idea that cognition (and, therefore, rationality) consists of rule following is really a very old one: it can be traced back to Plato (Dreyfus and Dreyfus, 2002). Normally, cognitivists go on to argue that the key way in which ‘thoughts’ or ‘cognitions’ are produced is by rules acting on representations.
Cognitivists were encouraged in this view of cognition by Shannon’s contemporary work, which laid the foundations for modern information science (Shannon, 1948). In the 1940s and 1950s Shannon wrote a number of seminal papers which provided a neutral and objective way of defining ‘information’ (he achieved this by mathematising the concept and excluding any discussion of meaning). Given the power of this new concept, it was an obvious temptation for psychologists to see whether Shannon’s concept of information could be adapted to a psychological context (Wallace, 2007). And so, in the 1950s and 1960s, psychologists such as Berlyne and Hick attempted to quantify, for example, the amount of information that human beings accessed from a visual scene, and to make objective predictions as to how much information a human’s visual system could ‘process’ (Berlyne, 1960; Hick, 1952).
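Shannon’s mathematised concept, and the kind of prediction Hick derived from it, can be made concrete. The sketch below computes Shannon’s entropy (defined purely over probabilities, with meaning playing no role) and a Hick’s-law reaction time; the coefficients in the Hick’s-law function are illustrative placeholders, not Hick’s fitted values.

```python
import math

def shannon_entropy(probs):
    """Shannon's H = -sum(p * log2(p)): information in bits, defined
    purely over the probabilities of outcomes, never their meaning."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def hick_rt(n_alternatives, a=0.2, b=0.15):
    """Hick's law: choice reaction time T = a + b * log2(n + 1).
    The coefficients a and b here are illustrative, not fitted data."""
    return a + b * math.log2(n_alternatives + 1)

# A choice among 8 equally likely alternatives carries exactly 3 bits...
print(shannon_entropy([1 / 8] * 8))  # 3.0
# ...and predicted reaction time grows logarithmically, not linearly,
# with the number of alternatives.
print(round(hick_rt(1), 2), round(hick_rt(7), 2))  # 0.35 0.65
```

This is precisely the sense in which the measure is ‘objective’: the entropy of eight equiprobable alternatives is three bits whether the alternatives are letters, lights or chess moves.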
These early psychologists were behaviourists, a point which is sometimes missed: and their efforts foundered because, of course, human beings do not process information in Shannon’s sense: they understand meaningful information, which is not the same as Shannon’s definition (indeed, it is the opposite: to repeat, Shannon could only provide an objective definition of information by eliminating meaning). Nonetheless the concept of information was incredibly successful in one specific area: it was useful in terms of assessing how much information (and this really was using the word in Shannon’s sense) digital computers could process. Given this success, Berlyne and Hick’s failure was overlooked, and psychologists began to posit the idea that human beings processed information in the same sense as digital computers do (Tallis, 2004). It can now be seen why the immediate forefathers of cognitivism were not psychologists.
Cognitivism was adopted as an ‘as if’ theory. ‘Let us proceed’ (thought psychologists in the 1960s) ‘as if human cognition were digital information processing (i.e. of the sort carried out by digital computers) and see where it gets us’. The ontological status of this claim is unclear, although it should not be forgotten that many psychologists (and, indeed, philosophers) genuinely and literally believed in the 1950s and 1960s that the human brain really was an information-processing digital computer, indistinguishable from the standard desktop PC with which we are all now familiar. This theory, the Computational Theory of Mind (CTM), is not as fashionable as it once was, however, and nowadays cognitivists tend to argue that the digital computer is merely a useful metaphor for the brain. But there are a number of problems with this claim. It is of course true that there are similarities between digital computers and the human brain: but this is only because there are similarities between all objects in the world and all other objects.
After all, like digital computers (and buses and rocks and cups and tables) the human brain is a material object; the brain is about the same size as a desktop PC; and of course a digital computer can be programmed to mimic some human behaviours, in the same way that the computer can also be programmed to mimic the behaviour of many other objects, including animals, plants and buses. What is needed, therefore, to prove the effectiveness of the metaphor is to show that in some non-trivial way the digital computer is like a human brain. And here of course we run into a fundamental problem: since we don’t, currently, know how the brain works, it is not clear how we can make that claim. The ‘brain = digital computer’ idea is therefore very different from the claim that (for example) ‘water = H2O’, where the ontological status of both terms (on either side of the ‘equals’ sign) is fairly clear, although in the heat of the argument many cognitivists blur this distinction. Given that the metaphor idea is a tricky one to pin down, in an attempt to bring clarity to the debate we have an alternative: to backtrack thirty or forty years and look at the arguments of those who claimed that the human brain really is a digital computer. This has the advantage of being a clear scientific claim (the idea that the brain is merely metaphorically a computer may well be unfalsifiable in Popper’s sense, perhaps one of its attractions). However, this latter claim (i.e. the CTM) would involve claiming that, like a digital computer, the brain works via algorithms; that it has a memory store similar to (or identical to) computer memory stores; that it does not need a ‘body’ (as digital computers don’t); that it doesn’t need a ‘society’ (as digital computers don’t); that it functions in a determinist manner (as digital computers do); and so on.
It would also mean (since this is the way ‘abstract’ models of computers work) that human beings cognize, fundamentally, using the basic laws of logic.
Empirical Evidence
Computer as Metaphor
But in claiming that there are non-trivial comparisons to be made between the human brain and a digital computer we run into the problem of defining the phrase ‘non-trivial’. Human beings are good at some things (walking, talking, making conversation, building digital computers and so on). Digital computers are good at other things (retrieving large amounts of data, running programs pre-written by human beings, and so forth). Digital computers can mimic human beings in trivial ways: i.e. in the very few ways in which human beings are a bit like digital computers. In order to make the claim that digital computers are, in some non-trivial way, like human brains, what we would expect is for the digital computer to mimic things that human beings are good at. And this is why artificial intelligence has been so important in the story of cognitive science. If computers could perform tasks previously thought of as ‘uniquely human’, this would render the whole debate moot: who cares (it could be claimed) whether or not computers were ‘really’ like humans? If computers mimicked human behaviour with a reasonable degree of precision (for example by passing the Turing Test), there would clearly be significant enough similarities, and the metaphor would justify itself (whether or not one might see it as being ‘true’ in some absolute sense). Indeed, the whole point of the Turing Test was to make this argument concrete: when one could no longer tell the difference between the behaviour of a digital computer and that of a human being, then, self-evidently, there would be non-trivial similarities between computers and humans, and the usefulness of the metaphor would have been demonstrated, regardless of whether brains were, in some Platonic sense, ‘really’ digital computers. And in the early days, this promise looked as if it was going to be fulfilled. This is what made the stagnation of GOFAI (‘Good Old-Fashioned Artificial Intelligence’) in the 1970s, and its ultimate collapse in the 1990s, so particularly hard to bear.
Indeed, it might well be argued that cognitivists are still living in 'denial' about the collapse of GOFAI and what it means for their theory of cognition.
Collapse of GOFAI
In 1965 Herbert Simon predicted that ‘machines will be capable, within twenty years, of doing any work a man can do’; in 1967 Marvin Minsky claimed that ‘within a generation ... the problem of creating “artificial intelligence” will substantially be solved’. Given that these predictions have not come true, and that there is no particular reason (apart from blind faith) to think that they are going to come true anytime soon, we should look at the reasons why they failed. There are many problems that GOFAI ran into: the following is not an exhaustive list.
The Qualification Problem
The impossibility of explicitly listing, in advance, every precondition that must hold for a real-world action to succeed.
The Frame Problem
The difficulty of specifying, within a formal system, everything that does not change when an action is performed.
The Ramification Problem
The difficulty of representing all the indirect consequences that follow from an action’s direct effects.
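To see why such problems were so corrosive for rule-based AI, consider a toy STRIPS-style action representation (a hypothetical sketch, not drawn from any particular GOFAI system). Each action lists its preconditions, additions and deletions, and the system relies on a blanket ‘frame assumption’ that every fact not explicitly deleted persists; the fragility of that assumption, once actions have unstated qualifications or indirect effects, is exactly what the problems above name.

```python
# Toy STRIPS-style planner fragment: a state is a set of facts, an
# action lists preconditions ("pre"), facts it adds ("add"), and
# facts it deletes ("del"). All names here are illustrative.
state = {"door_closed", "light_off", "robot_in_hall"}

actions = {
    "open_door": {
        "pre": {"door_closed"},
        "add": {"door_open"},
        "del": {"door_closed"},
    },
    "enter_room": {
        "pre": {"door_open", "robot_in_hall"},
        "add": {"robot_in_room"},
        "del": {"robot_in_hall"},
    },
}

def apply_action(state, name):
    act = actions[name]
    if not act["pre"] <= state:
        raise ValueError(f"preconditions of {name} not met")
    # Frame assumption: every fact not in the delete list persists
    # unchanged. Nothing here can represent indirect consequences
    # (e.g. opening the door letting a draught blow out a candle).
    return (state - act["del"]) | act["add"]

s = apply_action(state, "open_door")
s = apply_action(s, "enter_room")
print(sorted(s))  # ['door_open', 'light_off', 'robot_in_room']
```

The representation works only so long as the programmer has anticipated every relevant precondition and every relevant effect in advance, which is precisely what the real world refuses to allow.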
As a result of all these problems, GOFAI had more or less ground to a halt by the 1990s. But this was a disaster for the Cognitivist Orthodoxy. If the logical arguments for the truth of the CTM were all flawed (as argued above), then all cognitivists were left with was the ‘digital computers are at least metaphorically like brains’ argument. But here the proof of the pudding is in the eating: given that digital computers don’t really seem to be able to mimic any high-level human behaviour, the metaphor is reduced to the trivial observation that some things are a bit like other things, and the whole cognitivist enterprise breaks down.
Conclusion
At the time of writing the field of psychology is in upheaval, with no real dominant paradigm having emerged to challenge cognitivism. Suffice it to say that there are three major aspects of human cognition that any ‘follow-up’ theory to cognitivism is going to have to explain before it can begin to be taken seriously.
Situatedness
Sociality
Embodiment
Finally, one might note that the first two of these concepts emphasise something else that cognitivism ignored: the essentially dynamic nature of human cognition (as opposed to the idea of static, rule-following, ‘module bound’ human agents). This fits in well with new thinking in neuro-psychology about neuro-plasticity and neuro-genesis. Twenty-first-century artificial intelligence will have to create not only embodied (i.e. robotic) and social artificial agents, but also agents that in some shape or form manage to recreate themselves as they learn, which is itself a dynamic, not passive, process. Perhaps the very new sciences of evolutionary computation, developmental robotics and evolutionary robotics will go some way towards solving these problems by providing a new paradigm or model for psychology, to replace the now intellectually bankrupt cognitivism.
Bibliography
Chomsky, N. (1966). Cartesian Linguistics. New York: Harper and Row.
Dreyfus, H. and Dreyfus, S. (2002). 'From Socrates to Expert Systems'. Philosophy, 24, 1.
Hick, W.E. (1952). 'On the Rate of Gain of Information'. Quarterly Journal of Experimental Psychology, 4, 11.
Newell, A. and Simon, H.A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.
Riesbeck, C. and Schank, R. (1989). Inside Case-Based Reasoning. New York: Lawrence Erlbaum.
Shannon, C. (1948). 'A Mathematical Theory of Communication'. Bell System Technical Journal, 27, pp. 379-423 and 623-656.
Tallis, R. (2004). Why the Mind is not a Computer. London: Imprint Academic.
Turing, A.M. (1950). 'Computing Machinery and Intelligence'. Mind, 59, 433-460.
Wallace, B., Ross, A. and Davies, J.B. (2003). 'Information Processing Models: Benefits and Limitations'. In P.T. McCabe (ed.) Contemporary Ergonomics 2003. London: Taylor & Francis, pp. 543-548.
Wallace, B., Ross, A., Davies, J. and Anderson, T. (2007). The Mind, the Body and the World. London: Imprint Academic.