Philosophy of artificial intelligence

State and explain Lucas’s argument against the possibility of AI. What do you think is the best reply to Lucas’s argument?


Gödel suggested that the mind was a computerised mechanism: a formulation of logic associated with a system and structure of language that represents the world. This implied that intelligence was a learning process based upon accepting and rejecting hypotheses about the world through a set of formulae deemed either provable or unprovable within the system of logic (Gödel, 1934). This idea was backed up by cognitive research into the human capacity for learning. Bruner et al. devised a test to see how the human mind constructs categories of logic, believing it to work by way of the hypothesis acceptance and rejection Gödel described (Bruner et al., 1956). He used a variety of shapes presented under a variety of conditions – some cards sharing the same number of shapes, some the same colour, and some the same number of surrounding borders. From the results of his experiment, Bruner claimed that two forms of learning were apparent: successive scanning, which entertains one hypothesis at a time, and conservative focusing, which seeks to eliminate whole classes of hypotheses – border, number of shapes, colour – at once (Bruner et al., 1956). This growing belief in the mind as a mathematical translator of the meaning of experience provided the foundation for Turing, who surmised that artificial intelligence was a form of intelligence that could learn according to the coded principles of mathematical equations and could be understood as mimicry of human behaviour (Turing, 1950). He subsequently suggested that responses arrived at through the rejection and acceptance of truths within a conceptual framework were all that the human mind consisted of. This idea of the mind as a programmed agent, accepting and rejecting the truths of logical and mathematical equations, was fundamental to Gödel.
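The elimination strategy Bruner describes can be made concrete with a toy sketch. This is a hypothetical illustration, not Bruner’s actual procedure: the attribute names, card values and hidden concept are invented, but the logic – each labelled instance eliminating every candidate rule it contradicts – is the hypothesis acceptance and rejection the passage describes.

```python
# Toy illustration of hypothesis elimination in concept learning:
# each labelled card eliminates every candidate rule it contradicts.
# Attributes, values and example cards are invented for illustration.

ATTRIBUTES = {
    "colour": ["red", "green", "blue"],
    "shapes": [1, 2, 3],        # number of shapes on the card
    "borders": [1, 2, 3],       # number of borders around the card
}

# A hypothesis says: "the concept is all cards whose <attr> equals <value>".
hypotheses = [(a, v) for a, vals in ATTRIBUTES.items() for v in vals]

def consistent(hyp, card, is_positive):
    """A hypothesis survives if it classifies this card correctly."""
    attr, value = hyp
    return (card[attr] == value) == is_positive

def eliminate(hypotheses, card, is_positive):
    """Keep only the hypotheses this labelled instance allows."""
    return [h for h in hypotheses if consistent(h, card, is_positive)]

# Suppose the hidden concept is "green cards".
observations = [
    ({"colour": "green", "shapes": 2, "borders": 1}, True),
    ({"colour": "red",   "shapes": 2, "borders": 1}, False),
    ({"colour": "green", "shapes": 3, "borders": 2}, True),
]

for card, label in observations:
    hypotheses = eliminate(hypotheses, card, label)

print(hypotheses)  # → [('colour', 'green')]
```

Three labelled cards suffice to reject eight of the nine candidate rules, leaving only the correct one – learning as the provable narrowing of a hypothesis space.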
To Gödel, the structural reality that an intelligent being saw before it implied that artificial intelligence could be created in accordance with that structure, and that human life – or perhaps experiential living – was merely a reaction to certain stimuli based upon a structural code of predetermined logic, just as it is with a computer simulation.

Unhappy with this model of the cognitive mind, and with the notion of intelligence as founded upon formula and theorem, J. R. Lucas argued that Gödel’s theorem posed many problems for the view that the mind is like a computer. Speaking of the limitations that a quantitative artificial brain may encounter in accepting and rejecting certain truths according to its programming, Lucas suggested that

‘All that Gödel has proved is that a mind cannot produce a formal proof of the consistency of a formal system inside the system itself: but there is no objection to going outside the system and no objection to producing informal arguments for the consistency either of a formal system or of something less formal and less systematized. Such informal arguments will not be able to be completely formalized: but then the whole tenor of Gödel’s results is that we ought not to ask, and cannot obtain, complete formalization.’ (Lucas, 1961)

Rationale for Lucas’s approach was provided by Searle’s Chinese room thought experiment. Searle indicated that even though an artificial intelligence could recognise, incorporate and subsequently mimic the external behaviours required to appear human (or emotionally intelligent), this did not necessarily indicate any awareness of what the behaviour meant or symbolised to other humans – in essence, it did not understand the true human meaning. He used the example of an English-speaking human placed inside the mechanical mind of a robot and using certain symbols as a coded ‘representative’ for the instruction of an unknown language, i.e. Chinese (Searle, 1980). He then indicated that although the human had a form of code with which to elicit a response in Chinese, he did not actually know the meaning or significance of what he was doing. Essentially, it was simply a response according to a pre-programmed code. Following these criticisms of artificial intelligence as a mechanical process involving a pre-programmed, innate knowledge of the environment and of human behaviour, which had led to Searle’s Chinese room experiment, Lucas reasoned that,
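Searle’s point can be put in programmatic form with a minimal sketch. The rule book below is an invented placeholder, not Searle’s example: the point is that the program maps input symbols to output symbols by rule-following alone, and nothing in it represents what any symbol means.

```python
# Toy "Chinese room": the operator follows a rule book mapping input
# symbols to output symbols. The mapping is purely syntactic; these
# entries are invented placeholders and nothing here encodes meaning.

RULE_BOOK = {
    "你好": "你好",       # a greeting in, a greeting out
    "你好吗": "我很好",   # "how are you?" -> "I am fine"
}

def chinese_room(symbol_string: str) -> str:
    """Return whatever the rule book dictates; 'understand' nothing."""
    # Unrecognised input gets a stock reply: "please say that again".
    return RULE_BOOK.get(symbol_string, "请再说一遍")

print(chinese_room("你好吗"))  # a syntactically apt reply, no semantics
```

From the outside the replies may pass for competence; on the inside there is only table lookup – which is precisely the gap between syntax and semantics that Searle alleges.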

‘Complexity often does introduce qualitative differences. Although it sounds implausible, it might turn out that above a certain level of complexity, a machine ceased to be predictable, even in principle, and started doing things on its own account, or, to use a very revealing phrase, it might begin to have a mind of its own. It would begin to have a mind of its own when it was no longer entirely predictable and entirely docile, but was capable of doing things which we recognized as intelligent, and not just mistakes or random shots, but which we had not programmed into it.’ (Lucas, 1961)

This seems to define what is human and what is machine. Lucas does not dispute the theoretical idea that artificial intelligence can come to resemble human intelligence. However, he does draw a distinction between a mechanical automaton and an autonomous mind that thinks free of systematic code – a mind that perceives experience through an acceptance of logical truths and a rejection of unfounded abstraction. Bringing into context the notion of the human mind as a determinant of the structure of knowledge, rather than a logical interpreter of that knowledge, Lucas reasoned that if, contrary to what Turing had suggested, a mechanical mind could begin to think free of its programmed code then,

‘It would cease to be a machine, within the meaning of the act. What is at stake in the mechanist debate is not how minds are, or might be, brought into being, but how they operate. It is essential for the mechanist thesis that the mechanical model of the mind shall operate according to “mechanical principles”, that is, that we can understand the operation of the whole in terms of the operations of its parts, and the operation of each part either shall be determined by its initial state and the construction of the machine, or shall be a random choice between a determinate number of determinate operations’ (Lucas, 1961)

However, although his argument, backed up by Searle’s Chinese room experiment, gave reasonable rationale for rejecting a mechanical intelligence – grounded in the subject’s ability to see outside of a logical structure that was not necessarily predetermined or pre-programmed – it did accord with the sentimental notion of liberal humanity. In reaction to this notion, the French philosopher Jean Baudrillard noted some crucial factors in the reality of humanity’s cultural condition that could be seen as contradicting the liberal freedom Lucas prescribed. Suggesting that the moral reality so crucial to Lucas’s rationale was being replaced by ‘a hedonistic morality of pure satisfaction, like a new state of nature at the heart of hyper civilisation’, Baudrillard prescribed the notion of the hyperreal as a simulation beyond that of a logical code applied to a structure of knowledge, one that departed from the ideological frameworks informing the notion of liberal humanity (Baudrillard, 1968, p.3). He suggested that,

‘A whole imagery based on contact, a sensory mimicry and a tactile mysticism, basically ecology in its entirety, comes to be grafted on to this universe of operational simulation, multi-stimulation and multi response. This incessant test of successful adaptation is naturalised by assimilating it to animal mimicry, and even to the Indians with their innate sense of ecology: tropisms, mimicry, and empathy: the ecological evangelism of open systems, with positive or negative feedback, will be engulfed in this breach, with an ideology of regulation with information that is only an avatar, in accordance with a more flexible pattern.’ (Baudrillard, 1976, p.9)

What Baudrillard does, however, is implement the idea of a simulated code that works by replacing the humanistic ideology that once informed the sophisticated and complex gap between the subject and the environment, such as social exchange and communal ideas. By doing this, Baudrillard gave an example of how this simulated code informed a new humanity, shaping an intelligence that no longer conformed to a life lived according to the meanings supported by the notion of humanity, but instead created an imaginary life understood and identified with through its relationship to the values of an external code being communed – essentially, placing life itself as a simulated relationship between the subject and his or her own choice of object. This meant that the human emphasis on the mysteries of the mind stressed by Lucas was just as questionable, and just as determinist, as the artificial intelligence that Gödel prescribed. This can be seen as the crucial contemporary reply to Lucas’s argument concerning artificial intelligence.


Baudrillard, J. (1976) ‘The Order of Simulacra’, in Symbolic Exchange and Death (1993 edn). London: Sage.

Bruner, J. S., Goodnow, J. J. and Austin, G. A. (1956) A Study of Thinking. New York: John Wiley and Sons.

Gödel, K. (1934) On Undecidable Propositions of Formal Mathematical Systems. Lectures at the Institute for Advanced Study, Princeton, NJ.

Lucas, J. R. (1961) Minds, Machines and Gödel. Philosophy, 36, 112-127.

Searle, J. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-457.

Turing, A. M. (1950) Computing Machinery and Intelligence. Mind, 59, 433-460. Reprinted in The World of Mathematics, ed. James R. Newman, pp. 2099-2123.