Can Computers Think?

OKHU’ET
Apr 10, 2021


This “essay” was written as part of my admission application to TUM. The interpretations of the ideas presented here were formed within the boundaries of my limited knowledge and may not perfectly reflect the original intent of their authors.

Alan Turing (1950), in his famous paper, puts forth the question “can machines think?” and observes that discussing it requires defining what a machine is and what it means to think. Instead of settling these definitions and attempting to answer the question directly, Turing replaces it with another: “can a conceivable digital computer play against a human and win the imitation game?”

Daniel Dennett (2004) argues forcefully that the imitation game, or the so-called Turing Test, in its original formulation by Turing, is far more difficult for computers than is commonly assumed. Rightly so. In the original imitation game, unlike in variations that introduce extra constraints, such as the earlier versions of the Loebner Prize competition (Dennett, 2004), the human judge is allowed to ask questions of arbitrary generality. Consider the scenario proposed by John Searle (1980) in his phenomenal paper “Minds, Brains, and Programs.” The following story is presented to the computer contestant:

A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burned to a crisp, and the man stormed out of the restaurant angrily, without paying for the hamburger or leaving a tip. (p. 417)

Then the computer is asked, “did the man eat the hamburger?” Although this question is very easy for humans to answer, it is extremely difficult for a computer. Answering it requires general knowledge of restaurants, hamburgers, and the idea behind leaving a tip, as well as an understanding of how the way these elements appear in the story relates to the reasoning of the human character.
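To make the difficulty concrete, here is a minimal sketch (my own toy illustration, not a program from any of the cited papers) of an answerer that relies only on what the story states verbatim:

```python
# Toy illustration: a "literal" question answerer with no general
# knowledge of restaurants, tipping, or human behavior.

STORY = (
    "A man went into a restaurant and ordered a hamburger. "
    "When the hamburger arrived it was burned to a crisp, and the man "
    "stormed out of the restaurant angrily, without paying for the "
    "hamburger or leaving a tip."
)

def literal_answer(story: str, question: str) -> str:
    """Answer "did the man ...?" only if the story states the event verbatim."""
    # Extract the event asked about, e.g. "eat the hamburger" (Python 3.9+).
    event = question.lower().removeprefix("did the man ").rstrip("?")
    if event in story.lower():
        return "yes"
    # The story never states the event either way, so the program is stuck.
    return "unknown"

print(literal_answer(STORY, "Did the man eat the hamburger?"))  # -> unknown
```

A human answers “no” immediately by inferring it from “stormed out … without paying”: people who angrily leave a restaurant without paying have not eaten their food. That inference draws on background knowledge about the world, which no amount of literal matching over the story’s text can supply.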

When John Searle confronted the philosophical debate around thinking machines, it was obvious to him that the idea of an intelligent computer was a frivolous one. He argues, through his Chinese room thought experiment, that no conceivable digital computer can ever be intelligent. Even if a computer, or a Turing machine, were able to answer all the questions posed to it and pass the Turing test despite its difficulty, in doing so it would merely be following the set of rules presented to it, i.e. executing a program; and although its outputs would be indistinguishable from those of an intelligent human, the computer would not have the slightest understanding of either the questions or the answers.

Put differently, a computer can imitate mental processes, but this does not necessarily make it intelligent. On this view, intelligence arises only from the properties of the biological constituents of brains, so a non-biological machine built to bring forth the feats of intelligence can neither produce nor explain intelligence. Gottfried Wilhelm Leibniz (1714) expresses similar concerns in his mill analogy. Leibniz imagines a machine capable of cognition, enlarged to the size of a mill. He argues that one could enter such a machine and, upon inspecting its mechanical parts, find nothing that explains cognition.

Perhaps the most intuitive challenge to this reasoning is Daniel Dennett’s (2002) metaphor of zombanks. A zombank (zombie bank) is an imaginary financial institution that performs all the functions of a traditional bank but is not a real bank, since it lacks an invisible essential property of real banks. The fallacy lies in the definition of a real bank: a bank can be defined entirely by its functions and services. It is absurd to think that a financial institution like a zombank does not count as a real bank because it lacks some undefined, mysterious property. It is just as absurd to imagine a machine that reacts and interacts exactly as an intelligent being does but is not intelligent because it lacks some mysterious property.

This brings us to the functionalist approach to our original question. From a functionalist perspective (e.g. Putnam, 1975), the mind is a functional system that can be designed by identifying its functional parts and their relations to one another. This approach discards the requirement of a biological essence: it suffices to implement a system capable of performing functions similar to those of our brains, irrespective of whether the system is biological or non-biological. The functionalist view thus allows for programs that model the human mind and can be run and tested on computers.

When John McCarthy and others proposed the Dartmouth Summer Research Project on Artificial Intelligence in 1955, they conjectured that “…every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Without delving further into the debate of simulation versus replication, this conjecture is perhaps the most productive one for the advancement of the philosophy of mind. In Joscha Bach’s (2009) words, implementing programs “…may soon become a prerequisite to keep philosophy of mind relevant in an age of collaborative and distributed expertise” (p. 16).

Different approaches and ideas can be converted into formal models and implemented as programs. Such a program can be tested against the available cognitive data, and its validity simply depends on whether it can explain that data. Any mismatch between the predictions of the program and the data indicates weaknesses in the model the program implements and gaps in the implementer’s understanding. Moreover, a program is not confined to the imagination of a single implementer, since anyone else can fully understand it and contribute to it.
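As a minimal sketch of this implement-and-test workflow (the forgetting model and the data points below are hypothetical, chosen purely for illustration), consider a toy model of memory checked against measured recall rates:

```python
# A toy "formal model" of forgetting tested against (hypothetical) data.
import math

def predicted_recall(t_hours: float, decay: float = 0.3) -> float:
    """Toy model: the probability of recall decays exponentially with time."""
    return math.exp(-decay * t_hours)

# Hypothetical recall rates measured 1, 4, and 24 hours after study.
observed = {1: 0.74, 4: 0.31, 24: 0.12}

for t, measured in observed.items():
    p = predicted_recall(t)
    print(f"t={t:>2}h  predicted={p:.2f}  observed={measured:.2f}  "
          f"error={abs(p - measured):.2f}")
```

The model fits the short delays but badly underpredicts recall at 24 hours, which points to a concrete, diagnosable weakness (a single decay rate cannot capture long-term retention). This is exactly the kind of falsifiable feedback that a purely verbal theory of mind rarely produces.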

Computer science is the best thing that happened to the philosophy of mind. It allows us to step aside from the philosophical turmoil and make measurable improvements. I suggest that we postpone answering the question “can machines think?” and focus our efforts on building our “Leibnizian mill.” Then we can ask our mill to give us the answer to that question, if we still have not figured it out ourselves in the process of building it.

References

Bach, J. (2009). Principles of synthetic intelligence: PSI: An architecture of motivated cognition. Oxford University Press.

Dennett, D. C. (2002). Introduction. In G. Ryle, The concept of mind.

Dennett, D. C. (2004). Can machines think? In C. Teuscher (Ed.), Alan Turing: Life and legacy of a great thinker (pp. 295–316). Springer. https://doi.org/10.1007/978-3-662-05642-4_12

Leibniz, G. W. (1714). Monadologie (R. Latta, Trans.).

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence.

Putnam, H. (1975). Mind, language, and reality. Cambridge University Press.

Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
