Alan Turing’s Imitation Game

In “Computing Machinery and Intelligence,” Alan Turing proposes an unconventional test for determining whether a machine is intelligent. The test replaces the question “Can machines think?” with an alternative in which it is not possible to be misled by ambiguous terms. Turing argues that the test is sufficient in the sense that all machines that pass it are intelligent, though not all intelligent machines will pass it. In this essay I will address one particular objection, “The Argument from Consciousness,” and argue that it is not successful in its attempt to undermine Turing’s hypothesis. Many people have objections to Turing’s test, objections that he counters in the latter part of his paper. In this particular objection, Professor Jefferson raises the idea that a machine cannot be deemed intelligent because it is unable to feel. A machine cannot compose art from thoughts or feelings, nor can it feel pleasure, guilt, or grief. Jefferson claims that until a machine is able to do these things, it cannot be considered as intelligent as the human brain. He claims that Turing’s test does not …
That is considered a solipsist point of view, meaning that people cannot know anything outside of their own minds, an idea that most people reject. Turing goes on to say that if a machine could write a poem and defend its choices in a viva voce, this would sway those holding the solipsist viewpoint, because the machine would be doing the same things a human could, things we associate with feelings, and the work the computer creates would discredit the idea that we cannot know whether other things are able to ‘think’ and ‘feel’. None of this hurts Turing’s argument that his test is
To come full circle on the first point, we decided that machines must take into account the concept of making rules and reflecting on past rules. The second point was to see how thinking and self-awareness are achieved at the height of the process of rule-making. Searle’s treatment of language and symbols is very important as an example, as it sheds light on why machines can never be human even with human-like brains, which Searle claims they would need in order to be a theoretical Strong AI. Language was created by mapping things and by association with learned objects. To truly learn a language we have to understand and experience what it stands for. Language is an expression of results based on tests of objects that are now definable; it is a shortcut for learning information. Therefore it is important that AI does more than think in a correct pattern, since our experience of inputs comes from our interactions with one another and from what we learn through language, not from solitude. The man in the Chinese Room cannot have those things and therefore cannot learn a language by the method he is prescribed. This ties in with the second point as well: to feel self-aware, we identify ourselves in relation to other people. This shows that our society is also one big brain, made up of a variety of rooms, or people in this case, all making decisions and rules for their
In Part V of his Discourse, Descartes continues his argument that reasoning is the essence of humanity by arguing that non-human animals and machines cannot reason, and that reasoning must therefore be attributed solely to humans. Descartes presents two tests to determine whether a being is human or non-human, both based on the adaptability of the being’s responses. He first asserts that since machines and non-human animals cannot communicate via spoken
Descartes believed that machines have organs they could use to answer questions. What I understood from this point was that machines could only answer questions that humans have programmed or prompted them to answer. If the machine has not been programmed to answer a question, it would not be able to answer. In this argument, Descartes uses speech arrangement as an example. He believed that machines cannot arrange a random or unprogrammed statement the way even the youngest and weakest of men can. Thus, machines can only respond if they have some sort of artificial intelligence that permits them to process the instructed statement (Descartes, Paragraph 1). A classic example would be the grammar section of Microsoft Word. If I type my last name, Madiebo, the computer automatically underlines my name in red, suggesting it is a misspelt word. The computer even goes as far as giving me suggestions on how to spell my own name. The only reason the computer does this is that it has not been prompted to store my name in its dictionary; thus it sees my name as an error. If my name were eventually added to the dictionary, that error would never pop up. The lowest human being, on the other hand, would recognize the word Madiebo as my
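The spell-check behavior described above can be sketched as a simple dictionary lookup. This is only an illustrative sketch, not Word’s actual implementation; the function name and word list are my own:

```python
# Minimal sketch of dictionary-based spell checking, as in the
# Microsoft Word example above. The word list is illustrative;
# real spell checkers use far larger dictionaries.
DICTIONARY = {"the", "computer", "underlines", "my", "name"}

def flag_unknown_words(text, dictionary=DICTIONARY):
    """Return the words the checker would underline in red."""
    return [w for w in text.lower().split() if w not in dictionary]

# Until the name is stored in the dictionary, it is flagged as an error:
print(flag_unknown_words("the computer underlines Madiebo"))
# ['madiebo']

# Once the name is added to the dictionary, the error never pops up:
print(flag_unknown_words("the computer underlines Madiebo",
                         DICTIONARY | {"madiebo"}))
# []
```

The machine is not recognizing the name; it is only reporting the absence of a stored entry, which is exactly the programmed-response limitation the essay attributes to Descartes.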
I will support this thesis by taking a few of the objections argued against the mindedness of a computer in Turing’s paper and explaining how they can be modified to allow a machine to pass the test.
John Searle starts with two claims: that an appropriately programmed computer could genuinely understand, and that such programs explain how the human mind works. Searle then argues that these claims are untrue or without reason.
While some modern philosophers believe that computers are capable of thought, others disagree. To determine whether a computer meets Plato’s criteria for thought, it is important to relate the arguments made to Plato’s ideas of intelligible realities, to the sensible imitations of the forms, and to the concept of thought.
John Searle’s Chinese Room Argument against Strong Artificial Intelligence (AI) is not a successful criticism of functionalist theories of mind because it fails to address the rebuttals formulated against it by Strong AI and functionalism. This paper will discuss this through the precursor of functionalism, the Token Mind-Brain Identity Theory; the Turing Test; Searle’s Chinese Room Argument; and criticisms of his argument involving the System and Robot replies. The Token Mind-Brain Identity Theory is a precursor of functionalist theories of mind which postulates that similar mental states are multiply realized in a variety of physical brain states. In terms of Multiple Realizability, this concept would be deemed problematic
In the arena of artificial intelligence research there is a large debate about the possibility of developing a program which, if run on a computer, would actually produce a mind. There are many arguments one could put forth to support either side of this debate. However, one of the most influential arguments against the possibility of artificial intelligence is the Chinese Room Argument, developed by John Searle. Searle makes some very strong claims about artificial intelligence which seemingly disprove the possibility of developing such a program. While Searle’s argument is quite convincing, there are some fundamental flaws within it which render it inadequate.
The philosophical question of whether machines are able to think has been a central one, debated by philosophers for many centuries. Many prolific philosophers have held various positions, most notably Rene Descartes, who rejects the idea that machines can think, and Alan Turing, who proposed a behavioural test addressing the question of whether machines are able to think. John Searle’s Chinese Room argument also takes a strong position on this question. In this paper, I will argue that Searle’s argument is sufficient because information-processing machines do not have intentionality and therefore do not think.
This means that, like a mechanical machine, the mind can only run programs and give unique outputs based on those programs. The theory claims that simply running a program is enough for one to assume thinking and understanding. To better understand computational theory, one can illustrate how it describes an episode of logical reasoning. For instance, if someone were to hold the beliefs that chickens are birds and that birds can fly, then they would logically conclude that chickens can also fly, no matter how untrue this statement may be. One could come to many similar conclusions with this form of reasoning; however, their truth will never be attested, merely the validity of the reasoning. Logical reasoning can also be viewed as a connection between much smaller primitive processors. One may believe that when reasoning logically one is using very simple and fundamental processes, in this case beliefs, that lead from point A to point B. This would mean that thinking, like intelligence, would in this case be a computational capacity based on the interconnection of many much smaller primitive processes. Therefore, while rationality can be explained using the computational theory, this does not entail the correctness of the theory.
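The chickens-and-birds inference above can be sketched as a chain of simple rules, each acting like one of the primitive processors just described. This is a minimal illustrative sketch, with names of my own choosing, not a claim about any particular theory of mind:

```python
# Minimal sketch of rule-chaining inference: each rule is a
# primitive step, and a conclusion follows from linking them.
RULES = {
    "chicken": "bird",   # belief: chickens are birds
    "bird": "can_fly",   # belief: birds can fly
}

def derive(fact, rules=RULES):
    """Follow rule chains from a starting fact to every conclusion."""
    conclusions = []
    while fact in rules:
        fact = rules[fact]
        conclusions.append(fact)
    return conclusions

# The chain is valid even though the conclusion is false in fact:
print(derive("chicken"))  # ['bird', 'can_fly']
```

The sketch makes the essay’s point concrete: the procedure guarantees only that the conclusion follows from the premises, not that the premises, or the conclusion, are true.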
Some questions, such as what consciousness is, whether an artificial machine can really think, and whether the brain is just some neurons or there are also intangible stimuli that make humans different from other thinking processors, have led some theoreticians to look for answers. They knew our brain is somehow like a complicated computer, so they asked whether a computer could think as we do. Alan Turing and John Searle had different opinions on the subject, based on their experiments.
Rene Descartes’s “Discourse on the Method” focuses on distinguishing the significant difference of the human mind, which he does by showing that neither animals nor machines possess the same mental capabilities as the human rational mind. Descartes distinguishes the human rational mind from non-humans, even though he agrees the two closely resemble each other through their sense organs. However, it is because the mechanical lacks a sufficient aspect of the mind that it cannot be on par with humans. Throughout “Discourse on the Method,” Descartes argues that the significant difference between humans and non-humans is the latter’s limited ability to respond to the world, responding only through external causes acting on the sense organs they recognize. This creates a dividing ‘line’ which separates humans from non-humans. Furthermore, this paper will first distinguish the difference between human and non-human mentality as presented in Descartes’s “Discourse on the Method.” I will then theorize a modern AI that could possess the concept of a human mind, as well as a powerful AI that lacks the ability to understand its own intelligence. Lastly, I will argue why I believe there are no machines that could possess the fundamental ability to understand knowledge; rather, they are merely entities that react to the world instead of acting within it.
Artificial Intelligence is far more complicated than the mechanical process of thinking. There are multiple things in this world, from phones to computers, that can be considered Artificial Intelligence. However, descriptions of AI vary dramatically because intelligence itself is not easy to describe or define. The Turing test states that if a computer can hold up its side of an imitation game, it is considered to have intelligence. To counter that point: if one can only imitate and copy someone else’s thoughts and patterns, is it its own entity? Indeed, using language as the basis of intelligence is a good step; however, it does not answer the conception of a mind. Artificial Intelligence can imitate, but since it cannot create its own distinctive answer, Lady Lovelace states that it lacks consciousness and cannot be the basis of a mind. If we look at a human mind, one can input data from various sources and create mental images that our sentient mind can handle and process appropriately. With that in mind, we sound and process exactly like a computer or an intelligent program; where we differ is how we make an output. This is where the scale from a weak AI to a strong AI comes into play. A weak AI can input data per its program, but that is where it is limited, while a strong AI can think for itself, possibly outside its program. For example, a human can be on a train
In the film Ex Machina, the plot revolves around a robot, Ava, and its ability to pass the Turing Test. More importantly, the film raises the question of whether or not Ava is conscious. I will argue that Ava is conscious. To do this, I will first define and explain what consciousness is. I will then argue that the best metaphysical model for understanding and determining consciousness is Chalmers’s type-A materialism. Then, since consciousness can only be physically understood, we can only use behavior-type tests to determine if something is conscious. I will explain testing methods that allow us to determine whether or not something is conscious. Finally, I will show that Ava passes these tests and must be conscious.
The Imitation Game was the name of the test Alan Turing contrived to determine whether machines could