Is Artificial Intelligence Possible?
In Mind Design, each author uses specific examples to press his viewpoint upon the reader; unfortunately, each author also overlooks the holistic outcome of artificial intelligence (AI) research. For example, some philosophers, with little scientific training, strongly believe that artificial intelligence cannot be created, while some scientists, with picturesque notions and without enough scientific research, hold equally strong beliefs to the contrary. As a result, philosophers denounce AI research too quickly, and scientists deceive themselves too easily. Therefore, the viewpoints of both philosophers and scientists need to be analyzed further, from an unbiased and more scientific point of view, regarding the possibility of success in AI research.
First of all, several ambiguities need to be clarified. A computer is, by definition, simply an electronic machine that can solve mathematical problems through a set of specific rules and can process coded information (Funk 130). The criteria for intelligence are thought, memory, heuristic inspiration, cognition, and emotions (though this particular set of criteria is still under debate). It can be inferred that computers already have some of the necessities of intelligence, such as thought (controlled strictly by programs) and memory (even more precise than a human's). At the present time, however, computers are obviously not able to have imaginative inspirations, consciousness, or emotions. Some scientists even point out that robots controlled by computers meet the criteria for life, if stretched far enough: energy use, metabolism, movement, response to external stimuli, growth, and genetics. Clarifying these ambiguities about computers and intelligence leads to the conclusion that computers are probably technically alive, yet not completely intelligent. Therefore, a great deal of AI research remains to be completed, since there are quite a few differences between the limited intelligence of computers and the intelligence of mankind.
As usual, philosophical writers use incongruent comparisons to ridicule AI experiments. Occasionally, though, these philosophers have valid viewpoints, even if they use incomplete explanations to denounce AI experiments. For example, Putnam believes that passing the Turing Test does not justify true artificial intelligence, because the computer has neither a psychology nor an understanding of the language it uses (Haugeland 211). Putnam, however, gives only a vague and imprecise explanation; a computer passing the Turing Test does not need to be cognitive, heuristic, or emotional, but merely needs a very large database that can mimic a person's linguistic skills, such as ELIZA (Haugeland 283). Therefore, passing the Turing Test is an invalid method of determining AI. Another example of a quick and strict verdict is Pylyshyn's claim that only people have the complexity of psychology. Pylyshyn philosophically debates whether a "computational mechanism" is comparable to psychology, which he defines as how human minds think (Haugeland 68). Of course, Pylyshyn concludes, without any substantial evidence, that computers are not intelligent because they think differently from people. Likewise, Minsky and Dreyfus only point out the difference between a computer's mathematical thought processes and a human's heuristic thinking abilities by contrasting their linguistic and "information retrieval" capabilities. Even worse, both Haugeland and Fodor reason that cognition is another characteristic separating computers from intelligence, without giving an entirely clear explanation.
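The kind of mimicry ELIZA performed can be sketched in a few lines. This is a minimal, generic illustration of shallow pattern matching, not Weizenbaum's original script; the rules and responses below are invented for demonstration only.

```python
import re

# Minimal ELIZA-style rules: each pattern maps to a canned reframing.
# These rules and responses are illustrative, not Weizenbaum's originals.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "What makes you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(sentence: str) -> str:
    """Return a canned response by shallow pattern matching.

    No understanding is involved: the program only reflects the
    user's own words back inside a fixed template.
    """
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am worried about computers"))
# → "Why do you say you are worried about computers?"
```

A conversation with such a program can feel fluent, which is exactly why passing a Turing-style exchange says little about genuine cognition.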
On the other hand, Searle and Davidson reach the same conclusion by scientifically comparing the workings of the human mind and computer programs in over a dozen concrete examples; but of course, very little about the mind is truly known (Haugeland 282). This suggests that even though philosophers sometimes have valid viewpoints, built on incomplete and erroneous scientific evidence, they might not understand the importance of AI experimentation with regard to learning more about the human mind and the usefulness of specialized AI programs (see next paragraph).
Completely opposite to most philosophers, scientists are limited by the physical constraints of what computers can do. Presently, the computer industry has made considerable advances in the field of connectionism, especially with parallel computers over serial computers. Using these more advanced computers, scientists are now able to construct specialized programs to do certain tasks. As a result, scientists are quick to respond that AI research has brought about significant breakthroughs through better computers and self-programming environments (such as LISP and Prolog). Simon points out that this step is a logical stage in the beginning of AI research, yet he does not continue by explaining possible stages in the future (Haugeland 36-37). As a matter of fact, Simon claims that the symbolic understanding of the brain and of computers is quite similar, even though very little is known about the exact details of the brain. Conversely and perplexingly, one of the most promising theories in AI research is modeling the neural network of the human brain, but again "... detailed knowledge of the neurophysiology of the brain..." will first be needed. As a result, these experiments will have to wait until there is more information concerning the structure of the human brain (including its chemical reactions) and thought "waves" (possibly the electrolytes). A second and more realistic theory, discussed by Haugeland, is called "wetware," which could biologically combine organisms (most likely humans) and machines. This would result in a cybernetic organism with man's intelligence and a computer's precise memory and computational powers. This theory has obvious advantages; but again, technology has not yet risen as high as the imaginations of scientists and science-fiction writers. As a result, almost all future AI experiments with aspiring hopes are hindered by the lack of technological advances and neurophysiological knowledge.
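The connectionist approach mentioned above can be illustrated with a toy example: a single artificial neuron that adjusts its connection weights from experience. This is the generic textbook perceptron learning rule, offered only as a sketch of the idea; it is not a model taken from Mind Design, and real brain modeling is vastly more complex.

```python
# A toy "connectionist" unit: one perceptron learning logical AND.
# Generic textbook example, not a model discussed in Mind Design.
def train_perceptron(samples, epochs=10, lr=0.1):
    """Adjust weights by the classic perceptron rule: w += lr * error * input."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when the prediction is right
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # → [0, 0, 0, 1]
```

Even this trivial unit "learns" only a statistical boundary between inputs; scaling such networks toward anything brain-like is precisely where the missing neurophysiological knowledge becomes the bottleneck.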
Even though "the philosophy of cognitive science" has grown drastically in the past few decades, it eludes philosophers and scientists even more because of the newly erupted AI controversy. Of course, McDermott's simple rectification seems to straighten out the problem easily: the natural stupidity of people will always prevent the construction of artificial intelligence (Haugeland 143-160). This indirectly implies that people will never scientifically or technologically reach a point at which mankind will be able either to construct an artificial neural network or to biologically merge computers and people's minds. This is obviously a narrow-minded and unscientific viewpoint that does not accept the possible full potential of mankind. And in contrast to most educational books, Mind Design is biased in favor of the philosophical viewpoints, thus unfairly pressing the authors' viewpoints upon the readers.
Now that philosophers and scientists have learned considerably more about artificial intelligence, the holistic viewpoint of AI research highlights how little mankind really knows about intelligence. But the fact remains that mankind has already learned a great deal through AI research, and that progress in AI research will only become more controversial.
- Funk and Wagnalls Standard Desk Dictionary. United States: Dun and Bradstreet Corporation, 1981.
- Haugeland, John, ed. Mind Design. Massachusetts: MIT Press, 1988.
by Phil for Humanity