There are two types of artificial intelligence (AI). Strong AI is a research program to get computers to think like humans (some include consciousness as the goal), while weak AI aims to get computers to act smart, performing some of the tasks that humans perform. Some weak AI programs are called expert systems. Others include making mathematical proofs, perceptual devices (e.g. self-driving cars), and chess playing (e.g. IBM’s Deep Blue).
I think that Strong AI is not going to happen anytime soon. And, there is a good possibility that researchers will not accomplish it for quite some time—maybe never. There are a number of reasons why I think this. One is that most research, except for parallel distributed processing (PDP) work, is under the influence of language, and language does not equate with thought, or so I am coming to believe. Another is the sheer complexity of what they are trying to produce. And finally, I think the need for it is off base; why do you need a machine to think like a human if we already have humans? I will try to address each of these reasons, and possibly more, in this blog. I will also have a bit to say about weak AI.
The language objection will take some hashing out. I first want to explore the possibility that we do not think in language. This is the main reason I feel that the language route is not going to succeed, whether in building knowledge bases or in using computer code to program strong AI.
I will discuss not thinking in language first. The first thing I want to point out is language’s fluidity. As I speak or write I am rarely concentrating on word choice or order. I do not often pay attention to how my sentences hang together, or even whole paragraphs. Of course, there are times when I pause for a word or a phrase, or for what to write or say next as a whole. There are even times when I lose my whole train of thought. This is not a contradiction, and here is why: I think that language is the translation of thought, but not thought itself. So, at times the translation has to wait for a thought, or the translation itself takes extra time for the brain to work out.
One more thing I will say about the fluidity of language: it can get even worse when you move away from English, the only language I know. If you observe someone speaking a language like Spanish, the speakers are flying compared to most English speakers. I do not know if Spanish speakers can write faster than someone writing English, or if this extra fluidity is limited to spoken Spanish.
I have the feeling that language is actually a sense organ. But, instead of allowing us to sense things from without, it senses our thoughts. It is similar to the sense of body position, thought to arise from activity in the cerebellum and brain stem. Now the thing of it is, language, unlike other sensory apparatuses, also has an output function—we speak. This allows us to tell others what we are thinking. Language is very important in communicating with others. But, it is also how we communicate with ourselves.
Self-consciousness is the ability to be aware of ourselves. Now other creatures have this ability, or so it is thought, based on animal experiments. In these experiments chimpanzees, elephants, and dolphins have marks painted on their heads, which they are then able to see in a mirror. Unlike other animals they appear to be aware that the spot is on themselves. They touch the spot, except dolphins I suppose (what they do to indicate their awareness I do not know). This is the main evidence, and I suppose it is as good as it gets. I did find out an interesting aspect of these experiments: the animals that are able to see these marks as on themselves need to have experience with mirrors in the first place.¹
Does this mean that other animals are not aware of themselves? I do not think so. If you observe animals and how they navigate their world it seems obvious to me that they are aware of what their bodies are doing. When I observe Baxter (my cat) trying to maneuver around an obstacle, he is focusing on where his paw is going and whether it is going in the right place to accomplish his maneuver. Of course, this proves practically nothing. But, I think it is at least possible that they have a minimum amount of self-consciousness.
I digressed into this exploration of self-consciousness because I feel that humans have a unique form of it: the ability to be aware of our thoughts, and this is exactly what I think language allows us to accomplish. Again, this is not exactly proof of my surmise, but it would provide a role for language separate from thought, which is what I am arguing for.
Another notch in my argument is that we think in other ways that do not even involve language. We think in images (i.e. visualization). Now, I am not a very visual thinker. As a matter of fact I am fairly poor at it. I once tried to visualize a complicated cake recipe—Martha Washington’s Chocolate Mousse Cake. It involved a sponge cake, a mousse, and a ganache. I went around for days seeing myself doing each step and assembling it all together. Well, the cake was a failure. The sponge cake was overbaked for starters, and while the mousse was very good, when I went to top the cake with the ganache it flowed quickly over the top, down the sides of the cake, over the lip of the plate, onto the counter, and down the front of the dishwasher. I suppose with some practice I could be a better visual thinker, but this experience put me off that way of thinking.
Despite my own lack of ability to think visually, there are others who are excellent at it. A good many top-notch athletes claim to go over their routines or procedures in their mind’s eye. I have heard that the skier Lindsey Vonn does this, and she is the top downhiller in the sport. And, there are plenty of other athletes who claim that it improves their performance, and they have the victories to back it up.
I think, but do not know for certain, that many chess experts are able to go through their moves visually, thus seeing the board and pieces many turns in advance—and not only this, but a myriad of alternatives too. Of course, visual artists must think visually as well, and actors are supposed to use visualization to learn their parts.
But, visual thinking is not the only nonverbal form of thinking. While I was discussing the idea that we do not think in language with another person, that person made me aware that some people are able to think musically. There is something in their thinking that allows them to marry sounds with sounds. I cannot even imagine how they do this because I have no musical talent myself. I enjoy lots of different types of music, but I cannot even keep a beat.
The other major area of thinking outside of language is numerical or mathematical thinking. This may not be exact, however, because mathematics could be considered a symbolic discourse, and human language is almost exclusively symbolic. But still, there is the fact, shown by experiments, that babies acquire a number sense before they develop language ability. Actually, other animals have a number sense as well, and they definitely have no symbolic language capabilities.* Score one more point for animal thinking. This indicates that mathematics does not fall solely in the symbolic camp in the sense of language.
Then, there are the structural and functional areas of the brain itself that speak against equating thought with language. The known language areas, Wernicke’s and Broca’s, are located in different parts of the brain than where thought is supposed to come from—the prefrontal cortex. Certainly, there are many connections between these language areas and the prefrontal cortex, which may explain how language comes so quickly to our minds.
In support of my notion that we do not think in language I presented the fluidity of language production, not to mention language comprehension, which I did not even discuss; I presented different types of thought that do not appear to be connected to language—visual, musical, and mathematical (there are certainly other areas as well); and I presented the structural and functional divisions of the brain. I will fully admit that this does not rise to the level of absolute proof (probably not even close), but I do think it is enough evidence to make it at least plausible that we do not think in language. Does this go against Occam’s Razor? I do not think so because the entities are already there; I am not inventing any new entities here.
And, I have to say that plausibility arguments are about the best one can hope for in philosophy, and in particular, the philosophy of mind. Of course there is need of evidence, and the philosophy of mind should be informed by science as much as possible. A philosophy of mind that is in contradiction with science is bound to be wrong. The only area of philosophy where a plausibility argument may be surpassed is in logic, but there are many different types of logics. It is only after you accept a particular set of axioms and rules of reasoning that logic can become precise. So, deciding which logic may be more applicable to a particular area of investigation requires more than exactness.
I have gone into my argument for not thinking in language because AI is mainly a linguistic affair, except for PDPs, though there are other issues with these that I will leave for the moment. Outside of PDPs, most AI research is based in some manner on knowledge bases. These knowledge bases are mainly sets of words used to define particular domains of knowledge. So, if we do not think in language, using a linguistic knowledge base misses the point of how we think. If the goal is weak AI, this is not a problem, but for strong AI it seems to me to be a serious stumbling block.
Finally, almost all computer programming is done with high level computer languages. So, if you are using a form of language (admittedly a restricted one), I think again that you are not working with the right medium. It is true that ultimately these high level languages get translated into machine language, which is just a series of 1s and 0s. But, if mathematical thinking is solely symbolic, this may lead to the same types of problems that, in my opinion, the higher level languages have.
At one point a computer language called LISP, built around lists as its basic structure, was the main vehicle for programming intelligence into computers. But, LISP programs have shown no ability to mimic general human intelligence, which is the goal of strong AI. So, you can see how a language approach is very problematic, to say the least.
Is the case closed against the language approach to strong AI? Probably not. One reason is that at some level researchers may learn to overcome the language issue. And another is that I could be wrong, and we really do think in language. If you think that anything the brain does is thinking, then I may be barking up the wrong tree. But language-based thinking may still be different from reasoning, and if so, then I may still have a point.
Well, what of PDPs? I will say at the start that they do some pretty amazing things. There are PDPs that recognize faces and emotional expressions, parse grammar, and interpret sonar readings. For those who do not know what PDPs are or how they work, I will attempt to explain them a little bit here.
First, the rationale is that to get a computer to function like a human brain it needs to mimic the brain’s structures. PDPs are structured in three layers: an input layer, an intermediate layer, and an output layer. As far as brain structure is concerned, the input layer is information received from the senses, or possibly other brain systems; the second layer is where synaptic strengthening and weakening take place; and the output layer is the resulting analysis. After a first pass the actual output is compared to the desired output, and if a fit does not occur (which is the case for many cycles) the middle layer’s connection weights are adjusted, and the comparison is made again.
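To make that cycle concrete, here is a minimal sketch in plain Python of a three-layer network trained by error-driven weight adjustment. This is purely my own illustration, not any particular PDP implementation; the tiny 2-2-1 network and the XOR task are assumptions chosen just for demonstration.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny three-layer network: 2 inputs, 2 intermediate units, 1 output.
# Each weight row carries an extra entry at the end for a bias term.
n_in, n_hid = 2, 2
w_hid = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

def forward(x):
    # Input layer -> intermediate layer -> output layer
    h = [sigmoid(sum(w[i] * xi for i, xi in enumerate(x)) + w[-1]) for w in w_hid]
    o = sigmoid(sum(w_out[i] * hi for i, hi in enumerate(h)) + w_out[-1])
    return h, o

def train_step(x, target, lr=0.5):
    # Compare actual output to desired output; the mismatch drives
    # strengthening and weakening of the middle layer's weights.
    h, o = forward(x)
    delta_o = (o - target) * o * (1 - o)
    old_w_out = list(w_out)  # keep pre-update weights for the hidden deltas
    for i, hi in enumerate(h):
        w_out[i] -= lr * delta_o * hi
    w_out[-1] -= lr * delta_o
    for j in range(n_hid):
        delta_h = delta_o * old_w_out[j] * h[j] * (1 - h[j])
        for i, xi in enumerate(x):
            w_hid[j][i] -= lr * delta_h * xi
        w_hid[j][-1] -= lr * delta_h

# XOR: a task a single layer cannot learn, but three layers can approach.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
initial = sum((forward(x)[1] - t) ** 2 for x, t in data)
for _ in range(5000):
    for x, t in data:
        train_step(x, t)
final = sum((forward(x)[1] - t) ** 2 for x, t in data)
```

After many cycles of adjustment the total error shrinks, which is exactly the "fit" process described above; nothing here, of course, resembles the massive connectivity of an actual brain.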
But, most PDPs are actually run on serial computers, so this mimicking is in likeness only; they are written in computer code (language) like other AI programs. There are dedicated connection machines, but I do not know a whole lot about them, and it is my hunch that they still do not solve the problem. Some code, at least, must be required.
One question I have about PDPs is whether they really mimic neurons and synaptic junctions in the brain. I am not so sure. Where in the brain does the kind of compare-and-adjust cycle described above take place? Certainly nowhere in so simple a form. PDPs are supposed to cover the phenomena of resonance (multiple systems working at the same time) and recursion (synaptic connections feeding back to earlier ones in the neural pathway), but do they really capture these aspects of the nervous system? Again, I do not think so. These functions in the brain are massively connected, so such simple networks just will not do.
The main thing about PDPs, in my opinion, is that they are relatively simple programs or devices. Thus, they come nowhere close to actually mimicking a whole brain, or even a single brain system. At the current time, I do not believe they can capture the complexity of a real brain, or even of much simpler brains. So, the complexity issue is the main reason I think that PDPs are not going to be a solution to Strong AI—at least anytime soon.
Speaking of the complexity issue, some philosophers and brain scientists believe that the mind is an emergent phenomenon. While I am not fully on board with this notion, if the mind is such a property, then even a conglomerate of PDPs may not be able to capture human intelligence. And, if it is an emergent property, how do you learn how to produce it?
In the last paragraph I said I was not fully convinced that the mind is an emergent property of a complex brain. This is because I think that reduction is still possible. At least it has not been shown to be impossible, and I think the onus is on the emergent-property enthusiasts to show that mind actually is an emergent property—here, or in any other claimed case of emergent properties, really.
While I think reductionism is still possible, I will admit that it fails to supply an explanation of mind on many different levels. For example, there is the phenomenological level. Not as in the philosophy that goes by that name, which I find to be confused (or as I often say in such cases—gobbledegook), but in the sense of what something feels like. In the case of pain a reduction to brain states is possible as far as cause is concerned, but that reduction would not relate the agony of a broken leg.
Now, while I am in sympathy with eliminative materialists² in their reductionist approach to brain states, it is far less believable that we would, or even can, give up folk psychology, which explains the mind in terms of what we feel, think, believe, want, and other such terms (also called intentions in philosophy). The eliminative materialist wants to eschew such explanations. Folk psychology is called the intentional stance by the philosopher Daniel Dennett, who believes it does real explanatory work; not just as far as humans are concerned, but other animals, and even machines (at least in a metaphorical manner).
When you move away from personal psychology and into the sociological, reductionist explanations are even further removed from what people want from an explanation. Why we act altruistically toward others might have a reductionist explanation as far as brain states are concerned, but that leaves out even the evolutionary approach, which is popular in some scientific circles, let alone why someone chooses to help another in a particular situation.
So while I do not necessarily believe that emergent properties are part of the explanation for the human mind, the reductionist approach is not fully applicable either. But still, complexity remains. So, how are you going to figure out how to produce a humanlike mind from a computer? This remains a mystery, because even if you could connect many PDPs together in a brainlike manner, there is still no guarantee that you will produce a mind.
Now on to my final critique. What do we need a human mind from a computer for, if we already have humans with them? Or are we after a superhuman mind? If we cannot even make a mechanical human mind, how are we supposed to make a super mind? So, besides feasibility, do we actually need the fruits of a completed strong AI agenda?
Whatever the answers to the above questions, there are still reasons for strong AI researchers to continue their work. For one, we do not know if strong AI is doable or not, and if it is, there may be reasons to have computer minds if and when we get there. While at this stage of research there does not seem, in my mind, to be much chance of success, neither I nor anyone else can know the end game.
Another reason to continue strong AI research is you do not know what may be discovered along the way. It might have a bearing on understanding the human mind. It might create a breakthrough in computer design or programming. Why would we want to close off the search, if we cannot see all that may be accomplished?
Finally, if nothing else it serves to foster human curiosity and intellectual endeavors. Who would want to put humans off from intellectual pursuits? Well, there may be some (those who want to defund science research), but I am definitely not one of them.
I will leave you with some remarks about weak AI. This is the field where the payoffs keep coming. While it may have started out with the goal of a computer beating a grand master at chess, there are now programs, or expert systems, that assist humans with assessments in various domains.
Some of these expert systems are used in diagnostics, both in medicine and industry (think of IBM’s touted Watson). And there are the consumer systems that keep track of big data on your online searches and buying preferences. I know some people have big qualms about all this data being stored, but if you have nothing to hide, why should it matter, when these systems are so convenient?
Another type of expert system is the personal assistant, like Alexa, Cortana, and Google Voice. While I do not go in for that sort of thing, I imagine that for some people it simplifies their lives to a good degree. I have heard that these systems store what you say at all times, but again, if you have nothing to hide, why should that matter?
That is really all I have to say right now. There may be a future blog on the implications of AI for the philosophy of mind. For now, I have argued that because there is a chance that we do not think in language, using language in any form will not get us to strong AI; that the complexity of the brain is also a large hurdle to jump, either because mind could be an emergent property or because we will not be able to attain the necessary level of complexity exhibited by the brain; and that we may not even need to capture humanlike minds in a computer. I say by all means let the research continue, and let weak AI reign for now.
¹ See Marc Hauser’s Wild Minds
² For more on eliminative materialism see parts of my blog – Why Are People Afraid of Their Brain?
*I play around with the idea that Baxter counts to two. He almost always waits until both Bette and I are in bed before getting on the bed himself (one-two).