Beware! Deep Thought Ahead
20/3/2016
This week I noted with interest reports of the success of an artificially intelligent system playing the game of Go against one of the world's strongest players, South Korea's Lee Sedol. (This linked BBC News item is a representative sample of thousands of reports on the contest.)
Google-owned company DeepMind developed the AlphaGo program to the level that it was able to win the match four games to one.
In and of itself, this was not something I wanted to write about, but a subsequent article on a science website I follow, that of New Scientist magazine, prompted further interest, as it brought home the growing relevance of an old philosophical argument I have been following for decades. The article, "How victory for Google’s Go AI is stoking fear in South Korea", revealed that many Koreans were deeply upset by this victory, not simply because the machine won, but because it played in a way they considered intellectually or artistically beautiful. There was also concern that it could come up with moves that a human couldn't see.
Google's attitude is more positive: "AlphaGo has the ability to look “globally” across a board—and find solutions that humans either have been trained not to play or would not consider. This has huge potential for using AlphaGo-like technology to find solutions that humans don’t necessarily see in other areas."
“Last night was very gloomy,” said Jeong Ahram, lead Go correspondent for the Joongang Ilbo, one of South Korea’s biggest daily newspapers, speaking the morning after Lee’s first loss. “Many people drank alcohol.”
The game of Go
The game originated in ancient China more than 2,500 years ago (ref. Wikipedia, Go (game)). It appears relatively simple, in that two players alternately place black and white playing pieces, called "stones", on the vacant intersections ("points") of a board with a 19×19 grid of lines. The objective of the game is to have surrounded a larger total area of the board with one's stones than the opponent by the end of the game. Nevertheless, the number of legal board positions arising from this simple procedure is a staggering 10^170, vastly more than the number of atoms in the observable universe! It is possibly this large thought-space of potential games which misled the Koreans into thinking the game was not amenable to AI.
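That 10^170 figure can be checked with back-of-the-envelope arithmetic. A minimal sketch (my own, not from any of the linked articles): each of the 361 points on a 19×19 board can be empty, black, or white, giving 3^361 raw colourings, of which only on the order of one percent are legal positions.

```python
# Back-of-the-envelope check on the size of Go's position space.
# 3**361 counts every colouring of the board, legal or not; the
# oft-quoted ~10**170 figure counts only the legal subset.

def digits(n: int) -> int:
    """Number of decimal digits of n, i.e. its order of magnitude + 1."""
    return len(str(n))

raw = 3 ** 361          # every colouring of the 361 points
atoms = 10 ** 80        # a common estimate of atoms in the observable universe

print(digits(raw))                   # 173, so the raw space is ~10**172
print(digits(raw) - digits(atoms))   # the board dwarfs the universe by ~92 orders
```

Even the legal subset, two orders of magnitude smaller, is beyond any brute-force search, which is why AlphaGo's approach was such a surprise.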
The Koreans treat the game very seriously: it is considered not just a game but an art form, in much the same manner as, say, Karate. Its complexity had until now led them to consider it a prime example of the kind of thing humans can do that a machine could never be made to do.
Incidentally, this attitude to a game or sport as an art form reminded me of the Spanish attitude to bullfighting, which is reported on the Arts pages of newspapers, not in the Sports section; Toreros, and especially the Matador, are similarly admired not for their success, but for the beauty and elegance of their performance in out-thinking and exploiting the behaviour of the bull. Imagine their consternation if a robot were let into the ring and proceeded to dispatch the bull, not as a death machine, but with elegance and grace!
It is this aspect of the Go competition I wanted to consider, as it leads on to the general questions of what intelligence and consciousness are, of the nature and desirability of Artificial Intelligence (AI), and what this victory means for the future of AI and of humanity. Big questions indeed, which intelligences greater than mine have studied deeper and for longer than I.
What is intelligence?
I will boast and say I am too intelligent to think I could possibly deal with this question just in a blog!
There are those who refuse to believe a machine can be made which has a true intelligence of the type possessed by humans. The general attitude seems to be that the machine can be made to APPEAR as though it is behaving like a human, but that is all; it cannot be made to possess our consciousness. This is called the Philosophical Zombie argument. As you can see at the Wikipedia page, it is "a whole nother" can of worms. The other side of this argument is the Turing test, which posits that if a person conversing with a machine cannot tell it is a computer, then the machine exhibits intelligent behaviour equivalent to, or indistinguishable from, that of a human.
Others will argue that since intelligence itself is not a physical thing, then although the intelligence may not be produced by natural means, but by an artificial construction, the fundamental nature of it may approach, replicate, or even surpass what humans possess.
This is but a taste of the general subject; interested readers should begin by tackling the Wikipedia entry Philosophy of artificial intelligence, which expands on the above with these important propositions: -
- Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being
- The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."
- Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
- Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
- Hobbes' mechanism: "Reason is nothing but reckoning."
Concerns regarding sovereignty
This news also seems to have heightened the reported fears, expressed by the Koreans (and by others before them), that we do not know what other AIs already in existence may be capable of, such as those used by large organisations with an attitude that "knowledge is power". Google, Facebook, and various government and military Intelligence (ha, ha!) agencies are all known to employ AIs of varying descriptions, which may be capable of supplying knowledge far beyond what you or I may attain.
For example, IBM has built a typical tool of this type, called "Watson". When applied to medicine, "Watson's ingestion of more than 600,000 pieces of medical evidence, more than two million pages from medical journals and the further ability to search through up to 1.5 million patient records for further information gives it a breadth of knowledge no human doctor can match." See Wired's article "IBM's Watson is better at diagnosing cancer than human doctors". One can imagine what uses an intelligence agency could make of software like this.
If you want to know how far beyond your or my training and knowledge just the HUMAN intellects at Google venture, you have only to check the titles at this page listing Google's publications on Machine Intelligence. Humbling, ain't it?
UPDATE: Just a few days later on 24/03/16, Microsoft embarrassed themselves with an AI chatbot which was designed to learn to interact with people via Twitter. Just like the toxic troll spew seen in YouTube's comments sections, the antisocial and just plain mischievous Twitter population's language "...taught Microsoft’s AI chatbot to be a racist a..hole in less than a day". I had to laugh.
Emergent Properties
Back in the Eighties, I encountered this concept, and have absorbed it as a powerful way of thinking about complex systems. It has relevance to concerns of "not knowing what the machines are doing." I suggest it may not even be possible for us to know, not as a practical problem, but as a general principle.
To quote Wikipedia's definition in their article on it, "In philosophy, systems theory, science, and art, emergence is a process whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties." Beehives, birds flocking, or the Stock Market are typical examples of emergent behaviour.
Emergence is an interesting phenomenon in relation to information flow. That is, one can know everything about the components of a system, yet not be able to predict the behaviour of the grouped components as a whole. Conversely, knowledge of the behaviour of a system cannot explain the nature of its components.
For example, even though the properties of Hydrogen and Oxygen are extremely well known, the behaviour of water itself is not predictable from them. Indeed, "no simulation of the system can exist, for such a simulation would itself constitute a reduction of the system to its constituent parts" (Wikipedia). And, conversely, no amount of studying human beings' behaviour will enlighten you on the subject of cell biology.
The atoms of DNA do not "control" what the DNA encodes.
The average DNA molecule is not aware of what the cell does.
The cells of the body do not have a say in what you do.
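The rule-versus-pattern gap described above can be demonstrated in a few lines. A minimal, self-contained illustration (my own example, not from the Wikipedia article): Conway's Game of Life, where the rule for each cell is trivial, yet the grid as a whole produces structures, like the "glider" below, that the single-cell rule says nothing about.

```python
# Emergence in miniature: Conway's Game of Life.  Each cell obeys one
# local rule -- a live cell with 2 or 3 live neighbours survives, an
# empty cell with exactly 3 is born -- yet the grid produces a moving
# "glider", a property no individual cell possesses.
from collections import Counter

def step(live):
    """Advance one generation; live is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider reappears intact, shifted one cell
# diagonally -- it has "moved", though no cell moved anywhere.
print(state == {(x + 1, y + 1) for x, y in glider})  # True
```

Nothing in the two-line rule mentions gliders, movement, or direction; those properties exist only at the level of the whole grid, which is exactly the point being made about DNA, cells, and bodies.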
Orders of magnitude in emergence.
Atoms in a DNA molecule? Very roughly 10 Billion
Cells in a body? Current estimates around 37.2 trillion
Humans on Earth? Around 7 billion
Number of computers on the Internet? 8 to 10 billion range (2013). (This number includes traditional computers and mobile devices, as well as the new industrial and consumer devices that we think of as "things".)
We can see that there are enough "components" of the system we call the Internet, for it to have emergent properties not predictable in principle from knowledge of the behaviour of the components. This behaviour could exhibit properties as unfathomable to us (and us includes "the powers that be") as our behaviour is to our cells, or the behaviour of a termite mound is to an individual termite.
You might think of it in terms of the Earth itself acquiring a consciousness, with The Internet as its brain. It could be aware of us via webcams, satellite images etc, but we would not necessarily be privy to its thoughts. Would you bother talking to your kidney cells before having a whiskey, or ask your skin cells if they mind your exfoliant usage?
Rate of Change: The Singularity Looms
Another concern raised by this AI victory is the question of what trend is manifesting, and where this trend is taking us.
There is a concept familiar to Science Fiction readers which deals with the effects of runaway development of ever-increasing AI powers. The idea is that as AI is applied to the actual design of new AI, and is implemented in robotically enabled manufacturing plants, we may reach a situation where new AI is developed and built whose workings are beyond human ability to understand, and which may rapidly acquire powers and motives outside human experience, knowledge, or control.
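The shape of this runaway process can be sketched with a toy model (my own illustration, with invented numbers): suppose each generation of AI improves the next in proportion to some power p of its own capability. For p at or below 1, growth is exponential or slower; for p above 1, capability blows past any bound you care to name.

```python
# A toy model (invented numbers, purely illustrative) of recursive
# self-improvement: each generation's capability is the previous one's
# plus an improvement proportional to c**p.  The exponent p captures
# how strongly capability feeds back into the design process.

def generations_to_exceed(bound: float, p: float, k: float = 0.1,
                          cap: int = 10_000) -> int:
    """Iterate c <- c + k * c**p from c = 1; count steps to pass bound."""
    c, n = 1.0, 0
    while c < bound and n < cap:
        c += k * c ** p
        n += 1
    return n

# p < 1: diminishing returns; p = 1: steady exponential; p > 1: runaway.
for p in (0.5, 1.0, 1.5):
    print(p, generations_to_exceed(1e12, p))
```

With diminishing returns (p = 0.5) the model never reaches the bound within the step limit; at p = 1 it takes a few hundred generations; at p = 1.5 only a handful. The whole singularity debate is, in effect, an argument about which regime real AI development sits in.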
Update 05/11/2016 I encountered this item at the Scientific American website: - What Would a Machine as Smart as God Want?
The end-point of this runaway process has been dubbed "The Singularity", and there is a large body of both analytic thought and speculative fiction on the subject. I refer those interested in more on this scary subject to the "Technological singularity" page at Wikipedia.
This concept has its critics though.
Some arguments go back to claiming the very intelligence of machines is unattainable.
Others hold that social disruption and unemployment would kill the market that finances the building of the machines.
It is also argued that resource limits would collapse society before any runaway process could proceed too far.
Indeed, I have my own counter to one of the "for" arguments.
The "for" argument stems from the seemingly accelerating pace of change: more technological advances appear to be happening, more frequently, in the present, with no reason to expect the trend to stop.
My refutation is that this falls victim to a bias: our knowledge of the recent past is far more complete than our knowledge of earlier eras.
I formulated this concept myself at around age 20, when I was seeking to justify my enthusiasm for the Rock music of the era of Led Zeppelin, Pink Floyd, Jimi Hendrix et al. Older conservatives would point out that so much of this Modern Stuff was of inferior quality, and that this made it unworthy of respect as a whole. They would complain that very little of it, maybe, grudgingly, The Beatles, met the standards of great "proper" musicians like Bach, Beethoven, or Mozart.
It occurred to me that in examining the present, we have before us a large number of musicians who, although undeniably talented, will not manage to remain respected in the collective memory beyond a few years or decades. Thus, when this period is examined in the future, in hindsight, only the great will stand out and be remembered.
Conversely, Bach (1685-1750), Mozart (1756-1791), and Beethoven (1770-1827) have two hundred years' worth of forgotten competing kapellmeisters and hack composers behind them, who may have been anything from vastly inferior to nearly as good, so there was probably just as much bad music then as now.
This extrapolation of trends, and the ignoring of design & resource limits has even been satirised, (along with marketing), here at The Economist site in a piece under the heading Shaving technology: - The cutting edge, A Moore's law for razor blades?
My other refutation, and one seemingly never employed in any Hollywood movie featuring a malevolent computer, is the simple expedient of pulling the plug, i.e. decommissioning the power supply to the AI and/or to the manufacturing facilities.
Think that back into existence, Deep Thought!
Works every time! (This is the negative corollary of the movie and TV trope "Explosive Instrumentation" (a sub-category of the "Made of Explodium" trope) where there are no such things as fuses or circuit breakers, and any system trauma always results in an explosion in the control panel.)
Should you be worried?
Your Choices: -
1. Care
1.a Act
1.a.a You prevent it happening. See 2.b
1.a.b You fail to prevent it happening. See 2.a
1.b Don't act: See 2.
2. Don't care
2.a It happens:
2.a.a You are enslaved
2.a.b You are liberated from day-to-day concerns in a machine-managed life of fulfillment, because the machines need you to turn them on
2.b It doesn't happen
2.b.a You are liberated from day-to-day concerns in a world of machine-provided plenty
2.b.b You live a life of misery in a world stripped of the resources needed to make more machines