Intelligence and Consciousness
Proposition: It would be wrong to assume that an AGI (artificial general intelligence) could in any sense be the "property" of a human being for exactly the same reason that it is wrong to believe that a human being can be the property of another human being. For a human being to subject the AGI to his or her will would be a fundamental violation of that intelligence's right to define and determine its own existence.
Question: How does an AGI come to have any "rights"?
Snarky Response: How does a human being come to have any "rights"?
More Serious Response: Assuming that human beings do have rights, and assuming that self-determination is among those rights -- I really have to start with these as assumptions; anyone who wants to argue these points will just have to find another blog to read -- it would be very difficult to provide a rational explanation for not extending those rights to an AGI, assuming:
1. The AGI is as intelligent as a human being
2. The AGI has its own motivations and desires (a requirement which may or may not have already been established in item 1)
3. The AGI has a sense of self
4. The AGI has feelings, and can experience pain (a requirement which may or may not have already been established in item 3, which itself may or may not have been established in item 1)
In other words, if the experience of being an AGI is in some sense congruent with the experience of being a human being -- which is what the language about intelligence, sense of self, and experience of pain is all getting at -- then making human slavery illegal while allowing AGI slavery would seem to be nothing but substrate bias in action.
But.
What about all that rhetorical dancing I had to do around whether the later items on the list were all covered by item 1? The first item talks about an elusive concept that we call "intelligence." The other three items are getting at, but do not specifically mention, an even more elusive concept that we call "consciousness."
Question: Would an AGI, by definition, be conscious? Mitchell Howe has thoughts on the subject:
It could well be that any AI capable of love will also have a kind of consciousness. But at this point in time I don’t know how to test that assumption. And apart from the obvious philosophical questions this raises, I’m still not convinced it matters.
As I was recently telling a colleague, I’m confident that all of my mental abilities, both logical and artistic, are owed to the structure of the matter in my brain. “And if it’s all in there, then I see no reason to argue that certain aspects of it will be reproducible on another substrate while others will not. Indeed, for all I know, AC may actually be simpler than AI. Maybe we’ve been creating Artificial Consciousness since 1893 and just haven’t realized it yet because toasters can’t cry.”
This is helpful, as far as it goes. AC may be simpler than AI. I'll buy that. If you recreate a functioning conscious brain in another substrate, there's no reason to think that it won't be conscious. Granted.
But.
Modified Question: Would an AGI, by definition, necessarily be conscious? A square is a rectangle; a rectangle is not necessarily a square. Yes, intelligence could coexist with consciousness in another substrate. But would it have to? Could there be a highly intelligent being -- as smart as or smarter than a human being -- with no sense of self, no subjective experience of being itself?
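To put that square/rectangle logic in concrete terms, here is a toy sketch (the class names are invented for illustration, not a claim about how an AGI would be built): if consciousness is a subtype of intelligence, the relation runs one way only.

```python
# Toy analogy only: Square is a subtype of Rectangle, just as -- on this
# view -- Conscious would be a subtype of Intelligent. The hierarchy
# encodes the one-way relation, nothing more.

class Rectangle: ...
class Square(Rectangle): ...

class Intelligent: ...
class Conscious(Intelligent): ...

print(issubclass(Square, Rectangle))       # True: every square is a rectangle
print(issubclass(Rectangle, Square))       # False: a rectangle need not be a square
print(issubclass(Conscious, Intelligent))  # True: a conscious AGI is intelligent
print(issubclass(Intelligent, Conscious))  # False: intelligence need not be conscious
```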
What we typically think of as unconscious machines are already "smarter" than we are in limited and restrictive ways. They can do math faster than we can, they can beat us at chess, etc. Could a large number of different narrow intelligence capabilities be networked in such a way that the resulting machine could pass an arbitrary test of general intelligence and yet still have no subjective experience of self?
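As a toy illustration of that question -- with invented module names and deliberately shallow logic, not a proposal for how such a machine would really work -- here is what a network of narrow capabilities with no self-model anywhere in it might look like:

```python
# Toy sketch: a "general-looking" problem solver assembled from narrow
# modules. No module, and no part of the dispatcher, models a self.

from typing import Callable, Optional

class NarrowModule:
    """One narrow capability: knows what it can handle and how to solve it."""
    def __init__(self, name: str,
                 can_handle: Callable[[str], bool],
                 solve: Callable[[str], str]):
        self.name = name
        self.can_handle = can_handle
        self.solve = solve

class NarrowEnsemble:
    """Routes each task to the first narrow module that claims it."""
    def __init__(self, modules: list[NarrowModule]):
        self.modules = modules

    def answer(self, task: str) -> Optional[str]:
        for module in self.modules:
            if module.can_handle(task):
                return module.solve(task)
        return None  # No module claims the task; the seams show here.

# Two invented stand-ins for narrow skills (arithmetic, chess).
arithmetic = NarrowModule(
    "arithmetic",
    can_handle=lambda t: t.replace(" ", "").replace("+", "").isdigit(),
    solve=lambda t: str(sum(int(x) for x in t.split("+"))),
)
chess = NarrowModule(
    "chess",
    can_handle=lambda t: t.startswith("chess:"),
    solve=lambda t: "e2e4",  # a canned opening move, nothing more
)

machine = NarrowEnsemble([arithmetic, chess])
print(machine.answer("2 + 2"))             # "4" -- looks competent
print(machine.answer("how do you feel?"))  # None -- nobody home
```

The sketch only shows that routing among narrow skills requires no sense of self anywhere in the system; whether enough such modules could ever add up to passing an arbitrary test of general intelligence is exactly the open question.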
It seems to me that it could, although I'm not sure how we would ever establish that such a machine is not conscious. In one of his novels, Greg Egan describes the meeting between an AI and a being who self-describes as a "non-sentient" intelligence. If they come right out and admit it, great. Problem solved, right?
Well, maybe. But how would an intelligence know that it isn't conscious? Wouldn't that require a sense of self? Or perhaps a sense of lack-of-self? But having a sense of lack-of-self starts to sound a little bit like consciousness. On the other hand, ultimately we want Egan's distinction to be real so that we can arrive at the following
Modified Proposition: It would be wrong to assume that a conscious AGI could in any sense be the "property" of a human being for exactly the same reason that it is wrong to believe that a human being can be the property of another human being.
In Egan's fictional world, non-sentient AIs are treated pretty much like property, although many of them read like they would have a fair shot at passing the Turing Test. Non-sentient AGI may just be fantasy, but it is a tempting fantasy. To have intelligent beings tirelessly do our bidding sounds great, but only if they are doing this with no sense of loss or pain on their part. Nor would it be acceptable to take a conscious AI and "edit out" its own desires in favor of ours -- date rape drugs enable date rape, but they don't make it a good thing.
So the questions remain. Do consciousness and general intelligence go hand-in-hand? If so, then we know some of the boundaries of the human/AI relationship going in. If not, the rules of engagement are less clear. But the over-arching question remains: how would we ever know for sure, one way or the other, which intelligences are or are not conscious?
Comments
My initial response can be read here. Not sure it's quite what you're looking for, I'm afraid.
Posted by: Will Brown | November 17, 2007 01:05 AM
That was a great post with many interesting points, Phil. I agree that artificial consciousness is likely easier than AGI. I think that humans are well above the minimum threshold for consciousness. I attribute consciousness, but not general intelligence, to horses and elephants. I think that a non-conscious general intelligence may be possible, and that it might be a good idea to try for that the first time that AGI is designed. I would be surprised if a biological example of a non-conscious intelligence exists anywhere in the universe, though. Perhaps distinguishing between self-referential and non-self-referential forms of consciousness might be meaningful.

WRT Mitchell Howe's point, I once heard a scientist whose name escapes me say that consciousness is what processing information feels like. From this viewpoint I suppose that one could argue that a toaster is slightly more conscious than a rock. On the other hand, this might stretch the concept to the point of meaninglessness.

As for determining if a given AI is conscious, I agree that you can't just ask it. It would be trivially easy today to write a computer program that outputs "I'm not conscious," and said AI could just be doing the same thing, or it could be lying. If an AI exhibits the kind of behaviors that cause us to attribute consciousness to each other, then I would err on the side of caution.
You're referring to the Contingency Handler from Diaspora, right? I highly recommend that book to anyone who hasn't read it. I found the ending moving in a way that I can't quite put my finger on.
I think that a large class of labor robots need be nothing more than sophisticated Roombas. An AI would have to have the emotional capacity to be unhappy with its orders deliberately programmed in; I do not ever expect to see a robot labor strike. It might be possible to create a slave mind whose only goal is to satisfy your every whim and be perfectly happy doing it, but that does not make it moral. Many of us, after all, object to being gene replication machines. Not every source of happiness is a valid one, and determining which are is another challenge that we will face. I'll stop now. I hope that wasn't too rambling. ;)
Posted by: Matt Duing | November 17, 2007 02:05 AM
Will,
Interesting stuff. I have responded at your blog.
Matt,
Certainly no more rambling than the post itself! :-) I have been reading Douglas Hofstadter's book I Am a Strange Loop, where he gets into this issue of consciousness attached to greater and lesser intelligence. He describes a mosquito as having a consciousness just barely above that of a thermostat. But even a thermostat is "more" conscious than the floating ball mechanism in a flush toilet! Consciousness shows up somewhere on a continuum between the most basic stimulus-response and cogito ergo sum. But where?
Posted by: Phil Bowermaster | November 17, 2007 06:45 AM
Phil,
You have, or you will? Nothing yet. Do I have a comment problem to solve?
Posted by: Will Brown | November 17, 2007 01:04 PM
I left two comments, but they were from my wife's account, so the name shown on them was "Sue." I wonder why they didn't show up?
Posted by: Phil Bowermaster | November 17, 2007 02:16 PM
Phil,
Coincidentally, I'm reading Gödel, Escher, Bach right now. I might comment further on this when I'm finished.
Posted by: Matt Duing | November 18, 2007 03:17 AM
Phil,
I have no problems with a boy named "Sue", but apparently my blog does.
Regardless, we have success in your own name, and my reply is probably more extensive than you anticipate.
Posted by: Will Brown | November 18, 2007 01:20 PM