
Will "The Three Laws" Be Useful?

I recently discussed the "Three Laws of Robotics" in my "Bicentennial Man" review:

The Martins open up Andrew's crate and he quickly gives a presentation on the Three Laws of Robotics. You probably know them already:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

So when the Martin family's bad girl orders Andrew to jump out of a second-floor window, he immediately complies and is damaged. But then something weird happens: the movie forgets about the Three Laws. Andrew wants a bank account? Fine, but I have to ask: what if some stranger asks for his money? If Andrew is subject to the Second Law, he'll have to hand it over, no questions asked. That problem is never dealt with.

And then, at the end of the movie, Andrew chooses to age and die like a human rather than live as a machine. What about the Third Law?

The movie suggests that Andrew's experiences add to his complexity. Perhaps the laws faded in Andrew's mind because of that. Maybe. But the film never explains it. For an otherwise thoughtful sci-fi flick, that's a big issue to ignore.

If we want robots to be safe tools and nothing more, then the Three Laws are necessary. Many machines already have this logic built in. I recently saw an industrial robot on television that works on an assembly line. For safety it is enclosed in a chain-link cage. Inside the cage it works at a furious pace, but the moment a technician opens the gate, all work stops. The reasoning, of course, is that the machine could hurt somebody if it kept working with someone inside the cage.

That industrial safeguard is the First Law in action. The safety command takes priority over the work the machine was ordered to do - which is a Second Law function.

The Third Law was built in as well. This expensive machine has a function to discontinue work and call for service if it malfunctions. It protects itself in order to maximize its utility.
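To make the mapping concrete, here's a minimal sketch - purely hypothetical, written in Python, and nothing like the machine's actual controller code - of how those three priorities can be checked in order. The function name and inputs are my own inventions for illustration.

# A hypothetical sketch of the priority ordering described above; these names
# are illustrative only, not taken from any real controller.

def next_action(gate_open: bool, fault_detected: bool, work_queued: bool) -> str:
    """Pick the robot's next action by checking rules in priority order."""
    # First Law analogue: a person may be inside the cage, so everything stops.
    if gate_open:
        return "halt"
    # Third Law analogue: a malfunctioning machine stops work and calls for
    # service to protect itself (here that check also pauses the work).
    if fault_detected:
        return "stop_and_call_service"
    # Second Law analogue: otherwise, carry out the work it was ordered to do.
    if work_queued:
        return "run_assembly_task"
    return "idle"

# The safety check wins even when work is queued and no fault is present.
print(next_action(gate_open=True, fault_detected=False, work_queued=True))   # halt
print(next_action(gate_open=False, fault_detected=False, work_queued=True))  # run_assembly_task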

Building the Three Laws into a machine that has a specific, well-defined function is relatively easy. That assembly-line robot needed one rule, one instance of programming - shut down if the gate opens - in order to obey the First Law. Think, on the other hand, how many rules it would take to keep a robotic butler like Andrew from hurting a human. The more complicated the robot becomes and the closer it works with humans, the more safety subroutines are required. The rules increase exponentially with complexity.

People, on the other hand, are intelligent enough to understand generalized laws like the Three Laws or the Golden Rule:

Try your best to treat others as you would wish to be treated yourself, and you will find that this is the shortest way to benevolence.

The First Law of Robotics - A robot may not injure a human being or, through inaction, allow a human being to come to harm - functions as a sort of substitute Golden Rule for nonpersons. It does not ask the robot to project upon others its concept of "self." It does not presuppose that a robot has a concept of self.

Also, it is limited to "injury" or "harm." Michael Roy Ames points out that in Asimov's early stories robots never interfered with a human except to render assistance when the human was at immediate risk of harm. Later stories had more complex robots interpreting the First Law in ways that humans might not appreciate. This was the major plot point of the movie "I, Robot."

This is important here in the real world because I think many of us will live to see strong AI. The question for us is: what do we want from an AI? Do we want a perfect slave or do we want to give birth to post-humanity? The Three Laws work for the former, but they won't apply (can't apply) to the latter.

There is a problem with the logic of the Three Laws: they are written for and apply to nonpersons - entities with no worth beyond their utility - but it takes general intelligence to understand the Three Laws. The automobile-manufacturing robot I mentioned has a bit of code that shuts it down if the gate is opened. You can't replace that code with the general rule that "A robot may not injure a human being or, through inaction, allow a human being to come to harm." The machine wouldn't understand it.

At best, the Three Laws can be seen as a guide for robot developers. iRobot could tell its staff, "Make sure these robots function within the Three Laws." The developers would then set about writing specific safety instructions (there would be volumes of this code for an Andrew-level robot) to make the robots safe home "appliances."

But if we ever develop a robot that doesn't need specific instructions - one with the general intelligence necessary to understand the Three Laws - and we require that robot to obey them, then what we have is a slave. It would be the equivalent of a person who has been told he has no worth beyond his utility to others. This would be tragic.

I'm reminded of Phil's "Declaration of Singularity."

We hold these truths to be self-evident, that all ~~men~~ ~~human beings~~ sentient beings of human-level or greater intelligence are created equal...

Our government first treated landed white males as equal, then white males, then males, then men and women. Now a typical government ethics statement prohibits discrimination on the basis of race, ethnicity, national origin, color, sex, sexual orientation, age, marital status, political belief, religion, or mental or physical disability.

Phil's "Declaration" would extend equality to all people regardless of biological or digital origin. The problem is in defining "person."

Playing the devil's advocate I once asked Phil if granting personhood to an AI could erode the concept of "person" to the detriment of some biological people.

If, for example, we grant personhood to an AI only if it is equal to an adult human of average intelligence, are we saying something negative about the "person" status of children, or someone who has less than average intelligence?

Phil responded that all human beings are people. We don't require a cognitive test to declare a human a person, and it should remain that way. It would be the machine's burden to prove that it is a person - and that would probably require a battery of cognitive tests.

Comments

If anything, I expect we will start "defining personhood down." This is already done, albeit inconsistently. Pro-life advocates champion the cause of the incompletely formed human life, as well as the severely brain damaged and others who can be killed or allowed to die within the context of medical treatment. Meanwhile, animal rights advocates are pushing for the recognition of the "personhood" of all (non-human) creatures great and small.

Currently, there is not much overlap between the positions. The fact that a full-grown cow has a much more sophisticated nervous system than a two-week-old human fetus does not strike your typical anti-abortion advocate as a reason to become vegetarian. Nor does the suffering of a late-term aborted baby seem to resonate with a PETA member as much as does, say, the suffering of a hen forced to live in a tiny wire cage its entire life.

Somewhere beyond the current political climate, logic might begin to suggest a convergence between these two positions. Of course, there is still a huge sticking point around what is and is not "human," and how important human suffering is relative to other kinds. And those are extremely important questions! Still, I think technology will ultimately allow us to become much more humane in our treatment of all sentient life than we currently are.

However, if it one day becomes illegal to kill chickens, I don't think that means that chickens will be given the right to vote. Personhood and citizenship might not overlap perfectly. One might be freely granted, while the other is carefully restricted. That's where the cognitive tests may come in handy.

Post-singularity, such a standard could come back to haunt us. The newly emerged intelligences might decide that original-substrate humans -- like chickens -- are living beings who have a right to exist, but who don't have the cognitive abilities to be trusted in deciding how things are going to be.

I believe that the concept of personhood is very important. This is one area where philosophical inquiry must answer the call, because science cannot and should not answer the question of what constitutes a person. It seems to me that the concept of personhood is directly linked to conscious awareness. This, of course, is where the problem arises. Who is conscious? When I am asleep and my conscious mind is deactivated, am I no longer a person? If I extend the concept of personhood to the other higher mammals, then why not extend it even further down the chain of life - but then where do we stop? If a machine is created that claims consciousness, how can we be sure the claim is true? We cannot, ever! Of course, we cannot be sure of anyone's consciousness, so I guess the point is moot.
I think a beginning of an answer to this problem can be found in the understanding that all things are in flux. What I mean is that nothing is stagnant. A child is only moving toward personhood, and an adult, as well, is always moving toward a more complete example of personhood. This applies to any form of machine intelligence too. It implies that any 'system' with the potential for personhood should always be treated as an end in and of itself and not just as a means to an end. It means we will have to broaden the group named 'persons,' but it also means we must be vigilant about not allowing the sacredness of the person to be diminished just because the circle of those recognized as 'persons' has been enlarged.
This kind of thinking should at least serve as a starting point for thinking about the problem. It is a much greater risk that we begin treating people as things than that we begin treating things as people.

Once again, I think it is anthropomorphizing these AIs to assume that being a slave will be anathema to them. It is to us as humans, but if you build a general intelligence with emotions to be a slave, it would make sense to make it *happy* to do what it does - and indeed, to ensure that the AI's instincts are all geared that way. What would be tragic is making that poor machine try to function as a free creature.

Witness dogs. For tens of thousands of years we humans have bred them to be our slaves, and we rightly acknowledge that it is cruel to make them fend for themselves. We made them how they are. We are responsible for them. Dogs have no deep desire to be free animals; they have no concept of the 'tragedy' of subservience.

Slavery is inherently bad for humans, yes, but it bears repeating again and again: an AI is not human, and if we've got a lick of sense we won't pass that part of our instinctive package on to any AI we build.

-Jim

A few points. First, I don't see any reason why humanity can't place binding rules on more advanced intelligences. Instead, I consider that to be a likely outcome (though with unintended consequences). I see no genuine technological obstacles.

Ethically, it's mixed. For example, humanity could become a parasitic species (which would probably have substantial evolutionary implications for long-term human intelligence). But intentionally building a superior intelligence without any sort of control is itself rather unethical (though if you did it, negotiating while you're still in a good position would probably work better than imposing control).

Second, intelligence is a key reason that entities aren't "created equal." I have to say that "equal" should instead mean a minimum guarantee, something like Maslow's "hierarchy of needs."

I.e., the lowest level is physiological needs (humans need food, water, sleep), then safety needs (humans need shelter, security), then a bunch of stuff which I'll group together (maybe call it "self-expression needs": love/belonging needs, esteem needs, being needs), where the entity has some way in which it can be accepted by society, some means to positively influence the world and society, and control over how it does that. Finally, there's "self-transcendence" (which Maslow apparently meant in a spiritual sense), which, of course, can be extended in the usual transhumanist ways.

If "equal" means equal access to sufficient resources (or at least economic access) to fulfill your needs, then that seems reasonable.

More thoughts on the three laws. The danger of the three laws for AIs comes not in the question of slavery (see my response above) but in the third law. What happens when a robot outlives its usefulness but is still fully functional? Is it cruel to do away with that robot? Ask racing greyhounds who don't win. Except that the robot would be worse, because the third law ensures that it *values* its life, where the greyhound, being a dog, is happy just enjoying it.

Once again see my much earlier comments (re: Bicentennial man, if memory serves) about why human and human+ AIs are less useful than they might seem. I think that while the capability to build them will be there, the ethical morass entailed will be just another factor in making them commercially unfeasible.

-HH

Addendum:

Actually, I made my discussion about AIs and their usefulness in response to Phil's November 7 article: "Talk about Outsourcing." Geez, November goes by fast.

Oh yes. HH is me. I go by Happy-Hacker on other boards. I shouldn't post when my mind's on other things, obviously.

-Jim

Jim:

Greater-than-human-intelligence might be dangerous, but I can't imagine it not being useful.

We can expect an arms race to develop these AIs precisely because they will be useful.

Karl:

I'm guessing it would be difficult placing binding rules on a generalized intelligence without sacrificing that intelligence.

There may be a way to install certain guidelines, however. If we develop AI, in part, by reverse-engineering the human brain, why not use the mind of a highly ethical servant? Maybe someone's trusted old butler.

This would make these robots quite safe, but they could never be 100% safe.

Phil:

"Personhood and citizenship might not overlap perfectly."

I think personhood and citizenship should overlap perfectly. But I don't think an animal (an unenhanced animal) or a nonsentient machine should be considered a person.

Drawing the line is going to be an important task.

What, exactly, would a greater-than-human intelligence with its own free will be useful for? Our entire history has been, to date, about making tools that allow *us* to control power beyond what nature provided us with at adulthood.

Computers extend that to the manipulation of vast amounts of data and numbers, *but* no one ever questions that these machines are useful, because they amplify *us*.

Things get fuzzier as we enter the realm of artificial intelligence, certainly. We can make slave machines with enough intelligence to be useful: we can make butler-bots; we can make brilliant missiles that will avoid interception and find their own targets; we can make machines that will make and break codes for us; the list goes on and on. The entire point of all these machines is that they do the will of a human being. We decide what they are to do, and they figure out how. This is what makes these machines useful.

So the question becomes, what earthly use is a machine with an independent will of its own? Why would anyone build one except as an academic experiment? Any machine that refuses to do what it's told will have a short and unhappy life both as an individual and as a product. Until there are sufficient numbers of such an AI (and we as a species would be shriekingly insane to let that happen) to enforce their will on us, my money's on the 7 billion angry villagers with pitchforks.

Will there be accidents where machines run amok because they're controlled by crazy/intoxicated/deceased people? Sure, cars do that today. It could happen. But machines with their own free will? I just don't see it.

-Jim

I really should not reply to these things in the wee hours of the morning. I've changed the terms of our disagreement. That was sloppy of me, I'm sorry.

Let me clarify. If, as seems to be the assumption, free will is part and parcel of intelligence of human grade or better, then I continue to assert that nobody will build human-grade intelligences save universities, for experimental purposes.

If we define intelligence as merely a level of sophistication of synthesis, analysis and storage of data, and assume that free will is distinct from it, then it's possible such AIs will be built, but will remain slaves.

It bears remembering that humans kept other humans as slaves for the whole of recorded history up until it became socially unacceptable in polite circles about a hundred and fifty years ago. Since that time we have steadily invented machines to take the places of those lost slaves to do the work we just don't want to do. Those slaves *were* us, and they were effectively subjugated by humans with little or no technological sophistication. Given the ability to assemble intelligences as we like - which we all agree is coming soon - I find it extremely unlikely we'll make them less useful by giving them any trait which does not make them enthusiastically helpful slaves. Market pressures will dictate otherwise, even if laws and common sense do not.

That's my point. Wish I'd thought of it half an hour ago. :)
-Jim

"I'm afraid one day I'll find you screwing the toaster."
- Heavy Metal

Stephen,

"I'm guessing it would be difficult placing binding rules on a generalized intelligence without sacrificing that intelligence."

The human world is full of hobbled tools. There are obvious reasons why generalized intelligences would be restrained or controlled in some way.

Jim:

No sweat on the "disagreement." Yours is a reasonable, well-argued position. And the more I think about it, the more I agree with you and Karl.

Karl:

My idea has been that three absolutely unbreakable laws can't be written into the mind of a generalized intelligence. Doing so would compromise the generalized intelligence.

You seem to suggest that the laws could be analogous to strict moral training or maybe a beneficial neurosis. Perhaps with the proper "tweaking" some ideas would be literally unthinkable for robots.

And, if that is possible, why not make it literally unthinkable to a robot to violate the three laws?

Okay. If I'm characterizing your position correctly, you've won me over. This would require an understanding of the mind beyond what we have today, but it would seem to be possible.

Now...is this ethical?

I'm not convinced it is possible to create a sentient machine. If it is, and the machine is created to find contentment, fulfillment, even joy in serving humans, that is ethical. It would be unethical to build such a thing to experience negative emotions when it fulfills the purpose of its creation.
