

The Three Goals of Robotics

Michael Anissimov outlines the four basic views on what any eventual Artificial General Intelligence will be like:

1. Low power, low controllability

2. Low power, significant controllability

3. Great power, low controllability

4. Great power, significant controllability

Michael then describes the fourth option in some detail:

The great power, significant controllability group primarily originates with Eliezer Yudkowsky of the Singularity Institute. As such I will call it the SingInst view. The SingInst view acknowledges that after a certain point, AI will become self-improving and radically superintelligent and capable, but emphasizes that this doesn’t mean that all is lost. According to this view, by setting the initial conditions for AI carefully, we can expect certain invariants to persist after the roughly human-equivalent stage, even if we have no control over the AI directly. For instance, an AI with a fundamentally unselfish goal system would not suddenly transform into a selfish dictator AI, because future states of the AI are contingent upon specific self-modification choices continuous with the initial AI. So, if the second AI is not the type of person the first AI wants to be, then it will ensure that it never becomes it, even if it reprograms itself a bajillion times over. This is my view, and the view of maybe a few hundred SingInst supporters.

Sounds pretty good to me. So the question is...what do we want to go into that unselfish goal system driving the AI? Interestingly, I think this exercise might bring us back to Asimov's Three Laws of Robotics.
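Before going there, it's worth pausing on the quoted claim that an AI's invariants can survive any number of self-rewrites. The idea is easier to see in miniature. Here's a toy Python sketch, purely illustrative and with every name made up (a real goal system would have to be machine-checkable, not a string label), of an agent that accepts a proposed self-modification only if the modification leaves its goal system intact:

# A toy illustration (no one's actual proposal) of the invariant idea:
# an agent accepts a self-modification only if the candidate version
# keeps the original goal system.

from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class Agent:
    goal_system: str   # stand-in for a real, machine-checkable goal specification
    capability: int    # stand-in for "how powerful this version is"

def self_improve(agent: Agent, rewrite: Callable[[Agent], Agent]) -> Agent:
    """Apply a proposed rewrite only if it leaves the goal system intact."""
    candidate = rewrite(agent)
    if candidate.goal_system != agent.goal_system:
        # The second AI is not the kind of agent the first one wants to be,
        # so the modification is refused and the current version kept.
        return agent
    return candidate

friendly = Agent(goal_system="unselfish", capability=1)

# A rewrite that boosts capability but keeps the goals is accepted...
friendly = self_improve(friendly, lambda a: replace(a, capability=a.capability * 10))

# ...while a rewrite that swaps in a selfish goal system is rejected,
# no matter how many times it is proposed.
friendly = self_improve(
    friendly, lambda a: replace(a, goal_system="selfish", capability=a.capability * 100)
)
assert friendly.goal_system == "unselfish" and friendly.capability == 10

The toy only works because checking the invariant is trivial here; the hard part of the SingInst program is making that check meaningful for a system rewriting its own decision machinery.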

Now, granted, folks like Michael and Eliezer and others promoting the SingInst view would be the first to tell us that the Three Laws are (take your pick) risible, unworkable, pretty much a relic of a less tech-savvy era. Here's a typical critique.

I'm thinking that the whole problem with the Three Laws might just have to do with how they're phrased. Asimov essentially gave us three (ultimately four; we'll get to that in a minute) commandments for robots. And like the original Ten Commandments, they are primarily set up in the negative. Thou shalt not this; thou shalt not that.

But if the trick is to create a positive goal system for AIs, the Three Laws might provide a good starting point. Let's start with the first law:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

No good. Too negative. Let's make it a positive goal:

Ensure the safety of individual sentient beings.

Moving quickly on to law number two:

A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

Many have pointed out that this law essentially enslaves the robots. No good. Let's try something like this:

Maximize the happiness, freedom, and well-being of individual sentient beings.

See? Better. Then there's law number three:

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Hmmm...interesting. Plus, there's the fourth law that showed up in some of the later novels, which was given precedence over all the others as the Zeroth Law of Robotics:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

This one is pretty good, but like the others it assumes a fundamental difference between human and machine intelligence. Why draw that line? The Three Laws need to be reworked not only as positive goals, but as goals that apply to us as much as they do to the AIs. Zero and Three might be combined thusly:

Ensure the survival of life and intelligence.

So now we have three goals where before we had four laws. These goals suffer from many of the same problems as the original laws. They're kind of vague; there will no doubt be disagreements as to what they mean. But rather than defining them as limitations on or exceptions to intelligent behavior, by stating them as goals we would be saying that AIs are systems designed specifically to do these things. By extension, we would be saying that humanity is a system whose purpose is carrying out those goals.

We can debate how well humanity has done so far at carrying out those goals. (I tend to think we've done pretty well, but that we have a long way to go.)

As for the vagueness -- yes, we will need to get very specific about what we mean by things like "safety," "intelligence," and "happiness" (not to mention "life"), and the tricky relationship between each of these and "freedom." But come to think of it, we really need to be figuring that stuff out anyway. And with these three goals in place, we will eventually have help from beings that will have a clearer understanding of these concepts than we possibly can.

So I propose the following Three Goals of Artificial Intelligence:

1. Ensure the survival of life and intelligence.

2. Ensure the safety of individual sentient beings.

3. Maximize the happiness, freedom, and well-being of individual sentient beings.

Will they work? If not, what goals would work better? I'd be interested to see some discussion on this.
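One way to see what's at stake in the rules-versus-goals distinction: rules veto actions, while goals rank them. Here's a minimal sketch, with entirely hypothetical scoring functions standing in for what would be enormously hard measurement problems:

# A purely illustrative sketch of rules versus goals. Every function and
# score here is hypothetical. Rules veto actions; goals rank them.

from typing import Callable, Iterable

Action = str

# --- Rules, in the spirit of the Three Laws: negative constraints ---
FORBIDDEN = {"harm_human", "disobey_order", "self_destruct"}

def violates_rules(action: Action) -> bool:
    return action in FORBIDDEN

# --- Goals, in the spirit of the Three Goals: positive objectives ---
# Each scorer is a placeholder for an enormously hard measurement problem.
def survival_of_life_and_intelligence(action: Action) -> float:
    return 0.0  # placeholder score

def safety_of_individual_sentients(action: Action) -> float:
    return 0.0  # placeholder score

def happiness_freedom_wellbeing(action: Action) -> float:
    return 0.0  # placeholder score

GOALS: list[Callable[[Action], float]] = [
    survival_of_life_and_intelligence,
    safety_of_individual_sentients,
    happiness_freedom_wellbeing,
]

def choose_by_rules(candidates: Iterable[Action]) -> Action:
    # Any non-forbidden action is equally acceptable; the rules are silent
    # about which of the permitted actions is actually worth doing.
    return next(a for a in candidates if not violates_rules(a))

def choose_by_goals(candidates: Iterable[Action]) -> Action:
    # A goal-driven system exists to advance its objectives, so it ranks
    # every candidate rather than merely filtering out the forbidden ones.
    return max(candidates, key=lambda a: sum(goal(a) for goal in GOALS))

Under the rules version, any permitted action is as good as any other; under the goals version, the system is built to find the action that advances the three objectives most.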


UPDATE: Welcome, InstaPals! Glenn quips:

We need progress fast, especially as natural intelligence appears to be in diminishing supply.

Scanning the headlines (or, worse yet, surfing channels to see what's on TV), it would be hard to argue with that assessment. But, astoundingly, there is substantial evidence to suggest that human intelligence is actually increasing. Arnold Kling has some thoughts on the subject, here. I covered it here, too, in a pilot for a show that apparently never got picked up.

Hard as it is to accept that people may be getting smarter, it is of course very good news that we are. We need all the intelligence we can muster if we are to

1) Continue to implement these goals ourselves, and

2) Develop the technology that will eventually take them over

I guess the trick in finding this increase in human intelligence is knowing where to look. By nature of his valuable pundit work, Glenn spends a lot of time following what politicians and the media are up to. Not a lot of gains happening there, sadly.

Comments

The key, I think, is to avoid making the machines self replicating. Otherwise, any protections against them becoming selfish will eventually be for naught, as copying errors become subject to the principles of evolution.

I'd reverse the order of "ensure survival of individual sentients" and "ensure freedom of individual sentients". Leaving them in this order would result in a nanny society, where freedom was ruthlessly crushed, in favor of perfect safety (sort of like having Democrats in power, but worse).

"as natural intelligence appears to be in diminishing supply."

Glenn had just been dealing with a specific example of idiocy - Andrew Sullivan. Of course a single example doesn't prove a trend.

TJIC,

Freedom might be crushed in a political context where these were the goals, but only because survival/safety is adopted as the only actual value and freedom just becomes lip service.

I think it's telling that the Declaration of Independence talks about Life, Liberty, and the Pursuit of Happiness. In that order. Sheesh, what a New Dealer / Nanny-Stater that Thomas Jefferson was, eh? :-)

If we promote any one of those values to the exclusion of the others, we have a problem. But if we strive to define all three in such a way as to work together, maybe we'll start closing in on something.

A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

Many have pointed out that this law essentially enslaves the robots.

I find that statement ironic because, as it turned out in the Asimov novels, it was the humans who were being controlled by the robot R. Daneel Olivaw.

What would work better would be transferring over the moral complexity that you used to make up these goals in the first place.

Also, as you point out, these goals are vague. More specific and useful from a programmer's perspective would be some kind of algorithm that takes human preferences as inputs and outputs actions that practically everyone sees as reasonable and benevolent. Hard to do, obviously, but CEV (http://www.singinst.org/upload/CEV.html) is one attempt.
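(To make the shape of that suggestion concrete, here is a cartoon of "preferences in, broadly acceptable action out." It is emphatically not CEV itself, and every name and threshold in it is invented purely for illustration.)

# A cartoon of "preferences in, broadly acceptable action out."
# Not CEV; every name and threshold is invented for illustration.

from collections import Counter
from typing import Mapping, Sequence

def broadly_acceptable_action(
    preferences: Sequence[Mapping[str, float]],  # one dict of action -> rating per person
    approval_threshold: float = 0.9,             # "practically everyone"
) -> str | None:
    """Return the best-rated action that nearly everyone rates non-negatively."""
    totals: Counter[str] = Counter()
    approvals: Counter[str] = Counter()
    for person in preferences:
        for action, rating in person.items():
            totals[action] += rating
            if rating >= 0:
                approvals[action] += 1
    acceptable = [
        a for a in totals
        if approvals[a] / len(preferences) >= approval_threshold
    ]
    if not acceptable:
        return None  # no action clears the "reasonable and benevolent" bar
    return max(acceptable, key=lambda a: totals[a])

# Example: three people rating two candidate actions.
people = [
    {"cure_disease": 1.0, "pave_over_park": -0.5},
    {"cure_disease": 0.8, "pave_over_park": 0.2},
    {"cure_disease": 0.9, "pave_over_park": -1.0},
]
print(broadly_acceptable_action(people))  # -> "cure_disease"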

The "Higher intelligence, dumber world" comes from two things, I think. The first, as one columnist pointed out, comes from the increasing sophistication of the world. You need to know more to get by in the world than ever before, and each deficiency is a glaring flaw that makes you look "dumb."

The second "problem" is the ever-expanding scope of conversation. We have more memes, and more participating individuals, than ever before. And any forumite can tell you that the intelligence of his preferred forum is inversely related to the number of people posting there, i.e., the more people post, the more likely you are to run into a jackass. Apply this to the news and the internet writ large.

Of course, for every moron you can meet a genius, and as a result our world is advancing rapidly; things like Yahoo Answers or Wikipedia are helping more people than ever "figure things out." But people tend to notice the stupidity more than the brilliance, so everything "looks dumb."

phil...the concept u are reaching for is called recursive self-improvement.
mathematically it is semi-rigorous, and there has been quite a body of work done on it. ;)
heres a link.
http://quantumghosts.blogspot.com/2006/09/friendly-ai-possible-friendly.html

Phil,

I'm afraid I have to side with TJIC with regard to the priority structure embodied in the Goals as presented.

The sort of guidance we are envisioning must, at the most fundamental level, be logically exclusive and compelling, or the AI in question could 'reason' its way around the strictures imposed (much like Asimov's "Zeroth Law" robots) and, eventually, the value of the Goals in limiting AI evolution to the high-power, high-controllability track would be lost.

Each Goal must have a clear and unbreakable priority over the others that follow it. Thus, in the order stated, collective continuity trumps individual safety ("The needs of the many outweigh the needs of the few, or the one."), individual safety (broadly construed, 'stasis') trumps individual liberty ('free will'), and happiness ('utility', a notoriously slippery concept for economists and philosophers to get a firm intellectual grip on) trumps both individual liberty and individual well-being (allowing potentially self-destructive behavior on the individual level insofar as that behavior doesn't exceed the standard established for 'safety' in Goal 2).

Whole series of books could be written whose plots hinge on the creative resolution of internal conflicts among the Goals. But, as currently constituted, the Goals lead logically, resolutely, and inescapably (if they have the 'teeth' necessary to prevent a sufficiently creative or evolutionary AI from eventually transforming into the powerful, uncontrollable, and ultimately selfish and dominating type) to the ultimate expression of the 'precautionary principle' and an eventual 'lotus eater' static state for all sentients.
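(The "clear and unbreakable priority" reading can be made concrete with lexicographic comparison: an action that scores even slightly higher on Goal 1 beats any action that scores higher on Goals 2 and 3. The sketch below, with made-up scores, shows how that strictness points toward the 'lotus eater' outcome the commenter describes.)

# A sketch of the "unbreakable priority" reading, using lexicographic
# comparison over made-up goal scores. Python compares tuples element by
# element, so a tiny edge on Goal 1 outweighs any advantage on Goals 2 and 3.

def goal_scores(action: str) -> tuple[float, float, float]:
    # Hypothetical scores: (survival of life/intelligence,
    #                       individual safety,
    #                       happiness/freedom/well-being)
    made_up = {
        "lock_everyone_safely_indoors": (1.00, 0.99, 0.10),
        "let_people_take_risks":        (0.99, 0.60, 0.95),
    }
    return made_up[action]

best = max(["lock_everyone_safely_indoors", "let_people_take_risks"],
           key=goal_scores)
print(best)  # -> "lock_everyone_safely_indoors": strict priority favors stasis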

dumb, michael
phil's rulebased system is woefully obsolete already.
read my linkage.
it is like......how rulebased systems and table lookups were subsumed by genetic algorithms and relational databases in software.
welcome to the 21st century, dudes.

phil read the literature please before commenting on Friendly AI.
your rulebased system is already obsolete.

Well, class, I just don't know what I'm going to do with you. I'm afraid I'm going to have to keep you all after school so we can spend some time trying to understand the difference between rules and goals...not the same thing, I'm afraid, and I was about as adamant as I could possibly be as to which one I was writing about. Matoko, you have to stay later for your disrespectful attitude towards other students. But, hey, it could be worse -- at least I'm not counting off for spelling and punctuation!

BTW, not everyone agrees that recursive self-improvement alone will get us there. In the piece by Eliezer Yudkowsky that Michael Anissimov linked above, Eliezer argues that recursive self-improvement on its own might prove very dangerous.

Michael A. -

What would work better would be transferring over the moral complexity that you used to make up these goals in the first place.

Actually, that's kind of where I'm going with this. The more I look at these goals, the more I think they're really goals for us. If we play our cards right (and something like CEV might get us there) these might eventually be goals we share with these new intelligences. It makes me wonder...have we (humanity or individuals) had some version of CEV that we've been running all along?

Asimov's laws focus on individual encounters. Your goals are more like a constitution.

Since sentience is a matter of degree, at what point do the desires/needs of robots become of equal value to those of humans? I.e., at what point does society emancipate them? And after that point, does it become criminal to produce a less-than-maximally sentient robot in the same way that it would be to induce brain defects in a fetus?

how rulebased systems and table lookups were subsumed by genetic algorithms and relational databases in software.

matoko, relational databases are built on tables by definition; hence, one does table lookups when performing a variety of relational database operations (or at least it appears that way to the user). I gather object-oriented or functional databases (the latter is the current version of rules-based systems) are state of the art these days, but I haven't been keeping up.

Second, genetic algorithms (GAs) solve a sort of problem mostly orthogonal to anything else you mention, i.e., optimization problems. At its most related, I imagine one could use GAs to optimize queries and other high-level operations on the database. Search optimization is a serious problem, and poor searching methods can slow down database queries by a considerable amount.
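(For readers who haven't met the term, here is a genetic algorithm in miniature: a population of candidate solutions is repeatedly selected, recombined, and mutated toward a fitness function. This toy maximizes the number of 1s in a bit string; it has nothing to do with databases, rule-based systems, or Friendly AI.)

# A genetic algorithm in miniature: evolve bit strings toward a trivial
# objective (maximize the number of 1s). Purely a toy.

import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(genome: list[int]) -> int:
    return sum(genome)  # the "optimization problem": count the 1s

def mutate(genome: list[int]) -> list[int]:
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a: list[int], b: list[int]) -> list[int]:
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]  # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population))  # approaches GENOME_LEN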
