
Interesting Discussion

The God and The Singularity post has generated some interesting discussion in the comments. I would like to address a few of the points raised by a shift-key-challenged reader named eisendorn. His issues provide a good opportunity for clarification and amplification on some of what I wrote in the initial entry.

Eisendorn writes:

i view the topic of this discussion with suspicion, and i see a fundamental problem in your reasoning, namely that you seem to automatically equate moral goodness with christian values and belief systems.

Actually, it's more the other way around. I assert that Christian values and belief systems are predicated on an idea of goodness. I don't think that a belief or value is good because it's Christian. I think that if a belief or value is Christian (or Jewish, or Muslim, or for that matter Hindu or Buddhist -- where the idea of "God" might be very different from what's found in those first three, or altogether absent in the case of many Buddhists) then it is an attempt to reflect or to conform with an ultimate good.

Eisendorn continues:

while this may seem to be sound reasoning from a US citizen's point of view, it certainly doesn't seem all that appropriate thinking on a more global scale. after all, a solid part of the world population will not agree with that assumption to start with.

So making an association between Christian thought and some notion of the good reflects a narrow, US-centric worldview? This will no doubt come as a shock to all those Catholics in places like Poland or Mexico. Who would have guessed that they're all just pawns of the American agenda?

Here are a couple of quotes which I think are relevant to the discussion at hand. They come from one of the great leaders of the 20th century:

Infinite striving to be the best is man's duty; it is its own reward. Everything else is in God's hands.

I saw that nations like individuals could only be made through the agony of the Cross and in no other way. Joy comes not out of infliction of pain on others but out of pain voluntarily borne by oneself.

Sheesh, why couldn't this guy keep his bigoted, fundamentalist, US-authored opinions to himself? Oh, that's right. He wasn't remotely American and he wasn't a Christian. In fact, he had a pretty low opinion of many (if not most) Christians. The idea that there is an ultimate good, that this good emanates from God, and that Christianity attempts to manifest or reflect this good through its beliefs and practices is not a strictly American, nor indeed even Christian, idea.

Eisendorn continues:

furthermore, i think you will have quite a hard time instilling the notion of an all-powerful, all-merciful god into a piece of software (especially one written in java as singinst suggests :P), and without it, christian ethics necessarily crumble. it may well be that there is a common denominator in christian morality and a supposed "perfect" ethic system for a benevolent ai; however, a humanitarian approach to safe ai design would certainly offer better possibilities (without being chained by a belief system), and this is a different vector of thought entirely.

I'm certainly not suggesting that we should try to program a computer to "believe" in God or to subscribe to any set of religious beliefs. I'm from the school that says these things are only worthwhile if one comes to them freely and of one's own (or God's) volition. But I think it would be equally problematic to attempt to introduce a purely relativistic moral scheme from whole cloth. Even Asimov's Three Laws represent some kind of moral postulates to proceed from. What would this alternative "humanitarian approach" entail? Telling the AI that while we think it's bad to kill people, and we hope that it comes to the same conclusion, we recognize that it would be judgmental of us to insist that killing people is wrong in any absolute sense? From a strictly practical standpoint, that leaves more wiggle room than I'm comfortable with.

Moreover, I would humbly suggest that anyone who pits "Christian" and "humanitarian" as stark alternatives, exclusive of each other, is proceeding from at least as strong a set of biases as he has inferred from reading my blog entry.

Here's why I should never bother attempting humor:

i also have a remark about the debate itself, namely that it is obvious where the participants hail from. in hardly any other (western) country would you find anyone being afraid of getting into trouble at a church camp or having struggles with their mothers over their, well, wider points of view.

Yes, it is unbelievably repressive here in the US. If my mother reads any of this, she will probably have my face removed from all the family photos. Then the Thought Police will come pounding on my door. It's a shame, but it's how things are. You've got us there.

this, paired with the earlier notion of you guys suggesting a soft takeoff to best happen within the US gives me the creeps, to be honest. do you really consider yourself and your nation to be the paragon of ethics in the world? neither your governments nor your major companies ethical policies suggest so, at least not from a european (actually, rest-of-the-world) view. to be honest, having a recursively evolving ai instilled with american/christian ethical values around on this planet looks no different to me than hard takeoff. but bear with me, i may be prejudiced by impartial news coverage.

One of the principles I've tried to enforce (not always successfully) at The Speculist is that we don't "do" politics or religion. Obviously, by initiating a series of entries on God and the Singularity, I have decided to put the second restriction aside. But I'm not giving up on the first! Anybody who wants to get into an argument about whether America is a paragon of virtue or an evil oppressor is welcome to find one of the half a zillion or so blogs where these things are discussed endlessly and have at it. *

But to clarify -- I think a soft takeoff is more likely to occur in a setting where people are working on building a friendly AI. Right now, that's the US. If Japan were one-tenth as interested in the Singularity as they are in robotics, they would probably be the prime contender. (And I still wouldn't count them out.) Where I have written about the importance of where the Singularity occurs, the choice presented has been the US vs. China. Could a soft takeoff occur in China? Yes. Could a hard takeoff occur in the US? Sure. A US corporation or defense contractor could stumble upon strong AI and launch a hard takeoff while trying to corner a market or build the ultimate weapon (to give just a couple of far-fetched examples). So, then, does it ultimately matter where the Singularity occurs?

Yes.

I maintain that we have a better chance of a soft takeoff if the Singularity takes place where people are purposefully working to create friendly AI. I know of folks in the US working to do this. I don't know of any in China.


* Which is not to say that I don't have an opinion on the subject, or that I don't think it's important. I do and I do.

Comments

phil:

please accept my gratitude for featuring me in one of your posts.

i regret that, in hot blood, i kicked off a europe vs. us debate, and i
would like to steer the discussion (which i regard to be of utmost
importance) away from this course. still, allow me to add two further
points to this thread:

- if you would try to measure the degree of secularity of a society, i
would guess that the us would be ranked very low in the list of
western countries (together with poland, for that matter). i am not
trying to suggest that this would disqualify the us as a suitable
candidate for singularity takeoff location; i merely wish you to
keep it in mind (in case you tend to agree) for the further course
of debate.
- secondly, i know next to nothing about your educational or spiritual
background, but judging from your writing, i would not consider you
to be a typical bible-belt redneck (if such an archetype does exist
outside of cliches). then again, we would not want singularity
ethics to be inspired by that type of person. the question is
whether we have a choice in the matter.

if you would, for a moment, consider the takeoff to occur in the
softest matter possible. an evolving ai would most certainly want to
draw on as much information as possible, and a good starting point would
be the internet (also consider the internet to "wake up", as suggested
by some researchers [i am writing offline, so i can not supply you
with quotes or references, unfortunately]). if we allow the singularity ai
to draw on this information, what image would it get from humanity as
a whole and from your or my nation in particular? specifically, if it
encounters religion, what conclusions would it make? i dare to claim
that the concept of "god" would be incomprehensible to such a
system. given that, how would it interpret the complex behavioural
patterns (i.e. ethics) we humans constructed? no matter how great the
tendency (at least here in europe) to make christianity more popular
by claiming things as "man is the partner, not the servant of god",
most religious systems i know of confront the believer with some sort
of punishment if rules are not adhered to: you go to hell, or not to
nirvana, or whatever. lacking these constraints, there are no rational
reasons not to kill a human - at least not from the religious point of
view.

and this is, in my opinion, the stark difference between humanism (or
any secular morality) and religion (be it christianity or whatever
else): the outcome may be the same - thou shalt not kill - but the
_why_ makes the difference. and the strictly religious point of view -
you do not want to be punished - will not provide an ai with
enough incentive to be kind towards humanity. from the humanist point
of view, i could argue that random killings would result in anarchy
and collapse of society, which is why we need law. an ai could
probably be pushed in the same direction by injecting the notion of
biodiversity worth preserving. i got this idea while reading on the
orion's arm website (excuse me resorting to science fiction), where
they describe an evolved "ai god" (as they call it) protecting
humanity for the simple sake of preserving its uniqueness in the
universe, thus fighting entropy to some degree. i would guess that
this is the kind of thinking an ai would take on, rather than trying
to interpret the meanings of hell, sheol, limbo or whatever.

lastly, i wish to comment on the gandhi quote you provided. doing your
best and relying on god for the rest is most likely not the kind of
thinking that will take us toward the singularity (and probably also
not what god intended in the first place, if you ask me). for that
matter, i would rather advocate creating our own "god" or rebuilding
ourselves to be one. this after all, would be the strongest form of
humanism i could think of.

wbr, eis.

I'm also from Europe (I prefer the Netherlands, since 'Europe', formally, does not exist) & I disagree. I think the US is as secular as Europe. Europe is exceptional and it is precisely that which has spawned (sorry Speculists) the US. If you mean 'US is as atheist as Europe', it is clearly wrong. The problem I think is the expression of religious feelings. It's not true that this is in any sort of decline across Europe. It differs from the US, but not by much, I gather. One Dutch philosopher thinks society has lost its soul, religious roots of society are dead and people are therefore looking for leadership.
This, to me, is a prime example of academic stupidity. Society has a thousand-and-one souls and people are looking for leadership, precisely because religious roots are alive and well (as they should be) and seek new expression. I think you can summarize it like: 'so sayeth the Lord: enthousiasmos'. It certainly applies to the various forms of the quirky jewish-greek hybrid, called christianity.

Would an AI regard 'god' incomprehensible?
Yes and no. Suppose it (= the AI) is more intelligent than all of humanity combined. It will be able to observe us, then, like no one else.
The AI will probably conclude deities do not exist, but that the concept of god, faith or belief is nevertheless necessary for the wellbeing of humans, based on simple observation. Physics rules the world, but you can't deny 2000 years of history or what, say, Augustine or Aquinas wrote about god.
The AI will not equate the singularity with the summum bonum (god as the highest good, where evil is absent), but as something different, possibly neutral.

eisendorn --

I would agree that the US is less secular than Europe. It is also less secular than it has been in recent memory. We cycle through periods of greater and lesser interest in spiritual matters. But even at our low points, Americans tend to be a highly religious people.

As for "typical Bible-belt rednecks," there are a couple of misconceptions about these folks that I think the typical European Weenie ;-) has. One is that they are all white supremacists; the other is that they have some interest in making other people conform to their worldview.

I grew up amongst people who would likely be classified as TB-BR's. Small-town Kentucky, mid-1970's. I remember one day when a classmate took me to one side and proudly showed me his Klan membership card. (How he got the idea I would be impressed by this, I don't know.) What's interesting about this -- looking back -- is that even 30 years ago, even in that very conservative setting, the kid knew not to broadcast his affiliation. As far as I know, he was the only kid in school to carry one of those loathsome cards. By the mid-1970's in the armpit of Kentucky, there were few more disgraceful things a person could be than a member of the Klan. A kid could show up at school and declare himself to be "a New York liberal" and he would be much more warmly received.

By my senior year in high school, another friend (rebel type) declared himself to be an atheist, a communist, and a bisexual. So, naturally, he was immediately lynched.

No, not really. Nobody cared. One thing that's missing from the stereotypical picture of a TB-BR is the very strong live-and-let-live ethos. His classmates may have thought that my friend had bought himself a one-way ticket to the lake of fire, and they certainly didn't vote him Mr. Popularity or Most Likely to Succeed, but that was the end of it. If he wanted to ruin his life and go to hell, that was his business.

(In passing, I would invite you to compare the reaction of these fundamentalists to the Islamic fundamentalists with whom they are so often compared. What do you think would happen to a teenager at a typical madrassa in Pakistan who made such a declaration?)

Over the past few years, there has been a strong movement among evangelical Christians in the US, under the heading of family values, to codify traditional values in law. I'm personally not a fan of the family values movement primarily because at heart it wants to give rights and special protected status to a group -- the family -- when in fact rights (in my view) can only belong to individuals. So although its aims are different, the Family Values movement is really more closely akin to a leftist movement -- where the goal is often to define and protect the rights of a race, a class, or some other group. I mention this movement only to point out that there are, indeed, folks in America with a religious agenda who want to bring about political change. And they have no doubt brought quite a few TB-BR's on board, but their approach and their ends are ultimately pretty far removed from the things that a TB-BR stood for.

As for whether I would want a TB-BR to be the mastermind behind the Singularity, probably not. But I can think of few people less likely to do it, anyway. A real TB-BR is a pretty slow technology adopter. Also, I can imagine some much worse choices. Better a TB-BR than Pol Pot, say, or that guy in Pakistan (sorry, I'm not deliberately picking on Pakistan -- it just keeps coming to mind) who recently murdered his four daughters to protect his "family honor." A more likely candidate (though still wildly unlikely) would be some group or individual from the aforementioned Family Values movement. And no, I wouldn't think that was a good thing.

For your other points, yes a computer would find the idea of God to be incomprehensible. That's not a bug; it's a feature. After all, one of the Christian creeds explicitly points this out as part of God's nature (see item 9). Or as my father once told me, "Any God you can imagine is -- by definition -- an imaginary God." Again, I don't think it's an issue of trying to make a computer "believe in God." It's about the kind of epistemology we give the new intelligence...or it decides to adopt for itself. How do we know what we know? Some will argue that we only "know" what can be ascertained via the scientific method. I would argue that we know many things that we can't prove. For example:

if a=b, then b=a

There is no mathematical proof for that. But try doing math (and therefore science) without it.

I also believe that we know the following in exactly the same way:

"Do unto others as you would have them do unto you."

Some variation of that idea is just about universal to human civilization. Is it an artifact of evolution? Maybe. Is it a bit of source code that underpins the universe coming to the surface (similar to "if a=b, then b=a")? Possibly. Is it just something somebody "made up" to keep the herd under control that somehow caught on? It could be that, too. In fact, it could be all three at once. Irrespective of where it came from, I think most human beings have the same sense that it is right as they do that the previous proposition is true.

I don't believe that people, even very religious people, follow that rule primarily as a means of avoiding punishment. I believe they do it because they have a sense of what is good and what is right and they want to be aligned with that. (I also don't think many people follow it very well, including myself, but that's a different discussion.) I would hope that any new intelligence that comes along will at least be open to the idea that there really is such a thing as an ultimate good. Because if there's not a final good, if we can just make up what's right or wrong to suit ourselves, then we face the very real possibility of the new intelligence choosing its own definition of "the good" that isn't too healthy for human beings.


Rik --

You raise some interesting points. One thing I didn't quite follow:

"Europe is exceptional and it is precisely that which has spawned (sorry Speculists) the US."

Are you apologizing to us Speculists on behalf of Europe for spawning America? No need, buddy. We're glad you did it!

Of course, together we owe the native Americans a major apology (to say the least), but I don't think that was your point.

phil,

thanks for the response. i feel that the matter has come down to belief rather than proof now, however: you consider the "zeroth commandment" to be axiomatic, but no matter how optimistic i try to be, i do not. neither do i believe in teleological progress as being a fundamental characteristic of history itself. there is no purpose to life, unless you make up one for yourself. this does not make me a pessimist, though. i think we should draw strength in working towards soft takeoff singularity from this fact alone.
but, as you can see, there is no way for me to prove these points, and you will certainly not agree to all of them.

one last thing: you might not need it all that much, but be sure of my respect for your objective argumentation.

eisendorn --

Thanks. I've enjoyed the exchange very much. One more thought, on this:

"neither do i believe in teleological progress as being a fundamental characteristic of history itself."

Spend some time on this site, if you get a chance.
