
Posthuman Ethics

Dale Carrico has some interesting ideas about what posthumanism should not mean, drawn from a rather bleak picture of what humanism was all about:

Clothed in the language of universality, the entitlements of the humanity proclaimed by humanists have never extended to more than a fraction of actual human beings. Assured of its location on a “natural” progressive trajectory attaining inevitably toward universal emancipation, humanism too readily accommodated contemporary injustices as temporary and, hence, somehow tolerable -- especially to those humanists who didn’t happen to suffer them. And, further, as the ethics of a questionably construed "human race" and of the universal "civilization" problematically connected to this race, it grows ever more difficult to shake the troubling analogies between humanism and its debased technoscientific companion discourse: the "race science" that legitimized every brutal imperial, colonial, globalizing, ghettoizing, apartheid regime in modern memory.

Needless to say, these painful recognitions demand painful reckonings. It is this crisis of humanist conscience -- which is not really one crisis, so much as many different crises, arising out of a variety of concrete situations and taking a proliferating variety of consequential forms -- that more properly goes by the name "post-humanism."

Don't let Carrico's lofty academic style put you off. There is a lot of really interesting material on his blog. For a more digestible sampling of his work, see his witty if somewhat overwrought fisking of my Declaration of Singularity, in which he accuses yours truly of being "a pampered privileged clueless straight white guy who has more money and crap than almost anybody else on earth" who somehow manages to believe that "there is some possible intelligible sense in which you are suffering from some kind of major, like, social exploitation that urgently demands the world's redress." This is, of course, inaccurate. As anyone who has met me will attest, I am an exceptionally good-looking pampered privileged clueless straight white guy who blah blah blah.

But I digress. My point is that one need not share Carrico's rather Berkeley-centric damnation of humanism as just another arrow in Whitey's Quiver of Exploitation to agree that the risk of a posthuman world -- one in which exploitation, abuse, and destruction of human life occur at a scale even more appalling and horrific than anything the world has encountered up to this point -- is a very real one. Our recent (ongoing) discussion about The Friendliness Problem is really all about this same risk. And there is even more room for agreement; I find that I can sign on to his conclusion pretty much wholeheartedly:

In such an historical moment, especially, it seems to me disastrous to conceive post-humanism as a moralizing identification with some tribe defined by any idiosyncratic fetishization of particular technologies or other. Rather, we should think of it as an ethical recognition of the limits of humanism provoked by an understanding of the emerging terms of technodevelopmental social struggle and, hence, any ethical perspective arising our [sic] of this recognition that demands cosmopolitanism, democracy, and emancipation shape the terms of this struggle, come what may.

Where my heart is less than whole is around his use of the word "any." I do agree that any ethical system that arises must incorporate those things, but (per Stephen's remarks in a recent comment) I don't think that any ethical approach is as good as any other, even if all the ethical perspectives under discussion were to meet all the criteria described. For example, I can imagine a posthuman intelligence with very specific ideas about what those three things mean imposing them on its "inferiors" for "their own good." Even a vocabulary of emancipation and cosmopolitanism could be used in the service of the very exploitation that Carrico is looking to avoid.

Just to give one very wacky example: suppose a post-Singularity intelligence decided to "emancipate" us all from the limits of human sexuality by setting everyone up with a complete set of both male and female reproductive organs? Or, for another wacky example, what if the same being decided, in the name of cosmopolitanism, to provide many of us with disabilities so that we might better understand those who are already disabled?

I think Carrico would argue that his inclusion of democracy in the list of prime values would prevent that kind of exploitation from happening. (Come to think of it, he would probably reject such scenarios prima facie; he has no use for all this "singularity" nonsense.) But without a notion that some moral and ethical models are better than others, we're in a lot of trouble.

UPDATE, From the Letters that Crossed in the Mail Department: While I was being snarky, Dale Carrico was writing a very nice response to my comment on his blog. So that oughta teach me a thing or two, but let's be realistic here -- chances are it won't.


I don't have much patience for any discussions of post-singularity ethics, at least not anything that goes beyond the very general 'modified golden rule' ('do unto your inferiors as you would have your superiors do unto you').

Should we demand that posthuman superintelligences be democratically accountable to us? That makes about as much sense as making our own civilization accountable to flatworms.

I'm not saying that the other side of the singularity is likely to be some sort of dictatorship. For one thing, I can't imagine anything more dull for a superintelligence than micromanaging the lives of billions of jumped-up monkeys. For another, my bet is that superintelligence is likely to result from a synergy between IA and uploading, not just the explosion of classical AI: i.e., those superintelligences will be direct memetic descendants of us, and their morality can be expected to be an evolved version of our own. We're generally more ethically advanced than, say, the Yanomamo; I don't see any reason the trend wouldn't continue.

But, at the same time, once the superintelligences are active, there won't be anything to rein them in but the other superintelligences. If they choose to do things that would look horrific to us (an example would be forcible uploading, in order to rationalize energy efficiency and matter utilization), there wouldn't be a damn thing we could do about it.

It might be just Mr. Carrico's style, and his use of progressive buzzwords like 'social struggle' and 'emancipation', but it sounds to me like he's trying to load down posthumanism with twencen political baggage. I don't see that as a productive direction; it's simply too limiting. Which is why, once again, I'd prefer to just stick with the modified golden rule and hope that when something goes south (as it inevitably will, given that the singularity will take place in the real world and not some idealized academic political fantasy) the results won't be too catastrophic.

Matt -

Should we demand that posthuman superintelligences be democratically accountable to us?

Um, yes?

That makes about as much sense as making our own civilization accountable to flatworms.

Well, if we need to do that for consistency's sake, fine. We're accountable to flatworms. Not sure how we'll rig up a ballot they can use, though...

The point is, even if we are flatworms by comparison to a potential superintelligence, we are still human beings. Whatever intelligences emerge -- whether they are the result of augmented human intelligence or something new created from whole cloth -- they are us, or at least our progeny. Any ethical standard that devalues the worth of human beings, even if it belongs to highly advanced beings, is an ethical leap backward, not forward.

It might be just Mr. Carrico's style, and his use of progressive buzzwords like 'social struggle' and 'emancipation', but it sounds to me like he's trying to load down posthumanism with twencen political baggage. I don't see that as a productive direction; it's simply too limiting.

Maybe what we need is a new political vocabulary -- something I don't think Carrico would be entirely averse to. The singularity being what it is, it's impossible to say how much or how little of our current moral, ethical, cultural, and political infrastructure will emerge on the other side. But that's all the more reason that we have to talk about this stuff now...while we still can.

I just don't see democratic accountability working, at least not for any unaugmented humans left in the population. The sad fact is, when you're dealing with entities orders of magnitude more complex, any rules or restrictions that are placed on them will be easily circumvented. Look how fragile our own democracy is now, when the ruling class consists of other monkeys.

Ever read Poul Anderson's Harvest of Stars? The AIs maintain an illusion of democratic control (technically they just advise the human elites), but in reality it's a benevolent dictatorship: their advice is so good it's always followed, and thus amounts to orders. Humans only think they're in charge; anyone with sense knows their political self-determination is a sham, and that they amount to a planet full of pampered housepets. It's always struck me as a logical failure state of any democratic system in the face of strong AI.

This is the whole reason for the friendliness problem: Yudkowsky realizes, I think, that once a superintelligence is up and running, it won't be accountable to us any more, and we'd better be damn sure it's got our best interests as part of its core programming.

I doubt that human life will ever be devalued, at least in absolute terms. Like you said, that'd be a giant leap backward, ethically, and I doubt that posthumans would have backward ethics. I think they'd be likely to value life more than we do. Any one human life would be more precious to them than we hold human life now, but at the same time, they'd hold themselves to be still more valuable.

Of course, you also have to ask how many baseline humans will be left on the far side of the singularity. If there's a hard takeoff, with one godlike entity arising and not much else in between it and ten billion monkeys, then there's cause for worry. But if it's more gradual, coming about as a result of millions or billions of people progressively upgrading themselves (that's where my money is), then whether or not baselines still have voting rights is a matter of less concern, as there won't be all that many baselines left around anyway, and the majority of the population will be full citizens of whatever posthuman system arises on the other side. This is pretty much what Kurzweil envisions, and I think he's being sensible when he talks about MOSHs (mostly original substrate humans) who have their every need cared for, are allowed to do pretty much as they please, and generally have it better than any human ever has before ... yet at the same time, have been sidelined politically. They have a level of self-determination comparable to that of a child or a pet.
