The Speculist: Sixth Day Ethics

A couple of days ago Rand Simberg pointed to an article about a moth-cyborg.

Charles Higgins, an associate professor at the University of Arizona, has built a robot that is guided by the brain and eyes of a moth. Higgins told Computerworld that he basically straps a hawk moth to the robot and then puts electrodes in neurons that deal with sight in the moth's brain. Then the robot responds to what the moth is seeing -- when something approaches the moth, the robot moves out of the way.

Higgins explained that he had been trying to build a computer chip that would do what brains do when processing visual images. He found that a chip that can function nearly like the human brain would cost about $60,000.

"At that price, I thought I was getting lower quality than if I was just accessing the brain of an insect which costs, well, considerably less," he said. "If you have a living system, it has sensory systems that are far beyond what we can build. It's doable, but we're having to push the limits of current technology to do it.

Well, it would certainly push the limits of a grant. $60,000 versus $0.05 at the bait shop is a pretty easy choice even when you aren't writing the check.
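
Out of curiosity, here's a rough sketch of what the robot's control loop might look like. To be clear, this is pure guesswork on my part: the Computerworld piece doesn't describe Higgins's actual interface, so the spike-rate reader, the motor commands, and the threshold below are all made-up stand-ins.

    # Hypothetical sketch of a moth-in-the-loop obstacle avoider.
    # Nothing here comes from Higgins's real system: read_spike_rates()
    # and drive() are placeholders for whatever hardware he actually used.

    import time

    THRESHOLD_HZ = 50.0  # assumed firing rate that signals "something looming"

    def read_spike_rates():
        """Return (left_hz, right_hz): firing rates picked up by electrodes
        in the moth's left and right visual neurons. Placeholder."""
        raise NotImplementedError("replace with the electrode interface")

    def drive(left_speed, right_speed):
        """Send speeds to the robot's two wheel motors. Placeholder."""
        raise NotImplementedError("replace with the motor controller")

    def control_loop():
        while True:
            left_hz, right_hz = read_spike_rates()
            if max(left_hz, right_hz) < THRESHOLD_HZ:
                drive(1.0, 1.0)    # nothing looming: roll straight ahead
            elif left_hz > right_hz:
                drive(1.0, -1.0)   # activity on the left: veer right
            else:
                drive(-1.0, 1.0)   # activity on the right: veer left
            time.sleep(0.02)       # poll the moth at roughly 50 Hz

The point of the sketch is how little computation the robot itself needs. The moth's visual system does the hard part; the robot only has to compare two firing rates and steer away from the busier side.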

Rand's point was that the blurring of the distinction between our machines and biology will give us "some humdinger ethics issues." No doubt there will be serious ethical issues to consider, and there seems little doubt that the fields of bioengineering and AI will be combined in a variety of ways. But I don't think the "blurring" of the two fields will be the problem. Humanity, along with the sentient beings these technologies produce, will have to navigate this ethical minefield even if the two fields are never combined.

As Phil ably pointed out with his "Declaration of Singularity," the substrate – the stuff we're made of – matters less than the being contained within the substrate. Whether they are engineered as digital, biological, or some hybrid, these beings will test our humanity as never before.

Civilized nations have outlawed the practice of slavery. By analogy, AIs or bioengineered beings with intelligence equal to a person's should not be subjected to slavery.

This ethical restraint is attractive because it is easy to understand and it provides a bright-line rule between right and wrong. If it's a person, you can't treat it like a slave.

But we'd have to establish how smart an engineered intelligence would have to be to count as a person. And, conversely, would we be saying something negative about the personhood of humans who fall below that level of intelligence?

But crossing even this bright line could be tempting. Dumb servants are simply less useful than intelligent servants. What if humans could engineer beings who take joy in subservience? Rationalizing slavery was far too easy even when the victims hated it. Imagine how easy it would be for people to accept happy slaves, especially slaves who didn't look human.

What would be the outcome for humans? Would we become satisfied living like tyrannical despots, our every whim catered to by engineered sycophants? Historically, humans have shown little ability to resist such lives when they are offered.

And there are still ethical issues with less intelligent engineered beings. It is legal to own animals and to exploit them in certain ways, but most societies prohibit cruelty to them. By analogy, it could be argued that we should prohibit cruelty to engineered creatures that are as intellectually sophisticated as the animals we already protect.

Perhaps the bright-line rule should be to protect AIs that are the cognitive equivalent of mammals.

Comments

Would we become satisfied living like tyrannical despots with our every whim catered to by engineered sycophants?

Bet we would. Especially if we didn't ever have to act like tyrants or despots, or even ever be particularly mean, for that matter. The whole thing could be set up to run so pleasantly that no one would ever need bother about whether it was really "right" in some abstract sense.

It's a disturbing thought.

Here's a wildly unrelated disturbing thought, picking up on the moth thing. What if we were able to distribute more complex functions across populations of social insects? The world's first "A"I might end up being an ant hill or a termite colony!

To bee, or not to bee.

Ignoring the whole entomological pun thing, I think the ethical challenges presented by this technology will largely center on the more traditional economic issue: does builder = owner?

Harking back to Al Fin's notion of grobyc vs. cyborg, I think the original or foundational entity will necessarily force the answer to a large degree. By which I mean: if a human is modified in some fashion, the laws governing treatment of humans would still apply, or would provide the basis for new law derived from first principles if a radical enough change should warrant it. However, if the "original" entity is a device, or even the moth example Stephen linked to, then I think Al Fin's grobyc classification might more ethically be applied. I don't yet have a sense of how that rationale might develop, but I suspect it will largely depend on how closely humans identify such creations with their pre-existing relationship with animal pets.

If a purely inanimate entity is created, that further removes the construct from established ethical considerations, and many humans would almost certainly assume it logically falls into the classification of "Property."

If that last should prove to be our societal starting assumption towards artificial constructs, and I simply don't see any likely alternative given the incremental nature of their development process, then I stand by my earlier position that AIs won't have "rights" until they force us humans to acknowledge them. I think humanity's demonstrated willingness to classify ourselves by much less clearly definable distinctions than those under consideration here leaves little hope for a different approach to the clearly non-human mechanisms we construct. No matter what we construct them out of, I'm very much afraid.

I can't tell you how much I wish I thought it would be otherwise ...

I would advocate restricting any "servant" systems to narrow AI that works really well. I would find generally intelligent "happy slaves" objectionable; the objection follows logically from my disdain for other simplistic, stupid goal systems for intelligent agents, such as the maximization of genetic fitness.

Here's a seemingly frivolous case in point. Does our lack of empathy with a virtual character portend greater abuses of our future cyber-brethren? Or does the fact that some now feel bad about it indicate that there's some hope for us?
