
Human Savants Revisited

Phil opened anew the debate about "what it means to be human" in his "Human Savants" post. Phil asked,

If given a choice, how many savants would sacrifice [their special savant talents] in favor of better socialization skills?

...If we reach a day in which individuals can pick and choose special mental abilities — even at a cost — wouldn't there be many who would gladly sacrifice normal social interaction in favor of such exceptional abilities?

I have no clue about what savants might choose to do, but I'd think that some "normal" people would definitely consider sacrificing social skills for math skills. Some people (particularly some of those who would especially value advanced math skills) are socially weak anyway. They might not think they were giving up much.

As for me, I'll happily take augmentation when it comes (and is proven safe), but I'm not going to barter away my personality to recite pi.

Mmmmmm pie.

Like Phil, I'd hope that these advanced abilities are ultimately found not to be mutually exclusive. But if they were, wouldn't it be great if we could go back and forth as needed? We could choose to operate at 120% of our normal socialization capability in social situations, or at 500% of our normal musical/mathematical capability when the need arose. Our minds could be like a toolbox: we'd pull out whichever tool the situation required.

The question "what does it mean to be human" will ultimately be rephrased, I think, to "what does it mean to be a person?"

In the future I think that the original question itself will smack of something akin to racism. Not unlike if I asked, "what does it mean to be white?"

Like Phil suggested, it will matter less how we came to be people than whether we are people. But there is some troubling philosophical/moral landscape to negotiate here. Only some machines could ever be defined as people - your typical pencil sharpener would not make the cut. So, for machines to qualify there would have to be some standard of intelligence or self-awareness that the machine would have to demonstrate before it could be considered a person.

Then the question would naturally arise - what about humans? Why should we continue to classify all humans as people if they can't meet the standards we hold machines to? If some humans are not people, how would they be treated?

What about when machines surpass humans? Would the bar for personhood be raised above humanity?

Had the movie "Bicentennial Man" been a little braver, it would have dealt with this issue. It wouldn't be just bigotry that might prevent a court from granting personhood to machines. There would be an argument that doing so would erode human dignity.

On the other hand, looking back at the history of master-slave cultures, neither the masters nor the slaves did well. The slaves were kept down, and the masters relied on the slaves instead of developing their own skills. A permanent slave class of non-person robots might not be such a good thing for humans either.

We humans are limited by a recorded history of dealing with only one type of advanced intelligence - normal human intelligence. Even our geniuses, eccentrics and psychopaths are human. We are only beginning to understand the savants. And we really don't know what advanced computer intelligence will mean for society either.

When different species of humans met up in the past, one species always displaced the other. Hopefully there's room in this world for more than one kind of intelligent being.

Comments

I like the idea of being able to switch back and forth. There's a scene in Star Trek: First Contact where Data and Picard are about to face down the Borg for the first time. Data begins to observe his emotions and realizes that he's terrified. So he announces that he's going to "turn off his emotion chip." Picard tells Data that he envies him sometimes.

The good side of being able to switch back and forth between normal social interaction and enhanced -- or maybe it's better to say modified -- modes of mental operation is that we would be more functional in some areas and we wouldn't be distracted by things that normally get in the way.

I wrote a while back that being able to get "in the zone" like that could prove helpful to sales people. A sales rep who can bump up her ability to speak and to think on her feet, and tone down her fear of rejection, is going to have a substantial advantage over the competition. The downside, of course, is that it could also prove quite helpful to criminals and/or government officials. How much easier it would be to commit appalling acts of violence if you can just switch off your capacity to be appalled. Or maybe closer to the lives of everyday people -- think how much easier it would be to dump somebody.

Yikes.

On the question of human dignity (or would that become "the dignity of a person?"), we're looking at a two-edged sword. It may, indeed, be an affront to human dignity to extend humanity to that which has not traditionally been viewed as human. But surely the greater error -- historically speaking -- has been the reverse: denying humanity to outsiders and slaves and other "inferiors." It is precisely on these grounds that Joshua Katz and co. take their stand in defense of zygotes as "fully human." To deny the humanity of a zygote, they argue, is to fall into the same line of thinking that once would have denied the humanity of a black person.

Obviously I don't agree. This is why I think you're right that the question must ultimately come down to personhood (or even the potential for personhood). Neither a freshly fertilized egg nor a fetus at two months would necessarily pass a test for personhood, although the argument for recognizing the potential personhood of the fetus would be much stronger than that of the zygote.

Eventually we'll reach a point where some artificial life-forms will pass any rational test for personhood. Will we recognize the personhood of a being who can think and feel, who has desires and dreams and fears, and who can express them to us? I think we will. But we'll be in a bit of a quandary.

To wit, if a sentient robot is a "person," what about a really advanced laptop? Okay, it's not. But what if it's running some software that has the capability to bootstrap itself to sentience? This is the question of potential personhood seen from a different angle.

One answer would be to say that any entity which could follow a developmental process ending in personhood must be recognized as a person. (That would be analogous to the "personhood begins at conception" argument.) If we end up adopting that standard, no one would be able to own a computer past a certain level of sophistication.

But the question is -- would it be ethical to inhibit the development of a computer that could "go personal" if left to its own devices or helped along? If not, will history view us all as slaveholders for "using" our ThinkPads and Dells like they're some kind of "property?" And if it would be all right to so inhibit a computer's development, how would that be different from the current use of embryonic stem cells for research?
