It's a New Phil, Weeks 85 and 86
The Transformation of Desire
Charles Harper's talk at the Singularity Summit had particular resonance for the New Phil series. Here's the relevant portion of his abstract:
People use power to pursue ends they desire. Therefore the increase of personal power calls for the transformation of personal desire. Science, however, knows next to nothing of the transformation of desire. Monks, hermits, fasters, counterculturals – the athletes of the spirit; these are the sorts of people who work on and know about the transformation of desire. A wise approach towards the development of superintelligence probably should include serious consideration on how to transform desire so that enhanced powers are not abused to serve un-enhanced desires. The transformation of desire for humans involves what in virtue ethics is called “habitus” – the formation of habituated character through devoted, willful practice within a space of real freedom. Virtue is not a matter of either knowledge or “programming.” And it also often is not limited to only individual lives. It occurs in group contexts such as families, teams, monastic orders, communities. Also, people who engage in the transformation of desire often are involved in worship and prayer. They seek inspiration and transformative power from God. In view of such issues, what would be the “transformation of desire” for a superintelligence?
The national tendency towards obesity is a small but revealing example of what Harper is talking about. The "power" in question here is both technological and economic. We can produce and have access to far more food than we actually need to eat. I've been chronicling an attempt to create my own habitus. Who knew?
Next time -- a long overdue (and dreaded) weigh-in. Stay tuned.
Comments
Those who pray or meditate often report that they feel a connection with others. Their personal concerns are trivialized as they get outside themselves. This can motivate a person to behave altruistically.
I see no reason why a superintelligence couldn't be similarly motivated. Sounds like a pretty good route to friendly AI.
Risking severe corniness here -- why not have the lessons of, say, "A Christmas Carol" taught to such an intelligence?
'Business!' cried the Ghost, wringing its hands again. 'Mankind was my business. The common welfare was my business; charity, mercy, forbearance, and benevolence, were, all, my business. The dealings of my trade were but a drop of water in the comprehensive ocean of my business!'
Posted by: Stephen Gordon | September 12, 2007 09:42 AM
OTOH, getting what you want is often a good way to realize that you didn't want what you thought you did -- i.e., transformation of desire through satiation. Technology is pretty good at delivering that.
Posted by: Karl Hallowell | September 12, 2007 04:37 PM
The transformation of desire via the god delusion is a non sequitur.
Posted by: Steve Gall | September 13, 2007 08:53 PM
Stephen --
Corny works for me. Of course, the danger of AIs who make mankind "their business" is that we get some busybody AIs deciding what's "best" for us.
Karl --
Or maybe, in some cases, too good. That's how we end up with obesity and other excesses.
Steve --
Assuming for the sake of argument that God is a delusion, the question of whether and how AIs might transform their own desires -- whether they would see this as a necessary or desirable thing to do, and what delusions might be involved on their end -- hardly strikes me as a non sequitur. Do you assume AIs will be by their very nature delusion-proof? I hope you're right, but it's a huge assumption.
Posted by: Phil Bowermaster | September 13, 2007 09:50 PM
"Delusion proof"
Seems like a reasonable definition of god. Seriously- no expectation out of line, no hope beyond reality (Go Cubs! This could be the year.)
It seems more likely that AI would start plenty delusion proof and aspire to the kind of omniscience that would approximate if not achieve delusion proof-ness. (Delusion proofity?)
Posted by: MDarling | September 28, 2007 11:40 AM