The Speculist: Closer Than We Think



Closer Than We Think

Ben Goertzel says the Singularity may get here sooner than many of us expect:

One of these years, one of these AGI designs—quite possibly my own Novamente system—is going to pass the critical threshold and recognize the pattern of its own self, an event that will be closely followed by the system developing its own sense of will and reflective awareness. And then, if we've done things right and supplied the AGI with an appropriate goal system and a respect for its human parents, we will be in the midst of the event that human society has been pushing toward, in hindsight, since the beginning: a positive Singularity. The message I'd like to leave you with is: If appropriate effort is applied to appropriate AGI designs, now and in the near future, then a positive Singularity could be here sooner than you think.

Goertzel says that with a Manhattan Project approach, we could be there in a decade or so, but that it will most likely take a little longer, driven instead by a few serious researchers trying "really, really" hard to make it happen. Like Kurzweil, Goertzel believes that a better understanding of the human brain will lead us there, but he's not convinced that we need a full brain scan or significantly more powerful hardware.

This is a good overview for folks who haven't read much about AGI (artificial general intelligence). There are some interesting thoughts in the comments as well. Read the whole thing.

Comments

Goertzel's ideas here reminded me of Eliezer Yudkowsky's recent lecture "The Intelligence Explosion" that he gave at the Singularity Summit.

Yudkowsky's talk focused on the feasibility of Friendly AI.

Some have argued that any advanced AI will shoot past us. Therefore, there is no way for us to predict what it will do. It could just as easily be evil as good.

This is probably oversimplified, but Yudkowsky thinks that if we can engineer a friendly AI initially, it can then set the parameters for each upgrade. It would keep itself friendly as it improves.

An analogy, I suppose, is a good kid. Let's say we have a somewhat immature but nice young guy who's 13 years old.

Such a person will grow in intelligence and complexity, but if he is good at that age, he will not, generally, seek to get better at being evil. Instead, he will learn skills that help him be productive and helpful in the world.

That, I think, is the hope for general AI. The challenge, I guess, lies in getting the "newborn" AI to become a decent adolescent.
