Economist Robin Hanson and futurist Brian Wang join us as we continue our special series leading up to Foresight 2010. The conference, January 16-17 in Palo Alto, California, provides a unique opportunity to explore the convergence of nanotechnology and artificial intelligence and to celebrate the 20th anniversary of the founding of the Foresight Institute.
About Our Guests
Robin Hanson is an associate professor of economics at George Mason University, a research associate at the Future of Humanity Institute of Oxford University, and chief scientist at Consensus Point. Robin is a pioneer in the field of prediction markets, also known as information markets or idea futures. He was a principal architect of the first internal corporate markets, as well as the Foresight Exchange beginning in 1994, and of DARPA's Policy Analysis Market, from 2001 to 2003. Robin has diverse research interests, spanning everything from health incentive contracts to Bayesian classification, agreeing to disagree, self-deception in disagreement, growth given machine intelligence, and interstellar colonization. He blogs at Overcoming Bias.
Brian Wang is a futurist who blogs about all things future-related at NextBigFuture. He is the Director of Research for the Lifeboat Foundation and a member of the Center for Responsible Nanotechnology Task Force.
Comments
The following includes the previous transcript snippet, as well as the rest (only the interesting parts):
The podcast, from 12/22, was content-rich. Robin started out with comments about decision and prediction markets, but it really started getting interesting at about 32:30 in the podcast. Following is a transcript of most of Robin's comments from that point. I, and hopefully other listeners, will inject opinions shortly, either here or on the Speculist blog. Robin and Brian Wang were the guests on the show:
"I find it fascinating that people get so worked up about this 'Is the future optimistic or pessimistic' thing, as if it's like taking sides on your favorite football team, or something.
You ask yourself, why does it matter how optimistic we are about the future? I mean, say we thought the future wasn't gonna come quite as fast as other people thought. Still, over the long run, we'll still have this wonderful future, but it won't get here quite as fast.
The rate of improvement, the exact rate of improvement, if it's changing by a factor of two - that seems a lot less important, than that we get there eventually."
"I think you should take seriously the various things that can go wrong, and say, 'yeah, we probably won't hit those things, but if we did, that would be really terrible, and what can we do about that?' Or what's the best thing to do about that? And then you get to the question of how is it best to avoid the various things that could go wrong? Then you get to people saying, 'well, if we just slowed everything down, maybe we could deal with these problems better'. And then you're getting on the other side, people saying, 'well...', like me really saying, 'if you slow things down, you're just gonna cause a whole bunch of more problems.'
But I mean, that seems to be the place to have the conversation. Not about overall optimism or pessimism, but the wisdom or prudence of being more slower, careful - or going gung-ho all the way."
"...our universe is 14 billion years old. So, this era we're in now, of rapid growth, is a very small fraction of the overall history of the past and of the future. So obviously, we're really taken with it. And this is our world, but it just can't last a long time, on a cosmic time scale."
"...and Drexler understood that. And lots of people understood that. And so it's interesting to think about 'well, what will things have to be like, when you reach those fundamental limits."
"..., and to see that growth rates would have to slow, and we'd have to be more at the limits of our capacity, and if we're in a competitive world - which I think is likely - you know, if we're evolving and competing, then we would be more closely adapted to our world, in the sense of finding it hard to have different behavior that would really have a competitive advantage"
"We're clearly not very well adapted to our current world. That is, we're apes who suddenly are thrust into this amazing growth spurt. And, we've had some selection and some adaptation over that period. But, for the large part, we are not adapted to this world. We're doing all sorts of things apes would do: looking at the world around us, and imagining this was the jungle or the forest, and dealing with things that way, but we're not making long-term plans, we're not trying to sort of optimize the future of the universe."
"...That's not the kind of creatures we were, and still are."
A short exchange with Brian, and then:
"Well, I think there are two big issues that we can think about now. One big issue, is 'do we make it at all?' If there's a substantial chance of some big disaster we were talking about, that would just destroy it all, then that's something that we could have some leverage over. To try to figure out how to avoid that. That's the existential risk story, so even if you think the chances are only 1 percent, that could be our one leverage on the future, to make sure that one percent doesn't happen."
"...One thing we could do about the long term future, is try to make sure we can make it there, by looking at whatever the risks are, and trying to minimize them."
"And the second big thing that we can do about the long term future, is to consider how much we want to have central coordination. It's a sensitive, dangerous thing to consider, but it is one of the things that will have an influence over the distant future. If somehow we make a world government, and it ends up being strong, then it can end up controlling all of the colonization that goes out from here, and could have a permanent mark on that, if it was powerful enough. I'm not sure that's a good idea, but it definitely is one of the big ways, we may have a mark on the future."
In response to "what if the big government does it wrong?",
Hanson replied,
"Absolutely. First of all, I want to say, it's a question that we should think long and careful about."
Responding to Hanson's suggestion that we think about World Government, Brian Wang asked why we would expect the people in power to give it up.
"Well, we're only a couple of people here, out of billions. So we should realize that our influence may be limited. But still, if we want to think about the question, that's the kind of question to ask. It could be, for example, that sometime in the next century, we will have a tentative world government, and then if that does badly, after that people say, 'no more of that, never again.' And that's how the influence will go, via this very formative, memorable example of how it didn't go very well."
The host asked Robin whether he sometimes thought of a singleton advanced AI as the world government, as opposed to human beings.
"Well, I think that's part of the range of options to keep in mind. But I think people vastly overestimate how easy it might be. "
"... but they underestimate how hard it is, to actually manage central coordination. We humans have had large amounts of experience trying to coordinate cities and states and nations, and larger things. And we've seen a lot about how it gets expensive, and how it gets hard. And so, you can call it an AI that's in charge of anything, but it's not clear that just calling it that, makes all these problems go away. I mean, it has to have some internal structure, and it has to have an internal way it's organized. And the question is, how does it manage those problems, and how does it overcome the great costs and conflicts that we've had, that we've tried to coordinate on a large scale. "
"I'm not gonna say anything is entirely clear, but for example, some people say, 'well, if you just have a bunch of clones of the same thing, and the entire government is run by clones of the same creature, then they won't have any internal conflicts, therefore they will all have peace and coordination.', or something like that."
Brian interjected that the trend seemed to be toward more numerous nations being formed, rather than toward consolidation, i.e., world government. Mr. Hanson responded:
"Over the centuries, the trend is more toward central government. No question, over the longer time scale."
"Nations have had more centralized government, nations have been taxing a larger fraction of income."
"They're doing more actions on a national level, rather than a regional or metropolitan area level. There just, over the last century, clearly more government. "
In response to Brian commenting on how much better life is with more options and individuals having more control over their lives, Hanson replied:
"I would say in the past that we've had government that were too big and too small, and a wide range of variety. I would say that one of the things that governments that were too big did, is they got involved in too many things. And one of the lessons that people learned is to back off on certain kinds of things. On the other hand, the government got involved in certain kinds of things, and they (people) liked it. And they kept doing more of it. "
The hosts asked about the role of the futurist in these things, and about what they (Brian and Robin) will be doing at the Foresight conference.
"So the actual futurist, most business futurists, are focused on a relatively short time scale, about 3-10 years, or not much longer than that. So clearly most demand for futurism, that's sort of practical, is in that time scale."
"But I'm most interested in the longer time scale, that you know after 20-100 years or something, and out there most of the people who do that kind of futurism, are basically entertainers, unfortunately. That's the kind of mode they're in, science fiction, inspirational speakers, whatever else it is."
"And, I'm an academic, I'm a professor, and I know how much people love to see sort of odd, creative, contrarian points of view, but honest, I think what the future most needs, what understanding the future most needs, is just to take the standard points of view from our various academic fields, and then combine them. Not necessarily to be creative and contrarian, but just to take what computer scientists usually think that's sort of the most straightforward, conservative things. What computer scientists think, combine that with economists think, for example, and put those views together, to make our best estimate of what is likely to happen. And honestly, that doesn't happen today."
"That doesn't happen today, because when an economist looks at the future, when he thinks about computers, he doesn't use what computer scientists think about computers. He uses what he has read in the newspaper, about computers. So each academic discipline takes their own expert field, and they combine that with their amateur image of other fields. And when computer scientists talk about the future of artificial intelligence, or whatever, they don't go talk to economists about what they think. They make up their own economics, like most people do. They make up their own social science that seems intuitively right to them. And then they use that to make forecasts."
"...and that's basically how futurism fails, is that we don't combine expert (something) from multiple fields. That's the kind of thing I want to talk about, and describe some basic insights from."
Posted by: DCWhatthe | December 26, 2009 05:42 PM
Responses to Robin's comments on the podcast:
"You ask yourself, why does it matter how optimistic we are about the future?"
The question almost answers itself. Why not feel good about things, especially when there is some justification for your optimism? There are also some reasons to be in total despair, but that hardly seems productive. As long as the optimism doesn't blind you to managing your life and making contributions wherever you choose to, why not be optimistic and share that attitude? How does that hurt anyone?
"Still, over the long run, we'll still have this wonderful future, but it won't get here quite as fast."
Here, Robin is comparing degrees of optimism. Yes, there's some of that 'well, I'll bet you 5 bucks the Singularity will occur in March of 2034, and I'll take your bet that it will happen at midnight of 2045' in the futuristic rhetoric, but so what? Some people like playing these games. It seems like a harmless competition, and at least they are choosing subjects like how fast our lives will be enriched, as opposed to what country can beat this other country.
"But I mean, that seems to be the place to have the conversation. Not about overall optimism or pessimism, but the wisdom or prudence of being more slower, careful - or going gung-ho all the way."
It sounds a lot like Robin is trying to regiment consensual conversations between human beings. Some do indeed discuss the importance of how fast we should be moving in some area or other. And others like to treat it like a horse race, seeing whose prediction is closest to what happens. So what?
On this topic, Mr. Hanson seems too picky about things. He should let people be people, and let him search for those particular individuals who are interested in the discussions which he values the most.
"The rate of improvement, the exact rate of improvement, if it's changing by a factor of two - that seems a lot less important, than that we get there eventually."
True. But do we have to always talk about the things that are most important? Can't we choose to relax and just talk about whatever interests us?
I agree that most of us are not dealing with priorities, other than those that affect us on a personal level. But how would he 'fix' that human trait, assuming that it should be fixed at all?
"...our universe is 14 billion years old. So, this era we're in now, of rapid growth, is a very small fraction of the overall history of the past and of the future. So obviously, we're really taken with it. And this is our world, but it just can't last a long time, on a cosmic time scale."
This is a vague statement. It depends on what you mean by 'growth'. It's hard to agree or disagree with this, because the most important term isn't defined. He may well be correct, in the sense in which he's making the claim.
"...and Drexler understood that. And lots of people understood that."
Well, then, we should shut up. We wouldn't want to face the wrath of Drexler and lots of people.
"And so it's interesting to think about 'well, what will things have to be like, when you reach those fundamental limits."
"..., and to see that growth rates would have to slow, and we'd have to be more at the limits of our capacity, and if we're in a competitive world - which I think is likely - you know, if we're evolving and competing, then we would be more closely adapted to our world, in the sense of finding it hard to have different behavior that would really have a competitive advantage"
Again, I don't see the evidence for this type of claim. What makes him think our capacity isn't going to expand, as we miniaturize and cheapen the resources we have?
"We're clearly not very well adapted to our current world. That is, we're apes who suddenly are thrust into this amazing growth spurt."
Can't argue with that one. He's right.
"...But, for the large part, we are not adapted to this world. We're doing all sorts of things apes would do: looking at the world around us, and imagining this was the jungle or the forest, and dealing with things that way, but we're not making long-term plans, we're not trying to sort of optimize the future of the universe."
That's an awfully tall order. But aside from that, he's correct. We're very inefficient and wasteful, except when we try hard, or chance upon some insights. Like when one of us writes a blog entitled 'This is the Dream Time', or a book called 'The Black Swan' or a book entitled 'A Guide to Rational Living'. Sometimes, we get it right, and therein lies hope.
"...One thing we could do about the long term future, is try to make sure we can make it there, by looking at whatever the risks are, and trying to minimize them."
While it might be irrational to stand still in the midst of a crisis and do nothing, we also delude ourselves into thinking we can mold the future exactly the way we want, and suffer no unintended consequences. I don't exactly disagree with his point, but rather with the arrogant (in terms of human capacity in general, not Robin specifically) sentiment that we have the power to control anything that annoys or threatens us.
"And the second big thing that we can do about the long term future, is to consider how much we want to have central coordination. It's a sensitive, dangerous thing to consider, but it is one of the things that will have an influence over the distant future. If somehow we make a world government, and it ends up being strong, then it can end up controlling all of the colonization that goes out from here, and could have a permanent mark on that, if it was powerful enough. I'm not sure that's a good idea, but it definitely is one of the big ways, we may have a mark on the future."
Except for the possibly delusional belief that we can intentionally, consciously make a considerable positive difference in the lives of our distant relatives, Robin is right in asking for this conversation to take place.
Although some things about being human are understandable and tolerable for the time being, the knee-jerk reactions to central coordination (notice he didn't say 'control'), as well as the reflex responses to a suggestion of unfettered capitalism, get in the way of resolving the big issues. Except during periods of stress, we should have grown out of the urge to react without listening and thinking long ago.
The host asked Robin whether he sometimes thought of a singleton advanced AI as the world government, as opposed to human beings.
"Well, I think that's part of the range of options to keep in mind. But I
think people vastly overestimate how easy it might be. "
"... but they underestimate how hard it is, to actually manage central
coordination. We humans have had large amounts of experience trying to
coordinate cities and states and nations, and larger things. And we've seen a
lot about how it gets expensive, and how it gets hard. And so, you can call
it an AI that's in charge of anything, but it's not clear that just calling it
that, makes all these problems go away. I mean, it has to have some internal
structure, and it has to have an internal way it's organized. And the
question is, how does it manage those problems, and how does it overcome the
great costs and conflicts that we've had, that we've tried to coordinate on a
large scale. "
Robin has another good point here. When people speak about advanced technology, they are justified in leaving details out, because there's no way they can know those details. But one effect of this deliberate exclusion is that advanced technology is treated as a black box, a form of magic, which is capable of anything. The attitude is akin to 'advanced future technology can deal with infinitely difficult problems, because it is infinitely smarter than us'. It could be that some problems which are intractable for us, in 2009, might still be difficult for more advanced brains.
Also, isn't it possible that a smarter brain will be able to look at some social problems more clearly, and be able to tell us, 'sorry Charlie, but there is no way to predict how this works out. You have to either try it, or run a simulation'? Some social and political questions might be akin to trying to query the state of some cellular automaton, where you have to run a program to that state, and read off the results.
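To make that "run it to find out" idea concrete, here is a minimal sketch using an elementary one-dimensional cellular automaton. The rule number, grid width, and step count are arbitrary choices for illustration, not anything discussed on the podcast; the point is only that, for rules like this, there is no known shortcut to the state at step N other than computing steps 1 through N.

```python
# Minimal 1D cellular automaton (Rule 30, Wolfram numbering) on a ring.
# To learn the state at step N, you simulate every step up to N.

def step(cells, rule=30):
    """Apply one update of an elementary cellular automaton."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)               # look up rule bit
    return out

def state_at(n_steps, width=64, rule=30):
    """Run the automaton forward n_steps from a single live cell."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(n_steps):
        cells = step(cells, rule)
    return cells

if __name__ == "__main__":
    # Querying step 30 means actually computing steps 1..30.
    final = state_at(30)
    print("".join("#" if c else "." for c in final))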
The hosts asked about the role of the futurist in these things, and about what they (Brian and Robin) will be doing at the Foresight conference.
"So the actual futurist, most business futurists, are focused on a relatively
short time scale, about 3-10 years, or not much longer than that. So clearly
most demand for futurism, that's sort of practical, is in that time scale."
"But I'm most interested in the longer time scale, that you know after 20-100
years or something, and out there most of the people who do that kind of
futurism, are basically entertainers, unfortunately. That's the kind of mode
they're in, science fiction, inspirational speakers, whatever else it is."
Yep, that may be true. But that has value. How many times have you heard engineers say they were inspired by Star Trek? Lots, right?
"And, I'm an academic, I'm a professor, and I know how much people love to see
sort of odd, creative, contrarian points of view, but honest, I think what the
future most needs, what understanding the future most needs, is just to take
the standard points of view from our various academic fields, and then combine
them. Not necessarily to be creative and contrarian, but just to take what
computer scientists usually think that's sort of the most straightforward,
conservative things. What computer scientists think, combine that with
economists think, for example, and put those views together, to make our best
estimate of what is likely to happen. And honestly, that doesn't happen
today."
"That doesn't happen today, because when an economist looks at the future,when he thinks about computers, he doesn't use what computer scientists think about computers. He uses what he has read in the newspaper, about computers. So each academic discipline takes their own expert field, and they combine that with their amateur image of other fields. And when computer scientists talk about the future of artificial intelligence, or whatever, they don't go talk to economists about what they think. They make up their own economics, like most people do. They make up their own social science that seems intuitively right to them. And then they use that to make forecasts."
This might not be snobbery, but it does seem to be a claim that the 'experts' have a monopoly on the acquisition or perception of truth.
First of all, contrarian thinking is seductive, that's true. It makes one feel that one possesses far greater wisdom than the experts, without putting in much effort.
However, experts and the general public alike are full of silly traditions and unquestioned assumptions, and contrarian thinking sometimes helps to address that.
Secondly, the vague idea of 'combining' the viewpoints of computer scientists and economists is not in itself a work of science. You can combine things in all sorts of crazy ways.
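As a toy illustration of that point (with made-up numbers, and not anything Hanson actually proposed), here are two standard ways of pooling the same three hypothetical expert probabilities; they give noticeably different combined answers.

```python
# Two standard ways to pool expert probability estimates. The numbers are
# invented; the point is that "combine the experts" is not one operation.

from math import prod

expert_probs = [0.99, 0.99, 0.5]  # hypothetical expert estimates of one event

def linear_pool(ps):
    """Simple average of the probabilities."""
    return sum(ps) / len(ps)

def log_pool(ps):
    """Geometric mean of the 'yes' and 'no' weights, renormalized."""
    yes = prod(ps) ** (1 / len(ps))
    no = prod(1 - p for p in ps) ** (1 / len(ps))
    return yes / (yes + no)

print(f"linear pool: {linear_pool(expert_probs):.3f}")  # ~0.827
print(f"log pool:    {log_pool(expert_probs):.3f}")     # ~0.955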
But, more importantly, there are some things which experts are not more 'expert' at than the common man. And one of those things is predicting the future. Human beings in general are very poor at this type of cognitive exercise. We enjoy making predictions, for various reasons, and we love to take credit for the times in which we seemed to have been right. But we really aren't very good at it.
No. The future is inherently unpredictable, and we should not leave it solely in the hands of academia. 2012 was a work of fiction, but it illustrated the preferential treatment of the well-connected and powerful. This circumstance is a fact of our lives, and always has been. In a situation where some kind of rationing seems advisable, the well-connected experts will for some strange reason exclude themselves from any consideration; count on that.
A question I'd like to ask is, in what way would he like to combine the recommendations from different experts? In the form of a prediction market, one that is closed to the general public?
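For readers unfamiliar with the mechanism being alluded to, here is a minimal sketch of Hanson's logarithmic market scoring rule (LMSR), the automated market maker behind many prediction markets. The liquidity parameter and the trades below are invented purely for illustration; they are not from the podcast.

```python
# Minimal sketch of an LMSR market maker for a two-outcome (yes/no) question.
# Traders who disagree with the current price buy shares, and the price moves;
# the resulting price is one concrete way of "combining" many people's views.

import math

class LMSRMarket:
    def __init__(self, liquidity=100.0):
        self.b = liquidity      # higher b means prices move less per trade
        self.q = [0.0, 0.0]     # outstanding shares for [yes, no]

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Current market probability for an outcome."""
        exps = [math.exp(x / self.b) for x in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        """Buy shares of an outcome; returns the cost charged to the trader."""
        new_q = list(self.q)
        new_q[outcome] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

market = LMSRMarket()
print(f"start:       p(yes) = {market.price(0):.2f}")  # 0.50
market.buy(0, 40)   # an optimistic trader buys "yes"
print(f"after trade: p(yes) = {market.price(0):.2f}")  # moves up toward yes
market.buy(1, 80)   # a skeptic buys "no"
print(f"after trade: p(yes) = {market.price(0):.2f}")  # moves back down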
Posted by: DCWhatthe | December 26, 2009 06:55 PM