The Speculist: All In


Live to see it.




All In

Just saw this Facebook notification for our FFR chat host Michael Darling:

Michael went all-in in Texas Hold 'Em Poker and won $2,441 chips in one hand. World-class playing!

Not bad at all! Unfortunately, a few minutes later this notice appeared:

Michael bet their way to success in Texas Hold 'Em Poker, walking away with $999.

I'll leave the atrocious grammar alone for a moment. You win almost $2500 on a single hand, and a while later you cash out having lost over $1400 of that. As they say on the TV shows:

"That's poker."

Anyhow, he still finished up by a grand. Not too shabby. If only it were real money!

Michael displayed one of the hallmarks of intelligent gaming behavior in this sequence of events. He quit while he was ahead. Along the way, he had to demonstrate his intelligence repeatedly by placing, raising, and folding bets each at the appropriate time, comparing what he knew about his own hand with what he could surmise about his opponents' hands, as well as their likely behavior in the face of his next move. He had to play smart when he got good cards, and even smarter when he didn't.
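To put a rough number on what "playing smart" means here: every bet, call, or fold boils down to weighing your estimated chance of winning against the price the pot is offering. Here's a minimal sketch of that calculation in Python; it's my own illustration, not Michael's actual strategy, and the win-probability figure is a made-up input that a real player would have to estimate from the cards and the opponents' behavior.

# A toy version of the bet/call/fold decision: compare an estimated
# probability of winning the hand against the pot odds on offer.
# The thresholds and the win-probability estimate are illustrative only,
# not a real poker strategy.

def decide(win_probability: float, pot: float, cost_to_call: float) -> str:
    """Return 'fold', 'call', or 'raise' from a simple expected-value test."""
    # Pot odds: the share of the final pot we must put in to keep playing.
    pot_odds = cost_to_call / (pot + cost_to_call)

    if win_probability < pot_odds:
        return "fold"    # staying in costs more, on average, than it returns
    if win_probability > 2 * pot_odds:
        return "raise"   # a comfortable edge, so build the pot
    return "call"        # a marginal edge, so see the next card cheaply

# Example: a $100 pot, $20 to call, and a hand we guess wins 30% of the time.
print(decide(win_probability=0.30, pot=100.0, cost_to_call=20.0))  # prints "call"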

If a computer could demonstrate the kinds of behaviors that Michael did in winning his $1000 on Facebook poker, we would almost certainly credit it with possessing some level of intelligence. Via GeekPress, New Scientist reports on how last summer a computer program beat some of the world's best poker players at Limit Texas Hold 'Em (a slightly less random and complex variation of the game than the No-Limit version that Michael was playing).


When this story broke last summer, I predicted that software consistently beating humans at poker would be taken by many as evidence not that machines have become intelligent, but rather that no "true" intelligence is required in order to win at poker. This is part of a familiar progression: at an earlier stage, a computer beating a human being at chess was considered a reliable signpost of machine intelligence, right up until it happened. Then suddenly winning at chess became something that could be done "mechanically," with no real intelligence required.

However, the New Scientist article goes on to point out something very interesting:

Hundreds of online poker players use fully automated bots in the hope of making money without lifting a finger, even though this is against poker websites' rules. Most are crude, off-the-shelf programs bought online, designed to evade the sites' detection systems. They generally lose money for their owners. It is estimated by industry and leading botters that only around 1 in 10 players using bots make a profit, mainly in low-stakes games. Those "botters" who do make money are understandably secretive. Being identified can lead to their accounts being frozen and funds seized. One London-based botter told New Scientist his program made in the region of $35,000 per year.

If one in ten players using bots makes a profit, I can't help but wonder what the overall percentage of players using bots is. Let's say it's fairly low -- one player out of every 100. That would mean that one out of every 1000 people playing poker online is making a profit by way of a computer program.
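For what it's worth, here's the same back-of-the-envelope arithmetic as a few lines of Python. The one-in-ten figure is the New Scientist estimate quoted above; the one-in-a-hundred figure is just my guess, not a measured number.

# Combine the quoted estimate (about 1 in 10 bot users profit) with the
# guessed usage rate (about 1 in 100 online players run a bot).

bot_profit_rate = 1 / 10     # share of bot users who turn a profit (New Scientist estimate)
bot_usage_rate = 1 / 100     # share of all online players running bots (my guess)

profitable_bot_share = bot_profit_rate * bot_usage_rate
print(f"share of all players profiting via a bot: {profitable_bot_share:.4f}")  # 0.0010
print(f"roughly 1 in {round(1 / profitable_bot_share)} players")                # 1 in 1000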

That number is likely to go up.

And the thing is, the bots will go right on beating us at poker whether we acknowledge their intelligence or not. They don't care what we think. This could be the early stage of a long and sad rhetorical slippery slope. When computer programs are driving our cars and performing surgery on us, will there still be those arguing that no "real" intelligence is involved? How about when they're flying airplanes? Or when we turn to them to bail us out of the next financial crisis, and ask them to prevent any further such catastrophes? Will they be truly intelligent when we turn law enforcement and other security matters -- even national security -- over to them? Or when we simply hand the reins of government over to them?

Maybe even then there will be some who argue that computers still aren't really, really intelligent. But that won't matter much. By then, we will have gone all in on a much bigger bet than Michael's $2,400. We will have bet that these machines can run things better than we can. We will have bet our future on them.

Here's hoping it pays off.

Comments

First - I'm convinced the bad grammar is intentional. A ploy, hook, or social engineering move to force me to correct it. There is a way for players to make their own announcements.
I didn't lose - I won a different hand but instead of going all in, I raised and bet and dragged a bunch of sheep with me to the show at the end.
But no way I'm going to be able to deal with the guilt of burning time playing Facebook poker and then add to that the time to correct the automated pronouncements of my status.

Second -
I did quit while I was ahead, but I don't always. Or at least, I haven't always. Even though I know it's the intelligent move. Personally, I have identified two things about myself playing poker (neither of which I should reveal, both of which I'm sure a decent player would figure out playing with me one time). First, I get impatient and tired and don't care whether I win or lose. Tough to bet smart against that. Second, my only tell (that I can observe) is that as I get tired and impatient, I bet more and more "traditionally" smart. Yes, I seriously intend to be somewhat random when I can pay attention.

And then on to the real point: reading this, you can't know that a bot didn't write it. You can claim it's me, but you can't tell. Now, I say this knowing that I was one of the King/Bachman fans who challenged Bachman with accusations of really being King, because I can tell the "voice" of a writer. But no one reading this knows my writing well enough. Maybe.
But what if I've been a bot all along?

As for pilots - I've flown fast-moving jets with 1960s-era autopilot technology. And I can state with certainty that the only reason we don't have auto-piloted planes (and buses and taxis and other conveyances) is we're afraid to give in. And those so employed don't like it. (The Air Force, for example, is run mostly by pilots.)

No reason a decent GPS and some reasonable sensor technology wouldn't be as safe as a human operator. Safer most of the time.

We already have bet that computers can run some important things better than we can. The trains at Dallas Ft Worth Airport. Most major financial trading. I don't think WOPR has taken over the launch codes yet, but certainly something equivalent determines the inbound threat and the corresponding defense posture. The grid, which could get even better if we let it.
