
Posts 1 - 7 of 7   
Google DeepMind Challenge Match Live: 3/15/2016 05:15:11


Hitchslap
Level 56
Report
https://www.youtube.com/watch?v=mzpW10DPHeQ#t=6198

For those who want to watch history in the making
Google DeepMind Challenge Match Live: 3/15/2016 05:17:11


Жұқтыру
Level 56
Report
History has already been made; shame this wasn't heard about more, though. I heard somewhere Go is estimated to have 2^600 different combinations of games, while chess has 2^120.
Google DeepMind Challenge Match Live: 3/15/2016 05:37:09


l4v.r0v 
Level 59
Report
I heard somewhere Go is estimated to have 2^600 different combinations of games, while chess has 2^120.


Closer to 3^361 for Go (each of the board's 361 points can be empty, black, or white), which works out to roughly 10^172 possible positions.

To contextualize this: there are more possible positions in Go than there are estimated atoms (10^80, about 2^266) in the universe.

Which is why this is huge. People expected that it would take another decade (that's about 5 doublings under Moore's Law, or 32 times the computational power we have today) for an AI to best a Go champion. And it already happened just last week.

Why? Well, the "decade" calculations were based on the assumption that a Go AI would adopt the same strategy as a Chess AI- actually look at all possible moves and follow the tree of possibilities. But AlphaGo instead uses Monte Carlo methods (basically, informed "guessing": you play out random moves, see which kinds of moves tend to work better, and then narrow your search a bit; the same methods can be used, for example, to experimentally estimate probabilities or the value of pi) and then investigates a small subset of moves rather than all of them.
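The pi example mentioned above is the classic illustration of a Monte Carlo method. A minimal sketch in Python (just the "guessing" idea, nothing to do with AlphaGo's actual code):

```python
import random

def estimate_pi(samples=100_000, seed=42):
    """Monte Carlo estimate of pi: throw random darts at the unit
    square and count how many land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # (area of quarter circle) / (area of square) = pi / 4
    return 4 * inside / samples

print(estimate_pi())  # close to 3.14159
```

The more samples you take, the closer the estimate gets- the same "sample randomly, then trust the statistics" principle that a Monte Carlo game search relies on.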

So we're 10 years ahead of schedule here. This is pretty big.

:P You could also adapt (a variation on) this approach to Warlight, actually.

Edited 3/15/2016 05:38:39
Google DeepMind Challenge Match Live: 3/15/2016 05:58:28


TBest 
Level 60
Report
"the assumption that a Go AI would adopt the same strategy as a Chess AI- actually look at all possible moves and follow the tree of possibilities. But instead AlphaGo uses Monte Carlo methods (basically, some "guessing" where you check out random moves and see which types of moves work better, and then narrow your search a bit"

Sounds to me like Chess and Go AIs both work the same way :) No chess AI calculates everything. They are very good at finding "candidate moves", which they then evaluate by searching further candidate moves. Compared to looking at all moves, they only 'think' about a very small sample. They also use an opening database and a tablebase.
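The "candidate moves" idea can be sketched as a toy depth-limited negamax that only expands the few most promising moves instead of the full move list. Everything here (`evaluate`, `moves`, `apply_move`) is a hypothetical callback, not any real engine's API:

```python
def candidate_search(state, depth, evaluate, moves, apply_move, width=3):
    """Toy candidate-move search: order moves by a quick static eval,
    keep only the top `width`, and recurse on those (negamax style)."""
    if depth == 0:
        return evaluate(state)
    children = moves(state)
    if not children:
        return evaluate(state)
    # After our move the position is scored from the opponent's
    # perspective, so a lower score is better for us: sort ascending.
    ordered = sorted(children,
                     key=lambda m: evaluate(apply_move(state, m)))[:width]
    # Negamax: our score is the negation of the opponent's best reply.
    return max(-candidate_search(apply_move(state, m), depth - 1,
                                 evaluate, moves, apply_move, width)
               for m in ordered)
```

Real engines add alpha-beta pruning, transposition tables, and much better move ordering on top, but the shape is the same: a narrow, deep search over a small sample of moves.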
Google DeepMind Challenge Match Live: 3/15/2016 06:50:16


l4v.r0v 
Level 59
Report
Sounds to me like Chess and Go AIs both work the same way


I oversimplified quite a bit to make it semi-understandable for non-technical people. I specifically avoided any mention of deep artificial neural networks since I can't really explain them better than "algorithms that kind of simulate how neurons behave and make a bunch of statistical guesses."

But you can pick up the crucial differences between AlphaGo and Deep Blue just by checking their Wikis:

AlphaGo: https://en.wikipedia.org/wiki/AlphaGo#Algorithm

Deep Blue: https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)#Deep_Blue_versus_Kasparov

Of course, Deep Blue isn't very representative of modern state-of-the-art chess AI, but the core difference is that AlphaGo significantly reduces the need for computational power through its use of neural networks and Monte Carlo methods- so the massive number of possible combinations in Go is no longer daunting.

An article from The Verge explains this better:


The twist is that DeepMind continually reinforces and improves the system’s ability by making it play millions of games against tweaked versions of itself. This trains a "policy" network to help AlphaGo predict the next moves, which in turn trains a "value" network to ascertain and evaluate those positions. AlphaGo looks ahead at possible moves and permutations, going through various eventualities before selecting the one it deems most likely to succeed. The combined neural nets save AlphaGo from doing excess work: the policy network helps reduce the breadth of moves to search, while the value network saves it from having to internally play out the entirety of each match to come to a conclusion.


source: http://www.theverge.com/2016/3/9/11185030/google-deepmind-alphago-go-artificial-intelligence-impact

(as you can read from one of Deep Blue's creators in the article- AlphaGo doesn't emphasize search nearly as much as a chess AI but instead focuses on building and testing intuition)

Basically, it's got two separate networks- one to look at possible movesets, and another to evaluate them without simulating a whole game. Needless to say, this is far more advanced than the "candidate move" strategy used by chess AI- it also means that AlphaGo keeps getting better and better over time.
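That division of labor can be sketched roughly like this- a drastically simplified, non-adversarial sketch where `policy`, `value`, and `apply_move` are hypothetical stand-ins for the real networks and game logic:

```python
def search(state, depth, policy, value, apply_move, breadth=3):
    """Sketch of the two-network idea: the policy network prunes
    breadth (only its top `breadth` moves get explored), and the value
    network prunes depth (positions at the search horizon are scored
    directly instead of being played out to the end of the game).
    Real search alternates players and mixes in rollouts; this doesn't."""
    if depth == 0:
        return value(state)              # value net replaces a full playout
    top_moves = policy(state)[:breadth]  # policy net narrows the search
    if not top_moves:
        return value(state)
    return max(search(apply_move(state, m), depth - 1,
                      policy, value, apply_move, breadth)
               for m in top_moves)
```

The saving is multiplicative: cutting breadth shrinks every level of the tree, and cutting depth removes entire levels- which is why the combined nets make Go's game tree tractable.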

As far as what a neural network is (oh boy): neurons (in your body) fire when an activation threshold is met. You can do something a little bit analogous with statistics as an efficient way to aggregate a bunch of complicated data- having a bunch of simulated neurons "fire" when some statistical thresholds are met, and scaling up through a bunch of layers until you get a good statistical guess about how likely you are to win with a particular playstyle, for example. Or whether a picture depicts a dog. Or whether someone has cancer. Or whether you're likely to buy something.

Basically, P(x) = f(a, b, c, ..., z), and your neural network (like a lot of other ML algorithms) tries to approximate that function based on what it can pick up from existing data.
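The "fire when a threshold is met" idea can be shown with a single artificial neuron and a tiny two-layer net- a bare-bones sketch using a sigmoid activation, with made-up weights rather than anything trained:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    squashed by a sigmoid so it 'fires' (outputs near 1) when the
    sum clears the threshold the bias encodes."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(inputs, layer1, layer2):
    """Forward pass through two layers; each layer is a list of
    (weights, bias) pairs, one per neuron.  Layer 1's outputs feed
    layer 2 - stacking layers like this is what lets a net
    approximate the unknown function P(x) = f(a, b, ..., z)."""
    hidden = [neuron(inputs, w, b) for w, b in layer1]
    return [neuron(hidden, w, b) for w, b in layer2]
```

Training is then just nudging the weights and biases (via backpropagation) until the network's output matches the existing data- that's the part this sketch leaves out.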

It just highlights the huge difference machine learning makes- and why a lot of people think it's our best shot at a general artificial intelligence.

Edited 3/15/2016 06:54:47
Google DeepMind Challenge Match Live: 3/15/2016 09:15:00

Konkwær III
Level 54
Report
Well, the challenge match is over, 4-1.
Google DeepMind Challenge Match Live: 3/15/2016 14:01:53


[AOE] JaiBharat909
Level 56
Report
"algorithms that kind of simulate how neurons behave and make a bunch of statistical guesses."

^How closely do the algorithms match neural firing? I would imagine it's quite unrefined and still has potential to grow stronger, considering we haven't even mapped all the neuronal circuits yet.