Mr Redmond is not only a true gentleman but also a great commentator. I really enjoy your videos.
@yoloswaggins2161 · 6 years ago
For anyone wondering AlphaGo Zero [W] vs AlphaGo Lee [B]
@yoloswaggins2161 · 6 years ago
The number of blocks refers to the size of the network; specifically, it is directly proportional to the number of layers. A network with more blocks is therefore capable of expressing more advanced "ideas". While depth is not inherently tied to training time, a larger network typically takes longer to train, since it is slower to evaluate and generally requires more data to generalize properly.
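As a rough sketch of that proportionality: assuming the published AlphaGo Zero layout (one initial convolutional block followed by residual blocks containing two convolutional layers each), the depth works out as below. The function name is mine, purely for illustration.

```python
def conv_layer_count(total_blocks):
    """Approximate convolutional-layer count for an AlphaGo Zero-style
    network: one initial convolutional block plus (total_blocks - 1)
    residual blocks, each containing two convolutional layers."""
    residual_blocks = total_blocks - 1
    return 1 + 2 * residual_blocks

print(conv_layer_count(20))  # 20-block net -> 39 conv layers
print(conv_layer_count(40))  # 40-block net -> 79 conv layers
```

So doubling the block count roughly doubles the depth, which is why evaluation and training both slow down.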
@KilgoreTroutAsf · 6 years ago
Thanks for the clarification.
@gamecoder0 · 6 years ago
Caveat: I am a hobbyist still learning this domain. Each residual block is a stack of layers in the following order: a convolutional layer, batch normalization, a rectifier non-linearity, a second convolutional layer, a second batch normalization, a residual (skip) addition, and a second rectifier non-linearity. The twenty-block AlphaGo Zero has nineteen of these residual blocks, and the forty-block version has thirty-nine (each also has an additional convolutional block, which makes up the rounded numbers that give them their names). This means the 40-block network is roughly twice as deep. Because this is a residual network, I'm not sure you can say the deeper network learns more advanced features; I'd say it is more nuanced. I also do not believe the training times for the twenty-block version (three days) and the forty-block version (forty days) differed because the different depths required different numbers of iterations to converge. Both batch normalization and residual blocks are techniques for addressing related problems that break deep-network training; in combination, the deeper network should theoretically not require many more iterations to train than the shallower one. I believe the twenty-block version was the proof of concept, whereas the forty-block version was the full version DeepMind hoped would generate new ideas for Go.
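That layer ordering can be sketched in plain Python. This is a minimal toy, not DeepMind's code: each "convolution" is collapsed to an ordinary linear map over a small vector, and the batch normalization is the inference-style version without learned scale/shift parameters.

```python
import math
import random

def batch_norm(xs, eps=1e-5):
    # Simplified normalization; a real BN layer also carries learned
    # scale/shift parameters and running statistics.
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    return [(v - mean) / math.sqrt(var + eps) for v in xs]

def relu(xs):
    # Rectifier non-linearity.
    return [max(v, 0.0) for v in xs]

def matvec(w, xs):
    # Stand-in for a convolutional layer: a plain linear map.
    return [sum(wij * xj for wij, xj in zip(row, xs)) for row in w]

def residual_block(xs, w1, w2):
    """conv -> BN -> ReLU -> conv -> BN -> residual add -> ReLU."""
    y = relu(batch_norm(matvec(w1, xs)))          # first conv, BN, rectifier
    y = batch_norm(matvec(w2, y))                 # second conv, BN
    return relu([a + b for a, b in zip(y, xs)])   # add the input, rectify

random.seed(0)
x = [random.gauss(0, 1) for _ in range(8)]
w1 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(8)]
w2 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(8)]
out = residual_block(x, w1, w2)
print(len(out), all(v >= 0.0 for v in out))  # 8 True
```

The key detail is the `[a + b ...]` line: because the block's input is added back in before the final rectifier, gradients can flow around the two convolutions, which is what lets such networks be stacked twenty or forty deep.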
@cgibbard · 6 years ago
At some point it would be really interesting to get Michael Redmond set up with Leela Zero, to perhaps play some games against it and do some commentary. Though it still has quite a way to go before reaching AlphaGo Zero's strength, it's already beginning to look similar in some ways in terms of style, and it is beginning to reach the upper pro levels when given enough computing power. It's improving all the time, and it has been a lot of fun to follow its progress over the past few months: from playing almost randomly, to being a kyu-level player that played the large-scale game very strongly but would mess up every local situation, to beating high amateur dans while still not understanding ladders, and now beginning to take games off professional players.
@Maharani1991 · 6 years ago
Yessss I'd love to see Michael play LZ :3
@dtracers · 6 years ago
Sadly, I think we should wait two months; by then LZ should be very, very strong.
@jasonli7209 · 6 years ago
As always, really enjoy the commentary from Redmond 9P. Just one suggestion, maybe change the title to Alpha Go Zero vs Alpha Go Lee?
@Maharani1991 · 6 years ago
+
@decidrophob · 6 years ago
35:00 Wow, now it's understandable that the Lee version seemed to miss the stone-pagoda squeeze (sekito shibori) shape tesuji in Game 5 of the Lee Sedol challenge match. The slow moves in the top right, missing a fundamental life-and-death tesuji, are simply below amateur dan level. Oh Meien commented during Game 5 that it was surprising AlphaGo (the Lee version) achieved that level of strength while still carrying the weaknesses inherent to Monte Carlo-based programs. Seeing this game, I have the feeling that Lee Sedol would have had a pretty high chance of winning, say, three of five games if he had been allowed sparring sessions with the program before the challenge match. The Lee version clearly has serious weaknesses, but it may be somewhat difficult to exploit them unless you know exactly what they are from playing the program many times. Sedol was extremely impressive in that he seemed to have found the weaknesses with only five games.
@Keldor314 · 6 years ago
20 block vs. 40 block refers to the number of layers in the neural network. More layers can in theory be stronger, but each evaluation is slower and training takes much longer. It's currently unknown just how much the layer count affects maximum strength. Leela Zero, for instance, uses 10 blocks and is currently believed to be similar in strength to top pros. However, it's still getting stronger as the training progresses, so it's anyone's guess what the limit is.
@-art- · 6 years ago
Consider doing it on Twitch, where people with Amazon Prime can support you for free with their subscriptions. You can still upload the same video to the AGA YouTube channel afterwards.
@RoryMitchell00 · 6 years ago
Refreshing change of pace with this game. I wonder if DeepMind are at times like proud parents as they pit their newest creation against their previous progeny, and watch as their latest achievement mops the floor with what was once considered state-of-the-art. It must feel just as strange for them to see the strength of AlphaGo Lee so easily humbled.
@kevint2995 · 6 years ago
I love these videos. Chris and Michael are so awesome; they got me into Go more and more. I do wish Chris could have a slightly better mic setup, but it's not a big deal overall.
@Maharani1991 · 6 years ago
Thank you, great video. :)
@KilgoreTroutAsf · 6 years ago
Hi. I just realized this is a Zero vs. Lee game, so the video title and thumbnail are wrong. Also, as a side note, could you state in future videos, in the title or the caption, who is Black and who is White? It really helps when searching for older videos or when resuming an unfinished commentary. Otherwise, great job as always; I am learning so much from this series.
@neilharper1858 · 3 years ago
It is AlphaGo vs AlphaGo: just the strongest version against the version Lee faced.
@DontbtmeplaysGo · 6 years ago
Thanks for the video, but the TITLE is wrong: there's no Master in this game; it's AlphaGoLee taking Black against AlphaGoZero_20blocks :) I'm really excited about this new series! I've been wondering if you'd take a look at these games, and I'm glad you will :)
@hippophile · 6 years ago
Nice video; nice little tsumego problem on the left too, I don't recall seeing that one! A bonus element! :-)
@RyanSmith-ow6cm · 6 years ago
The video title is wrong; this is not Master. Of course, thanks for the wonderful videos; these are the highlight of my week.
@hippophile · 6 years ago
Is AlphaGo Lee not an iteration of Master?
@alekerickson4301 · 6 years ago
same here!
@Rubrickety · 6 years ago
I'm really looking forward to seeing one of the few games where Master defeats Zero. Are those on the horizon?
@alekerickson4301 · 6 years ago
I think at the end there, Chris may be referring to S14? This move, played by AlphaGo Lee, was certainly not working, and it had an impact on the outcome... or did it?
@hookedonafeeling100 · 6 years ago
Guys, cut the crap: please always state who is playing whom in the description.
@fringd · 6 years ago
Hehe, loading joseki... ERROR
@infinitysalinity7981 · 6 years ago
Pretty sure "20 blocks" as opposed to 40 just means this AlphaGo is much less complex than the one that played as Master.
@bernardfinucane2061 · 6 years ago
The tenukis seem so disciplined...
@bobdinn6621 · 6 years ago
Zero is consistently computing the branches a little deeper than its opponent, so that overall Zero comes out about one point better per hundred moves. People are generally more variable in this; AI is not variable at all. AI is like good optics in a camera, or like the consistent strength of a pocket calculator. Redmond is an example of our variable calculating nature, so he guesses when the branches get more complex and deep. This is the key to understanding what the two AI programs are doing: it is just math, basically, one being a bit more accurate and deeper than the other. People naturally imagine various things, which can help overall in life, but in a case of pure computing the AI just stays on the math and therefore comes out ahead. That is of course my understanding only in a broad way, not from programming but mostly from watching chess computing in action; I don't think it's much different as applied to Go or other board-strategy challenges.