Comments
@di380 2 days ago
Two takeaways from this: first, humans seem to have a very good understanding of the game of chess and are able to handcraft evaluation functions that play as well as reinforcement-learning engines; second, two CNNs using Monte Carlo tree search can evolve into completely different solutions from the same exact implementation 😮
@zhangwei2671 4 months ago
Kotlin Notebook pls.
@YuriKhrustalev 4 months ago
Roman, nice to see you using vector db as well, greetings from Canada
@beattoedtli1040 4 months ago
Nice talk, but in 2024 Stockfish is still better than AlphaZero. Why?
@SebastianBeresniewicz 4 months ago
Very well presented and great content! I struggle with understanding some accents and maintaining focus with many presenters, especially if they are not great communicators but Dr. Grebennikov is very articulate and easy to follow. Thank you!
@jonabosman4524 4 months ago
Favourite talk of the conference!
@carlosfreire8249 4 months ago
Very helpful content, thanks for sharing.
@bisdakaraokeatbp 5 months ago
It's really annoying when you desperately need help and chatbots just redirect you over and over. They don't understand context, so companies using these are definitely a turn-off.
@gyanantaran 5 months ago
This was comprehensible, quite insightful too, thanks for sharing.
@urimtefiki226 6 months ago
I play and I do not think, just repeat the same things, wasting my time while waiting for the bullshitter, since 2016.
@primingdotdev 7 months ago
Great talk. A reasonable approach.
@dipanshukumar5504 9 months ago
You made an awesome video, but nobody actually cares about ML in Kotlin, because most people who want to learn ML choose Python over Kotlin.
@danruth1089 10 months ago
Thank you, but I disagree about that rook usage.
@michaelmassaro4375 7 months ago
That rook move was the best move; it's the only move to draw the game. Otherwise there is no stopping the queen from checkmating. You can try battling rook vs. queen, but that's a losing effort.
@berndmayer3984 1 year ago
The best investigation yielded approx. 10^42 positions, and that is what counts, not the rough estimate of 10^120 possible games.
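For a sense of where these two very different numbers come from, here is a quick back-of-the-envelope sketch. The branching factor and game length are the conventional round assumptions behind Shannon's game-count estimate, not measured values, and the position count uses the commenter's 10^42 figure:

```python
import math

# Illustrative arithmetic only: ~30 legal moves per position and ~80 plies
# per game are the usual rough assumptions, not exact figures.
branching_factor = 30
plies_per_game = 80

games = branching_factor ** plies_per_game
print(f"possible games ~ 10^{math.log10(games):.0f}")  # about 10^118

# Legal *positions* are a vastly smaller set (10^42 per the comment above),
# which is why games vastly outnumber positions.
positions = 10 ** 42
print(f"games per position ~ 10^{math.log10(games / positions):.0f}")
```

The gap between the two exponents is the point: the same position recurs in astronomically many games, so counting positions is the relevant measure.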
@XiaoshuoYe 1 year ago
This is a very very nice talk, thank you!
@wmchanakakasun 1 year ago
awesome!
@plunderersparadise 1 year ago
He sounds like he has no idea what he is talking about lol. Just a speech issue, I think. Sorry for the hate.
@michaelmassaro4375 7 months ago
He's not doing a good job of breaking down the mechanics, in my view, probably because of my own ineptness, but I was hoping for simpler terms and mechanisms to explain how the engines function.
@philj9594 1 year ago
Just started learning chess and I know only a little about computer science/programming but this was wonderful to gain a better understanding of what chess engines are actually doing under the hood when I use them and also a better understanding of their limitations. I've noticed many people talk about people over-relying on engines so I figured it would be a good use of my time to gain a deeper understanding of what a chess engine even is if I'm going to be using them regularly. Also, it's just interesting and fun to learn! Thanks for the amazing lecture. :)
@Magnulus76 1 year ago
Yes, it's possible to over-rely on computer chess. Stockfish and Leela are powerful engines, but they have gaps in their own understanding (particularly with chess concepts such as endgame fortresses, something that still baffles engines). They also don't always produce data that is relevant to learning chess; in particular, Stockfish's "thinking" is very alien and sometimes difficult to learn from.
@michaelmassaro4375 7 months ago
So engines can be used for studying lines or analyzing a game that was played, but it seems many players use them to cheat online.
@christrifinopoulos8639 1 year ago
About the Stockfish evaluation function: is it completely prewritten, or are there some (handwritten) parameters that can be optimised through learning?
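Historically, both: the classical evaluation's *form* is handwritten (a weighted combination of features such as material and mobility), while the numeric weights are parameters that can be fitted against game results. Here is a hedged, generic sketch of that idea in the style of "Texel tuning"; this is not Stockfish's actual code, and the features, weights, and data are invented for illustration:

```python
import math

def evaluate(features, weights):
    # Handwritten form: a linear combination of position features.
    return sum(f * w for f, w in zip(features, weights))

def win_prob(score, k=0.004):
    # Logistic mapping from a centipawn score to an expected game result.
    return 1.0 / (1.0 + math.exp(-k * score))

def total_error(data, weights):
    return sum((win_prob(evaluate(feats, weights)) - result) ** 2
               for feats, result in data)

def tune(data, weights, step=5, epochs=50):
    # Local search: nudge each weight up or down; keep strict improvements.
    for _ in range(epochs):
        for i in range(len(weights)):
            for delta in (step, -step):
                trial = weights[:]
                trial[i] += delta
                if total_error(data, trial) < total_error(data, weights):
                    weights = trial
    return weights

# Toy data: (features = [pawn diff, knight diff], observed result in 0..1)
data = [([1, 0], 0.7), ([0, 1], 0.8), ([-1, 0], 0.3), ([0, -1], 0.2),
        ([2, 0], 0.9), ([-2, 0], 0.1)]
weights = [50, 150]  # initial centipawn guesses
tuned = tune(data, weights[:])
print(total_error(data, tuned) <= total_error(data, weights))  # True
```

(Stockfish's newer NNUE evaluation goes further and learns the whole function, but the handcrafted-form-with-tuned-weights pattern is the answer to "prewritten or optimisable".)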
@desertplayz3955 1 year ago
I wanna see Stockfish pull a Jerome opening now
@A_Swarm_of_Waspcrabs 1 year ago
There's a YouTuber, Joe Kempsey, who forced Stockfish 14 to play the Jerome Gambit against MagnusApp
@nedafiroz514 1 year ago
Fantastic talk
@ruffianeo3418 2 years ago
There is one point that is usually never mentioned. I will try to explain that (rather valid) question below, hoping someone else will explain why it is not a concern.

Neural networks (deep or otherwise) act as function estimators. Here, it is the value function V(position) -> value. As was pointed out early in the talk, this must be an approximation, because it would be cheating the universe if it managed to be exact in the presence of such high numbers of possible positions (it would store more information than there are atoms in the universe). So an assumption is being made, and that is what is usually not elaborated: positions never seen before by the network still yield a value, and the means of doing that is some form of interpolation. But for this to work, you must assume a smooth value function (however high-dimensional it is): you assume V(P + delta) = value + delta' for small enough deltas. So for this to work, the value function for chess has to be smooth-ish. But where did anyone ever prove that this is the case?

Here is a simple example of the difference I am trying to point out:

f1 : Float -> Float
f1 x = x * x

If you sample f1 at some points, you can interpolate (with some error) values between the samples. So you train the network for, say, x in [1, 3, 5, 7, ...], and when the trained network is applied to values in [2, 4, 6], you get a roughly useful value (hopefully). Why? Because the function the network approximates is smooth. Here is another function, not smooth:

f2 : Float -> Float
f2 x = random x

Training a network on the x in [1, 3, 5, 7] cases does not yield a network that gives good estimates for the even x values. Why? Because that function is not smooth (unless you got lucky with your random numbers). So which of the above, f1 or f2, is more akin to a chess value function V(position) -> value? Who has shown that chess is f1-ish?
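The f1/f2 contrast above can be made concrete with plain linear interpolation standing in for any learned estimator (the sample points and random range are arbitrary choices for this sketch):

```python
import random

# Train on odd x, "predict" even x by interpolating between neighbours.
def lerp(x0, y0, x1, y1, x):
    # Linear interpolation between (x0, y0) and (x1, y1) evaluated at x.
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

train_x = [1, 3, 5, 7, 9]

# Smooth target f1(x) = x*x: midpoints are predicted with tiny error.
smooth_err = sum(abs(lerp(a, a * a, a + 2, (a + 2) ** 2, a + 1) - (a + 1) ** 2)
                 for a in train_x[:-1]) / 4

# Non-smooth target f2: independent random values; interpolation learns nothing.
random.seed(0)
f2 = {x: random.uniform(-50, 50) for x in range(1, 11)}
rough_err = sum(abs(lerp(a, f2[a], a + 2, f2[a + 2], a + 1) - f2[a + 1])
                for a in train_x[:-1]) / 4

print(smooth_err, rough_err)  # the smooth-target error is far smaller
```

For f1 the interpolation error at every midpoint is exactly 1 regardless of scale, while for f2 it is on the order of the random range itself, which is the commenter's point in miniature.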
@marcotroster8247 1 year ago
You don't just learn a value function but also a strategy to weigh the trajectories sampled during training, so you can concentrate on good moves and their follow-ups to distill critical positions out of the game tree. It's quite clever, actually. The strategy pi provides a probability distribution indicating how likely each move is to be picked by the agent. When sampling a trajectory, you pick moves according to the distribution (stochastically), so the training experiences are just an empirical sample of the real game tree. Then you fit the distribution to explore good trajectories more intensely by increasing their probabilities, and vice versa. (Have a look at policy gradient / actor-critic techniques if you're interested.)

So, to answer your question about smooth functions: you're usually only guaranteed to converge towards a local minimum of your estimator's error term. It's an empirical process, not an analytical one, so you cannot expect that from AI anyway. After all, you pick moves by sampling from a probability distribution to model the intuition of "this move looks good" 😉
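The sampling-and-reweighting loop described here can be sketched on a one-step "game" (a bandit). This is a generic policy-gradient illustration, not AlphaZero's algorithm; the moves, rewards, and learning rate are all invented:

```python
import math, random

random.seed(1)
logits = {"e4": 0.0, "d4": 0.0, "h4": 0.0}
true_reward = {"e4": 1.0, "d4": 0.5, "h4": 0.0}  # hypothetical move quality

def policy(logits):
    # Softmax: turn raw scores into a probability distribution over moves.
    z = sum(math.exp(v) for v in logits.values())
    return {m: math.exp(v) / z for m, v in logits.items()}

for _ in range(2000):
    pi = policy(logits)
    # Sample a move stochastically according to the current distribution.
    move = random.choices(list(pi), weights=list(pi.values()))[0]
    reward = true_reward[move]
    baseline = sum(pi[m] * true_reward[m] for m in pi)  # expected reward
    # Reinforce moves that beat the baseline, suppress the rest.
    for m in pi:
        grad = (1.0 if m == move else 0.0) - pi[m]
        logits[m] += 0.1 * (reward - baseline) * grad

final = policy(logits)
print(final["e4"] > final["h4"])  # True: mass shifts toward good moves
```

The same mechanism, applied along whole trajectories with a learned value baseline instead of a known one, is the actor-critic setup the comment points to.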
@congchuatocmay4837 1 year ago
@marcotroster8247 There are a lot of ways to go in a higher-dimensional space. If you get blocked one way, there are still 999,999 ways to go, for example; or if you get blocked 1,000 ways, there are still 999,000 ways to go. That is how these artificial neural networks can be trained at all: they completely turn the tables on the "curse of dimensionality."
@mohit6517 2 years ago
Can we get the code?
@allorgansnobody 2 years ago
Wow just 4 minutes in and this is an excellent explanation. Just knowing whether or not stockfish had these "handcrafted" elements is so important to understanding how it works.
@themanwhoknewtoomuch6667 2 months ago
Also didn't know Stockfish is classical. We tend to take engines as gospel.
@mohamedyasser2068 2 years ago
Attending such a lecture is a dream for me; I can't believe that most of them don't play chess!
@michaelmassaro4375 7 months ago
I play chess. I'm subscribed to a YouTuber who shows many Stockfish games, Jozarovs Chess, so I figured I'd take a look to see how the engines actually work.
@hrsger3760 2 years ago
Can someone give me some good reference material or guides for Time Series Analysis using Deep Learning?
@kevingallegos9466 2 years ago
Please, what is the song at the beginning of the video? I've heard it before and now I want to listen to it! Thank you!
@sunnysunnybay 2 years ago
Without analysing like a chess engine, I can see it's actually better for Black. Count nine pieces around the king; both queens are on the fifth rank relative to the king, so they are not included, but Black has a rook while no rook is near the white king. The pawn structure has one shape out for them too, against three moves for Black and a good defence around them with both pawns and major pieces. Black also has one pawn on the fifth rank in front of this weak king, while White has one pawn each on the h-, g- and e-files.
@AnthonyRonaldBrown 2 years ago
Stockfish 15 NNUE Plays ? The A.R.B Chess System kzbin.info/www/bejne/hHyuYoqYntOZhM0 Stockfish 15 NNUE Plays The (A.R.B.C.S) - Kings & Pawns Game - A.R.B :) kzbin.info/www/bejne/l3exp4aCft-Xerc
@kleemc 2 years ago
Great presentation. I deal with a lot of time series using deep learning. This lecture gave me some ideas to test.
@uprobo4670 2 years ago
I liked his take on GPT ... 1000% accurate and respectable answer ...
@trontonmogok 2 years ago
thank you for mentioning generative autoencoders
@Frost_Byte_Tech 2 years ago
It's because of content like this that I'll never get bored of trying to solve complex problems, really insightful and thought provoking 💫
@ME0WMERE 2 years ago
10:45 as someone who is actually making a chess engine: Haha no.
@zeldasama 2 years ago
The disconnect between professors and students. Lmao
@samreenrehman6643 2 years ago
They probably just suck at chess
@michaelmassaro4375 7 months ago
@samreenrehman6643 They might suck at chess, but then again they probably have a greater understanding of the elements the man is speaking on.
@andrescolon 3 years ago
Great talk on NLG! Simple, comprehensive and to the point. Thank you for doing this.
@kingshukbanerjee748 3 years ago
Awesome - excellent treatment - use-case by use-case
@vladimirtchuiev2218 3 years ago
I don't understand why you need the value function: if you have probabilities over possible moves, during deployment you will always select the argmax of the probability vector... Is it for victory/defeat flags or something like that? Also, after each iteration of the MCTS, is the network trained until convergence, or do you go over the self-played game only once?
@fisheatsyourhead 2 years ago
For timed games, isn't a value function faster?
@vladimirtchuiev2218 2 years ago
@fisheatsyourhead After some months of digging: yes, it's faster, because you usually don't have the time to search to the end of the game, and instead you use the values of the leaf nodes.
@amanbansll 1 year ago
I think there is another reason: learning the policy vector alone doesn't teach the model whether the current position is good or bad; it only learns the best thing to do in the situation. While this is enough to play, augmenting the learning process with another objective (multi-task-learning style, because the body is shared between both objectives and only the head differs) helps the model learn better. Just my thoughts, though; feel free to correct me if I'm wrong.
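The shared-trunk, two-head structure this thread describes can be sketched in a few lines. This is a toy forward pass only (not AlphaZero's architecture; the layer sizes, random weights, and input features are arbitrary):

```python
import math, random

random.seed(0)

def linear(x, w):
    # One output per weight row: dot product of the input with that row.
    return [sum(xi * wij for xi, wij in zip(x, row)) for row in w]

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

N_FEATURES, N_HIDDEN, N_MOVES = 8, 4, 3
body_w = [[random.gauss(0, 0.5) for _ in range(N_FEATURES)] for _ in range(N_HIDDEN)]
policy_w = [[random.gauss(0, 0.5) for _ in range(N_HIDDEN)] for _ in range(N_MOVES)]
value_w = [[random.gauss(0, 0.5) for _ in range(N_HIDDEN)]]

def forward(position):
    hidden = [math.tanh(h) for h in linear(position, body_w)]  # shared trunk
    policy = softmax(linear(hidden, policy_w))      # head 1: move distribution
    value = math.tanh(linear(hidden, value_w)[0])   # head 2: score in [-1, 1]
    return policy, value

policy, value = forward([random.random() for _ in range(N_FEATURES)])
print(round(sum(policy), 6), -1 <= value <= 1)  # 1.0 True
```

Because both heads backpropagate into the same trunk during training, the value objective shapes the representation the policy head uses, which is the "learns better" effect the comment suggests.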
@misterfisk7402 3 years ago
Thank you for uploading this. It is very informative.
@virtualvoyagers429 3 years ago
wow this is just amazing
@dominican5683 3 years ago
I hate chatbots. I miss the good ole days when you could simply press 0 and talk to a human who could fix your problems easy peasy.
@avlavas 3 years ago
Intel Core i7 11700K, Asus Z590 motherboard, 32 GB DDR4 RAM, 1 TB SSD, GTX 1060 3 GB GDDR5. Hi, this is my computer, and I use SF14, plus I have 100 GB of chess moves, but in a way I didn't get 100% out of it. Can you help me? Ty
@nielspaulin2647 3 years ago
Excellent teaching. I was a university teacher myself in the past!
@ahmadmaroofkarimi9125 3 years ago
Great talk!
@kyokushinfighter78 3 years ago
Interesting talk. I do MehOps nowadays... I don't bloody care about my stupid management...
@levelerzero1214 3 years ago
A lot of preparation with TensorFlow to create a project, but it's the real deal. I hope some day I find the time to dig into this.
@geoffreyanderson4719 3 years ago
Mr Henkelmann (DIVISIO) has supplied us with an excellent and informative video here. Thanks, buddy; I hope you make more, and good luck with your work there! It is such a good video because it's clear and brief and full of practical info I did not find elsewhere.
@YourMakingMeNervous 3 years ago
This is still by far the best lecture I've seen on the topic so far
@simovihinen875 3 years ago
This is very interesting... just finished the game. The accent is very strong though, and I'm kind of struggling to understand the speaker. Only worth watching for actual coders I think.
@saydtg78ashd 3 years ago
They should show the slide fullscreen so we don't have to zoom our eyes. We don't need the footage of the presenter speaking.
@ME0WMERE 2 years ago
they did? (Or very close to fullscreen anyway)
@your_average_joe5781 2 years ago
Footage is a term used for movie film. No film was used here so... No 'footage'👍
@kingsgambit 3 years ago
Very interesting contribution! However, the Windows executable files linked on the GitHub site are down (404 error). Could you check that?