What do you mean? Does it mean the previous one is not from data or what?
@YoussefMohamed-er6zy8 ай бұрын
WTF?! It pattern matches, and it doesn't choose optimum moves
@xaxfixho8 ай бұрын
Next a cure for cancer 😂
@antarctic2148 ай бұрын
1:42 Stockfish isn't handcrafted anymore. It also uses a neural network for eval, just a very small one that is optimized for incremental evaluation on a CPU (NNUE)
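For anyone curious what "incremental evaluation" means here, a toy sketch of the idea (not Stockfish's actual NNUE code; all sizes and weights below are made up): the first-layer accumulator is kept between moves and only the few features a move toggles get added or subtracted.
```python
import numpy as np

N_FEATURES = 768   # toy encoding: 12 piece types x 64 squares
HIDDEN = 256

rng = np.random.default_rng(0)
W1 = rng.standard_normal((N_FEATURES, HIDDEN)) * 0.01  # first-layer weights
w_out = rng.standard_normal(HIDDEN) * 0.01             # tiny output head

def full_accumulator(active_features):
    # Recompute the hidden accumulator from scratch: O(all active features).
    return W1[list(active_features)].sum(axis=0)

def update_accumulator(acc, removed, added):
    # Incremental update: a move only toggles a few features, so we just
    # subtract/add those weight rows instead of redoing the full pass.
    for f in removed:
        acc = acc - W1[f]
    for f in added:
        acc = acc + W1[f]
    return acc

def evaluate(acc):
    # Clipped ReLU followed by a small linear head, roughly the NNUE shape.
    return float(np.clip(acc, 0.0, 1.0) @ w_out)

position = {0, 65, 130, 700}                             # active (piece, square) features
acc = full_accumulator(position)
acc = update_accumulator(acc, removed={65}, added={70})  # a piece changed squares
print(evaluate(acc))
```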
@fireninja82508 ай бұрын
Does that mean it could be smarter/faster on a GPU?
@szymonmilosz998 ай бұрын
I think you still need to consider it handcrafted, as those NNUEs are trained on those handcrafted evaluations themselves, just at some search depth.
@许玄清8 ай бұрын
search is handcrafted no?
@charis78548 ай бұрын
Search seems to be handcrafted, but its parameters are tuned with some black-box optimiser. NNUE's dataset also contains games from the Leela engine 🤔
@ThePositiev3x8 ай бұрын
@@szymonmilosz99 actually SF authors used many Leela evals in the training process which were not handcrafted. So it's a far-fetched claim to say SF is handcrafted
@cosmosmythos8 ай бұрын
Now we need an algorithm to analyze why the chicken really crossed the road
@Axiomatic758 ай бұрын
I wish we lived in a world where chickens didn't have their motives questioned 😊
@wisdomking83058 ай бұрын
Can somebody explain this joke
@2019inuyasha8 ай бұрын
Chicken doesn't notice road just walks and runs around as desired
@Axiomatic758 ай бұрын
@@wisdomking8305 The joke "Why did the chicken cross the road?" is a classic with origins dating back to the mid-19th century. Its humor lies in the unexpected simplicity of the punchline, "To get to the other side," which subverts the expectation of a clever or humorous answer. The joke's enduring popularity stems from its versatility and ability to be interpreted in various ways. While it's traditionally seen as a play on words, some may also interpret it metaphorically, suggesting deeper meanings related to life and death.
@ibgib8 ай бұрын
Anyone who has chickens knows this to be 💯% "to 💩 on the other side". Sander's law: Anywhere a chicken can 💩, a chicken will 💩.
@wacky30228 ай бұрын
What a time to be alive!
@kpoiii77958 ай бұрын
Does it mean that this is the start of making full algorithms out of neural networks instead of black boxes which they are currently?
@calebbarton42938 ай бұрын
This is a fantastic question, so I am engaging with it in hopes more people see it!
@dustinandrews890198 ай бұрын
This is the real implication. Can we next show it snippets of code and have it decompile the algorithm from the weights and biases in the original model? That sounds like a typical "Two papers down the road" type leap. Inspection is lacking in current models, and that's a big problem.
@KaworuSylph8 ай бұрын
The footage accompanying that part of the video looked like frequency analysis (of node weights?) to pattern match a known chess move (algorithm). I don't know how you'd be able to isolate new algorithms the AI comes up with on its own - maybe something like a Fourier analysis to see if it's composed of the frequencies of multiple algorithms combined?
@miriamkapeller67548 ай бұрын
I don't know where you got that from... because what happened here is the exact opposite. It takes an algorithm, or more specifically two algorithms (position evaluation and search), and turns it into a black box that can somehow figure out the right move (or at least a very good move) instantly.
@jimmykrochmalska35018 ай бұрын
saying a chess AI can outperform GPT-4 in chess is like saying a sumo master can knock over a child
@tobirivera-garcia16928 ай бұрын
more like a sumo master beating a child in a sumo match, and the child doesn't even know the rules
@ananthakrishnank32088 ай бұрын
2:10 A "Lichess elo" of 2895 sounds more like IM (International Master) strength than GM strength. AlphaZero (DeepMind's AI chess engine) could easily be 3300+. Leela Chess Zero (Lc0), built using the same architecture, is 3600 today. 3:24 ChatGPT is not even good, in the sense that it was not made for chess. The Gothamchess channel has covered a video with GPT playing chess; at times it played moves that are illegal. Regardless, incredible results 👍🏻
@BooBaddyBig8 ай бұрын
No. GMs start at 2500. IMs are 2400-2500. Above 2700 are sometimes called 'super grandmasters'.
@ananthakrishnank32088 ай бұрын
@@BooBaddyBig That is in "real Elo", right? I always thought that "Lichess elo" is 300-400 points higher than real Elo.
@ruinenlust_8 ай бұрын
@@BooBaddyBig Lichess consistently rates everyone 300+ Elo points higher than their actual Elo
@andrewboldi478 ай бұрын
The main breakthrough is not that it's the best chess engine. It's that it can be run on very minimal hardware and none of its essential algorithms are specific to chess
@Tymon00008 ай бұрын
and lichess uses glicko-2 not elo
@benjaminlynch99588 ай бұрын
I'm really curious about the real-world benefit of this model beyond the fact that it's small and fast and can be run efficiently and locally on low-cost, low-power devices. I know this supposedly isn't about chess (???), but they essentially managed to build a worse version of AlphaZero 7 years later with a completely different model architecture. And yes, the new model can be applied to many domains beyond chess or even board games in general, so there's some usefulness in that, but... this just seems like a repackaged form of supervised learning that has been around for ages, and the result/accuracy doesn't seem that different from what older methods of supervised learning would yield. What am I missing here?
@geli95us8 ай бұрын
The fact that it didn't use search is very significant, as search is one of the most important factors in creating a strong chess engine; even a simple hand-crafted evaluation function can be extremely strong when paired with a powerful computer and a good search algorithm. There are 2 aspects I consider relevant here: 1) This AI has not been given any "hints" by humans, it had to figure everything out itself, unlike AlphaZero, which was given the search algorithm and only had to develop the evaluation function. 2) This AI is forced to spend a constant amount of compute on each position. It's not very useful at all in terms of chess, Stockfish is faster and stronger (it's the model it was trained on), but it can teach us a bunch of stuff about how transformers learn (the architecture LLMs are based on), and if we could decipher the algorithm this is using it would be incredibly interesting, I imagine, so it could be helpful in developing stronger chess engines in the future
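As a rough illustration of that constant-compute point, here is my own sketch in Python (not DeepMind's or Stockfish's code; `evaluate` and `policy_net` are hypothetical stand-ins): a search engine's cost explodes with depth, while the searchless model pays for exactly one inference pass per position.
```python
import chess  # python-chess

def search_move(board: chess.Board, depth: int, evaluate) -> chess.Move:
    """Plain negamax: cost grows roughly as branching_factor ** depth."""
    def negamax(b: chess.Board, d: int) -> float:
        if d == 0 or b.is_game_over():
            return evaluate(b)          # score from the side to move's view
        best = -float("inf")
        for move in b.legal_moves:
            b.push(move)
            best = max(best, -negamax(b, d - 1))
            b.pop()
        return best

    def score(move: chess.Move) -> float:
        board.push(move)
        value = -negamax(board, depth - 1)
        board.pop()
        return value

    return max(board.legal_moves, key=score)

def searchless_move(board: chess.Board, policy_net) -> chess.Move:
    """One inference pass, constant compute: the net ranks legal moves directly."""
    scores = policy_net(board)          # hypothetical: {move: predicted quality}
    return max(board.legal_moves, key=lambda m: scores[m])
```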
@Chris-b-28 ай бұрын
It strikes me as a thrust towards a two tier modeling system. The primary model learns about the system it is trying to infer on. The secondary models try to predict what the primary model would do. The second is _much_ easier to train and can be smaller for deployment onto low power devices.
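A minimal sketch of that two-tier idea, which is essentially knowledge distillation (toy PyTorch code under my own assumptions, not the paper's setup): a small, cheap student is trained to reproduce the outputs of a big, expensive teacher.
```python
import torch
import torch.nn as nn

# Big "teacher" (assumed already trained) and a small "student" for deployment.
teacher = nn.Sequential(nn.Linear(64, 1024), nn.ReLU(), nn.Linear(1024, 128))
student = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 128))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.KLDivLoss(reduction="batchmean")

for _ in range(1000):                          # toy loop on random "positions"
    x = torch.randn(32, 64)
    with torch.no_grad():
        target = teacher(x).softmax(dim=-1)    # what the primary model would do
    pred = student(x).log_softmax(dim=-1)
    loss = loss_fn(pred, target)               # student imitates the teacher
    opt.zero_grad()
    loss.backward()
    opt.step()
```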
@IiiiIiiIllIl8 ай бұрын
@@geli95us Still seems like a pretty small advancement to make a whole video on. Even given your explanation it sounds like a pretty meh change. With how rapidly AI is advancing, this shouldn't even be a footnote, let alone a whole video... Maybe I'm missing something too 😅
@percy92288 ай бұрын
@@geli95us Wasn't DeepMind's first go at AlphaZero using human games as data, then playing lots more games using that data to improve? I honestly don't understand what this video is showing us; why is it better than AlphaZero in any way? Isn't inference just using an algorithm in AI? Also it's a bit weird to say this doesn't use search, because Stockfish uses search... so it's trying to imitate Stockfish, which has had over a decade's worth of work put into it, and even then it's nowhere near Stockfish, and Stockfish has a stupid amount of search. Such a bad video, can't be that hard to explain the advantages over traditional methods. Seems like it was rushed
@Aerxis8 ай бұрын
You can train chess bots that imitate humans a lot better, for lower level ELOs.
@Jackson_Zheng8 ай бұрын
Sounds like the AI just learned to play blitz really well. It's just pattern recognition, no calculations are needed and humans can do it too.
@ayoCC8 ай бұрын
Seems like the purpose of this research was to discover how to make an AI output repeatable algorithms that could then maybe run more cheaply.
@PlayOfLifeOfficial8 ай бұрын
Raw intuition
@miriamkapeller67548 ай бұрын
Yes, it runs on pure intuition like human blitz players, but it can presumably not only beat the best blitz players in the world, but do so with only 270 million parameters. If you want to correlate parameters with synapses, that's smaller than the "brain" of a bee.
@Jackson_Zheng8 ай бұрын
@@miriamkapeller6754 Yes, and humans haven't seen billions of games and do not play chess 24/7 without rest. The network is highly optimised for this one specific narrow task and does nothing else. Not really a fair comparison, is it? ...Especially as bees and other insects have to pack sensory neurons, a flight controller, acceleration and gyroscopic processors, homeostasis systems, and much much more, into a similar number of neurons.
@miriamkapeller67548 ай бұрын
@@Jackson_Zheng I'm not saying a bee can learn chess, I'm just saying "a human can do it too" is not much of an argument when a human has about 1 quadrillion synapses compared to this tiny neural network.
@johnmarmalade43458 ай бұрын
Leela Chess Zero (or just Leela) tried a similar test to this one after the team saw the DeepMind paper. They compared how the most recent Leela weights performed against DeepMind's AI. The cool thing is that Leela performed a little better than the DeepMind one. The results are published on Leela's Blog.
@shoobidyboop86348 ай бұрын
The neural net embeds the lookahead library.
@cannot-handle-handles8 ай бұрын
Agreed, the video's take that no search was used was weird. The search used by Stockfish was translated into weights and biases. Still a cool result.
@chritical_ep8 ай бұрын
@@cannot-handle-handles From my understanding, it's "no-search" because it doesn't search more than one move ahead when playing a game. How it was trained doesn't matter
@cannot-handle-handles8 ай бұрын
@@chritical_ep Yes, that's also how I'd understand "no search", but the framing in the video was weird / a bit too sensationalist. So it's technically no search, but the neural net contains equivalent information.
@jswew128 ай бұрын
@@cannot-handle-handles If I am understanding the paper from my skim, the data it is trained on uses binned positional assessments generated by Stockfish. Are you saying that because of this, calling it "no search" is sensational? I feel like the underlying data doesn't really matter much as long as the system itself isn't using the approach, kind of like we wouldn't call a sentence-generating transformer "conscious" just because the data underneath was generated by conscious beings.
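For context, the binning the comment refers to looks roughly like this (a sketch based on the paper's description; the centipawn-to-win-probability conversion and the bin count here are illustrative assumptions, not the paper's exact constants): the engine evaluation is squashed into a win probability and dropped into one of K buckets, so the transformer learns a classification target.
```python
import math

K_BINS = 128   # illustrative; the paper discretizes win percentages into bins

def centipawns_to_win_prob(cp: float) -> float:
    # A common logistic squash of engine centipawn scores; the 400 scale
    # is an assumption for illustration.
    return 1.0 / (1.0 + math.exp(-cp / 400.0))

def win_prob_to_bin(p: float) -> int:
    # Map a win probability in [0, 1] to one of K discrete class labels.
    return min(int(p * K_BINS), K_BINS - 1)

print(win_prob_to_bin(centipawns_to_win_prob(150.0)))  # slight edge -> mid-high bin
```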
@Xamy-8 ай бұрын
Guys, stop being bloody stupid. It's novel because it means it can run in your browser, whereas the original could not
@DanLinder8 ай бұрын
Thanks!
@shinyless8 ай бұрын
What about the fact that the input/output parameter count is drastically reduced by iteration? Better to compute "just a move" rather than a whole board? Or am I misled? :)
@Chris.Davies8 ай бұрын
This is a clear derivation of the word prediction of the large language models. All these pseudo-smart systems are deeply impressive tools for humans, and I can't wait to see what we'll do with them!
@pinkserenade8 ай бұрын
This AI proves that it's possible to reach 2800 using pure intuition (pattern recognition) without calculating (searching)
@menjolno8 ай бұрын
6:52 very important disclaimer: if you try to create csam using microsoft AI, they will catch you. Please make sure to disclose that you don't really own the service.
@Thomas-ot5ei8 ай бұрын
If you thought you couldn't be more surprised… wow!
@miriamkapeller67548 ай бұрын
I tried this approach a while ago. The network I trained could at least beat me (~1000 elo), which I was satisfied with. I could have trained it longer, but this was 6 years ago and my graphics card was really bad back then. It was also a smaller network and not a transformer, but an AlphaZero-style convolutional network. One fun thing you can do with this is to condition on the rating of the player/chess engine to have the network mimic a certain skill level.
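A toy sketch of that rating-conditioning trick (my own construction, not the commenter's code): the target skill level is simply appended to the board features, so one network can imitate many strengths.
```python
import numpy as np

def encode_position(board_planes: np.ndarray, elo: float) -> np.ndarray:
    # board_planes: 12x8x8 one-hot piece planes; elo squashed to roughly [0, 1].
    rating_feature = np.array([(elo - 500.0) / 2500.0], dtype=np.float32)
    return np.concatenate([board_planes.ravel().astype(np.float32), rating_feature])

x = encode_position(np.zeros((12, 8, 8)), elo=1000.0)  # "play like a ~1000 player"
print(x.shape)  # (769,)
```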
@wealthycow56258 ай бұрын
If you adjusted the number of moves it could see, this could be a great engine for chess players to see actually human-playable lines
@galmacky8 ай бұрын
So this is just a distilled student model from Stockfish?
@Blattealkiller8 ай бұрын
Can they use this as a new evaluation function that they would use inside a search algorithm ?
@user-dh8oi2mk4f8 ай бұрын
No, you need a separate network for evaluating positions
@DeepThinker1938 ай бұрын
Imma call it now. This is a true breakthrough to the actual start of AGI...provided the paper is true. The ability to derive pattern learning algorithms from the model and duplicate them rapidly and place them in other models would potentially lead to the exponential growth we were actually expecting when AI was announced.
@c4fusion18 ай бұрын
For anyone confused, the real breakthrough is that this further proves transformers can reason better at the zero shot level with more scale. By making the neural network 10x bigger than AlphaZero, they were able to get way better zero shot performance.
@user-dh8oi2mk4f8 ай бұрын
Leela Chess Zero had already been experimenting with transformer networks for more than a year before this paper, this isn't a breakthrough
@propotkunin4458 ай бұрын
Does the comparison to GPT-4 make sense? I know little about AI, but is it really such a surprise that this algorithm is better at chess than a large LANGUAGE model?
@vladthemagnificent90528 ай бұрын
Dear Karoly, thank you for extensive coverage of the most exciting breakthroughs in computer science and more. But this one, you did not convince me this paper has any significant result. They deleted the search part and it performed worse than tree-search algorithms... duh... In fact it couldn't even make a mate in 3 on its own. That whole speech about the algorithms at the end seems to be on the opposite side of what was done in the paper. I don't get what's new or exciting about this particular paper. And I do not like the general sentiment that the transformer is now a solution for everything, just throw more compute into it. No, I believe it is still very important to develop different algorithms and different architectures to do really impressive stuff, and other videos on this channel illustrate this idea perfectly.
@EstrangedEstranged8 ай бұрын
The transformer is not a solution for everything but it solves way more than our ego would like to admit. The paper proves that even training on isolated examples can create inside the model structures similar to the things in the real world that produced the examples. The examples are enough to create (through training which is an evolutionary process) the behaviour that produced the examples. It's a chess proof against the "stochastic parrot" argument.
@joshuascholar32208 ай бұрын
Didn't it say that it's using a one move look ahead? I should read the paper, but I bet if you understood the details enough, you'd find that all kinds of look aheads are actually happening. In large language models, transformers make many passes with many specialized "key and query" networks.
@miriamkapeller67548 ай бұрын
@@joshuascholar3220 No. The "one move" it looks ahead is just the move it's going to make. There is no search. Now internally the transformer is obviously going to do some analysis, it will likely generate maps of where the opposite pieces could move, create some higher level danger maps and so on. But it's still only a single inference pass.
@vladthemagnificent90528 ай бұрын
@EstrangedEstranged I don't have my ego hurt in any way. If they put 1000 times more compute and memory into it, they would have had a program that plays chess better than everything seen before (although it still wouldn't know if it is allowed to do castling or en passant, without search and game history, lol). The problem I see here is that a bunch of people put a lot of resources into building a useless model to prove a point that had no need to be proven. OK, DeepMind can waste their money and time as much as they want, but I am confused as to why this is reported as an exciting piece of research.
@andreaspetrov59518 ай бұрын
The cadence of speech in this video differs from earlier ones so much that I can only assume it's AI-generated. It's borderline robotic in its precise timing.
@jasonruff12708 ай бұрын
What kind of computer is used to handle all these operations and parameters? Do they just use multiple GPUs for deep learning like this?
@DeepThinker1938 ай бұрын
lol "I love the smell of an amazing dusty old paper". xD
@gergopool8 ай бұрын
I think I'm missing the point. They did a supervised training on pre-generated, powerful chess moves and made a paper out of it? That sounds like something one would do as a simple baseline for further experiments.
@paroxysm64378 ай бұрын
Essentially, an AI learned by watching games instead of playing them. Traditionally, you train AI through playing itself and searching/calculating the most "optimal" move. Tens of millions of games are played/analyzed then stored into massive datasets. This AI didn't play any games and purely just "watched" another strong AI play. This is big not just for chess but a bunch of applications as you could theoretically train an AI for a fraction of the computing power and get relatively similar performance.
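A minimal sketch of that "watching instead of playing" setup (my own illustration, not the paper's pipeline; the engine path and search limit are assumptions): collect (position, expert move) pairs from an oracle engine, then train a purely supervised move predictor on them.
```python
import chess
import chess.engine

def collect_examples(n_games: int, engine_path: str = "/usr/bin/stockfish"):
    # Build (position, expert move) pairs by letting the oracle engine play
    # itself; these pairs then train a purely supervised move predictor.
    examples = []
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        for _ in range(n_games):
            board = chess.Board()
            while not board.is_game_over():
                result = engine.play(board, chess.engine.Limit(depth=10))
                examples.append((board.fen(), result.move.uci()))  # label = expert move
                board.push(result.move)
    finally:
        engine.quit()
    return examples
```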
@gergopool8 ай бұрын
@@paroxysm6437 They trained AlphaZero with no human training data because that was the goal they wanted to achieve. They were aware they could have done it with supervised data similarly to AlphaGo, but AlphaZero stood as proof that RL can do that task alone, tabula rasa. But just because something can be solved by RL, it's often not optimal. So now, years later, they show that a transformer can be trained in a supervised way to learn moves. Of course it can; their AlphaZero policy network did the same thing with a much harder learning curve. So I am missing the main contribution here. For me it sounds like a university project, but I'm very much open to arguments.
@MoritzvonSchweinitz8 ай бұрын
Huh. Way back in the day, AI researchers asked human experts tons of questions in order to understand their internal "algorithms" - with the problem that often, humans didn't even know how they came to certain conclusions. This algorithm-from-data thing could be groundbreaking for that - and, if I understood correctly - should reduce the 'model' size by a lot, because you could replace millions of neurons with algorithms.
@Adhil_parammel8 ай бұрын
From this paper we can infer that in order to get a human-level general language model, we need superhuman-level data.
@research4178 ай бұрын
Well, yes and no. Human language isn't a zero-sum game, you aren't trying to beat the other person (unless you're in a debate). For a general language model like Gemini and GPT-4 it's more about the breadth and size of the amount of data, you want it to be able to imitate very high level academic language, yes, but you also want it to be able to imitate your average casual conversation. I think the next step for a human level general language model is inventing a transformer that's more novel, being able to learn from itself or on its own without humans, so it can infer data on its own.
@jnevercast8 ай бұрын
I agree with 417, the ability to reach better-than-human artificial intelligence really hinges on being able to perform deductive reasoning faster than humans on more data than they could see in their lifetime. Subjectively, it doesn't seem unlikely that we could achieve that with in-context learning alone, which could be a year away.
@dustinandrews890198 ай бұрын
I think you have a point. GPT type models write like an average, if overly polite, internet user. It takes a lot of prompt engineering to get it to be brief, stop apologizing, and stop explaining like I am five. So that data's not going to cut it.
@jnevercast8 ай бұрын
@@dustinandrews89019 They write however they're tuned to, but I'd say the RLHF had a big effect on OpenAI GPTs feeling mid. I'm not sure how they accomplished the difference, be it fine tuning or just better data cleaning, but Claude 3 feels a lot better. Though there is a tradeoff, Claude doesn't fear jargon, and sometimes that makes communication slightly more challenging. Though of course it can explain what it meant, so we all get smarter.
@vik24oct19918 ай бұрын
humans use search so no.
@ansklfnaskidfhn-hi6zg8 ай бұрын
Leela Chess Zero surpassed it last year. Hopefully Google gives their vast resources to the Leela team so they can make it even better.
@STONJAUS_FILMS8 ай бұрын
even tho I don't completely understand the how, this sounds like the biggest leap I've heard of in the last year. Sounds like there is room for crazy optimization of way bigger models. ... wow
@peterkonrad43648 ай бұрын
i tried something similar about 15 years ago for connect 4. i had a very good traditional connect 4 program (like stockfish for chess here), and i let a neural net learn boards and what my player would do. but i never got it working as intended. i always had a feeling that i simply didn't give it enough training examples. i wanted to go to tic tac toe, so i could maybe give it enough examples with the computing power i had available back then, but then i lost interest. nice to see that it works! even on chess!
@vik24oct19918 ай бұрын
this is akin to rote learning with some pattern recognition: you analyse so many board positions from a top engine that you literally know the best move in each of those positions, and for positions which are unseen you do pattern recognition (this part is shady because there is no real way to know if logic was used to make the move, as search is not used).
@banggiangle82588 ай бұрын
To me, the breakthrough of this paper is kind of limited. Firstly, it seems that this is only supervised learning on a massive dataset of chess games. Secondly, the ground truth of the action is taken from Stockfish, which is already a strong chess engine. Stockfish 16's Elo rating is around 3600, while the learned model achieves a 2800 rating, which makes sense but is not surprising to me at all. Finally, the whole point of searching and self-play is that we don't know what the right moves are, so the machines have to come up with new moves on their own. In this way, the machines can make up things that can even surprise experts. The hard part of finding good moves in the first place is already done by Stockfish, and given that the labels are provided by Stockfish, the end performance is always upper-bounded by the performance of Stockfish.
@DanFrederiksen8 ай бұрын
top human players don't calculate either for the most part (non classical). It's shocking how shallow their compute is. they have learned a function to make the call without branching. the game with magnus and polgar in the park showed me that
@paroxysm64378 ай бұрын
this is false lol, humans, especially in classical, do very deep thinking/calculation - even Magnus. although in other time formats like blitz, it's more intuition
@DanFrederiksen8 ай бұрын
@@paroxysm6437 true in classical they do calculate but in faster time control it's just a glance. Hikaru plays a lot on camera and you can hear his decision process. He has an immediate impression of the board and he'll say something like I guess I'll go here? and there is no calculation going on at all. And it's typically out of theory. And I have no doubt he's still playing in the 90s of accuracy. So much so that Nepo has accused him of cheating.
@Johan511Kinderheim8 ай бұрын
Now give it various openings and positions to see where its performance is at its maximum and minimum. Then we would have an idea of which openings lead to play that's easier and more intuitive
@exsurgemechprints26718 ай бұрын
this paper proved that you can have capable neural networks just with datasets that are large enough.
@feynstein10048 ай бұрын
Aha! So size does matter. I knew it
@henrytep88848 ай бұрын
@@feynstein1004 Uhh no, because size is too costly. What really matters is efficiency, so getting the job done really really really fast
@dreadowen6168 ай бұрын
Can you imagine the specs of the machine needed to run this?
@PaulanerStudios8 ай бұрын
This is confusing to me. Why try to replace the even tinier architecture that relies on UCT? It is orders of magnitude more sample-efficient. Why replace it with a vastly less data-efficient approach that compresses Stockfish's search algorithm? Just to get rid of tree search? Or to get rid of having to train in an environment? It's kind of cool with regard to imitation learning, but the sample efficiency is incredibly bad.
@research4178 ай бұрын
Like Károly mentions in the video it isn't really about chess or creating a better chess algorithm. Reading the paper, it's pretty clear they're testing the viability of large transformer models for tasks other than language. If you think about how successful text and image based transformer models like ChatGPT, DALL-E, AlphaFold, and other similar models have been, it's understandable. If all you need for an AI that can perform at superhuman levels is to just implement a transformer model and feed it a bajillion parameters, then think about how useful that could be. Chess is also a great example to train AI with, because it's nearly been solved (computers perform better than humans), and there is an almost infinite amount of data to train on. The implications of papers like these are important, because if they can improve their models more, say they used a trillion parameters and refined the techniques they use, and they got a model to consistently perform at 4000 ELO, that would imply transformer networks can eventually surpass the data they've been given and start performing at a higher level. Those techniques could also be used as inspiration for other networks, improving the field as a whole.
@jnevercast8 ай бұрын
Sample efficiency has tradeoffs depending on the problem you're learning, for example, self-play is reinforcement learning. Say you didn't have a simulator of the environment, or if the simulator was computationally very intensive, then perhaps reinforcement learning might take more clock time than sampling billions of games in parallel. Supervised learning scales better than reinforcement learning for hard problems.
@dustinandrews890198 ай бұрын
Short answer: They did it in a better way with less resources. This one can run on a consumer grade PC. Sort of like replacing a 747 with a Civic for cross town jaunts.
@PaulanerStudios8 ай бұрын
@@jnevercast True, but it's also hard to find a billion examples of tasks performed that can be learned from in most cases. The chess problem illustrates this quite well. They could get an easy 15 billion samples because it is a synthetic/cheap-to-compute environment. If the data is taken from the real world, in most cases this is infeasible. If the data is already taken from a synthetic environment, why not run the more sample-efficient algorithm? You have to compute the environment either way. If they just want to bootstrap a transformer for faster inference from the trajectories another agent has taken, then fine. But then it's not nearly as cool as it's made out to be.
@PaulanerStudios8 ай бұрын
@@dustinandrews89019 I can train and run a MuZero that plays chess at >3500 Elo on my 4-year-old laptop. Backpropping through 270 million parameters of a transformer 15 billion times... not so much.
@alanESV28 ай бұрын
“New” from 2017. AI goes by quick
@TreeYogaSchool8 ай бұрын
Looks like AI has us in checkmate, but I want a rematch! Great video. Thank you for this information.
@lasagnadipalude89398 ай бұрын
Give it eyes and it could tell you the laws of physics we still don't understand or know
@JoshKings-tr2vc8 ай бұрын
If only they could learn from fewer datapoints. That would be an absolutely crazy time.
@pierrecurie8 ай бұрын
Did the title change just a few hrs after release? Was this channel always so reliant on clickbait?
@Bwaptz8 ай бұрын
Your enthusiasm is always inspiring!
@Chorm8 ай бұрын
That would be crazy if P=NP turned out to be true.
@usualatoms48688 ай бұрын
Let us know if someone leaks a military tactics AI. I'd be utterly surprised if there weren't several already.
@philanthroperadical8 ай бұрын
I've been waiting for this for several YEARS! I'm doing my own research and it's always interesting to see where others are at. And I feel the urge to solve chess too lol
@MrSongib8 ай бұрын
Next, we do Shogi.
@PlayOfLifeOfficial8 ай бұрын
There are a lot of unanswered questions here
@Ghost-pb4ts8 ай бұрын
"If we could use the Sharingan, this is how it would have felt."
@MrVbarroso8 ай бұрын
Damn, I just read this paper yesterday! That's a coincidence and a half.
@Nonkel_Jef8 ай бұрын
No selfplay? Why would they do that?
@adamweishaupt37338 ай бұрын
If this is ChatGPT's next brain compartment (after GPT and DALL-E) we might be getting awfully close to an AI that can tell truth from fiction, which sounds pretty general to me...
@P-G-778 ай бұрын
Great method. The game is basic, elementary, mathematical... but superb for the pursuit of superiority, elasticity, stubbornness, I would say almost perfection. Something that, in the end, makes us proud to see how much we are doing for our future.
@yanushkowalsky14028 ай бұрын
I thought chess AI was already at a level where it can force a draw anytime it can't win
@blackmartini76848 ай бұрын
But can it play 5D chess with multiversal time travel
@manavkumar3488 ай бұрын
We should have chess tournaments where different types of AIs play against each other
@Benjamin-yq8yl8 ай бұрын
They already exist 😁
@Adhil_parammel8 ай бұрын
CCRL
@quellepls25688 ай бұрын
Where can I download that AI?
@许玄清8 ай бұрын
You can't, but this neural network is just a marketing stunt. It is weaker than the Leela Chess Zero network even at 1 node.
@milseq8 ай бұрын
AI has been beating grandmasters since the 90s...
@dustinandrews890198 ай бұрын
Potentially running on a phone? Granted the phones are a lot better now...
@SageMadHatter8 ай бұрын
What is actually new here? I’m sure Deepmind published something interesting, but this video fails to properly explain what that was. It sounds like any other neural network that has been trained on data.
@gwentarinokripperinolkjdsf6838 ай бұрын
Sounds to me like just the result is surprising
@AySz888 ай бұрын
He seems to explain it in the latter third of the video, no? The main point is that you wouldn't want an AI to need to experiment in the real world in order to learn, or to run lengthy simulations during usage to fill the gap on weaknesses. For strong performance, good expert data is enough.
@VeeZzz1238 ай бұрын
Skill issue.
@JayYu-lr4ro8 ай бұрын
@@AySz88 Isn't pre-training already known from ChatGPT and such?
@miriamkapeller67548 ай бұрын
The achievement is merely that it's a lot better than past attempts. Even I trained a chess neural network that plays without search many years ago; it just doesn't perform as well.
@Maltebyte28 ай бұрын
It's pre-AGI! It's here soon!
@miriamkapeller67548 ай бұрын
This is as far from AGI as it could possibly get. This is a classical and highly specialized neural network that can do only a single thing and nothing else.
@Maltebyte28 ай бұрын
@@miriamkapeller6754 That is true, but this capability can be expanded beyond chess. And OpenAI probably already has AGI, but it's not available to the public.
@Johan511Kinderheim8 ай бұрын
@@miriamkapeller6754 This translates to other fields. Give AI a bunch of data about how a human acts in certain situations and the AI would be able to mimic humans
@ILLUSIONgroup8 ай бұрын
What happens if it plays tic-tac-toe with itself like in the movie War Games? 😄
@marcio.oliveira8 ай бұрын
What a time to be alive!!!! 😄
@AviweZathu8 ай бұрын
😌 I need this AI for checkers as AR sunglasses, like those Meta Ray-Bans
@Dark_Brandon_20248 ай бұрын
algorithm that creates an algorithm that creates an algorithm that creates an algorithm - god
@ekkehard88 ай бұрын
So it performs at 1000 Elo less than Stockfish...
@phen-themoogle76518 ай бұрын
Humans aren't very good at chess (compared to top engines); a lot make guesses and aren't completely sure in many positions. It doesn't surprise me that an AI learning from billions of Stockfish moves in various positions transfers over to very high-level play; even if the positions are a bit more random, it can still compare how similar something is to another position, and sometimes extra pawns or something don't matter when positionally you have faster tactics or combinations. Maybe it learns combinations in context from so much data, which makes it strong at tactics, giving it that kind of rating. Heck, it can even learn openings and every part of the game from that many moves... that's a lot. Imagine a human remembering 15 billion random moves (if I remembered the number correctly); even top players only have a few thousand positions/games memorized (except Magnus, who has 10k+). If there are on average 50 moves per game, maybe Magnus has 50,000 random positions/moves down, and so yeah... actually, compared to how little humans train on, that AI thing kinda sucks for having to look at so much data and still only be around human GM level. Quite inefficient. When you make an AI that you didn't teach to play chess, and it somehow plays at Stockfish level or better, then I'll be extremely impressed. GPT-6 or GPT-7? Right now it's just doing copy-cat with some RNG at a high level.
@panzerofthelake44608 ай бұрын
I'm all in for those raytracing algorithms 🤤 if this makes better raytracing algos, I'll be a happy scholar
@littlegravitas98988 ай бұрын
Given it creates specific algorithms rather than data piles, does this help with explainability?
@punpck8 ай бұрын
Chess without search 🤯🤯🤯
@downey66668 ай бұрын
This is a giant leap forward.
@stevemeisternomic8 ай бұрын
We need an AI that can learn concepts and strategies, so that it can generalize from much less information.
@tubesteaknyouri8 ай бұрын
I was thinking something similar. In light of its training set of 15 billion examples, it seems somewhat trivial to boast that it doesn't use explicit search.
@Johan511Kinderheim8 ай бұрын
But they do have concepts and strategies. They understand chess the same way we do, but better
@diamondjazz20008 ай бұрын
But if this is trained on board positions generated by algorithms (or people) looking ahead, that information would be encoded in the board position. This is just an artifact of a game like chess that doesn’t depend on psychology or history. This same approach wouldn’t work well in poker. This feels overhyped. Of course it’s better than a large language predicting model.
@juhotuho108 ай бұрын
Imitation learning on stockfish? That is probably the most boring approach they could come up with
@lvutodeath8 ай бұрын
You'd be surprised how many discoveries are made from "boring experiments".
@Daniel-xh9ot8 ай бұрын
Why didn't they use AlphaGo Zero instead of Stockfish, I wonder
@jnevercast8 ай бұрын
Sure, but there could be good reasons for that. Stockfish isn't a neural network, for one; another is that the paper is about demonstrating learning from expert data, which is hard to do well.
@miriamkapeller67548 ай бұрын
@@Daniel-xh9ot Why? Stockfish is a lot stronger.
@Daniel-xh9ot8 ай бұрын
@@miriamkapeller6754 I meant an implementation of AlphaZero for chess instead of Stockfish, wouldn't that output better moves than Stockfish?
@craigsurbrook57028 ай бұрын
Why did the robot cross the road? Because it was carbon bonded to the chicken.
@the_master_of_cramp7 ай бұрын
So basically, first we solve a problem using a neural net, then we apply this method to learn the underlying algorithm that best resembles the decisions made by the neural net? Yea sounds good. That's probably how humans also do it. First we experience the world, and get some intuition about it, learn to reflexively act. Then, with some conscious thought, we infer rules from these intuitions.
@timrose43108 ай бұрын
How is that different than any old neural network? Nothing new here??
@paroxysm64378 ай бұрын
One learned via watching another person/AI play. Another learned via playing tens of millions of games/attempting to figure out the best move.
@mahdoosh19078 ай бұрын
i saw you changed the title
@Julzaa8 ай бұрын
Love Anthropic
@dolorsitametblue8 ай бұрын
Is this voice AI-generated? Intonations are all kinds of wrong, even more so than usually.
@vnagaravi8 ай бұрын
How many did you see? AI: 15 billion. How many did I win? AI: 0. So there isn't a chance until it freezes/crashes
@AndersHaalandverby8 ай бұрын
Very interesting, as usual. One problem with engines like Stockfish is that they play... well... like an engine. It would be interesting to train this not on Stockfish, but on human/grandmaster games; maybe it would play more "human-like" moves? I think that would be a really powerful learning tool for chess players.
@GilBoewer8 ай бұрын
I'm always thinking could Dr. TMP be an AI himself from the way he talks
@TwoMinutePapers8 ай бұрын
I can assure you that every episode is me flipping out behind the microphone while looking at these papers! 😀
@shadowskullG8 ай бұрын
@@TwoMinutePapers With the quality delivered, you get the credit 😅
@davechaffey34938 ай бұрын
Brilliant! If AI was ever going to take over the world, this would be the perfect plan!
@oyashi78 ай бұрын
Isn't this basically the ReBeL algorithm for poker…
@atakama23808 ай бұрын
It's not about winning, it's about sending a message :)
@CesSanchez8 ай бұрын
I still don't get why they keep on saying that Deep Blue was an AI. It was an engine, like stockfish, with nothing to do with AIs.
@hyperpony48658 ай бұрын
True AI does not exist; they are all more or less sophisticated handwritten algorithms. Whenever an AI achieves a previously impossible goal, we retroactively push the definition of AI a little further
@Aerxis8 ай бұрын
Yes, according to the original definition, solving problems is intelligence. Hence, any non-living contraption that solves problems is A.I. Engines are definitely A.I. according to that definition.
@arw0008 ай бұрын
The meaning of AI changes over time. Generally, AI is whatever is just slightly out of reach at the time that the speaker is being recorded...
@Aerxis8 ай бұрын
@@arw000 I agree on the moving goalposts, but I disagree on AI being what is out of reach. Most of us call ChatGPT an AI as it currently is, which is certainly not out of reach.
@arw0008 ай бұрын
@@Aerxis I guess that is something that's changed more recently. Although there are still people who will look at a language model's ability to solve various problems and learn from input information and say "It doesn't 'understand' what it's doing (whatever that means)". And in that sense the spirit of it is carried on into today... haha
@mathsciencefundamentals31688 ай бұрын
Just to add, Stockfish is not one of the best, it is the best 😅
@thehealthofthematter10348 ай бұрын
Great...but...can we play with it?
@Leto2ndAtreides8 ай бұрын
This sounds like pattern recognition - across the various states of the game.
@Culpride8 ай бұрын
I see a notable step forward on the alignment problem in this approach: machines not learning from machines with human-set goals, but machines learning approaches and goals from humans. Now we only need a lot of data of desirable human behaviour. Oh... I see a new problem arising...
@joannot67068 ай бұрын
A step towards a Neural Network based AI that creates a symbolic or neuro-symbolic AI that runs blazingly fast?
@romanklyuchnikov-ym3ul8 ай бұрын
that voice MUST be AI-generated, right?
@joshuascholar32208 ай бұрын
It did say that it's using a one move look ahead, right? One move is two half moves, by the way. So "no move" wasn't good enough.
@miriamkapeller67548 ай бұрын
There is no look-ahead.
@EobardUchihaThawne8 ай бұрын
lol, I made a similar project for fun with only a 10K dataset 😂
@EobardUchihaThawne8 ай бұрын
if i only had 14.99999B more😢
@wauthethird8 ай бұрын
man, this channel really fell off
@MindBlowingXR8 ай бұрын
Interesting video!
@nightthemoon84818 ай бұрын
2800 elo is laughable for a chess bot
@TFclife8 ай бұрын
This is without search, i.e. without seeing many moves ahead, are you listening? If Stockfish had a depth of 1, I doubt it would perform as well.