Was "Machine Learning 2.0" All Hype? The Kolmogorov-Arnold Network Explained

103,704 views

bycloud

1 day ago

Comments
@bycloudAI 7 months ago
Streamline AI task delegation with HubSpot's Free Playbook: clickhubspot.com/9yu and check out my newsletter 😎 mail.bycloud.ai/
@sonOfLiberty100 7 months ago
Hmm, don't you know that machine learning is a subset of artificial intelligence?
@gameboyplayer217 7 months ago
Why don't we combine both for more optimal results?
@ThatTrueCJ201 7 months ago
What KANs are really cool for, in my opinion, is finding mathematical functions in data where none were known before. And since we know a lot about mathematical optimisation and things like the Taylor/Fourier series, we could theoretically compute the input-output relationship much more cheaply (inference becomes a commodity). Training would be more expensive, however.
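A minimal sketch of the "cheap inference from a recovered formula" point above; the quadratic target here is invented for illustration and is not from the video or the KAN paper:
```python
import numpy as np

# "Hidden" relationship plus noise, made up for the example.
x = np.linspace(-2, 2, 100)
y = 3 * x**2 - x + 0.5 + 0.01 * np.random.randn(100)

coeffs = np.polyfit(x, y, deg=2)   # "training": recover the polynomial coefficients
print(coeffs)                      # approximately [3, -1, 0.5]
print(np.polyval(coeffs, 1.7))     # "inference": a single cheap evaluation
```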
@nyx211 7 months ago
I watched a talk by one of the authors and it seems like KANs are more useful for people doing science with relatively small models. For LLMs and image generators, however, knowing the exact mathematical function doesn't seem to be very useful.
@adamrak7560 7 months ago
what about training with GELU/SELU as usual, and converting it later? Interpretability usually is done _after_ training is done anyway.
@Filup 7 months ago
I am curious as to whether there will be applications with PINNs in the future, given this possibility
@flowerpt 7 months ago
Hey, KAN! Hiya, BAR-B. You wanna go for a spline?
@hannen758 7 months ago
😂😂!
@bendikarbogast1229 7 months ago
I am KANough!
@rothauspils123 7 months ago
Still waiting to wake up and realize all of this was just a dream.
@4.0.4 7 months ago
GPT? Computers that can draw? Bro it's 2005 wake up.
@csiguszfoxoup 7 months ago
@@4.0.4 god I wish
@justsomeonepassingby3838 7 months ago
Don't worry, transformers are still unable to do anything they haven't learnt from their dataset
@nescaufe1991 7 months ago
Favorite comment of who knows how long
@underscore. 7 months ago
​@@justsomeonepassingby3838 they definitely can.
@Steamrick 7 months ago
Are you sure that a KAN will save VRAM? Yes, you need fewer parameters, but unless I misunderstood the video, wouldn't a KAN need much 'bigger' parameters than a highly optimized MLP? A function should need a lot more bits to store than a 4-bit or 8-bit parameter.
@angelorf 7 months ago
I don't think they would have counted a whole spline as a single parameter. A 1D B-spline with 4 control points simply has 4 parameters.
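To make the counting concrete, here is a small sketch of a 1D B-spline "activation" with 4 control points and therefore 4 learnable coefficients; the knot placement and sizes are arbitrary choices for illustration, not from an actual KAN implementation (which also learns/refines grids):
```python
import numpy as np
from scipy.interpolate import BSpline

degree = 3
coeffs = np.random.randn(4)                               # the 4 parameters
knots = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)   # clamped cubic knot vector
spline = BSpline(knots, coeffs, degree)

x = np.linspace(0.1, 0.9, 5)
print(spline(x))                                           # per-edge activation values
```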
@AleatoricSatan 7 months ago
Exactly, now you get to have fewer layers & fewer parameters per layer, but your parameters are up to n times bigger. Except if they count simpler cases (e.g. some curves are simpler than others, so fewer data points), they could shave off some low percentage of the size there (10-15% perhaps? Just pulling a random estimate). If that is the case though, I do not understand why we can't just enrich MLPs with b-spline nodes when necessary and wrap this up; networks that mix multiple different activation functions are pretty common today. Instead it seems like everyone is desperate to announce and hype the next best thing.
@WaefreBeorn 7 months ago
I'm using GPT-4 to design a KAN b-spline stem separation model, KAN-Stem. This has ballooned the RAM usage due to layer training parameters; there is no efficiency gain. What I get is that layer complexity and the weighting structure cause the initial abstraction into RAM to skyrocket. My basic 5-example model with one-second chunks, when test-run on CPU only, estimated 854 GB of RAM usage, and I only have 64 GB. Right now I'm making a caching and parsing system to step through the training process as a RAM swap with cache to prove the viability. IMO KAN is better for high spline prediction (1 input, 7 outputs), which is why I chose it for audio stem separation.
@jeremykothe2847 7 months ago
@@WaefreBeorn In my testing you should be able to use far smaller layers for a KAN network to solve a similar problem. It's very situation specific though as you note.
@WaefreBeorn 7 months ago
@@jeremykothe2847 that’s exactly what I did! A profiled 2 layer KAN network! I’m right now fixing memory management issues
@Guedez1 7 months ago
Ok, but when Kan we use it? :^)
@johndank2209 7 months ago
Probably in 2 or 3 years you will see tech demos, the same way GPT-2 was introduced.
@raspberryjam 7 months ago
whenever the gpu wizards grace us
@KostasOreopoulos 7 months ago
In mathematics we have "generalized linear models". The simple explanation is that we know linear regression. What they forget to teach (not always) is that in order for that to work, all parameters and the result should have the same distribution, for example Normal. What happens when they don't? We have to transform the output of the regression from one distribution to another (or the other way around). This is easy for exponential-family distributions. Those S functions (or ReLU) are transformers from Normal to Categorical (we call that logistic regression). But that is not always accurate, of course. It has been proven good enough, though. In theory we could have different transform functions that better map between those distributions. So the idea is pretty simple, and I guess for many cases where logistic regression is obvious, it will fall back to the obvious S-like functions. It would be interesting if that could be adaptive, meaning starting with simple ReLUs and, by some criterion, increasing the spline points etc.
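A minimal sketch of the GLM view described above: a linear predictor pushed through a link function (here the inverse logit, the "S function" of logistic regression). All numbers are arbitrary and only illustrate the mechanics:
```python
import numpy as np

def inverse_logit(eta):
    # Link for a Bernoulli/categorical target: maps the real line into (0, 1).
    return 1.0 / (1.0 + np.exp(-eta))

X = np.random.randn(8, 3)           # 8 samples, 3 features
beta = np.array([0.4, -1.2, 0.7])   # regression coefficients
eta = X @ beta                       # linear predictor
p = inverse_logit(eta)               # transformed to the target distribution's mean
print(p)
```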
@UnbornIdeas 7 months ago
Is it KAN-enough? We don't know but we'll find out eventually!
@efraim6960 7 months ago
I cannot believe My Little Pony powers the AIs that I regularly use.
@bresevic7418 7 months ago
It's true, and Nvidia currently has a massive hold on manufacturing the power of friendship, which is why they're dominating the stock market. The GPUs are a side business.
@Nekroido 7 months ago
I was confused why the activation function should be a static sigmoid. I'd just come from FP to study ML and it made total sense to have those adjustable along with the weights. 10x more efficiency is pretty impressive on paper tbh. Really looking forward to seeing what researchers will achieve with KAN
@Woollzable 7 months ago
Mate, sigmoid is barely used anymore unless it's for the output layer. Sigmoids are used as an introduction to artificial neural networks / DL; most people stopped using them years ago due to the vanishing gradient problem. There are many activation functions used in intermediate layers that are far more effective.
@Nekroido 7 months ago
@@Woollzable thanks for the insight. Indeed, I only did introduction to ML, and had to go back to study related topics in mathematics. I didn't even remember the name of the function from that introduction, but sigmoid was mentioned in this video as an example
@inconformada1000 7 months ago
What about the bias? You can change it too 2:05
@anthonychiang3182 7 months ago
biases can be represented as weights
@daycred 7 months ago
@@anthonychiang3182 And how would you represent an offset as a multiplier?
@inconformada1000 7 months ago
@@daycred Well I guess it could, but it would be computationally inefficient; bycloud just let that one slide.
@anthonychiang3182 7 months ago
@@daycred constant node of 1 as input for each layer, then just adjust the weight of that node’s edge to the next layer
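The constant-1 trick described above in a few lines of NumPy; the matrices are made up for illustration:
```python
import numpy as np

x = np.array([2.0, -1.0])
W = np.array([[0.5, 1.0],
              [1.5, -0.5]])
b = np.array([0.1, -0.2])

y_explicit = W @ x + b                      # explicit bias

W_aug = np.hstack([W, b[:, None]])          # bias folded in as one more weight column
x_aug = np.append(x, 1.0)                   # constant-1 input node
y_folded = W_aug @ x_aug

print(np.allclose(y_explicit, y_folded))    # True
```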
@daycred 7 months ago
@@anthonychiang3182 Ahh, now I get what you mean. They're not though, and words have a meaning, so the OG comment is still right. And besides, at that point that node basically has a bias of its own, though I guess it isn't trained itself.
@AlexLuthore 7 months ago
I really like that KAN isn't a black box. That's huge for alignment.
@jeremykothe2847 7 months ago
So which spline shape are you looking for to explain "evil"?
@pylotlight 7 months ago
​@@jeremykothe2847ss
@spencerfunk6697 7 months ago
This would be cool to integrate into the MLP frameworks we have. It would be cool having something that isn't just linear regression. I think what makes KANs stand out is how their output can dynamically change. Having this alongside transformers would be sick.
@atticusbeachy3707 7 months ago
Where is the quote at 3:52 from? ("The use of splines is not necessary. In particular, they seem quite expensive due to the recursive nature of B_{i,n}. Many other families of non-parametric AFs are possible [ADIP21]. For example, our KAF [SVTU19] provides a similar flexibility without any need of recursion and it should be pretty straightforward to implement")
@Alcardian_0 7 months ago
I love this and all the other ideas for how to improve on AI, like Mamba, but I will believe them when I see the first model competitive with Mixtral, Llama 3 or ChatGPT released that utilizes any of these concepts.
@ronilevarez901 7 months ago
However, it is possible that many of these improvements won't be usable at all for the current trendy AI tools we have, and new types of AI apps will have to be developed that will be smarter and faster.
@setop123 7 months ago
Gr8 simplification, thank you ! ❤‍🔥
@edsheeran1941 6 months ago
Wow, I loved this vid! Subscribing to the newsletter now.
@mujtabaalam5907 7 months ago
2:00 where is this from (the blue and orange)? I remember it was a google course of some kind but I can't find it
@NuncNuncNuncNunc 7 months ago
It's the tensorflow playground
@Vedranation 7 months ago
One reason we use ReLU is to overcome the vanishing/exploding gradient problem. Won't KAN bring this issue back, if not amplify it even more?
@ilikegeorgiabutiveonlybeen6705 7 months ago
you can always try doing fractional derivatives backprop
@comradepeter87 7 months ago
I've also been hearing a lot about "liquid networks". They've been filling my YT feed lately. It'd be cool if you could make a video on that.
@musicproductionbrauns2594 7 months ago
Maybe just FM sine waves as activation functions, since Fourier sine composition can represent every existing function.
@ansidhe 7 months ago
that’s a good idea as an alternative to b-splines! Great thinking! 👍🏻
@sitkicantoraman 7 months ago
this is actually really smart. I wonder how many parameters this would add.
@musicproductionbrauns2594 7 months ago
@@sitkicantoraman I just thought frequency, amplitude and phase per activation function. To be honest I'm not deeply into programming neural networks, but just from music I know you can already get some crazy functions/waveforms from just 10 sine functions in a row... In a neural net you also mix every point up, so you can probably get a lot of variations/paths.
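A hedged sketch of the idea in this thread: a per-edge sinusoidal activation with exactly three learnable parameters (frequency, amplitude, phase) instead of a spline. This is a hypothetical illustration, not something from the KAN paper:
```python
import numpy as np

class SineActivation:
    def __init__(self, rng):
        # Three trainable parameters per activation, as suggested above.
        self.freq, self.amp, self.phase = rng.standard_normal(3)

    def __call__(self, x):
        return self.amp * np.sin(self.freq * x + self.phase)

rng = np.random.default_rng(0)
act = SineActivation(rng)
print(act(np.linspace(-1.0, 1.0, 5)))
```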
@chsovi7164 7 months ago
I'm a bit confused how they avoid the problem of not every B-spline being a function. Why not use Fourier series? You could just train the whole neural net with an n=1 Fourier series, then once the NN starts converging on a value for the activation, you make it n=2 and start adjusting that instead.
@alkeryn1700 7 months ago
Someone actually did that lol
@franzwollang 7 months ago
@@alkeryn1700 sauce
@chsovi7164 7 months ago
@@alkeryn1700 link???
@Eltaurus 7 months ago
Aren't you confusing B-spline with Bezier?
@alkeryn1700 7 months ago
@@Eltaurus nope, i also shared the link but youtube deleted it lol. You can easily find it though
@key_bounce 7 months ago
What is that bike design at 0:02 for?
@The.Anime.Library 7 months ago
It's for illustrating reinventing the wheel.
@Eric-yd9dm 7 months ago
I can imagine a professor saying "Yes I KAN" "No you KAN't" "Yes I KAN"
@tiagotiagot 7 months ago
What if the weights and biases of each neuron actually each also had their own trainable weights and biases, working as sub-neurons for each neuron, and you would train those instead of the neurons own weights and biases directly, sorta training the network to rewire itself on-the-fly?
@Coach-Solar_Hound 7 months ago
Adding a linear layer inside of a linear layer would make the system still behave linearly, wouldn't it?
@tiagotiagot 7 months ago
@@Coach-Solar_Hound It would still be using conventional non-linear activation functions; the difference is it would adjust the weights and biases at inference time using the same mechanism that currently just drives the neurons directly..
@poipoi300 7 months ago
How do you adjust the weights at inference? Magic? You need to know what the output would be and therefore it's just regular training. Besides you can already train on inferences and have a learning model with any NN if you're dealing with data that changes over time. Take seasonal weather for instance, it's been done to predict like 10 minutes in the future, then 10 minutes later the model is trained by a small margin on that output. Adding smaller weights on the overall architecture here really doesn't do anything.
@tiagotiagot 7 months ago
@@poipoi300 Didn't you read what I wrote? There would be special neurons inferring the weights of the regular neurons at inference time.
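A rough sketch of the "sub-neurons generate the weights" idea from this thread, which resembles what the literature calls a hypernetwork: a small net emits another layer's weights at inference time, and only the small net's parameters would be trained. All sizes and names here are made up for illustration:
```python
import numpy as np

rng = np.random.default_rng(0)
in_dim, out_dim, hyper_hidden = 4, 3, 8

hyper_W1 = rng.standard_normal((hyper_hidden, in_dim))             # trained
hyper_W2 = rng.standard_normal((out_dim * in_dim, hyper_hidden))   # trained

def forward(x):
    h = np.tanh(hyper_W1 @ x)                          # the "sub-neurons" look at the input...
    W_main = (hyper_W2 @ h).reshape(out_dim, in_dim)   # ...and emit the main layer's weights
    return np.tanh(W_main @ x)                         # network "rewired" on the fly

print(forward(rng.standard_normal(in_dim)))
```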
@novantha1 7 months ago
Hm... I wonder if this doesn't pave the way for a hybrid setup with either MLP + KAN MoE models, or maybe a series FFN where you have a small MLP block to handle noisy inputs which feeds into a KAN that does the actual approximating.
@newbie8051 7 months ago
2:40 To get the network to respond correctly to an input, right? How would it respond to an output lol
@csabaszekely3098 5 months ago
It won't be good for language models, but the point is that we shouldn't just rely on trying to guess the next word for everything. Pretty bad idea for science.
@skeptiklive 7 months ago
Could you use a mature MLP model to produce high quality synthetic training data for training a KAN model? In other words, can you "overfit" a KAN model to the outputs of something like GPT-4 to a sufficient similarity in output that you could then run that model on consumer hardware? 🤔
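A sketch of the distillation mechanics behind that question: fit a small "student" to a frozen "teacher's" outputs. The teacher here is a random stand-in function, not GPT-4 or any real MLP; it only shows what training on soft targets looks like:
```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((256, 10))                 # synthetic "prompts"
teacher_W = rng.standard_normal((10, 4))

soft_targets = np.tanh(X @ teacher_W)              # teacher outputs used as training data

# Student: a single linear map fit by least squares to the teacher's outputs.
W_student, *_ = np.linalg.lstsq(X, soft_targets, rcond=None)
mse = np.mean((X @ W_student - soft_targets) ** 2)
print(mse)                                          # distillation loss on this synthetic data
```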
@timeflex 7 months ago
Given the fact that 1.58-bit networks are already in the labs, I doubt a KAN with 32-bit precision will be any smaller.
@NuncNuncNuncNunc 7 months ago
The description of neural networks seems just a bit off. Training difficulty seems like an implementation problem. There is also the issue of where you wish to place your costs. Models with fewer parameters may be cheaper (pick your metric) to run outweighing training costs. I thought you were going to make it through without a nod to figure 2.1
@Words-. 7 months ago
Great analogy for the curse of dimensionality! I'd never heard of the term, as I'm not in ML, but your analogy was easy to understand.
@jsivonenVR 7 months ago
I’ll just admit that this was way over my head 😅👌🏻
@dmitryr9613 7 months ago
I'm surprised that only about 60% of it went over my head; reading an entire Twitter thread about KAN might've helped tho.
@kinkanman2134 7 months ago
@@dmitryr9613 Lol same. I've been SUPER interested in AI suddenly, so I'm trying to use my Fortnite-rotted brain to learn on Twitter, and this video shows it's lowkey working. Being able to just screenshot tweets and send them to GPT-4 Omni for free tutoring is amazing.
@SolathPrime 7 months ago
Instead of KAN or MLP, why don't we just sum single-layer perceptron activations in parallel? Like this, for example:
```python
# imports
import numpy as np

# toy XOR data (inlined stand-in for the original `datasets.xor` import)
xs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # shape (4, input_size)
ys = np.array([0, 1, 1, 0])                      # xor labels

# sizes
weights = 5        # number of parallel perceptrons
input_size = 2
output_size = 2

ws = np.random.randn(weights, output_size, input_size)
bs = np.random.randn(weights, output_size, 1)

# batch dot product: (weights, out, in) x (in, samples) -> (weights, out, samples)
pred = np.einsum("woi,in->won", ws, xs.T) + bs
```
This, when tested, appears to be faster in training and even better in parallelization.
@tomoki-v6o 7 months ago
Brilliant!!
@SolathPrime 7 months ago
Wait why did my comment disappear?
@arg0x- 7 months ago
What math do I need to learn to understand this video?
@GeneralKenobi69420 7 months ago
yes
@arg0x- 7 months ago
@@GeneralKenobi69420 😭😭😭
@MilkGlue-xg5vj 7 months ago
No
@justsomeonepassingby3838 7 months ago
Start with simple MLPs (multi-layer perceptrons), activation functions and backpropagation, with digit recognition as the main "goal". You don't need to know all the algorithms, just how neural networks work. Wait a few months until you are familiar with the concepts, then check how NLP is handled in Google Translate with adversarial networks and tokenization (converting words and sentences into vectors that can be understood by other models). For adversarial networks, you should write at least one autoencoder to really understand how it works (with PyTorch or Keras, to also get used to high-level AI libraries that describe MLP layers as simple functions). Then read/watch about transformers and the attention mechanism, and wait a few months again to meditate. By that point, you can make your own transformer, or re-watch bycloud's videos to get a summarized technical explanation and the keywords to google in order to get up to date with the latest shiny things.
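For reference, a minimal autoencoder of the kind suggested above, in PyTorch; the sizes are arbitrary (784 matches flattened MNIST digits) and this is a sketch rather than a full training recipe:
```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
x = torch.rand(16, 784)                       # a fake batch of flattened images
loss = nn.functional.mse_loss(model(x), x)    # reconstruction objective
loss.backward()
```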
@nyx211 7 months ago
The math might look intimidating, but it's not too difficult to understand if you already understand how MLPs work. The only thing you need to wrap your head around are B-splines.
@ajaypatro1554 7 months ago
Basically, dimensionality is multiple 2D matrices in a 3D array sharing the same index space along an axis (stacked on top of each other), like we have multiple skin layers on the same spot of the body, right?
@jameshughes3014 7 months ago
You have a real gift for explaining this stuff. I feel like even my smooth brain gets it. Thank you
@The_Unexplainer 7 months ago
Did you make an episode about liquid neural nets?
@karthikeyank2587 7 months ago
What is the Substack profile of your newsletter? I prefer to read on Substack.
@DustinRodriguez1_0 7 months ago
Wouldn't the nodes in the B-spline internal to KAN just end up being represented across multiple layers of perceptrons? Sure you use less params... because you're training 4-5x more "weights" but just calling them B-spline control points. If the number of control points used in the B-spline is a dynamically learned property rather than being fixed across the layer or whole model, then I could see it being more interesting. But as-is, it sounds like a difference without distinction and if you just squint at a big MLP, you could interpret it as approximating a KAN.
@nevokrien95 7 months ago
This does not seem like it scales. The main issue is that having a polynomial can run into zero/exploding gradients more easily. The other issue is that your parameters are not modeling relationships, so you're using more parameters per connection.
@TheStickCollector 7 months ago
Impressive what they can do behind the scenes.
@alexxx4434 7 months ago
If KAN takes less RAM for more compute, then it's a good trade off at the current stage of development.
@AleatoricSatan 7 months ago
A bit less RAM, but a lot more processing time; it's faster on CPU than GPU due to the branching required for each custom curve. Some hacky things could be done to have it operate on data that acts like textures, but the implementation complexity goes through the roof and the results are questionable. It remains to be seen.
@justindressler5992 7 months ago
I thought the activation function wasn't that important; it only really needed to clamp values from outliers. Overfitting would make sense because the activation is fitted against the data. Plus, dimensionality in MLPs can be reduced by pruning and sparsity training.
@AnotherVGMlover 7 months ago
Not a knock on you but I'm seeing everyone hyping up KANs as a new paradigm of machine learning and it's strange, cause the original authors weren't even claiming that. My impression was that KANs form a useful alternative to MLPs for *certain* situations, specifically in AI4Science where they may have better inductive biases, and have stronger interpretability with the ability to "upload" your own priors into the network; I don't think the authors were trying to claim more than this
@Wlucrow 7 months ago
Kolmogorov Arnold Network is short for KAN?
@XenoCrimson-uv8uz 7 months ago
Kan the conqueror
@TobiMetalsFab 6 months ago
This sounds like Network in Network CNNs, which we've had since 2013
@Sams-li8tj 7 months ago
I wonder if you have a custom CLIP model that maps each sentence in the script to a meme.
@75hilmar 7 months ago
We know that it is impossible for humans to fully understand all the effects of machine learning, yet it still works. Thus it might be possible for AI to find robust strategies with good generalisation, right?
@Adventure1844 7 months ago
Why can't both methods be used alternately in the training process?
@MrSongib 7 months ago
In a nutshell, we still need more memory. xd
@JonasMielke 7 months ago
Does 3blue1brown know about your usage of his animations? Good essay tho
@honkhonk8009 7 months ago
I think it's better to have more efficient models than just fast ones. The brain takes more time to learn, but takes fewer cycles.
@user-fc3cz6nh5j 7 months ago
Idk if i KAN take this anymore, its too much.
@dinoscheidt 7 months ago
1:13 I dearly hope you don’t believe this “personally”. Money is fine.
@vantagepointmoon 7 months ago
Worse for training, but better for running pretrained models locally if they take up less VRAM
@AntoshaPushkin 5 months ago
It sounds like calling CNNs or LSTMs a brand new type of AI that is totally different from MLPs, when in reality those are just slightly different types of layers.
@MrSofazocker 7 months ago
Wait, they didn't do that before? LMAO, people have been arguing about using sigmoid etc. for years, and I felt like the dumb one for asking why the function's parameters aren't adjusted.
@zhelmd 7 months ago
I should have paid more attention to math in school
@veekshith1074 7 months ago
We got machine learning 2.0 b4 GTA 6
@LuicMarin 7 months ago
Yes we KAN!
@cvs2fan 7 months ago
bycloud has the best meme transitions I have ever seen, how do you do it?
@TheDreamFx 7 months ago
Can you feel KANergy?
@krassav43g 7 months ago
nah relu is best thing ever
@finn_the_dog 7 months ago
"Your mom" 😮😂
@75hilmar 7 months ago
He put in a dog and a cat 😂
@MTX1699 2 months ago
I was searching for this comment 😂. Such a rip-off of the original 😂
@coder3101 7 months ago
I like how the female candidate match became 0.00
@valentinfontanger4962 2 months ago
Hopefully this never happens to me 8:57
@dhanooshpooranan1861 7 months ago
do liquid neural networks
@dafidrosydan9719 7 months ago
i dont understand any of those fancy mathematical equations TvT
@leosmi1 7 months ago
I think this paper is like that toroidal fan blade LMFAO
@tomoki-v6o 7 months ago
Can you point to the source where KANs are MLPs? Because they definitely are.
@illuminum8576 7 months ago
I thought that people were already using weights for activation functions lol
@thebrownfrog 7 months ago
Thanks
@goodtothinkwith 7 months ago
This was terrific. Normally I don’t like b-roll, but those are funny
@jondo7680 7 months ago
The problem is, as long as Meta or Mistral don't use it... it's just theory.
@ilshiin6043 7 months ago
Ken => Officer K 🤖🤖🤖
@gergelymarta5524 7 months ago
kan it run crysis
@apolodelsol 7 months ago
AI by itself is just hype
@bernardcrnkovic3769 7 months ago
Doesn't seem to help solve the problem. As far as I understand, the only important part of activation functions is non-linearity; at high enough granularity, the shape of that function doesn't really matter. I don't see how splines, which take up more storage to represent parameters, would help us make models more efficient. Maybe theoretically, sure, but practically speaking? Where are the bits going to be stored if not in VRAM?
@nutzeeer 7 months ago
so basically we can have chatgpt at home sooner than expected
@eduardocesargarridomerchan5326 3 months ago
A tutorial in Spanish, in case you're interested: kzbin.info/www/bejne/gJOcqIB5hbqfpMU
@ssssssstssssssss 7 months ago
Machine Learning has been around 60+ years and we are only on 2.0? Calling it Machine Learning 2.0 sounds like it came from someone who knows very little about the field
@ilikegeorgiabutiveonlybeen6705 7 months ago
Tf is "tested by activation function"? Don't mislead people, please.
@JImBrad 7 months ago
hey
@jeremykothe2847 7 months ago
It was hype, and channels like this were the ones who hyped it.
@孔子明-t5m 7 months ago
man,what kan i say😅
@cdkw2 7 months ago
Maybe we are reaching the limit of AI and all the big people are just mad that it isn't that good
@harshamesta 7 months ago
I just know how to centre div.
@haithemchethouna6363 7 months ago
Can you explain like im 5😅
@stat_life 7 months ago
Explain to me like i am a 5 yr old now
@joelpaul8650 7 months ago
Arnold works well for bodybuilding, not model building 💀
@ps3301 7 months ago
Liquid networks say they are superior too.
@1TW1-m5i 7 months ago
He's just kan
@akis_pgn 7 months ago
Im just KAN
@nicksullivan4203 3 months ago
Yeah, this seems useless. Every few months someone comes out with some random new fundamental ML method, like some improvement on Adam. Also, I'd rather just increase the hidden layer size instead of using this and computing whatever gradient this function has DURING training time. All the other activation functions have zero compute cost since we derive them analytically and program them. All that compute time could be used for bigger layers or more layers.
@omgwtfrofltomato 7 months ago
Fastest way to get "don't recommend this channel to me" is by stealing another youtuber's distinct thumbnail style. Originality counts for something, bycloud.
@watcher8582 7 months ago
I'm a bit taken aback that you seemingly have never heard the name of the biggest name in Russian math pronounced before. Or maybe that's Markov, I'm not certain. You went to uni, right? Does it maybe not come up in engineering fields when you only do a Bachelor's? I'd press the speaker button on people's Wikipedia pages before trying to come up with a pronunciation. It helps with credibility, given you said you want to take this channel more seriously.
@BrutalStrike2 7 months ago
Kun out
@hanskraut2018 7 months ago
POG but it's boring. Could have built that when I was at the end of kindergarten.
@raphaelfrey9061 7 months ago
AI will only advance when modelled more like a brain.