Comments
@MrAlipatik 4 months ago
Thank you
@johannescartus9847 2 years ago
Very interesting video, but now I have some questions 🤔 In practice it is rarely the case that one has data that perfectly adheres to an analytic relationship between the variables. E.g., the measured data might be noisy and/or the underlying problem does not have a closed-form solution. How well does this method perform if you were to, say, add some noise to your example data? Also, since it will always find some result, is there any way to tell when it finds something that actually describes the underlying problem (as opposed to just finding a random formula that happens to fit the noise)?
@SyedMehmud 2 years ago
Good question. I think in many situations it will fail to find a closed-form solution, which makes this a limited-application algorithm. Still, it can be fun/useful to try it on data and see if it gives any insight, even if imperfect. With a little noise and a strong underlying signal it can discover the signal, but at some point the noise will overwhelm it. I haven't experimented much along these lines, however.
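A quick numpy-only sketch of the point above: a strong quadratic signal survives mild Gaussian noise. Ordinary polynomial least squares stands in for a symbolic regressor here, and all names and values are illustrative, not from the video.

```python
import numpy as np

# Toy illustration: a strong underlying signal (y = x^2) with mild
# Gaussian noise is still recoverable; polynomial least squares
# stands in for a symbolic regressor in this sketch.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y_noisy = x ** 2 + rng.normal(0.0, 0.1, size=x.shape)  # mild noise

# Fit a*x^2 + b*x + c; the coefficients should come out close to [1, 0, 0].
a, b, c = np.polyfit(x, y_noisy, deg=2)
print(round(a, 1))
```

Raising the noise standard deviation well above the signal's scale makes the recovered coefficients drift away from the true ones, which is the "noise overwhelms it" regime described above.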
@enlightenment609 1 year ago
Genetic-programming-based symbolic regression suffers from noise just like other ML methods do, so approaches that mitigate noise in other methods can also work for GP. For example, you can use the chi-square error, which normalises each error term by the standard deviation of the noise, instead of a simple mean squared error. The benefit of GP is its symbolic solutions, but if a solution is very large, interpreting it is non-trivial. The challenge is therefore to discourage complexity while maximising accuracy.
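The chi-square idea above can be sketched in a few lines. This is a hypothetical fitness function, not code from the video: `sigma` is an assumed known noise level, and the names are illustrative.

```python
import numpy as np

# Plain mean squared error.
def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

# Chi-square-style error: each residual is normalised by the noise
# standard deviation before squaring, as suggested above.
def chi_square(y_true, y_pred, sigma):
    return np.sum(((y_true - y_pred) / sigma) ** 2)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
print(round(mse(y_true, y_pred), 6))              # 0.02
print(round(chi_square(y_true, y_pred, 0.1), 6))  # 6.0
```

Either function can be dropped in as the fitness measure of a GP loop; the chi-square version simply weights residuals by how plausible they are given the noise.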
@sunseeds4817 2 years ago
Amazing! Thanks for the video; it definitely helped me while I was crunching this during my degree (p.s.: this is a lot better than the lecture :p)
@FreeMarketSwine 2 years ago
Can this be used for optimization or reinforcement learning?
@B.I.G-John 2 years ago
wow !
@aschalewcherie4045 2 years ago
Thank you for the mind-blowing presentation. I want to know how I can calculate the topographic and quantization errors in Excel from a SOM.
@thezorrinofromgemail6978 2 years ago
Great explanation. Where is the next video in which you improve the model, as you indicated at the end? And where is the file to download? Thanks a lot.
@zhuoxuanli2277 2 years ago
When I use SymbolicRegression to fit my data, the final formula is always a constant. I don't know why :(
@carlosmerino6554 3 years ago
Is the file available?
@SergeKuper 3 years ago
Thank you. Interesting: even when predicting prices, it's right to say a price can't be negative, but when training the model and calculating coefficients it's important to keep the features that have a negative influence on the price. For example, a "crime level in the neighborhood" feature will decrease the predicted real-estate price. Why nullify it in the regression model? That's not so clear to me. Do you have any real-life example where we would want to set such coefficients to zero?
@nothingness1983 3 years ago
Thanks for the video. Really very useful for Physicists.
@santoshkhanal7982 3 years ago
Really nice video. Could you please make a video using symbolic regression on real-world data, such as the AutoMPG, California House Price, or abalone dataset (a small dataset), or something like that? Thank you!
@sammydemmi448 2 years ago
Check out this intro to the QLattice, a new symbolic regressor, applied to a heart-failure problem: m.kzbin.info/www/bejne/e5K6f4ucrtGXY9k
@DistortedV12 3 years ago
I'm waiting for the Neurips paper that says: Symbolic Regression is solved, P != NP
@doddyardana2147 3 years ago
How do you make a prediction equation with several input neurons in an artificial neural network? Can we use the bias values and weights from the ANN with MATLAB analysis?
@najeu3696 3 years ago
Can you help me with NNLS coding in RStudio?
@EnsariYILDIRIM 3 years ago
Tanh and sigmoid functions are very useful for binary-output problems. But in the case of a multinomial (multi-class) output, which activation function are you supposed to use?
@greenief9097 3 years ago
You can use the same kind of activation function for more than two outputs: the activation essentially turns on or off for each potential output in the list of possible outputs.
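A minimal sketch of the reply above, assuming a softmax-style output layer for more than two classes (the names and scores here are illustrative, not from the video):

```python
import numpy as np

# Softmax generalises the sigmoid to many classes: each output gets a
# probability, and the class with the largest score is effectively
# "turned on" while the others are suppressed.
def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # raw scores for three classes
probs = softmax(scores)
print(probs.argmax())  # 0  (index of the winning class)
```

The probabilities always sum to one, so the output can be read directly as a class distribution.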
@namyam3840 3 years ago
Thank you sir, great explanation! But would you explain something about the Tanh(a) and Tanh(b) functions: why and how are they initialized to 1.72 and 0.67 respectively? Is it a must to initialize them? If yes, how?
@predictivemodeler 3 years ago
Thanks! The short answer is that it is a heuristic choice, discussed in a paper by LeCun, "Generalization and Network Design Strategies", 1989. It has to do with making the equations a little simpler: the scaled hyperbolic function maps +1 to +1 and -1 to -1, which is pleasing (well, to some!) and is thought to improve convergence of the learning process.
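The constants mentioned above are rounded values of LeCun's recommended scaled tanh, f(x) = 1.7159 · tanh(2x/3). A quick check (a sketch, not the video's spreadsheet) confirms it maps ±1 to ±1:

```python
import numpy as np

# LeCun's scaled hyperbolic tangent: f(x) = 1.7159 * tanh(2x/3).
# The constants (1.72 and 0.67 after rounding) are chosen so that
# f(+1) = +1 and f(-1) = -1, keeping the effective gain near 1.
def scaled_tanh(x):
    return 1.7159 * np.tanh(2.0 * x / 3.0)

print(round(scaled_tanh(1.0), 3))   # 1.0
print(round(scaled_tanh(-1.0), 3))  # -1.0
```

With normalised targets of ±1 this keeps the network operating away from the flat saturated tails of the tanh, which is the convergence benefit the reply refers to.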
@ethelgonzalesjara1016 3 years ago
Can you please provide the idea for preparing an Excel sheet for perceptron neural networks? [email protected]
@rahulbpillai22 3 years ago
Thank you for the video. Can you make a video on multigene genetic programming for a regression problem in Python?
@ahmed-pk6gy 3 years ago
Hello, how may I contact you sir?
@jegatheshwaran1971 3 years ago
Can you please provide the idea for preparing an Excel sheet for perceptron neural networks?
@predictivemodeler 3 years ago
Not sure I understood the question, can you elaborate?
@prasadsanap8422 4 years ago
Great Explanation!
@predictivemodeler 3 years ago
Thank you!
@johnchen2022 4 years ago
very interesting, thx for the video