Very interesting video, but now I have some questions 🤔 In practice it is rarely the case that one has data that perfectly adheres to an analytic relationship between the variables. E.g., the measured data might be noisy and/or the underlying problem might not have a closed-form solution. How well does this method perform if you were to, say, add some noise to your example data? Also, since it will always find some result, is there any way to tell when it finds something that actually describes the underlying problem (as opposed to just finding a random formula that happens to fit the noise)?
@SyedMehmud · 2 years ago
Good question. I think in many situations it will fail to find a closed-form solution, which makes this a limited-application algorithm. Still, it can be fun/useful to try it on data and see if it gives any insight, even if imperfect. With a little noise and a strong underlying signal it can discover the signal, but at some point the noise will overwhelm it. I haven't experimented much along these lines, however.
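A minimal sketch of that noise experiment, assuming the gplearn library (the video may well use a different tool): generate data from a known formula, add Gaussian noise, and see whether the regressor still recovers the signal.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
signal = X[:, 0] ** 2 + X[:, 1]           # known underlying formula
y = signal + rng.normal(0, 0.05, 200)     # mild Gaussian noise

est = SymbolicRegressor(population_size=1000, generations=20,
                        function_set=('add', 'sub', 'mul'),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(X, y)
print(est._program)  # with weak noise this often recovers add(mul(X0, X0), X1)
```

Raising the noise standard deviation in this sketch is an easy way to watch the discovered expression degrade from the true formula into clutter that fits the noise.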
@enlightenment609 · 1 year ago
Genetic-programming-based symbolic regression suffers from noise like other ML methods do, so approaches that mitigate noise in other methods can also work for GP. For example, you can use a chi-squared error, which normalises each error term by the standard deviation of the noise, instead of a simple mean squared error. The benefit of GP is its symbolic solutions, but if a solution is very large, interpreting it is non-trivial. Discouraging complexity while maximising accuracy is therefore the challenge.
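As a sketch of that idea (assuming the per-point noise standard deviation sigma is known or estimated, which is not always the case):

```python
import numpy as np

def chi_squared_error(y_true, y_pred, sigma):
    """Residuals normalised by the noise standard deviation before squaring."""
    return np.sum(((y_true - y_pred) / sigma) ** 2)

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 2.3, 2.9])
sigma  = np.array([0.1, 0.5, 0.1])   # hypothetical per-point noise estimates

# The noisy middle point dominates the MSE but is down-weighted by chi-squared.
print(chi_squared_error(y_true, y_pred, sigma), mse(y_true, y_pred))
```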
@sunseeds4817 · 2 years ago
Amazing! Thanks for the video, it definitely helped me while I'm crunching this during my degree (P.S.: this is a lot better than the lecture :p)
@FreeMarketSwine · 2 years ago
Can this be used for optimization or reinforcement learning?
@B.I.G-John · 2 years ago
Wow!
@aschalewcherie4045 · 2 years ago
Thank you for the mind-blowing presentation. I want to know how I can calculate the topographic and quantization errors of a SOM in Excel.
@thezorrinofromgemail6978 · 2 years ago
Great explanation. Where is the next video that improves the model, as you indicated at the end? And where is the file to download? Thanks a lot.
@zhuoxuanli2277 · 2 years ago
When I use SymbolicRegression to fit my data, the final formula is always a constant. I don't know why :(
@carlosmerino6554 · 3 years ago
Is the file available?
@SergeKuper · 3 years ago
Thank you. Interesting: even when predicting prices, it's right to say a price can't be negative, but when training the model and calculating coefficients it's important to keep features that have a negative influence on the price. For example, a "criminal situation in the neighborhood" feature will decrease the predicted real-estate price. Why nullify it in the regression model? That's not so clear to me. Do you have any real-life example where we would want to set such coefficients to zero?
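For concreteness, here is a small sketch (assuming SciPy; zeroing such coefficients is what non-negative least squares amounts to) contrasting ordinary least squares with NNLS: the latter clamps the crime coefficient to zero instead of letting it go negative, which is exactly the trade-off being questioned.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
# Hypothetical features: [living area, crime level]; crime lowers the price.
A = rng.uniform(0, 1, size=(100, 2))
price = 3.0 * A[:, 0] - 2.0 * A[:, 1] + rng.normal(0, 0.1, 100)

ols_coef, *_ = np.linalg.lstsq(A, price, rcond=None)
nnls_coef, _ = nnls(A, price)
print("OLS: ", ols_coef)   # roughly [ 3.0, -2.0]
print("NNLS:", nnls_coef)  # crime coefficient forced to 0, at the cost of fit
```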
@nothingness1983 · 3 years ago
Thanks for the video. Really very useful for physicists.
@santoshkhanal7982 · 3 years ago
Really nice video. Could you please make a video using symbolic regression on real-world data, such as Auto MPG, California House Prices, the abalone dataset, or some other small dataset? Thank you!
@sammydemmi448 · 2 years ago
Check out this intro to the QLattice, a new symbolic regressor, applied to a heart-failure problem: m.kzbin.info/www/bejne/e5K6f4ucrtGXY9k
@DistortedV12 · 3 years ago
I'm waiting for the NeurIPS paper that says: symbolic regression is solved, P != NP
@doddyardana2147 · 3 years ago
How do you build a prediction equation from an artificial neural network with several input neurons? Can we use the bias values and weights from an ANN analyzed in MATLAB?
@najeu3696 · 3 years ago
Can you help me with NNLS coding in RStudio?
@EnsariYILDIRIM · 3 years ago
Tanh or sigmoid functions are very useful for binary output problems. Well, in the case of a multi-class output, which activation function are you supposed to use?
@greenief9097 · 3 years ago
You can use the same activation functions when there are more than two outputs. The activation function essentially turns on or off for each potential output in the list of possible outputs.
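A toy sketch of that one-unit-per-class idea in plain NumPy (the numbers are hypothetical; a softmax is the usual alternative when the outputs should sum to 1):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical pre-activations for a 3-class problem, one output unit per class.
logits = np.array([0.2, 1.5, -0.7])

per_class = sigmoid(logits)                       # each unit turns "on" independently
softmax = np.exp(logits) / np.exp(logits).sum()   # normalised alternative

print(per_class, "-> class", per_class.argmax())  # prediction = most active unit
print(softmax, "-> class", softmax.argmax())
```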
@priyamgupta8170 · 3 years ago
I am mad, do you all know?
@namyam3840 · 3 years ago
Thank you, sir! Great explanation. But would you explain something about the tanh constants a and b: why and how are they initialized to 1.72 and 0.67, respectively? Is it a must to initialize them? If yes, how?
@predictivemodeler · 3 years ago
Thanks! The short answer is that it is a heuristic choice, discussed in a paper by LeCun, "Generalization and Network Design Strategies" (1989). It has to do with making the equations a little simpler: with those constants the scaled hyperbolic tangent maps +1 to +1 and -1 to -1, which is pleasing (well, to some!) and keeps the effective gain of the unit close to 1. This choice is thought to improve convergence of the learning process.
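Concretely, the activation in question is f(x) = 1.7159 · tanh(2x/3); the 1.72 and 0.67 in the video are those constants rounded. A quick numeric check of the f(±1) = ±1 property:

```python
import numpy as np

def scaled_tanh(x):
    """LeCun's scaled tanh: f(x) = 1.7159 * tanh(2x/3)."""
    return 1.7159 * np.tanh(2.0 / 3.0 * x)

print(scaled_tanh(1.0))   # ~ 1.0
print(scaled_tanh(-1.0))  # ~ -1.0
```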
@ethelgonzalesjara1016 · 3 years ago
Can you please share the idea behind preparing an Excel sheet for perceptron neural networks? [email protected]
@rahulbpillai22 · 4 years ago
Thank you for the video. Can you make a video on Multigene Genetic Programming for a regression problem in Python?
@ahmed-pk6gy · 4 years ago
Hello, how may I contact you, sir?
@jegatheshwaran1971 · 4 years ago
Can you please share the idea behind preparing an Excel sheet for perceptron neural networks?
@predictivemodeler · 3 years ago
Not sure I understood the question; can you elaborate?