I2ML - Random Forest - Basics
29:54
I2ML - Tuning - In a Nutshell
10:55
SL - Regularization - Introduction
18:17
Comments
@PedroRibeiro-zs5go
@PedroRibeiro-zs5go 20 days ago
Thanks, that was a great video!
@longtuan1615
@longtuan1615 a month ago
Good explanation! Thank you so much!
@uxnuxn
@uxnuxn 2 months ago
There is an error on the first slide: the entropy of a fair coin equals 1 bit, not 0.7. The natural log was probably used in this graph.
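A quick numeric check of the commenter's point (an illustrative sketch, not lecture code):

```python
import math

p = 0.5  # fair coin
h_bits = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # log base 2
h_nats = -(p * math.log(p) + (1 - p) * math.log(1 - p))    # natural log
print(h_bits)  # 1.0 bit
print(h_nats)  # 0.6931... nats -- the ~0.7 visible on the slide
```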
@kevon217
@kevon217 2 months ago
Really enjoyed the guitar tuning analogy. Can’t stand playing out of tune guitars or when the intonation is slightly off.
@_VETTRICHEZHIAN
@_VETTRICHEZHIAN 2 months ago
Is stock price prediction a suitable real-world use case for this online learning?
@floribertjackalope2606
@floribertjackalope2606 3 months ago
Slide 13 is a little bit confusing.
@floribertjackalope2606
@floribertjackalope2606 3 months ago
The last example was a little bit confusing.
@Mohammed.1471
@Mohammed.1471 3 months ago
Appreciate it 👍
@moonzhou1738
@moonzhou1738 4 months ago
Hi, Professor! I have a question: in a previous video I learned that random forests can deal with missing data via surrogate splits, but in this video you said proximities can be used for imputation. I'm confused: if random forests can already handle missing data, why do we need imputation? Also, in the imputation part, step one uses the median to impute the data. Why impute at all if the forest can use surrogate splits to get predictions? Why not use the result of surrogate splitting to compute the proximities, impute the data, and then continue with steps 2 and 3? Besides, I don't feel I fully understand how surrogate splitting works. I searched the internet, but I still don't know the details. I only know it is about finding another variable that splits almost as well as the variable with missing values [the primary split]. But what calculation tells us that the variable with missing values would be the best split at a specific point [and if we already know it is a good split there, why do we need a surrogate at all]?
@moonzhou1738
@moonzhou1738 4 months ago
The question is a bit long, but I hope for your reply! Thank you!
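For anyone stuck on the same step, here is a rough sketch of the proximity-based imputation loop the question refers to (Breiman-style: a crude median start, then proximity-weighted refinement). The function and all names are illustrative assumptions, not code from the lecture, and the O(n²) proximity matrix limits this to small data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def proximity_impute(X, y, missing_mask, n_iter=3):
    X = X.copy()
    # Step 1: crude start -- fill each column's missing cells with its median.
    for j in range(X.shape[1]):
        X[missing_mask[:, j], j] = np.nanmedian(X[:, j])
    for _ in range(n_iter):
        rf = RandomForestClassifier(n_estimators=100).fit(X, y)
        # Proximity of two cases = fraction of trees where they share a leaf.
        leaves = rf.apply(X)  # shape (n_samples, n_trees)
        prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
        # Steps 2-3: replace each missing value with the proximity-weighted
        # average of the observed values in that column, then refit.
        for j in range(X.shape[1]):
            miss, obs = missing_mask[:, j], ~missing_mask[:, j]
            w = prox[np.ix_(miss, obs)]
            X[miss, j] = w @ X[obs, j] / w.sum(axis=1)
    return X
```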
@krrishagarwalla3325
@krrishagarwalla3325 4 months ago
Absolute gold
@virgenalosveinte5915
@virgenalosveinte5915 5 months ago
Great video, thanks, very clear.
@fayezalhussein7115
@fayezalhussein7115 5 months ago
thank you
@rebeenali4317
@rebeenali4317 5 months ago
How do we get the phi values in step 4?
@gamuchiraindawana2827
@gamuchiraindawana2827 5 months ago
LET'S GOOOOOOOO 💫💫 THANK YOU FOR TAKING YOUR TIME TO MAKE THESE VIDEOS💯💯💯💯❤❤
@holthuizenoemoet591
@holthuizenoemoet591 6 months ago
Is this algorithm inspired by k-means clustering?
@errrrrrr-
@errrrrrr- 7 months ago
Thank you! You explained things very clearly.
@jackychang6197
@jackychang6197 7 months ago
Very helpful video. The visualization in the OOB part is very easy to understand. Thank you!
@convel
@convel 8 months ago
Great lecture! What if some of the variables to be optimized are limited to a certain range? Using a multivariate normal distribution to generate offspring might exceed the range limit?
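One common answer to this question, sketched under the assumption of simple box constraints: repair infeasible offspring by projecting them back into the box (resampling until feasible, or penalizing infeasibility in the fitness, are alternatives). All names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
lower = np.array([0.0, -1.0])   # illustrative box constraints
upper = np.array([10.0, 1.0])

def sample_offspring(mean, cov, n_offspring):
    # Gaussian mutation can leave the feasible box ...
    offspring = rng.multivariate_normal(mean, cov, size=n_offspring)
    # ... so repair by clipping each coordinate back onto it.
    return np.clip(offspring, lower, upper)

print(sample_offspring(np.array([9.5, 0.9]), 0.5 * np.eye(2), 5))
```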
@MarceloSilva-cm5mg
@MarceloSilva-cm5mg 8 months ago
Excuse me, but wouldn't z_1 + z_2 + ... + z_T be (-1)^T / 2 instead of (-1/2)^T? Anyway, you did a great job. Congratulations!!
@fiNitEarth
@fiNitEarth 9 months ago
first :)
@gamuchiraindawana2827
@gamuchiraindawana2827 10 months ago
It's so hard to hear what you're saying; please amplify the audio in post-processing on your future uploads. Excellent presentation nonetheless, you explained it so simply and clearly. <3
@berndbischl
@berndbischl 10 months ago
Thank you. We are still not "pros" with regard to all the technical aspects of recording. We will try to do better in the future.
@bertobertoberto242
@bertobertoberto242 11 months ago
At 4:00, isn't the square supposed to be inside the square brackets?
@bertobertoberto242
@bertobertoberto242 11 months ago
Hi, great course! A small note, though: at 12:20 I think the function on the left might not be convex, as the tangent plane in the "light blue area" is on top of the function, not below it, which violates the definition of convexity (AFAIK functions of that sort are called quasiconvex)...
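For reference, the first-order condition the comment appeals to (standard definitions, not a transcription of the slide):

```latex
% f is convex iff its graph lies on or above every tangent plane:
f(y) \;\ge\; f(x) + \nabla f(x)^{\top} (y - x) \qquad \forall\, x, y.
% A tangent plane sitting above the graph anywhere violates this.
% Quasiconvexity is weaker: it only requires every sublevel set
% \{ x : f(x) \le \alpha \} to be convex.
```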
@rohi9594
@rohi9594 a year ago
Finally found a clear logic behind the weights. Thank you so much🎉
@weii321
@weii321 a year ago
Nice video. I have a question: how do you calculate Shapley values for a classification problem?
@zxynj
@zxynj 11 months ago
To not violate the axioms, do it in logit space.
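A minimal sketch of that suggestion: run a permutation-sampling Shapley estimator on the model's log-odds instead of its probability, so the contributions add up on an unbounded scale. The function names and setup are illustrative assumptions, not course code:

```python
import numpy as np

def logit(p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def shapley_logit(predict_proba, x, X_background, n_perm=200, seed=0):
    """Monte-Carlo Shapley values of the log-odds for one instance x."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(x.shape[0])
    for _ in range(n_perm):
        order = rng.permutation(x.shape[0])
        z = X_background[rng.integers(len(X_background))].copy()
        prev = logit(predict_proba(z[None, :])[0, 1])
        for j in order:
            z[j] = x[j]  # reveal feature j in the sampled order
            cur = logit(predict_proba(z[None, :])[0, 1])
            phi[j] += cur - prev  # marginal contribution, logit scale
            prev = cur
    # phi sums (approximately) to logit(f(x)) - E[logit(f(Z))]
    return phi / n_perm
```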
@twist777hz
@twist777hz a year ago
Thank you for doing this video in Numerator layout. It seems many videos on machine learning use Denominator layout, but I definitely prefer Numerator layout! Is it possible you could do a follow-up video on the partial derivative of a scalar function with respect to a MATRIX? Most documents I've looked at seem to use Denominator layout for this type of derivative (some even use Numerator layout with respect to a VECTOR, then switch to Denominator layout with respect to a MATRIX). I assume it's because Denominator layout preserves the dimension of the matrix, making it more convenient for gradient descent etc. What would you recommend I do?
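For what it's worth, a small worked example of the two conventions (my notation, not the video's):

```latex
% For f(X) = a^{\top} X b with X \in \mathbb{R}^{m \times n}, the
% entrywise derivative is \partial f / \partial X_{ij} = a_i b_j.
% Denominator layout keeps the shape of X (handy for gradient steps):
\frac{\partial f}{\partial X} = a\, b^{\top} \in \mathbb{R}^{m \times n},
% while numerator layout transposes it:
\frac{\partial f}{\partial X} = b\, a^{\top} \in \mathbb{R}^{n \times m}.
```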
@chrisleenatra
@chrisleenatra a year ago
So the permutation order is only there to define which feature will get the random value? It's not creating a whole new instance with the feature order the same as the permutation order? (The algorithm shows S, j, S-, but your example shows S, S-, j.)
@chrisleenatra
@chrisleenatra a year ago
Thank you!
@fanhbz1018
@fanhbz1018 a year ago
Nice lecture. I also recommend Dr. Ahmad Bazzi's convex optimization series.
@shubhibans
@shubhibans a year ago
Great work
@maxgh8534
@maxgh8534 a year ago
Hi, sadly your GitHub link doesn't work for me. Thanks for the video.
@jengoesnuts
@jengoesnuts a year ago
Can you explain more about the omitted-variable bias in M-plots? My teacher told me that you can explain the green graph via a linear transformation, by transforming x1 and x2 into two independent random variables x1 and U. Is that true?
@ocamlmail
@ocamlmail a year ago
Thank you so much for this video. Consider the example at 7:20 -- doesn't it look like feature permutation? Shouldn't I use expected values for the other variables (x2, x3)? Thanks in advance.
@hkrish26
@hkrish26 a year ago
Thanks
@appliedstatistics2043
@appliedstatistics2043 a year ago
The material is not accessible right now; can someone reupload it?
@yt-1161
@yt-1161 a year ago
What do you mean by "pessimistic bias"?
@sogari2187
@sogari2187 a year ago
If I understand correctly, it is pessimistic because you use, say, 90% of your available data as the training set and 10% as the test set. The model you test is therefore only trained on 90% of your data, but the final model that you use/publish will be trained on 100% of it. That final model will probably perform better than the one trained on 90%, but you can't validate it because you have no test data left. In the end you evaluate a model trained on 90% of the data, which is probably slightly worse than the model trained on 100%.
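The effect the reply describes can be made concrete with a toy sketch (a hypothetical setup; the held-back "fresh" half stands in for unseen data):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, random_state=0)
X_pool, y_pool = X[:1000], y[:1000]    # data available for modelling
X_fresh, y_fresh = X[1000:], y[1000:]  # stand-in for truly unseen data

# Model trained on 90% of the pool (what holdout/CV actually evaluates) ...
small = LogisticRegression(max_iter=1000).fit(X_pool[:900], y_pool[:900])
# ... versus the final model trained on all 100% of the pool.
full = LogisticRegression(max_iter=1000).fit(X_pool, y_pool)

print("90% model: ", small.score(X_fresh, y_fresh))
print("100% model:", full.score(X_fresh, y_fresh))
```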
@kcd5353
@kcd5353 2 years ago
Good explanation, madam.
@appliedstatistics2043
@appliedstatistics2043 2 years ago
Hello, I'm a student at TU Dortmund and our lecture also uses your resources, but the link in the description is not working now. How can we get access to the resources?
@namrathasrimateti9119
@namrathasrimateti9119 2 years ago
Great Explanation!! Thank You
@Parthsarthi41
@Parthsarthi41 2 years ago
Excellent. Thanks
@vaibhav_uk
@vaibhav_uk 2 years ago
Finally some serious content
@Rainstorm121
@Rainstorm121 2 years ago
Thanks, sir. Excuse me (zero statistics & mathematics background), but what does this video suggest about using the Brier score for measuring forecasts?
@guillermotorres4988
@guillermotorres4988 2 years ago
Nice explanation! You are using the same set of HP configurations λi, with i = 1, ..., N, throughout the fourfold CV (in the inner loop). But what happens if I would like to use Bayesian hyperparameter optimization to sample the parameter values? For example, for each outer CV fold with its corresponding inner CV, could I use a Bayesian hyperparameter search? Then the set of HP configurations wouldn't be the same in each inner CV, so the question is: can the set of HP configurations differ in each inner CV, and is this nested cross-validation method still valid?
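For readers with the same question: yes, the inner search may propose different configurations in each outer fold, because the outer loop estimates the performance of the whole tuning procedure, not of any fixed λ set. A sketch under that reading, with RandomizedSearchCV standing in for a Bayesian optimizer:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Inner loop: an adaptive search that may try different candidates
# depending on the data it sees (here: random search as a stand-in).
inner_search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"max_depth": list(range(2, 16))},
    n_iter=10, cv=4, random_state=0,
)

# Outer loop: each fold re-runs the entire search from scratch, so the
# candidate configurations need not match across folds.
scores = cross_val_score(inner_search, X, y, cv=5)
print(scores.mean())
```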
@dsbio4671
@dsbio4671 2 years ago
awesome!! thanks so much!
@oulahbibidriss7172
@oulahbibidriss7172 2 years ago
Thank you, well explained.
@canceledlogic7656
@canceledlogic7656 3 years ago
Here's a free resource on one of the most important academic concepts of the modern age: 800 views. GG humanity. GG
@manullangjihan2100
@manullangjihan2100 3 years ago
Thank you for the explanation.