@PedroRibeiro-zs5go · 20 days ago
Thanks, that was a great video!
@longtuan1615 · a month ago
Good explanation! Thank you so much!
@uxnuxn · 2 months ago
There is an error on the first slide: the entropy of a fair coin equals 1 bit, not 0.7. The natural log was probably used in that graph.
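For reference, a quick numeric check of that point (a minimal sketch; the `entropy` helper is our own illustration, not from the slides):

```python
import math

# Entropy of a Bernoulli(p) source; the log base sets the unit.
def entropy(p, base):
    return -sum(q * math.log(q, base) for q in (p, 1 - p) if q > 0)

print(entropy(0.5, 2))       # 1.0 -> 1 bit for a fair coin
print(entropy(0.5, math.e))  # 0.6931... nats = ln 2, i.e. the ~0.7 on the slide
```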
@kevon217 · 2 months ago
Really enjoyed the guitar-tuning analogy. Can't stand playing out-of-tune guitars, or when the intonation is slightly off.
@_VETTRICHEZHIAN · 2 months ago
Is stock price prediction a suitable real-world use case for this online learning?
@floribertjackalope2606 · 3 months ago
Slide 13 is a little bit confusing.
@floribertjackalope2606 · 3 months ago
The last example was a little bit confusing.
@Mohammed.1471 · 3 months ago
Appreciate it 👍
@moonzhou1738 · 4 months ago
Hi, professor! I have a question. I remember learning in a previous video that random forests can deal with missing data by surrogate splitting, but in this video you said proximities can be used for imputation. I'm confused: if a random forest can already deal with missing data, why do we need imputation? Also, in the imputation part, step one imputes the data with the median. Why do we need that if the random forest can already use surrogate splitting to get a prediction? Why don't we use the result generated by surrogate splitting to compute the proximities for imputation, and then keep doing steps 2 and 3?

Besides, I feel I don't fully understand how surrogate splitting works. I searched the internet, but I still don't know the details. I only know it is about finding another variable that produces a result as good as the primary split on the variable with missing values. But how does the calculation tell us that the variable with missing values would be the best split at a specific point (and if we already know that, why do we need surrogate splitting at all)?
@moonzhou1738 · 4 months ago
The question is a bit long, but I hope for your reply! Thank you!
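For readers puzzled by the same steps, below is a minimal sketch of the proximity-based imputation loop for numeric features (our own simplified reconstruction, not the lecture's code; the iteration count and weighting scheme are assumptions). Note also that scikit-learn's random forest, unlike CART/rpart, does not implement surrogate splits, which is one reason imputation schemes like this exist.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def proximity_impute(X, y, n_iter=5):
    """Breiman-style proximity imputation (numeric features only)."""
    X = X.copy()
    miss = np.isnan(X)
    # Step 1: rough fill with column medians.
    col_med = np.nanmedian(X, axis=0)
    X[miss] = col_med[np.where(miss)[1]]
    for _ in range(n_iter):
        rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        # Proximity of rows i and k: fraction of trees where they share a leaf.
        leaves = rf.apply(X)  # shape (n_samples, n_trees)
        prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=-1)
        np.fill_diagonal(prox, 0.0)
        # Steps 2-3: replace each missing cell by the proximity-weighted
        # average of the observed values in its column, then refit.
        for i, j in zip(*np.where(miss)):
            obs = ~miss[:, j]
            if prox[i, obs].sum() > 0:
                X[i, j] = np.average(X[obs, j], weights=prox[i, obs])
    return X
```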
@krrishagarwalla3325 · 4 months ago
Absolute gold
@virgenalosveinte5915 · 5 months ago
Great video, thanks; very clear.
@fayezalhussein7115 · 5 months ago
thank you
@rebeenali4317 · 5 months ago
How do we get the phi values in step 4?
@gamuchiraindawana2827 · 5 months ago
LET'S GOOOOOOOO 💫💫 THANK YOU FOR TAKING THE TIME TO MAKE THESE VIDEOS 💯💯💯💯❤❤
@holthuizenoemoet591 · 6 months ago
Is this algorithm inspired by k-means clustering?
@errrrrrr- · 7 months ago
Thank you! You explained things very clearly.
@jackychang6197 · 7 months ago
Very helpful video. The visualization in the OOB part is very easy to understand. Thank you!
@convel · 8 months ago
Great lecture! What if some of the variables to be optimized are limited to a certain range? Using a multivariate normal distribution to generate offspring might exceed the range limits.
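Two common workarounds are to resample until the draw is feasible or to clip to the box. A minimal sketch (the bounds, retry limit, and fallback policy here are our own assumptions, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def bounded_offspring(mean, cov, lo, hi, n, max_tries=100):
    """Gaussian offspring under box constraints: resample until feasible,
    fall back to clipping at the bounds."""
    out = []
    for _ in range(n):
        for _ in range(max_tries):
            z = rng.multivariate_normal(mean, cov)
            if np.all((z >= lo) & (z <= hi)):
                break
        out.append(np.clip(z, lo, hi))  # no-op if z is already feasible
    return np.array(out)

offspring = bounded_offspring(np.zeros(2), np.eye(2), lo=-1.0, hi=1.0, n=10)
```

Resampling keeps a (truncated) Gaussian shape, while clipping piles probability mass onto the boundary; which is preferable depends on the problem.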
@MarceloSilva-cm5mg · 8 months ago
Excuse me, but wouldn't z1 + z2 + z3 + ... + zT be (-1)^T / 2 instead of (-1/2)^T? Anyway, you did a great job. Congratulations!!
@fiNitEarth · 9 months ago
first :)
@gamuchiraindawana2827 · 10 months ago
It's so hard to hear what you're saying; please amplify the audio in post-processing on your future uploads. Excellent presentation nonetheless; you explained it so simply and clearly. <3
@berndbischl · 10 months ago
Thank you. We are still not "pros" with regard to all technical aspects of recording. We will try to do better in the future.
@bertobertoberto242 · 11 months ago
At 4:00, isn't the square supposed to be inside the square brackets?
@bertobertoberto242 · 11 months ago
Hi, great course! However, a small note: at 12:20, I think the function on the left might not be convex, as the tangent plane in the "light blue area" is on top of the function rather than below it, which violates the definition of convexity (AFAIK, functions of that sort are called quasiconvex).
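For reference, the first-order condition the comment appeals to (the standard definition, stated here for a differentiable f on a convex domain):

```latex
% f is convex iff every tangent plane underestimates the function:
f \text{ convex} \iff f(y) \;\ge\; f(x) + \nabla f(x)^{\top}(y - x)
\quad \text{for all } x, y.
% A single point where the tangent plane lies above the graph (as in
% the light-blue region) therefore certifies that f is not convex.
```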
@rohi9594 · a year ago
Finally found out the clear logic behind the weights. Thank you so much! 🎉
@weii321 · a year ago
Nice video. I have a question: how do you calculate Shapley values for a classification problem?
@zxynj · 11 months ago
To not violate the axioms, do it in logit space.
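A minimal sketch of that idea: compute exact Shapley values with the coalition value defined on the log-odds scale (`model_proba`, the background values, and the brute-force enumeration are our own illustration; this is only tractable for a handful of features):

```python
import math
from itertools import combinations

def logit(p):
    return math.log(p / (1 - p))

def shapley_logit(x, background, model_proba):
    """Exact Shapley values with the coalition value on the log-odds scale."""
    n = len(x)

    def v(S):
        # Features in S take x's values, the rest take the background values.
        z = [x[i] if i in S else background[i] for i in range(n)]
        return logit(model_proba(z))

    phi = [0.0] * n
    for j in range(n):
        others = [i for i in range(n) if i != j]
        for k in range(n):
            for S in combinations(others, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[j] += w * (v(set(S) | {j}) - v(set(S)))
    return phi
```

By the efficiency axiom, the phi values sum to logit(f(x)) - logit(f(background)); the log-odds scale is unbounded, so the additive decomposition never has to push a probability outside [0, 1].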
@twist777hz · a year ago
Thank you for doing this video in numerator layout. It seems many videos on machine learning use denominator layout, but I definitely prefer numerator layout! Could you possibly do a follow-up video where you talk about the partial derivative of a scalar function with respect to a MATRIX? Most documents I've looked at seem to use denominator layout for this type of derivative (some even use numerator layout with respect to a VECTOR and then switch to denominator layout with respect to a MATRIX). I assume it's because denominator layout preserves the dimensions of the matrix, making it more convenient for gradient descent, etc. What would you recommend I do?
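For reference, the two conventions for a scalar y and a matrix X (standard definitions; the worked example at the end is our own):

```latex
% Denominator layout: the result has the same shape as X,
\left( \frac{\partial y}{\partial X} \right)_{ij} = \frac{\partial y}{\partial x_{ij}};
% Numerator layout: the transpose of that,
\left( \frac{\partial y}{\partial X} \right)_{ij} = \frac{\partial y}{\partial x_{ji}}.
% Example: for y = a^{\top} X b, denominator layout gives
% \partial y / \partial X = a b^{\top}, numerator layout gives b a^{\top}.
% The denominator-layout result matching X's shape is why updates of the
% form X \leftarrow X - \eta \, \partial y / \partial X use that convention.
```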
@chrisleenatra · a year ago
So the permutation order is only used to decide which features get the random values? It's not creating a whole new instance whose feature order matches the permutation order? (The algorithm shows S, j, S-, but your example shows S, S-, j.)
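That reading matches how the sampling scheme is usually implemented: the permutation only partitions the features into "before j" (keep x's values) and "after j" (take the random instance's values); the instance's own feature order never changes. A minimal sketch of one Monte-Carlo step (`f`, `x`, and `z` are placeholder numpy-friendly inputs, not the video's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def one_permutation_contribution(f, x, z, j):
    """One Monte-Carlo term for feature j's Shapley value:
    x is the instance to explain, z a random instance from the data."""
    n = len(x)
    perm = rng.permutation(n)
    pos = int(np.where(perm == j)[0][0])
    after = perm[pos + 1:]        # S^-: features "after" j in the order
    b_with = x.copy()
    b_with[after] = z[after]      # j and the "before" set S keep x's values
    b_without = b_with.copy()
    b_without[j] = z[j]           # now j also takes z's value
    return f(b_with) - f(b_without)
```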
@chrisleenatra · a year ago
Thank you!
@fanhbz1018 · a year ago
Nice lecture. I also recommend Dr. Ahmad Bazzi's convex optimization series.
@shubhibans · a year ago
Great work
@maxgh8534 · a year ago
Hi, sadly your GitHub link doesn't work for me. Thanks for the video.
@jengoesnuts · a year ago
Can you explain more about the omitted-variable bias in M-plots? My teacher told me that you can explain the green graph by using a linear transformation to turn x1 and x2 into two independent random variables x1 and U. Is that true?
@ocamlmail · a year ago
Thank you so much for this video. Consider the example at 7:20: doesn't it look like feature permutation? Shouldn't I use expected values for the other variables (x2, x3)? Thanks in advance.
@hkrish26 · a year ago
Thanks
@appliedstatistics2043 · a year ago
The material is not accessible right now; can someone reupload it?
@yt-1161 · a year ago
What do you mean by "pessimistic bias"?
@sogari2187 · a year ago
If I understand correctly, it is pessimistic because you use, say, 90% of your available data as the training set and 10% as the test set. The model you test is therefore trained on only 90% of your data, but the final model you use/publish will be trained on 100% of it. The final model will probably perform a bit better, but you can't validate that because you have no test data left. So in the end you report the score of a model trained on 90% of the data, which is probably slightly worse than the model trained on 100%.
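A small simulation of that effect (our own sketch; the dataset, model, and sizes are arbitrary choices): the model fit on all available data tends to score slightly better on fresh data than the one fit on 90%, and that gap is exactly what the holdout estimate cannot see.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 500 points are "available" to us; the rest acts as an effectively
# infinite fresh sample to measure true generalization.
X, y = make_classification(n_samples=200_000, random_state=0)
X_avail, X_fresh, y_avail, y_fresh = train_test_split(
    X, y, train_size=500, random_state=0)

# Model fit on 90% of the available data (what the holdout evaluates):
X_tr, _, y_tr, _ = train_test_split(
    X_avail, y_avail, train_size=0.9, random_state=0)
m90 = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Model fit on 100% of the available data (what you actually publish):
m100 = LogisticRegression(max_iter=1000).fit(X_avail, y_avail)

print("90% model: ", m90.score(X_fresh, y_fresh))
print("100% model:", m100.score(X_fresh, y_fresh))
# On average across seeds the 100% model scores slightly higher;
# a single seed can go either way.
```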
@kcd5353 · 2 years ago
Good explanation, madam.
@appliedstatistics2043 · 2 years ago
Hello, I'm a student at TU Dortmund, and our lecture also uses your resources, but the link in the description is not working now. How can we get access to the resources?
@namrathasrimateti9119 · 2 years ago
Great Explanation!! Thank You
@Parthsarthi41 · 2 years ago
Excellent. Thanks
@vaibhav_uk · 2 years ago
Finally some serious content
@Rainstorm121 · 2 years ago
Thanks, sir. Excuse me (I have zero statistics & mathematics background), but what does this video suggest about using the Brier score for measuring forecasts?
@guillermotorres4988 · 2 years ago
Nice explanation! You are using the same set of HP configurations λi, with i = 1, ..., N, throughout the fourfold CV (in the inner loop). But what happens if I would like to use Bayesian hyperparameter search to sample the parameter values? For example, for each outer CV fold with its corresponding inner CV, could I use a Bayesian hyperparameter search? Then the set of HP configurations wouldn't be the same in each inner CV, so the question is: can the set of HP configurations differ in each inner CV, and is the nested cross-validation method still valid?
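Conceptually, the outer loop estimates the performance of the whole "search + refit" procedure, so the inner search is free to propose different configurations in each fold. A minimal sketch (our own illustration; RandomizedSearchCV stands in for any inner search, and a Bayesian optimizer would slot into the same place):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Inner search with 4-fold CV; the configurations it tries may differ
# from one outer fold to the next, and that is fine.
inner = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"max_depth": [2, 4, 8, None],
                         "min_samples_leaf": [1, 5, 10]},
    n_iter=5, cv=4, random_state=0)

# The outer 5-fold CV clones `inner`, so every outer fold reruns the
# search; the score estimates the whole tune-and-refit procedure.
scores = cross_val_score(inner, X, y, cv=5)
print(scores.mean())
```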
@dsbio4671 · 2 years ago
awesome!! thanks so much!
@oulahbibidriss7172 · 2 years ago
Thank you, well explained.
@canceledlogic7656 · 3 years ago
Here's a free resource on one of the most important academic concepts of the modern age: 800 views. GG, humanity. GG.