Aggregation · 1:44:51 · 21 days ago
Ethics as a Technological Problem · 1:42:35
What is to be done? (context) · 1:23:42 · 21 days ago
Data · 1:44:54 · 21 days ago
How to think about Technologies · 1:48:25 · 21 days ago
The Construction of Data and Distributions · 1:24:09
What is to be done? (rhetoric) · 1:28:17 · 21 days ago
Challenges of Choice of Fairness Measure · 1:37:18
Aggregation Functionals · 1:31:04 · 21 days ago
Theorie II - 21 - NP-Vollständigkeit · 1:13:18
Theorie II - 20 - P != NP? · 1:25:36 · 3 months ago
Theorie II - 18 - Komplexität · 1:17:55
Theorie II - 19 - P und NP · 1:27:44 · 3 months ago
Theorie II - 14 - Satz von Rice · 1:23:52
Theorie II - 13 - Reduktionen · 1:26:38
Theorie II - 11 - Abzählbarkeit · 1:28:14
Theorie II - 09 - Turingmaschinen · 1:23:26
Comments
@mehdibeigzadeh6014 2 days ago
Nice topic to watch, thanks for sharing.
@googlesong8679 15 days ago
This is the best LDA video I have seen. Thank you so much.
@AlgoNudger 26 days ago
Thanks.
@annawilson3824 27 days ago
10:47
@bobitsmagic4961 1 month ago
On the slide at 33:00 we are using the Jacobian instead of the Hessian. When the network only has a single output and we use the least-squares loss function, would the Newton step collapse to gradient descent with the gradient divided by its length? It feels like we are just throwing away all curvature information at this point.
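A quick numerical sketch of the point raised above, assuming the step in question is the Gauss-Newton step (Hessian approximated by JᵀJ) and using a made-up toy model rather than the network from the slide: for a single scalar output with squared-error loss, JᵀJ has rank one, and the pseudoinverse step points exactly along the negative gradient, rescaled by 1/‖J‖². The only "curvature" left is that one scalar.

```python
# Numerical check (toy example, not the lecture's network): with one scalar output
# and squared-error loss, the Gauss-Newton step -(J^T J)^+ grad equals
# -grad / ||J||^2, i.e. plain gradient descent with a scalar step size.
import numpy as np

def f(theta, x):
    # Hypothetical one-hidden-unit scalar "network", used only for illustration.
    w1, b1, w2 = theta
    return w2 * np.tanh(w1 * x + b1)

def jacobian(theta, x, eps=1e-6):
    # Finite-difference Jacobian (row vector) of the scalar output w.r.t. theta.
    J = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        J[i] = (f(tp, x) - f(tm, x)) / (2 * eps)
    return J

theta = np.array([0.7, -0.2, 1.3])
x, y = 0.5, 0.9

r = f(theta, x) - y                                # residual
J = jacobian(theta, x)                             # shape (3,)
grad = r * J                                       # gradient of 0.5 * r**2

gn_step = -np.linalg.pinv(np.outer(J, J)) @ grad   # Gauss-Newton step via pseudoinverse

print("negative gradient direction:", -grad / np.linalg.norm(grad))
print("Gauss-Newton step direction:", gn_step / np.linalg.norm(gn_step))
print("elementwise ratio step / (-grad):", gn_step / (-grad))  # constant = 1/||J||^2
print("1 / ||J||^2:", 1.0 / (J @ J))
```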
@JackRid-k5s 1 month ago
Nice content, just wondering why your views are so low.
@AdrienLegendre 1 month ago
Excellent presentation.
@matthieudegeiter3709 1 month ago
Very nice lecture! Thank you very much!
@mohammadhoseinrezaee-d1s 1 month ago
Does anyone have the exercises for this course?
@farshidshateri_wp 1 month ago
Man's mind ❌ Human mind ✅
@annawilson3824 1 month ago
1:20:00
@annawilson3824 1 month ago
50:55 Bayesian inference is not hard (c)
@HerzbergTesta 1 month ago
The best course ever... Thanks so much.
@HerzbergTesta 1 month ago
The best lecture ever.
@prateekpatel6082 1 month ago
In the GMM ELBO, why is the q(z) in the denominator missing? The ELBO looks incorrect.
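For reference, in the standard form of the ELBO the variational distribution q(z) does appear in the denominator inside the expectation; a generic statement (not necessarily the slide's exact notation for the GMM) is:

```latex
% Generic ELBO for a latent-variable model p(x, z) with variational distribution q(z).
% This is the textbook form, not a transcription of the lecture slide.
\log p(x) \;\ge\; \mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z)}{q(z)}\right]
\;=\; \mathbb{E}_{q(z)}\big[\log p(x \mid z)\big] \;-\; \mathrm{KL}\big(q(z) \,\|\, p(z)\big)
\;=\; \mathrm{ELBO}(q).
```

If the slide drops the q(z) term, it may be that the entropy of q is constant or handled in a separate step of that derivation, but that is speculation without seeing the slide.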
@leeris19 1 month ago
Cool explanation and visualizations!
@ahmedhamza3939 2 months ago
I don't understand how "A is independent of C given B" translates to: if I told you what B was, could you make any statement about A and C being independent of each other?
@Amulya7 2 months ago
Absolute goldmine.
@sitrakaforler8696 2 months ago
Title: "Foundations of Machine Learning: Walking Through Linear Regression" Introduction to basic concepts of machine learning - Course aims to prepare students for advanced machine learning courses - Focus on developing key concepts and intuitions behind machine learning Machine learning aims to detect patterns in data and make useful predictions in challenging situations. - Machine learning involves training an algorithm by giving it data and answers, allowing it to discriminate without explicit rules. - The focus of machine learning is on making useful predictions rather than learning about the world. Introduction to different types of machine learning problems - Supervised learning involves labeled data to distinguish classes - Unsupervised learning clusters data without labels, focusing on different kinds of animals Simple linear regression involves predicting a continuous variable based on one predictor. - - It uses a linear function with two parameters - intercept (beta zero) and slope (beta one) to fit the data. - - The loss function for linear regression is the mean squared error, which measures the squared deviation between actual and predicted values and is used to optimize the model. Introduction to Baby Linear Regression with a Single Parameter Beta - The model simplifies linear regression by ignoring the intercept and using only one parameter, beta. - The optimization process involves finding the minimum of a quadratic loss function using baby gradient descent with a learning rate. Understanding the challenges with non-convex functions and choosing the right learning rate in gradient descent. - Non-convex functions can lead to challenges in finding the global minimum using gradient descent. - Choosing the right learning rate is crucial as a large learning rate can cause divergence, while a small learning rate can lead to slow convergence. Explaining gradient descent for simple linear regression - Computing gradient using derivative with respect to beta not x - Utilizing derivative to update beta and converge to minimum point Understanding beta as a vector in two dimensions and its update rule using gradient - Beta can be considered as a vector with two coordinates, beta 0 and beta 1 - The gradient is a vector consisting of partial derivatives along each coordinate
@alexboche1349 2 months ago
Great lecture, thank you! At 39:24, to compute the covariance, I found his explanation incomplete because he doesn't address variation in x. I give a more detailed derivation below.
cov(f(x,θ), f(x',θ))
\approx cov(f(x,θ_*) + J(x,θ_*)(θ - θ_*), f(x',θ_*) + J(x',θ_*)(θ - θ_*))   [first-order Taylor expansion of f in θ]
= J(x,θ_*) cov(θ - θ_*, θ - θ_*) J(x',θ_*)'   [bilinearity of cov; the constant terms drop out]
= J(x,θ_*) Var(θ) J(x',θ_*)'
\approx J(x,θ_*) ψ^-1 J(x',θ_*)'   [Laplace approximation to the posterior on θ]
As for the negative, is that a typo? I thought he said it was, but then he said it wasn't? I'm confused.
@sangraampatwardhan1573 2 months ago
Yes, the negative sign was indeed a typo.
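Restating the derivation in this thread a bit more compactly (same assumptions: a first-order linearization of f around θ_* and a Laplace approximation whose posterior covariance over θ is ψ⁻¹):

```latex
% Linearized posterior covariance of the outputs, as derived in the comment above.
% psi^{-1} denotes the Laplace approximation's posterior covariance of theta.
f(x, \theta) \approx f(x, \theta_*) + J(x, \theta_*)(\theta - \theta_*)
\;\;\Longrightarrow\;\;
\mathrm{cov}\big(f(x,\theta),\, f(x',\theta)\big)
\approx J(x, \theta_*)\,\mathrm{cov}(\theta)\, J(x', \theta_*)^{\top}
\approx J(x, \theta_*)\, \psi^{-1} J(x', \theta_*)^{\top}.
```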
@annawilson3824 3 months ago
40:52
@sevdaebrahimi7199 3 months ago
Thank you so much for this great course.
@saripallijitendra3573 3 months ago
It would be nice to have the option of slides in English and captions in English, if possible :)
@annawilson3824 3 months ago
1:21:28
@shubhajitchakraborty 3 months ago
Would you please make videos in English? I'm from BHARAT 🇮🇳🙏🏻.
@florentin3141 3 months ago
Is it correct that, the way we define x on slide 10, the order matters? Otherwise p(x|f) would not be a probability distribution. I think this is quite inconsistent with the way x was used before: not as some vector in {0,1}^n but as the number of glasses-wearing people -> it would be more consistent to use a binomial coefficient as the normalization constant.
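For what it's worth, the distinction drawn here is the standard one between a distribution over ordered binary vectors and a distribution over counts, which is where the binomial coefficient enters; whether this matches slide 10's definition of x exactly cannot be checked from the comment alone:

```latex
% Ordered binary vector vs. count of successes (generic statement, not the slide's notation).
p(x \mid f) = \prod_{i=1}^{n} f^{\,x_i} (1 - f)^{\,1 - x_i}, \quad x \in \{0,1\}^n,
\qquad\text{whereas}\qquad
p(k \mid f) = \binom{n}{k} f^{\,k} (1 - f)^{\,n - k}, \quad k = \sum_{i=1}^{n} x_i .
```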
@jakobpcoder 4 months ago
Thanks for uploading!
@electric_sand 4 months ago
Tübingen ML has some of the best educational YouTube thumbnails. Usually very clean.
@AlgoNudger 4 months ago
Thanks.
@TharunanJR 4 months ago
Good video.
@enlightened8116 4 months ago
Best video so far on ANOVA.
@annawilson3824 4 months ago
1:23:40
@blup737 5 months ago
Next lecture, please!
@graedy2 6 months ago
The best video on this topic I have found so far, by a large margin. Excellent work!
@sumankhatri2679 6 months ago
Hi, please provide the code and exercises for this very nice course.
@sumankhatri2679 6 months ago
Can we get the course website?
@richardm5916 6 months ago
You are the best teacher in the world, thanks!
@rolanddeui3843 6 months ago
It was mentioned earlier that the product of two GPs is another GP only if it is over the same set of variables (x), and that it is something else if it is over two different sets of variables (say x and y). Doesn't this apply to the prediction step at 1:17:11 (from the 2nd to the 3rd line)?
@Pedritox0953 6 months ago
Great video!
@rudeprover 6 months ago
Having watched quite a lot of regression videos, I can confidently say this sums up and condenses everything a beginner needs to grasp linear regression smoothly (see what I did there?). Thank you so much for making this public!
@seanranieri3816 6 months ago
27:20 Really impressive, especially the pronunciation of Kolmogorov's name.
@christophec6992 6 months ago
Have you tried with silver nanowire networks?
@edbertkwesi4931 6 months ago
Awesome!
@SiqiYinEclipse 6 months ago
Very good.
@nipamghorai3217 6 months ago
Which book do you guys follow?
@sakcee 6 months ago
Can we have the homework or exercises for this course?
@Elena-fh6ez 7 months ago
Maike did a great job!
@bithigh8301 7 months ago
At 36:40, in sum(f_i(\theta) P_i), P has dimension 3N and f(\theta) has 207? How is this multiplication possible?
@bithigh8301 6 months ago
The answer is also in the SMPL paper, with better notation.
@bithigh8301 7 months ago
Awesome class. But the notation on slide 11 is a bit confusing: what is the advantage of having a unit vector and rotation angle in \omega_j? And is there a typo on slide 11: should \omega² be (\omega_j)²?
@bithigh8301 7 months ago
Okay, the answer is in the SMPL paper 😀
@itsamankumar403 7 months ago
Where can I get the code?