Supervised Learning and Support Vector Machines

5,011 views

Nathan Kutz

Comments: 7
@Enem_Verse, 3 years ago:
You are one of the greatest teachers of mankind.
@billykotsos4642, 4 years ago:
These videos are a treasure.
@anantchopra1663, 4 years ago:
What does the constraint imply geometrically in the linear SVM optimization problem?
@siranguru, 4 months ago:
Thank you for the lecture. Can anyone explain the terms in the optimization problem other than the loss function? Why do we need to minimize ||w||^2; does it reduce the distance of the hyperplane from the origin? Also, what is the "subject to" condition, where does it come from, and what is its purpose? And why should the points be parallel to the SVM line (assuming a dot product)? Correct me if any of this is wrong.
@JiaheWang-f4d, 2 months ago:
Minimizing ||w||^2 actually maximizes the margin, since the margin is inversely proportional to ||w||; this is not mentioned in the lecture. You can find other explanations on the internet.
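
For reference, the standard derivation of this point: with the margin planes normalized to w · x + b = ±1, take x_+ on the +1 plane and x_- on the -1 plane; the gap between the planes is then 2/||w||, so minimizing ||w||^2 maximizes the margin:

w \cdot x_+ + b = +1, \qquad w \cdot x_- + b = -1
\;\Rightarrow\; w \cdot (x_+ - x_-) = 2
\;\Rightarrow\; \text{margin} = \frac{w}{\|w\|} \cdot (x_+ - x_-) = \frac{2}{\|w\|}
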
@JiaheWang-f4d, 2 months ago:
Can anyone explain the "subject to min |x_j · w| = 1" condition? Why does it take this form? (16:03)
@siranguru, 2 months ago:
We take the decision line as w · x + b = 0 and the margin lines as w · x + b = ±k. Normalizing by k gives w · x + b = ±1. With y = +1 for the green dots and y = -1 for the magenta dots, requiring every point to lie on its correct side gives the constraint y(w · x + b) >= 1. To allow some leniency rather than a hard bound, we write w · x + b = 1 - ζ, where ζ (zeta) is a slack variable. If ζ = b, then w · x = 1, which makes the constraint y(w · x) >= 1, i.e. the minimum of w · x is 1, since y = 1. Correct me if I am wrong.
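
For reference, a minimal sketch of the soft-margin linear SVM described in this thread, trained by subgradient descent on the regularized hinge loss. This is not code from the lecture; the synthetic data, step size lr, and regularization weight lam are illustrative choices:

import numpy as np

rng = np.random.default_rng(0)
# Two linearly separable 2-D clusters: label +1 ("green") and -1 ("magenta").
X = np.vstack([rng.normal(+2.0, 0.5, (50, 2)),
               rng.normal(-2.0, 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.1  # illustrative regularization weight and step size
for epoch in range(500):
    margins = y * (X @ w + b)   # y_i (w . x_i + b) for every point
    viol = margins < 1.0        # points with positive slack zeta_i = 1 - margin
    # Subgradient of  lam * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i (w . x_i + b))
    grad_w = 2.0 * lam * w - (y[viol][:, None] * X[viol]).sum(axis=0) / len(X)
    grad_b = -y[viol].sum() / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

# On separable data the smallest value of y_i (w . x_i + b) approaches 1:
# the support vectors sit on the normalized planes w . x + b = +-1,
# matching the constraint discussed in the reply above.
print("min y_i (w.x_i + b):", (y * (X @ w + b)).min())
print("w:", w, "b:", b)
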
Related videos:
Feature Selection and Data Mining
23:42
Nathan Kutz
4.7K views
Multi-Layer Networks and Activation Functions
27:17
Nathan Kutz
4.3K views
Supervised Learning and Linear Discriminants
19:05
Nathan Kutz
6K views
Regression and Ax = b: Over- and under-determined systems
38:25
Nathan Kutz
9K views
The Stochastic Gradient Descent Algorithm
27:49
Nathan Kutz
12K views
Model selection: Cross validation
31:16
Nathan Kutz
5K views
Unsupervised Learning: Mixture Models
21:45
Nathan Kutz
5K views
Introduction to Machine Learning: Support Vector Machines
22:41
The Backpropagation Algorithm
24:24
Nathan Kutz
8K views
Classic Curve Fitting
25:51
Nathan Kutz
10K views
Lecture 01: The Learning Problem
1:21:28
CorestratAI
578 views
Deep Convolutional Neural Networks
30:29
Nathan Kutz
9K views