10. Support Vector Machines

Inside Bloomberg

We define the soft-margin support vector machine (SVM) directly in terms of its objective function (L2-regularized hinge loss minimization over a linear hypothesis space). Using our knowledge of Lagrangian duality, we find a dual form of the SVM problem, apply the complementary slackness conditions, and derive some interesting insights into the connection between "support vectors" and margin. Read "SVM Insights from Duality" in the Notes below for a high-level view of this mathematically dense lecture.
Notably absent from the lecture is the hard-margin SVM and its standard geometric derivation. Although that derivation is fun, since it starts from the simple and visually appealing idea of maximizing the "geometric margin", the hard-margin SVM is rarely useful in practice, as it requires linearly separable data, which rules out any dataset with label noise or with repeated inputs that carry different labels. One fixes this by introducing "slack" variables, which leads to a formulation equivalent to the soft-margin SVM we present. Once slack variables are introduced, I've personally found the margin-maximization interpretation to be much hazier, and I find it more useful to understand the SVM as "just" a particular loss function paired with a particular regularizer. That said, Brett Bernstein gives a very nice development of the geometric approach to the SVM, which is linked in the References below. At the very least, it's a great exercise in basic linear algebra.
Access the full course at bloom.bg/2ui2T4q
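As a concrete illustration of the objective described above, here is a minimal sketch (not taken from the lecture) of L2-regularized hinge-loss minimization over a linear hypothesis by subgradient descent. The toy data, step-size schedule, and regularization weight `lam` are all illustrative choices:

```python
import numpy as np

def svm_objective(w, X, y, lam):
    """L2-regularized average hinge loss for a linear predictor w.
    Labels y are assumed to be in {-1, +1}."""
    margins = y * (X @ w)
    hinge = np.maximum(0.0, 1.0 - margins)
    return lam * np.dot(w, w) + hinge.mean()

def svm_subgradient(w, X, y, lam):
    """A subgradient of the objective (the hinge loss is
    non-differentiable exactly at margin 1)."""
    margins = y * (X @ w)
    active = margins < 1.0  # examples currently inside or violating the margin
    grad_hinge = -(X[active].T @ y[active]) / len(y)
    return 2.0 * lam * w + grad_hinge

# Tiny linearly separable toy dataset, labels in {-1, +1}.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
for t in range(1, 501):
    w -= (0.1 / t) * svm_subgradient(w, X, y, lam=0.01)

# Objective after training; at w = 0 the objective is exactly 1.0.
print(svm_objective(w, X, y, lam=0.01))
```

Points with margin strictly greater than 1 contribute nothing to this subgradient, which is the primal-side shadow of the duality result in the lecture: only the "support vectors" determine the solution.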