There were so many places I wanted to give a like for. Excellent talk and research direction. Thank you for sharing
@JeeveshJuneja-n1b 9 months ago
The green dots at 18:50 are actually classifiers. The classifier at any iteration is the line perpendicular to the line joining the green dot and the origin. If the green point moves farther away from the origin without changing direction, it just means that gradient descent is blowing up the parameters of the linear classifier without changing the classifier's orientation. The loss of the classifier can be thought of as the sum of (signed) distances of the data points from the classifier line x=0. The rotational symmetry of the problem can be used to infer that the green points move along the y=0 line. As the data starts to become separable, we are in a regime where there are red and blue points on either side of the x=0 classifier. This means there are both positive and negative contributions to the loss, so GD stops, i.e., the magnitude converges. But when the data is completely separable, GD keeps going indefinitely and drives the magnitude to infinity.
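A quick way to see this numerically (a toy sketch of my own, not the speaker's exact figure): train logistic regression with a single weight by gradient descent on overlapping vs. perfectly separable 1-D data. With overlap the weight settles at a finite value; with separable data the loss can always be decreased by scaling the weight up, so its magnitude keeps growing even though the decision boundary itself stops moving.

```python
import numpy as np

def run_gd(x, y, steps=20000, lr=0.1):
    """Gradient descent on the mean logistic loss for a single weight w (no bias)."""
    w = 0.0
    for _ in range(steps):
        margins = y * w * x                                   # signed margins y*w*x
        grad = -np.mean(y * x / (1.0 + np.exp(margins)))      # d/dw of mean log(1 + exp(-y*w*x))
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y_overlap = np.where(x + 0.5 * rng.normal(size=200) > 0, 1.0, -1.0)  # label noise near x = 0
y_separable = np.where(x > 0, 1.0, -1.0)                             # perfectly separable

print("overlapping classes:", run_gd(x, y_overlap))      # converges to a finite weight
print("separable classes  :", run_gd(x, y_separable))    # keeps growing with more steps
```

In the separable case the weight grows roughly like log(t), which matches the green dot drifting off to infinity in a fixed direction.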
@JeeveshJuneja-n1b 9 months ago
Why does the Hessian being PSD imply span(X) = span(M) at 32:45?
@peasant12345 1 year ago
46:12 ????
@ariel415el 2 years ago
Great talk! The background review was very clear.
@javad3296 2 years ago
It got real quiet real fast :)
@meetings5145 3 years ago
Me too
@mohammadrezayazdanifar6306 3 years ago
The video isn't synced with the audio. I had a terrible time figuring out what the lecturer was trying to say.
@jfjfcjcjchcjcjcj9947 3 years ago
Nice talk 🙂. Thanks!
@victorzurkowski2388 3 years ago
Important line of research
@Ilikepi123233 3 years ago
It should be ML4A, not M4LA (in the video title)
@DanNavon-b7r 3 years ago
Wow, this feels like an IMO exercise.
@irvin_umana 4 years ago
This is so good! Wish he had taken more time.
@CamiloDS 4 years ago
I disagree with Dr. Broderick. I argue that some humans may be able to see in 3 dimensions xD. Great talk, thank you for posting :)
@felixurciacruz3423 4 years ago
omg
@011galvint 4 years ago
Is there a robot or a severe pedant trying to keep the speaker in the middle of the shot? Very rarely do you see what has been written as he is referring to it.
@joecolvin5719 4 years ago
Infuriating.
@rodas4yt137 3 years ago
Agreed, there's someone operating the camera who doesn't understand that seeing the lecturer is the least important thing.
@wahabfiles6260 4 years ago
Well, one needs to provide some context first, not just start writing formulas.
@shreyashnadage3459 4 years ago
Can somebody share the slides, please?
@shreyashnadage3459 4 years ago
Where can I find the code please?
@xjchen4659 4 years ago
The central theory behind this talk, the master theorem (Theorem 1 on page 7) of "Exponential Line Crossing Inequalities" by the speaker and his coauthors, is essentially the same as inequalities (4) and (12) of Theorem 6 of Xinjia Chen's paper, "New Optional Stopping Theorems and Maximal Inequalities on Stochastic Processes," arxiv.org/abs/1207.3733, published in 2012.

As can be found in Chen's paper, based on the general results of Theorem 6, more explicit line-crossing inequalities are given in Theorem 7 and Corollaries 5-8. In particular, explicit line-crossing inequalities were derived for the exponential family in Corollary 5 and for bounded-variation martingales in Corollary 6. It can be checked that the master theorem yields the same inequalities. Chen's work was done in 2012 and has been published in an SPIE conference proceedings (see pages 4-6 of the paper "A Statistical Approach for Performance Analysis of Uncertain Systems").

In "Exponential Line Crossing Inequalities," the authors use the idea of a sub-ψ process with variance process V_t to unify all kinds of concentration inequalities. In Chen's paper, the same idea was proposed by using the function φ and a variance bounding function V_t to unify a wide range of concentration inequalities.
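For readers wondering what a "line-crossing inequality" even looks like, here is a minimal sketch of the textbook special case (not the master theorem of either paper above, just the standard exponential-supermartingale-plus-Ville argument): for a ±1 random walk S_t, the process exp(λ S_t − t λ²/2) is a nonnegative supermartingale, so Ville's inequality gives a bound on the probability that the walk ever crosses the line with intercept log(1/δ)/λ and slope λ/2, uniformly over all times.

```python
import numpy as np

# Monte Carlo check of the line-crossing bound
#   P( exists t : S_t >= log(1/delta)/lam + (lam/2)*t ) <= delta
# for a +/-1 random walk S_t, checked up to a finite horizon.

rng = np.random.default_rng(1)
lam, delta, horizon, trials = 0.5, 0.05, 2000, 20000

intercept = np.log(1.0 / delta) / lam
slope = lam / 2.0
line = intercept + slope * np.arange(1, horizon + 1)

crossings = 0
for _ in range(trials):
    s = np.cumsum(rng.choice([-1.0, 1.0], size=horizon))  # random-walk path S_1..S_T
    crossings += np.any(s >= line)

print(f"empirical crossing rate: {crossings / trials:.4f}  (bound: {delta})")
```

The empirical crossing rate should come in below δ; the sub-ψ framework in the talk (and the φ / V_t framework in Chen's paper) generalizes this one construction to many increment distributions and variance processes at once.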
@MohamedMohamed-br8lh 5 years ago
Nice presentation!
@martinschulze5399 5 years ago
An Indian speaker without an accent <3
@kalidasy4867 5 years ago
Please show the board, as and when the speaker is referring to it