Comments
@胡一驰
@胡一驰 1 month ago
@sayekhaniha8309
@sayekhaniha8309 1 month ago
Thank you for sharing this video.
@PleochroicSpodumene
@PleochroicSpodumene 3 months ago
What a fantastic presentation, thank you. This was a lot of fun.
@parthsalat
@parthsalat 4 months ago
Hi Sanjoy, did you immigrate to the USA, or were you raised there?
@rotors_taker_0h
@rotors_taker_0h 6 months ago
There were so many places where I wanted to give a like. Excellent talk and research direction. Thank you for sharing.
@JeeveshJuneja-n1b
@JeeveshJuneja-n1b 9 months ago
The green dots at 18:50 are actually classifiers. The classifier at any iteration is the line perpendicular to the line joining the green dot and the origin. If the green point moves farther away from the origin without changing direction, it just means that gradient descent is blowing up the parameters of the linear classifier without changing its orientation.

The loss of the classifier can be thought of as the sum of (signed) distances of the data points from the classifier line x=0, and the rotational symmetry of the problem can be used to infer that the green points move along the y=0 line. As the data starts to become separable, we are in a regime where there are red and blue points on either side of the x=0 classifier. This means there are both positive and negative contributions to the loss, and hence GD stops, i.e. the magnitude converges. But when the data is completely separable, GD just continues indefinitely and takes the magnitude to infinity.
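(To make that last point concrete, here is a minimal NumPy sketch of my own, not code from the talk: on linearly separable data, gradient descent on the logistic loss keeps growing the parameter norm while the direction of the classifier stabilizes, whereas on non-separable data the norm levels off. The Gaussian-blob data, learning rate, and step counts are all illustrative assumptions.)

import numpy as np

rng = np.random.default_rng(0)

def run_gd(X, y, steps, lr=0.1):
    # Gradient descent on the mean logistic loss: (1/n) sum_i log(1 + exp(-y_i <x_i, w>)).
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = y * (X @ w)
        grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

# Separable data: two well-separated Gaussian blobs (labels +1 / -1).
X_sep = np.vstack([rng.normal(+3.0, 1.0, (50, 2)), rng.normal(-3.0, 1.0, (50, 2))])
# Non-separable data: heavily overlapping blobs.
X_mix = np.vstack([rng.normal(+0.3, 1.0, (50, 2)), rng.normal(-0.3, 1.0, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

for name, X in [("separable", X_sep), ("non-separable", X_mix)]:
    w_short = run_gd(X, y, steps=2_000)
    w_long = run_gd(X, y, steps=20_000)
    # On separable data ||w|| keeps growing while w/||w|| stabilizes;
    # on non-separable data ||w|| converges to a finite minimizer.
    print(name,
          "|w| after 2k steps:", round(np.linalg.norm(w_short), 2),
          "|w| after 20k steps:", round(np.linalg.norm(w_long), 2),
          "direction:", np.round(w_long / np.linalg.norm(w_long), 2))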
@JeeveshJuneja-n1b
@JeeveshJuneja-n1b 9 months ago
Why does the Hessian being PSD imply span(X) = span(M) at 32:45?
@peasant12345
@peasant12345 1 year ago
46:12 ????
@ariel415el
@ariel415el 2 years ago
Great talk! The background review was very clear.
@javad3296
@javad3296 2 years ago
It got real quiet real fast :)
@meetings5145
@meetings5145 3 years ago
Me too
@mohammadrezayazdanifar6306
@mohammadrezayazdanifar6306 3 years ago
The video isn't synced with the audio. I had a terrible time figuring out what the lecturer was trying to say.
@jfjfcjcjchcjcjcj9947
@jfjfcjcjchcjcjcj9947 3 years ago
Nice talk 🙂. Thanks!
@victorzurkowski2388
@victorzurkowski2388 3 years ago
Important line of research
@Ilikepi123233
@Ilikepi123233 3 years ago
It should be ML4A, not M4LA (in the video title)
@DanNavon-b7r
@DanNavon-b7r 3 years ago
Wow, this feels like an IMO exercise.
@irvin_umana
@irvin_umana 4 years ago
This is so good! Wish he had taken more time.
@CamiloDS
@CamiloDS 4 years ago
I disagree with Dr. Broderick. I argue that some humans may be able to see in 3 dimensions xD. Great talk, thank you for posting :)
@felixurciacruz3423
@felixurciacruz3423 4 years ago
omg
@011galvint
@011galvint 4 years ago
Is there a robot or a severe pedant trying to keep the speaker in the middle of the shot? Very rarely do you see what has been written as he is referring to it.
@joecolvin5719
@joecolvin5719 4 years ago
Infuriating.
@rodas4yt137
@rodas4yt137 3 years ago
Agreed, there's someone there who doesn't understand that seeing the lecturer is the least important thing.
@wahabfiles6260
@wahabfiles6260 4 years ago
Well, he needs to provide some context first, not just start writing formulas.
@shreyashnadage3459
@shreyashnadage3459 4 years ago
Can somebody share the slides, please?
@shreyashnadage3459
@shreyashnadage3459 4 years ago
Where can I find the code please?
@xjchen4659
@xjchen4659 4 years ago
The central theory behind this talk, the master theorem (Theorem 1 on page 7) of "Exponential Line Crossing Inequalities" by the speaker and his coauthors, is essentially the same as inequalities (4) and (12) of Theorem 6 of Xinjia Chen's paper "New Optional Stopping Theorems and Maximal Inequalities on Stochastic Processes," arxiv.org/abs/1207.3733, published in 2012.

As can be found in Chen's paper, more explicit line-crossing inequalities based on the general results of Theorem 6 are given in Theorem 7 and Corollaries 5-8. In particular, explicit line-crossing inequalities have been derived for the exponential family in Corollary 5 and for bounded-variation martingales in Corollary 6. It can be checked that the master theorem yields the same inequalities. Chen's work was done in 2012 and was published in SPIE conference proceedings (see pages 4-6 of the paper "A Statistical Approach for Performance Analysis of Uncertain Systems").

In "Exponential Line Crossing Inequalities," the authors use the idea of a sub-\psi process and a variance process V_t to unify all kinds of concentration inequalities. In Chen's paper, the same idea was proposed using the function \varphi and a variance bounding function V_t to unify a wide range of concentration inequalities.
@MohamedMohamed-br8lh
@MohamedMohamed-br8lh 5 years ago
Nice presentation!
@martinschulze5399
@martinschulze5399 5 years ago
An Indian speaker without an accent <3
@kalidasy4867
@kalidasy4867 5 years ago
Please show the board as and when the speaker is referring to it.
@DJvolli
@DJvolli 6 years ago
Thank you!