10:10 Convex functions
12:50 Examples on R
15:23 Examples on R^n and R^mxn
20:09 Restriction of a convex function to a line
28:43 Extended-value extension
31:09 First-order condition
35:39 Second-order conditions
37:20 Examples
49:38 Epigraph and sublevel set
52:10 Jensen's inequality
57:21 Operations that preserve convexity
59:17 Positive weighted sum & composition with affine function
1:02:05 Pointwise maximum
1:04:39 Pointwise supremum
1:08:13 Composition with scalar functions
1:13:31 Vector composition
@nileshdixit9672 • 4 years ago
You really deserve more than just likes
@shiv093 • 4 years ago
@@nileshdixit9672 Honestly, I put them there to help me revise topics quickly. Happy that they are helping others too.
@mariomariovitiviti • 4 years ago
keep liking this one to keep it up
@jamalahmedhussein1341 • 10 years ago
start from 10:10
@rogeryau3115 • 7 years ago
No problem. I usually play at 1.5x speed, so I will get ready after 10 minutes.
In case anyone else was confused: at 41:20 the "softmax" he describes there is different from the "softmax" in deep learning. The deep learning softmax should probably be called something like "softargmax" instead.
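To make the distinction concrete, here is a minimal NumPy sketch (the names log_sum_exp and softargmax are my own, not standard library functions): log-sum-exp is the smooth approximation of max that the lecture calls "softmax", while the deep-learning softmax smoothly approximates argmax.

import numpy as np

def log_sum_exp(x):
    # Smooth approximation of max(x): the "softmax" of the lecture.
    m = np.max(x)                        # shift for numerical stability
    return m + np.log(np.sum(np.exp(x - m)))

def softargmax(x):
    # The deep-learning "softmax": a probability vector peaked at the
    # index of the largest entry, i.e. a soft argmax.
    e = np.exp(x - np.max(x))
    return e / e.sum()

x = np.array([1.0, 2.0, 5.0])
print(log_sum_exp(x))    # ~5.07, close to max(x) = 5
print(softargmax(x))     # ~[0.017 0.047 0.936], peaked at the argmax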
@jaimelima2420 • 3 years ago
The real 'inspirational' essence is what he says at 32:50 and keeps saying for two minutes or so. Thanks for sharing.
@mhpt74 • 4 years ago
Great teacher and wonderful sense of humor!
@benjamingoldstein14 • 1 month ago
Professor Boyd mentions around 20:00 that the spectral norm of a matrix X is a very complicated function of X, as the square root of the largest eigenvalue of X^T X. I would mention, however, that this norm has a very simple geometric interpretation: it is the maximum factor by which X can "stretch" a vector through multiplication. Just as the largest eigenvalue of a matrix is the maximum factor by which the matrix can stretch a vector if you don't allow for rotation, the largest singular value is the most that the matrix can stretch any vector if you do allow for rotation. It therefore also has the interpretation of the magnitude of the largest axis of the ellipse that is the image of the unit L2 ball under (left) multiplication by X.
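A quick numerical sanity check of that interpretation (a sketch; the sampled maximum only approaches the true norm from below):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))

# Spectral norm: square root of the largest eigenvalue of X^T X,
# equivalently the largest singular value of X.
spec_norm = np.linalg.norm(X, 2)

# Maximum stretch factor ||X v|| over many random unit vectors v.
V = rng.standard_normal((3, 100_000))
V /= np.linalg.norm(V, axis=0)          # normalize columns to unit length
max_stretch = np.linalg.norm(X @ V, axis=0).max()

print(spec_norm, max_stretch)           # max_stretch ≈ spec_norm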
@janiceliu5473 • 3 years ago
Shouldn't the determinant of the Hessian of the quadratic-over-linear function be exactly 0, rather than greater than or equal to 0? 41:00
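For what it's worth, computing the Hessian in the scalar case f(x, y) = x^2/y with y > 0 shows it is a rank-one outer product, so its determinant is indeed exactly 0; the "⪰ 0" on the slide asserts positive semidefiniteness of the matrix, not a bound on its determinant:

\nabla^2 f(x, y) = \frac{2}{y^3}
\begin{bmatrix} y^2 & -xy \\ -xy & x^2 \end{bmatrix}
= \frac{2}{y^3}
\begin{bmatrix} y \\ -x \end{bmatrix}
\begin{bmatrix} y \\ -x \end{bmatrix}^{\top}
\succeq 0,
\qquad
\det \nabla^2 f(x, y) = \frac{4}{y^6}\left(x^2 y^2 - x^2 y^2\right) = 0.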
@grunder201 • 3 years ago
Craving more of this kind of stuff.
@moxopal5681 • 4 years ago
1:05:50 It is extremely useful to know if you are studying control theory.
@DarkDomnu • 12 years ago
this guy is a winner.
@rogeryau3115 • 7 years ago
2:06 That blink, that grimace.
@abhimanyu3244 • 6 years ago
Thank you, Prof. Boyd!
@manueljenkin95 • 2 years ago
13:58: the condition R++ is important. x^3 is not convex on R, I think.
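A quick check with the second-order condition confirms this:

f(x) = x^3, \qquad f''(x) = 6x
\begin{cases} \ge 0, & x \ge 0, \\ < 0, & x < 0, \end{cases}

so x^3 is convex on \mathbf{R}_{++} but not on all of \mathbf{R}.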
@rabeamahfoud3225 • 7 years ago
I'm a Ph.D. student in electrical engineering, studying this book by myself. These lectures are so helpful for getting started with convex optimization. How can I get the homework Professor Boyd is talking about?
@guoweih7339 • 5 years ago
You can find the textbook, assignments, and solutions on this page: see.stanford.edu/Course/EE364A/94
@revooshnoj4078 • 5 years ago
@@guoweih7339 thanks man
@happydrawing7309 • 5 years ago
@@guoweih7339 thank you sir.
@rodfloripa10 • 3 years ago
Did they change the assignments, since the answers are available?
@kenahoo • 2 years ago
@@rodfloripa10 No - students just have to realize that at this level, assignments are a tool for learning, not a tool for getting grades.
@personalchannel6382 • 11 years ago
Professor Boyd is super smart and definitely a researcher who has done a large number of rigorous proofs. But even then, no one comes close to David Snider, my professor at the University of South Florida, in conveying mathematics to engineering students. Not even Prof. Boyd. Snider is retired now, and he wrote all of the math books that we studied in grad school. However, this course is awesome :). I love the rigor of it; it is very helpful for PhD students who need to come up with proofs for their theorems.
@izzyece707 • 9 years ago
abuhajara, do you have a specific book to suggest that conveys it really well?
@annawilson3824 • 1 year ago
24:17 If you have no idea whether the function is convex or not: generate a few lines, plot, and look!
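That trick is easy to script. A minimal sketch (the test function f and all names here are my own choices, using log-sum-exp as a known-convex example):

import numpy as np
import matplotlib.pyplot as plt

# f is convex iff its restriction to every line, g(t) = f(x0 + t*v),
# is convex; plotting a few random restrictions is a cheap sanity
# check (it can refute convexity, but never prove it).
def f(x):
    return np.log(np.sum(np.exp(x)))    # log-sum-exp, known to be convex

rng = np.random.default_rng(0)
t = np.linspace(-2.0, 2.0, 200)
for _ in range(5):
    x0 = rng.standard_normal(4)         # random base point
    v = rng.standard_normal(4)          # random direction
    plt.plot(t, [f(x0 + ti * v) for ti in t])

plt.xlabel("t")
plt.ylabel("g(t) = f(x0 + t v)")
plt.show()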
@engr.aliarsalan2628 • 7 years ago
Really Helpful.
@anamericanprofessor • 6 years ago
Is there an active link to the class notes that are presented in these lectures? It would be more leisurely to watch the videos and then write down notes afterwards.
@fexbinder • 5 years ago
web.stanford.edu/~boyd/cvxbook/ The book and the lecture slides are publicly available on the Stanford site.
@gamalzayed2247 • 1 year ago
Thank you for this nice lecture ❤
@7nard • 15 years ago
Good lectures. However, if you are just dropping by like me and want to skip the chatter about admin and classroom issues, start at the 10:10 mark.
@emilywong4601 • 6 years ago
I studied optimization, including linear algebra techniques, at Golden Gate University in a major called Computational Decision Analysis from 1999 to 2003. We used SAS, Unix, and Excel Solver.
@filippovannella4957 • 5 years ago
so?
@JTMoustache • 1 year ago
Composition rule mnemonic:
1) The rule is for determining when f has the SAME convexity as h; there is no rule for f to be the opposite of h.
2) h has to be monotone.
3) The monotonicity should be the equality test of the convexities: if the convexity of g == the convexity of h, then h should be increasing, else decreasing.
Outside of that, no simple rule.
@JTMoustache • 1 year ago
Hm maybe not 😅
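For reference, here is a sketch of the scalar composition rules as usually stated, simplified by assuming h and g are defined everywhere (the lecture's full version uses the extended-value extension of h). For f(x) = h(g(x)):

\begin{align*}
f \text{ is convex if } & h \text{ is convex and nondecreasing, and } g \text{ is convex};\\
f \text{ is convex if } & h \text{ is convex and nonincreasing, and } g \text{ is concave};\\
f \text{ is concave if } & h \text{ is concave and nondecreasing, and } g \text{ is concave};\\
f \text{ is concave if } & h \text{ is concave and nonincreasing, and } g \text{ is convex}.
\end{align*}

So point 1 of the mnemonic holds (f inherits the curvature of h), but the monotonicity condition couples h's monotonicity with g's curvature, not with an equality test of convexities.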
@ismailelezi • 7 years ago
I feel like a noob. I am understanding the main points, but still, the examples are totally non-obvious. Of course, it is a '300' class, so I should have expected that.
@shiladityabiswas2803 • 4 years ago
If Ryan Reynolds became a prof.
@cherishnguyen506 • 8 years ago
I understand that A*x = λ*x means λ is an eigenvalue of A. So how could X^(-1/2)VX^(-1/2) have the eigenvalue λ?
@akshayramachandran7857 • 5 years ago
Didn't understand your question. Is it:
1. Why is the eigenvalue of X the same as that of X^(-1/2)VX^(-1/2)? OR
2. Why does an eigenvalue exist for X^(-1/2)VX^(-1/2)?
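In case it helps, this is the step from the lecture: restrict f(X) = log det X to the line X + tV with X ≻ 0. The λ_i are the eigenvalues of X^(-1/2)VX^(-1/2), not of X, and they appear by factoring out X^(1/2):

g(t) = \log\det(X + tV)
     = \log\det\!\left( X^{1/2}\bigl(I + t\,X^{-1/2} V X^{-1/2}\bigr) X^{1/2} \right)
     = \log\det X + \sum_{i=1}^{n} \log(1 + t\lambda_i).

Since X^(-1/2)VX^(-1/2) is symmetric, its eigenvalues λ_i are real (which answers question 2), and each term log(1 + tλ_i) is concave in t.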
@Alex-if6mv • 1 year ago
He says 'very painful' as if he knows a lo-o-ot about pain! 😀
@Abuwesamful • 7 years ago
Someone is asking what diag(z) is? Then how can the professor be sure that the students are following, if they don't actually understand this very primitive item in that equation?
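For anyone in the same boat: diag(z) is simply the square matrix with the entries of the vector z on its diagonal and zeros elsewhere. A one-line NumPy illustration:

import numpy as np

z = np.array([1.0, 2.0, 3.0])
print(np.diag(z))   # diagonal matrix built from z
# [[1. 0. 0.]
#  [0. 2. 0.]
#  [0. 0. 3.]]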
@hayderatrah • 11 years ago
The pace is ridiculously fast. The book (which is well written) must be read first before one can cope with these videos.
@SequinBrain • 3 years ago
what book?
@rashilalamichhane9750 • 2 years ago
49:48 epigraphs
@000HakunaMatata000 • 13 years ago
@10:20
@DarkDomnu • 12 years ago
hilarious prof.
@saiftazir • 6 years ago
Extended-value extensions: kzbin.info/www/bejne/oZSyoJeweayJasU
@summerland2321 • 2 years ago
A lefty :D
@muratcan__22 • 6 years ago
32:00 wow, damn
@learningsuper6785 • 7 years ago
This guy is the kind of professor I would avoid taking classes from at all costs. He spends too much time talking about stuff that's not helpful to understanding the subject, like debating with himself whether a concept is obvious or not, and there's lots of hand-waving when he really should have drawn a graph on a piece of paper. His Coursera convex opt. course is even worse. I'd recommend reading a book rather than watching his videos to learn the subject.
@maxwellstrange4572 • 6 years ago
I think he's super entertaining; he makes stuff make sense and seem interesting when it would otherwise seem dry.
@ElektrikAkar • 6 years ago
Like it or not, he is the superhero of convex optimization and has the best material. Even the trivial examples he gives may help one broaden his or her perspective.
@sridharthiagarajan479 • 6 years ago
Disagree; it's quite entertaining and engaging, and it doesn't skimp on the important stuff.
@wuzhai2009 • 5 years ago
Disagree. Knowing that it is hard to make things precise shows how much he knows. At the points where you say he is 'hand-waving', I suggest you delve deeper, and you will appreciate why he said what he said.