Dear Professor, We cannot thank you enough for sharing your knowledge with us, especially in a way that we, as non-experts, can grasp. Your Python code makes the lessons immediately usable for our daily problems. I hope you will one day extend it to numerical solutions of the Maxwell equations, to simulate S-parameters for an interconnect structure containing power ports.
@jairjuliocc 4 years ago
It's awesome what you can do with linear algebra. Thank you for the explanation
@Eigensteve 4 years ago
Glad you think so!
@cz5672 2 years ago
Why is s = (Theta+)y called the L2 solution? I thought there should be an extra L2-norm term (||s||_2) to penalize before it can be called an L2 solution. Am I missing something?
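For reference, no explicit penalty is needed: the pseudoinverse solution is itself the minimum-L2-norm member of the infinitely many exact solutions. A minimal NumPy sketch (the sizes and seed are made up):

```python
import numpy as np

# Toy underdetermined system: 10 equations, 50 unknowns.
rng = np.random.default_rng(0)
Theta = rng.standard_normal((10, 50))
y = rng.standard_normal(10)

# s = Theta^+ @ y satisfies Theta @ s = y exactly, and among ALL exact
# solutions it is the one with the smallest ||s||_2 -- which is why it is
# called the L2 (minimum-norm) solution even though nothing is penalized.
s_l2 = np.linalg.pinv(Theta) @ y
assert np.allclose(Theta @ s_l2, y)
```

Any other exact solution differs from s_l2 by a null-space vector orthogonal to it, so its L2 norm can only be larger.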
@ivankwok9104 2 months ago
The Python code just calls the function minimize(...) and returns the result. What is the principle behind this function? And why do we compute the answer to this kind of underdetermined problem?
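In case it helps: minimize() is a general-purpose constrained optimizer from SciPy; in this setting it is handed the L1 norm as the objective and Theta @ s = y as an equality constraint. Because the system is underdetermined, infinitely many s fit the data exactly, and minimizing ||s||_1 among them tends to select a sparse one. A sketch of that usage (sizes and seed are made up; this mirrors, rather than reproduces, the lecture code):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
Theta = rng.standard_normal((20, 60))   # 20 measurements, 60 unknowns
y = rng.standard_normal(20)

# Objective: the L1 norm of s.  Constraint: Theta @ s = y (exact fit).
def l1_norm(s):
    return np.linalg.norm(s, 1)

cons = {'type': 'eq', 'fun': lambda s: Theta @ s - y}
s0 = np.linalg.pinv(Theta) @ y          # warm start from the L2 solution
res = minimize(l1_norm, s0, method='SLSQP', constraints=[cons],
               options={'maxiter': 500})
s_l1 = res.x                            # still fits y, but with smaller ||s||_1
```

SLSQP is a smooth-problem solver, so this is an approximation at the kinks of the L1 norm, but it works well enough for demonstrations.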
@arunrajiitbaero 4 years ago
I wish I had such a lightboard with good acoustics. Simply the best way to teach theory and math.
@jorgevargas8458 2 years ago
Thanks for the lecture. One question: the algorithm that you implemented in Python with the L1 norm is called basis pursuit, right?
@RussStukel 4 years ago
Very cool stuff from a TAMS graduate! ha - all the best to you Dr. B.
@Eigensteve 4 years ago
Awesome, thanks Russ!
@leif1075 4 years ago
@@Eigensteve Thanks for your lecture series. Hope you can respond to my other message when you can.
@germanvillalobos3728 2 years ago
I'm not sure if this question makes sense, but if s corresponds to the Fourier transform of x, wouldn't that mean that Psi technically corresponds to the inverse of the Fourier basis? If Psi corresponds to the Fourier basis, then applying Psi to a vector would technically correspond to taking its Fourier transform.
@bulutosman 2 years ago
Awesome videos. Thank you so much for your effort. But the Python and Matlab links on the website direct me back to this video. How can I reach the code?
@germanyafricansoul8269 4 years ago
Dear Steve, love your book, but on page 22 of your book, formula 1.26, did you perhaps mean B = X - avg(X)?
@CsatiZoli272 4 years ago
According to the histogram, there are many very small but still nonzero components in the vector s after the L1 minimization. Are those tiny entries rounded to 0 to obtain a truly sparse solution vector (denote it by ss)? If so, is it always guaranteed that Theta*ss - y is small? I ask because it reminds me of the situation where a mixed-integer linear program is relaxed to a linear program and the corresponding optimum can differ greatly.
@parskatt2971 4 years ago
I guess the error is going to be bounded by something like ||Theta||*||s - ss|| (where ||Theta|| is some matrix norm, either the 1-norm or the 2-norm). For Fourier stuff with sampling you have a unitary matrix followed by discrete sampling, which means that ||Theta|| < 1 with some handwaving, so the error should be decently small, I guess.
@systemx6603 4 years ago
Most algorithms out there for solving these types of systems employ a shrinkage operator to ensure that those entries do in fact become zero.
@CsatiZoli272 4 years ago
@@systemx6603 Thank you. Now that I know what keyword to look for, I can read more about it.
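For anyone else landing on this thread: the shrinkage operator usually means elementwise soft-thresholding, the building block of iterative L1 solvers such as ISTA or ADMM. A minimal sketch (the threshold value is arbitrary):

```python
import numpy as np

def shrink(x, tau):
    # Soft-thresholding: entries with |x_i| <= tau become exactly 0;
    # all other entries are pulled toward 0 by tau.  This is what turns
    # "very small but nonzero" entries into true zeros.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([-2.0, -0.05, 0.0, 0.03, 1.5])
shrink(x, 0.1)   # the two tiny entries become exactly zero
```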
@MikhailBarabanovA 4 years ago
Brilliant as always! Thanks!
@Eigensteve 4 years ago
Glad you enjoyed it!
@billybest5276 4 years ago
these videos are so good
@Eigensteve 4 years ago
Thanks!
@JessicaMcKenna-e8y 1 year ago
Love these lectures, thank you!
@marllonmoraes658 2 years ago
Teacher, can you make the notebooks of this chapter available? This topic is very interesting.
@nischalsehrawat2130 4 years ago
Hi Steve can you please do a series on time series analysis?
@Eigensteve 4 years ago
Absolutely. I have bits and pieces of this floating around, but maybe something more consolidated would be good.
@nischalsehrawat2130 4 years ago
@@Eigensteve Thanks. I started with your control series and I made my own Segway (studio.kzbin.infoLzBVJ7Rq4XY/edit). I can't thank you enough. Looking forward to time series analysis.
@rito_ghosh 2 years ago
How exactly do the L1 and L2 norms make the difference in the values and nature of x? I would really like to know. Exactly what happens, and how: that is my question.
@ivankwok9104 2 months ago
Same doubt here.
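A two-variable example makes the difference concrete: a single constraint leaves a whole line of solutions, and the two norms pick different points on it. A minimal sketch (the particular equation is made up):

```python
import numpy as np

# One equation, two unknowns: s1 + 2*s2 = 1 defines a line of solutions.
Theta = np.array([[1.0, 2.0]])
y = np.array([1.0])

# L2: the round ball ||s||_2 = r grows until it touches the line; the
# touching point is generically off the axes, so both entries are nonzero.
s_l2 = np.linalg.pinv(Theta) @ y    # [0.2, 0.4]

# L1: the ball ||s||_1 = r is a diamond whose corners sit ON the axes, so
# it typically first touches the line at a corner -> one entry is exactly 0.
s_l1 = np.array([0.0, 0.5])         # the minimizer of |s1| + |s2| on the line

assert np.allclose(Theta @ s_l1, y)
assert np.abs(s_l1).sum() < np.abs(s_l2).sum()   # 0.5 < 0.6
```

The same geometry in high dimensions is why L1 minimization returns sparse solutions while L2 spreads the values over every entry.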
@totifroti4515 1 year ago
Awesome, brilliant 👏
@amuprakash25 4 years ago
could you please do a lecture on curvelets. Thanks!
@germanyafricansoul8269 4 years ago
And on page 23, Code 1.10: adding the code for the PCA coordinates would be helpful. Otherwise we love you, greetings from Germany :)
@prashantsharmastunning 4 years ago
It's all coming together...
@Eigensteve 4 years ago
I love it when that happens
@parskatt2971 4 years ago
I think it would be more instructive if you actually generated y from some sparse vector
@parskatt2971 4 years ago
@@var67 Yes, I know, that's why I mean y should be generated as Theta times some sparse s.
@CsatiZoli272 4 years ago
How would that be more instructive?
@parskatt2971 4 years ago
@@CsatiZoli272 You could, for example, actually compare the true vector with the estimated one for both the L2 and L1 cases.
@parskatt2971 4 years ago
@@CsatiZoli272 Here I made a plot of how the solution actually compares (using n=500, p=200): i.imgur.com/q2aj4IR.png
@CsatiZoli272 4 years ago
@@parskatt2971 What is gt? It seems to be a manufactured solution for vector s with the first few entries being non-zero (i.e. 1). Did you use the L2 minimizer as an initial vector?
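For reference, the experiment suggested in this thread, generating y from a known sparse s and then comparing the L2 and L1 reconstructions against it, might look like the sketch below. The sizes, seed, and the linear-programming reformulation are my own choices here, not the lecture's code:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, p = 100, 30                            # 100 unknowns, 30 measurements
Theta = rng.standard_normal((p, n))
s_true = np.zeros(n)
s_true[:5] = rng.standard_normal(5)       # known 5-sparse ground truth ("gt")
y = Theta @ s_true

# L2 reconstruction: dense, spreads energy over all entries.
s_l2 = np.linalg.pinv(Theta) @ y

# L1 reconstruction (basis pursuit) as a linear program: write s = u - v
# with u, v >= 0, so ||s||_1 = sum(u + v) and Theta @ (u - v) = y.
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([Theta, -Theta]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
s_l1 = res.x[:n] - res.x[n:]

# Both estimates can now be compared directly against s_true.
print(np.count_nonzero(np.abs(s_l2) > 1e-6),   # dense (close to 100)
      np.count_nonzero(np.abs(s_l1) > 1e-6))   # sparse (close to 5)
```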
@sdal4926 3 years ago
Very good lectures, but I have one criticism: there is a lot of repetition during the class, which makes it difficult to follow. You have already explained y, Theta and x, so you can move on directly.
@fullmetalschizoid 2 years ago
Why didn't you just use Lasso regression? Much faster
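For context, lasso relaxes the exact constraint Theta @ s = y into a penalized least-squares objective, which coordinate descent solves much faster than constrained basis pursuit. A sketch using scikit-learn (alpha and the sizes are made-up values):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
Theta = rng.standard_normal((30, 100))    # 30 measurements, 100 unknowns
s_true = np.zeros(100)
s_true[:5] = 1.0
y = Theta @ s_true

# Lasso minimizes (1 / (2 * n_samples)) * ||Theta @ s - y||_2^2
#                 + alpha * ||s||_1,
# trading an exact fit for speed; the L1 penalty still zeroes out
# most coefficients, so the estimate remains sparse.
model = Lasso(alpha=0.01, max_iter=10000)
model.fit(Theta, y)
s_lasso = model.coef_                     # most entries are exactly 0
```

The trade-off is that lasso's solution depends on alpha and only approximately satisfies the measurements, whereas basis pursuit enforces them exactly.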