Finally understood the weak duality theorem after endless YT videos
@andykandolf1948 · 22 days ago
great playlist, concise and didactically valuable! thx :-))
@__amkhrjee__ · 27 days ago
Extremely well made video! It'd be nice if you made a tutorial on how to create those nice plots.
@QinyuChen · 1 month ago
Amazing video! My English is not very good, but I can still grasp your meaning quickly.
@vishnujatav6329 · 2 months ago
Interesting
@m33pr0r · 2 months ago
Extremely good explanation, thank you!
@AyanAli-se5fl · 2 months ago
Summarised my professor's 3-hour class in 10 minutes! Thank you for such a simple explanation!
@janushuang8587 · 2 months ago
It's very clear. Very helpful!
@esaufloresvillar5689 · 2 months ago
Please share the code.
@musiclover-ik5jf · 3 months ago
🙏
@yiranchang136 · 4 months ago
so clear! thank you very much! you solved my problems!
@structurelearninghub3340 · 4 months ago
Is this similar to Lemke's pivotal method?
@daffodilzworld3052 · 5 months ago
In this context, what does pdf mean?
@vivekpetrolhead · 5 months ago
Subscribed. Your channel is a gold mine: concise and easy to follow. Thank you so much for your work.
@刘川-t5u · 5 months ago
Professor, I have a question. The reduced cost is based on d_N = e_j. When computing the reduced cost of a basic variable, I think d_N = 0. Does the formula still apply?
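A small numerical sketch of this point (my own illustration; the LP data below is made up, and I use the standard formula c̄ = c − Aᵀy with y the simplex multipliers): the reduced cost of every basic variable comes out as exactly zero, so the formula still applies but carries no information for basic variables.

```python
import numpy as np

# Made-up LP data for illustration: A = [B | N], with B the current basis.
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])
N = np.array([[2.0, 1.0],
              [1.0, 3.0]])
c_B = np.array([3.0, 2.0])   # costs of the basic variables
c_N = np.array([4.0, 1.0])   # costs of the non-basic variables

# Simplex multipliers: solve B^T y = c_B.
y = np.linalg.solve(B.T, c_B)

# Reduced costs of the non-basic variables: c_N - N^T y.
reduced_N = c_N - N.T @ y

# Reduced costs of the basic variables: c_B - B^T y, zero by construction.
reduced_B = c_B - B.T @ y
```

Since Bᵀy = c_B is exactly how the multipliers are defined, `reduced_B` is identically zero for any basis.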
@lifecat8452 · 5 months ago
Thanks a lot! I have a question that I hope you can answer. We have obtained the position where h(x*) achieves its minimum value, that is, when x = x*, h(x) reaches its minimum. When I set x* = x_k + α d_k to find the step size, is it possible for α to fall outside the range [0, 1]?
@tobiassugandi · 5 months ago
What an awesome lecture, the examples are very helpful!
@briceathey2744 · 6 months ago
Long live Switzerland!
@sharshabillian · 6 months ago
Many thanks for taking the time to share your knowledge so articulately.
@pnachtwey · 6 months ago
I would like to see a real example. In my case the line search begins at the current point and moves in the opposite direction of the gradient. I must search along that line: I can repeatedly step along it and evaluate the cost function until it no longer gets smaller. You are suggesting searching between two points, but how far should the end point or "low" point be from the current "high" point? You are assuming the boundary of the end point is known and will bracket the minimum along the line search. What if it isn't?
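One common answer to this (a sketch of my own, not taken from the video) is to build the bracket on the fly: keep growing the step along the search direction until the function value goes up again; the last three points then bracket a minimizer, and bisection or golden-section search can finish the job on that interval.

```python
def bracket_minimum(f, x0, d, step=1.0, grow=2.0, max_iter=50):
    """Return (a, b, c) with a < b < c and f(x0 + b*d) no larger than at the ends."""
    a, fa = 0.0, f(x0)
    b, fb = step, f(x0 + step * d)
    if fb > fa:                  # even the first step went uphill:
        return 0.0, b / 2.0, b   # crude fallback: shrink instead
    for _ in range(max_iter):
        c, fc = b * grow, f(x0 + b * grow * d)
        if fc > fb:              # the function increased: bracket found
            return a, b, c
        a, fa, b, fb = b, fb, c, fc
    raise RuntimeError("no bracket found; f may decrease indefinitely")

# Usage: f(x) = (x - 3)^2, searching from x0 = 0 in direction d = 1.
a, b, c = bracket_minimum(lambda x: (x - 3.0) ** 2, 0.0, 1.0)
```

Here the end point is not assumed in advance; it is found by doubling, at the cost of a few extra function evaluations. If f keeps decreasing forever along the line, no bracket exists and the loop reports it.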
@evstigneevnm · 7 months ago
Dear sir, thank you for your series. I was doing some research in the field of trust-region updates and found your video series. I have a question on this video, though. Let's say we are working in the reals. By definition, a positive definite matrix is a SYMMETRIC matrix M \in R^{N \times N} s.t. x^T M x > 0 \forall x \in R^N with ||x|| > 0. At 04:04 it is said that the matrix D_k = inv(A + \tau I), with \tau \in R chosen such that D_k is positive definite. Am I correct that such a method will not work if A is not symmetric? Is there a remedy for that case? I know about using a diagonal, or symmetrization of the matrix (M = 1/2 (A + A^T)). But are there any other good suggestions, apart from trust-region-type methods, especially if Newton's method is applied to find a solution to the problem F(x) = 0, not the minimization problem? Thank you for your time in advance.
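For what it's worth, here is a sketch (my own, under the commenter's assumptions, not from the video) of the symmetrization remedy mentioned: take the symmetric part M = (A + Aᵀ)/2, then increase τ until M + τI passes a Cholesky test, i.e. is positive definite.

```python
import numpy as np

def make_positive_definite(A, grow=10.0, max_tries=60):
    """Symmetrize A, then add tau * I until the result is positive definite."""
    n = A.shape[0]
    M = 0.5 * (A + A.T)                  # symmetric part of A
    tau = 0.0
    for _ in range(max_tries):
        try:
            # Cholesky succeeds iff the matrix is (numerically) pos. definite.
            np.linalg.cholesky(M + tau * np.eye(n))
            return M + tau * np.eye(n), tau
        except np.linalg.LinAlgError:
            tau = 1e-3 if tau == 0.0 else grow * tau
    raise RuntimeError("could not reach positive definiteness")

# Usage: a nonsymmetric, indefinite example.
A = np.array([[0.0, 2.0],
              [0.0, 0.0]])
M_pd, tau = make_positive_definite(A)
```

This only addresses the "make it positive definite" part; whether the resulting Newton-like direction is still useful for F(x) = 0 with a genuinely nonsymmetric Jacobian is a separate question (there, Levenberg-Marquardt applied to ||F||^2 is a common alternative).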
@miqomargaryan15 · 7 months ago
Croissant!
@Abu_khalid1 · 7 months ago
Thank you Sir
@arminmakani7471 · 7 months ago
I deeply appreciate your amazing teaching.
@arminmakani7471 · 7 months ago
great
@arminmakani7471 · 7 months ago
Wooooooooooooooooooooow. Dear Professor, your explanations are brilliant and helpful. I also downloaded your book. Thank you so much for your services.
@gobichai2704 · 8 months ago
you saved my life!
@avk8477 · 8 months ago
Extremely concise and lucid explanation. Thank you Prof. Michel.
@friegglb3846 · 8 months ago
Dear Prof, is there any reason we set the upper bound for the nest parameter to 10?
@achrafBadiry · 8 months ago
Love the French accent. Cheers!
@TauvicRitter · 8 months ago
I don't understand the customer behaviour. Customers don't go to a bar when it is closed, and I guess they just leave when service takes too long.
@mircosoffritti6484 · 8 months ago
Crystal clear.
@NeoxX317 · 9 months ago
You are a reference in my eyes! I have known your channel since university, and now I work on a project around MDA / MDO; your videos are a great help, thank you!
@amareyaekob3343 · 9 months ago
Dear Michel, thank you so much for this insightful video. Can you make a video demonstrating choice modeling with latent variables in SPSS, Stata, or other software? That would help us figure out how to integrate SEM with discrete choice modeling in the form of structural choice modeling. Thank you so much again.
@nayeemislam8123 · 9 months ago
The video "Survival of the fittest" is not available on YouTube anymore.
@hannukoistinen5329 · 9 months ago
Well...math is not the strongest area of the French:). Wines and good food maybe.
@AkablaaTribe · 9 months ago
Dr. Bierlaire, you are the best. I have been involved with SP studies for the past 34 years, from Park and Ride, LRT, BRT, and early or late start times (peak spreading), to risk-averse propensity at signalised junctions. Having conducted over 30K SP surveys myself over the past 34 years, I have always had questions and never found transparent answers concerning theory and estimation, but you are a superstar who explains everything, leaving nothing unanswered. A big thanks.
@ZenjobBuddyJensJeremies · 10 months ago
Thank you very much!
@operitivo4635 · 10 months ago
thank you for the tutorial!
@billalaslam-z7h · 10 months ago
Very good and intuitive explanation. Thank you so much, sir! You make my learning a really wonderful experience.
@SepsOfficial · 11 months ago
Thank you.
@SepsOfficial · 11 months ago
Where is part 3?
@StudyJuly · 11 months ago
Thank you so much. So well explained, exactly what I needed!
@minqixu-s5q · 11 months ago
Why could I not install Biogeme through Anaconda?
@mostafashafaati1828 · 4 months ago
I have the same problem.
@raideno56 · 11 months ago
Sir, at 2:38 we had the reduced costs c6 = -1.25 and c5 = -0.75; wouldn't it have been more interesting to choose c6 as the entering variable, since it would decrease the objective function more than c5? Thank you.
@raideno56 · 11 months ago
Hey, please: at 3:27 I didn't understand why N * d_N is the same thing as the sum of A_j * d_j with j from m+1 to n.
@ytenergy444 · 10 months ago
N is an m×(n-m) matrix and d_N is an (n-m) column vector, hence their product is an m-dimensional column vector. You can see that resulting vector as a linear combination of the columns of N, where the coefficients of the linear combination are the entries of d_N; these are the d_j in the summation. The columns of N are denoted A_j with j going from m+1 to n (remember that N is the part of A that is not B, where B is m×m), and the entries of d_N are denoted d_j (these are scalars). In the example, the idea is to choose only one non-basic variable, the k-th one, and set it to 1. For this reason, the summation boils down to just the extraction of the k-th column of A. Hope this helps!
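A quick numerical check of this identity (illustration only, with made-up random data): a matrix-vector product N d_N is exactly the linear combination of the columns of N weighted by the entries of d_N, and choosing d_N = e_k extracts the k-th column.

```python
import numpy as np

rng = np.random.default_rng(0)
N = rng.standard_normal((3, 4))    # m = 3 rows, n - m = 4 columns
d_N = rng.standard_normal(4)

# N @ d_N equals sum_j d_j * A_j, a combination of the columns of N.
combo = sum(d_N[j] * N[:, j] for j in range(4))
assert np.allclose(N @ d_N, combo)

# With d_N = e_k, the product reduces to the k-th column of N.
e_k = np.zeros(4)
e_k[2] = 1.0
assert np.allclose(N @ e_k, N[:, 2])
```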