Holy crap, this is a golden video; I am very thankful to you for making this concept so graspable and intuitive!! Using Lagrangian mechanics as a real-life example of an application of the functional derivative is such a good idea. AND YES, as another commenter pointed out, mentioning the Gateaux derivative indeed led to a rabbit hole of revelations, basically that the derivative of a functional is its Gateaux derivative in the function space.
@Mike252911 3 years ago
I've been searching the web for an intuitive explanation of this topic, and this video is the best one yet. You're an amazing teacher 🙏🏼
@MachineLearningSimulation 3 years ago
Thanks, Michael 😊 I really appreciate your feedback; it's nice to hear. As far as I know, most other videos/blog posts etc. use the example of the brachistochrone, which I personally don't find as intuitive. So it's again nice to hear you like this approach 😊
@MachineLearningSimulation 3 years ago
Also take a look at this follow-up video: kzbin.info/www/bejne/kJbXo2puhM1qjdE Sometimes in books/lectures etc. people only introduce the simplified version of the functional derivative, which only holds in a special case.
@sucim 2 years ago
This is great! Thanks for dropping the term "Gateaux derivative" that was exactly what I was looking for to go down the rabbit hole and get a deeper understanding of the topic
@MachineLearningSimulation 2 years ago
You're welcome 😊
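The Gateaux-derivative rabbit hole mentioned above can be made concrete in a few lines. This is a minimal numerical sketch (my own illustration, not from the video; the choice of functional and all names are assumptions): for J[f] = ∫₀¹ f(x)² dx, the Gateaux derivative in a direction η is the limit of (J[f + εη] − J[f])/ε, which works out to 2 ∫₀¹ f(x) η(x) dx, and both sides can be checked against each other numerically.

```python
import math

def trapezoid(values, dx):
    # trapezoidal rule on equally spaced samples
    return dx * (sum(values) - 0.5 * (values[0] + values[-1]))

n = 10_000
dx = 1.0 / n
xs = [i * dx for i in range(n + 1)]

f = [math.sin(x) for x in xs]        # the "point" in function space
eta = [x * (1.0 - x) for x in xs]    # an arbitrary perturbation direction

def J(g):
    # the functional J[g] = integral of g(x)^2 over [0, 1]
    return trapezoid([v * v for v in g], dx)

eps = 1e-6
finite_diff = (J([fi + eps * ei for fi, ei in zip(f, eta)]) - J(f)) / eps
analytic = 2.0 * trapezoid([fi * ei for fi, ei in zip(f, eta)], dx)
print(finite_diff, analytic)         # the two values agree closely
```

The point is only that the "directional derivative in function space" behaves like any ordinary directional derivative once the function is discretized.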
@SerafimEgorov 1 year ago
The best explanation of the functional derivative so far, thank you!
@MachineLearningSimulation 1 year ago
You're very welcome! 😊
@aakashakhouri638 1 year ago
This was amazing... I couldn't find such an explanation in any of the books... Thank you
@MachineLearningSimulation 1 year ago
You're very welcome! 😊
@linjunhuang9495 2 years ago
Thanks!
@MachineLearningSimulation 2 years ago
You're welcome!
@flo453a5 1 year ago
21:17 "Entschuldige!" ("Sorry!") Great video. I'm in my 2nd semester of physics right now and am taking my first course in theoretical physics. We had never touched any of this and were basically introduced to it in a one-hour tutorial class. Safe to say I didn't understand anything because I had zero prior knowledge, but this video helped me a lot, so thank you very much!
@MachineLearningSimulation 1 year ago
You're welcome 😊 Was the same experience for me during my undergrad in mechanical engineering. Glad it was helpful 😊
@minhhieuphamnguyen8029 2 years ago
You make it a lot easier to grasp; I barely understood it during class. Thanks!
@MachineLearningSimulation 2 years ago
You're very welcome! :) I'm super happy I could help.
@MarioBoley 1 year ago
A well-designed lecture on a challenging topic.
@MachineLearningSimulation 1 year ago
Thanks 😊
@amywinehouserarities 3 years ago
Simply amazing, you literally save exam performances. Thank you so much
@MachineLearningSimulation 3 years ago
You're welcome 😊 I'm glad I could help. Hope the exam went well. All the best :)
@keithdow8327 3 years ago
This is a great presentation! I am now a subscriber. I would caution viewers that quite often minus signs are ignored to make things easier; this is done by all scientists, who then quickly make corrections to get the correct answer. For example, with the given coordinate system, g should be −2 instead of 2. This is shown by the fact that d/dt(d/dt y) = −2. As an exercise in details, the viewer may want to do the calculations with the correct value of g and the correct sign for the potential energy. These details definitely don't help pedagogy, though.
@MachineLearningSimulation 3 years ago
Hey Keith, thanks a lot for the comment and the nice words :) I really appreciate it. Regarding your comment on the sign of g: you are correct, the directions of the forces (inertia and gravity) should have been declared more precisely. :D But as you also mentioned, this doesn't affect the actual content of the video on functionals and functional derivatives.
@ebertd.m.alvares8249 1 year ago
Awesome! Thanks for posting! Keep up the amazing work!
@MachineLearningSimulation 1 year ago
Thanks a lot 😊 Appreciate it.
1 year ago
This is the best explanation, thanks a lot
@MachineLearningSimulation 1 year ago
Thanks, appreciate the kind words :).
@Stealph_Delta_3003 2 years ago
Really enjoyed your explanation.
@MachineLearningSimulation 2 years ago
Thanks so much 😊
@xueyangwu2847 2 years ago
Great video, thank you! At 27:10, where does the F = m * y'' come from?
@MachineLearningSimulation 2 years ago
You're welcome :). This is just Newton's second law of motion in differential form, i.e., we had F = m * a. The acceleration a is just the second temporal derivative of the position, i.e., a = y''. Hope that helped :). Let me know if that is still unclear.
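To make this exchange tangible, here is a small sketch (my own illustration, not the video's code; the discretization and all values are assumptions) of the free-fall example with L = ½ m ẏ² − m g y. At the true trajectory y(t) = −½ g t², a finite-difference estimate of the functional derivative of the discretized action vanishes at every interior point, which is exactly the statement m y'' = −m g, i.e., F = m a.

```python
m, g = 1.0, 9.81
n, dt = 1000, 0.001
t = [i * dt for i in range(n + 1)]
y = [-0.5 * g * ti ** 2 for ti in t]   # true trajectory y(t) = -1/2 g t^2

def action(path):
    # discretized action S = sum of (1/2 m v^2 - m g y) dt, v from forward differences
    kinetic = sum(0.5 * m * ((path[i + 1] - path[i]) / dt) ** 2 for i in range(n)) * dt
    potential = sum(m * g * yi for yi in path[1:-1]) * dt
    return kinetic - potential

eps = 1e-6
grads = []
for i in (100, 500, 900):              # spot-check a few interior time indices
    up = y[:]
    dn = y[:]
    up[i] += eps
    dn[i] -= eps
    # central difference of the action with respect to one path coordinate
    grads.append((action(up) - action(dn)) / (2.0 * eps))
print(grads)                           # all three are numerically close to 0
```

Perturbing any trial path that is not the parabola would give a visibly nonzero gradient, which is the "first variation equals zero" condition in discrete form.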
@rodrigoappendino 3 years ago
Thanks a lot. I was reading a PDF, but it didn't explain the functional derivative very well (or I just couldn't understand it). You helped me a lot.
@MachineLearningSimulation 3 years ago
You're very welcome :)
@Stenkyedits 1 year ago
Did you also study calculus of variations? ML gets math from all the places you can imagine, it's crazy.
@MachineLearningSimulation 1 year ago
Hi, it's an interesting observation that I also made. Modern ML research is extremely interdisciplinary, with people from a lot of different backgrounds, especially physicists bringing in so much theoretical knowledge 😅 My background is in mechanical engineering, and the basics of calculus of variations were part of a course on the finite element method.
@theplasmacollider6431 1 year ago
Excellent explanation. Just one trivial nitpicky correction. The English expression "this is nothing else than" (25:26) should be "this is nothing other than".
@MachineLearningSimulation 1 year ago
Thanks for the kind feedback :). This is one of the minor flaws of not being a native speaker; I am still constantly learning. Thanks for spotting the mistake. :)
@TheTacticalDood 3 years ago
This is amazing, thanks!
@MachineLearningSimulation 3 years ago
You're welcome :)
@s1gaba 3 years ago
Helped a lot! Thank you
@MachineLearningSimulation 3 years ago
You're welcome :)
@raulguerrero4438 3 years ago
Thank you very much, helps me a lot
@MachineLearningSimulation 3 years ago
You're very welcome 😊 Thanks for the feedback, glad you enjoyed it.
@peki348 2 years ago
good video!
@MachineLearningSimulation 2 years ago
Thanks 😊
@apppurchaser2268 2 years ago
Amazing
@MachineLearningSimulation 2 years ago
Thanks ;)
@friedrichwilhelmhufnagel3577 2 years ago
Or apply a Lagrange multiplier to help find a maximum/minimum? I still don't understand what is "special" about the "calculus of variations" around a functional derivative. Could you maybe also explain to me what the "bigger picture" is? Thanks for the video!!
@friedrichwilhelmhufnagel3577 2 years ago
Also, I don't understand how/why, after applying the delta operator, you then take the ordinary derivative of I with respect to epsilon.
@MachineLearningSimulation 2 years ago
Answering in English so that others can also benefit: the idea of using Lagrange multipliers for equality-constrained optimization is kind of detached from the functional derivatives. The only thing one has to be careful with is that the Lagrange multiplier (for instance, the lambda) is of the same "type" as the primary unknown. Since in calculus of variations we seek functions that minimize functionals, this Lagrange multiplier will also be a function. If you are interested in a concrete example, maybe check out this video on deriving the Normal distribution from a maximum entropy principle: kzbin.info/www/bejne/gGi4aaCImtxlnZI . In that video, the optimization is also equality-constrained.
@MachineLearningSimulation 2 years ago
The bigger picture of functionals/functional derivatives/calculus of variations is the extension of the derivative to (potentially) infinite-dimensional vector spaces. Typically, those are function spaces. I think a good application for it (besides the topics in probability theory) is the solution theory of Partial Differential Equations. In a sense, a differential equation is a problem whose solution is a function. Most PDEs I am aware of can also be equivalently represented in an energy form, for which the actual PDE is the necessary condition for an optimum. And then you can think of the functional derivative as a way to go from this energy form to the PDE.
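A minimal sketch of that last point (my own illustration; the concrete PDE, boundary conditions, and discretization are assumptions, not from the video): the Poisson equation −u'' = f is the Euler-Lagrange equation of the Dirichlet energy E[u] = ∫ (½ u'² − f u) dx, so doing plain gradient descent on a discretized E drives the discrete functional derivative to zero and converges to the PDE solution.

```python
n = 20
h = 1.0 / n
f = 1.0                                  # constant right-hand side of -u'' = f
u = [0.0] * (n + 1)                      # u(0) = u(1) = 0 are held fixed

step = 0.02                              # small enough for stable descent
for _ in range(5000):
    # discrete functional derivative dE/du_i at the interior nodes
    grad = [(2 * u[i] - u[i - 1] - u[i + 1]) / h - f * h for i in range(1, n)]
    for i in range(1, n):
        u[i] -= step * grad[i - 1]

# exact solution of -u'' = 1 with u(0) = u(1) = 0 is u(x) = x(1 - x)/2
exact = [i * h * (1 - i * h) / 2 for i in range(n + 1)]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err)                               # small
```

Setting the gradient to zero by hand recovers (−u_{i+1} + 2u_i − u_{i−1})/h² = f, i.e., the finite-difference Poisson equation, which is the discrete analogue of "the functional derivative takes you from the energy form to the PDE".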
@friedrichwilhelmhufnagel3577 2 years ago
Thanks! I've subscribed to your channel :)
@philippfrogel9355 2 months ago
Do you have example problems where we can try this at home?
@MachineLearningSimulation 2 months ago
Unfortunately, I don't have a good problem set at hand. A typical exercise would be to derive the Euler-Lagrange equations, or you can show that the normal distribution arises as the distribution that maximizes the (functional) entropy under some constraints: kzbin.info/www/bejne/gGi4aaCImtxlnZI
@philippfrogel9355 2 months ago
@MachineLearningSimulation thanks a lot!
@xenusxenus1141 3 years ago
Thank you very much
@MachineLearningSimulation 3 years ago
You're welcome :)
@friedrichwilhelmhufnagel3577 2 years ago
And how do you apply certain constraints?
@MachineLearningSimulation 2 years ago
I will reply to this in your second question :)
@joao-melo 1 year ago
What is the program you're using for the notes?
@MachineLearningSimulation 1 year ago
That's Xournal++ with a dark background, no guidance lines, the paper extended to 100cm length and in Fullscreen.
@joao-melo 1 year ago
@MachineLearningSimulation Thank you very much. Just one more question: what is the screen recorder? I've been using SimpleScreenRecorder, but I lose quality when trying to record a smaller region. For maximum resolution I must record the entire screen, unfortunately.
@chadwinters4285 2 years ago
Do you have a playlist for variational calculus stuff?
@MachineLearningSimulation 2 years ago
Hey, I just created one: kzbin.info/aero/PLISXH-iEM4JmY0FIWF96Xjq727cXyH-2b I mostly used the calculus of variations in probabilistic (machine learning) applications. I plan for future videos to also showcase its application in FEM. :)
@cleon_teunissen 3 years ago
Around 28:00 into the video you point out that the procedure you demonstrate recovers the Newtonian F=ma. The reason F=ma is recovered is that your starting point is in accordance with the Work-Energy theorem.

As we know, the Work-Energy theorem gives the following expression relating force, change of position, and velocity: ∫F ds = ½mv² − ½mv₀². (I wrote the derivation in a recent physics.stackexchange answer: physics.stackexchange.com/a/638763/17198. Scroll to the section 'Energy' in that answer.)

As we know, potential energy is defined as the negative of work done. The Work-Energy theorem thus gives that the rate of change of kinetic energy will always match the negative rate of change of potential energy.

The thing to be wary of here is *scope*. The Work-Energy theorem is applicable only when there is a well-defined integral of the force that is acting. By contrast, the principle of conservation of energy is a blanket statement: the Work-Energy theorem is applicable only when the integral is well defined, while the scope of the principle of conservation of energy is unlimited.

In the case treated in this video the integral of force over distance is well defined. That is why the procedure you used recovers F=ma. Your starting point is in accordance with the Work-Energy theorem, and the Work-Energy theorem follows from F=ma.

[Later addition] For clarification: at around 06:00 into the video you mention that Hamilton's stationary action evaluates the kinetic energy and the *minus* potential energy. Visually that comes out as follows: take the true trajectory and plot the kinetic energy and the *minus* potential energy as functions of time. That is, the plot of the potential energy is mirrored, plotting the minus potential energy instead.

In the case of the true trajectory those two plots are parallel to each other all along the trajectory. Hamilton's stationary action asserts that in the case of the true trajectory the derivative of Hamilton's action is zero. The derivative of Hamilton's action is the derivative with respect to variation, and the variation is variation of position. Hence the Euler-Lagrange equation (for the case of classical mechanics) finds the true trajectory by evaluating the derivative of the energy with respect to position. I have a visualization of that, with a slider that allows the user to perform variation of the trajectory: cleonis.nl/physics/phys256/energy_position_equation.php
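The Work-Energy theorem invoked in this comment can be checked numerically for the free-fall case (my own sketch; the values of m, g, and the time span are arbitrary assumptions): integrating ∫F ds along the true trajectory reproduces the change in kinetic energy ½mv² − ½mv₀².

```python
m, g = 2.0, 9.81
dt, n = 1e-4, 10_000                     # simulate one second of free fall
work = 0.0
for i in range(n):
    t = i * dt
    v_mid = g * (t + 0.5 * dt)           # midpoint velocity, v(t) = g*t
    work += m * g * v_mid * dt           # F*ds = (m g)(v dt)
v_end = g * n * dt
delta_ke = 0.5 * m * v_end ** 2          # 1/2 m v^2 - 1/2 m v_0^2 with v_0 = 0
print(work, delta_ke)                    # the two match
```

The same check would fail for an arbitrary (non-true) trajectory, which mirrors the scope remark above: the theorem ties work to kinetic energy only along the actual motion under the acting force.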
@MachineLearningSimulation 3 years ago
Hey, thanks for the addition. :) I think your thoughts might go a little beyond the scope of the video, but they are still interesting for viewers who want to dive deeper. Also, feel free to link to the visualization you mentioned. Maybe it was a bit of a stretch to call it a "rediscovering of Newtonian mechanics", since, as you mention, this only works under some conditions, which still hold here if I got you correctly? The main point to take home, however, is that minimization/maximization of functionals involves functional derivatives, and the necessary condition for the extremum is that the first functional derivative (= variation) has to be zero, which leads to a differential equation (here, F=ma).
@cleon_teunissen 3 years ago
@MachineLearningSimulation I added a link for the visualization to the initial comment. The purpose of the visualization is to make Hamilton's stationary action entirely transparent, using visual means only. There are also classes of cases where the true trajectory corresponds to a maximum of Hamilton's action. (Those cases are less prevalent, but physical instances exist.) The thing is: whether Hamilton's action is a minimum or a maximum (in the case at hand) doesn't make it into the derivation of the Euler-Lagrange equation. It doesn't make it there because it is immaterial. The actual criterion for finding the true trajectory is that the derivative of Hamilton's action is zero. That is what goes into deriving the Euler-Lagrange equation. The extremum interpretation can be abandoned without any loss of capability.
@cleon_teunissen 3 years ago
@MachineLearningSimulation The visualization also addresses the following aspect that you point out in the video: the variation space is a higher-dimensional space. The visualization is an interactive diagram; in addition to the main slider that executes a global variation sweep, there is a set of 10 sliders for local adjustment. The user can opt to find the true trajectory manually, by manipulating the granular controls. Manipulating the granular controls is what the Euler-Lagrange equation does (in the limit of infinitesimal steps) to solve for the true trajectory.
@cleon_teunissen 3 years ago
@MachineLearningSimulation There is something I could have stated better in the initial comment: Hamilton's stationary action is itself stated in terms of potential energy. This means that Hamilton's stationary action is applicable only when the integral of the force over distance is well defined. This is why the Work-Energy theorem is sufficient as the basis of Hamilton's stationary action.