Error at 23:00. You can't do "division" to cancel terms on both sides of an equation like that when using Einstein notation. The conclusion is correct, though. Basically, since the expression must be true for all vectors, you can plug in vectors of all zeros and a single 1 to check that the tensors are equal component-by-component.
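To spell out that component-by-component check, here is a minimal worked version (the tensor names A and B are illustrative, not from the video): if two tensors agree on all vector inputs, feeding in basis vectors extracts each component.

```latex
% Suppose  A_{ij} v^i w^j = B_{ij} v^i w^j  for ALL vectors v, w.
% Choose v = e_k and w = e_l, i.e. components v^i = \delta^i_k, w^j = \delta^j_l:
A_{ij}\,\delta^i_k\,\delta^j_l = B_{ij}\,\delta^i_k\,\delta^j_l
\quad\Longrightarrow\quad
A_{kl} = B_{kl} \quad\text{for every } k,\, l .
```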
@redtoxic87017 ай бұрын
Came back here to refresh your memory, eh?
@jhumasarkar52034 жыл бұрын
Thanks to Euclid for inventing geometry, thanks to Isaac Newton for inventing calculus, thanks to Hermann Grassmann for inventing linear algebra, thanks to Riemann for bringing them together, thanks to Einstein for getting rid of the summation sign, and thanks to eigenchris for explaining them.
@mastershooter642 жыл бұрын
and thanks to you for learning them :)
@chenlecong9938 Жыл бұрын
it's actually Einstein's wife though…
@bingusiswatching6335 Жыл бұрын
@@chenlecong9938 rip mileva maric
@Noah-jz3gt Жыл бұрын
@@chenlecong9938 is it? Wow..
@lcchen3095 Жыл бұрын
@@Noah-jz3gt yeah, wish I had one
@TheJara1236 жыл бұрын
You have no idea how much you have saved us from tensor torture, countless wasted hours, and money. Please keep going with general relativity; fantastic presentation. You have made tensor calculus and algebra as delightful as they can be, and in this way opened the doors of general relativity. I never write comments; this is my first in ten years, so you can imagine how much respect and gratitude you command from us through your videos. Every week we were waiting for your videos. Thanks Chris.
@taipeibest15 жыл бұрын
same here!
@u3nd3113 жыл бұрын
Same here!
@huonghuongnuquy72723 жыл бұрын
same here ! Maximum respect
@nagashiokimura9942 жыл бұрын
Same here :0
@abnereliberganzahernandez6337 Жыл бұрын
I'm the opposite: I always find ways to criticize the content I see on YouTube, since many people upload useless content, principally in Spanish, I don't know why. But this man is really on another level. He makes things seem so easy to understand, yet within a formal mathematical construction that books fail to give in a self-contained manner. He is an engineer and physicist, but this man is also a mathematician: he gives precise definitions.
@saturn91996 жыл бұрын
Nothing in this life matters but your videos. Not love, not family, and not Forknife. Just your videos about tensor calculus.
@brilinos6 жыл бұрын
I looked up "Forknife" because I was full of doubts about whether something going by this name deserves to be mentioned in one sentence with these tensor calc vids. Well, my doubts were justified. But then again, de gustibus non est disputandum. :))
@jacobvandijk65254 жыл бұрын
Having arrived in my sixties I couldn't agree more, hahaha!
@__-op4qm3 жыл бұрын
that escalated quickly.. but yeah, tensor calculus moves are fancy :)
@EvanZalys3 жыл бұрын
This is one of the most lucid presentations of differential geometry I’ve ever seen. Seriously good, man.
@zhiiology6 жыл бұрын
And the legend of eigenchris continues
@lumafe19752 жыл бұрын
This video progression was really necessary to understand the concept of covariant derivative and parallel transport. Finally, the abstract description encompasses in a powerful way all the concepts involved. I recommend watching the videos in the proposed order. Great job !
@eigenchris2 жыл бұрын
Thanks. I remember finding the covariant derivative insanely confusing when I first learned it. At this point I don't think it's that big a deal. Glad the videos helped.
@armannikraftar19775 жыл бұрын
I just finished this video series. Thanks a lot for the effort you put into these videos Chris.
@OmegaOrius6 жыл бұрын
You are a life saver with these videos. Never been more interested in learning Tensor Calculus than I have been watching your videos. Please keep up the good work! I’m so looking forward to the rest of this series and the future teased series (i.e. General Relativity)
@robertturpie14634 жыл бұрын
This series of lectures explains tensor calculus in a very clear manner. This is a very difficult subject and his use of examples to explain the concepts makes understanding easier. Highly recommended.
@signorellil6 жыл бұрын
I think by now there's a growing community of people who wake up every morning, and the first thing they do is check YT for a new video in a certain series, not (for instance) on WWE wrestling or new videogame tips, but on TENSOR CALCULUS. :)
@williamwesner42685 жыл бұрын
Why not all three? There's enough hours in the day. :)
@__-op4qm3 жыл бұрын
@@williamwesner4268 first thing tho! :D
@timelsen22362 жыл бұрын
Most helpful! Best post I've ever seen. Thank you for making this difficult subject accessible. Texts are hard to follow and put me to sleep, in total contrast to your great presentations here! You're a top professor for sure.
@eigenchris2 жыл бұрын
Thanks. I remember the covariant derivative took me forever to understand. I'm glad if these videos made it more accessible.
@bernardsmith44644 жыл бұрын
I'm a latecomer to your series but have been mesmerized since stumbling onto it. Your presentation is truly without equal.
@bluebears66273 жыл бұрын
Thank you Chris. You probably have no idea how many people you have helped with this series.
@TheCoyoteWayOverland Жыл бұрын
The best 10 minutes in this entire series are the first 10 of this video. My recommendation is to watch this before the rest of the series; it helps motivate all the rest and gets you out of limiting ways of thinking about vectors: they are NOT arrows in 2D or 3D! At least not only that.
@krobe86 жыл бұрын
Many thanks for your excellent video courses. Things I particularly appreciate: The content, of course. Also, clear explanations with examples and motivation of concepts, clarification of names of math structures. Your speaking speed is just right for me -- If I miss something I can pause and go back a bit, but am not waiting for next sentence while watching eye candy. No overhead of intro video clip - just right into the good stuff of each topic. Lots of work you have done / are doing. Thanks again and best wishes.
@eigenchris6 жыл бұрын
Thanks. Glad you like them.
@Noah-jz3gt Жыл бұрын
This whole series for tensor calculus is so amazingly helpful! Thanks a lot.
@leylaalkan66305 жыл бұрын
Your videos are leigendary! Literally, this series is beyond helpful, it's a lifesaver.
@subrotodatta783510 ай бұрын
@eigenchris is God's gift to us mortals. Thank you for creating this wonderful series. These lectures are highly recommended for self-learners and students of math, physics, and engineering of all ages. The creator has mastered the art of online teaching, using visuals and text and explaining complicated concepts in easily understood layman's terms instead of highfalutin gibberish, a rare gift. I would place him in the pantheon of my most respected teachers, along with Sal Khan and Andrew Ng.
@iknowthatdubin48774 жыл бұрын
16 years old here, studying quantum mechanics, GR, SG, and differential geometry. I spent one month trying to study tensors and got really confused until I found your videos. Absolutely beautiful, and a great help in understanding these concepts. Thank you Chris!
@beetlesstrengthandpower18903 ай бұрын
If you were already studying GR at 16, what are you doing now 4 years later lol? I'm curious
@badgermcbadger196824 күн бұрын
Still studying GR if I had to guess lol @@beetlesstrengthandpower1890
@Cosmalano6 жыл бұрын
You’re the best. Thank you so much for this series, it’s been truly invaluable.
@metaphorpritam6 жыл бұрын
Need more videos on this topic, eigenchris. You are a wonderful teacher. Please open a Patreon account so we can donate and contribute to your efforts.
@GoodenBaden3 жыл бұрын
As of today, exactly 50,000 views. Many thanks for your efforts!
@sinecurve99996 жыл бұрын
My mind is expanding just like the universe.
@jianqiuwu3 жыл бұрын
Oh my!!! I had struggled so much with these ideas trying to study Riemannian geometry using do Carmo's textbook. Thanks for providing these intuitions!
@LucienOmalley4 жыл бұрын
Just a masterpiece of pedagogy... Thx a bunch !
@g3452sgp6 жыл бұрын
This is a masterpiece on tensor calculus, one of the hardest subjects in mathematics.
@abhishekaggarwal27126 жыл бұрын
Hi Chris, you already know that your videos are amazing, and I can imagine how much of your time and energy must go into this. So thank you so much. I would love an opportunity to help you out, either by means of a donation or any grunt work you need done to get these videos out even faster.
@eigenchris6 жыл бұрын
I think in my next video I'll post a link to a tip jar. Thanks!
@patrickrmiles4 жыл бұрын
Incredible videos. These are helping so much with my differential geometry class. Thanks for making this stuff so accessible and easy to learn!
@francoisfjag40706 ай бұрын
this video series on covariant derivative is a must !!!
@artashesasoyan62722 жыл бұрын
Thank you so much! It is so easy to understand with your explanations!
@J.P.Nery.N.4 жыл бұрын
Your videos are true gems. These explanations are the best I've ever seen so thank you very much for everything
@justanotherguy4692 жыл бұрын
eigenchris, the best teacher! That's all I can say, thank you.
@johnbest71354 жыл бұрын
Great lecture in a great series. Much appreciated.
@saudyassin5352 Жыл бұрын
Lucid Explanation, thanks for helping me self-study tensor calculus as an undergraduate physicist. I am now on my way to tackle Gen Relativity.
@JgM-ie5jy6 жыл бұрын
Completed the entire series just in time to wish you a happy New Year. Thank you so much for this wonderful gift of your time and talent. My wish for the New Year: your laser-sharp insights on divergence, curl, and the Green and Stokes theorems. But I know I am asking for a lot, considering all the time you have already put in.
@eigenchris6 жыл бұрын
I don't plan on making videos on those in the near future. You could check out Khan Academy's video on those topics. Sorry but I feel there are enough videos on those topics already.
@JgM-ie5jy6 жыл бұрын
@@eigenchris I understand. You already gave so much, and I was embarrassed asking for more. Happy holidays.
@one7-1001s6 жыл бұрын
Thank you for all you have offered; your classes are distinctive and excellent.
@IntegralMoon6 жыл бұрын
This just keeps getting better! Thanks again :D
@eigenchris6 жыл бұрын
I'm glad you like them. This is more or less the point I wanted to stop at when I started the series. I think I'll end up doing a video on the Riemann and Ricci Tensors as well, but this series is basically done other than that.
@IntegralMoon6 жыл бұрын
@@eigenchris Awesome! I think you've done a great service to us all! Thank you so much :)
@manologodino9416 жыл бұрын
Incredible! It is amazing how easy to understand and interesting tensor algebra and calculus become with your videos. Congratulations on your work and your clear mind. I will stay alert in case you start another series, whatever the subject.
@Cosmalano6 жыл бұрын
eigenchris, is it easier to read Gravitation now that you've done these videos?
@ritikpal14916 ай бұрын
Damn, this was really nice. I think all physics students who are taught GR should first be taught these things, rather than just being made to learn the index gymnastics of tensors. This was really insightful, and I will probably come back to these lectures again and again (since I binge-watched the series from the start without carefully following the calculations). Thank you so much for taking the time to do this. I am following lectures on GR by Susskind and I couldn't digest the covariant derivative. Someone in the comments suggested your playlist, and I am glad to have followed it. I won't continue from here, as I only needed to understand covariant derivatives. But if I ever require the concepts from the later lectures, I will surely continue. Edit: After typing this comment, I checked the topics of the other lectures, and now I really want to watch them all (I really don't have time, as I have planned to finish a lot of lectures before my holiday ends.)
@eigenchris6 ай бұрын
Yeah, the Christoffel symbols and covariant derivative took me way too long to understand. I ran into the same problem of learning "index gymnastics" but not really understanding what's going on. The main trick I've learned in this playlist and my relativity playlist is that tensors are much easier when you write out both the components and the basis. Writing transformation rules using only the components is possible, but not very enlightening most of the time.
@gguevaramu6 жыл бұрын
Hi CHRIS, we haven't forgotten you. Please don't forget us. We would like to see more videos, maybe up to the point where you do us the favor of explaining the Einstein equation. Maybe we can help in some way. You are one of the best at showing where ideas come from.
@eigenchris6 жыл бұрын
I will be making more videos. I'm just taking a break now. I've been making videos continuously for about a year and I'm a bit burned out. I do plan on starting a new series that explains General Relativity from the basics in 2019.
@SpecialKtoday6 жыл бұрын
@@eigenchris Sounds good Chris! Do you accept donations?
@eigenchris6 жыл бұрын
@@SpecialKtoday I'll probably start a PayPal donation box in 2019 and announce it in my videos. Thanks.
2 жыл бұрын
You are an absolute gem!
@keyyyla4 жыл бұрын
Great video. In mathematical terms, the covariant derivative generalizes the directional derivative of a vector field with respect to another vector field. Since a submanifold of R^n has the ambient space R^n, we just have to project the directional derivative onto the tangent space to get the tangential component of the directional derivative, namely the part of the derivative that corresponds to the manifold (its tangent space). The abstract definition cannot take an "ambient space" into account, so what we do is very typical for objects in math: look at the analogous object in R^n, take its properties as the defining properties for the abstract object, and define the new object between spaces related to the manifold that carry enough structure (here: the tangent bundle / tangent space). And there you go, that's the covariant derivative. What often confuses people is that there is no formula in the definition. Here is why: since you already gave the defining properties, just look at what they do to basis vectors. The expression you end up with is your formula.
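A minimal numerical sketch of that projection picture, assuming the unit sphere in R^3 (the field V, the point, and the direction are arbitrary illustrative choices, not from the video): take the ordinary R^3 directional derivative of a tangent field, then project away the normal component.

```python
import numpy as np

# A tangent vector field on the unit sphere: V(p) = z_hat x p
# (points "east" along circles of latitude). V(p) . p = 0 automatically.
def V(p):
    return np.cross(np.array([0.0, 0.0, 1.0]), p)

p = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)  # a point at 45 degrees latitude
w = np.array([0.0, 1.0, 0.0])                 # eastward unit tangent at p

h = 1e-6
q = (p + h * w) / np.linalg.norm(p + h * w)   # small step along the sphere

ambient = (V(q) - V(p)) / h                   # ordinary R^3 directional derivative
covariant = ambient - np.dot(ambient, p) * p  # project onto the tangent plane at p

print(covariant)  # ~ [-0.5, 0, 0.5]: nonzero, so V is not parallel-transported
                  # along this circle of latitude (which is not a geodesic)
```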
@xiangfeiwang7556 жыл бұрын
This video series is fantastic! Looking forward to more~
@steffenleo59972 жыл бұрын
Thanks a lot Chris.... I understand it now.... Have a nice weekend..... Again... 👍👍
@zchen02112 жыл бұрын
I seldom leave a comment, but this video series is soooooooo great!!!!!
@81546mot5 жыл бұрын
Just thought I would check in with you to see if you were working on some more videos--they are great!
@eigenchris5 жыл бұрын
I plan on making more, but I've been busy lately. Thanks for the support though!
@imaginingPhysics3 жыл бұрын
26:09 and 27:36 - is it not easy to see that the covariant derivative (CD) of the metric MUST be 0, since g_ij is just the dot product of e_i and e_j, and the CD of a dot product is zero? So one can "see" it immediately without lengthy calculations (right?)
@Dhanush-zj7mf Жыл бұрын
A small doubt: isn't metric compatibility a consequence of basic mathematical facts, and so compulsory for any connection to satisfy? If so, how can there be other covariant derivatives (connections) not satisfying it??
@elduderino505Ай бұрын
I know this is a pretty old comment, so idk if you'll actually see this, but the reason is because the expression he gives for covariant derivative of a dot product is only true for a connection that is compatible with the metric. In einstein notation, he writes: cov_w(u^i v^j g_{ij}) = g_{ij} u^i cov_w(v^j) + g_{ij} v^j cov_w(u^i) (1) The generally true statement is actually cov_w(u^i v^j g_{ij}) = g_{ij} u^i cov_w(v^j) + v^j cov_w(g_{ij} u^i) (2) (1) == (2) in general only when cov_w(g_{ij}) = 0, which if it's true for all w is precisely the definition of a connection compatible with the metric.
@Panardo7772 жыл бұрын
Thank you so much for your priceless videos and for making these things accessible for guys like me; your contribution to true knowledge is incredible. Concerning the sequence of this video (metric compatibility, then covariant derivative of covectors, then tensors): maybe you could begin with the covariant derivative of a covector (the covariant derivative of a scalar, which is a covector applied to a vector, then the Leibniz rule and second-order symmetry), define the covariant derivative of a tensor, apply this to the metric tensor (Leibniz rule), impose that the covariant derivative of the metric tensor is null (the metric doesn't change, so lengths and angles are preserved), and then metric compatibility appears as if by magic.
@chriszhao86956 жыл бұрын
Woooooow! Fantastic! This straightforward tutorial series helps me understand concepts that I could never understand by merely reading textbooks, which always tend to build purely abstract terminology to show off their intellectuality. I also have trouble truly understanding the Lie derivative, Lie groups, Lie algebras, ... anything associated with Lie... Please make tutorials on those topics with basic examples. Thank you so much. -- By a student in computer graphics.
@eigenchris6 жыл бұрын
I have also been having trouble understanding Lie groups and Lie algebras. I don't think I will be able to make videos on these for a while. There are 3 videos on Particle Physics by Alex Flournoy (videos 6, 7, 8) which have helped me understand Lie groups and Lie algebras somewhat... at the very least I understand that a Lie algebra is a vector space of tangent vectors at the identity element of a Lie group, and the exponential map helps you go between the Lie group and the Lie algebra. I don't really understand more than that at this point.
@garytzehaylau94325 жыл бұрын
@@eigenchris Thanks for your help. I can provide some useful links for you to learn more stuff and make videos. I don't know manifolds or Killing vectors / Lie derivatives either, but I think these might be useful to you (similar teaching style?). Lie derivative: kzbin.info/www/bejne/fniWhYeprZ2DiJI - Killing vectors: kzbin.info/www/bejne/kInVqJuehqZ4qdU - I also think this might be useful (general relativity "with no gaps") when you make your videos, because the lecturer said he will not skip any detail when he teaches the course: kzbin.info/www/bejne/gKumiWZ8pql8hcU&lc=z23xyvdgdmithhhvhacdp43bic1l3qbjetiioa1qpu1w03c010c.1575076378780604
@chenlecong9938 Жыл бұрын
20:42 - would you mind explaining where the expression on the top-right came from? Or did you derive it in another video in the tensor calculus playlist?
@jdp99943 жыл бұрын
Thank you for this excellent summary. Very helpful.
@honzaa62356 жыл бұрын
Hi, just wanted to say that your videos are simply brilliant and please keep going. I also wanted to understand general relativity so I looked up tensors and tensor calculus, and, who would have thought, it's quite complicated. I'm making progress though and your videos are helping a lot.
@eigenchris6 жыл бұрын
Thanks. I plan to add a couple more videos in this series. After that I will do a short series on general relativity.
@rasraster3 жыл бұрын
Like a drink of water in the desert! One thing I'll note is a little extra info for people like myself who don't work with Einstein notation in everyday life: The renaming of summed indices may be jarring and may raise questions when there are 2 or more terms. If you work it out you'll see that the only times you cannot rename a summed index are: (1) there are other terms that use the new letter but it is not summed, and (2) the new letter is summed in all terms, but the range of summation would differ in each term (e.g. k = 1..2 in one term but k =1..3 in another term). Except for that, renaming always works.
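A toy example of the renaming rule (illustrative, not from the video):

```latex
% Safe: each dummy pair is internal to its own term, so both can be renamed.
a_i b^i + c_j d^j \;=\; a_k b^k + c_k d^k
% Not safe: in  a_i b^i v^j  the index j is free, so renaming the dummy
% i -> j  would produce  a_j b^j v^j , where j appears three times -- invalid.
```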
@aliesmaeil10445 жыл бұрын
This is the most useful series I have ever watched. Thank you very much. Please give us more series...
@BLVGamingY Жыл бұрын
speaks in sans speech bubbles
@xinyuli66032 жыл бұрын
Hi Chris, I think I confused myself here, at 18:47 of this video. On the right-hand side for the boring connection, both gamma_ijk terms are zero. Then why, on the left-hand side, is the partial derivative of g_{ij} not equal to zero?
@eigenchris2 жыл бұрын
Hi. The ∂_1(g_22) term will be non-zero for the metric given above (the sin^2 will become 2*sin*cos).
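A quick symbolic check of that claim (a sketch with sympy, assuming the metric diag(1, sin²(u)) shown in the video):

```python
import sympy as sp

u, v = sp.symbols('u v')
g = sp.Matrix([[1, 0], [0, sp.sin(u)**2]])  # metric components g_ij

# With the "boring" connection, every Christoffel symbol is zero, so metric
# compatibility would force all partial derivatives of g_ij to vanish:
print(sp.diff(g[1, 1], u))  # 2*sin(u)*cos(u) -> nonzero, so compatibility fails
```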
@jacobvandijk65256 жыл бұрын
After 28:49: "I hope you find these videos helpful". From my perspective, that's The Understatement of The Year 2018. Thanks, Chris.
@eigenchris6 жыл бұрын
I'm glad you like them. :)
@jacobvandijk65256 жыл бұрын
@@eigenchris You better you bet, Chris!
@sigma2395 жыл бұрын
Please please please keep making more videos! Differential geometry and then General Relativity!
@FantasmasFilms6 жыл бұрын
Thank you, thank you! I love you! Your work has been soo soooooo enlightening!
@eigenchris6 жыл бұрын
I'm very glad to hear it.
@canyadigit62744 жыл бұрын
I think I found a mistake. At 9:06 you say the vector is being parallel transported, yet the acceleration vectors have a tangential component in them, since the vector looks like it's "rotating" in each tangent space. Am I wrong?
@eigenchris4 жыл бұрын
I'll admit some of the diagrams I've included in this video are somewhat misleading. In the tangent spaces, we can't really tell what "rotation" or "tangential acceleration" looks like just by looking at the pictures. We can't decide which direction is "up" in each separate tangent space unless we have some way of connecting them. The only way we have of connecting tangent spaces is the covariant derivative. So the covariant derivative, and nothing else, is our definition of whether or not acceleration is happening.
@canyadigit62744 жыл бұрын
eigenchris this makes more sense now. Thank you.
@aidanmcsharry74192 жыл бұрын
Hi eigenchris, hope all is well. Have a quick question regarding the idea of metric compatibility, please: the formula at 13:48 says that when we take two vectors and parallel transport them together, the dot product is a constant. However, the right-hand side of the formula seems to say that we differentiate one, take the dot product, and then add the same with the roles reversed... would this not imply that we are taking the dot product of vectors that now live in different vector spaces (as one has been differentiated and the other has not)? Thanks in advance :).
@eigenchris2 жыл бұрын
The covariant derivative outputs a vector that lives in the same tangent plane. You can think of taking an "initial" vector at the start of a path and a "final" vector at the end of a path, then slowly shortening the path until it becomes a point (similar to how you take derivatives of ordinary functions by looking at a line connecting two nearby points and then taking the limit until the line becomes tangent to the curve). "Parallel transport" is just saying that the covariant derivative is zero, similar to saying a function is constant when its derivative is zero. Does that answer your question?
@aidanmcsharry74192 жыл бұрын
@@eigenchris I understand that to do anything useful with the two vectors, we'd need them to be in the same vector space, which we can achieve by 'connecting' two vector spaces. But my lack of understanding is with the idea of dotting a vector that has been parallel transported to some vector space with one from the original vector space (the right-hand side of the aforementioned equation). If the parallel-transported vector is actually 'moved' into the vector space of the vector it is being dotted with, then surely the left-hand side of the equation doesn't necessarily make sense, as it requires us to dot v and u, which live in different vector spaces. In terms of v and u: if we write derivative(d) of (v.u) = d(v).u + v.d(u), and v and u are in the same tangent space, then surely d(v) no longer is, as it has been 'connected' to some other tangent space, and so d(v).u isn't a meaningful expression (and ditto for the v.d(u) term). My issue really is just with the spaces each of these vectors belongs to. Thanks a load!
@goddessservant66693 жыл бұрын
I'm giving this incredible guy more money.
@zoltankurti6 жыл бұрын
You always have to use the proper definition of the torsion. You are right that the Lie bracket of your basis vector FIELDS is 0, but if you construct two general vector fields from those (a position-dependent linear combination), you will not get 0, because of the product rule. So either write as a definition that nabla_e_j(e_i) equals nabla_e_i(e_j), or, for general vector fields, say that the difference equals the Lie bracket of the two, since that is always the case for general smooth vector fields.
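For reference, the coordinate-free definition of torsion being referred to (standard; X, Y are arbitrary smooth vector fields):

```latex
T(X, Y) \;=\; \nabla_X Y \;-\; \nabla_Y X \;-\; [X, Y].
```

For coordinate basis fields the bracket [e_i, e_j] vanishes, so T = 0 reduces to the symmetry Γ^k_ij = Γ^k_ji used in the video, but for general fields the bracket term cannot be dropped.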
@benjaminandersson2572 Жыл бұрын
Very good explanations. I don't think you mention that you are multiplying both sides by g^{im} at the end, at 15:44, where g^{im} is the i-th row, m-th element of the inverse of the matrix representation of the metric g.
@sylargrey10166 жыл бұрын
These videos have been so helpful
@eigenchris6 жыл бұрын
Glad to hear it.
@canyadigit62744 жыл бұрын
Hi again Chris. At 26:36 of the video, you say that the covariant derivative of the metric tensor is zero in all directions of the intrinsically curved space. This doesn't make complete sense to me, since g_22 = sin(u)^2 in the metric tensor, meaning that the metric tensor changes in the dR/du basis vector direction. So since the metric tensor changes in the dR/du basis vector direction, the covariant derivative of the metric tensor shouldn't be zero, right? And from what I understand, in the intrinsic space the covariant derivative is just the normal derivative, since there is no "normal" direction. This question is really confusing me, and I've talked to some other people about it, but their answers didn't satisfy me. Can you explain why it's zero in all directions even though it changes in the dR/du direction?
@eigenchris4 жыл бұрын
There's a difference between the derivative of the metric tensor COMPONENTS being zero, and the derivative of the metric tensor itself being zero. Remember the covariant derivative of the metric tensor will have 3 parts: one for the components, and two for the basis elements in the tensor product (those have the Christoffel symbols in them).
@eigenchris4 жыл бұрын
Basically the christoffel terms will cancel out with the derivative of the metric components exactly and the result will be zero.
@canyadigit62744 жыл бұрын
eigenchris but at 26:34 you write it as the covariant derivative of the tensor itself being equal to zero (bottom left corner). I understand how computing the covariant derivative of the metric tensor/components gives you zero, but I don’t understand how that makes sense physically. The metric tensor is changing along the u direction, yet computing the covariant derivative gives you zero. Why doesn’t the computation show what’s actually going on?
@eigenchris4 жыл бұрын
I disagree with you when you say "the metric tensor is changing along the u-direction". If you took a unit vector and parallel transported it along the u-direction, it wouldn't change length. If you took a pair of unit vectors and parallel transported them along the u-direction, the angle between them would stay the same. The function g that helps us compute the lengths and angles of vectors doesn't change along the u-direction: the rules for measuring lengths and angles stay the same. What *is* changing is the components of g, but that doesn't tell us much. A unit vector W pointing east (along the v direction) would have components that change as we parallel transport it along the u-direction (this is because the e_v basis vector is changing length along the u-direction), but the vector W itself isn't changing (its covariant derivative is zero). I'm sorry if I'm repeating myself, but this is how I think of it. Components on their own don't tell you anything, and you should never trust components on their own to be meaningful information. You must look at the combination of components and basis vectors. Any changes you see in the metric components will get balanced out by opposite changes in the epsilon basis (g is covariant but the epsilon basis is contravariant).
@canyadigit62744 жыл бұрын
eigenchris but the metric tensor in an intrinsically curved space (sphere projected onto a plane) is a tensor field. Yes, every point in the space has a metric tensor with different components, but it's still measured in the same basis, no? g = [[1 0][0 sin(u)^2]] in the dR/du^i basis alone. Meaning that not only are the components changing, but we are still measuring them with the same basis, so the metric tensor itself is actually changing through the space, not just the components.
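For what it's worth, the cancellation eigenchris describes can be checked symbolically. A minimal sympy sketch, assuming the 2-sphere metric diag(1, sin²(u)) and the Levi-Civita Christoffel symbols computed from it:

```python
import sympy as sp

u, v = sp.symbols('u v')
x = [u, v]
g = sp.Matrix([[1, 0], [0, sp.sin(u)**2]])  # 2-sphere metric components
ginv = g.inv()

# Levi-Civita Christoffel symbols, Gamma[l][j][k] = Gamma^l_{jk}
Gamma = [[[sum(ginv[l, m] * (sp.diff(g[m, j], x[k]) + sp.diff(g[m, k], x[j])
                             - sp.diff(g[j, k], x[m])) for m in range(2)) / 2
           for k in range(2)] for j in range(2)] for l in range(2)]

# g_ij;k = g_ij,k - Gamma^l_{ki} g_lj - Gamma^l_{kj} g_il  (should all vanish)
for i in range(2):
    for j in range(2):
        for k in range(2):
            cov = (sp.diff(g[i, j], x[k])
                   - sum(Gamma[l][k][i] * g[l, j] for l in range(2))
                   - sum(Gamma[l][k][j] * g[i, l] for l in range(2)))
            assert sp.simplify(cov) == 0

print("all components of the covariant derivative of g vanish")
```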
@sanidhyasinha57353 жыл бұрын
Thank you very much. One of the best lectures.
@deeperunderground094 жыл бұрын
Hi. There is something I don't understand. The metric compatibility property implies that the covariant derivative of the dot product between two vectors is equal to zero. I read on Wikipedia that the same property also implies that the covariant derivative of the metric tensor is equal to zero. At 14:20 you write that the covariant derivative of the dot product of two basis vectors is equal to the partial derivative, since the dot product gives a scalar. But isn't that the same thing as saying that the covariant derivative of the metric tensor components is equal to the partial derivative of those same components? That would also mean that the partial derivative of the metric tensor equals zero, which seems kind of odd... Where is my mistake?
@eigenchris4 жыл бұрын
In my understanding, there's no such thing as "the partial derivative of the metric tensor". You can take the partial derivative of the metric tensor components (g_ij), but you cannot take the partial derivative of the metric tensor itself (g). The covariant derivative of the metric tensor g results in 3 sets of terms: 1 set involving partial derivatives of the components g_ij, and 2 sets involving covariant derivatives of the basis (which end up being Christoffel symbols). For the slide I show at 14:35, the 1st set of terms (partial derivatives of components g_ij) cancels out exactly with the other 2 sets of terms (Christoffel symbols). This is why the covariant derivative of the metric tensor g is zero. If you know the semi-colon notation... g_ij;k = 0 always, but g_ij,k could be non-zero.
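In formulas (a sketch using the video's Christoffel conventions; semicolon = covariant derivative, comma = partial derivative):

```latex
g_{ij;k} \;=\; g_{ij,k} \;-\; \Gamma^{l}{}_{ki}\, g_{lj} \;-\; \Gamma^{l}{}_{kj}\, g_{il} \;=\; 0,
\qquad \text{even though } g_{ij,k} \neq 0 \text{ in general.}
```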
@deeperunderground094 жыл бұрын
@@eigenchris Thank you for your answer. This made things much clearer. So the fact that the covariant derivative of the metric tensor is equal to zero doesn't mean that the single components are equal to zero as well. Yet I still don't understand your last line: g_ij;k = 0 ... g_ij,k =/= 0 . But isn't g_ij just a scalar component of the metric tensor g? By the way, I take the occasion to compliment you on your exceptional work.
@thigadao50866 жыл бұрын
Hi man, first of all, I hope you don't mind my English, because it isn't my native language. I'm from Brazil, and I really enjoy your videos. I've been following you since the "tensors for beginners" playlist, where, in one of those episodes, you showed us your educational plan, which included, after this tensor calculus season, a differential geometry series, and after that one, general relativity videos. I saw that you are no longer continuing that plan, and this really worried me, because I really, as I already said, REALLY enjoy your videos (they help me a lot), and, therefore, I don't want them to stop. I hope you read this post and do a forward transformation on your old plans (this would be awesome xD). So, thanks for your attention and for the knowledge you've been sharing with us, and, simply, bye. 😁
@eigenchris6 жыл бұрын
Hi. I'm glad you like my videos. My tensor calculus series basically "became" a differential geometry series starting at video 15 (geodesics). I plan to make 3 more videos in this series on curvature and torsion. After this I will start work on a short series on General Relativity.
@thigadao50866 жыл бұрын
Thanks for your answer 😁. I'm glad you're going to continue the videos. But something seems really strange to me: in most of the books I've looked at about differential geometry, a prerequisite was topology. How did we learn this without that topic, and do you have any thoughts about making a playlist on it? =)
@eigenchris6 жыл бұрын
There are basically two "versions" of differential geometry. There's the "classical" version that Gauss did and the "modern" version. The classical/Gauss version is all about 2D surfaces that live in 3D space (sphere, cylinder, torus, etc.). The modern version is more abstract and is about "Riemannian manifolds", which are abstract curved spaces of any dimension. I feel most of the important ideas can be understood using the classical/Gauss approach. The modern approach requires you to understand the definition of a manifold, which requires topology. I find the definition of manifolds somewhat overly complicated and not needed if you just want the basics, so I don't talk about it.
@rajanalexander49493 жыл бұрын
This is incredible.
@jamesu80333 ай бұрын
At 24:18, how did you take the covariant derivative of the components of the metric (g_rs) and say it was equal to the partial derivative of the components of the metric?
@eigenchris3 ай бұрын
That's just part of the definition. The covariant derivative of a scalar function is the partial derivative.
@DavidPumpernickel4 жыл бұрын
This helped with my DG course so much
@biblebot39478 ай бұрын
9:57 - does this Christoffel expansion work in terms of derivatives though? What I mean is, can you expand a second-order derivative in terms of first-order derivatives using the connection coefficients? I feel like that wouldn't be the case, but if so, it wouldn't make much sense to identify vectors and derivatives like this.
@eigenchris8 ай бұрын
I believe you should be able to expand any derivative using some combination of Christoffel symbols. The covariant derivative of a vector field gives another vector field. You can continue taking covariant derivatives to get more vector fields as much as you like, provided the field is continuous. You just need to be sure to apply product rule correctly and take the derivatives of both the vector components and the basis vectors.
@ocularisabyssus96285 жыл бұрын
Great series! Thank you
@112BALAGE1126 жыл бұрын
Brilliant explanation. Thank you.
@jacobm51674 жыл бұрын
At 21:34 I'm having a rough time justifying the first term on the right hand side from the third line to the fourth line. Help!! 😯
@eigenchris4 жыл бұрын
I didn't think much of it when I made the video, but I see your point. I think that step requires metric compatibility. The dot product can be viewed as the metric tensor acting on the two vectors. But the metric tensor can also change a vector into a covector if we leave one of the input slots empty. However the question remains how to move the metric tensor g from the outside of the derivative to the inside so that it can act on the a vector. If we assume metric compatibility, we can freely move the metric tensor in and out of covariant derivatives, since the covariant derivative of the metric is zero and the extra term in the product rule vanishes. Does this explanation make sense to you?
@jacobm51674 жыл бұрын
eigenchris -- Yes, it makes lots of sense. You're saying that ∇g_ij = 0 implies that g_ij∇v=∇(g_ij*v).
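Written out, the step looks like this (a sketch; u, v are the two vector fields and w the direction of differentiation):

```latex
\nabla_w \big( g(u, v) \big)
\;=\; (\nabla_w\, g)(u, v) \;+\; g(\nabla_w\, u,\, v) \;+\; g(u,\, \nabla_w\, v)
\;\overset{\nabla_w g \,=\, 0}{=}\;
g(\nabla_w\, u,\, v) \;+\; g(u,\, \nabla_w\, v),
```

which is exactly the dot-product product rule used at 21:34.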
@jacobm51674 жыл бұрын
@@eigenchris -- Your lectures are extremely helpful. You must put lots of work into them. Are you a professor?? I appreciate the help
@eigenchris4 жыл бұрын
@@jacobm5167 I have an undergrad degree in physics but I'm not a professor. I tried to teach myself general relativity in my free time and found it very hard, so I made these videos to help other people.
@jacobm51674 жыл бұрын
@@eigenchris-- Any favorite textbooks??
@louleke776 жыл бұрын
I did find these videos helpful. Thanks a lot for your work, you're great!
@kimchi_taco8 ай бұрын
The covariant derivative of a scalar field has the same notation as the gradient, but they are different, right? The gradient needs the inverse metric tensor, but the former doesn't. If so, the covariant derivative notation, nabla, is a bit confusing.
@takomamadashvili360 Жыл бұрын
U are brilliant! Thanks a lot!!🥳🥳
@yamansanghavi5 жыл бұрын
Excellent lecture. Thank you.
@anantbadal6045 Жыл бұрын
The covariant derivative of a scalar is a scalar, so at 13:34 the RHS of the equations should be the scalar 0, not the zero vector.
@eigenchris Жыл бұрын
Good point.
@pseudolullus13 күн бұрын
This video is also pretty useful for gauge theory :D
@muhammedustaomeroglu34513 жыл бұрын
In the definition, is the formula for the covariant derivative (which includes Christoffel symbols) essential? Or do other formulas that obey the 4 properties also count as covariant derivatives?
@eigenchris3 жыл бұрын
I think the definition with the Christoffel Symbols is called a "linear connection" or "affine connection". This is pretty much the only one we care about in General Relativity. The Covariant Derivative can get pretty abstract and appears in other places too. For example, I think Quantum Field Theory has something called a "Gauge Covariant Derivative" and that doesn't use Christoffel Symbols. Instead it uses "Gauge Fields" or something. I'm not super familiar with it.
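For reference, the usual coordinate-free definition is axiomatic (these are standard textbook axioms for an affine/Koszul connection; whether they match the video's "4 properties" exactly is my assumption): any operator ∇ satisfying

```latex
\nabla_{fX + gY} Z = f\,\nabla_X Z + g\,\nabla_Y Z
\qquad\text{(tensorial in the direction)}
```
```latex
\nabla_X (Y + Z) = \nabla_X Y + \nabla_X Z
\qquad\text{(additivity)}
```
```latex
\nabla_X (f\,Y) = X(f)\,Y + f\,\nabla_X Y
\qquad\text{(Leibniz rule)}
```

counts as a connection; expanding ∇ in a coordinate basis then produces the Christoffel-symbol formula, with the Γ's as free parameters.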
@muhammedustaomeroglu34513 жыл бұрын
@@eigenchris thank you for your response.
@g3452sgp5 жыл бұрын
Hello, How are you doing? When are your GR series videos coming?
@lixianghe-tf4ro11 ай бұрын
Amazing video. It's so friendly to introduce highly abstract concepts in R^n first, so that ordinary students like me can understand them. Now I have a little question: the covariant derivative in some sense measures the "difference" along a certain direction, so the covariant derivative of a k-tensor field should also be a k-tensor field, right? And according to my poor knowledge of manifolds, if the operator "d" acts on a scalar field, we get a covector field rather than a scalar field. So is there any connection between "d" and the covariant derivative, like the coefficients of the gradient (corresponding to d) being the directional derivatives (corresponding to the covariant derivative)? Hoping my poor English and grammar will not be confusing or offensive 😥
@g3452sgp6 жыл бұрын
I think it would become even better if you used Greek letters like μ, ν, γ for the summation indices. This practice helps make things clear, so that we can see which index is in nominative use and which index is used just for summation in the equations. As a matter of fact, many GR books incorporate this practice.
@eigenchris6 жыл бұрын
I was mostly under the impression that in GR textbooks, greek letters meant "4D spacetime" and latin letters just meant "3D space". Is there another reason to use one vs the other?
@g3452sgp6 жыл бұрын
@@eigenchris There is a very good reason. On this point, let me say what I actually do: each time I see a tensor equation in your video, the first thing I do is rewrite the equation on a sheet of paper so that Roman-letter indices used for summation are replaced by Greek letters, while indices in nominative use stay the same. This way I can separate the main indices from the dummy ones, so that I can focus on the main meaning of the equation.
@eigenchris6 жыл бұрын
@@g3452sgp What exactly do you mean when you say "nominative"? Is this for naming things, like the basis vectors? Or is it just any index not used for summations?
@g3452sgp6 жыл бұрын
@@eigenchris I mean an index is nominative when it shows up on both sides of the equation. A nominative index specifies a specific direction. Most of the time the i, j indices are used as nominative, because they specify a specific direction rather than all directions, as summation or dummy indices do. On the other hand, summation or dummy indices like l, m in your video do not suggest a specific direction of interest. They simply run over all directions for the repeated sum. They are local, and they show up on only one side of the equation, right? Therefore separating nominative indices from dummy indices is important. I think this is why many GR textbooks use Greek letters to mark which indices are dummy.
@j.k.sharma3669 Жыл бұрын
Hi Chris, can you clarify whether parallel transport on a sphere is possible along geodesic paths only? Because along other paths the rate of change of the vector is not zero.
@swalscha5 жыл бұрын
In the metric compatibility expansion in terms of the basis vectors, you already used the torsion-free property by swapping the lower indices compared to the definition in the upper-right corner. Also, when you take the covariant derivative of the dot product, you wrote the answer as a zero vector. Shouldn't that be a scalar? We can see in the summary that the covariant derivative is expressed in the same space as the tensor field given as the input. This channel is awesome! Please keep going, as there are many of us enjoying your videos (which clearly have no equivalent on YouTube)! Thanks
@steffenleo59972 жыл бұрын
Good day Chris, which one of your tensor calculus videos explained tensor densities?... Have a nice weekend... 👍👍🙏
@eigenchris2 жыл бұрын
I don't think I talk about tensor densities... I briefly touch on the volume form in Tensor Calculus 25, which behaves like a density.
@carsonyanningli33014 жыл бұрын
Hi Chris, thank you very much for a great series of videos. There are plenty of compliments already. So I will save mine. I do have one question, if you use the "boring connection", then it just becomes a regular partial derivative with respect to the coordinates. But isn't that non-tensorial and thus not covariant? A regular partial derivative does not transform like a tensor when you change reference frames. But the Levi-Civita connection does transform as a tensor.
@eigenchris4 жыл бұрын
Note that just because the Boring Connection coefficients are zero in the coordinates I gave, it doesn't mean they are zero in all coordinate systems (if you watch video 17.5 you will see the Christoffel symbols transform with an extra term that gets added on, because they are not tensors). If you change coordinates, the Boring connection coefficients may be non-zero, and I believe the derivative works out to be a tensor in the end.
@carsonyanningli33014 жыл бұрын
@@eigenchris thanks for the clarification. I didn't realize there is a 17.5 video.
@eziooresterivetti56715 жыл бұрын
We all badly need you to keep on sorting out & explaining all this tensor stuff. Hope you will get to the curvature tensor ... & the Einstein field equation. I think most of us are willing to reward you for your time (as I do with wiki) but don't see any "donate" button. Great !!!!
@ElliottCC3 жыл бұрын
Give this guy some money! I have, twice!
@秦强-q7o5 жыл бұрын
@20:30 why is the i, j, k arrangement in the 𝝠 (covector connection) not in the same style as 𝚪 (the vector connection)?
@eigenchris5 жыл бұрын
Because alpha is a covector, its (j) components go on the bottom. This means the matching (j) index on 𝝠 has to go on top in order to do a summation.
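Both the index placement and the sign drop out of one short computation, assuming the video's conventions ∇_k e_j = Γ^i_kj e_i and ε^i(e_j) = δ^i_j (the covariant derivative of the constant scalar δ^i_j is zero):

```latex
0 \;=\; \nabla_k \big( \varepsilon^i(e_j) \big)
\;=\; (\nabla_k\, \varepsilon^i)(e_j) \;+\; \varepsilon^i(\nabla_k\, e_j)
\quad\Longrightarrow\quad
\nabla_k\, \varepsilon^i \;=\; -\,\Gamma^i{}_{kj}\, \varepsilon^j .
```

The summed j has to sit upstairs on the ε and downstairs on the Γ, which forces the arrangement that differs from the vector case.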
@chenlecong9938 Жыл бұрын
From 5:40 to 7:17,I’ll assume we’re sticking on to Intrinsic Geometry since the normal component are dropped…..are we?
@ES-qe1nh Жыл бұрын
intrinsic=based
@JgM-ie5jy6 жыл бұрын
About the Greek and Latin / Roman characters for indexes : in his book No-Nonsense Electrodynamics: A Student Friendly Introduction, Jakob Schwichtenberg states that Greek characters are used for an index which can take the values 0, 1, 2, 3 and Roman characters for an index which can only take values 1, 2 and 3 - not 0. As long as nobody starts to play fast and loose with this "convention" - such as using iota or omicron for a single index with no indication as to whether it is the iota or omicron as opposed to the identical-looking roman i or o characters, I am OK with such an "implied" indication of range of values. And then Mr. Schwichtenberg mentions the Levi-Civita SYMBOL, which feels like a Kronecker δ on LSD, taking on the values +1, 0 and -1 depending upon permutation of the indexes. Wikipedia starts from that symbol and goes on seemingly forever in another world of permutation tensor, etc.
@eigenchris6 жыл бұрын
I think the greek/latin letter convention is mostly used in General Relativity, where the 0-component is "special" because it is time, not space. In these videos I don't deal with time components so I don't think the convention means very much.
@garytzehaylau94325 жыл бұрын
Excuse me - after watching the lecture several times, I think there is another problem in the video that I don't understand. At 21:40 there is a term (∇_∂i a) · V = (∇_∂i α)(V) (you assume a · __ = α). I don't understand why they are equal; I tried to derive it by myself but failed. *Maybe the derivation below is wrong and can be ignored: d(α_i)/du^k (an input slot for a vector) = d(a^j g_ij)/du^k {since a^j g_ij = α_i}, which is not equal to (da^j/du^k) g_ij (where da^j/du^k is the "dot product" part). Could you explain? Thanks. *Since α = a · __ implies α_i = a^j g_ij.
@eigenchris5 жыл бұрын
Every covector can be written in the form (vector dot __). Recall from earlier videos how the metric tensor (and its inverse) is used to convert between vector and covector "pairs". The component formulas are alpha_i = a^j g_ij, or in other words a^j = alpha_i g^ij (where g with superscripts is the inverse metric tensor).
@garytzehaylau94325 жыл бұрын
@@eigenchris My problem is the "derivative part". I don't have a problem understanding α = a · (something), but I have a problem when I try to understand (∇_∂i a) · __ = (∇_∂i α) __ . I've worked on this for several hours (I understand what you said). The given assumption is α = a · __, and I also know a^j = α_i g^ij. But the problem is: given that α = a · __ is true, we then need to show that d(α)/du^i = d(a)/du^i · __ is valid, as you said in the video (you skip the math detail behind it, which is (∇_∂i α)(__) = (∇_∂i a) · __ , i.e. d(α)/du^i = d(a)/du^i · __ ).
@garytzehaylau94325 жыл бұрын
@@eigenchris I will give you more support to make new videos! You help me a lot, thank you. What I tried to do is: d(α)/du^m = d(α_i ε^i)/du^m = d(α_i)/du^m ε^i + d(ε^i)/du^m α_i = d(a^j g_ij)/du^m (g^xi · __) + d(g^xi · __)/du^m (a^j g_ij) = d(a^j g_ij g^xi · __)/du^m, which is not equal to (da^j/du^m) · ( ). [[[I should provide more information: a^j g_ij = α_i, and the g^xi should be the "counterpart of the covector", just as a covector W = W^j g_ij ε^i; so g^xi plays the role of W^j if we compare the covector with the unit covector basis. The problem is that "a · __ = α is true" alone doesn't let us conclude that ε^i = g^xi · __, so that d(a)/du^i · __ = d(α)/du^i would also be true. As my derivation shows, d(a · __)/du^i seems to be the correct one; I just wonder how you reached this conclusion.]]] I tried to expand everything to get d(α)/du^i = d(a)/du^i · __, given that α_i = g_ij a^j is true. *In your third and fourth lines you said d(a)/du^i · V = d(α)/du^i (V), given that a · V = α(V) is true. Here is the hardest part for me: the dot is not inside d(a · __), but located outside, as d(a) · __ (in my own derivation). It is hard to do and I get confused :( Question (2): #By the way, it seems you made a mistake at 10:08 when you used nabla_W V - nabla_V W = [V,W]; I think it should be [W,V], because if you put W = d/du^i, then W acts on V first, becoming W(V).
@garytzehaylau94325 жыл бұрын
@@eigenchris I tried it again and derived it today. If (∇_∂i a) · __ equals (∇_∂i α) __ , we can only assume that (d(a^j)/du^k) g_ij ε^j = d(α)/du^k, i.e. that d(α)/du^k = d(a)/du^k · __ . But I don't understand why, and it is strange to me, because it assumes g_ij is located outside the derivative and is unaffected by it. I tried to use the property α_j = g_ij a^i to prove the relationship d(α)/du^k = d(a)/du^k · __ . For example, I tried to write out d(α)/du^k with the product rule, expand it all out, and hoped to obtain d(a)/du^k · __ in the end by using the metric g_ij and ε^j(e_i) = δ^j_i, but my attempts all failed, since there is an unknown coefficient that I cannot fit into the equation.... There is a logical problem when you use d(a)/du^i · __ = d(α)/du^i ( ), assuming a · __ = α( ) is true. I just wonder why this is true; you skip the math behind it...
@xuhu81484 жыл бұрын
@@garytzehaylau9432 I got exactly the same confusion as you. I looked up Wikipedia, and the definition there is: the covariant derivative of {a covector field applied to a vector field pointwise, which is a scalar field} is the covariant derivative of the covector field applied to the vector field pointwise + the covector field applied to the covariant derivative of the vector field pointwise. With this definition, we can directly derive Λ = -Γ. No need to use metric compatibility. And I don't think this definition implies metric compatibility. The video seems to imply that metric compatibility ==> Λ = -Γ, or that Λ = -Γ requires metric compatibility, which I don't feel is the case. But anyway, the video series is great! Much cleaner than many other materials.
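The step the thread is wrestling with can be written in one line using the Leibniz rule for tensors; with metric compatibility the ∇g term drops out (a sketch, with α = g(a, ·) as in the video):

```latex
\nabla_k\, \alpha \;=\; \nabla_k \big( g(a,\,\cdot\,) \big)
\;=\; (\nabla_k\, g)(a,\,\cdot\,) \;+\; g(\nabla_k\, a,\,\cdot\,)
\;\overset{\nabla g \,=\, 0}{=}\;
g(\nabla_k\, a,\,\cdot\,) \;=\; (\nabla_k\, a) \cdot \underline{\quad}\,,
```

which is why the metric can be treated as a constant and slid outside the derivative. As xuhu notes, the relation Λ = -Γ itself follows from the Leibniz definition alone; only the conversion between a and α needs the metric.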
@broiled_lemming4533 Жыл бұрын
Is there any actual mathematical basis for declaring that the covariant derivative follows a Leibniz rule over the tensor product, or is this just convention / necessary for metric compatibility? It just seemed like a strange thing to declare as an axiom, is all. As an aside, wonderful series; the whole workup has been phenomenal in explaining and constructing the covariant derivative!
@eigenchris Жыл бұрын
I think it's just a standard expected behaviour for derivatives, so it is declared as an axiom.
@nortong.dealmeida94404 жыл бұрын
Thank you, your videos are wonderful. I have a remark: at 28:51 you cancel out different terms to get Gamma = -Lambda. Since these different terms are part of a sum, I'm wondering if you did this as a "trick", because the alphas and v's should be factored out...
@eigenchris4 жыл бұрын
Yeah, that was a blunder on my part. You can't directly "cancel" them as if they were terms that can be divided. However, the formula should work for ANY choice of alpha and v, so you can conveniently choose alpha and v to each contain all 0s and a single 1, and prove that the elements of Lambda are equal to minus the elements of Gamma, entry-by-entry. I apologize for not explaining that better.
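Spelled out (a reconstruction of the trick, not a quote from the video): the identity holds for every covector α and vector v, so choose them to be basis elements.

```latex
\alpha_i \left( \Lambda^i{}_{kj} + \Gamma^i{}_{kj} \right) v^j = 0
\;\;\text{for all } \alpha,\, v
\quad\Longrightarrow\quad
\text{pick } \alpha_i = \delta^m_i,\; v^j = \delta^j_n :
\quad
\Lambda^m{}_{kn} = -\,\Gamma^m{}_{kn}\,.
```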