
What is a Tensor 6: Tensor Product Spaces

30,730 views

XylyXylyX

A day ago

What is a Tensor 6: Tensor Product Spaces
There is an error at 15:00 which is annotated, but annotations cannot be seen on mobile devices. It is a somewhat obvious error! Can you spot it? :)

Comments: 194
@alexkatko431 3 months ago
8 years later and still an amazing set of videos
@TheNutellamaster 3 years ago
Even though it has been a while since you uploaded this video, I want to express my sincere gratitude to you for having taken the time to prepare this series. I've only watched two now, not even starting from the beginning, but this has already cleared up so many questions I have carried around since the very beginning of my studies. Thank you so much!
@johnbest7135 6 years ago
Terrific series so far. For the first time I am getting some clarity on this subject. Most grateful to you.
@jackozeehakkjuz 7 years ago
Tensors look so natural now. Thank you, again. I'll finish this series for sure. :)
@ozzyfromspace 4 years ago
I’ve been trying to understand these lectures for quite some time now. I always come back to them. I’m happy to say I finally get it 😊🙌🏽🎊🥳! I needed a few supplementary videos that introduced geometry with examples, so now I get it. Thank you, your videos are a godsend!
@XylyXylyX 4 years ago
I am glad to hear that you persisted. To find a path to understanding requires persistence.
@PunmasterSTP 2 years ago
TPS? Maybe that stands for "The Perfect Showcase"...of information! Thanks so much for making this series.
@rktiwa 4 years ago
Even though you have already maddened me with your manifolds, Christoffel symbols, curvature and all that, given your teaching prowess I wish you would make a series on QFT, because you are the one who presumes the bare minimum on the part of your viewers while explaining such complex theory. I want to be maddened even more.
@billguastalla1392 4 years ago
Thanks for these videos Xyly - I love your operator overloading comment (it could be a template parameter!) and having moved towards algebra after spending time working with C++/Haskell I think this attitude is underrated. Thinking about the underlying "type system" of symbols brings new ways to engage with maths!
@georgeorourke7156 8 years ago
Great lecture - I have one suggestion on the expansion of the tensor product: at 20:00 to 21:30 you quickly go over how a tensor product acts on the underlying Cartesian product spaces. It was done quickly and the T (subscript mu nu) were not included in the demonstration. Although I know that it is straightforward, if you could go over the examples in greater detail we would have the certitude that we're tracking 100%.
@XylyXylyX 8 years ago
Ok, I'm glad you found the answer. Let me know if there is a gap, I'll try to fill it. (BTW I accidentally removed your last comment, sorry)
@enricolucarelli816 7 years ago
If I understood things right, at 15:00 there is a mistake: the tensor R should map the Cartesian product of covectors to R and the tensor T should map the Cartesian product of vectors to R, and not the other way around as stated there. Am I right? Thank you so much for these excellent lectures.
@XylyXylyX 7 years ago
Enrico Lucarelli Yes, you are correct. You are probably viewing this on a mobile device? My corrections/annotations do not appear on mobile devices. This error has been noticed before. Look in the comments for more information. But regardless, you obviously understand.
@jason_liam 3 years ago
@21:09 Shouldn't this be the opposite? For example it should be instead of . I am talking about the first term in the expansion. They are commutative, but still, shouldn't it be the opposite?
@apm77 A month ago
Suppose an expression like [T⊗U](V,W) was written in a notation that does not distinguish between A⊗B and (A,B), that is, in which the tensor product of A and B is indistinguishable from the ordered pair of vectors A and B, so for example [T⊗U](V,W) looks identical to [T⊗U][V⊗W]. Would any information actually be lost? In one sense, clearly yes, but that's because the null operator is overloaded: depending on the arguments we interpret the absence of an operator to mean either "acting upon" or "multiplied by", and we need to know which it is. But if that ambiguity were removed, would information still be lost? Because it seems like the distinction between an ordered pair of vectors and a rank 2 tensor is the same as the distinction between a stick and a lever, it's entirely a question of how it's being applied.
@viktorzelezny1518 4 years ago
Thank you (again) for your amazing lectures! I am currently writing my master's thesis about Hilbert space factorization, and your videos about tensors are a huge help :) I just have one question: How do you rigorously formulate the addition operation for tensor products? In a vector space we can always write a+b=c, with a, b, c vectors. But for tensors this doesn't work, e.g. the Bell state |1}@|0} + |0}@|1} cannot be expressed by a single tensor |a}@|b}. Can this be resolved by simply renaming the tensors |1}@|0} == A, |0}@|1} == B, |1}@|0} + |0}@|1} == C and so on, or is there a more fundamental solution that I am missing? Best regards and again thank you Viktor
@XylyXylyX 4 years ago
You are missing something. The Bell state that you wrote is the sum of two *basis* vectors. If you cast any vector space (including any tensor product space) in a basis, you are stuck with those basis vectors. The basis in your case is |0>|0> and |0>|1> and |1>|0> and |1>|1>, so EVERY member of the tensor product space must be cast in this basis. Your “a + b = c” is written in *coordinate-free* form, so it appears to you that they combine, but in a basis it would be a = a_1 e_1 + a_2 e_2 and b = b_1 e_1 + b_2 e_2, so “c” would be (a_1 + b_1) e_1 + (a_2 + b_2) e_2. So you are comparing two different formalisms!
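A minimal numpy sketch of this point, assuming a 2-dimensional V with basis |0> and |1> (all numbers hypothetical): the Bell-type sum is a perfectly good member of the tensor product space, but it cannot be factored back into a single outer product, which shows up as the rank of its coefficient matrix.

    import numpy as np

    # Basis of a 2-dimensional V: |0> and |1>.
    e0 = np.array([1.0, 0.0])
    e1 = np.array([0.0, 1.0])

    # Simple ("pure") tensors |1>|0> and |0>|1> as outer products.
    A = np.outer(e1, e0)
    B = np.outer(e0, e1)

    # Their sum is a member of the tensor product space...
    C = A + B

    # ...but no single |a>|b> equals C: an outer product always has a
    # rank-1 coefficient matrix, while C's coefficient matrix has rank 2.
    print(np.linalg.matrix_rank(A))  # 1
    print(np.linalg.matrix_rank(C))  # 2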
@viktorzelezny1518 4 years ago
@@XylyXylyX Yes! Thank you for your answer :) Me not distinguishing those two cases was the reason for many of my misconceptions about tensor spaces
@PunmasterSTP 2 years ago
Hey I know it's been a few years but I just came across your comment and was curious. How did your thesis turn out?
@frankbennett2429 7 years ago
No, I don't get your "oops" comment at 15:00. Your original indices, upper and lower, seem consistent with what you have used earlier in this lecture. For example, when you have a Cartesian product of dual spaces, don't you need basis vectors with upper indices to represent co-vectors? I am enjoying these lectures.
@frankbennett2429 7 years ago
I think I get it now. If you have a Cartesian product of dual spaces, the corresponding map indices have to come from the underlying vector space, using the fact that a vector in the underlying space is also a map.
@apm77 7 years ago
I don't understand why, at 17:40, you invert the order of the vector spaces when showing the correspondence between the tensor products and the Cartesian products. That is, I don't understand why you say that V*⨂V corresponds to V⨯V* instead of, as I would expect, to V*⨯V (will YouTube let me use Unicode symbols in comments? I do hope so). I do get that it probably doesn't matter a great deal, since in this context the Cartesian products are only a formalism that allows you to define a domain, but why complicate things by inverting the order? You do the same thing at 23:00 so it's not an anomaly. I don't have any kind of background in this stuff. I did a couple of years of undergraduate mathematics last millennium but was never any good at it. I started watching this series essentially because YouTube recommended it, and I was curious.
@XylyXylyX 7 years ago
Adrian Morgan The ordering with the tensor product symbol *is the map* and the ordering with the Cartesian product (the plain "x" without a circle around it) is *what is being mapped*. The map named "V* @ V" takes an ordered pair from V X V* to the real numbers.
@samuelmahler5961 7 years ago
Thanks, I was confused there for a moment, too.
@aky68956 3 years ago
I didn't understand the order of vectors and maps within the brackets at 21:25. Shouldn't the maps be e0 and e1?
@joelhaggis5054 7 years ago
At 8:48 you can hear he's like " Why did I use so many tensor products? This was a mistake."
@Mariusz-rj8ye 7 years ago
Thanks for a great lecture. I want to ask if at 14:58 the subscripts and superscripts in T and R shouldn't be the opposite (it should be: T superscript alpha..., e subscript alpha...)?
@XylyXylyX 7 years ago
Mariusz2718 Yes! You are correct! The tensor that maps V x V x V ... to R must have components with *subscripts* and covectors as the basis vectors! Nice catch! I'll make an annotation later today!
@robertbrandywine A year ago
In the previous video you said "this" is a real number, pointing at the board. Did you mean T_xy as a whole was a real number or did you mean x and y were real numbers?
@XylyXylyX A year ago
I'm not sure exactly where in the previous video you mean, but from the text of the question I almost certainly was referring to "T_xy" as a whole. That is the component of something, and the components are scalars of the underlying vector space which was probably a real vector space. "x" and "y" should just refer to some orthogonal axes....
@robertbrandywine A year ago
@@XylyXylyX Sorry, it was at 21:49. You were moving your pointer around under the indices as you said "these are just still a bunch of real numbers". Trying to follow, that left me wondering what T itself resolved to.
@XylyXylyX A year ago
@@robertbrandywine Ok….yes T_mu_nu is a real component and mu and nu are just the integer index values.
@robertbrandywine A year ago
@@XylyXylyX 👍
@alangrayson761 3 years ago
1) IIUC, we agreed that using the LT from the pov of the rest frame, yields the objective reality of what an observer in the moving frame would measure, if he chose to do so. Why then is it alleged that the observer in the moving frame does NOT notice that his clock is running slow, and in the extreme case of an observer falling through the EH of a BH, why doesn't the infalling observer notice that his clock has stopped, even though this is what the distant observer calculates for events in the in-falling frame? 2) When climbing a mountain, if one climbs directly to the top as compared to switch-backs, which is the geodesic path? 3) In SR where the path length is defined as ds^2 = dt^2 - (dx^2 + dy^2 + dz^2), does the geodesic path maximize or minimize ds^2? Is the path length defined the same in GR, and is the geodesic path defined similarly?
@XylyXylyX 3 years ago
1) You may have this backwards? The moving rod looks short to the stationary lab, and the moving clock ticks slow to the stationary lab. The moving observer thinks his/her own clocks and rods are fine. The infalling observer thinks the clocks far away are running fast, but her clock is normal. This is all the consequence of “there is no preferred frame of reference in the universe.” Everyone experiences time and space the same, but they interpret the time and space of other observers to be different. Every inertial observer will think everyone else’s clock is running slow; they will never think their OWN clock is running slow. Likewise they will think distances are compressed, for the OTHER guy. By the way, if time was running slow for you, what possible experiment could you do to figure that out? I don’t think there is any! 2) Neither. Geodesic paths are free-falling paths. To take a geodesic path to the top of a mountain, you would have to shoot yourself from a sea-level gun and as soon as you clear the gun barrel, you would be on a geodesic path to the top. The force of the ground against your feet is accelerating you off your natural geodesic path, which in your example would be a path accelerating you to the earth’s core! 3) The definition of the Lorentzian interval is an assumption of SR. It is minimized in SR via the principle of least action, IIRC, but it has been a while. However beware! If you choose the opposite convention, then it is *maximized*, I think. BTW: I have run across something that probably counts as an “observation” of length contraction! I will mention it in my next lecture, which is about the Lorentz transformation.
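On point 3, a quick numeric check (hypothetical interval components, units with c = 1): the quantity ds^2 = dt^2 - dx^2 comes out the same before and after a Lorentz boost, whichever sign convention you pick.

    import numpy as np

    def boost(t, x, v):
        # Lorentz boost along x with speed v, in units where c = 1.
        g = 1.0 / np.sqrt(1.0 - v**2)
        return g * (t - v * x), g * (x - v * t)

    dt, dx = 3.0, 1.0                 # an interval in the lab frame
    dt_b, dx_b = boost(dt, dx, 0.6)   # the same interval, boosted at 0.6c

    print(dt**2 - dx**2)              # 8.0
    print(dt_b**2 - dx_b**2)          # 8.0 as well: the interval is invariant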
@alangrayson761 3 years ago
@@XylyXylyX But then the LT isn't *transforming* the stationary observer *into* the moving frame, to determine what the moving frame observer sees, and according to your interpretation the observer in the moving frame sees no change in his clock rate. Doesn't a transformation have to give us visibility into another frame, to determine what is being observed in that other moving frame? If not, such a transformation is useless. It has no objectivity. It just tells what a stationary observer concludes, from the pov of the stationary frame! Why the heck is it even called a transformation? The only way for any observer to compare clock rates with another frame is to default to the Twin Paradox. In this case we can use SR and imagine a sequence of frame changes to bring the traveling twin home at rest to compare clocks. By applying calculus we can imagine an infinite sequence of such frame transformations to add up the time dilations as observed from the pov of the stationary twin by going to the limit. Using your interpretation of the LT, the final net result will be an unresolvable contradiction between what the twins observe when juxtaposed. That is, the clocks of both twins will *not* agree when they meet since the traveling twin sees no change in his clock for each frame in this sequence as he changes frames, whereas the stationary twin will insist on the time dilation determined by *your* interpretation of the LT. The only way to resolve the differential clock readings when the twins are reunited, is for the LT to give the actual time dilations in the traveling frame, not just apparent dilations from the pov of the stationary frame, making the LT a true transformation of coordinates.
@XylyXylyX 3 years ago
@@alangrayson761 I'm worried that any attempt to explain would just confuse you more. The only way you can get this straight is by literally doing the calculations yourself. You should get a book on SR and work the problems out. It isn't too difficult: just algebra, and you will literally see how the transformations work. In fact, I plan to do this in the next QED PreReq lecture where the topic is the Lorentz Transformation. You are correct about the reuniting of frames. The acceleration breaks the symmetry and one of the two clocks ends up slower. If you understand how that works, then you got it! But it is always true that the stationary observer see the moving observer's clock as slow and rods as short. When they reunite, the clocks and rods agree, but the accelerated clock is missing time. The dilations and contractions are REAL, not illusions.
@alangrayson761 3 years ago
@@XylyXylyX Hopefully you'll do it in your next lecture, OR you can take another shot at it here! I've done such calculations in the past and now realize I don't understand how the LT works. So I guess that's progress. How can it take us to another frame, and get results different from what was concluded in the rest frame?
@alangrayson761 3 years ago
In my scenario for reuniting frames, I didn't prove anything! I just used the *accepted* clock rate slowing in the moving frame, as seen from the rest frame, to show the rest and moving frames' clocks *likely* disagree (see concluding remark) when juxtaposed. Nor did I show this result has anything to do with acceleration, even though the moving frame's clock does go through repeated accelerations as the calculus limiting process is applied. Further, I didn't show that the discrepancy in final clock readings is due to the minor, accumulative discrepancies accrued during the limiting process for the moving clock. That is, even though there are continuing smaller and smaller clock discrepancies accrued while the limiting process is imagined to occur, their overall sum might be zero. That is, I don't know what this infinite sum of clock discrepancies converges to in this particular model of return of the moving/traveling clock. It could be zero or non zero; if zero there is no discrepancy when the clocks are juxtaposed, but if non zero there will be a discrepancy.
@mohitvarshney1101 6 years ago
Hi XylyXylyX, thanks for the great explanation. I just had one query: at around 21:45 you said that "this (real number) whole thing has to be expanded against that (coefficient of the tensor, T^u_v) index". However, am I correct to say that the real number doesn't need to be expanded against T^u_v, as there are not 16 coefficients but just 2, since u and v are known to be 0 and 1 (as e_0 tensor product e^1 was used as the map in the example)? (The Einstein sum should only take place twice, not 16 times.)
@XylyXylyX 6 years ago
If T^\mu_\nu had only one non-zero component, T^0_1, then the Einstein sum would only have one term, as you say.
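A sketch of that sum using numpy's einsum (the 4-dimensional space and the single non-zero entry T^0_1 are assumptions for illustration):

    import numpy as np

    n = 4
    T = np.zeros((n, n))
    T[0, 1] = 2.5                     # only T^0_1 is non-zero

    beta = np.random.rand(n)          # covector components (fills the upper slot)
    v = np.random.rand(n)             # vector components (fills the lower slot)

    # T^mu_nu beta_mu v^nu: 16 terms in general, one surviving term here.
    full = np.einsum('mn,m,n->', T, beta, v)
    print(np.isclose(full, T[0, 1] * beta[0] * v[1]))  # True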
@garytzehaylau9432 4 years ago
In these several videos you always say there is no correspondence between the V* and V spaces. What do you actually mean? I don't get it. Actually, an element of V* is a mapping V --> R, so isn't there a correspondence? Thank you.
@drlangattx3dotnet 7 years ago
Do the basis vectors of the vector space have superscripts or subscripts? Do the covectors have super or sub?
@drlangattx3dotnet 7 years ago
I went back to lecture 2 and checked and answered my own question.
@drlangattx3dotnet 7 years ago
I am still confused when I read your inserted correction. Can you help me with an answer? Thanks.
@bhavyasingh355 4 years ago
I got very confused in this lecture, and after reading the comments it was completely a mess. Can you let me know what the problem is? By the way, thank you for your great lectures.
@XylyXylyX 4 years ago
Thank you for your kind comment, despite your struggle with this lecture. This lecture and the one on tensor contraction are my two most problematic lectures. Can you tell me exactly what the confusion is? Then maybe I can address it. What I am trying to do here is actually construct a special kind of vector space called a “Tensor Product Space”. I am doing this by constructing objects called “Tensors” that map elements of a Cartesian product space of vectors and covectors to real numbers. It is worth a deep understanding of everything in this lecture, or all that follows is hopeless!
@bhavyasingh355 4 years ago
I think around 15:23 you have used superscript for vectors instead of subscript and vice versa.
@jasonbroadway8027 4 years ago
At 25:14, you mentioned the fact that basis elements are not carried with the tensor symbols. I opine that this omission hinders a student of relativity, and I feel that shorthand is the bane of physics. I do not mind the Einstein summation notation, but I feel that Einstein did physics and mathematics a disservice when he omitted the basis elements from the tensor components. Nevertheless, I have enjoyed your excellent videos on the subject of tensors.
@XylyXylyX 4 years ago
Once you understand them it is best to drop them. The problem is that until you have kept them around for a while, it is hard to understand what is going on.
@jasonbroadway8027 2 years ago
I get this now.
@jasonbroadway8027 2 years ago
At 20:32, I would have used subscripts and superscripts other than mu and nu when expanding the input covector beta and the input vector v. You could then use the Kronecker delta to reel things back in.
@deleogun1717 8 years ago
Are the input vector/covector for the tensor from the same vector space, or are they part of a vector field?
@XylyXylyX 8 years ago
dele bodede They are from the same vector space and associated covector space.
@juanmanuelpedrosa53 3 years ago
Say at 23:43, I'm having difficulties understanding what kind of "shape" the tensor component would have. I know that T represents scalars, but how do you write (or code) a real case scenario: another vector? An n-dimensional matrix? Multiple vectors corresponding to each basis? Could you point me in the right direction to find an example with real values for T?
@XylyXylyX 3 years ago
You can't really do what you want. You just have to calculate each component of a tensor individually and organize the components into a data structure that can be addressed for calculation.
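One concrete possibility, sketched here as an assumption rather than a prescription: in numpy, the components of a rank (1,1) tensor on a 4-dimensional space fit in a 4x4 array addressed by its two indices, and higher ranks just add axes.

    import numpy as np

    n = 4
    T = np.random.rand(n, n)      # T^mu_nu: components of a (1,1) tensor
    S = np.random.rand(n, n, n)   # S^mu_{nu rho}: components of a (1,2) tensor

    print(T[2, 3])                # the individual component T^2_3
    print(S[0, 1, 2])             # the individual component S^0_{1 2}

    # Evaluating T on a (covector, vector) pair is a double Einstein sum:
    beta, v = np.random.rand(n), np.random.rand(n)
    print(np.einsum('mn,m,n->', T, beta, v))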
@juanmanuelpedrosa53 3 years ago
@@XylyXylyX that's what I'm asking: what kind of structure would that be? Would you recommend any book/website with real examples? Don't get me wrong, I really appreciate your lectures, but it gets to a point where I need to see real calculation happening. I get the algebra but not the real process.
@XylyXylyX 3 years ago
@@juanmanuelpedrosa53 Hmm...try the last few videos of "What is General Relativity?" where I solve the Einstein Equation for a black hole. In that lecture we actually calculate the components of the curvature tensor and related tensors.
@juanmanuelpedrosa53 3 years ago
@@XylyXylyX ok, I'll complete this series and then start with general relativity. Thank you! :-)
@h2oplusplus 7 years ago
Thank you so much for the great lecture on this topic. Actually I want to learn more about tensors for understanding general relativity. In your lecture you construct the tensor space from two identical dual spaces V*; would it be arbitrary to take any pair of vector spaces for the tensor product, like V ⊗ W?
@XylyXylyX 7 years ago
sun fai that is allowed, yes. An object like V*@W, say, would take a vector from V and return a vector from W .... or.... take a vector from V and a covector from W* and return a real number. No problem!
@h2oplusplus 7 years ago
XylyXylyX, in that case we could freely construct a tensor product space from any arbitrary vector spaces; then it is very likely there is no physical meaning attached to it, only a pure mathematical object? Right?
@XylyXylyX 7 years ago
sun fai All mathematical objects, at best, only serve to *model* physics. It turns out that the physics of space and time are best modeled by tensors. So even though we can make many many different types of tensors using many different vector spaces, we are always faced with the problem of figuring out which ones are best to model nature. For example, any (0,2) tensor can be used as a metric for a manifold, but nature only uses symmetric metrics which happen to solve the Einstein equation, as far as we know. There are limitless numbers of differential equations which we can imagine, but only one of them models spacetime. Do we say that "differential equations have no physical meaning?" Well....many of them *don't* and a few of them do. An even harder question is whether or not *all* physics can be modeled well using mathematical objects. I have personally wondered if there is some limit to this, but I don't think about it much :).
@h2oplusplus 7 years ago
XylyXylyX thanks for your reply. I agree with your comment about modelling physics; personally, sometimes it is easier to understand an abstract mathematical idea if it has some associated physical meaning. Perhaps, like divergent series: no obvious physical meaning attached, but still very attractive! Or I should say, this is the intrinsic beauty of maths. Thanks anyway, can't wait to finish your remaining lectures! :)
@debendragurung3033 6 years ago
When you say a tensor product (BxA) of the dual vector space V*xV maps a pair of elements of the vector space to the reals, what kind of operation is involved after the inner product? Is it (BxA)(v,w) = (B.v)(A.w), OR is it (B.v)+(A.w)?
@XylyXylyX 6 years ago
debendra gurung Hmmm....I’m not sure your question is formed correctly. The tensor product space that maps two vectors to R is V*xV*, not V*xV. If an element of V*xV*, call it A x B, is fed two vectors v and w, then (A x B)(v,w) = <A, v><B, w>.
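A numeric sketch of that rule with made-up 3-dimensional components: evaluating the outer product of the two covectors on the pair (v, w) gives the product of the two separate pairings.

    import numpy as np

    A = np.array([1.0, 2.0, 0.0])    # covector components
    B = np.array([0.0, 1.0, 3.0])    # covector components
    v = np.array([2.0, 1.0, 1.0])    # vector components
    w = np.array([1.0, 0.0, 2.0])    # vector components

    AB = np.outer(A, B)              # components of the (0,2) tensor A x B

    lhs = np.einsum('mn,m,n->', AB, v, w)   # (A x B)(v, w)
    rhs = A.dot(v) * B.dot(w)               # <A, v> <B, w>
    print(np.isclose(lhs, rhs))             # True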
@alangrayson761 A year ago
Suppose we want to evaluate a tensor in V@V* whose domain is elements of V x V*, for the pair (v, w), where v is a vector and w is a co-vector in the respective vector spaces V and V*. If v is looking for a co-vector, and there's no specific definable co-vector corresponding to v, and if w is looking for a vector, again with no specific definable vector corresponding to w, how do we choose what's necessary to do the calculation?
@XylyXylyX A year ago
Did you mean that we have a tensor in V*@V? That would take an element of V x V* as an argument. Is that what you mean?
@alangrayson761 A year ago
@@XylyXylyX Yes. I reversed the domain name, but I think the same issue arises. Maybe there is no problem! We have a vector seeking a co-vector, and a co-vector seeking a vector, so if we allow the co-vectors to span V*, and the vectors to span V, the map is completely specified.
@XylyXylyX A year ago
@@alangrayson761 The tensor’s domain is V x V*, so any pairing (v, w) can be used as an argument.
@alangrayson761 A year ago
@@XylyXylyX Oh, I had a typo in my original comment. I meant the tensor V@V*, whose domain presumably is V* x V. If I had written the tensor as V* x V, its domain would be V x V*. Correct? I was just thinking the map was undefined because, say, some vector or co-vector was seeking a *specific* co-vector or vector respectively, but that's not the case.
@XylyXylyX A year ago
@@alangrayson761 Any vector/covector can be inserted into the “machine” of a tensor. It does not only work on certain arguments, as you say. The more subtle part is to understand what happens when you leave some input slots *blank*. Then the result is another tensor, but of lesser rank. Make sure you understand this point as well!
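A small sketch of that last point (a hypothetical (1,1) tensor on a 4-dimensional space): filling only the vector slot contracts one index and leaves a rank (1,0) object still waiting for a covector.

    import numpy as np

    n = 4
    T = np.random.rand(n, n)          # T^mu_nu, a (1,1) tensor
    v = np.random.rand(n)             # a vector for the lower slot

    Tv = np.einsum('mn,n->m', T, v)   # covector slot left blank: a lesser-rank tensor

    # Supplying the remaining covector completes the evaluation:
    beta = np.random.rand(n)
    print(np.isclose(beta.dot(Tv), np.einsum('mn,m,n->', T, beta, v)))  # True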
@drlangattx3dotnet 7 years ago
I am still confused when I read your inserted correction. Can you help me with an answer? Thanks.
@XylyXylyX 7 years ago
It is a mess....the left side identifies the mappings. The right side identifies the thing that is the actual map. To take vectors to real numbers requires covectors, which have superscripts. I wrote subscripts, "e_\alpha" for example. To take covectors to real numbers you need vectors, which have subscripts. I wrote superscripts, "e^\beta" for example. The components, of course, have the *opposite* problem!
@zhangjin7179 6 years ago
The Cartesian product of vectors doesn't form a vector space, right?
@XylyXylyX 6 years ago
zhang jim A Cartesian product of two vector spaces is NOT a vector space because the Cartesian product does not automatically have a vector sum and scalar product defined. The Direct Sum is very similar to the Cartesian product, but it includes a vector addition rule and a scalar multiplication rule.
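A minimal sketch of the distinction (hypothetical helper names): the Cartesian product supplies only the ordered pairs, while the direct sum endows the same pairs with componentwise addition and scalar multiplication.

    import numpy as np

    # Cartesian product V x W: just ordered pairs, no algebra attached.
    p = (np.array([1.0, 2.0]), np.array([3.0, 4.0, 5.0]))
    q = (np.array([0.5, 0.5]), np.array([1.0, 0.0, -1.0]))

    # Direct sum V (+) W: the same pairs plus the two vector-space rules.
    def ds_add(p, q):
        return (p[0] + q[0], p[1] + q[1])

    def ds_scale(c, p):
        return (c * p[0], c * p[1])

    print(ds_add(p, q))       # componentwise vector sum
    print(ds_scale(2.0, p))   # componentwise scalar product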
@DrIlyas-sq7pz 7 years ago
Dear XylyXylyX, the Cartesian product of two sets is ordered pairs. I think of it in the way that if one of the sets is a space, the Cartesian product with another gives a point in a space of higher dimension. But I wonder what V cross V would be. Can we just consider one V as a set, and V cross V as ordered pairs? What would be the sense of that? What is the Cartesian product of a whole vector space?
@XylyXylyX 7 years ago
m ilyas V x V is the set of all ordered pairs where the first member of the pair is a point from V and the second member is also a point from V. If x and y are both members of V then (x,y) is a member of the set V x V.
@DrIlyas-sq7pz 7 years ago
Thank you very much. Would you mind if I ask one more thing? Is there an easy way to make sense of V x V and V@V on some manifold instead of just abstract mathematics? I am a physicist, and to make an idea clear I always try to think of some physical example.
@XylyXylyX 7 years ago
m ilyas I don't think so. We will show how electrodynamics can be understood as "n-forms" soon, and there is some visualization there, but those pictures are unreliable, in my opinion, and they are not for all tensors, just n-forms. I would say you, as a physicist, should embrace the abstract math. If you said "I'm an engineer" I wouldn't ask you to do this, but as physicists we must think more abstractly. Might as well start now! :)
@alangrayson761 3 years ago
I will think more about the invariant spacetime length for the muon and Earth observer, but for now suppose we have just a muon; no observer falling with it; no clocks, no measurements being made or even possible by the muon. How does the muon "know" its distance to the Earth is shortened if it can't make that measurement; that is, the measurement of the rod passing it, the rod standing for the distance to the Earth? The muon has no direct or indirect knowledge that its distance to the Earth is shortened due to its motion.
@XylyXylyX 3 years ago
There are no actual measurements involved because these effects are real, not illusory. In the muon frame the Earth *really is* closer than it is in the surface frame. The muon just hits the earth when it hits the earth. Kinda like a car hits a tree when it intersects a tree. The muon’s rest frame has the earth rushing to the muon. The earth gets there when it gets there, hopefully before 2.2 microseconds of muon proper time are up!
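The standard numbers behind this example, as a back-of-envelope sketch (the 0.995c speed and 10 km production altitude are assumptions for illustration): both frames predict the same survival fraction, one via time dilation, the other via length contraction.

    import numpy as np

    c = 3.0e8                    # m/s
    v = 0.995 * c                # assumed muon speed
    tau = 2.2e-6                 # muon proper lifetime, s
    L = 10_000.0                 # assumed production altitude, m

    gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)    # ~10

    t_earth = L / v              # transit time in the Earth frame, ~33 us
    print(np.exp(-t_earth / tau))                # no dilation: ~2e-7 survive
    print(np.exp(-t_earth / (gamma * tau)))      # with dilation: ~0.22 survive

    # Muon frame: the altitude is contracted to L/gamma; same answer.
    print(np.exp(-(L / gamma) / v / tau))        # ~0.22 again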
@alangrayson761 3 years ago
@@XylyXylyX Agreed; no measurements. The way to look at the situation is that SR tells us that if we assume the SoL is fixed for all observers INDEPENDENT of their relative motion (as long as it's inertial), then clock rates and distances are DEPENDENT on their relative motion. It's the price paid for the invariance of the SoL.
@XylyXylyX 3 years ago
@@alangrayson761 Yes, well said.
@alangrayson761 3 years ago
@@XylyXylyX For the first time I think I really understand SR, which doesn't mean I won't have some questions about it from time to time. And the first one is this: whereas we usually analyze problems in SR assuming *symmetric* frames of reference, the Twin Paradox being the only exception I am aware of, is the muon situation an *asymmetric* one, where the Earth frame can be considered truly at rest?
@XylyXylyX 3 years ago
@@alangrayson761 No, but it is a good question. The way to think of this is whether or not at the end of the scenario all observers are back in the same frame. If they are, then someone switched inertial frames, which is the nature of the asymmetry. The muon decays while in flight. On the other hand, bremsstrahlung radiation occurs when electrons come to a stop at some metal target, so in that case we have a particle in an asymmetric circumstance, as you put it. In a circular particle accelerator, however, the particles are not in a truly inertial frame either, because of the centripetal acceleration. This is evident by a type of radiation called "cyclotron" radiation, which is also an "asymmetric" effect.
@surajdevadasshankar5270 4 years ago
I've got a question: why is it that only two covectors make up a tensor with the tensor product? Why can't a single covector from the dual space work as a tensor?
@XylyXylyX 4 years ago
A single covector is a rank (0,1) tensor. A single vector is a rank (1,0) tensor. Two covectors can build a rank (0,2) tensor. Two vectors can build a rank (2,0) tensor. Does that help? What timestamp in the lecture is confusing to you?
@surajdevadasshankar5270 4 years ago
@@XylyXylyX sorry if I'm getting this wrong; so if I were mapping a vector of rank 1 to a real number, I'd use a tensor of rank 1? I.e., only a single covector from the dual space would serve as the tensor. Am I wrong?
@XylyXylyX 4 years ago
suraj devadas shankar That is correct. The only quibble I have is with the word "rank". Often rank is used to mean the total number of vectors and covectors combined to make a tensor. But in this class we define the rank as an ordered pair, so V* @ V* is a (0,2) tensor, V @ V is a (2,0) tensor, and V @ V* is a (1,1) tensor. Some would call the first two "rank 2 tensors" and they would not have a rank defined for mixed tensors. So, to your question, V* is a (0,1) tensor that maps an element of V to R.
@surajdevadasshankar5270 4 years ago
@@XylyXylyX great. That clears it up. Thank you! I just wanna say, I've just completed my bachelor's and I still understand your lectures in spite of never coming across tensors in my life before. Thank you and keep up the great work! :D
@zhangjin7179 6 years ago
Can you define a Cartesian product between a 3-dimensional vector space and a 4-dimensional vector space?
@XylyXylyX 6 years ago
zhang jim I do not understand your question.
@zhangjin7179 6 years ago
XylyXylyX can you define the Cartesian product of a 3d vector space and a 4d vector space? If yes, can you define tensors that map such a Cartesian product into a real number?
@XylyXylyX 6 years ago
zhang jim Yes. A Cartesian product will work for ANY two sets. Yes, it is easy to define multilinear maps involving different vector spaces. However we don’t call those maps “tensors” but they definitely are possible.
@zhangjin7179 6 years ago
XylyXylyX so tensors are only for vectors/covectors of the same dimension? Is that part of the definition of a tensor, in addition to the rules you pointed out in the video... such that V*xVxV is *not* a tensor but VxVxV*xV* is ...
@XylyXylyX 6 years ago
zhang jim Yes, that is correct. “Tensors” all involve V and V* only. Also, a “tensor” must have a rank given by (p,q), which means that all of the vectors come before all of the covectors. However, this is a rather technical matter which is not particularly enforced. That is, we often call V*@V a “tensor” anyway.
@alangrayson761 3 years ago
At 3:35, how do you distinguish between the tensor you defined and an inner product on a single vector space V, which is linear, takes two elements in V, and produces a real number?
@XylyXylyX 3 years ago
The tensor product space contains *all possible maps*. The inner product on V is one specific map. However, you are on the edge of a MAJOR point: There is a map in V*@V* that perfectly mimics the inner product on V given by (V,V). That particular map is the metric tensor!
@XylyXylyX 3 years ago
I mean “there is a map on V* @ V* that perfectly mimics (V,V)”.
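A sketch of that special map, with a made-up symmetric component matrix standing in for the metric: the (0,2) tensor g_{mu nu} e^mu (x) e^nu is bilinear and symmetric, exactly the behavior of an inner product (u, v).

    import numpy as np

    g = np.array([[2.0, 1.0],    # hypothetical symmetric metric components
                  [1.0, 3.0]])

    def metric(u, v):
        # The map in V* (x) V*: g(u, v) = g_{mu nu} u^mu v^nu
        return np.einsum('mn,m,n->', g, u, v)

    u = np.array([1.0, -1.0])
    v = np.array([0.5, 2.0])

    print(np.isclose(metric(u, v), metric(v, u)))    # True: symmetric
    print(np.isclose(metric(u + v, v),
                     metric(u, v) + metric(v, v)))   # True: linear in each slot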
@alangrayson761 3 years ago
@@XylyXylyX If there exists a tensor in V*@V* that perfectly mimics the inner product -- which is an arbitrary function -- why is this significant?
@XylyXylyX 3 years ago
@@alangrayson761 Because in GR the metric tensor is the thing that solves the Einstein Equations. That is, the physics of the Einstein Equation tells us the metric and once we have the metric we have all the relevant details of the spacetime.
@alangrayson761 3 years ago
I anticipated your reply. ISTM that the inner product is irrelevant. It can be chosen arbitrarily. But what's apparently important is the metric determined by the Einstein Equations. What *exactly* does it tell us about spacetime?
@toxicore1190 7 years ago
Are tensors matrices then? (just wondering)
@XylyXylyX 7 years ago
Toxi Core There is a formalism regarding tensors that treats all vectors as column matrices and all dual vectors as row matrices and rank (1,1) tensors as a matrix. However this approach is very limited. So, no, tensors are not matrices. Tensors are mappings that live in a vector space called the "tensor product space" as demonstrated in these lessons.
@toxicore1190 7 years ago
Ok, that's what I thought. Thank you for answering! Nice videos :)
@XylyXylyX 7 years ago
Toxi Core I should point out that much of the formalism of tensors can be developed using matrices, but you must be willing to consider a "matrix" to sometimes mean, for example, a column vector where every entry is itself a row vector. That is how a (0,2) rank tensor might be represented. It is a bit of a mess, but it can be done.
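A sketch of that "column of rows" idea with hypothetical 2-dimensional components: the outer index eats the first vector, each inner row eats the second, and the result agrees with the usual component sum T_{mu nu} v^mu w^nu.

    import numpy as np

    # A (0,2) tensor as a column whose entries are row covectors.
    T = [np.array([1.0, 2.0]),
         np.array([3.0, 4.0])]

    v = np.array([1.0, 1.0])
    w = np.array([2.0, 0.0])

    value = sum(v[mu] * T[mu].dot(w) for mu in range(2))
    print(value)                                      # 8.0
    print(np.einsum('mn,m,n->', np.array(T), v, w))   # 8.0, the same sum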
@paras3681 4 years ago
@@XylyXylyX I started studying general relativity from a book, 'A Most Incomprehensible Thing'; the way it starts teaching tensors is using this formalism of tensors considered as matrices. Am I learning the right way? P.S. Thanks for this amazing series. You are a hero!
@XylyXylyX 4 years ago
Paras I don’t have the book, but I certainly don’t like tensors as matrices. However, it certainly can be done and my preferences should not drive you away. GR is not “incomprehensible” in my opinion so I don’t even like the title. There is NO harm in learning from any book. You learn by *comparing* different approaches. At least, that worked for me. Good luck!
@HotPepperLala 7 years ago
10:00 What do you mean by v : V* -> R? A vector v \in V, it's not a map?? I think you could do an example of V (x) V
@XylyXylyX 7 years ago
rinwhr it is important to understand that the dual of a covector is a vector, so vectors are maps taking covectors to the real numbers.
@HotPepperLala 7 years ago
I think we are in the double dual right now, so I think what you really meant is to write the map v -> g_v, where g_v is an element of the double dual V^** (which you called a "vector"). So here g_v(f) = f(v) where f is in the V^*. I'll leave this up if anyone got confused.
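A tiny sketch of this double-dual point (functions standing in for covectors; all names hypothetical): a vector v gives rise to the map g_v defined by g_v(f) = f(v).

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])            # a vector in V

    def double_dual(v):
        # v reinterpreted as a map V* -> R: g_v(f) = f(v)
        return lambda f: f(v)

    # A covector f in V*, represented as a linear function on V.
    f = lambda x: np.array([0.0, 1.0, -1.0]).dot(x)

    g_v = double_dual(v)
    print(g_v(f), f(v))                      # equal by construction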
@alangrayson761 5 years ago
I'm viewing this on a laptop and all your annotations/corrections are gone. Thought you'd want to know.
@alangrayson761 5 years ago
For your information, I have completed the tensor lessons through Lesson 15, viewing several of them multiple times, and all annotations/corrections, if any existed other than on this lecture, 6, are gone. I'm now fairly comfortable with tensors, the only exception being your use of Lambda inverse in going from hatted to unhatted basis in the dual space.
@XylyXylyX 5 years ago
All annotations have been removed from YouTube. It was an “upgrade”.
@alangrayson761 A year ago
I'm going through lessons 5 & 6 of your tensor lectures. I seem to have missed or forgotten a lot from my initial viewings. Why is it that some vector v in some vector space V is called a degenerate tensor, when it isn't a tensor at all? It's just a vector, not a map to the real numbers. I suppose you can call it a map if its domain is V*.
@XylyXylyX A year ago
This sounds odd to me….what is the relevant time stamp?
@alangrayson761 A year ago
@@XylyXylyX No timestamp. Not in your lecture. But I've read it stated many times that a vector, and even a constant, can be considered as tensors. Or it's claimed a tensor is a generalization of a vector.
@XylyXylyX A year ago
@@alangrayson761 Yes that is true. A vector is a rank (1,0) tensor in our notation. A covector is rank (0,1) tensor. But….confusingly…. A tensor is a member of a tensor product space and tensor product spaces are vector spaces so in that sense a “tensor is a vector”….. it is quite a mess of language!
@XylyXylyX A year ago
Oh…and a constant (or a scalar-valued function on a manifold) is a (0,0) tensor (or tensor field)…
@alangrayson761 A year ago
@@XylyXylyX Consider a tensor product space consisting of a single factor, V or V*. Then a vector from V or a co-vector from V* is a (1,0) or (0,1) tensor respectively, and is considered "degenerate" because there is only a single instance in the product of what we've been calling the TPS. In that case, to preserve the definition of a tensor as a MAP, for a fixed v in V the domain of v must be V*, and the domain of a fixed co-vector is V. I think this is consistent with the more general case of non-degenerate TPSs.
@no-one-in-particular 9 months ago
Again, the set of tensor products is not a vector space. You need all linear combinations of products, i.e. the set which is SPANNED by tensor products. The video starts with an incorrect definition of tensor product space. If what you are saying is true there would be no quantum entanglement.
@XylyXylyX 9 months ago
I think at this point in the lesson series the students know this. I am certainly not suggesting that a vector space has only a finite number of elements.
@no-one-in-particular 9 months ago
@@XylyXylyX I don't think you understood my point, I didn't say anything about a finite number of elements. You claim the vector space is formed of all elements like beta tensor gamma with any choice of beta or gamma. This is not true. These are the "simple" or "pure" tensors and do not by themselves form a vector space. You need to include all linear combinations of elements of the form beta tensor gamma. In general the sum of two tensor products is not another tensor product.
@XylyXylyX 9 months ago
@@no-one-in-particular Yes, I know; that point was addressed fully in the lectures on vector spaces, and that is certainly what I meant. I really don't think that particular gaffe was confusing to anyone, however. I have had plenty of really serious gaffes, though, so thank you for pointing this out. I wasn't clear on exactly what aspect of it was confusing to you…. I thought you implied that I was saying a set of just two members was a vector space. I am quite sure that we all know that all linear combinations are to be included. This lesson is WAY past that point.
@jvcmarc 6 years ago
You make maths boring: you generalize too much, run over concepts too quickly, give no examples of usage, and don't connect it to other knowledge we might have (such as physics). I know you want to abstract it out, but it gets too overwhelming with all the letters going on in this 16-dimensional space. It's easy to get lost, especially for someone who is starting tensors from your videos.
@XylyXylyX 6 years ago
Yes, I do maintain an entirely abstract approach. There are many good texts to get you started on this subject. These lectures are not for beginners; they are for people who want to go over this material a *second* time, actually. Good luck!
@moonglaive 5 years ago
It's really helpful for those of us who don't have any prior explanation of what the hell is going on with the notation as well.
@darkknight3305 2 months ago
Thank you so much
@mohamedmoussa9635 2 years ago
Can you have a tensor product of completely different vector spaces? e.g. elements of V*⊗W* which map V x W -> ℝ
@XylyXylyX 2 years ago
Yes! Not done a lot in elementary GR, but mathematically it is fine.
@mohamedmoussa9635 2 years ago
@@XylyXylyX I was thinking in continuum mechanics, you have the initial and deformed configurations of the body. The gradient of the mapping between them would be one of these tensors I think.
@XylyXylyX 2 years ago
@@mohamedmoussa9635 Are those really two different vector spaces?
@mohamedmoussa9635 2 years ago
@@XylyXylyX I think so, would these not be tangent spaces on different manifolds?