The problems for this video can be found in this link: github.com/LetsSolveMathProblems/Navigating-Linear-Algebra/blob/main/Episode%205.pdf. I encourage you to post solutions to these problems in the comment section, as well as peer-review other people's proofs.
@mohammadzuhairkhan8661 (3 years ago)
Loving this series!
@LetsSolveMathProblems (3 years ago)
Thank you! :)
@cantcommute (3 years ago)
For problem 2: If n > m then yes, see 12:46 (just need to have rows that are linear combinations of the rows above them). If n < m then no: there are at most n pivot variables, which is fewer than the m pivots needed for a unique solution.
@LetsSolveMathProblems (3 years ago)
For Problem 2, this is correct.
@axemenace6637 (3 years ago)
Why does linear dependence imply infinitely many solutions?
@LetsSolveMathProblems (3 years ago)
@@axemenace6637 Such linear dependence doesn't have to imply infinitely many solutions (unless the field is infinite), but it does imply that there is more than one solution. This is due to the fact that we have at least one free variable that can take on the value of 0 or 1, resulting in at least two different solutions.
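The free-variable argument above can be illustrated numerically. The system below is a made-up stand-in (not from the problem set) whose second row is a multiple of the first, so one variable is free; setting it to 0 and then to 1 yields two distinct solutions, exactly as the reply describes.

```python
import numpy as np

# Hypothetical 2x2 system whose rows are linearly dependent,
# so one variable is free and there is more than one solution.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([3.0, 6.0])

# Free variable y set to 0 and to 1 gives two distinct solutions.
sol0 = np.array([3.0, 0.0])  # y = 0
sol1 = np.array([2.0, 1.0])  # y = 1

print(np.allclose(A @ sol0, b), np.allclose(A @ sol1, b))  # True True
```

Over an infinite field like R, every value of the free variable gives a different solution, which recovers the "infinitely many" intuition.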
@yassinezaoui4555 (3 years ago)
Pb2: Let's call the system (S), and let's solve it using the matrix M associated to (S): each row of M holds the coefficients of the m variables together with the right-hand side of the corresponding equation of (S). For example, for m = 3, if the first equation of (S) is x + 2y - z = 5, then the first row of M is 1 2 -1 5. So M is an n x (m+1) matrix. Now reduce M to its reduced row echelon form M'. Case m > n: a unique solution for (S) requires exactly m pivot variables, which is impossible here because we can have at most n pivot variables, and n < m. Case m ≤ n: a unique solution is possible; for example, take x_i = 0 as the first m equations and fill the remaining n - m rows with 0 = 0 (rows that are linear combinations of the rows above them).
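Both cases of the argument can be sketched in NumPy. The matrices below are illustrative stand-ins (the actual problems are in the linked PDF): a 3x2 system whose third row is the sum of the first two has a unique solution, while a system with more unknowns than equations never can.

```python
import numpy as np

# Case n > m: 3 equations, 2 unknowns; the third row is the sum of the
# first two, so it adds no new constraint and the solution is unique.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.linalg.matrix_rank(A))             # 2 pivots = m unknowns
x, *_ = np.linalg.lstsq(A, b, rcond=None)   # exact solution since consistent
print(np.allclose(A @ x, b))                # True

# Case m > n: 1 equation, 2 unknowns; rank is at most n = 1 < m = 2,
# so a free variable always remains and the solution is never unique.
C = np.array([[1.0, 1.0]])
print(np.linalg.matrix_rank(C))             # 1
```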
@LetsSolveMathProblems (3 years ago)
For Problem 2, this is correct. Your answer is more comprehensive than what I was expecting for the case m < n. :)
@yassinezaoui4555 (3 years ago)
@@LetsSolveMathProblems thank you ^^
@cycl0n31911 (3 years ago)
Thank you so much
@cantcommute (3 years ago)
For problem 1: (a) Because linear dependence is invariant under elementary row operations, we see that the third vector is linearly dependent on the first two. Hence a possible solution is a1=5, a2=-9, a3=1. (b) No solution exists because of the last row in the reduced row echelon form, and a solution exists iff it exists for the row echelon form.
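The dependence claim can be checked mechanically. Since the actual v1, v2, v3 live in the linked PDF, the vectors below are hypothetical stand-ins constructed to satisfy the same relation 5·v1 - 9·v2 + v3 = 0; the rank computation then confirms the columns are dependent.

```python
import numpy as np

# Stand-in vectors (the real ones are in the problem PDF), built so that
# 5*v1 - 9*v2 + v3 = 0 holds by construction.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = -5 * v1 + 9 * v2

M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M))              # 2: columns are dependent
print(np.allclose(5 * v1 - 9 * v2 + v3, 0))  # True: (a1,a2,a3)=(5,-9,1) works
```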
@LetsSolveMathProblems (3 years ago)
For Problem 1, this is correct.
@cantcommute (3 years ago)
For problem 5: Suppose a system had exactly 2 solutions. This would mean that one vector representing the system is linearly dependent on the other (the matrix has rank one or rank zero). That gives at least 3 solutions, one for each choice 0, 1, or 2 of the free variable, so no system has exactly 2 solutions. We can get exactly 3 though: x1 = 0 has solutions (0,0), (0,1), (0,2). We can also get 9 solutions from the trivial zero-matrix case.
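The solution counts in this comment (1, 3, 9, never 2) suggest the problem is about 2x2 systems over F_3; assuming that (the actual statement is in the linked PDF), a brute-force enumeration of every system mod 3 confirms which counts occur.

```python
from itertools import product

# Count solutions (x, y) in F_3 x F_3 of the 2x2 system Ax = b (mod 3).
def count_solutions(A, b):
    return sum(
        all((A[i][0] * x + A[i][1] * y - b[i]) % 3 == 0 for i in range(2))
        for x, y in product(range(3), repeat=2)
    )

# Enumerate every choice of the four matrix entries and two RHS entries.
counts = set()
for entries in product(range(3), repeat=6):
    A = [entries[0:2], entries[2:4]]
    b = entries[4:6]
    counts.add(count_solutions(A, b))

print(sorted(counts))  # [0, 1, 3, 9] -- no system has exactly 2 solutions
```

The possible counts are powers of 3 (or 0 for inconsistent systems), matching the coset-of-the-kernel picture in the comment.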
@LetsSolveMathProblems (3 years ago)
For Problem 5, this is correct.
@yassinezaoui4555 (3 years ago)
Pb3: (a) Yes. Consider the linear maps a, b, and id from R^n to R^n corresponding to A, B, and I. If ab = id, then a is surjective: for any x in R^n we can find x0 = b(x) such that a(x0) = ab(x) = x. So Im(a) = R^n, and by the rank-nullity theorem dim(ker(a)) = 0, so ker(a) = {zero vector of R^n} and a is also injective, hence bijective, so a^-1 exists. Then ab = id => a^-1.a.b = a^-1.id => b = a^-1, so b is indeed a^-1 and we have ba = id as well, meaning BA = I.
(b) Yes, the answer changes, since the rank-nullity theorem now shows a is not injective. For a simple example with n = 1 and m = 2, consider A = (1 1) and B = transpose(1 0). We indeed have AB = I1, the 1x1 identity matrix, but BA = [[1 1], [0 0]], which is not I2, the 2x2 identity matrix.
(c) I'll call the two functions f and g. Here we can't be so sure if f and g are not linear, meaning the answer is no: fg = id does not force gf = id. The best we can say is that f is surjective and g is injective. So to construct a counterexample we should ensure f is surjective but not injective, and g is injective but not surjective; otherwise, if one of them were bijective, the other would be its inverse and we would have gf(x) = x for all x in R. Counterexample: let f(x) = x if x ≤ 2 and f(x) = sqrt(x) if x > 2; let g(x) = x if x ≤ 2 and g(x) = x² if x > 2. We do have fg(x) = x for all x in R: if x ≤ 2, fg(x) = f(x) = x; if x > 2, fg(x) = f(x²), and x > 2 => x² > 4 > 2, so f(x²) = sqrt(x²) = |x| = x since x > 2 > 0. But there is x such that gf(x) ≠ x: for x = 3, f(3) = sqrt(3) ≤ 2, so gf(3) = g(sqrt(3)) = sqrt(3) ≠ 3.
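The part (b) matrices from this comment can be checked directly: the 1x2 and 2x1 stand-ins multiply to the 1x1 identity one way, but not to the 2x2 identity the other way.

```python
import numpy as np

# The comment's part (b) example: AB is I_1, but BA is not I_2.
A = np.array([[1, 1]])      # 1x2
B = np.array([[1], [0]])    # 2x1

print(A @ B)   # [[1]]            -> the 1x1 identity
print(B @ A)   # [[1 1], [0 0]]   -> not the 2x2 identity
```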
@LetsSolveMathProblems (3 years ago)
For problem 3, this is correct.
@yassinezaoui4555 (3 years ago)
@@LetsSolveMathProblems awesome, thx :)
@yassinezaoui4555 (3 years ago)
Pb4: The only thing that may be difficult here is finding the reduced row echelon form (by difficult I mean that we can make mistakes), but since it is given, everything should be quite simple. Let's keep in mind that f is injective iff dim(ker(f)) = 0, and f is surjective iff rank(f) = dim(R^m) = m (for f from R^n to R^m). I will refer to the reduced form by adding ' to the matrix name, and use the lowercase letter for the linear map.
(a) a is from R^4 to R^3. A' has exactly 2 pivot variables, so rank(A) = rank(a) = 2 and dim(ker(a)) = 4 - 2 = 2. rank(a) < 3 => a is not surjective; dim(ker(a)) > 0 => a is not injective.
(b) b is from R^5 to R^3. B' has exactly 3 pivot variables, so rank(B) = rank(b) = 3 and dim(ker(b)) = 5 - 3 = 2. rank(b) = 3 = dim(R^3) => b is surjective; dim(ker(b)) > 0 => b is not injective.
(c) c is from R^3 to R^4. C' has exactly 3 pivot variables, so rank(C) = rank(c) = 3 and dim(ker(c)) = 4 - 3 = 1. rank(c) < 4 => c is not surjective; dim(ker(c)) > 0 => c is not injective.
(d) d is from R^3 to R^3. D' has exactly 3 pivot variables, so rank(d) = 3 and dim(ker(d)) = 3 - 3 = 0. rank(d) = dim(R^3) = 3 => d is surjective; dim(ker(d)) = 0 => d is injective. => d is bijective.
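The injectivity/surjectivity criteria used here are easy to mechanize. The actual matrices A-D are in the linked PDF, so the helper below is applied to a hypothetical 3x3 rank-3 stand-in matching the shape of part (d).

```python
import numpy as np

# f: R^n -> R^m (M has m rows, n columns) is injective iff rank = n
# (kernel is {0} by rank-nullity) and surjective iff rank = m.
def classify(M):
    m, n = M.shape
    r = int(np.linalg.matrix_rank(M))
    return {"rank": r, "injective": r == n, "surjective": r == m}

# Stand-in with the same shape and rank as part (d): 3x3, rank 3.
D = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
print(classify(D))  # {'rank': 3, 'injective': True, 'surjective': True}
```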
@LetsSolveMathProblems (3 years ago)
Parts (a), (b), and (d) are correct. For part (c), you made a small error in using rank-nullity: dim(ker(C)) should be 3-3 = 0.
@yassinezaoui4555 (3 years ago)
@@LetsSolveMathProblems yep indeed, I got confused, yess it should be 0 so it is injective.
@yassinezaoui4555 (3 years ago)
Pb 1: (a) Let a, b, and c be real numbers. Asking for a.v1 + b.v2 + c.v3 = 0 (the zero vector of R^3) is the same as looking for a vector (a,b,c) in the kernel of the linear map f whose matrix with respect to the standard basis of R^3 is M = [v1 v2 v3] (v_i is the i-th column of M). We can get more information about ker(f) by finding rank(M), which is 2, because the reduced row echelon form of M is a 3x3 matrix M' having two pivot variables. Using the rank-nullity theorem, dim(ker(f)) = dim(R^3) - dim(Im(f)) = 3 - 2 = 1, so we conclude there is a nonzero vector u = (a,b,c) in R^3 with ker(f) = span(u). Since trying to find a relation between the columns of M directly is pretty hard, we can read one off from M': -5.v1 + 9.v2 - v3 = (0,0,0), and indeed some a_i is nonzero, e.g. -5.
(b) As in (a), consider the appropriate linear map f from R^3 to R^3 with matrix M = [v1 v2 v3]. Unlike (a), here we focus on Im(f) rather than ker(f), since we want reals a, b, c with a.v1 + b.v2 + c.v3 = v' (a nonzero vector), so we work with the augmented matrix M' = [M v']. Reducing M' to its reduced row echelon form M'' we find an absurdity: the last row of M'' means a.0 + b.0 + c.0 = 1, which is impossible. So we conclude no such reals a, b, c exist, and thus v' is not in span(v1, v2, v3) = Im(f).
@LetsSolveMathProblems (3 years ago)
For Problem 1, this is correct. I point out that for part (a), it's not necessary to find dim(ker(f)). You can deduce "-5.v1+9.v2-v3= (0,0,0)" directly from the reduced row echelon form.
@yassinezaoui4555 (3 years ago)
@@LetsSolveMathProblems yess, indeed, I was too excited XD
@cantcommute (3 years ago)
Your solutions are so detailed! I learn a lot from reading them
@yassinezaoui4555 (3 years ago)
@@cantcommute thx, im glad to hear that ^^
@axemenace6637 (3 years ago)
Strangely enough it doesn't seem that anyone has posted a solution to 6, so I'll post mine: We establish that rank A = rank A^T through the following lemmas: 1. rank B = rank B^T 2. rank A^T = rank B^T The result follows from these two lemmas because rank A = rank B = rank B^T = rank A^T. To prove the first lemma, note that the rank of B is the number of pivot variables in B, as established in the video. Then, observe that in B^T, each pivot variable corresponds to a linearly independent vector because no other column vector in B^T can have a nonzero value in the corresponding entry (by the requirements of reduced row echelon form). Conversely, every column vector which does not contain a pivot variable must be the zero vector. So in total, the number of linearly independent column vectors in B^T is just the number of pivot variables, and rank B^T = rank B. To prove the second lemma, we prove that the rank of the transpose of a matrix is invariant under row operations. In fact, this is easier to prove than the claim made in the video that the rank of a matrix is invariant under row operations. Consider the subspace which is spanned by all the row vectors in A, and let this space be V'. The dimension of V' is defined to be the rank of A^T. Note that multiplying a row vector in A by a constant clearly does not change V'. Likewise, adding one row to another does not change V' either. So the rank of A^T is invariant under row operations. Therefore, because B can be obtained from A through only row operations, rank A^T = rank B^T, as desired.
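The claim rank A = rank A^T proved above is also easy to sanity-check numerically on random integer matrices of assorted shapes (a spot check, not a proof):

```python
import numpy as np

# Numerical spot check of Problem 6's claim on 100 random integer matrices.
rng = np.random.default_rng(0)
for _ in range(100):
    m, n = rng.integers(1, 6, size=2)
    A = rng.integers(-3, 4, size=(m, n))
    assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
print("rank A == rank A^T held for 100 random matrices")
```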
@LetsSolveMathProblems (3 years ago)
For Problem 6, this is correct. I suppose one tiny error is that you should say "multiplying a row vector in A by a *nonzero* constant".
@cantcommute (3 years ago)
For problem 3: (a) A must be the unique inverse of B, so it is true (a bijective map from an n-dimensional domain to an n-dimensional codomain). (b) Can't happen, because there cannot be a bijective map between two linear spaces of different dimensions. An easy example would be 1x2 and 2x1 matrices; supposing it's possible leads to a contradiction. (c) ln(e^x) = x for all x, but e^(ln x) = x only if ln(x) exists, which is when x > 0. (Now that I think about it, the reason an example exists in (c) is the same as in (b): injectivity/surjectivity does not imply bijectivity. I guess that's what makes n by n matrices unique!) Update (see comment): To show A is invertible, just notice that AB = I implies that when we take an input vector v we get it back: v is input into B, which gives a new vector v', which when input into A gives back v. Notice that for this to work, B must be injective, as otherwise we would have two different vectors v' and v'' mapping to the same output. If B is injective, then ker B = {0}, and by the rank-nullity theorem dim ker B + dim im B = n gives dim im B = n, hence B is surjective. So B is both injective and surjective, hence bijective and invertible. Hence there must exist an inverse D such that DB = BD = I. Because AB = I, multiplying both sides on the right by D gives A(BD) = D, implying A = D. Hence A is the (unique) inverse of B. For (c), just let that function be zero for x ≤ 0.
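The repaired (c) example can be checked concretely: define f to equal ln x for x > 0 and (arbitrarily) 0 otherwise, and g(x) = e^x. Then f(g(x)) = x for every real x, but g(f(x)) fails for any x ≤ 0.

```python
import math

# f equals ln x for x > 0 and 0 elsewhere; g is the exponential.
def f(x):
    return math.log(x) if x > 0 else 0.0

def g(x):
    return math.exp(x)

print(f(g(-5.0)))   # ~ -5.0: f(g(x)) = x for all x, since e^x > 0
print(g(f(-5.0)))   # 1.0: g(f(-5)) = e^0 = 1 != -5, so gf is not the identity
```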
@LetsSolveMathProblems (3 years ago)
For (a), you probably want to add an explanation on why A ought to be invertible. For (c), you technically can't use "ln x" because its domain is not R (an easy fix is to use any map from R to R that equals ln x for x > 0). For (b), you got it. :)
@cantcommute (3 years ago)
For problem 4: (a) Neither injective nor surjective, rank 2. (b) Surjective but not injective. Rank 3. (c) Injective but not surjective. Rank 3. (d) Injective and Surjective, Rank 3.