solving an infinite differential equation

  118,524 views

Michael Penn

A day ago

Comments: 414
@ashtabarbor3346
@ashtabarbor3346 Жыл бұрын
Props to the editor of these videos for adding the best video descriptions on YouTube
@MichaelPennMath
@MichaelPennMath Жыл бұрын
Awww thank you very much! that means a lot to me. -Stephanie MP Editor
@danyilpoliakov8445
@danyilpoliakov8445 Жыл бұрын
Don't you dare like the Editor's reply one more time. It is nice as it is😅
@jonasdaverio9369
@jonasdaverio9369 Жыл бұрын
​@@danyilpoliakov8445 It's still holding
@jongyon7192p
@jongyon7192p Жыл бұрын
An infinite differential equation SCP that becomes a bear and eats you
@Errenium
@Errenium Жыл бұрын
nice pfp
@a52productions
@a52productions Жыл бұрын
Arguably the first method is also sketchy! I was always taught that that recursive method of dealing with infinite sums is dubious unless you can prove it converges another way afterwards. In this case convergence and equality are very easy to show, but that method can fail pretty badly for not-obviously-divergent divergent sums.
@TaladrisKpop
@TaladrisKpop Жыл бұрын
Yes, for example, you can get the infamous 1+2+4+8+16+...=-1 or 1-1+1-1+1+...=1/2
@thomasdalton1508
@thomasdalton1508 Жыл бұрын
Yes, if you are going to use that kind of method you really should check the solution actually works. In this case, you'll get 1/2+1/4+1/8+... which does converge and converges to 1, which is exactly what we need.
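A quick numerical illustration of that check (a sketch, not from the video; the helper name and test point are arbitrary):

```python
import math

def partial_sum_of_derivatives(x, n_terms):
    # The k-th derivative of exp(x/2) is (1/2)**k * exp(x/2).
    return sum((0.5 ** k) * math.exp(x / 2) for k in range(1, n_terms + 1))

x = 1.7  # arbitrary test point
for n in (1, 5, 10, 30):
    print(n, partial_sum_of_derivatives(x, n), math.exp(x / 2))
# The partial sums climb toward exp(x/2) itself, since 1/2 + 1/4 + ... = 1.
```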
@Owen_loves_Butters
@Owen_loves_Butters Жыл бұрын
Yep. Hence why you'll find videos online claiming 1+2+3+4+5...=-1/12, or 1+2+4+8+16...=-1 (both are nonsense results because you're trying to assign a value to a series that doesn't have one)
@gauthierruberti8065
@gauthierruberti8065 Жыл бұрын
Thank you for your comment, I was having that same doubt but I didn't remember if the first method was or wasn't allowed
@plasmaballin
@plasmaballin Жыл бұрын
This is correct. However, the solution obtained in the video can easily be shown to converge, so it is valid.
@terpiscoreis9908
@terpiscoreis9908 Жыл бұрын
Hi, Michael! This is a great problem. You can see that the original does have infinitely many solutions (well, let's say candidates for solutions) by making a different choice of where to start the infinite sum on the right hand side. For instance, with y = y' + y'' + y''' + y^(4) + ..., instead move y' and y'' to the left hand side to obtain: y - y' - y'' = y''' + y^(4) + ... = D^2(y' + y'' + y''' + ...) = y''. Thus the solutions to y - y' - 2y'' = 0 are also solutions to the infinite order differential equation. We recover e^(x/2) as a solution but also obtain a "new" one: e^(-x). However, the infinite sum of derivatives here doesn't converge. By an analogous argument, it looks like the solutions to y - y' - y'' - ... - 2y^(n) = 0 for a positive integer n might solve the infinite order differential equation -- assuming the infinite sum of derivatives converges.
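A small numerical illustration of why the extra candidate fails (a sketch, assuming NumPy; for y = e^(ax) the right-hand side of the original equation is the scalar series a + a² + a³ + ..., which only converges when |a| < 1):

```python
import numpy as np

# Characteristic polynomial of y - y' - 2y'' = 0 is 2r^2 + r - 1; its roots are 1/2 and -1.
print(np.roots([2, 1, -1]))

def rhs_partial(a, n_terms):
    # Partial sums of a + a^2 + ... from plugging y = e^(a x) into y = y' + y'' + ...
    return sum(a ** k for k in range(1, n_terms + 1))

for a in (0.5, -1.0):
    print(a, [rhs_partial(a, n) for n in (1, 2, 3, 10, 11)])
# a = 0.5: the partial sums approach 1, so e^(x/2) genuinely solves the infinite equation.
# a = -1: the partial sums oscillate between -1 and 0, so e^(-x) is only a spurious candidate.
```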
@trevorkafka7237
@trevorkafka7237 Жыл бұрын
Answer to the question about the finite version: If y=y'+y''+...+y^(n) and we substitute y=e^(kx), we get 1=k+k²+...+k^n, so 1=((1-k^(n+1))/(1-k))-1. This can be rearranged to k^(n+1)-2k+1=0. In the limit as n->infinity, we can see that we must restrict |k|≤1. Furthermore, it's obvious k≠0, so 0
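Spelling out that rearrangement (a sketch of the same algebra as in the comment above):

```latex
1 = \sum_{j=1}^{n} k^{j} = \frac{1-k^{n+1}}{1-k} - 1
\;\Longrightarrow\; 2(1-k) = 1-k^{n+1}
\;\Longrightarrow\; k^{n+1} - 2k + 1 = 0 .
```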
@fmaykot
@fmaykot Жыл бұрын
I'm afraid the limiting procedure in case 2 is a bit more subtle than that. You did not take into account the fact that both r and θ can (and in fact do) depend on n. If θ ~ α/(n+1) as n -> inf, for example, then R ~ 1 as n -> inf and α = 2*pi*m for integers 0
@lucyg00se
@lucyg00se Жыл бұрын
This was such a fun one. You're absolutely killing it man
@DanielBakerN
@DanielBakerN Жыл бұрын
The sketchy solution is similar to using the Laplace transform.
@nafrost2787
@nafrost2787 Жыл бұрын
I think using a Laplace transform is a slightly better solution, because it justifies treating the derivative operator as a number in the geometric series formula, since (if I remember things correctly) in the s domain the derivative operator is a number. Using the Laplace transform also, if it doesn't solve them completely, at least simplifies the ODEs given at the end of the video to polynomial equations that can be solved numerically, and it also helps explain why there is only one solution to the ODE of infinite degree, even though in every finite case there are n solutions. This comes from the fact that a power series can have any number of roots, even though the nth partial sum has n roots (it is a polynomial of nth degree); for example, exp doesn't have any roots, even complex ones, and of course sin and cos have an infinite number of roots.
@thegozer100
@thegozer100 Жыл бұрын
Some information I found on the question: solving the differential equation for finite amount of terms is the same as solving the equation 1=sum_{j=1}^n a^j, where I used y=exp(a*x) as a trial function. When I plot all the solutions for a large n, the solutions lie on the unit circle in the complex plane, except for one point. The point that is supposed to be at a=1 lies at a=1/2. This would mean that when we take the limit as n goes to infinity all the points on the unit circle would somehow "cancel out" and the point at a=1/2 would remain.
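A sketch of that experiment (assuming NumPy; the trial function e^(a x) turns the truncated equation into a^n + ... + a - 1 = 0):

```python
import numpy as np

n = 40
coeffs = [1.0] * n + [-1.0]     # a^n + a^(n-1) + ... + a - 1 = 0, highest power first
moduli = np.sort(np.abs(np.roots(coeffs)))

print(moduli[0])                # the smallest modulus is close to 0.5 ...
print(moduli[1], moduli[-1])    # ... while every other root sits very near the unit circle
```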
@oni8337
@oni8337 Жыл бұрын
how could i have forgotten about complex number branches
@16sumo41
@16sumo41 Жыл бұрын
Lovely problem! And lovely follow up question ^^. Something really aesthetically pleasing in this problem. Maybe it has to do with the perceived difficulty of solving it, ending in a really nice and simple solution. Lovely.
@ManuelFortin
@ManuelFortin Жыл бұрын
Regarding the missing infinity of solutions, one way of seeing where they go seems to be as follows. Differential equations of the form y = y'+y''+...+y(n) (y(n) is the nth derivative, not to be confused with y evaluated at n) are known to have solutions that are linear combinations of e^(ax), and we need to find the right "a". There are n "a" values. However, only one of them has |a| < 1. At least this is what it seems from playing with Wolfram Alpha up to n = 20. The problem is that y(n) = (a^n) y. Since |y|>0, if |a|>1, the value of y(n) diverges as n goes to infinity, whatever x is in y(x). Therefore, these solutions are not well-behaved, and we need to set their coefficient to zero in the general solution (linear combination of e^(ax)), otherwise y is not defined. I guess there is a way to prove that only one of the roots has |a| < 1.
@whatthehelliswrongwithyou
@whatthehelliswrongwithyou Жыл бұрын
but doesn't y diverge if a > 0, not a > 1? Also, leaving only non-divergent solutions is a great argument in physics, but here they are still solutions; there's nothing bad about divergence at infinity. At least that's what I think, might be wrong
@whatthehelliswrongwithyou
@whatthehelliswrongwithyou Жыл бұрын
oh, the sum of derivatives doesn't converge at fixed x, then it's a problem
@user-sk5zz5cq9y
@user-sk5zz5cq9y Жыл бұрын
@@whatthehelliswrongwithyou yes, y diverges as x approaches infinity if a is positive; he was talking about the existence of the solution
@ManuelFortin
@ManuelFortin Жыл бұрын
@@whatthehelliswrongwithyou Yes, that's what I meant. Sorry for the late reply.
@martinkuffer5643
@martinkuffer5643 Жыл бұрын
We know the Cs are the roots of the characteristic polynomial of the equation. There are n roots (counting multiplicity) of a polynomial of degree n and thus n solutions. In the new equation this still holds, but now you have a "polynomial of infinite degree", i.e. a non-polynomial analytic function. These can have any number of roots (by the procedure you showed, where the roots go to infinity as you add terms to the series), and thus there can be any number of solutions to our original equation :)
@kasiphia
@kasiphia Жыл бұрын
I think a really good idea for a follow-up video would be an explanation of why we don't have infinitely many linearly independent functions that solve the equation. Or perhaps they do exist, and that could be shown. I've noticed that when substituting in these infinitely recursive relationships, we often lose generality. For example, for the function y=x^x^x^x^x... we can do a similar substitution as we did in the video and find that y=x^y, which produces many solutions but only for 1/e
@driksarkar6675
@driksarkar6675 Жыл бұрын
I think for the general problem at 9:07, you can just apply the first method, so you get y+y'+...+y(n) = y(n+1) + (y+y'+...+y(n))'. When you expand the derivative, everything except y(n+1) and y cancels out, so you get y=2*y(n+1). From there it's relatively straightforward, and you get y=C*e^(x/(2^(1/(n+1))*e^(2*pi*i*m/(n+1)))) for a real number C and an integer m. That means that you actually have n+1 families in general, so the full solution is a linear combination of these.
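The cancellation spelled out (a sketch of the same computation, using the original equation to rewrite the tail):

```latex
y + y' + \cdots + y^{(n)}
  = y^{(n+1)} + \bigl(y^{(n+2)} + y^{(n+3)} + \cdots\bigr)
  = y^{(n+1)} + \bigl(y + y' + \cdots + y^{(n)}\bigr)'
  = y' + y'' + \cdots + y^{(n)} + 2\,y^{(n+1)},
```

and cancelling y' through y^(n) from both sides leaves y = 2y^(n+1).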
@rohitashwaKundu91
@rohitashwaKundu91 Жыл бұрын
Yes, I have done the same thing but isn't the solution coming as y=Ce^(x/(2^(1/n)))?
@mathieuaurousseau100
@mathieuaurousseau100 Жыл бұрын
@@rohitashwaKundu91 It should be y=Ce^(ax) where a^(n+1)=1/2 (with C a complex number; I don't know why they said real), and the numbers a with a^(n+1)=1/2 are a = 2^(-1/(n+1)) * e^(2*m*pi*i/(n+1)) with m an integer between 0 and n (inclusive)
@stratehorthy3351
@stratehorthy3351 Жыл бұрын
Here's one simplification to the last set of differential equations:
y + y' + y'' + ... + y^(n) = y^(n+1) + y^(n+2) + ...   --- (1)
Adding y' + y'' + ... + y^(n) to both sides we get:
y + 2(y' + y'' + ... + y^(n)) = y' + y'' + ...   --- (2)
Differentiating (1) then adding y' + ... + y^(n+1) to both sides we get:
2(y' + y'' + ... + y^(n+1)) = y' + y'' + ...   --- (3)
Comparing (2) and (3) we get y = 2y^(n+1), which matches the start of the problem. If y = Ce^(ax), we can find that a is an (n+1)'th root of 1/2. I wonder if there are other solutions too!
@danielrettich3083
@danielrettich3083 Жыл бұрын
I really liked the "sketchy" method, probably because I'm a physicist xD, and thus tried it on this generalized form of the problem. And it actually leads to the same simplified differential equation you got, namely y=2y^(n+1), which I find absolutely amazing
@PleegWat
@PleegWat Жыл бұрын
@@danielrettich3083 Same here. Remember to include all n+1 (complex) branches of (n+1)√2 to get all solutions.
@weeblol4050
@weeblol4050 11 ай бұрын
good job
@aceofhearts37
@aceofhearts37 Жыл бұрын
For the follow-up questions, you can bracket the first one as (y + y') = (y'' + y''') + (y^(4) + y^(5)) + ..., and therefore defining z = y + y' this becomes z = z'' + z^(4) + ..., so the differential equation can be solved in two steps. This generalizes to the n case by defining z = y + y' + ... + y^(n) so that the DE can be rewritten as z = z^(n+1) + z^(2n+2) + ..., which by the same method used in the first half can simplify to z = 2z^(n+1). Then you get a sum of exponentials in the complex roots of 1/2 and throw that mess into the RHS of y + y' + ... + y^(n) = z. So y(x) will ultimately be a sum of complex exponentials but I imagine the coefficients would get messy fairly quickly. Edit: changed n to n+1 in the RHS of the rewritten equation, I had counted that wrong. Edit 2: actually not that bad, check replies.
@aceofhearts37
@aceofhearts37 Жыл бұрын
So, actually not that messy. From now on I'll use Σ to mean the sum from k=0 to k=n. The solution to z = 2z^(n+1) is a function of the form z(x) = Σ (A_k)exp[(λ_k)x], where the A_k are any complex numbers and λ_k = [(1/2)^(n+1)] exp(2kπi/(n+1)) is one of the (n+1)st roots of 1/2. Therefore, the solution to y + ... + y^(n) = z will have a homogeneous part (a sum of exponentials involving the roots of 1 + λ + ... + λ^n = 0) and a particular solution, which we can assume has the form z(x) = Σ (B_k)exp[(λ_k)x], for some coefficients B_k that we have to compute. By comparing with the RHS we get (1+λ_k+...+λ_k^n)B_k = A_k, which by the partial sum of a geometric series and λ_k^(n+1) = 1/2 simplifies to B_k = 2A_k(1-λ_k). Since A_k can be chosen to be any complex number, B_k is also any complex number since 2(1-λ_k) is always nonzero. Then if we want real solutions we can pick the B_k to be complex conjugates as needed.
@Joe-nh9fy
@Joe-nh9fy Жыл бұрын
@@aceofhearts37 This is what I worked out as well. Well actually I got y = 2y^(n+1) instead of z. I get this by using the original equation, and a second equation which is the derivative of the first equation. Solve for y^(1) in both equations. Then set those expression equal to each other and solve for y. But I believe your general function is the solution for y
@matteopriotto5131
@matteopriotto5131 Жыл бұрын
​@@aceofhearts37 lambda_k should be {(1/2)^[1/(n+1)]}exp(2k(pi)i/(n+1)) I think
@aceofhearts37
@aceofhearts37 Жыл бұрын
@@matteopriotto5131 You're right, good catch.
@matteopriotto5131
@matteopriotto5131 Жыл бұрын
@@aceofhearts37 glad I helped
@TaladrisKpop
@TaladrisKpop Жыл бұрын
As always when using algebraic manipulations with series (or more generally, limits), one should carefully check convergence. Without it, the first method only shows that, IF a solution exists, then it has to be of the form y=Ce^(x/2)
@honourabledoctoredwinmoria3126
@honourabledoctoredwinmoria3126 Жыл бұрын
It's a fair point, but Y(n) of Ce^ax = (a^n)Ce^ax. So what we actually have here on the RHS is a geometric series (1/2 + 1/4 + 1/8...)Ce^(x/2), and on the left: Ce^(x/2). They equal each other if and only if that geometric series converges to 1, and of course it does. It's a valid solution, and I suspect it is the only valid solution. There are other apparent solutions, but they do not actually converge.
@TaladrisKpop
@TaladrisKpop Жыл бұрын
@@honourabledoctoredwinmoria3126 Yes, convergence is not difficult to check, but it shouldn't be left out
@broccoloodle
@broccoloodle Жыл бұрын
Well, you first assume a solution exists, you find all solutions, then later on you remove all solutions that do not converge. I find nothing wrong about that logic
@TaladrisKpop
@TaladrisKpop Жыл бұрын
@Khanh Nguyen Ngoc Did I say the opposite? But where in the video do they eliminate the divergent solutions? If not done, the solution of the problem is incomplete.
@broccoloodle
@broccoloodle Жыл бұрын
@@TaladrisKpop I think verifying that the solutions don't diverge is so obvious that Michael chose not to show it in the video. What he wanted to deliver to us is actually the second method, and to trigger our curiosity with the additional problems in the video.
@mizarimomochi4378
@mizarimomochi4378 Жыл бұрын
If you decide to associate the 3rd derivative and so on, you get that that tail is the 2nd derivative of y, which gives y = y' + y'' + y'', and you get the family of solutions y = c_1e^(x/2) + c_2e^(-x). So we do get infinitely many families of solutions, but it's a matter of where we associate. If we start with the 4th derivative, we'll get 3 solutions, as we have y in terms of the first, second, and third derivatives. And so on.
@patato5555
@patato5555 Жыл бұрын
You can take this a bit further by noting that the characteristic polynomial of keeping the first n derivatives will factor as (r-1/2)(1+r+r^2+…+r^(n-1)). In general, y=ce^(rx) where r=1/2 or r is a root of 1+r+r^2+…+r^(n-1) for some n. Of course, there could be more solutions than these.
@mizarimomochi4378
@mizarimomochi4378 Жыл бұрын
@@patato5555 I agree. Except they'd be roots of 2x^n + x^(n - 1) + ... + x - 1 if I'm not mistaken.
@patato5555
@patato5555 Жыл бұрын
@@mizarimomochi4378 if you set the expression equal to 0, divide by 2 and then factor out the r-1/2, they will be equivalent.
@mizarimomochi4378
@mizarimomochi4378 Жыл бұрын
@patato5555 Sorry, I didn't notice the first time. My bad.
@patato5555
@patato5555 Жыл бұрын
@@mizarimomochi4378 No worries!
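A quick symbolic check of the factorization discussed in this thread (a sketch, assuming SymPy is available):

```python
import sympy as sp

r = sp.symbols('r')
for n in range(2, 7):
    # Characteristic polynomial 2r^n + r^(n-1) + ... + r - 1 of y = y' + ... + y^(n-1) + 2y^(n).
    p = 2*r**n + sum(r**k for k in range(1, n)) - 1
    print(n, sp.factor(p))
# (2*r - 1) always splits off; the remaining factors multiply to r^(n-1) + ... + r + 1,
# whose roots are the n-th roots of unity other than 1.
```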
@DiracComb.7585
@DiracComb.7585 Жыл бұрын
This honestly doesn’t look awful. The real issue is you need to be careful of the functional analysis of the derivative operator.
@matthewrorabaugh1497
@matthewrorabaugh1497 Жыл бұрын
For one of the follow-on questions there is a cute result which pops up. f=f'+...+f(n) when n is congruent to 1 mod 4. In that case you can use a sine function because the other derivatives cancel themselves out. I was looking for ways to fit this self-canceling concept into the other finite equations, but I have been unsuccessful.
@josephon63
@josephon63 Жыл бұрын
I don't understand:
- why you can say that D(y+y'+…) = y' + y''+…
- on which vector space 1-D is an isomorphism and the series (D^n) converges?
@Horinius
@Horinius Жыл бұрын
@10:15 y + y' ≠ y'' + y''' "But I'll let you do it as homework" 😆😆
@weeblol4050
@weeblol4050 11 ай бұрын
trivial y + y' = y' + 2y''
@Horinius
@Horinius 6 ай бұрын
@@weeblol4050 No, it is not. I don't know how you got the y + y' = y' + 2y'' My comment actually told viewers that Michael made a mistake at @10:15. The correct answer should be y + y' = 2 y'' + 2 y'''
@weeblol4050
@weeblol4050 6 ай бұрын
@@Horinius y + y' = y''+(y''+ y'''+...)' = y''+(y+ y')' = y'' + y' + y'' = y'+2y'' If you can find a mistake it would be really helpful
@weeblol4050
@weeblol4050 6 ай бұрын
@@Horinius but yours also works: y + y' = y'' + y''' + (y'' + y''' + ...)'' = 2y'' + 2y'''. Let's observe 2x^2 - 1 = 0 and 2x^3 + 2x^2 - x - 1 = (2x^2 - 1)(x + 1) = 0. Now let's observe y + y' = y'' + y''' + y^(IV) + (y'' + y''' + ...)''' = y'' + 2y''' + 2y^(IV), and 2x^4 + 2x^3 + x^2 - x - 1 = (2x^3 + 2x^2 - x - 1)(x + 1) - 2x^3 + x = 2(x^2 - 1/2)(x + 1)^2 - 2x^3 + x = 0, so there are god knows how many solutions (oo). Some of the solutions are y(x) = Ae^(-x) + Be^(x/sqrt(2)) + Ce^(-x/sqrt(2)). So you are correct in some way; you also found the solution that I wrote with a constant A, I found only 2
@weeblol4050
@weeblol4050 6 ай бұрын
@@Horinius Let's also check y = y' + y'' + (y' + y'' + ...)'' = y' + 2y''; 2x^2 + x - 1 = (2x - 1)(x + 1), so here also e^(-x) is a solution, so 3:32 is also incomplete. Just for sanity let's check y = y' + y'' + y''' + (y' + y'' + y''' + ...)''' = y' + y'' + 2y''', and 2x^3 + x^2 + x - 1 = (2x^2 + x - 1)(x + 1) - x^2 + x = 0, so this one doesn't work and yields even more solutions; god, I don't want to check anymore, this is cursed. I guess it is to be expected for an infinite order differential equation to have infinitely many solutions
@davidblauyoutube
@davidblauyoutube Жыл бұрын
I immediately thought of the "sketchy" solution with D as a linear operator 😆. When the characteristic "polynomial" is actually not a polynomial because it lacks a finite degree, then usually there's some formula that can be applied to its coefficients (otherwise, how would you define it?). In that case, my hunch is that there's some manipulation that can be performed along the lines of techniques used with generating functions and recursive sequences that will produce a diffeq having an order equal to the degree of the formula.
@PeterBarnes2
@PeterBarnes2 Жыл бұрын
I prefer using a slightly more direct approach with linear operators.
[1]y = [1/(1-D_x) - 1]y   {|y'/y| < 1 (?)}
(This is equivalent to the given equation, in terms of differential operators, with the condition (which might not be necessary) coming from 1/(1-s) having a pole at s=1. This pole should manifest as divergence in certain exponential solutions, namely those with parameter 's' (from e^sx) outside the radius of convergence of this 'definition of 1/(1-s)'. I say it 'should' manifest this way, but this theory is not developed enough to be certain of the divergence, at least to my knowledge. Fortunately the final solution satisfies this condition anyway, so it is not repeated.)
0 = [1/(1-D_x) - 2]y
(Moving terms between sides of the equation, as both operators are operating on the same term 'y'.)
1/(1-s) - 2 = 0
(The exponential solutions of any constant-coefficient linear DE -- there is a theorem I've discovered, more or less, for this generalization from polynomials to arbitrary functions -- are found by using the characteristic equation to find the eigenfunctions of the form e^sx, with s the characteristic equation's independent variable.)
1 - 2(1-s) = 0,  -1 + 2s = 0,  s = 1/2
(Just algebra here. Having solved for 's', e^sx are our eigenfunctions, thus:)
y = Ce^(x/2)
Really a very short and simple approach. Now, if you want a more difficult approach, you can use the fact that [1/(s-D_x)] is a variation of the Laplace transform, remembering that [e^(bD_x)]f(x) = f(x+b) and int{0, inf} e^(-at) dt = 1/a, and then you can try to solve the resulting integral equation. It's a good bit of fun, and certainly possible, if a little unnecessary in this problem. [Edit: I did this without watching the video first. My mistake, it's almost exactly as presented! Oh well...]
@ilonachan
@ilonachan Жыл бұрын
What's really great here is that we don't actually need to get all that convoluted to get rid of the sketchiness, and just not do the step with the weird "function division" thing. While we often write the geometric formula as that ratio, its derivation works in any ring if we just skip that final simplification! So with our present ring of linear functors, where addition is adding the results, multiplication is chained application, and division is not generally defined, we can still just skip directly from the (1)y=(sum)y description to the (1-D)y=Dy statement. ...although, does D^(n+1) "converge" in some meaningful way? that'd be required for the infinite case, right? the finite case ofc just gives us a relatively simple degree n+1 differential equation, but I forget how exactly those are solved rn...
@PeterBarnes2
@PeterBarnes2 Жыл бұрын
​@@ilonachan x^n doesn't converge over all x. The domain for D^n to converge over is the space of functions. That's a pretty broad domain, so I prefer to stay within the complex meromorphic functions. (Which, despite including complex functions, is much more restrictive and well-behaved.) I'm pretty sure of these two things: One of these extended differential operators f(D_x) converges for an exponential function e^sx if and only if the function f(s) converges at 's.' As well, polynomials converge if f(0) converges, and polynomials times exponentials P(x)e^sx converge when e^sx converges. This much I'm fairly confident about. Further, other functions than exponentials or polynomials converge for a given differential operator depending on how the function is expressed. For example, a taylor series may diverge on its terms alone, but an exponential times a taylor series may converge absolutely, even when the exponential times the series equals the original series. More than that, integral expressions of some function might converge or diverge if they contain exponential terms that remain inside or go outside, respectively, the domain of convergence of the differential operator. This much is actually given (I think) by the previous thing. I have no idea about functions which are in no way expressed as exponentials or polynomials. Not just regarding their convergence under various differential operators, but even how to evaluate them. There is something which can, theoretically, help. Functions of the derivative applied to functions of the variable can be reversed: [f(D_x)] (g(x)*y(x)) = [[g(D_z + s)]{z=D_x} f(z)]{s=x} (y(x)) It's messy, but cleans up when y=1: [f(D_x)] g(x) = [g(D_z + x)]{z=0} f(z) This allows you to evaluate some expressions more easily. Because it's easy to evaluate exponentials of derivative operators (e^bD is the shift operator by 'b'), and polynomials are basically given (D^p is the pth derivative operator for p a natural number) you can basically evaluate any differential operator on functions expressed in terms of exponentials and polynomials. This works when the exponentials or polynomials are under an integral, or in a sum, or up a tree, anything! (By 'up a tree' I'm not actually referring to anything specific. For example, I don't mean towers of exponentials: I am still working on exponentials of polynomials e^(x^p), as they do not behave at all. [e^e^D]y=0 might be the DE for which the gamma function is the solution. Or maybe not, it's hard to tell. Maybe with a minus sign somewhere, but then it doesn't work, it's rather confusing, actually.) The fact that exponentials behave better than polynomials motivates me to try and express one in terms of the other. So far I've found one expression which requires a limit, which isn't satisfactory. I've looked at distributions (a generalization of functions), and found a way of getting to it from what are basically derivatives of the sign() function. This, interestingly, gives the exact same result with the limit and everything. I've looked at expressing the logarithm, which also gives the same exact result. Maybe thinking from polylogarithms, or something else entirely? Very uncertain.
@sirlight-ljij
@sirlight-ljij Жыл бұрын
D is an unbounded operator, so the geometric series requires some assumptions to be made for it to converge
@PennyAfNorberg
@PennyAfNorberg Жыл бұрын
@@sirlight-ljij I guess that's why the solution was sketchy, and I started thinking about how to check that |D| < 1
@jakubszczesnowicz3201
@jakubszczesnowicz3201 Жыл бұрын
I love the sketchy proof!!! Operator analysis looks so wild without context though. Like, that whole segment around 5:30 is crazy. If I saw (1 - D)^-1 as a high school student I would be mindblown, my teacher wouldn’t be able to hear the end of it
@Yossus
@Yossus Жыл бұрын
I love these videos for two reasons: one, the insight on the maths itself, two, the insight on how to cleanly draw the symbols!
@BackflipsBen
@BackflipsBen Жыл бұрын
That perfect infinity symbol at 4:45 touched my soul
@chimetimepaprika
@chimetimepaprika Жыл бұрын
Ahh, three seconds in, "The trivial solution works beautifully."
@cara-seyun
@cara-seyun Жыл бұрын
0 = 0 + 0 + 0 + 0…
@Tehom1
@Tehom1 Жыл бұрын
Did Michael escape? Will he be able to cut his way out of the belly of beast with only the Heaviside operator? Stay tuned, viewers! 😮
@marchenwald4666
@marchenwald4666 Жыл бұрын
As a general solution to the problem around 9:00 : For n terms on the left, the functions satisfying the equation are y = C * e ^ ( ( (1/2) ^ (1/n) ) * x )
@anggalol
@anggalol Жыл бұрын
Well, that is totally unexpected to separate the differential operator💀
@kitochizxik5786
@kitochizxik5786 Жыл бұрын
Hi Kurisu
@brianlane723
@brianlane723 Жыл бұрын
The differential equation essentially becomes 1=1/2+1/4+1/8+1/16...
@jiantaoxiao2481
@jiantaoxiao2481 Жыл бұрын
Here's an operator ordering issue. You have to prove D commutes with 1/(1-D) before acting with 1-D on both the LHS and RHS. (1-D)y = ((1-D)D(1-D)^(-1))y is what it truly is.
@jamiewalker329
@jamiewalker329 Жыл бұрын
Err, that's trivial, the commutator of any function of an operator with any other function of that same operator is 0. Non trivial commutation relations come from operators being distinct, or distinct components of vector operators.
@reeeeeplease1178
@reeeeeplease1178 Жыл бұрын
You can "factor" a D out from the series *to the right* and then use the geometric series trick to avoid this problem
@jiantaoxiao2481
@jiantaoxiao2481 Жыл бұрын
@@jamiewalker329 yes. You are right. [f(D), g(D)]=0
@jiantaoxiao2481
@jiantaoxiao2481 Жыл бұрын
@@reeeeeplease1178 yes. Thanks.
@jiantaoxiao2481
@jiantaoxiao2481 Жыл бұрын
f and g has D^n as basis and D^n's coefficient should be constant.
@jdsahr
@jdsahr Жыл бұрын
note that the leading constant for the original solution need not be limited to the reals. There's really no reason that it couldn't be a quaternion.
@dmytryk7887
@dmytryk7887 Жыл бұрын
For the truncated version: y=y'+y''+y'''+...+y(n), let r be a root of x+x^2+x^3+...+x^n=1. Then it is easy to show that y=exp(rx) is a solution to the truncated equation. Since there are n such roots this gives you the basis of the expected n dimensional solution space: exp(r_1 x), exp(r_2 x), ..., exp(r_n x). Now the hand-wavey part: as n approaches infinity, the equation x+x^2+...+x^n=1 approaches x/(1-x)=1, which has the unique solution x=1/2 as found in the video. Not really satisfying. I feel there is a nicer geometric argument, but I don't see it as of now.
@alexsokolov1729
@alexsokolov1729 Жыл бұрын
You can simplify your characteristic equation using formula for sum of geometric series: (x^(n+1) - x) / (x - 1) = 1 which is the same as x^(n+1) - 2*x + 1 = 0, x != 1 It is easy to show that the function f(x) = x^(n+1) - 2*x + 1 has exactly 2 real roots for odd n and 3 real roots for even n. Excluding x=1 will give us 1 or 2 real solutions depending on parity of n. I guess these observations show that an infinite equation from the video has no more than 2 real solutions. However, there are complex solutions, which should also be considered
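A numerical check of that real-root count (a sketch assuming NumPy; the helper and tolerance are ad hoc):

```python
import numpy as np

def real_roots(n, tol=1e-9):
    # Roots of f(x) = x^(n+1) - 2x + 1 (degree n+1, coefficients listed highest power first).
    coeffs = [1.0] + [0.0] * (n - 1) + [-2.0, 1.0]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < tol)

for n in (3, 4, 5, 6):
    print(n, real_roots(n))
# Odd n: two real roots (x = 1 and one a bit above 1/2).
# Even n: three real roots (an extra negative one appears).
```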
@JohnSmith-zq9mo
@JohnSmith-zq9mo Жыл бұрын
Note that we have a similar case for ordinary algebraic equations: the equation 1+x+x^2/2+..+x^n/n!=0 has n complex solutions, but if we take the limit we get an equation with no solutions.
@JamesLewis2
@JamesLewis2 Жыл бұрын
When you started the "sketchy solution" I thought that you were going to start grouping from later in the equation, something like noting that y=y′+y″+(terms of the original expansion)″ and then getting the spurious solution family y=ce^−x, which if back-substituted results in basically saying that Grandi's series converges to −1; related to that, if you group it off after the nth derivative, you get an equation with characteristic polynomial 2r^n+r^(n−1)+r^(n−2)+…+r^2+r−1, which factors as (2r−1)(r^(n−1)+r^(n−2)+…+r^2+r+1), and the zeroes are ½ and the roots of unity other than 1, corresponding to spurious solutions equating 1 to the sum of a divergent series with terms that oscillate around the unit circle.
@olli3686
@olli3686 Жыл бұрын
10:14 Wait, what happened? He just completely ignored the remainder from D4y to DNy. If y + D1y = D2y + D3y + D4y + … + DNy, then why just entirely drop the 4th derivative etc.????
@stewartcopeland4950
@stewartcopeland4950 Жыл бұрын
it's more like y + y' = 2 * (y'' + y''')
@CISMarinho
@CISMarinho Жыл бұрын
As @stewart said: y'' + y''' + y⁽⁴⁾ + y⁽⁵⁾ + … = (y + y' + y'' + y''' + …)'' = (y + y' + (y + y'))'' = 2(y + y')'' = 2(y'' + y''')
@LzizmaLaya
@LzizmaLaya Жыл бұрын
How to prove that the series of D^n converges and for which norm ?
@kennethvalbjoern
@kennethvalbjoern 6 ай бұрын
The most general setting is to say that (sum(D^n))f, for an analytic function f, exists if the sequence ((sum^k(D^n))f)_k converges pointwise, where the limit-function defines (sum(D^n))f. This is f.ex. true for analytic functions where sum( ||f^(n)||_infty ) converges (absolutely), where ||f||_infty is the supremum norm. F.ex. e^(x/2) has this property on any interval [a;b]. The operator sum(D^n) is not defined for all analytic functions, f.ex. not for e^x on any interval [a;b]. The subset of functions in f.ex. C^infty([0;1]) (with the supremum norm), where sum(D^n) is defined, is even a vector subspace of C^infty([0;1]). I hope this helps you.
@tontonbeber4555
@tontonbeber4555 Жыл бұрын
4:18 ... why using letter D while everybody uses letter s for that ? In fact you are using Laplace transform ...
@guerom00
@guerom00 Жыл бұрын
Is there a justification that the geometric series formula "seems to work" with a differential operator ?
@DTDTish
@DTDTish Жыл бұрын
Not a mathematician, but my guess is that it is linear. We can also just plug in y=Ae^(kx) like we do for all constant coefficient linear ODEs, so we have y=y' + y'' + ... This gives us the characteristic equation 1=k+k^2+... and we use the geometric sum from there. This basically does the same thing as the linear operator method, but a bit more simply (adding numbers instead of operators)
@guerom00
@guerom00 Жыл бұрын
​@@DTDTish yeah... Somehow, i don't have a problem with an object like exp(D) cause this series has an infinite radius of convergence. Here, i try to wrap my head around what a finite radius of convergence for this series means when applied to differential operators :)
@theonearney205
@theonearney205 Жыл бұрын
Elite thumbnail
@of_discourse
@of_discourse Жыл бұрын
didn't you exchange the derivative and sum in your not sketchy solution?
@ntuneric
@ntuneric Жыл бұрын
i think some insight for the question at 7:23 is that the differential equation with finite number of terms n corresponds to a characteristic polynomial of degree n that has n roots, whereas the infinite one's polynomial is a power series which has a single root
@deehobee1982
@deehobee1982 Жыл бұрын
The differentiation operator is unbounded, so it's dubious to factor it out of an infinite sum like you did. I think what you've done here is solve the corresponding "infinite characteristic equation" for this DE, but that certainly doesn't show that the infinite sum of exponentials converges to e^(x/2).
@habibullah-ki7ok
@habibullah-ki7ok Жыл бұрын
You are absolutely right. The differential operator is not continuous on the space of smooth functions C^(\infty). Moreover, you need the norm of D to be less than 1 to guarantee the sum makes sense. Nonetheless, this can be saved. Restrict the domain to the set of functions with norm less than one. Certainly the family Ce^{ax} is in this set for |a| < 1
@deehobee1982
@deehobee1982 Жыл бұрын
@habib ullah Haha, that sounds right. Thanks. I think another comment gave a procedure to generate a completely different DE using the same logic
@anasselmoubaraki9410
@anasselmoubaraki9410 Жыл бұрын
To answer your question Mr Penn i think that having one solution is a consequence of the analytical property of the solution and having an infinite sum forces the coefficient (a_k) in the analytical expression to be defined uniquely. Thank you for your amazing videos.
@Qhartb
@Qhartb Жыл бұрын
The question I thought of as soon as I saw it was: y = y'/1! + y''/2! + y'''/3! + ... So a Taylor-series-looking differential equation. Possibly an application of your "what's exp(D)" result from another video?
@Kapomafioso
@Kapomafioso Жыл бұрын
I also thought about that and how the argument shifts when exp(D) is applied. Then the equation essentially becomes: y(x+1) = 2y(x), a functional equation solved by 2^x times any periodic function with period 1, instead of a differential equation. Infinite series of derivatives be weird and exotic like that. Sometimes it's not a differential equation at all, despite looking like one.
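Written out for the equation as posed above, with the sum starting at the first derivative (a sketch using the shift-operator identity):

```latex
y = \sum_{k\ge 1}\frac{y^{(k)}}{k!} = \bigl(e^{D}-1\bigr)y = y(x+1)-y(x)
\quad\Longrightarrow\quad y(x+1) = 2\,y(x),
```

so every function of the form y(x) = 2^x p(x) with p periodic of period 1 qualifies.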
@techno2371
@techno2371 Жыл бұрын
I did it in a less elegant way: Since this is a homogeneous differential equation with constant coefficients, you assume the solution is of the form ce^(rx). Differentiating this solution and dividing by ce^(rx) (it can never be 0), you get 1=r+r^2+r^3... Adding 1 to both sides gives you 2=1+r+r^2+r^3...=1/(1-r) (|r| < 1)
@kennethvalbjoern
@kennethvalbjoern 6 ай бұрын
LOL. The sum(D^n)=D/(1-D) operator expression is so cool. It won't surprise me, if the manipulations you did can make perfect sense in some formal way.
@NathanSimonGottemer
@NathanSimonGottemer Жыл бұрын
How do you know the sum on the right hand side converges? If you are working with a domain of real numbers for y the sum should diverge if x is positive, which makes me feel like this is a sort of Ramanujan-tier cheat code solution. Of course I still think it means something, just not the whole picture…if we take y(0)=0 then the Laplace transform will converge for |s|
@petersievert6830
@petersievert6830 Жыл бұрын
10:09 That is most definitely wrong. I think, it must be y + y' = y' + 2y''
@krisbrandenberger544
@krisbrandenberger544 Жыл бұрын
No. y+y'=2(y"+y''') from doing something similar with the goal equation.
@petersievert6830
@petersievert6830 Жыл бұрын
@@krisbrandenberger544 Well, I am not wrong, I dare say. Your equation is correct as well though. You cut off after y''' and made the rest into (y+y')'', while I did it after y'' and made the rest into (y+y')'. Honestly my equation seems much more futile for getting to a solution though.
@donmoore7785
@donmoore7785 Жыл бұрын
Very thought provoking. I honestly found the "sketchy" solution very sketchy - I didn't understand the manipulations of the D operator.
@GeoffryGifari
@GeoffryGifari Жыл бұрын
On the follow-up question, in the video you shifted the equal sign to the nth sum. Now, can we do this indefinitely, shifting the equal sign to the right to somehow "inverting" the sum of derivatives? y + y' + ... + y^(n) = y^(n+1) + y^(n+2) + ..... to maybe lim m -> infinity { y + y' + ... + y^(m-1) = y^(m) ..... } ?
@krisbrandenberger544
@krisbrandenberger544 Жыл бұрын
Hey, Michael! So for the general case of the follow up question, we would have: y+y'+...+y^(n)=2*(y+y'+...+y^(n))^(n+1)
@diszno20
@diszno20 Жыл бұрын
I would love to see what happens when you choose different constants for the different derivatives, e.g. y = sum {from k=1 to inf} 1/k y^{(k)}. Also it would be fun to plug in some crazy sequence as constants, i.e. define a_n to be the nth digit of pi and calculate y = sum a_n y^(n)
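For the first variant, an exponential ansatz y = e^(ax) gives a quick closed form (a sketch; it assumes |a| < 1 so the series converges):

```latex
1 \;=\; \sum_{k\ge 1} \frac{a^{k}}{k} \;=\; -\ln(1-a)
\quad\Longrightarrow\quad a = 1 - e^{-1}\approx 0.632,
```

so y = C e^{(1-1/e)x} solves y = sum 1/k y^{(k)}.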
@Blackmuhahah
@Blackmuhahah Жыл бұрын
Extending the case with finite n to solutions of the form y=e^(a x) you get 1=a+a^2+...+a^n. In the limit as n->\infty you get a=e^(i\phi), where 0
@jevinleno2670
@jevinleno2670 Жыл бұрын
Hey Michael, for the first method - doesn't the sum law for derivatives only hold for finite sums? This method seems like it needs further justification.
@chrisdupre2862
@chrisdupre2862 Жыл бұрын
I don't know if this has been answered or not already, but one way to look at the non-existence is via the Fourier transform (a favorite for constant coefficient linear ODEs). After some manipulation, you can see that the solution must solve \Lambda^{n+1} = 2\Lambda - 1. Now suppose n goes off to infinity. We break up looking for roots into three options: the modulus of lambda is greater than, equal to, or less than one. In the greater-than case, we cannot solve this as the left hand side is much much bigger than the right. In the equal-to case, the left hand side does not have a limit, so what do we even mean! In the less-than case, the term tends to 0, so 2\Lambda - 1 = 0, which recovers our start. Here's a follow up: is there a distribution of solutions around the unit circle that this approaches? Is there a meaningful "distribution of other oscillatory solutions at infinity"? Great video! It's fun to see the resolvent pop up in the sketchy side!
@59de44955ebd
@59de44955ebd Жыл бұрын
On the first follow-up question y + y' = y{2} + y{3} + y{4} + ... : Taking the second derivative on both sides we get: y{2} + y{3} = y{4} + y{5} + y{6} + ..., and hence: y + y' = 2 * (y{2} + y{3}) (this factor 2 was missing in the video). By substituting z for y + y' we get z'' = 1/2 * z and therefore a solution z = c * e^(x/sqrt(2)). A simple real solution that solves the substitution, and therefore the original equation, is y = c * e^(x/sqrt(2)).
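A numeric sanity check of that solution (a sketch; for y = e^(ax) the equation y + y' = y'' + y''' + ... reads 1 + a = a² + a³ + ..., a geometric series when |a| < 1):

```python
import math

a = 1 / math.sqrt(2)     # exponent coming from z'' = z/2
lhs = 1 + a              # (y + y') / (C * exp(a*x))
rhs = a**2 / (1 - a)     # (y'' + y''' + ...) / (C * exp(a*x)), summed as a geometric series
print(lhs, rhs)          # both ~1.7071, so e^(x/sqrt(2)) checks out
```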
@59de44955ebd
@59de44955ebd Жыл бұрын
Concerning the general equation y + y{1} + ... + y{n} = y{n+1} + ..., if we substitute z for y + y{1} + ... + y{n}, we get z{n+1} = 1/2 * z, and y = c * e^(x/(2^(1/(n+1))) is always a (trivial) solution.
@princeofrain1428
@princeofrain1428 Жыл бұрын
My first intuition of the solution (as someone who doesn't like to think too much) was "what if it was some exponential function whose power was a series that converged to 1 on the range 1 to infinity?"
@caiobarros585
@caiobarros585 Ай бұрын
Actually, you can truncate the infinite series the way you want in the given expression and get other solutions. In the video, Michael "factorizes" a y' from the expression -> y = y' + y'' + y''' + ... = y' + (y' + y'' + ...)' = y' + y' -> y = 2y' -> y = c*exp(x/2).
If you "factorize" a y'' you get another differential equation -> y = y' + y'' + y''' + ... = y' + y'' + (y'' + y''' + ...)' = y' + y'' + y'' -> y = y' + 2y'' -> y = c1*exp(x/2) + c2*exp(-x)
"Factorizing" y''' you get y = y' + y'' + 2y''' -> y = c1*exp(x/2) + c2*exp(-x/2)*sin(sqrt(3)*x/2) + c3*exp(-x/2)*cos(sqrt(3)*x/2)
For y = y' + y'' + y''' + 2*y'''', you get y = c1*exp(x/2) + c2*exp(-x) + c3*sin(x) + c4*cos(x)
And so on... In the end you are solving an n-th order differential equation of the form y = y' + y'' + y''' + ... + y^(n-1) + 2*y^(n). For each n you get n solutions. Extending that appears to be non-trivial as the solutions don't follow a pattern. But, maybe, because c*exp(x/2) is always a solution (the characteristic polynomial assures that), you can propose an iterative method for the other solutions by the algorithm:
y_0(x) = c*exp(x/2)
y_k(x) = u_k(x)*y_k-1(x), for k >= 1,
where u_k(x) is determined in each step (the method probably doesn't converge and sure is slow af)
@HarmonicEpsilonDelta
@HarmonicEpsilonDelta Жыл бұрын
I find absolutely game changing the fact that applying the geometric series worked 😮😮
@blackfalcon594
@blackfalcon594 Жыл бұрын
A nice (and seemingly related) parallel: The polynomial 1 = sum_{j=1}^n x^j is a degree n polynomial and so has n (possibly complex) solutions. But when we take the infinite sum, 1 = sum_{j>=1} x^j = x/(1-x) for |x| < 1, we only get one solution, not infinitely many.
@TechnocratiK
@TechnocratiK Жыл бұрын
The 'sketchy' approach is probably made a bit more formal by taking the Laplace transform of both sides. The result is then that Y = (s / (1 - s)) Y, and the solution follows multiplying through by (1 - s) and taking the inverse transform. This also permits us to consider solutions to y + y' + ... y(n) = y(n + 1) + ..., (where y(k) is the kth derivative of y) since we would have: (1 - s ^ (n + 1)) / (1 - s) Y = (s ^ (n + 1)) / (1 - s) Y Rearranging, Y = 2 s ^ (n + 1) Y and transforming back: y = 2 y(n + 1) The resulting basis of n+1 functions is z_k ^ x for k = 0..n where z_k are the n+1 complex roots of 1/2 (a real basis also exists). The case solved in this video was n = 0. There are two assumptions made here. First, that the solution y has a Laplace transform and, second, that the resulting geometric series converges (i.e., |s| < 1). Disregarding the second assumption, we can then ask (for n = 0) whether there exists |s| >= 1 for which s + s ^ 2 + ... = 1.
@dexio85
@dexio85 Жыл бұрын
Out of curiosity - are those new thumbnails working for this channel? Usually math/science channel don't do sensational thumbnails like that, you know, with arrows, highlighed text, open mouth, outlines, etc. This somehow seems out of place here. So, is this really working ?
@anthonypazo1872
@anthonypazo1872 Жыл бұрын
"Okay. Nice." 😂😂❤❤ love it every time I hear that.
@scarletevans4474
@scarletevans4474 Жыл бұрын
So... will there be some follow-up videos? Or we are just left with these questions that will never be answered??
@sebaufiend
@sebaufiend Жыл бұрын
The first method, I thought, was neat. I used the geometric series but I didn't see a need to go through all that operator business. Something we learned in diffeq is that any linear differential equation system with constant coefficients will have solutions of the form A*exp(mx). Thus, making this substitution into the equation we get 1=m+m^2+m^3.... The right hand side is very close to a geometric series, which has the sum 1+r+r^2+r^3...=1/(1-r), so if we subtract 1 from both sides we get r/(1-r)=r+r^2+r^3... Subbing this into our equation we get 1=m/(1-m). The only value that gives us a solution is m=1/2. Thus the solution is y=C*exp(1/2*x)
@juancristi376
@juancristi376 Жыл бұрын
Nice video! I think you can apply the first method to also get
y = y' + y'' + y''' + y'''' ... = y' + y'' + D²(y' + y'' + y''' + y'''' ...) = y' + 2y''
For which you can get the solutions
y = A exp(x/2) + B exp(-x)
For all A, B real. This can be generalized to be equivalent to all differential equations of the form
y = sum from n=1 to N of D^n y + D^N y
For any N natural. Please correct me if I did any mistake!
@knisleyjr
@knisleyjr Жыл бұрын
So there are in fact infinitely many solutions!!! Good find!!
@alexsokolov1729
@alexsokolov1729 Жыл бұрын
Hmm, I agree with your idea, however, it doesn't seem to work properly. Let's substitute exp(-x) as our solution and divide both parts of the equation by it. Then we get 1 = (-1) + 1 + (-1) + 1 +... It is clear that the series in the right part is not convergent, since its partial sums are -1 and 0. However, the substitution of exp(x/2) will give us the correct identity 1 = 1/2 + 1/4 + 1/8 +... The equality above can be easily verified using the formula for the sum of a geometric progression.
@MK-13337
@MK-13337 Жыл бұрын
You get an infinite family of "solutions", but the RHS of the DE will not converge anywhere for any of the other "solutions" and thus they can't really be solutions.
@tonybluefor
@tonybluefor Жыл бұрын
The solutions y=A exp(x/2)+B exp(-x) satisfy the equation y=y'+2y''=(A/2) exp(x/2) -B exp(-x) +2((A/4) exp(x/2) +B exp(-x))= A exp(x/2) +B exp(-x) and are linearly independent. So it seems that there are indeed infinitely many solutions.
@tonybluefor
@tonybluefor Жыл бұрын
Oh. I've just understood Alex Sokolov's comment. That means the assumptions that y'= y''+y'''+..... or y''=y'''+... can be misleading in infinite series.
@hqTheToaster
@hqTheToaster Жыл бұрын
I had a fantasy image in my head that looks like this: " (derivative[sqrt(10)timesover] y) + (derivative[10timesover] y) + (derivative[10sqrt(10)timesover] y) + (derivative[100timesover] y) + ... " and soon enough I knew it was time to try this video. These videos are good for those that do and don't listen alike. I'm sure you probably prefer the people that do or are more likely to listen; just wanted to let you know that I thought of you and/or your channel in a sincere way. Also, I think ?/2 shows up in your video because of the way the inherent limit would work as the tally marks approach infinity in 3 or more different ways. I'm not a calculus expert. That is just what I think.
@seneca983
@seneca983 Жыл бұрын
9:20 Is there a nice solution? My answer is "yes". Just try a function of the form C*exp(k*x). You get the equation: 1+k+k^2+...+k^n=k^(n+1)+k^(n+2)+k^(n+3)... Take k^(n+1) as a common factor from the right side: 1+k+k^2...+k^n=k^(n+1)*(1+k+k^2...) Apply the formula for the geometric sum to the left and that of the geometric series to the right: (1-k^(n+1))/(1-k)=k^(n+1)/(1-k) Cancel out the common denominator and rearrange to get the following equation: k^(n+1)=1/2 The solutions for k are just (1/2)^(1/(n+1)) times the appropriate roots of unity. Technically, I've not proven that there aren't solutions that aren't of exponential form, but that seems pretty intuitive.
@DavidSavinainen
@DavidSavinainen Жыл бұрын
For the case y + y' + ... + y(n) = y(n+1) + ... you get, by the sketchy solution,
y + y' + ... + y(n) = (D^[n+1]/(1-D)) y
(1-D)(y + y' + ... + y(n)) = y(n+1)
Notice that the LHS telescopes, giving only
y - y(n+1) = y(n+1)
or in other words, y(n+1) = y/2, which has the solution set y = C exp[x/α] where α = 2^[1/(n+1)] * exp[2ikπ/(n+1)] for all integers k such that 0 ≤ k ≤ n
@lunstee
@lunstee Жыл бұрын
Careful with the telescoping; it only works correctly on the RHS infinite series when abs(D)
@danrakes2667
@danrakes2667 Жыл бұрын
In response to your question at around 8:10, in the infinite series of y' example you DO get an infinite result. The infinite series is trapped inside e!
@garyknight8966
@garyknight8966 Жыл бұрын
For class II, using the same method as first used, y+y' = y''+D(y+y')=2y''+y'; so y=2y'' with solution y=Cexp(x/\sqrt 2)+Dexp(-x/sqrt2) . The two independent parts arise because we implicitly involve the second derivative. Note the exponent factors 1, -1 are square roots of 1. The next class produces y=2y''' with solution y = Cexp(x/(2^1/3))+Dexp([]x/(2^1/3))+Eexp([]x/2^1/3) with [] the other cube roots of 1: -1/2+-\sqrt3/2 . Three independent parts due to a third derivative. And so forth ...
@garyknight8966
@garyknight8966 Жыл бұрын
Oops .. the last [] factors I meant to be complex: -1/2+- i\sqrt3 /2 (of course). So these involve trigonometric functions (the even or odd components of exp (i \theta) )
@MarcoMate87
@MarcoMate87 Жыл бұрын
At 9:40 it's FALSE that the RHS is the second derivative of the LHS; I don't really understand what he is saying. The second derivative of the LHS is (y+y')'' = y'' + y''', which is not equal to the RHS in general. Instead, we can simply apply the first method in this way: differentiating both sides we obtain y' + y'' = y''' + y'''' + ... So we can substitute into the original equation to get: y+y' = y'' + y' + y''. Simplifying we get y = 2y'', which is easily solvable.
@JD_Rockets
@JD_Rockets Жыл бұрын
Thank you sir, really helpful 🙏🇮🇳
@landy4497
@landy4497 Жыл бұрын
why does that D manipulation work?
@sleepycritical6950
@sleepycritical6950 Жыл бұрын
Jesus christ you came at the right time i love yooooooouuuuuu i needed this desperately
@blabberblabbing8935
@blabberblabbing8935 Жыл бұрын
2nd solution: A) Why can we assume D represents a square linear transformation such that its power series makes sense? B) How can we justify that the geometric series transformation (for |base| < 1) is valid for such matrices?
@DavidSavinainen
@DavidSavinainen Жыл бұрын
This is precisely why he called it a sketchy method
@haziqthebiohazard3661
@haziqthebiohazard3661 Жыл бұрын
Off the top of my head my guess was exp(x/2)
@skvortsovalexey
@skvortsovalexey Жыл бұрын
C*exp(x/2)
@kennethvalbjoern
@kennethvalbjoern 6 ай бұрын
Me too. It's 25+ years since I did a differential equation, so I missed the c.
@rainerzufall42
@rainerzufall42 Жыл бұрын
How do you know, that |D| < 1 for convergence?
@thiagodonascimento7926
@thiagodonascimento7926 4 ай бұрын
At 5:30 there's unfortunately absolutely no information given on how you get to this result... (sum D^n to D/(1-D))??
@nablahnjr.6728
@nablahnjr.6728 Жыл бұрын
i love how you can also easily build even and odd parts of this equation using the solution that satisfy y_even = [y(x)+y(-x)]/2 = y''_even + y""_even and so on
@NikitaGrygoryev
@NikitaGrygoryev Жыл бұрын
I have a pretty hand-wavy explanation for the uniqueness of the solution; for something more precise you might need to start thinking harder about what functions we are talking about. So for finite n you would solve the equation by the substitution y=Exp(Ax). The characteristic equation is 1-2A+A^(n+1)=0 (where you should discard A=1). It's easy to see that in the limit as n goes to infinity there's a unique solution |A|
@petersamantharadisich6095
@petersamantharadisich6095 Жыл бұрын
For the finite sum, I get Cexp(ax) as a solution where a is a solution to the polynomial equation 2a - a^(n+1) = 1. I get this by noting y'=y''+y'''+...+y[n+1], so we have y=2y'-y[n+1]. If you use y=Cexp(ax) then you get Cexp(ax)=2aCexp(ax)-a^(n+1)Cexp(ax), or... 1=2a-a^(n+1). In the limit as n goes to infinity, it requires |a|
@IntegralKing
@IntegralKing Жыл бұрын
Oh, I've got one! What about y = y'' + y''' + y(5) + ... where the orders are all prime (2, 3, 5, 7, etc.)? Will that question wrap back to the Riemann zeta function?
@8_by_8_battleground
@8_by_8_battleground Жыл бұрын
Hi, Michael. For the general differential equation, I am getting two solutions. Either y can be ce^x or it can be a polynomial of degree (n+1) with the coefficient of the highest power being 0.5/(n+1)!.
@DTDTish
@DTDTish Жыл бұрын
We can also just plug in y=Ae^(kx) like we do for all constant coefficient linear ODEs, so we have y=y' + y'' + ... This gives us the characteristic equation 1=k+k^2+... We know that the geometric sum is 1+k+k^2+... = 1/(1-k), which is the RHS plus 1. So we have 1 = 1/(1-k) - 1, and we get k=1/2, so y=Ae^(x/2). The video did something very similar, but with operators
@wolfmanjacksaid
@wolfmanjacksaid Жыл бұрын
I would've never looked at that and gone "wow that's a geometric series!" Haha
@mathunt1130
@mathunt1130 Жыл бұрын
The answer to the question is simple. Look for a trial solution y=exp(mx), and you'll end up with a polynomial equation. Demonstrating that there are a finite number of solutions. You can't do this for an infinite series. My first thought was to take Fourier transforms.
@not_vinkami
@not_vinkami Жыл бұрын
Differentiation is so weird that it kinda just acts normally even when you treat it as fractions or exponents
@davidz5525
@davidz5525 Жыл бұрын
The sketchy solution is well, not so sketchy in my opinion. It’s actually called the von Neumann series, and the geometric series formula holds if the sum of the operators converge in the operator norm sense (which, in our case, is sort of given). The first solution however lacks formality of why you can take an infinite amount of derivatives and claim that it’s still well-behaved!
@honourabledoctoredwinmoria3126
@honourabledoctoredwinmoria3126 Жыл бұрын
I believe that it's just Neumann series. von Neumann was a different person.
@victorrielly4588
@victorrielly4588 Жыл бұрын
I believe it can be proven that the only solution is the one presented in the video. This is because any solution to y= y’ + y’’+… must also, by simple algebra satisfy y = 2y’ as was shown in the video, and the only solution that satisfies y = 2y’ is the presented solution. Indeed, suppose some other function solved the problem, call that function p(x), then p(x) = p’(x) + p’’(x) + …, but this means p(x) = p’(x) + (p’(x) + p’’(x) + …)’ so p(x) = 2p’(x) thus p satisfies y = 2y’
@-mystic-93
@-mystic-93 Жыл бұрын
I have never heard someone say "and so on and so forth" before
@IntegralKing
@IntegralKing Жыл бұрын
wait what? you can do those operations on operators?
@srahcir
@srahcir Жыл бұрын
Given the geometric operator (partial)sums has a (1-D)^-1 in the denominator, take a look at what happens if you apply (1-D) to both sides of the equations: In the question, you get y - y' = y' - y^(n+1) or y = y'' + y^(n+1). Looking at this as a matrix system of differential equation, you can solve this to get the n linearly independent solutions. In the follow-up, you go from y+y'+...+y^(k) = y^(k+1) +... to y-y^(k+1) = y^(k+1). But this is just y=2y^(k+1), which can also be solved as a system of equations C_0 e^(r_0 x) + ...+ C_k e^(r_k x). Afterwards you would still need to show these constructed solution is actually solve the original system.
@jamesn.5721
@jamesn.5721 Жыл бұрын
Can someone explain the reasoning behind the sum of the powers of the differential operator converging? Doesn't seem intuitive for me.
@tobysomething3742
@tobysomething3742 Жыл бұрын
I think the reason you don't have more solutions in the infinite case, is that the solutions are of the form c*e^(ax) where (sum from i=1 to n of a^i)=1, and apart from a around 0.5, the solutions for a approach the unit circle, in the limit once they "reach" the unit circle their powers can't sum to one, they must cancel to be 0, so the solutions apart from a=0.5 in the finite case don't have a corresponding infinite case solution
@shubhamjoshi1843
@shubhamjoshi1843 Жыл бұрын
At 3:30 why is C a real number? Couldn't C be complex as well?
@r.maelstrom4810
@r.maelstrom4810 Жыл бұрын
No way it's y + y' = y'' + y ''' at 10:00. It's y + y' = 2(y'' + y''').
@winter9753
@winter9753 Жыл бұрын
Do you think it would be possible to use the Fourier transform to solve this?
@usptact
@usptact Жыл бұрын
Are there any practical applications where such differential equations appear?
@sebaufiend
@sebaufiend Жыл бұрын
In quantum mechanics you can have operators such as exp[d/dx]. If you expand this in taylor series this means exp[d/dx]=1+d/dx+1/2*d^2/dx^2+1/6*d^3/dx^3....
@BenfanichAbderrahmane
@BenfanichAbderrahmane Жыл бұрын
The operator d/dx is not continuous so you can't do that I think ?
@byronwatkins2565
@byronwatkins2565 Жыл бұрын
C can also be complex. Since all of the terms are positive (except y), the vast majority of the characteristic equation roots are complex and the solutions oscillate. The infinite case has an infinite series as its characteristic equation and all of the coefficients (except a_0=-1) are +1. This infinite set of complex roots may well provide a corresponding infinite set of linearly independent solutions, but I suspect that very few will be useful.