solving an infinite differential equation

109,691 views

Michael Penn


Chalk found Smol Math Man pacing back and forth. "what's wrong Michael? Cat got your tongue?" said Chalk in a pompous manner. "This differential equation...it's...it's infinite...I don't know if I can solve it." Chalk looked Michael in the eye, "I believe in you, Michael. You can solve it." Then the differential equation swallowed the smol math man whole. Did Michael escape? Find out at the end of the credits of the video after this one.
🌟Support the channel🌟
Patreon: / michaelpennmath
Merch: teespring.com/stores/michael-...
My amazon shop: www.amazon.com/shop/michaelpenn
🟢 Discord: / discord
🌟my other channels🌟
mathmajor: / @mathmajor
pennpav podcast: / @thepennpavpodcast7878
🌟My Links🌟
Personal Website: www.michael-penn.net
Instagram: / melp2718
Twitter: / michaelpennmath
Randolph College Math: www.randolphcollege.edu/mathem...
Research Gate profile: www.researchgate.net/profile/...
Google Scholar profile: scholar.google.com/citations?...
🌟How I make Thumbnails🌟
Canva: partner.canva.com/c/3036853/6...
Color Palette: coolors.co/?ref=61d217df7d705...
🌟Suggest a problem🌟
forms.gle/ea7Pw7HcKePGB4my5

Comments: 394
@ashtabarbor3346
@ashtabarbor3346 Жыл бұрын
Props to the editor of these videos for adding the best video descriptions on KZbin
@MichaelPennMath
@MichaelPennMath Жыл бұрын
Awww thank you very much! that means a lot to me. -Stephanie MP Editor
@danyilpoliakov8445
@danyilpoliakov8445 Жыл бұрын
Don't you dare like the Editor's reply one more time. It is nice as it is 😅
@jonasdaverio9369
@jonasdaverio9369 Жыл бұрын
​@@danyilpoliakov8445 It's still holding
@jongyon7192p
@jongyon7192p Жыл бұрын
An infinite differential equation SCP that becomes a bear and eats you
@Errenium
@Errenium Жыл бұрын
nice pfp
@a52productions
@a52productions Жыл бұрын
Arguably the first method is also sketchy! I was always taught that that recursive method of dealing with infinite sums is dubious unless you can prove it converges another way afterwards. In this case convergence and equality is very easy to show, but that method can fail pretty badly for not-obviously-divergent divergent sums.
@TaladrisKpop
@TaladrisKpop Жыл бұрын
Yes, for example, you can get the infamous 1+2+4+8+16+...=-1 or 1-1+1-1+1+...=1/2
@thomasdalton1508
@thomasdalton1508 Жыл бұрын
Yes, if you are going to use that kind of method you really should check the solution actually works. In this case, you'll get 1/2+1/4+1/8+... which does converge and converges to 1, which is exactly what we need.
@Owen_loves_Butters
@Owen_loves_Butters Жыл бұрын
Yep. Hence why you'll find videos online claiming 1+2+3+4+5...=-1/12, or 1+2+4+8+16...=-1 (both are nonsense results because you're trying to assign a value to a series that doesn't have one)
@gauthierruberti8065
@gauthierruberti8065 Жыл бұрын
Thank you for your comment, I was having that same doubt but I didn't remember if the first method was or wasn't allowed
@plasmaballin
@plasmaballin Жыл бұрын
This is correct. However, the solution obtained in the video can easily be shown to converge, so it is valid.
@terpiscoreis9908
@terpiscoreis9908 Жыл бұрын
Hi, Michael! This is a great problem. You can see that the original does have infinitely many solutions (well, let's say candidates for solutions) by making a different choice of where to start the infinite sum on the right-hand side. For instance, with y = y' + y'' + y''' + y^(4) + ..., instead move y' and y'' to the left-hand side to obtain: y - y' - y'' = y''' + y^(4) + ... = D^2(y' + y'' + y''' + ...) = D^2(y) = y''. Thus the solutions to y - y' - 2y'' = 0 are also solutions to the infinite-order differential equation. We recover e^(x/2) as a solution but also obtain a "new" one: e^(-x). However, the infinite sum of derivatives here doesn't converge. By an analogous argument, it looks like the solutions to y - y' - y'' - ... - 2y^(n) = 0 for a positive integer n might solve the infinite-order differential equation -- assuming the infinite sum of derivatives converges.
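A small numeric illustration of that convergence point (my own sketch, not from the video; the test point x = 0.3 is arbitrary): for y = e^(ax), the partial sums of y' + y'' + ... behave completely differently for a = 1/2 and for the "new" candidate a = -1.

```python
import math

# Partial sums of y' + y'' + ... + y^(N) for y = e^(a*x):
# they equal (a + a^2 + ... + a^N) * e^(a*x), so they settle down only if |a| < 1.
def partial_sum(a, x, N):
    return sum(a ** k for k in range(1, N + 1)) * math.exp(a * x)

x = 0.3  # arbitrary test point
for a in (0.5, -1.0):  # the video's exponent vs the extra e^(-x) candidate
    print(a, [round(partial_sum(a, x, N), 4) for N in (5, 10, 11, 20, 21)])
# a =  0.5: the partial sums approach e^(0.15), i.e. y itself, so the equation holds.
# a = -1.0: the partial sums jump between -e^(-0.3) and 0 and never converge.
```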
@DanielBakerN
@DanielBakerN Жыл бұрын
The sketchy solution is similar to using the Laplace transform.
@nafrost2787
@nafrost2787 Жыл бұрын
I think using a Laplace transform is a slightly better solution, because it justifies treating the derivative operator as a number in the geometric series formula, since (if I remember things correctly) in the s domain the derivative operator becomes a number. Using the Laplace transform also, if it doesn't solve them completely, at least simplifies the ODEs given at the end of the video to polynomial equations that can be solved numerically, and it also helps explain why there is only one solution to the ODE of infinite order, even though in every finite case there are n solutions. This comes from the fact that a power series can have any number of roots, even though the nth partial sum has n roots (it is a polynomial of degree n); for example, exp doesn't have any roots, even complex ones, and of course sin and cos have an infinite number of roots.
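For what it's worth, here is a rough sketch of that s-domain reduction (my own; the initial-condition terms of the transformed derivatives are dropped, so this is heuristic rather than a proof): with L{y^(k)} ~ s^k·Y, the equation becomes Y = (s + s^2 + ...)·Y = (s/(1-s))·Y for |s| < 1, so a nonzero Y forces s/(1-s) = 1.

```python
import sympy as sp

# Heuristic s-domain version of y = y' + y'' + ... (initial-condition terms dropped).
s, k = sp.symbols('s k')
geom = sp.summation(s**k, (k, 1, sp.oo))    # piecewise: s/(1 - s) when |s| < 1
print(geom)
print(sp.solve(sp.Eq(s / (1 - s), 1), s))   # [1/2]  ->  y = C*exp(x/2)
```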
@ManuelFortin
@ManuelFortin Жыл бұрын
Regarding the missing infinity of solutions, one way of seeing where they go seems to be as follows. Differential equations of the form y = y'+y''+...+y(n) (y(n) is the nth derivative, not to be confused with y evaluated at n) are known to have solutions that are linear combinations of e^(ax), and we need to find the right "a". There are n "a" values. However, only one of them has |a| < 1. At least this is what it seems from playing with Wolfram Alpha up to n = 20. The problem is that y(n) = (a^n) y. Since |y|>0, if |a|>1, the value of y(n) diverges as n goes to infinity, whatever x is in y(x). Therefore, these solutions are not well-behaved, and we need to set their coefficient to zero in the general solution (a linear combination of e^(ax)), otherwise y is not defined. I guess there is a way to prove that only one of the roots has |a| < 1.
@whatthehelliswrongwithyou
@whatthehelliswrongwithyou Жыл бұрын
but doesn't y diverge if a>0, not a>1? Also, leaving only non-divergent solutions is a great argument in physics, but here they are still solutions; there's nothing bad about divergence at infinity. At least that's what I think, I might be wrong
@whatthehelliswrongwithyou
@whatthehelliswrongwithyou Жыл бұрын
oh, the sum of derivatives doesn't converge at fixed x, then it's a problem
@user-sk5zz5cq9y
@user-sk5zz5cq9y Жыл бұрын
@@whatthehelliswrongwithyou yes, y diverges as x approaches infinity if a is positive; he was talking about the existence of the solution
@ManuelFortin
@ManuelFortin Жыл бұрын
@@whatthehelliswrongwithyou Yes, that's what I meant. Sorry for the late reply.
@martinkuffer5643
@martinkuffer5643 Жыл бұрын
We know the Cs are the roots of the characteristic polynomial of the equation. There are n roots (counting multiplicity) of a polynomial of degree n and thus n solutions. In the new equation this still holds, but now you have a "polynomial of infinite degree", i.e. a non-polynomial analytic function. These can have any number of roots (by the procedure you showed, where the roots go to infinity as you add terms to the series), and thus there can be any number of solutions to our original equation :)
@lucyg00se
@lucyg00se Жыл бұрын
This was such a fun one. You're absolutely killing it man
@trevorkafka7237
@trevorkafka7237 Жыл бұрын
Answer to the question about the finite version: If y=y'+y''+...+y^(n) and we substitute y=e^(kx), we get 1=k+k²+...+k^n, so 1=((1-k^(n+1))/(1-k))-1. This can be rearranged to k^(n+1)-2k+1=0. In the limit as n->infinity, we can see that we must restrict |k|≤1. Furthermore, it's obvious k≠0, so 0<|k|≤1.
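A quick numerical look at that characteristic equation (my own sketch; the choice n = 12 is arbitrary):

```python
import numpy as np

# Roots of k^(n+1) - 2k + 1 = 0, dropping the spurious root k = 1.
n = 12
coeffs = [1] + [0] * (n - 1) + [-2, 1]     # k^(n+1) + 0*k^n + ... + 0*k^2 - 2k + 1
roots = np.roots(coeffs)
roots = roots[np.abs(roots - 1) > 1e-8]    # discard k = 1
print(np.sort(np.abs(roots)))
# One modulus sits near 0.5; the others hug the unit circle, so only the
# k ≈ 1/2 branch survives the |k| ≤ 1 restriction as n grows.
```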
@fmaykot
@fmaykot Жыл бұрын
I'm afraid the limiting procedure in case 2 is a bit more subtle than that. You did not take into account the fact that both r and θ can (and in fact do) depend on n. If θ ~ α/(n+1) as n -> inf, for example, then r ~ 1 as n -> inf and α = 2*pi*m for integers 0 < m ≤ n.
@Horinius
@Horinius Жыл бұрын
@10:15 y + y' ≠ y'' + y''' "But I'll let you do it as homework" 😆😆
@weeblol4050
@weeblol4050 4 ай бұрын
trivial y + y' = y' + 2y''
@16sumo41
@16sumo41 Жыл бұрын
Lovely problem! And lovely follow-up question ^^. There's something really aesthetically pleasing about this problem. Maybe it has to do with the perceived difficulty of solving it ending in a really nice and simple solution. Lovely.
@stratehorthy3351
@stratehorthy3351 Жыл бұрын
Here's one simplification to the last set of differential equations:
y + y' + y'' + ... + y^(n) = y^(n+1) + y^(n+2) + ... --- (1)
Adding y' + y'' + ... + y^(n) to both sides we get:
y + 2(y' + y'' + ... + y^(n)) = y' + y'' + ... --- (2)
Differentiating (1) then adding y' + ... + y^(n+1) to both sides we get:
2(y' + y'' + ... + y^(n+1)) = y' + y'' + ... --- (3)
Comparing (2) and (3) we get y = 2y^(n+1), which matches the start of the problem. If y = Ce^(ax), we can find that a is an (n+1)th root of 1/2. I wonder if there are other solutions too!
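A quick check of that reduction for a small case (my own sketch; n = 2 and the branch m = 1 are arbitrary choices):

```python
import cmath

# For the variant with the equals sign after y^(n), the reduction above gives
# y = 2*y^(n+1), so y = C*e^(a*x) should work whenever a^(n+1) = 1/2.
n = 2
m = 1                                                    # pick one of the n+1 branches
a = (0.5 ** (1 / (n + 1))) * cmath.exp(2j * cmath.pi * m / (n + 1))
print(a ** (n + 1))                                      # ~0.5, so y = 2*y''' holds

# The full infinite equation 1 + a + a^2 = a^3 + a^4 + ... also checks out,
# since |a| < 1 lets us sum the right-hand tail as a geometric series:
print(abs((1 + a + a**2) - a**3 / (1 - a)))              # ~0
```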
@danielrettich3083
@danielrettich3083 Жыл бұрын
I really liked the "sketchy" method, probably because I'm a physicist xD, and thus tried it on this generalized form of the problem. And it actually leads to the same simplified differential equation you got, namely y=2y^(n+1), which I find absolutely amazing
@PleegWat
@PleegWat Жыл бұрын
@@danielrettich3083 Same here. Remember to include all n+1 (complex) branches of (n+1)√2 to get all solutions.
@weeblol4050
@weeblol4050 4 ай бұрын
good job
@driksarkar6675
@driksarkar6675 Жыл бұрын
I think for the general problem at 9:07, you can just apply the first method, so you get y+y'+...+y(n) = y(n+1) + (y+y'+...+y(n))'. When you expand the derivative, everything except y(n+1) and y cancels out, so you get y=2*y(n+1). From there it's relatively straightforward, and you get y=C*e^(x/(2^(1/(n+1))*e^(2*pi*i*m/(n+1)))) for a real number C and an integer m. That means that you actually have n+1 families in general, so the full solution is a linear combination of these.
@rohitashwaKundu91
@rohitashwaKundu91 Жыл бұрын
Yes, I have done the same thing but isn't the solution coming as y=Ce^(x/(2^(1/n)))?
@mathieuaurousseau100
@mathieuaurousseau100 6 ай бұрын
@@rohitashwaKundu91 It should be y=Ce^(ax) where a^(n+1)=1 (with C a complex number; I don't know why they said real), and the numbers such that a^(n+1)=1 are e^(2*m*pi*i/(n+1)) with m an integer between 0 and n (inclusive)
@kasiphia
@kasiphia Жыл бұрын
I think a really good idea for a follow-up video would be an explanation of why we don't have infinitely many linearly independent functions that solve the equation. Or perhaps they do exist, and that could be shown. I've noticed that when substituting in these infinitely recursive relationships, we often lose generality. For example, for the function y=x^x^x^x^x... we can do a similar substitution as we did in the video and find that y=x^y, which produces many solutions but only converges for 1/e^e ≤ x ≤ e^(1/e).
@TaladrisKpop
@TaladrisKpop Жыл бұрын
As every time when using algebraic manipulations with series (or, more generally, limits), one should carefully check the convergence. Without it, the first method only shows that IF a solution exists, then it has to be of the form y=Ce^(x/2)
@honourabledoctoredwinmoria3126
@honourabledoctoredwinmoria3126 Жыл бұрын
It's a fair point, but Y(n) of Ce^ax = (a^n)Ce^ax. So what we actually have here on the RHS is a geometric series (1/2 + 1/4 + 1/8...)Ce^(x/2), and on the left: Ce^(x/2). They equal each other if and only if that geometric series converges to 1, and of course it does. It's a valid solution, and I suspect it is the only valid solution. There are other apparent solutions, but they do not actually converge.
@TaladrisKpop
@TaladrisKpop Жыл бұрын
@@honourabledoctoredwinmoria3126 Yes, convergence is not difficult to check, but it shouldn't be left out
@broccoloodle
@broccoloodle Жыл бұрын
Well, you first assume a solution exists, you find all solutions, then later on you remove all solutions that do not converge. I find nothing wrong about that logic
@TaladrisKpop
@TaladrisKpop Жыл бұрын
@Khanh Nguyen Ngoc Did I say the opposite? But where in the video do they eliminate the divergent solutions? If not done, the solution of the problem is incomplete.
@broccoloodle
@broccoloodle Жыл бұрын
@@TaladrisKpop I think verifying the solutions not diverging is too obvious that Michael chose not to show it on the video. What he wanted to deliver to us is actually the second way and triggering our curiosity on additional problems in the video.
@mizarimomochi4378
@mizarimomochi4378 Жыл бұрын
If you decide to associate the 3rd derivative and onwards, you get that that part is the 2nd derivative of y, from which you get y = y' + 2y'', and the family of solutions y = c_1 e^(x/2) + c_2 e^(-x). So we do get infinite families of solutions, but it's a matter of where we associate. If we start with the 4th derivative, we'll get 3 solutions, as we have y in terms of the first, second, and third derivatives. And so on.
@patato5555
@patato5555 Жыл бұрын
You can take this a bit further by noting the characteristic polynomial of keeping the first n derivatives will factor as (r-1/2)(1+r+r^2+…+r^(n-1)). In general, y=ce^(rx) where r=1/2 or r is a root of 1+r+r^2+…+r^(n-1) for some n. Of course, there could be more solutions than these.
@mizarimomochi4378
@mizarimomochi4378 Жыл бұрын
@@patato5555 I agree. Except they'd be roots of 2x^n + x^(n - 1) + ... + x - 1 if I'm not mistaken.
@patato5555
@patato5555 Жыл бұрын
@@mizarimomochi4378 if you set the expression equal to 0, divide by 1/2 and then factor out the r-1/2 they will be equivalent.
@mizarimomochi4378
@mizarimomochi4378 Жыл бұрын
@patato5555 Sorry, I didn't notice the first time. My bad.
@patato5555
@patato5555 Жыл бұрын
@@mizarimomochi4378 No worries!
@matthewrorabaugh1497
@matthewrorabaugh1497 Жыл бұрын
For one of the follow-on questions there is a cute result which pops up. f=f'+...+f(n) when n is congruent to 1 mod 4. In that case you can use a sine function because the other derivatives cancel themselves out. I was looking for ways to fit this self-canceling concept into the other finite equations, but I have been unsuccessful.
@thegozer100
@thegozer100 Жыл бұрын
Some information I found on the question: solving the differential equation for finite amount of terms is the same as solving the equation 1=sum_{j=1}^n a^j, where I used y=exp(a*x) as a trial function. When I plot all the solutions for a large n, the solutions lie on the unit circle in the complex plane, except for one point. The point that is supposed to be at a=1 lies at a=1/2. This would mean that when we take the limit as n goes to infinity all the points on the unit circle would somehow "cancel out" and the point at a=1/2 would remain.
@oni8337
@oni8337 Жыл бұрын
how could i have forgotten about complex number branches
@jakubszczesnowicz3201
@jakubszczesnowicz3201 Жыл бұрын
I love the sketchy proof!!! Operator analysis looks so wild without context though. Like, that whole segment around 5:30 is crazy. If I saw (1 - D)^-1 as a high school student I would be mindblown, my teacher wouldn’t be able to hear the end of it
@Yossus
@Yossus Жыл бұрын
I love these videos for two reasons: one, the insight on the maths itself, two, the insight on how to cleanly draw the symbols!
@dmytryk7887
@dmytryk7887 Жыл бұрын
For the truncated version: y=y'+y''+y'''+...+y(n), let r be a root of x+x^2+x^3+...+x^n=1. Then it is easy to show that y=exp(rx) is a solution to the truncated equation. Since there are n such roots, this gives you the basis of the expected n-dimensional solution space: exp(r_1 x), exp(r_2 x), ..., exp(r_n x). Now the hand-wavy part: as n approaches infinity, the equation x+x^2+...+x^n=1 approaches x/(1-x)=1, which has the unique solution x=1/2 as found in the video. Not really satisfying. I feel there is a nicer geometric argument, but I don't see it as of now.
@alexsokolov8009
@alexsokolov8009 Жыл бұрын
You can simplify your characteristic equation using formula for sum of geometric series: (x^(n+1) - x) / (x - 1) = 1 which is the same as x^(n+1) - 2*x + 1 = 0, x != 1 It is easy to show that the function f(x) = x^(n+1) - 2*x + 1 has exactly 2 real roots for odd n and 3 real roots for even n. Excluding x=1 will give us 1 or 2 real solutions depending on parity of n. I guess these observations show that an infinite equation from the video has no more than 2 real solutions. However, there are complex solutions, which should also be considered
@davidblauyoutube
@davidblauyoutube Жыл бұрын
I immediately thought of the "sketchy" solution with D as a linear operator 😆. When the characteristic "polynomial" is actually not a polynomial because it lacks a finite degree, then usually there's some formula that can be applied to its coefficients (otherwise, how would you define it?). In that case, my hunch is that there's some manipulation that can be performed along the lines of techniques used with generating functions and recursive sequences that will produce a diffeq having an order equal to the degree of the formula.
@PeterBarnes2
@PeterBarnes2 Жыл бұрын
I prefer using a slightly more direct approach to using linear operators. [1]y = [1/(1-D_x) - 1]y {|y'/y| < 1 (?)} (This is equivalent to the given equation, in terms of Differential Operators, with the condition (which might not be necessary) coming from 1/1-s having a pole at s=1. This pole should manifest as divergence in certain exponential solutions, namely those with parameter 's' (from e^sx) outside the radius of convergence of this 'definition of 1/1-s.' I say it 'should' manifest this way, but this theory is not developed enough to be certain of the divergence, at least to my knowledge. Fortunately the final solution satisfies this condition anyway, so it is not repeated.) 0 = [1/(1-D_x) - 2]y (Moving terms between sides of the equation, as both operators are operating on the same term 'y.') 1/(1-s) - 2 = 0 (The exponential solutions of any (there is a theorem I've discovered, more or less, to this generalization from polynomials to any function, indeed) Constant-Coefficient Linear DE are found by using the characteristic equation to find the eigenfunctions of the form e^sx, with s the characteristic equation's independent variable.) 1 - 2(1-s) = 0 -1+2s = 0, s=1/2 (Just algebra, here. Having solved for 's,' e^sx are our eigenfunctions, thus:) y = Ce^(x/2) Really a very short and simple approach. Now, if you want a more difficult approach, you can use the fact that [1/(s-D_x)] is a variation of the Laplace transform, remembering that [e^(bD_x)]f(x) = f(x+b) and int{0, inf} e^-at dt = 1/a and then you can try to solve the resulting integral equation. It's a good bit of fun, and certainly possible, if a little unnecessary in this problem. [Edit: I did this without watching the video first. My mistake, it's almost exactly as presented! Oh well...]
@ilonachan
@ilonachan Жыл бұрын
What's really great here is that we don't actually need to get all that convoluted to get rid of the sketchiness, and just not do the step with the weird "function division" thing. While we often write the geometric formula as that ratio, its derivation works in any ring if we just skip that final simplification! So with our present ring of linear functors, where addition is adding the results, multiplication is chained application, and division is not generally defined, we can still just skip directly from the (1)y=(sum)y description to the (1-D)y=Dy statement. ...although, does D^(n+1) "converge" in some meaningful way? that'd be required for the infinite case, right? the finite case ofc just gives us a relatively simple degree n+1 differential equation, but I forget how exactly those are solved rn...
@PeterBarnes2
@PeterBarnes2 Жыл бұрын
​@@ilonachan x^n doesn't converge over all x. The domain for D^n to converge over is the space of functions. That's a pretty broad domain, so I prefer to stay within the complex meromorphic functions. (Which, despite including complex functions, is much more restrictive and well-behaved.) I'm pretty sure of these two things: One of these extended differential operators f(D_x) converges for an exponential function e^sx if and only if the function f(s) converges at 's.' As well, polynomials converge if f(0) converges, and polynomials times exponentials P(x)e^sx converge when e^sx converges. This much I'm fairly confident about. Further, other functions than exponentials or polynomials converge for a given differential operator depending on how the function is expressed. For example, a taylor series may diverge on its terms alone, but an exponential times a taylor series may converge absolutely, even when the exponential times the series equals the original series. More than that, integral expressions of some function might converge or diverge if they contain exponential terms that remain inside or go outside, respectively, the domain of convergence of the differential operator. This much is actually given (I think) by the previous thing. I have no idea about functions which are in no way expressed as exponentials or polynomials. Not just regarding their convergence under various differential operators, but even how to evaluate them. There is something which can, theoretically, help. Functions of the derivative applied to functions of the variable can be reversed: [f(D_x)] (g(x)*y(x)) = [[g(D_z + s)]{z=D_x} f(z)]{s=x} (y(x)) It's messy, but cleans up when y=1: [f(D_x)] g(x) = [g(D_z + x)]{z=0} f(z) This allows you to evaluate some expressions more easily. Because it's easy to evaluate exponentials of derivative operators (e^bD is the shift operator by 'b'), and polynomials are basically given (D^p is the pth derivative operator for p a natural number) you can basically evaluate any differential operator on functions expressed in terms of exponentials and polynomials. This works when the exponentials or polynomials are under an integral, or in a sum, or up a tree, anything! (By 'up a tree' I'm not actually referring to anything specific. For example, I don't mean towers of exponentials: I am still working on exponentials of polynomials e^(x^p), as they do not behave at all. [e^e^D]y=0 might be the DE for which the gamma function is the solution. Or maybe not, it's hard to tell. Maybe with a minus sign somewhere, but then it doesn't work, it's rather confusing, actually.) The fact that exponentials behave better than polynomials motivates me to try and express one in terms of the other. So far I've found one expression which requires a limit, which isn't satisfactory. I've looked at distributions (a generalization of functions), and found a way of getting to it from what are basically derivatives of the sign() function. This, interestingly, gives the exact same result with the limit and everything. I've looked at expressing the logarithm, which also gives the same exact result. Maybe thinking from polylogarithms, or something else entirely? Very uncertain.
@sirlight4954
@sirlight4954 Жыл бұрын
D is an unbounded operator, so the geometric series requires some assumptions to be made for it to converge
@PennyAfNorberg
@PennyAfNorberg Жыл бұрын
@@sirlight4954 I guess that's why the solution was sketchy, and I started thinking about how to check that |D| < 1
@aceofhearts37
@aceofhearts37 Жыл бұрын
For the follow-up questions, you can bracket the first one as (y + y') = (y'' + y''') + (y^(4) + y^(5)) + ..., and therefore defining z = y + y' this becomes z = z'' + z^(4) + ..., so the differential equation can be solved in two steps. This generalizes to the n case by defining z = y + y' + ... + y^(n) so that the DE can be rewritten as z = z^(n+1) + z^(2n+2) + ..., which by the same method used in the first half can simplify to z = 2z^(n+1). Then you get a sum of exponentials in the complex roots of 1/2 and throw that mess into the RHS of y + y' + ... + y^(n) = z. So y(x) will ultimately be a sum of complex exponentials but I imagine the coefficients would get messy fairly quickly. Edit: changed n to n+1 in the RHS of the rewritten equation, I had counted that wrong. Edit 2: actually not that bad, check replies.
@aceofhearts37
@aceofhearts37 Жыл бұрын
So, actually not that messy. From now on I'll use Σ to mean the sum from k=0 to k=n. The solution to z = 2z^(n+1) is a function of the form z(x) = Σ (A_k)exp[(λ_k)x], where the A_k are any complex numbers and λ_k = [(1/2)^(n+1)] exp(2kπi/(n+1)) is one of the (n+1)st roots of 1/2. Therefore, the solution to y + ... + y^(n) = z will have a homogeneous part (a sum of exponentials involving the roots of 1 + λ + ... + λ^n = 0) and a particular solution, which we can assume has the form z(x) = Σ (B_k)exp[(λ_k)x], for some coefficients B_k that we have to compute. By comparing with the RHS we get (1+λ_k+...+λ_k^n)B_k = A_k, which by the partial sum of a geometric series and λ_k^(n+1) = 1/2 simplifies to B_k = 2A_k(1-λ_k). Since A_k can be chosen to be any complex number, B_k is also any complex number since 2(1-λ_k) is always nonzero. Then if we want real solutions we can pick the B_k to be complex conjugates as needed.
@Joe-nh9fy
@Joe-nh9fy Жыл бұрын
@@aceofhearts37 This is what I worked out as well. Well actually I got y = 2y^(n+1) instead of z. I get this by using the original equation, and a second equation which is the derivative of the first equation. Solve for y^(1) in both equations. Then set those expression equal to each other and solve for y. But I believe your general function is the solution for y
@matteopriotto5131
@matteopriotto5131 Жыл бұрын
​@@aceofhearts37 lambda_k should be {(1/2)^[1/(n+1)]}exp(2k(pi)i/(n+1)) I think
@aceofhearts37
@aceofhearts37 Жыл бұрын
@@matteopriotto5131 You're right, good catch.
@matteopriotto5131
@matteopriotto5131 Жыл бұрын
@@aceofhearts37 glad I helped
@chimetimepaprika
@chimetimepaprika Жыл бұрын
Ahh, three seconds in, "The trivial solution works beautifully."
@cara-seyun
@cara-seyun Жыл бұрын
0 = 0 + 0 + 0 + 0…
@BackflipsBen
@BackflipsBen Жыл бұрын
That perfect infinity symbol at 4:45 touched my soul
@ntuneric
@ntuneric Жыл бұрын
i think some insight for the question at 7:23 is that the differential equation with finite number of terms n corresponds to a characteristic polynomial of degree n that has n roots, whereas the infinite one's polynomial is a power series which has a single root
@anasselmoubaraki9410
@anasselmoubaraki9410 Жыл бұрын
To answer your question, Mr. Penn: I think that having one solution is a consequence of the analyticity of the solution, and having an infinite sum forces the coefficients (a_k) in the analytic expression to be defined uniquely. Thank you for your amazing videos.
@diszno20
@diszno20 Жыл бұрын
I would love to see what happens when you choose different constants for the different derivatives, e.g. y = sum {from k=1 to inf} (1/k) y^{(k)}. Also it would be fun to plug in some crazy sequence of constants, i.e. define a_k to be the kth digit of pi and calculate y = sum a_k y^{(k)}
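Playing with the first suggestion using the same exponential trick (my own back-of-the-envelope, with the same convergence caveats as in the video): for y = e^(ax), the equation y = sum_{k>=1} y^(k)/k becomes 1 = sum_{k>=1} a^k/k = -ln(1-a) for |a| < 1, which pins down a = 1 - 1/e.

```python
import math

# Exponential ansatz for the suggested variant y = sum_{k>=1} y^(k)/k.
a = 1 - math.exp(-1)                      # solves -ln(1 - a) = 1, a ≈ 0.632
partial = sum(a**k / k for k in range(1, 200))
print(a, partial)                         # the truncated sum is ≈ 1, as required
```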
@petersievert6830
@petersievert6830 Жыл бұрын
10:09 That is most definitely wrong. I think, it must be y + y' = y' + 2y''
@krisbrandenberger544
@krisbrandenberger544 Жыл бұрын
No. y+y'=2(y''+y''') from doing something similar with the goal equation.
@petersievert6830
@petersievert6830 Жыл бұрын
@@krisbrandenberger544 Well, I am not wrong, I dare say. Your equation is correct as well though. You cut off beginning after y''' and made the rest into (y+y')'', while I did so after y'' and made the rest into (y+y')'. Honestly my equation seems much more futile for getting to a solution though.
@JamesLewis2
@JamesLewis2 Жыл бұрын
When you started the "sketchy solution" I thought that you were going to start grouping from later in the equation, something like noting that y=y′+y″+(terms of the original expansion)″ and then getting the spurious solution family y=ce^−x, which if back-substituted results in basically saying that Grandi's series converges to −1; related to that, if you group it off after the nth derivative, you get an equation with characteristic polynomial 2r^n+r^(n−1)+r^(n−2)+…+r^2+r−1, which factors as (2r−1)(r^(n−1)+r^(n−2)+…+r^2+r+1), and the zeroes are ½ and the roots of unity other than 1, corresponding to spurious solutions equating 1 to the sum of a divergent series with terms that oscillate around the unit circle.
@krisbrandenberger544
@krisbrandenberger544 Жыл бұрын
Hey, Michael! So for the general case of the follow up question, we would have: y+y'+...+y^(n)=2*(y+y'+...+y^(n))^(n+1)
@sleepycritical6950
@sleepycritical6950 Жыл бұрын
Jesus christ you came at the right time i love yooooooouuuuuu i needed this desperately
@jiantaoxiao2481
@jiantaoxiao2481 Жыл бұрын
Here's an operator ordering issue. You have to prove D commutes with 1/(1-D) before acting on both the LHS and RHS with 1-D. It truly is (1-D)y=((1-D)D(1-D)^(-1))y.
@jamiewalker329
@jamiewalker329 Жыл бұрын
Err, that's trivial, the commutator of any function of an operator with any other function of that same operator is 0. Non trivial commutation relations come from operators being distinct, or distinct components of vector operators.
@reeeeeplease1178
@reeeeeplease1178 Жыл бұрын
You can "factor" a D out from the series *to the right* and then use the geometric series trick to avoid this problem
@jiantaoxiao2481
@jiantaoxiao2481 Жыл бұрын
@@jamiewalker329 yes. You are right. [f(D), g(D)]=0
@jiantaoxiao2481
@jiantaoxiao2481 Жыл бұрын
@@reeeeeplease1178 yes. Thanks.
@jiantaoxiao2481
@jiantaoxiao2481 Жыл бұрын
f and g have D^n as a basis, and the coefficients of the D^n should be constants.
@Tehom1
@Tehom1 Жыл бұрын
Did Michael escape? Will he be able to cut his way out of the belly of beast with only the Heaviside operator? Stay tuned, viewers! 😮
@anthonypazo1872
@anthonypazo1872 Жыл бұрын
"Okay. Nice." 😂😂❤❤ love it every time I hear that.
@anggalol
@anggalol Жыл бұрын
Well, that is totally unexpected to separate the differential operator💀
@kitochizxik5786
@kitochizxik5786 Жыл бұрын
Hi Kurisu
@donmoore7785
@donmoore7785 Жыл бұрын
Very thought provoking. I honestly found the "sketchy" solution very sketchy - I didn't understand the manipulations of the D operator.
@5911_Rockets
@5911_Rockets Жыл бұрын
Thank you sir, really helpful 🙏🇮🇳
@8_by_8_battleground
@8_by_8_battleground Жыл бұрын
Hi, Michael. For the general differential equation, I am getting two solutions. Either y can be ce^x or it can be a polynomial of degree (n+1) with the coefficient of the highest power being 0.5/(n+1)!.
@NathanSimonGottemer
@NathanSimonGottemer Жыл бұрын
How do you know the sum on the right hand side converges? If you are working with a domain of real numbers for y the sum should diverge if x is positive, which makes me feel like this is a sort of Ramanujan-tier cheat code solution. Of course I still think it means something, just not the whole picture…if we take y(0)=0 then the Laplace transform will converge for |s|
@Blackmuhahah
@Blackmuhahah Жыл бұрын
Extending the case with finite n to solutions of the form y=e^(a x) you get 1=a+a^2+...+a^n. In the limit as n->\infty you get a=e^(i\phi), where 0 < \phi < 2\pi.
@chrisdupre2862
@chrisdupre2862 Жыл бұрын
I don’t know if this has been answered or not already, but one way to look at the non-existence is via the Fourier transform (a favorite for constant coefficient linear ODE). After some manipulation, you can see that the solution must solve \Lambda^{n+1} =2\Lambda -1. Now suppose n goes off to infinity. We break up looking for roots into three options: the modulus of lambda is greater than, equal to or less than one. In the greater than case, we cannot solve this as the left hand side is much much bigger than the right. In the equal to, the left hand side does not have a limit, so what do we even mean! In the less than case, the term tends to 0, so 2\Lambda -1 = 0 which recovers our start. Heres a follow up: is there a distribution of solutions around the unit circle that this approach’s? Is there a meaningful “Distribution of other oscillatory solutions at infinity “ ? Great video! It’s fun to see the resolvent pop up in the sketchy side!
@Qhartb
@Qhartb Жыл бұрын
The question I thought of as soon as I saw it was: y = y'/1! + y''/2! + y'''/3! + ... So a Taylor-series-looking differential equation. Possibly an application of your "what's exp(D)" result from another video?
@Kapomafioso
@Kapomafioso Жыл бұрын
I also thought about that and how the argument shifts when exp(D) is applied. Then the equation essentially becomes: f(x+1) = f(x), which is a functional equation for any periodic function with period 1, instead of a differential equation. Infinite series of derivatives be weird and exotic like that. Sometimes it's not a differential equation at all, despite looking like one.
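A quick numeric check of the exp(D) shift identity this subthread leans on (my own sketch, using f = sin, whose derivatives cycle): summing f^(k)(x)/k! over k >= 0 reproduces f(x+1). Note that the suggested equation's sum starts at k = 1, so its right-hand side corresponds to f(x+1) - f(x).

```python
import math

# Check exp(D) f(x) = f(x+1) for f = sin by truncating the Taylor-like sum.
x = 0.4
derivs = [math.sin(x), math.cos(x), -math.sin(x), -math.cos(x)]   # f, f', f'', f'''
lhs = sum(derivs[k % 4] / math.factorial(k) for k in range(40))   # sum of f^(k)(x)/k!
print(lhs, math.sin(x + 1))                                       # both ≈ sin(1.4)
```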
@deathguitarist12
@deathguitarist12 Жыл бұрын
I haven't worked this out, but I see a common element between the infinite differential problem and the finite problem. The solution to the infinite differential equation can in fact be written as a linear combination of functions y_i, if you were to expand the exponential Cexp(x/2) as a Taylor series. My suspicion is that the solution to the finite differential version of this would just be the n-term Taylor expansion of the exponential solution. But I can't be sure without working it out.
@scarletevans4474
@scarletevans4474 Жыл бұрын
So... will there be some follow-up videos? Or are we just left with these questions that will never be answered??
@GeoffryGifari
@GeoffryGifari Жыл бұрын
On the follow-up question, in the video you shifted the equal sign to the nth sum. Now, can we do this indefinitely, shifting the equal sign to the right to somehow "invert" the sum of derivatives? y + y' + ... + y^(n) = y^(n+1) + y^(n+2) + ..... to maybe lim m -> infinity { y + y' + ... + y^(m-1) = y^(m) ..... } ?
@JohnSmith-zq9mo
@JohnSmith-zq9mo Жыл бұрын
Note that we have a similar case for ordinary algebraic equations: the equation 1+x+x^2/2+..+x^n/n!=0 has n complex solutions, but if we take the limit we get an equation with no solutions.
@nablahnjr.6728
@nablahnjr.6728 Жыл бұрын
I love how you can also easily build even and odd parts of this equation using the solution, which satisfy y_even = [y(x)+y(-x)]/2 = y''_even + y''''_even and so on
@thespiciestmeatball
@thespiciestmeatball Жыл бұрын
That was sick!
@sergeyd5777
@sergeyd5777 Жыл бұрын
Brilliant!
@marchenwald4666
@marchenwald4666 Жыл бұрын
As a general solution to the problem around 9:00 : For n terms on the left, the functions satisfying the equation are y = C * e ^ ( ( (1/2) ^ (1/n) ) * x )
@HarmonicEpsilonDelta
@HarmonicEpsilonDelta 8 ай бұрын
I find absolutely game changing the fact that applying the geometric series worked 😮😮
@srahcir
@srahcir Жыл бұрын
Given the geometric operator (partial)sums has a (1-D)^-1 in the denominator, take a look at what happens if you apply (1-D) to both sides of the equations: In the question, you get y - y' = y' - y^(n+1) or y = y'' + y^(n+1). Looking at this as a matrix system of differential equation, you can solve this to get the n linearly independent solutions. In the follow-up, you go from y+y'+...+y^(k) = y^(k+1) +... to y-y^(k+1) = y^(k+1). But this is just y=2y^(k+1), which can also be solved as a system of equations C_0 e^(r_0 x) + ...+ C_k e^(r_k x). Afterwards you would still need to show these constructed solution is actually solve the original system.
@BrandonJensen-sl8sk
@BrandonJensen-sl8sk Жыл бұрын
For a finite number of terms can you use formula for the sum of a finite geometric series and manipulate the equation that way?
@byronwatkins2565
@byronwatkins2565 Жыл бұрын
C can also be complex. Since all of the terms are positive (except y), the vast majority of the characteristic equation roots are complex and the solutions oscillate. The infinite case has an infinite series as its characteristic equation and all of the coefficients (except a_0=-1) are +1. This infinite set of complex roots may well provide a corresponding infinite set of linearly independent solutions, but I suspect that very few will be useful.
@tobysomething3742
@tobysomething3742 Жыл бұрын
I think the reason you don't have more solutions in the infinite case is that the solutions are of the form c*e^(ax) where (sum from i=1 to n of a^i)=1, and apart from a around 0.5, the solutions for a approach the unit circle. In the limit, once they "reach" the unit circle, their powers can't sum to one (they must cancel to 0), so the solutions apart from a≈0.5 in the finite case don't have a corresponding infinite-case solution
@patato5555
@patato5555 Жыл бұрын
When you move the equals sign around you don’t actually change the problem much. If our cut off is y+y’+…+y^(n) = y^(n+1) + … Then let g=y+y’+…+y^(n), and rewrite the RHS in terms of derivatives of g.
@TechnocratiK
@TechnocratiK 7 ай бұрын
The 'sketchy' approach is probably made a bit more formal by taking the Laplace transform of both sides. The result is then that Y = (s / (1 - s)) Y, and the solution follows multiplying through by (1 - s) and taking the inverse transform. This also permits us to consider solutions to y + y' + ... y(n) = y(n + 1) + ..., (where y(k) is the kth derivative of y) since we would have: (1 - s ^ (n + 1)) / (1 - s) Y = (s ^ (n + 1)) / (1 - s) Y Rearranging, Y = 2 s ^ (n + 1) Y and transforming back: y = 2 y(n + 1) The resulting basis of n+1 functions is z_k ^ x for k = 0..n where z_k are the n+1 complex roots of 1/2 (a real basis also exists). The case solved in this video was n = 0. There are two assumptions made here. First, that the solution y has a Laplace transform and, second, that the resulting geometric series converges (i.e., |s| < 1). Disregarding the second assumption, we can then ask (for n = 0) whether there exists |s| >= 1 for which s + s ^ 2 + ... = 1.
@jevinleno2670
@jevinleno2670 Жыл бұрын
Hey Michael, for the first method - doesn't the sum law for derivatives only hold for finite sums? This method seems like it needs further justification.
@blackfalcon594
@blackfalcon594 Жыл бұрын
A nice (and seemingly related) parallel: The polynomial equation 1 = sum_{j=1}^n x^j has degree n and so has n (possibly complex) solutions. But when we take the infinite sum, 1 = sum_{j>=1} x^j = x/(1-x) for |x| < 1, we only get one solution, not infinitely many.
@Minecraft2331
@Minecraft2331 Жыл бұрын
I'm pretty sure you would just apply an infinite reduction of order, and basically end up with the sum from n=0 to infinity of (c_n*x^n*e^(x/2)) basically creating a general solution for the infinite roots. I think some sort of induction proof that this format of (c_n*x^n*e^(x/2)) works for any n in the naturals combined with a sequence proof to show this works as n->infinity would be necessary to prove this is the case, but that's my initial intuition of how to show there are in general infinite general solutions.
@haziqthebiohazard3661
@haziqthebiohazard3661 Жыл бұрын
Off the top of my head my guess was exp(x/2)
@skvortsovalexey
@skvortsovalexey Жыл бұрын
C*exp(x/2)
@deehobee1982
@deehobee1982 Жыл бұрын
The differentiation operator is unbounded, so it's dubious to factor it out of an infinite sum like you did. I think what you've done here is solve the corresponding "infinite characteristic equation" for this DE, but that certainly doesn't show that the infinite sum of exponentials converges to e^(x/2).
@habibullah-ki7ok
@habibullah-ki7ok Жыл бұрын
You are absolutely right. The differential operator is not continuous on the space of smooth functions C^(\infty). Moreover, you need the norm of D to be less than 1 to guarantee the sum makes sense. Nonetheless, this can be saved. Restrict the domain to the set of functions with norm less than one. Certainly the family Ce^{ax} is in this set for |a| < 1.
@deehobee1982
@deehobee1982 Жыл бұрын
@habib ullah Haha, that sounds right. Thanks. I think another comment gave a procedure to generate a completely different DE using the same logic
@DavidSavinainen
@DavidSavinainen Жыл бұрын
For the case y + y' + ... + y(n) = y(n+1) + ... you get, by the sketchy solution,
y + y' + ... + y(n) = (D^[n+1]/(1-D)) y
(1-D)(y + y' + ... + y(n)) = y(n+1)
Notice that the LHS telescopes, giving only y - y(n+1) = y(n+1), or in other words, y(n+1) = y/2, which has the solution set y = C exp[x/α] where α = 2^[1/(n+1)] * exp[2ikπ/(n+1)] for all integers k such that 0 ≤ k ≤ n
@lunstee
@lunstee Жыл бұрын
Careful with the telescoping; it only works correctly on the RHS infinite series when abs(D) < 1
@techno2371
@techno2371 Жыл бұрын
I did it in a less elegant way: Since this is a homogeneous differential equation with constant coefficients, you assume the solution is of the form ce^(rx). Differentiating this solution and dividing by ce^(rx) (it can never be 0) you get 1=r+r^2+r^3+... Adding 1 to both sides gives you 2=1+r+r^2+r^3+...=1/(1-r) (|r| < 1).
@IntegralKing
@IntegralKing Жыл бұрын
Oh, I've got one! What about y = y'' + y''' + y(5) + ... where the primes are all prime (2, 3, 5, 7, etc.)? Will that question wrap back to the Riemann zeta function?
@pedroricardomartinscasella641
@pedroricardomartinscasella641 Жыл бұрын
That was a very interesting way to solve the problem.
@wolfmanjacksaid
@wolfmanjacksaid Жыл бұрын
I would've never looked at that and gone "wow that's a geometric series!" Haha
@DM31415
@DM31415 6 ай бұрын
How would it be if you had an alternating infinite sum of derivatives with different coefficients?
@NikitaGrygoryev
@NikitaGrygoryev Жыл бұрын
I have a pretty hand-wavy explanation for the uniqueness of the solution; for something more precise you might need to start thinking harder about what functions we are talking about. So for finite n you would solve the equation by the substitution y=Exp(Ax). The characteristic equation is 1-2A+A^(n+1)=0 (where you should discard A=1). It's easy to see that in the limit as n goes to infinity there's a unique solution with |A| < 1.
@seneca983
@seneca983 Жыл бұрын
9:20 Is there a nice solution? My answer is "yes". Just try a function of the form C*exp(k*x). You get the equation: 1+k+k^2+...+k^n = k^(n+1)+k^(n+2)+k^(n+3)+... Take k^(n+1) as a common factor on the right side: 1+k+k^2+...+k^n = k^(n+1)*(1+k+k^2+...). Apply the formula for the geometric sum on the left and that of the geometric series on the right: (1-k^(n+1))/(1-k) = k^(n+1)/(1-k). Cancel out the common denominator and rearrange to get the following equation: k^(n+1) = 1/2. The solutions for k are just (1/2)^(1/(n+1)) times the appropriate roots of unity. Technically, I've not proven that there aren't solutions that aren't of exponential form, but that seems pretty intuitive.
@LarkyLuna
@LarkyLuna Жыл бұрын
That's just a frequency domain transform and a reverse, right? For the sketchy part You'd just have to worry about convergence before using it
@winter9753
@winter9753 Жыл бұрын
Do you think it would be possible to use the Fourier transform to solve this?
@AJMansfield1
@AJMansfield1 Жыл бұрын
I'd suspect that there's a complex-valued exponent that generates another family of solutions here. Using a sinusoid you get the same sort of geometric sequence behavior along the alternating terms that correspond to the sin and cos coefficients... though I'm not sure how you'd be able to remove the phase offset that this process would produce.
@sebaufiend
@sebaufiend Жыл бұрын
The first method I thought was neat. I used geometric series but I didn't see a need to go through all that operator business. Something we learned in diffeq is that any linear differential equation system with constant coefficients will have solutions of the form A*exp(mx). Thus, making this substitution into the equation, we get 1=m+m^2+m^3+... The right-hand side is very close to a geometric series, which has the sum 1+r+r^2+r^3+...=1/(1-r), so if we subtract 1 from both sides we get r/(1-r)=r+r^2+r^3+... Subbing this into our equation we get 1=m/(1-m). The only value that gives us a solution is m=1/2. Thus the solution is y=C*exp(1/2*x)
@mathunt1130
@mathunt1130 Жыл бұрын
The answer to the question is simple. Look for a trial solution y=exp(mx), and you'll end up with a polynomial equation, demonstrating that there are a finite number of solutions. You can't do this for an infinite series. My first thought was to take Fourier transforms.
@grayjphys
@grayjphys Жыл бұрын
It doesn't seem that the result is the standard result from the geometric series: (1-x^n)/(1-x), which goes to 1/(1-x) when |x| < 1 and n goes to infinity. Is it different for operators? Also, what does it mean for D < 1?
@IBH94
@IBH94 Жыл бұрын
Well, I got the initial solution by realizing that an exponential function will result in a geometric sum converging to 1; that's how I got the 1/2… From there I realized that any parameter smaller than 1 would make a converging geometric sum, and you can just subtract whatever the sequence converges to and add 1 to balance the equation (the constant will disappear after the first derivative).
@elephantdinosaur2284
@elephantdinosaur2284 Жыл бұрын
Looking at y = y' + ... + y^(n) has n independent solutions of the form y = a*exp(rx) where r is a root of r^n + ... + r = 1. This polynomial equation has the same roots as r^(n+1) - 2r + 1 = 0 excluding r = 1. Most of the roots of this polynomial equation lie outside the unit circle. If there was a root inside the complex unit circle with |r| < 1 then |1 - 2r| = |r|^(n+1) ~ 0 which heuristically implies r ~ 1/2. Working in the real numbers similarly shows there's a real root close to 1/2. Thus besides r ~ 1/2 all the other roots have |r| > 1. This of course isn't a rigorous proof, but just shows the intuition behind it. So in a non-rigorous way in the limit as n goes to infinity, the r ~ 1/2 solutions to the nth degree DEs converges to y = c exp(x/2) but all the |r| > 1 solutions have to die off otherwise they would introduce divergences.
@hqTheToaster
@hqTheToaster Жыл бұрын
I had a fantasy image in my head that looks like this: " (derivative[sqrt(10)timesover] y) + (derivative[10timesover] y) + (derivative[10sqrt(10)timesover] y) + (derivative[100timesover] y) + ... " and soon enough I knew it was time to try this video. These videos are good for those that do and don't listen alike. I'm sure you probably prefer the people that do or are more likely to listen; just wanted to let you know that I thought of you and/or your channel in a sincere way. Also, I think ?/2 shows up in your video because of the way the inherent limit would work as the tally marks approach infinity in 3 or more different ways. I'm not a calculus expert. That is just what I think.
@Pklrs
@Pklrs Жыл бұрын
how can we be sure about the convergence of (1/1-D)Y ? How is it even defined?
@Firnen13
@Firnen13 Жыл бұрын
Are there any physical problems where this kind of differential equation appears in physics or higher-order math theory?
@user-wt6vo5jg7k
@user-wt6vo5jg7k 10 ай бұрын
How to prove that the series of D^n converges and for which norm ?
@petersamantharadisich6095
@petersamantharadisich6095 Жыл бұрын
For the finite sum, I get Cexp(ax) as a solution where a is a solution to the polynomial 2a - a^(n+1) - 1 = 0. I get this by noting y' = y'' + y''' + ... + y[n+1], so we have y = 2y' - y[n+1]. If you use y = Cexp(ax) then you get Cexp(ax) = 2aCexp(ax) - a^(n+1)Cexp(ax), or... 1 = 2a - a^(n+1). In the limit as n goes to infinity, it requires |a| < 1.
@jamesn.5721
@jamesn.5721 Жыл бұрын
Can someone explain the reasoning behind the sum of the powers of the differential operator converging? Doesn't seem intuitive for me.
@davidz5525
@davidz5525 Жыл бұрын
The sketchy solution is, well, not so sketchy in my opinion. It's actually called the von Neumann series, and the geometric series formula holds if the sum of the operators converges in the operator norm sense (which, in our case, is sort of given). The first solution however lacks a formal justification of why you can take an infinite number of derivatives and claim that it's still well-behaved!
@honourabledoctoredwinmoria3126
@honourabledoctoredwinmoria3126 Жыл бұрын
I believe that it's just Neumann series. von Neumann was a different person.
@EqSlay
@EqSlay Ай бұрын
The similarities between this diffeq and the power series of exp() are interesting.
@olli3686
@olli3686 Жыл бұрын
10:14 Wait, what happened? He just completely ignored the remainder of D4y to DNy. If y + D1y = D2y + D3y + D4y + … + DNy, then why just entirely drop the 4th derivative etc.????
@stewartcopeland4950
@stewartcopeland4950 Жыл бұрын
it's more like y + y' = 2 * (y'' + y''')
@CISMarinho
@CISMarinho Жыл бұрын
As @stewart said: y'' + y''' + y⁽⁴⁾ + y⁽⁵⁾ + … = (y + y' + y'' + y''' + …)'' = (y + y' + (y + y'))'' = 2(y + y')'' = 2(y'' + y''')
@dzuchun
@dzuchun Жыл бұрын
I have quite a strong feeling of anxiety about using a limit on OPERATORS or factoring a derivative out of an INFINITE SUM. I'm quite sure there are additional conditions on these sorts of operations
@qcard76
@qcard76 Жыл бұрын
How could you solve y = sum(d^i y /dx^i) where the sum is taken over only prime indices i? i.e., the RHS is the sum of prime-th derivatives of y
@typha
@typha Жыл бұрын
notice y = y' + y'' + (y'+y''+y'''+...)'' = y' + 2y''. This gives you additional extraneous solutions that look like e^(-x), for which the series doesn't actually converge. Similarly we can find that y = y'+y''+2y''', and get a few more 'solutions', but they don't converge either actually. Maybe there are ones that do converge, maybe there aren't; I could do some more work and see but I'm in a bit of a hurry right now so someone else will have to :P
@Anonymous-zp4hb
@Anonymous-zp4hb Жыл бұрын
Isn't the general solution just e^(x/2^(1/n))? If the n=1 case is the main problem and n=2 is the first follow-up etc.. In each problem, the right-hand side remains unchanged after taking the derivative, then adding the nth derivative of y. Do the same to the left and most terms cancel: 2 (d/dx)^n f_n(x) = f_n(x) And so: f_n(x) = e^(x/2^(1/n)) is a solution to the nth case.
@rainerzufall42
@rainerzufall42 8 ай бұрын
How do you know, that |D| < 1 for convergence?
@guerom00
@guerom00 Жыл бұрын
Is there a justification that the geometric series formula "seems to work" with a differential operator ?
@DTDTish
@DTDTish Жыл бұрын
Not a mathematician, but my guess is that it is because it's linear. We can also just plug in y=Ae^(kx) like we do for all constant-coefficient linear ODEs, so y=y'+y''+... gives us the characteristic equation 1=k+k^2+..., and we use the geometric sum from there. This basically does the same thing as the linear operator method, but a bit simpler (adding numbers instead of operators)
@guerom00
@guerom00 Жыл бұрын
​@@DTDTish yeah... Somehow, i don't have a problem with an object like exp(D) cause this series has an infinite radius of convergence. Here, i try to wrap my head around what a finite radius of convergence for this series means when applied to differential operators :)
@59de44955ebd
@59de44955ebd Жыл бұрын
On the first follow-up question y + y' = y{2} + y{3} + y{4} + ... : Taking the second derivative on both sides we get: y{2} + y{3} = y{4} + y{5} + y{6} + ..., and hence: y + y' = 2 * (y{2} + y{3}) (this factor 2 was missing in the video). By substituting z for y + y' we get z'' = 1/2 * z and therefore a solution z = c * e^(x/sqrt(2)). A simple real solution that solves the substitution, and therefore the original equation, is y = c * e^(x/sqrt(2)).
@59de44955ebd
@59de44955ebd Жыл бұрын
Concerning the general equation y + y{1} + ... + y{n} = y{n+1} + ..., if we substitute z for y + y{1} + ... + y{n}, we get z{n+1} = 1/2 * z, and y = c * e^(x/(2^(1/(n+1)))) is always a (trivial) solution.
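A quick numeric sanity check of the n = 1 case above (my own sketch): for y = e^(ax) with a = 1/sqrt(2), the equation y + y' = y'' + y''' + ... reduces to 1 + a = a^2 + a^3 + ... = a^2/(1-a), which needs only |a| < 1.

```python
import math

# Check 1 + a = a^2/(1 - a) for a = 1/sqrt(2), i.e. y = e^(x/sqrt(2)).
a = 1 / math.sqrt(2)
lhs = 1 + a
rhs = sum(a**k for k in range(2, 200))   # truncated tail of the geometric series
print(lhs, rhs, a**2 / (1 - a))          # all three ≈ 1.7071
```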
@marc-andredesrosiers523
@marc-andredesrosiers523 Жыл бұрын
Would have been nice to explore it as a limit of linear system of differential equations. 🙂
@curtiswfranks
@curtiswfranks Жыл бұрын
I kinda think that this is why there is one basic solution.
@9WEAVER9
@9WEAVER9 Жыл бұрын
This is refreshing to learn around exposure to the Riccati equation.
@danrakes2667
@danrakes2667 Жыл бұрын
In response to your question at around 8:10, in the infinite series of y' example you DO get an infinite result. The infinite series is trapped inside e!
@gonzalodiaz2752
@gonzalodiaz2752 Жыл бұрын
How do you know that series converges? It's a linear operator. That is not an obvious assumption
@garyknight8966
@garyknight8966 Жыл бұрын
For class II, using the same method as first used, y+y' = y''+D(y+y')=2y''+y'; so y=2y'' with solution y=Cexp(x/\sqrt 2)+Dexp(-x/sqrt2) . The two independent parts arise because we implicitly involve the second derivative. Note the exponent factors 1, -1 are square roots of 1. The next class produces y=2y''' with solution y = Cexp(x/(2^1/3))+Dexp([]x/(2^1/3))+Eexp([]x/2^1/3) with [] the other cube roots of 1: -1/2+-\sqrt3/2 . Three independent parts due to a third derivative. And so forth ...
@garyknight8966
@garyknight8966 Жыл бұрын
Oops .. the last [] factors I meant to be complex: -1/2+- i\sqrt3 /2 (of course). So these involve trigonometric functions (the even or odd components of exp (i \theta) )
@princeofrain1428
@princeofrain1428 Жыл бұрын
My first intuition of the solution (as someone who doesn't like to think too much) was "what if it was some exponential function whose power was a series that converged to 1 on the range 1 to infinity?"
@DTDTish
@DTDTish Жыл бұрын
We can also just plug in y=Ae^(kx) like we do for all constant-coefficient linear ODEs, so we have y=y'+y''+..., which gives us the characteristic equation 1=k+k^2+... We know that the geometric series sums to 1+k+k^2+... = 1/(1-k), which is the RHS plus 1. So we have 1 = 1/(1-k) - 1, and we get k=1/2, i.e. y=Ae^(x/2). The video did something very similar, but with operators.