Memoization: The TRUE Way To Optimize Your Code In Python

113,438 views

Indently

Comments: 154
@maroofkhatib3421 2 years ago
It's good that you showed how the memoization works, but there are built-in decorators for this exact process: we can use cache or lru_cache from the functools library, so we don't need to write the memoization function ourselves every time.
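For reference, a minimal sketch of that approach (assuming Python 3.9+ for functools.cache; on older versions lru_cache(maxsize=None) behaves the same):

from functools import cache

@cache  # results are stored keyed on the (hashable) arguments
def fibonacci(n: int) -> int:
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(50))  # 12586269025, returned almost instantly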
@Indently 2 years ago
True
@abdelghafourfid8216 2 years ago
It also has a more robust mapping than key = str(args) + str(kwargs), which is quite risky, and it's more efficient if the standard library uses C-level optimisations for the caching functions. So there really aren't many reasons to write your own caching.
@d4138 2 years ago
@abdelghafourfid8216 What would be a more robust mapping? And why is the current one not robust?
@abdelghafourfid8216 2 years ago
@d4138 Imagine a function with two arguments, arg1 and arg2: the current mapping will confuse (arg1="12, 45", arg2="67, 89") with (arg1="12", arg2="45, 67, 89"), and of course you can find infinitely many other cases like this. This behaviour is certainly not what you want in your code. You can make it safer by including the argument names and by making sure your mapping doesn't confuse different object types. So I'd just recommend using the built-in caching functions, which you can safely trust without worrying about the implementation.
@capsey_ 2 years ago
@abdelghafourfid8216 I agree with your point that str(args) + str(kwargs) isn't great for many reasons, but your example of confusing arguments is not one of them, because the repr of a tuple and a dict (which is what args and kwargs are, respectively) automatically adds parentheses, curly brackets and quotation marks around the values:

def func(*args, **kwargs):
    print(str(args) + str(kwargs))

func("12, 45", "67, 89")            # prints ('12, 45', '67, 89'){}
func("12", "45, 67, 89")            # prints ('12', '45, 67, 89'){}
func(arg1="12, 45", arg2="67, 89")  # prints (){'arg1': '12, 45', 'arg2': '67, 89'}
func(arg1="12", arg2="45, 67, 89")  # prints (){'arg1': '12', 'arg2': '45, 67, 89'}
@Bananananamann 2 years ago
To add to this nice video: memoization isn't just some random word, it's an optimization technique from the broader topic of "dynamic programming", where we try to remember steps of a recursive function. Recursive functions can be assholes and turn otherwise linear-time algorithms into exponential beasts. Dynamic programming is there to counter that, because sometimes it's easier to reason about the recursive solution.
@Indently 2 years ago
Very well said!
@Bananananamann 2 years ago
@Indently Great video though, I learned 2 new things: that we can create our own decorators easily, and how easy it is to apply memoization. I'm sure I'll use both in the future.
@7dainis777 2 years ago
Memoization is a very important concept to understand for improving code performance. 👍 I have used a different approach in the past for this exact issue: as a quick way, you can pass a dict as the second argument, which works as the cache.

def fib(numb: int, cache: dict = {}) -> int:
    if numb < 2:
        return numb
    else:
        if numb in cache:
            return cache[numb]
        else:
            cache[numb] = fib(numb - 1, cache) + fib(numb - 2, cache)
            return cache[numb]
@rick-lj9pc 2 years ago
Memoization is a very useful technique, but it trades increased memory usage (to hold the cache) for the extra speed. In many cases that is a good tradeoff, but it could also use up all of your memory if overused. For the Fibonacci function, an iterative calculation is very fast and uses a constant amount of memory.
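A minimal sketch of that iterative version (my own example, not code from the video): it keeps only the last two values, so memory stays constant.

def fib_iterative(n: int) -> int:
    a, b = 0, 1  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b  # slide the window forward by one
    return a

print(fib_iterative(50))  # 12586269025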
@capsey_ 2 years ago
I remember one time I was tinkering around with memoization of the Fibonacci function, and it was so fast I was almost frustrated by how effective it was. Out of curiosity I went for higher and higher numbers to see if it would ever slow down for at least half a second, and when I went for the billionth Fibonacci number my computer completely froze and I had to physically shut it down 💀
@Trizzi2931 2 years ago
Yes, for Fibonacci the iterative solution is better in terms of space complexity. But generally in dynamic programming both the top-down (memoization) and bottom-up (iterative) solutions have the same time and space complexity, because the height of the recursion tree is the same as the size of the array you create for the iterative solution, which is better than the brute-force solution or plain recursion.
@ResolvesS 2 years ago
Or instead, you don't use the recursive or the iterative approach: Fibonacci can be calculated by a closed-form formula in constant time and with constant memory.
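A sketch of that closed form (Binet's formula). Note that with floating-point arithmetic it only stays exact up to roughly F(70), so it is not a drop-in replacement for large n:

import math

def fib_binet(n: int) -> int:
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    return round(phi ** n / sqrt5)  # nearest integer to phi^n / sqrt(5)

print(fib_binet(10))  # 55
print(fib_binet(50))  # 12586269025 (still exact here; float precision runs out around n ≈ 70)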
@thisoldproperty 2 years ago
Let's be honest, there is a Fibonacci formula that could be implemented. The point here is the idea of caching, which I'd like to see expanded on. Great intro video to this topic.
@zecuse 2 years ago
Memory can be less of an issue if the application allows you to evict less frequently used cache items. In the Fibonacci case, the cache is really just reproducing an iterative implementation (backwards) and looking up the values.
@erin1569 2 years ago
Maybe some people don't realize why this works so well with Fibonacci and why they aren't getting similar results with loops inside their own functions. The decorator caches the function's return value (taking args and kwargs into account), which is a huge help here because the Fibonacci function is recursive: it calls itself, so each fibonacci(x) only has to be calculated once. Without caching, the Fibonacci function has to recompute every previous Fibonacci number from scratch, rerunning the same call an enormous number of times.
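A small sketch (my own illustration, not from the video) that makes the difference visible by counting calls:

call_count = 0

def fib_uncached(n: int) -> int:
    global call_count
    call_count += 1
    if n < 2:
        return n
    return fib_uncached(n - 1) + fib_uncached(n - 2)

fib_uncached(30)
print(call_count)  # 2692537 calls for a single fib(30)
# With memoization, each of the 31 distinct inputs is computed only once,
# so the same result takes just a few dozen calls.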
@castlecodersltd 2 years ago
This helped me have a light bulb moment, thank you
@HexenzirkelZuluhed 2 years ago
You do mention this at the end, but "from functools import lru_cache" (a) is in the standard library, (b) is even less to type, and (c) can optionally limit the amount of memory the memoization cache can occupy.
@hodsinay6969 5 months ago
And d) is thread-safe
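A sketch of that bounded variant: maxsize caps how many results are kept (least recently used entries get evicted), and cache_info()/cache_clear() let you inspect or reset the cache.

from functools import lru_cache

@lru_cache(maxsize=128)  # keep at most 128 cached results
def fibonacci(n: int) -> int:
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

fibonacci(100)
print(fibonacci.cache_info())  # hits, misses, maxsize, currsize
fibonacci.cache_clear()        # drop all cached entries if needed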
@adventuresoftext 2 years ago
This is definitely helping to boost performance a bit in the massive open-world text adventure I'm developing. Thank you for this tip!
@nameyname1447 1 year ago
Drop a link?
@adventuresoftext 1 year ago
@nameyname1447 A link for what?
@nameyname1447 1 year ago
@adventuresoftext A link to your project. Do you have a GitHub repository or a Replit or something?
@adventuresoftext 1 year ago
@nameyname1447 Oh no, it's not released yet; it still needs quite a bit of work. There are just a few videos about it on this channel.
@nameyname1447 1 year ago
@adventuresoftext Alright, cool! Good luck with it!
@IrbisTheCat 2 years ago
The key creation here seems risky, as in some odd cases two different (kw)args can end up as the same key. Example: args 1 with kwargs "2" versus args 12 with empty kwargs. I would recommend adding a special character between args and kwargs to avoid that.
@tucker8676 2 years ago
That would also be risky: what about args 1 and 2 vs 12? Or args containing the special character? If your args and kwargs are hashable, you could always index with the tuple (*args, InternalSeparatorClass, **kwargs as tuple pairs). The most reliable and practical way is really to use functools.cache or a variant, which does what I just described internally.
@Bananananamann 2 years ago
The key creation is very use-case dependent and should be thought about, true. For this case it works well.
@wtfooqs 2 years ago
I used a for loop for my Fibonacci function:

def fib(n):
    fibs = [0, 1]
    for i in range(n - 1):
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[n]

It ran like butter even at 1000+ as an input.
@skiminechannel 2 years ago
In this case you just build the memoization directly into your algorithm, which I think is the superior method.
@KosstAmojan 2 months ago
@skiminechannel In general, you are correct that this approach (iteration) is considered - and in fact is - superior. Memoization would be a complete waste of resources with it, because each fib value is only calculated once. The purpose of memoization is to bypass the same calculations being made repeatedly, which is a problem unique to recursion.
@swelanauguste6176 2 years ago
Awesome video. This is wonderful to learn. Thanks, I really appreciate your videos.
@CollinJS 2 years ago
It should be noted that generating keys that way can break compatibility with certain classes. A class implementing the __hash__ method will not behave as expected if you use its string representation as the key instead of the object itself: the purpose of __hash__ is lost and __str__ or __repr__ is used instead, which is neither reliable nor intended for that purpose. It's generally best to let objects handle their own hashing. I realize you can't cover everything in a video, so I wanted to mention it.

One solution would be to preserve the objects in a tuple: key = (args, tuple(kwargs.items())). Similarly, the caching wrapper in Python's functools module uses a _make_key function which essentially returns (args, kwd_mark, *kwargs.items()), where kwd_mark is a persistent object() that separates args from kwargs in a flattened list. Same idea, slightly more efficient.

As others have noted, I think you missed a good opportunity to talk about functools, but that may now be a good opportunity for a future video. Thanks for your time and content.
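A sketch of the decorator with that tuple-based key (my own adaptation, not the video's exact code); like functools.cache, it requires the arguments to be hashable.

from functools import wraps

def memoize(func):
    cache = {}

    @wraps(func)
    def wrapper(*args, **kwargs):
        # Keep the real objects in the key so their own __hash__/__eq__ are used
        key = (args, tuple(kwargs.items()))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return wrapper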
@Indently 2 years ago
I really appreciate these informative comments, they really make the Python community a better place - thank you for taking the time to write it! I will cover functools in a future lesson; I really wanted to get the basics of memoization out there so people had an idea of where to start. So thank you once again for your patience, and I hope to keep seeing these informative comments around the internet :)
@Yotanido 2 years ago
I actually think this isn't that great an example. It only works because the function recurses twice. Memoization is a great tool for pure functions that get called frequently with the same input. The recursive Fibonacci definition happens to do this, but it's still not a great implementation: an iterative approach can be even faster and won't use up memory. You could even memoize the iterative implementation, for quick lookups of repeated inputs, without wasting memory on all the intermediate values. Memoization is a powerful and useful tool, but it should be used where it's appropriate. In this case a better algorithm is all that is needed. (And you don't even need to change the recursion depth!)
@Indently 2 years ago
Please add resources to your claims so others can further their understanding as well :)
@Yotanido 2 years ago
@Indently Looks like links do indeed get automatically blocked. I'm guessing you can fix that on your end.
@Indently 2 years ago
The example I gave might not be the greatest, but it surely was one of the easiest ways to demonstrate it. I appreciate your informative comment, it's definitely something interesting to keep in mind :) Thanks for sharing! (I also unblocked the link)
@stefanomarchesani7684 6 months ago
First of all I want to praise you for your nice videos - I always enjoy them. That being said, I would like to point out a bug in your code. Since you use the string trick to create the key, if you call the function in the two equivalent ways fibonacci(50) and fibonacci(n=50), the two inputs are mapped to different strings, so the second call will not use the previously stored cache entry. I get that in the Fibonacci example this doesn't matter and that you are just showing an example of code that does memoization (not claiming any optimality), but in my opinion this should have been mentioned in the video.
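To make that concrete, here is what the video's key = str(args) + str(kwargs) produces for those two equivalent calls:

def show_key(*args, **kwargs):
    return str(args) + str(kwargs)

print(show_key(50))    # (50,){}
print(show_key(n=50))  # (){'n': 50}  -> a different key, so the cached result is missed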
@xxaqploboxx 7 months ago
Thanks a lot, this content is incredible for junior Python devs like me.
@sesemuller4086 2 years ago
In your Fibonacci implementation, f(n-2) + f(n-1) would be more efficient for the recursion because it reaches a lower depth sooner.
@wishu6553 2 years ago
Yeah, but then caching arg:return like this wouldn't be possible. I guess it's just a basic example.
@MrJester831 2 years ago
Another way to optimize your Python is to use a systems-level language and add bindings 🦀. This is why polars is so fast.
@YDV669 2 years ago
That's so neat. Python has a solution for a problem called Cascade Defines in the QTP component of an ancient language, Powerhouse.
@velitskylev7068 2 years ago
The lru_cache decorator is already available in functools.
@Indently 2 years ago
That would be a great 10-second tutorial.
@yacoubasylla7358 4 months ago
Exceptional! Thank you very much.
@arpitkumar4525 5 months ago
The first step is to avoid recursion if you can 😄 but this is nice to know in case I must use recursion.
@akashbhardwaj3531 11 days ago
Can this easily be achieved using lru_cache?
@jcdiezdemedina 2 years ago
Great video! By the way, which theme are you using?
@tobiastriesch3736 2 years ago
For primitive recursive functions, such as Fibonacci's series, tail recursion would also circumvent the issue with max recursion depth, wouldn't it?
@neilmehra_ 11 months ago
CPython doesn't do tail-call optimization, so tail recursion alone wouldn't get around the recursion limit - but by definition any tail-recursive function can be trivially converted to iterative, so you could go further and just implement an iterative version.
@91BJnaruto 2 years ago
I would like to know how you got that arrow, as I have never seen it before.
@Indently 2 years ago
It's a setting in PyCharm and other code editors. If you look up "ligatures" you might be able to find it for your IDE.
@FalcoGer 2 years ago
You can easily write fib with loops instead of recursion, saving yourself stack frames, stack memory, loads of time and your sanity. Recursion should be avoided whenever possible. It's slow, eats limited stack space, generates insane, impossible-to-debug stack traces and is generally a pain in the rear end. Caching results makes sense in some applications, but fib only needs 2 values to be remembered. Memory access, especially access larger than a page, or larger than the processor cache, is slow in its own right. An iterative approach also doesn't require boilerplate code for a caching wrapper. And of course you don't get max recursion depth errors from perfectly sane and valid user inputs if you don't use recursion. Which you shouldn't.

The naive recursive approach takes exponential time. The iterative approach only takes O(n). Memoization also takes this down to O(n), but you still get overhead from function calls and memory lookups. If you want fast code, don't recurse. If you want readable code, don't recurse. If you want easy-to-debug code, don't recurse. The only reason to recurse is if doing it iteratively hurts readability or performance, whichever is more important to you.

The max recursion value is there for a reason. Setting an arbitrary new value that's pulled out of your ass isn't fixing any problems, it just kicks the can down the road. What if some user wants the 10001st number? What you want is an arbitrary number. Putting in the user's input also is a really bad idea. Just... don't use recursion unless it can't be avoided.

Here are my results, calculating fibonacci(40) on my crappy laptop:

In [27]: measure(fib_naive)
r=102334155, 44.31601328699617 s
In [28]: measure(fib_mem)
r=102334155, 0.00019323197193443775 s
In [29]: measure(fib_sane)
r=102334155, 2.738495822995901e-05 s

As you can see, the non-recursive implementation is faster by another factor of 10, and it will only get worse with larger values. Of course calling the function again with the same value for testing in the interpreter is a bit of a mess; obviously an O(1) lookup of fib(1e9999) is going to be faster than an O(n) calculation. fib_naive and fib_mem are the same except for using your implementation of the cache. fib_sane is:

def fib_sane(n: int) -> int:
    p = 1
    gp = 0
    for _ in range(1, n):
        t = p + gp
        gp = p
        p = t
    return p
@supwut7292 1 year ago
You make great points in your post, however you missed the fundamental point of the video. It wasn't about whether the iterative approach is faster than the recursive approach, but rather about the fundamental idea of caching and exploiting the memory hierarchy. Furthermore, this is not just a theme in programming: it's a key part of computer architecture, software architecture, and processor design. It's almost guaranteed that a recursive function takes longer due to its fundamental nature, but by exploiting caching we avoid the expensive cost of repeating the same calculations exhaustively.
@gregoryfenn1462 15 days ago
Doesn't the cache dictionary {} get remade and emptied every time the function is called, since cache is a local variable?
@simonwillover4175 2 years ago
2:40 It should take about 26 hours, judging by the time it took to do fib(30). fib(40) would have been a better number to make your point.
@williamflores7323 2 years ago
This is SICK
@TiềnLêThanh-d4u 7 months ago
Could you please explain where it stores the cache?
@mithilbhoras5951 9 months ago
The functools library already has two memoization decorators: cache and lru_cache. So there's no need to write your own implementation.
@issaclifts 2 years ago
Could this also be used in a while loop, for example?

while a != 3000:
    print(a)
    a += 1
@IrbisTheCat 2 years ago
It doesn't seem so. It's for memoizing the result of a function that is called over and over with the same arguments, computing the same result each time.
@oskiral320 2 years ago
no
@4artificial-love 2 years ago
#fibonacci
time_start = time.time()
print('st:', time.time())
a, b = 0, 1
fb_indx = 36
ctn = 0
while ctn != fb_indx:
    print(b, end=' ')
    a, b = b, a + b
    ctn += 1
print(' ', 'fibonacci', fb_indx, ':', b)
print(' ended:', time.time(), ' ', 'timed:', time.time() - time_start)
# timed: 0.0030629634857177734
@bgdgdgdf4488 2 years ago
Lesson: stop using recursion because it's slow. Use while loops instead.
@OBGynKenobi 2 years ago
I used the same technique to minimize AWS Lambda calls to other services when the same return value is expected.
@adityahpatel 1 year ago
How is memoization different from the lru_cache you discussed in another video?
@anamoyeee 1 year ago
Why? The @cache decorator in the functools module does the same thing, and you don't have to write your own implementation - just "from functools import cache".
@djin81 1 year ago
fibonacci(43) generates more than a billion recursive calls to fibonacci(n) using that code. It's no wonder fibonacci(50) doesn't complete - think how many times it generates a fibonacci(43) call, each of which adds a billion more calls. There are only 50 unique return values, so the cached version is about 50 function calls versus literally billions of recursive function calls.
@j.r.9966 1 year ago
Why is there not a new cache defined for each function call?
@Dyras. 10 months ago
This implementation of memoization is very slow compared to using a hashmap or a 2D array directly in the algorithm, but it's a nice introduction to dynamic programming for beginners.
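For anyone curious, a bottom-up tabulation sketch of that idea (my own example; a one-dimensional table is enough for Fibonacci):

def fib_dp(n: int) -> int:
    if n < 2:
        return n
    table = [0] * (n + 1)  # table[i] will hold F(i)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_dp(50))  # 12586269025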
@weistrass 11 months ago
You reinvented the wheel.
@QWERTIOX 2 years ago
As a C++ dev, using recursion instead of a basic loop to calculate Fibonacci looks like overkill. Personally I would write it with a 2-element table and one boolean to switch the position where the newest calculated value gets stored, and do that in a loop n times.
@miltondias6617 2 years ago
It seems great to implement. Can I use it with any code?
@EvilTaco 2 years ago
Yes, but it's only going to help if that function is called a huge number of times with the same arguments. The reason the first implementation was so slow is that it never saved the results it returned, so it recalculated a shit-ton of values it had already calculated at some point previously.
@revenity7543 1 year ago
Why don't you use DP?
@shivamkumar-qp1jm 9 months ago
What is the difference between lru_cache and this?
@davidl3383 2 years ago
Thank you very much!
@nextgodlevel 1 year ago
I love your videos.
@gJonii 2 years ago
This is like the usual talk about memoization, but it's just slightly wrong everywhere. You don't use the standard library to import this functionality, yet you don't write case-specific code for this case either; instead, you try to write generic library code, badly. A fever-dream-ish quality to this video.
@aniketbose4360 1 year ago
Dude, I know this video is meant to show memoization, but in case you don't know, there is a formula for the nth Fibonacci number, and it's very simple too.
@Indently 1 year ago
I know :) thank you for bringing it up though!
@gregorymartin9091 2 years ago
Hello. Could you please explain which is more efficient: using @memoization or @lru_cache? Thank you, and congratulations on this really useful channel!
@PiletskayaV 2 years ago
I had never heard of "memoization" before, but during the first seconds of the video I just said "lol, just use the cache decorator". And then he started to implement it. A "same thing, different names" situation, I guess. My guess is that the main points of the video are:
- Show that such a thing as caching/memoization exists.
- Show how to implement it yourself so you have a deeper understanding of how it works under the hood. After you learn that, you can even implement it in other languages where you don't have it "out of the box".
@tipoima 2 years ago
Wait, so "memoization" is not a typo?
@Indently 2 years ago
A typo for what?
@tipoima 2 years ago
@Indently "memorization"
@Indently 2 years ago
@tipoima Oh yeah, ahaha, true. I also thought something similar when I first heard it.
@gabrote42 2 years ago
Whoah
@FTE99699 1 year ago
Thanks
@Indently 1 year ago
Thank you for the generosity! :)
@JorgeGonzalez-zs7vj 2 years ago
Nice!!!
@DivyanshuLohani 2 years ago
At 6:40, on line 28, there's 10_000 - what is that syntax?
@Indently 2 years ago
In Python you can use underscores as separators when typing numbers. The interpreter ignores them, but they're visually easy on the eyes.
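A quick illustration (the underscores are purely visual and change nothing about the value):

print(10_000 == 10000)  # True
print(1_000_000 + 1)    # 1000001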
@DivyanshuLohani 2 years ago
@Indently Oh OK, thanks.
@aneeshb8087 5 months ago
An explanation of how exactly the cache works in this program would be more helpful.
@frricbc4442 1 year ago
Can someone explain '->' to me? I am not familiar with it in Python.
@Harveyleegoodie 1 year ago
Pretty sure it annotates the function's return type. For example, def main(s) -> int: annotates the return value as an int, and def main(s) -> str: annotates it as a str. It's a type hint; Python doesn't enforce it at runtime.
@AcceleratedVelocity 1 year ago
memory leak go BRRRRRRRR
@idk____idk6530 2 years ago
Man, I'm wondering what would happen if we used this function in Cython code 💀.
@CEREBRALWALLSY 2 years ago
Can functools.lru_cache be used for this instead?
@JustMastermind 2 years ago
yes
@sir_damnkrat 2 years ago
It should.
@miguelvasquez9849 2 years ago
Awesome
@SP-db6sh 2 years ago
Just use the prefect library.
@romain.guillaume 2 years ago
I know it is for demonstration purposes, but this implementation of the Fibonacci sequence is awful, with or without the decorator. Without the decorator you have an O(exp(n)) program, and with it you have a memory cache which is useless unless you need the whole Fibonacci sequence. If you want to keep an O(n) program without the memory issue in this case, just do a for loop and update only two variables, a_n and a_n_plus_1. That way it is still an O(n) program but you store only two values, not n. I know some people will say this is obvious and that the example was chosen for demonstration, but somebody had to say it (if it hasn't been said already).
@Indently 2 years ago
If you have a better beginner example for memoization, I would love to hear about it so I can improve my lessons for the future.
@7DYNAMIN 10 months ago
The better way might be to use generator functions in Python.
@ThankYouESM 1 year ago
lru_cache seems significantly faster and requires less code.
@tmahad5447 1 year ago
Optimizing Python be like: feeding a turtle for speed instead of using a fast animal.
@Indently 1 year ago
But your slow turtle won't beat my fast turtle in a race then, and people care about the fastest turtle in this scenario
@overbored1337 1 year ago
The best way to optimize Python is to use another language.
@Indently 1 year ago
Like Spanish?
@shepardpower 15 days ago
@Indently Ah yes, Pitón
@robert_nissan 7 months ago
Powerful script, super 🎉
@richardbennett4365 2 years ago
I see his point. If his code cannot add up the numbers to Fibonacci(50) in less than a few seconds, he's got the wrong code (which is what he's demonstrating), or he's using the wrong programming language for the task. Everyone knows scientific problems are best handled in FORTRAN (or at least C or Rust), and this problem is pure arithmetic. Python is not the right language for this problem unless, of course, memoization is used.
@SourabhBhat 2 years ago
That is only partially right. Even though FORTRAN is better suited for scientific computing, efficient algorithms are very important. Try computing fib(50) using recursion in FORTRAN for yourself. How about fib(60) after that?
@richardbennett4365 2 years ago
@SourabhBhat You are correct 💯. The point is to make the algorithm as efficient as possible given the language you are presented with.
@nempk1817 2 years ago
Don't use Python = more speed.
@sangchoo1201 2 years ago
Fibonacci? Use the O(log N) method.
@spaghettiking653 2 years ago
You mean the Binet formula?
@sangchoo1201 2 years ago
@spaghettiking653 Matrix exponentiation.
@JordanMetroidManiac 2 years ago
Here's an O(1) function:

PHI = 5 ** .5 * .5 + .5
k1 = 1 + PHI * PHI
k2 = 1 + 1 / (PHI * PHI)

def fib(n):
    return int(.5 + .2 * (k1 * PHI ** n + k2 * (-1) ** n * PHI ** -n))
@sangchoo1201 2 years ago
@JordanMetroidManiac It's not O(1) - the ** is O(n) - and it doesn't work.
@JordanMetroidManiac 2 years ago
@sangchoo1201 I accidentally gave the formula for Lucas numbers. Also, the exponent operator is not O(n) lol.
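For reference, a sketch of the O(log N) idea mentioned above, using fast doubling (equivalent to 2x2 matrix exponentiation). This is my own example; it stays exact for arbitrarily large n because it only uses integers:

def fib_pair(n: int):
    # Returns (F(n), F(n+1)) via the fast-doubling identities:
    #   F(2k)   = F(k) * (2*F(k+1) - F(k))
    #   F(2k+1) = F(k)**2 + F(k+1)**2
    if n == 0:
        return 0, 1
    a, b = fib_pair(n // 2)
    c = a * (2 * b - a)
    d = a * a + b * b
    return (d, c + d) if n % 2 else (c, d)

print(fib_pair(50)[0])  # 12586269025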
@Shubham_Shiromani 1 year ago
from functools import wraps
from time import perf_counter
import sys

def memoize(func):
    cache = {}
    @wraps(func)
    def wrapper(*args, **kwargs):
        key = str(args) + str(kwargs)
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

def sum(n):
    s = 0
    for i in range(n):
        if i % 3 == 0 or i % 5 == 0:
            s = s + i
    return s

t = int(input().strip())
for a0 in range(t):
    n = int(input().strip())
    start = perf_counter
    print(sum(n))
    end = perf_counter

# For this code, it is not working -----------
@Fortexik 2 years ago
In functools there is a decorator for this: @cache or @lru_cache.
@Steven-v6l 5 months ago
Actually, the true way to optimize is to use a good algorithm. This requires thinking before you start typing.

Fibonacci(n): your first implementation is crap because it is glacially slow; your second implementation is crap because it needs a cache of unbounded size.

# calculate the n-th fibonacci number
# this implementation is fast,
# it requires no "extra" memory
# it is stack friendly -- unlike recursion.
# it does nothing hidden or unnecessary.
def F(n):
    if n < 2:
        return n
    fpp, fp = 0, 1
    while n > 1:
        f = fpp + fp
        fpp = fp
        fp = f
        n -= 1
    return f

Some data:

>>> def foo(n):
...     start = time.process_time()
...     F(n)
...     end = time.process_time()
...     print(end - start)
...
>>> foo(5)
3.900000000101045e-05
>>> foo(50)
5.699999999997374e-05
>>> foo(500)
0.00021399999999971442
>>> foo(5000)
0.002712000000000714
>>> foo(50000)
0.05559500000000028
>>> foo(500000)
2.9575280000000017
>>> foo(5000000)
291.43955300000005
>>>

By the way, F(5000000) ~ 7.108286 × 10^1044937 - that's 1,044,938 decimal digits. I'm running Python 3.9.6 on a MacBook Pro, w/ M1 Pro chip.
@richardboreiko 1 year ago
That was interesting and effective. I tried using 5000 and got an error: Process finished with exit code -1073741571 (0xC00000FD). I started looking for the upper limit on my Windows PC, and it's 2567; at 2568 I start to see the error. It may be because I have too many windows, each with too many tabs, so I'll have to try it again after cleaning up my windows/tabs. Or it may just be a hardware limitation of my PC. Still, it's incredibly fast. Thanks!

Also, I just checked the error message with OpenAI (since everybody's talking about it lately) and it said this:
=======================================
Exit code -1073741571 (0xC00000FD) generally indicates that there was a stack overflow error in your program. This can occur if you have a function that calls itself recursively and doesn't have a proper stopping condition, or if you have a very large number of nested function calls.
To troubleshoot this error, you will need to examine your code to see where the stack overflow is occurring. One way to do this is to use a debugger to step through your code and see where the error is being thrown. You can also try adding print statements to your code to trace the flow of execution and see where the program is getting stuck.
It's also possible that the error is being caused by a problem with the environment in which your program is running, such as insufficient stack size or memory. In this case, you may need to modify the environment settings or allocate more resources to the program.
If you continue to have trouble troubleshooting the error, it may be helpful to post a more detailed description of your code and the steps you have taken so far to debug the issue.
=======================================
@dcknature 2 years ago
This reminds me of learning to multiply at school a long time ago 🧓. Thanks for the tutorial video 😊! likes = 57 😉👍
@4artificial-love 2 years ago
I believe that simple is better... and faster...

#fibonacci
time_start = time.time()
print('st:', time.time())
a, b = 0, 1
fb_indx = 10000 - 1
ctn = 0
while ctn != fb_indx:
    #print(b, end=' ')
    a, b = b, a + b
    ctn += 1
print(' ', 'fibonacci', fb_indx, ':', b)
print(' ended:', time.time(), ' ', 'timed:', time.time() - time_start)
# timed: 0.005985736846923828
@mx-kd2fl 2 years ago
OMG! You can just use functools.cache...
@Indently 2 years ago
Right, the perfect way to teach how something works is by using pre-made functions.
@Armcollector77 2 years ago
@Indently Thanks for the video, great content. No, you are right that it is not a good way to teach, but mentioning them at the end of your video is probably a good idea.
@Indently 2 years ago
@Armcollector77 That part I can accept, I will try to remember it for future lessons :)