A similar video by Matt Parker (Stand-up Maths) "Someone improved my code by 40,832,277,770%." I was part of the team that optimized his 1 month solution down to 300 microseconds. We submitted ours kind of late so he wasn't able to cover our big algorithmic changes, but many of the techniques you mention here applied there as well.
@sahilverma_dev2 ай бұрын
"from this little youtuber primeOgen"
@PP-ss3zf2 ай бұрын
trying to understand if he's just trolling, he said it so seriously
@sahilverma_dev2 ай бұрын
@@PP-ss3zf what do you think
@PP-ss3zf2 ай бұрын
@@sahilverma_dev my comment.. thats what i think
@Iron_spider992 ай бұрын
@@PP-ss3zf think some more
@yassinesafraoui2 ай бұрын
@@PP-ss3zf he's obviously trolling, he knows him
@haronxbelghit2 ай бұрын
18:00 no coincidence: the range 97 to 122 sits inside (32*3 = 96) to (32*4 = 128), so mod 32 gives each lowercase letter a distinct remainder, spanning 1 to 26
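A tiny standalone check of that observation (my own sketch, not code from the video): every lowercase ASCII byte mod 32 lands on a distinct value from 1 to 26.

```rust
fn main() {
    // 'a' (97) through 'z' (122) all sit between 32*3 = 96 and 32*4 = 128,
    // so `b % 32` just keeps the low five bits and never collides.
    for b in b'a'..=b'z' {
        print!("{}->{} ", b as char, b % 32); // a->1, b->2, ..., z->26
    }
    println!();
}
```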
@NeetCodeIO2 ай бұрын
Damn, it makes you realize how much thought the original programmers put into making things elegant. And then we ended up with JS, web apis, and frontend frameworks...
@ismbks2 ай бұрын
i wonder how many more ascii tricks there are, it's not very well documented, or just hard to find
@HtotheG2 ай бұрын
@@ismbks My favorite ASCII hack is that each capital letter A-Z (65-90) is exactly 32 away from its lowercase counterpart (97-122), so the only difference is the 6th bit (2^(6-1) = 32). That makes case-insensitive comparisons and conversions to upper or lower case SUPER fast bitwise operations: ignore, unset, or set that bit respectively.
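For illustration, a small sketch of that trick (my own example, not from the video). Bit 0x20 is the only difference between the cases, so masking it out, clearing it, or setting it gives case-insensitive comparison, uppercasing, or lowercasing; it is only valid when both bytes are ASCII letters.

```rust
fn main() {
    let (upper, lower) = (b'G', b'g');

    // Clearing bit 0x20 forces uppercase, setting it forces lowercase,
    // and OR-ing both sides with 0x20 compares case-insensitively.
    assert_eq!(lower & !0x20, upper);       // 'g' -> 'G'
    assert_eq!(upper | 0x20, lower);        // 'G' -> 'g'
    assert_eq!(upper | 0x20, lower | 0x20); // case-insensitive equality

    // Toggling the bit flips the case either way.
    assert_eq!(upper ^ 0x20, lower);
    assert_eq!(lower ^ 0x20, upper);
    println!("ASCII case tricks hold");
}
```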
@ismbks2 ай бұрын
@@HtotheG hell yeah.. i already knew about this one but it's a really cool hack people should definitely know! apparently cloudflare uses this for fast string filtering so it must be good..
@hellowill2 ай бұрын
@NeetCodeIO back then programmers were geniuses. You basically needed a PhD, or some deep understanding of math. The bar has really been lowered, which I guess was necessary to scale.
@TestTost-j4d2 ай бұрын
To preface, you very nicely explained everything, even the tricky bits :). The stack is handled by the CPU under the direction of the OS; there is still overhead when you cross a page boundary. The heap is not handled by the OS but by whatever allocator you use (the allocator mmap()s pages when needed). Allocators usually use "bins/buckets" for various sizes of allocations, so it's pretty fast, unless the allocator has to mmap() some more memory. Anyway, what I'm trying to say is that it's complicated. If your language would let you, you could even use the stack as a dynamic array. Or you could mmap() a big piece of memory and just use it as a dynamic array, since memory doesn't get allocated until you touch it, giving the same result as the stack. If the array size is fixed, the compiler could even just reserve a piece of memory from when the program is loaded (like they do for const strings).

Cache locality is also a bit subtler. CPUs cache memory in "cache lines", which are usually 64 bytes. And yes, if your resize moves the data then all those cache lines become useless. Then again, the memcpy puts the new data into cache, so it's not ~that~ bad; just that something else will get thrown out. And there are more levels of cache: L1 is closest to the core but way smaller than L3, of course. And yes, the CPU "prefetch"-es data as you read it in. It even figures out the direction, so going 0..n is the same as n..0.

In short, you always want to use as little memory as possible and keep the memory that is accessed at the same time as close together as you can. If you can keep it in registers, like the bitfield solution, then you're golden. And you ~might~ want to align/pad to some power of 2 (especially to 16 bytes for SIMD, which you even had to do on older CPUs).

PS: Oh, and your solution to subtract 'a' would also be faster than modulo (modulo is a divide, so of course subtract would be faster). Btw, bitwise operations usually take 1/3 of a CPU tick, the fastest operations there are (except maybe mov between registers).
@egor.okhterov2 ай бұрын
This ☝️ It is not about stack or heap. Those are just simple introductory concepts taught in university, but the majority of people stop there and never actually understand how memory works. There is an MMU and a TLB inside the CPU. There are virtual pages of 4 KB that are loaded via page faults. There are .text, .data and .bss sections, which are neither heap nor stack but are still loaded into the program's address space.
@treelibrarian76182 ай бұрын
... although modulo (%) 32 is optimized to an AND (&) with 0x1f, so it's basically the same, and other static modulos are normally optimized to a 2x multiply, so not so bad. And bitwise ops aren't 1/3 of a clock, they take a whole one: there are just 3 or 4 ALUs that can do them, so 3 or 4 can be done each tick if nothing else is happening. Also, since Ice Lake the move elimination you refer to has been removed on Intel (sadge...), so mov takes the same time as other ALU ops now... but Zen still does it.
@dorianligthart33782 ай бұрын
@egor.okhterov I'd assume the devs in the video fed the test string in at runtime from stdin or a file (so there would be no .text section with the test string), because otherwise the compiler, after linking, could just optimise the answer away, maybe even fill it in 😂, making the timings invalid. What would otherwise be the point of the problem if not automation? You would write the first solution you came up with and wait a bit, thinking about how the younger you would waste time optimising the code hahaha. But it's nice to finally find/read a more technical yt comment thread.
@saharshbhansali2502 ай бұрын
I really wish I could share YouTube comments.
@SanjayB-vy4gx2 ай бұрын
Bro gave him 2 hrs worth of content
@dekumidoriya-qi2is2 ай бұрын
this little youtuber 🤣🤣
@plaintext72882 ай бұрын
he should stream as well
@dekumidoriya-qi2is2 ай бұрын
@@plaintext7288 fr fr
@stoic123432 ай бұрын
@@plaintext7288 agree
@aflous2 ай бұрын
Primyejen 🤣🤣🤣
@sebastianwapniarski2077Ай бұрын
That's classic
@howto.34112 ай бұрын
That is a BRILLIANT video, loved watching it.
@NeetCodeIO2 ай бұрын
I'm honestly glad there are people out there that enjoy this stuff as much as I do. Love deep technical concepts.
@juanmacias59222 ай бұрын
These random topic videos have been really insightful, great content!
@MrSonny61552 ай бұрын
Feels like Boyer-Moore, but without the pain of preprocessing bad-character/good-suffix tables. Very nice.
@eblocha2 ай бұрын
I think you can still get a cache locality boost using an array, because the array’s memory is next to other stack variables. That means the array’s memory is more likely to be in the same cache line as the other stack variables.
@vasiliigulevich92022 ай бұрын
You only need two pointers referencing input, what array?
@TanmayPatil37Ай бұрын
Pointers are pointing to underlying array which has to be accessed (through the pointers)
@TheOnlyJura2 ай бұрын
"the actual runtime is what matters" - tell that to the average react developer
@0x0michaelАй бұрын
Lol lol lol, they're too bothered about elegant abstraction while their apps keep making people replace their phones every two years
@akialter2 ай бұрын
What I like about your explanation is you don't “assume” the audience knows a thing; you dove into the tiniest detail, like what even is an AND operation. Whereas college professors always have that assumption, like oh you guys must already know about stack, heap, memory allocation, let me talk about this scheduling algorithm,…
@rdubb772 ай бұрын
Because most professors suck and only “teach” for a paycheck
@criptych2 ай бұрын
You can create static arrays in Python with the "array" module. Still not sure if that qualifies it as a "real" language, though.
@GuRuGeorge03Ай бұрын
I am a lead web developer and have never done any leetcode except in university. I recently started leetcode to get into a big name company that pays like 10%-20% more than my current company and videos like this are very eye opening!
@marcsh_devАй бұрын
That's quite a bit less than I would've thought though. If you like the folks you work with a lot, don't get a new job for anything under 25% more than what you make now. Being on a horrid team is the worst. I'd much rather work for less money than work with folks I dislike.
@MrHaggyy2 ай бұрын
Async or parallel optimization is also really interesting from a data structure perspective. As a chunk of cache has a specific size, we want a structure that uses as much of a chunk as the algorithm can handle. In a forward sliding window approach this means we can assign each thread a starting position and collect the results. Likewise you can often use multithreading to turn an O(N^2) into an O(N × (N/threads)), which leads to great improvements on specific hardware. But it's hardware specific. Currently I'm working with a controller that only has one RAM but SIMD and MIMD. In that case you would either do the backward sliding window on the CPU, or try to fit the whole algorithm into the MIMD and do a forward brute force.
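Purely as an illustration of that per-thread-starting-position idea, here is my own sketch with made-up names (first_marker, parallel_marker), not the video's code, assuming the usual 14-byte window: split the input into chunks that overlap by 13 bytes so no window is lost at a boundary, scan each chunk on its own thread, and take the earliest hit.

```rust
use std::thread;

const WIN: usize = 14;

// Index just past the first window of WIN distinct bytes in `hay`, if any.
fn first_marker(hay: &[u8]) -> Option<usize> {
    hay.windows(WIN)
        .position(|w| {
            let mut seen = 0u32;
            for &b in w {
                seen |= 1u32 << (b % 32); // one bit per letter
            }
            seen.count_ones() as usize == WIN
        })
        .map(|start| start + WIN)
}

// Split the haystack into per-thread chunks that overlap by WIN - 1 bytes,
// so no window straddling a boundary is missed; the earliest hit wins.
fn parallel_marker(hay: &[u8], threads: usize) -> Option<usize> {
    let chunk = (hay.len() + threads - 1) / threads;
    thread::scope(|s| {
        let handles: Vec<_> = (0..threads)
            .map(|t| {
                let start = (t * chunk).min(hay.len());
                let end = (start + chunk + WIN - 1).min(hay.len());
                let slice = &hay[start..end];
                s.spawn(move || first_marker(slice).map(|i| start + i))
            })
            .collect();
        handles
            .into_iter()
            .filter_map(|h| h.join().unwrap())
            .min()
    })
}

fn main() {
    let hay = b"aabbccddeeffgghhijklmnopqrstuvwx";
    assert_eq!(parallel_marker(hay, 4), first_marker(hay)); // both report Some(29)
    println!("{:?}", parallel_marker(hay, 4));
}
```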
@LifeofbhadauriaАй бұрын
This was a great video 👏👏, enjoyed watching it, I wish youtube suggests me more videos of this type 😅
@zangdaarrmortpartoutАй бұрын
At 14:20 he is reallocating the array every time the window changes, but with a well-constructed loop it is possible to reuse the same array and just evict the entries left over from the previous sub-string. In C#, doing this makes this algo 4x faster.
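The commenter's code is C#, but here is a rough Rust rendition of the reuse idea (my own sketch, again assuming the usual 14-byte window): one count table lives for the whole scan, and sliding the window only touches the byte that enters and the byte that leaves.

```rust
const WIN: usize = 14;

// Index just past the first run of WIN distinct bytes, updating one reused
// count table incrementally instead of rebuilding state per window.
fn find_marker_incremental(hay: &[u8]) -> Option<usize> {
    let mut counts = [0u8; 256];
    let mut distinct = 0usize;

    for (i, &b) in hay.iter().enumerate() {
        // Byte entering the window on the right.
        if counts[b as usize] == 0 {
            distinct += 1;
        }
        counts[b as usize] += 1;

        // Byte falling out of the window on the left.
        if i >= WIN {
            let out = hay[i - WIN] as usize;
            counts[out] -= 1;
            if counts[out] == 0 {
                distinct -= 1;
            }
        }

        if distinct == WIN {
            return Some(i + 1);
        }
    }
    None
}

fn main() {
    // The first 14 distinct bytes are h..u, ending at index 28, so this prints Some(29).
    println!("{:?}", find_marker_incremental(b"aabbccddeeffgghhijklmnopqrstuvwx"));
}
```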
@acters1242 ай бұрын
I enjoyed this. I like the mention of the bitmask: using a 32-bit buffer, it is just so much faster to deal with bits as an index. This assumes only lowercase (or only uppercase) Latin characters (26 letters).
@DavidM_6032 ай бұрын
9:50 another bit of overhead that could've been avoided at that step is reusing and clearing one vec, instead of harassing the allocator for a new one every 14 bytes lol
@urizen959Ай бұрын
0:15 "little youtuber" 😂😂
@ppercipioАй бұрын
I had to do a double take just to make sure... 😂
@kondekinoe9337Ай бұрын
I double checked. And still didn't believe it first and then I turned the captions on just to be sure.
@0x0michaelАй бұрын
You're right about cache locality not being involved. It's the same thing with strings and small string optimizations
@theairaccumulator7144Ай бұрын
CPUs know how to add numbers together even floats though. ALUs and FPUs make it so the difference between a right shift and a multiply by 2 isn't a thing anymore.
@thargor2k2 ай бұрын
Maybe I misunderstood what you wanted to say in the beginning, but CPUs are totally able to add two numbers together. Does that in the end boil down to binary operations? Yes. But except in some very esoteric CPUs it doesn't run those as separate binary operations; there is dedicated circuitry to do the addition, in many cases in 1 cycle (e.g. on x86 it's a single uop, as well as on most embedded CPUs)
@yellingintothewindАй бұрын
Cache locality does matter even in small arrays vs vectors. Vectors have more space overhead, as they need to track their size, current location, allocated capacity, and so on. They also cannot be packed with whatever other data is in the current stack frame. So it is harder to get the entire working set to fit in the L1 cache, or even if it nominally fits, it's more likely to have parts of it evicted and have to fetch it from the L2 cache when task switching happens in multitasking operating systems. Taken to the extreme, you have programs like CPUBurn which fit the entire program into just the main CPU registers and stress test the CPU by cycling as fast as possible, never reaching out even to CPU cache. Your point applies more to cache lines, which is where moving data from system memory to the L3 cache happens to bring in the _next_ data you need. The concept is related, but not what matters here.
@valentinrafael92012 ай бұрын
Ignoring the constant *when you are learning Big O* is important, so that you don't get distracted. However, when building something, it's only relevant if you are already at the “simplest form” or smallest big O you can achieve, and then the constant matters.
@kiratornatorАй бұрын
the right to left approach is often a good idea when looking for largest sub array
@jacksong8748Ай бұрын
25:11 actually you'd see the p's but who's counting? xD In any case, the reverse iteration to guarantee taking the maximum step size every time was definitely the coolest optimization in my book. The second the Primeagen pointed it out it was like HOLD UP, that's so freakin clever. very cool stuff. Not often do i see an optimization that makes me "teehee" like that.
@gazorperКүн бұрын
Edit: [Going by the assembler instructions gcc and clang generate] The modulus method of mapping 'a'..'z' to 0..25 is slower than just subtracting 'a', at least on x86-style CPUs. For reference, this is what I refer to:
char a = 'c';
int av = a - 'a'; // faster on x86
int av2 = a % 32; // slower on x86. a lot slower actually.
On ARM 32 and 64 the two appear to be equivalent. I did look at some other architectures, and it looks like CISC architectures are always slower for mod 32, and RISC is always equivalent. But that's anecdotal.
@sideone3581Ай бұрын
I am a fan of Prime, but you explained the stuff clearly. He is too good and sometimes forgets we don't understand his language
@maaikevreugdemaker92102 ай бұрын
I died at "we talked a bit about memory" 😂
@WiseWeeabo2 ай бұрын
I feel like a lot of these optimizations actually imply knowledge of the data and are biased towards a hypothetical success or failure case. But if you know your data, there's a whole world of possibilities to optimize for a specific use-case.
@MrHaggyy2 ай бұрын
They do imply knowledge, but quite broad knowledge is usually more than enough. For example, in bitmasking, the letter "A" is encoded as 65 (0x41) in both ASCII and UTF-8. We don't need to know the specific value; the information that it's one byte per char and we can check for equal or not is already sufficient. Going into SIMD instructions forces you to know that 32 bit = 4x8 bit, so you can check 4 chars in one go. A good optimisation is usually something really, really trivial. You might want to look up how the DFT or the inverse Fourier transform works. That one simple binary trick enabled a shit ton of things, with image compression, nuclear test detection or GPS being only a few applications.
@pramodpotdar5416Ай бұрын
Okay, this is one of the greatest videos I have seen in a while.
@mulllhausenАй бұрын
Once you get down to this level just write it in assembly. It's quite fun and simple
@spicybaguette7706Ай бұрын
14:00 I think the biggest cost is allocation, since it has to make a call to malloc every single iteration of the loop, which means one for almost every character. I'm guessing that if you were to move the Vec::with_capacity out of the loop and vec.clear() it every time you checked a window, you would get much closer to the performance of the array code
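Something like this is what I'd picture (a hedged sketch, not the actual code from the video): the Vec is created once with with_capacity, and clear() inside the loop keeps its capacity, so the allocator is only touched once instead of once per window.

```rust
const WIN: usize = 14;

fn find_marker(hay: &[u8]) -> Option<usize> {
    // One allocation for the whole scan; creating this inside the loop below
    // would instead call into the allocator for nearly every input byte.
    let mut seen: Vec<u8> = Vec::with_capacity(WIN);

    for (start, w) in hay.windows(WIN).enumerate() {
        seen.clear(); // resets the length, keeps the 14-byte capacity
        let mut all_distinct = true;
        for &b in w {
            if seen.contains(&b) {
                all_distinct = false;
                break;
            }
            seen.push(b);
        }
        if all_distinct {
            return Some(start + WIN);
        }
    }
    None
}

fn main() {
    println!("{:?}", find_marker(b"aabbccddeeffgghhijklmnopqrstuvwx")); // Some(29)
}
```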
@abdullahsaid47652 ай бұрын
I love your explaining of hard things in simple English in this video. Keep going, you are doing a good thing. 👌
@lu3tz2 ай бұрын
I stumbled across a super nice run-time optimization video in F#, it's called "F# for Performance-Critical Code, by Matthew Crews" neat stuff in there!
@BigBrainHacksАй бұрын
Thanks for your nice and insightful explanation!
@sanchitwadehra2 ай бұрын
Thanks, brother (Dhanyavad bhai)
@realfranserАй бұрын
You can assign each character to a prime number, and keep multiplying the result by the next character's prime as long as dividing by it leaves a remainder (modulo different from 0) 😮. Mem = a single integer
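If I'm reading the idea right, it's a multiplicative "seen set". A minimal sketch under my own assumptions: one prime per lowercase letter and a u128 accumulator, since the product of 14 distinct primes from this table can overflow 32 and even 64 bits. It works, though the bitmask-plus-popcount approach discussed in the video is both simpler and cheaper.

```rust
// One prime per lowercase letter; bytes outside 'a'..='z' are not handled here.
const PRIMES: [u128; 26] = [
    2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
    43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101,
];

// All bytes distinct iff no prime already divides the running product.
fn all_distinct(window: &[u8]) -> bool {
    let mut product: u128 = 1;
    for &b in window {
        let p = PRIMES[(b - b'a') as usize];
        if product % p == 0 {
            return false; // this letter's prime already divides the product: duplicate
        }
        product *= p;
    }
    true
}

fn main() {
    assert!(all_distinct(b"abcdefghijklmn"));
    assert!(!all_distinct(b"abcdefghijklma"));
    println!("ok");
}
```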
@yes54212 ай бұрын
Just started my cs degree but I love every part of this and can’t wait to get to this level
@porky1118Ай бұрын
20:20 I often store stuff as bitset. It's more comfortable than working with arrays IMO. Recently I also turned some struct of boolean flags into a bitset. (Or I rather told some AI to do it for me, since it's pretty repetitive)
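For illustration, a tiny version of that struct-of-bools-to-bitset refactor, with made-up flag names (nothing from the video):

```rust
// Hypothetical flags: before, a struct of three bools; after, one byte with named bits.
const BOLD: u8 = 1 << 0;
const ITALIC: u8 = 1 << 1;
const HIDDEN: u8 = 1 << 2;

fn main() {
    let mut style: u8 = 0;
    style |= BOLD | HIDDEN;       // set two flags
    style &= !HIDDEN;             // clear one
    assert!(style & BOLD != 0);   // test membership
    assert!(style & ITALIC == 0);
    println!("style bits: {style:#05b}"); // prints style bits: 0b001
}
```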
@infiniteloop54492 ай бұрын
This screenshot reminds of Paolo Costa’s Secret Juice ad.
@kushagrasaxena52022 ай бұрын
25:56 brother you are the "super leetcode monkey"
@HyperCodec14 күн бұрын
Was not expecting to see python in the thumbnail of a video with a title containing “faster”
@ProfessorThock2 ай бұрын
Amazing video
@SanmayceАй бұрын
No one mentions the work of previous coders, giving due credits is a sign of not being a talking galfon. The skipping "optimization" is Boyer-Moore.
@devchaudhary782 ай бұрын
‘Prim-ye-jun’ that made me laugh harder
@sainayan2 ай бұрын
Yeah, I just started learning DSA, and I see this. Wow, I'm cooked.
@TheRageCommenter18 күн бұрын
Neetcode: *walks into a room* Inefficient algorithm: Why do I hear boss music?
@zangdaarrmortpartout28 күн бұрын
We did it in C#, and while it is difficult to understand while simply watching the video, it becomes trivially simple once you start putting it on paper. We weren't able to reproduce completely the flow Prime has in Rust: I see he first adds the last character, then checks, then only removes the left one, which means he enters the loop with potentially 13 bits set. We initialized a first window covering 14 characters and then, as long as 14 bits aren't set, we exclude left, include right, then check. I don't think it changes a lot, but I'm curious to make the code even smaller than what it currently is. The next step is understanding how I can parallelize this; right now I don't see how.
@CaptTerrificАй бұрын
17:15 mod functions are far more expensive in terms of clock cycles than subtraction, that feels like it'd matter a lot if we're in the realm of 1,000,000% optimizations
@aaaabbbbbccccccАй бұрын
%32 will get burned out by the compiler. If it wasn't a power of 2 then yes, it'd be vastly slower than the character - "a" approach
@prajwals82032 ай бұрын
Would love to see more reactions vids like this lol
@kushagrasaxena52022 ай бұрын
Primy-agen
@porky1118Ай бұрын
17:30 I also always subtract 'a' instead of modding sizeof(T).
@spookimiiki58912 ай бұрын
the name is primagoon
@jonphelan7072 ай бұрын
In a code where the loops...spin and grow BigO whispers...how fast can we go With n squared in sight...We’ll optimize all right And watch as that CPU blazes with new mojo!
@DamianL-o4e2 ай бұрын
"This little amoeba youtuber" -> Next Video, "This electron youtuber"
@JuliaC-mz8qy2 ай бұрын
no offense at all neetcode, I love neetcode. But I had a ratatouille moment when he started going into a sliding window explanation. I think im traumatized from my last job search.
@ferdynandkiepski50262 ай бұрын
The use of the modulo is bad practice. While here we are doing modulo of a power of two on unsigned ints, which any sane compiler should optimize into an AND (or do some wizardry to make it work on signed numbers as well), if these two conditions weren't met and an actual division were performed to find the modulo, there would be a significant runtime cost. As such, your proposed method of subtraction would be faster.
@mage36902 ай бұрын
So long as it's a compile-time constant, I'm fairly certain you can mod by whatever number you want and it'll come out as a series of shift, sub and imul instructions. Godbolt helpfully told me that `int mod (int a) { return a % 3; }` contains not a single idiv instruction, _at zero optimizations._ At -O3, the function length went from 20 to 11 instructions. Using the 32-bit FNV_prime as the compile-time constant merely changed the constants in Godbolt's output, no other effect (ok, an lea with a multiply in the second operand got changed to an imul, whatever).

Now, what it _absolutely will not do_ is take an array of anything, deduce that the elements are all _absolutely_ compile-time constants, and perform the same optimizations for each index of that array. No, for that to work, you have to declare an array of function pointers, which the compiler will refuse to inline under any circumstances (I shouldn't be surprised by that, but I sort of am).

And apparently that optimization is just barely worth it on some CPUs: `int mod (int a) { return a % 7; }` removes the idiv on the general-case CPU, but generates 14 lines of assembly -- unless you specify `-march=znver3`, in which case the idiv comes right back, as I'm assuming it would for most modern architectures. Matter of fact, whatever algorithm they're using seems to get worse the further away you are from a power of 2, where the "nearest" power of 2 is always smaller than the constant. Maybe the guy who came up with the algorithm will generalize it to calculate down from the next higher power of 2 as well, and this piece of advice will become consigned to the dustbin of history. Who knows. Fascinating stuff, either way
@PennyEvolusАй бұрын
Omg my brain was like wha whahuh huùuh ohhhh yeah i get it u sent me on a roller coaster ride bro
@PennyEvolusАй бұрын
@@mage3690 can i get that in a tldr pls im dyslexic (please ik us programmers need to read but i long for recognition and programming is the only unique skill i could learn at school to get recognition)
@vasiliigulevich92022 ай бұрын
Fixed window width is crazy. Why not track start and end separately? There would be no nested iterations, no tracking of multiple symbols.
@porky1118Ай бұрын
24:20 I started to hate if-let. In this case I'd use let-else, especially because the else case of let-else has to return (or continue/break) anyway.
@Dom-zy1qy2 ай бұрын
Bro this thumbnail is devious
@EnergyCourtier2 ай бұрын
Love these videos. Thanks.
@dhruvsolanki447319 күн бұрын
Amazing 🎉❤
@mahmoudmousa24052 ай бұрын
27:00 I agree the intuition made so much more sense to me
@lysendertrades2 ай бұрын
The name is the prime agen!
@platonvin1022Ай бұрын
reality is that bruteforcing on gpu is going to be faster for any reasonable size
@zS39SBT4fe5Zp8QАй бұрын
Now do it in CUDA and actually get O(1) by testing all positions in one cycle.
@aaaabbbbbccccccАй бұрын
And when the number of positions exceeds your cores, what then? It isn't O(1), it's still O(n)
@af57722 ай бұрын
hey, had to stop watching the video halfway through so maybe I missed something, but I'm mainly referring to the beginning section of the video where you explain different ways to tackle the problem. When you mention a dynamic array, are you talking about some sort of higher-level data structure that I am not familiar with? Just asking because when I hear dynamic array I'm thinking of a heap-allocated array in C which you manipulate fully on your own with malloc and such. Is mallocing some size any slower than just going for a static (stack) array? Just to be clear: int arr[4] vs int *arr = malloc(...
@NeetCodeIO2 ай бұрын
dynamic array is basically a vector in cpp or an ArrayList in java. js and python only use dynamic arrays. i guess another word for it would be a 'resizable array'
@lah303032 ай бұрын
21:10 what happens in the occurrence where there are 3 duplicates?
@GoKotlinJava2 ай бұрын
any extra duplicates will result in less than 14 bits set. Two duplicates will result in 12 bits set. (12 unique + 2 duplicates cancelling each other) Three duplicates will result in 12 bits set. (11 unique + 3 duplicates giving us 1 unique and 2 cancelled) You want exactly 14 bits in final result
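To make the toggling concrete, here is a minimal sketch of the idea as described in this thread (not claimed to be line-for-line what the video's Rust does): XOR flips each letter's bit, duplicate pairs cancel out, and the popcount only reaches 14 when every byte in the window is distinct.

```rust
const WIN: usize = 14;

fn find_marker_xor(hay: &[u8]) -> Option<usize> {
    hay.windows(WIN)
        .position(|w| {
            // Toggle one bit per byte; a pair of duplicates flips its bit back off.
            let mask = w.iter().fold(0u32, |m, &b| m ^ (1u32 << (b % 32)));
            mask.count_ones() as usize == WIN // only true when all 14 bits survive
        })
        .map(|start| start + WIN)
}

fn main() {
    // The window starting at the second 'a' is the first with 14 distinct bytes.
    println!("{:?}", find_marker_xor(b"aabcdefghijklmnop")); // Some(15)
}
```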
@SnipeSniperNEW2 ай бұрын
I freaking looooooooooooooooooooooooooooooooooooooooooove these videooooooooooooooooooooooooooooooooooooooooos
@bravesirrobin95762 ай бұрын
What are you using for the diagrams and screencasting tools? They look swish
@k98killer2 ай бұрын
ThePrimmyGen seems like a cool guy
@xenopholis47Ай бұрын
Can anybody explain why we use both binary (base 2) and also 8-bit bytes? I am a noob. I am asking about the fundamental thing. Any long-form answer will be appreciated. Edit: As I went further into the video, I realized they are talking about DSA, of which I have no idea. But still, any easy explanations are welcome. Cheers
@xymaryai82832 ай бұрын
i'm a noob who's only just installed rustc and read only a few pages of the rust book, i have no clue why people approach this problem with a hash set instead of a table or vector
@drunkenmaster389Ай бұрын
"...by this little youtuber called the pree-mee-a-jen" 🤣
@zangdaarrmortpartoutАй бұрын
Where can we find this problem text ?
@vaolin17032 ай бұрын
Primi-Agen
@lancemarchetti8673Ай бұрын
The future of computing will eventually move away from algorithms
@Akhulud2 ай бұрын
it's more like "making a faster algo" than "making an algo faster"
@BennyDeeDev2 ай бұрын
You lost me at bitmask, even though I am also named Benny 😢
@samarnagar96992 ай бұрын
i saw what you did there in the start, so should we call you dr. n now
@nirmalgurjar81812 ай бұрын
informative .. (y)
@FougaFrancoisАй бұрын
With your small understanding of cache hierarchy and SIMD knowledge, I would not call myself an elite programmer ...
@ehm-wg8pd14 күн бұрын
the name is premi agent
@thekwoka4707Ай бұрын
Feels like the result he has wasn't even optimized. Why do windows when you can just do left and right markers?
@ElroyChua2 ай бұрын
wow
@betadelphini40362 ай бұрын
but what would be a vector in Python? A List?
@NeetCodeIO2 ай бұрын
yeah, python only has dynamic arrays, not static ones i believe
@morton42 ай бұрын
guys, what about the case where there are 3 or more of the same characters in the window? that bit will still be set to true, am i missing something?
@NeetCodeIO2 ай бұрын
In that case we won't have 14 bits set to true tho.
@meraindia536710 күн бұрын
Let's throw AVX-512 at it
@VideoViewer335122 ай бұрын
Sometimes performance doesn't really matter at all, sometimes getting features shipped is even more important.
@tyulen-vn6qj2 ай бұрын
I love you all
@koooongienoob2 ай бұрын
thank you, mr. super leetcode monkey sir.
@sarojregmi2002 ай бұрын
Bro roasted the preemagene
@EverAfterBreak22 ай бұрын
21:50 what if there’s 3 repeated letters? The bit would be 1 again
@ericcoyotl2 ай бұрын
Correct, but he stated that the only thing that matters is if there are 14 distinct characters (14 ‘1’s) If there’s 3 repeated letters, that bit would be 1. But there wouldn’t be enough other ‘1’ bits to total to 14
@EverAfterBreak22 ай бұрын
@@ericcoyotl you’re right, thanks for the explanation