The Perfect Sorting Algorithm?? Block Sort Explained (Wiki Sort, Grail Sort)

  41,047 views

Kuvina Saydaki

Days ago

Comments: 144
@Kuvina 10 months ago
CORRECTIONS: none so far, but please tell me if there are any mistakes. I apologize for the long wait on this one. For most of October, I was working on a different video (hint: it's chemistry related), but I eventually realized it would take way too long to finish that, so I switched to block sort. And then I've been sick for most of November so far. My views are a bit down, so anything you can do to help the channel is very much appreciated. Anyway, thank you for your patience and continued support!
@ghostchris519 10 months ago
Try making more videos expanding on simple things, I'd say, for more views. Your colors, first sorting video, and Platonic solids video were hits, and they all follow the formula of "here's a basic topic you already know about; I'm about to explain something complicated you didn't know about it"
@Kuvina 10 months ago
Thank you! That is some really smart advice!
@pokemonjourneysfan5925 10 months ago
I have a question: can we create, say, a joke sorting algorithm that works in prime number time? This means that the number of operations is O(p_n). It is proven that p_n ~ n*(log(n)+log(log(n))-1) for n>=6. Note that here log means log base e, not 2.
@Kuvina 10 months ago
@@pokemonjourneysfan5925 that's really interesting! I've never seen a sorting algorithm that is O(n log(log n)), but I've heard it's theoretically possible if the input is only integers.
@pokemonjourneysfan5925 10 months ago
@@Kuvina More specifically I said, it runs in Prime number time. p_n is the nth prime.
@NobodyYouKnow01 2 months ago
I'm halfway through this video, and I love the idea that if I were, technically, a customer in a database, and my entry were part of the A block, you might be using my data entry to encode the sorting status of other customers' information in that database.
@DeadCatX2 10 months ago
I think the perfect sort would exploit CPU architecture, and therefore have optimal temporal and spatial locality prioritized over time and space complexity. I also think it would benefit from considering registers and cache as space that doesn't technically count against the "in-place" constraint. An nlogn algorithm that makes better use of locality will easily outperform one with more unpredictable memory access
@DFPercush 10 months ago
I don't think local variables count against space complexity, as long as they're not arrays of a size dependent on n. You have to be able to swap stuff, after all. All the pointers and partitions, etc are part of that.
@Lossanaght 10 months ago
​@@DFPercush en.wikipedia.org/wiki/XOR_swap_algorithm :P
@justineberlein5916 9 months ago
I mean, they technically do. But if it's just "And a temporary variable for swapping things", that's O(1) extra space. Contrast with a normal merge sort that needs a buffer the size of one of the blocks to be merged, which is half the size of the array or O(n)
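The XOR swap linked above is a minimal sketch of swapping without even a temporary variable (it only works for integer elements, and the index check matters, since XOR-swapping a slot with itself would zero it):

```python
def xor_swap(a, i, j):
    # Swap integers a[i] and a[j] with no temporary,
    # using the identity x ^ y ^ x == y.
    if i != j:  # swapping a slot with itself would zero it out
        a[i] ^= a[j]
        a[j] ^= a[i]
        a[i] ^= a[j]

nums = [5, 3]
xor_swap(nums, 0, 1)
# nums is now [3, 5]
```

In practice a plain temporary variable is just as much O(1) space and usually faster on modern CPUs.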
@johaquila 9 months ago
What you propose would obviously lead to enormous savings when sorting, for example, a huge number of integers on a specific machine. But I think what you usually have is at most a few thousand elements sorted on a more or less random CPU (little more than the architecture being fixed) with somewhat random cache sizes and the actual comparisons delegated to a function or macro of variable complexity. In that case you don't know what to optimize for, and whatever happens during the comparison calculations may well completely destroy your locality. The usual metric of number of comparisons is exactly right in this case.
@Mephistahpheles 24 days ago
No doubt. Knowing more about the data and the machine you're using will result in improvements. Example: if I'm actively receiving data, I can start sorting before I even have all the data. If the transmission is slow and the sort is relatively fast, insertion sort (or a variation) may very well be the best: when playing cards, I don't re-sort my entire hand when I draw a new card, I just insert it where it belongs.
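The card-hand idea above can be sketched with Python's standard-library `bisect` module, which keeps a running list sorted as each new element arrives:

```python
import bisect

# Sorting as data arrives: keep the "hand" sorted by inserting each
# new card in place, rather than re-sorting after every draw.
hand = []
for card in [7, 2, 9, 4, 4, 1]:
    bisect.insort(hand, card)  # O(log n) search + O(n) shift per insert
# hand is now [1, 2, 4, 4, 7, 9]
```

If the transmission really is slower than the O(n) shift, the sort is effectively free by the time the last element arrives.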
@dead-eyedarrel3878 10 months ago
I swear Kuvina follow-up videos are my most anticipated YouTube content.
@immigrantmammoth5162 10 months ago
finally, the return of enby math person
@darqed 10 months ago
best sorting series on youtube
@simonwillover4175 10 months ago
This is very hard for me to understand. However, I think this video does an amazing job teaching it. I am just having a hard time because this is my first time ever encountering most of these ideas. I will definitely want to watch this multiple times.
@gfasterOS 10 months ago
I kinda want to see how this performs in terms of cache efficiency. It seems like this uses more operations than Quicksort, but if it is way better with cache efficiency then it might be useful outside of embedded cases.
@henriquefinger935 10 months ago
My beloved HeapSort shall be avenged.
@kered13 10 months ago
From benchmarks I recall seeing (a while ago, no links, sorry), they're surprisingly competitive given the complexity involved. I suspect that a good quicksort will be faster on random data, but given the drawbacks of quicksort (unstable, hard to pick a good pivot for non-random data), they are probably worth considering. I believe wikisort is now used in some standard libraries.
@peceed 10 months ago
quicksort is unbeatable considering cache efficiency.
@batlin 9 months ago
Yes, a good set of benchmarks would help. Performance will depend on the size of the input and blocks, and which variant of block sort of course. There are some benchmarks for block sorts like quadsort, octosort and wikisort vs glibc qsort and std::stable_sort. Quadsort wasn't quite a fair comparison since it seems to use extra memory -- it is very fast but the author acknowledges "Quicksort has a slight advantage on random data as the array size increases beyond the L1 cache". Octosort seemed comparable to qsort for fully random order but much faster for a few scenarios (esp. already sorted ascending/descending). Wikisort is similar, but may not be as useful in embedded / low-memory environments because it slightly bends the "rules" by using a fixed-size 512-item cache -- this is still technically O(1) space because it's a constant size cache, but may not be what you want.
@ghostchris519 10 months ago
Long live the piano intro
@Tarou9000 10 months ago
I think it's from a video called something like "the song of the world", but I'm sure it's from a channel called "midi music"
@cowcat8124 10 months ago
07
@Tarou9000 10 months ago
kzbin.info/www/bejne/qnyUfmlta8Zjjbssi=7BXe6TAdIIGSdQmX
@Z_Inspector 10 months ago
RIP
@haipingcao2212_. 5 months ago
-yes😂bo-
@Musicombo 5 months ago
Very, very good explanation of Grailsort. I still plan on posting my own explanations in the future, but it's great that yours is available on the net for now! I'll be sure to give you credit. We also will continue developing Holy Grailsort sometime in the near future!
@rodrigoqteixeira 24 days ago
Oh cool. I think this explanation lacked pseudocode: the other 2 parts of the video had it and this one didn't. It's also hard to find any online, so pseudocode would help me understand better.
@itellyouforfree7238 10 months ago
Wow, what an extremely well made video! You summarized the key ideas very well! Thanks!
@michawhite7613 10 months ago
I found that selection sort can be surprisingly useful as a lazy sort. I don't know how many values I'll need, but I usually don't need to sort the entire list. I usually just need the two smallest values.
@Milan_Openfeint 10 months ago
That task is usually solved with quicksort: you just don't sort both halves recursively, only the half containing the k-th element, to find the k lowest elements.
@AubreyBarnard 10 months ago
Which is known as "quick select".
@HansLemurson 10 months ago
Alternately, a heap automatically maintains the smallest value at its head, and takes O(log n) time to get the next smallest item.
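The heap approach above is a one-liner with Python's standard-library `heapq`: build the heap in O(n), then pop each next-smallest element in O(log n):

```python
import heapq

data = [9, 4, 7, 1, 8, 3]
heapq.heapify(data)          # O(n); data[0] is now the minimum
two_smallest = [heapq.heappop(data) for _ in range(2)]  # O(log n) per pop
# two_smallest is [1, 3]
```

This is exactly the lazy-sort shape the thread describes: you pay only for the elements you actually ask for.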
@michawhite7613 10 months ago
@@Milan_Openfeint I haven't benchmarked it, but the worst case time complexity for this algorithm is O(n²). My worst case time complexity is O(n). The only benefit I can think of is that it would make the second selection O(1). But that seems overkill for just grabbing two elements. In all honesty, I did think about trying quicksort, but the thing that stopped me was that the one library I found for it used a lot of memory allocations, which was already taking most of my time.
@yuantan9292 9 months ago
If you just need to find the k-th largest/smallest value (in your case k=2), consider using quickselect. Quickselect is O(n) and in-place, though not stable. In C++ it is called "std::nth_element". The resulting list's first k elements will be the k smallest elements, and the k-th element in the list will be exactly the k-th smallest element. If you then need the first k elements sorted, you only need to sort the first k-1 elements however you want, or if k
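A minimal quickselect sketch (this is the same partial ordering the `std::nth_element` mentioned above provides; random pivots give expected O(n) time):

```python
import random

def quickselect(a, k):
    """Rearrange list a in place so a[k] holds the k-th smallest element
    (0-indexed). Expected O(n) time, in-place, not stable."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        pivot = a[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:                     # Hoare-style partition
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        if k <= j:        # answer lies in the left part
            hi = j
        elif k >= i:      # answer lies in the right part
            lo = i
        else:             # j < k < i: a[k] is already in place
            break
    return a[k]

nums = [9, 1, 8, 2, 7, 3]
# quickselect(nums, 1) returns the second smallest value
```

Unlike running quicksort to completion, only the side containing index k is ever revisited, which is what brings the expected cost down from O(n log n) to O(n).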
@paradiseexpress3639 10 months ago
LETS GOOOOOOOOOO! A NEW Kuvina Saydaki video!!!!!!!!!!!!
@giuseppefusco1189 9 months ago
Loved the Technetium 99m joke
@General12th 9 months ago
This is incredible.
@yaksher 10 months ago
I would say that quick sort is an "in place" sorting algorithm. I've never heard of in place as being defined as specifically O(1) and log n space complexity is... well, if you want your sort to ever actually terminate, your logarithm better be less than, say, 64 lol.
@sonicwaveinfinitymiddwelle8555 10 months ago
Now get ready for... *EVOLUTIONARY SORT!!!* > This sort trains an artificial intelligence into learning how to sort stuff like humans do. > The AI will learn its own way to sort. > After the sort is completed the AI will be destroyed. Yes, I made this thing completely up and I have no intention to try and replicate it into a real sorting algorithm.
@simonwillover4175 10 months ago
In the future, we will probably have 1 quadrillion parameter AIs that are actually useful for this. Maybe we will have them sorting something that can not be sorted easily, like quantum entangled particles.
@sonicwaveinfinitymiddwelle8555 10 months ago
If we are going to have AIs that strong, I don't think sorting anything will be a problem @@simonwillover4175
@СергейМакеев-ж2н 9 months ago
That's basically the "Universal Search" algorithm. Ironically, it's the "absolute fastest" of all algorithms from the perspective of complexity theory, even though it's horribly slow. That's because the time to build "the AI", or whatever you call it, is CONSTANT, and thus negligible compared to O(n).
@simonwillover4175 9 months ago
@@СергейМакеев-ж2н so if we had to sort like a list of 10^1000 items, the evolution sort would probably be pretty effective!
@lilyyy411 10 months ago
It's strange that most of the enbies that I've seen are all obsessed with sorting algorithms
@m_affiliates 3 months ago
as an enby i can confirm
@jimiwills 10 months ago
It's like a brain hug for 30 mins ❤❤❤
@George70220 10 months ago
Amazing video ❤
@Lucas-pj9ns 10 months ago
Thanks for the cool video! How do you get the distinct elements in place?
@Kuvina 10 months ago
I think it's basically like rotation based merging, but in reverse. So in the algorithms where the left sublist is already sorted, you just start at the beginning and repeatedly find the first piece that is greater than the last one you used, and then rotate it into the next spot in the buffer. As for grail sort, you can use basically binary insertion sort, but if you find that the piece is exactly equal to one of the ones already in the buffer, you simply skip it and don't insert that one.
@xcoder1122 10 months ago
Note that there is a simple variant of mergesort that is in-place; it's just not stable. Still, it has advantages over quicksort: it requires less memory and it guarantees O(nlogn), which quicksort does not. Quicksort is only typically O(nlogn); unless you use a special variant of it, there are cases where it clearly is not, whereas mergesort is always O(nlogn), no matter the input. Of course, you could as well use heapsort instead of that mergesort variant if all you wanted was in-place and O(nlogn). But at some point it also boils down to the fact that two algorithms aren't equally fast just because they are both O(nlogn); speed is often the most important factor of all, and it's not covered at all by the introduction table (complexity is not speed). Last but not least: "in-place" is a very stretched term. If an algorithm requires an auxiliary structure, e.g. an array of O(log n) bits, that is not truly in-place; it just swaps stack memory for a bit array, which is a reduction of extra space, but it is extra space.
@Kuvina 10 months ago
Thanks that's a good point! I should have mentioned just because it checks the boxes doesn't mean it's necessarily the most practical algorithm for every purpose!
@xcoder1122 10 months ago
@@Kuvina Well, big-O notation is always very deceiving. I usually tell people the story of how I needed a lookup table for a network protocol implementation in a system kernel. What did I choose? A hashtable, of course, as it is O(1). Years later, just out of curiosity, I wanted to see what would happen if I replaced the hashtable with a sorted array, where lookups are O(logn) (binary search). The result: the sorted array gave significantly better performance. Of course, there is a turning point: the array gets slower the more elements it has, whereas the hashtable stays roughly the same. I was able to calculate the turning point, and later benchmarks confirmed that calculation. But that turning point was irrelevant, as it was way beyond 256 entries, and the lookup table was bound to at most 256 entries (the entry count was stored in a byte) and would typically have anything between 4 and 64 entries in real-world scenarios. So just because two algorithms are both O(nlogn) with O(1) space complexity doesn't mean it's totally irrelevant which one you pick. If there were no advantages and disadvantages, there would be no reason for different variants to exist, right?
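The sorted-array lookup described above can be sketched with the standard-library `bisect` module (the table contents here are illustrative IP protocol numbers, not the commenter's actual data):

```python
import bisect

# Small lookup table kept as a sorted array instead of a hashtable.
table = sorted([(1, "ICMP"), (2, "IGMP"), (6, "TCP"), (17, "UDP")])
keys = [k for k, _ in table]   # parallel sorted key array

def lookup(key):
    i = bisect.bisect_left(keys, key)   # O(log n) binary search
    if i < len(keys) and keys[i] == key:
        return table[i][1]
    return None
```

For a few dozen contiguous entries, the binary search touches one small cache-resident array, which is the locality effect the comment credits for beating the O(1) hashtable.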
@killerbee.13 9 months ago
@@xcoder1122 When it will have at most a small bounded number of elements, like 256, it's also worth considering just using a bitmap + the data (if the data has some kind of null state, you can also avoid the bitmap) and get lookup in a single pointer offset. It uses a bit more memory, but if this is a persistent structure that multiple copies won't exist of that's not much of an issue in most cases. Unless you're using a 16-bit microcontroller or something, anyway.
@nebularzz 10 months ago
YESS I WAS WAITING FOR THIS VIDEO!!!!
@CharlesVanNoland 10 months ago
"Auxiliary" is the WOTD for me :P
@mlvluu9836 4 months ago
9:16 Could this be done with the individual items rather than with blocks?
@kelema2097 10 months ago
Welcome back Kuvina!
@iwersonsch5131 10 months ago
Does binary search insertion sort count as O(nlogn)? It only has O(nlogn) comparisons and O(n) writes, but each write is of size O(n) which I've been told may mean that each write takes O(n) time
@Kuvina 10 months ago
so the thing about insertion sort is that even though you can find a piece's destination within the sorted section in O(logn) time, to actually get it there, you have to shift everything over. On average, you have to shift over half the sorted section, which is on average half the list, so inserting 1 piece, even with a binary search is O(n) per piece, therefore in total it's O(n^2), the same time complexity as regular insertion sort. (It does have fewer operations though, just not a time complexity difference)
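The point above — O(log n) comparisons per element but still O(n) moves — is visible in a short binary insertion sort sketch:

```python
import bisect

def binary_insertion_sort(a):
    # Binary search finds each element's destination in O(log n)
    # comparisons, but shifting the sorted prefix to make room is
    # still O(n) moves per element, so the sort stays O(n^2) overall.
    for i in range(1, len(a)):
        x = a[i]
        j = bisect.bisect_right(a, x, 0, i)  # stable: insert after equals
        a[j + 1:i + 1] = a[j:i]              # the O(n) shift
        a[j] = x
    return a
```

Using `bisect_right` keeps equal elements in their original order, so the sort stays stable, matching regular insertion sort.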
@nullplan01 10 months ago
According to Andrei Alexandrescu, binary search insertion sort is also worse experimentally, since the branches are less predictable. In the linear insertion sort, the CPU can learn that most of the time, the inner loop is continued, but in binary search insertion sort, you can take the upper or lower branch seemingly at random. Branch prediction matters!
@iwersonsch5131 10 months ago
@@nullplan01 That's weird. How would e.g. 15 unpredicted outcomes be worse than 8000 loops?
@nullplan01 10 months ago
@@iwersonsch5131 if the branch is mispredicted, the effect is a pipeline flush, which can be as bad as doing nothing for hundreds of cycles. Plus Kuvina already explained that you still need the linear move operation.
@iwersonsch5131 10 months ago
@@nullplan01 So linear insertion sort has 1 pipeline flush per entry, while binary insertion sort has about lb(n)/2 pipeline flushes per entry. A middle ground could be to break up the sorted list into linearly searched blocks of, say, sqrt(n) (or for very large n, n^(1/3) and n^(2/3)), resulting in 2 pipeline flushes per entry, or 3 for very large n, while keeping the comparisons for each entry below 1000*log_1000(n)
@ppantnt 10 months ago
At 3:03 I don't quite understand why a rotation of length sqrt(n) has a time complexity of O(sqrt(n)). If you rotate the left sublist into place, wouldn't it take time to shift the elements to the left?
@vgtcross 10 months ago
Each time you rotate, you need to move O(sqrt n) elements right and possibly O(n) elements left. This is done O(sqrt n) times in total, so it seems like it will take O(n sqrt n) time in total. But notice that each element is shifted left only once. This means that the time complexity of shifting elements left is O(n) in total. We shift O(sqrt n) elements right a total of O(sqrt n) times, so the time complexity of that is O(n) in total. Thus, the full merge process is linear.
@ppantnt 10 months ago
@@vgtcross But how should this algorithm be implemented? Keep shifting the next element after the sublist until it reaches its destination? If so, wouldn't things get shuffled and need to be re-sorted on every shift?
@Kuvina 10 months ago
So a rotation of 2 sublists can be done in linear time (compared to the length of the 2 sublists) if you simply flip each one then flip them together. This moves each piece twice, and there are faster ways that move each piece once, but the same time complexity.
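The "flip each one, then flip them together" trick described above (triple reversal) can be sketched like this:

```python
def _reverse(a, i, j):
    # In-place reversal of a[i:j], O(1) extra space.
    j -= 1
    while i < j:
        a[i], a[j] = a[j], a[i]
        i += 1
        j -= 1

def rotate(a, lo, mid, hi):
    """Rotate a[lo:hi] so that a[mid:hi] ends up before a[lo:mid]:
    reverse each half, then reverse the whole range. Each element
    moves twice, so the cost is linear in (hi - lo)."""
    _reverse(a, lo, mid)
    _reverse(a, mid, hi)
    _reverse(a, lo, hi)

buf = [1, 2, 3, 4, 5, 6, 7]
rotate(buf, 0, 3, 7)
# buf is now [4, 5, 6, 7, 1, 2, 3]
```

This is the rotation primitive block sorts lean on: linear time, constant extra space, and it preserves the internal order of both halves.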
@FishSticker 10 months ago
Could you do a proof of why nlogn is optimal, and not… say, loglogn
@Kuvina 10 months ago
I don't know the proof, but that's a good idea!
@FishSticker 10 months ago
@@Kuvina awesome!
@canaDavid1 10 months ago
There are n! possibilites that the array can be in. Any comparison only gives 1 bit of information, but one needs log(n!) in total. n! ≈ n^n, so log(n^n) = nlog(n) is a lower bound on the number of comparisons needed to figure out what the permutation is.
@jetison333 10 months ago
@@canaDavid1 n! isn't about equal to n^n though; in fact n^n > n!.
@YourAverageLink 10 months ago
@@jetison333 log(n!) / log(n^n) asymptotically approaches 1 though, so they both evaluate to n log n in O notation
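The convergence claimed in this thread can be checked numerically: log2(n!) is the information-theoretic lower bound on comparisons, and its ratio to n·log2(n) climbs toward 1 as n grows:

```python
import math

# log2(n!) computed via lgamma (log-gamma), since n! overflows quickly.
ratios = []
for n in [10, 1000, 100000]:
    log2_fact = math.lgamma(n + 1) / math.log(2)   # log2(n!)
    ratios.append(log2_fact / (n * math.log2(n)))
# ratios is roughly [0.66, 0.86, 0.91], increasing toward 1
```

So even though n^n strictly exceeds n!, the logs differ only by lower-order terms, which is why both give the n log n bound.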
@haipingcao2212_. 7 months ago
Welcome back
@halneufmille 7 months ago
I thought Quick sort was in place too. Is it not in place because it uses recursion?
@Kuvina 7 months ago
Quick sort is O(logn) space due to recursion, which is sometimes considered in place, but usually only O(1) is considered in place
@knotsuchan 10 months ago
i enjoyed this
@Kuvina 10 months ago
Thank you knots and Himari!
@ladyravendale1 10 months ago
Banger vid
@etooamill9528 10 months ago
i thought quicksort (choose a pivot p, move elements smaller than p to the left and bigger to the right, split the array at p, and do the same on the two halves recursively) was in place? am i confusing it with another algorithm?
@canaDavid1 10 months ago
It requires O(logn) stack space for the recursion
@animowany111 10 months ago
@@canaDavid1 And it's not guaranteed to be O(logn) time (nor space) complexity, that depends on choosing "good enough" pivots
@RiedlerMusics 10 months ago
this looks lovely, but honestly, I stopped understanding the second you started talking specifics
@peterreichg 10 months ago
In wikisort, how does it find the A block with the smallest tag value? Does it kind of selection sort it (check the second value of every A block, then block swap the first block with the one with the smallest tag value) or does it use a better method?
@Kuvina 10 months ago
Honestly I'm not sure. I don't know how you could do it faster, but I wouldn't be surprised if they found a way.
@pandaqwanda 10 months ago
nice video !
@EchoHeo 10 months ago
cool video
@dylanherrera5395 9 months ago
one question, isn't O(nlogn) slower than O(n) at higher sizes of n?
@killerbee.13 9 months ago
O(n log n) is always higher than O(n), but it doesn't matter because it's optimal for comparison sorting. O(n) comparison sorting doesn't exist. Also, log n grows so slowly that the difference barely matters anyway. By the time log n is appreciably large, you're hopefully going to be using a parallelized sort instead.
@StefanReich 10 months ago
Is space complexity O(n) actually a big deal? Sounds pretty okay to me. It's just temporary storage.
@paulstelian97 10 months ago
It can be with hyper large lists, like more than would fit into RAM (external sorting). External sorting is another can of worms anyway. But it would likely be something like doing a sort on blocks of a certain size where a standard algorithm would be able to do it in RAM, then use extra space and in a streaming fashion perform a (potentially n-way) merge sort.
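The chunk-then-merge pattern described above can be sketched in miniature with the standard library's `heapq.merge`, which streams an n-way merge of already-sorted runs (a real external sort would read and write the runs from disk):

```python
import heapq

# External-sort flavor in miniature: sort fixed-size chunks
# independently (as if each fit in RAM), then stream an n-way merge.
data = [9, 4, 7, 1, 8, 3, 6, 2, 5, 0]
chunk_size = 4
runs = [sorted(data[i:i + chunk_size])
        for i in range(0, len(data), chunk_size)]
merged = list(heapq.merge(*runs))   # lazy n-way merge of the runs
# merged is [0, 1, ..., 9]
```

Because `heapq.merge` is lazy, only one element per run needs to be in memory at a time, which is exactly why the pattern scales past RAM.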
@pocarski 10 months ago
When does stability even matter? Why would the order of entries matter if they have the same value?
@platinummyrr 10 months ago
Because you might be sorting complex objects which have multiple keys. If you sort a list of people by age, you don't want the list to change relative order. If you did, it wouldnt be possible to guarantee a sort of two different keys since sorting by the second key could break the ordering of the first key.
@StefanReich 10 months ago
@@platinummyrr Yes, stability matters
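The multi-key guarantee discussed in this thread relies on a stable sort (Python's built-in sort is stable): sort by the secondary key first, then by the primary key, and equal primary keys keep their secondary order:

```python
# Stable sorting preserves the relative order of equal keys, so two
# passes give "by age, then by name" without building a combined key.
people = [("Bea", 30), ("Al", 25), ("Cy", 30), ("Dee", 25)]
people.sort(key=lambda p: p[0])   # secondary key: name
people.sort(key=lambda p: p[1])   # primary key: age (stable pass)
# people is [("Al", 25), ("Dee", 25), ("Bea", 30), ("Cy", 30)]
```

With an unstable sort, the second pass could scramble the name order within each age group, which is exactly the failure mode described above.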
@flameofthephoenix8395 10 months ago
1:39 Hm, shouldn't it be as simple as iterating through each item in the list once to make it stable again?
@Flowery0 10 months ago
I (probably) will fucking write that one when I have the time
@put4558350 10 months ago
If you have an idea for a "possible" new sorting algorithm that you think is really good and want to show to the world, what do you do?
@Milan_Openfeint 10 months ago
Implement, benchmark, give up.
@put4558350 10 months ago
@@Milan_Openfeint Well, today's sorting algorithms are the result of years of optimization. Can't beat that easily. My idea sorts everything in two parts: read from the main array once and get multiple sorted lists, then combine those lists back into the main array. Part 2 is multithreadable, and the result is stable. ... With a big memory usage problem.
@SkyboxMonster 9 months ago
I came up with a proof-of-concept electronic sorting technique, but I can't find anything remotely similar to it in the existing sorting algorithm ecosystem. I don't know how to appraise it to see if it's worth investing more time into developing it. It's stable. It's not technically in-place, but its impact on hardware is very limited. Its time complexity is very hard to define... it is a multi-step process, but each step is measured in clock cycles.
@scalibbdp9249 9 months ago
Test ALL your sorting algorithms on a Data Set with 1,073,741,824 elements ( 1 Billion! )...
@scalibbdp9249 9 months ago
// Data Set size: 32768 x 32768 - RTfloat
Application - MgwTestApp - NOS64_MGW (64-bit) - Release
Tests: Start
Command line arguments - User Count: 6 - Values: 32768 3 1 4 7
Configuration - NOS64_MGW (64-bit) - Release
Data Set of RANDOM values
  Data Set Size:        1073741824 elements
  Number of Iterations: 3
  Number of Tests:      1
  Number of Threads:    4
Data Set: RTfloat - in RANDOM order
  HeapSort  - RTfloat -   425415.00000 ms - Passed
  QuickSort - RTfloat - 10495076.00000 ms - Passed
  MergeSort - RTfloat -   146345.00000 ms - Passed
Data Set: RTfloat - in ASCENDING order
  GetSortOrder - RTfloat - 0.00000 ms - Passed
VALUESET: RTubyte - 0.00000 ms - Passed
Tests: Completed
@scalibbdp9249 9 months ago
You really need to re-think everything when there is a data set of 1 billion elements. Your "toy" test cases with n=300 are absolutely small!
@wildblack1 10 months ago
nlogn = log(n^n)
@AgainPsychoX 10 months ago
Sorting porn, I love it.
@redstowen 10 months ago
Physic ep 6 when
@Speed001 10 months ago
Found you
@RichConnerGMN 10 months ago
wow it's the nb math/compsci nerd (buzz lightyear gif) /pos
@animowany111 10 months ago
I can't tell if you're being rude or if you're comedically signing the comment as if you're a "PoS"...
@noelletheradcat 10 months ago
​@@animowany111/pos means positive connotation
@animowany111 10 months ago
@@noelletheradcat Huh, never heard of it. The acronym for "Piece of Shit" (or "Point of Sale", but those are basically the same thing lmao) is too ingrained in my head
@567secret 10 months ago
Doesn't spaghetti sort beat nlog(n)?
@Kuvina 10 months ago
yeah I should've specified, nlogn is optimal for comparison based sorting algorithms if you don't use parallel processors.
@ImNetheN 10 months ago
:3
@bobbsurname3140 10 months ago
Sheesh, what a hassle
@christophera6554 8 months ago
🔥 Promo sm
@rodrigoqteixeira 10 months ago
LETS GOOOK FKNALLY 🎉
@StefanReich 10 months ago
I thought this was a project out of academic curiosity, but apparently Wikisort is extremely competitive in real world applications. Very cool
@wWvwvV 9 months ago
I guess it has the same problems as a C++ hashmap: it's O(1), but it uses a linked list in its implementation. How to work in place, how to work concisely in memory (no pointers). I've only watched the video halfway through so far, sorry. Go also has O(1) hashmaps. Most of the time I avoid using them in a public API. Just hand over arrays/slices and search them with a binary search. It is a much simpler API, only using arrays, and it is still faster than a map lookup because of CPU cache locality. I already know that bubble sort can be faster than, let's say, quicksort for small n. It's evident how the mathematical function curves behave; the extremes are in the big numbers, though.
@rewrose2838 10 months ago
Please use a dark background, we have vampire children at this school and would like them to be able to enjoy the videos as well
@alonamaloh 9 months ago
I didn't follow every detail of the video but, don't all these algorithms still require O(log(n)) space on a call stack, due to their recursive nature? EDIT: Never mind. I think these are not recursive in nature. I'm going to have to implement these ideas to fully understand them. Awesome video!
@JamesSjaalman 10 months ago
The blocking assumes fixed record size. For uneven record lengths (CSV files ...) the rotate/swap just doesn't work. [Or you would need a separate layer of "pointers", which would result in terrible locality.]
@paulstelian97 10 months ago
Most sorting algorithms don't work well if you can't arbitrarily swap elements.
@andytroo 10 months ago
for uneven record lengths you can store a fixed length header followed by a pointer to the rest - most of the comparisons are decided by the header.
@andytroo 10 months ago
alternatively you can treat the header ties as their own sub-sorting problem - rows with the same header get sorted together, then you can do a second pass to 'tie-break' grabbing a second header chunk out of each group of ties.
@iamsushi1056 10 months ago
Radix sorts are a type of block sort, aren’t they?
@Kuvina 10 months ago
Personally I wouldn't say that. Radix sort doesn't really have any type of block rearrangement or local merging. It does categorize pieces into groups, but I would say radix sort is a type of bucket sort or vice versa. I would instead classify block sort as a type of merge sort.
@Zicrus 10 months ago
If it can't sort in linear time, then I wouldn't call it perfect, since some non-comparison based algorithms can do that.
@schwingedeshaehers 10 months ago
How does it work for arbitrary size of each element?
@Musicombo 4 months ago
You can only sort an arbitrarily-ordered array in linear time if you know the *distribution* of the data beforehand, otherwise the minimum complexity will be O(n log n).
@Zicrus 4 months ago
@@Musicombo In almost all practical cases, values are stored in 32, maybe 64 bit integers, which can always be sorted in linear time. If you don't make that assumption, then yes, you are right.