Big O Strikes Back

9,324 views

SithDev

1 day ago

Comments: 36
@neko-chan5515 · 2 years ago
Thanks for the video. I hope your channel gets lots of subs and views; you truly deserve it. The explanations are great and easy to understand. Awesome job!
@oj.b.3889 · 2 years ago
Hidden gem!
@TheDarkOne629 · 4 months ago
What really makes me sick are running times like O(n!) and, worst of all, O(n^n)... Great video, by the way. :)
@qwertyclan · 1 year ago
Is it possible to have trigonometric running times such as O(sin(n))?
@rarebeeph1783 · 1 year ago
O(sin(n)) would imply that, at some point, a larger input would have a shorter running time, and on average it'd be constant time with some variation. I don't have the expertise to say for sure it's not possible, but it doesn't seem like a very useful descriptor.
@mnxs · 6 months ago
_Sort of,_ but not really, no (afaik). There's so-called _amortised_ complexity. This is where you have a standard case of some complexity, but in certain circumstances it grows to another, worse one.

Take hash maps, for example (assuming the time complexity is measured against input size). An insertion is O(1) amortised, because usually it's just (more or less) a single memory access, a key hash computation, and a little bit of miscellanea: O(1). But if the insertion triggers a so-called _rehash_ - that is, the value cannot fit into the existing backing array, so the backing array must be reallocated, all values moved (and often re-hashed), and so forth - then the operation is O(n). However, since those rehashes happen relatively rarely, we instead say that, _on average,_ the running time is O(1), since the cost of those infrequent rehashes can be "split" among all the previous, truly O(1) insertions. (Well-designed hash maps will also usually choose their new size as a multiple - often 2 - of their previous size, making rehashes rarer and rarer as the structure grows.)

Certain computations also have trivially simple cases. Multiplying or dividing an integer by a power of two, for example, can be done with a simple (and extremely fast) bit shift; other numbers, however, need the full treatment. This is sort of the opposite of the hash-map example: here the fast case is the rare one, so the amortised running time matches the worst case.

In summary, some things have spikes of higher or lower complexity, which we can average out. But something like a true O(sin(n)) doesn't exist - and if it did, you'd amortise it anyway.
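The doubling-and-amortising argument above can be sketched with a toy growable array that just counts how many elements get moved (a Python illustration; real hash maps are more involved than this):

```python
class GrowableArray:
    """Toy append-only structure that doubles its capacity when full."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.moves = 0  # elements copied during all reallocations so far

    def append(self, value):
        if self.size == self.capacity:
            # The rare O(n) case: reallocate and move every existing element.
            self.moves += self.size
            self.capacity *= 2
        # The common O(1) case: just place the element.
        self.size += 1

arr = GrowableArray()
n = 1_000_000
for i in range(n):
    arr.append(i)

# Total moves stay below 2n, so the amortised cost per append is O(1).
print(arr.moves / n)  # 1.048575 - always below 2.0
```

The doubling is what makes the average work out: if the array instead grew by a fixed amount each time, total moves would be quadratic and appends would amortise to O(n).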
@FaffyWaffles · 2 years ago
Your Channel is Amazing! I'm excited to see your growth! I'm here at 5.84K subs.
@ГеоргиГеоргиев-с3г · 2 years ago
Well... Find n:

    unsigned int A = 0, B;
    while (A != -1) {
        A++;
        if (A == N)
            B = A;
    }
    return B;

This would be "constant time", with O(1) = the biggest number you could possibly put there, because the next value after it wraps around to -1. With the same mentality you can make every program run in θ(1) = infinity.
@Splish_Splash · 2 years ago
What is this supposed to mean?
@ГеоргиГеоргиев-с3г · 2 years ago
@@Splish_Splash TBH, it's negative zero followed by negative infinity up to negative one. "What it means": you can keep adding one until you end up at negative one, and the loop will run for the same amount of time no matter the input while still producing the correct result; therefore it's big-O of one. Nothing more I can add. (Due to how binary works.)
@Splish_Splash · 2 years ago
@@ГеоргиГеоргиев-с3г What is the code supposed to do? I don't know its purpose.
@ГеоргиГеоргиев-с3г · 2 years ago
@@Splish_Splash "Well... FIND N:...."
@thesenamesaretaken · 2 years ago
Would an unsigned int ever return true for that loop?
@Sylfa · 1 year ago
11:15 - The classifications don't generally care about constants: it's O(1) not O(14), just like it's O(n^2) not O(n^14). The classification is only a rough one; it doesn't actually tell you which algorithm will run faster in reality, just which ones you should be considering first. O(n!) can be faster than O(1), especially if you've got the equivalent of sleep(2e31) in the O(1) version.

That's something even senior programmers at big firms like Google can misunderstand: if you're only looking at big-O, then you're almost certainly not picking the right algorithm. The best priority queue implementation has amortized O(1) on most operations, but it's slower in practice than the next best algorithm, as you're almost guaranteed never to get n high enough to overcome the increased fixed costs - not if you use the priority queue for its intended purposes and aren't trying to crash your system.
@TheDarkOne629 · 4 months ago
You're right about most things, but wrong about "just like it's O(n^2) not O(n^14)". Put without mathematical definitions: you may erase constant factors under multiplication (and division) and lower-order terms under addition (and subtraction), but not exponents. In other words, a*n is in O(n) (erase the a), but n^a is not in O(n^(a-1)), because the two differ by a factor of n, not by a constant (n^(a-1) == (n^a)/n). All of this is explained in the next chapter.
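The difference can be checked numerically: dividing out a constant factor leaves a bounded ratio, while dividing out a smaller exponent leaves a ratio that keeps growing (a small sketch; the sample sizes are arbitrary):

```python
ns = [10, 100, 1000, 10_000]

# Constant factors wash out: (14 * n) / n is the same for every n,
# which is why 14n is in O(n).
linear_ratios = [(14 * n) / n for n in ns]
print(linear_ratios)  # [14.0, 14.0, 14.0, 14.0]

# Exponents do not: n**14 / n**2 == n**12 grows without bound,
# so n**14 is not in O(n**2).
power_ratios = [n**14 / n**2 for n in ns]
print(power_ratios[0] < power_ratios[-1])  # True
```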
@niceguy2373 · 2 years ago
There is no algorithm that can solve Sudoku faster than O(n^m), where m is the number of empty spaces. 2:02
@angusheath5321 · 2 years ago
That is correct, but it is O(1) when the variable is the number of given clues.
@Sylfa · 1 year ago
An algorithm that takes in a 3n×3n board with n² symbols to fit in it would indeed have a big-O greater than O(1); the argument is that because the data is always a 9×9 board with 9 symbols, it's O(1).

It's important to remember that big-O is *often* a terrible metric to look at as a programmer, since it ignores scalars and constants. If your algorithm has a sleep(1e9) in it and then returns true, it's O(1), whereas an algorithm that counts down from n to 0 before returning true is O(n). So if you only care about big-O, you'd pick the O(1) version and wind up with a program that soft-locks for over 11 days before returning.
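The sleep-versus-countdown point can be put in terms of abstract step counts (the numbers here are made up purely for illustration):

```python
# A hypothetical "O(1)" algorithm with an enormous hidden constant,
# e.g. the equivalent of a long fixed sleep before answering.
def constant_steps(n):
    return 1_000_000_000  # independent of n, so O(1)

# A hypothetical O(n) algorithm: count down from n to 0.
def linear_steps(n):
    return n

# For every input size you'd plausibly meet, the O(n) version does less work.
for n in [10, 10_000, 10_000_000]:
    assert linear_steps(n) < constant_steps(n)
print("O(n) beats O(1) for all tested n")
```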
@Splish_Splash · 2 years ago
Does big O describe the worst case? Because when we talk about quicksort or other sorting algorithms, the worst case for them is O(n^2), but the average time complexity is O(n log n), so the time complexity of quicksort is given as O(n log n); unlike bubble sort, where the best case is O(n) (if the array is already sorted) but the average time is O(n^2), so the algorithm's time complexity is given as O(n^2).
@sithdev8206 · 2 years ago
When talking about complexity, one usually means "in the worst case". Big O describes an upper bound, which might be the worst case, the average case or something else. Saying "The average running time of Quicksort is O(n^2)" is technically correct, although misleading because the bound is not tight. Even O(n^3) would be correct. For a tight bound, it's best to use Big Theta notation. If you say the (average/worst case) running time is θ(n log n), you mean it's not better or worse than n log n (up to constant factors).
@Splish_Splash · 2 years ago
@@sithdev8206 I understood, but I am referring to Wikipedia, which says quicksort has O(n^2) in the worst case and O(n log n) in the average and best cases. You can't say that quicksort has O(n log n) in the worst case. So I am a bit confused by that.
@sithdev8206 · 2 years ago
@@Splish_Splash I'm not entirely sure I understood your question. Are you confused about big O notation in general or about the running time of Quicksort in particular?

What I can say about big O notation: many people use it when they actually mean a tight bound (as in big θ notation). The running times you mentioned are all tight.

Regarding Quicksort: the worst case running time is θ(n^2) because there is a small chance of picking lots of really bad pivot elements. But usually (on average) Quicksort is really fast: θ(n log n).
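The gap between quicksort's θ(n^2) worst case and its θ(n log n) average case can be observed by counting comparisons in a simple out-of-place quicksort with a deliberately bad (first-element) pivot; this is a sketch, not the usual in-place implementation:

```python
import random

def quicksort_comparisons(items):
    """Sort with a first-element pivot, counting one comparison per partitioned element."""
    count = 0

    def sort(xs):
        nonlocal count
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        count += len(rest)  # each element is compared against the pivot
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return sort(left) + [pivot] + sort(right)

    sort(items)
    return count

n = 500
worst = quicksort_comparisons(list(range(n)))  # already sorted: every pivot is minimal
random.seed(0)
data = list(range(n))
random.shuffle(data)
average = quicksort_comparisons(data)

print(worst)    # 124750, i.e. n*(n-1)/2 - quadratic
print(average)  # a few thousand - roughly proportional to n log n
```

A sorted input forces the worst case here precisely because the first element is always the minimum, so every partition puts all remaining elements on one side.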
@sithdevyoungscientists1556 · 2 years ago
Nice video!
@cheshire1 · 2 years ago
2:50, don't these variables take log(n) memory? n has at least log(n) digits.
@sithdev8206 · 2 years ago
Technically, you're right. Throughout this series, I'm assuming that integer variables take constant space, just as integer operations take constant time. This is a common simplification when analyzing algorithms. Of course, it doesn't make sense in certain scenarios (e.g. most number-theoretic algorithms).
@Sylfa · 1 year ago
That only matters if you're measuring the complexity by hand - as in, how much space on paper you need to write on. The program itself would store those variables in the same amount of memory no matter the size of the array. So no, it's not log(n), not even technically speaking.
@yeet7060 · 2 years ago
nice
@ralph_d_youtuber8298 · 1 year ago
The best of all n! 😂
@Bolpat · 2 years ago
FYI, the Ο and ο in the notation are Omicrons, as opposed to Ω and ω, which are Omegas; it's _micro_ vs. _mega,_ you see? Also, the fancy big-O is wrong: the symbol should always be upright (like sin for the sine function; it is not a variable).
@itellyouforfree7238 · 2 years ago
That seems to be false: en.wikipedia.org/wiki/Big_O_notation#History_(Bachmann%E2%80%93Landau,_Hardy,_and_Vinogradov_notations)

"The letter O was chosen by Bachmann to stand for Ordnung, meaning the order of approximation."

The big-O originally stands for "order of" ("Ordnung", Bachmann 1894), and is thus a Latin letter. Neither Bachmann nor Landau ever called it "Omicron". The symbol was only much later (1976) viewed by Knuth as a capital omicron, probably in reference to his definition of the symbol Omega.
@itellyouforfree7238 · 2 years ago
Also, typesetting it in italic is perfectly fine. C¹ and L¹ are not variables either, but they are typeset in italic. Again, from en.wikipedia.org/wiki/Big_O_notation#Typesetting:

"Big O is typeset as an italicized uppercase "O", as in the following example: $O(n^{2})$. In TeX, it is produced by simply typing O inside math mode. Unlike Greek-named Bachmann-Landau notations, it needs no special symbol. Yet, some authors use the calligraphic variant $\mathcal{O}$ instead."
@eazypeazy8559 · 1 year ago
cool!)