That’s the explanation that I was looking for. Thanks man!
@whiteboard1335 · 1 month ago
Great video! One point to note regarding the bandwidth reduction through video chunking: while this won't actually save bandwidth, it does make the system more resilient. Clients can perform parallel operations, enable progressive loading, etc. However, the QPS will likely increase due to multiple requests for the same video (which, as mentioned at the end, can be managed by a CDN).
@YordanPetrov-hd9vz · 1 month ago
Comment for the algo. Thanks for the great content!
@CS090Srikanth · 1 month ago
But calculating the prefix sum takes O(n) time, as you traverse the whole array, and O(n) space, as you store a prefix-sum array of size n. The first approach has O(n) time with constant space, so I think the first approach is better. Prefix sums are helpful in many other problems, but I think not in this one, as the first approach already has lower time and space complexity 🤔
@orkhan-1 · 1 month ago
For the prefix sum approach we run the calculate_prefix_sum function only once and store the result as a global variable. When we call sum_range, it reads the values at the given indexes from the global variable; thus, constant run time. I included the code in the description for both cases.
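A minimal sketch of the pattern described above (the calculate_prefix_sum and sum_range names follow the comment; the actual code in the description may differ, and the global variable is passed explicitly here):

# Build once: prefix[i] holds the sum of nums[0..i-1]; O(n) time and space.
def calculate_prefix_sum(nums):
    prefix = [0] * (len(nums) + 1)
    for i, x in enumerate(nums):
        prefix[i + 1] = prefix[i] + x
    return prefix

# Each range query is then O(1): sum of nums[left..right] inclusive.
def sum_range(prefix, left, right):
    return prefix[right + 1] - prefix[left]

prefix = calculate_prefix_sum([2, 1, 4, 6])   # computed only once
print(sum_range(prefix, 1, 2))                # 5

This is why the per-query cost is constant: the O(n) work happens a single time up front and is amortized over all subsequent sum_range calls.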
@Str1x87 · 1 month ago
Great video! Go ahead with the system design
@orkhan-1 · 1 month ago
Thanks!
@johnvaldez7155 · 2 months ago
Hi, at 14:55 the calculation of the number of servers is based on the QPS (RPS) and the server capacity. Technically, how did you come up with 100 for the server capacity? What are the assumed parameters behind it? I get that there is research on the specs of particular AWS EC2 instances and their max QPS, but in this example, what is the basis for assuming 100? Thanks.
@orkhan-1 · 2 months ago
Thank you for your comment. I took the QPS that most modern servers can handle on average. As I mentioned at 18:50, each case is different and the precise QPS will depend on a variety of factors.
@johnvaldez7155 · 2 months ago
@@orkhan-1 I understand. However, how do you define the average modern server? In general, did you define something like a 2 vCPU, 4 GB server as "1 server capacity"? Since this is a capacity planning video, I understand that it may be under- or over-planned, but what is the "average" server capacity in this case? The "100" server capacity is very vague, so it's hard to plan with; otherwise we have no choice but to derive precise values from testing, which would defeat the purpose of the "planning" objective. Hope you can shed light on this question.
@orkhan-1 · 1 month ago
@@johnvaldez7155 It's a relative term. For an application of this scale (1 million DAU), an average server would start with 4 vCPUs and 16 GB of RAM, which is enough to handle approximately 100 QPS.
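For readers who want to reproduce the estimate, a back-of-the-envelope sketch; the 1 million DAU and ~100 QPS per server come from this thread, while the requests-per-user and peak factor are assumed purely for illustration:

import math

dau = 1_000_000           # daily active users (from the video)
requests_per_user = 10    # assumption: average requests per user per day
peak_factor = 2           # assumption: peak traffic relative to average

avg_qps = dau * requests_per_user / 86_400   # 86,400 seconds per day
peak_qps = avg_qps * peak_factor             # ~231 QPS

qps_per_server = 100      # the ~4 vCPU / 16 GB server from the reply above
servers = math.ceil(peak_qps / qps_per_server)
print(round(peak_qps), servers)              # 231 3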
@miroslavrehounek8877 · 2 months ago
Great video! One thing I noticed at 22:19: shouldn't the video QPS increase because of the extra requests we are now making to upload the videos in chunks?
@orkhan-1 · 2 months ago
Well spotted! Indeed it will eventually increase the QPS, but considering that we can apply almost all of the measures at 28:35 to decrease incoming bandwidth, we may approximate that we'll manage to keep the bandwidth at 440 Mbps or lower. Thank you for the comment!
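A quick illustration of why chunked uploads multiply the request count without changing the total bytes moved; the video and chunk sizes here are assumptions, not figures from the video:

video_size_mb = 50     # assumption: average upload size
chunk_size_mb = 5      # assumption: chunk size
uploads_per_sec = 1    # assumption: concurrent upload rate

# Without chunking: one request per video.
# With chunking: one request per chunk, but the same total bytes.
chunks_per_video = video_size_mb / chunk_size_mb      # 10 chunks
upload_qps = uploads_per_sec * chunks_per_video       # 10x the requests
bandwidth_mbps = uploads_per_sec * video_size_mb * 8  # 400 Mbps either way
print(upload_qps, bandwidth_mbps)                     # 10.0 400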
@razerx · 2 months ago
Great video. Saved for future reference. :D
@orkhan-1 · 2 months ago
Thank you!
@rkreddy2699 · 2 months ago
Hey, can you clarify why the visited array is required when we're already tracking currentVisitedNodes in a set? Can't we use that?
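For context, here is a minimal sketch of the pattern this question usually refers to, assuming a DFS cycle check (names are illustrative, not the video's code): the current-path set only tracks the active recursion stack, while visited remembers nodes already fully explored across all DFS calls so they are never re-walked.

def has_cycle(graph):                  # graph: node -> list of neighbors
    visited = set()                    # fully explored, proven cycle-free
    def dfs(node, on_path):
        if node in on_path:            # back edge on the current path -> cycle
            return True
        if node in visited:            # already proven safe; skip re-exploring
            return False
        on_path.add(node)
        if any(dfs(nb, on_path) for nb in graph.get(node, [])):
            return True
        on_path.remove(node)
        visited.add(node)              # dropping this keeps correctness but
        return False                   # re-walks shared subgraphs repeatedly
    return any(dfs(n, set()) for n in graph)

print(has_cycle({1: [2], 2: [3], 3: [1]}))   # True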
@Str1x87 · 2 months ago
Great video. Thanks!
@orkhan-1 · 2 months ago
Thank you!
@evko9264 · 3 months ago
What are the alternative solutions?
@orkhan-1 · 3 months ago
Given the time and space complexity requirements, I believe this is the optimal solution. Other approaches, such as sorting or using a set, would increase the complexity in one way or another.
@helloomkar · 3 months ago
Thank you!
@gowtham7888 · 3 months ago
Nice explanation! Expecting more content like this.
@orkhan-1 · 3 months ago
Thank you!
@azharahmad3595 · 3 months ago
You didn't take into account even and odd numbers of elements. There will be a case where fast.next.next throws a NullPointerException; we need to have that check.
@orkhan-1 · 3 months ago
Thank you for your comment! Before calling fast.next.next we make sure that fast.next is not null, which helps us avoid an NPE.
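A sketch of the guard being discussed, in Python for brevity (the video's code is Java, where the same check prevents the NullPointerException); this is the classic fast/slow middle-of-list walk:

class ListNode:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def middle(head):
    slow = fast = head
    # fast.next is verified before fast.next.next is ever touched,
    # so both even- and odd-length lists terminate safely.
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
    return slow

head = ListNode(1, ListNode(2, ListNode(3, ListNode(4))))
print(middle(head).val)   # 3: the second middle of an even-length list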
@sabarishkrishnamoorthi7985 · 4 months ago
I am your 400th subscriber, keep growing, we will support you. Thanks for the video, keep it up!
@orkhan-1 · 4 months ago
Thank you!
@crekso398 · 4 months ago
Thanks a lot, it was a clear explanation.
@md_pedia1 · 4 months ago
That was really helpful.
@orkhan-1 · 4 months ago
Thanks!
@NibinBinoy3248 · 4 months ago
Great explanation ❤
@orkhan-1 · 4 months ago
Thanks!
@bairunagarajuindian8509 · 4 months ago
Thank you❤
@johnlocke4695 · 4 months ago
Excellent explanation. Keep it going
@orkhan-1 · 4 months ago
Thanks!
@RəvanMuradov-y7q · 5 months ago
Amazing explanation!
@orkhan-1 · 5 months ago
Thanks!
@FrezoreR · 5 months ago
Generally a great video. I think it would help the viewers if you explained in the code section why you have +1 for the target.
@orkhan-1 · 5 months ago
Thank you for your comment! Since the nextInt method generates a number in the range 0 to totalSum - 1, we add +1 to ensure that our target is in the range 1 to totalSum.
@FrezoreR · 5 months ago
@@orkhan-1 Thanks for the answer. Why not generate directly up to totalSum? I.e., why do we want the min value to be 1? It might be an obvious question, but I just want to make sure I fully understand the solution :)
@orkhan-1 · 5 months ago
By requirement, the array only contains positive integers; thus, the random pick must be between 1 and totalSum.
@FrezoreR · 5 months ago
@@orkhan-1 Ah yes. I was thinking of the requirement that said: "randomly picks an index in the range [0, w.length - 1]". But that translates to 1 to totalSum in our prefixSum, since the smallest sum will be 1 if all weights are 1 or larger. Let me know if I got that wrong somewhere. It helps to paraphrase it :)
@orkhan-1 · 5 months ago
You're absolutely right! The range [0, w.length - 1] refers to indexes, whereas we need a number between 1 and totalSum.
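Putting the whole exchange together, a compact sketch in Python; random.randint(1, total) is inclusive on both ends, which mirrors the Java nextInt(totalSum) + 1 discussed above (class and method names follow the LeetCode problem statement):

import bisect
import random
from itertools import accumulate

class Solution:
    def __init__(self, w):
        self.prefix = list(accumulate(w))    # e.g. [1, 3] -> [1, 4]

    def pickIndex(self):
        total = self.prefix[-1]
        target = random.randint(1, total)    # 1..totalSum, like nextInt(total) + 1
        # first index whose running sum reaches the target
        return bisect.bisect_left(self.prefix, target)

s = Solution([1, 3])
print(s.pickIndex())   # 0 with probability 1/4, 1 with probability 3/4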
@ElFox · 5 months ago
One more short solution:

class Solution(object):
    def countElements(self, nums):
        min_el = min(nums)
        max_el = max(nums)
        return sum(1 for num in nums if min_el < num < max_el)
@orkhan-1 · 5 months ago
Thanks!
@ElFox · 6 months ago
Here is one more interesting solution:

from typing import List

class Solution:
    # Time Complexity: O(n)
    # Space Complexity: O(1)
    def numPairsDivisibleBy60(self, time: List[int]) -> int:
        remainder_counts = [0] * 60
        count = 0
        for t in time:
            remainder = t % 60
            if remainder == 0:
                count += remainder_counts[0]
            else:
                count += remainder_counts[60 - remainder]
            remainder_counts[remainder] += 1
        return count
@ElFox · 6 months ago
Example in Python (using sorting):

from collections import Counter

class Solution:
    def frequencySort(self, s: str) -> str:
        # Count character frequencies using Counter
        char_counts = Counter(s)
        # Sort characters by frequency (descending) and then by character
        sorted_chars = sorted(char_counts.items(), key=lambda item: (-item[1], item[0]))
        # Build the result string character by character
        result = []
        for char, count in sorted_chars:
            result.append(char * count)
        return ''.join(result)
@orkhan-1 · 5 months ago
Thanks!
@ElFox · 6 months ago
Example in Python (using a priority queue):

import heapq
from collections import Counter

class Solution:
    def frequencySort(self, s: str) -> str:
        # Count character frequencies using Counter
        char_counts = Counter(s)
        # Push (negated frequency, character) tuples so the min-heap acts as a max-heap
        min_heap = [(-count, char) for char, count in char_counts.items()]
        heapq.heapify(min_heap)  # Build the heap
        # Build the result string character by character
        result = []
        while min_heap:
            count, char = heapq.heappop(min_heap)
            result.append(char * -count)  # Append character repeated by its count
        return ''.join(result)  # Join the characters from the result list
@orkhan-1 · 5 months ago
Thanks!
@ElFox · 6 months ago
Solution in Python:

import heapq
from collections import Counter
from typing import List

class Solution:
    def topKFrequent(self, nums: List[int], k: int) -> List[int]:
        # Creating the counter: O(n), where n is the length of nums.
        # Building the heap and extracting elements: O(k log n) using heapq.nlargest.
        # Overall: O(n + k log n), which is approximately O(n) in most cases
        # since k is typically much smaller than n.
        counter = Counter(nums)
        # Space: the counter dictionary is O(n) in the worst case (all elements
        # unique); the heap used by nlargest is O(k). Overall: O(n + k) ≈ O(n).
        return heapq.nlargest(k, counter.keys(), key=counter.get)
@orkhan-1 · 6 months ago
Thanks!
@throwawayaccount1389 · 9 months ago
After hours of searching, I found the perfect video.
@orkhan-1 · 9 months ago
Thanks!
@throwawayaccount1389 · 9 months ago
Please also explain the time and space complexity in your videos @@orkhan-1
@varunpunia11 · 9 months ago
It's because of you that I finally understand this question... Love from India!
@orkhan-1 · 9 months ago
Thanks!
@Nerddog12344 · 9 months ago
You really deserve more views ❤
@orkhan-1 · 9 months ago
Thanks!
@Nerddog12344 · 9 months ago
Nice explanation
@orkhan-1 · 9 months ago
Thanks!
@amitjosh79 · 9 months ago
Doesn't Python have a red-black tree API to get floor & ceil keys?
@orkhan-1 · 9 months ago
Python doesn't have a built-in red-black tree data structure in the standard library. However, the third-party sortedcontainers library provides a SortedDict class (built on sorted lists rather than a tree) that supports the same ordered-key operations, including floor and ceiling lookups.
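For example, a small sketch of floor/ceiling lookups with SortedDict (sortedcontainers is a third-party package: pip install sortedcontainers):

from sortedcontainers import SortedDict

sd = SortedDict({10: 'a', 20: 'b', 30: 'c'})

def floor_key(sd, key):
    # index of the first key greater than `key`, minus one -> largest key <= key
    i = sd.bisect_right(key) - 1
    return sd.peekitem(i)[0] if i >= 0 else None

def ceiling_key(sd, key):
    # index of the first key >= `key` -> smallest key >= key
    i = sd.bisect_left(key)
    return sd.peekitem(i)[0] if i < len(sd) else None

print(floor_key(sd, 25), ceiling_key(sd, 25))   # 20 30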
@karibui494 · 9 months ago
It must be noted that you can only remove the second loop if k is fixed; if k can change, you will have the same time complexity of O(k(n-k)).
@edwinndiritu9802 · 9 months ago
A code example would be nice
@orkhan-1 · 9 months ago
Added in the description.
@MRT122YT · 9 months ago
arr = [2, 1, 4, 6, 5, 1, 3, 5, 2, 3, 4]
k = 3
start = 0
summ = 0
max_summ = 0
for end in range(len(arr)):
    summ += arr[end]
    if end - start + 1 == k:
        max_summ = max(max_summ, summ)
        summ -= arr[start]
        start += 1
print(max_summ)
@wobby_1974 · 9 months ago
I think it's a bit misleading to say that the naive algorithm has a time complexity of O(n²), as the runtime depends on both the array size n and the number of consecutive elements k. If the array size increases and k stays constant, the algorithm actually has a linear runtime; only if both increase does the total work grow quadratically. Which, of course, the second algorithm solves.
@orkhan-1 · 9 months ago
Thank you for your comment! Since k, like the size of the array, may vary with the input, the worst-case time complexity is O(n²).
@warvinn · 9 months ago
@@orkhan-1 The worst-case performance is only O(n²) when k grows with the input size, which would be unusual but not impossible. Generally speaking, it is preferred to call this time complexity O(k⋅n).
@dewpta4717 · 9 months ago
I think O(n²) is an okay bound, but it's not tight. Additionally, k = n would not be the worst case, because then there is just one window and we can simply sum up the array to get the solution. Alternative analysis: for an array of size n there are n - k + 1 subarrays (windows) of size k > 0, and for each of these windows we need to sum up k items, so the total number of operations is (n - k + 1) * k => O(k(n - k)).
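To make the comparison concrete, a sketch of the naive version under discussion (contrast with the sliding-window code above): it sums each of the n - k + 1 windows in k steps, i.e. (n - k + 1) * k additions in total.

def max_sum_naive(arr, k):
    best = 0
    # n - k + 1 windows, each summed in k steps -> O(k * (n - k + 1)) work
    for start in range(len(arr) - k + 1):
        best = max(best, sum(arr[start:start + k]))
    return best

print(max_sum_naive([2, 1, 4, 6, 5, 1, 3, 5, 2, 3, 4], k=3))   # 15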
@luiscanterohernandez6668 · 10 months ago
Good video
@orkhan-1 · 10 months ago
Thank you!
@puppergump4117 · 10 months ago
Is this any different from just indexing the strings to compare them?
@orkhan-1 · 10 months ago
No. Indexing the strings is basically the two-pointers approach.
@puppergump4117 · 10 months ago
@@orkhan-1 Ok, I know some people directly increment/dereference pointers. They're old people though lol.