Your content is very well researched and thorough. I learned so much from this video. Thank you!
@harshkr3236 · 1 day ago
Thank you, brother!
@sheeshmam1890 · 1 day ago
Does "ZooKeeper" here refer to Kafka's ZooKeeper, or is it some other tool used to generate the counter?
@G3K-n2j · 4 days ago
Best explanation ever. Thanks for the quality videos.
@G3K-n2j · 4 days ago
Great explanation.
@ujjvalsharma5055 · 6 days ago
The logic is basically how AWS S3 multi-part upload works.
@stankafan6688 · 8 days ago
What if one of the servers goes down and the range falls under that server only?
@soumithjavvaji3310 · 9 days ago
Kudos to your efforts on the video — awesome explanation!
@PranjalVerma-ib3gb · 11 days ago
Bro dropped the best system design videos and disappeared. Really appreciate your work. Thanks a lot.
@cartman-d6o-n1y · 12 days ago
Could you say whether the following estimations are correct? (The method is what matters; the scaled-up numbers are hypothetical.)

Say we scale the TPS requirement for the database transaction workload to 1000, roughly distributed as 75% read and 25% write, with every write replicated to at least 3 servers, giving a write amplification of 3. This gives us:

IO per transaction = 3 reads + 1 write × 3 write amplification = 6
IOPS = 1000 × 6 = 6000 IOPS

Suppose we are using commodity disks with these specifications (1 TB SATA, 7200 RPM):
- A 7200 RPM HDD can sustain 100-160 MB/s for large, sequential reads or writes.
- Limited to 100-160 IOPS due to mechanical constraints (seek time and rotational latency).

Can we estimate the number of disks required to fulfil the transactional workload as:
Number of disks = Total IOPS / Disk IOPS = 6000 / 160 = 37.5

Similarly, for the upload workload, we expect 100 PB/year, which is equivalent to ~274 TB/day or ~3.17 GB/s. Taking a typical large block for sequential access as 1 MB, this gives us:

IOPS = (Bandwidth × Write Amplification) / IO size = 3170 MB/s × 3 / 1 MB = 9510 IOPS

Can we estimate the number of disks required to fulfil the upload workload as:
Number of disks = Total IOPS / Disk IOPS = 9510 / 160 = 59.4

We could also have estimated this from disk throughput and bandwidth:
Number of disks = (Bandwidth × Write Amplification) / Per-disk throughput = 9510 MB/s / 160 MB/s = 59.4

Let me know what you think!
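A quick sanity check of this arithmetic, sketched in Python (all workload figures are the commenter's assumptions, not measured numbers):

```python
# Transactional workload (commenter's assumptions)
tps = 1000                             # scaled-up TPS target
reads_per_txn = 3                      # ~75% of IOs are reads
writes_per_txn = 1
write_amplification = 3                # each write replicated to 3 servers

io_per_txn = reads_per_txn + writes_per_txn * write_amplification  # = 6
total_iops = tps * io_per_txn                                      # = 6000

disk_iops = 160                        # upper bound for a 7200 RPM SATA HDD
print(total_iops / disk_iops)          # 37.5 disks

# Upload workload: 100 PB/year ~ 3.17 GB/s, written as 1 MB sequential IOs
bandwidth_mb_s = 3170
io_size_mb = 1
upload_iops = bandwidth_mb_s * write_amplification / io_size_mb    # = 9510
print(upload_iops / disk_iops)         # ~59.4 disks
```

The two upload estimates agree because with 1 MB IOs the per-disk IOPS limit (160) and the per-disk throughput limit (160 MB/s) happen to coincide.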
@IshitaSharma-d4s · 14 days ago
Amazing explanation of the bitmap.
@nehagour6928 · 15 days ago
For a distributed system here:
1) Distributed locking
2) Conflict resolution using versioning
3) Atomic operations using Redis INCR to safely modify a counter in a shared resource (a minimal sketch is below)
4) Most important: a consensus algorithm to maintain consistency across nodes
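A minimal sketch of point (3), assuming a local Redis instance and the redis-py client:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

def next_id(counter_key: str = "url:counter") -> int:
    # INCR executes atomically inside Redis, so concurrent workers
    # never receive the same counter value; no external lock needed.
    return r.incr(counter_key)
```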
@codecrash_t132 · 19 days ago
Such an amazing explanation... I am 100% sure that I will never, ever forget these concepts.
@ReenaSahore · 21 days ago
Thank you for the knowledge sharing.
@guptarishabh412 · 24 days ago
Am I the only one who thinks it was a roast video of Chrome at 7:00???
@padam2072 · 24 days ago
So beautifully explained.
@anthony9242 · 1 month ago
Thanks, Naren!!!
@karthikkumar5213 · 1 month ago
In the Availability section, you showed 1 and 2 as nodes. How does data distribution happen between nodes? What is the partition strategy? How can we redirect reads/writes to the specific nodes where the data was previously written? How do we decide the number of nodes required? Can you provide info on all of these, or refer me to a video?
@dashofdope · 1 month ago
We went from estimating 90% bloom filter accuracy to only missing one out of 10 million.
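For context, a Bloom filter's false-positive rate follows p ≈ (1 - e^(-kn/m))^k, so "90% accuracy" versus "one in 10 million" is purely a sizing choice. An illustration with made-up parameters:

```python
import math

def false_positive_rate(m_bits: int, k_hashes: int, n_items: int) -> float:
    # Classic approximation: p ~ (1 - e^(-k*n/m))^k
    return (1 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

# Under-provisioned filter: ~21% false positives
print(false_positive_rate(m_bits=10_000_000, k_hashes=3, n_items=3_000_000))

# Generously sized filter: ~2.8e-7, a few false positives per 10 million
print(false_positive_rate(m_bits=400_000_000, k_hashes=10, n_items=10_000_000))
```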
@pravinnayak7467 · 1 month ago
Amazing video. Just one question: why not UUID v4? The collision rate is next to zero, even if every person on Earth generates one UUID every second.
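A hedged aside on this question: UUIDv4 collision risk is indeed negligible, but the usual objection in URL shorteners is length rather than collisions, since a v4 UUID is 36 characters while a counter-based base62 code stays short:

```python
import string
import uuid

print(len(str(uuid.uuid4())))        # 36 characters: too long for a short URL

ALPHABET = string.digits + string.ascii_letters   # 62 symbols

def base62(n: int) -> str:
    # Encode a sequence number compactly; 7 chars cover ~3.5e12 ids.
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out)) or "0"

print(base62(123456789))             # '8m0Kx' — a 5-character code
```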
@shadyabhi · 1 month ago
Great intro, thank you for explaining this. I wonder if we should also talk about the "tested folder and files" that need to be shared across hosts, and whether a Merkle tree would help here?
@kar3817hr · 1 month ago
Hi Narendra, thanks for the great content. I had one question about storing the content in the DB and retrieving it: how will the data be stored with all the spaces and line breaks?
@faizalvasaya2245 · 1 month ago
Hey, why are you not creating new system design videos?
@Code_JAVA268 · 1 month ago
Great video!
@rollercoasterer · 1 month ago
Does everyone think this is still a good design in 2024? During an interview, whenever you finish the design, you should go back to your requirements and check that each one is satisfied.
@CodewithKing360 · 1 month ago
You saved my 8 marks in the exam.
@sauravawesome · 1 month ago
If we use locks to synchronize between two users by segmenting the doc into smaller portions and allowing only one user to edit a portion at a time, that will cause a very bad user experience: when the lock is released for another user, the doc suddenly updates with the changes made by the first user.
@thirumalainambi6068 · 1 month ago
Improve your English; it's irritating.
@udayiscool · 2 months ago
So the master becomes a single point of failure?
@ayushjindal4981 · 2 months ago
At 20:00, at the 5th second we assign the lock in the name of P2, but P1 is still holding the lock, which means it can still access the shared resource. So how does this ensure mutual exclusion?
@harvendrasinghrathore2848 · 29 days ago
If the transaction is not committed within the TTL, it should get discarded. The transaction should only succeed if the lock is released by the process itself; otherwise it will lead to inconsistency in the system.
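One common answer to the mutual-exclusion concern above is a fencing token (a hedged sketch; the resource API here is hypothetical): the lock service hands out a monotonically increasing token with each grant, and the shared resource rejects writes carrying a token older than the newest one it has seen, so a holder whose TTL expired (P1) cannot clobber the new holder (P2):

```python
class FencedResource:
    """Shared resource that accepts writes only with a fresh fencing token."""

    def __init__(self) -> None:
        self.highest_token_seen = -1
        self.value = None

    def write(self, token: int, value: str) -> bool:
        if token < self.highest_token_seen:
            return False               # stale holder, e.g. P1 after TTL expiry
        self.highest_token_seen = token
        self.value = value
        return True

resource = FencedResource()
resource.write(token=33, value="P1's write")            # accepted
resource.write(token=34, value="P2's write")            # accepted, newer token
print(resource.write(token=33, value="late P1 write"))  # False: rejected
```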
@udayiscool · 2 months ago
Can we have one API gateway per page in the web application?
@aruneshsrivastava7154 · 2 months ago
It gets more confusing.
@shubhamkalla6489 · 2 months ago
What happens if ZooKeeper goes down? Also, if I run multiple insert queries for the same URL, it isn't checking for the long URL in the DB; won't that create duplicate entries? The short code is unique, but the data is dirty.
@yeehaw-s7k · 2 months ago
Very useful video, thank you!!
@mdtalibalam9743 · 2 months ago
Very insightful video. Could you also please tell me the book or document you referred to in order to understand this?
@rickyz-wr2de · 2 months ago
Your previous version was much better than this. Why did you change it?
@kamleshpar9847 · 2 months ago
Good explanation. I have one doubt: will both the active and passive matching engines be able to consume the same message from the queue at the same time, if it's point-to-point messaging?
@madhuj3683 · 2 months ago
Love your videos... Thank you so much for sharing.
@abdullahjamal · 2 months ago
I think instead of ZooKeeper, we could also use a Postgres sequence, incrementing the sequence in batches of, say, 1000.
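A sketch of this idea, assuming psycopg2 and a sequence created as CREATE SEQUENCE shorturl_seq INCREMENT BY 1000 (the connection string is hypothetical). Each nextval() call then reserves a whole block of 1000 ids that the app server can hand out locally without further DB round-trips:

```python
import psycopg2

conn = psycopg2.connect("dbname=urls user=app")  # hypothetical connection

def reserve_id_block(block_size: int = 1000) -> range:
    # With INCREMENT BY 1000, successive nextval() calls return
    # 1, 1001, 2001, ... — the start of each reserved block.
    with conn.cursor() as cur:
        cur.execute("SELECT nextval('shorturl_seq')")
        start = cur.fetchone()[0]
    return range(start, start + block_size)
```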
@RohitSharma-ku9lt · 2 months ago
I don't know what I need to say. I understood everything; this is the first system design video of my life. Even in my dreams I could explain this to anyone. Thanks, sir.
@cimey06 · 2 months ago
100K / 24 / 3600 should be approximately 1.16, not 150.
@e431215 · 2 months ago
Great video. I feel the Kafka/queue between the routers (for Apple, Google, Tesla) will defeat the purpose of the cache. Since we already have primary and secondary matching engines, the request should hit the primary directly, and then be placed onto Kafka.
@roshanvichare6924 · 2 months ago
How will the server know the UID of user B, who registered only after A sent the message? Initially it shouldn't send the message to B, since B isn't registered, right?
@EyeofBOTTA · 2 months ago
I'm unable to understand... because of your English slang.
@Minipravin2 · 2 months ago
This is one of the worst videos I have watched. Poor quality.
@sridevi-jp3ic · 2 months ago
Excellent teaching; each topic is crystal clear.
@1030Celtic · 3 months ago
What happens when I want the same long URL to have the same short URL? Is it possible? I want different users to reuse short URLs for the same long URL. Also, how will sharding work here with NoSQL (chosen for high read/write)? And what will a record look like?
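On the first question, a hedged sketch: if the short code is derived deterministically from the long URL (for example a truncated hash), every user maps the same long URL to the same short URL; collisions on the truncated hash would still need a fallback, such as re-hashing with a salt:

```python
import hashlib

def short_code(long_url: str, length: int = 7) -> str:
    # Same input always yields the same code, for every user.
    digest = hashlib.md5(long_url.encode("utf-8")).hexdigest()
    return digest[:length]

print(short_code("https://example.com/some/very/long/path"))
```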
@OMERHAFEEZ-y7r · 3 months ago
Thanks.
@rogerjin7259 · 3 months ago
How do we ensure low-latency processing if the design uses distributed queues (over the network) in the middle of the trading flow? Shouldn't all these processes live within the memory of the matching server to eliminate network hops?