I had an interview last Friday (June 14) and I followed your exact steps. The question was to design Ticketmaster. The Redis cache solution was the best. Thank you for these amazing videos
@hello_interview6 ай бұрын
Nice! Hope you passed 🤞🏼
@griffsterb4 ай бұрын
Did you get an offer?
@vigneshraghuraman6 ай бұрын
by far the best System design interview content I've come across - please continue making these. you are doing an invaluable service!
@hello_interview6 ай бұрын
♥️
@rupeshjha47176 ай бұрын
Bro, please don't stop posting this kind of content. I've really loved all of your videos so far. I can relate to the kind of small, impactful problems and solutions you mention in your videos, which indirectly make a difference in interviews.
@hello_interview6 ай бұрын
I got you!
@crackITTechieTalks6 ай бұрын
I rarely comment on videos, but I couldn't stop myself from commenting on yours just to say: what valuable content. Thanks a lot for all your videos!! Keep it up.
@shoaibakhtar91943 ай бұрын
I did my Meta interview just last week and I was able to crack it. All thanks to you, brother. The system design round went extremely well. I followed the exact same approach for all the questions and everything went really well. Keep posting videos; this is the best system design content on the internet.
@hello_interview3 ай бұрын
Let’s go!!!! Congrats! Thrilled to hear that. Well done 👏🏼
@TechieTech-gx2kd29 күн бұрын
What problem did you get ?
@TimothyZhou03 ай бұрын
Damn this is extremely nuanced. Some of the big-picture improvements (like adding the parsing queue) seemed kind of obvious, but then Evan would optimize it with a neat detail (e.g. including link in request so we don't have to fetch from database) that was so simple and yet hadn't occurred to me. Great series, great content, thanks so much!
@jk266436 ай бұрын
Please please keep posting more! It educates so many people and you make the world better!! :) Absolutely the best system design series!
@hello_interview6 ай бұрын
🥲
@qwer816606 ай бұрын
By far the most inspiring, relevant and practical system design interview content. I found them really useful to perform strongly in my system design interviews
@hello_interview6 ай бұрын
Awesome! Congratulations 🎊
@AlexZ-l7f6 ай бұрын
Again the best System Design interview overview I ever met. Please keep doing it for us!
@hello_interview6 ай бұрын
🫡
@sinajafarzadeh95772 ай бұрын
I’m so glad to have found this channel. One of few system design resources that isn’t just performative but has actual substance!!
@alirezakhosravian94496 ай бұрын
I'm watching your videos to get prepared for my interview 4 days later, I hope I'll be able to handle it :DDD , so far the best SD videos I could ever find on youtube.
@hello_interview6 ай бұрын
Good luck!! You got this!
@TeachAManToPhish18 күн бұрын
How did your interview go? What SD question were you asked?
@KiritiSai932 ай бұрын
I've seen many videos related to system design, but your staff level knowledge shows when you are designing components! Amazing job 🥳
@ankitasinghai225 күн бұрын
The best explanation of how to design a web crawler I have seen or read yet. Keep up the good work :)
@hello_interview24 күн бұрын
Thank you 🫶
@davidoh09056 ай бұрын
This is such a great example for any kind of data application that needs asynchronous processing! Widely applicable!
@omerfarukozdemir53405 ай бұрын
Great content as always, thank you! Some comments about the design:
1. Concurrency within a crawler is going to bring a huge performance bonus.
2. An async framework for network IO is much faster than threading.
3. We can put the retry logic inside the crawler to keep things simpler.
4. DNS caching looks like overengineering, because DNS is already cached at multiple layers: the language runtime, the OS, the ISP, etc.
5. We process the HTML in another service, but we hash the HTML in the crawler; that seems wrong.
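Points 1 and 2 can be sketched with Python's asyncio. The fetch below is a stand-in that sleeps instead of doing real network IO so the sketch runs anywhere; with a real async HTTP client (e.g. aiohttp) you would swap in an actual request:

```python
import asyncio
import time

async def fetch(url: str) -> str:
    # Stand-in for a real request (e.g. aiohttp's `session.get(url)`).
    # Simulates ~50 ms of network latency.
    await asyncio.sleep(0.05)
    return f"<html>{url}</html>"

async def crawl(urls, max_concurrency: int = 100):
    # Bound in-flight requests so one worker doesn't open unlimited sockets.
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded_fetch(url):
        async with sem:
            return await fetch(url)

    return await asyncio.gather(*(bounded_fetch(u) for u in urls))

start = time.monotonic()
pages = asyncio.run(crawl([f"https://example.com/{i}" for i in range(200)]))
elapsed = time.monotonic() - start
# The 200 simulated 50 ms fetches overlap, so wall time is a small
# fraction of the ~10 s a serial loop would take.
```

The same structure with threads would need 100 OS threads; the event loop gets the overlap for the cost of one.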
@Dao007forever4 ай бұрын
5. You don't want to put the same content into blob storage twice. We are IO bound; computing a hash (e.g. SHA-256) is cheap.
@richbuckingham29 күн бұрын
@@Dao007forever Right, we want to dedupe on text content, not HTML. Instead, let the parsing worker do the content processing (that's the data we actually don't want duplicated) and then produce the hash. Even if two pages contain the same text content, all the extra HTML around it is extremely likely to include unique content per request, so the raw HTML will hash differently while the processed (clean) text hashes the same. The hashing work should absolutely not be done by the crawlers, because that output is not going to be useful.
@Dao007forever29 күн бұрын
@@richbuckingham That would still mean lots of duplicates, since multiple URLs can lead to the same page. We might want to dedupe twice if you care about the clean content, but we definitely have to dedupe at the HTML level.
@brijeshthakrar21063 ай бұрын
I've been building a web scraper on my own using similar logic, and after a month I found this. I swear this helped me a lot, but honestly it's good that I didn't see it on day 1; otherwise I wouldn't have learned things on my own. Great job, guys. PS: I found out about you from Jordan. Keep posting great content, both of you!!!
@rahulrandhiv4 ай бұрын
I watched this during the wait for my flight home from Goa :) and completed it
@hello_interview4 ай бұрын
💪
@Global_nomad_diaries6 ай бұрын
I am so, so, so thankful for all this content.
@tushargoyal5544 ай бұрын
I usually refrain from commenting, but this is by far the best explanation I've found for this problem statement. I work at Amazon, and using the message visibility timeout for exponential backoff is exactly what we do to add a one-hour delay for our retryable messages. One very minor practical insight: don't rely on the ApproximateReceiveCount attribute as a retry counter, because it's almost always inflated; the count goes up whenever a thread reads a message, even if it never processes it. I used a retry-count attribute set when putting the message on the queue and checked whether it exceeded the retry threshold.
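The retry-count pattern described here can be sketched like this. The boto3 re-enqueue call is shown only as a comment, since the queue URL and attribute names are assumptions; note SQS's DelaySeconds caps at 900 seconds, which is why longer waits use the visibility-timeout trick instead:

```python
def backoff_seconds(retry_count: int, base: int = 30, cap: int = 3600) -> int:
    """Exponential backoff with a one-hour cap: 30 s, 60 s, 120 s, ..."""
    return min(cap, base * (2 ** retry_count))

def handle_failure(message_attrs: dict, max_retries: int = 5):
    """Decide what to do with a message whose crawl attempt failed.

    Uses an explicit retry-count attribute carried on the message rather
    than SQS's ApproximateReceiveCount, which also counts reads that were
    never processed (the pitfall described in the comment above).
    """
    retries = int(message_attrs.get("retry_count",
                                    {"StringValue": "0"})["StringValue"])
    if retries >= max_retries:
        return ("dead_letter", None)  # e.g. re-send to a DLQ
    # Re-enqueue with retry_count + 1 and a delay, e.g. with boto3:
    #   sqs.send_message(
    #       QueueUrl=frontier_queue_url,            # hypothetical
    #       MessageBody=body,
    #       DelaySeconds=min(900, backoff_seconds(retries)),
    #       MessageAttributes={"retry_count": {
    #           "DataType": "Number", "StringValue": str(retries + 1)}})
    return ("retry", backoff_seconds(retries))
```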
@hello_interview4 ай бұрын
Super cool and good to know! Appreciate you sharing that
@TheKarateKidd6 ай бұрын
This is the first video of yours I watched and I loved it. Your pace is just right and you explain things well, so I didn't feel overwhelmed like I usually do when I watch systems design videos. Thank you!
@perfectalgos96415 ай бұрын
Thanks for this video. This is one of the best web crawler system design videos on the internet. With full preparation this runs about an hour; how do you manage it in the 35 minutes of a 45-minute interview?
@hello_interview5 ай бұрын
Yah, the hour here is because of all the fluff and teaching. It's reasonably 35 minutes without that.
@PiyushZambad5 ай бұрын
Thank you for making these videos so engaging! Your eloquent and logical style of explaining the concepts makes watching these videos so much fun.
@hello_interview5 ай бұрын
High praise! Right on :)
@eshw233 ай бұрын
Evan your explanations are extremely amazing and the best on this channel. Hope to hear more soon.
@TheKarateKidd6 ай бұрын
One of the first things that came to mind at the beginning of this problem was dynamic webpages. Most websites don't serve the majority of their content as simple HTML. To be honest, if I were interviewing a senior or above candidate, not mentioning dynamic content early on would be a red flag. I'm glad you included it at the end of your video, but I do think it's important enough to be mentioned early on.
@RafaelDHirtzPeriod24 ай бұрын
So sorry for being Microsoft Word, but on all of your videos THE APROACH is spelled incorrectly. Thank you so much for posting all your videos. Super helpful for all of us interviewees out there!
@hello_interview4 ай бұрын
🤦🏻♂️first person to notice this. Will fix next video!
@NeyazShafiАй бұрын
Excellent quality of content. Please do more of these.
@chongxiaocao57376 ай бұрын
Finally a new update! Appreciate it!
@technical34464 ай бұрын
A few inputs:
- The bandwidth calculation needs to factor in uploading the data to S3 as well. You'd probably also compress on upload, and HTML is fairly highly compressible.
- At that rate, the system will likely not be network-throughput bound but latency and connection-count bound. Assume each site takes 1 second to return the page: at 10k requests per second per node you'd need 10k open TCP connections, which is under the possible limit but will cause a number of perf issues.
- Memory requirements: 10k * 2 MB = 20 GB should be enough, but all of it is GC-able; less reusable memory and TCP connections.
- You'll likely be better off with a smaller node type, around 50 Gbps; pushing utilization beyond that on a single node is challenging and you'll hit other limits.
- Another optimization is to do the parsing and crawling in the same process, to avoid handing the HTML off to a separate process. You can also update the DB with all the links in one write.
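Plugging this comment's own figures into a quick script (assumed numbers: 10k in-flight requests, ~2 MB average page, ~1 s average latency, ~5x gzip ratio on HTML):

```python
# Back-of-envelope for a single crawler node.
in_flight = 10_000       # concurrent requests ~= open TCP connections at 1 s/page
avg_page_mb = 2          # average page size
pages_per_sec = 10_000   # throughput with ~1 s average latency per page

# Buffers held for pages currently in flight.
memory_gb = in_flight * avg_page_mb / 1000

# Megabytes/s -> gigabits/s for the download side.
download_gbps = pages_per_sec * avg_page_mb * 8 / 1000

# Upload to S3 roughly doubles traffic unless compressed;
# HTML often compresses ~5x or better with gzip.
upload_gbps_compressed = download_gbps / 5
```

With these assumptions: 20 GB of in-flight buffers and 160 Gbps of download traffic, which supports the comment's point that connection count and latency bite before raw instance bandwidth does.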
@sharanya_sr5 ай бұрын
Thank you for the great content, and congratulations on making this a go-to channel for system design. The content is refreshing, watch-once-never-forget material. I'd request a video on how to approach a problem we haven't seen before: what's the best we can do, like mapping it to a related system, or logically thinking through how the API/design would work while focusing on the problem asked?
@hello_interview5 ай бұрын
Cool idea, we'll give that a go!
@allenliu10654 ай бұрын
Best explanation of the Bloom filter, the Redis set, and the hash as a GSI.
@deshi_techMom3 ай бұрын
I absolutely love the details you go into, and you have great presentation skills! Super admirable! You just made system design interviews easier for me.
@zy33946 ай бұрын
love your content , learned a lot, please keep updating more. ❤
@itayyahimovitz863 ай бұрын
Great video! I would probably add a proxy component to this design for the part where the crawler makes the HTTP calls to fetch the HTML (and maybe for the DNS lookups as well). This is a critical part of designing a web crawler: you want to avoid making the calls directly from the network where the crawlers are deployed, in case all of your network's IP addresses get blocked, and also for security reasons, to isolate the outgoing network calls from your instances.
@thiernoamiroudiallo2451Ай бұрын
This is really fantastic content. Keep up the good work.
@randymujica1364 ай бұрын
In my opinion one of the most important bullets of your strategy is how you minimize the initial HLD and you make sure you deliver something that actually covers all the functional requirements. I find this calibration really valuable and not that easy to achieve, since as a Senior candidate, one can be tempted to go straight to deep dives without actually setting clearly that pause from HLD to deep dives. What do you recommend to get better at this?
@zayankhan32236 ай бұрын
This is one of the best system design interview videos. Kudos to you. I'd like to understand a little more about how we handle duplicate content. What if the content is 80% the same across two pages? A hash only works when pages are exactly the same.
@hello_interview6 ай бұрын
Yah, only exactly the same
@BhaskarJayaraman4 ай бұрын
Great content. In the deep dives around 52:41 ("when you get a new URL you'll put it on here it'll be undefined and then when we actually parse it we'll update this with") and 52:46 ("the real last craw time and with the S3 link which also would have been undefined so that would handle that"), I think you mean: when we actually crawl and download it, we'll update it with the last crawl time and the S3 link. Also, with DynamoDB the lookup will be O(1), not O(log n). It would be great if you had the DynamoDB GSI schema.
@IntSolver5 ай бұрын
Hey, thanks for your video. I've watched all your content and gained an immense amount of knowledge. I gave my E4 interview a week back, and my question was this one (with a slight variation: the crawling was done through an app deployed on 10k devices). I covered all the content you've presented here, in the same structure, and was able to dive deep into every part the interviewer asked about. I was expecting an offer but got rejected due to a "No Hire" in the design round. In retrospect, I found some people saying the Chord algorithm and a peer-to-peer crawler were expected. I still don't understand the cause of the no-hire, because the interviewer never hinted at anything and seemed aligned throughout. The experience was really heartbreaking. So I just wanted to leave this out here: even though I did my best, it wasn't my day (I guess). Thanks for your videos, nonetheless.
@hello_interview5 ай бұрын
So sorry to hear that, that’s such disappointing news to receive. It’s always a toss up. Keep your head high and best of luck with future endeavors 💪
@krishnabansal75316 ай бұрын
Suggestions: please mention the clarifying questions to ask for each specific problem. Even if the problem is well known, the panel still expects a few clarifying questions, especially from a senior candidate. Also, if you could cover company-specific expectations (if any) for the top MAANG companies, that would be excellent.
@letsgetyucky6 ай бұрын
commenting for the algo. thanks for excellent and free content!
@hello_interview6 ай бұрын
Legend 🫡
@letsgetyucky6 ай бұрын
@@hello_interview Feedback: really enjoyed the video! Would love if future videos were also mostly skewed towards deep dives. Suggesting other topics to research yourself (or hash out with others in the comments) is also super valuable. Finally, calling out the anti patterns that are being regurgitated (e.g. bloom filters) is very valuable as well.
@davidoh09056 ай бұрын
@@letsgetyucky is bloom filters a anti-pattern!? just curious!
@letsgetyucky6 ай бұрын
@@davidoh0905 During the deep dive, Evan says that Bloom filters come up in interviews because they appear in solutions in the popular interview prep books. But the prep books don't do a great job of discussing the tradeoffs of a Bloom filter versus more practical solutions. It's a nice theoretical structure, but in a real-world system you could do something simpler and just brute-force the problem.
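For reference, the data structure being debated fits in a few lines of Python (a toy sketch with made-up sizes, not anything from the video), which also makes its tradeoff concrete: false positives are possible, false negatives are not, so a "probably seen" answer may skip URLs that were never actually crawled.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash functions over a bit array."""

    def __init__(self, size_bits: int = 1 << 20, k: int = 4):
        self.size = size_bits
        self.k = k
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k independent positions by salting one cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: str) -> bool:
        # True may be a false positive; False is always correct.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

The "brute-force" alternative is just a set or a DB unique index: more memory, but exact answers and no tuning.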
@CS2dAVE6 ай бұрын
S Tier system design content! Another exceptional video 👏
@vzfzabc6 ай бұрын
Nice, thanks for the content. I also really appreciated the videos from the mock interview. I found that much more useful and would love to see more of those.
@hello_interview6 ай бұрын
Tougher there for privacy reasons. Requires explicit sign off from coach and candidate, but I'll see what I can do :)
@dibll6 ай бұрын
Hope you can create videos of the write ups done by other authors on HelloInterview in the near future. Love the content. Thank you!!
@hagridhaired8 күн бұрын
Love your content! Just subbed to premium on Hello Interview
@aishwarya71792 ай бұрын
Great video! How to draw the curved arrow like you did at 17:21? I tried looking up with excalidraw options but couldn't find it.
@bansalankur23 күн бұрын
Instead of a frontier queue, what if I store the data in a Postgres table, pick URLs from there, and change their state atomically? Would there be any scaling challenges?
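A sketch of the table-as-frontier idea, using SQLite so it runs anywhere (the table name and states are made up). The key part is the atomic state transition; in Postgres you would use the `FOR UPDATE SKIP LOCKED` form shown in the comments so competing workers never block on or double-claim a row. The usual scaling pains are lock contention on the hot pending rows and table bloat from high-churn updates, which is why dedicated queues win at very high throughput.

```python
import sqlite3

# In Postgres, claiming would look like:
#   UPDATE frontier SET state = 'claimed'
#   WHERE id IN (SELECT id FROM frontier WHERE state = 'pending'
#                FOR UPDATE SKIP LOCKED LIMIT 10)
#   RETURNING url;
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE frontier (id INTEGER PRIMARY KEY, url TEXT, state TEXT)")
conn.executemany("INSERT INTO frontier (url, state) VALUES (?, 'pending')",
                 [("https://a.example",), ("https://b.example",)])

def claim_one(conn):
    """Claim one pending URL, or return None if the frontier is drained."""
    row = conn.execute(
        "SELECT id, url FROM frontier WHERE state = 'pending' LIMIT 1").fetchone()
    if row is None:
        return None
    # The WHERE state='pending' guard makes the transition atomic: if another
    # worker claimed this row first, rowcount is 0 and we simply retry.
    cur = conn.execute(
        "UPDATE frontier SET state = 'claimed' WHERE id = ? AND state = 'pending'",
        (row[0],))
    return row[1] if cur.rowcount == 1 else claim_one(conn)

first = claim_one(conn)
second = claim_one(conn)
third = claim_one(conn)  # frontier is empty now
```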
@akshat31065 ай бұрын
I couldn't find any information saying AWS SQS has an inbuilt exponential backoff retry mechanism. Can anyone please share a link for that? Thanks a lot!
@hello_interview5 ай бұрын
On mobile but scroll through the comments. I linked the aws docs in response to another comment.
@akshat31065 ай бұрын
@@hello_interview Thanks for reply, but could not find it
@fran_sanchez_yt3 ай бұрын
@@hello_interview I haven't been able to find the link and I also wasn't able to find this exponential back-off feature mentioned in the SQS docs...
@davidoh09056 ай бұрын
Just in time!!!!
@willfzjАй бұрын
Great video! Just one question: in the last deep dive on crawler traps, your option is DFS with a max depth. Why don't we just use BFS for the crawl? Which is better, in your opinion, here?
@dho4492 ай бұрын
At 36:06 you discuss the visibility timeout in the context of Amazon SQS. You said the worker will send SQS a message once the HTML has been downloaded. What if it takes longer than 30 seconds to download the HTML and send that message? Is there the potential to duplicate the download if another worker pulls the message off the queue after it becomes visible again?
@princeofexcessАй бұрын
I was trying to find exponential backoff as a configuration option for SQS in the Serverless Framework, but I can't find it. Could anybody point me in that direction? I would handle it inside the function, with code that increases the visibility timeout based on the approximate receive count and returns the IDs of the records that threw an error, but that seems like a lot of code for something that would be great to just configure.
@vimalkumarsinghal5 ай бұрын
Thanks for sharing the SD on the web crawler. Question: how do we handle dynamic pages, subdomains, URLs that loop back to the same URL, and URLs with query strings? What's the best approach to identify duplicates? Thanks.
@hello_interview5 ай бұрын
May not totally understand the question, but you could just drop the query strings from extracted urls
@IdanKepten4 күн бұрын
what about using proxy servers to fetch the webpage instead of directly from the worker ?
@darkimchicat22 күн бұрын
Thanks for this video! What's the text editor canvas you're using? Seems very slick and intuitive to use
@jiananstrackjourney3704 ай бұрын
Great video! I have a question: is 5k requests per second realistic, even with the most powerful machine on EC2?
@aforty15 ай бұрын
Thanks for this! As for checking the hash at 57:00: wouldn't we already have the last hash, since we had to retrieve the URL record to get the lastCrawlTime before fetching the webpage?
@adrienbourgeois1082 ай бұрын
Both your designs and the way you explain them are top notch compared to the rest I've seen so far. One quick remark about your suggestion of using a Redis set to check whether content has already been "seen": I would not use the Redis Set data structure for this, because the whole set has to fit on one Redis node (I think you mentioned that in the video), so it does not scale out. Why not simply use the Redis String data structure? The key would be the hash of the content, so as long as the key is in Redis you know you have seen that content. And unlike the Set, this scales out, since different keys can live on different Redis nodes. Anyway, an in-memory distributed cache like Redis is more than likely unnecessary here: our bottleneck is downloading the HTML, so optimizing the seen-content check is not going to move the needle.
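The string-per-hash idea amounts to one `SET key 1 NX` per content digest. A minimal sketch, with a tiny in-memory stand-in playing the part of the Redis client so it runs without a server (the real call is shown in a comment):

```python
import hashlib

class FakeRedis:
    """Minimal stand-in implementing the one Redis command the sketch needs."""
    def __init__(self):
        self.store = {}

    def set(self, key, value, nx=False):
        # Mirrors redis-py: with nx=True, returns None if the key exists.
        if nx and key in self.store:
            return None
        self.store[key] = value
        return True

def first_time_seen(kv, content: str) -> bool:
    """One string key per content hash. Each key hashes to its own slot,
    so unlike a single giant Set this spreads across cluster nodes.
    Real call: kv.set(f"seen:{digest}", 1, nx=True, ex=ttl_seconds)."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    return kv.set(f"seen:{digest}", 1, nx=True) is True

kv = FakeRedis()
```

The `nx=True` makes check-and-mark a single atomic operation, so two crawlers racing on the same content can't both see "new".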
@davidoh09056 ай бұрын
If Kafka does not support retry out of the box, what exactly does that mean? If you don't commit, the offset doesn't move, which could potentially serve as a kind of retry? Also, could you compare this with a queueing service that supports retries, like SQS? A comparison of when to use Kafka vs SQS would be really good too. Message broker vs task queue might be their most frequent use cases, but it would be good to justify the choice in this scenario.
@MrCSFTW2 ай бұрын
Maybe a nit, or I'm not in the know, but SQS doesn't have built-in exponential retry, right? You'd need to implement it with ApproximateReceiveCount and by modifying the visibility timeout?
@MrCSFTW2 ай бұрын
God, my YouTube handle is unbearable. I apologize.
@microhan14Ай бұрын
Regarding the "do we already have this URL" check with the URL as the primary key: wouldn't that mean we never get fresh data from the same URL?
@undercovereconomist6 ай бұрын
Wow, the amount of depth here is absolutely insane. How can you compress so much information into a 1-hour interview? I learned so much from this video that I've never seen elsewhere, and it's all presented so elegantly and naturally. You speak clearly, no ums and ahs, no speeding up. You must be a great engineer at work! One thing I'm a bit unsatisfied about is duplicate content. Is it even possible to have completely duplicated content? Even between two different web pages, there might be only a few places where the content differs, and that would completely break our hash function, right? Do you know of any hash function that maps two mostly similar webpages close together? Do you see any role for word2vec or vector storage here?
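One classic answer to the "mostly similar pages" question (a sketch of a well-known technique, not something covered in the video) is SimHash: similar token sets produce fingerprints that differ in only a few bits, so near-duplicates can be detected via Hamming distance instead of exact hash equality. The example pages below are made up:

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    """Each token votes +1/-1 on every bit of its hash; the sign of the
    per-bit totals becomes the fingerprint. Unlike SHA-256, where one
    changed byte flips everything, a few changed tokens flip few bits."""
    weights = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.sha256(token.encode()).digest()[:8], "big")
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if weights[i] > 0)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

page_a = "breaking news the market rose today amid strong earnings"
page_b = "breaking news the market rose today amid strong earnings reports"
page_c = "recipe for sourdough bread with a long cold fermentation"
```

`page_a` and `page_b` differ by one word and land a few bits apart; the unrelated `page_c` lands around half the bits away, which is the expected distance for independent texts.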
@ronakshah7256 ай бұрын
I think this is a great question! I want to attempt an answer, but I'm no expert, haha. Since the goal of this particular system is to train language models, it's worth understanding whether optimizing for "similar" web pages is necessary for the top-level goal. In general, it could be helpful to prioritize learning based on chunks of text that appear in many pages. But remember that connecting back to the source could also be required later, for things like citations, so we have to be a bit smart about this. TL;DR: it's a can of worms, and I would try to better understand its priority relative to the existing requirements of the system.
@ronakshah7256 ай бұрын
This isn’t skirting off the question, but it’s a good step towards delivering our final solution.
@sushmitagoswami20333 ай бұрын
Excellent video! One thought: would it be possible to increase the font size a bit? Thanks so much!
@theoshow54264 ай бұрын
Great content! Keep it coming!
@prahaladvenkat4 ай бұрын
Your channel is a gold mine! Thanks a ton. How do you decide whether to use Kinesis Data Streams or SQS? Although they serve different purposes, both feel like good options to start with. Here, SQS ended up being better because of retries, DLQ support, etc., but ideally I'd like to deterministically choose the right option from the beginning. It would be super helpful if you could quickly reason through (in just one or two lines) why you pick a certain offering over other seemingly similar technologies!
@damluar3 ай бұрын
To avoid batching URLs from the same domain together, can we use Kafka partitions and spread messages by hash(URL)? Since different crawlers work at different paces, they will likely pick up those URLs at different times.
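A tiny sketch of that keying tradeoff (placeholder URLs; note it uses a stable digest rather than Python's built-in `hash()`, which is salted per process and would assign different partitions on every run):

```python
import hashlib
from urllib.parse import urlparse

def partition_for(key: str, num_partitions: int = 12) -> int:
    # Stable digest so producers agree on partition assignment across runs.
    h = int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")
    return h % num_partitions

urls = [f"https://example.com/page/{i}" for i in range(100)]

# Keying by full URL spreads one domain across many partitions (the
# suggestion above), while keying by domain pins it to a single
# partition, which is handy if you instead want per-domain politeness
# enforced by one consumer.
by_url = {partition_for(u) for u in urls}
by_domain = {partition_for(urlparse(u).netloc) for u in urls}
```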
@DilipKumar-ij3cf2 ай бұрын
For your URL check: when do you clear it so the URL becomes eligible for the next crawl, for content freshness? As described, the check would block the next eligible crawl. How does the design address that? Also, some content changes often and some doesn't; the design should probably handle that rather than using the same solution for both.
@damluar3 ай бұрын
How would you choose the initial frontier URLs? How many should be enough?
@jingxu2697Ай бұрын
Thanks, Evan, for the great content! I've learned a lot from this channel! One question: I work at a company where we mainly use in-house tools, so I don't have much experience with open source or AWS tools, e.g., the queues you mentioned in this video. In an interview, is it okay to explain the ideas behind the tools instead of naming them? Will that be a sign of lack of experience?
@hello_interviewАй бұрын
Absolutely
@praneethnimmagadda19385 ай бұрын
Just wondering: there's no mention of an inverted index in this crawling flow. Wouldn't an inverted index help during searches?
@hello_interview5 ай бұрын
Searches of what?
@praneethnimmagadda19385 ай бұрын
@@hello_interview I mean when user searches for results of query on search engine
@richbuckingham29 күн бұрын
My main gripe is the noise getting into the frontier queue. I think an intelligent worker processing URL candidates from a separate new-URL-candidates queue would be a huge dedupe and efficiency improvement, rather than having the parser drop uncleansed URLs directly into the frontier queue.

Other gripes too: no mention of what canonical URLs are, how they could be used in the parsing logic, or how they'd best be used as the PK to dedupe on in the URL table. No mention of URL parameters or dynamic page content (e.g. /pages/?page_id=123&noise=random versus /pages/123?noise=random). No mention of how enabling or skipping JS rendering in the crawler might change outcomes. No mention of rules for filtering out non-text content (e.g. why would you even attempt to crawl a URL like /images/image123.jpg?). The system is also missing success and error metrics, plus custom per-domain crawling and parsing logic (e.g. the ability to define regex matches for blacklisting known-bad URLs), so that you're effectively crawling for good text content rather than pulling in poor-quality text from specific domains, or spending time and money crawling a domain that has so far only provided garbage.

You have only 5 days to do the work, but you pay hourly for everything on AWS, so why not spin up 40 machines for 10 hours instead of 4 machines for 100 hours? Then you find out on day 1 whether you'll have a good outcome. You still have 4 more days to re-evaluate the system, figure out how to improve content quality, and retry domains/URLs where metrics show bad outcomes you know you can do better on. TL;DR: determine ASAP where the best successes and biggest failures are, and use rules-processing capabilities to optimize the system as you learn.
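The URL-parameter gripe is usually addressed with a canonicalization pass before a URL enters the frontier. A sketch (the tracking-parameter list is an assumption; real crawlers also apply per-domain rules and `rel="canonical"` hints):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Example deny-list of query params known to carry no content.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "noise"}

def canonicalize(url: str) -> str:
    """Normalize a URL before frontier insertion: lowercase scheme/host,
    drop the fragment and known-noise query params, sort the remaining
    params, and strip a trailing slash."""
    parts = urlsplit(url)
    query = urlencode(sorted(
        (k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS))
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       path, query, ""))  # "" drops the #fragment
```

Using the canonical form as the dedupe key means `https://Example.com/pages/?page_id=123&noise=random` and `https://example.com/pages?page_id=123` collapse to one frontier entry.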
@flyingpiggy7414 ай бұрын
Why do we need a DNS server? Would it be enough to grab text from a url?
@healing10006 ай бұрын
Thank you! To avoid duplicate URLs, do we need to discuss using a cache, or is it okay to only use the database?
@hello_interview6 ай бұрын
Same convo as the duplicate content. A cache is certainly an option; the DB index is enough imo.
@Sandeepg2556 ай бұрын
I think at 39:03 you're saying to set the visibility timeout of the message to now - crawlDelay, but isn't the visibility timeout a queue-level concept? How are you planning to set it at the message level?
@hello_interview6 ай бұрын
You can set them at the message level with SQS! From the docs, “Every Amazon SQS queue has the default visibility timeout setting of 30 seconds. You can change this setting for the entire queue. Typically, you should set the visibility timeout to the maximum time that it takes your application to process and delete a message from the queue. When receiving messages, you can also set a special visibility timeout for the returned messages without changing the overall queue timeout.”
@swagatrath22564 ай бұрын
Very well explained! If possible, please share some tips on how to keep up with the latest technologies and develop a mindset for system designs like this. I feel like I'm good at coding but not that great at designing architecture. Basically, what I'm looking for is how one progresses from a developer role to an architect role.
@xymadison3 ай бұрын
This is awesome; a very comprehensive and clean explanation. I've learned a lot from your videos, thanks. May I ask what tool or website you use as the whiteboard?
@Anonymous-ym6st5 ай бұрын
Thanks for the great content as always! One quick question: for the Redis vs global secondary index comparison, given that the data can fit on a single instance, if we use a hash-based index (not sure Dynamo supports it, but MySQL should), then lookups are also O(1), and Redis in this case would be a bit of overengineering?
@lixinyi77344 ай бұрын
what is the Text editor you are using? I like it
@hello_interview4 ай бұрын
Excalidraw
@dibll6 ай бұрын
Not related to this Video in particular but I have question about partitioning - Lets say we have a DB with 2 columns firstname and lastname. When we say we want to prefix the partition key which is firstname with lastname, Does that mean all similar lastnames will be on same node , if yes what will happen to firstNames how they will be arranged? Thanks
@hello_interview5 ай бұрын
if the primary key is a composite of first and last then no, this just means that people with the same first and last name will be on the same ndoe
@sanketpatil4936 ай бұрын
Cannot thank you enough for all this valuable content. Just amazing work! By the way, can you share some good resources for preparing for system design interviews? Books, courses, engineering blogs, etc. A dedicated video would be even more helpful!
@hello_interview6 ай бұрын
I'm certainly biased, but I think our content is some of (if not the) best out there, so I would start at www.hellointerview.com/learn/system-design/in-a-hurry/introduction. There are also useful system design blog posts, depending on your level, at www.hellointerview.com/blog, all written by either me or my co-founder (an ex-Meta senior hiring manager).
@adityaagarwal53482 ай бұрын
🙈 Excalidraw doesn't let you type when there are zig-zag arrows. If you click on one of the zig-zag arrows, you'll see it blocks a big area, and clicking in that area just writes text on the arrow.
@Global_nomad_diaries6 ай бұрын
Can this be asked in product architecture interview at Meta or just system design?
@hello_interview6 ай бұрын
Should be system design, not product architecture, in Meta world. But you never know; some interviewers go rogue.
@mihaiapostol78645 ай бұрын
Hello, I've enjoyed your content a lot and I'm learning a lot from it, thanks! One question about the design: around minute 52:00 you said the check that the URL already exists should be done in the parser. But if this uniqueness check isn't done earlier, in the crawler, couldn't the crawler save the same text to S3 twice for the same URL?
@hello_interview5 ай бұрын
Nope! We won't add new links to the queue if they already exist. That's why we check in the parser.
@mihaiapostol78645 ай бұрын
@@hello_interview understood, thank you!
@TheSmashtenАй бұрын
What are you using for the drawing board??
@hello_interviewАй бұрын
Excalidraw
@jieyin41692 ай бұрын
love this video
@dashofdope23 күн бұрын
Any value in telling your interviewer up front, "I'm going to tackle the system design in this order: functional reqs => core entities => API => HLD => deep dives"? I understand you do it here for our sake, but it would probably make it easier for me to reference and make clear to the interviewer that I'm not diving too deep to begin with.
@evalyly93136 ай бұрын
To give the right back-of-the-envelope estimates, the base knowledge required is things like an AWS instance's capacity being 400 Gbps. I don't have this knowledge in mind. Is it okay to ask or search during the interview, or is this something we should memorize?
@hello_interview6 ай бұрын
I think it's useful to have some basic specs, maybe as a note on your desk when interviewing. But it's also ok to ask. The intuition that caches can hold up to around 100 GB and DBs up to around 100 TB is good to have, though.
@georgepesmazoglou43656 ай бұрын
Great design! I wonder why there was never a mention of doing the whole thing with Spark, using offline batch jobs rather than realtime services?
@afge006 ай бұрын
I was thinking about batch as well
@hello_interview6 ай бұрын
Interesting. You know, as many times as I've asked this, no one has ever proposed it. Off the top of my head I see no obvious reason why you couldn't get it to work, especially as a one-off.
@georgepesmazoglou43656 ай бұрын
@@hello_interview I do crawling for a large company. Typically you'd do something like the video's design when you care about data freshness. If you don't, as in the LLM use case, you'd do a Spark-style thing where you just split the work across a bunch of workers; the HTML fetching and processing can be separate stages. Your inputs can be the URLs joined against previously crawled pages, so that you crawl only new URLs, or re-crawl URLs only after some time since their last crawl. The main disadvantage compared to your design is fault tolerance, since you can't do much in the way of checkpointing. Also, it's less fun to discuss :)
@yottalynn776 · 5 months ago
Very nice explanation! When actually crawling the pages, the crawler could get blocked by the website owner. Do you think we need to mention this in the interview and propose solutions like rotating proxies?
@hello_interview · 5 months ago
Good place for depth! Ask your interviewer :)
@dhanyageetha1519 · 5 months ago
Kafka also supports configurable exponential backoff on the producer side.
@hello_interview · 5 months ago
Yup, but that's just to make sure the message gets onto the queue, so it's not the same problem we're solving here.
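For reference, that producer-side backoff is purely client configuration. A hedged sketch using librdkafka/confluent-kafka setting names (the `retry.backoff.max.ms` cap on the exponential backoff needs a reasonably recent client, per KIP-580); the exact values are illustrative:

```python
# Assumed: confluent-kafka (librdkafka) producer settings for produce retries.
producer_config = {
    "bootstrap.servers": "broker:9092",  # illustrative address
    "retries": 2_147_483_647,            # retry "forever", bounded by the delivery timeout
    "retry.backoff.ms": 100,             # initial backoff between retries
    "retry.backoff.max.ms": 1_000,       # cap on the exponential backoff (newer clients)
    "delivery.timeout.ms": 120_000,      # overall bound on send + all retries
    "enable.idempotence": True,          # avoid duplicates when a late retry succeeds
}
```

This only covers getting the message onto the topic; a consumer that fails mid-processing still needs the retry-topic/DLQ machinery from the video.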
@kunliu1062 · 3 months ago
Wow, wish I had found this much earlier. Now I certainly won't just go into my next interview and throw a bloom filter onto the diagram without deep thinking 😝
@nanlala3171 · 6 months ago
I saw you used many AWS services in your design. Is it good practice to name specific products and their features (DLQ/SQS, GSI/DynamoDB) in the design? What if the interviewer has never used these products and has no concept of these services/features?
@hello_interview · 6 months ago
Depends on the company; in general, yes. But importantly, don't just name the technology. The important part is that you understand the features and why they'd be useful. For example:
Bad: "I'll use DynamoDB here."
Good: "I need a DB that can do XYZ. DynamoDB can do this, so I'll choose it."
@Marcus-yc3ib · 2 months ago
Thank you very much. You saved me.
@vamsikrishnabollepalli4908 · 6 months ago
Can you also provide system design interview flow and product design interview flow for each problem?
@hello_interview · 5 months ago
They're mostly the same tbh. www.hellointerview.com/blog/meta-system-vs-product-design
@tori_bam · 5 months ago
Thank you for more amazing content! I'll be having a mock interview through Hello Interview soon.
@hello_interview · 5 months ago
Sweet! Can’t wait :)
@joo02 · 3 months ago
I confirm your hair and hat didn't have any negative influence in the making of this System Design video.
@hello_interview · 3 months ago
😂🫶
@mularys · 6 months ago
Here's my concern: your solution is so good, but if everyone talks about the same things during the interview, especially when driving the process, will it raise any red flags with the hiring committee? They might think candidates are all referring to the same source.
@hello_interview · 6 months ago
This is not meant to be a script. If your plan is to regurgitate this back to an interviewer, I'd recommend against it. Instead, it's a teaching resource to learn about process, technologies, and potential deep dives. If you get this problem, then sure, talk about some of this stuff, but also let it be a conversation with the interviewer.
@rostyslavmochulskyi159 · 6 months ago
But is there an issue if you answer all or most of the interviewer's questions correctly? I believe it's a problem if you memorize this but can't go any further; if you can, there's nothing wrong with it.
@mularys · 6 months ago
@@hello_interview Yeah, makes sense. You present a good framework to structure the talking points that candidates can bring up, and I found it pretty useful. My system design question was top-k videos and I followed the key points you mentioned. My target is E5, and the interviewer had just a handful of follow-up questions (90% of the time I was talking). Eventually, I passed that round with a "strong hire." Of course, I added my own points of view during the interview, but I feel like I was just taking something off the shelf.
@mdyuki1016 · 6 months ago
What's the reason for not storing URLs in a database like MySQL? For retrying, just add a column like "retry_times".
@hello_interview · 6 months ago
I mention this at some point, I believe, when discussing the alternate approach of having a "URL Scheduler Service." The URLs have to get back onto the queue somehow, either directly or via a scheduler whose state lives in the DB.
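To make the DB-backed variant concrete, here's a minimal sketch of that scheduler state, using SQLite standing in for MySQL. The schema, column names, and backoff policy are all illustrative assumptions:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE urls (
        url TEXT PRIMARY KEY,
        retry_count INTEGER NOT NULL DEFAULT 0,
        next_fetch_at REAL NOT NULL DEFAULT 0  -- unix timestamp; 0 = due now
    )
""")
db.executemany(
    "INSERT INTO urls (url) VALUES (?)",
    [("https://example.com/a",), ("https://example.com/b",)],
)

def due_urls(now):
    # The scheduler polls for due rows and pushes them onto the crawl queue.
    rows = db.execute(
        "SELECT url FROM urls WHERE next_fetch_at <= ? ORDER BY next_fetch_at, url",
        (now,),
    ).fetchall()
    return [r[0] for r in rows]

def record_failure(url, now, base_delay_s=60):
    # Exponential backoff: retry after 60s, then 120s, 240s, ...
    (retries,) = db.execute(
        "SELECT retry_count FROM urls WHERE url = ?", (url,)
    ).fetchone()
    retries += 1
    db.execute(
        "UPDATE urls SET retry_count = ?, next_fetch_at = ? WHERE url = ?",
        (retries, now + base_delay_s * 2 ** (retries - 1), url),
    )

now = time.time()
print(due_urls(now))                          # both URLs are due initially
record_failure("https://example.com/a", now)
print(due_urls(now))                          # -> ['https://example.com/b']
```

Compared to retrying directly via the queue, this makes retry state queryable and durable, at the cost of running and scaling the scheduler that moves due rows back onto the queue.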
@kamakshijayaraman3747 · 3 months ago
I'm not able to understand the math for the number of AWS instances. Can someone explain?
@t.jihad96 · 3 months ago
Thank you for the effort; please keep up the good work. I'm watching your videos as if they were a Netflix series, very exciting. I was hoping you'd cover some topics like: if the crawler processed a message but crashed before committing back to the queue, how would you handle that? Is there a generic solution that works across different systems, instead of one-off workarounds?
@tomtran6936 · 5 months ago
What is the tool you are using to draw and take notes, Evan?