System Design Interview: Design a Web Crawler w/ a Ex-Meta Staff Engineer

31,136 views

Hello Interview - SWE Interview Preparation

1 day ago

Comments: 170
@anchalsharma0843 2 months ago
I have been watching too many system design videos, and most of them throw boxes and tools at the canvas just for the sake of it. But your videos follow an interesting and pragmatic approach that someone could actually use to design a real system. Above all, I truly appreciate the framework that you are instilling in viewers' minds to tackle problems. Thanks for your efforts 🚀
@hello_interview 2 months ago
Glad you find it valuable!
@_launch_it_ 3 months ago
I had an interview last Friday (June 14) and I followed your exact steps. The question was to design Ticketmaster. The Redis cache solution was the best. Thank you for these amazing videos
@hello_interview 3 months ago
Nice! Hope you passed 🤞🏼
@griffsterb 2 months ago
Did you get an offer?
@vigneshraghuraman 3 months ago
by far the best System design interview content I've come across - please continue making these. you are doing an invaluable service!
@hello_interview 3 months ago
♥️
@crackITTechieTalks 3 months ago
I don't often comment on videos, but I couldn't stop myself from commenting on yours just to say "What valuable content." Thanks a lot for all your videos!! Keep doing this.
@rupeshjha4717 3 months ago
Bro, please don't stop posting this kind of content; I've really loved all of your videos so far. I'm able to relate to the kind of small, impactful problems and solutions you mention in your videos, which indirectly matter in interviews.
@hello_interview 3 months ago
I got you!
@jk26643 3 months ago
Please please keep posting more! It educates so many people and you make the world better!! :) Absolutely the best system design series!
@hello_interview 3 months ago
🥲
@qwer81660 3 months ago
By far the most inspiring, relevant and practical system design interview content. I found them really useful to perform strongly in my system design interviews
@hello_interview 3 months ago
Awesome! Congratulations 🎊
@AlexZ-l7f 3 months ago
Again, the best System Design interview overview I've ever come across. Please keep doing it for us!
@hello_interview 3 months ago
🫡
@davidoh0905 3 months ago
This is such a great example for any kind of data application that needs asynchronous processing! Widely applicable!
@alirezakhosravian9449 3 months ago
I'm watching your videos to prepare for my interview in 4 days, I hope I'll be able to handle it :DDD. So far the best SD videos I could ever find on youtube.
@hello_interview 3 months ago
Good luck!! You got this!
@TheKarateKidd 3 months ago
This is the first video of yours I watched and I loved it. Your pace is just right and you explain things well, so I didn't feel overwhelmed like I usually do when I watch systems design videos. Thank you!
@omerfarukozdemir5340 2 months ago
Great content as always, thank you! Some comments about the design:
1. Concurrency within a crawler is going to bring a huge performance bonus.
2. Running an async framework for network IO is much faster than using threading.
3. We can put the retry logic within the crawler to make things simpler.
4. DNS caching looks like overengineering, because DNS is already cached at multiple layers: the language runtime, OS, ISP, etc.
5. We're processing the HTML in another service but we're hashing the HTML in the crawler; that seems wrong.
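A minimal sketch of points 1-2 above (not from the video): concurrent page fetches inside a single crawler worker using asyncio + aiohttp. Function names and the 100-connection limit are illustrative assumptions.

```python
import asyncio
import aiohttp

async def fetch_page(session: aiohttp.ClientSession, url: str) -> tuple[str, str | None]:
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
            resp.raise_for_status()
            return url, await resp.text()
    except Exception:
        return url, None  # caller decides whether to retry / dead-letter

async def crawl_batch(urls: list[str]) -> dict[str, str | None]:
    # One shared connection pool; cap concurrency so a single worker
    # doesn't open an unbounded number of sockets.
    connector = aiohttp.TCPConnector(limit=100)
    async with aiohttp.ClientSession(connector=connector) as session:
        results = await asyncio.gather(*(fetch_page(session, u) for u in urls))
    return dict(results)

if __name__ == "__main__":
    pages = asyncio.run(crawl_batch(["https://example.com", "https://example.org"]))
    print({u: (len(html) if html else None) for u, html in pages.items()})
```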
@Dao007forever 1 month ago
5. You don't want to put the same content into blob storage twice. We are I/O bound; computing a hash (e.g., SHA) over the page is cheap.
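A minimal sketch of that idea, assuming the crawler hashes the fetched HTML before writing to blob storage; the `seen_hashes` set stands in for the DB index / GSI lookup discussed in the video.

```python
import hashlib

def content_fingerprint(html: str) -> str:
    # SHA-256 over the raw HTML; catches exact duplicates only.
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def should_store(html: str, seen_hashes: set[str]) -> bool:
    digest = content_fingerprint(html)
    if digest in seen_hashes:
        return False            # identical page already stored
    seen_hashes.add(digest)
    return True
```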
@tushargoyal554 1 month ago
I usually refrain from commenting, but this is by far the best explanation I can find for this problem statement. I work at Amazon, and using the message visibility timeout for exponential backoff is exactly what we do to add a delay of 1 hour for our retryable messages. One very minor practical insight: don't rely on the ApproximateReceiveCount metric, because it is almost always inflated; the count goes up whenever a thread reads a message, even if it doesn't process it. I used a retry-count attribute when putting the message on the queue and checked whether it exceeded the retry threshold.
@hello_interview 1 month ago
Super cool and good to know! Appreciate you sharing that
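A hedged sketch of the retry-count-attribute pattern described in that comment (assumptions: boto3, a hypothetical QUEUE_URL, and a placeholder crawl() that raises on failure). The producer-owned `retryCount` attribute, not ApproximateReceiveCount, decides when to give up.

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/frontier"  # hypothetical
MAX_RETRIES = 5

def crawl(url: str) -> None:
    """Placeholder for the actual fetch/parse logic; raises on failure."""

def handle_one_batch() -> None:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
        MessageAttributeNames=["retryCount"], WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        retries = int(msg.get("MessageAttributes", {})
                         .get("retryCount", {}).get("StringValue", "0"))
        try:
            crawl(msg["Body"])
        except Exception:
            if retries + 1 <= MAX_RETRIES:
                # Re-enqueue a copy with exponential backoff; DelaySeconds caps at
                # 900s, so longer waits would use per-message visibility timeouts.
                sqs.send_message(
                    QueueUrl=QUEUE_URL, MessageBody=msg["Body"],
                    DelaySeconds=min(2 ** retries * 30, 900),
                    MessageAttributes={"retryCount": {
                        "DataType": "Number", "StringValue": str(retries + 1)}},
                )
            # else: drop, or forward to a DLQ
        # Delete the original delivery either way; the retry copy carries the count.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```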
@Global_nomad_diaries 3 months ago
Soo soo soo much thankful I am for all this content.
@brijeshthakrar2106 18 days ago
I've been building a web scraper on my own and using similar logic, and after a month, I see this. I swear to god this helped me a lottttt, but honestly, it's good that I didn't see this on day 1. Otherwise, I would not have learned things on my own. Great job, guys. PS: I got to know about you from Jordan. Keep posting great content, both of you guys!!!
@PiyushZambad 2 months ago
Thank you for making these videos so engaging! Your eloquent and logical style of explaining the concepts makes watching these videos so much fun.
@hello_interview 2 months ago
High praise! Right on :)
@shoaibakhtar9194 17 days ago
I gave the Meta interview just last week and I was able to crack it. All thanks to you, brother. The system design round went extremely well. I followed the exact same approach in all the questions and everything went really well. Keep posting the videos; this is the best content on the internet for system design.
@hello_interview 17 days ago
Let’s go!!!! Congrats! Thrilled to hear that. Well done 👏🏼
@perfectalgos9641 2 months ago
Thanks for this video. This is one of the best videos on the internet for crawler system design. With full preparation you end up taking an hour; how do you manage it in the ~35 minutes of a 45-minute interview?
@hello_interview 2 months ago
Yah, the hour here is because of all the fluff and teaching. This is reasonably 35 minutes without that.
@TimothyZhou0 23 days ago
Damn this is extremely nuanced. Some of the big-picture improvements (like adding the parsing queue) seemed kind of obvious, but then Evan would optimize it with a neat detail (e.g. including link in request so we don't have to fetch from database) that was so simple and yet hadn't occurred to me. Great series, great content, thanks so much!
@deshi_techMom 24 days ago
I absolutely love the details you go into, and you have great presentation skills! Super admirable! You just made system design interviews easier for me.
@eshw23 26 days ago
Evan your explanations are extremely amazing and the best on this channel. Hope to hear more soon.
@rahulrandhiv 2 months ago
I watched this while waiting for my flight back home from Goa :) and completed it
@hello_interview 2 months ago
💪
@TheKarateKidd 3 months ago
One of the first things that came to mind at the beginning of this problem is dynamic webpages. Most websites don't serve the majority of their content as simple static HTML. To be honest, if I were interviewing a senior or above level candidate, not mentioning dynamic content early on would be seen as a red flag. I'm glad you included it at the end of your video, but I do think it is important enough to be mentioned early on.
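A hedged sketch of one way to handle dynamic (JS-rendered) pages, which the video only touches on at the end: render with a headless browser instead of a plain HTTP GET. Assumes Playwright is installed; in practice you would route only JS-heavy domains through this path, since rendering is far more expensive per page.

```python
from playwright.sync_api import sync_playwright

def fetch_rendered_html(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")  # wait for client-side rendering
        html = page.content()                     # DOM after JS has run
        browser.close()
        return html
```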
@IntSolver 2 months ago
Hey, thanks for your video. I have watched all your content and I gained an immense amount of knowledge. I gave my E4 interview a week back, and my question was this (with a slight variation: the crawling was done through an app deployed on 10k devices). I covered all the content you've presented here in the same structure, and was able to dive deep into all the parts the interviewer asked about. I was expecting an offer but got rejected due to a "No Hire" in the design round. After some retrospection, I found some people saying that the Chord algorithm and a peer-to-peer crawler were expected. I still don't understand what the cause for the No Hire was, because the interviewer didn't even hint at anything and was aligned throughout. The experience was really heartbreaking. So, I just wanted to leave it out here that even though I did my best, it wasn't my day (I guess). Thanks for your videos, nonetheless.
@hello_interview 2 months ago
So sorry to hear that, that’s such disappointing news to receive. It’s always a toss up. Keep your head high and best of luck with future endeavors 💪
@randymujica136 2 months ago
In my opinion one of the most important parts of your strategy is how you minimize the initial HLD and make sure you deliver something that actually covers all the functional requirements. I find this calibration really valuable and not that easy to achieve, since as a Senior candidate one can be tempted to go straight to deep dives without clearly marking that pause between the HLD and the deep dives. What do you recommend to get better at this?
@sharanya_sr 3 months ago
Thank you for the great content and congratulations on making this the go-to channel for system design. The content is refreshing and of the watch-once, never-forget kind. I'd request a video on how to approach a problem we have not seen before: what's the best we can do, e.g., map it to a related system, or think logically about how the API/design would work while focusing on the problem asked.
@hello_interview 3 months ago
Cool idea, we'll give that a go!
@itayyahimovitz86 10 days ago
Great video! I would probably add a proxy component to this system design for the part where the crawler makes the HTTP calls to fetch the HTML (maybe for the DNS lookups as well). This is a critical part of designing a web crawler: you want to avoid making the calls directly from the network where the crawlers are deployed, in case all of your IP addresses get blocked, and for security reasons you also want to isolate the outgoing network calls from your instances.
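A minimal sketch of routing outbound fetches through a rotating proxy pool, as that comment suggests. The proxy endpoints are hypothetical placeholders.

```python
import itertools
import requests

PROXIES = itertools.cycle([
    "http://proxy-1.internal:3128",   # hypothetical egress proxies
    "http://proxy-2.internal:3128",
])

def fetch_via_proxy(url: str) -> str:
    proxy = next(PROXIES)
    resp = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
    resp.raise_for_status()
    return resp.text
```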
@allenliu1065 1 month ago
Best explanation of the bloom filter, Redis set, and hash-as-GSI options.
@zy3394 3 months ago
love your content, learned a lot, please keep updating more. ❤
@RafaelDHirtzPeriod2 1 month ago
So sorry for being Microsoft Word, but on all of your videos THE APROACH is spelled incorrectly. Thank you so much for posting all your videos. Super helpful for all of us interviewees out there!
@hello_interview 1 month ago
🤦🏻‍♂️first person to notice this. Will fix next video!
@zayankhan3223 3 months ago
This is one of the best system design interview videos. Kudos to you. I would like to understand a little more about how we handle duplicate content. What if the content is 80% the same on two pages? The hash will only work when pages are exactly the same.
@hello_interview 3 months ago
Yah, only exactly the same
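Exact hashing only catches identical pages. A hedged sketch of one common near-duplicate approach, not covered in the video: compare word-shingle sets with Jaccard similarity (SimHash/MinHash are the scalable versions of the same idea).

```python
def shingles(text: str, k: int = 5) -> set[tuple[str, ...]]:
    words = text.split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0

def near_duplicate(text_a: str, text_b: str, threshold: float = 0.8) -> bool:
    # Two pages that share >= 80% of their 5-word shingles count as near-duplicates.
    return jaccard(shingles(text_a), shingles(text_b)) >= threshold
```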
@chongxiaocao5737 3 months ago
Finally a new update! Appreciate it!
@technical3446 1 month ago
A few inputs:
- The bandwidth calculation needs to factor in uploading data to S3 as well. You will probably also do some compression on upload, and HTML should be fairly compressible.
- At that rate, the system will likely not be network-throughput bound, but latency and connection-count bound. Assume each site takes 1 second to return the web page; then for 10k requests per second on each node you will need 10k concurrent TCP connections, which is under the possible limit but will lead to a number of perf issues.
- Memory requirements: 10k * 2 MB = 20 GB should be enough, but all of this is GC-able: less reusable memory and TCP connections.
- You will likely be better off using a lower node type, around 50 Gbps; utilization beyond that for a single node is going to be challenging and you will hit other limits.
- Another optimization would be to have the parsing and crawling in the same process, to avoid passing the HTML content off to a separate process. You can also update the DB in one write with all the links.
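A quick calculator for the figures discussed in that comment (the inputs are the comment's illustrative numbers, not necessarily the video's): sustained download bandwidth, concurrent connections, and in-flight memory for one crawler node.

```python
PAGES_PER_SEC = 10_000      # fetches per second on one node
AVG_PAGE_MB   = 2           # average HTML size
AVG_LATENCY_S = 1.0         # time a remote site takes to respond

bandwidth_gbps = PAGES_PER_SEC * AVG_PAGE_MB * 8 / 1000        # download side only
open_connections = PAGES_PER_SEC * AVG_LATENCY_S               # Little's law
inflight_memory_gb = open_connections * AVG_PAGE_MB / 1000

print(f"download bandwidth ~= {bandwidth_gbps:.0f} Gbps")      # ~160 Gbps
print(f"concurrent TCP connections ~= {open_connections:.0f}") # ~10,000
print(f"in-flight page memory ~= {inflight_memory_gb:.0f} GB") # ~20 GB
```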
@krishnabansal7531 3 months ago
Suggestion: please mention what clarifying questions should be asked for a specific problem. Even if the problem is well known, the panel still expects a few clarifying questions, especially from a senior candidate. Also, if you can cover company-specific expectations (if any) for the top MAANG companies, that would be excellent.
@dibll 3 months ago
Hope you can create videos of the write ups done by other authors on HelloInterview in the near future. Love the content. Thank you!!
@vzfzabc 3 months ago
Nice, thanks for the content. I also really appreciated the videos from the mock interview. I found that much more useful and would love to see more of those.
@hello_interview 3 months ago
Tougher there for privacy reasons. Requires explicit sign off from coach and candidate, but I'll see what I can do :)
@kunliu1062 1 month ago
Wow, wish I had found this much earlier. Now I certainly wouldn't just go into my next interview and throw the bloom filter onto the diagram without deep thinking 😝
@letsgetyucky 3 months ago
commenting for the algo. thanks for excellent and free content!
@hello_interview 3 months ago
Legend 🫡
@letsgetyucky 3 months ago
​@@hello_interview Feedback: really enjoyed the video! Would love if future videos were also mostly skewed towards deep dives. Suggesting other topics to research yourself (or hash out with others in the comments) is also super valuable. Finally, calling out the anti patterns that are being regurgitated (e.g. bloom filters) is very valuable as well.
@davidoh0905 3 months ago
@@letsgetyucky are bloom filters an anti-pattern!? just curious!
@letsgetyucky 3 months ago
@@davidoh0905 during the deep dive Evan says that Bloom Filters are commonly used in interviews because they appear in the solutions in the popular interview prep books. But the interview prep books don't do a great job of discussing the tradeoffs of using a Bloom Filter vs more practical solutions. It's a nice theoretical solution, but in a real-world system you could do something simpler and just bruteforce the problem.
@swagatrath2256 1 month ago
Very well explained! If possible, please do share some tips on how one can keep up with the latest technologies and develop a mindset for system designs like this. I feel like I'm good at coding but not that great when it comes to designing architecture like this. Basically, what I'm looking for is how one progresses from a developer role to an architect role.
@BhaskarJayaraman 1 month ago
Great content. In the deep dives around 52:41 ("when you get a new URL you'll put it on here, it'll be undefined, and then when we actually parse it we'll update this with") and 52:46 ("the real last crawl time and with the S3 link, which also would have been undefined, so that would handle that"): I think you mean that when we actually crawl and download it, we'll update it with the last crawl time and with the S3 link. Also, when you use Dynamo the lookup will be O(1), not O(log n). Would be great if you had the DynamoDB GSI schema.
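A hedged guess at what the URL table plus content-hash GSI could look like in DynamoDB; the video doesn't show the exact schema, so the table, index, and attribute names here are assumptions.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="crawler_urls",
    BillingMode="PAY_PER_REQUEST",
    AttributeDefinitions=[
        {"AttributeName": "url", "AttributeType": "S"},
        {"AttributeName": "contentHash", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "url", "KeyType": "HASH"}],   # O(1) key lookup by URL
    GlobalSecondaryIndexes=[{
        "IndexName": "contentHash-index",   # dedupe check: query by page hash
        "KeySchema": [{"AttributeName": "contentHash", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
)
# Non-key attributes like lastCrawlTime, s3Link, and depth are just written on the
# items; DynamoDB only needs key attributes declared up front.
```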
@CS2dAVE 3 months ago
S Tier system design content! Another exceptional video 👏
@undercovereconomist 3 months ago
Wow, the amount of depth here is absolutely insane. How can you compress so much information into a 1-hour interview? I learned so much from this video that I never see elsewhere, and it is all presented so elegantly and naturally. The speaker speaks clearly, no ums and ahs, no speed-up? You must be a great engineer at work! One thing that I am a bit unsatisfied with is duplicated content. Is it even possible that we actually have completely duplicated content? Even when there are two different web pages, there might be just a few locations where the content differs. That would completely break our hash function, right? Do you know of any hash function that would allow two webpages that are mostly similar to land close together? Do you see any role for word2vec or vector storage here?
@ronakshah725 3 months ago
I think this is a great question! I want to attempt to answer this, but I'm no expert haha. As the goal of this particular system is to train language models, it's worth understanding whether optimizing for "similar" web pages is necessary for our top-level goal. In general, it could be helpful to prioritize learning based on shards of text that appear in many pages. But we have to remember that connecting back to the source could also be required later, for things like citations. So we have to be a bit smart about this. TL;DR it's a can of worms and I would try to better understand the priority of this compared to the existing requirements of the system.
@ronakshah725 3 months ago
This isn't skirting the question, but it's a good step towards delivering our final solution.
@joo02 21 days ago
I confirm your hair and hat didn't have any negative influence in the making of this System Design video.
@hello_interview 21 days ago
😂🫶
@prahaladvenkat 1 month ago
Your channel is a gold mine! Thanks a ton. How do you decide whether to use Kinesis Data Streams or SQS? Although they serve different purposes, it feels like both are good options to begin with, generally. Here, SQS ended up being the better option because of retries, DLQ support, etc. But ideally, I'd like to be able to deterministically and correctly choose the right option at the beginning. It would be super helpful if you could quickly reason out in the videos (in just 1 or 2 lines) why you pick a certain offering over other seemingly similar technologies/offerings!
@xymadison 1 month ago
This is awesome; it is a very comprehensive and clean explanation, and I've learnt a lot from your videos, thanks. May I ask what tool or website you use as the whiteboard?
@jiananstrackjourney370 1 month ago
Great video! I have a question, is 5k requests per second realistic? Even with the most powerful machine on EC2?
@t.jihad96 9 days ago
Thank you for the effort; please keep up the good work. I'm watching your videos as if they were a Netflix series, very exciting. I was hoping you'd cover a topic like: if the crawler processes a message but crashes before committing back to the queue that it processed it, how would you handle such a case? Is there a generic solution that can be used across different systems instead of workarounds?
@sanketpatil493 3 months ago
Cannot thank you enough for all this valuable content. Just amazing work! Btw, can you share some good resources for preparing for system design interviews? Books, courses, engineering blogs, etc. A dedicated video would be even more helpful!
@hello_interview 3 months ago
I'm certainly biased, but I think our content is some of (if not the) best out there, so I would start at www.hellointerview.com/learn/system-design/in-a-hurry/introduction. There are some useful blogs on system design too, depending on your level, which can be found at www.hellointerview.com/blog, all written by either me or my co-founder (ex Meta sr. hiring manager).
@sushmitagoswami2033 1 month ago
Excellent video! One thought: would it be possible to increase the font size a bit? Thanks so much!
@georgepesmazoglou4365 3 months ago
Great design! I wonder why there was never a mention of doing the whole thing with Spark, using offline batch jobs rather than realtime services?
@afge00 3 months ago
I was thinking about batch as well
@hello_interview 3 months ago
Interesting. You know, as many times as I've asked this, no one has ever proposed it. Off the top of my head I see no obvious reason why you couldn't get it to work, especially for just a one-off.
@georgepesmazoglou4365 3 months ago
@@hello_interview I do crawling for a large company. Typically you would do something like the video's design when you care about data freshness; if you don't care about that, as in the LLM use case, you would do a Spark-style thing where you just split the work across a bunch of workers, and you can have the HTML fetching and processing parts in different stages. Your inputs can be the URLs and the previously crawled pages, joined so that you crawl only new URLs, or recrawl URLs only after some time has passed since their last crawl. The main disadvantage compared to your design is that you are not as fault tolerant, as you can't do much in terms of checkpointing. Also it is less fun to discuss :)
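A hedged PySpark sketch of the offline/batch variant described in that reply: join the frontier against previously crawled pages and only fetch what is new or stale. Paths, column names, the 30-day freshness window, and fetch_partition are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-crawl").getOrCreate()

urls    = spark.read.parquet("s3://crawl/frontier/")      # column: url
crawled = spark.read.parquet("s3://crawl/pages/")         # columns: url, crawled_at

fresh = crawled.filter(F.col("crawled_at") > F.date_sub(F.current_date(), 30)).select("url")
to_crawl = urls.join(fresh, on="url", how="left_anti")    # new or stale URLs only

def fetch_partition(rows):
    import requests                                       # runs on the workers
    for row in rows:
        try:
            yield row.url, requests.get(row.url, timeout=10).text
        except Exception:
            yield row.url, None

pages = to_crawl.rdd.mapPartitions(fetch_partition)
pages.toDF(["url", "html"]).write.mode("append").parquet("s3://crawl/pages/")
```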
@davidoh0905 3 months ago
Just in time!!!!
@zfarahx 3 months ago
Another bump for the algo!
@hello_interview 3 months ago
You all are the best!
@mularys 3 months ago
Here are my concerns: your solution is so nice, but if everyone is going to talk about the same thing during the interview, especially when one is driving the process, will it raise any red flags on the hiring committee side as they might think candidates are referring to the same sources?
@hello_interview 3 months ago
This is not meant to be a script. If your plan is to regurgitate this back to an interviewer I’d recommend not doing that. Instead it’s a teaching resource to learn about process, technologies, and potential deep dives. If you get this problem, then sure, talk about some of this stuff, but also let it be a conversation with the interviewer
@rostyslavmochulskyi159 3 months ago
But is there an issue if you answer all/most of the interviewer's questions correctly? I believe it is an issue if you memorise this but can't go any further; if you can, there is nothing wrong.
@mularys 3 months ago
@@hello_interview Yeah, makes sense. You present a good framework to structure the talking points that candidates can bring up, and I found it pretty useful. My system design question was the top-K videos one and I followed the key points you mentioned. My target is E5 and the interviewer just had a handful of follow-up questions (90% of the time I was talking). Eventually, I passed that round with a "strong hire". Of course, I added my own points of view during the interview, but I feel like I was just taking something off the shelf.
@davidoh0905 3 months ago
If Kafka does not support retries out of the box, what does that mean exactly? If you do not commit, doesn't the offset stay put, which could potentially serve as a kind of retry? Also, could you compare this with other queueing services that allow retries, like SQS? A comparison of when to use Kafka vs SQS would be really good too! Message broker vs task queue might be their most frequent use cases, but it might be good to provide justifications for this scenario.
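A sketch of what "no built-in retry" tends to mean in practice (assumptions: confluent-kafka client, hypothetical topic names, a placeholder crawl()). Not committing an offset doesn't retry a single message; within a running consumer it keeps polling past it, and only after a restart or rebalance is everything since the last commit re-read, which stalls the whole partition. The usual workaround is manual commits plus a separate retry topic, which SQS effectively gives you for free via visibility timeouts and DLQs.

```python
from confluent_kafka import Consumer, Producer

def crawl(url: str) -> None:
    """Placeholder for the actual fetch/parse logic; raises on failure."""

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "crawler",
    "enable.auto.commit": False,      # we decide when an offset counts as done
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["frontier"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    url = msg.value().decode()
    try:
        crawl(url)
    except Exception:
        # Park the failure on a retry topic instead of blocking the partition.
        producer.produce("frontier-retry", value=url.encode())
        producer.flush()
    consumer.commit(message=msg)      # advance past this message either way
```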
@yottalynn776 2 months ago
Very nice explanation! When actually crawling the pages, it could be blocked by the website owner. Do you think we need to mention this in the interview and provide some solutions like using rotating proxies?
@hello_interview 2 months ago
Good place for depth! Ask your interviewer :)
@damluar 18 days ago
To avoid batching URLs from the same domain together, can we use Kafka partitions and spread messages by hash(URL)? Since different crawlers work at different paces, it is likely they will pick up those URLs at a different time.
@Anonymous-ym6st 2 months ago
Thanks for the great content as always! One quick question: for the Redis vs global secondary index comparison, given that the data can be stored on a single instance, if we use a hash-based index (not sure if it is supported by Dynamo, but it should be supported by MySQL), then it should also be O(1), and Redis in this case would be a bit of over-engineering?
@damluar 18 days ago
How would you choose the initial frontier URLs? How many should be enough?
@aforty1 2 months ago
Thanks for this! As far as checking the hash @ 57:00, wouldn’t we already have the last hash since we had to retrieve that url record before we fetched the webpage because we had to go get the lastCrawlTime?
@theoshow5426 2 months ago
Great content! Keep it coming!
@akshat3106 2 months ago
I could not find where it is mentioned that AWS SQS has a built-in exponential backoff retry mechanism. Can anyone please share a link for the same? Thanks a lot!
@hello_interview 2 months ago
On mobile but scroll through the comments. I linked the aws docs in response to another comment.
@akshat3106 2 months ago
@@hello_interview Thanks for the reply, but I could not find it
@fran_sanchez_yt 1 month ago
@@hello_interview I haven't been able to find the link and I also wasn't able to find this exponential back-off feature mentioned in the SQS docs...
@Global_nomad_diaries 3 months ago
Can this be asked in product architecture interview at Meta or just system design?
@hello_interview 3 months ago
Should be system design not product architecture in meta world. But, you never know, some interviewers go rogue.
@flyingpiggy741 1 month ago
Why do we need a DNS server? Would it be enough to grab text from a url?
@vamsikrishnabollepalli4908 3 months ago
Can you also provide system design interview flow and product design interview flow for each problem?
@hello_interview 3 months ago
They're mostly the same tbh. www.hellointerview.com/blog/meta-system-vs-product-design
@vimalkumarsinghal 3 months ago
Thanks for sharing the SD on the web crawler. Question: how do you handle dynamic pages, subdomains, URLs that loop back to the same URL, and URLs with query strings? What is the best approach to identify duplicates? Thanks
@hello_interview 3 months ago
May not totally understand the question, but you could just drop the query strings from extracted urls
@Sandeepg255 3 months ago
I think at 39:03 you are saying to set the visibility timeout of the message to now - crawlDelay, but the visibility timeout concept is for a queue, so how are you planning to set it at the message level?
@hello_interview 3 months ago
You can set them at the message level with SQS! From the docs, “Every Amazon SQS queue has the default visibility timeout setting of 30 seconds. You can change this setting for the entire queue. Typically, you should set the visibility timeout to the maximum time that it takes your application to process and delete a message from the queue. When receiving messages, you can also set a special visibility timeout for the returned messages without changing the overall queue timeout.”
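A minimal boto3 sketch of the per-message behavior quoted from the docs above: after receiving a message, extend just that message's visibility timeout (e.g., to respect a per-domain crawl delay) without changing the queue default. The QUEUE_URL is hypothetical.

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/frontier"

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    sqs.change_message_visibility(
        QueueUrl=QUEUE_URL,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=300,   # hide only this message for 5 more minutes (cap: 12 h)
    )
```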
@bobberman09 3 months ago
can you post the 2nd top voted one (youtube) earlier? At least written version :) Also very interested in the stock exchange question, but I see that's further down.
@hello_interview 3 months ago
The written version is coming this week or early next at the latest! Almost done :)
@bobberman09 3 months ago
@@hello_interview Looking forward to it :) Love the videos btw; it feels like this is the only system design content I can trust for interview prep
@dhanyageetha1519 2 months ago
Kafka also supports configurable exponential backoff on the producer side
@hello_interview 2 months ago
Yup, that’s just to make sure the message gets on the queue, so not the same problem we’re solving here.
@nanlala3171 3 months ago
I saw you used many AWS services in your design. Is it good practice to use specific products and their features (DLQ/SQS, GSI/DynamoDB) in the design? What if the interviewer has never used these products and has no concept of these services/features?
@hello_interview 3 months ago
Depends on the company; in general, yes. But, importantly, don't just name the technology. The important part is that you understand the features and why they'd be useful. For example: Bad: "I'll use DynamoDB here." Good: "I need a DB that can XYZ. DynamoDB can do this, so I'll choose it."
@kamakshijayaraman3747 23 days ago
I am not able to understand the math for the number of AWS instances. Can someone explain?
@evalyly9313 3 months ago
So to give the right back-of-the-envelope estimate, the base knowledge required is that an AWS instance's network capacity can be around 400 Gbps. I don't have this knowledge in mind; is it OK to ask or search during the interview, or is this something we should keep in mind?
@hello_interview 3 months ago
I think it's useful to have some basic specs as a note, maybe on your desk, when interviewing. But it's also OK to ask. The intuition that caches can hold up to around 100 GB and DBs up to around 100 TB is good intuition to have though.
@ehudklein726 9 days ago
good stuff!
@NeelCrasta 10 days ago
Depth should be on the Domain table instead of the URL table. URLs would be unique, so the depth would not increase. Whereas the depth will increase on a domain, and having a max depth will keep us from falling into a loop trap.
@hello_interview 10 days ago
True! Might’ve mistyped/misspoke. Thanks!
@NeelCrasta 10 days ago
@@hello_interview Your system design videos are amazing.
@healing1000 3 months ago
Thank you! To avoid duplicate URLs, do we need to discuss using a cache, or is it OK to only use the database?
@hello_interview 3 months ago
Same convo as for duplicate content. A cache is certainly an option. The DB index is enough imo.
@tori_bam 3 months ago
Thank you for more amazing content! I'll be having a mock interview using Hello Interview soon.
@hello_interview 3 months ago
Sweet! Can’t wait :)
@praneethnimmagadda1938 2 months ago
Just wondering: there is no mention of an inverted index in this crawling flow; wouldn't an inverted index help during searches?
@hello_interview 2 months ago
Searches of what?
@praneethnimmagadda1938 2 months ago
@@hello_interview I mean when a user searches for the results of a query on a search engine
@fufuhu148 1 month ago
I am not entirely sure I agree with the trade-off discussion between Bloom-filter vs Hash(GSI). Hash collisions can occur, which means we can still receive false positives with GSI hashes.
@fufuhu148 1 month ago
I think it might be necessary to consider byte-by-byte checking when we find a hash match, to make sure it's not just a hash collision.
@hello_interview 1 month ago
Hash collisions will almost certainly not occur. They’re so rare they’re not worth designing around for a system like this, where the consequence is minor. It’s 1 in 340 undecillion chance lol
@fufuhu148 1 month ago
@@hello_interview I agree with you. My point was that a hash collision is as likely as a false positive in a bloom filter
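Rough numbers behind this thread, under my own illustrative assumptions (a 128-bit content hash, about a billion pages, and a bloom filter tuned to a typical ~1% false-positive rate). The birthday bound gives P(any collision among n hashes) roughly n^2 / 2^(b+1).

```python
n = 1_000_000_000          # pages crawled (assumption)
b = 128                    # hash bits (assumption)

p_any_collision = n**2 / 2 ** (b + 1)
print(f"chance of any {b}-bit collision across {n:,} pages ~= {p_any_collision:.1e}")
# ~1.5e-21 across the whole corpus, vs ~1e-2 per lookup for a typical bloom
# filter: the two failure modes differ by roughly 19 orders of magnitude.
```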
@mihaiapostol7864 2 months ago
Hello, I enjoyed your content a lot, I'm learning a lot from it, thanks! One question related to the design: you were saying around minute 52:00 that the check that the urlLink already exists should be done in the parser. But if this uniqueness check is not done earlier, in the crawler, then the crawler could save the same text to S3 twice for the same urlLink, right?
@hello_interview 2 months ago
Nope! We won't add new links to the queue if they already exist. That's why we check in the parser
@mihaiapostol7864 2 months ago
@@hello_interview understood, thank you!
@lixinyi7734 1 month ago
What is the text editor you are using? I like it
@hello_interview 1 month ago
Excalidraw
@dibll 3 months ago
Not related to this video in particular, but I have a question about partitioning. Let's say we have a DB with 2 columns, firstname and lastname. When we say we want to prefix the partition key, which is firstname, with lastname, does that mean all similar lastnames will be on the same node? If yes, what will happen to the firstnames, how will they be arranged? Thanks
@hello_interview 3 months ago
If the primary key is a composite of first and last then no, this just means that people with the same first and last name will be on the same node
@trueinviso1 3 months ago
I wonder if questions about the type of content we are scraping matter, i.e., ignoring suspicious sites or offensive content
@hello_interview 3 months ago
Valid question for interviewer!
@mdyuki1016 3 months ago
What's the reason for not storing URLs in a database like MySQL? For retrying, just add a column like "retry times".
@hello_interview 3 months ago
I mention this at some point, I believe, when discussing the alternate approach of having a "URL Scheduler Service." They have to get back on the queue somehow, so either directly or via a scheduler where state is in the DB.
@Analytics4u 2 months ago
There is no mention of sharding here?
@Analytics4u 2 months ago
I like the deep dive section
@cedarparkfamily 2 months ago
I can still see the ad here
@tomtran6936 3 months ago
What is the tool you are using to draw and take notes, Evan?
@hello_interview 3 months ago
Excalidraw
@bhaskardabhi 3 months ago
Won't there be a case where the HTML is different but the hash is the same? Is that even possible?
@hello_interview 3 months ago
Not worth even considering. Hash collisions are so unlikely they’re not worth discussing
@happybaniya 1 month ago
Best❤
@krishnabansal7531 3 months ago
I hope someone asks me the web crawler question.
@philopateernabil1421 1 month ago
Can't we just ignore failed websites? No need to retry, as we already have millions of others to process in the frontier queue.
@hello_interview 1 month ago
Product decision!
@Ryan-g7h 2 months ago
Which drawing tool is this?
@hello_interview 2 months ago
Excalidraw
@Ryan-g7h 2 months ago
Thank you
@serendipity1328 26 days ago
why is it called frontier queue? Is this some kind of standard term?
@damluar 18 days ago
I believe the term comes from BFS where we have a frontier of nodes and we expand the frontier as we go.
@mohitaggarwal949 3 months ago
If we store the hash in the URL table in DynamoDB, how does it handle the case of copied webpages, which will have different URLs but the same HTML?
@hello_interview 3 months ago
Check the hash before storing in S3 and putting it on the parsing queue
@shyamvani 3 months ago
You need to store the hash of the page contents for the URL, not the hash of the URL itself.
@HandyEngineering 2 months ago
I was going to ask the same question there: you cannot avoid downloading by using a hash of the content 😊 You can use this hash to mark duplicates and not store the text output N times, true... You also mentioned the PK lookup before going into the hash and said log(N), an obvious typo. Great content overall
@aaa-hw2ty 2 months ago
400 Gbps NIC 😂
@YoussifSalama 14 days ago
I think he was off by a couple orders of magnitude there 😅
@annoyingorange90 3 months ago
really good video but please stop panning uselessly :D appreciate ur work!
@hello_interview 3 months ago
Example?
@annoyingorange90 3 months ago
@@hello_interview 7:17 is the main one thnx love u