System Design Interview: Design a Web Crawler with an Ex-Meta Staff Engineer

  49,838 views

Hello Interview - SWE Interview Preparation

1 day ago

Comments: 204
@_launch_it_
@_launch_it_ 6 months ago
I had an interview last Friday (June 14) and I followed your exact steps. The question was to design Ticketmaster. The Redis cache solution was the best. Thank you for these amazing videos
@hello_interview
@hello_interview 6 months ago
Nice! Hope you passed 🤞🏼
@griffsterb
@griffsterb 4 months ago
Did you get an offer?
@vigneshraghuraman
@vigneshraghuraman 6 months ago
by far the best System design interview content I've come across - please continue making these. you are doing an invaluable service!
@hello_interview
@hello_interview 6 months ago
♥️
@rupeshjha4717
@rupeshjha4717 6 months ago
Bro, please don't stop posting this kind of content — I've really loved all of your videos so far. I can relate to the kind of small, impactful problems and solutions you mention in your videos, which indirectly make a difference in interviews
@hello_interview
@hello_interview 6 months ago
I got you!
@crackITTechieTalks
@crackITTechieTalks 6 months ago
I rarely comment on videos, but I couldn't resist commenting on yours just to say: what valuable content. Thanks a lot for all your videos!! Keep doing this..
@shoaibakhtar9194
@shoaibakhtar9194 3 months ago
I gave the Meta interview just last week and I was able to crack it. All thanks to you, brother. The system design round went extremely well. I followed the exact same approach in all the questions and everything went really well. Keep posting the videos — this is the best content on the internet for system design.
@hello_interview
@hello_interview 3 months ago
Let’s go!!!! Congrats! Thrilled to hear that. Well done 👏🏼
@TechieTech-gx2kd
@TechieTech-gx2kd 29 days ago
What problem did you get ?
@TimothyZhou0
@TimothyZhou0 3 months ago
Damn this is extremely nuanced. Some of the big-picture improvements (like adding the parsing queue) seemed kind of obvious, but then Evan would optimize it with a neat detail (e.g. including link in request so we don't have to fetch from database) that was so simple and yet hadn't occurred to me. Great series, great content, thanks so much!
@jk26643
@jk26643 6 months ago
Please please keep posting more! It educates so many people and you make the world better!! :) Absolutely the best system design series!
@hello_interview
@hello_interview 6 months ago
🥲
@qwer81660
@qwer81660 6 months ago
By far the most inspiring, relevant and practical system design interview content. I found them really useful to perform strongly in my system design interviews
@hello_interview
@hello_interview 6 months ago
Awesome! Congratulations 🎊
@AlexZ-l7f
@AlexZ-l7f 6 months ago
Again the best System Design interview overview I ever met. Please keep doing it for us!
@hello_interview
@hello_interview 6 months ago
🫡
@sinajafarzadeh9577
@sinajafarzadeh9577 2 months ago
I’m so glad to have found this channel. One of few system design resources that isn’t just performative but has actual substance!!
@alirezakhosravian9449
@alirezakhosravian9449 6 months ago
I'm watching your videos to prepare for my interview in 4 days. I hope I'll be able to handle it :DDD So far these are the best SD videos I could ever find on youtube.
@hello_interview
@hello_interview 6 months ago
Good luck!! You got this!
@TeachAManToPhish
@TeachAManToPhish 18 days ago
How did your interview go? What SD question were you asked?
@KiritiSai93
@KiritiSai93 2 months ago
I've seen many videos related to system design, but your staff level knowledge shows when you are designing components! Amazing job 🥳
@ankitasinghai2
@ankitasinghai2 25 days ago
The best explanation of how to design a web crawler I have seen or read yet. Keep up the good work :)
@hello_interview
@hello_interview 24 days ago
Thank you 🫶
@davidoh0905
@davidoh0905 6 months ago
This is such a great example for any kind of data application that needs asynchronous processing! Widely applicable!
@omerfarukozdemir5340
@omerfarukozdemir5340 5 months ago
Great content as always, thank you! Some comments about the design: 1. Concurrency within a crawler will bring a huge performance bonus. 2. An async framework for network IO is much faster than threading. 3. We could put the retry logic inside the crawler to make things simpler. 4. DNS caching looks like over-engineering, because DNS is already cached at multiple layers: the language runtime, OS, ISP, etc. 5. We process the HTML in another service, but we hash the HTML in the crawler — that seems wrong.
@Dao007forever
@Dao007forever 4 months ago
5. You don't want to put the same content into blob storage twice. We are IO bound; computing a hash (e.g. SHA) is cheap.
@richbuckingham
@richbuckingham 29 days ago
@@Dao007forever Right — we want to dedupe on text content, not HTML. Instead, let the parsing worker do the content processing (that's the data we actually care about not duplicating) and then produce the hash. Even if two pages contain the same text, all the extra HTML is very likely to include unique content per request, so the unprocessed HTML will hash differently where the processed (clean) text would not. The hashing absolutely should not be done by the crawlers, since that output won't be useful.
@Dao007forever
@Dao007forever 29 days ago
@@richbuckingham That would mean lots of duplicates since multiple URLs lead to the same page. We might want to de-dup twice if you are thinking about the clean content, but definitely have to dedup at the HTML level.
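The two-level dedup this thread converges on can be sketched as follows — hash the raw HTML for an exact-match check, and hash the cleaned text to catch pages whose only differences are per-request boilerplate. This is just an illustrative sketch: the regex tag-stripping is a naive stand-in for the parsing worker's real extraction, and the sample pages are made up.

```python
import hashlib
import re

def clean_text(html: str) -> str:
    # Naive tag stripping -- a stand-in for the parsing worker's real extraction.
    text = re.sub(r"<[^>]+>", " ", html)
    return " ".join(text.split())

def raw_hash(html: str) -> str:
    # Exact-match dedup on the unprocessed HTML.
    return hashlib.sha256(html.encode()).hexdigest()

def content_hash(html: str) -> str:
    # Hash the cleaned text so per-request boilerplate doesn't defeat dedup.
    return hashlib.sha256(clean_text(html).encode()).hexdigest()

# Same article text, but unique ad markup per request:
page_a = "<html><div id='ad-111'>promo</div><p>Same article text.</p></html>"
page_b = "<html><div id='ad-222'>promo</div><p>Same article text.</p></html>"
```

Here `raw_hash(page_a) != raw_hash(page_b)` because of the unique markup, while `content_hash(page_a) == content_hash(page_b)` — which is the reply's point about deduping twice.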
@brijeshthakrar2106
@brijeshthakrar2106 3 months ago
I've been building a web scraper on my own and using similar logic, and after a month, I see this. I swear to god this helped me a lottttt, but honestly, it's good that I didn't see this on day 1. Otherwise, I would not have learned things on my own. Great job, guys. PS: I got to know about you from Jordan. Keep posting great content, both of you guys!!!
@rahulrandhiv
@rahulrandhiv 4 months ago
I watched this while waiting for my flight back home from Goa :) and completed it
@hello_interview
@hello_interview 4 months ago
💪
@Global_nomad_diaries
@Global_nomad_diaries 6 months ago
Soo soo soo much thankful I am for all this content.
@tushargoyal554
@tushargoyal554 4 months ago
I usually refrain from commenting, but this is by far the best explanation I've found for this problem. I work at Amazon, and using the message visibility timeout for exponential backoff is exactly what we do to add a 1-hour delay for our retryable messages. One minor practical insight: don't use the ApproximateReceiveCount attribute, because it is almost always incorrect — the count goes up whenever a thread reads the message, even if it doesn't process it. I used a retry-count attribute set when putting the message on the queue and checked whether it exceeded the retry threshold.
@hello_interview
@hello_interview 4 months ago
Super cool and good to know! Appreciate you sharing that
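For anyone implementing the pattern described above: SQS doesn't compute exponential backoff for you, so a common approach is to carry a retry-count attribute on the message and derive the next visibility timeout from it yourself. A sketch with illustrative base/cap values (the function names are made up):

```python
def backoff_visibility_timeout(retry_count: int, base_s: int = 30, cap_s: int = 3600) -> int:
    # Exponential backoff: 30s, 60s, 120s, ... capped at 1 hour.
    # Base and cap are illustrative; tune them per workload.
    return min(cap_s, base_s * (2 ** retry_count))

def should_dead_letter(retry_count: int, max_retries: int = 5) -> bool:
    # Past the retry threshold, route the message to a DLQ instead of retrying.
    return retry_count >= max_retries
```

On each failed receive you'd re-apply the visibility timeout for the message with the computed value, incrementing the retry-count attribute when the message is re-enqueued.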
@TheKarateKidd
@TheKarateKidd 6 months ago
This is the first video of yours I watched and I loved it. Your pace is just right and you explain things well, so I didn't feel overwhelmed like I usually do when I watch systems design videos. Thank you!
@perfectalgos9641
@perfectalgos9641 5 months ago
Thanks for this video — one of the best on the internet for crawler system design. With full preparation this runs close to an hour; how do you manage it in the ~35 minutes of a 45-minute interview?
@hello_interview
@hello_interview 5 months ago
Yeah, it's an hour here because of all the fluff and teaching. This is reasonably 35 minutes without that.
@PiyushZambad
@PiyushZambad 5 months ago
Thank you for making these videos so engaging! Your eloquent and logical style of explaining the concepts makes watching these videos so much fun.
@hello_interview
@hello_interview 5 months ago
High praise! Right on :)
@eshw23
@eshw23 3 months ago
Evan your explanations are extremely amazing and the best on this channel. Hope to hear more soon.
@TheKarateKidd
@TheKarateKidd 6 months ago
One of the first things that came to mind at the beginning of this problem is dynamic webpages. Most websites don't serve the majority of their content as plain HTML. To be honest, if I were interviewing a senior or above candidate, not mentioning dynamic content early on would be a red flag. I'm glad you included it at the end of your video, but I do think it's important enough to be mentioned early on.
@RafaelDHirtzPeriod2
@RafaelDHirtzPeriod2 4 months ago
So sorry for being Microsoft Word, but on all of your videos THE APROACH is spelled incorrectly. Thank you so much for posting all your videos. Super helpful for all of us interviewees out there!
@hello_interview
@hello_interview 4 months ago
🤦🏻‍♂️first person to notice this. Will fix next video!
@NeyazShafi
@NeyazShafi 1 month ago
Excellent quality of content. Please do more of these.
@chongxiaocao5737
@chongxiaocao5737 6 months ago
Finally a new update! Appreciate it!
@technical3446
@technical3446 4 months ago
A few inputs:
- The bandwidth calculation needs to factor in uploading data to S3 as well. You'll probably also compress on upload, and HTML should be fairly compressible.
- At that rate, the system will likely not be network-throughput bound but latency- and connection-count bound. Assume each site takes 1 sec to return the page: at 10k requests/sec per node you need 10k TCP connections, which is under the possible limit but will lead to a number of perf issues.
- Memory: 10k * 2 MB = 20 GB should be enough, but all of this is GC-able — less reusable memory and TCP connections.
- You'll likely be better off with a smaller node type, around 50 Gbps; pushing utilization beyond that on a single node is challenging and you'll hit other limits.
- Another optimization is to run parsing and crawling in the same process to avoid handing the HTML off to a separate process. You can also update the DB with all the links in one write.
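The connection and memory numbers in this comment follow from Little's law (concurrency = arrival rate × time in system). A quick sanity check of the arithmetic, with the comment's assumptions spelled out as constants:

```python
RPS_PER_NODE = 10_000     # requests/sec per crawler node, from the comment
PAGE_SIZE_MB = 2          # average page size assumption
FETCH_LATENCY_S = 1.0     # assume ~1s for a site to return its page

# Little's law: concurrent requests = arrival rate x time each spends in flight
concurrent_connections = int(RPS_PER_NODE * FETCH_LATENCY_S)

# If each in-flight page is buffered in memory while downloading:
inflight_memory_gb = concurrent_connections * PAGE_SIZE_MB / 1024

# Sustained download bandwidth (MB/s -> Gbit/s)
bandwidth_gbps = RPS_PER_NODE * PAGE_SIZE_MB * 8 / 1000
```

This gives 10k concurrent connections, roughly 19.5 GB of in-flight pages, and 160 Gbps of sustained download bandwidth — consistent with the comment's point that connection count and latency, not raw throughput, bite first.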
@sharanya_sr
@sharanya_sr 5 months ago
Thank you for the great content, and congratulations on making this a go-to channel for system design. The content is refreshing — watch once, never forget. I'd request a video on how to approach a problem we haven't seen before: what's the best we can do, e.g. map it to a related system, or reason logically about how the API/design would work while staying focused on the problem asked.
@hello_interview
@hello_interview 5 months ago
Cool idea, we'll give that a go!
@allenliu1065
@allenliu1065 4 months ago
Best explanation of bloom filters, Redis sets, and a hash as a GSI.
@deshi_techMom
@deshi_techMom 3 months ago
I absolutely love the details you go into, and you have great presentation skills! Super admirable! You just made system design interviews easier for me
@zy3394
@zy3394 6 months ago
love your content , learned a lot, please keep updating more. ❤
@itayyahimovitz86
@itayyahimovitz86 3 months ago
Great video! I would probably add a proxy component to this design for the part where the crawler makes HTTP calls to fetch the HTML (maybe for the DNS lookups as well). This is a critical part of designing a web crawler: you want to avoid making the calls directly from the network where the crawlers are deployed, in case all of your network's IP addresses get blocked, and for security reasons you want to isolate outgoing network calls from your instances.
@thiernoamiroudiallo2451
@thiernoamiroudiallo2451 1 month ago
This is really fantastic content. Keep up the good work.
@randymujica136
@randymujica136 4 months ago
In my opinion one of the most important bullets of your strategy is how you minimize the initial HLD and you make sure you deliver something that actually covers all the functional requirements. I find this calibration really valuable and not that easy to achieve, since as a Senior candidate, one can be tempted to go straight to deep dives without actually setting clearly that pause from HLD to deep dives. What do you recommend to get better at this?
@zayankhan3223
@zayankhan3223 6 months ago
This is one of the best system design interview videos. Kudos to you. I would like to understand a little more about how we handle duplicate content. What if the content is 80% the same on two pages? A hash only works when pages are exactly the same.
@hello_interview
@hello_interview 6 months ago
Yah, only exactly the same
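Since an exact hash only catches identical pages, near-duplicate detection is usually built on shingling plus MinHash or SimHash. A toy sketch of the underlying idea — Jaccard similarity over word shingles (real systems compare compact MinHash signatures instead of raw shingle sets, to avoid pairwise comparisons):

```python
def shingles(text: str, k: int = 3) -> set:
    # Overlapping k-word shingles of the cleaned page text.
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str) -> float:
    # Fraction of shingles the two documents share.
    sa, sb = shingles(a), shingles(b)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 1.0
```

Two pages differing only in a word score high, while unrelated pages score 0, so you'd flag anything above a chosen threshold (say 0.9) as a near-duplicate.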
@BhaskarJayaraman
@BhaskarJayaraman 4 months ago
Great content. In the deep dives around 52:41 ("when you get a new URL you'll put it on here it'll be undefined and then when we actually parse it we'll update this with") and 52:46 ("the real last crawl time and with the S3 link which also would have been undefined so that would handle that") — I think you mean: when we actually crawl and download it, we'll update it with the last crawl time and the S3 link. Also, with DynamoDB the lookup will be O(1), not O(log n). Would be great if you had the DynamoDB GSI schema.
@IntSolver
@IntSolver 5 months ago
Hey, thanks for your video. I have watched all your content and I gained immense amount of knowledge. I gave my E4 interview a week back, and my question was this (with a slight variation of the crawling being done through an app which was deployed in 10k devices). I covered all the content which you've presented here in the same structure, and was able to dive deep into all the parts the interviewer asked. I was expecting an offer but got rejected due to "No Hire" in Design round. After retrospection, I could find some people talking about chord algorithm and peer2peer crawler was expected. I still don't understand what would be the cause for No hire, because interviewer didn't even hint towards anything and was aligned throughout. The experience was really heartbreaking. SO, I just wanted to leave it out here that even though I did my best, it wasn't my day (I guess). thanks for your videos, nonetheless
@hello_interview
@hello_interview 5 months ago
So sorry to hear that, that’s such disappointing news to receive. It’s always a toss up. Keep your head high and best of luck with future endeavors 💪
@krishnabansal7531
@krishnabansal7531 6 months ago
Suggestions: please mention the clarifying questions to ask for a specific problem. Even if the problem is well known, the panel still expects a few clarifying questions, especially from a senior candidate. Also, if you could cover company-specific expectations (if any) for top MAANG companies, that would be excellent.
@letsgetyucky
@letsgetyucky 6 months ago
commenting for the algo. thanks for excellent and free content!
@hello_interview
@hello_interview 6 months ago
Legend 🫡
@letsgetyucky
@letsgetyucky 6 months ago
​@@hello_interview Feedback: really enjoyed the video! Would love if future videos were also mostly skewed towards deep dives. Suggesting other topics to research yourself (or hash out with others in the comments) is also super valuable. Finally, calling out the anti patterns that are being regurgitated (e.g. bloom filters) is very valuable as well.
@davidoh0905
@davidoh0905 6 months ago
@@letsgetyucky Are bloom filters an anti-pattern!? Just curious!
@letsgetyucky
@letsgetyucky 6 months ago
@@davidoh0905 During the deep dive, Evan says that bloom filters come up commonly in interviews because they appear in the solutions in the popular interview prep books. But the prep books don't do a great job of discussing the tradeoffs of a bloom filter versus more practical solutions. It's a nice theoretical solution, but in a real-world system you could do something simpler and just brute-force the problem.
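For reference, the bloom filter the prep books propose is only a few lines — the tradeoffs (false positives, no deletes, fixed sizing) are the interesting part. A minimal sketch using a big integer as the bit array; the sizes and hash count are illustrative, not tuned:

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0  # Python int as an arbitrarily wide bit array

    def _positions(self, item: str):
        # Derive k independent bit positions by salting the hash input.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item: str) -> bool:
        return all((self.bits >> p) & 1 for p in self._positions(item))
```

`might_contain` can return a false positive but never a false negative — and you can't delete — which is exactly why a plain DB unique index is often the simpler, "brute force" answer in practice.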
@CS2dAVE
@CS2dAVE 6 months ago
S Tier system design content! Another exceptional video 👏
@vzfzabc
@vzfzabc 6 months ago
Nice, thanks for the content. I also really appreciated the videos from the mock interview. I found that much more useful and would love to see more of those.
@hello_interview
@hello_interview 6 months ago
Tougher there for privacy reasons. Requires explicit sign off from coach and candidate, but I'll see what I can do :)
@dibll
@dibll 6 months ago
Hope you can create videos of the write ups done by other authors on HelloInterview in the near future. Love the content. Thank you!!
@hagridhaired
@hagridhaired 8 days ago
Love your content! Just subbed to premium on Hello Interview
@aishwarya7179
@aishwarya7179 2 months ago
Great video! How to draw the curved arrow like you did at 17:21? I tried looking up with excalidraw options but couldn't find it.
@bansalankur
@bansalankur 23 days ago
Instead of a frontier queue, what if I store the data in a Postgres table, pick URLs from there, and change their state atomically — will there be any scaling challenges?
@akshat3106
@akshat3106 5 months ago
I could not find where it's mentioned that AWS SQS has a built-in exponential backoff retry mechanism. Can anyone please share the link? Thanks a lot!
@hello_interview
@hello_interview 5 months ago
On mobile but scroll through the comments. I linked the aws docs in response to another comment.
@akshat3106
@akshat3106 5 months ago
@@hello_interview Thanks for reply, but could not find it
@fran_sanchez_yt
@fran_sanchez_yt 3 months ago
@@hello_interview I haven't been able to find the link and I also wasn't able to find this exponential back-off feature mentioned in the SQS docs...
@davidoh0905
@davidoh0905 6 months ago
Just in time!!!!
@willfzj
@willfzj 1 month ago
Great video! Just one question: in the last deep dive on crawler traps, your option is DFS with a max depth — why don't we just use BFS for the crawl? Which is better in your opinion?
@dho449
@dho449 2 months ago
At 36:06 you discuss visibility timeout in the context of Amazon SQS. You said the worker will send SQS a message that the html has been downloaded. What if it takes longer than 30 seconds to download the html and send the message to SQS. Is there the potential that you will duplicate the html download if some other worker pulls it off the queue after it becomes visible again?
@princeofexcess
@princeofexcess 1 month ago
I was trying to find exponential backoff as a configuration option for SQS and the Serverless Framework but I can't find it. Could anybody point me in that direction? I would handle it inside the function, with code that increases the visibility timeout based on the approximate receive count and returns the array of record ids that threw an error — but that seems like a lot of code for something that would be great to just configure.
@vimalkumarsinghal
@vimalkumarsinghal 5 months ago
Thanks for sharing the SD on the web crawler. Question: how do we handle dynamic pages, subdomains, URLs that loop back to the same URL, and URLs with query strings? What's the best approach to identify duplicates? Thanks
@hello_interview
@hello_interview 5 months ago
May not totally understand the question, but you could just drop the query strings from extracted urls
@IdanKepten
@IdanKepten 4 days ago
What about using proxy servers to fetch the webpage, instead of fetching directly from the worker?
@darkimchicat
@darkimchicat 22 days ago
Thanks for this video! What's the text editor canvas you're using? Seems very slick and intuitive to use
@jiananstrackjourney370
@jiananstrackjourney370 4 months ago
Great video! I have a question, is 5k requests per second realistic? Even with the most powerful machine on EC2?
@aforty1
@aforty1 5 months ago
Thanks for this! As far as checking the hash @ 57:00, wouldn’t we already have the last hash since we had to retrieve that url record before we fetched the webpage because we had to go get the lastCrawlTime?
@adrienbourgeois108
@adrienbourgeois108 2 months ago
Both your designs and the way you explain them are top notch compared to the rest I've seen so far. One quick remark about your suggestion of using a Redis Set to check whether content has already been "seen": I would personally not use the Redis Set data structure for this, because the whole set needs to fit on one Redis node (I think you mentioned that in the video), so it does not scale out. Why not simply use the Redis String data structure? The key would be the hash of the content, so as long as the key is in Redis you know you have seen the content. And unlike the Set, this scales out, since different keys can live on different Redis nodes. Anyway, an in-memory distributed cache (like Redis) is more than likely unnecessary here, since our bottleneck is downloading the HTML — optimizing the seen-content check isn't going to move the needle.
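The per-key scheme described above can be sketched with a plain dict standing in for Redis (an assumption purely for illustration). The real call would be `SET seen:<hash> 1 NX`, which succeeds only for the first writer; because each key hashes to its own cluster slot, the data spreads across nodes instead of living in one big Set:

```python
def mark_seen(store: dict, content_hash: str) -> bool:
    # Emulates Redis `SET seen:<hash> 1 NX` with a dict:
    # returns True only for the first writer of this content hash.
    key = f"seen:{content_hash}"
    if key in store:
        return False
    store[key] = 1
    return True
```

The parser would only upload/process content when `mark_seen` returns True, skipping hashes it has already recorded.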
@davidoh0905
@davidoh0905 6 months ago
If Kafka does not support retry out of the box, what does that mean exactly? If you do not commit, the offset does not move, which could potentially serve as a retry(?) Also, could you compare this with other queueing services that support retry, like SQS? A comparison of when to use Kafka vs SQS would be really good too! Message broker vs task queue might be their most frequent use cases, but it would be good to justify the choice in this scenario.
@MrCSFTW
@MrCSFTW 2 months ago
Maybe a nit, or I'm not in the know, but SQS doesn't have built-in exponential retry, right? You'd need to implement it with ApproximateReceiveCount and modify the visibility timeout?
@MrCSFTW
@MrCSFTW 2 months ago
God, my YouTube handle is unbearable, I apologize
@microhan14
@microhan14 1 month ago
"Do we already have this URL?" with the URL as PK — wouldn't this mean we never get fresh data from the same URL?
@undercovereconomist
@undercovereconomist 6 months ago
Wow, the amount of depth here is absolutely insane. How can you compress so much information into a 1-hour interview? I learned so much from this video that I've never seen elsewhere, and it's all presented so elegantly and naturally. The speaker speaks clearly — no ums and ahs, no speed-up? You must be a great engineer at work! One thing I'm a bit unsatisfied about is duplicated content. Is it even possible that we actually have completely duplicated content? Even between two different web pages, there might be just a few places where the content differs — and that would completely break our hash function, right? Do you know of any hash function that would map two mostly similar webpages close together? Do you see any role for word2vec or vector storage here?
@ronakshah725
@ronakshah725 6 months ago
I think this is a great question! I want to attempt an answer, but I'm no expert haha. As the goal of this particular system is to train language models, it's worth understanding whether optimizing for "similar" web pages is necessary for our top-level goal. In general, it could be helpful to prioritize learning from chunks of text that appear on many pages. But we have to remember that connecting back to the source could also be required later, for things like citations, so we have to be a bit smart about this. TL;DR it's a can of worms, and I would try to better understand the priority of this relative to the existing requirements of the system.
@ronakshah725
@ronakshah725 6 months ago
This isn’t skirting off the question, but it’s a good step towards delivering our final solution.
@sushmitagoswami2033
@sushmitagoswami2033 3 months ago
Excellent video! One thought — would it be possible to increase the font size a bit? Thanks so much!
@theoshow5426
@theoshow5426 4 months ago
Great content! Keep it coming!
@prahaladvenkat
@prahaladvenkat 4 months ago
Your channel is a gold mine! Thanks a ton. How do you decide between Kinesis Data Streams and SQS? Although they serve different purposes, both feel like good options to begin with. Here, SQS ended up being the better option because of retries, DLQ support, etc., but ideally I'd like to be able to deterministically and correctly choose the right option from the beginning. It would be super helpful if you could briefly reason in the videos (in just 1 or 2 lines) why you pick a certain offering over other seemingly similar technologies!
@damluar
@damluar 3 months ago
To avoid batching URLs from the same domain together, could we use Kafka partitions and spread messages by hash(URL)? Since different crawlers work at different paces, they would likely pick up those URLs at different times.
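A sketch of the routing choice being discussed — hashing the domain pins a domain's URLs to one partition (convenient for politeness/rate limiting), while hashing the full URL spreads them out as this comment suggests. The function name is illustrative, not from the video:

```python
import hashlib
from urllib.parse import urlparse

def partition_for(url: str, num_partitions: int, by_domain: bool = True) -> int:
    # by_domain=True: one domain -> one partition (single consumer enforces crawl delay).
    # by_domain=False: spread a domain's URLs across partitions, per the comment.
    key = urlparse(url).netloc if by_domain else url
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```

The tradeoff: spreading by full URL avoids hot partitions for big domains, but then every consumer needs shared rate-limit state for politeness, since no single consumer owns a domain.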
@DilipKumar-ij3cf
@DilipKumar-ij3cf 2 months ago
For your URL check: when do you remove entries so a URL becomes eligible for the next crawl (content freshness)? As is, the check will block the next eligible crawl — how does the design address that? Also, some content changes often and some doesn't; the design should probably handle that, instead of using the same solution for both.
@damluar
@damluar 3 months ago
How would you choose the initial frontier URLs? How many should be enough?
@jingxu2697
@jingxu2697 1 month ago
Thanks Evan for the great content! Learned a lot from this channel! One question: I work at a company where we mainly use in-house tools, so I don't have much experience with open-source or AWS tools, e.g. the queues you mentioned in this video. In the interview, is it okay if I explain the ideas behind the tools instead of naming them? Will that be a sign of lack of experience?
@hello_interview
@hello_interview 1 month ago
Absolutely
@praneethnimmagadda1938
@praneethnimmagadda1938 5 months ago
Just wondering — there's no mention of an inverted index in this crawling flow. Wouldn't an inverted index help during searches?
@hello_interview
@hello_interview 5 months ago
Searches of what?
@praneethnimmagadda1938
@praneethnimmagadda1938 5 months ago
​@@hello_interview I mean when user searches for results of query on search engine
@richbuckingham
@richbuckingham 29 days ago
My main gripe is the noise getting into the frontier queue: I think an intelligent worker processing URL candidates from a new-URL-candidates queue would be a huge dedupe and efficiency improvement, rather than having the parser drop un-cleansed URLs directly into the frontier queue. Other gripes too: no mention of what canonical URLs are, how they could be used in the parsing logic, or how they would best serve as the PK to dedupe on in the URL table. No mention of URL parameters or dynamic page content (e.g. /pages/?page_id=123&noise=random versus /pages/123?noise=random). No mention of how enabling JS rendering in the crawler might produce different outcomes, nor of rule-based filtering of non-text content (e.g. why would you even attempt to crawl /images/image123.jpg?). The system is also missing success and error metrics, and custom per-domain crawling and parsing logic (e.g. the ability to define regex blacklists for known-bad URLs), so that you effectively crawl for good text content instead of pulling in poor-quality text from specific domains or spending time and money on a domain that has so far provided only garbage. You have only 5 days to do the work, but you pay hourly for everything on AWS — so why not spin up 40 machines for 10 hours instead of 4 machines for 100 hours? Then you find out on day 1 whether you'll have a good outcome, and you still have 4 more days to re-evaluate the system, figure out how to improve content quality, and retry domains/URLs where metrics show bad outcomes you know you can improve. TL;DR: determine ASAP where the best successes and biggest failures are, and use rule-processing capabilities to optimize the system as you learn.
@flyingpiggy741
@flyingpiggy741 4 months ago
Why do we need a DNS server? Would it be enough to grab text from a url?
@healing1000
@healing1000 6 months ago
Thank you! To avoid duplicate URLs, do we need to discuss using a cache, or is it OK to only use the database?
@hello_interview
@hello_interview 6 months ago
Same conversation as with duplicate content. A cache is certainly an option; the DB index is enough imo.
@Sandeepg255
@Sandeepg255 6 months ago
I think at 39:03 you're saying to set the visibility timeout of the message based on now - crawlDelay, but the visibility timeout is a queue-level concept — how are you planning to set it at the message level?
@hello_interview
@hello_interview 6 months ago
You can set them at the message level with SQS! From the docs, “Every Amazon SQS queue has the default visibility timeout setting of 30 seconds. You can change this setting for the entire queue. Typically, you should set the visibility timeout to the maximum time that it takes your application to process and delete a message from the queue. When receiving messages, you can also set a special visibility timeout for the returned messages without changing the overall queue timeout.”
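Putting the quoted doc behavior together with the politeness use case: the per-message value you'd apply in SQS (via the special visibility timeout on the received message) is just the time remaining until the domain's crawl delay has elapsed. A sketch with an illustrative function name:

```python
def politeness_timeout_s(last_crawl_ts: float, crawl_delay_s: float, now_ts: float) -> int:
    # Hide the message until the domain's crawl delay has elapsed,
    # so no worker re-hits the domain too early.
    remaining = (last_crawl_ts + crawl_delay_s) - now_ts
    return max(0, int(remaining))
```

If the delay has already elapsed, the result is 0 and the message is immediately processable; otherwise it stays hidden for exactly the remaining wait.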
@swagatrath2256
@swagatrath2256 4 months ago
Very well explained! If possible, please share some tips on how to keep up with the latest technologies and develop a mindset for system designs like this. I feel like I'm good at coding but not that great when it comes to designing architecture like this. Basically, what I'm looking for is how one progresses from a Developer role to an Architect role.
@xymadison
@xymadison 3 months ago
This is awesome — a very comprehensive and clean explanation. I've learned a lot from your videos, thanks. May I ask what tool or website you use as the whiteboard?
@Anonymous-ym6st
@Anonymous-ym6st 5 months ago
Thanks for the great content as always! One quick question about the Redis vs. global secondary index comparison: given the data can fit on a single instance, if we use a hash-based index (not sure if DynamoDB supports it, but MySQL should), that would also be O(1) — so wouldn't Redis be a bit of over-engineering in this case?
@lixinyi7734
@lixinyi7734 4 months ago
what is the Text editor you are using? I like it
@hello_interview
@hello_interview 4 months ago
Excalidraw
@dibll
@dibll 6 months ago
Not related to this video in particular, but I have a question about partitioning. Let's say we have a DB with 2 columns, firstname and lastname. When we say we want to prefix the partition key (firstname) with lastname, does that mean all identical lastnames will be on the same node? If so, what happens to the firstnames — how are they arranged? Thanks
@hello_interview
@hello_interview 5 months ago
If the primary key is a composite of first and last, then no — this just means that people with the same first and last name will be on the same node.
@sanketpatil493
@sanketpatil493 6 months ago
Cannot thank you enough for all this valuable content. Just amazing work! Btw, can you share some good resources for preparing for system design interviews? Books, courses, engineering blogs, etc. A dedicated video would be even more helpful!
@hello_interview
@hello_interview 6 months ago
I'm certainly biased, but I think our content is some of (if not the) best out there, so I would start at www.hellointerview.com/learn/system-design/in-a-hurry/introduction. There are some useful blog posts on system design too, depending on your level, at www.hellointerview.com/blog — all written by either me or my co-founder (ex-Meta sr. hiring manager).
@adityaagarwal5348
@adityaagarwal5348 2 months ago
🙈 Excalidraw doesn't let you type when there are zig-zag arrows. If you click on one of the zig-zag arrows, you'll see it blocks a big area, and clicking into that area just writes text on the arrow.
@Global_nomad_diaries
@Global_nomad_diaries 6 months ago
Can this be asked in product architecture interview at Meta or just system design?
@hello_interview
@hello_interview 6 months ago
Should be system design not product architecture in meta world. But, you never know, some interviewers go rogue.
@mihaiapostol7864
@mihaiapostol7864 5 months ago
Hello, I enjoy your content a lot and I'm learning a lot from it, thanks! One question about the design: around minute 52:00 you say the check that the URL already exists should be done in the parser. But if this uniqueness check isn't done earlier, in the crawler, couldn't the crawler save the same text to S3 twice for the same URL?
@hello_interview
@hello_interview 5 months ago
Nope! We won't add new links to the queue if they already exist. That's why we check in the parser.
@mihaiapostol7864
@mihaiapostol7864 5 months ago
@@hello_interview understood, thank you!
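The dedupe flow discussed in this thread can be sketched in a few lines (hypothetical names; a plain set and deque stand in for the URL table and the queue, e.g. SQS): only links not already seen get enqueued, so downstream crawlers never fetch, and never write to S3, the same URL twice.

```python
from collections import deque

seen_urls: set[str] = set()      # stands in for the crawled-URL table
frontier: deque[str] = deque()   # stands in for the URL queue (e.g. SQS)

def parser_enqueue(extracted_links: list[str]) -> int:
    """Called by the parser after extracting links from a fetched page.

    Enqueues only URLs we have never seen, so the same URL is never
    crawled (or stored) twice.
    """
    added = 0
    for url in extracted_links:
        if url not in seen_urls:   # the uniqueness check lives in the parser
            seen_urls.add(url)
            frontier.append(url)
            added += 1
    return added

parser_enqueue(["https://a.example", "https://b.example"])
dupes_added = parser_enqueue(["https://a.example"])  # already seen -> 0 added
```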
@TheSmashten
@TheSmashten 1 month ago
What are you using for the drawing board??
@hello_interview
@hello_interview 1 month ago
Excalidraw
@jieyin4169
@jieyin4169 2 months ago
love this video
@dashofdope
@dashofdope 23 days ago
Any value in telling your interviewer upfront "I am going to tackle the system design in this order (Func reqs => Core Entities => API => HLD => Deep dives)"? I understand you do it here for our sake, but it would probs actually be easier for me to reference, and it makes clear to the interviewer that I'm not diving too deep to begin with.
@evalyly9313
@evalyly9313 6 months ago
So to give the right back-of-the-envelope estimation, the base knowledge is that an AWS instance's capacity is 400Gbps. I don't have this knowledge in mind; is it ok to ask or search during the interview, or is this something we should keep in mind?
@hello_interview
@hello_interview 6 months ago
I think it's useful to have some basic specs as a note, maybe on your desk, when interviewing. But it's also ok to ask. The intuition that caches can hold up to around 100GB and DBs up to around 100TB is good to have, though.
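As a hedged illustration of the kind of envelope math this thread is about: the 400 Gbps figure comes from the comment above, while the 10B-page corpus and ~2MB-per-page numbers are assumptions for illustration, not necessarily the video's exact figures.

```python
# Back-of-the-envelope: how long would a single big instance take to fetch the web?
PAGES = 10_000_000_000          # assumed corpus size: 10B pages
AVG_PAGE_BYTES = 2 * 10**6      # assumed ~2 MB per page
NIC_GBPS = 400                  # high-end AWS instance bandwidth (Gbps)

total_bytes = PAGES * AVG_PAGE_BYTES    # 2e16 bytes = 20 PB to download
bytes_per_sec = NIC_GBPS * 10**9 / 8    # 50 GB/s at full line rate
seconds = total_bytes / bytes_per_sec   # 400,000 s
days = seconds / 86_400                 # ~4.6 days at 100% utilization
```

In practice you'd derate for utilization (crawlers are bound by politeness and latency, not just NIC speed), which is why the design still ends up with a fleet of instances rather than one.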
@georgepesmazoglou4365
@georgepesmazoglou4365 6 months ago
Great design! I wonder why there was never a mention of doing the whole thing with Spark, using offline batch jobs rather than real-time services?
@afge00
@afge00 6 months ago
I was thinking about batch as well
@hello_interview
@hello_interview 6 months ago
Interesting. You know, as many times as I've asked this, no one has ever proposed it. Off the top of my head I see no obvious reason why you couldn't get it to work, especially for just a one-off.
@georgepesmazoglou4365
@georgepesmazoglou4365 6 months ago
@@hello_interview I do crawling for a large company. Typically you would do something like the video's design when you care about data freshness; if you don't care about that, like the LLM use case, you would do a Spark-y thing where you just split the work across a bunch of workers. You can have the HTML fetching and processing parts in different stages. Your inputs can be the URLs and previously crawled pages, and you join them so that you crawl only new URLs, or recrawl URLs only after some time since their last crawl. The main disadvantage compared to your design is that you are not as fault tolerant, since you can't do much in terms of checkpointing. Also it is less fun to discuss :)
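A minimal plain-Python sketch of the batch-style join described in this comment (hypothetical data and recrawl policy; in a real pipeline both sides would be Spark datasets and this filter would be a join in a batch job):

```python
from datetime import datetime, timedelta

RECRAWL_AFTER = timedelta(days=30)  # assumed recrawl policy
now = datetime(2024, 6, 1)

candidate_urls = {"https://a.example", "https://b.example", "https://c.example"}
# Previously crawled pages: url -> last crawl time (stands in for a dataset)
crawled = {
    "https://a.example": datetime(2024, 5, 25),   # fresh: skip
    "https://b.example": datetime(2024, 1, 1),    # stale: recrawl
}

# The "join": keep URLs never crawled, or crawled long enough ago.
to_crawl = sorted(
    url for url in candidate_urls
    if url not in crawled or now - crawled[url] >= RECRAWL_AFTER
)
# b is stale and c is brand new; a was crawled a week ago, so it's skipped.
```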
@yottalynn776
@yottalynn776 5 months ago
Very nice explanation! When actually crawling the pages, the crawler could be blocked by the website owner. Do you think we need to mention this in the interview and propose some solutions, like using rotating proxies?
@hello_interview
@hello_interview 5 months ago
Good place for depth! Ask your interviewer :)
@dhanyageetha1519
@dhanyageetha1519 5 months ago
Kafka also supports configurable exponential backoff on the producer side.
@hello_interview
@hello_interview 5 months ago
Yup, but that's just to make sure the message gets onto the queue, so it's not the same problem we're solving here.
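For reference, a minimal sketch of the exponential-backoff schedule being discussed (pure Python; the parameter names are illustrative, though Kafka's producer exposes similarly named `retry.backoff.ms`-style settings): double the delay on each retry, capped at a maximum.

```python
def backoff_schedule(base_ms: int = 100, cap_ms: int = 10_000, retries: int = 6):
    """Exponential backoff: double the delay each retry, capped at cap_ms."""
    return [min(base_ms * 2**attempt, cap_ms) for attempt in range(retries)]

delays = backoff_schedule()
# First six delays: 100, 200, 400, 800, 1600, 3200 ms; later attempts hit the cap.
```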
@kunliu1062
@kunliu1062 3 months ago
Wow, wish I had found this much earlier. Now I certainly wouldn't just go into my next interview and throw the bloom filter onto the diagram without deep thinking 😝
@nanlala3171
@nanlala3171 6 months ago
I saw you used many AWS services in your design. Is it good practice to use specific products and their features (DLQ/SQS, GSI/DynamoDB) in the design? What if the interviewer has never used these products and has no concept of these services/features?
@hello_interview
@hello_interview 6 months ago
Depends on the company; in general, yes. But, importantly, don't just name the technology. The important part is that you understand the features and why they'd be useful. For example:
Bad: "I'll use DynamoDB here."
Good: "I need a DB that can XYZ. DynamoDB can do this, so I'll choose it."
@Marcus-yc3ib
@Marcus-yc3ib 2 months ago
Thank you very much. You saved me.
@vamsikrishnabollepalli4908
@vamsikrishnabollepalli4908 6 months ago
Can you also provide the system design interview flow and the product design interview flow for each problem?
@hello_interview
@hello_interview 5 months ago
They're mostly the same, tbh. www.hellointerview.com/blog/meta-system-vs-product-design
@tori_bam
@tori_bam 5 months ago
Thank you for more amazing content! I'll be having a mock interview through Hello Interview soon.
@hello_interview
@hello_interview 5 months ago
Sweet! Can’t wait :)
@joo02
@joo02 3 months ago
I confirm your hair and hat didn't have any negative influence in the making of this System Design video.
@hello_interview
@hello_interview 3 months ago
😂🫶
@mularys
@mularys 6 months ago
Here's my concern: your solution is great, but if everyone talks about the same thing during the interview, especially when the candidate is driving the process, will it raise any red flags on the hiring committee side, since they might think candidates are all referring to the same source?
@hello_interview
@hello_interview 6 months ago
This is not meant to be a script. If your plan is to regurgitate this back to an interviewer, I'd recommend not doing that. Instead, it's a teaching resource to learn about process, technologies, and potential deep dives. If you get this problem, then sure, talk about some of this stuff, but also let it be a conversation with the interviewer.
@rostyslavmochulskyi159
@rostyslavmochulskyi159 6 months ago
But is there an issue if you answer all/most of the interviewer's questions correctly? I believe it's an issue if you memorize this but can't go any further; if you can go further, there is nothing wrong.
@mularys
@mularys 6 months ago
@@hello_interview Yeah, makes sense. You present a good framework to structure the talking points that candidates can bring up, and I found it pretty useful. My system design question was the Top-K video one, and I followed the key points you mentioned. My target was E5, and the interviewer just had a handful of follow-up questions (90% of the time I was talking). Eventually I passed that round with a "strong hire". Of course, I added my own points of view during the interview, but I feel like I was just taking something off the shelf.
@mdyuki1016
@mdyuki1016 6 months ago
What's the reason for not storing URLs in a database like MySQL? For retrying, just add a column like "retry_times".
@hello_interview
@hello_interview 6 months ago
I mention this at some point, I believe, when discussing the alternate approach of having a "URL Scheduler Service." The URLs have to get back on the queue somehow: either directly, or via a scheduler where state is kept in the DB.
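A minimal sketch of that scheduler-style approach (hypothetical schema; SQLite standing in for MySQL): state lives in the DB, and a scheduler pass re-enqueues failed URLs while bumping a retry counter, giving up after a cap.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE urls (
    url TEXT PRIMARY KEY,
    status TEXT DEFAULT 'pending',   -- pending | done | failed
    retry_times INTEGER DEFAULT 0
)""")
db.execute("INSERT INTO urls (url) VALUES ('https://a.example'), ('https://b.example')")

MAX_RETRIES = 3  # assumed cap

def schedule_retries() -> list[str]:
    """Scheduler pass: pick failed URLs under the retry cap, bump the
    counter, and return them so they can be put back on the queue."""
    rows = db.execute(
        "SELECT url FROM urls WHERE status = 'failed' AND retry_times < ?",
        (MAX_RETRIES,),
    ).fetchall()
    urls = [r[0] for r in rows]
    db.executemany(
        "UPDATE urls SET status = 'pending', retry_times = retry_times + 1 WHERE url = ?",
        [(u,) for u in urls],
    )
    return urls

# A crawler reports a failure; the next scheduler pass re-enqueues it.
db.execute("UPDATE urls SET status = 'failed' WHERE url = 'https://a.example'")
retried = schedule_retries()
```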
@kamakshijayaraman3747
@kamakshijayaraman3747 3 months ago
I'm not able to understand the math for the number of AWS instances. Can someone explain?
@t.jihad96
@t.jihad96 3 months ago
Thank you for the effort; please keep up the good work. I'm watching your videos as if they were a Netflix series, very exciting. I was hoping you'd cover some topics like: if the crawler processed the message but failed to commit back to the queue that it processed it (due to a crash), how would you handle such a case? Is there a generic solution that can be used across different systems, instead of workarounds?
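One generic answer to this question (not from the video) is at-least-once delivery plus idempotent processing: after a crash before the ack, the queue redelivers the message, but a processed-marker keyed by a stable message ID makes the redelivery a no-op. A minimal sketch with hypothetical names, where a set stands in for a durable store:

```python
processed: set[str] = set()   # stands in for a durable store keyed by message ID
side_effects: list[str] = []

def handle(msg_id: str, url: str) -> bool:
    """Process a queue message idempotently.

    With at-least-once queues (e.g. SQS), a crash after processing but
    before the delete/ack means the message gets redelivered; checking a
    processed-marker first makes that retry harmless.
    """
    if msg_id in processed:
        return False              # duplicate delivery: skip side effects
    side_effects.append(f"crawled {url}")
    processed.add(msg_id)         # record before (or atomically with) the ack
    return True

handle("m1", "https://a.example")
redelivered = handle("m1", "https://a.example")  # crash before ack -> redelivery
```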
@tomtran6936
@tomtran6936 5 months ago
What tool are you using to draw and take notes, Evan?
@hello_interview
@hello_interview 5 months ago
Excalidraw