Thanks for such an excellent talk, covering five key use cases in a well-organized way.
@KevinGenus · 5 years ago
Fibonacci use case! FINALLY. Thank you!!
@lmaoiwaslikelmao1132 · 3 years ago
Gur Dotan is the most Warcraft Orc name I've ever heard in real life
@DawidOhia · 6 years ago
Great presentation! I wonder what the average hit/miss ratio is, and the size (in number of entries) of the cache, in this Datorama scenario.
@sunwonjhung9919 · 2 years ago
Great explanation! I have a question: how do you invalidate the caches? Do you use a TTL?
@Redisinc · 2 years ago
TTL is a great candidate and our main recommendation for an easy invalidation policy.
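For anyone following along, here is a minimal sketch of what TTL-based invalidation can look like with redis-py. The `query_db` helper, the key naming, and the 300-second freshness window are illustrative assumptions, not details from the talk or the reply above.

```python
# Minimal cache-aside sketch with TTL-based invalidation (redis-py).
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
CACHE_TTL_SECONDS = 300  # assumed freshness window; tune per use case


def query_db(report_id: str) -> dict:
    # Hypothetical slow database/API call standing in for the real data source.
    return {"report_id": report_id, "rows": []}


def get_report(report_id: str) -> dict:
    key = f"report:{report_id}"          # assumed key naming convention
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)        # cache hit: serve straight from Redis
    result = query_db(report_id)         # cache miss: compute the expensive result
    # SET with EX attaches the TTL atomically, so the entry expires on its own
    # and stale data is bounded by CACHE_TTL_SECONDS without explicit deletes.
    r.set(key, json.dumps(result), ex=CACHE_TTL_SECONDS)
    return result
```

The appeal of the TTL approach is exactly what the reply suggests: no explicit invalidation path is needed, only a bound on how stale a cached entry is allowed to be.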
@rohitchatterjee2327 · 4 years ago
Excellent slides, very easy to follow.
@osamaa.h.altameemi5592 · 5 years ago
Fantastic talk, thx a ton.
@ערןאוצפ · 5 years ago
Great presentation, 2 questions: 1) How do you correlate a cached query result with a new incoming query (i.e., recognize the same query again)? 2) This is more of a clarification than a question: a query is taken from the queue and then attempts to acquire a lock for that specific tenant. If that tenant is already at its capacity, the query is re-queued; potentially it might still be busy the next time around, and the next, and so on, and then the query is thrown away.
@dandymcgee · 4 years ago
For #1: It's probably just a hash of the query text (maybe auto-formatted in some way to remove minor, inconsequential differences like whitespace?). Either way, there are likely cases where the "same" query would have a cache miss due to a minor difference in the query text. This is almost certainly irrelevant enough to ignore. Re #2: He said there's a max retry count. After that limit is hit, the query is indeed discarded. Presumably the client would then be notified of this via the pub/sub in the form of a "query failed" message rather than a query result.
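To make the cache-key idea in #1 concrete, here is a small sketch of my own (not the speaker's code): the query text is normalized and hashed, and the digest becomes the Redis key, so trivial differences such as extra whitespace or casing in the "same" query still map to one cache entry. The `qcache:` prefix, the per-tenant scoping, and the normalization rules are assumptions for illustration.

```python
# Sketch: derive a cache key from normalized query text, then look it up in Redis.
import hashlib
import json
import redis

r = redis.Redis()


def query_cache_key(sql: str, tenant_id: str) -> str:
    # Collapse whitespace and lowercase so cosmetically different spellings
    # of the same query produce the same digest (assumed normalization rules).
    normalized = " ".join(sql.lower().split())
    digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
    return f"qcache:{tenant_id}:{digest}"


def get_cached_result(sql: str, tenant_id: str):
    cached = r.get(query_cache_key(sql, tenant_id))
    return json.loads(cached) if cached is not None else None
```

As the reply notes for #2, a worker would give up after the maximum retry count and the client would then be told over pub/sub that the query failed rather than receiving a result.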