USENIX ATC '13 - TAO: Facebook’s Distributed Data Store for the Social Graph

27,208 views


Comments: 11
@gsb22 3 years ago
This is a GOLDMINE. Amazed at how the system is laid out. Respect to all the engineers involved.
@dilipkumar2k6 4 years ago
Awesome design and presentation. Very helpful. Thanks for sharing.
@yogita_garud 2 years ago
This is awesome! Thanks for sharing. Great talk.
@bakkks 2 years ago
How does a write-through cache and/or async replication help with "read your own writes"/write timeliness? My understanding is that, in order to see your own writes (given replication/sync lag), requests from the same writer have to be served by the same web server -> cache -> DB chain.
@grandhirahul 2 years ago
It helps because reads are served immediately from the cache. Also, the web servers that write to the follower cache are in the same region/ecosystem.
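The read-your-writes behavior discussed above can be sketched in a few lines. This is a hypothetical minimal model, not TAO's actual code (class and key names are invented): a write-through follower cache updates its own entry synchronously on the write path, so the next read from the same cache sees the write without waiting for replication.

```python
# Hypothetical sketch: write-through follower cache gives read-your-writes
# because the writer's subsequent reads hit the same cache it just updated.

class Database:
    """Stand-in for the backing store (simplified to a dict)."""
    def __init__(self):
        self.rows = {}

    def write(self, key, value):
        self.rows[key] = value

    def read(self, key):
        return self.rows.get(key)


class FollowerCache:
    """Write-through: the cache entry is updated before the write returns,
    so a later read by the same client never sees its own write as stale."""
    def __init__(self, db):
        self.db = db
        self.entries = {}

    def write(self, key, value):
        self.db.write(key, value)   # forward toward leader/DB (simplified)
        self.entries[key] = value   # update the local entry synchronously

    def read(self, key):
        if key not in self.entries:             # miss: fill from the DB
            self.entries[key] = self.db.read(key)
        return self.entries[key]


db = Database()
cache = FollowerCache(db)
cache.write("alice:status", "hello")
assert cache.read("alice:status") == "hello"  # served from the same cache
```

This only holds while the same client keeps hitting the same follower cache, which is why sticky routing comes up in the next comment.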
@panhejia 1 year ago
Due to load-balancer sticky sessions, the same writer client will use the same follower cache. A write request returns as soon as the client region's cache is populated, so the writer sees their own write almost immediately. But I also don't fully understand the async replication. If client request -> local follower cache update -> local leader cache update -> local DB write completes, then there is no need to wait for local leader cache update -> master region leader cache update -> master region DB update -> async local-region DB replication to happen. Is the async DB replication done only for consistency purposes? If so, could we just use CRDTs?
@bakkks 2 years ago
How does the cache leader help reduce a hotspot?
@panhejia 1 year ago
I believe partitioning the follower cache helps with the hotspot issue because you're dividing the users who would otherwise converge on the same hot spot across different cache partitions. As for the leader cache, it's more about the thundering-herd problem. Imagine us-west-2 goes offline for 5 minutes, then comes back online: without a leader cache, all the west-coast users would hit multiple follower caches, generating a huge wave of duplicate requests to the TAO DBs. A leader cache (with a shorter data-eviction time) adds another layer that dedupes those duplicate requests, protecting the TAO DBs. Happy to discuss more.
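The deduplication effect described above can be illustrated with a toy model. This is a hypothetical sketch (names are invented, concurrency is simulated as a batch), not TAO's implementation: many followers refilling the same hot key after an outage collapse into a single DB fetch at the leader.

```python
# Hypothetical sketch: a leader cache coalesces a burst of identical misses
# so a post-outage refill wave causes one DB fetch instead of thousands.

class LeaderCache:
    def __init__(self, fetch_from_db):
        self.fetch_from_db = fetch_from_db  # callable: key -> value
        self.entries = {}
        self.db_fetches = 0                 # counts actual DB round trips

    def get_batch(self, keys):
        """Serve a burst of requests that arrive 'at the same time'."""
        results = []
        for key in keys:
            if key not in self.entries:     # first miss for this key
                self.db_fetches += 1
                self.entries[key] = self.fetch_from_db(key)
            results.append(self.entries[key])  # later misses are deduped
        return results


leader = LeaderCache(lambda k: f"value-of-{k}")
# 1000 followers refill the same hot key after a region comes back online:
results = leader.get_batch(["hot-key"] * 1000)
assert len(results) == 1000
assert leader.db_fetches == 1   # only one DB round trip for the whole wave
```

In a real system the coalescing happens across concurrent in-flight requests (later requests wait on the first fetch), but the effect on DB load is the same as in this single-threaded simulation.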
@ingenious-records 3 years ago
Simple and Elegant
@nikon800 4 months ago
Well this is a dense talk...
@HandBanana333 3 years ago
Meep