Awesome video. I'd add another concern: concurrency. The code you had with steps 1, 2, 3 is not atomic, so under high load (or even normal load) multiple threads/user actions will all see "no item in cache", all fetch from the DB, and then all proceed to set their own version of the item in the cache. I could go on about ways to mitigate that... might be worth doing a video together for the Loosely Coupled Show? lol
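For illustration, here is a minimal C# sketch of those three non-atomic steps using Microsoft.Extensions.Caching.Memory. The type and method names (Product, GetProductAsync, LoadProductFromDbAsync) are illustrative assumptions, not code from the video:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public record Product(int Id, string Name);

public class ProductReader
{
    private readonly IMemoryCache _cache;
    public ProductReader(IMemoryCache cache) => _cache = cache;

    public async Task<Product> GetProductAsync(int id)
    {
        // Step 1: check the cache.
        if (_cache.TryGetValue($"product:{id}", out Product cached))
            return cached;

        // Step 2: cache miss, so go to the database. Under load, many callers
        // can reach this line at the same time and issue the same query.
        var product = await LoadProductFromDbAsync(id);

        // Step 3: each of those callers now writes its own copy back to the cache.
        _cache.Set($"product:{id}", product, TimeSpan.FromMinutes(5));
        return product;
    }

    // Placeholder for the real data access.
    private Task<Product> LoadProductFromDbAsync(int id) =>
        Task.FromResult(new Product(id, "from-db"));
}
```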
@CodeOpinion · 4 years ago
Yes, good point about concurrency. I think it depends on if that's a concern or not. It would depend on what your load is like and how quickly you can fetch and save to the cache versus blocking.
@uncommonbg · 4 years ago
Very good point, that's definitely something to consider, especially if the object that's going to be cached is rather large and you can't afford to cache it twice. I've found that LazyCache deals with this issue in a very good way, and its API is very friendly.
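As a sketch of the same read path with LazyCache's IAppCache: GetOrAddAsync evaluates the factory only once per key, so concurrent callers for the same key result in a single database load. Names other than the LazyCache API (Product, LoadProductFromDbAsync) are assumptions:

```csharp
using System.Threading.Tasks;
using LazyCache;

public record Product(int Id, string Name);

public class CachedProductReader
{
    private readonly IAppCache _cache; // e.g. new CachingService()
    public CachedProductReader(IAppCache cache) => _cache = cache;

    // Check-then-fetch-then-set collapses into one call; duplicate factory
    // executions for the same key are prevented by LazyCache internally.
    public Task<Product> GetProductAsync(int id) =>
        _cache.GetOrAddAsync($"product:{id}", () => LoadProductFromDbAsync(id));

    private Task<Product> LoadProductFromDbAsync(int id) =>
        Task.FromResult(new Product(id, "from-db")); // placeholder for real data access
}
```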
@retn1122 · 2 years ago
The reasoning behind "the DB will get overloaded if the cache is unavailable" isn't clear. Won't checking the DB then the cache, versus checking the cache then the DB, result in exactly the same DB trip if the cache isn't available?
@ryanman7 · 4 years ago
RE: Concurrency. I worked on a long-running system that used a cache for performance. For our requirements we had to use data that existed at the time the process started. If we were to "Mix and match" data from minute 0 and minute 30, the results would be disastrous. We also didn't want concurrent runs to collide with one another, for obvious reasons. Because of the volume of data we were using we also had to batch items and work through them iteratively, so invalidation was generally side-loaded with a timeout fallback. For this you need to ensure you're using a cohesive surrogate keying strategy, whether that's hashing keys based on InstanceID or storing cached objects in a collection beneath a purely-surrogate key. Really like that you open with "Don't" because everyone wants to throw Redis in everything as a substitute for efficient queries -_-
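A hypothetical sketch of that surrogate/instance keying idea: every cache key is prefixed with the run's id, so a long-running process only reads the snapshot it loaded at minute 0 and concurrent runs never collide. All names here are assumptions, not taken from the commenter's actual system:

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

public class RunScopedCache
{
    private readonly IMemoryCache _cache;
    private readonly Guid _instanceId; // surrogate key for this run

    public RunScopedCache(IMemoryCache cache, Guid instanceId)
    {
        _cache = cache;
        _instanceId = instanceId;
    }

    // Compose every key beneath the run's surrogate key.
    private string Key(string logicalKey) => $"{_instanceId}:{logicalKey}";

    // The timeout acts as the fallback invalidation mentioned above.
    public void Put<T>(string logicalKey, T value, TimeSpan timeout) =>
        _cache.Set(Key(logicalKey), value, timeout);

    public bool TryGet<T>(string logicalKey, out T value) =>
        _cache.TryGetValue(Key(logicalKey), out value);
}
```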
@CodeOpinion · 4 years ago
Thanks for the comment! Really interesting situation. As always, context is king!
@c4m4l340 · 3 years ago
Hi, I just discovered your videos and they are great. No bullshit, straight to the point, with examples. Great job. What do you recommend if you're using microservices and need to store some partial data from another microservice? Slow or very slow changing data, e.g. the Billing microservice needs some of your user information (let's imagine it's relevant) to create the bill, and the Notification microservice also needs your user information to send you notification emails. Should each microservice keep a local cache? When should it be updated: on service startup, then kept current using events? And how should that data be fetched (microservice communication)?
@CodeOpinion · 3 years ago
Local cache. It could be updated by event-carried state transfer. It all depends on how stale that data can be.
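A minimal sketch of event-carried state transfer as suggested here: the Billing service keeps its own local copy of the user fields it needs and updates it whenever the user service publishes a change event. The event, store, and handler names are illustrative assumptions:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public record UserContactChanged(string UserId, string Email, string BillingAddress);

public class LocalUserCache
{
    // Local, service-owned copy; could equally be a table in Billing's own database.
    private readonly ConcurrentDictionary<string, UserContactChanged> _users = new();

    public void Apply(UserContactChanged evt) => _users[evt.UserId] = evt;

    public UserContactChanged? Get(string userId) =>
        _users.TryGetValue(userId, out var user) ? user : null;
}

// Handler wired to the message broker subscription (broker specifics omitted).
public class UserContactChangedHandler
{
    private readonly LocalUserCache _cache;
    public UserContactChangedHandler(LocalUserCache cache) => _cache = cache;

    public Task Handle(UserContactChanged evt)
    {
        _cache.Apply(evt); // staleness is bounded by event delivery latency
        return Task.CompletedTask;
    }
}
```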
@arunsatyarth9097 · 2 years ago
0:45 This is so wrong. Write-through means that the application only writes to the cache; it's the cache which writes to the database.
@CodeOpinion · 2 years ago
That's write-back.
@arunsatyarth9097 · 2 years ago
@@CodeOpinion Write-back is when data is only written to the cache, and the cache writes it to the DB at a later time, not immediately.
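Whichever term one prefers, the practical distinction in play is when the backing store gets written: synchronously as part of the write (write-through) or deferred until later (write-back / write-behind). A rough sketch of the two behaviours, with all type and method names assumed:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public interface IBackingStore { Task SaveAsync(string key, object value); }

public class WriteThroughCache
{
    private readonly ConcurrentDictionary<string, object> _cache = new();
    private readonly IBackingStore _store;
    public WriteThroughCache(IBackingStore store) => _store = store;

    public async Task WriteAsync(string key, object value)
    {
        _cache[key] = value;
        await _store.SaveAsync(key, value); // store is updated before the write completes
    }
}

public class WriteBackCache
{
    private readonly ConcurrentDictionary<string, object> _cache = new();
    private readonly ConcurrentQueue<(string Key, object Value)> _pending = new();
    private readonly IBackingStore _store;
    public WriteBackCache(IBackingStore store) => _store = store;

    public void Write(string key, object value)
    {
        _cache[key] = value;
        _pending.Enqueue((key, value)); // store write is deferred
    }

    public async Task FlushAsync() // runs later, e.g. on a timer
    {
        while (_pending.TryDequeue(out var item))
            await _store.SaveAsync(item.Key, item.Value);
    }
}
```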
@lebza_teza · 2 years ago
Hi, I just joined your channel and I can't find the source code for this specific video. I can find the LooselyCoupledMonolith project, but it doesn't include the cache invalidation code.
@CodeOpinion · 2 years ago
I think this was before I started creating various repos and opened up a membership.
@shreyasjejurkar1233 · 4 years ago
Events are a quite useful pattern for cache invalidation, and they work pretty well!
@CodeOpinion · 4 years ago
They are! However, if you don't have a centralized place in code that handles state changes, it can be incredibly hard. I used the example of using EF, but preferably it would be at the command level where you're publishing events. And if you don't have any infrastructure around messaging, that's another hurdle. But yes, events FTW!
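A sketch of that shape: the command handler that changes state publishes an event, and a separate handler evicts the related cache entry. The command/event names and the IEventPublisher messaging abstraction are hypothetical, stand-ins for whatever messaging infrastructure is in place:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public interface IEventPublisher { Task Publish<T>(T @event); }

public record ProductPriceChanged(int ProductId);

public class ChangeProductPriceHandler
{
    private readonly IEventPublisher _publisher;
    public ChangeProductPriceHandler(IEventPublisher publisher) => _publisher = publisher;

    public async Task Handle(int productId, decimal newPrice)
    {
        // ... persist the change (e.g. via EF) ...
        await _publisher.Publish(new ProductPriceChanged(productId));
    }
}

public class ProductCacheInvalidator
{
    private readonly IMemoryCache _cache;
    public ProductCacheInvalidator(IMemoryCache cache) => _cache = cache;

    public Task Handle(ProductPriceChanged evt)
    {
        _cache.Remove($"product:{evt.ProductId}"); // evict only the affected entry
        return Task.CompletedTask;
    }
}
```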
@shreyasjejurkar1233 · 4 years ago
@@CodeOpinion Even if there are multiple places where state changes, that wouldn't be a problem for events. We just fire events when state changes, and the event handler (it can live anywhere) removes the entries from the cache! What I like about this decoupling is that we don't have to touch the code where the state actually changes in order to get our new event handler wired up!
@CodeOpinion · 4 years ago
@@shreyasjejurkar1233 Agree 100%. Let me re-phrase: the problem is when other places are causing data changes and you aren't even aware you need to publish an event, e.g. another application, a stored procedure (eeek!), etc. The simplest example of this, with a relational database, is someone writing an update statement directly to fix some data. All bad ideas that shouldn't happen in the first place.
@shreyasjejurkar1233 · 4 years ago
@@CodeOpinion Yeah, agreed. We need to make sure that whenever we change some state, we fire an appropriate event with the appropriate data so that we can clear the cache for that data only! 😇
@bdemir · 4 years ago
Your video is very nice. How can I access the source code for the project? Thank you.
@CodeOpinion · 4 years ago
I don't believe I have any code up on GitHub from this video, but I've mostly used this project. There are many different branches. github.com/dcomartin/LooselyCoupledMonolith