Please, never lose your calmness and bullshit-free type of teaching. No music, no intro, no merchandise, no hype over things, just pure calm content.
@alexandrubaginean7369 11 months ago
Hit the like as a thank you and share the content!
@randomforest_dev a year ago
Great stuff! I always like your bite-sized, short, to-the-point, real-world-issues tutorials.
@yuris10101 a year ago
I would add that if you're really working with some kind of data that may be changed by hundreds of users concurrently, you probably want to serialize all the writes to a queue and have workers perform the updates. That's because with either optimistic writes or transactions, users could constantly be getting errors because the resource is stale or locked, which forces them to try and try again and puts unnecessary load on the system. So basically, for reads you just read the database, but for writes those actions (+1) are added to a queue and performed by the backend separately; you can use batching and so on. This way your user actions will be preserved and still influence the outcome.
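A minimal sketch of this queue idea, assuming BullMQ for the queue and node-postgres for the writes; the queue name, table, and payload shape here are invented for illustration:

```ts
import { Queue, Worker } from "bullmq";
import { Pool } from "pg";

const pool = new Pool();
const connection = { host: "localhost", port: 6379 };
const counterQueue = new Queue("counter-writes", { connection });

// Request handlers only enqueue the intent (+1); they never touch the row.
export async function enqueueIncrement(counterId: number, delta = 1) {
  await counterQueue.add("increment", { counterId, delta });
}

// A single worker (concurrency: 1) drains the queue, so the writes are
// fully serialized and can never conflict; batching could be added here.
new Worker(
  "counter-writes",
  async (job) => {
    const { counterId, delta } = job.data as { counterId: number; delta: number };
    await pool.query("UPDATE counts SET value = value + $1 WHERE id = $2", [
      delta,
      counterId,
    ]);
  },
  { connection, concurrency: 1 }
);
```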
@WebDevCody a year ago
Good suggestions
@hamesjetfield0 a year ago
> you probably want to serialize all the writes to a queue
This is essentially what Kafka/SQS/RabbitMQ are when stripped down to a bare minimum. In fact, it's also what your database probably does under the hood too.
@rebelsdeveloper367 a year ago
It's called a transaction.
@yuris10101 a year ago
@@rebelsdeveloper367 nope :)
@ioannischristou2362 11 months ago
@@yuris10101 All modern relational DBMSs implement the ACID principles in their transaction control as efficiently as possible. Also, when a transaction asks to read x, update x, and write x, how do you separate the read of x from the subsequent update and write of x that should be done overnight in a batch or something?
@gbroton a year ago
Understanding the isolation levels is key here. And it's worth mentioning that different database engines can implement the same isolation level in very different ways.
@amalsalim88 6 months ago
I just needed a simple explanation... after 30 minutes on GPT, this helped. Thank you!
@developingWithPaul 11 months ago
It's important to note that count-fixed-1 only works like you would expect in Postgres and not other RDBMSs. This is due to the Postgres transaction isolation setting, which doesn't allow "dirty reads".
@michaelkhalsa a year ago
Use transactional records, and a DB trigger to update summary values. This also gives you an audit trail and point-in-time queries.

Next, use claims for sensitive information, such as an order or purchase order which only one person should be editing at a time, along with a mechanism for one party to release the claim of another. This has the additional benefit of communicating who is doing what. These two approaches have served me well over the decades. Also, only update fields that have actually changed, which proper DB design helps with.
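For illustration, a rough sketch of this "transactional records + trigger" setup in Postgres via node-postgres; the count_events/count_summary schema is an assumption, not the video's code:

```ts
import { Pool } from "pg";

const pool = new Pool();

// Every change is INSERTed as its own row (audit trail + point-in-time
// queries), and the trigger keeps the summary row current inside the
// same transaction as the insert.
await pool.query(`
  CREATE OR REPLACE FUNCTION apply_count_event() RETURNS trigger AS $$
  BEGIN
    UPDATE count_summary
       SET value = value + NEW.delta
     WHERE id = NEW.counter_id;
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER count_event_applied
  AFTER INSERT ON count_events
  FOR EACH ROW EXECUTE FUNCTION apply_count_event();
`);

// Application code only ever appends:
await pool.query(
  "INSERT INTO count_events (counter_id, delta) VALUES ($1, $2)",
  [1, 1]
);
```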
@dandogamer 10 months ago
Was looking for the comment that mentioned transactional records + db trigger ❤
@ultrahash 11 months ago
Maybe your channel is what I've always needed for the questions I get from time to time while working on projects, or maybe while thinking. Thanks for the awesome content and experience sharing.
@maddriven07 a year ago
Yow, this reservation series is really awesome; the progression from a simple reservation to advanced database concepts is amazing! Great content and very smart of you. Thanks.
@Sdirimohamedsalah 5 days ago
Very useful. In the past I added two fields: one with the id of the current user (ownership of the request), the second a timestamp.
@01ghost13 a year ago
There are many more concurrency issues to handle other than lost updates. It would be great to either cover them in the future or push people to learn about them by themselves. Unfortunately, optimistic locking will not save you from phantom reads. I think if anyone wants to become a backend developer, those problems and how to solve them are very important.
@MrDevianceh a year ago
How is this not a solved problem in 2023? If there are best practices for these concurrency issues, what are they? Why are they not so prominent? Do you have any literature on the topic?
@o_glethorpe a year ago
We need to have a new framework every week @@MrDevianceh, we can't have both
@Novascrub a year ago
@@MrDevianceh A good DB platform gives you all of the tools you need to make the tradeoffs necessary for your particular situation (and by situation I mean every single interaction your code has with the DB), so in some sense these problems are solved, but there are some essential complexities that mean we can't ignore these things. And the truth is those tradeoffs are often between technical realities and UX/product concerns, so you don't want to ignore them anyway.

But... if, like most apps, you aren't running so much concurrency that you are stressing your backend, you can often go pretty far with some sane defaults: use transactions; make those transactions as short as you possibly can; don't do other work while you are holding them open; put a limit on every query; access tables in the same order every time; retry nonfatal errors a couple of times.

To dig in, google "concurrency control" and "read isolation" with your database of choice. And this is, IMHO, one of the better corners of Wikipedia: en.wikipedia.org/wiki/Concurrency_control
@lozyodella4178 a year ago
@@MrDevianceh Right? Why is it so "difficult" to find good resources on the matter? I'm trying to learn about concurrency problems with ORMs, but other than what was said in this video (which I already knew), I find nothing. Yet in the comments everyone says it's only the tip of the iceberg. Any good read is appreciated.
@devcaiquearaujo a year ago
@@MrDevianceh Do you all know that it's a complex theme and a single article may not explain all the points? But yes, there are some books. I recommend: "Database Concurrency Control: Methods, Performance, and Analysis", "Concurrency Control and Recovery in Database Systems", and "Transactional Information Systems: Theory, Algorithms, and the Practice of Concurrency Control and Recovery".
@tmanley1985 a year ago
Another way to approach this could be to store a new record for each count instead of updating a single record with a count value. This has its limitations as well, but it skips the entire race-condition scenario because you're always just appending a new record, and the sum of those records is the count. It also preserves history. This could get hairy with the number of records you create, but there are ways to mitigate that as well (e.g., having a count value on the record so that you could delete past records and add that total to some arbitrary record if you didn't care about history).

Another thing to consider: if you're using SELECT FOR UPDATE, be careful how you structure your SQL statements because you can run into deadlocks. And if you're using SELECT FOR UPDATE like a queue (not related to Cody's counting example), you can use SKIP LOCKED so the transaction won't wait for that resource to be freed; it will just move on to the next one.
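A short sketch of that SKIP LOCKED queue pattern with node-postgres; the jobs table and its status column are assumptions:

```ts
import { Pool } from "pg";

const pool = new Pool();

export async function claimNextJob() {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    // SKIP LOCKED: skip rows another worker already holds locked instead of
    // waiting, which avoids both blocking and worker-vs-worker deadlocks.
    const { rows } = await client.query(
      `SELECT id, payload
         FROM jobs
        WHERE status = 'pending'
        ORDER BY id
        LIMIT 1
          FOR UPDATE SKIP LOCKED`
    );
    if (rows.length === 0) {
      await client.query("COMMIT");
      return null; // nothing claimable right now
    }
    await client.query("UPDATE jobs SET status = 'done' WHERE id = $1", [rows[0].id]);
    await client.query("COMMIT");
    return rows[0];
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```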
@safarl45 a year ago
Inserts and reads only!! The most recent insert is what you query to get the current value! That’s the best way to do it.
@devcaiquearaujo a year ago
Your approach makes sense for a counter, but it won't work when you're dealing with blog posts, for example. In that case you need a lock to prevent another user from editing the post at the same time. Another useful example is a ticketing system: you may have users selecting the same seat at the "same time", so you need a lock there too.
@safarl45 a year ago
@@devcaiquearaujo no, it still works
@devcaiquearaujo a year ago
@@safarl45 I was talking about his answer, not yours.
@tmanley1985 a year ago
@@devcaiquearaujo Right, which is why I was specifically referring to the example in the video about counts.
@oSpam a year ago
Great video, I love this new “series” I suppose you could call it. Really helpful and really important too, glad you opened my eyes about this
@TyriqDominguez 11 months ago
I learned about a very similar problem in uni with the readers/writers problem in Operating Systems. Glad to see a real-world problem with it.
@mmmikem a year ago
I love the format of this, you're learning and sharing your process. Nice one, hope to see more of these!
@uzair004 a year ago
8:15 Working in DynamoDB, you still have to bring your own locking mechanism. We have implemented optimistic locks using an updatedAt field as you described. (I must say that implementing this for an existing system was quite some work, i.e., you have to make sure that the updatedAt read from the DB is not altered before the update statement, because we need to compare it with the value in the table.)

Second thing: for inc/dec you'll have to use atomic counters, not the traditional way to increment/decrement (because of multi-node setups or partitions; not sure if that's the correct word to use).

Third thing: I believe transactional writes are also required (which cost double compared with a single read/write).
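Both points sketched with the AWS SDK v3 document client; the table and attribute names are placeholders:

```ts
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, UpdateCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Atomic counter: the increment happens inside DynamoDB itself,
// so there is no read-modify-write window to race in.
await ddb.send(
  new UpdateCommand({
    TableName: "counters",
    Key: { pk: "page#home" },
    UpdateExpression: "ADD #count :one",
    ExpressionAttributeNames: { "#count": "count" }, // "count" is a reserved word
    ExpressionAttributeValues: { ":one": 1 },
  })
);

// Optimistic lock on updatedAt: the write is rejected if someone wrote first.
try {
  await ddb.send(
    new UpdateCommand({
      TableName: "documents",
      Key: { pk: "doc#42" },
      UpdateExpression: "SET #title = :title, updatedAt = :now",
      ConditionExpression: "updatedAt = :seen",
      ExpressionAttributeNames: { "#title": "title" },
      ExpressionAttributeValues: {
        ":title": "new title",
        ":now": Date.now(),
        ":seen": 1700000000000, // the updatedAt value read earlier
      },
    })
  );
} catch (err: any) {
  if (err.name === "ConditionalCheckFailedException") {
    // stale read: refetch and retry, or surface a conflict to the user
  } else {
    throw err;
  }
}
```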
@WebDevCody a year ago
And if you use global tables, you need to make sure, if you want consistency, that all your reads and writes point to the same region. The way we've done counters is to increment and return the updated value to get an auto-incremented id.
@Arzen84 11 months ago
We use a similar approach. When somebody requests data, e.g. user data in our admin site, we send a checkoutstamp alongside the data. If we want to update the user data for some reason, we have to provide the checkoutstamp alongside the updated value(s). If the checkoutstamp is older than the updated_at field on the table itself, we send an error to the user saying that somebody has already updated this record, please refetch the screen. On the FE we have a toast notification showing the errors, but we could potentially retry the update if needed based on the error code sent to the FE.

These other approaches are very interesting. I am not a DB expert either, but I love seeing different solutions to the same problem and their trade-offs. Great video! :)
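A minimal sketch of that checkoutstamp check, assuming Postgres via node-postgres; the table and column names are invented:

```ts
import { Pool } from "pg";

const pool = new Pool();

// `checkoutStamp` is the updated_at value the client received with the data.
export async function updateUserName(id: number, name: string, checkoutStamp: Date) {
  const result = await pool.query(
    `UPDATE users
        SET name = $1, updated_at = now()
      WHERE id = $2
        AND updated_at <= $3`,
    [name, id, checkoutStamp]
  );
  if (result.rowCount === 0) {
    // The row changed after checkout: return an error code so the FE
    // can show its toast and refetch (or retry).
    throw new Error("RECORD_ALREADY_UPDATED");
  }
}
```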
@WebDevCody 11 months ago
Yeah, that's a good approach for conflict resolution: just let the user decide how to retry.
@RiderInHell 10 months ago
Ah! I see what you did there! That seems a hell of a lot like the git concept. You must have the latest version first before pushing! :)
@millouwmills367 7 months ago
I think this only works if the timestamp is generated by one node with a monotonic clock. Otherwise clock skew could cause an update to go unrecognized, which may or may not be critical for you. See the book "Designing Data-Intensive Applications" for details.
@EmperorAlles a year ago
Holy, thanks for this video. I really need this logic for a future Postgres project.
@MrSoloDev 11 months ago
You could simply borrow from event sourcing principles where the database acts as a ledger. With any ledger, you cannot delete or modify any existing records. You can only add. This solves any race condition. For reading the data, you can borrow from CQRS pattern where the sum is read from a different table or database which is asynchronously updated, ideally through event based architecture.
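A compact sketch of that ledger idea via node-postgres (a full CQRS setup would project events into a read table asynchronously; here the read side just derives the value). Names are made up:

```ts
import { Pool } from "pg";

const pool = new Pool();

// Write side: append-only, so concurrent writers never contend on one row.
await pool.query(
  "INSERT INTO count_events (counter_id, delta) VALUES ($1, $2)",
  [1, 1]
);

// Read side: derive the current value, or query a projection table that a
// background consumer keeps up to date from these events.
const { rows } = await pool.query(
  "SELECT COALESCE(SUM(delta), 0) AS value FROM count_events WHERE counter_id = $1",
  [1]
);
console.log(rows[0].value);
```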
@dandogamer 10 months ago
Sounds like a lot of complexity to do something very trivial
@MrSoloDev 10 months ago
@@dandogamer Obviously, the method that I mentioned is only for complex situations. Regardless of the complexity, it still remains a possible option.
@alinassar3735 3 months ago
bless your humbleness king
@rotivanov a year ago
Awesome content as always. Thanks for sharing, Cody!
@suikast420 a year ago
In most cases, an optimistic lock approach is my fav.
@lotw3 a year ago
Looking forward to seeing the videos about DynamoDB!
@josecanciani 11 months ago
Both option 1 and option 2 will lock the row; option 2 is slower because it's two different statements, and there's also the latency between the app and the DB server.
@hellokittyboy a year ago
It's better if you avoid any kind of select-then-update pattern in SQL, IMO. The transactions work, but at the levels where you'd want a correct count, you would see failures due to transaction violations. How do you handle those? You can retry, but that may never complete before the HTTP timeout. The 'correct' way to do this is with raw SQL (and parameterized input): you can do 'update count = count + 1' instead, which avoids selecting a value out and writing it back to the database in the same transaction.
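That single-statement version, with parameterized input, might look like this (node-postgres; the counts table is hypothetical):

```ts
import { Pool } from "pg";

const pool = new Pool();
const countId = 1; // hypothetical id

// The read-modify-write happens inside the database in one statement,
// so there is no window for another request to interleave.
const { rows } = await pool.query(
  "UPDATE counts SET value = value + 1 WHERE id = $1 RETURNING value",
  [countId]
);
console.log("new value:", rows[0].value);
```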
@ioannischristou2362 11 months ago
That's why we have the concept of a Xaction (transaction) that "pools together" several SQL statements that must all be correctly carried out, or else the database returns to the original state as if no statement in the transaction were ever attempted. At the application level, the DBMS always provides a "start transaction" demarcation and a "commit" or "abort" command at the end of a series of statements (plus checkpoints and rollbacks). Add to that an appropriate isolation level for each transaction, and you're guaranteed that even if some user requests fail (under heavy loads), the ones that succeed always leave the DB in a consistent state (that's what ACID is mostly about).
@gadgetboyplaysmc a year ago
I encountered this a crap ton at my job a few weeks ago. Examples are reordering, order numbers (checking the max and incrementing by 1), reservations, and basically operations where you don't do atomic operations and have to involve the application layer for a write. Kinda had to figure it all out on my own, though. I wish you had talked about all this sooner, haha. This is really useful.
@gokukakarot6323 10 months ago
One thing to remember is that no matter how much parallelism and concurrency you have, at some point everything has to converge. If the data structures at the convergence point are there to ensure atomicity and durability, they will always be the limiting factor, which is where you see this serialisation error. Databases have MVCC, SST trees, and WAL logs to control and resolve these errors as much as possible, but at some point errors need to be handled. Things inside the computer fail all the time, but the CPU is so freaking fast we don't notice it, and sometimes when we do, the system or process hangs/crashes.
@becomingweleteeyesus6735 3 months ago
Could you please share the solution you chose to solve the problem?
@st-jn2gk a year ago
Also, I would absolutely love to hear more about Dynamo. I always hear about it in passing, but as a person who just throws PostgreSQL everywhere, I'd love to hear you talk about what DynamoDB kind of is, what it would be useful for, and how you would use it for projects (how would you model real-world data on something that feels like a KV store, as you said).
@WebDevCody a year ago
I’d personally avoid dynamo unless you have a good reason to use it. It’s the biggest regret on my current project imo
@driden1987 a year ago
@@WebDevCody Why is that?
@WebDevCody a year ago
@@driden1987 Most data is relational, so usually you'd want SQL. Dynamo is great when you need to store infinitely large data without worrying about scaling your database. It can also save you a lot of money up front because you don't need a dedicated database running.

At some point, though, you'll start needing to denormalize data so that you can get the data you need and display it in the UI without manually fetching and joining tons of records. You can add GSIs, but then your write costs end up being multiplied, because each GSI basically costs an entire WCU on top of the update to the original record. Additionally, when you start changing schema requirements on your data, such as making a field required, looping through your data and migrating it all is very slow. So if you don't have data integrity, you end up needing different approaches to prevent your API from finding bad data and patching it.

Idk, maybe we're just using Dynamo incorrectly, but I'd always start with SQL on a new project and utilize Dynamo if you find query patterns that would help your system out.
@driden1987 a year ago
@WebDevCody Have you looked at the single-table design pattern championed by AWS? It's kind of a lot at the beginning, but you end up being able to model relational data and still get all the benefits of DynamoDB. I agree, though, that designing a DynamoDB table wrongly is setting yourself up for failure in the long run.
@1Caja a year ago
@@driden1987 If it's indeed a skill issue, then I have it as well, and we used the one-table pattern. I will stick to any SQL DBMS for now and maybe find some motivation later to learn about how and why DynamoDB is/should be used.
@mormantu8561 a year ago
Hi Cody, you forgot to put the links of the articles in the description. :)
@JohnWasinger 11 months ago
6:10 So why was countTest 110 instead of 100 in the count-fixed-3 test?
@WebDevCody 11 months ago
Probably forgot to reset the column from the previous test which only added 10 items?
@JohnWasinger 11 months ago
@@WebDevCody I thought all the tests performed 100 individual queries for each test script.
@WebDevCody 11 months ago
@@JohnWasinger oh I see what you mean. Yeah so I’m just pointing out that you can write code which will know if the record has updated, it’s not really a fix until you add retry logic
@DiegoAHdezF 2 months ago
What framework are you using? I'm interested.
@JuMi896 a year ago
Is optimistic update the same as OCC (optimistic concurrency control)? I dunno about the Node.js world, but there's a chance that some ORMs or query builders already support it out of the box. In the Java world, things like JPA or jOOQ already have it built in, so you don't have to do the manual work of checking the version; it's done by your data-access library.
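A hand-rolled version-column check with Drizzle might look like the sketch below; the schema is invented, and this is just one manual way to express the JPA @Version idea (whether any Node ORM automates it is not confirmed here):

```ts
import { pgTable, serial, integer } from "drizzle-orm/pg-core";
import { and, eq } from "drizzle-orm";
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";

const counts = pgTable("counts", {
  id: serial("id").primaryKey(),
  value: integer("value").notNull(),
  version: integer("version").notNull().default(0),
});

const db = drizzle(new Pool());

// `seen` is the row as it was read earlier in the request.
export async function incrementWithVersion(
  id: number,
  seen: { value: number; version: number }
) {
  const updated = await db
    .update(counts)
    .set({ value: seen.value + 1, version: seen.version + 1 })
    .where(and(eq(counts.id, id), eq(counts.version, seen.version)))
    .returning({ id: counts.id });
  // Zero rows back means another writer bumped the version first;
  // the caller should refetch and retry.
  return updated.length > 0;
}
```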
@Dezdichado1000 10 months ago
What's the VS Code theme, Cody?
@WebDevCody 10 months ago
Bearded theme stained blue
@a_m_dev 4 months ago
On Fix 4: if user A's entire query and update happens within a transaction, how can user B update the record between the time user A reads and tries to write to the same record? Transactions come with a locking mechanism on a record, right? So that can't happen in practice?
@lakhiakhter5659 3 months ago
Even when I tried a transaction for the insertion and tested concurrency, I found a problem: the app crashed. Can you help me?
@DhruvRed a year ago
Locking is the approach I use, since the number of concurrent users doing the update is not large enough to affect performance.
@suikast420 a year ago
4:48 Every relational DB with ACID has a transaction isolation setting. That's the relevant stuff.
@tonywang7933 10 months ago
Sorry if this was asked before: what's that canvas tool?
@orcdev a year ago
Drizzle is really cool!
@alirezvani9149 11 months ago
Not paying attention to race conditions is one of the most common mistakes I see in other people's code, and if you don't know where to look, finding the root of the problem can be a nightmare. From my experience, a pessimistic lock (locking the row for update) is the best approach most of the time.
@UP209D 11 months ago
Versioning or timestamp (updated_at) checking is always the most reliable; locking behavior is not good for scaling and also carries a risk of deadlock.
@thefreemarketdev 11 months ago
It would be nice if optimistic writes could either be queued on exception or retried within a random short interval until success to avoid a rollback.
@oszi7058 a year ago
In the first example I would just increment it with SQL so the database handles this case.
@dilp79 11 months ago
What if we use a buffer or queue: all updates go to the buffer, and then a special procedure executes them one by one?
@EducationAllTheTime123 2 months ago
Bro, I am really worried about this issue. What should the pool.max value be for the SQL connection pool in a project where 15,000 people log in and spend 20 minutes every day? If pool.max is high but there isn't that much usage, could it cause terrible things for the server?
@sachinsindhu4182 a year ago
You add retries with optimistic locks so that you can handle concurrent-update exceptions.
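For example, a small retry wrapper around an optimistic update; `tryOptimisticUpdate` is a hypothetical function that returns false when the version/timestamp check fails:

```ts
declare function tryOptimisticUpdate(): Promise<boolean>;

async function updateWithRetries(attempts = 3): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    if (await tryOptimisticUpdate()) return; // success
    // Small random sleep so colliding clients don't retry in lockstep.
    await new Promise((resolve) => setTimeout(resolve, Math.random() * 100));
  }
  throw new Error("update failed after retries"); // e.g., surface a 409
}
```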
@Dsouza10082 a year ago
In this case, I don't feel very comfortable using an ORM, due to the internal connection handling. What I did in a NestJS project was write my own connection-handling method; depending on how you create your entities, the object mapping will be transparent. With my personal library I'm able to manage the number of connections using some PG resources like:

select max_conn, used, res_for_super, max_conn - used - res_for_super res_for_normal
from (select count(*) used from pg_stat_activity) t1,
     (select setting::int res_for_super from pg_settings where name=$$superuser_reserved_connections$$) t2,
     (select setting::int max_conn from pg_settings where name=$$max_connections$$) t3

returning a PostgresUsedConnections. With this, the platform is resilient and manages the number of connections in the pool by default. This approach handles thousands of inserts/reads per second; in the real world I've reached at least 600 connections per second so far, consuming a very low amount of memory/processing.
@eNtrozx a year ago
Does "FOR UPDATE" lock only the specified record or the entire table? Also, I think you should use raw SQL for these types of videos so people who don't use the monthly ORM can follow along more easily.
@WebDevCody a year ago
Yes, it should put a lock on that specific row, so that any future reads or writes on that same row will wait.
@GawblyOG 2 months ago
In MySQL, if the columns in the WHERE clause are not indexed, it will lock all the rows it scanned until it found the row in question. Something to keep in mind.
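The locked flow under discussion, sketched with node-postgres (the table name is assumed; note the index caveat above):

```ts
import { Pool } from "pg";

const pool = new Pool();

export async function incrementLocked(id: number) {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    // FOR UPDATE locks this row until COMMIT; other transactions that
    // read the same row FOR UPDATE (or write it) will wait here.
    const { rows } = await client.query(
      "SELECT value FROM counts WHERE id = $1 FOR UPDATE",
      [id]
    );
    await client.query("UPDATE counts SET value = $1 WHERE id = $2", [
      rows[0].value + 1,
      id,
    ]);
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```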
@SeibertSwirl a year ago
Great job love❤
@dadas7852 6 months ago
Can you share a code sample for this?
@bobobobo-ki2fw 11 months ago
Thank you for your hard work. Can you please tell me what diagramming tool you're using? It looks so pretty.
@irvanicsmoke1132 10 months ago
Guys, any idea how to use an optimistic lock across more than one table? The case is that we have a webhook for a 3rd party to notify us of something, but they often hit the webhook multiple times at once. We've tried using an optimistic lock for one table and it's OK, but not with the other tables.
@mohamedenn2096 a year ago
Question Cody, what tool do you use to select?
@WebDevCody a year ago
The built-in screenshot tool in macOS.
@kingbling1915 11 months ago
Could you make a video on how to make that count table?
@yousafwazir3167 a year ago
Do some LeetCode/HackerRank question videos, like going through one. I would love to see you tackle a few.
@olivareslifshitzdanielstan7047 11 months ago
Does anyone know what theme he's using?
@WebDevCody 11 months ago
Bearded theme stained blue
@olivareslifshitzdanielstan7047 11 months ago
@@WebDevCody Thanks!
@Kaltste1n a year ago
What table library are you using for updating it inside the row? 😁
@WebDevCody a year ago
Drizzle orm
@bobobobo-ki2fw 11 months ago
It's sad that the people doing real dev videos don't have the millions of subs that clickbait, alleged-scammer channels like "sonny sangha" have: a bare-bones front end with the logo of a big brand, and a backend that doesn't use AWS or Azure or Google Cloud; the guy is just a gimmick. This is a real channel that helps you at your job. (I understand it's not practical to do most industry-standard architecture, but he is doing 10-hour code-alongs and charging hundreds of dollars for courses.)
@CHOLIO038 11 months ago
What is the name of this theme?
@GulzarAhmed7 a year ago
you got a subscriber
@astb01 9 months ago
As a tech lead and someone who also teaches, I'm glad someone else doesn't discourage feature branches, as it all comes down to what your client dictates and how quickly you need to churn out deployments. Personally, I'm not a fan of trunk-based development because it requires discipline, which many don't have, and also, as Cody says, feature toggles, which just clog up the code base with a whole heap of "plaster" changes.
@st-jn2gk a year ago
I'm confused, why did you need different bash scripts for the different tests? Wouldn't the first bash script have sufficed?
@WebDevCody a year ago
Because then I’d have to modify the url before each test and I didn’t want to accidentally run the test and forget to update the url.
@samirneupane7563 10 months ago
Great video.
@robertsibanda7737 9 months ago
Great videos!
@kumaravelg2348 a year ago
Can you please share the test shell script files?
@thechrisgate a year ago
Where are the article links?
@WebDevCody a year ago
Sorry, just uploaded them!
@abdelrahmanmostafa9489 a month ago
You only talked about SQL; what about NoSQL databases like Mongo?
@ioannischristou2362 11 months ago
To anyone interested in the subject, I would recommend taking an (online?) undergrad-level course on relational database management systems, where they will definitely talk about concurrency control in DBMSs: both pessimistic protocols (involving locking, such as 2PL etc.) and optimistic protocols (lock-free) are discussed. ALL these issues (lost updates, phantom reads, the Black Friday problem, etc.) have been recognized since the '70s and have been fully resolved in the world of RDBMSs; in fact, most of these problems are discussed in the more general context of parallel/distributed computing.

In the context of programming a web app that is meant to serve many concurrent users and is actually subject to possible race conditions, where the same data item can be accessed by more than one concurrent thread and modified by at least one, an appropriate combination of versioning and transaction isolation levels set in the O/RM you're using (usually Hibernate for Java-based server apps) will be sufficient. For more "exotic" databases that run on clusters (sharding etc.), CAP-theorem results from distributed computing apply...
@yasser2768 11 months ago
That's why we use distributed locks
@agent_ly a year ago
Why did you choose to show this using a Next.js app?
@WebDevCody a year ago
Because that's what I mainly use on this channel these days. I trust you are smart enough to understand the core concepts I'm trying to discuss 😉
@AlexandruVoda a year ago
While the solutions presented are fine, I think the approach is wrong. You are doing three things in order: getting some data from the database, computing some new data using the data you got, and putting the new data in the database.

The first thing to do is ask whether you really need to get the data and compute on it in the first place. Maybe you don't. And in this case, incrementing a count, the most scalable way is to not do that at all. As suggested by others, just insert a row for each event and then use count when querying. Even better, if only the count matters, have a recurring job that compresses past data into a sum, with the compression happening inside a transaction, obviously.

Then, if you decide that you do need to compute on existing data, you need to ask what kind of computation is involved. Is it something that depends only on the data retrieved from the database, or does it depend on something else? Does it depend on user input? How long does the computation take? What resources does the computation use? Depending on those answers, you may or may not want to push the computation close to the database by using a stored procedure. You may or may not want to lock. You may or may not want to serialize the operations. And you may or may not want to just split the work into different parts.
@JakeAndDaddy 11 months ago
Sorry, it's wrong. You need to store the version number (it used to be a timestamp) from when the user first sees the data. It's then passed all the way through with the request to ensure the version the user saw is the version being updated. You shouldn't be getting the version number just before updating; it only works in this scenario because you're updating a count.
@macarron6041 10 months ago
Oh yeah, DDB!
a year ago
Great video. I would use a message broker for that scenario.
@IvanRandomDude a year ago
Race-condition bugs can destroy your company. Read about Flexcoin, for example, and how having "engineers" who don't understand databases well can destroy a multi-million-dollar business in a matter of minutes.
@rotivanov a year ago
I wish Prisma supported "row locking".
@curious.lad.3691 3 days ago
Every database will handle this; databases are concurrent systems and have latches that prevent writing to a relation at the same time.
@UP209D 11 months ago
Yeah, option 3 is still the industry standard, and the client should implement a retry mechanism, like 5 retries before throwing out something like "server is busy, try again", bla bla bla.
@moosegoose1282 9 months ago
give us the repo bro!
@Hippo0o 11 months ago
just do inserts in these scenarios
@christianrazvan 10 months ago
Well, you should use transactions...
@rmrz2225 a year ago
ACID
@willzin-da-esfiha a year ago
Using MongoDB you can just do an update using the $inc operator, which is an atomic operation.
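With the official Node driver that looks roughly like this (connection string and names are placeholders):

```ts
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
await client.connect();

// $inc is applied server-side as a single atomic operation on the
// document, so concurrent increments can't lose updates.
await client
  .db("app")
  .collection("counts")
  .updateOne({ slug: "home" }, { $inc: { value: 1 } }, { upsert: true });
```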
@coffeeintocode a year ago
You should just learn how to use mutexes rather than rely on an ORM. There are way too many other problems to worry about than a simple database overwrite.
@ioannischristou2362 11 months ago
A long time ago, I wrote a "document management" app for a VERY BIG public organization that had a lot of paperwork to digitize. At the time I wasn't yet familiar with O/RMs, so I used my own (well-tested) concurrency-control mechanisms (including shared-read/exclusive-write, FIFO locks, etc.) for my web app, which would actually keep some state in the web app as well in order to increase performance, and only go to the DBMS to store (or update) new document information.

The approach worked (passed all stress tests etc.) and we were ready to submit the code to the client, when the project manager decided to do a code review: upon seeing the use of classes that had methods with the "synchronized" keyword in them, he immediately called a whole-team meeting and (correctly) pointed out that if he sent this code to the client, the client would NEVER give us another project!!! ALL concurrency control had to be delegated to Hibernate (the O/RM used by the client), and eventually to the underlying Oracle RDBMS.

When we rewrote the backend to use Hibernate (and even its HQL language for many of the queries), the web app was still passing all stress tests, and was on average 15-20% faster. PLUS, there was not a single line in the application about concurrency control or anything like that. Just model entity classes, repos, controllers, and service classes. THAT's how big organizations write code, and it's for a reason.
@oladiransegunsolomon7750 11 months ago
This is wrong: you're not relying on the ORM, you're relying on the database's concurrency control.
@zrbecker 11 months ago
Even with a mutex you can still have race conditions at the database level. Especially if your application scales horizontally as you'll need a mutex that can be used between all instances of your application. Which is essentially what the database locking provides.
@oladiransegunsolomon7750 11 months ago
@@zrbecker this is also correct.
@wildebeest1454 11 months ago
This type of consistency is a well-researched topic with existing solutions. I suggest looking it up on Google Scholar and learning about them, because they're not really that hard and they work for anything from a single DB server to a globally distributed one.