MongoDB vs. PostgreSQL: Performance & Functionality

35,943 views

Anton Putra

1 day ago

Comments: 269
@AntonPutra 1 month ago
🔴 To support my channel, I'd like to offer Mentorship/On-the-Job Support/Consulting (me@antonputra.com)
@LotnyLotnik 1 month ago
I think you might have an error in the MongoDB configuration. MongoDB also writes every write operation to disk for data consistency. Given that we only see spikes, you may not have provided proper paths for all writable data. MongoDB writes and keeps data in memory, yes, but to preserve consistency it writes to the journal, and by default the journal is configured on a different path than the actual MongoDB data! Please make sure you put the journal on the same external drive to check the write usage. It may be that you sped up Mongo by letting it use two separate drives for all operations!
@svetlinzarev3453 1 month ago
The Postgres test is not taking advantage of the GIN indexes. Each index in PG supports only specific "operator classes". GIN's jsonb_ops supports "?", "?&", "?|" and "@>", but not "->". So when filtering, you should use "@>" instead. Also, instead of the default jsonb_ops you can create the index with "jsonb_path_ops", which is smaller and faster but supports only "@>".
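For illustration, a minimal sketch of what that could look like (the products table and its data column are hypothetical, not the benchmark's actual schema):

```sql
-- Hypothetical table storing documents in a jsonb column.
CREATE TABLE products (
    id   bigserial PRIMARY KEY,
    data jsonb NOT NULL
);

-- jsonb_path_ops builds a smaller, faster GIN index, but it
-- mainly accelerates the containment operator @>.
CREATE INDEX products_data_path_idx
    ON products USING gin (data jsonb_path_ops);

-- Can use the index: containment check against a concrete value.
SELECT id FROM products WHERE data @> '{"category": "shoes"}';

-- Cannot use the GIN index: "->" is not in the operator class.
SELECT id FROM products WHERE data -> 'category' = '"shoes"';
```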
@luca4479 1 month ago
Create a pull request
@pytorche6206 1 month ago
Interesting, but how would you rewrite the query to use the index? From what I read, "@>" tests whether a document contains some path/values. Here the filtering is done on a price lower than a given value, not on having a specific value. Maybe the product's price should be externalized into its own column and indexed... but then every update would need to touch both columns. Sorry if the question is stupid; I know close to zero about the document/JSON features of PostgreSQL.
@svetlinzarev3453 1 month ago
@pytorche6206 You cannot with a GIN index :) But you can create a BTREE index for that specific property to use with the "->" operator. The issue with BTREE indexes is that you cannot index the whole JSON blob with them; you have to create a separate index for each property. You also have to use the exact expression from the index definition when filtering, otherwise the DB will not use the index.
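A sketch of such an expression index, assuming the same hypothetical products table and a numeric price property:

```sql
-- BTREE expression index on a single property of the document.
-- The cast to numeric is what makes range predicates indexable.
CREATE INDEX products_price_idx
    ON products (((data ->> 'price')::numeric));

-- The filter must repeat the indexed expression exactly,
-- otherwise the planner will not use the index.
SELECT id FROM products
WHERE (data ->> 'price')::numeric < 100;
```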
@ojonathan 1 month ago
@svetlinzarev3453 You can, with jsonb_path_query, which uses the GIN index. It was not possible in the past, but now we have a lot of new jsonb functions.
@professortrog7742 1 month ago
No sane DBA would model the data such that the price field is inside a JSON blob.
@zuzelstein 1 month ago
In this test the databases work in different durability modes: 1. MongoDB buffers writes in RAM and flushes them to disk every 100ms by default. 2. On the graph, PostgreSQL IOPS are equal to the RPS, which makes it clear that PostgreSQL writes to disk immediately. To make PostgreSQL act like MongoDB, set in postgresql.conf: "synchronous_commit = off" and "wal_writer_delay = 100ms"
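If you want to experiment with this, the same settings can also be applied with ALTER SYSTEM instead of editing postgresql.conf by hand (a sketch; the trade-off is that a crash can lose the most recent commits):

```sql
-- Commits return before the WAL is flushed (a crash can lose
-- the most recent commits, but cannot corrupt the database).
ALTER SYSTEM SET synchronous_commit = off;

-- Flush the WAL roughly every 100ms, similar to MongoDB's
-- default journal flush interval.
ALTER SYSTEM SET wal_writer_delay = '100ms';

-- Neither setting requires a restart.
SELECT pg_reload_conf();
```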
@lucasrueda3089 1 month ago
so, are these graphs wrong?
@zuzelstein 1 month ago
@lucasrueda3089 These graphs reflect how the two databases work in different modes. It's the configuration that is "wrong" here.
@ramonpereira4460 1 month ago
Postgres is actually writing data to disk; Mongo is not. That is why you see higher throughput and lower IO/s.
@navidfiisaraee9992 1 month ago
Mongo's disk usage does reach 20 GB by the end; maybe its indexing mechanism just doesn't need to be updated as constantly.
@twitchizle 1 month ago
To RAM?
@GolderiQ 1 month ago
Where does Mongo write?
@andreydonkrot 1 month ago
It is highly likely the indexing. I think we will have another test later with optimized indexes.
@supercompooper 1 month ago
@GolderiQ Mongo has different types of write concerns, so you can say it's okay to just store the write in RAM, or that you want it written to disk, or written to a certain number of cluster members, etc. This is for the case the power goes out and you might lose something; you choose the option when you write through the API.
@AliMuharram-q3l 1 month ago
We need ScyllaDB vs Cassandra
@Алексей-о9б4г 1 month ago
Yes, and CouchDB.
@kamurashev 1 month ago
Maybe worth looking at NewSQL solutions like TiDB. I might not understand your use case, but Cassandra is a sh!tty thing.
@RupamKarmakar-s8z 1 month ago
ScyllaDB vs ClickHouse
@rida_brahim 1 month ago
If these databases are that important, why does anyone teach MySQL, MongoDB, or PostgreSQL? I don't get it. Is it a legacy-software thing, a big-tech-company thing, a use-case thing, or what?
@kamurashev 1 month ago
@rida_brahim Because they sit differently in CAP terms: a one-node Postgres is CA whereas Mongo is CP, so yes, it's use-case specific. There are also many more things to consider, and that's why there are so many of them.
@MrLinusBack 1 month ago
Regarding the large amount of disk writes: as some people have pointed out, it is most likely related to the WAL; you could try turning off fsync to check. Another related thing is how updates are implemented in Postgres: an update is essentially a delete followed by an insert (not exactly, but it creates a new tuple and marks another one as dead). It would be interesting to know whether excluding the update test changed the amount of disk writes for any of the runs. I actually had a real-life example of a rather small table getting millions upon millions of upserts each hour, where almost all of them became updates that didn't change any values. The table was about 100 MB if you copied the data to a new DB or did a full vacuum, but around 15 GB otherwise. Writing a WHERE clause that checked every single one of the 15 columns against the incoming values, and only updating the rows that actually differed, was much more performant in the long run (probably a rather niche case, since the entire table fit into memory when not bloated). But it would probably be worth trying.
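For illustration, a hedged sketch of that guarded-upsert pattern (the readings table and its columns are made-up names, not the table from the comment):

```sql
-- Hypothetical table that receives millions of upserts where
-- most incoming rows are identical to what is already stored.
CREATE TABLE readings (
    id    bigint PRIMARY KEY,
    value numeric,
    unit  text
);

INSERT INTO readings (id, value, unit)
VALUES (42, 10.5, 'C')
ON CONFLICT (id) DO UPDATE
    SET value = EXCLUDED.value,
        unit  = EXCLUDED.unit
    -- Skip the UPDATE when nothing changed: identical rows then
    -- create no new tuple and no dead tuple to vacuum later.
    WHERE (readings.value, readings.unit)
          IS DISTINCT FROM (EXCLUDED.value, EXCLUDED.unit);
```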
@Neuroszima 1 month ago
The most awaited battle in history! EDIT: OK, so I was in the PostgreSQL camp; I think you managed to convince me to start learning MongoDB as well, along with that weird syntax they have.
@AntonPutra 1 month ago
if you want JSON, Mongo is a no-brainer
@cotneit 1 month ago
Maybe a bit niche, but I would love to see SurrealDB compared to mainstream databases: maybe an in-memory comparison with SQLite and a server comparison with PostgreSQL.
@antoniong4380 1 month ago
I would like it to happen at some later date. If we assume it can only happen once, then I would rather see it after some time than now (I'm betting it will improve a lot later, or that there will be a clearer understanding of where SurrealDB performs best).
@cotneit 1 month ago
@antoniong4380 It has been quite some time since SurrealDB was released. Heck, they released v2!
@def-an-f3q 1 month ago
@antoniong4380 Not only once; at least, there are two videos comparing nginx and Traefik.
@TotalImmort7l 1 month ago
I was building a tool for a company. Initially we used SurrealDB, but we ended up going back to SQLite because SurrealDB was too slow.
@stefanszasz6510 1 month ago
Anton, your statement regarding disk write IOPS, "Postgres always performs many more operations compared to many other databases [MySQL]", contradicts your "Postgres vs MySQL benchmark", where you showed that MySQL writes ~3x more...
@kamurashev 1 month ago
For the disk usage: I'm not an expert in PostgreSQL configuration, but e.g. for MySQL/InnoDB there's innodb_buffer_pool_size, which gives you the ability to take advantage of RAM if you have a lot of it. On one of the projects I worked on we had 128/256 GB of RAM and were able to fit the entire dataset there; it gave us sub-ms, blazing-fast queries. Another issue might be that your dataset is small and you are reading and updating the same data all the time, thus evicting/eliminating the page caching.
@LtdJorge 1 month ago
For Postgres, you have shared_buffers (recommended baseline is 25% of RAM), which is the internal Postgres cache. Then there is effective_cache_size (recommended baseline is the rest of RAM, so 75% here), which hints to Postgres how much memory the OS page cache will have for its files; this is the Linux cache itself, which Postgres benefits from when reading pages from disk. This last one is only taken into account by the optimizer, e.g. to parallelize aggregates, unless the parallelization would use so much page cache that it would cancel out the performance gains. Postgres also has work_mem, the memory used by each worker for things like sorting or hashing; it's a relatively low value compared to shared_buffers, at least on machines with ample RAM (more than 16 GB). Edit: TimescaleDB recommends setting work_mem to 25% of RAM / connections. That is larger than usual, since Timescale uses more memory-heavy algorithms, but it gives an idea.
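For reference only, a sketch of how those baselines might be applied on a hypothetical 16 GB machine (illustrative values, not tuned recommendations for this benchmark):

```sql
ALTER SYSTEM SET shared_buffers = '4GB';         -- ~25% of RAM (needs a restart)
ALTER SYSTEM SET effective_cache_size = '12GB';  -- ~75% of RAM, planner hint only
ALTER SYSTEM SET work_mem = '64MB';              -- per sort/hash node, per worker
```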
@daymaker_trading 1 month ago
Bro, now you've come to databases, wow! :D These comparison tests are so much fun!
@AntonPutra 1 month ago
I'll do more; Redis vs Dragonfly is next
@Nikhil-jz5nm 1 month ago
Good video! Postgres prioritizes consistency, so it will always write to disk, while Mongo prioritizes availability and promises eventual consistency, so it writes to memory first... The same experiment with queries across shards and/or replica sets should give interesting results.
@LotnyLotnik 1 month ago
I think that's where the difference would matter. What is shown here is what maybe 99% of startups will get, but at some point you will need to scale, and it would be interesting to see how sharding/replica sets improve/degrade performance on both. Also: transactions!
@andyb012345 1 month ago
This isn't true. MongoDB's default configuration will always write the journal to disk on every write operation, the same as PostgreSQL.
@BosonCollider 1 month ago
@LotnyLotnik Postgres has an excellent sharding story with Citus (which is used in Cosmos DB on Azure but is also self-hostable; I've briefly tried it with StackGres). Most people should just scale writes vertically and add replicas to scale reads, though. Sharding is only actually needed at the PB scale, and most people should just make sure they are not running on something slower than a bare-metal $100/mo Hetzner server. Postgres also has better insert performance than MongoDB if you do bulk inserts with COPY (it beats INSERT already at 2-3 rows per COPY statement), which is generally what you will use anyway if the data being written comes from a message broker like Kafka or something similar. For bulk operations, the overhead of the commit ensuring a flush to disk is negligible. In Go webservers, another common pattern is to use a channel to queue up writes instead of having the handler write straight to the DB; this is better than the plain repository pattern because it gives the repository full freedom to autobatch writes. In those cases you have a strongly typed channel of struct values, and then you can usually write straight to a normalized table instead of writing jsonb; Postgres is much faster then, because normalization often means writing a tenth of the data or less (I've seen order-of-magnitude size reductions migrating Mongo/Elastic DBs to normalized Postgres).
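A minimal sketch of the COPY path from psql (the events table is hypothetical; from Go, something like pgx's CopyFrom gives the same protocol-level behavior):

```sql
-- Hypothetical normalized table for events arriving from a
-- message broker; compare with inserting the same data as jsonb.
CREATE TABLE events (
    id         bigint,
    user_id    bigint,
    created_at timestamptz,
    amount     numeric
);

-- Client-side bulk load from psql (server-side COPY FROM needs
-- file access on the database host). One round trip per batch.
\copy events (id, user_id, created_at, amount) FROM 'events.csv' WITH (FORMAT csv)
```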
@CodeVault 1 month ago
Very nice video, very informative, thank you! I'll be keeping a keen eye on the channel for an update showing the differences once the recommendations from the comments are implemented. It would also be nice to see the difference between using plain SQL (on Postgres) and JSONB (on MongoDB); I feel that's where most people, me included, would like to see the comparison.
@codeSTACKr 1 month ago
Thank you for this one! Great comparison.
@MarmadukeTheHamster 1 month ago
4:40 This database design is NOT okay for e-commerce; please don't design your e-commerce database like this! The concept of database normalisation demonstrated here is of course accurate and important to learn, but in this specific case your order table should contain the duplicated address and product price information. That way you can query historical orders and find accurate information about the price the customer actually paid at the time and the address the order was actually delivered to. Using foreign keys exclusively, as in this example, means your historical order information will become inaccurate if you ever change product prices or if a customer updates their address.
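A hedged sketch of that denormalization (hypothetical schema; assumes customers and products tables already exist):

```sql
CREATE TABLE orders (
    id               bigserial PRIMARY KEY,
    customer_id      bigint NOT NULL REFERENCES customers (id),
    shipping_address text   NOT NULL,   -- address actually shipped to
    placed_at        timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE order_items (
    order_id   bigint  NOT NULL REFERENCES orders (id),
    product_id bigint  NOT NULL REFERENCES products (id),
    quantity   int     NOT NULL,
    unit_price numeric NOT NULL,        -- price actually paid at order time
    PRIMARY KEY (order_id, product_id)
);
```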
@szpl 1 month ago
I guess it is just a small sample database with a basic use case, meant to be easy to understand. As you correctly emphasize, accountability for past transactions is a business requirement in many cases, especially in commerce/banking/bookkeeping, etc.
@tom_marsden 1 month ago
Having an order date/time and a product history table would also solve this; it's the same approach you'd use for anything that needs to be audited.
@sanchitwadehra 1 month ago
Thank you (dhanyavad) for commenting this. I was working on one such project, and it's my first time coding a full-stack project like this; I was about to make the mistake you mentioned. You saved me from hours of frustration.
@JonCanning 1 month ago
Thanks
@cmoullasnet 1 month ago
For the disk writes: perhaps Postgres is configured more conservatively out of the box, always making sync writes, compared to Mongo?
@nidinpereira5898 1 month ago
Let's get this man to the top of YouTube
@sudipmandal2497 1 month ago
This is the video I was looking for after studying DBMS scaling and the ACID and BASE properties.
@Zizaco 1 month ago
MongoDB is ACID-compliant, btw
@thesupercomputer1 1 month ago
I would really like to see MariaDB vs. MySQL. MariaDB is a drop-in replacement for MySQL that has been developed independently for some years now. It would be interesting to see which has the better performance since the split.
@alexvass 1 month ago
Thanks
@luizfernandoalves6625 1 month ago
What are the configurations of this PostgreSQL database? Number of connections, WAL, concurrency, etc.?
@spruslaks26 1 month ago
Thanks!
@AntonPutra 1 month ago
thank you for the support!! ❤️
@MusKel 1 month ago
When NATS vs Kafka?
@coolplay3919 1 month ago
+
@Pom4H 1 month ago
+
@MapYourVoyage 1 month ago
+
@winfle 1 month ago
+
@PedrodeCastroTedesco 1 month ago
Anton, I love your videos because they highlight the trade-offs in software architecture. I don't know if it is too much to ask, but could you highlight the pros and cons of the databases and the ideal scenarios for each one? That would be very helpful. Thanks!
@mkvalor 1 month ago
Best practice for high throughput with PostgreSQL is to mount yet another separate physical disk volume for the Write-Ahead Log (WAL). The write I/O on the log volume will always be sequential, while the data volume experiences much more random read and write I/O as the database engine processes the updates, deletions, and reads associated with the SQL statements. It's not about the "speed" of an SSD; in this case it's about isolating two dramatically different I/O patterns onto different firmware controllers (of the separate volumes).
@eddypartey1075 1 month ago
"In this case it's about isolating the two dramatically different I/O patterns onto different firmware controllers": why is this beneficial? Please explain where the benefit comes from; I'm curious.
@andyb012345 1 month ago
I think it would have made more sense to compare a B-tree index in PostgreSQL, as that is how indexes are implemented in MongoDB.
@jm-alan 1 month ago
I'd love to see this test repeated with the Postgres table created UNLOGGED; I bet it would drastically reduce the disk usage. For tables with very hot writes, if you're willing to sacrifice some crash reliability, it can substantially speed up writes.
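A sketch of what that would look like (hypothetical table name):

```sql
-- UNLOGGED tables skip the WAL entirely: much cheaper writes, but
-- the table is truncated after a crash and is not replicated.
CREATE UNLOGGED TABLE hot_writes (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

-- An existing table can also be switched in place:
ALTER TABLE hot_writes SET LOGGED;    -- back to full durability
ALTER TABLE hot_writes SET UNLOGGED;  -- and forth
```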
@georgehelyar 1 month ago
When you actually get to this scale, big Mongo for enterprise gets expensive. It's a bit apples and oranges anyway, though. If you have highly relational data, use a relational database like Postgres and scale it with something like Citus. If you are just reading documents that don't have many relationships with each other, use NoSQL like Mongo. The real cost of NoSQL is dev time: when you need to query the data in a new way, you have to denormalise it and write it in all the different ways so that you can query it efficiently.
@ikomayuk 1 month ago
I left a comment about Citus and decided to check whether I'm the only one here mentioning it. Good evening, sir ))
@brahimDev 1 month ago
SurrealDB strikes a nice balance between MongoDB and PostgreSQL; the only concern right now is that it's not fully production-ready.
@Mr.T999 1 month ago
Could you create a video on how you build this infrastructure: test loads, configurations, dashboards, etc.?
@AntonPutra 1 month ago
well, I have a bunch of those tutorials on my channel, including Terraform, Kubernetes, Prometheus, etc.
@luizfernandoalves6625 1 month ago
Postgres performs more disk operations because of the transaction logs, doesn't it?
@jricardoprog 1 month ago
I believe it would be fairer, when inserting data into MongoDB, to use the writeConcern option so that it behaves more like PostgreSQL. Otherwise, it trades data durability for performance.
@TheAaandyyy 1 month ago
What I would also like to see at the end is the bill from AWS :D A cost comparison between these two would be super nice, as cost is also a big factor when choosing a database. Great video anyway!
@AntonPutra 1 month ago
thanks, it's about $15-$20 per test, including prep, for running it for 2-3 hours
@Future_me_66525 1 month ago
Thanks Anton, invaluable video
@StanislawKorzeniewski-yr1ii 1 month ago
Can you compare TimescaleDB vs QuestDB?
@AntonPutra 1 month ago
ok, added!
@StanislawKorzeniewski-yr1ii 1 month ago
@AntonPutra Thanks, coffee on the way!!!
@user-lv3hn6uz4e 1 month ago
What value does the default_toast_compression parameter have? You should try lz4 in PostgreSQL for tests with blob types. Storage and related parameters are also very important.
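A sketch, assuming PostgreSQL 14+ and the hypothetical products table from the earlier examples:

```sql
-- lz4 TOAST compression (PostgreSQL 14+): much faster to
-- compress/decompress than the default pglz.
ALTER SYSTEM SET default_toast_compression = 'lz4';
SELECT pg_reload_conf();

-- Or per column, affecting newly written values only:
ALTER TABLE products ALTER COLUMN data SET COMPRESSION lz4;
```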
@naczu 1 month ago
Wow, I am really surprised. This is RDBMS vs NoSQL, and I thought NoSQL was always faster than any RDBMS out there, but I see PostgreSQL really doing a great job here. Amazing video, thanks.
@minhphamhoang2894 1 month ago
I have checked the repo, and it seems the database is not tuned. It would also be better to use the Alpine-based image, as the default one already consumes quite a lot of resources. In PostgreSQL, WAL writing and archiving are also active throughout the test, so from my point of view we need to tune those configurations first; the default settings of both databases are not great.
@AntonPutra 1 month ago
sorry, I forgot to upload the Postgres config; it is optimized, I use pgtune for that. I don't use Docker to run databases, I use systemd
@minhphamhoang2894 1 month ago
@AntonPutra pgtune actually tunes only the resources, specifically memory settings like shared_buffers, but it ignores the WAL. Since we aim to make the test reliable at the business level, please check and extend archive_timeout and checkpoint_timeout, disable full_page_writes, ... For synchronous_commit, since we don't use replication, set it to local mode.
@minhphamhoang2894 1 month ago
@AntonPutra Also, could you switch to the Alpine image to see if it performs better? Most production workloads are based on Alpine rather than Debian.
@johnsmith21123 1 month ago
Apparently Postgres writes the WAL to disk
@IvanRandomDude 1 month ago
It has to. Otherwise you can lose data; the WAL is essential for consistency.
@szpl 1 month ago
11:20 Were you able to find the root cause of those disk writes? Or of the absence of such writes in MongoDB?
@AntonPutra 1 month ago
not yet
@anassfiqhi3517 1 month ago
Very good video, thank you. Can you test scaling up both databases?
@AntonPutra 1 month ago
yes, I was thinking about performing some distributed tests; I'll start next with Redis cluster vs Dragonfly
@DavidDLee 1 month ago
The guarantees here are quite different. Postgres has strong guarantees, which means you can't lose data and transactions are consistent. I'm not an expert on MongoDB, but I think a successful write has weak default guarantees; e.g., the write may not be flushed to disk, and there are no consistency guarantees beyond a single collection write. That said, I am sure Postgres can be tweaked to perform better here.
@Zizaco 1 month ago
That was the case 10 years ago, but now MongoDB defaults to strong guarantees too.
@AntonPutra 1 month ago
interesting, I'll take a look
@BeeBeeEight 1 month ago
Finally, a match of the juggernauts! Although I somewhat expected this result, as I knew MongoDB works better with JSON documents than SQL databases do. The real conclusion here is that there is no one-size-fits-all database: while MongoDB works well with JSON, data with rigidly defined shapes is perfect for SQL databases, and SQLite is well suited for mobile/embedded applications.
@SDAravind 1 month ago
Please mention whether higher or lower is better for each metric in all the videos, and please provide your conclusion at the end.
@ErmandDurro 1 month ago
Thank you for this video. It was very insightful 🙂
@AntonPutra 1 month ago
thank you!
@omarsoufiane4evr 1 month ago
According to some benchmarks I saw online, Postgres outperforms MongoDB in paginated queries.
@AntonPutra 1 month ago
well, maybe, but I hardcoded a limit of 10 for both; as soon as I get enough feedback I'll refresh it
@jesusruiz4073 1 month ago
This confirms (I already knew it) that I DO NOT NEED MongoDB for my applications, even if they use JSON. PostgreSQL provides the flexibility of SQL (which I need) and also BETTER performance. The default MongoDB container stores the WAL (write-ahead log) in memory, writing it to disk periodically, which impacts durability and improves performance. As the MongoDB manual describes: "In between write operations, while the journal records remain in the WiredTiger buffers, updates can be lost following a hard shutdown of mongod." By the way, this is probably the reason PostgreSQL writes more to disk: default PostgreSQL provides better durability, which is what I need in my applications. Even with this "trick", both perform similarly below 5,000 qps (which is more than I need in practice). If MongoDB were configured with the same ACID properties (especially the D in ACID), I would expect MongoDB to perform worse even below 5,000 qps. Good job, Anton, and thanks.
@jesusruiz4073 1 month ago
I'll add my summary: for applications that cannot tolerate data loss, go with PostgreSQL. When you can afford data loss on occasion, you can use MongoDB. For me, the situation is clear (and has been for a long time, by the way).
@andyb012345 1 month ago
Your durability argument is misleading; this is only true when journaling is set to false on a write operation. The default configuration sets journaling to true when the write concern is majority, which is the default write concern. This is like saying "I can turn fsync off in PostgreSQL, so it isn't durable!"
@supercompooper 1 month ago
@jesusruiz4073 When you use MongoDB you can specify the write concern, so you can absolutely ensure it's written to disk safely.
@jesusruiz4073 1 month ago
@andyb012345 No, it is your argument that is misleading, because you confuse journaling with where and when the journal is written. For a deployment with a single server in the default configuration (like in this benchmark), journaling is enabled but the default write concern is { w: 1 }, and the journal is flushed to disk every 100 milliseconds (because w: 1 does not imply j: true). At 5,000 writes/sec, up to 500 transactions can be lost. When a replica set is used, I agree with you that the default write concern is w: majority, which implies j: true, and the journal is flushed to disk immediately. But that is not what was compared against PostgreSQL in this benchmark.
@Zizaco 1 month ago
@jesusruiz4073 Nah. writeConcernMajorityJournalDefault is the default, and the default writeConcern is majority, even for a single-node replica set.
@wardevley 1 month ago
I loved the dubbing
@artursradionovs9543 1 month ago
How do you make the animated displays with Grafana and Prometheus?
@LucasVinicius-ex4lr 1 month ago
Thanks for providing an audio track in Portuguese!
@the-programing 1 month ago
I still don't understand whether he did a join, and what settings were used on Postgres. If he used the default settings, then this is not a good test at all. MongoDB caches so much more, while Postgres never caches more than a few KBs with the default settings. This is dumb... comparing disk writes against an in-memory database...
@AntonPutra 1 month ago
I used pgtune to optimize Postgres; I just forgot to upload the config to GitHub
@the-programing 1 month ago
Thank you for the video though. Another suggestion: I don't know of any projects that store JSON objects in relational databases. Data is usually stored in columns and only assembled into JSON when you need to send it to the client side. You shouldn't query JSON objects inside a relational database, because it is made to work with rows and columns only.
@alienmars2442 1 month ago
any plans to compare CockroachDB and YugabyteDB?
@artiomoganesyan8952 1 month ago
I wanted to use Neo4j for my pet project, but it is so hardware-hungry that I decided to use Postgres instead. I would really like to see a comparison of Neo4j and Postgres, though.
@AntonPutra 1 month ago
ok, I used it in the past; it is an optimized graph database, but I think I could use a Postgres plugin as well
@vishnusai4658 1 month ago
Helpful, thank you. Can you make a video on real-time backups and incremental backups for databases (MongoDB, MariaDB) in EKS?
@renatocron 1 month ago
the reason Postgres uses more IO is that it's actually a DATA base: when it says the data is written to disk, it is. If you disable fsync, boy, you're not going to find anything better on one node, except maybe SQLite or DuckDB, but those are another beast
@AntonPutra 1 month ago
makes sense 😊
@jesusruiz4073 1 month ago
You don't have to disable fsync completely. I would say that applying in PostgreSQL the same "trick" MongoDB uses here would increase performance by an enormous amount. MongoDB in the default standalone server config (like here) writes the journal to disk every 100ms. If your use case allows some data loss, then PostgreSQL is a beast. However, do not try this for financial applications, please...
@galofte6067 1 month ago
Hey, this is a great video. How do you create those infographics, by the way?
@AntonPutra 1 month ago
thanks, the Adobe suite
@amaraag9435 8 days ago
When I watch backend tutorials, everyone is using PostgreSQL and Drizzle these days. Why do developers use PostgreSQL more often than MongoDB?
@enginy88 1 month ago
MySQL also supports a native JSON data type since version 5.7, with optional indexing by auto-extracting values from JSON fields. It would be great if you included MySQL in this test or did a standalone MySQL vs Postgres comparison.
@AntonPutra 1 month ago
thanks, I may do it in the future, but I don't think most people would use Postgres or MySQL for storing JSON docs
@enginy88 1 month ago
@AntonPutra Thank you for your response! In fact, people have realized that they don't want to maintain another database server in addition to their RDBMSs. That's why nowadays everyone is interested in storing unstructured JSON data in RDBMSs.
@milagos09 1 month ago
Postgres vs. Microsoft SQL Server, please 🙏
@AntonPutra 1 month ago
ok, it's on my list
@Nick-yd3rc 1 month ago
Are you running Postgres with the vanilla default config again? 😢
@AntonPutra 1 month ago
no, I forgot to commit the Postgres config; I use pgtune to optimize it based on the hardware I use
@prashanthb6521 1 month ago
Postgres has a shared-buffer lock-and-yield problem! It gets more pronounced as you increase the number of threads.
@AntonPutra 1 month ago
ok, interesting, I'll take a look
@prashanthb6521 1 month ago
@AntonPutra Read this paper: "Why Parallelize When You Can Distribute?" by Tudor-Ioan Salomie, Ionut Emanuel Subasu, Jana Giceva, and Gustavo Alonso at ETH Zurich, Switzerland.
@sh4lo 1 month ago
Nice job, thanks!
@amvid3464 1 month ago
Nice video, thanks man! It would be nice to see EdgeDB in upcoming videos.
@MelroyvandenBerg 1 month ago
It's me again :). If you want to reduce PostgreSQL's writes to disk, try setting commit_delay = 300 or higher.
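A sketch of how that could be applied (commit_delay is in microseconds and only takes effect when at least commit_siblings other transactions are open):

```sql
ALTER SYSTEM SET commit_delay = 300;   -- microseconds to wait before a WAL flush
ALTER SYSTEM SET commit_siblings = 5;  -- default; minimum concurrent transactions
SELECT pg_reload_conf();
```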
@luca4479 1 month ago
You can leave a pull request or an issue on the repo; there's a better chance it will be seen that way.
@szpl 1 month ago
My first impression was also that, at this order-of-magnitude difference, either pg is doing something unnecessary or mongo is cutting corners.
@MelroyvandenBerg 1 month ago
@luca4479 Oh, I've already done that plenty of times, but in this case I couldn't find the right code, since he didn't share his PostgreSQL config this time.
@AntonPutra 1 month ago
no, I'll keep it as is, but thanks for the tip! I'll take a look
@MelroyvandenBerg 1 month ago
@AntonPutra no problem.
@AcidGubba 14 days ago
Who would have thought that writing to memory would be faster than to a hard drive? What level is this?
@shahriarshojib 1 month ago
Would've loved to see a more apples-to-apples comparison, and memory usage too.
@jashakimov8578 1 month ago
Couchbase vs MongoDB
@reybontje2375 1 month ago
It'd be interesting to see this compared to embedded databases that use LSM trees, such as RocksDB. The RocksDB Rust crate alongside serde_json or simd_json versus Postgres would be an interesting comparison.
@kamurashev 1 month ago
If we are talking DBs, there's a newer category out there called "NewSQL"; e.g., several big companies I have worked for use TiDB and are switching their existing solutions to it. It'd be a bit hard to set up a cluster, but I'd do anything to see a performance comparison with both a classic single-node relational DB and, e.g., Mongo.
@AdamPoniatowski 1 month ago
can you do a Cassandra vs MongoDB comparison next?
@AntonPutra 1 month ago
columnar vs document database; maybe I'll do it in the future 😊
@kryacer 17 days ago
can you compare the performance of PostgreSQL vs MS SQL Server?
@professortrog7742 1 month ago
No sane DBA would model the data such that the price field is inside a JSON blob. The price and all other non-optional fields should have their own columns and be indexed with a normal BTREE index.
@Dr-Zed 1 month ago
This once again proves that PostgreSQL is the only thing you need for most applications. It can replace document DBs, caches like Redis and even message brokers.
@AntonPutra 1 month ago
yes, it covers 99% of what you need
@rohithkumarbandari 1 month ago
Please make another comparison video after setting Postgres up so that it acts like MongoDB.
@agustikj 1 month ago
MongoDB vs Elasticsearch? :)
@TariqSajid 1 month ago
Laravel Octane, please?
@NatanStreppel 1 month ago
Video idea: Postgres vs Postgres with fsync turned off :) that'd be very cool!
@AntonPutra 1 month ago
ok :)
@iaaf919 1 month ago
I wish you had a memory comparison too
@thepuma8558 1 month ago
Can you test DuckDB vs SQLite?
@behroozx 1 month ago
Do the same for JOIN queries, please.
@AntonPutra 1 month ago
I did (not in the video); for the first test I actually ran the Postgres relational model against MongoDB. The join failed very quickly in Mongo; it's not really efficient.
@tkemaladze 1 month ago
Can you do MongoDB vs MariaDB, please?
@tkemaladze 1 month ago
or even ScyllaDB vs MongoDB
@AntonPutra 1 month ago
yes, MariaDB will be next
@news3951 1 month ago
Very nice video. Love you, brother 🥰🥰🥰🥰🥰🥰🥰🥰🥰🥰🥰🥰🥰
@AntonPutra 1 month ago
thank you!
@PavitraGolchha 1 month ago
Would have been nice to also include SQLite here
@AntonPutra 1 month ago
I'll do SQLite soon as well, using a Unix socket instead of TCP/IP as I did in the previous test
@sanchitwadehra 1 month ago
Thank you (dhanyavad)
@rebelwwg1wga431 28 days ago
has anyone compared ScyllaDB vs a PostgreSQL Citus cluster?
@chneau 1 month ago
Awesome video
@AntonPutra 1 month ago
thank you!
@jacsamg 1 month ago
It's in Spanish! Incredible! Thank you very much 😬
@Takatou__Yogiri 1 month ago
Finally, it's here.
@animeverse5912 1 month ago
Redis vs scaletable (rust alternative)
@theo-k4i8m 1 month ago
Hi, do you mean Scaleout? I couldn't find any Scaletable.
@animeverse5912 1 month ago
I'm sorry, I meant Skytable
@DhavalAhir10 1 month ago
Apache Solr vs Elasticsearch
@AntonPutra 1 month ago
noted!
@emreapaydn4064 1 month ago
So MongoDB is more performant than PostgreSQL?
@xenostar3606 1 month ago
Same question
@emmanuelolowu6768 1 month ago
For JSON objects
@DanielMescoloto 1 month ago
yeah... but at a cost: MongoDB data is first written to memory, while Postgres ensures it is written to disk
@nexovec 1 month ago
lol, no. It is more scalable though.
@supercompooper 1 month ago
@DanielMescoloto That's not really true; you should look into it a bit more.
@mavriksc 4 days ago
How about TigerBeetle? And run some financial-transaction tests; put the claimed 1000x improvement to the test.
@inithinx 1 month ago
Interesting video. I'm sure Postgres/Mongo is doing something smart here; someone will make a PR to improve the Postgres setup, I think. Also, Ktor just hit 3.0, so I would really appreciate a speed test between Kotlin and Golang, with Ktor as the Kotlin backend framework and maybe Echo as the Golang backend framework. Thank you for your awesome work!
@reactoranime 1 month ago
OLS vs Nginx; I was surprised how OLS makes sites fly.
@AntonPutra 1 month ago
ok interesting
@AntonPutra 1 month ago
🍿 Benchmarks: kzbin.info/aero/PLiMWaCMwGJXmcDLvMQeORJ-j_jayKaLVn&si=p-UOaVM_6_SFx52H
@Ilja903 1 month ago
Addictive channel; I literally check your channel every day. Plz MariaDB vs MySQL. Great channel, these tests are killer; there's simply nothing like this anywhere else.
@AntonPutra 1 month ago
thank you! ❤️
@Milano274 1 month ago
Can you please compare Postgres and ClickHouse? I believe it would be interesting to see a row-based vs a column-based DB.
@exzolink 1 month ago
We're still waiting for Postgres vs Maria
@Valeriooooh 1 month ago
try MongoDB vs SurrealDB
@AntonPutra 1 month ago
ok will do!