► What should I test next? ► AWS is expensive - Infra Support Fund: buymeacoffee.com/antonputra ► Benchmarks: kzbin.info/aero/PLiMWaCMwGJXmcDLvMQeORJ-j_jayKaLVn&si=p-UOaVM_6_SFx52H
@QuentinBzt13 күн бұрын
Thanks again for this test Anton! Would be really interesting to see how latencies compare when both client and server are on the same machine (communicating over a Unix socket rather than TCP on localhost). This would be a more realistic scenario, since SQLite is only a drop-in replacement when everything is on the same machine. When client and server are on different machines, SQLite usually can't step in without non-trivial architectural changes 🎉
13 days ago
Thanks for all of your tests. Mezzio vs Laravel next, please.
@sailcraft13 күн бұрын
I would like to see your thoughts on Aurora versus RDS MySQL, and I appreciate you sharing the results of so many technologies already evaluated. Congrats!
@svetlinzarev345313 күн бұрын
It would be interesting to compare different embedded databases: Apache Derby, H2, and SQLite
@AGAU102213 күн бұрын
Do you think any of your tests allow you to take full advantage of the concurrency of Go/Elixir and fault tolerance of BEAM? Or would you need to design new tests to properly check the real world performance of languages like Erlang, Elixir and Gleam? Or is it just too expensive to properly test concurrency?
@opensourcedev2213 күн бұрын
I use SQLite as an intermediate log storage engine before logs get shipped out. We can do billion+ inserts per day. Every day. It never fails. It's a Swiss Army knife database
@AntonPutra13 күн бұрын
cool! thanks for providing a use case
@jitxhere13 күн бұрын
where is it finally stored?
@NerdyWasTaken13 күн бұрын
@@jitxhere probably in some S3 bucket in the cheapest TB-per-dollar region
@opensourcedev2212 күн бұрын
@@jitxhere we use Loki/grafana and opensearch for long term storage. SQLite is only present as a temporary buffer and gets pushed out by an agent to long term storage
@Xevion11 күн бұрын
@@NerdyWasTaken S3 is expensive, better alternatives exist if storage expense is a concern.
@sanjibsen746113 күн бұрын
Man you are truly a modern day hero. Thanks for listening to the requests
@AntonPutra13 күн бұрын
haha, thank you!
@InXLsisDeo13 күн бұрын
That request was a bit silly.
@AntonPutra13 күн бұрын
Just consider this test as another data point. Don't take it too seriously.
@aliandy.jf.nababan13 күн бұрын
Too bad an embedded client-side library is less safe, and SQLite can't cover both the client and server sides. You're right on this particular benchmark though, why so serious
@ManiBalajiC13 күн бұрын
@@aliandy.jf.nababan Because people are sensitive about the technology they work with; they hate it if it's inferior
@aliandy.jf.nababan13 күн бұрын
@@AntonPutra upsss
@jesusruiz407313 күн бұрын
@@aliandy.jf.nababan Could you elaborate on this? It seems more a prejudice than something based on facts. My experience is the contrary.
@aliandy.jf.nababan13 күн бұрын
@@jesusruiz4073 You'll get lower latency and more client requests processed at once if you embed SQLite in the client software on the devices themselves. But user datasets stored on client devices invite data theft: hackers target those devices because the user data stored there stays connected to the software developer's server that provides the related services. Correct me if I'm wrong.
@bart201913 күн бұрын
I run PostgreSQL more often than MySQL (or rather MariaDB), but mostly the client and server are on the same machine. That is a configuration you really shouldn't have ignored, because I expect a significant difference from the case where the database is on a different server.
@AntonPutra13 күн бұрын
for sure! i'll consider it as long as i get enough requests
@yasu1113 күн бұрын
@@AntonPutra well, then I have to second this request :D
@navossoc13 күн бұрын
+1
@edism13 күн бұрын
Yes, I hope those who think SQLite is better remember this fact lol. No one separates Postgres onto another machine and expects local performance.
@navossoc13 күн бұрын
@@edism damn, I was replacing my database at this exact moment
@mileselam64113 күн бұрын
NOT just because of network latency. Postgres is made for concurrent access, not just sequential queries. If you try to write to SQLite with multiple threads, the locking will degrade its performance dramatically compared to Postgres.
@edism13 күн бұрын
Yep.
@youtubehhhh13 күн бұрын
But you can use only one thread to write and multiple threads to read in SQLite
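A minimal sketch of that one-writer / many-readers split, using Python's stdlib `sqlite3` (the table, file, and counts are made up for illustration):

```python
import os
import sqlite3
import tempfile
import threading

# Sketch: one writer plus several readers against a WAL-mode SQLite file.
path = os.path.join(tempfile.mkdtemp(), "app.db")

def writer():
    con = sqlite3.connect(path)
    con.execute("PRAGMA journal_mode=WAL")  # readers no longer block the writer
    con.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, msg TEXT)")
    with con:  # one transaction for the whole batch
        con.executemany("INSERT INTO events (msg) VALUES (?)",
                        [(f"event-{i}",) for i in range(100)])
    con.close()

results = []

def reader():
    con = sqlite3.connect(path)
    results.append(con.execute("SELECT COUNT(*) FROM events").fetchone()[0])
    con.close()

writer()  # the single writer finishes its batch first in this toy example
threads = [threading.Thread(target=reader) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

With WAL enabled, the readers run concurrently with each other (and with a writer); only writes serialize.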
@jesusruiz407313 күн бұрын
That is a prejudice not based on actual facts. It would be good if Anton made a benchmark to prove it. The current test is good, but I would have preferred to test a "typical" scenario for a web application (e.g. in Go), without limiting the threads. The bottleneck is the disk, not the number of writer threads.
@habba596513 күн бұрын
You have to be at enormous scale to saturate a modern CPU and SSD with writes to SQLite. My laptop does 50k inserts/second. Reads can be parallelized.
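For a rough feel for numbers like that, here is a hedged sketch (Python stdlib `sqlite3`, in-memory DB; absolute throughput depends entirely on your hardware) showing why batching inserts into one transaction matters:

```python
import sqlite3
import time

# Rough throughput sketch; absolute numbers vary wildly by machine.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, line TEXT)")

rows = [(f"log line {i}",) for i in range(50_000)]
start = time.perf_counter()
with con:  # single transaction: the commit cost is paid once, not per row
    con.executemany("INSERT INTO logs (line) VALUES (?)", rows)
elapsed = time.perf_counter() - start

count = con.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(f"{count} rows in {elapsed:.3f}s")
```

Committing per row instead would pay a sync per insert, which is where most "SQLite is slow at writes" impressions come from.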
@jesusruiz407313 күн бұрын
@@habba5965 Fully agree, though in my applications I tend to observe an order of magnitude less (but it is still much more than needed in practice). Glad to see that I am not alone in considering SQLite ready for many production use cases.
@InstaKane13 күн бұрын
It would be interesting to see the costs incurred while running these tests in a separate video or appended to the end of each video
@inithinx13 күн бұрын
Yes!
@AntonPutra13 күн бұрын
sure, it’s between $10 and $50, depending on the time it takes to run
@kevikiru13 күн бұрын
@@AntonPutra That range is too wide. They are saying that they would like a small video for each test you do. This allows them to compare prices and make decisions.
@startappguy13 күн бұрын
@@AntonPutra Maybe a separate section at the end of every video, e.g.:
Test 1: PostgreSQL running for 2 hours costs 10 USD.
Test 1: MySQL running for 2 hours costs 11 USD.
Test 2: PostgreSQL running for 1 hour and 31 minutes costs 50 USD.
Test 2: MySQL running for 1 hour and 31 minutes costs 49 USD.
@AGAU102213 күн бұрын
@@startappguy Yeah, I think knowing the cost of each technology would be really interesting, but it's not really fair if one is cheaper because the requests failed, so you'd really want to know the cost of the first 100K requests or something like that, for each technology.
@trailbaseio13 күн бұрын
Great video. Regarding the spikes: you can see them for either one, just more or less pronounced. They're likely maintenance tasks such as vacuuming, rebalancing, or checkpointing (in the case of SQLite's WAL). If you wanted a more apples-to-apples comparison, you could bench SQLite servers such as PocketBase, TrailBase (author here), or libsqld (not to be confused with local libsql, which is just SQLite). On a tangent: given how cheap SQLite operations are, the FFI overhead of Go is actually not insignificant; you would probably get quite a bit more performance from a C/C++/Rust client.
@davidsiewert864913 күн бұрын
Suggestion for improving the test: use multiple clients in parallel (slowly increasing them over time), since SQLite will lock and bottleneck on multiple concurrent writes (as per the official SQLite docs). Let's find the point, in RPS or concurrent client writes, at which a separate client-server DB like Postgres becomes the better choice.
@KennyChowPD13 күн бұрын
The explanation at the beginning of the differences between the two tools is right to the point on its own. Keep up the good work!
@Zeenixx13 күн бұрын
What about the same test, just that the PostgreSQL server and client are installed on the same local server, then compare it with SQLite. Love your content, keep them coming!
@kevikiru13 күн бұрын
There is still going to be latency due to the network stack, even if local. I think.
@shekharnandkoemarsing15813 күн бұрын
@@kevikiru That is correct. Though it will for sure be faster due to it running on localhost now essentially, you still have the translation layer of Ethernet frames/TCP/PG protocol. So that will always add time compared to just a function call and a file open syscall.
@PanSzelescik13 күн бұрын
@@kevikiru On the same server you can use unix sockets
@jesusruiz407313 күн бұрын
Yes, that would be a more similar comparison. And also raise the limitation on threads, to make the comparison more "real-world". However, I expect to see SQLite beating PostgreSQL (as per my experience). Up to ~500 tx/second, when I can run a single application server, I use SQLite. Much simpler to operate and maintain.
@PrestonThorpe-d1x13 күн бұрын
Really appreciate the introduction here, very well done
@AntonPutra13 күн бұрын
my pleasure!
@QuentinBzt13 күн бұрын
Thank you so much for this! I'm one of the (many) who asked for it. For the haters out there: yes, the two don't totally compare, BUT both can be used to store and manipulate data with SQL, and in sooo many situations, especially on small/medium websites, the setup ends up being the application and DB running on the same machine. In those situations, having a client/server configuration (whatever the flavour: MySQL, Percona, MariaDB, Postgres, or MS SQL) just adds complexity, latency, and configuration overhead, and we needed to know about the potential performance! Without this enlightening test, people would still choose based on what's trendy or reputed "best" in a completely different setup.
@AntonPutra13 күн бұрын
true, many personal projects would greatly benefit from using simple SQLite rather than spending time configuring and maintaining traditional databases
@QuentinBzt13 күн бұрын
@@AntonPutra all the more when you get to see the application is often using the db’s root user…
@cmoullasnet13 күн бұрын
Use SQLite as often as possible unless you absolutely need the extra features of a full RDBMS. It’s incredible how much SQLite can handle. It’s even more impressive if you have a use case where it can run in ramdisk :)
@wadyn9513 күн бұрын
Thank you for showing this! I have known about and recommended SQLite for small projects for a long time. Thanks for sharing! Really good comparison to prove this for someone who didn't trust it at all! I think SQLite is a really good solution as a database for small and even medium-sized projects because it simplifies infrastructure and provides awesome performance on modern SSDs.
@AntonPutra13 күн бұрын
true, if you consider running your project or a website on the same VM/server, SQLite can work well!
@jesusruiz407313 күн бұрын
Yes, I agree. Except that SQLite supports up to 1000 tx/second, which is not a "medium-sized" project. In reality, the limitations of SQLite for "normal" projects are not related to performance but to how the project is structured. For example, you cannot have different applications (even with different business logic) writing to the same instance of the database. Well, you cannot with the "vanilla" version of SQLite. With things like rqlite that limitation disappears (together with most of the performance advantage, of course).
@martinhotmann786813 күн бұрын
Please always mention which versions you are using :) Postgres v17 was just released in September 2024 and, according to some blogs, is an immense step forward in performance compared to pg_16. Also, which version of SQLite3 are you using? Thanks for the video ^^
@jesusruiz407313 күн бұрын
It does not matter. The difference is in the network roundtrip when using PostgreSQL (or any network database). For SQLite, the bottleneck is the disk, which is typically much faster than the network.
@johnchukwuma41313 күн бұрын
I enjoy these comparison videos. Thanks for making them. FYI: I think the unexplained spikes are due to data re-indexing and re-organisation that happens as more data is inserted.
@toragodzen13 күн бұрын
What is the DB size at the end? Would anything change if we started with a prefilled DB, a few GB?
@AntonPutra13 күн бұрын
it's quite small, under a gig
@justabdurauf540813 күн бұрын
i really love your videos! Can you do a single video where you prepare all this infrastructure (Grafana, Prometheus, etc.) and AWS itself? I guess it would be great to see the preparation process.
@krissukoco929413 күн бұрын
I was just searching for Postgres vs SQLite benchmark 2 days ago. Nice breakdown, keep things up!
@busterhale706911 күн бұрын
I am always impressed when non-native English speakers give technical presentations. Very good.
@AntonPutra10 күн бұрын
thank you!
@tandemwarhead13 күн бұрын
SQLite can be placed in RAM, drastically increasing its performance. Read the SQLite docs about mode=memory and preferably cache=shared
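A small sketch of that setup with Python's stdlib `sqlite3` (the database name "demo" is arbitrary): with mode=memory and cache=shared, multiple connections in the same process see one shared in-memory database.

```python
import sqlite3

# "demo" is an arbitrary name; the database lives purely in RAM and is
# shared by every connection in this process that uses the same URI,
# as long as at least one connection stays open.
uri = "file:demo?mode=memory&cache=shared"
a = sqlite3.connect(uri, uri=True)
b = sqlite3.connect(uri, uri=True)

a.execute("CREATE TABLE kv (k TEXT, v TEXT)")
a.execute("INSERT INTO kv VALUES ('speed', 'ram-fast')")
a.commit()

# The second connection reads what the first one wrote.
value = b.execute("SELECT v FROM kv WHERE k = 'speed'").fetchone()[0]
print(value)
```

Once the last connection closes, the database vanishes, which is exactly the durability trade-off discussed in the replies below this comment thread.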
@jesusruiz407313 күн бұрын
Yes, but then you do not have persistence of the data in case of a crash. But I agree with you, in many cases, SQLite is a fantastic in-memory database with a standard (mostly) SQL interface. I have used it like that in many projects for many years.
@tandemwarhead13 күн бұрын
@jesusruiz4073 you will have persistence if you do periodic backups and then import the latest one on restart. Or you can do an I/O copy of the memory database itself, basically duplicating its contents to persistent storage.
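That copy-to-persistent-storage idea can be sketched with the stdlib backup API (Python 3.7+; file names here are illustrative):

```python
import os
import sqlite3
import tempfile

# Periodic-snapshot sketch: data written before the last backup survives a
# crash; anything written after it is lost.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (x INTEGER)")
mem.execute("INSERT INTO t VALUES (42)")
mem.commit()

path = os.path.join(tempfile.mkdtemp(), "snapshot.db")
disk = sqlite3.connect(path)
mem.backup(disk)  # online copy of the whole in-memory database
disk.close()

# "Restart": reopen the snapshot from disk.
restored = sqlite3.connect(path)
x = restored.execute("SELECT x FROM t").fetchone()[0]
print(x)
```

Running the backup on a timer (or after N writes) bounds how much data a crash can cost you.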
@jesusruiz407313 күн бұрын
@@tandemwarhead Ok, for some use cases you can trade durability for performance, for example if it is not a disaster to lose some data when the machine crashes between backups. But with the very high performance of SQLite, I tend to have only two extreme cases: 1) full durability (if the app received OK, then the data is on disk); 2) an in-memory SQL database. What I mean is that for "business-type" applications the performance of SQLite with full durability is more than enough.
@YuriyNasretdinov13 күн бұрын
I believe that in memory mode you can't have reads parallel with writes, so, surprisingly, WAL can still often be faster
@jesusruiz407313 күн бұрын
@@YuriyNasretdinov Yes, memory mode serializes everything (because it is in memory in the same process), but it is still faster than writing to a disk. Especially if you sync to disk every write: "Write transactions are very fast since they only involve writing the content once (versus twice for rollback-journal transactions) and because the writes are all sequential. Further, syncing the content to the disk is not required, as long as the application is willing to sacrifice durability following a power loss or hard reboot. (Writers sync the WAL on every transaction commit if PRAGMA synchronous is set to FULL but omit this sync if PRAGMA synchronous is set to NORMAL.)"
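A quick sketch of the durability knobs quoted above, using Python's stdlib `sqlite3` (PRAGMA synchronous reports back 1 for NORMAL and 2 for FULL):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")
con = sqlite3.connect(path)
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]

con.execute("PRAGMA synchronous=FULL")    # sync the WAL on every commit
full = con.execute("PRAGMA synchronous").fetchone()[0]

con.execute("PRAGMA synchronous=NORMAL")  # skip the per-commit sync, faster
normal = con.execute("PRAGMA synchronous").fetchone()[0]

print(mode, full, normal)
```

FULL keeps every acknowledged commit durable across power loss; NORMAL trades the last few commits for throughput, which is the trade-off the quoted docs describe.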
@awksedgreep13 күн бұрын
You asked about the spikes. With WAL SQLite has checkpoints just like PostgreSQL.
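For reference, a checkpoint can also be forced by hand, which is one way to probe whether the spikes line up with checkpointing (a sketch in Python's stdlib `sqlite3`; file name is illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cp.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
with con:
    con.execute("CREATE TABLE t (x INTEGER)")
    con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# Returns (busy, wal_pages, checkpointed_pages); busy == 0 means it ran.
busy, log_pages, ckpt_pages = con.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
print(busy, log_pages, ckpt_pages)
```

SQLite otherwise auto-checkpoints when the WAL grows past a threshold (around 1000 pages by default), and that burst of page-copying is a plausible source of periodic latency spikes.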
@swayok13 күн бұрын
Results are very interesting indeed. I may even try to use SQLite for some parts of projects where it might be faster. It would also be really interesting to compare SQLite with Postgres without network latency, because for small/medium projects it is more common to have the DB on the same server as the application. It might also be interesting to compare connecting to Postgres via the local network vs via a Unix socket. It is unclear to me how much faster Unix sockets are.
@roger-sei13 күн бұрын
wow, that was sweet. Had to watch with popcorn on big screen. Thanks for the detailed test, buddy ❤
@Dontcaredidntask-q9m13 күн бұрын
What about libsql instead of sqlite?
@AntonPutra13 күн бұрын
i think it's worth comparing. let's see what people think
@piotrbielski962413 күн бұрын
JVM vs .NET vs Go! Battle of the titans
@kingo5512 күн бұрын
Vs. Rust...
@MrEnriqueag12 күн бұрын
@@kingo55 It's comparing languages with GC; Rust doesn't have one...
@Judge3710 күн бұрын
well its not gonna be jvm
@GuzikPL413 күн бұрын
Insane production, amazing channel. Became a fan after only one video! Keep up the good work!
@flyingsquirrel327113 күн бұрын
I would love to see two more tests when comparing these two: 1. Running Postgres on the same machine as the client. 2. Making the application/client multithreaded (maybe using more cores as well), so they can actually issue more read/write requests at the same time. I'd like to see a scenario where the advantage of Postgres being able to execute multiple write requests simultaneously could actually be observed (if that's possible).
@jesusruiz407313 күн бұрын
I second this request. It will prove that SQLite is better than PostgreSQL in more realistic scenarios. This test was good, but to be more relevant it would be fantastic to prove that SQLite is better in those scenarios.
@habba596513 күн бұрын
Don't try to parallelize writes to SQLite; it's not worth it. It will actually be slower than doing them single-threaded in nearly all cases.
@jesusruiz407313 күн бұрын
@@habba5965 That was not true in the (admittedly limited) tests I performed 1 or 2 years ago. And it makes sense when you use the WAL. Do you have factual proof of what you say? I think Anton could just test it by not putting any limit on the threads. I know many people keep "passing around" that fake fact, which dates from remote times before the WAL.
@habba596512 күн бұрын
@@jesusruiz4073 WAL helps here but it is still slower. The SQLite docs themselves say it much prefers single-threaded access to avoid lock contention. It's a different scenario if you are using some async runtime in your program, though, since those are usually built to be pretty good at scheduling. In my testing I found that for Rust, Tokio is actually faster than SQLite itself at scheduling "parallel" (concurrent) writes.
@jesusruiz407312 күн бұрын
@@habba5965 Fair enough. I was thinking of using Go (as Anton does many times) as a backend server for a web application, and not putting any restrictions on threads. Normal application developers in Go just access the DB in each incoming HTTP request, letting Go handle the multiplexing of goroutines onto OS threads. This would be a simple but realistic scenario that application developers can relate to. And of course, in this scenario I would expect SQLite to win by a big margin, because it is an embedded database (with all the advantages and disadvantages that this may have ...)
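A hedged sketch of the contended-writes case this thread debates (Python stdlib `sqlite3`; thread and row counts are arbitrary): several threads write to one file, each connection using a busy timeout so it waits for the lock instead of failing with "database is locked".

```python
import os
import sqlite3
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "multi.db")
init = sqlite3.connect(path)
init.execute("PRAGMA journal_mode=WAL")
init.execute("CREATE TABLE hits (worker INTEGER)")
init.commit()
init.close()

def work(worker_id: int) -> None:
    # timeout=5.0 waits up to 5 s for the write lock instead of raising
    # "database is locked"; the writes still serialize, one at a time.
    con = sqlite3.connect(path, timeout=5.0)
    for _ in range(200):
        with con:  # one small transaction per insert
            con.execute("INSERT INTO hits VALUES (?)", (worker_id,))
    con.close()

threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sqlite3.connect(path).execute("SELECT COUNT(*) FROM hits").fetchone()[0]
print(total)
```

All inserts land, but throughput is bounded by the single write lock, which is the contention both sides of this thread are arguing about.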
@PanSzelescik13 күн бұрын
Cool test! Could you also try communicating with Postgres not over the network but via a Unix socket?
@AntonPutra13 күн бұрын
yes i will consider it
@shreyaspatange865313 күн бұрын
Can you do Postgres vs MongoDB? Please, gotta end this war.
@arkeynserhayn837013 күн бұрын
Thank you for providing this community-requested benchmark. Much appreciated.
@AntonPutra13 күн бұрын
my pleasure!
@lucianoseibel12 күн бұрын
these are the best and most useful videos nowadays .. congrats anton
@soubinan13 күн бұрын
Always so interesting to see your benchmarks! Thank you for that! I think it would be very interesting to see how DuckDB behaves compared to SQLite at larger DB sizes, since DuckDB is meant for analytics.
@HVossi9213 күн бұрын
Thank you so much for taking up this comparison :)! Also a very good introduction and explanation of the differences. Personally I'd be more interested in seeing this comparison with Postgres on a single machine, but this was already enough for me to move from Postgres to SQLite for our next project. Previously I didn't trust SQLite enough, since a lot of information used to say SQLite isn't for production, but I love working with it (we develop enterprise software that runs on a single server; currently we use Postgres, since "nobody gets fired for choosing Postgres", but SQLite would offer a lot of benefits for our use cases, and the performance would apparently be more than enough). So thanks again, I found this highly valuable!
@jesusruiz407313 күн бұрын
If you have less than 500 tx/second in your application, SQLite is better than PostgreSQL for many types of applications. And do not listen to the whiners complaining about the limitation of one writer: the bottleneck is disk speed for most applications (like "normal" web applications). However, the main limitation is architectural: with "vanilla" SQLite, the database belongs to the application. If the application exposes APIs, this is not a big deal, but you should look at these things before you make a decision.
@roger-sei13 күн бұрын
Same here. People always tell me "do not use SQLite for prod". I did my own tests and started using it in all projects, including e-commerce sites with large datasets and lots of writes and reads. It's incredible how well SQLite performs. So don't underestimate the potential of SQLite.
@pixiedev13 күн бұрын
I store my data in A4 size paper using a printer and to read I use scanner and ocr and that is faster than mongo. . . . . . Just joking I use sqlite ❤️ for my personal projects. btw great video 👍🏻.
@AntonPutra13 күн бұрын
thanks :)
@isenewotheophilus648513 күн бұрын
I understand why you had to include the last line, some will think it's true 😅
@gost13 күн бұрын
@@AntonPutra Actually, It would be a great test for the 1st of April video 😆
@kulikgabor76247 күн бұрын
You could boost your setup by faxing the data over a phone network instead of taking the papers by car.
@gost6 күн бұрын
@@kulikgabor7624 Faxing is a very outdated method of transmitting data. Wake up! Paper is a future of data storage.
@lolenderchebyrender405012 күн бұрын
awesome benchmark! would be interesting to see libsql by turso
@victormadu163513 күн бұрын
Thanks so much for this. Is it possible to do Postgres vs MongoDB and Postgres vs Cassandra/ScyllaDB?
@AntonPutra13 күн бұрын
yes, i'll do mongo soon as well
@victormadu163513 күн бұрын
@@AntonPutra That's great! Thank you
@ljkeller_yt13 күн бұрын
Thanks for the upload, I was one of the people asking!
@elephantum13 күн бұрын
SQLite is capable of handling reads concurrently with a writer when WAL is enabled
@lemonardot12 күн бұрын
The tests completely proved the points I made in the comments on the previous video, and some users didn't understand them.
@AntonPutra8 күн бұрын
i'll update soon using a unix socket as well, just curious now
@linuks879 күн бұрын
Thanks for the video. I discovered you lately. I'd like to see another case: PostgreSQL and its client on the same server (not over the network).
@AntonPutra8 күн бұрын
yes, i'll release it soon using a unix socket
@startappguy13 күн бұрын
Again, thanks for the videos. Really helpful. I only use SQLite on local devices, especially mobile, and PostgreSQL most of the time on servers. Suggestion: maybe you can add a separate section at the end of every video about the pricing or total cost incurred during the test? Thank you! Example:
Test 1: PostgreSQL running for 2 hours costs 10 USD.
Test 1: MySQL running for 2 hours costs 11 USD.
Test 2: PostgreSQL running for 1 hour and 31 minutes costs 50 USD.
Test 2: MySQL running for 1 hour and 31 minutes costs 49 USD.
@LoukasMertzanis13 күн бұрын
What a champion. Thanks for the amazing content. Do you run these benchmarks locally or in the cloud? Is it possible for us to reproduce and possibly extend the benchmarks? Also, what is your favorite tool for creating diagrams and animations in your videos? They are very nice!!!
@JJOULK13 күн бұрын
I love your analysis videos; they give so much insight into systems performance, and I think this material should be taught at university. A couple of comments: 1. Why is Delete operation latency closer to Insert than to Update? Deletes are usually combined with a WHERE clause, so I would expect a table scan of some kind. 2. I think showing the measured metrics alongside the data size in the DB (or some related value) would be very useful for picturing how much data it takes to reach a certain behavior. 3. Would using Docker containers for the Postgres DB and the client on the same VM behave more like running on the same server, or like the different VMs in your test? My intuition is that it would be faster, but then again VMs are often on the same physical server.
@JonCanning13 күн бұрын
Great video again, what do you use for the animated architecture diagrams?
@foobarf876613 күн бұрын
I wonder how the performance compares with HDD latencies (HDDs are sometimes still used for DBs due to reliability, but obviously not on AWS). Also, I wonder if it would be fairer to Postgres to test using a UNIX socket instead of AF_INET. The spikes are probably AWS; AWS has a bad reputation with some SQL admins I worked with.
@erkintek13 күн бұрын
Many people I've worked with are unaware of Unix sockets, thanks for mentioning them here
@AntonPutra13 күн бұрын
yes, if enough people ask me to rerun this test and colocate Postgres and a client on the same VM, I'll use a Unix socket for the client
@LukaszChelmicki13 күн бұрын
The spikes can also be caused by the vacuum process. It's Postgres's garbage-collection process, which uses quite a lot of resources and acquires locks along the way. But if you know any other reason that could cause the same spikes, please let me know, because we see them in our projects as well.
@foobarf876613 күн бұрын
@@LukaszChelmicki it's time-sharing in the KVM hypervisor. Hyper-V does it too; I don't mean to single out AWS. I/O is the biggest loser when virtualizing.
@dusanjovanovic185713 күн бұрын
Great test. What charting library did you use for the app?
@ИльяО-б5щ11 күн бұрын
The spike on Postgres is checkpoint + autovacuum, which flush buffers to disk, reindex, and update table stats
@jesusruiz407310 күн бұрын
Yes, I agree. But you meant "SQLite", instead of "postgres", right?
@AntonPutra10 күн бұрын
makes sense, thanks
@garretmh13 күн бұрын
I really would have liked if this included a third run with Postgres co-located with the client. Either way, great video as always
@mjerez602912 күн бұрын
It would be handy to have a graph of the data size, either in rows or GB, so we can know when a database starts to degrade too much.
@AntonPutra8 күн бұрын
yes, that makes sense, but i need to design a test for heavy writes
@linhub13 күн бұрын
Thank you for this, very informative. Could you compare postgres with libsql?
@violanes12 күн бұрын
Hi! Do you plan on benchmarking database conn poolers such as pgCat, pgBouncer, and odyssey in the future?
@thinkingcitizen13 күн бұрын
This is a great video. Happy Diwali from New York !
@code.monkey13 күн бұрын
Hi! Thanks for the benchmarks. It would be interesting to compare performance with database sizes that don't fit in memory. I didn't catch from the video whether Postgres was tuned. Its default settings are meant to let it run on a kettle =)
@code.monkey13 күн бұрын
I think the Postgres spikes are related either to the checkpointer or to vacuum
@abdelazizabdelioua89013 күн бұрын
SQLite with PocketBase is all you need for your side projects, believe me ❤
@AB-ms7my13 күн бұрын
6:43 It's possible by interleaving work among workers. SQLite doesn't do it for users, let alone automatically.
@AB-ms7my13 күн бұрын
2:42 We can use low-level tools to write files or modify disks in parallel... They should persist. It's not clear what you mean by safe.
@duztdruid12 күн бұрын
Should have included local Postgres listening on a UNIX socket. Also, please do a test with a few more CPU cores and more concurrency. SQLite should do extremely well on concurrent reads, a bit less so on concurrent writes, but on 2 CPU cores I don't believe Postgres would pull ahead, since SQLite does writes significantly faster than Postgres in general.
@AntonPutra10 күн бұрын
yes, I’ll do a UNIX socket test soon. I’m not sure yet if I’ll use 4 or 2 cores
@matthewrideout267713 күн бұрын
Using Docker Compose to add a Postgres database to the same virtual machine the web server runs on is common (e.g. with a FastAPI server). You could compare a Docker FastAPI server using SQLite vs a Docker Compose version that adds a Postgres container. Many times people do not want to pay for a separate VM for a database. That is the situation where the SQLite vs Postgres question is most common, because you're in maximum cost-savings mode and very concerned about the extra resources Postgres needs versus what it offers.
@ghislaincarrier11 күн бұрын
Just saw in the SQLite documentation that WAL by default is not durable. From the documentation: "Further, syncing the content to the disk is not required, as long as the application is willing to sacrifice durability following a power loss or hard reboot." This means it is not fair to compare this mode with other, durable RDBMSs. You should set PRAGMA synchronous to FULL if you test SQLite again.
@AntonPutra10 күн бұрын
Yeah, I know there are some limitations with WAL, but in the first place, it's hard to compare them
@nithinraj142812 күн бұрын
I would still choose Postgres because it can store arrays natively, without a relation to another table. That has nothing to do with benchmarks, but it is a DX win.
@AB-ms7my13 күн бұрын
13:14 "Risky" and "unreliable" are bad words when talking about such things. What exactly do you mean by them?
@ghislaincarrier12 күн бұрын
Thanks for doing these benchmark comparisons. I have a few comments for improvement. In the first test, you should have reads as well. You could, for example, select from the product table as if you wanted to check availability. In the second test you should fix the join types: do an INNER JOIN on the customer table, since there will always be a customer for an order. It probably won't change the results, because the customer table is small and you select a single order. Maybe you could select a range of orders instead? You should definitely start with a large database; any database engine can be fast with very little data. You should have thousands of customers, hundreds of products, and thousands or millions of orders. Ideally, it would be nice to test a database larger than memory. You can generate CSV files, and some database engines have an option to quickly load a CSV so you don't waste hours. Or, for example with SQLite, you could initialize the DB at home and upload the database file to Amazon. Like others said, use multithreading and let SQLite lock the DB for writes. The issue with your test is that some people will think SQLite is better in any use case, which I'm sure it is not.
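The CSV-prefill suggestion above can be sketched like this (Python stdlib only; file and table names are made up, and real datasets would of course be far larger):

```python
import csv
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
csv_path = os.path.join(tmp, "orders.csv")

# Generate the CSV once (cheap to regenerate at different sizes).
with open(csv_path, "w", newline="") as f:
    w = csv.writer(f)
    for i in range(10_000):
        w.writerow([i, f"customer-{i % 100}", i * 10])

con = sqlite3.connect(os.path.join(tmp, "prefilled.db"))
con.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total INTEGER)")

# Bulk-load inside a single transaction so the commit cost is paid once.
with open(csv_path, newline="") as f, con:
    rows = ((int(i), name, int(total)) for i, name, total in csv.reader(f))
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

n = con.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(n)
```

The resulting `.db` file can then be copied to the benchmark machine so every run starts from the same prefilled state.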
@AntonPutra10 күн бұрын
Thanks for the feedback. I’ll try to implement some of it for the next test
@garretmh13 күн бұрын
Lately, I start projects on SQLite and only upgrade to Postgres if I want to containerize it or use some fancy DB feature.
@committedcoder335213 күн бұрын
When designing tests you should consider the number of "independent variables": the number of things you change between two tests. In this video you have two: Postgres vs SQLite, and local vs network access. This means you can't know how much of the dependent variable (the outcome) is due to which independent variable. IMO you should have done two separate tests: one with just Postgres, both remote and local, and another with both Postgres and SQLite, both local.
@committedcoder335213 күн бұрын
Don't get me wrong, loved this and all the other videos! Keep it up :)
@SultaoFeliz13 күн бұрын
You could use Turso instead of raw SQLite That could be interesting
@recklessroges13 күн бұрын
Thank you for this test. These two are so vastly different but comparing apples and Elephants is valid if you are comparing their vitamin C content. I hadn't considered disk IO cost as a possible factor.
@Nick-yd3rc13 күн бұрын
I didn’t notice how you configured your Postgres cluster. Just the defaults?
@Gaijin10113 күн бұрын
Long live SqueelLite!
@AntonPutra13 күн бұрын
haha yes
@paca310713 күн бұрын
I love the philosophy of SQLite. Imo this is the best DB for the new projects.
@himmetavsar216810 күн бұрын
Nice work 🎉 subbed
@AntonPutra8 күн бұрын
thank you! i'll make an updated one using a unix socket
@michaelutech478610 күн бұрын
I would love to see a benchmark between SQLite and Postgres serving a REST API. Postgres could use PostgREST (a REST interface in front of Postgres), while for SQLite an app would have to be written providing the API with SQLite as the backend. This setup could test simple CRUD access to the tables as one test case, and complex operations simulating real business logic as another. In SQLite this would be implemented in code; in Postgres probably as stored procedures in some (to be fair, fast) language supported by Postgres.

The CRUD segment would test the infrastructure/architecture overhead and give a baseline, both for simple use cases and for comparison with the other segment. In the logic segment, I would like to see a test that performs the kind of logic you would see in a web shop: a bit more complex than CRUD, more involved queries using 1-4 table joins, and enough data to get some IO cache misses.

I would expect SQLite to win all of these competitions, because they play into its architecturally defined strengths. I would expect Postgres to scale better, e.g. in situations with a high volume of concurrent updates, ideally with a query workload in parallel, and also when really complex queries are used (common table expressions doing graph operations; imagine a file system implemented as nodes with foreign keys for directories and files, querying all recursive children of node X that have some property). I don't know if SQLite supports that; it's just an example of something you might want a DB in a complex app to do. Not everybody uses SQL for key-value data or shopping carts only.

Another example, to add some fairness, would be to use the second VM (the one you used as the Postgres client) to run a read-only Postgres replica, write updates only on the primary, and compare that to SQLite under heavy load. Disks should be local SSDs (or arrays of them to boost IO). I don't think any disk IO should involve the network; that just doesn't make sense for either SQLite or Postgres, no matter what the cloud folks say about it. This is where you want the absolute best performance; redundancy and fault tolerance have to be done on the DB layer (e.g. using replicas), not on storage. I would expect from such a benchmark that both SQLite and Postgres would be uncontested winners in some of the tests, and the baseline should be much closer (SQLite should win that too, just not by a huge margin).
@AntonPutra10 күн бұрын
Thank you for your feedback. Some of it seems a little complex, but I'll see what I can do to implement at least some of it.
@michaelutech478610 күн бұрын
@@AntonPutra I would love to help, but I'm drowning in work lately. If there is something I can do, let me know. Can't promise anything but I would love to see this benchmark
@michaelutech478610 күн бұрын
Regarding costs for these benchmarks: wouldn't it be more cost-effective to use a VPS instead? I'm paying between 5-10€/m for them (4cpu6ram100nvme - 6cpu16ram200nvme). That's Contabo; all other providers I tried are more expensive. Hetzner is also good. Beats Azure, AWS and Google by orders of magnitude. On the job we have a monthly Azure bill of around 3000€, and I could easily get the same workload running for at most 100-200€. It's of course a whole lot more work to set everything up yourself, but most of the work is a one-off task plus some extra effort to maintain it.
@kyesil13 күн бұрын
Good video. For the DB comparison, you should also include the on-disk database size for the same data.
@thepashuks13 күн бұрын
This is what I needed, but it would be better to see a few more scenarios: both apps on a single instance with a fine-tuned connection pool for Postgres.
@STNSecondToNone11 күн бұрын
Hi! Can you please add the Rust-based SurrealDB to the comparison? It also has a Go client.
@AntonPutra10 күн бұрын
yes, it's on my list
@wedding_photography12 күн бұрын
Pretty sure SQLite allows concurrency for reads. You could test that against Postgres.
@AntonPutra8 күн бұрын
yes, concurrent reads are supported; according to the docs, an unlimited number of them
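The concurrent-read behavior is easiest to see with WAL journaling enabled, since WAL lets readers proceed even while a writer is active. A minimal sketch using Python's built-in sqlite3 module (the file path and table are made up for illustration):

```python
import os
import sqlite3
import tempfile

# Hypothetical throwaway database file for the demo.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
# The pragma returns the journal mode actually in effect.
mode = writer.execute("PRAGMA journal_mode=WAL").fetchone()[0]
writer.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
writer.execute("INSERT INTO kv VALUES ('a', '1')")
writer.commit()

# Two independent read connections open at the same time;
# in WAL mode each gets a consistent snapshot.
r1 = sqlite3.connect(path)
r2 = sqlite3.connect(path)
v1 = r1.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0]
v2 = r2.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0]
print(mode, v1, v2)  # → wal 1 1
```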
@meganuke200510 күн бұрын
Can you do postgres vs mongodb? I am really curious
@AntonPutra8 күн бұрын
yes, it's close to the top of my list
@TurtleKwitty12 күн бұрын
Really wish you hadn't brushed off the Postgres advantage with concurrent users; that's the whole question. It's not "if we use Postgres wrong so it fits the model of SQLite, how do they compare," it's "using both correctly, how do they compare." You should really do a test with concurrent users emulating normal visitors of a website and see how far you can push them. I'm firmly in the camp that SQLite is more than good enough for most things, and yes, that includes concurrent users, because the filesystem is fast enough and skipping the client/server overhead means the single writer really doesn't matter until you scale to ridiculous numbers. But it would be nice to see exactly where that limit falls, and also the resource usage/costs of each.
@AntonPutra8 күн бұрын
thanks for the feedback, i'm about to refresh this video using a unix socket and may include multiple concurrent users as well
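One knob that matters a lot when concurrent users hit SQLite's single writer is busy_timeout, which makes a connection wait for the write lock instead of failing immediately with "database is locked". A minimal sketch with Python's sqlite3 (the 5-second value is an arbitrary example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Wait up to 5 seconds for a competing writer to release the
# lock instead of raising "database is locked" right away.
conn.execute("PRAGMA busy_timeout = 5000")
timeout_ms = conn.execute("PRAGMA busy_timeout").fetchone()[0]
print(timeout_ms)  # → 5000
```

Drivers often expose the same setting directly (sqlite3.connect also takes a `timeout` argument), but the pragma works from any client.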
@shoddits215612 күн бұрын
A test to determine how SQLite performance degrades with increasing file size and record count would be quite insightful.
@yzd5147 күн бұрын
Pocketbase (SQLite) vs Supabase (Postgres) would be interesting, to see the real effect on open-source backend apps.
@AntonPutra7 күн бұрын
ok added to my list
@rosadyhalid817212 күн бұрын
Could you create a benchmark video comparing the performance of ClickHouse with competitors like PostgreSQL, MySQL, and others?
@AntonPutra10 күн бұрын
will do in the future
@NoBodyknow17213 күн бұрын
Can you please tell us how to configure this kind of server and how I can implement it for testing?
@mariogomezarr13 күн бұрын
Awesome content. Thanks a lot.
@VampireSilence5 күн бұрын
Just a visual thing: please don't use multiple blue tones for the chart lines; they're difficult to distinguish. ^^
@supqerior2610 күн бұрын
Maybe Dqlite against PostgreSQL? It's just SQLite with Raft-based replication for clustered apps.
@AntonPutra10 күн бұрын
interesting, i'll take a look!
@spermakonya13 күн бұрын
Please make a test using unix sockets on Postgres/MySQL/SQLite.
@kelownatechkid13 күн бұрын
Yugabyte is worth considering... it has the most scalability, Postgres compatibility, and NO VACUUM!!
@leuhenry803113 күн бұрын
SQLite has huge potential to adapt to different scenarios, even being wrapped as different NewSQL databases.
@timhilt1185Күн бұрын
I would love to see ScyllaDB compared to something else. ScyllaDB is a wide-column store compatible with Cassandra, so comparing it to Cassandra or MongoDB would make the most sense. But I guess the result would be clear already.
@wesleycoder13 күн бұрын
10:02 The insert latency spikes for SQLite should be the periods when the engine reconciles the WAL (write-ahead log) into the main database file. For Postgres, I dunno.
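That reconciliation is a WAL checkpoint; SQLite runs it automatically when the WAL grows past a threshold, but it can also be triggered by hand to see the effect. A minimal sketch with Python's sqlite3 (throwaway file and row count are just for illustration):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal_demo.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
conn.commit()

# Force the WAL contents back into the main database file.
# The pragma returns (busy, wal_frames, frames_checkpointed);
# busy is 0 when the checkpoint was not blocked.
busy, frames, moved = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)"
).fetchone()
print(busy, frames == moved)  # → 0 True
```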
@theunskruger121111 күн бұрын
I'd love to see SQL Server vs PostgreSQL.
@AntonPutra10 күн бұрын
ok noted!
@maulanamalique13 күн бұрын
Please compare Java vs Rust for REST web service performance.
@matthewdee602313 күн бұрын
Any chance of comparing PostgreSQL to SqlServer?
@AntonPutra13 күн бұрын
very high chance! 😊
@МихаилКузьмин-й8ц13 күн бұрын
Try to use a unix domain socket to connect to Postgres on the same server
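Switching a Postgres client from TCP to a unix domain socket is usually just a DSN change: libpq treats a host value that starts with "/" as a socket directory. A hedged sketch of the two connection-string forms (the socket path shown is the Debian/Ubuntu default, and the database name and user are made up):

```python
# libpq-style connection strings; a host beginning with "/" names
# the directory holding the .s.PGSQL.<port> socket file.
socket_dir = "/var/run/postgresql"  # assumed distro default

tcp_dsn = "host=localhost port=5432 dbname=bench user=app"
uds_dsn = f"host={socket_dir} dbname=bench user=app"

# Any libpq-based driver (psycopg, lib/pq, etc.) accepts either form.
print(uds_dsn)
```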
@AntonHolovan-f7z5 күн бұрын
Top content, great job! :) A tiny, unimportant correction: it's pronounced конкАрент, not конкУрент.
@carlos-rian13 күн бұрын
Python web frameworks FastAPI vs Flask vs Django vs Robyn
@CalifornianViking13 күн бұрын
Thank you for doing these tests; however, this particular setup is not that relevant. You should test SQLite against plain file access for the single-client, single-core example, and run one of the larger database test frameworks to test hundreds or thousands of clients on machines with tens of cores and tens of GB of RAM. You will see that SQLite's advantage deteriorates once you need advanced functionality and concurrent data access. Your scenario is just not a good fit for Postgres.
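A toy version of that SQLite-vs-plain-file comparison is easy to sketch: N records appended to a flat file vs the same N records inserted into SQLite in one transaction. The record count and format below are made up, and a real benchmark would also need timing and controlled fsync settings:

```python
import os
import sqlite3
import tempfile

N = 1_000
tmp = tempfile.mkdtemp()

# Plain-file "log": one line appended per record.
log_path = os.path.join(tmp, "plain.log")
with open(log_path, "a") as f:
    for i in range(N):
        f.write(f"record {i}\n")

# Same records through SQLite, one transaction for fairness.
db = sqlite3.connect(os.path.join(tmp, "plain.db"))
db.execute("CREATE TABLE log (line TEXT)")
with db:  # connection context manager commits on success
    db.executemany(
        "INSERT INTO log VALUES (?)",
        ((f"record {i}",) for i in range(N)),
    )

rows = db.execute("SELECT COUNT(*) FROM log").fetchone()[0]
with open(log_path) as f:
    lines = sum(1 for _ in f)
print(rows == N, lines == N)  # → True True
```

Wrapping each timed section with time.perf_counter() would turn this into the single-client micro-benchmark the comment asks for.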
@nknapsterr13 күн бұрын
What was the data size after which the requests per second dropped?
@keneola12 күн бұрын
Are the spikes a result of garbage collection perhaps?