The Only External System You Need

  51,917 views

CompSciGuy

1 day ago

Comments: 134
@dumbotterlover2558 1 year ago
In my experience, the unnecessary complexity is due to the boredom of principal devs, tech leads, or CTOs that have been pulled away from direct implementations. Or leadership that just wants to be able to put tags on a resume so they can tell the next company that they "led X company into a new technical era", but leave out the fact that their support engineers are now having to work 3x as much to maintain their forced garbage.
@RU-qv3jl 1 year ago
That, or the developers are doing resume-driven development, where they just want to play with cool things to put on their resume. Admittedly this tends to happen when you have a "Research" team that comes up with all the new ideas/architectures that are then automatically implemented. That happened at a shop I worked at recently where I was in Ops. It meant that they wanted me to move from being a specialist to a jack of all trades who knew 12 different types of database, was a cloud expert, etc. All in a mere few months, of course. It made the workload unbearable and I left. Some people hate to think about how to run their systems and just want to play. And when management lets them do that, it's awful.
@alextenie 1 year ago
@@RU-qv3jl "resume-driven development" I'll forever remember this phrase.
@GriaustinisTech 1 year ago
I've been on both sides of the equation. The database doing everything was fine at the start, but made life a living hell for everyone when the company grew... No amount of money spent on servers or outside consultants managed to solve the problem; only evolving into that over-engineered solution let us move forward. Yes, you can do basically everything with relational databases, but the bottleneck can be reached much sooner than expected, and not having a plan B could mean the failure of the business.
@brencancer 1 year ago
"The elephant in the room"
@sonicjoy2002 1 year ago
This is a very interesting concept, but actually older than most of the developers today: don't over-engineer it. In my software engineering career, I was young and naive too, and tried to bring in the latest shiny tools whenever I could, without really considering the downside: additional dependencies that could break, and their cost. When I think back, most of the time I didn't need the extra capacity/scalability, but I did spend thousands of hours fixing the bugs and issues coming from that unnecessary complexity. I get it, we developers are secretly trying to use our employer's resources to learn and gain experience in new tools, languages, and libs; that's probably the main motivation behind it.
@MrCompSciGuy 1 year ago
Especially today, how many people would realistically work with over 1 TB of data? At least to me, that's when you might want to start really thinking about architecture from a performance POV, and even then, you shouldn't design something that works for more than 10x the scale you expect. By the time you get there you'll have rebuilt the system multiple times anyway (and you'll have the time to rebuild it again for another 10x increase in scale).
@huveja9799 1 year ago
It reminds me of a very basic and crucial principle, KISS .. Keep It Simple, Stupid!
@DevynCairns 1 year ago
I agree in principle with keeping things simple, but I've been bitten before by not using specialized tools that are available, when the demand for certain features or certain levels of performance grows beyond what a simple solution can handle. Even if you're very good at abstracting the functionality behind a limited interface and can come up with a drop-in replacement, you're going to run into issues, and there are also cases where that just isn't possible at all because the better solution fundamentally requires a different API. Postgres gives you incredible flexibility as a programmer and can do almost anything you want it to. But that's dangerous, because it's not actually good at everything.
@knoppix87710 1 year ago
Yep, completely agree here: a queue is a queue is a Q. Until you need message replay, RBAC, etc., and all of a sudden you're writing a barebones ActiveMQ in your own services stack. So yeah, an experienced team is needed to find that balance. Overall the message of KISS (keep it simple, silly) resonates and is well understood, but seldom practiced.
@Sarwaan001 1 year ago
Also, a big player in software tweeted this as career advice: "do not accept tradeoffs. a lot of advice out there is not real advice, it's the author trying to cope with their own shortcomings by claiming some fundamental rule. you will hear things like 'while you obsess over tech i shipped a product' or 'being a good engineer and being a good manager are different skills'. the truth is you can be good at both and there's no reason you can't - there's always an example of someone who is. do not limit yourself out the gate by defining some narrow role/identity for yourself - all of that is fake and you should strive to be great at programming, product, design, marketing, business, etc. you'll fail but you'll realize it's all the same and land somewhere pretty special." That being stated, you might not actually be over-engineering your problem. Also, in big tech we make sure that all items scale independently and are cost effective. That way we can write off a feature and not worry about it as much anymore, especially with spikes.
@dadisuperman3472 1 year ago
I think the problem nowadays with the over-engineering dilemma is that everyone became a programmer instead of an engineer. They just jump in and use whatever technology is available and new to implement their task. The days when engineers sat down and thought through the problem with pencil and paper are no longer among us. In my career, every boss wants to see me writing code while I'm thinking with pencil and paper, and because of that pressure you end up picking a ready-made general solution, tweaking it, and showing it to them, no matter the cost of the dependency. Well...🤷🏻‍♂️, what can you do!
@huuhhhhhhh 1 year ago
Yup!
@yanakali2452 1 year ago
among us
@bob_kazamakis 1 year ago
Ironically, it is simpler and less buggy to use AWS services for this than a DB. Take queues, for example: you aren't driving that data to be the input of a function, so you get no scaling concurrency, it would require polling to work, and it has no failure mode in the way a DLQ would.
@georgeFulgeanu 1 year ago
Please listen to this video and put postgres everywhere so when I join the company for tons of money, I'll look like a superhero proposing specialized tools that work out of the box.
@bioshazard 1 year ago
Or you can use Redis for everything too, but love the pitch, great points
@MrCompSciGuy 1 year ago
True! But then you should keep in mind you need to keep everything in memory (and memory is more expensive than disk). Also, in my opinion, Redis' key-value interface is much more limited than SQL. But I don't think it's a bad idea
@oddym5788 1 year ago
Redis offers persistence too
@imkunet 1 year ago
@MrCompSciGuy There are a few databases that use SQL as just a high-level abstraction over a key-value storage engine (CockroachDB, SurrealDB, TiDB, ...). Something like joins can be recreated in something like Redis in a way that feels just as hacky as replacing an LRU with Postgres :) As for memory usage, if your main concern is making your application work and you have only ten requests per second throughout your entire system, it would be more useful to apply engineering time to features that increase customer acquisition... According to Redis' documentation (mind the bias gap), "1 Million Keys -> Hash value, representing an object with 5 fields, use ~ 160 MB of memory." I spun up a stock Postgres Docker image and it consumed 66.8 MB. I don't mind sparing 1 GB of memory if it means that, supposing all things scale linearly, I can store 6.25 million very simple users with instant seeking since it's all in memory.
@MattHudsonAtx 1 year ago
Redis is just a KV store. Can't do tons with that.
@Pigeon-envelope 1 year ago
It's possible I guess but not a good idea
@dienvidbriedis1184 1 year ago
indeed, the friendly blue elephant got your back, friend!
@fabiolean 1 year ago
Duuude. THANK you. If I have to look at one more over-engineered architecture diagram that's really just a desperate justification for deploying Kafka where it isn't needed, I'm going to S C R E A M
@MrCompSciGuy 1 year ago
Well how else will they be able to add Kafka to their resume? 😂
@sixtyfivewatts65 1 year ago
I'm new to this and I think this is what I was looking for: a simple database system to start learning about.
@abdirahmann 1 year ago
Great choice, Postgres is by far the best; it's used by many companies around the world, and the documentation and community are great!
@MrCompSciGuy 1 year ago
Indeed the best one to start with
@CodingHaribo 1 year ago
Sorry, but with the message queue being a simple table, it misses so many things people really should have regardless of how scaled you are: retries and dead-letter queues are just some of the features your code should not have to be in charge of. Not to mention that it would probably require polling for new messages.
@Jean-vf3pi 1 year ago
Indeed, if an operation on the message fails, or for whatever reason the process operating on it dies, you have now lost your message. Using a database as a queue sounds simple on paper, but anyone who has actually done it for a while has the scars to show for it.
@nikolaimanek582 1 year ago
You could flag it and delete it on completion. The polling is of course problematic, but could be replaced with Supabase's Realtime or Hasura.
@Jean-vf3pi 1 year ago
@nikolaimanek582 That feels like reinventing the wheel. Just use a managed queue service. You do not have to be a specialist who knows all about the underlying tool to do that. What this video presents as truth is but a fallacy.
@Jean-vf3pi 1 year ago
By managed I mean any of the many cloud offerings available.
@lfm3585 1 year ago
You can lock the row, do whatever, release the lock, then delete.
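A minimal sketch of that lock-process-delete flow using SELECT ... FOR UPDATE SKIP LOCKED; the jobs table and its columns here are illustrative, not from the video:

```sql
-- Hypothetical queue table; names are illustrative.
CREATE TABLE jobs (
    id         bigserial PRIMARY KEY,
    payload    jsonb       NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Worker: claim one job, process it, delete it. The row lock is held
-- until COMMIT, so if the worker dies mid-way the transaction rolls
-- back and another worker can pick the job up again.
BEGIN;

SELECT id, payload
FROM jobs
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;   -- concurrent workers skip rows already claimed

-- ... process the job in application code ...

DELETE FROM jobs WHERE id = :claimed_id;   -- the id returned by the SELECT above

COMMIT;
```

Note this keeps a transaction open for the duration of the job, which is fine for short tasks but ties up a connection for long-running ones.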
@shaygus 1 year ago
This approach doesn't solve anything, it just moves complexity from one place to another. In order to handle all the different edge cases each of those systems handles, and in some cases even to handle the main use case (as with Airflow, for example), the engineers would have to write the entire base of logic themselves; this could add up to hundreds or thousands of hours developing something you can have off the shelf. So even though there is complexity in learning and running those systems (which can partly be automated), you can be up and running very quickly with high-quality systems developed by devoted people. By developing everything yourself you will need to handle all of the bugs and issues that the original developers of those systems already handled. So in the end, is it worth it? Not sure at all!
@liamwatts7105 1 year ago
What do you mean by "using Apache Airflow as a data warehouse"? Apache Airflow is mostly just a feature-rich job runner; I don't think it stores any data (apart from the metadata it uses to run the jobs).
@Winslow_Tech 1 year ago
Making sure I understand correctly:

For 1: basically create a cache table and enable shared buffers so the table is kept in memory ahead of a DB call, implement the logic on the server side to check whether a cached entry exists (if it's a hit, read the cache; if it's a miss, execute your query and have the server store the result in the cache table as it returns the response to the front end), then write a cron job to delete old entries.

For 2: create a queue table and insert incoming entries for processing. On the server it seems vague exactly how these will be processed, but I'm guessing you'd have a long-running dequeuing service that locks the table, attempts to process, on failure leaves the entry in the DB (not sure how delete works here; it sounds like you'd have to reverse the transaction if it fails server-side, which seems complex), and on success deletes the entry and moves on to the next. My concern here is how to write this service so it does not slow down other server operations, and how to make it generic enough to use for more than one use case.

For 3: not really sure what 3 is? You mentioned timescale DBs in the intro, and I'm not sure it makes sense to store time-series data in a SQL DB unless you have a very small number of entries, because let's say your users submit time-series data in only one request but it contains tens of thousands of points; then even with a small user base you're putting the whole application at risk by building this inside a SQL DB.

I'm no backend expert, I'm frontend focused, so maybe I'm missing some things, but I'd definitely like to see more elaboration on the queue service and on handling large quantities of data (again, this isn't even large scale; it might only be a few thousand users, but if they all create 10,000 new rows in your table every day I would imagine this would cause problems?). Thanks for the video.
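For point 1 above, a rough sketch of what such a cache table could look like; the UNLOGGED table, key format, and TTL are illustrative assumptions rather than anything prescribed in the video:

```sql
-- UNLOGGED skips the write-ahead log: faster, but contents are lost on
-- crash, which is usually acceptable for a cache.
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb       NOT NULL,
    expires_at timestamptz NOT NULL
);

-- Write (upsert) with a 5-minute TTL.
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42:profile', '{"name": "Ada"}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- Read, ignoring stale entries.
SELECT value
FROM cache
WHERE key = 'user:42:profile' AND expires_at > now();

-- Periodic cleanup, e.g. from cron or the pg_cron extension mentioned in
-- another comment below.
DELETE FROM cache WHERE expires_at <= now();
```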
@mandokir 1 year ago
You're not missing something. The guy that made this video is a clown who has no idea what he's talking about. Even though the fundamental idea of not overengineering might be a good one, he fails to prove it.
@huveja9799 1 year ago
pg_cron from citusdata is super useful
@bnssoftware3292 6 months ago
It seems from the comments that people are generally opposed to this. I personally like the idea of a "jack of all trades" database like PostgreSQL because it keeps everything in one place; no need to orchestrate a bunch of different services. One good example is saving a record from the front end and not having to then call a separate message queue: the database takes care of it for you. It also ensures that no matter what or who saves something to the database, the message always gets queued on the database side.
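A hedged sketch of how that "queued on the database side" idea can be wired up with a trigger; the orders table, channel name, and function here are hypothetical:

```sql
-- Fire a NOTIFY for every insert, so any writer (app code, psql, an
-- import script) enqueues the event without calling a separate queue.
CREATE OR REPLACE FUNCTION notify_new_order() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('new_order', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_notify_after_insert
AFTER INSERT ON orders
FOR EACH ROW
EXECUTE FUNCTION notify_new_order();
```

The trigger could just as well insert into a queue table instead of, or in addition to, notifying.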
@neoplumes 1 year ago
Which smart person was it who said, "Haven't we learned not to use databases for message passing?"
@sandworm9528 1 year ago
Someone who thinks message queues don't have their own database under the hood 😂
@___gg421 1 year ago
Crazy this channel is so small, really good content.
@MrCompSciGuy 1 year ago
Thank you so much, appreciate it :)
@BrazilMentionedHueHue 1 year ago
I think using Postgres as an in-memory database by increasing its cache is not a good take, since the cache size applies to every table in the database, and other big tables could "steal" the cache from the table you actually need cached. Instead (if your RPS is low) I think it would be a good idea to use the language's own facilities (or simple external deps) as a cache; an example would be the expiring_dict library in Python, which makes it very easy to build an LRU cache. However, this only works in server environments with a single instance, which is common in low-RPS use cases.
@user-qi5kb5th7y 1 year ago
postgres is the single best piece of software ever
@nichtverstehen2045 1 year ago
If all you have is a hammer, everything looks like a nail... There is a good reason why specialized services like a message queue exist. I've met developers who knew one thing only and used it for everything; they are a disaster for any project. Following your idea, who needs PostgreSQL when there are good old files and sockets?
@martinvuyk5326 1 year ago
I get your point... but as a Data Engineer I really can't agree with building a DWH on a row-oriented DB. Aaand Airflow is a job handler, not a DWH.
@MrCompSciGuy 1 year ago
Yeah, I realized Airflow is a job handler after finishing the video. I haven't worked with data warehousing much other than some proprietary file-based ETL systems, so I just took the first Google search result for that tech (which was obviously a mistake in hindsight; I should've done some due diligence on that). Imo, at small scale a row DB is sufficient and the cost of having specific tech outweighs the benefits, which is the main point of this video for everything I'm talking about. I should've said "small scale" like 50 times more, probably :p What would you use instead for a DWH? (DuckDB seems reasonable, but I'm not aware of any other open-source columnar DBs.)
@martinvuyk5326 1 year ago
@MrCompSciGuy Yeah, I also googled a bit and didn't find any open-source alternative. I did find a Postgres extension though; it seems a company made it and they offer their services as consultants (didn't read which licence it uses). I have used a row DB for a DWH and it's a pain: at 2 million rows it's doable, at 50M+ it's just impossible if you don't shard the table per month or whatever and build your indexes very carefully. I still haven't tried the Postgres extension for vector DB, which I imagine does columnar calculations. I did use the PostGIS extension, which does up to 4D vector search, and it was very fast. So maybe Postgres is implemented in a way in which it can be column- as well as row-based... so idk.
@Felix-on7mx 1 year ago
This video is gonna pop off!!! It's because it's a great insight.
@eitaDev 1 year ago
The queue part is a bit strange to be honest; I think LISTEN and NOTIFY are the most powerful features in Postgres, and you did not mention them. In my opinion, people overuse queues as a whole... LISTEN and NOTIFY give you reactive programming, using data as the originator of a function, and that is the biggest gain for 99.99999% of systems in the world.
@MrCompSciGuy 1 year ago
Good point about LISTEN/NOTIFY. That way you can avoid polling. But, you'd still want to actually use the push/pop primitives because otherwise you potentially lose data. Of course, one thing not mentioned is that the polling must be done inside a transaction that commits once the worker is done.
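A small sketch of the LISTEN/NOTIFY wiring being discussed; the channel and table names are illustrative, and the actual waiting happens in the client driver's notification API rather than in SQL:

```sql
-- Producer: enqueue a job and wake up listeners. NOTIFY is transactional,
-- so listeners only hear about it once the insert is committed.
BEGIN;
INSERT INTO jobs (payload) VALUES ('{"task": "send_email"}');
NOTIFY job_ready;
COMMIT;

-- Worker connection: subscribe once, then block in the client driver until
-- a notification arrives, and pop using the SKIP LOCKED pattern shown above.
LISTEN job_ready;
```

Notifications are not durable, so they serve as a wake-up signal on top of the table rather than as the queue itself, which matches the point above about still using the push/pop primitives.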
@nubunto 1 year ago
yup, Temporal and Postgres are the only 3rd party systems you need
@MrCompSciGuy 1 year ago
I'd not actually heard of Temporal before; it seems to be a very interesting twist on lambda functions. Super cool, thanks for sharing.
@wojtekwozniak9272 1 year ago
Using Kafka adds a buffer in front of the database, but I get your point.
@ToumalRakesh 1 year ago
Implementing a queue in a relational DB is the bad youtube advice I was looking for. I can't wait to destroy a multi-million dollar project armed with this knowledge. Seriously, just because you can does not mean you should.
@beachbum868 1 year ago
it's just easier to use the other services because they have everything you could need out of the box with all the language parts good to go. no design work needed
@nickbryantfyi 1 year ago
I'm a startup CTO with 12 years in the industry and I approve this message.
@yurisich 1 year ago
Especially the part where they said you could fire half your engineering staff.
@MrCompSciGuy 1 year ago
Hey I did also say you could have them build products instead of maintaining useless infra 😅
@nickbryantfyi 1 year ago
@yurisich A SaaS startup needs 3-4 hackers to get to a couple million ARR, no more.
@iwolfman37 1 year ago
I mean, MySQL is simpler and adheres to SQL guidelines more closely
@oefzdegoeggl 1 year ago
hmm ... good idea to combine the delete with the "select for update skip locked" in one go.
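That combined form might look something like this, reusing the illustrative jobs table from the earlier sketch:

```sql
-- Atomically claim-and-remove the oldest available job.
-- Run inside a transaction that commits only after processing, so a
-- crashed worker rolls the delete back instead of losing the job.
DELETE FROM jobs
WHERE id = (
    SELECT id
    FROM jobs
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id, payload;
```

If the statement is committed immediately instead, a worker crash after the commit loses the job, which is the trade-off other replies in this thread point out.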
@lepatenteux592 1 year ago
Whenever MongoDB is part of your stack, it needs simplification! (And removal of that cancer!)
@gejer123 1 year ago
If you delete the enqueued item in the database, you won't be able to reprocess items if something happens to the node, losing jobs in the process. Updating the queue with a timestamp to lock an item, sending a heartbeat from time to time to keep it locked, and then deleting it when it's done is safer imo.
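A rough sketch of that lock-timestamp/heartbeat approach, which also gives retries and a crude dead-letter cutoff; the columns, the 60-second lease, and the 5-attempt limit are illustrative choices, not the commenter's implementation:

```sql
-- Extend the illustrative jobs table with lease and retry bookkeeping.
ALTER TABLE jobs
    ADD COLUMN locked_until timestamptz,
    ADD COLUMN attempts     int NOT NULL DEFAULT 0;

-- Claim: lease one available job for 60 seconds.
UPDATE jobs
SET locked_until = now() + interval '60 seconds',
    attempts     = attempts + 1
WHERE id = (
    SELECT id
    FROM jobs
    WHERE (locked_until IS NULL OR locked_until < now())
      AND attempts < 5              -- give up after 5 tries ("dead letter")
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id, payload;

-- Heartbeat: extend the lease while the worker is still busy.
UPDATE jobs SET locked_until = now() + interval '60 seconds' WHERE id = :id;

-- Success: remove the job only after the work is done.
DELETE FROM jobs WHERE id = :id;
```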
@zeroows 1 year ago
PostgreSQL is great, but try reading from or writing to a table while writing 80k rows per second to another table.
@awmy3109 1 year ago
Why would you want or need to do that? Does your machine even have the resources to do that?
@huveja9799 1 year ago
After all the BS out there, your video is more than welcome, thanks a lot for it!
@xXxRK0xXx 1 year ago
I use Kafka at work and it has been a painful experience for the most part. Way too much complexity around it for something that, I'm learning, can be done well by something like Postgres.
@HrHaakon 1 year ago
Or just JMS queues. They are boring, and that's why we love them! ^_^
@nikjs 1 year ago
Imagine trying to do this thru some ORM
@MrAtomUniverse 11 months ago
I kind of tried even using PostgreSQL like a Redis cache, but the more you write to PostgreSQL the more complex and hard to manage it gets; you have to deal with the garbage. But yeah, I prefer to keep things simple.
@TarcisioXavierGruppi 1 year ago
I agree things are usually too complex, but I prefer MongoDB as my main database. It can do everything you described, everything Postgres does, and more. No need to use blobs or weird JSON fields, or even create a schema (you can create one if you want). Finally, the aggregation pipeline is crazy powerful.
@lainwired3946 1 year ago
Postgres is best when you know your data will be well structured and you want an easy way to link data across tables. MongoDB is best if you don't always have data that's structured the same. But you know there's nothing wrong with using both, right? It can be powerful to store stuff like users in a relational DB like Postgres, and content like posts and comments in a document-style database. You don't have to choose just one; you can use both to their strengths to great effect.
@J_i_m_ 1 year ago
Mongo is like storing everything in a trash
@MattHudsonAtx 1 year ago
In mongo everything is a weird blob. It doesn't do anything else.
@greenjacket6305 1 year ago
@lainwired3946 Except jsonb lets data be stored in a schemaless fashion. It is effectively a polyglot database.
@rommellagera8543 1 year ago
Good luck with your transactional data especially when $$$ is involved 😊
@vaansimplex 1 year ago
If all you have is a hammer, everything looks like a nail.
@Mozescodes 1 year ago
Postgres is good but it's overrated. At my work we just use Postgres, and we have a lot of JSONB columns that are ugly AF, especially when they're 3 levels deep. It can become such an anti-pattern, trying to be NoSQL with heavily unstructured data. My work colleague switched from Postgres to Mongo/Redis as they mainly had unstructured data, so PostgreSQL was overkill.
@yurisich 1 year ago
I agree, having to play ball with what is essentially someone refusing to create normalized data is demotivating. At least you can cast to ::jsonb when making queries and secretly hope the performance gets bad enough to justify creating dedicated tables for it, but that rarely happens. These places are typically where all the thorns in the app's design are, too.
@MrCompSciGuy 1 year ago
If you need JSONB columns I feel like you've failed architecturally way earlier. NoSQL is an antipattern in itself, at some point somewhere you have a schema - whether that be implicit (from how you use it or how you validate it before ingesting) or explicit in terms of how you deserialize/serialize
@Mozescodes 1 year ago
@MrCompSciGuy If you use SQL in all cases, I'd say you've failed architecture-wise. In my workplace we have parent data, then a split between export/import data sets, and internally other sub-data in a JSONB column array [int option codes 1,2,3]. If you think it would be better to use foreign keys in this case (double foreign keys would be needed, to the parent export/import and to the HS codes), you're an idiot. It would be more applicable to use Mongo in this case; our PostgreSQL query uses nested -> operators to access deeply nested JSONB, which in fact makes SQL not the right decision.
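For what it's worth, deeply nested JSONB access in Postgres looks something like this; the table, keys, and values are invented for illustration, not the commenter's actual schema:

```sql
CREATE TABLE shipments (
    id  bigserial PRIMARY KEY,
    doc jsonb NOT NULL
);

-- Drill into nested structure: -> returns jsonb, ->> returns text.
SELECT doc -> 'export' -> 'hs_codes' ->> 0 AS first_hs_code
FROM shipments
WHERE doc @> '{"export": {"country": "BR"}}';   -- containment test

-- A GIN index keeps @> containment queries fast even on large tables.
CREATE INDEX shipments_doc_gin ON shipments USING gin (doc);
```

Whether that is nicer than normalizing into separate tables is exactly the trade-off being argued in this thread.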
@Sarwaan001 1 year ago
This advice can potentially make you shoot yourself in the foot. Everything is nuanced, and you can do everything right and a Postgres database still can't even scale to one user. (Been there; had a Django-Postgres server that couldn't scale to even 5 users.) A better way to think of feature development is to use the best tool for the job. You wouldn't use Postgres for graph problems (well, you can, but I don't know if you should).
@mcspud 1 year ago
sounds more like you had a django problem
@nichtverstehen2045 1 year ago
reinventing the wheel is over-engineering.
@sarun37823 1 year ago
You are saying everyone's problems look the same as yours; that's where the video doesn't hold up. The video is based on lots of assumptions that won't apply to real-world situations. However, if I were to pick one, it would be MongoDB over Postgres. I don't assume that everyone's problems are the same as mine. MongoDB solves way more common issues than Postgres. People who bash MongoDB just want to apply relational database design to MongoDB and find that it is a bad idea; instead of reflecting on their design, they blame MongoDB. It is easier that way, isn't it? 😢
@azursmile 1 year ago
Don't use PostgresDB for queuing
@xxcryicesxxcryices3382 1 year ago
Hey great video! I’m curious what’s your point of view on using AWS DynamoDB? People say it’s great for scalability, but I worry about things like costs, vendor lock in, and query limitations
@MarthinusBosman 1 year ago
This goes too far. Yes, you can use Postgres as your primary and only database for everything, but no, you should definitely use a dedicated message queue service; it's easier and far more robust than trying to write your own. Also, there's a fine balance between keeping architecture simple and reinventing the wheel.
@Voltra_ 1 year ago
Simple, yet crazy effective
@ShoeboxRacer 1 year ago
Why do I need pg at all? I can do anything pg does in my own code, and then I don't need to be an expert in pg.
@yosserdavanzo9639 1 year ago
This makes me angry but isn't entirely wrong. Lots of quirks would come up, leading to tribal knowledge of how patches/hacks were added, but it's not a philosophically dumb approach.
@FurfelOfficial 1 year ago
For cache, maybe MySQL/MariaDB memory table would be better?
@trumpetpunk42 1 year ago
No. Just no. Why would you use mysql for anything when pg exists?
@FurfelOfficial 1 year ago
@@trumpetpunk42 for in-memory table?
@vasiovasio 1 year ago
Deleting the enqueued item is very, very naive and irresponsible! Everyone who just implements this from the video will be very badly surprised when they see how the workers, for many reasons, cannot execute the jobs, but the record is already deleted and you cannot try again. For everyone who wants to implement it the right way with a database: you must read and implement the principles that AWS SQS uses in its service.
@ordinarygg 1 year ago
Yeeess, I'm not the only one who sees this; it's like kids didn't read the documentation and created their own new NoSQL xD!!!!
@PhucHoang-gz8yu 1 year ago
Cool video, take my sub!!
@luabagg 1 year ago
why db just use plaintext
@colemichae 1 year ago
Yes, keep things small, but build it to be able to scale up to the point where a partial redesign will be needed. Putting it all in Postgres is also a problem if Postgres is the fault, where a message queue and a cache could have helped and lowered costs initially.
@Ombladon1991 1 year ago
This is a great example of a video on how not to do system design. Just the fact that you're using one single point of failure for 3 different systems should be enough, but making a video that inexperienced people may use as a "good" example is absolutely mind-boggling to me. People, please use the individual systems instead of a database engine to do the work that caching or queueing systems are designed for. Generally each system is designed specifically for that task and should perform much better than a similar solution botched together using an outdated relational DB with fewer features and less stability.
@bradfordleak 1 year ago
Regarding the single point of failure, I would assume each of the three use cases would be set up as three different, highly available database systems. But highly available (and multiple) database systems can be very complex, which would (probably) defeat the purpose of using a database to reduce complexity. The biggest problem, though, as far as I can see, would still be the scalability of the database systems. Nonetheless, even if this proposal is bad design for scalable systems, it's actually good to force engineers and designers to think through *why* some things work and don't work.
@MrCompSciGuy 1 year ago
Well sure, but now you've added 3 systems you need to be an expert on, versus just having 3 instances of the same system (doing slightly different things). All of a sudden you're spending your time maintaining production rather than actually solving business problems. When you actually need to scale out (or hit very strict reliability SLOs) this video of course goes out the window... but most people don't and won't need that, and would be better served spending their time on things that actually provide business value. And when they do need it, they can just rearchitect the parts of the system that need rearchitecting. Imo, many times you don't actually need something that "scales"; the data is usually small enough to just put in memory. Of course, one thing not covered here is to just use AWS/Google Cloud services, which is probably acceptable for most people as well.
@oleksandrsova4803 1 year ago
Put some hate speech in this comment when you fail your first latency, scalability, or any other NFR. Only then will you understand why.
@jairajsahgal7101 1 year ago
Thank you
@ToumalRakesh 1 year ago
Please do NOT follow this video's advice. There is a reason specialized solutions exist. While you can do almost anything in PostgreSQL, this is akin to saying you can drive any type of screw with a flathead driver. Using SQL as an MQ is not something you should do, ever. There is a reason brokers exist: performance and interoperability, just to name two. Using SQL like that is an insane overhead. Use the right tool for the job. This guy tells you one screwdriver is enough. Don't follow his advice.
@vitalyl1327 1 year ago
Any third-party dependency is a liability. An ideal system is one that does not have third-party dependencies. In practice this ideal is impossible to achieve, but everyone must try to get as close to it as possible.
@tobiaspucher9597 1 year ago
That's so clever!
@rochaaraujo9320 1 year ago
KISS ❤
@BaldyMacbeard 1 year ago
I'd be inclined to argue you should pick MongoDB over Postgres. It spares you any database design issues and will most likely hold up for a very long time before you hit any sort of design limitations. Storing objects is much more convenient than relational data when you start a new project from scratch.
@skumpuntele8941 1 year ago
get rid of that music
@sivtech 1 year ago
Sqlite
@bobanmilisavljevic7857 1 year ago
Nice! Can't wait to take the Dr. Chuck Postgres course 🦾🥳