Derek, I wanted to thank you. I am learning so much from you, and I'm becoming a better software engineer with every video you upload.
@CodeOpinion • 8 days ago
I'm glad you're finding the videos helpful!
@beowulf_of_wall_st • 7 days ago
This is a great tip. I realized this when I joined a team that had 50 different serverless units deployed, with tons of complexity in deployment; I consolidated everything into one container with different endpoints and life got so much easier. The only reason to do microservices is to scale each service independently, but serverless lets you scale according to total volume, so there's nothing wasted by having rarely used code living alongside stuff on a hot path. I found that you can have your monolith publish to a message queue and then subscribe to its own messages for decoupling; this insight changed everything for my serverless work. If you use a framework that lets you mount subapplications under one common router, you can build natural cut points into your monolith around modules that may need to be broken out into standalone deployments once you prove there is a concrete benefit to doing so.
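The self-subscription idea in the comment above can be sketched in a few lines of pure Python. Everything here is illustrative (the `MessageBus` class and the "orders"/"billing" module names are made up for the example); in a real deployment the bus would be an external queue such as SQS or SNS, which is what makes the cut points real:

```python
# Minimal sketch of a modular monolith that publishes to a message bus
# and subscribes to its own messages. MessageBus is an in-process
# stand-in for an external queue (e.g. SQS/SNS); the module names are
# hypothetical.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
invoices = []

# "billing" module: coupled to "orders" only through the message
# contract, so it could later be broken out into its own deployment
# without the "orders" module changing.
def create_invoice(event):
    invoices.append({"order_id": event["order_id"], "amount": event["amount"]})

bus.subscribe("order.placed", create_invoice)

# "orders" module publishes an event rather than calling billing directly.
def place_order(order_id, amount):
    bus.publish("order.placed", {"order_id": order_id, "amount": amount})

place_order("o-1", 42.0)
print(invoices)  # [{'order_id': 'o-1', 'amount': 42.0}]
```

Swapping the in-process bus for a real queue is then a change to the transport, not to the modules' contracts.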
@CodeOpinion • 4 days ago
Thanks for the comment! It should help people recognize their own situation, because I get the sense they will arrive at the same insight at some point.
@adambickford8720 • 8 days ago
A stateless modular monolith can do almost everything MSA does, just far more simply. There are ways to enforce logical separation between the modules in code. Even if you think you 'need' MSA because 'our reports service takes way more RAM than our web API', you can handle that with routing: just have the ALB route all your `/reports` requests to the instances that have a lot of RAM. If you need to write the services in different languages, have mutually exclusive dependencies in a single language, etc., then MSA might be a better call. But if you can get away with just properly managing a modular monolith, you should.
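The routing idea above can be sketched as a plain function. The prefix and target-group names are made up for illustration; a real ALB does this declaratively via listener rules with path-pattern conditions, not in application code:

```python
# Illustrative stand-in for ALB path-based routing: requests whose path
# matches a rule's prefix go to a dedicated target group (e.g. the
# high-memory instances running the same monolith), everything else to
# the default group. Names are hypothetical.
def pick_target_group(path):
    rules = [("/reports", "high-memory-instances")]  # ordered listener rules
    for prefix, target_group in rules:
        if path.startswith(prefix):
            return target_group
    return "default-instances"

print(pick_target_group("/reports/monthly"))  # high-memory-instances
print(pick_target_group("/api/users"))        # default-instances
```

The point is that both target groups run the identical monolith artifact; only the infrastructure routing, not the codebase, knows about the resource difference.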
@spomytkin • 8 days ago
While I completely agree with the point about coupling, I believe you are missing a couple of important points about AWS Lambda/X-functions: 1) the GB-seconds pricing model, 2) limitations (execution duration, disk space, deployment package size, payload size), 3) cold starts. 1 and 2 force you to trim your deployment unit's footprint, and 3 diminishes (or prohibits) the benefits of solutions like connection pools and app-tier caching. In short, you just trade some $$ and design freedom for the absence of configuration and support hassle. Somewhere in between Lambda and EC2 you can pick Elastic Beanstalk.
@Antash_ • 6 days ago
This is a very incisive comment, and I had similar thoughts while watching, since I've experienced this myself. My client insisted on using AWS Lambda, and to avoid being overburdened with managing thousands of functions, we decided to build a couple of monoliths. That worked out OK-ish, and I still think it was the better approach, but cold starts became a huge problem for some of the services (the user-facing ones, usually HTTP APIs but also some event-driven ones that need to give quick results), which then forced us to start using provisioned concurrency (i.e. paying to keep some Lambda workers "warm"). Even though some parts could have been optimized instead of throwing money at the problem, my main takeaway from that project is that serverless is not a solution for everything.
@CodeOpinion • 4 days ago
Absolutely, thanks for the comments. All of those are considerations for whether serverless is even applicable in your context.
@TomasJansson • 7 days ago
Another great video. Being on Vercel for our site is basically what you describe here. We have one codebase for the site, but everything is deployed as functions by Vercel. That part is abstracted away from me as the developer using Vercel, but in reality that is what we have.
@CodeOpinion • 4 days ago
Exactly, it's not 1:1. Your development view is different from the physical deployment view.
@corsaro0071 • 6 days ago
Maybe I didn't get it 100%, but I think there is a bit of confusion. A monolith is a deployment model; a single codebase is what I'd refer to as a monorepo. So you can have a monolith with a codebase spread across different repos (a library per module, probably), but a monorepo that deploys to several different (serverless or not) processes is SOA. Semantics aside, always appreciate your content.
@CodeOpinion • 4 days ago
Thanks. Ya, we probably have a disconnect, because I wouldn't define it exactly that way. Like I mentioned in the video, you can have a single codebase that produces multiple deployment artifacts (e.g., containers). Or we're talking about the same thing and I'm not realizing it. Either way, appreciate the comment and support!
@AtikBayraktar • 8 days ago
I was just thinking about this yesterday and searching in AWS, and this video gets uploaded today :)
@CodeOpinion • 8 days ago
I'm a mind reader!
@aivarelis • 3 days ago
Wow, so you may run as serverless, deploy as a monolith (single Lambda), but structure your code like microservices... Funny? Yes. Practical? Not really... 😆 But understanding code architecture, deployment strategy, and execution model separately is so powerful - it gives you true flexibility in designing scalable and maintainable systems 🙏
@thedacian123 • 6 days ago
In general, Azure Functions contain very little code (a transaction script), and depending on the service tier they have a limited amount of execution time.
@fabiomoggi • 8 days ago
Do we really need a load balancer / API gateway in a serverless architecture? Part of the beauty of serverless is letting your cloud provider handle scalability based on your traffic while maintaining a single URL entry point for your functions. I would assume an LB/gateway is already behind the function calls. Wondering what the best practice is for the monolith examples in the video.
@gabrielverle9469 • 8 days ago
An API gateway can do a few more things, like request limits based on tiers, so sometimes it's better to delegate this outside the core code logic.
@ZorakWars • 6 days ago
Use an interpreted language for your backend serverless monolith or cold starts will kill you.
@victormendoza3295 • 7 days ago
I wasn't even looking for your channel today, but after seeing nothing but crap on YouTube while trying to find a good video, yours popped up. Thank god, aside from all the mind-numbing BS.
@CodeOpinion • 4 days ago
Well glad mine wasn't mind numbing 😂
@LowrollerWTF • 6 days ago
Instant subscribe, very well done video! However, in the case of a serverless monolith, how does it behave with cold starts? You would still have to initialize the whole monolith before execution, right? Or can you somehow initialize separate modules?
@CodeOpinion • 4 days ago
Ya, cold starts can be an issue; it really depends on how often and in what situations they would occur, and whether that's acceptable for you. For example, if you have predictable start times, you can provision for that in AWS Lambda (provisioned concurrency).
@LowrollerWTF • 3 days ago
@CodeOpinion I was afraid you'd say that, and besides, I assume this would of course depend on the dependencies it has to deal with... but anyway, still very valuable insight, and there are several use cases for this approach! But I imagine having a single NestJS lambda would not make sense.
@lodevijk • 7 days ago
Great explanation. I think something like 40 microservices scattered around the network is inherently a worse turd pile than a monolith with the same functionality.
@CodeOpinion • 4 days ago
Agree
@melnor82 • 6 days ago
Great video. What's your opinion of the .NET Aspire "framework" ?
@CodeOpinion • 4 days ago
I looked at it a while ago and thought it was interesting from a dev-environment perspective, but I wasn't about to jump out and switch to it from a simple docker-compose. kzbin.info/www/bejne/o5K5iaGhjqp3f6c
@jntaca • 8 days ago
Another master class of common sense
@jntaca • 8 days ago
BTW, I use a horizontally scaled monolith, CPU-heavy async services, Redis Sentinel for web sessions, and Postgres replicas. Never had a problem.
@SkyLee91 • 8 days ago
How do you handle the coupling?
@pilotboba • 8 days ago
@codeopinion How would you approach an app deployed to Lambda that also does background processing that might take longer than the 15-minute maximum a Lambda is allowed to run?

For example, we have a provisioning app that is called to provision new tenants of our application. It's called by a corporate portal. When the provisioning API gets a call to provision a tenant, it returns an OK (Accepted) and queues the job with Hangfire. The background process initializes a database for the tenant, seeds it, etc. It also updates a DB with tenant info for the app's tenant service to use. However, it could take more than 15 minutes to run.

I'd like to make this a serverless monolith rather than an ECS service, because something like 90% of the day it's not needed. (It's actually deployed on EC2 VMs right now.) My thought is a Lambda to host the provisioning API, plus a container: the API would send a message to a queue, and another Lambda would start an ECS task to do the processing. But this seems like a lot of moving parts. Any advice?
@gabrielverle9469 • 8 days ago
You can add a progress tracker for all the tasks that need to be done. If the remaining time of the Lambda reaches a certain limit (e.g., 30 seconds), it calls the same Lambda again, passing the last checkpoint. It's not ideal, as this is a weird use case for Lambda, but if your workload varies and may go a long time without requests, Lambda might still be a good fit.
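The checkpoint pattern described above can be sketched roughly as follows. `get_remaining_time_in_millis()` is the real Lambda context method, but everything else here is a stand-in: `reinvoke` substitutes for an async `lambda.invoke(...)` self-call via boto3, and the context is faked so the sketch runs locally:

```python
# Sketch of a re-entrant Lambda: process work items until the remaining
# time drops below a threshold, then hand the last checkpoint to a
# fresh invocation of the same function. Names other than
# get_remaining_time_in_millis() are hypothetical.
THRESHOLD_MS = 30_000  # re-invoke when fewer than 30s remain

def handler(event, context, reinvoke):
    items = event["items"]
    i = event.get("checkpoint", 0)
    while i < len(items):
        if context.get_remaining_time_in_millis() < THRESHOLD_MS:
            # Out of time: restart ourselves from the checkpoint.
            reinvoke({"items": items, "checkpoint": i})
            return {"status": "continued", "checkpoint": i}
        process(items[i])  # the actual unit of work
        i += 1
    return {"status": "done", "processed": i}

processed = []
def process(item):
    processed.append(item)

class FakeContext:
    """Simulates the clock running low after three time checks."""
    def __init__(self):
        self.calls = 0
    def get_remaining_time_in_millis(self):
        self.calls += 1
        return 900_000 if self.calls <= 3 else 10_000

invocations = []
result = handler({"items": ["a", "b", "c", "d", "e"]}, FakeContext(),
                 reinvoke=invocations.append)
print(result)       # {'status': 'continued', 'checkpoint': 3}
print(invocations)  # [{'items': ['a', 'b', 'c', 'd', 'e'], 'checkpoint': 3}]
```

For this to be safe, each unit of work has to be idempotent or the checkpoint has to be persisted, since a timeout can land between processing an item and recording progress.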
@CodeOpinion • 8 days ago
Yes, this is a good approach: make it re-entrant. Or, often what I do is break things into smaller chunks. You might be able to iterate over something and enqueue more work to be done separately.
@sirg7573 • 8 days ago
In Azure Functions, we have something called "Durable Functions" for situations like this. I'm sure there's something similar in AWS Lambda.
@pilotboba • 8 days ago
@CodeOpinion Sounds like Step Functions.
@pilotboba • 8 days ago
@sirg7573 I think AWS Batch might be the closest to that. I'll have to research it. But I think now we aren't talking about a monolith anymore.
7 days ago
Kurrent? Where the f is Event Store?
@AkosLukacs42 • 8 days ago
Yes, scale to zero; not everybody has constant high load. It's funny that AWS has this option but Azure doesn't (no one-line library option to just serverless-ify an existing API). Do you know any good scale-to-zero-ish DB options?
@funkdefied1 • 8 days ago
DynamoDB is the industry-standard scale-to-zero DB. They even have a relational option.
@CodeOpinion • 8 days ago
I want to say AWS Aurora can do this now
@hiyelbaz • 7 days ago
Azure Container Apps with consumption based pricing = scale-to-zero serverless with any tech stack.
@fabiomoggi • 3 days ago
I'm skeptical about scaling functions to 0, as it usually leads to cold starts. I'd rather have a few cheap instances warmed up for low traffic than sacrifice user experience.