I really like your presentation and explanation style; you do a fantastic job incorporating multi-colored visual elements. Personally, it helps me digest and focus on the multiple moving pieces of the architecture: great for viewer engagement.
@interviewpen • 8 months ago
Thanks, glad you liked it!
@randomforest_dev • 8 months ago
I'm confused. The explanation isn't specific to the 'Code Execution System' logic; it's just distributed scaling in general.
@thinkingcitizen • 8 months ago
I think that's the point....
@interviewpen • 8 months ago
There are a lot more challenges involved when we have to spin pods up and down at the user's request. Sure, we can use Kubernetes for distributed scaling in general, but the way we're using it here is not to host a distributed service but rather to create a fresh isolated environment every time a user makes a request.
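A minimal sketch of that per-request lifecycle (names like `FakeK8sBackend` and `execute_user_code` are illustrative, not from the video; the backend only simulates what real Kubernetes client calls would do):

```python
import uuid

class FakeK8sBackend:
    """Simulates pod creation/deletion so the flow is runnable anywhere."""
    def __init__(self):
        self.pods = {}

    def create_pod(self, name, image):
        self.pods[name] = {"image": image, "phase": "Running"}

    def delete_pod(self, name):
        del self.pods[name]

def execute_user_code(backend, code, image="python:3.12-alpine"):
    # A fresh pod per request: nothing is shared between users.
    pod_name = f"runner-{uuid.uuid4().hex[:8]}"
    backend.create_pod(pod_name, image)
    try:
        # In a real system we'd exec `code` inside the pod and stream output.
        result = f"ran {len(code)} bytes in {pod_name}"
    finally:
        # Tear the pod down so no state leaks into the next request.
        backend.delete_pod(pod_name)
    return result

backend = FakeK8sBackend()
result = execute_user_code(backend, "print('hi')")
```

The point is the create/execute/delete cycle per request, which is a different usage pattern from keeping a long-lived service deployment scaled out.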
@Rustamushko • 8 months ago
I expected to see an architecture of scalable FaaS with interpreters/compilers. K8s is a concrete orchestration system that should be selected from among other similar systems based on some rationale. And how does a user receive the execution result, synchronously or asynchronously? Nothing about this?
@rigveddesai5843 • 8 months ago
I'm not sure why we can't just run a basic check on the server side, when it spins up, for whether the standby container exists or not. Why is this not optimal? Is it a bad idea for the server to initially check for the container, spin up or use the existing standby accordingly, and then assume the standby container exists for the entirety of its uptime?
@interviewpen • 8 months ago
The challenge comes when we start scaling the API server horizontally. We could easily end up with race conditions if there was only one standby pod, so we'd probably want one standby pod per API node. This prevents concurrent requests to different nodes from affecting each other.
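A small sketch of the one-standby-per-node idea (the `StandbyManager` class and its `claim` method are illustrative names; pod spin-up is simulated with strings):

```python
import itertools

class StandbyManager:
    """Each API node owns exactly one warm standby pod. Claiming it
    immediately spins up a replacement, so concurrent requests hitting
    different nodes never contend for the same standby."""
    _ids = itertools.count()  # stand-in for unique pod names

    def __init__(self, node_name):
        self.node_name = node_name
        self.standby = self._spin_up()

    def _spin_up(self):
        # A real implementation would create a pod and wait for Ready.
        return f"{self.node_name}-standby-{next(self._ids)}"

    def claim(self):
        # Hand out the warm pod and replace it right away.
        pod, self.standby = self.standby, self._spin_up()
        return pod

node_a = StandbyManager("api-a")
node_b = StandbyManager("api-b")
pod_a = node_a.claim()
pod_b = node_b.claim()
```

With a single shared standby, both nodes could claim the same pod at once; giving each node its own eliminates that race without any cross-node coordination.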
@bhanuprakashrao1460 • 4 months ago
Containers run on Linux nodes using cgroups.
@mateuszsiwiecki6267 • 7 months ago
Hi, I really like your channel and have learned a lot from it. However, in this particular case, I think your solution is not good enough because it mixes application with infrastructure. A Docker-in-Docker solution would work much better here: it would separate load balancing and access to your service from the application itself, which would be responsible for creating an environment for the given language on demand. Besides that, keep up the good work. I always look forward to your next videos!
@interviewpen • 7 months ago
Thanks for the suggestion! I chose Kubernetes here since it allows the infrastructure executing the code to scale separately from the API servers; of course, the API could be deployed on any infrastructure system we want. Docker-in-Docker would work as well and has the benefit of being easier to set up for a smaller-scale system. Thanks for watching!