Opening a shell in a Cloud Run container would be extremely helpful in some maintenance/debugging use cases.
@TheMomander · 8 months ago
Thank you for your comment. Could you give me some specific examples of maintenance and debugging use cases? Maybe I should create a video around those use cases.
@alexpirine · 8 months ago
@@TheMomander Sure, thanks for the offer! I'll just check with my team first to make sure I come up with the most useful use cases.
@Jasper3555 · 9 months ago
Great feature! It would be cool to elaborate a bit more on how to add a caching layer, such as Redis or Cloud CDN, in this context, as mentioned in the video.
@TheMomander · 8 months ago
Thank you for your comment. I made a note of it and hope to be able to return to it in a future video.
@christophstanger9541 · 7 months ago
If you want to add a caching layer to our example with NGINX, you can either use NGINX's native content caching capabilities, configure "Serve static assets with Cloud CDN", or use the "Firebase Hosting integration" for a custom domain and integrated CDN. I can't share links here on YouTube, but you will find everything about it in our docs.
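If it helps, a minimal sketch of the NGINX content-caching option could look roughly like this (the backend address, cache path, and timings are placeholders to adapt; on Cloud Run the cache directory would need to sit on a writable location such as an in-memory volume):
```nginx
# Sketch of NGINX proxy caching; backend address, paths, and timings are placeholders.
# These directives belong inside the http {} block of your nginx.conf.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m
                 max_size=256m inactive=60m use_temp_path=off;

server {
    listen 8080;

    location /static/ {
        proxy_cache       static_cache;            # use the cache zone defined above
        proxy_cache_valid 200 302 10m;             # keep successful responses for 10 minutes
        proxy_cache_valid 404 1m;                  # cache 404s briefly
        proxy_pass        http://127.0.0.1:3000;   # placeholder address of your app container
        add_header        X-Cache-Status $upstream_cache_status;
    }
}
```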
@destined2doom · 9 months ago
That was a really nice explanation of the new mount features of Cloud Run, and of the discussion on saving costs with a cache. We would like a separate video on caching, please, with a primary focus on costs and how various factors can reduce them. Also, Cloud Armor is really expensive for bootstrapped entrepreneurs who are just starting out, so maybe showing how to deploy some kind of WAF in NGINX would help us stay protected from DDoS. Please look into this aspect. In the #buildinpublic community we see many DDoS attacks, and that kind of forces us to lean towards a VPS. 😅
@TheMomander · 9 months ago
Thank you for the topic suggestion! I'm adding it to my list. In the meantime, here are two ways to protect your Cloud Run services without the extra cost or work of setting up a load balancer:
1. Set max-instances to 1 and you will never pay for more than one container instance. If you get more traffic than one instance can handle, excess requests return errors. Easy to set up, but it can deny requests from legitimate users during an attack.
2. Add a rate-limiting middleware to your code, like "express-rate-limit" if you're using Node.js and Express. It's one or two lines of code in most cases, and there are rate-limiting middlewares for other frameworks too, like Flask. This requires a little more work, but it will shut down attackers while still serving requests from legitimate users, because traffic is limited per IP address.
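For reference, a minimal sketch of option 2 with Node.js, Express, and a recent version of express-rate-limit could look like this (the window size and request limit are placeholder values to tune):
```js
// Minimal sketch of per-IP rate limiting with express-rate-limit; limits are placeholders.
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Cloud Run sits behind a proxy, so trust the forwarded header to get the real client IP.
app.set('trust proxy', 1);

app.use(rateLimit({
  windowMs: 60 * 1000,    // 1-minute window
  max: 60,                // at most 60 requests per IP per window
  standardHeaders: true,  // return RateLimit-* headers
  legacyHeaders: false,   // disable the older X-RateLimit-* headers
}));

app.get('/', (req, res) => res.send('Hello from Cloud Run'));

app.listen(process.env.PORT || 8080);
```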
@destined2doom · 9 months ago
@@TheMomander Dear Martin, thanks a lot. You are a lifesaver! I will do that and wait for your next video. Regards from Bangalore.
@eduferreyraok · 5 months ago
Hello, and congratulations on the video! I would like to know if there's any tutorial on using this feature with a source-code-based deployment (e.g. a Django web server), so I can set up my Cloud Run service from the Dockerfile within the repository. Thank you!
@christophstanger9541 · 4 months ago
Hey @eduferreyraok - there is no dedicated tutorial for a Django web server. However, you can use the source-based deployment command `gcloud run deploy NAME --source=.` together with the volume-mount flags (--add-volume and --add-volume-mount) to deploy from source while directly attaching a volume (like a Cloud Storage bucket) to your Django container.
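Something along these lines should work (the service, region, bucket, volume, and mount-path names are placeholders; on older gcloud versions you may need the `gcloud beta run` track for the volume flags):
```sh
# Deploy from source and attach a Cloud Storage bucket as a volume; names are placeholders.
gcloud run deploy my-django-service \
  --source=. \
  --region=us-central1 \
  --add-volume=name=media,type=cloud-storage,bucket=my-media-bucket \
  --add-volume-mount=volume=media,mount-path=/mnt/media
```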
@jsalsman · 9 months ago
have been doing this with gcsfuse/fusermount for years
@christophstanger9541 · 9 months ago
Great to hear. We hope this has now become simpler to set up with volume mounts.
@jsalsman · 9 months ago
@@christophstanger9541 Absolutely, an officially supported solution is most welcome and I will be moving to it soon.
@johanastborg8066 · 9 months ago
it works just fine 👍🏼
@vijay80 · 9 months ago
This is a life saver!
@AdnanHodzic · 6 months ago
Since this video covers a very basic use case, it would be great if you made a more in-depth video on how to optimize the speed and latency of reads/writes between GCS bucket mounts and Cloud Run. For example, I've attached a GCS bucket volume and mounted it at "/var/www/html" for a WordPress instance running on Cloud Run. The site is barely usable because of how slow the reads/writes to/from GCS are (to be fair, there are a lot of files). If I replace the GCS volume with, say, an in-memory volume, everything is fast and works great, but then I lose file persistence and everything is gone after a re-deployment.
@TheMomander · 6 months ago
Peter Kracik has published a blog post titled "Running Wordpress website on Google Cloud Run - simple and cheap". He uses the WP Stateless plugin, which will take care of storing images in your Google Cloud storage bucket. Would that be a better fit for your use case?
@__mar0ne__ · 28 days ago
I'm facing the same problem! For example, if you use the official WordPress image from Docker Hub directly and run it on Cloud Run, then any plugins you install in your WordPress are lost once the container is re-deployed! And if you use a GCS bucket as the volume for "/var/www/html", input/output takes forever!
@mohamedkarim-p7j · 4 months ago
Thanks for sharing 👍
@TheMomander · 4 months ago
Happy to hear you found it useful!
@btbutler55 · 9 months ago
A nice feature. Will it be available on gen 2 Cloud Functions as well? These use Cloud Run containers in the background, and being able to mount buckets onto them would be extremely useful.
@christophstanger9541 · 7 months ago
Yeah, you can :) As you mentioned, Cloud Functions gen 2 are actually Cloud Run services, which you can see in the console as well: if you have a Cloud Function deployed, you will see a Cloud Run service with the exact same name. And if you mount a bucket to this service, that means you mount it to your gen 2 Cloud Function. Test it out :)
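For example, updating the underlying service could look roughly like this (the function/service, region, bucket, and path names are placeholders):
```sh
# A gen 2 function named "my-function" is backed by a Cloud Run service with the same name.
gcloud beta run services update my-function \
  --region=us-central1 \
  --add-volume=name=data,type=cloud-storage,bucket=my-bucket \
  --add-volume-mount=volume=data,mount-path=/mnt/data
```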
@RachelC-w5q · 7 months ago
Looks like the `gcloud run deploy` command does not have the --add-volume option. I've also updated to the latest version via `gcloud components update`.
@RachelC-w5q · 7 months ago
Looks like it's a beta command :) Thanks for this doc!
@NikitaThombre-w4c · 9 months ago
How can I access App Engine privately using an Application Load Balancer? Can you help me with this approach?
@TheMomander · 9 months ago
Why do you need a load balancer in front of App Engine? If you want to make your App Engine app private, use Identity-Aware Proxy. No load balancer will be needed.
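If it helps, turning on IAP for an App Engine app can be done in the console under Security > Identity-Aware Proxy, or roughly like this with gcloud (the member is a placeholder, and the OAuth consent screen must be configured first):
```sh
# Enable Identity-Aware Proxy for the App Engine app in the current project.
gcloud iap web enable --resource-type=app-engine

# Allow a specific user through IAP; the member is a placeholder.
gcloud iap web add-iam-policy-binding \
  --resource-type=app-engine \
  --member=user:alice@example.com \
  --role=roles/iap.httpsResourceAccessor
```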
@suetamrossi · 9 months ago
Nice!! Is there any possibility that this feature comes to GKE in the future?
@TheMomander · 7 months ago
Good to hear you liked the video! I don't know the answer to your question as I focus on serverless computing.
@christophstanger9541 · 7 months ago
You can actually mount a bucket as a GKE volume as well - look for the "Cloud Storage FUSE CSI driver" in our GKE documentation.
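As a rough sketch (not a full setup - the bucket, service account, and image are placeholders, and the cluster needs the CSI driver and Workload Identity configured as described in the docs), a Pod that mounts a bucket could look like this:
```yaml
# Sketch of a Pod mounting a bucket via the Cloud Storage FUSE CSI driver; names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-example
  annotations:
    gke-gcsfuse/volumes: "true"   # asks GKE to inject the gcsfuse sidecar
spec:
  serviceAccountName: my-ksa      # Kubernetes SA linked to a Google SA with access to the bucket
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: gcs-data
      mountPath: /data
  volumes:
  - name: gcs-data
    csi:
      driver: gcsfuse.csi.storage.gke.io
      volumeAttributes:
        bucketName: my-bucket
```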
@leamon9024 · 8 months ago
Does a Cloud Run job also support the volume mount feature?
@christophstanger9541 · 7 months ago
Yes, it does. We should have mentioned that in the video - please refer to our docs and the blog post linked in the description to read how you can mount a volume to a Cloud Run job.
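Roughly, it looks like this (the job, region, bucket, and path names are placeholders; older gcloud versions may need `gcloud beta run jobs`):
```sh
# Attach a Cloud Storage bucket as a volume to an existing Cloud Run job; names are placeholders.
gcloud run jobs update my-job \
  --region=us-central1 \
  --add-volume=name=data,type=cloud-storage,bucket=my-bucket \
  --add-volume-mount=volume=data,mount-path=/mnt/data
```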
@vyacheslavgorbov7932 · 8 months ago
Can you mount SQLite .db files?
@TheMomander · 8 months ago
Yes, but it won't work very well. You are better off using a separate database tier, like CloudSQL or Firestore.
@NamerMedina · 8 months ago
Does the CLI allow for removing volumes?
@christophstanger9541 · 7 months ago
Yes - with `gcloud beta run services update [SERVICE_NAME] --clear-volume-mounts`.
@gattabat · 9 months ago
Wow, well explained :)
@minisoftpark · 9 months ago
❤ great
@marcelozlot · 9 months ago
It would be great if it actually became possible to use SQLite in a performant and simple way. That's not the case… It's not Google's "best practice", but Cloud Run with a small local SSD would be wonderful!
@TheMomander · 9 months ago
I hear you; that would be pretty useful. But volume mounts are great for frequent reads, not for frequent writes. You can get unexpected behavior when multiple container instances write to the same file at the same time. Luckily there are products that are optimized for frequent writes: databases like Firestore or CloudSQL. You can use one of them together with Cloud Run. If there is a hard requirement to use sqlite, a single Compute Engine instance would work best. It would be a "pet" server that you set up once and watch carefully. Contrast that with Cloud Run, where container instances are "cattle, not pets". We don't name individual cattle and when they die, we replace them quickly. If a container instance crashes, Cloud Run replaces it quickly. If we need more container instances to deal with increased traffic, Cloud Run creates them quickly. By doing this, our web applications become more resilient and scalable.
@eduferreyraok · 5 months ago
@@TheMomander What you say is true, but for a tiny project with a single container, SQLite could be an excellent option, since no other container accesses it and everything happens in a controlled environment (container server - SQLite - GCS bucket).
@TheMomander · 5 months ago
@@eduferreyraok I haven't tried it myself. If you do try it out with one of your projects, please let us know how it goes!