Speed up homelab patching with a CACHE

11,846 views

apalrd's adventures

1 day ago

Comments: 75
@deadlast561 · 1 month ago
Thanks for this. You always have the coolest ideas and most practical solutions, and it's very inspiring.
@Larz99 · 29 days ago
Excellent idea! Yet another very helpful service on my network. Thanks as always for your careful explanations. Great work.
@joakimsilverdrake · 1 month ago
"We don't need IPv4" - Love it. IPv6 isn't the future. IPv6 is now! Perfect guide. Something I need to add to my ever growing list of things to do.
@N0Reaver · 1 month ago
Your solution is quite simple and elegant. What about apt-cacher-ng?
@LordApophis100 · 1 month ago
apt-cacher-ng is simpler to set up: you just add a proxy directive to apt and it will use it to access all apt repos. There's no need to manually configure each deb repo you want to cache; apt-cacher-ng will cache everything your system uses.
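For context, the proxy directive mentioned above is a single apt configuration line. A minimal sketch, assuming a hypothetical cache hostname and apt-cacher-ng's default port 3142:

```
# /etc/apt/apt.conf.d/02proxy -- the hostname is an example, not a real host
Acquire::http::Proxy "http://apt-cache.example.lan:3142";
```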
@Darkk6969 · 1 month ago
Sweet! This actually gave me some ideas about copying the config files onto my fresh Debian installs. I was using self-hosted Gitea, but having a simple wget command piped to bash is a better idea.
@Gagootron · 1 month ago
Great Video as always! Really nice how much faster my updates are now with a cache in place.
@gaetan3340 · 1 month ago
Perfect and thank you for sharing. Simple yet super efficient and works for any repository
@kevinshumaker3753 · 1 month ago
Was running apt-cacher-ng on a Pi before, and a Debian VM most recently. This was 'easier' to set up, mostly. I have a dozen or so bare-metal and VM Debians, and at least that many Pis. It took less than an hour to spin up a minimal VM and install, another 20-30 minutes to get the rest set up, and it's cooking away for them all. It'd be nice to have a text doc listing what needs to be changed in your scripts for one's own domain and particular setup, but other than that, very nice.
@Denis-in6ur · 1 month ago
Very nice video and overall good idea! I personally like Ansible more than a shell script - especially because you plan to reconfigure all your servers. But that doesn't matter as long as it works for you. :)
@luisalcarazleal · 1 month ago
Now that I have more than 20 LXCs, I'm starting to automate updates with Ansible, and this will come in handy to save some bandwidth (not that I'm behind a metered connection, but if the servers are up 24/7, better put them to good use). Thanks for the video.
@TheCreat · 1 month ago
I was actually recently thinking about looking into caching for fun, but decided it was too much work for not enough gain. Thanks a bunch, it's now basically a turnkey solution for me (not quite, but close enough), so that equation has flipped and I can "just do it". Neat!
@CharcoalDaddyBBQ · 1 month ago
Nice! Gonna set this up today
@bokami3445 · 1 month ago
PERFECT! A project for the weekend! Thank you!
@dmynerd78 · 1 month ago
Thank you for posting this! I've been meaning to set up apt-cacher-ng for a while now but I'd rather use nginx for the exact reasons you explained
@maurolimaok · 1 month ago
What a nice video. Thanks!
@benlenau · 1 month ago
Excellent work. Will immediately spin up an LXC with nginx + apt cache and try this out 🙂 PS: added a couple of lines so that icons also load.
@projectpanic2291 · 1 month ago
I was worried about a chicken-and-egg problem running the cacher in a container, especially applying the cache to sources.list on the Proxmox hosts. The solution was pretty simple: sources.list supports multiple sources and will prefer the first entry. So I have my local cache entries listed first, followed by the default upstream sources.
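The fallback ordering described above can be sketched like this (the cache hostname is an example, not the video's actual config):

```
# /etc/apt/sources.list -- local cache listed first, upstream as fallback
deb http://debcache.example.lan/debian bookworm main contrib
deb http://deb.debian.org/debian bookworm main contrib
```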
@apalrdsadventures · 1 month ago
Also, caching updates isn't going to interrupt operation of PVE, so it's important but not on the critical path for anything
@projectpanic2291 · 1 month ago
@apalrdsadventures Exactly, not concerned about operational issues. I have the hosts and all containers managed by Ansible. After your video I set up the containers to point to the cache before their first 'apt update'. Whenever I want to rebuild all the containers, I don't want to have to treat the cacher container as "special" and bring it up first. The Proxmox hosts are similarly set up with a custom ISO with auto answers.
@maxdiamond55 · 1 month ago
Great video as always. Thanks!
@oooee · 27 days ago
At the 5:00 mark he mentions that a proxy doesn't let him cache HTTPS traffic, but how does setting up a mirrored cache using an HTTP web server do any better? Does modifying the source list allow for caching HTTPS content?
@apalrdsadventures · 26 days ago
Without modifying sources.list, apt will always validate that the server's certificate matches the domain it is trying to access, and that the server's certificate is in the system chain of trust (which on Linux comes from Mozilla). So you cannot intercept a connection to a public domain name which you do not control. By modifying sources.list, the domain is now different, so I can own the certificate for that domain, but I can also just remove the 's' and use HTTP.
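As a minimal sketch of that sources.list rewrite (the cache hostname and file path are illustrative assumptions, not the video's actual script), dropping the 's' and pointing the origin at a local cache:

```shell
# Demonstrate rewriting an upstream HTTPS repo URL to a local HTTP cache.
# Hostname and paths are hypothetical examples.
mkdir -p /tmp/apt-demo
cat > /tmp/apt-demo/debian.list <<'EOF'
deb https://deb.debian.org/debian bookworm main
EOF

# Swap the upstream HTTPS origin for a plain-HTTP local cache path
sed -i 's#https://deb.debian.org/debian#http://debcache.example.lan/debian#' /tmp/apt-demo/debian.list
cat /tmp/apt-demo/debian.list
```

Apt still verifies the GPG signatures on the repo metadata, so dropping TLS on the local segment does not bypass package authentication.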
@zyghom · 1 month ago
A long time ago I thought about the same, but honestly it is a lot to set up. I'm surprised nobody has made it yet as a preconfigured package or container.
@cheebadigga4092 · 1 month ago
Suggestion to improve the rewrite script's readability: use a cat/EOF heredoc to pipe a multiline string to a file.
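A heredoc version of that idea might look like this (the path and contents are illustrative only, not the actual rewrite script):

```shell
# Write a multiline file in one readable block instead of repeated echo lines.
# The quoted 'EOF' delimiter prevents variable expansion inside the body.
cat > /tmp/demo.sources <<'EOF'
Types: deb
URIs: http://debcache.example.lan/debian
Suites: bookworm
Components: main
EOF
cat /tmp/demo.sources
```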
@kajraske2002 · 1 month ago
Man I was just reading about this. Does anyone know how this compares to something like a Squid cache in pf/OPNsense? I've been planning on getting a dedicated machine for *sense, wondering how much I need to worry about storage on it.
@djordje1999 · 1 month ago
It would be interesting to show how to cache Docker dependencies: a homelab cluster with a bunch of VMs, with Docker inside each VM.
@BladeWDR · 1 month ago
For reference, the .sources files are deb822 format. It's intended to reduce the need for separate keyring files, IIRC. Also, I can't help but want to try and write an Ansible playbook to replace that bash script. 😂
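For reference, a deb822-style .sources file looks roughly like this (the URI is an illustrative cache hostname; the keyring path is Debian's standard archive keyring):

```
# /etc/apt/sources.list.d/debian.sources -- deb822 format
Types: deb
URIs: http://debcache.example.lan/debian
Suites: bookworm bookworm-updates
Components: main contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```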
@NFvidoJagg2 · 1 month ago
This can also be done with lancache by adding the rules in there, but in that case you're also adding another DNS resolver.
@klaernie · 1 month ago
Hmm, not in a place to try this right now, but one might rewrite all repos to put the repo hostname as the first part of the URL, then extract it again in the nginx config, making it able to cache arbitrary repos.
@seaSwann · 25 days ago
Is there a way to do this for youtube videos or even DRM media like Netflix?
@yezperdk · 1 month ago
How do you keep all of your containers updated without having to do manual apt upgrades on each one?
@iehfned · 1 month ago
Do you have an automatic way of generating IPv6 DNS entries based on SLAAC, or is this step still a manual process? I assume you use OPNsense?
@demanuDJ · 1 month ago
I have to do the same for my openSUSE systems.
@andreasheckel1061 · 1 month ago
Would love to see a similar solution for RH based linux distros :-)
@markarca6360 · 3 days ago
What about setting up a local DNS and using a CNAME for the local cache?
@apalrdsadventures · 3 days ago
CNAMEs still validate against the certificate of the name that was originally typed.
@shawnsomething4360 · 1 month ago
While looping over files with find, the variable mirror gets emptied after the first loop. This results in the first file being OK and subsequent files getting incomplete URLs written. I am, however, not familiar enough with bash scripting to explain why. Edit: Also, in your blog you place the nginx log files in /var/debcache/, but at the end of the blog you tail files in /etc/debcache/.
@apalrdsadventures · 1 month ago
fixed both. Needed to export the name for the second bash call.
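The export matters because `find -exec bash -c '…'` spawns a child shell, which only inherits exported variables. A toy illustration (the variable name, hostname, and paths are hypothetical, not the actual script):

```shell
# Child shells started by find -exec only see exported variables.
mirror="debcache.example.lan"
export mirror

mkdir -p /tmp/lists
echo 'deb https://deb.debian.org/debian bookworm main' > /tmp/lists/a.list

# Without the export above, $mirror would expand to empty in the child bash.
find /tmp/lists -name '*.list' -exec bash -c 'echo "rewriting $1 to use $mirror"' _ {} \;
```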
@shawnsomething4360 · 1 month ago
@apalrdsadventures Tested, and it works. Thank you.
@ttass-m9d · 1 month ago
Will this break the dist-upgrade process, say Ubuntu 22.04 to 24.04, since it renames the repositories in sources.list?
@Paxsali · 1 month ago
Does anyone know how to do the same for conda, pip, and flatpak?
@williamthunderion · 1 month ago
This is interesting. Personally I have a pacman cache in my homelab using a similar setup, because I prefer Arch over Debian. Now if someone were to make a similar tutorial for RPM-based distros...
@innesleroux9439 · 1 month ago
Awesome thanks!
@Royaleah · 23 days ago
Works well, except my Proxmox cache falls out of sync in a day or two. If I clear the cache it works again for a couple of days, and then it's back to "File has unexpected size (374493 != 380514). Mirror sync in progress?"
Edit: After looking more into things, I think it might be that it is trying to connect to the download.proxmox server over IPv6, and I don't have IPv6 enabled on any of my systems.
Edit 2: After changing the config to point to the IPv4 address and rebooting, I still get the error.
@apalrdsadventures · 23 days ago
Try adding "~Packages 1;" to the nocache list in the nginx debcache.conf, restart nginx, and see if that helps. Proxmox seems to not distribute versioned package difference files like Debian does.
@Royaleah · 22 days ago
@apalrdsadventures Thanks for the reply. I have this in my conf:
proxy_cache_bypass $nocache;
proxy_no_cache $nocache;
I added it like "proxy_no_cache $nocache ~Packages 1;" and that still got errors. I put it in the bypass line and got no errors. Should it be in both no_cache and bypass?
@apalrdsadventures · 22 days ago
Oh, not there. Up at the top there's a map {} which contains lines of regexes to bypass the cache (where `nocache` is defined). Add a new line in that table with exactly what was in my last comment. There should be a "~InRelease 1;" there also.
@Royaleah · 22 days ago
@apalrdsadventures Thank you! That is working. I don't know how I missed that, right at the top of the config with other things that look like it. Searching Google and asking ChatGPT were not getting me any good answers, even with your "~Packages 1;" directly. Again, thank you.
@tanja84dk1 · 1 month ago
Actually, it is possible to cache even HTTPS packages. I have run apt-cacher-ng on my network for years to cache my apt updates. True, on the clients I have added apt-cacher-ng as an apt proxy, and if it's an HTTPS repo I have changed the source list files: change https to http and prepend "HTTPS///" (with 3 slashes) before the domain, since that tells apt-cacher-ng that it's an HTTPS repo, so it downloads the update over HTTPS. Of course, internally it's then only HTTP between the cache and the client, but that is only on the local network.
@apalrdsadventures · 1 month ago
You can't cache packages if the client (apt) is doing HTTPS to the origin unless you have a cert which the client will accept. This either means modifying apt or the system trust to accept your fake cert, or modifying the sources.list to use HTTP instead of HTTPS. If the client is only doing HTTP, you can intercept the connection at the network or proxy level and return a response from cache. Apt will still validate the GPG signatures of the files, but the session is not protected with TLS. The protocol from the cache to the origin can be either; it doesn't matter.
@tanja84dk1 · 1 month ago
@apalrdsadventures Hmm, looks like YouTube ate my response when I tried to explain it in depth, sorry.
@tanja84dk1 · 29 days ago
@apalrdsadventures But that is also why apt-cacher-ng has a built-in way (a small change in the client source file) for the client to tell the cache that it's an HTTPS repo, so apt-cacher-ng downloads it over HTTPS.
@annoyedbybrother · 1 month ago
Are there any changes you would have to make to allow offline clients to update this way?
@apalrdsadventures · 1 month ago
The cache isn't a mirror, so it's not actively pulling data until it's requested by the first client. If the client is offline but the cache is not, and the client can route to / through the cache, then it will work fine. If the client is entirely offline, you'd need a different approach.
@treyquattro · 1 month ago
Great idea. Personally, I would have hijacked DNS.
@UnderEu · 1 month ago
Now I just need a mirror that serves my ISP at anything higher than 300 kbps on a gigabit link.
@nezu_cc · 1 month ago
There are caching solutions that analyze what you download and automatically pull new versions into the cache as soon as they are available. That way your cache always has the latest versions of everything ready to download.
@magnusanglert · 1 month ago
Why not Squid?
@apalrdsadventures · 1 month ago
nginx's config language is easier
@DanielFSmith · 1 month ago
Seems like a lot of work compared to `apt install apt-cacher-ng` on a spare server and `apt install auto-apt-proxy` on the clients.
@apalrdsadventures · 1 month ago
auto-apt-proxy only autodiscovers if using mDNS (which does not route across subnets), but also this still doesn't solve the TLS repo problem.
@DanielFSmith · 1 month ago
@apalrdsadventures If you need to cross subnets, then you would need to set your proxy manually:
echo 'Acquire::http { Proxy "your-server-here:3142"; }' > /etc/apt/apt.conf.d/01apt-proxy
You can also configure https (specific sites, or general) as uncached here, but it seems like you had too much fun with nginx!
@genralit16 · 10 days ago
@apalrdsadventures I thought you set up an internal CA? That would solve the TLS repo issue.
@genralit16 · 10 days ago
REF: Self-Hosted TRUST with your own Certificate Authority!
@haonnoah · 1 month ago
Now put that in a container 😅 and I'll run it in my stack.
@apalrdsadventures · 1 month ago
it's just nginx + a conf file, you can use the official nginx container