Thanks for this. You always have the coolest ideas and most practical solutions, and it's very inspiring.
@Larz99 29 days ago
Excellent idea! Yet another very helpful service on my network. Thanks as always for your careful explanations. Great work.
@joakimsilverdrake 1 month ago
"We don't need IPv4" - love it. IPv6 isn't the future; IPv6 is now! Perfect guide. Something I need to add to my ever-growing list of things to do.
@N0Reaver 1 month ago
Your solution is quite simple and elegant. What about apt-cacher-ng?
@LordApophis100 1 month ago
apt-cacher-ng is simpler to set up: you just add a proxy directive to apt and it will use it for all apt repos. There's no need to manually configure each deb repo you want to cache; apt-cacher-ng will cache everything your system uses.
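The proxy directive mentioned here is a one-line apt configuration file; a minimal sketch, assuming a hypothetical apt-cacher-ng host:

```
# /etc/apt/apt.conf.d/00aptproxy -- route all apt HTTP traffic through the cacher
# ("cacher.lan" is a placeholder; 3142 is apt-cacher-ng's default port)
Acquire::http::Proxy "http://cacher.lan:3142";
```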
@Darkk6969 1 month ago
Sweet! This actually gave me some ideas about copying the config files onto my fresh Debian installs. I was using self-hosted Gitea, but having a simple wget command piped to bash is a better idea.
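Such a bootstrap one-liner might look like this (the URL and script name are hypothetical):

```
# fetch the setup script from a local server and run it (placeholder URL)
wget -qO- http://config.lan/setup-debcache.sh | bash
```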
@Gagootron 1 month ago
Great video as always! Really nice how much faster my updates are now with a cache in place.
@gaetan3340 1 month ago
Perfect, and thank you for sharing. Simple yet super efficient, and it works for any repository.
@kevinshumaker3753 1 month ago
Was running apt-cacher-ng on a Pi before, and on a Debian VM most recently. This was 'easier' to set up, mostly. I have a dozen or so bare-metal and VM Debian installs, and at least that many Pis. It took less than an hour to spin up a minimal VM and install, another 20-30 minutes to get the rest set up, and it's cooking away for them all. It'd be nice to have a text doc listing what needs to be changed in your scripts for a different domain and particular setup, but other than that, very nice.
@Denis-in6ur 1 month ago
Very nice video and an overall good idea! I personally like Ansible more than a shell script, especially since you plan to reconfigure all your servers. But that doesn't matter as long as it works for you. :)
@luisalcarazleal 1 month ago
Now that I have more than 20 LXCs, I'm starting to automate updates with Ansible, and this will come in handy to save some bandwidth (not that I'm behind a metered connection, but if the servers are up 24/7, better to put them to good use). Thanks for the video.
@TheCreat 1 month ago
I was actually recently thinking about looking into caching for fun, but decided it was too much work for not enough gain. Thanks a bunch; it's now basically a turnkey solution for me (not quite, but close enough), so that equation has flipped and I can "just do it". Neat!
@CharcoalDaddyBBQ 1 month ago
Nice! Gonna set this up today.
@bokami3445 1 month ago
Perfect! A project for the weekend! Thank you!
@dmynerd78 1 month ago
Thank you for posting this! I've been meaning to set up apt-cacher-ng for a while now, but I'd rather use nginx for the exact reasons you explained.
@maurolimaok 1 month ago
What a nice video. Thanks!
@benlenau 1 month ago
Excellent work. Will immediately spin up an LXC with nginx + apt cache and try this out 🙂 P.S. Added a couple of lines so that icons also load.
@projectpanic2291 1 month ago
I was worried about a chicken-and-egg problem running the cacher in a container, especially when applying the cache to the sources.list of the Proxmox hosts themselves. The solution was pretty simple: sources.list supports multiple sources and will prefer the first entry, so I have my local cache entries listed first, followed by the default upstream sources.
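A sketch of that ordering, with a placeholder cache hostname (per the comment, apt prefers the first entry):

```
# /etc/apt/sources.list -- local cache first, upstream as fallback
# ("debcache.lan" is a placeholder for the cache container)
deb http://debcache.lan/debian bookworm main contrib
deb http://deb.debian.org/debian bookworm main contrib
```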
@apalrdsadventures 1 month ago
Also, caching updates isn't going to interrupt operation of PVE, so it's important but not on the critical path for anything.
@projectpanic2291 1 month ago
@apalrdsadventures Exactly, I'm not concerned about operational issues. I have the hosts and all containers managed by Ansible. After your video I set up the containers to point to the cache before their first 'apt update'. Whenever I want to rebuild all the containers, I don't want to have to treat the cacher container as "special" and bring it up first. The Proxmox hosts are similarly set up, with a custom ISO with auto answers.
@maxdiamond55 1 month ago
Great video as always. Thanks!
@oooee 27 days ago
At the 5:00 mark he mentions that a proxy doesn't let him cache HTTPS traffic, but how does setting up a mirrored cache using an HTTP web server do any better? Does modifying the sources list allow for caching HTTPS content?
@apalrdsadventures 26 days ago
Without modifying sources.list, apt will always validate that the server's certificate matches the domain it is trying to access, and that the server's certificate is in the system chain of trust (which on Linux comes from Mozilla). So you cannot intercept a connection to a public domain name which you do not control. By modifying sources.list, the domain is now different, so I can own the certificate for that domain; but I can also just remove the 's' and use HTTP.
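Concretely, the rewrite looks something like this (the cache hostname is a placeholder):

```
# before: TLS to the origin, which the cache cannot intercept
deb https://deb.debian.org/debian bookworm main
# after: plain HTTP to a name you control; apt still verifies GPG signatures
deb http://debcache.lan/debian bookworm main
```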
@zyghom 1 month ago
A long time ago I thought about the same thing, but honestly it is a lot to set up. I am surprised nobody has made it yet as a package or container that comes already preconfigured.
@cheebadigga4092 1 month ago
Suggestion to improve the rewrite script's readability: use cat with a heredoc (<<EOF) to write a multiline string to a file.
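For example, a sketch with placeholder values:

```
#!/bin/bash
mirror="debcache.lan"   # placeholder cache host
suite="bookworm"
# one heredoc instead of a series of echo >> lines
cat > /etc/apt/sources.list <<EOF
deb http://${mirror}/debian ${suite} main
deb http://${mirror}/debian ${suite}-updates main
deb http://${mirror}/debian-security ${suite}-security main
EOF
```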
@kajraske2002 1 month ago
Man, I was just reading about this. Does anyone know how this compares to something like a Squid cache in pf/OPNsense? I've been planning on getting a dedicated machine for *sense, and I'm wondering how much I need to worry about storage on it.
@djordje1999 1 month ago
It would be interesting to show how to cache Docker dependencies: a homelab cluster with a bunch of VMs, with Docker inside each VM.
@BladeWDR 1 month ago
For reference, the .sources files are deb822 format. It's intended to reduce the need for separate keyring files, IIRC. Also, I can't help but want to try to write an Ansible playbook to replace that bash script. 😂
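A minimal deb822 .sources entry, for illustration (values are placeholders):

```
# /etc/apt/sources.list.d/debian.sources (deb822 format)
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm bookworm-updates
Components: main contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```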
@NFvidoJagg2 1 month ago
Can also be done with Lancache by adding the rules in there, but in that case you're also adding another DNS resolver.
@klaernie 1 month ago
Hmm, not in a place to try this, but one might rewrite all repos to put the repo hostname as the first part of the URL, then extract it again in the nginx config, making it able to cache arbitrary repos.
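Untested, but that idea might look roughly like this in nginx (all names here are assumptions, not the video's config):

```
# sources would be rewritten to e.g. http://debcache.lan/deb.debian.org/debian ...
location ~ ^/(?<origin>[^/]+)/(?<path>.*)$ {
    resolver 9.9.9.9;                 # required because proxy_pass uses a variable
    proxy_pass http://$origin/$path$is_args$args;
    proxy_cache debcache;             # cache zone name assumed
    proxy_cache_key $origin$uri;      # keep different origins from colliding
}
```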
@seaSwann 25 days ago
Is there a way to do this for YouTube videos, or even DRM media like Netflix?
@yezperdk 1 month ago
How do you keep all of your containers updated without having to do manual apt upgrades on each one?
@iehfned 1 month ago
Do you have an automatic way of generating IPv6 DNS entries based on SLAAC, or is this step still a manual process? I assume you use OPNsense?
@demanuDJ 1 month ago
I have to do the same for my openSUSE systems.
@andreasheckel1061 1 month ago
Would love to see a similar solution for RHEL-based Linux distros :-)
@markarca6360 3 days ago
What about setting up local DNS and using a CNAME for the local cache?
@apalrdsadventures 3 days ago
CNAMEs still validate the certificate of the name which was originally typed.
@shawnsomething4360 1 month ago
While looping over files with find, the variable mirror gets emptied after the first loop. This results in the first file being OK and subsequent files being written with incomplete URLs. I am, however, not familiar enough with bash scripting to explain why. Edit: Also, in your blog you place the nginx log files in /var/debcache/, but at the end of the blog you tail files in /etc/debcache/.
@apalrdsadventures 1 month ago
Fixed both. Needed to export the name for the second bash call.
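The underlying issue: find -exec bash -c '...' starts a new bash process, and a child process only sees variables that were exported. A minimal illustration (the file pattern and message are placeholders):

```
mirror="debcache.lan"
export mirror   # without this line, the child bash sees $mirror as empty
find /etc/apt/sources.list.d -name '*.list' \
    -exec bash -c 'echo "rewriting $1 to point at $mirror"' _ {} \;
```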
@shawnsomething4360 1 month ago
@apalrdsadventures Tested, and it works. Thank you.
@ttass-m9d 1 month ago
Will this break the dist-upgrade process, say Ubuntu 22.04 to 24.04, since it renames the repositories in sources.list?
@Paxsali 1 month ago
Does anyone know how to do the same for conda, pip, and Flatpak?
@williamthunderion 1 month ago
This is interesting. Personally, I have a pacman cache in my homelab using a similar setup, because I prefer Arch over Debian. Now if someone were to make a similar tutorial for an RPM-based distro...
@innesleroux9439 1 month ago
Awesome, thanks!
@Royaleah 23 days ago
Works well, except my Proxmox cache falls out of sync in a day or two. If I clear the cache it works again for a couple of days, and then it's back to "File has unexpected size (374493 != 380514). Mirror sync in progress?" Edit: After looking more into things, I think it might be that it is trying to connect to the download.proxmox server using IPv6, and I don't have IPv6 enabled on any of my systems. Edit 2: After changing the config to point to the IPv4 address and rebooting, I still get the error.
@apalrdsadventures 23 days ago
Try adding "~Packages 1;" to the nocache list in the nginx debcache.conf, restart nginx, and see if that helps. Proxmox seems to not distribute versioned package difference files like Debian does.
@Royaleah 22 days ago
@apalrdsadventures Thanks for the reply. I have this in my conf:
#Nocache for those entries
proxy_cache_bypass $nocache;
proxy_no_cache $nocache;
I added it like "proxy_no_cache $nocache ~Packages 1;", and that still got errors. I put it in the bypass line and got no errors. Should it be in both no_cache and bypass?
@apalrdsadventures 22 days ago
Oh, not there. Up at the top there's a map {} which contains lines of regexes to bypass the cache (it's where `nocache` is defined). Add a new line in that table with exactly what was in my last comment. There should be an "~InRelease 1;" line there already.
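Put together, the bypass logic described in this thread would look something like this near the top of debcache.conf (sketched from the comments, not copied from the video):

```
# maps the request URI onto $nocache, which proxy_cache_bypass
# and proxy_no_cache consume further down in the config
map $request_uri $nocache {
    default       0;
    ~InRelease    1;   # always fetch the signed release index fresh
    ~Packages     1;   # added per this thread: Proxmox doesn't serve versioned diffs
}
```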
@Royaleah 22 days ago
@apalrdsadventures Thank you! That is working. I don't know how I missed it right at the top of the config, next to other things that look like it. Searching Google and asking ChatGPT were not getting me any good answers, even with your "~Packages 1;" directly. Again, thank you.
@tanja84dk1 1 month ago
Actually, it is possible to cache even HTTPS packages. I have run apt-cacher-ng on my network for years to cache my apt updates. True, on the clients I have added apt-cacher-ng as an apt proxy, and if it's an HTTPS repo I have changed the source list files: the repo is switched from https to http, with "HTTPS///" (three slashes) added in front of the domain. That tells apt-cacher-ng that it's an HTTPS repo, so it downloads the update over HTTPS. Of course, internally it's then only HTTP between the cache and the client, but that is only on the local network.
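As described, the rewrite would look something like this (the repo URL is illustrative, and the client also has apt-cacher-ng configured as its apt proxy):

```
# original entry:
#   deb https://repo.example.com/debian stable main
# rewritten so apt-cacher-ng fetches upstream over TLS:
deb http://HTTPS///repo.example.com/debian stable main
```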
@apalrdsadventures 1 month ago
You can't cache packages if the client (apt) is doing HTTPS to the origin unless you have a cert which the client will accept. This either means modifying apt or the system trust store to accept your fake cert, or modifying the sources.list to use HTTP instead of HTTPS. If the client is only doing HTTP, you can intercept the connection at the network or proxy level and return a response from cache. Apt will still validate the GPG signatures of the files, but the session is not protected with TLS. The protocol from the cache to the origin can be either; it doesn't matter.
@tanja84dk1 1 month ago
@apalrdsadventures Hmm, looks like YouTube ate my response when I tried to explain it in depth, sorry.
@tanja84dk1 29 days ago
@apalrdsadventures But that is also why apt-cacher-ng has a built-in way (a small change in the client's source file) for the client to tell the cache that it's an HTTPS repo, so apt-cacher-ng downloads it over HTTPS.
@annoyedbybrother 1 month ago
Are there any changes you would have to make to allow offline clients to update this way?
@apalrdsadventures 1 month ago
The cache isn't a mirror, so it's not actively pulling data until it's requested by the first client. If the client is offline but the cache is not, and the client can route to or through the cache, then it will work fine. If the client is entirely offline, you'd need a different approach.
@treyquattro 1 month ago
Great idea. Personally, I would have hijacked DNS.
@UnderEu 1 month ago
Now I just need a mirror that serves my ISP with anything higher than 300 kbps on a gigabit link.
@nezu_cc 1 month ago
There are caching solutions that analyze what you download and automatically pull new versions into the cache as soon as they are available. This way your cache always has the latest versions of everything ready to download.
@magnusanglert 1 month ago
Why not Squid?
@apalrdsadventures 1 month ago
nginx's config language is easier.
@DanielFSmith 1 month ago
Seems like a lot of work compared to apt install apt-cacher-ng on a spare server and apt install auto-apt-proxy on the clients.
@apalrdsadventures 1 month ago
auto-apt-proxy only autodiscovers if using mDNS (which does not route across subnets), but this also still doesn't solve the TLS repo problem.
@DanielFSmith 1 month ago
@apalrdsadventures If you need to cross subnets then you would need to set your proxy manually:
echo >/etc/apt/apt.conf.d/01apt-proxy 'Acquire::http { Proxy "http://your-server-here:3142"; }'
You can also configure HTTPS (specific sites, or in general) as uncached here, but it seems like you had too much fun with nginx!
@genralit16 10 days ago
@apalrdsadventures I thought you set up an internal CA? That would solve the TLS repo issue.
@genralit16 10 days ago
REF: Self-Hosted TRUST with your own Certificate Authority!
@haonnoah 1 month ago
Now put that in a container 😅 and I'll run it in my stack.
@apalrdsadventures 1 month ago
It's just nginx + a conf file; you can use the official nginx container.