What's the trick to storing large files in Linode from TN? (If I'm understanding the limits correctly, Linode has a size limit of 5 GB per object.) How would you store a dataset full of files >5 GB in Linode? The Linode documentation talks about using multipart uploads, but I'm unsure of how that is implemented. Thanks.
@apalrdsadventures 7 months ago
s3cmd chunks files over 5 GB into multipart uploads by default; I'm not sure if TrueNAS has messed up that default config or not.
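To illustrate what a multipart upload actually does under the hood, here's a minimal boto3 sketch against an S3-compatible endpoint like Linode Object Storage. This is an assumption-laden illustration, not what TrueNAS or s3cmd literally run: the endpoint URL, bucket, key, and file path are all hypothetical, and s3cmd does the equivalent of this internally.

```python
# Minimal sketch of a manual multipart upload to an S3-compatible
# endpoint. Endpoint, bucket, key, and path are hypothetical;
# credentials come from the usual AWS config/environment.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://us-east-1.linodeobjects.com",  # assumed region endpoint
)

bucket = "my-backups"
key = "datasets/big-file.tar"
path = "/mnt/tank/big-file.tar"
chunk = 512 * 1024 * 1024  # 512 MiB per part; each part must be <= 5 GB

# 1. Open the upload, 2. send the file as numbered parts, 3. finalize.
mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []
with open(path, "rb") as f:
    part_number = 1
    while True:
        data = f.read(chunk)
        if not data:
            break
        resp = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
            PartNumber=part_number, Body=data,
        )
        parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
        part_number += 1

s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": parts},
)
```

For what it's worth, boto3's higher-level `upload_file` does this chunking automatically once a file crosses the multipart threshold, so the object ends up larger than any single PUT would allow.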
@mithubopensourcelab482 2 years ago
Great video... 100 out of 100. Very concise, object-oriented approach...
@apalrdsadventures 2 years ago
Thanks, really appreciate it!
@dfgdfg_ 2 years ago
How easy is it to set up rotation? e.g. keep hourly for a week, daily for a month, weekly for 6 months, monthly for 2 years.
@apalrdsadventures 2 years ago
The data is being copied from an immediate snapshot (TrueNAS creates a snapshot, syncs the snapshot to Linode, then deletes the snapshot), so you're relying on Linode's backups on their end. It's protection against a different failure mode than a rotated snapshot. You can simultaneously keep ZFS snapshots with rotation on the TrueNAS system to protect against accidental deletions or user error, while backing up to Linode to protect against hardware failure.
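For the rotation side, TrueNAS's built-in Periodic Snapshot Tasks handle this natively (each task has a schedule and a snapshot lifetime). Purely for illustration, here's a rough Python sketch of what hourly rotation looks like driven by the zfs CLI directly; the dataset name, prefix, and retention window are hypothetical, and it assumes it's run from cron once per hour.

```python
# Rough sketch of hourly ZFS snapshot rotation via the zfs CLI.
# Dataset, prefix, and retention are hypothetical examples.
import subprocess
from datetime import datetime, timedelta

DATASET = "tank/data"          # assumed dataset
PREFIX = "auto-hourly-"
KEEP = timedelta(days=7)       # keep hourly snapshots for a week

now = datetime.utcnow()

# Take this hour's snapshot, named by timestamp.
subprocess.run(
    ["zfs", "snapshot", f"{DATASET}@{PREFIX}{now:%Y%m%d%H%M}"],
    check=True,
)

# List existing snapshots and destroy those past the retention window.
out = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", DATASET],
    capture_output=True, text=True, check=True,
).stdout
for name in out.splitlines():
    snap = name.split("@", 1)[1]
    if not snap.startswith(PREFIX):
        continue  # leave manually created snapshots alone
    stamp = datetime.strptime(snap[len(PREFIX):], "%Y%m%d%H%M")
    if now - stamp > KEEP:
        subprocess.run(["zfs", "destroy", name], check=True)
```

Tiered retention (daily/weekly/monthly) is the same idea repeated with different prefixes and `KEEP` windows.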
@entelin 1 year ago
Just so people are clear, since the title may be a little deceptive: the cloud sync feature is _not_ a dataset backup. It's a file-based sync, and therefore a genuinely terrible backup solution unless you also have some sort of S3 versioning happening on the cloud side. Some of the most likely threats are things like crypto-malware, or simply user error that changes or deletes existing files, and the sync will happily go ahead and replicate this, destroying your data on the cloud side as well. With S3 versioning it becomes viable, since you can roll back on the cloud side before you run your restoration. Note that it's still not ideal, because it can't efficiently deal with large files that are changed internally: a 1 GB file that is slightly modified will be re-uploaded in its entirety. So it's not as bandwidth-efficient as a zfs send based solution or most other commercial solutions, and therefore probably doesn't scale well.
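To make the versioning mitigation concrete, here's a minimal boto3 sketch of enabling bucket versioning and restoring an object that a bad sync overwrote. The endpoint, bucket, and key are hypothetical, and the rollback step assumes at least one older version exists.

```python
# Sketch: enable versioning on an S3-compatible bucket, then roll an
# object back to a prior version by copying it over the top.
# Names are hypothetical examples.
import boto3

s3 = boto3.client("s3", endpoint_url="https://us-east-1.linodeobjects.com")
bucket, key = "my-backups", "documents/report.odt"

# With versioning on, overwrites and deletes keep prior versions around
# instead of destroying them.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# To roll back after a bad sync: list versions (newest first) and copy
# a known-good one back over the key, making it the new latest version.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
good = versions["Versions"][1]  # assumes index 1 is the pre-sync version
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key, "VersionId": good["VersionId"]},
)
```

In practice you'd pick the version by timestamp rather than position, and pair versioning with a lifecycle rule so old versions don't accumulate forever.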