What is your preferred method for backing up several TB of data?
What storage software could I run to have an archive of my personal files (a couple TB of photos) that doesn't require I keep a full local copy of all the data? I like the idea of a simple and focused tool like Syncthing, but they seem to be angling towards replication.
Is the simple choice to run some S3-like backend and use a CLI or other client to append and browse files? I'd love something with fault tolerance that you can gradually add disks to. If Ceph were less complicated or used fewer resources, I'd want to use it.
Borg Backup. It can work locally or over the network. It takes snapshots of the files you give it, and performs deduplication, compression, and optional encryption. You can check the integrity of the backups and repair them. There's a very simple-to-use GUI for it called Pika Backup to get you started.
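A minimal sketch of the basics (repo path and archive name are placeholders; a user@host:path repo works the same over SSH):

    # create an encrypted repository
    borg init --encryption=repokey /mnt/backup/photos.borg

    # snapshot a directory with compression; {now} expands to a timestamp
    borg create --compression zstd /mnt/backup/photos.borg::photos-{now} ~/Photos

    # verify repository and archive integrity
    borg check /mnt/backup/photos.borg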
that doesn't require I keep a full local copy of all the data
If you don't do that, the place you call a "backup" is the only place the data is stored - and that is not a backup. A backup is an additional place where the data is stored, for the case when your primary storage gets destroyed.
In the IT world, we just call that a server. The usual golden rule for backups is 3-2-1:
3 copies of the data total, of which
2 are backups (not the primary working copy), and
1 of the backups is off-site.
So, if the data is only server side, it's just data. If the data is only client side, it's just data. But if the data is fully replicated on both sides, now you have a backup.
There's a related adage regarding backups: "if there are two copies of the data, you effectively have one. If there's only one copy of the data, you can never guarantee it's there." Basically, it means you should always assume one copy somewhere will fail, leaving you with n-1 copies. In your example, if your server failed or got ransomwared, you wouldn't have a complete dataset, since the local computer doesn't have a full replica.
I recently had a backup drive fail on me, and all I had to do was buy a new one. No data loss; I just regenerated the backup as soon as the drive was spun up. I've also had to restore entire servers that have failed. Minimal data loss since the last backup, but nothing I couldn't rebuild.
Edit: I'm not saying what you're asking for is wrong or bad, I'm just saying "backup" isn't the right word to ask about. It'll muddy some of the answers as to what you're really looking for.
I use Borg Backup to a Hetzner storage box but doing the same thing to a disk array would work fine. How much data are you talking about? What is the usage picture? Backup and archiving are really not the same thing.
I was looking at Borg, but that's one of the tools where it seems like I need a full replicated copy of the dataset locally to add more. I believe Borg can open a view into previous versions of the data, so it's technically append-only, but I'd find that process tedious.
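From what I can tell, that view is a FUSE mount, something like this (repo path and archive name made up):

    borg mount /mnt/backup/photos.borg::photos-2024-01-01 /tmp/view
    ls /tmp/view    # browse the snapshot read-only
    borg umount /tmp/view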
These are a couple TB and mostly photos I've taken. I'd like to be able to browse and edit at some point, but my primary concern right now is keeping a copy of everything.
Yeah, that's more of an archive than a backup scenario. I have a small self-hosted Nextcloud that I use for stuff like that. For a few TB, you might consider Hetzner Storage Cloud, which is really Nextcloud. It is backed up daily, which is a help.
This would be self-hosted and local, one of the locations in a 3-2-1 strategy. Backblaze would work for the offsite portion, but I already have that covered.
I use rclone, with encryption, to S3. I have close to 3TB of personal data backed up to S3 this way - photos, videos, paperless-ngx (files and database).
It's only readable if you have the passwords, which are configured on my single backup host (a RasPi) or stored in Bitwarden.
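The moving parts are roughly this (remote and bucket names are made up; the actual remotes get set up interactively with rclone config - an s3 remote, plus a crypt remote wrapping it):

    rclone sync ~/photos s3crypt:photos --progress   # encrypt and upload
    rclone ls s3crypt:photos                         # names are decrypted client-side
    rclone cryptcheck ~/photos s3crypt:photos        # verify without downloading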
That's at the top of my list for moving the files if I do an S3 or WebDAV backend. I'm overthinking this, aren't I? Just find a WebDAV server, set it up, use rclone to append files, and pretty much everything else will be able to browse it.
Where will the target be? Online or local? Rsync is really easy to use, and the target files are browsable. Maybe I'm too dense, but I find online buckets aren't easily browsable. Even a homemade NAS might be a good choice, and it's easily scalable.
All of my machines back up to my home server’s RAID over WebDAV with Nephele.
Then every few days I’ll manually sync them to a server at my parents’ house with a single huge HDD using rsync. I do this manually so that if anything happens to my home server (like ransomware) it doesn’t mirror destroyed data.
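The manual sync is roughly this (host and paths made up), with a dry run first so I can see what would change before anything gets overwritten:

    rsync -aH --delete --dry-run /srv/raid/ parents:/mnt/archive/
    # if the change list looks sane (and not like ransomware), run it for real
    rsync -aH --delete /srv/raid/ parents:/mnt/archive/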
Since the Nephele share is just WebDAV, I can mount it locally and move things into it that I don’t want local anymore.
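On Linux the mount is just a normal WebDAV mount, e.g. with davfs2 (URL and mount point are placeholders):

    sudo mount -t davfs https://home-server.example/dav /mnt/nephele
    mv ~/old-projects /mnt/nephele/archive/   # now it only lives on the server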
I created Nephele, and I just finished writing an encryption plugin. I wrote it because I’m also going to write an S3 adapter. That way, you can store things in S3, but they’ll be encrypted, so Amazon can’t see them.
Protection against the case where it happens and they haven't noticed within those few days. Probably especially important if they leave the system running while on vacation.
This is really cool. I ended up trying something similar: serving from a ZFS pool with SeaweedFS. TBD if that's going to work for me long term.
I would definitely be able to manually sync the SeaweedFS files with rsync to another location, but from what I can see, it requires their software to make sense of the on-disk structure. I might be able to mount it and sync that way; hopefully the performance isn't too bad.
Syncing like that and having more control over where the files are placed on the RAID is very cool.
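If I go the mount route, SeaweedFS ships a FUSE mount through its filer, something like this (assuming a filer on the default port; paths made up):

    weed mount -filer=localhost:8888 -dir=/mnt/seaweed
    # then sync the mounted view like any other directory
    rsync -a /mnt/seaweed/ offsite:/backups/seaweed/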
So if I understood you correctly, you just want some local storage system with some fault tolerance.
ZFS will do that. Nothing fancy, just volumes as either block devices (zvols) or ZFS filesystems.
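A minimal sketch (disk names are placeholders; raidz1 survives one failed disk):

    zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
    zfs create -o compression=zstd tank/photos          # a filesystem dataset
    zfs create -V 100G tank/blockvol                    # a zvol (block device)
    zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf    # grow later with another vdev
    zpool status tank                                   # health / fault reporting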
If you want something more fancy, maybe even distributed, check out storage cluster systems with erasure coding: less storage wasted than with pure replication, though it comes at a reconstruction cost if something goes wrong.
MinIO comes to mind, though I never used it... my requirements seem to be so rare, these tools only get close :/
AFAIK you can add more disks and nodes more or less dynamically with it.
Yeah it's hard to find something that perfectly fits just what you want. I think it's better if I do something simple like ZFS and maybe some kind of file server on top.
Sounds like "git annex" is what you're looking for?
I use this to manage all my photos. It lets you add binaries and synchronize them to a backend server (can be local, can be S3, Backblaze, etc.).
You can then "drop" files and it ensures a remote exists first. And when you drop the file your still see a symlink of it locally (it's broken) so that you know it exists.
My workflow is to add my files, sync them to both a local server and B2, then drop and fetch folders as I need them (need disk space? "git annex drop 2022*"; want to edit some photos? "git annex get 2022_10_01").
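In commands, the loop is roughly this (shown with the built-in S3 special remote; names and bucket are placeholders, and it expects AWS credentials in the environment):

    git init photos && cd photos
    git annex init "laptop"
    git annex initremote cloud type=S3 encryption=shared bucket=my-photos
    git annex add 2022_10_01/
    git annex copy --to=cloud 2022_10_01/
    git annex drop 2022_10_01/   # refuses unless a remote copy is verified
    git annex get 2022_10_01/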
Another user said it - what you're asking for isn't a backup, it's just data transfer.
It sounds like you're looking for a storage backend that hosts all your data and can download data to the client side on the fly.
If your use case is Windows, Nextcloud Desktop may be what you're looking for. I have a similar setup with my game clips folder. It detects changes and auto-uploads them, while deleting less recently used local data that's already safely server-side. This feature might exist on Mac, but I haven't tested it.
Backup wise, I capture an rsync of the nextcloud database and filesystem server-side and store it on a different chassis. That then gets backed up again to a USB drive I can grab and run.
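Server-side that's roughly this (paths, DB name, and hostnames are placeholders; assuming MySQL/MariaDB):

    mysqldump --single-transaction nextcloud > /backup/nextcloud-db.sql
    rsync -a /var/www/nextcloud/data/ /backup/nextcloud-data/
    rsync -a /backup/ other-chassis:/srv/nextcloud-backup/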
On Windows it supports thin sync (meaning it keeps a reference to the file instead of the whole file); on Linux not yet, as that feature is still in alpha (but you can just connect it as a remote disk and be done with it - that's how I do it with mine).
If you don't want the whole Nextcloud, there are standalone CLI WebDAV servers.
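rclone itself can act as one, for example (path, port, and credentials are arbitrary):

    rclone serve webdav /srv/archive --addr :8080 --user me --pass secret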
Save your files to a local S3 object storage mount, enable versioning for immutability, and use erasure coding for fault tolerance. You can use Lustre or some other S3 software for the mount. S3 is great for single-user file access. You can also replicate to any cloud-based S3 for offsite.
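As a sketch with MinIO's mc client (aliases, bucket names, and keys are placeholders):

    mc alias set local http://localhost:9000 ACCESS_KEY SECRET_KEY
    mc mb local/photos
    mc version enable local/photos          # object versioning
    mc mirror ~/photos local/photos         # upload
    mc mirror local/photos offsite/photos   # replicate to a cloud S3 alias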