I have a 2-bay NAS, and I was planning on using 2x 18 TB HDDs in RAID 1. I was going to purchase 3 of these drives so that when one fails I have a replacement on hand. (I am aware that you should purchase them at different times to reduce the risk of them all failing at the same time.)

Then I set up restic.

It makes backups so easy that I am wondering if I should even bother with raid.

Currently I have ~1 TB of backups, and with restic's snapshots, it won't grow much bigger anyway.

Either way, I will be storing the backups in AWS S3. So is it still worth it to use RAID? (I will also be storing backups at my parents'.)

31 points

I always do some level of RAID, if for no other reason than that I'm not out of commission when a disk fails. When you're working with multiple TB, restoring from a backup can take a while. If rapid recovery from a disk failure is not a high priority for you, then you could probably do without RAID.

Either way, make sure you test your backups occasionally.

Another way to put it: With RAID, a disk failure is like your Check Engine light coming on. You can still drive, but you should address the problem as soon as you can. Without RAID, it’s like your engine has seized up and you have to tow it for repair and are without your car until it’s fixed.
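
As for testing: with restic, a scheduled integrity check makes this easy. A minimal sketch, assuming restic's usual RESTIC_REPOSITORY / RESTIC_PASSWORD (and S3 credential) environment variables are already set:

```python
import subprocess

# Verify the repository's structure and re-read the actual data blobs.
# Note: --read-data downloads everything from S3 (egress costs!); newer restic
# versions also offer --read-data-subset to spot-check only part of the data.
subprocess.run(["restic", "check", "--read-data"], check=True)

# The other half of testing: actually restore something now and then and open it.
subprocess.run(
    ["restic", "restore", "latest", "--target", "/tmp/restore-test"],
    check=True,
)
```

Even a plain restic check (without re-reading data) run on a schedule catches a lot of repository problems early.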

7 points

Hmm that’s a good point.

AWS can also cost a good chunk if you restore sub-optimally.

12 points

Keep in mind that if you set up RAID using ZFS or btrfs (I don't know how it works with other systems, but that's what I've used), then you also get scrubs, which detect and fix bit rot and unrecoverable read errors. Without that or a similar system, those errors will go undetected and your backup system will back up those corrupted files as well.

Personally, one of the main reasons I used ZFS and now use btrfs with redundancy is to protect irreplaceable files (family memories and stuff) from those kinds of errors. I used to just keep stuff on a hard drive, until I discovered that loads of my irreplaceable vacation photos were corrupted, including the backups, which had backed up the corruption.

If your files can be reacquired, then I don't think it's a big deal. But if they can't be, then I think having scrubs or integrity checks with redundancy so that issues can be repaired, as well as backups with snapshots to prevent errors or mistakes from messing up your backups, is a necessity. It just depends on how much you value your files.

3 points

Note that you do not need any sort of redundancy to detect corruption.

Redundancy only gains you the ability to have that corruption immediately and automatically repaired.

While this sounds nice in theory, you have no use for such auto repair if you have backups handy because you can simply restore that data manually using your backups in the 2 times in your lifetime that such corruption actually occurs.
(If you do not have backups handy, you should fix that before even thinking about RAID.)

It’s incredibly costly to have such redundancy at a disk level and you’re almost always better off using those resources on more backups instead if data security is your primary concern.
Downtime mitigation is another story but IMHO it’s hardly relevant for most home users.

1 point

> backups in the 2 times in your lifetime that such corruption actually occurs.

What are you even talking about here? This line invalidates everything else you’ve said.

1 point

I was wondering whether I should elaborate on this when I wrote the previous reply.

At the scale of most home users (~dozens of TiBs), corruption is actually quite unlikely to happen. It’ll happen maybe a handful of times in your lifetime if you’re unlucky.

Disk failure is actually also not all that likely (maybe once every decade or so), but still quite a bit more likely than corruption.

Just because it’s rare doesn’t mean it never happens or that you shouldn’t protect yourself against it though. You don’t want to be caught with your pants down when it does actually happen.

My primary point, however, is that backups are sufficient to protect against this hazard, and they also protect you against quite a few others. There are many such hazards, and a hard drive failing isn't even the most likely among them (that'd be user error).
If you care about data security first and foremost, you should therefore prioritise more backups over downtime mitigation technologies such as RAID.

2 points

Can you explain this to me better?

I need to work on my data storage solution, and I knew about bit rot but thought the only solution was something like a ZFS pool.

How do I go about manually detecting bit rot? Assuming I had perfect backups to replace the rotted files.

Is a ZFS pool really that inefficient space-wise?

2 points

Sure :)

> I knew about bit rot but thought the only solution was something like a ZFS pool.

Right. There are other ways of doing this, but a checksumming filesystem such as ZFS or btrfs (or bcachefs if you're feeling adventurous) is the best way to do it generically, and it can also be used in combination with other methods.

What you generally need in order to detect corruption on an abstract level is some sort of "integrity record" which can determine whether a set of data is in an expected or an unexpected state. The difficulty is keeping that record up to date with the actually expected changes to the data.
The filesystem sits in a very good place to implement this because it handles all such "expected changes"; executing them on behalf of the running processes is its purpose.
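
To make the "other ways" mentioned above concrete: you can keep such an integrity record yourself in userspace, on any filesystem. A rough sketch (the manifest location and the choice of SHA-256 are just illustrative assumptions; unlike a checksumming filesystem, this can't tell bit rot apart from intentional edits, so you have to rebuild the record after expected changes):

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("/var/lib/integrity/manifest.json")  # hypothetical location

def file_hash(path: Path) -> str:
    """Hash a file in chunks so large files don't have to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def build_record(root: Path) -> None:
    """(Re)create the integrity record; rerun this after intentional changes."""
    record = {str(p): file_hash(p) for p in sorted(root.rglob("*")) if p.is_file()}
    MANIFEST.parent.mkdir(parents=True, exist_ok=True)
    MANIFEST.write_text(json.dumps(record, indent=2))

def verify_record() -> list[str]:
    """Return every recorded file that is now missing or no longer matches."""
    record = json.loads(MANIFEST.read_text())
    return [p for p, digest in record.items()
            if not Path(p).is_file() or file_hash(Path(p)) != digest]
```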

Filesystems like ZFS and btrfs implement this integrity record in the form of hashes of smaller portions of each file's data ("extents"). The hash for each extent is stored in the filesystem metadata. When any part of a file is read, the extents that make up that part of the file are each hashed and the results are compared with the hashes stored in the metadata. If a hash matches, all is good and the read succeeds; if it doesn't, the read fails and the application reading that portion of the file gets an IO error that it needs to handle.
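
If it helps to see that mechanism spelled out, here's a toy model of verify-on-read in Python; the 64 KiB extent size and the in-memory lists standing in for the filesystem metadata are purely illustrative:

```python
import hashlib

EXTENT_SIZE = 64 * 1024  # illustrative; real filesystems pick their own sizes

class ChecksummedFile:
    """Toy model: hashes are recorded when data is written, checked on every read."""

    def __init__(self) -> None:
        self.extents: list[bytes] = []        # the data, split into extents
        self.extent_hashes: list[bytes] = []  # the "metadata": one hash per extent

    def write(self, data: bytes) -> None:
        for i in range(0, len(data), EXTENT_SIZE):
            extent = data[i:i + EXTENT_SIZE]
            self.extents.append(extent)
            self.extent_hashes.append(hashlib.sha256(extent).digest())

    def read(self) -> bytes:
        out = []
        for extent, expected in zip(self.extents, self.extent_hashes):
            if hashlib.sha256(extent).digest() != expected:
                # This is the IO error the reading application has to handle.
                raise IOError("checksum mismatch: extent is corrupted")
            out.append(extent)
        return b"".join(out)

f = ChecksummedFile()
f.write(b"A" * 200_000)
f.extents[1] = b"B" + f.extents[1][1:]  # simulate a changed byte on disk
f.read()                                # raises IOError instead of returning bad data
```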

Note how there was never any second disk involved in this. You can do all of this on a single disk.

Now to your next question:

> How do I go about manually detecting bit rot?

In order to detect whether any given file is corrupted, you simply read back that file's content. If you get an error due to a hash mismatch, it's bad; if you don't, it's good. It's quite simple, really.

You can then simply expand that process to all the files in your filesystem to see whether any of them have gotten corrupted. You could do this manually by just reading every file in your filesystem once and reporting errors, but those filesystems usually provide a ready-made tool for that with tighter integration into the filesystem code. The conventional name for this process is a "scrub".
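
On ZFS that ready-made tool is zpool scrub; on btrfs it's btrfs scrub start. The manual version is roughly the following sketch, which just reads everything and relies on the filesystem to fail reads of corrupted extents:

```python
from pathlib import Path

def poor_mans_scrub(root: Path) -> list[Path]:
    """Read every file under root once and collect the ones whose reads fail.

    On a checksumming filesystem a corrupted extent turns into a read error,
    so the failures are exactly the corrupted files.
    """
    corrupted = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            with path.open("rb") as f:
                while f.read(1024 * 1024):
                    pass
        except OSError:
            corrupted.append(path)
    return corrupted
```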

> How do I go about manually detecting bit rot? Assuming I had perfect backups to replace the rotted files.

You let the filesystem-specific scrub run and it will report every file that contains corrupted data.

Now that you know which files are corrupted, you simply replace those files from your backup.

Done; no more corrupted files.
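
For the "replace those files from your backup" step, assuming the backup is a plain mirrored copy of the tree (with restic you'd instead point restic restore at the affected paths), something like this is all it takes:

```python
import shutil
from pathlib import Path

def restore_from_backup(corrupted: list[Path], live_root: Path, backup_root: Path) -> None:
    """Overwrite each corrupted file with the intact copy from the backup tree."""
    for path in corrupted:
        relative = path.relative_to(live_root)
        shutil.copy2(backup_root / relative, path)
```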

> Is a ZFS pool really that inefficient space-wise?

Not a ZFS pool per se, but redundant RAID in general. And by "incredibly costly" I mean costly for the purpose of immediately restoring data rather than doing it manually.

There actually are use cases for automatic immediate repair but, in a home lab setting, it's usually totally acceptable for a service to be down for a few hours until you, e.g., get back from work to restore some file from backup.

It should also be noted that corruption is exceedingly rare. You will encounter it at some point, which is why you should protect yourself against it, but it's not like this will happen every few months; it's closer to the order of every few decades.

To answer your original question directly: no, ZFS pools themselves are not inefficient, as they can also be used on a single disk or in a non-redundant striping manner (similar to RAID 0). They're just the abstraction layer at which you have the choice of whether to make use of redundancy or not, and it's redundancy that can be wasteful depending on your purpose.

3 points

It’s up to you. Things to consider:

  • Size of data
  • Recovery speed (Internet speed)
  • Recovery time objective
  • Recovery point objective (If you’re backing up once per day, is it okay to lose 23 hours of data when a disk fails?)

If your recovery objectives can be met with the anticipated data size and recovery speed (there's a rough restore-time estimate sketched below), then you could do RAID 0 instead of RAID 1 to get higher speeds and capacity. Just know that if you do that, you'd better be on top of your backups, because they will be needed eventually.
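
To make the recovery-speed point concrete, a quick back-of-the-envelope estimate (the 100 Mbit/s downlink is just an assumed example; OP mentioned roughly 1 TB of backups):

```python
# Rough restore time after losing a disk with no RAID to fall back on:
data_size_bytes = 1 * 10**12       # ~1 TB of backups
downlink_bits_per_s = 100 * 10**6  # assumed 100 Mbit/s internet connection

hours = data_size_bytes * 8 / downlink_bits_per_s / 3600
print(f"Full restore from S3 takes roughly {hours:.0f} hours")  # ~22 hours
```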

0 points

RAID is a great backup alternative.

/s

0 points

Depends: how much do you value your data? Is it all DVD rips where you still have the DVDs? Nah, you don't really need RAID. Are they precious family photos where your only backup copy is S3? Yeah, I'd use RAID for that, plus have a second copy stored elsewhere.

Plus, as others have mentioned, there are checks on your data for bit rot, which absolutely does happen.

2 points

RAID does not protect your data, it protects data uptime.

RAID cannot ensure integrity (i.e. bit rot protection). Its one and only purpose is to mitigate downtime.


ZFS or other software RAIDs can, though. Does anyone still use hardware RAID anyway?

1 point

ZFS's and btrfs's integrity checks are entirely independent of whether you have redundancy or not. You don't need any sort of RAID to get that; they also work on a single disk.
The only thing that redundancy provides you here is immediate automatic repair if corruption is found. I’ve written about why that isn’t as great as it sounds in another reply already.

Most other software RAID cannot and does not protect integrity. It couldn't; there's no hashing. Data verification is extremely annoying to implement at the block level and has massive performance gotchas, so you wouldn't want that even if you could have it.

