Hi all!
I will soon acquire a pretty beefy unit compared to my current setup (a 3-node server, each node with 16C, 512G RAM and 32T storage).
Currently I run TrueNAS and Proxmox on bare metal and most of my storage is made available to apps via SSHFS or NFS.
I recently started looking for “modern” distributed filesystems and found some interesting S3-like/compatible projects.
To name a few:
- MinIO
- SeaweedFS
- Garage
- GlusterFS
I like the idea of abstracting the filesystem to allow me to move data around, play with redundancy and balancing, etc.
My most important services are:
- Plex (Media management/sharing)
- Stash (Like Plex 🙃)
- Nextcloud
- Caddy with Adguard Home and Unbound DNS
- Most of the Arr suite
- Git, Wiki, File/Link sharing services
As you can see, a lot of downloading/streaming/torrenting of files across services. Smaller services are on a Docker VM on Proxmox.
Currently the setup is messy due to the organic evolution of my setup, but since I will upgrade on brand new metal, I was looking for suggestions on the pillars.
So far, I am considering installing a Proxmox cluster with the 3 nodes and host VMs for the heavy stuff and a Docker VM.
How do you see the file storage portion? Should I take a full or partial plunge into S3-compatible object storage? What architecture/tech would be interesting to experiment with?
Or should I stick with tried-and-true, boring solutions like NFS Shares?
Thank you for your suggestions!
“Boring”? I’d be more interested in what works without causing problems. NFS is bulletproof.
You are 100% right; I meant that for the homelab as a whole. I do it for self-hosting purposes, but the journey itself is a hobby of mine.
So exploring more experimental technologies would be a plus for me.
Most of the things you listed require some very specific constraints to even work, let alone work well. If you’re working with just a few machines, no storage array or high bandwidth networking, I’d just stick with NFS.
As a recently-former HPC/supercomputer dork: NFS scales really well. All this talk of encryption is weird; you normally do that at the link layer if you're worried about security between systems. That, plus v4 to reduce some of the metadata chattiness, and you're good to go. I've tried scaling Ceph and S3 for latency on 100/200G links, and NFS is by far the easiest of the lot to scale. For a homelab? NFS and call it a day. All the clustered filesystems will make you do a lot more work than just throwing "hard" into your NFS mount options and letting clients block I/O while you reboot, which for home is probably easiest.
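Concretely, a minimal fstab sketch of what "hard + v4" looks like (hostname, export path and mountpoint are placeholders; tune timeouts to taste):

    # /etc/fstab on the client: NFSv4.2, hard mount so clients block I/O
    # instead of corrupting writes while the server reboots
    nas.lan:/export/media  /mnt/media  nfs4  hard,nfsvers=4.2,proto=tcp,timeo=600,_netdev  0  0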
NFS is bulletproof.
For it to be bulletproof, it would help if it came with security built in. Kerberos is a complex mess.
I’d only use sshfs if there’s no other alternative. Like if you had to copy over a slow internet link and sync wasn’t available.
NFS is fine for local network filesystems. I use it everywhere and it's great. Learn to use autofs and NFS is just automatic everywhere you need it.
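A minimal sketch of what that looks like (server name, exports and mountpoints are placeholders):

    # /etc/auto.master — hand the /mnt/nfs tree over to the automounter
    /mnt/nfs  /etc/auto.nfs  --timeout=300

    # /etc/auto.nfs — shares mount on first access and unmount when idle
    media    -fstype=nfs4,rw  nas.lan:/export/media
    backups  -fstype=nfs4,ro  nas.lan:/export/backups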
What’s wrong with NFS? It is performant and simple.
By default it's unencrypted and unauthenticated, and permissions rely on user IDs the client can fake.
That may or may not be a problem in practice; one should think about their personal threat model.
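If you do stay on plain NFS, the usual mitigations are squashing and restricting exports to the storage subnet, something like (subnet and paths are made up):

    # /etc/exports — only the storage VLAN can mount, root is squashed,
    # and the media share maps every client to one unprivileged user
    /export/media    10.0.10.0/24(ro,all_squash,anonuid=3000,anongid=3000)
    /export/backups  10.0.10.0/24(rw,root_squash,sec=sys)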
Mine are read-only and unauthenticated because they're just media files, but I did add (unneeded) encryption via kTLS because it wasn't too hard to add (I already had a valid certificate to reuse).
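For anyone who wants to try the same, roughly what it looked like on my side, from memory, on a recent kernel with ktls-utils/tlshd running on both ends; the exact option names may differ with your nfs-utils and kernel versions:

    # server /etc/exports — allow TLS-protected mounts from the storage subnet
    /export/media  10.0.10.0/24(ro,xprtsec=tls)

    # client side
    mount -t nfs4 -o xprtsec=tls nas.lan:/export/media /mnt/media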
NFS is good for hypervisor-level storage. If someone compromises the host system, you are in trouble.
If someone compromises the host system you are in trouble.
Not only the host. You have to trust every client to behave, as @forbiddenlake already mentioned, NFS relies on IDs that clients can easily fake to pretend they are someone else. Without rolling out all the Kerberos stuff, there really is no security when it comes to NFS.
NFS is fine if you can lock it down at the network level, but otherwise it’s Not For Security.
NFS + Kerberos?
But everything I read about NFS says you deploy it on a dedicated storage LAN, not on your usual networking LAN.
It is a pain to figure out how to give everyone the same user ID. I only have a couple of computers at home, and I've never figured out how to make LDAP work (including for laptops which might not have network access when I'm on the road). Worse, some systems start user IDs at 1000, some at 1001. NFS is a real mess - but I use it because I haven't found anything better for Unix.
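For anyone wondering, keeping the IDs in sync by hand boils down to pinning them at account creation on every box, something like this (names and numbers are just examples):

    # same explicit UID/GID on every machine, so NFS ownership lines up
    groupadd -g 2000 media
    useradd  -u 2000 -g 2000 -m alice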
sshfs is somewhat unmaintained; only "high-impact issues" are being addressed: https://github.com/libfuse/sshfs
I would go for NFS.
But NFS has mediocre snapshotting capabilities (unless his setup also includes >10g nics)
I assume you are referring to Filesystem Snapshotting? For what reason do you want to do that on the client and not on the FS host?
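On the FS host (TrueNAS is ZFS-backed) a snapshot is a near-instant copy-on-write operation that never touches the network, e.g.:

    # on the storage host: instant, no traffic over the NFS link
    zfs snapshot tank/vmdata@before-upgrade

Anything you snapshot from the client side instead has to push its work through the NFS mount.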
I have my NFS storage mounted via 2.5G and use qcow2 disks. It is slow to snapshot…
Maybe I'm misunderstanding your question?
Gluster is really bad; Garage and MinIO are great. If you want something tested and insanely powerful, go with Ceph, it has everything. Garage is fine for smaller installations, but it's very new and not that stable yet.
go with ceph[:] it has everything
I heard running an object store as a filesystem was considered risky, but that’s not why it sometimes hoses your storage.
Darn, Garage is the only one I managed to successfully deploy a test cluster with.
I will dive more carefully into Ceph, the documentation is a bit heavy, but if the effort is worth it…
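Since these will be Proxmox nodes anyway, I guess I'd start from the built-in tooling rather than raw cephadm; from a quick skim of the docs the bootstrap looks roughly like this (cluster network and devices are placeholders, and I may have the command names slightly off):

    # on each node, after the Proxmox cluster is formed
    pveceph install

    # once, on the first node: dedicated network for Ceph traffic
    pveceph init --network 10.0.20.0/24
    pveceph mgr create

    # on each node: a monitor and the local OSD disks
    pveceph mon create
    pveceph osd create /dev/sdb

    # once: a replicated pool to back VM disks
    pveceph pool create vmdata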
Thanks.
I had a great experience with Garage at first, but it crapped itself after a month. That was about half a year ago and the problem has since been fixed, but it still left me with a bit of anxiety.