Not exactly self-hosting, but maintaining and backing it up is hard for me. So many “what if”s come to mind. What if the DB gets corrupted? What if the device breaks? If it’s on a cloud provider, what if they decide to remove the server?

I need a local server and a remote one that are synced so I can confidently self-host things, and setting this up is a hassle I don’t want to take on.

So my question is: how safe is your setup? Are you still enthusiastic about it?

6 points

Absurdly safe.

Proxmox cluster, HA active. Ceph for live data. TrueNAS for long-term/slow data.

About 600 pounds of batteries at the bottom of the rack to weather short power outages (up to 5 hours). 2 dedicated breakers on different phases of power.

Dual/stacked switches with LACP’d connections that span both switches (one switch dies? Who cares). Dual firewalls with CARP in an active/active configuration…

Basically everything is as redundant as it can be, aside from one power source into the house… and one internet connection into the house. My “single points of failure” are all outside of my hands… and are all mitigated/risk-assessed down.

I do not use cloud anything… putting even 1/10th of my shit onto the cloud would be thousands a month.

5 points

It’s quite robust, but it looks like everything will be destroyed when your server room burns down :)

1 point

Fire extinguisher is in the garage… literal feet from the server. But that specific problem is actually being addressed soon. My dad is setting up his own cluster, and I fronted him about half the capacity I have. I intend to sync long-term/slow storage to his box (the TrueNAS box is the Proxmox Backup Server target, so it also collects the backups and puts a copy offsite).

Slow process… Working on it :) Still have to maintain my normal job after all.

Edit: another possible mitigation I’ve seriously thought about for “fire” are things like these…

https://hsewatch.com/automatic-fire-extinguisher/

Or those types of modules that some 3d printer people use to automatically handle fires…

2 points

Yeah, I really like the “parent backup” strategy from @hperrin@lemmy.world :) It costs much less this way.

0 points

Different phases of power? Did you have 3-phase run to your house or something?

You could get a Starlink for a redundant internet connection. Load balancing / failover is an interesting challenge if you like to DIY.

0 points

Nope, 240V split-phase. I have 2x 120V legs.

I actually had Verizon home internet (5G LTE) to do that… but I need static addresses for some services. I’m still working that out a bit…

1 point

Couldn’t you use a VPS as the public entry point?
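One common shape for that (an assumption on my part, not a description of anyone’s actual setup here) is a WireGuard tunnel from the home server up to the VPS, with the VPS DNAT’ing traffic arriving on its static IP down the tunnel. Addresses, keys, and ports below are placeholders:

```ini
# /etc/wireguard/wg0.conf on the VPS (illustrative sketch, not real config)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>
# forward inbound HTTPS on the VPS's static IP to the home server's tunnel IP
PostUp = iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2
PostUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
PostDown = iptables -t nat -F

[Peer]
PublicKey = <home-server-public-key>
AllowedIPs = 10.0.0.2/32
```

The home side just keeps a persistent peer connection to the VPS, so the home IP can change freely while the public entry point stays static.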

2 points

Absurdly safe.

[…] Ceph

For me, these two things are mutually exclusive. I had nothing but trouble with Ceph.

1 point

It depends on how you set it up. Most people do it wrong.

2 points

Ceph has been FANTASTIC for me. I’ve done the dumbest shit to try and break it and have had great success recovering every time.

The key in my experience is OODLES of bandwidth. It LOVES fat pipes. In my case, 2x 40Gbps links on all 5 servers.

2 points

You should edit your post to make this sound simple.

“just a casual self hoster with no single point of failure”

2 points

Nah, that’d be mean. It isn’t “simple” by any stretch; it’s an aggregation of a lot of hours put into it. What’s fun is that once it gets that big, you start putting tools together to do a lot of the work/diagnosing for you. A good chunk of those tools have made it into production at my companies too.

LibreNMS to tell me what died and when… Wazuh to monitor most of the security aspects of it all. I have a Gitea instance with my own repos for scripts when maintenance time comes. Centralized scripts plus a cron stub on the containers/VMs mean you can update all your stuff in one go.
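A minimal sketch of that cron-stub pattern (the repo path and `update.sh` entry point are my illustrative assumptions, not the actual setup): every guest keeps a checkout of the central scripts repo, and the stub just pulls and runs whatever is current.

```shell
#!/bin/sh
# Hypothetical maintenance stub (saved as e.g. /usr/local/sbin/maint-stub).
# run_maintenance REPO_DIR: sync scripts from the central Gitea repo, then
# run the repo's updater. Names here are assumptions for illustration.
run_maintenance() {
  repo_dir="$1"
  git -C "$repo_dir" pull --ff-only || return 1  # fetch latest scripts
  sh "$repo_dir/update.sh"                       # run whatever the repo defines
}
# Each container/VM then needs only one crontab line, e.g.:
#   0 4 * * * /usr/local/sbin/maint-stub /opt/maint >>/var/log/maint.log 2>&1
```

The nice property is that changing maintenance behaviour means one commit to the central repo, not touching every guest.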

1 point

What does your internal network look like for ceph?

2 points

40 SSDs as my OSDs… 5 hosts… all nodes run all functions (monitor/manager/metadata servers); if I added more servers I would not add any more of those. (I do have 3 more servers for “parts”/spares… but could turn them on too if I really wanted to.)

2x 40gbps networking for each server.

Since upstream internet is only 8Gbps, I let some VMs use that bandwidth too… but that doesn’t eat into it enough to starve Ceph at all. There’s 2x 1Gbps for all the normal internet-facing services (which also acts as an innate rate limiter for those services).

0 points

My setup is pretty safe. Every day it copies the root file system to its RAID, into folders named after the day of the week, so I always have 7 days of root fs backups. From there, I manually back up the RAID to a PC at my parents’ house every few days. That backup is started from the remote PC, so if any sort of malware infects my server, it can’t infect the backups.

1 point

Pretty solid backup strategy :) I like it.

1 point

Off-site backups that are still local is brilliant.

2 points

I feel like there are better ways to do this, but it isn’t terrible.

Better than losing it all

0 points

I got tired of having to learn new things. The last straw was a reverse proxy that I didn’t want to configure and maintain. I decided that life is short and now just use Samba to serve media as files. One lighttpd server hosts my favourite movies so I can watch them from anywhere. The rest I moved to free online services or apps that sync across mobile and desktop.

0 points

A reverse proxy is actually super easy with nginx. I have an nginx server at the front of my setup doing the reverse proxying, and an Apache server hosting some of the applications being proxied.

Basically 3 main steps:

  • Set up DNS records with your host for each subdomain.

  • Set up port forwarding on your router for each port.

  • Set up nginx to proxy each subdomain to its port.

DreamHost lets me manage all the records I want. I point them all at the same IP as my server.

This is my config file:

server {
    listen 80;
    listen [::]:80;

    server_name photos.my_website_domain.net;

    location / {
        proxy_pass http://127.0.0.1:2342;
        include proxy_params;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name media.my_website_domain.net;

    location / {
        proxy_pass http://127.0.0.1:8096;
        include proxy_params;
    }
}

And then I have dockers running on those ports.

root@website:~$ sudo docker ps
CONTAINER ID   IMAGE                          COMMAND                  CREATED       STATUS       PORTS                                                      NAMES
e18157d11eda   photoprism/photoprism:latest   "/scripts/entrypoint…"   4 weeks ago   Up 4 weeks   0.0.0.0:2342->2342/tcp, :::2342->2342/tcp, 2442-2443/tcp   photoprism-photoprism-1
b44e8a6fbc01   mariadb:11                     "docker-entrypoint.s…"   4 weeks ago   Up 4 weeks   3306/tcp                                                   photoprism-mariadb-1

So if you go to photos.my_website_domain.net, DNS resolves it to the same IP as my_website_domain.net. My nginx server kicks in, sees the ‘photos’ subdomain in the request, and proxies you to port 2342 on the server itself (effectively http://my_website_domain.net:2342), my PhotoPrism server. So you could use http://my_website_domain.net:2342 or http://photos.my_website_domain.net. Either one works; the reverse proxy just does the shortcut.

Hope that helps!

2 points

🤷‍♂️ I could spend those two hours with my kids.

You aren’t wrong, but as a community I think we should be listening carefully to the pain points and thinking about how we could make them better.

0 points

Fuck nginx and fuck its configuration file with an AIDS-ridden spoon. It’s anything but easy if you want anything other than the default config for the app you want to serve.

1 point

I only use it for reverse proxies. I still find Apache easier for web serving, but terrible for setting up reverse proxies. So I use the advantages of each one.

3 points

I had a pretty decent self-hosted setup that was working locally. The whole project failed because I couldn’t set up a reverse proxy with nginx.

I am no pro, very far from it, but I am also somewhat OK with Linux and technical research. I just couldn’t get nginx and reverse proxies working, and it wasn’t clear where to ask for help.

2 points

I updated my comment above with some more details now that I’m not on lunch.

1 point

Unfortunately, I feel the same. From what I’ve observed from the commenters here, self-hosting that won’t break seems very expensive and laborious.

2 points

Caddy took an afternoon to figure out and set up, and it does your certs for you.
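For comparison, the nginx example elsewhere in this thread would look roughly like this as a Caddyfile (hypothetical domains carried over from that example; Caddy obtains and renews the TLS certificates automatically):

```caddyfile
# Hypothetical Caddyfile sketch; one site block per subdomain
photos.my_website_domain.net {
    reverse_proxy 127.0.0.1:2342
}

media.my_website_domain.net {
    reverse_proxy 127.0.0.1:8096
}
```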

0 points

¯\_(ツ)_/¯ Yeah. It is kinda hard.

Backups. First and foremost.

Once that is sorted: what if your DB gets corrupted? You test your backups.

Learn how to verify and restore.

It is a hassle. That’s why there is a constant back and forth between on-prem and cloud in the enterprise.
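A bare-bones way to exercise “verify and restore” (paths and the tar format here are placeholder assumptions): restore the backup into a scratch directory and diff it against the live copy; a zero exit means the backup round-trips.

```shell
# verify_backup ARCHIVE LIVE_DIR: restore ARCHIVE to a scratch dir and
# compare it against LIVE_DIR. Non-zero exit if anything differs or is
# missing. Illustrative sketch only.
verify_backup() {
  archive="$1"; live="$2"
  scratch="$(mktemp -d)"
  tar -xf "$archive" -C "$scratch" || return 1  # the actual restore step
  diff -r "$scratch" "$live"                    # non-zero on any mismatch
}
```

Running something like this on a schedule is a cheap stand-in for a full restore drill, though it doesn’t replace periodically rebuilding a service from the backup end to end.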

1 point

Nothing proves a backup like forcing yourself to simulate a recovery! I like to make one setting change, take a backup, then delete everything and try to rebuild from scratch, to see if I can do it and to prove the setting change is still there.

1 point

We do a quarterly test.

I have the DB guy make a change, then I nuke it and ensure I can restore it.

For us, it’s Veeam. I don’t work for Veeam, and while I don’t like their licensing, it’s pretty good.

HTH

16 points

Right now I just play with things at a level that I don’t care if they pop out of existence tomorrow.

If you want to be truly safe (at an individual level, not an institutional level where there’s someone with an interest in fucking your stuff up), you need to make sure things are recoverable unless 3 completely separate things go wrong at the same time (an outage at a remote data centre, your server failing, and your local backup failing). It’s very unlikely for all 3 to happen simultaneously, but 1 is likely to fail and 2 is foreseeable, so you can fix things before the 3rd also fails.

6 points

Exactly. Right there with you on the not worrying. Getting started can be brutal, so I always recommend people start without worrying about it and be okay with the idea that they’re going to lose everything.

When you start really understanding how the tech works, then start playing with backups and how to recover. By that time you’ve probably set up enough that you’re ready for a solution that doesn’t require setting everything up again. When you’re starting, though? Getting it up and running is enough.

3 points

Gonna just stream of consciousness some stuff here:

Been thinking lately, especially as I’ve been self-hosting more, about how much of the work is just managing data on disk.

Which disk? Where does it live? How does the data transit from here to there? Why isn’t the data moving properly?

I am not sure what this means, but it makes me feel like we are missing some important ideas around data management at personal scale.


Selfhosted

!selfhosted@lemmy.world
