Especially a server accessible only by SSH…
I can’t be bothered to walk down to the basement, so for all practical purposes my server is also only accessible by SSH.
I’m at college right now, which is a 3-hour drive away from my home, where a server of mine is. I just have to ask my parents to turn it back on when the power goes out or it gets borked. I access it solely through RustDesk and Cloudflare Tunnel SSH (it’s actually pretty cool, they have a web interface for it).
I have no car, so there’s really no way to access it in case something catastrophic happens. I have to rely on hopes, prayers, and the power of a probably outdated Pop!_OS install. Totally doesn’t stress me out; I’ll just say I like to live on the edge :^)
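For anyone curious, the Cloudflare Tunnel side of that is roughly the sketch below. The tunnel name, hostname, and paths are made-up placeholders, and the browser-based terminal is a separate toggle on the Access application in the Zero Trust dashboard, not something in this file.

    # /etc/cloudflared/config.yml on the server (placeholder names throughout)
    tunnel: home-server
    credentials-file: /etc/cloudflared/home-server.json
    ingress:
      # Route the SSH hostname to the local sshd
      - hostname: ssh.example.com
        service: ssh://localhost:22
      # cloudflared requires a catch-all rule at the end
      - service: http_status:404

    # ~/.ssh/config on the laptop, so a plain "ssh home" goes through the tunnel
    Host home
        HostName ssh.example.com
        ProxyCommand cloudflared access ssh --hostname %h

With that in place, running the tunnel on the server keeps SSH reachable without any ports open to the internet.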
Set up a PiKVM as IPMI and you’ll at least need another layer of failure before you completely lose connectivity.
Currently the server(s) are in my room, which is so messy my dad probably wouldn’t even enter it voluntarily. And in case grub/fstab/crypttab/etc. get messed up, which is probably the most common error, he probably couldn’t fix it by himself. Soon everything’s gonna live in its own little room in the basement, so it’ll actually be easier to access.
In the old days some of the servers took an hour to reboot. That was stressful when you still couldn’t ping it at the hour mark.
The more disk you had, the longer it took: it walked the SCSI bus, which took forever, so more disks meant an even longer wait.
Since everything was remote, you’d have to call remote hands, and they weren’t technical. Also no cameras, since it was the ’90s.
Now when I restart a VM or container, I panic if it’s not back up in 10 minutes.
I like how POSTing got fairly fast. Then we started putting absurd amounts of RAM into servers, so now they’re back to slow.
Like, we have a high-clock-speed dual 32-core AMD server with 1 TB of RAM that takes at least 5 minutes just to do its RAM check. So every time you need to reboot you’re sitting there twiddling your thumbs, waiting anxiously.
I will date myself: these machines had a lot of memory as well, which added to the slow reboot. I think it was 16 gigs.
The r series from IBM took forever. The p series was faster, but still slow.
Initializing VPC…
Configuring VPC…
Constructing VPC…
Planning VPC…
VPC Configuration…
Step (31/12)…
Spooling up VPC…
VPC Configuration Finished…
Beginning Declaration of VPC…
Declaring Configuration of VPC…
Submitting Paperwork for VPC Registration with IANA…
Redefining Port 22 for official use as our private VPC…
Recompiling OpenSSH to use Port 125…
Resetting all open SSH connections…
Your VPC declaration has been configured!
Initializing Declared VPC…
Never update, never reboot. Clearly the safest method. Tried and true.
When you make a potentially system-breaking change and realize you forgot to make a snapshot of the VM beforehand…
Someone set up a script to automatically create daily backups to tape. Unfortunately, it’s still the same tape that was put in 3.5 years ago, and every backup since that tape filled up has failed. It might as well have failed silently, because everyone who received the error emails filtered them to a folder they generally ignored.
And no one ever tried to restore it.
Happened to me as well: after a year I learned our incremental DB backups were wrongly offset by the GMT difference, so we were losing hours every time. Fun.
Luckily we never needed them.
And now we have Postgres with WAL archiving and I sleep so much better.
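For reference, the archiving side of that is roughly this minimal sketch; the /mnt/backup paths are made up, and in practice a tool like pgBackRest or WAL-G is nicer than a raw cp in archive_command.

    # postgresql.conf
    wal_level = replica          # enough WAL detail for archiving / point-in-time recovery
    archive_mode = on
    # %p = path to the finished WAL segment, %f = its file name;
    # the test guards against overwriting a segment that was already archived
    archive_command = 'test ! -f /mnt/backup/wal/%f && cp %p /mnt/backup/wal/%f'

    # plus a periodic base backup to restore from, e.g.:
    #   pg_basebackup -D /mnt/backup/base -Ft -z -P

And unlike the tape script above, a restore drill every now and then is what actually lets you sleep.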