Hello! 😀
I want to share my thoughts on Docker and maybe discuss it!
A few months ago I started my homelab, and like any good “homelabbing guy” I absolutely loved using Docker: simple to deploy and everything. Sadly, these days my mind is changing… I recently switched to LXC containers to make backups easier, and the experience is pretty great. The only downside is that not every piece of software is available natively outside of Docker 🙃
But I also switched to have more control, since Docker can make it difficult to set up things the devs didn’t really plan for.
So here are my thoughts: I’m slowly going to leave Docker for a more old-school way of hosting services. Don’t get me wrong, Docker is awesome in some use cases; the main ones being that it’s really portable and simple to deploy, with no hundreds of dependencies, etc. With this I think I’ve finally figured out where Docker is really useful: not in every single homelab setup, and not in mine.

Maybe I’m doing something wrong, but I’ll let you discuss that in the comments. Thanks!

10 points

Honestly, after using Docker and containerization for more than a decade, my home setups are just YunoHost or bare metal (a small Pi) with some periodic backups. I care more about my own time now than my home setup, and I want things to just be stable. It’s been good for a couple of years now, with nothing needed beyond some quick updates. You don’t have to deal with infra changes on updates, you don’t have to deal with slowdowns; everything works pretty well.

At work it’s different: Docker, Kubernetes, etc. are awesome because they can deal gracefully with dependencies, multiple deploys per day, and large infra. But I’ll be the first to admit that takes a bit more manpower, and monitoring systems that are much better than a small home setup’s.

3 points

Yeah, I think that in the end, even if it seems a bit “retro”, a “normal install” with periodic backups/updates on a default VM (or even LXC containers) is the best option: the most stable and configurable.

1 point

Do you use any sort of RAID? Recently I’ve been using an old SSD, but back 9-ish years ago I used to back everything up with a RAID system; it took too much time to keep up, though.

4 points

I have a RAID 1 on the Proxmox host to back up VMs and their data.

1 point

How is it more stable or configurable? I have Docker containers running a daily off-site backup of the folder where all the data lives, and I also back up the whole container daily on-site. I have found it so easy. I admit it was a pain to learn, but after everything was moved over it has been easier.

5 points

I tend to agree with your opinion, but lately YunoHost has had quite a few broken apps; they’re not very fast on updates and don’t have many active developers. Hats off to them though, because they’re doing the best they can!

4 points

I have to agree, the community seems to come and go. Some apps have daily updates and some have been updated only once. If I were to start a new server, I would probably still pick YunoHost, but skip some of the older apps and deploy them as one-offs. The Lemmy one, for example, is stuck on a VERY old version. However, the GoToSocial app is updated every time there is an update in the main repo.

Still super good support for something that is free and open source. Stable too :) but sometimes stability means old.

4 points

I haven’t really tried YunoHost. Is it basically a simple self-hostable cloud server?

6 points

I can recommend NixOS. It’s quite simple if the application you want is already packaged for NixOS. Otherwise, it requires quite some knowledge to get it to work.

1 point

One day I will try it; this project seems interesting!

15 points

Yeah, it’s either 4 lines and you’ve got some service running… or you need to learn a functional language, fight the software project to make it behave on an immutable filesystem, and google 2 pages of boilerplate code to package it… I rarely had anything in between. 😆
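
For context, the “4 lines” happy path really can be that short. A hedged sketch of what such a NixOS config fragment might look like (nginx is an arbitrary example, not something from this thread):

```nix
# Fragment of /etc/nixos/configuration.nix — the happy path:
{ config, pkgs, ... }:
{
  services.nginx.enable = true;                      # pulls in and wires up the service
  networking.firewall.allowedTCPPorts = [ 80 443 ];  # open the ports it serves on
}
```

After `nixos-rebuild switch`, the service is installed, configured, and started declaratively. The pain starts when the software isn’t in nixpkgs yet.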

7 points

Hey now, you can also spend 20 pages of documentation and 10 pages of blogs/forums/GitHub¹, and implement a whole Nix module such that you only need to write a further 3 lines to activate the service.

¹ Your brain can have a little source code, as a threat.

0 points

NixOS is a piece of shit if you want to do anything that isn’t already packaged for NixOS. Even trying to do normal things like running scripts is horrible in NixOS. I like the idea, but the execution needs work.

30 points

It’s hard for me to tell if I’m just set in my ways according to the way I used to do it, but I feel exactly the same.

I think Docker started as “we’re doing things at massive scale, and we need to have a way to spin up new installations automatically and reliably.” That was good.

It’s now become “if I automate the installation of my software, it doesn’t matter that the whole thing is a teetering mess of dependencies and scripted hacks, because it’ll all be hidden inside the container, and also people with no real understanding can just push the button and deploy it.”

I forced myself to learn how to use Docker to install a few things, found it incredibly hard to do anything of consequence to the software inside the container, and for my use case it added extra complexity for no reason, so I mostly abandoned it.

2 points

I agree; Docker can be simple, but it can be a real pain too. Good old scripts are the way to go in my opinion, but I kinda like LXC containers compared to Docker. The principle of containerization is surely great, but maybe not the way Docker does it… (maybe Distrobox could be good too 🤷)

Docker is absolutely good when you have to scale your environment, but I think you should build your own images rather than use prebuilt ones.
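
For the record, a minimal sketch of what “build your own image” can look like — the service, file names, and base image here are illustrative, not from the thread:

```dockerfile
# Hypothetical small Python service; pin the base image rather than trusting :latest
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Drop root before running
RUN useradd --create-home appuser
USER appuser
CMD ["python", "main.py"]
```

`docker build -t myservice:1.0 .` then gives you an image whose contents you actually chose, instead of whatever a prebuilt image ships with.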

10 points

I hate how docker made it so that a lot of projects only have docker as the official way to install the software.

This is my tinfoil-hat opinion, but to me Docker seems to enable the “phone-ification” (for lack of a better term) of software. The upside is that it’s more accessible to spin up services on a home server. The downside is that we are losing the knowledge of how the different parts of the software work together.

I really like the Turnkey Linux projects. It’s like the best of both worlds: you deploy a container, and a script sets up the container for you, but after that you have full control over the software, like when you install the binaries yourself.

11 points

I hate how docker made it so that a lot of projects only have docker as the official way to install the software.

Just so we’re clear: this is not Docker’s fault. The projects chose Docker as a distribution method, most likely because it’s as widespread and well-known as it is. It’s simply a way to reach more users without spreading themselves too thin.

4 points

You’re right, and I should have been more precise.

I understand why Docker was created and became popular: it abstracts a lot of the setup and makes deployment a lot easier.

1 point

Yeah, but it’s hard to separate the two, and it’s easy to get a bit resentful, particularly when a project’s quality declines in large part because the developers got lazy, duct-taping in container registries instead of managing their project more carefully.

3 points

I love Docker, and backups are a breeze if you’re using ZFS or Btrfs with volume sending. That is the bummer about Docker: it relies on you to back it up instead of having its own native backup system.
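
For anyone unfamiliar, the snapshot-and-send flow alluded to here looks roughly like this on ZFS (pool, dataset, and host names are invented for the example; it assumes the Docker volumes live on their own dataset):

```shell
# Atomic, near-instant snapshot of the dataset holding Docker volumes
zfs snapshot tank/docker-volumes@backup-2024-01-01

# Full replication to another pool or host
zfs send tank/docker-volumes@backup-2024-01-01 | \
    ssh backup-host zfs recv backup/docker-volumes

# Subsequent runs only send the delta since the previous snapshot
zfs send -i @backup-2024-01-01 tank/docker-volumes@backup-2024-01-02 | \
    ssh backup-host zfs recv backup/docker-volumes
```

The containers themselves stay disposable; only the dataset with the volumes needs to be replicated.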

2 points

What are you hosting on Docker? Are you configuring your apps afterwards? Did you use prebuilt images or build them yourself?

3 points

I use the *arr suite, a Project Zomboid server, a Foundry VTT server, Invoice Ninja, Immich, Nextcloud, qBittorrent, and Caddy.

I pretty much only use prebuilt images; I run them like appliances. Anything custom I’d run in a VM with snapshots, as my Docker skills do not run that deep.

1 point

This is why I don’t get anything from using Docker: I want to tweak my configuration, and Docker adds an extra level of complexity.

2 points

I should also say I use Portainer for some graphical hand-holding, and I run Watchtower for updates (although Portainer can monitor GitHub repos and run updates based on monitored merges).

For simplicity, I create all my volumes in the Portainer GUI, then specify the mount points in the Docker Compose file (Portainer calls this a “stack” for some reason).
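
As a sketch, referencing a Portainer-created volume from a compose stack looks like this (service, image, and volume names are invented for the example):

```yaml
services:
  app:
    image: example/app:latest       # hypothetical image
    volumes:
      - appdata:/config             # mount point specified in the compose file

volumes:
  appdata:
    external: true                  # volume was created beforehand in the Portainer GUI
```

Marking the volume `external` tells Compose to use the existing volume instead of creating its own.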

The volumes are looped into the base OS’s (TrueNAS SCALE) ZFS snapshots, so any restoration is dead simple. It keeps 1 yearly, 3 monthly, 4 weekly, and 1 daily snapshot.

All media etc. is mounted via NFS shares (for applications like Immich or Plex).

Restoration to a new machine should be as simple as pasting the compose file and restoring the Portainer volumes.

1 point

I don’t really like Portainer: first, their business model is not that good, and second, they do strange things with the compose files.

1 point

I don’t like Docker. It’s hard to update containers, hard to modify specific settings, hard to configure network settings; overall I’ve had a bad experience with it. It’s fantastic for quickly spinning things up, but for long-term use, customizing it to work well with all my services, I find it lacking.

I just create Debian containers or VMs for my different services using Proxmox. I have full control over all the settings I didn’t have in Docker.

1 point

The good old way is not that bad.

9 points

What do you mean it’s hard to update containers?

6 points

For real. Map persistent data out, and then it’s just docker compose pull && docker compose up -d. There’s nothing to it. Regular backups make reverting to previous container versions a breeze.
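
The workflow being described, as a sketch (the image tag and host path are illustrative):

```yaml
# docker-compose.yml — the container stays disposable, state lives on the host
services:
  app:
    image: example/app:2.1.0       # hypothetical; pin a tag so you can roll back
    volumes:
      - ./data:/var/lib/app        # persistent data mapped out of the container
```

Updating is then `docker compose pull && docker compose up -d`; rolling back is re-pinning the previous tag and, if needed, restoring `./data` from a backup.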

0 points

For one, if the compose file’s syntax, structure, or options change (like they did recently for Immich), you have to dig through GitHub issues to find that out and re-create the compose file with little guidance.

Not Docker’s fault specifically, but it’s becoming an issue as more and more software is distributed as a Docker image. Docker democratizes software, but we pay the price in losing perspective on what good dev practice is.

1 point

Use Portainer + Watchtower.


Selfhosted

!selfhosted@lemmy.world
