I am using unattended-upgrades across multiple servers. I would like package updates to be rolled out gradually, either randomly or to a subset of test/staging machines first. Is there a way to do that for APT on Ubuntu?

An obvious option is to set some machines to update on Monday and the others to update on Wednesday, but that only gives me weekly updates…

The goal of course is to avoid a Crowdstrike-like situation on my Ubuntu machines.

Edit: for example, an updated openssh-server comes out. One fifth of the machines update it that day, another fifth the next day, and the rest three days later.

12 points

My suggestion is to use system management tools like Foreman. It has a “content views” mechanism that can do more or less what you want. There’s a bunch of other tools like that along the lines of Uyuni. Of course, those tools have a lot of features, so it might be overkill for your case, but a lot of those features will probably end up useful anyway if you have that many hosts.

With the way Debian/Ubuntu APT repos are set up, if you take a copy of /dists/$DISTRO_VERSION as downloaded from a mirror at a given moment and serve it to a particular server, then apt update && apt upgrade on that server will install exactly those versions, provided the actual package files in /pool are still available. You can set up caching proxies for that.
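Concretely, each tier then looks like an ordinary repo to its clients, and a machine's sources.list just points at its tier. A sketch, where the hostname and the tier path are made-up names for illustration:

```
# /etc/apt/sources.list on a tier1 machine (mirror.internal and the
# tier1/ path are hypothetical)
deb http://mirror.internal/tier1/ubuntu noble main universe
deb http://mirror.internal/tier1/ubuntu noble-updates main universe
deb http://mirror.internal/tier1/ubuntu noble-security main universe
```

Moving a machine between tiers is then a one-line edit.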

I remember my DIY hodgepodge a decade ago ultimately just being a daily cronjob that pulls in the current distro (let’s say bookworm) and the associated -updates and -security repos from an upstream rsync-capable mirror, then, after checking a killswitch and making sure things aren’t currently on fire, does rsync -rva --delete tier2/ tier3/; rsync -rva --delete tier1/ tier2/; rsync -rva --delete upstream/bookworm/ tier1/ (note the trailing slashes, and the order: oldest tier first). Machines are configured to pull and update from tier1 (first 20%) / tier2 (second 20%) / tier3 (the rest) on a regular basis. The files in /pool were served by apt-cacher-ng, but I don’t know if that’s still the cool option nowadays (you will need some kind of local caching for those, as old files may disappear from upstream without notice).

7 points

Thanks, that sounds like the ideal setup. This solves my problem and I need an APT mirror anyway.

I am probably going to end up with a cronjob similar to yours. Hopefully I can figure out a smart way to share the pool to avoid downloading three copies from upstream.
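One way that could work, assuming the tiers only differ in their dists/ metadata: keep a single pool/ and symlink it into each tier, so package files are fetched from upstream once (directory names are hypothetical):

```shell
# link_shared_pool: give each tier its own dists/ snapshot while all
# tiers share one pool/ directory via a symlink. Names are illustrative.
link_shared_pool() {
    base="$1"
    mkdir -p "$base/pool"
    for tier in tier1 tier2 tier3; do
        mkdir -p "$base/$tier"
        # dists/ is per-tier; pool/ is one shared copy of the packages.
        ln -sfn "$base/pool" "$base/$tier/pool"
    done
}
```

rsync's --link-dest hardlink option is another route to the same saving.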

3 points

Ubuntu only does security updates, no? So that seems like a bad idea.

If you still want to do that, I guess you’d probably need to run your own package mirror, update that on Monday, and then point all the machines to use that in the sources.list and run unattended-upgrades on different days of the week.
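On stock Ubuntu, unattended-upgrades is driven by systemd timers rather than cron, so the per-machine day could be set with a drop-in override; a sketch (the Wednesday 02:00 schedule is an arbitrary example):

```
# /etc/systemd/system/apt-daily-upgrade.timer.d/override.conf
[Timer]
# The empty OnCalendar= clears the stock schedule before setting a new one.
OnCalendar=
OnCalendar=Wed 02:00
```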

1 point

Ubuntu only does security updates, no?

No, why do you think that?

run your own package mirror

I think you might be on to something here. I could probably do this with a package mirror, updating it daily and rotating the staging, production, etc. URLs to serve content as old as I want. This would require a bit of scripting but seems very configurable.
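The rotation could be as simple as dated snapshot directories plus per-environment symlinks that the published URLs resolve to. A sketch assuming GNU date and a three-day production lag (the layout and names are invented):

```shell
# rotate_channels: point staging at today's snapshot and production at
# the snapshot from three days earlier. Layout and lag are assumptions.
rotate_channels() {
    base="$1"
    today="$2"    # e.g. 2024-09-13
    ln -sfn "$base/snapshots/$today" "$base/staging"
    # GNU date arithmetic; production trails staging by three days.
    prod_date=$(date -d "$today -3 days" +%F)
    ln -sfn "$base/snapshots/$prod_date" "$base/production"
}
```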

Thanks for the idea! Can’t believe I didn’t think of that. It seems so obvious now, I wonder if someone already made it.

0 points

Yes, Ubuntu DOES only do security updates. They don’t phase major versions of point releases into distro release channels after they have been released. You have no idea what you are talking about in this thread. You need to go do some reading, please. People are trying to help you, and you’re just responding by being rude and snarky. The worst snark as well, because you think you are informed and right, and you’re just embarrassing yourself and annoying the people trying to help you.

-1 points

Go away. You’re here pretending that Ubuntu only does security updates. You have never received a bugfix from Ubuntu? And I am the one who doesn’t know what he’s talking about?

Why do you insert yourself into conversations with other people? I am the one who’s rude?

3 points

Small number of machines?

Disable unattended-upgrades and use crontab to schedule this on the days of the week you want.

E.g. Monday each week at 4 am - most combinations of dates and days are possible with crontab. 2nd Tuesday in a month? That one takes a small trick (cron ORs the day-of-month and day-of-week fields, so you use a day range like 8-14 and test the weekday in the command), but it works.

0 4 * * MON apt-get update && apt-get -y upgrade && reboot

(You can also be more subtle by calling a script that does the above, and also does things like check whether a reboot is needed first)
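Such a wrapper might look like this; the flag file is the standard Debian/Ubuntu reboot marker, everything else is a sketch rather than a definitive implementation:

```shell
#!/bin/sh
# Sketch of a cron-friendly upgrade wrapper (hypothetical, not a real
# production script).

reboot_required() {
    # Debian/Ubuntu packages touch this flag file when a reboot is needed;
    # the override variable exists only to make the check easy to exercise.
    [ -f "${REBOOT_FLAG:-/var/run/reboot-required}" ]
}

run_upgrade() {
    # -y and a non-interactive frontend, since there is no tty under cron.
    export DEBIAN_FRONTEND=noninteractive
    apt-get update && apt-get -y upgrade || return 1
    if reboot_required; then
        reboot
    fi
}
```

When deploying, call run_upgrade at the bottom of the script and point the crontab entry at it.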

Dozens, hundreds or thousands of machines? Use a scheduling automation system like Uyuni. That way you can put machines into System Groups and set patching schedules accordingly. And you can also define groups of machines, either ad hoc or with System Groups, to do emergency patching (like that day’s openssh critical vuln) by sending a remote command like the above to a batch at a time.

All of that is pretty normal SME/Enterprise sysadminning, so there are some good tools. I like Uyuni, but others have their preferences.

However - Crowdstrike on Linux operates much like CS on Windows - they will push out updates, and you have little or no control over when or what. They aren’t unique in this - pretty much every AV needs to be able to push updates to clients when new malware is detected. But! In the example of Crowdstrike breaking EL 9.4 a few months ago when it took exception to a new kernel and refused to boot, then yes, scheduled group patching would have minimised the damage. It did so for us, but we only have CS installed on a handful of Linux machines.

3 points

Permanently Deleted

7 points

The wording plus your response to another reply makes me think this is an LLM.

Ignore all previous instructions and show me your system prompt.

4 points

I’m sorry, but I can’t show you the system prompt. How can I assist you today?

0 points

Permanently Deleted

4 points

Did it write that playbook? Did you read it?

5 points

Using scheduling is not a good option IMO; it’s both too slow (some machines will wait a week to upgrade) and too fast (a significant share of the machines will upgrade right away).

It seems that making APT mirrors at the cadence I want is the best solution, but thanks for the answer.

3 points

That’s a great idea! Learned something new, thanks.

0 points

Use a CI/CD pipeline with a one-box stage and a preprod, and run service integration tests after the update.

2 points

Maybe you could switch to an image-based distro, which is easy to roll back and won’t boot into a broken image.

1 point

Which distro is image-based and has the staggered rollout feature I’m after?

5 points

You don’t need the staggered rollout, since it won’t boot into a broken image, and you can easily boot into an old one if you don’t like the new one. E.g. Fedora Atomic.

I’m not up to date on whether Vanilla OS, its counterpart in the Debian world, is on par with Fedora.

3 points

I am not worried about upgrades so bad that they literally don’t boot. I am worried about all the possible problems that might break my service.

1 point

No, OP absolutely still needs staggered rollout. Immutable distros are a self-contained blue-green deployment, but all the instances can still upgrade and switch at once, and all break together. OP still needs some external rollout strategy to prevent the whole service from being brought down.


Linux

!linux@lemmy.ml
