IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.
Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: “It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers.”
He isn’t alone. An administrator on Reddit said 40 percent of their servers were affected, along with 70 percent of their client computers (approximately 1,000 endpoints) stuck in a boot loop.
Sadly, for our administrator, things are less than ideal.
Another Redditor posted: "They sent us a patch but it required we boot into safe mode.
"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.
Pity the administrators who dutifully kept a list of those keys on a secure server share, only to find that the server is also now showing a screen of baleful blue.
Lol, can you imagine? I feel secondhand pain just thinking about this situation. Enter that brave hero who kept the fileshare decryption key in a local KeePass :D
Seems like an argument for a heterogeneous environment, perhaps a solid and secure Linux server to host important keys like that.
Sure, but the chances of your Windows and Linux machines shitting the bed at the same time are lower than if everything is running Windows. It’s exactly the same reason you keep a physical copy (which, after all, can break or burn down): more baskets to spread your eggs across.
Their point is not that Linux can’t fail; it’s that a mix of Windows and Linux is better than just one. That’s what “heterogeneous environment” means.
You should think of your network environment like an ecosystem; monocultures are vulnerable to systemic failure. Diverse ecosystems are more resilient.
That’s why the 3-2-1 rule exists:
- 3 copies of everything on
- 2 different forms of media with
- 1 copy off site
For something like keys, that means:
- secure server share
- server share backup at a different site
- physical copy (USB stick, printout in a safe, etc.)
Any IT pro should be aware of this “rule.” Oh, and periodically test restoring from a backup to make sure the backup actually works.
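A minimal sketch of what such a restore test could look like, assuming a restic repository (configured via the usual RESTIC_REPOSITORY/RESTIC_PASSWORD environment variables) and a key share at /srv/keys; the tool and paths are placeholders for whatever your environment actually uses:

```python
# Hedged sketch: restore the latest backup into a temp dir and verify it
# against the live key share by comparing SHA-256 hashes. The restic tool,
# its repo env vars, and the /srv/keys path are assumptions, not a prescription.
import hashlib
import pathlib
import subprocess
import tempfile

LIVE = pathlib.Path("/srv/keys")  # assumed location of the live key share

def sha256(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    # restic recreates absolute paths under --target, so /srv/keys/foo
    # lands at <tmp>/srv/keys/foo. Swap in your own tool's restore command.
    subprocess.run(["restic", "restore", "latest", "--target", tmp], check=True)
    for live_file in LIVE.rglob("*"):
        if live_file.is_file():
            restored = pathlib.Path(tmp) / live_file.relative_to("/")
            assert restored.exists(), f"missing from backup: {live_file}"
            assert sha256(restored) == sha256(live_file), f"mismatch: {live_file}"
    print("restore test passed")
```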
We have a cron job that once a quarter files a ticket with whoever is on call that week to test all our documented emergency access procedures, to ensure they’re all working, accessible, up to date, etc.
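For anyone wanting to copy that idea, a rough sketch, assuming a generic REST ticketing API (the URL, payload fields, and script path are all hypothetical; swap in your real ticketing system):

```python
# Run quarterly from cron, e.g.:
#   0 9 1 */3 * /usr/bin/python3 /opt/scripts/file_dr_drill_ticket.py
# Files a ticket for whoever is on call to run the emergency-access drill.
# The endpoint and JSON fields below are hypothetical placeholders.
import requests

TICKET_API = "https://ticketing.example.com/api/tickets"  # hypothetical

resp = requests.post(
    TICKET_API,
    json={
        "assignee": "current-on-call",
        "title": "Quarterly drill: test documented emergency access procedures",
        "body": (
            "Verify break-glass credentials, BitLocker key escrow, and backup "
            "restores are working, accessible, and up to date."
        ),
    },
    timeout=10,
)
resp.raise_for_status()
```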
Sounds like the best time to unionize
I’m in. This world desperately needs an information workers union. Someone to cover those poor fuckers on the help desk and in desktop support, as well as the engineers and architects who keep all of this shit running.
Those of us that aren’t underpaid are treated poorly. Today is what it looks like if everybody strikes at once.
This dude here coming in hot with a name, Information Workers Union (IWU). Love it
Soo are you gonna create the community or am I?
To preface, I want to see a tech workers union so, so bad.
With that said, I genuinely don’t believe that most tech workers would unionize. So many of them are brainwashed into thinking that a union would dictate all salaries, would force hiring to be domestic-only, or would ensure jobs for life for incompetent people. Anyone that knows what a union does in 2024 knows that none of that has to be true. A tech union only needs to be a flat fee every month, guaranteed access to a lawyer with experience in your cases/employer, and the opportunity to strike when a company oversteps. It’s only beneficial.
Even if you could get hundreds of thousands of signatories, the recent layoffs have shown that tech companies at the highest level would gladly fire a sizable number of employees if it meant stamping out a union. As someone who has conducted interviews in big tech, I saw roles where, at peak, the number of applicants was higher than the number of active employees in the whole company. In theory, Google could terminate everyone and replace them with brand-new workers in a few months. It would be a fucking mess, but it (in theory) shows that if a Google or an Apple decided it wanted no part of unions, it could just dig into its fungible talent pool, fire a ton of people, promote the people who stayed, and fill roles with foreign or under-trained talent.
I feel you with this. They do not see themselves as workers. Thank you for the preface.
Agreed. Sadly, many still view tech as a meritocracy and believe they’re at a FAANG purely because of their hard work, so fuck everyone else. Naturally, many change their tune once their employer enacts regressive policies, but it’s surprising how many people just have zero understanding of what a union does. They watch cop shows or The Wire and assume it’ll be like the unions there…
Lemmy appears to be weathering the storm quite well…
…probably runs on Linux
It runs on hundreds of servers. If any of them ran Windows they might be down, but unless your account was on one of those, you’d be fine on the rest. That’s the whole point of federation.
If you have EC2 instances running Windows on AWS, here is a trick that works in many (not all) cases. It has recovered a few instances for us (a scripted sketch follows the steps):
- Shut down the affected instance.
- Detach the boot volume.
- Attach the boot volume to a working instance in the same availability zone (us-east-1a or whatever).
- Remove the file(s) recommended by CrowdStrike:
- Navigate to the \Windows\System32\drivers\CrowdStrike directory on the attached volume (it will show up under a different drive letter on the helper, e.g. D:, not C:)
- Locate the file(s) matching “C-00000291*.sys” and delete them (unless CrowdStrike has already fixed them).
- Detach the volume from the helper and attach it back to the original instance.
- Boot the original instance.
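A rough boto3 sketch of those steps, assuming typical device names; the instance IDs, region, and devices are placeholders, and the file deletion itself still happens by hand inside the helper’s OS:

```python
# Hedged sketch of the volume-swap recovery with boto3. IDs, region, and
# device names are placeholders; adjust to your environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

BROKEN = "i-0123456789abcdef0"   # the boot-looping instance (placeholder)
HELPER = "i-0fedcba9876543210"   # working instance in the SAME AZ (placeholder)

# 1. Stop the broken instance; root volumes can't be detached while running.
ec2.stop_instances(InstanceIds=[BROKEN])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[BROKEN])

# 2. Find the boot volume, snapshot it (see the caution below about a wiped
#    bootloader), then detach it.
vols = ec2.describe_volumes(Filters=[
    {"Name": "attachment.instance-id", "Values": [BROKEN]},
    {"Name": "attachment.device", "Values": ["/dev/sda1"]},  # typical Windows root
])
boot_vol = vols["Volumes"][0]["VolumeId"]
ec2.create_snapshot(VolumeId=boot_vol, Description="pre-CrowdStrike-fix safety")
ec2.detach_volume(VolumeId=boot_vol)
ec2.get_waiter("volume_available").wait(VolumeIds=[boot_vol])

# 3. Attach it to the helper as a secondary disk.
ec2.attach_volume(VolumeId=boot_vol, InstanceId=HELPER, Device="xvdf")

# 4. MANUAL STEP: log in to the helper, bring the disk online, and delete
#    C-00000291*.sys from <letter>:\Windows\System32\drivers\CrowdStrike.
input("Delete the C-00000291*.sys file(s) on the helper, then press Enter...")

# 5. Detach from the helper and reattach to the original instance.
ec2.detach_volume(VolumeId=boot_vol)
ec2.get_waiter("volume_available").wait(VolumeIds=[boot_vol])
ec2.attach_volume(VolumeId=boot_vol, InstanceId=BROKEN, Device="/dev/sda1")

# 6. Boot the original instance and hope for a login screen, not a BSOD.
ec2.start_instances(InstanceIds=[BROKEN])
```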
Alternatively, you can restore from a snapshot taken before CrowdStrike shipped the bad update, but that is not always ideal.
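If you go the snapshot route instead, the core of it is just creating a fresh volume from the pre-update snapshot and swapping it in; the snapshot ID and AZ below are placeholders:

```python
# Hedged sketch: build a replacement boot volume from a pre-update snapshot.
# SnapshotId and AvailabilityZone are placeholders; the AZ must match the
# broken instance's AZ or the attach will fail.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

new_vol = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",  # snapshot taken before the bad update
    AvailabilityZone="us-east-1a",
)["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol])
# Then stop the instance, detach the broken boot volume, and attach new_vol
# as /dev/sda1, exactly as in the volume-swap steps above. Anything written
# after the snapshot was taken is lost, which is why it's "not always ideal".
```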
A word of caution: I’ve done this over a dozen times today, and I did have one server whose bootloader was wiped after I attached its volume to another EC2 instance. Always take a snapshot before doing the work, just in case.
Lmao this is incredible
Another Redditor posted: "They sent us a patch but it required we boot into safe mode.
"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.
“Most of our comms are down, most execs’ laptops are in infinite bsod boot loops, engineers can’t get access to credentials to servers.”
N.B.: Reddit link is from the source
I hope a lot of C-suite execs get fired for this. But I’m pretty sure they won’t be.
Our administrator is understandably a little bitter about the whole experience as it has unfolded, saying, “Last year we were forced by our central IT team to switch from the perfectly good ESET solution, which we had used for years.”
Sounds like a lot of architects and admins are going to get thrown under the bus for this one.
“Yes, we ordered you to cut costs in impossible ways, but we never told you specifically to centralize everything with a third party, that was just the only financially acceptable solution that we would approve. This is still your fault, so we’re firing the entire IT department and replacing them with an AI managed by a company in Sri Lanka.”