Cybersecurity firm CrowdStrike pushed an update that caused millions of Windows computers to enter recovery mode, triggering the blue screen of death.
Are there really a billion systems in the world that run Crowdstrike? That seems implausible. Is it just hyperbole?
Yeah, our VMs completely died at work. Had to set up temporary stuff on hardware we had lying around today. Was kinda fun, but stressful, haha.
Could you just revert VMs to a snapshot before the update? Or do you not take periodic snapshots? You could probably also mount the VM’s drive on the host and delete the relevant file that way.
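For what it's worth, that second approach is basically what the published fix amounted to: get at the guest's system volume from outside the broken OS and delete the bad channel file. A minimal sketch, with the mount point faked under /tmp so it runs as a standalone demo; on a real host, MOUNT_POINT would be wherever the VM's disk is actually mounted (e.g. via guestmount or a loop device), and the `C-00000291*.sys` glob is the file pattern CrowdStrike itself published for the faulty update:

```shell
#!/bin/sh
# Sketch of the "mount the VM's drive on the host and delete the relevant
# file" workaround. The /tmp tree below simulates the mounted guest volume
# so the script is runnable as a demo; swap in the real mount point.
MOUNT_POINT="/tmp/vmdisk-demo"
DRIVER_DIR="$MOUNT_POINT/Windows/System32/drivers/CrowdStrike"

# Simulate the mounted guest volume containing one bad channel file.
mkdir -p "$DRIVER_DIR"
touch "$DRIVER_DIR/C-00000291-00000000.sys"

# The remediation step: remove the channel file(s) matching the pattern
# CrowdStrike published for the faulty update.
rm -f "$DRIVER_DIR"/C-00000291*.sys

echo "remaining files in $DRIVER_DIR:"
ls "$DRIVER_DIR"
```

Snapshots are of course the cleaner option when you have them; this is just what the manual recovery looked like per machine.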
I doubt it’s too much of a stretch. Even here in Australia, we’ve had multiple airlines, news stations, banks, supermarkets and many others go down, including the aluminium extrusion business my father works at. Scale that to hundreds of countries, some with populations tenfold of ours, and it puts into perspective that there may even be more than a billion machines affected.
Despite how it may seem on Lemmy, most people have not yet actually switched to Linux. This stat is legit.
I know that Windows is everywhere, I just don’t know the percentage of Windows computers that run Crowdstrike.
Keep in mind, it’s not just clients, but servers too. A friend of mine works for a decently sized company that has about 1600 (virtual) servers internationally. And yes, all of them were affected.
Sounds pretty plausible to me. An organization doesn’t have to be very big to get into the hundreds or thousands of devices on a network when you account for servers and VMs.
A company with 40 employees all accessing an RDS server using company laptops is looking at 85+ devices already.
Whoda thunk automatic updates to critical infrastructure was a good idea? Just hope healthcare life support was not affected.
Many compliance frameworks require security utilities to receive automatic updates. It’s pretty essential for effective endpoint protection considering how fast new threats spread.
The problem is not the automated update, it’s why it wasn’t caught in testing and how the update managed to break the entire OS.
It is pretty easy to imagine separate streams of updates that affect each other negatively.
CrowdStrike does its own 0-day updates, Microsoft does its own 0-day updates. There is probably limited if any testing at that critical intersection.
If Microsoft 100% controlled the release stream, otoh, there’d be a much better chance to have caught it. The responsibility would probably lie with MS in such a case.
(edit: not saying that this is what happened, hence the conditionals)
I don’t think that’s what happened in this situation, though. I think the issue was caused exclusively by a CrowdStrike update, but I haven’t read anything official that really breaks this down.
Ok Russian comrade. Security in companies is terrible. You’re right. It’s just a giant grift.
Now, go buy some limited time offer fight fight fight shoes from agent orange.
Hospital stuff was affected. Most engineers are smart enough to not connect critical equipment to the Internet, though.
I’m not in the US, but my other medical peers who are mentioned that EPIC (the software most hospitals use to manage patient records) was not affected, but Dragon (the software by Nuance that we doctors use for dictation so we don’t have to type notes) was down. Someone I know complained that they had to “type notes like a medieval peasant.” But I’m glad that the critical infrastructure was up and running. At my former hospital, we used to always maintain physical records simultaneously for all our current inpatients that only the medical team responsible for those specific patients had access to just to be on the safe side.
That’s actually a very smart idea, keeping physical records of every inpatient. Wonder why the AI companies don’t do transcription of medical notes instead of trying to add AI features to my washer/dryer combo. Just seems like a very practical use of the tech.
This is pretty much correct. I work in an Epic shop and we had about 150 servers to remediate and some number of workstations (I’m not sure how many). While Epic may not have been impacted, it is a highly integrated system, and when things are failing around it, that can have an impact on care delivery. For example, if a provider places a stat lab order in Epic, that lab order gets transmitted to an integration middleware which then routes it to the lab system. If the integration middleware or the lab system is down, then the provider has no idea the stat order went into a black hole.
I’m an Epic analyst - while Epic was fine, many of our third party integrations shit the bed. Cardiology (where I work) was mostly unaffected aside from Omnicell being down, but the laboratory was massively fucked due to all the integrations they have. Multiple teams were quite busy, I just got to talk to them about it eventually.
There is no learning; companies just move to a different antivirus, the new hotness, and the cycle repeats over and over until the new antivirus does this same shit. Look at McAfee in 2010; in fact, the CEO of CrowdStrike was the CTO of McAfee then. That one easily took down millions of Windows XP machines.
in fact the CEO of Crowdstrike was the CTO of McAfee then
The hero of Linux adoption then. All hail - what’s the name of that guy?
This isn’t the Windows L you think it is. This can and has happened on Linux. It’s a Crowdstrike/Bad corp IT issue.
Combing through its Wikipedia article, this company already had a series of other issues.
Sucks for anyone who ever relied on them. Oh, look at that: they’ve been acquiring other security startups and companies. Perhaps that should be looked into as well?