It was what CrowdStrike themselves told us to do!!! But I get bad-faith questions, assumptions, and exaggerations out of people allegedly in my field here on Lemmy. Bullshit. You clowns belong back on reddit. You are the worst kind of people.
Complexity increases exponentially in large organizations, for a number of reasons.
I know my industry, thanks. I’ve over a decade of experience in sizable, complex organizations. You know who really likes to cling to outdated hardware and software? Hopefully this scares you, because it should: medical organizations.
What the IT people did (and what others claimed was done) during this fiasco was objectively worse than the actual fix, but everyone in this thread just isn’t happy that I didn’t join them in shitting on Microsoft. This is where Lemmy shows that its users are becoming more like reddit users every day. Or they don’t know what /s means.
Hopefully none of those systems were exposed to anything internet-facing, for obvious reasons, but given the sheer incompetence observed I wouldn’t be surprised.
Hi, I also know my industry, with over a quarter century of experience in Fortune 500 companies. The old motto of IT was ‘if it ain’t broke, don’t fix it’, and then salespeople found out that fear is a great sales tool.
When proper precautions are taken and a risk analysis is performed, there is no reason old operating systems and software can’t continue to be used in production environments. I’ve seen many more systems taken down by updates gone wrong than by running ‘unsupported’ software. Just because a system is old doesn’t mean it is vulnerable; often the opposite is true.
…and just how many PCs do you intend to “reboot into safe mode, delete one bad file, and then reboot again”? Manually, or do you have some remote-access tool that doesn’t require a running system?
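To be clear, the per-machine step itself really is trivial. A minimal sketch of it, assuming the widely reported bad channel-file pattern (C-00000291*.sys) and the default CrowdStrike driver directory, run with admin rights from Safe Mode or the recovery environment (Python here purely for illustration):

```python
# Sketch of the manual remediation step. Assumptions: standard install path
# and the widely reported channel-file pattern; run from Safe Mode / the
# recovery environment with administrator rights.
import glob
import os

# Default CrowdStrike driver directory on Windows (assumed standard install path).
CS_DIR = os.path.join(os.environ.get("SYSTEMROOT", r"C:\Windows"),
                      "System32", "drivers", "CrowdStrike")

def delete_bad_channel_files(directory: str = CS_DIR) -> list[str]:
    """Delete files matching the reported bad channel-file pattern,
    returning the paths that were removed."""
    removed = []
    for path in glob.glob(os.path.join(directory, "C-00000291*.sys")):
        os.remove(path)
        removed.append(path)
    return removed

if __name__ == "__main__":
    for p in delete_bad_channel_files():
        print(f"Deleted {p}")
```

The script was never the problem. Getting hands, or out-of-band tooling, onto every single boot-looping machine was, especially with BitLocker recovery keys in the mix.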
If you have no idea how long it may take or whether the issue will return - and particularly if upper management has no idea - swapping to alternate solutions may seem like a safer bet. Non-tech people tend to treat computers with superstition, so “this software has produced an issue once” can quickly become “I don’t trust anything using this - what if it happens again? We can’t risk another outage!”
The tech fix may be easy, but the manglement issue can be harder. I probably don’t need to tell you about the type of obstinate manager that’s scared of things they don’t understand and needs a nice slideshow with simple words and pretty pictures to explain why this one-off issue is fixed now and probably won’t happen again.
As for the question of scale: from a quick glance, we currently have something on the order of 40k “active” Office installations, which mostly map to active devices. Our client management semi-recently finished rolling out a new, uniform client configuration standard across the organisation (“special” cases aside). If we’d had CrowdStrike, I’d conservatively estimate at least 30k affected devices.
Thankfully, we don’t, but I know a fair number of bullets were being sweated until it was confirmed to be CrowdStrike-only. We’re in Central Europe, so the window between the first issues and the confirmation fell right in prime “people starting work” time.