ItS NoT A wInDoWs PrObLeM – Idiots, even on Lemmy
If you patch a security vulnerability, whose fault is the vulnerability? If the OS didn’t suck, why does it need a 90-billion-dollar operation to unfuck it?
Red Hat is VALUED at less than that.
https://pitchbook.com/profiles/company/41182-21
It’s a fucking Windows problem.
Sure, but they weren’t patching a Windows vulnerability, Windows software, or a security issue; they were updating their own software.
I’m all for blaming Microsoft for shit, but “third party software update causes boot problem” isn’t exactly anything they caused or did.
You also missed that the same software is deployed on Mac and Linux hosts.
Hell, they specifically call out their redhat partnership: https://www.crowdstrike.com/partners/falcon-for-red-hat/
You can dislike Windows and still recognize that CrowdStrike isn’t from Microsoft - so a problem that CrowdStrike caused isn’t the fault of Windows.
If that makes me an idiot for holding two different ideas in my head, so be it, but you are spending time with us, so thank you for elevating us!
I’m waiting for the post-mortem before declaring this has nothing to do with MS, tbh. It’s only affecting Windows systems, and it wouldn’t be the first time dumb architectural decisions on their part have caused issues (why not run the whole GUI in kernel space? What’s the worst that could happen?)
I agree it’s possible. But if you’re a software-as-a-service vendor, it’s your responsibility to be on the alpha and beta release channels, so that if a show-stopping error is coming down the pipeline you can get in front of it.
But more tellingly, we have not seen Windows boot-loop today from other vendors, only this one. Right now the balance of probabilities points at CrowdStrike.
Because it isn’t. Their Linux sensor also uses a kernel driver, which means they could have just as easily caused a looping kernel panic on every Linux device it’s installed on.
There’s no way of knowing that, though. Perhaps their Linux and Darwin drivers wouldn’t have panicked the system?
Regardless, doing almost anything at the kernel level is never a good idea
Also, it’s less about “their” drivers and more about what a kernel module can do.
Saying “there’s no way to know” doesn’t hold up, because we do know that a malformed kernel module can destabilize a Linux or Mac system.
“Malformed file” isn’t a programming defect or something you can fix by having a better API.
It’s not impossible. CrowdStrike has done it to Linux machines recently.
Kernel panic observed after booting 5.14.0-427.13.1.el9_4.x86_64 by falcon-sensor process:
https://access.redhat.com/solutions/7068083
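For anyone wondering how one module can take a whole box down: a kernel module runs in kernel space, so a bad pointer dereference during init isn’t a crashed process, it’s a kernel oops - and with panic_on_oops set (which RHEL enables by default) that’s a full panic. If the module loads on every boot, you boot-loop. Here’s a minimal made-up sketch, not CrowdStrike’s code, all names invented:

```c
/* oops_on_load.c - hypothetical example of a module that kills the box at load.
 * Build against kernel headers with a standard out-of-tree module Makefile. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init bad_init(void)
{
	int *cfg = NULL;        /* stand-in for a malformed config/content file */

	pr_info("oops_on_load: parsing config...\n");
	return *cfg;            /* NULL dereference in kernel space -> oops;
	                           with panic_on_oops=1 the whole machine panics */
}

static void __exit bad_exit(void)
{
	pr_info("oops_on_load: unloaded\n");
}

module_init(bad_init);
module_exit(bad_exit);
MODULE_LICENSE("GPL");
```

Same class of failure on Linux or Windows: once you’re in the kernel, “your” bug is everyone’s outage.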
This is, in a lot of ways, impressive. This is CrowdStrike going full “Hold my beer!” at everyone swapping stories about their worst production-deploy fuckups.
You know you’ve done something special when you take down somebody else’s production system.
I’m volunteering to hold their beer.
Everyone, remember to sue the services that weren’t able to provide their respective service. Teach them to take better care of their IT landscape.
Typically auto-applying updates to your security software is considered a good IT practice.
Ideally you’d, like, stagger the updates and cancel the rollout when things stop coming back online, but who actually does it completely correctly?
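Roughly the shape of “doing it correctly”, as a sketch only - the wave count, health threshold, and helper functions are all made up, not any vendor’s actual tooling:

```c
/* staged_rollout.c - hypothetical sketch of a staggered rollout with a kill switch:
 * push to one wave of hosts at a time and abort if too few of them come back healthy. */
#include <stdio.h>

#define WAVES        4
#define HOSTS_TOTAL  1000
#define MIN_HEALTHY  95   /* percent of a wave that must report back before continuing */

/* Placeholder helpers - in reality these would talk to your fleet-management API. */
static void push_update(int wave, int count)
{
    printf("pushing update to wave %d (%d hosts)\n", wave, count);
}

static int hosts_healthy_after(int wave, int count)
{
    (void)wave;
    return count;         /* pretend every host checked back in */
}

int main(void)
{
    int per_wave = HOSTS_TOTAL / WAVES;

    for (int wave = 0; wave < WAVES; wave++) {
        push_update(wave, per_wave);

        int healthy = hosts_healthy_after(wave, per_wave);
        if (healthy * 100 < per_wave * MIN_HEALTHY) {
            /* Machines aren't coming back online: stop here instead of
             * bricking the rest of the fleet. */
            fprintf(stderr, "wave %d: only %d/%d healthy, cancelling rollout\n",
                    wave, healthy, per_wave);
            return 1;
        }
    }
    puts("rollout complete");
    return 0;
}
```

The point isn’t the code, it’s that “cancel the rollout when things stop coming back online” has to be an automatic gate, not someone noticing the tickets piling up.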
Applying updates is considered good practice. Auto-applying is the best you can do with the money provided. My critique here is the amount of money provided.
Also, you cannot pull a Boeing and let people die just because you cannot 100% avoid accidents. There are steps in between these two states.
What’s the saying about dying a hero or becoming the villain?
A real Anakin arc right here.
Now threat actors know exactly which EDR all of these companies are running and can craft malware to sneak past it. Yay(!)