IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.
Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: "It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers."
He isn't alone. An administrator on Reddit said 40 percent of their servers were affected, and 70 percent of client computers, roughly 1,000 endpoints, were stuck in a boot loop.
Sadly, for our administrator, things are less than ideal.
Another Redditor posted: "They sent us a patch but it required we boot into safe mode.
"We can't boot into safe mode because our BitLocker keys are stored inside of a service that we can't login to because our AD is down.
I remember, a few career changes ago, I was a back-room kid working for an MSP.
One day I get an email to build a computer for the company, cheap as hell. Basically just enough to boot Windows 7.
I was to build it, put it online long enough to get all of the drivers installed, and then set it up in the server room, as physically far away from any network ports as possible. IIRC I was even given an IO shield that physically covered the network port for after it updated.
It was our air-gapped encryption key backup.
I feel like that shitty company was somehow prepared for this better than some of these companies today. In fact, I wonder if that computer is still running somewhere and just saved someone’s ass.
I wish you were right. I really wish you were. I don't think you are. I'm not trying to be a contrarian, but for a large number of organizations I don't think this is the case.
For what it's worth, I truly hope I'm 100% incorrect and everybody learns from this bullshit, but that may not be the case.
I get storing BitLocker keys in AD, but as a net admin and not a server admin... what do you do with the DCs' keys? USB storage in a sealed envelope in a safe (or, at worst, a locked file cabinet drawer in the IT manager's office)?
Or do people forgo running BitLocker on servers, since encrypting data at rest can be compensated for by physical security in the data center?
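For the sealed-envelope route, the export step is basically one command. A minimal sketch, assuming a Windows server and a USB stick at a made-up path (Python just wrapping the built-in manage-bde tool, which prints the numerical recovery password among the key protectors):

```
import subprocess
from datetime import datetime
from pathlib import Path

# Hypothetical destination: a USB stick mounted as E:\
USB_PATH = Path(r"E:\bitlocker-escrow")

def dump_recovery_info(volume: str = "C:") -> str:
    """Return manage-bde's protector listing for the given volume.

    'manage-bde -protectors -get <vol>' lists the key protectors,
    including the 48-digit numerical recovery password.
    """
    result = subprocess.run(
        ["manage-bde", "-protectors", "-get", volume],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def escrow_to_usb(volume: str = "C:") -> Path:
    """Write the protector dump to the USB stick with a timestamped name."""
    USB_PATH.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    out_file = USB_PATH / f"recovery-{volume.strip(':')}-{stamp}.txt"
    out_file.write_text(dump_recovery_info(volume))
    return out_file

if __name__ == "__main__":
    print(f"Wrote {escrow_to_usb('C:')}")
```

Run once as admin before the envelope goes in the safe; the drive letter and folder are obviously whatever you actually use.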
You need at least two copies in two different places - places that will not burn down/explode/flood/collapse/be locked down by the police at the same time.
An enterprise is going to be commissioning new computers or reformatting existing ones at least once a day, which means the BitLocker key list would need fresh printouts at least daily, in both places.
Given that, it's easy to see this process failing from time to time, for example by accidentally leaking a document with all of these keys in it.
I think that's a better plan than physically printing keys. I'd also want to save the keys in another format somewhere, perhaps using a small script to export them into a safe store in the cloud or onto a box I control somewhere.
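Something like this is what I have in mind for that export script. It's only a sketch: it assumes the keys are already escrowed to AD as msFVE-RecoveryInformation objects, and the server name, bind account, and output file are all placeholders.

```
import json
from datetime import datetime, timezone
from ldap3 import Server, Connection, SUBTREE

AD_SERVER = "dc01.corp.example.com"       # placeholder DC
BASE_DN = "DC=corp,DC=example,DC=com"     # placeholder search base
BIND_USER = "CORP\\svc-bitlocker-export"  # placeholder service account
BIND_PASS = "..."                         # pull from a vault, not the script

def export_recovery_keys() -> list[dict]:
    """Pull every BitLocker recovery password escrowed in AD."""
    conn = Connection(Server(AD_SERVER, use_ssl=True),
                      user=BIND_USER, password=BIND_PASS, auto_bind=True)
    conn.search(BASE_DN,
                "(objectClass=msFVE-RecoveryInformation)",
                search_scope=SUBTREE,
                attributes=["msFVE-RecoveryPassword"])
    keys = []
    for entry in conn.entries:
        # Each recovery object is a child of its computer object, so the
        # parent DN tells us which machine the key belongs to.
        computer_dn = entry.entry_dn.split(",", 1)[1]
        keys.append({
            "computer": computer_dn,
            "recovery_password": str(entry["msFVE-RecoveryPassword"]),
        })
    return keys

if __name__ == "__main__":
    snapshot = {"exported_at": datetime.now(timezone.utc).isoformat(),
                "keys": export_recovery_keys()}
    # Written locally here; in practice this would be encrypted and pushed
    # to whatever offline or cloud store you actually control.
    with open("bitlocker-escrow.json", "w") as fh:
        json.dump(snapshot, fh, indent=2)
```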
Companies use CrowdStrike so they don't need internal cybersecurity. Not having automatic updates for new cyber threats sorta defeats the purpose of outsourcing cybersecurity.
Automatic updates should still have risk mitigation in place, and the outage didn't only affect small businesses with no cyber security capability. Outsourcing does not mean closing your eyes and letting the third party do whatever they want.
Outsourcing does not mean closing your eyes and letting the third party do whatever they want.
It shouldn't, but when the decisions are made by bean counters and not people with security knowledge things like this can easily (and frequently) happen.
I was just thinking about something similar. I can understand wanting to get a security update out as quickly as possible, but it still seems like some kind of rolling update could have mitigated something like this. When I say rolling, I mean, for example, splitting all of your customers into 24 groups and pushing the update to a new group each hour. If it causes a massive fuck up, it only hits some of them, not all of them.
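To make the 24-group idea concrete, here's a rough sketch of the bucketing and scheduling. The tenant names and start time are made up, and a real vendor pipeline would presumably also gate each wave on crash telemetry before moving on:

```
import hashlib
from datetime import datetime, timedelta, timezone

WAVES = 24

def wave_for(customer_id: str) -> int:
    """Stable bucket in [0, WAVES) so a tenant always lands in the same wave."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return int(digest, 16) % WAVES

def push_time(customer_id: str, rollout_start: datetime) -> datetime:
    """Each wave gets the update one hour after the previous one."""
    return rollout_start + timedelta(hours=wave_for(customer_id))

if __name__ == "__main__":
    start = datetime(2024, 7, 19, 4, 0, tzinfo=timezone.utc)  # made-up start
    for tenant in ["acme-corp", "globex", "initech"]:
        print(tenant, "wave", wave_for(tenant), "->", push_time(tenant, start))
```

The hashing keeps wave assignment deterministic, so a tenant that was fine in one release isn't suddenly first in line for the next one, and the worst case is one hour's worth of customers instead of everybody at once.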