It’s a systemic, multi-layered problem.
The simplest, lowest-effort measure that could have prevented issues at this scale is not installing updates automatically, but holding them for four days and triggering the install only if no problems have surfaced.
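As a rough illustration (hypothetical names and policy values, nothing from CrowdStrike's actual tooling), a delay-and-verify gate could be as simple as:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: hold every vendor update for a soak period and only
# install it if no incidents have been reported against it in the meantime.
SOAK_PERIOD = timedelta(days=4)

def should_install(update_published_at: datetime,
                   known_bad_updates: set[str],
                   update_id: str,
                   now: datetime | None = None) -> bool:
    """Return True only if the update has soaked long enough and is not on
    our incident list. Both inputs are assumed to come from the org's own
    tracking, not from the vendor."""
    now = now or datetime.now(timezone.utc)
    soaked = now - update_published_at >= SOAK_PERIOD
    return soaked and update_id not in known_bad_updates

# Example: an update published five days ago with no reported incidents passes.
published = datetime.now(timezone.utc) - timedelta(days=5)
print(should_install(published, known_bad_updates=set(), update_id="defs-2024-07-19"))
```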
Automatically forwarding updates also means forwarding risk. The larger the impact area, the more worthwhile safeguards become.
Testing/staging environments or partial, successive rollouts could also have mitigated a large share of the issues, but they require more investment.
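A partial rollout could look something like the sketch below, with made-up ring sizes and helper callbacks rather than anything CrowdStrike actually ships; each ring only proceeds if the previous one stayed healthy:

```python
import time

# Hypothetical rollout rings: a tiny canary first, then progressively larger
# slices of the fleet. Each ring must report healthy before the next starts.
RINGS = [
    ("canary",   0.01),   # 1% of hosts
    ("early",    0.10),   # 10%
    ("broad",    0.50),   # 50%
    ("everyone", 1.00),   # full fleet
]

def rollout(update_id: str, deploy, healthy, soak_seconds: int = 3600) -> bool:
    """Deploy `update_id` ring by ring. `deploy(update_id, fraction)` pushes the
    update to that fraction of hosts; `healthy(fraction)` checks crash/telemetry
    signals for the hosts updated so far. Abort on the first unhealthy ring."""
    for name, fraction in RINGS:
        deploy(update_id, fraction)
        time.sleep(soak_seconds)          # let telemetry accumulate
        if not healthy(fraction):
            print(f"aborting rollout of {update_id} at ring '{name}'")
            return False
    return True
```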
The update that crashed things was an anti-malware definitions update. CrowdStrike offers no way to delay or stage those (they are downloaded automatically as soon as they are available), and there’s a good reason not to want to delay definition updates: it leaves you exposed to known malware for longer.
And there’s a better reason for wanting to delay definition updates: this outage.
If a single person can make the system fail then the system has already failed.