CrowdStrike Blue Screen Recovery Steps

Varonis Threat Labs
Last updated July 19, 2024

This post contains updates and recovery steps for the Windows blue screen of death (BSOD) issue currently affecting devices and VMs running CrowdStrike Falcon Sensor.  

This blog post is intended to help the IT community quickly recover affected machines. We will update the post as new details emerge. 

Varonis' cloud infrastructure is fully operational. Customers who are running Varonis Collectors on machines alongside CrowdStrike should follow the recovery steps and contact our support team with any questions. 

Background 

On Thursday, July 18, 2024, CrowdStrike reported widespread BSODs on Windows hosts due to a defective Falcon Sensor update. You can find CrowdStrike’s official statement and recovery steps here.

The defective update began propagating at 04:09 UTC. We can confirm that CrowdStrike has pulled the affected update, and any Windows hosts booted after 05:27 UTC should not be affected.

Machines with an available backup from before 04:09 UTC can be recovered by restoring to that backup.  

Recovery involves booting the affected machine into safe mode or the Windows Recovery Environment and deleting the affected file, “C:\Windows\System32\drivers\CrowdStrike\C-00000291.sys”. Additionally, some machines may recover after a series of up to 15 reboots.

Customers who continue to experience issues should contact CrowdStrike for additional assistance. 

Recovery Steps 

There are two main options to recover a Windows host that has blue screened: (1) restore from a snapshot taken before 04:09 UTC, or (2) recover the machine using the following steps: 

  1. Boot Windows into Safe Mode or the Windows Recovery Environment 
  2. Navigate to the C:\Windows\System32\drivers\CrowdStrike directory 
  3. Locate the file matching “C-00000291*.sys” and delete it (a scripted sketch follows these steps). 
  4. Boot the host normally. 
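
Where affected machines can be scripted at all, for example after booting into safe mode with administrative access, steps 2 and 3 above can be automated. The following is a minimal Python sketch under those assumptions, not official tooling; the C: drive letter is an assumption and will differ if the disk is mounted on another machine.

```python
# Minimal sketch: remove the defective channel file once the machine is
# booted into safe mode (or the OS disk is mounted on a working host).
# Run with administrative privileges; the C: drive letter is an
# assumption and will differ if the disk is mounted elsewhere.
from pathlib import Path

driver_dir = Path(r"C:\Windows\System32\drivers\CrowdStrike")

# Only the defective update matches "C-00000291*.sys"; other channel
# files in the directory are left in place.
for sys_file in driver_dir.glob("C-00000291*.sys"):
    print(f"Deleting {sys_file}")
    sys_file.unlink()
```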

The steps differ depending on whether the Windows host is a physical or virtual device, and the steps for cloud-hosted virtual machines vary by cloud provider.

If the machine is running BitLocker or other disk encryption software, additional steps may be required to boot into a Windows Recovery Environment. 

BitLocker recovery-related KBs are available in CrowdStrike’s support portal. 

Recovery Steps for Cloud Services 

Microsoft Azure 

Official guidance can be found here: https://azure.status.microsoft/en-us/status

Microsoft’s Azure status page is reporting that some customers can recover affected devices through a series of as many as 15 reboots. Customers can attempt to do so as follows: 

  • Using the Azure Portal: attempt a 'Restart' on the affected VMs 
  • Using the Azure CLI or Azure Cloud Shell (https://shell.azure.com): https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart 
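
As a rough illustration of that reboot loop, the sketch below uses the Azure SDK for Python (azure-identity and azure-mgmt-compute); the subscription ID, resource group, and VM name are placeholder assumptions, and the equivalent az vm restart CLI command can be looped the same way.

```python
# Hedged sketch of the "up to 15 reboots" workaround using the Azure
# SDK for Python (azure-identity + azure-mgmt-compute). The
# subscription ID, resource group, and VM name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
resource_group = "my-resource-group"                      # placeholder
vm_name = "affected-vm"                                   # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for attempt in range(1, 16):  # Azure reports up to 15 reboots may be needed
    print(f"Restart attempt {attempt}")
    client.virtual_machines.begin_restart(resource_group, vm_name).result()
    # After each restart, check whether the VM boots cleanly (e.g., via
    # boot diagnostics) and stop the loop once it recovers.
```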

Additional options for recovery: 

Mirroring the general guidance above, Microsoft recommends that customers restore from a backup taken before 04:09 UTC on July 18, when the faulty update started rolling out, if one is available. 

Customers leveraging Azure Backup can follow Microsoft’s instructions for restoring a VM from a backup. 

Alternatively, customers can attempt to repair the OS disk offline by attaching it to a repair VM. 

Once the disk is attached, customers can attempt to delete the following file: 

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys 

The disk can then be detached from the repair VM and re-attached to the original VM. 

AWS 

Official guidance can be found here: https://health.aws.amazon.com/health/status 

For EC2 instances, there are currently three paths to recovery.

First, in some cases, rebooting the instance may allow the CrowdStrike Falcon agent to be updated to a previously healthy version, resolving the issue. This does not succeed in all cases, however, in which case an alternative recovery strategy is needed.
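
A reboot can be issued through the console, the CLI, or an SDK; here is a minimal boto3 sketch with a placeholder region and instance ID:

```python
# Minimal boto3 sketch of the first recovery path: reboot the instance
# so the Falcon sensor can pick up the reverted channel file. The
# region and instance ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.reboot_instances(InstanceIds=["i-0123456789abcdef0"])
```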

Second, the following steps can be followed to delete the CrowdStrike Falcon agent file on the affected instance (latest revision: July 19, 7:01 AM PDT); a scripted sketch appears after the list:

  1. Create a snapshot of the EBS root volume of the affected instance
  2. Create a new EBS volume from the snapshot in the same Availability Zone
  3. Launch a new instance in that Availability Zone using a different version of Windows
  4. Attach the EBS volume from step (2) to the new instance as a data volume
  5. Navigate to the \windows\system32\drivers\CrowdStrike\ folder on the attached volume and delete "C-00000291*.sys"
  6. Detach the EBS volume from the new instance
  7. Create a snapshot of the detached EBS volume
  8. Create an AMI from the snapshot by selecting the same volume type as the affected instance
  9. Call replace root volume on the original EC2 Instance specifying the AMI just created

Finally, customers can relaunch the EC2 instance from a snapshot or image taken before 9:30 PM PDT. We have been able to confirm that the faulty update is no longer being delivered automatically, so the relaunched instance will not be affected by the issue.

For Amazon WorkSpaces, we recommend rebooting the affected WorkSpaces. As with EC2, this may recover the instance, but it does not work in all cases. Alternatively, we recommend restoring the WorkSpace from a recent backup.
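
The WorkSpaces reboot can likewise be scripted; in this hedged boto3 sketch, the region and WorkSpace ID are placeholders:

```python
# Hedged boto3 sketch: reboot affected Amazon WorkSpaces. The region
# and WorkSpace ID are placeholders.
import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")
workspaces.reboot_workspaces(
    RebootWorkspaceRequests=[{"WorkspaceId": "ws-0123456789"}]
)
```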

If you need assistance with any of these actions, please contact AWS Support via the AWS Support Center. 

GCP

Official guidance can be found here: https://status.cloud.google.com/

Windows VMs that continue to experience this issue need to be manually patched. Please follow the instructions on the CrowdStrike support portal; a scripted sketch of the attach/detach steps appears after the list. 

  1. Follow the offline repair procedure (steps 1 to 3). 
  2. Use a rescue VM (test VM) and attach the boot disk of the affected VM as a secondary disk. NOTE: Ensure that the boot disk image of the rescue VM differs from the boot disk being repaired; failure to do so may result in duplicate disk or partition GUIDs and unpredictable results, as confirmed by Microsoft. 
  3. Navigate to the D:\Windows\System32\drivers\CrowdStrike directory. 
  4. Locate the file matching “C-00000291*.sys” and delete it. 
  5. Power down the rescue VM and detach the secondary disk. 
  6. Re-attach the disk to the original VM and boot. 
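
For illustration, the attach/detach portions of these steps might look like the following with the google-cloud-compute Python client. This is a hedged sketch, not Google's official procedure: the project, zone, and resource names are placeholders, it assumes the affected VM is already stopped with its boot disk detached (per the offline-repair steps), and the channel-file deletion still happens manually inside the rescue VM.

```python
# Hedged sketch of the attach/detach steps with the google-cloud-compute
# Python client. Project, zone, and resource names are placeholders; it
# assumes the affected VM is already stopped and its boot disk detached
# (offline-repair steps 1-3), and steps 3-4 remain manual.
from google.cloud import compute_v1

project, zone = "my-project", "us-central1-a"        # placeholders
rescue_vm, affected_vm = "rescue-vm", "affected-vm"  # placeholders
boot_disk = "affected-vm-boot-disk"                  # placeholder

instances = compute_v1.InstancesClient()
disk_url = f"projects/{project}/zones/{zone}/disks/{boot_disk}"

# Step 2: attach the affected VM's boot disk to the rescue VM as a
# secondary (non-boot) disk.
instances.attach_disk(
    project=project, zone=zone, instance=rescue_vm,
    attached_disk_resource=compute_v1.AttachedDisk(
        source=disk_url, device_name="repair-disk", boot=False
    ),
).result()

# Steps 3-4 (manual, inside the rescue VM): delete
# D:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys
input("Press Enter once the channel file has been deleted...")

# Step 5: power down the rescue VM and detach the secondary disk.
instances.stop(project=project, zone=zone, instance=rescue_vm).result()
instances.detach_disk(
    project=project, zone=zone, instance=rescue_vm, device_name="repair-disk"
).result()

# Step 6: re-attach the repaired disk to the original VM as its boot
# disk and start it.
instances.attach_disk(
    project=project, zone=zone, instance=affected_vm,
    attached_disk_resource=compute_v1.AttachedDisk(source=disk_url, boot=True),
).result()
instances.start(project=project, zone=zone, instance=affected_vm).result()
```

As with the other providers, confirm that the original VM boots cleanly before returning it to service.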
