
IT Briefing: CrowdStrike Update Issue Leads to Global Microsoft Outages

This article was updated in July 2024 to brief readers on threat actors exploiting this outage for malicious activity.

Following the global Microsoft outages caused by a faulty CrowdStrike update, multiple reports indicate that threat actors are launching phishing and impersonation campaigns that exploit the outage to trick end users into giving them access to critical data and environments.

The day after global outages began (July 20th, 2024), CrowdStrike issued an advisory warning users about Spanish-speaking threat actors circulating malicious software labeled as a “CrowdStrike hotfix.” According to the report, these cybercriminals appeared to be targeting CrowdStrike customers in the Latin American region. The software used a private crypting service and deployed malware loaders that connected to command-and-control (C2) servers. On July 24th, CrowdStrike issued another advisory warning German customers about a fraudulent “CrowdStrike Crash Reporter” file being used in spear-phishing attacks.

Reports from MSSP Alert also indicate that threat actors registered over 40 typosquatting and phishing domains within the first day after the outages began. The same reports describe a malicious Microsoft Word document in circulation that contained a copy of Microsoft’s remediation instructions but also delivered an infostealer.

CrowdStrike partners and customers are advised to exercise caution and remain alert for messages potentially impersonating CrowdStrike or Microsoft support. CrowdStrike has also created threat hunting queries and listed Indicators of Compromise that security analysts and IT practitioners can use to detect malicious activity if a threat actor successfully tricks an end user.

ORIGINAL IT BRIEFING

On July 19th, 2024, a CrowdStrike software update for Windows agents caused global outages, impacting Windows endpoints and creating Blue Screen of Death (BSOD) issues. The CyberQP team can confirm that we were not directly impacted by this technical issue, and we express our sympathy for the MSPs and IT practitioners worldwide who continue to work on restoring their customers’ systems following this event.

This quick technology briefing compiles everything we know regarding the update in question for our partners to reference, and includes action steps that they can take to bring endpoints back online.

Key Takeaways

According to security researchers and media coverage, CrowdStrike’s software update was a “content update” adding new security measures against evolving or emergent threats. Because the update in question was not a major patch, experts believe it was not staged or gradually rolled out the way a larger release or new version of the CrowdStrike Falcon sensor normally would be, but was instead pushed as quickly as possible to protect clients immediately.

However, according to a report from ThreatLocker, this update contained a “faulty channel file,” which CrowdStrike users report created reboot loops, leading to the blue screen errors widely reported across news media on July 19th.

Action Steps

CrowdStrike has issued a statement on its website that is being regularly updated with technical details, workarounds, and mitigation steps, and has continued to update users on its subreddit. Microsoft and Amazon Web Services have also issued remediation steps for Azure and AWS virtual machine environments, which we’ve summarized for you below.

CrowdStrike recommends the following workaround:

  • Reboot your Windows endpoint into Safe Mode or the Windows Recovery Environment
  • Delete the channel file matching “C-00000291*.sys” in the directory C:\Windows\System32\drivers\CrowdStrike
  • Reboot your Windows endpoint normally (a sketch of the equivalent commands follows this list).
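
As an illustration, the deletion step can be run from a command prompt in the Windows Recovery Environment. This is a minimal sketch based on the workaround above, not CrowdStrike’s verbatim instructions; the drive letter assigned to the Windows volume inside the recovery environment can differ from C:, so confirm the path before deleting anything.

  :: Run from the Windows Recovery Environment command prompt
  :: (verify the Windows volume's drive letter first; it may not be C:)
  cd /d C:\Windows\System32\drivers\CrowdStrike
  del C-00000291*.sys
  :: Close the prompt and continue booting into Windows normally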

CrowdStrike also recommends using a recovery key for endpoints encrypted with BitLocker.

For Microsoft Azure environments, MSPs can:

  • Restore from an Azure backup taken before July 19th, 2024 at 04:09 UTC.
  • Or create a rescue VM, duplicate the problematic VM’s OS disk, and attach the copy as a data disk to the rescue VM.
  • Next, run Microsoft’s mitigation script: az vm repair run -g RGNAME -n BROKENVMNAME --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose
  • Then restore the original VM with az vm repair restore -g RGNAME -n BROKENVMNAME --verbose and start it (a consolidated command sequence follows this list).
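
For reference, the repair-VM path can be run as a single Azure CLI sequence using the vm-repair extension. This is a sketch based on the steps above: RGNAME and BROKENVMNAME are placeholders for your own resource group and VM names, and the repair credentials are values you choose; refer to Microsoft’s published guidance for the authoritative procedure.

  # Install the vm-repair extension if it isn't already present
  az extension add -n vm-repair

  # Create a rescue VM and attach a copy of the broken VM's OS disk to it
  az vm repair create -g RGNAME -n BROKENVMNAME --repair-username REPAIRUSER --repair-password 'REPAIRPASSWORD' --verbose

  # Run Microsoft's mitigation script against the attached disk
  az vm repair run -g RGNAME -n BROKENVMNAME --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose

  # Swap the repaired OS disk back onto the original VM, then start it
  az vm repair restore -g RGNAME -n BROKENVMNAME --verbose
  az vm start -g RGNAME -n BROKENVMNAME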

For Amazon Elastic Compute Cloud (EC2) instances and Amazon WorkSpaces using CrowdStrike:

  • Reboot your environment to restart/update your CrowdStrike Falcon agent.
  • Or restore your WorkSpaces to a state from 12 hours before the outage (example CLI commands follow this list).
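
As an illustration, both steps can be performed with the AWS CLI. This is a sketch using placeholder resource IDs (i-0123456789abcdef0 and ws-xxxxxxxxx are examples, not real resources); restoring a WorkSpace rolls it back to its most recent healthy snapshot, which AWS captures roughly every 12 hours.

  # Reboot an affected EC2 instance so the Falcon sensor can pick up the corrected content
  aws ec2 reboot-instances --instance-ids i-0123456789abcdef0

  # Reboot an affected WorkSpace
  aws workspaces reboot-workspaces --reboot-workspace-requests WorkspaceId=ws-xxxxxxxxx

  # Or restore a WorkSpace to its most recent snapshot
  aws workspaces restore-workspace --workspace-id ws-xxxxxxxxx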

If a reboot fails, Amazon has prepared commands that work with an AWS Systems Manager automation runbook and has outlined steps for manual recovery.

For more information, we recommend MSPs and IT technicians refer to the resources and mitigation steps linked and outlined above.