Why We Need To Reframe the False-Positive Problem
The concept of false positives has been pushed and pulled around for years in the cybersecurity industry. Countless vendor-sponsored studies reinforce the idea that false positives are directly contributing to the problem of alert fatigue. And as a security vendor, it’s no surprise that one of the top burning questions on our customers’ minds is, “What’s our false-positive rate?”
There’s no doubt that security analysts and IT admins are frustrated by a constant barrage of alerts. But false positives aren’t solely to blame; much of the noise stems from poorly targeted detection logic. Without experienced teams and large datasets, building targeted threat detections can produce large volumes of noise. And because routine administrative work often overlaps with attacker patterns, tuning or building behavior- or signature-based threat detection requires time and effort that most organizations don’t have.
Moreover, the industry quickly becomes conditioned to the alerts this logic generates. If you previously saw hundreds of alerts from your firewall’s intrusion prevention system about an external scan and now see zero, it can be hard to accept that those alerts were background noise rather than a significant problem.
The Importance of Monitoring Behaviors
If your password manager is producing password-spraying behavior, that’s not a false positive; it’s a no-action finding that may call for safe-listing and tuning of the detection logic to reduce noise and improve accuracy. Historically, however, most people in the industry have been trained to think, “Not an active threat? Must be a false positive.”
We should reframe that thinking to: “What’s creating these behaviors? Did the behavior happen, and do I care that it happened or will happen again?” Targeting your detection methodology at the negative behaviors you actually want to see makes it far easier to maintain a high true-positive rate. When organizations fold compliance or infrastructure alerting into threat detection, alert fatigue quickly spreads within teams. By and large, the industry is trained through repetition to ignore, or be annoyed by, loud alarms that are not correct.
This habit tends to carry over into threat detection and can quickly result in missed opportunities to halt threats. It is easy to dismiss something that resembles what you have seen before: perhaps a process in your Internet Information Services (IIS) environment that looks like normal Exchange activity is executing a large number of POSTs and interacting with cmd.exe. Assuming this is normal noise because something similar has been seen before, and jumping straight to “false positive,” can result (and has resulted) in breaches that are not responded to in a timely fashion. Because defenders must act quickly, the need to stop and review without bias is more important than ever.
Attackers can perform only a finite number of actions to gain access to an environment, although they are adapting their methodologies to evade antivirus and endpoint detection tools. These methods could include using IEX in PowerShell, or patterns such as Word spawning unexpected processes. Often, when an attacker has successfully landed a malicious Word document in an environment, it carries a macro that contains the actual attack logic. When the user enables macros on their machine, Word immediately runs the Visual Basic for Applications (VBA) code within the document in a way that is generally not visible to the user, and not identifiable by antivirus unless that code is commonly seen.
When that script runs, the Word process (winword.exe) spawns other processes such as cmd.exe or powershell.exe to load a backdoor onto the machine itself. Backdoor loaders like Cobalt Strike’s are very good at avoiding antivirus systems, and attackers frequently update Cobalt Strike and similar post-exploitation frameworks to continuously evade signatures. By watching for Word spawning these processes, defenders can quickly identify when a macro or user is performing malicious, or at least odd, actions within the Word environment.
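A minimal sketch of this kind of parent-child behavioral check, covering both the Word example here and the IIS example above: it assumes process-creation events (e.g., Sysmon Event ID 1) have already been normalized into dicts with "parent" and "child" image names. The event shape and the SUSPICIOUS_CHILDREN table are illustrative assumptions, not any product’s real schema.

```python
# Flag suspicious parent -> child process launches.
# Assumed (hypothetical) normalized event shape: {"parent": ..., "child": ...}
SUSPICIOUS_CHILDREN = {
    "winword.exe": {"cmd.exe", "powershell.exe", "wscript.exe"},
    "w3wp.exe": {"cmd.exe", "powershell.exe"},  # IIS worker spawning a shell
}

def flag_suspicious_spawns(events):
    """Yield events where a watched parent spawns an unexpected child."""
    for event in events:
        parent = event.get("parent", "").lower()
        child = event.get("child", "").lower()
        if child in SUSPICIOUS_CHILDREN.get(parent, ()):
            yield event

events = [
    {"parent": "winword.exe", "child": "cmd.exe"},   # macro launching a shell
    {"parent": "explorer.exe", "child": "winword.exe"},  # a user opening Word
]
hits = list(flag_suspicious_spawns(events))
# hits -> [{"parent": "winword.exe", "child": "cmd.exe"}]
```

Note that the table encodes the behavior, not a signature: it keeps firing even when the payload itself changes to evade antivirus.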
Pure signature-based detection is important, but security tools should focus more on potentially threatening behaviors. A detection that fires when a single IP address has attempted and failed to log into at least 20 accounts on the domain controller or host itself within a 10-minute period will reliably tell you that a password-spraying pattern is occurring. By contrast, a detection that merely tells you 20 users failed to log in over a 10-minute period is a near-guaranteed false positive in most cases, and this happens far more often than most organizations realize. While failed-login counts can be helpful information, daily reporting on them will help you solve operational issues; behavior-based detection is what will support your effort to stop threats quickly.
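The source-correlated version of that detection can be sketched as follows, assuming failed-logon events are available as (timestamp, source_ip, username) tuples. The thresholds mirror the example in the text (20 distinct accounts, 10 minutes), and the allowlist illustrates the kind of safe-listing a known-benign source, such as the password manager mentioned earlier, might need; all names here are hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 20  # distinct targeted accounts per source IP
ALLOWLIST = {"10.0.0.5"}  # e.g., a safe-listed password manager or scanner

def detect_spray(failed_logons):
    """Return source IPs that failed logons for >= THRESHOLD distinct users within WINDOW."""
    by_ip = defaultdict(list)  # ip -> time-ordered (timestamp, user) attempts
    for ts, ip, user in sorted(failed_logons):
        if ip not in ALLOWLIST:
            by_ip[ip].append((ts, user))
    flagged = set()
    for ip, attempts in by_ip.items():
        start = 0
        for end in range(len(attempts)):  # slide a 10-minute window
            while attempts[end][0] - attempts[start][0] > WINDOW:
                start += 1
            users = {user for _, user in attempts[start:end + 1]}
            if len(users) >= THRESHOLD:
                flagged.add(ip)
                break
    return flagged

base = datetime(2024, 1, 1, 12, 0)
events = [(base + timedelta(seconds=i), "203.0.113.7", f"user{i}") for i in range(20)]
events += [(base, "10.0.0.5", f"user{i}") for i in range(25)]  # allowlisted noise
# detect_spray(events) -> {"203.0.113.7"}
```

Because the logic keys on one source hitting many distinct accounts, raw bursts of failed logins that lack a common source never fire, which is exactly the operational noise the text warns against alerting on.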
Building Security Maturity Through Detection
The reality is that detections derived from products themselves always carry some baseline of false positives. Endpoint tools that rely on AI-based detection, for example, will never be perfect (nor can they be) and will flag things that are not threats, especially in environments where in-house engineering work is happening.
At that point, it’s important to decide whether you have the time to review these kinds of findings, or whether you only want to review confirmed threats to your environments, whether those come from Sysmon behaviors or general CrowdStrike detections.
Either way, monitoring behaviors will always lead to internal security maturity growth. As you monitor behaviors, you’ll begin to recognize patterns and develop a deeper understanding of what’s going on within your network. Having that visibility is one of the first steps in maturing your organization’s security posture.