
The Value of a False Positive. Part One: Measurement

TEN18 by Exabeam

Have you ever calculated the actual true positive rate of your security program? Have you ever considered WHY you might want to track this metric? Is there even an accepted industry standard you should strive to attain? In this two-part blog series, we will dive into these questions and see how the insights gained by tracking this metric can improve cybersecurity programs, help evaluate the effectiveness of our tool stack, and maybe bring us to some answers around an “acceptable” true positive rate.

Defining a false positive

When discussing the true positive rate (TPR), defining a true positive is the first place to start. For this exercise, I will lay out some accepted definitions of false positives (FP) and remove those from our data set. By definition, whatever remains will be our true positives (TP). We can use this basic formula to calculate our true positive rate:

TPR = TP ÷ (TP + FP)
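To make the arithmetic concrete, here is a minimal Python sketch of that calculation; the function name and the alert counts are invented for illustration:

```python
def true_positive_rate(tp: int, fp: int) -> float:
    """Basic TPR: the share of raised alerts that were correct."""
    return tp / (tp + fp)

# A hypothetical month of SOC alerts: 120 true positives, 480 false positives.
print(f"TPR = {true_positive_rate(120, 480):.0%}")  # TPR = 20%
```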

Historically, we used the term “false positive” to say that our vulnerability scanner “incorrectly indicates that a vulnerability is present”1. This is still the first definition in the NIST online glossary of terms. However, the definition of false positive has expanded to include statements such as “An alert that incorrectly indicates that malicious activity is occurring.”1 or “Incorrectly classifying benign activity as malicious.”1 Due to the broad expansion of the term, classifying an alert as a false positive can be a contentious topic, depending on who you’re talking to. Try bringing up a false positive rate with a tool’s vendor and you’ll see what I mean.

Testing an example

As a thought exercise, let us take the following example: if I have a Network Administrator run an unscheduled Nmap scan as part of their daily tasks, which triggers my SIEM rule to detect network scans, does this count as a true positive or a false positive? On the one hand, my SIEM responded precisely how I built the rule to run; it detected someone kicking off a network scan, so it’s a true positive. On the other hand, this scanning activity is acceptable in my organization given the person’s role as a Network Administrator, so it would not be considered “malicious” behavior and could also be regarded as a false positive. Should I have my SOC analysts close this ticket as a true positive or a false positive? Should my Tier 3 analyst review the ticket, submit this detection for tuning, and remove all network admins from the detection?

You can start to see why having a solid definition of a false positive is so important. If we classified this detection as a false positive (because it wasn’t malicious), our downstream process could create additional work for our staff responsible for rule tuning and could also create a security gap. For example, removing a specific group of users from the rules could have a significant impact if one of those accounts were compromised. Leaving the detection in place as is could create complacency in the SOC team via alert fatigue. “Oh, it’s Bob scanning again; just close it out.” 

Introducing a new definition

To tackle this rather complex metric, we must introduce a third definition: True Positive: Benign (TP:B), for alerts where the rule fired exactly as designed but the activity turned out to be authorized. You could expand this concept further and create another measurement called True Positive: Acceptable Risk (TP:AR), which captures all the alerts we receive but dismiss because the alerting source has been classified as an “exception.” I know you have them. We all do.
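One lightweight way to make these categories concrete is an explicit verdict type in your case-management tooling. The sketch below is illustrative Python, not any particular product’s API; the verdict names are assumptions, and the closing example reuses the Nmap scenario above:

```python
from enum import Enum

class Verdict(Enum):
    TP = "true positive"                      # confirmed malicious activity
    TP_B = "true positive: benign"            # rule fired correctly; activity was authorized
    TP_AR = "true positive: acceptable risk"  # known exception the business accepts
    FP = "false positive"                     # rule fired on activity it should never match

# The Nmap scenario: the detection worked exactly as designed,
# but the Network Administrator was authorized to scan.
nmap_scan_by_admin = Verdict.TP_B
print(nmap_scan_by_admin.value)  # true positive: benign
```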

Adding these granular measurements lets us correctly classify alerts, and establish paths of work around them, in ways that a yes-or-no, true positive vs. false positive classification inherently restricts. We can then use the true positive rate to track improvement in our security programs. Our true positive rate formula changes to the following:

TPR = (TP + TP:B + TP:AR) ÷ (TP + TP:B + TP:AR + FP)

with the false positive count defined as:

FP = Total alerts – (TP + TP:B + TP:AR)
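As a rough sketch of how this expanded rate could be computed from closed tickets, here is a small Python example; the verdict labels and alert counts are invented for illustration:

```python
from collections import Counter

def expanded_tpr(verdicts: list[str]) -> float:
    """TPR = (TP + TP:B + TP:AR) / (TP + TP:B + TP:AR + FP)."""
    counts = Counter(verdicts)
    true_positives = counts["TP"] + counts["TP:B"] + counts["TP:AR"]
    return true_positives / (true_positives + counts["FP"])

# A hypothetical week of closed alerts.
closed = ["TP"] * 12 + ["TP:B"] * 30 + ["TP:AR"] * 18 + ["FP"] * 40
print(f"Expanded TPR = {expanded_tpr(closed):.0%}")  # Expanded TPR = 60%
```

Under the binary scheme, the same week might have been scored as 12 ÷ (12 + 88) = 12%, with every benign and accepted-risk detection lumped in as a false positive.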

In our next post, we’ll explore the expanded true positive definitions and associated workstreams around the classifications. 

