
Applying Machine Learning to Security Problems

Anomaly detection is a hard problem riddled with false alarms. The security community has been increasingly interested in the potential for data-driven tools to filter out noise and automatically detect malicious activity in large networks. However, while capable of overcoming the limitations of static, rule-based techniques, machine learning is not a silver-bullet solution for detecting and responding to attacks.

Adaptable models require a continuous flow of labeled data to train on. Unfortunately, creating such labeled data is the most expensive and time-consuming part of the data science process. Data is usually messy, incomplete, and inconsistent. While there are many tools for experimenting with different algorithms and their parameters, there are few tools to help one develop clean, comprehensive datasets. Oftentimes this means asking practitioners with deep domain expertise to help label existing datasets. But ground truth can be hard to come by in the security context, and it may go stale very quickly.

On top of that, bias in training data can hamper a model's ability to discriminate between output classes. In the security context, data bias can be interpreted in two ways.

First, attack methodologies are more dynamic than ever before. If a predictive model is trained on known patterns and vulnerabilities (e.g., features from malware that is file-system resident), it may not detect an unprecedented attack that does not conform to those trends (e.g., it will miss features of malware that is only memory resident).

Second, data bias also comes in the form of class representation. To understand class representation bias, one can look to a core foundation of statistics: Bayes' theorem.

Bayes' theorem describes the probability of event $A$ given event $B$:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

Expanding the probability $P(B)$ over the two mutually exclusive outcomes $A$ and $\neg A$, we arrive at the following equation:

$$P(B) = P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A)$$

Combining the above equations, we arrive at the following alternative statement of Bayes' theorem:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A)}$$

Let's apply this theorem to a concrete security problem to illustrate the issues that emerge when predictive models are trained on biased data.

Suppose a security vendor has deployed an intrusion detection system (IDS) at a company, alerting the company when it detects a malicious URL sent to an employee's inbox. Suppose only a handful of malicious URLs reach employees each day, while the IDS analyzes an enormous volume of incoming URLs per day.

Let $I$ denote an incident (an incoming malicious URL) and $\neg I$ denote a non-incident (an incoming benign URL). Similarly, let $A$ denote an alarm (the IDS classifies an incoming URL as malicious) and $\neg A$ denote a non-alarm (the IDS classifies a URL as benign). That means $P(A \mid I)$ is the IDS's hit rate and $P(A \mid \neg I)$ is its false alarm rate.

What’s the probability that an alarm is associated with a real incident? In other words, how much can we trust the IDS under these conditions?

Using Bayes' theorem from above, we know:

$$P(I \mid A) = \frac{P(A \mid I)\,P(I)}{P(A \mid I)\,P(I) + P(A \mid \neg I)\,P(\neg I)}$$

Put another way,

$$P(\text{incident} \mid \text{alarm}) = \frac{\text{hit rate} \times \text{base rate}}{\text{hit rate} \times \text{base rate} + \text{false alarm rate} \times (1 - \text{base rate})}$$

Now let's consider $P(I)$ and $P(\neg I)$, given the parameters of the IDS problem we defined above: with only a handful of malicious URLs among the enormous number of URLs the IDS analyzes each day, $P(I)$ is vanishingly small and $P(\neg I)$ is essentially 1.

These probabilities emphasize the bias present in the distribution of analyzed URLs: the IDS has little sense of what incidents entail, as it is trained on very few examples of them. Plugging these probabilities into the equation above, we find that the false alarm rate gets weighted by the overwhelming mass of benign URLs, so even a small $P(A \mid \neg I)$ swamps the true detections and drives $P(I \mid A)$ down.

Thus, to have reasonable confidence in an IDS under these biased conditions, we need not only an unrealistically high hit rate, but also an unrealistically low false alarm rate. For example, even in the best-case scenario of a 100 percent hit rate, an alarm is only trustworthy if the false alarm rate is pushed down toward the incident base rate itself; in other words, only a tiny fraction of alarms can be false positives.
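To make the arithmetic concrete, here is a minimal Python sketch of the calculation. The base rate (10 malicious URLs out of 1,000,000 analyzed per day) and the hit/false alarm rates below are illustrative assumptions, not figures from the original example:

```python
# A minimal sketch of the base-rate argument above, with illustrative
# (assumed) numbers rather than the figures from the original example.

def posterior_incident_given_alarm(base_rate, hit_rate, false_alarm_rate):
    """P(I | A) via Bayes' theorem: how trustworthy a single alarm is."""
    p_incident = base_rate                      # P(I)
    p_benign = 1.0 - base_rate                  # P(not I)
    true_alarms = hit_rate * p_incident         # P(A | I) * P(I)
    false_alarms = false_alarm_rate * p_benign  # P(A | not I) * P(not I)
    return true_alarms / (true_alarms + false_alarms)

# Assume 10 malicious URLs among 1,000,000 analyzed per day (illustrative).
base_rate = 10 / 1_000_000

# Even with a perfect hit rate and a 1% false alarm rate,
# almost every alarm is a false positive.
print(posterior_incident_given_alarm(base_rate, hit_rate=1.0,
                                     false_alarm_rate=0.01))   # ~0.001

# Alarms only become trustworthy when the false alarm rate shrinks
# toward the incident base rate itself.
print(posterior_incident_given_alarm(base_rate, hit_rate=1.0,
                                     false_alarm_rate=1e-6))   # ~0.91
```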

In the real world, detection hit rates are much lower and false alarm rates are much higher. Thus, class representation bias in the security context can make machine learning algorithms inaccurate and untrustworthy. When models are trained on only a few examples of one class but many examples of another, the bar for reasonable accuracy is extremely high, and in some cases unachievable. Predictive algorithms run the risk of being “the boy who cried wolf” – annoying and prone to desensitizing security professionals to incident alerts.

Security data scientists can avoid these obstacles with a few measures:

1) Apply structure to data with supervised and semi-supervised learning.

2) Undersample the majority class and/or oversample the minority class. See scikit-learn's stratified data splitting functions and this repo (a short sketch follows this list).

3) Generate synthetic data from the minority class via algorithms like SMOTE. See this repo again.

4) Build models that penalize classification toward the majority class.

5) Focus on the organization, presentation, visualization, and filtering of data, not just prediction.

6) Encourage data-gathering expeditions.

7) Encourage security expertise on the team. Security expertise can help you think of viable solutions to problems when data is insufficient.

8) Weigh the trade-off between accuracy and coverage. The effects of false positives are particularly detrimental in the security space, meaning that for some applications it may be more useful to sacrifice the volume of accurate classifications for higher confidence.
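To make points 2, 3, and 4 concrete, here is a minimal sketch using scikit-learn together with the imbalanced-learn package for SMOTE. The synthetic dataset and every parameter choice are assumptions for illustration, not a prescribed pipeline:

```python
# A minimal sketch of stratified splitting, SMOTE oversampling, and
# class-weighted training on a purely synthetic, imbalanced dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

# Simulate a heavily imbalanced dataset: roughly 1% "malicious" examples.
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)

# A stratified split preserves the rare minority class in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Oversample the minority class in the training set only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

# class_weight="balanced" additionally penalizes mistakes on the rare class.
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X_res, y_res)

print(classification_report(y_test, clf.predict(X_test)))
```

Note that resampling is applied only to the training split; evaluating on resampled data would overstate performance on the true, imbalanced distribution.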

Machine learning has the potential to change how we detect and respond to malicious activity in our networks by separating signal from noise. It can help security professionals discover patterns in network activity never seen before. However, when applying these algorithms to security, we have to be aware of the approach's caveats so we can address them.