
An "expert" way to overcome alert fatigue

Webster defines an expert system as “computer software that attempts to mimic the reasoning of a human specialist.” At first glance, that seems like a pretty broad definition that could be applied to nearly any operation for which a computer is responsible, but we believe the key word here is "specialist."

Even the best-trained AI model derives its value from the vast experience and insight of human specialists. In fact, it is human insight that offers a way to combat the “alert fatigue” known to most SOC analysts. At IronNet, we look to specialists to solve this problem. Specifically, as part of our Network Detection and Response solution, our Expert System takes the intellect of our highly qualified nation-state offensive and defensive computer network operations specialists (a.k.a. threat hunters) and packages it with anomaly and behavioral analytics skillfully crafted by our data scientists to produce the most relevant and effective prioritized alerts.

When we sought to develop behavioral analytics focused on network traffic, we did so for three important reasons. We knew that:

  1. Adversaries require network communications to perform the majority of their desired actions.
  2. There are gaps in endpoint detection because malware can evade detection on the endpoint itself.
  3. A passive approach to analysis would let us operate over the data without compromising performance or hindering business operations, unlike a firewall or intrusion detection system (IDS).

As it turns out, there is a prevalent problem with behavioral analytics: the number of false positives this approach produces. The reason for what essentially amounts to “alert fatigue” is straightforward: adversaries attempting to attack our customers do so using behaviors similar to what already occurs on the network as part of normal business operations. Adversaries know what typical network behavior looks like, and they seek essentially to hide within it. From our offensive experience, we knew how to succeed as an attacker. From our defensive experience, we knew better solutions were needed. Behavioral analytics alone wasn’t getting the job done. This is where the specialist part of the equation is so critical.

Where human expertise makes the difference with threat detection

It’s easy to find examples of behaviors that could be malicious but are not always bad. Triggering alerts on these suspicious traffic patterns too often results in false positives, distraction, and alert fatigue. Let’s take beaconing as an example. In this case, the behavioral analytic is looking for repeated, structured network requests that occur at a given frequency over a certain duration of time.

| Innocuous beaconing | Beaconing attack |
| --- | --- |
| calls to public-facing infrastructure, giving information about the application such as version number & device type | calls to attacker command and control server, confirming that the malware was installed successfully, sending location & device type |
| calls out to trusted software developer to retrieve application updates | establishes communications with the command and control server to receive & execute malicious actions |
| software updates often eliminate security vulnerabilities | provides status and pattern-of-life information of both the malware and infected device |

Immediately, you can see the issue. Beaconing detection analytics are also likely to detect software update requests, and that isn’t precise enough to be useful. IronNet built a robust testing platform to prove that our analytics could detect known malware and known malware techniques using only their behaviors, without any signatures in play. But we found that our analytics were also detecting benign activity. At this point, we needed a way to refine our results so that we were definitively detecting only bad behaviors, not simply alerting on behaviors that could potentially be bad.
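To make the analytic concrete, here is a minimal sketch of the kind of periodicity check a beaconing detector might perform. It is illustrative only: the coefficient-of-variation scoring, the thresholds, and the sample timestamps are assumptions for the example, not the actual IronDefense analytic.

```python
from statistics import mean, pstdev

def beaconing_score(timestamps, min_events=8):
    """Score how 'beacon-like' a series of connection timestamps is.

    A low coefficient of variation in the inter-arrival times means the
    requests are highly periodic -- characteristic of both malware
    check-ins and routine software-update polling, which is exactly why
    this analytic alone produces false positives.
    """
    if len(timestamps) < min_events:
        return 0.0
    ts = sorted(timestamps)
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    mu = mean(intervals)
    if mu == 0:
        return 0.0
    cv = pstdev(intervals) / mu          # coefficient of variation
    return max(0.0, 1.0 - cv)            # 1.0 = perfectly periodic


# Example: a host calling the same external server every ~300 seconds
polls = [0, 300, 601, 899, 1200, 1502, 1799, 2100]
print(round(beaconing_score(polls), 2))  # close to 1.0 -> flag for review
```

The same periodic pattern scores high whether the caller is malware checking in or an updater polling its vendor, which is exactly the false-positive problem described above.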

Reality check: The dark side of "eliminating" false positives

This brings up an important point about the IronNet solution and one of the key advantages of our product: we recognize and accept the fact that false positives will occur. If, during a sales pitch, a cybersecurity vendor charged with finding unknown bad activity claims to have zero false positives, you should end the conversation immediately. It means they aren’t trying hard enough, and that is destined to leave your SOC in the dark about real threats. Of course, the goal in finding something that no one else has found is to do so with as few mistakes as possible, but the reality is that the system must fail from time to time to learn, refine its capabilities, improve its expertise, and do better next time. It reminds me of my college football coaches, who pushed me to play a little harder each time, and I did. (Sorry about that. Trip down memory lane.)

So what about tackling false positives? Here are the three approaches we have taken:

1. Safe listing

Our first approach to addressing the false positives was safe listing. This approach works and is still an important part of our product offering. It does have flaws, however. Sophisticated adversaries know how to abuse safe or trusted lists by leveraging services that are typically believed to be benign and are thus ignored when they show up as detected threats. Additionally, what is approved as normal in one customer environment may not be normal for a different customer. Also, there are many, many services and browsing activities running in an enterprise environment that are benign but will never be put on a safe list, simply because there are far too many to keep up with.
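As a rough illustration of the idea, safe listing boils down to suppressing detections whose destinations fall under an approved suffix. The domain names and suffix-matching logic below are assumptions for the example, not our production safe list.

```python
# Minimal sketch of safe listing: suppress detections whose destination
# domain falls under an approved suffix. Domain names here are illustrative.
SAFE_SUFFIXES = {"windowsupdate.com", "ubuntu.com", "example-corp.com"}

def is_safe_listed(domain: str) -> bool:
    parts = domain.lower().rstrip(".").split(".")
    # Check every parent suffix, e.g. "a.b.c.com" -> "b.c.com", "c.com"
    return any(".".join(parts[i:]) in SAFE_SUFFIXES for i in range(len(parts)))

print(is_safe_listed("download.windowsupdate.com"))  # True  -> suppressed
print(is_safe_listed("cdn.unknown-updates.net"))     # False -> still alerts
```

The weakness is visible right in the sketch: anything an adversary can stage under a trusted suffix sails through, and every benign domain that never makes the list still alerts.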

2. Threat intelligence

The second approach to reducing false positives was to use threat intelligence. This method wasn’t going to help us find sophisticated unknown threats, but it did help indicate when a suspicious behavior leveraged IOCs that had previously been used in a cyber attack. This was effective and enabled us to deliver higher fidelity results to our customers. Our system was smart; however, we needed even more… we needed to push a little harder.
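Conceptually, this enrichment is a lookup of a detection’s indicators against previously observed IOCs. The sketch below uses an in-memory set and invented indicators purely for illustration; a real deployment would query curated intelligence feeds.

```python
# Hypothetical threat-intel lookup: tag a detection when its indicators
# (domains, IPs, hashes) match previously observed IOCs. The feed here is
# an in-memory set with made-up values for the example.
KNOWN_BAD_IOCS = {
    "203.0.113.77",
    "bad-updates.example",
    "44d88612fea8a8f36de82e1278abb02f",
}

def enrich_with_intel(detection: dict) -> dict:
    hits = [ioc for ioc in detection.get("indicators", []) if ioc in KNOWN_BAD_IOCS]
    detection["intel_hits"] = hits
    detection["intel_matched"] = bool(hits)
    return detection

alert = {"analytic": "beaconing",
         "indicators": ["203.0.113.77", "updates.example-corp.com"]}
print(enrich_with_intel(alert)["intel_matched"])  # True -> higher-fidelity alert
```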

3. Expert insights

We needed a way to take the network anomalies and behaviors that our analytics had detected and prioritize the findings most likely to be malicious, letting them bubble to the top of the SOC’s list. Safe listing and threat intelligence eliminated some of the chaff based on experience and observation, but neither technique was helping us find new and unknown bad activity. The practice of finding the unknown is not about delineating between known good and known bad. It’s about operating in the uncertain and rapidly drawing on expert insights to analyze and assess whether the activity is good or bad. This is what led to the development of the IronDefense Expert System.

Winning with threat hunting expertise on your side

IronNet’s specialist team has a lot of experience hunting to identify threats. Our hunt operations start with either threat intelligence or qualified leads discovered in the network by our NDR solution, IronDefense. We studied how our team operated, thinking about ways to make their job easier so that they didn’t have to comb through so many alerts that resulted in false positives. At the same time, we realized a few key things. First, scaling is a huge challenge for cybersecurity; there is precious little talent capable of turning an unknown detection into a known bad discovery. Second, threat hunting is a manually intensive process that takes a significant amount of labor. Third, alert fatigue is real, and focusing on real threats is essential.

Enter automation 101. This is the entire reason the computer exists.

I recall a situation earlier in my career at the National Security Agency when a gentleman in another organization described a problem that he thought was unsolvable and asked how my team would approach it. A colleague of mine, in a very smart-aleck way, replied, “computers.” When we know how to describe a routine function, software can be designed so that the computer can do it for us. Brilliant. This is when the system we had been working on truly became an Expert System.

Enriching automated alerts with the IronDefense Expert System

By studying the patterns of our skilled cybersecurity team, we identified various enrichments they performed on features extracted from our network anomalies and behaviors. A portion of these enrichments required access to publicly available information such as the popularity indexes in Cisco Umbrella and Majestic Million. Some required purchasing licenses to enrichment data such as domain WHOIS registration information. Still others were enrichments derived from other network traffic observations made in the customer environment, such as community of interest. The Expert System was developed to take in all these specialized inputs and automatically perform the enrichments.
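A simplified way to picture this stage: each enricher attaches one more piece of context to a detection. The sketch below stubs out the data sources with hard-coded values (the domains, ranks, and dates are invented for the example), but the shape of the pipeline is the point.

```python
# Illustrative enrichment pipeline: each enricher adds one feature to the
# detection. The data sources named in the post (popularity indexes, WHOIS
# registration, community of interest) are stubbed out here with fakes.
from datetime import date

POPULARITY_RANK = {"example-corp.com": 1450}          # stand-in for a top-sites index
WHOIS_CREATED = {"bad-updates.example": date(2024, 12, 1)}
INTERNAL_HOSTS_SEEN = {"bad-updates.example": 1}      # community of interest: how many
                                                      # internal hosts contacted the domain

def enrich(detection: dict) -> dict:
    d = detection["domain"]
    detection["popularity_rank"] = POPULARITY_RANK.get(d)           # None = not in index
    created = WHOIS_CREATED.get(d)
    detection["domain_age_days"] = (date.today() - created).days if created else None
    detection["internal_hosts_seen"] = INTERNAL_HOSTS_SEEN.get(d, 0)
    return detection

print(enrich({"domain": "bad-updates.example", "analytic": "beaconing"}))
```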

Our customers’ SOC analysts perform these same functions, alert in and alert out, every day. Then they leverage the enrichment information to assess whether the anomalies and behaviors are more likely to be malicious or benign. The trouble here is that this isn’t a boolean operation. It isn’t “if we see X, it’s bad” or “if we see Y, it’s good.” It is more like this: if we see X, it’s 20% more likely that it’s bad, and if we see Y, it’s 50% more likely that it’s good. Thus, we created a scoring matrix and a rules engine. The Expert System was developed to operate over the anomalies and behaviors detected from the network traffic, automatically enrich them using numerous external and internal sources, and, within their context, assign scores to the behaviors indicating whether they were more likely related to something malicious or something benign. The most likely malicious threats would bubble right to the top.
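Here is a minimal sketch of what a scoring matrix and rules engine of this kind can look like. The rule names and weights are illustrative numbers chosen for the example, not IronNet’s actual scoring model.

```python
# Sketch of a weighted rules engine over enriched detections. The weights
# (e.g. "newly registered domain raises the score by 0.20") are invented
# for illustration.
RULES = [
    # (name, predicate over enriched detection, weight: + raises, - lowers priority)
    ("newly_registered_domain", lambda d: (d.get("domain_age_days") or 10**6) < 30,     +0.20),
    ("not_in_popularity_index", lambda d: d.get("popularity_rank") is None,             +0.15),
    ("rare_inside_enterprise",  lambda d: d.get("internal_hosts_seen", 0) <= 2,         +0.15),
    ("intel_match",             lambda d: d.get("intel_matched", False),                +0.30),
    ("widely_popular_domain",   lambda d: (d.get("popularity_rank") or 10**6) < 10_000, -0.50),
]

def score(detection: dict, base: float = 0.5) -> float:
    s = base
    for name, predicate, weight in RULES:
        if predicate(detection):
            s += weight
            detection.setdefault("fired_rules", []).append(name)
    return min(max(s, 0.0), 1.0)   # clamp to [0, 1]; higher = review first

enriched = {"domain_age_days": 12, "popularity_rank": None,
            "internal_hosts_seen": 1, "intel_matched": False}
print(round(score(enriched), 2))   # 0.5 + 0.2 + 0.15 + 0.15 -> 1.0
```

Each fired rule nudges the score up or down rather than declaring a verdict, which is what lets the detections most likely to be malicious bubble to the top.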

That solution seemed easy enough to build, but we couldn’t stop there. Our Expert System was initially seeded with rules that adjusted the priority of the detections. We also knew that by studying our customers’ feedback and analyst ratings of the detections, we could use machine learning to improve the Expert System’s prioritization. In other words, rules that triggered and resulted in confirmed-bad feedback from our customers were adjusted to further heighten the priority of similar threats in the future, and, in turn, confirmed-benign activity helped reduce the emphasis on similar threats in the future. The Expert System keeps getting smarter. All of this is done continuously as part of the IronNet product improvement process.
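The feedback loop can be sketched in the same spirit: confirmed-malicious ratings nudge the weights of the rules that fired upward, and confirmed-benign ratings nudge them downward. The update rule and learning rate below are illustrative assumptions, not the machine-learning approach used in the product.

```python
# Sketch of the feedback loop described above: analyst ratings of past
# detections adjust the weights of the rules that fired on them. The
# simple additive update and the 0.05 learning rate are illustrative.
def update_weights(weights: dict, feedback: list, lr: float = 0.05) -> dict:
    """feedback items: (fired_rules, verdict) where verdict is 'malicious' or 'benign'."""
    for fired_rules, verdict in feedback:
        delta = lr if verdict == "malicious" else -lr
        for rule in fired_rules:
            weights[rule] = weights.get(rule, 0.0) + delta
    return weights

weights = {"newly_registered_domain": 0.20, "not_in_popularity_index": 0.15}
feedback = [(["newly_registered_domain"], "malicious"),
            (["not_in_popularity_index"], "benign")]
updated = update_weights(weights, feedback)
print({k: round(v, 2) for k, v in updated.items()})
# {'newly_registered_domain': 0.25, 'not_in_popularity_index': 0.1}
```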

See the Expert System for yourself in this demo related to a credential phishing threat:

 

The Expert System is constantly evolving as the team continues to learn more and more about the characteristics of network behaviors that indicate malicious activity. As the solution evolves, routine SOC functions are baked into the product to make the SOC more efficient and allow analysts to focus their attention on the alerts of greatest significance. IronNet recognizes that ‘you can’t make an omelette without breaking a couple of eggs.’ Detecting new malicious behaviors is always going to be a little messy. We use our expertise to build a system that pushes a little harder, turning the unknown into the known so that we can defend companies, sectors, and nations.