IronNet Blog

Building a better detection ecosystem

Written by IronNet Threat Research | Mar 30, 2023 3:59:37 PM

The Threat Research/Threat Hunting/Detection Engineering Ecosystem

In the past couple of months, there have been numerous discussions on social media about how threat hunting methodologies overlap with detection engineering. Kostas (@Kostastsale), a member of The DFIR Report, recently wrote an excellent blog post on detection engineering vs. threat hunting.

He succinctly describes the distinction between the two, noting that threat hunting is “a proactive practice of looking for evidence of adversarial activity that conventional security systems may miss,” while detection engineering is “the process of developing and maintaining detection methods to identify malicious activity after it has become known.” A few months earlier, in September, Florian Roth (@cyb3rops) wrote an article on detection engineering in which he highlighted its role in relation to other roles within the detection and response ecosystem.

Both of these articles made us consider our own hunting and detection processes at IronNet and how we could improve our workflow across threat research, threat hunting, and detection engineering. Expanding on Kostas’s and Roth’s commentary on the detection ecosystem, our goal with this blog is not only to discuss how IronNet implemented a more holistic detection process, but also to go beyond the technical details and talk about the second- and third-order effects of improving our detection cycle in cyber operations.

Developing a Detection Engineering Platform

Threat Research, Detection Engineering, and Threat Hunting form the three pillars of IronNet’s new detection workflow. To bridge them together, IronNet’s Cyber Operations Center (CyOC) developed a detection engineering platform that enables a repeatable process for detecting adversary tactics, techniques, and procedures (TTPs) over netflow data. At a high level, the platform enables us to take intelligence from our Threat Research team, work with our Hunt Operations team to turn TTPs into rules using a Sigma-like query syntax, and evaluate those rules’ outputs for efficacy. Using those outputs, we can engineer new detections that will generate alerts going forward whenever those behaviors are observed across our entire customer base.

In this way, our Detection Engineering Platform enables a full detection cycle: Threat Research provides intelligence associated with a potential threat; hunters use queries to create and optimize a rule that will detect the threat; the new rule alerts on potentially malicious activity, which can be reviewed and hunted on to find additional information; and those findings feed back into our Threat Research and detection rule creation.
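To make the cycle concrete, here is a toy pass through it in Python. Every function name and data shape below is an illustrative assumption rather than the actual API of our platform:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        predicate: Callable[[dict], bool]  # True when an event matches

    # 1. Threat Research surfaces a TTP worth pursuing.
    ttp_keyword = "PowerShell"

    def hunt(telemetry: list[dict]) -> list[dict]:
        # 2. Hunters query telemetry for the behavior and baseline the results.
        return [e for e in telemetry if ttp_keyword in e.get("user_agent", "")]

    def engineer_rule() -> Rule:
        # 3. Hunt findings are distilled into repeatable detection logic.
        return Rule("powershell-outbound",
                    lambda e: ttp_keyword in e.get("user_agent", ""))

    def analyze(rule: Rule, telemetry: list[dict]) -> list[dict]:
        # 4./5. Rule output is triaged; findings feed back into research.
        return [e for e in telemetry if rule.predicate(e)]

    telemetry = [
        {"user_agent": "Mozilla/5.0"},
        {"user_agent": "Mozilla/5.0 (Windows NT 10.0) WindowsPowerShell/5.1"},
    ]
    print(analyze(engineer_rule(), telemetry))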

 

Breaking down the detection lifecycle 

 

1. Threat Research

Threat Research is the collection and assessment of threat information drawn from open and closed intelligence sources, which can include reports on new threat actor TTPs, research on offensive security tools and exploits, intel on new threat groups, or sensitive data from a threat intelligence partner. Referencing these sources, IronNet analysts determine the relevance of the threat intelligence to IronNet customers and collaborate with hunters to create detection rules for the activity.

Working Example: An IronNet Threat Analyst comes across industry reporting on a known threat actor commonly using PowerShell to download tools to a compromised host to facilitate post-compromise actions. Knowing that this behavior can be detected in network traffic (assuming no encryption is being used or in-line decryption is available), the analyst can begin to query IronNet’s network telemetry to determine the prevalence of PowerShell activity throughout our customer base. Based on these results, the analyst can decide whether the behavior is relevant enough to collaborate with a Threat Hunter to dig further and potentially create a detection for it. A sketch of such a prevalence check appears below.
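The rough sketch below assumes HTTP metadata exported to a pandas DataFrame with hypothetical customer_id and user_agent columns; our actual telemetry store and query syntax differ:

    import pandas as pd

    # Hypothetical export of HTTP metadata, for illustration only.
    http = pd.DataFrame({
        "customer_id": ["a", "a", "b", "c"],
        "user_agent": [
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) WindowsPowerShell/5.1",
            "Mozilla/5.0",
            "Mozilla/5.0 (Windows NT 10.0) WindowsPowerShell/5.1",
            "curl/8.0.1",
        ],
    })

    # Invoke-WebRequest in its default configuration identifies itself with a
    # "WindowsPowerShell" token in the User-Agent header.
    ps = http[http["user_agent"].str.contains("PowerShell", case=False, na=False)]

    # Events per customer give a rough sense of how widespread the behavior is.
    print(ps.groupby("customer_id").size())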

Reflection Questions: 

  • How popular is this TTP amongst known threat actors or malware families?
  • How is this TTP used by threat actors? Are there multiple ways that threat actors utilize that TTP?
  • Is the TTP targeting a specific industry or region?

 

2. Threat Hunting

Threat Hunting is the search for suspicious or malicious behavior that may be indicative of an infection. It is an iterative process of continuously refining queries and analysis to identify malicious behaviors. Once it is determined that a TTP would make an acceptable detection rule, the hunter collaborates with the analyst to craft query logic that captures the specific network traffic exhibited by the observed behavior. This includes collecting the network traffic relevant to the use case, triaging results, creating baselines, finding malicious activity and anomalies, and capturing these outputs to feed into the detection engineering process.

Working Example: For the PowerShell use case mentioned above, the hunters could start by querying network telemetry for HTTP user agent strings that contain “PowerShell”, as this is an indicator of PowerShell’s Invoke-WebRequest cmdlet being used in its default configuration. But this won’t filter out legitimate usage of PowerShell, so the hunters use additional intelligence from the Threat Research phase to narrow the results to a manageable size. According to recently published research, Qakbot often uses PowerShell to download additional DLL files to a compromised host, so the hunter adds query logic to include only PowerShell activity sourced from an internal network outbound to the Internet.

The hunters can then analyze the URLs that Qakbot actors are using to distribute those DLLs and identify structural similarities among them. Noticing that Qakbot often uses direct-to-IP communications, the hunter takes the next step of isolating suspicious PowerShell activity related to Qakbot, as sketched below. Once they have a set of results believed to be Qakbot activity, the hunter digs deeper to analyze these events, which often includes full packet capture (PCAP) analysis and malware analysis of downloaded files.
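The narrowing steps above can be sketched as successive filters. This snippet reuses the same hypothetical DataFrame shape as before (the src_ip, host, uri, and user_agent columns are assumptions), not our production hunt tooling:

    import ipaddress

    import pandas as pd

    def is_internal(ip: str) -> bool:
        # Source addresses inside private (RFC 1918) space count as internal.
        return ipaddress.ip_address(ip).is_private

    def is_direct_to_ip(host: str) -> bool:
        # Qakbot downloads often go straight to an IP rather than a domain.
        try:
            ipaddress.ip_address(host)
            return True
        except ValueError:
            return False

    http = pd.DataFrame([
        {"src_ip": "10.1.2.3", "host": "203.0.113.7", "uri": "/t4/update.dll",
         "user_agent": "Mozilla/5.0 (Windows NT 10.0) WindowsPowerShell/5.1"},
        {"src_ip": "10.1.2.4", "host": "updates.example.com", "uri": "/a.js",
         "user_agent": "Mozilla/5.0"},
    ])

    hits = http[
        http["user_agent"].str.contains("PowerShell", na=False)  # broad TTP match
        & http["src_ip"].map(is_internal)                        # internal source
        & http["host"].map(is_direct_to_ip)                      # direct-to-IP
        & http["uri"].str.lower().str.endswith(".dll")           # DLL retrieval
    ]
    print(hits)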

Reflection Questions: 

  • Do I have the right data sources to detect this? If not, can I add this data source to my collections?
  • What do I want to safelist / not consider? Are there any drawbacks to doing that?

 

3. Detection Engineering

Detection Engineering is the process of developing detections that attempt to capture a specific activity. It typically results in one of two outcomes: a reliable detection that can be pushed to production across our entire customer base, or what IronNet calls an exploratory query that can be executed manually, periodically in a roll-up report, or however preferred. Although the PowerShell use case is a fairly simple example, it illustrates the process IronNet uses to continuously iterate on queries and engineer new detections for our customer base.

Working example: Once hunting is completed, the hunter can begin to develop a detection for the threat. In this step, there are a number of questions to answer, such as what level of specificity in the detection logic is acceptable and whether alerts should arrive in real time or in a daily/weekly rollup report. After determining these variables, the hunter develops a detection in a Sigma-like format, continuously iterating on and refining the detection logic to best fit the purpose of that detection. A hypothetical illustration of such a rule follows.
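For illustration only, a Sigma-like rule for this use case might look something like the following, expressed here as a Python dict; the field names, modifiers, and "delivery" key are assumptions rather than our actual rule schema:

    rule = {
        "title": "Outbound PowerShell DLL download to a direct IP",
        "status": "experimental",
        "logsource": {"category": "http", "product": "network"},
        "detection": {
            "selection": {
                "user_agent|contains": "PowerShell",
                "dst_host|re": r"^\d{1,3}(\.\d{1,3}){3}$",  # direct-to-IP host
                "uri|endswith": ".dll",
            },
            "condition": "selection",
        },
        "delivery": "realtime",  # vs. a daily/weekly rollup report
        "level": "high",
    }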

Reflection Questions: 

  • Did the rule capture the activity I intended it to?
  • Can we keep our detections high fidelity by pairing a broader exploratory query with smaller, higher-efficacy queries?

 

4. Analysis

Analysis is the process by which rules are assessed, iterated on, and refined. New intelligence about a threat, discovered via open sources or in a customer environment, can drive changes to detection rules that allow for more accurate identification of malicious activity. The analysis step differs depending on the rule and its detections. If the rule is generating a lot of false positives, analysis includes determining why the rule has a high alert volume and how it can be refined to capture malicious rather than benign instances of the activity. If there is a true positive hit on one of the rules, the hunter investigates the alert and hunts on the activity to determine the source of the alert and its surrounding characteristics. They then work with the customer to mitigate the identified compromise, use the additional intelligence gleaned from the investigation to refine the detection rule, send any newly discovered indicators and TTPs to the Threat Research team for further analysis and intelligence assessment, and potentially create other rules for TTPs used in the same attack chain.
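One simple way to quantify that assessment is to track alert dispositions per rule. The sketch below uses hypothetical alert records and disposition labels:

    from collections import Counter

    alerts = [
        {"rule": "powershell-dll-direct-ip", "disposition": "true_positive"},
        {"rule": "powershell-dll-direct-ip", "disposition": "false_positive"},
        {"rule": "powershell-dll-direct-ip", "disposition": "true_positive"},
    ]

    counts = Counter(a["disposition"] for a in alerts)
    total = sum(counts.values())
    on_target = counts["true_positive"] / total if total else 0.0
    # A low on-target rate suggests the rule logic (e.g., a regex) needs refining.
    print(f"on-target rate: {on_target:.0%}")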

Reflection Questions:

  • False Positives: Did the rule capture the correct traffic? I.e., was the regex correct, or does it need to be redefined?
  • True Positives: What is our “on target” percentage? I.e., when this rule fires, how often is the activity confirmed malicious?

 

5. Feedback

Feedback on how rules are performing informs the entire detection cycle and can lead us to identify gaps and make changes in the process and in specific detections as necessary. Feedback is generated throughout the entire cycle and consistently implemented to ensure any problem areas or roadblocks are identified and addressed.

One thing to consider: as actors change their TTPs over time, do the previous detections need to be broadened, changed, or edited with additional filtering in order to capture new variants? It’s also important to consider the intelligence lifecycle and whether these indicators are making it all the way back to our Threat Intel Platform (TIP). For organizations like ours that provide MSSP-style services, we need to make sure that the intelligence from each detection isn’t lost.

Reflection Questions:

  • Are we missing detections because the threat actors have altered their TTPs and the rule is too specific?

 

Recent Wins

While we’ve been threat hunting and creating automated network-based detections for years, our detection platform is fairly new and has already helped our analysts find malicious activity across multiple customer environments. Here are some recent intrusions that our hunting and detection engineering process has uncovered:

  • In mid-March 2023, IronNet identified a threat actor using PowerShell to download multiple suspicious files, including a reverse tunnel binary for Windows, an RDP proxy binary, and another unknown binary, on the network of a local municipal government in the United States. The destination IP address that these files were downloaded from was also a known Viper C2 server that IronRadar had identified around a month prior to the intrusion, giving our hunters additional context into this activity and enabling them to prioritize it.
  • In mid-February 2023, IronNet detected suspicious PowerShell activity in a partner network, which turned out to be a download of Plink, a command-line connection tool from the PuTTY SSH and Telnet suite. Further enriching this activity, IronDefense also generated Domain Analysis TLS, Encrypted Comms, and TLS Invalid Certificate Chain alerts for the network traffic. After further investigation, hunters found the PowerShell activity was related to intrusion activity in the network of a Polish logistics company that shared similarities with this Rapid7 blog post detailing exploitation of CVE-2022-47966, a remote code execution (RCE) vulnerability affecting a variety of ManageEngine products.
  • Over the past several months, IronNet has also detected numerous malware strains within an IT service provider’s network in a Middle Eastern country, where, based on the rules that are triggered, we regularly see victims downloading additional malware stages and communicating back to threat actor-controlled C2 servers. Specifically, our threat hunting has uncovered Raccoon Stealer, Smoke Loader, Trickbot, PowerShell Empire, SystemBC, Mozi, and other malicious network behaviors, such as phishing pages and direct-to-IP connections downloading suspicious RAR files.

 

Lessons Learned

Before we developed our Detection Engineering Platform, we faced a series of inefficiencies in our hunting and analysis efforts. Every analyst performed their own TTP-based hunts, and the knowledge was rarely shared. Analysts would store queries locally, and their queries often reflected only the specific ways they had personally seen a TTP abused. These hunts differed depending on the person’s background and experience; as a result, this knowledge was not centralized, nor was it repeatable, and so it remained tribal.

Additionally, the lack of a standardized and efficient hunting process led to an incomplete feedback loop in which new intelligence found during hunting was not always fed back into the detections. Thus, we realized our hunt process needed to improve in the way we detect and act on TTPs in customer environments.

In revamping our detection process and implementing effective Detection Engineering at our scale, we learned:

  1. Intel needs to communicate trends in new TTPs to hunters on a regular basis
  2. During rule creation, it’s okay for both broad and specific rules to be created from a single idea. For example, a PowerShell usage TTP could spawn a rule for PowerShell to a direct IP as well as one for all outbound PowerShell (see the sketch after this list) - both are okay!
  3. Every environment is unique, so tailoring a rule too tightly could result in missed detections
  4. Effective detection engineering requires collaboration to be a conversation, not a one-way street
  5. A comprehensive detection process improves threat landscape awareness for multiple analysts, not just a small group
  6. An effective hunting process forces discussion of what “normal” looks like for a network protocol or host process
  7. Every rule may have its own tolerance for false positives depending on what the analyst is trying to catch; it’s okay to have varying levels
  8. Developing a detection engineering platform exposed areas for process improvement where we lacked proper protocols for documentation or communication
  9. Creating a standardized detection process turns tribal knowledge into a repeatable process that everyone can learn from
  10. Capturing these events enables us to detect them at a global scale, where everyone can benefit from the intelligence
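
As a closing illustration of lesson 2, here is a sketch of one TTP idea spawning both a broad exploratory rule and a more specific, higher-fidelity one; the event fields are the same hypothetical shape used in the earlier sketches:

    import ipaddress

    def is_ip(host: str) -> bool:
        try:
            ipaddress.ip_address(host)
            return True
        except ValueError:
            return False

    def broad_powershell_outbound(event: dict) -> bool:
        # Exploratory: any PowerShell user agent headed out of the network.
        return "PowerShell" in event.get("user_agent", "")

    def specific_powershell_to_ip(event: dict) -> bool:
        # Higher fidelity: PowerShell fetching directly from an IP address.
        return broad_powershell_outbound(event) and is_ip(event.get("host", ""))

    event = {"user_agent": "Mozilla/5.0 (Windows NT 10.0) WindowsPowerShell/5.1",
             "host": "203.0.113.7"}
    print(broad_powershell_outbound(event), specific_powershell_to_ip(event))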