The Bicycle of the Forensic Analyst

Florian Roth
9 min read · Sep 10, 2022


I started my journey in a digital forensics lab crammed with hardware and a table with two dozen external hard drives. Each of these hard drives contained one or more disk images of possibly compromised systems.

The CERT, which had called in all kinds of different experts, was able to process only a few images per week, revealing the actual extent of the incident piece by piece and day by day. I had been called in because of my expertise in security monitoring and previous incident response engagements involving all kinds of network worms.

I immediately noticed that the attackers moved much faster than the CERT could trace their tracks. We were fighting a losing battle.

This blog post is about efficiency in digital forensics during incident response.

The Problem

In 2012, we didn’t have much to work with. Forensic tools of that time helped us create timelines. We could narrow the scope to specific time frames and search automatically for filenames or registry keys that the attackers used, but still — processing a single image took several hours and wholly occupied one of the few forensic analysts.

Several systems in that set of images merely appeared in the logs of a compromised system but weren’t compromised themselves. Nonetheless, we had to process them and spend precious time refuting the suspicions.

But the slow process wasn’t the only problem. Every system we processed contained evidence pointing to one to four other systems whose hard drive images we had to request. Fortunately, we already knew some of them, but on average, we asked for two more disk images for every disk image we processed.

The Bicycle of the Mind

A few years earlier, I had watched several short video clips in which Steve Jobs talked about the computer being the bicycle of our minds. In one of the video interviews, he said this:

“I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts. And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.”
Steve Jobs

Processing the large stack of disk images manually felt like being deprived of something essential: the bicycle of forensic analysts.

Detection Engineering in Digital Forensics

Usually, what you get during and as a result of the investigation is a timeline of the events, extracted evidence, and a set of IOCs. The timeline describes the course of action and is used to deduce the purpose of the attack. It is also used to identify patient zero and possible further targets. The IOCs and evidence are usually shared with other stakeholders, analysts, and partners (in 9 of 10 cases in the form of an Excel spreadsheet).

I would define detection engineering like this:

Detection engineering transforms information about threats into detections.

Detections can be indicators, rules, or tools — anything that allows automatic alerting. Although the term is tightly associated with security monitoring, it is by no means limited to it. Applied detection engineering in digital forensics improves the efficiency and thoroughness of investigations.

In digital forensics, the detection engineer usually takes the evidence and extracts or derives IOCs and detection rules. IOCs encompass the usual suspects: C2 IPs/FQDNs, file names, file hashes, URLs, registry keys, user names, service names, task names, mutex values, or named pipes — in short, anything that can indicate a compromise.

By “deriving”, I mean a process that pivots on the extracted IOCs. The following mind map attempts to visualize this pivoting process, with a malware sample as the starting point.

Malware IOC / Rule Extraction Process (Nextron Systems)

The process overlaps in parts with the work of threat intelligence and malware analysts. For a long time, detection engineering has been an incidental activity carried out by security monitoring, threat intelligence, malware, or forensic analysts.

The critical point is that a single piece of evidence provided by the forensic analyst often gets transformed into multiple detections. A single file hash of a sample found on a compromised system can often be transformed into rules to detect the sample’s activity, its traces on disk, and its C2 communication.

And thus, a single IOC seed 🌱 grows into a detection tree 🌳.
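
To make that idea a bit more concrete, here is a tiny Python sketch of such a “detection tree” (my own illustration, not a Nextron format); all names, rule titles, and indicator values in it are hypothetical placeholders:

```python
# Tiny illustration of the "detection tree" idea; all names, rule titles and
# indicator values below are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Detection:
    kind: str    # e.g. "hash IOC", "YARA rule", "Sigma rule", "network IOC"
    value: str   # the indicator value or rule name
    score: int   # certainty/severity, used later for prioritization

@dataclass
class EvidenceSeed:
    description: str
    detections: list[Detection] = field(default_factory=list)

# One sample found on a compromised system ...
seed = EvidenceSeed("dropper found in C:\\Users\\Public\\update.exe")

# ... grows into multiple detections derived from it:
seed.detections += [
    Detection("hash IOC",    "<sha256 of the sample>",           100),  # the sample itself
    Detection("YARA rule",   "MAL_Dropper_Strings_Example",       75),  # its traces on disk
    Detection("Sigma rule",  "SUSP_Dropper_CommandLine_Example",  70),  # its activity
    Detection("network IOC", "update.example-c2[.]com",           90),  # its C2 communication
]

for d in seed.detections:
    print(f"{d.kind:12} {d.value}  (score {d.score})")
```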

But let’s get back to the incident.

Enter – The Scanners

We had all the evidence collected but lacked the tools to apply the IOCs. The customer had no security monitoring and a completely heterogeneous network—a perfect victim. We needed a tool that would help us process images much faster than before and — simultaneously — allow us to perform the same automatic assessment on running systems. We had to get ahead of the stack of cold images in the lab.

At that time, I started working on a simple scanner named THOR. Over the following ten years, we grew this simple scanner into a complex and powerful tool, adding support for YARA very early, in April 2013, and Sigma scanning in June 2018. But this blog post isn’t about the scanners — it’s about any automatic process that applies detection rules in digital forensics.

For anyone interested in the scanners: I published an open-source scanner named LOKI and a Bash-based scanner named Fenrir, and Nextron released a free version of THOR called THOR Lite. But there are more scanners, each with slightly different focus areas. Nex/Botherder published a YARA scanner named Kraken. Hilko Bengen released a scanner called Spyre. Some focus on specific evidence, like Wagga’s Zircolite, WithSecure Labs’ Chainsaw, and Yamato Security’s Hayabusa, which scan only Windows Eventlog (EVTX) files (see the link list below).

The automatic processing of the images accelerated our analysis a lot. From a situation where we processed only three disk images daily, we started scanning every disk image on our desk in a single day. And we could prioritize them for further manual analysis based on the scan reports.

The following graphics visualize the tremendous advantage in efficiency.

Example: disk image processing without detection engineering and automatic processing
Example: disk image processing with detection engineering and automatic processing

The process in which we analyzed and scanned the systems looked very much like this:

Detection Engineering Process in Digital Forensics

It usually took us two to three iterations to discover all possible evidence AND determine the extent of the incident within the victim’s network.

The following highly professional comparison visualizes the different situations before and after automation:

Comparison of forensic disk image processing before and after automation

IOCs And Beyond

As I said, in the early days, we started with simple IOCs like filenames, hashes, and keywords, which allowed us to triage systems quickly, but they can’t be used to identify new threats, methods, or anomalies.

To improve the bicycle for our forensic analysts, we had to find new rule formats that would allow us to transform a detection idea into a rule that can be applied automatically in scanners or monitoring systems and shared with others as easily as a list of IOCs.

As a detection engineer, you need ways to express a detection idea. We already have several rule formats at our disposal.

Available Rule Formats

Let’s say we noticed that the threat actor tends to rename PsExec.exe (a Sysinternals tool to execute commands remotely) to svchost.exe to avoid detection by making admins or analysts believe that the running process is a legitimate system process.

We could write a specific rule to detect the modification used by that particular actor:

the specific rule for renamed PsExec as used by actor X

But we could also write a generic rule that would detect any renamed PsExec. Administrators usually don’t rename tools before using them, but threat actors do so quite often.

the generic rule for renamed PsExec files
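
As a rough illustration of the generic variant, here is a minimal Python sketch (not a THOR/LOKI rule; in practice this would be expressed as a YARA or Sigma rule). The marker strings are an assumption about the version information typically embedded in PsExec binaries:

```python
# Minimal sketch: flag files that look like PsExec (based on embedded
# version-info marker strings) but do not carry an expected PsExec filename.
import os
import sys

# PE version-info strings are stored as UTF-16LE; the exact markers below are
# an assumption about typical PsExec builds.
MARKERS = [
    "Execute processes remotely".encode("utf-16-le"),
    "Sysinternals PsExec".encode("utf-16-le"),
]
EXPECTED_NAMES = {"psexec.exe", "psexec64.exe"}

def is_renamed_psexec(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    # require all markers to reduce noise
    looks_like_psexec = all(marker in data for marker in MARKERS)
    has_expected_name = os.path.basename(path).lower() in EXPECTED_NAMES
    return looks_like_psexec and not has_expected_name

if __name__ == "__main__":
    for candidate in sys.argv[1:]:
        if is_renamed_psexec(candidate):
            print(f"[ANOMALY] possible renamed PsExec: {candidate}")
```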

The idea is to create an extensive set of generic rules to transform a scanner, which was originally designed to verify the presence of IOCs, into a scanner that detects hundreds or thousands of different anomalies.

Further examples of anomalies:

  • UPX packed binary with Microsoft copyright
  • Executable in Fonts folder
  • Executable started from C:\Users\Public
  • Local admin account created on a Sunday
  • System tool like certutil.exe in an uncommon location (like C:\Windows\Temp)
  • Debugger registered for sethc.exe
  • and 3000+ more

We can create a tool that detects both the specific and the generic by using a score that indicates the certainty/severity of a finding.
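
A minimal sketch of such a scoring scheme, with made-up rule names, scores, and thresholds (this is not THOR’s actual scoring model):

```python
# Sketch of score-based reporting: each matching rule contributes a score,
# and the accumulated score of an element decides the report level.
from typing import NamedTuple

class Match(NamedTuple):
    rule: str
    score: int

def classify(matches: list[Match]) -> str:
    total = sum(m.score for m in matches)
    if total >= 100:
        return "ALERT"
    if total >= 60:
        return "WARNING"
    if total >= 40:
        return "NOTICE"
    return "INFO"

# A single specific IOC match is already alert-worthy ...
print(classify([Match("HASH_IOC_ActorX_Sample", 100)]))      # ALERT

# ... while generic anomaly matches accumulate into a warning:
print(classify([Match("SUSP_Renamed_PsExec", 50),
                Match("SUSP_EXE_In_Users_Public", 30)]))     # WARNING
```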

Using such a tool, we can scan images and end systems on day one of the investigation to give the forensic analysts a first impression and highlight interesting elements.

Automated triage with scanner on day 1

Where do we find these IOCs and rules?

  • IOC sharing platforms & communities (e.g., MISP)
  • Threat intelligence service providers
  • Public reports and blog posts (see the sources of my custom search engine)

The Best of Both Worlds

To explain why it is a good idea to combine manual analysis performed by a human forensic analyst with the automatic processing of a scanner, we have to look at their weaknesses first. The good thing is that one of them is strong where the other one is weak.

The book “The Design of Everyday Things” by Don Norman describes human errors and distinguishes between slips and mistakes. A slip occurs when a person intends to do one action and ends up doing something else. A mistake occurs when the wrong goal is established or the wrong plan is formed.

I’ve put together an overview of these errors/limitations:

Over the last ten years, I’ve come across all kinds of scan engine limitations and weaknesses:

Reviewing these weaknesses closely, we notice that the scan engines can compensate for the errors made by human analysts and vice versa. This means that a combination of both methods has strong synergistic effects.

Let’s look at some of these compensating effects.

  • While a scanner may miss a new web shell dropped into a staging directory used by the attackers, the human analyst can identify it by correlation: “A folder contains tools used by the actor. A single ASPX file found in the same folder is suspicious.” So, the human analyst compensates for the missing signature.
  • While the human analyst doesn’t have the time to check all 3000+ PHP files in a web application folder, the scanner detects the appended tiny web shell to one of the files based on a generic detection rule.
  • While the scanner excludes executables bigger than 200 MB from scanning, the analyst notices the big file and analyzes it regardless of its size, because the “created” timestamp falls within a relevant time frame.
  • While the human analyst overlooks a suspicious filename in the Shim Cache while scrolling over the entries, the scanner reports the file named svchosts.exe because its Levenshtein distance from the legitimate system file name svchost.exe is exactly 1 (see the sketch below).

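The last point can be sketched in a few lines of Python; the list of legitimate system file names and the distance threshold are illustrative only:

```python
# Sketch of the filename-spoofing check: report names that are one edit away
# from a legitimate Windows system file name (e.g. "svchosts.exe").
LEGIT_SYSTEM_FILES = {"svchost.exe", "lsass.exe", "csrss.exe", "services.exe", "explorer.exe"}

def levenshtein(a: str, b: str) -> int:
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

def spoofed_system_name(filename: str) -> str | None:
    name = filename.lower()
    if name in LEGIT_SYSTEM_FILES:
        return None
    for legit in LEGIT_SYSTEM_FILES:
        if levenshtein(name, legit) == 1:
            return legit
    return None

print(spoofed_system_name("svchosts.exe"))   # -> "svchost.exe"
print(spoofed_system_name("svchost.exe"))    # -> None
```
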
Conclusion

  • Digital forensic analysis in incident response can be accelerated and intensified by using automatic scanners equipped with known and newly derived indicators and rules.
  • While IOCs are suitable for detecting specific threats, we can use generic rules and patterns to describe anomalies, which may be new and unknown threats.
  • We can maximize the efficiency of the investigation by turning obtained evidence into detection rules that can be reused and shared.
  • The strengths of the manual analysis offset the weaknesses inherent in the automated process and vice versa.
  • Thus, I consider forensic scanners the bicycles of the digital forensic analyst, multiplying their work efficiency and increasing the thoroughness of each forensic investigation.

Links

Steve Jobs on Efficiency
https://www.bikeboom.info/efficiency/

LOKI Scanner (Open Source, Python)
https://github.com/Neo23x0/Loki

Fenrir (Open Source, Bash)
https://github.com/Neo23x0/Fenrir

THOR Lite (Closed Source, Free)
https://www.nextron-systems.com/thor-lite/

Kraken YARA Scanner
https://github.com/botherder/kraken

Spyre IOC and YARA Scanner
https://github.com/spyre-project/spyre

Zircolite (Eventlog scanner)
https://github.com/wagga40/Zircolite

Chainsaw (Eventlog scanner)
https://github.com/WithSecureLabs/chainsaw

Hayabusa (Eventlog scanner)
https://github.com/Yamato-Security/hayabusa
