Using Personal Activity Reviews to Uncover Adversary Activity
The recent revelations regarding the SolarWinds compromise and the problem of detecting adversary activity that aligns with legitimate user activity reminded me of a solution we developed in a small team over a decade ago. In this blog post I’d like to describe the concept of what we called “personal activity reviews” in security monitoring, along with its benefits and limitations.
A typical security monitoring environment consists of users and services that generate events with their activity, systems that collect and visualize these events, and a group of analysts that evaluate them with the help of aggregations, detection rules and visualizations.
This works well in cases in which malicious activity follows certain definable patterns. A series of failed logins, a certain service start or some unauthorized access can be considered malicious or suspicious activity and composed into a detection rule or dashboard panel for the analysts’ review. In the best case, the analyst is able to evaluate the criticality of a highlighted event completely on his own.
He might say, e.g., “this series of failed logons is abnormal activity”, supplemented by statements like “we haven’t seen anything similar before” or “such a high number of failed logons in a short time frame can only be caused by some broken program or a brute-force attack”.
As I stated before, this approach works well in a world in which adversaries act abnormally, e.g. discover a network, exploit a vulnerability, guess a password, escalate their privileges or try to access data to which they don’t have access. But what if an adversary uses legitimate accounts and legitimate remote access solutions, and works during the typical working hours of the victim’s time zone? What if an actor behaves exactly like a legitimate user?
The popular opinion is that in this case it’s impossible to distinguish between legitimate and unauthorized activity. I’d reply: “yes — and no”.
It is correct that an analyst is unable to distinguish between legitimate and unauthorized activity in cases in which a malicious actor uses valid credentials and aligns his behavior with that of a typical user. The key to the solution lies in the question “who is unable to distinguish between legitimate and unauthorized activity?”. There is someone who can identify the activity of an adversary that blends in perfectly: the user himself.
The idea is to ask the user whether a certain activity of one of his accounts is legitimate. He is the only one who’s able to answer that question. But we don’t want to bother him with these questions all the time or in a form that is disturbing. The idea is to distill the user’s daily activity into a report and present this composed form to him at the end of the work day for review.
We quite successfully implemented these so-called “Personal Activity Reviews” (PAR) in a rather small admin, security operations and monitoring team between 2007 and 2009. Each of the almost 20 team members received a report of his/her daily remote logins, including timestamp, source and target systems as well as protocol (only SSH and RDP at that time).
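As a rough illustration, here is a minimal Python sketch of how such a per-user report could be assembled from already-collected login events. The event structure and field names (`user`, `ts`, `src`, `dst`, `proto`) as well as the sample values are assumptions for this sketch, not the format we actually used back then:

```python
from collections import defaultdict

# Hypothetical normalized login events; all field names are assumptions.
events = [
    {"user": "f.roth", "ts": "2021-01-12 08:03:11", "src": "wks01",
     "dst": "backend1", "proto": "SSH"},
    {"user": "f.roth", "ts": "2021-01-12 13:22:40", "src": "wks01",
     "dst": "collector2", "proto": "RDP"},
    {"user": "j.doe", "ts": "2021-01-12 09:15:02", "src": "wks02",
     "dst": "backend1", "proto": "SSH"},
]

def build_reports(events, day):
    """Group one day's remote logins by user, sorted chronologically."""
    reports = defaultdict(list)
    for e in events:
        if e["ts"].startswith(day):
            reports[e["user"]].append(e)
    for entries in reports.values():
        entries.sort(key=lambda e: e["ts"])
    return dict(reports)

reports = build_reports(events, "2021-01-12")
```

Each user then simply gets his own slice of the day's logins, which is all the report needs to contain.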
We first worried that users would be annoyed by the emails, but, as a matter of fact, most team members — not all of them — liked these reports and took their reviews seriously. In the first two weeks we chased down saved credentials on Windows RDP boxes that caused logins without actual user activity, scripts that used “scp” for file transfers and similar activity, but after this baselining phase we really felt great.
We knew that even if not every user reviewed his reports regularly, we had at least created a chance to detect malicious activity performed with legitimate credentials, and we felt confident that we could detect such activity, especially when an actor worked in our network for a longer time and with different accounts.
Remembering this great method, I’ve decided to push our small internal security monitoring team of two analysts to develop such daily “Personal Activity Review” reports.
Since we live in the year 2021, we don’t have to write a log parser in Perl, generate HTML tables and use an SMTP module from CPAN to achieve our goal. Today’s log analysis platforms support a variety of visualization options that can be used to generate comprehensive HTML reports for each user.
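For completeness, the rendering step itself is trivial even without a platform. The following sketch turns one user's login entries into a minimal HTML table; the layout and field names are illustrative assumptions, not a template from any particular product:

```python
import html

def render_report(user, entries):
    """Render one user's daily logins as a minimal HTML table (sketch)."""
    rows = "".join(
        "<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            html.escape(e["ts"]), html.escape(e["src"]),
            html.escape(e["dst"]), html.escape(e["proto"]))
        for e in entries)
    return ("<h2>Activity review for {}</h2>"
            "<table><tr><th>Time</th><th>Source</th>"
            "<th>Target</th><th>Protocol</th></tr>{}</table>"
            .format(html.escape(user), rows))

report_html = render_report("f.roth", [
    {"ts": "2021-01-12 08:03:11", "src": "wks01",
     "dst": "backend1", "proto": "SSH"},
])
```

The resulting HTML fragment can be dropped into an email body as-is.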
I’ve created a simple sketch of how I would visualize the activity so that users can easily identify activity that has not been their own. The most basic form of a chart included in such a report would look like this:
I’d plot the activity over time, so that users can rethink their work day and remember their activity over the day. They’d be able to say to themselves: “oh, yes, I started off with a login to backend1 in the morning. That was around the time when I fixed the bug in the config. After lunch, when Jay asked me to do that, I restarted the web service on backend1 and then restarted collector2. And then, before leaving the office, I copied files to the collector via SCP. That’s okay. All good.”
I’d always include a table with details such as the user account names used, exact timestamps, source IPs/hostnames and protocols.
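Even a plain text timeline can serve as a first sketch of such a chart. The hour buckets, host names and sample logins below are made up for illustration:

```python
def ascii_timeline(entries, start_hour=6, end_hour=20):
    """Mark each target system's login hours on a simple text timeline."""
    by_target = {}
    for e in entries:
        hour = int(e["ts"][11:13])  # hour from "YYYY-MM-DD HH:MM:SS"
        by_target.setdefault(e["dst"], set()).add(hour)
    lines = []
    for target, hours in sorted(by_target.items()):
        marks = "".join(
            "x" if h in hours else "." for h in range(start_hour, end_hour))
        lines.append("{:>12} |{}|".format(target, marks))
    return "\n".join(lines)

chart = ascii_timeline([
    {"ts": "2021-01-12 08:03:11", "dst": "backend1"},
    {"ts": "2021-01-12 13:22:40", "dst": "backend1"},
    {"ts": "2021-01-12 16:40:05", "dst": "collector2"},
])
```

One row per target system, one mark per hour with a login, makes an out-of-place mark easy to spot at a glance.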
You could also include other data in that chart / table depending on the available data, e.g.:
- local logins to the user’s workstation (first login, last logoff)
- session duration
- login method (SSH: password, private key; RDP: NLA, without NLA, client certificate)
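Session duration, for instance, can be derived by pairing login and logoff events per user and host. The sketch below assumes events carry a `type` field distinguishing login from logoff, which is an assumption about the log source:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def session_durations(events):
    """Pair login/logoff events per (user, host); durations in minutes."""
    open_sessions, durations = {}, []
    for e in sorted(events, key=lambda e: e["ts"]):
        key = (e["user"], e["dst"])
        if e["type"] == "login":
            open_sessions[key] = e["ts"]
        elif e["type"] == "logoff" and key in open_sessions:
            start = datetime.strptime(open_sessions.pop(key), FMT)
            end = datetime.strptime(e["ts"], FMT)
            durations.append((e["user"], e["dst"],
                              int((end - start).total_seconds() // 60)))
    return durations

durations = session_durations([
    {"user": "f.roth", "dst": "backend1", "type": "login",
     "ts": "2021-01-12 08:03:11"},
    {"user": "f.roth", "dst": "backend1", "type": "logoff",
     "ts": "2021-01-12 09:33:11"},
])
```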
Depending on your user’s field of activity, you could include other graphs in the daily report.
For example, in a development-focused team, you could include a chart of all pushed commits to the internal git. We at Nextron Systems always asked ourselves: “we sign every commit with our private keys, but what would happen if a malicious actor compromised one of our systems, gained access to one of our keys and pushed a signed commit with malicious code to one of our repositories?” It’s the nightmare of every software development company, since you’d think that this activity is impossible to distinguish from legitimate activity.
But plotting the commits to different target repositories in a chart and including it in the user’s daily report could allow a developer to identify such activity. He would be able to say: “Ho! What’s that? I haven’t been working on that repository for weeks and definitely haven’t pushed a commit to it this morning.”
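The aggregation behind such a chart is simple. The sketch assumes pushed commits have already been exported into records with `author`, `repo` and `date` fields; all names and sample repositories are hypothetical:

```python
from collections import Counter

def commits_per_repo(commits, author, day):
    """Count one author's pushed commits per repository for a given day."""
    return dict(Counter(
        c["repo"] for c in commits
        if c["author"] == author and c["date"] == day))

counts = commits_per_repo([
    {"author": "f.roth", "repo": "scanner", "date": "2021-01-12"},
    {"author": "f.roth", "repo": "scanner", "date": "2021-01-12"},
    {"author": "f.roth", "repo": "old-tools", "date": "2021-01-12"},
    {"author": "j.doe", "repo": "scanner", "date": "2021-01-12"},
], "f.roth", "2021-01-12")
```

A bar per repository in the daily report is enough for the developer to notice a push to a repository he hasn't touched in weeks.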
Other scenarios would include reports like:
- Remote access activity of remote workers (VPN, SSL/VPN, Virtual Client; field staff, sales representatives, teleworkers)
- Rule changes in firewall systems (firewall admins)
Frequency and Timing
From my perspective, a daily personal activity report is perfect.
Most people (like me) have problems remembering what they did yesterday, let alone several days ago. The memories are still fresh when users get their activity report for that day in the late afternoon. Since employees start and end their work at different times, getting the timing right can be challenging. In an optimal scenario, a user would get his report 30 minutes before he usually logs off for the day.
I also thought about a desktop-integrated notification service that wouldn’t report activity in a summary at the end of the day but immediately, in the form of a desktop notification. This would obviously require some kind of agent that polls a notification server, but the resulting solution could be worth the effort.
The user would get an immediate heads-up. The notification could include a way to react to that activity and trigger the attention of analysts in the security monitoring team, e.g. “user f.roth claims not to be responsible for logon from … to …”, and activate a certain response playbook.
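The payload such a “not me” reaction could send back to the monitoring team might look like the following sketch; the field names and the playbook hook are purely hypothetical:

```python
import json

def disputed_event_alert(user, event):
    """Build an alert payload for an event the user flagged as not theirs."""
    return json.dumps({
        "type": "disputed_activity",
        "user": user,
        "event": event,
        # Hypothetical hook for a downstream response playbook
        "action": "trigger_response_playbook",
    })

payload = disputed_event_alert("f.roth", {
    "ts": "2021-01-12 08:03:11", "src": "wks01",
    "dst": "backend1", "proto": "SSH"})
```

A dispute raised by the user himself is about the strongest signal a monitoring team can get, so routing it straight into an alerting pipeline makes sense.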
Mandatory Activity Review
I am no fan of what we Germans call “Gängelung”: the constant tendency to push users toward an expected behavior while treating them like children. Users typically react negatively to mandatory tasks that cannot be controlled in detail. Forcing a user to click a certain link or button to confirm the review of his or her activity could lead to the very opposite behavior: frustration and defiance.
Remember, even if only some of your users decide to constantly review their activity, your chances of detecting unauthorized activity within large amounts of allegedly legitimate events are much higher than with no user reviews at all.
It’s your turn
This time I cannot provide you with a tool that works for everyone, because every environment and setup is different. Nevertheless, I hope that I’ve been able to inspire you and guide you to your own solution.
Please add comments and ideas to the thread on Twitter.