This chapter is about the problem of collecting and analyzing data when dealing with insider threat. Insider threat involves attacks coming from a member of an organization. When planning and executing attacks, insiders can take advantage of physical location, trust, and better knowledge of the organization. Where an outsider will blindly search within a network to find valuable targets, the insider will know (and possibly have created) the highest-value information. Where an outsider relies on rainbow tables and exploits, the insider can charm other users out of passwords or use common admin tools she needs as part of her job. Where the outsider’s behavior is obviously aberrant, the insider can hide it, or, if caught, explain it away.
For a network security analyst, insider threat work should focus on collecting and synthesizing data, rather than detection. Insider threat investigations begin and end with people—cues from inside the organization that someone is at risk, and interviews with the insiders at the end. The network security team should expect to support other investigators by providing and analyzing data that forms a part of a larger picture.
Insider threat detection is hard; it involves a low-frequency, high-threat event that carries a significant and damaging risk of blowback. Many of the biggest cues about insider threat involve indicia that someone is isolated or on his way out of the job—problems at work, antagonistic relationships with coworkers, and so on. However, at some point, everybody is going to have a bad day; distinguishing between daily grumbles and actual threats is therefore critical to an effective program. Insider threat is best handled preventatively, by the organization addressing and eliminating the conditions that give rise to insider threats in the first place. If the insider threat program consists exclusively of generating and following up on alerts, then the ops floor will be overburdened and users will chafe under the constraints.
The analysis team should consequently be prepared to support investigations raised in response to common insider threat risks: specifically, the integration and synthesis of data from diverse (often legacy or embedded) sources such as physical access logs, video monitors, and traces of network traffic and assets.
When insider threat analysis does involve detection, it will rarely find definitive evidence; rather, it will uncover hints that something is amiss, hints that must be combined with other evidence. Insider threat detection involves managing an enormous number of false positives, and making a judgment call about when to move from simple monitoring to more focused analysis to action.
This chapter is organized around two core concepts: differentiating insider attacks from external attacks, and a discussion of the types of attacks insiders conduct and how to observe them. The remainder of this chapter covers each topic in depth, followed by pointers to supporting material. Several notable examples of insider threat cases are covered in sidebars; pointers are provided in each case to material describing what happened and when.
Before diving into insider threat behavior in depth, let’s emphasize that “insider threat” does not necessarily mean malice, and insider threat detection is not simply a matter of finding the villain. There are a good number of insider cases that involve sysadmins “adopting” systems within a network and managing them long after they’ve left, inadvertently adding security holes and backdoors in the process. Insider threat is about risk, and while malice is part of that risk, so are fear, panic, and stupidity.
The simplest definition of an insider threat is that it’s a threat to an organization posed by a member of that organization. Being a member of the organization gives the insider significant advantages: better knowledge of the organization, trust to exploit, physical presence, and the like.
Insiders differ from outsiders in that they have knowledge of their environment that the outsiders lack. Leveraging this knowledge means that defensive detection techniques that rely on the attacker making mistakes are less likely to apply to an insider. Consider, for example, how an insider and an outsider will approach the problem of finding and copying valuable information from a network.
We’ll assume the outsider is smart: she uses a spear-phishing attack to drop an exploit kit into the network. With the exploit kit, the attacker probes inside the network and identifies a fileshare—she copies the contents of the entire fileshare, compresses the results, encrypts them, and then slowly transfers the results to an external server.
Now let’s consider an insider. He opens up the fileshare on his desktop, copies the three most valuable files to a USB stick, and walks out the door.
Table 16-1 shows the differences in various areas. The outsider’s behavior is moderated through the network, while the insider can rely on hard-to-detect behaviors like direct physical access. The outsider is ignorant of the network’s structure, while the insider knows where things are. Finally, the outsider must forge or steal credentials, while the insider already has them. Each of these impacts the defender’s ability to find hostile behaviors.
Table 16-1. Differences between insider and outsider attacks

| Behavior | Insider | Outsider |
|---|---|---|
| Access | Can exploit physical access and resources | Network moderated |
| Resources/targets | Aware of targets and value | Must probe to identify targets of value |
| Credentials | May already have credentials, can acquire out of band | Must acquire credentials, using password cracking, exploits, etc. |
| Tools | More likely to rely on existing sysadmin tools and privileges | More likely to rely on malware |
| Monitoring | More likely to be aware of monitoring, will intentionally evade | Evasion will not be tailored to specific network, relies on delay and encryption |
| Attacks | Data theft, specific sabotage | May be completely unaware of network’s value |
For our purposes, the largest difference between insider and outsider attacks is that the outsider is moderated via the network. Control, exploit, transfer, communication, credential theft—everything must be done through the network. That communication may be delayed, it may be encrypted, or it may be hidden, but it must be done via some network channel. Insiders can exploit physical access and out-of-band communications. This means a much heavier reliance on host- and service-based monitoring, as well as potentially accessing physical logs and integrating them with network data.
The second most significant difference is that insiders will often exploit their knowledge of the system to ensure that they aren’t detected, and to tailor damage specifically to their environment. Attacker behaviors that we generally expect to see from outsiders, particularly reconnaissance and fumbling (see Chapter 13), will be much rarer in the case of insider attacks.5 Insiders stealing intellectual property or other assets are also likely to be less obvious than an outsider in the data they take.6
Third, insiders can exploit an organization’s trust in ways that outsiders just can’t. Insiders can rely on their own credentials, using administrative tools or social connections to gain the access they need. Insiders are also more likely to use their knowledge to tailor what they steal, whereas outsiders will try to steal whatever’s available.
Insider threat investigations can backfire when they push too hard. Properly managing an insider threat program means recognizing not only that yes, a trusted employee can go bad, but also that trust is bidirectional. Insider threat recognition requires that users trust their security personnel; if the relationship between security and employees goes toxic—if the security team assumes all users are guilty—then insider threat programs risk becoming self-fulfilling prophecies.
Insider threat investigations are crises; investigators should expect to be in constant communication with the C-suite, with regular briefs, updates, and status information moving up the chain of command. During the crisis, the team should update this information daily. Once the crisis is resolved, the investigation should be closed out and archived. I feel it’s a good idea to rotate analysts out of insider threat investigations, rather than assigning them several in a row, because each investigation degrades the participants’ trust in everything.
On that note, I’ll reiterate: the best way to handle insider threats is to keep them from happening in the first place. If you are handling a disproportionate number of insider threat cases, that is a sign of deeper organizational problems. Big brother is not the solution here.
When an insider is apprehended, it’s always a gut-punch for the organization. Insiders exploit institutional trust, and once you find an insider, you’re shaking that trust. The situation is exacerbated by false positives, where you risk damaging institutional trust, losing a valuable employee, and, if a real insider is present, alerting him that you’re looking.
I will now discuss the types of attacks an insider may uniquely conduct. These attacks rely on the insider’s particular capabilities within the organization: knowledge of the internal structure and trust. In the following subsections, I will discuss several modes of attack, as well as observable data for identifying them.
By far, the most common form of insider attack involves the exfiltration and theft of data for future use.
Monitoring file access requires that current versions of the files be in a monitorable location, such as a common SharePoint site, Google Drive, or the like. A well-defined check-in/check-out process for shared documents can ensure that the documents are in a location where they can be monitored for unusual access.
Observables for data theft include excessive file access or copying, indicated either by an increase in data volume or by users accessing files they have never accessed before (see Chapter 14 for information on volume thresholds and locality violations). If a user starts to fumble on the filesystem, that is also a potential indicator (see Chapter 13 for more information). Also pay attention to temporal and physical indicators, such as off-hours file access or the use of removable media like USB drives.
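As a concrete illustration, here is a minimal sketch of the volume-threshold and locality checks just described, assuming a flat CSV access log with `user`, `path`, and `bytes` columns. The log layout, the threshold value, and the function names are all assumptions for illustration, not a standard:

```python
import csv
from collections import defaultdict

VOLUME_THRESHOLD = 500 * 1024 * 1024  # hypothetical: 500 MB per user per day

def scan_file_access(log_path, history):
    """history: a defaultdict(set) mapping each user to paths seen before."""
    volume = defaultdict(int)
    alerts = []
    with open(log_path) as f:
        for row in csv.DictReader(f):  # expected columns: user, path, bytes
            user, path = row["user"], row["path"]
            volume[user] += int(row["bytes"])
            # locality check: a file this user has never touched before
            if path not in history[user]:
                alerts.append((user, "new-file access", path))
            history[user].add(path)
    # volume check: total bytes copied by each user in this log
    for user, total in volume.items():
        if total > VOLUME_THRESHOLD:
            alerts.append((user, "volume threshold exceeded", total))
    return alerts
```

In practice the history set would be seeded from weeks of prior logs, so that the locality check flags departures from an established baseline rather than first-day noise.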
Credential theft occurs when the insider needs privileges that she doesn’t have for the current attack. This kind of behavior is a precursor to other types of attacks, most notably sabotage and data exfiltration. Insider credential theft differs from external credential theft because it’s more likely to be a form of social engineering (such as the ever-popular “Hey Bob, I need your password to fix your computer!”).
Since the act of credential theft will likely be conducted out of band, a defender is more likely to see indications that the credentials are being used anomalously after the fact.
Observables of credential theft include logins from unusual hosts (the user has never touched the host before, or the host is outside of the network or new) and logins from unusual physical locations. Multiple logins from diverse locations are suspicious, and may indicate two users working with the same account. Fumbling (see Chapter 13) is a good indicator that the user is unfamiliar with the host, and may be an indicator that she is looking for specific files or exploring the host.
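A minimal sketch of two of these observables over an authentication log follows: logins from hosts a user has never used, and overlapping sessions from different sources. The record layout (user, source host, timestamp) and the concurrency window are assumptions:

```python
from collections import defaultdict
from datetime import timedelta

OVERLAP_WINDOW = timedelta(minutes=30)  # hypothetical concurrency window

def audit_logins(records, known_hosts):
    """records: iterable of (user, src_host, datetime) tuples.
    known_hosts: dict mapping each user to the set of hosts she has used."""
    alerts = []
    recent = defaultdict(list)  # user -> [(time, host), ...]
    for user, host, when in sorted(records, key=lambda r: r[2]):
        # unfamiliar host: the user has no prior history with this source
        if host not in known_hosts.get(user, set()):
            alerts.append((user, "login from unfamiliar host", host))
        # concurrent sessions: same account, different hosts, close in time
        for prior_time, prior_host in recent[user]:
            if when - prior_time < OVERLAP_WINDOW and host != prior_host:
                alerts.append((user, "concurrent logins", (prior_host, host)))
        recent[user].append((when, host))
    return alerts
```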
Sabotage scenarios involve the insider damaging company assets, such as by installing malware on the network. The Duronio case is an example of a sabotage attack. Duronio not only took advantage of his administrative knowledge to plant his attack within the network and damage UBS activity, but also engineered it specifically to hit at UBS’s core functions.
Observables for sabotage include identifying changes to software or the subversion of systems—change control of critical applications or administrative software is helpful here (see Chapter 19 for more discussion). Understanding what core functions exist within your system is critical to managing sabotage; inventory and mapping (see Chapter 18) will help you to understand what systems require more monitoring and represent the highest risk.
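A minimal sketch of the change-control idea, in the spirit of tripwire-style integrity checking: hash a list of critical files and compare the digests against a stored baseline. The baseline format and the watched-file list are assumptions:

```python
import hashlib
import json
import pathlib

def hash_file(path):
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def check_baseline(baseline_path, watched_files):
    """baseline: JSON dict of path -> digest, built during a known-good state.
    Returns the watched files whose current digest no longer matches."""
    baseline = json.loads(pathlib.Path(baseline_path).read_text())
    return [p for p in watched_files if baseline.get(p) != hash_file(p)]
```

The useful output here isn't the alert itself, but the delta: which critical application changed, and whether that change corresponds to an approved change-control ticket.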
Insiders are usually aware that they’re being monitored. What they usually aren’t aware of is the extent of monitoring. Anecdotally, insiders will be very cautious where they know they’re being watched, and careless when they assume they aren’t.
This means, for the defender, that the more diverse the data collected is, the better. Strategies include both collecting data from diverse sources, and collecting data redundantly—the same phenomenon observed at different locations. The problem, of course, with collecting all this data is that you then have an enormous pile of data to sift through. Triggered data collection—that is, accessing specific sources as needed rather than continuously feeding them to the SIEM—is important here.
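As one possible shape for triggered collection, here is a sketch that stages rolling packet captures into a case directory only when an alert supplies a time window. The directory layout and the filename convention (a trailing Unix timestamp) are hypothetical:

```python
import pathlib
import shutil

PCAP_BUFFER = pathlib.Path("/data/rolling_pcap")  # hypothetical rolling buffer
CASE_DIR = pathlib.Path("/cases")

def stage_pcaps(case_id, start_ts, end_ts):
    """Copy only the capture files whose start time falls in the alert window."""
    dest = CASE_DIR / case_id
    dest.mkdir(parents=True, exist_ok=True)
    for pcap in PCAP_BUFFER.glob("*.pcap"):
        # assumption: filenames end in a Unix timestamp, e.g. eth0-1511900000.pcap
        ts = int(pcap.stem.rsplit("-", 1)[-1])
        if start_ts <= ts <= end_ts:
            shutil.copy(pcap, dest)
```

The point of the design is that the rolling buffer is cheap to keep and never touches the SIEM; only the slices an investigation actually needs are promoted into case storage.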
Because analysts will often be using older, lower-priority, more obscure, and in many cases proprietary embedded data, there’s a strong need for prior preparation and inventory. Insider threat investigations can stretch back through years’ worth of data, and the team is well served by knowing ahead of time how hard it will be to acquire that data. In particular, data acquisition at these scales is often an ongoing process involving staging up data from multiple archives.
Given the high false positive rate and enormous amount of data to process when dealing with insider threat, insider threat monitoring lends itself well to sector-based workflow (see Chapter 20 for more information on this). In this case, the sectors are groups of users based on the risk they represent.
In this approach, the ops team breaks users into different sectors based on risk. A simple staging model can break activity down by risk combined with trust, as follows:
**Low risk, high trust (LR/HT):** This should be the organization’s default category, and represent the majority of users. These users are subject to default monitoring, which is to say nothing tailored to a specific user, not associated with a specific identity, and focused primarily on external threats.

**High risk, high trust (HR/HT):** This category includes administrators, security analysts, and other personnel who are trusted, but who can cause exceptional damage. It may also include users who are outside world–facing, depending on your organization. These users will have more of their activities audited—for example, sysadmins may have all of their administrative tasks logged and permitted only from specified accounts. High-risk trusted users should be aware of the extent to which they are being monitored; it is part of the job.

**Low risk, low trust (LR/LT):** This category includes new hires and employees who have recently resigned but did not have significant responsibilities. These users may have some additional monitoring in place, or they may be subject to additional controls.

**High risk, low trust (HR/LT):** These are users who are subject to extensive and potentially tailored monitoring, depending on the threat and the circumstances.
In this breakdown, analysts would spend the majority of their time checking the HR/HT and HR/LT groups. HR/HT users may be regularly audited, while HR/LT users have additional monitoring that they are not aware of. The expectation is that HR/HT should be a fairly static group without much turnover, and the HR/LT group should be small, ideally empty.
Different events (organizational or technical) may cause different transitions; a minimal sketch of this sector model appears after the list below. Examples of events that might move a user into a lower trust sector include:
- Disciplinary problems: the user has been written up for disciplinary violations, has threatened other employees, and so on.
- Financial distress: declarations of bankruptcy, gambling addiction, and other situations where the insider needs cash.
- Resignation: if the employee quits, elevated monitoring is likely to be part of the exit plan.
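Here is a minimal sketch of the sector model as code. The sector labels follow the breakdown above, while the event names and the exact transition mapping are illustrative assumptions; in a real program, the triggers come from HR processes, not the network:

```python
from enum import Enum

class Sector(Enum):
    LR_HT = "low risk, high trust"   # default population
    HR_HT = "high risk, high trust"  # admins, analysts, and the like
    LR_LT = "low risk, low trust"    # new hires, recent resignations
    HR_LT = "high risk, low trust"   # extensive, tailored monitoring

# Illustrative event names corresponding to the transitions listed above.
LOWER_TRUST_EVENTS = {"disciplinary_action", "financial_distress", "resignation"}

def apply_event(current, event):
    """Move a user into the low-trust column when a triggering event arrives;
    the user's risk level (high or low) is preserved."""
    if event in LOWER_TRUST_EVENTS:
        return Sector.HR_LT if current is Sector.HR_HT else Sector.LR_LT
    return current
```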
The risk factors listed here refer to a user’s position within an organization, and are likely to remain relatively static over time. These cues are also of use for identifying targets for APT and spear phishing attacks:
- Seniority: senior members of an organization (managers, CEOs) have elevated access and authority.
- Assistants: senior executives live by their assistants, and the easiest way to subvert, access, or damage an executive is often to work through the assistant.
- Visibility: users who are publicly noticeable represent an elevated risk.
- Bus factor: an indication of how critical a user is to your organization outside of the org chart. The term “bus factor” comes from software engineering circles and refers to the damage that would happen to projects if that particular user were run over by a bus.
When investigating an insider, it is useful to be able to reconstruct where within a facility a particular event occurred. To this end, the analyst can look at physical data sources such as mobile device records and physical access control logs, and, if necessary, use network-based techniques.
Mobile devices (tablets, cellphones) usually have at least GPS tracking built into them, and mobile device management software (examples include MobileIron, Cisco Meraki, and the like) will usually report this information.
Physical access records include video recordings and logs from physical access tools such as Datawatch Systems or Kastle Systems card logs (among other vendors). These are the logs showing badge access into a facility and can be helpful for associating physical access with network access. Note that access control log formats will be vendor-specific, and interpreting them may require using a specific vendor-provided tool. Be prepared to develop some interstitial software to export and process the data in your preferred console.
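A sketch of that interstitial software follows, assuming a hypothetical CSV export with `EventTime`, `CardNumber`, `DoorName`, and `Direction` columns. Real vendor exports will differ, and the column names here are illustrative:

```python
import csv
from datetime import datetime

def normalize_badge_log(export_path):
    """Yield (timestamp, badge_id, door, direction) tuples from a vendor
    CSV export, normalized into a schema the rest of the toolchain shares."""
    with open(export_path) as f:
        for row in csv.DictReader(f):
            yield (
                datetime.strptime(row["EventTime"], "%m/%d/%Y %H:%M:%S"),
                row["CardNumber"],
                row["DoorName"],
                row["Direction"],  # e.g., "IN" or "OUT"
            )
```

The payoff of normalizing early is that badge events can later be joined against network logins without re-parsing each vendor's format mid-investigation.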
Basic network-based techniques will focus on tracking where within an organizational network (if the traffic is within the network) a host is located. This can be as simple as checking the IP address of a host, or it can require running a traceroute to the host to determine where it is within the organization’s routing infrastructure.
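A sketch of this, assuming a subnet-to-site table you maintain yourself and falling back to the system traceroute utility (Unix-like systems) for addresses the table doesn’t cover:

```python
import ipaddress
import subprocess

# Hypothetical mapping from your network plan; maintain this as inventory.
SITE_MAP = {
    ipaddress.ip_network("10.1.0.0/16"): "headquarters",
    ipaddress.ip_network("10.2.0.0/16"): "branch office",
}

def locate_host(addr):
    """Map an address to a site by subnet; fall back to the routing path."""
    ip = ipaddress.ip_address(addr)
    for net, site in SITE_MAP.items():
        if ip in net:
            return site
    # Unknown subnet: inspect the path for recognizable router hops.
    hops = subprocess.run(["traceroute", "-n", addr],
                          capture_output=True, text=True).stdout
    return "unknown (see traceroute):\n" + hops
```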
In addition to access records, expect to need multiple redundant data sources to track insiders. If insiders are aware, for example, that they must pass through a web proxy to reach the outside world, expect them to move their traffic around that proxy, leaving you to fall back on NetFlow.
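As an illustration of that fallback, here is a sketch that scans simplified flow tuples for web traffic leaving the network without passing through the proxy. The proxy address, internal range, and record layout are assumptions; with SiLK, the same idea would be expressed as an rwfilter query:

```python
import ipaddress

PROXY = ipaddress.ip_address("10.0.0.80")      # hypothetical proxy address
INTERNAL = ipaddress.ip_network("10.0.0.0/8")  # hypothetical internal range

def proxy_bypass(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples.
    Yields web flows that left the network without originating at the proxy."""
    for src, dst, dport in flows:
        src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
        if (src in INTERNAL and dst not in INTERNAL
                and dport in (80, 443) and src != PROXY):
            yield (str(src), str(dst), dport)
```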
Be aware that insider threat investigations may involve setting up new log collection capabilities; the analysis team may be called in several months before any final decision is made.
Since the analyst is usually in a supporting role for an insider investigation, he often will have a clear idea of what particular user he needs to monitor. The hard part is associating that user’s identity with observable phenomena, in particular when using redundant sources.
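One minimal sketch of that association, joining hypothetical badge-entry times with network logins to flag logins attributed to a user with no recent physical presence; the record shapes here are assumptions, and real sources rarely share keys this cleanly:

```python
def logins_without_presence(badge_entries, logins, window):
    """badge_entries: dict mapping user -> list of building-entry datetimes
    (from the normalized badge log). logins: iterable of (user, datetime).
    window: timedelta during which a badge entry counts as presence.
    Yields logins with no supporting badge entry."""
    for user, when in logins:
        entries = badge_entries.get(user, [])
        present = any(
            0 <= (when - entry).total_seconds() <= window.total_seconds()
            for entry in entries
        )
        if not present:
            yield (user, when)
```

A hit here doesn't prove anything on its own; it's exactly the kind of hint discussed earlier, to be combined with other evidence before escalating.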
D. Cappelli, A. Moore, and R. Trzeciak, The CERT Guide to Insider Threats: How to Prevent, Detect, and Respond to Information Technology Crimes (Boston, MA: Addison Wesley Professional, 2012).
1 See P. Early, “Brian J. Kelley, My Friend the Spy Expert” and M. McKay, “To Catch a Spy: Probe to Unmask Hanssen Almost Ruined Kelley” (60 Minutes Transcript).
2 See D. Kravets, “San Francisco Admin Charged with Hijacking City’s Network”.
3 See G. Newsom, “Why Government Should Outsource Technology”, P. Venezia, “Why San Francisco’s Network Admin Went Rogue”, and R. McMillan, “Terry Childs Juror Explains Why He Voted to Convict”.
4 See United States Department of Justice, US Attorney, District of New Jersey, “Former UBS Computer Systems Manager Gets 97 Months for Unleashing ‘Logic Bomb’ on Company Network” and M. Worman, “Information Ordnance: Logic Bombs, Forensics, and the Tragical History of Roger Duronio”.
5 As always, there are many exceptions. Insiders may probe inside the network if they’re not sure where an asset is, but we’ll focus on their distinctive behavior here.
6 Note that this is also a function of motivation; an insider looking to steal data for profit has less motivation to take everything than an insider looking to publish everything she can get her hands on on a leak site.