Empathy is a powerful tool in human relationships, but it is often disregarded in more technical realms such as security.
Sometimes even talking about empathizing with a threat actor causes discord: throughout history, empathizing with an enemy has often been seen by the masses at best as foolish, and at worst as treachery. But it is not lost on tacticians and decision makers that knowing the enemy intimately is a powerful strategic advantage.
Empathy (deeply understanding someone else's thoughts, feelings, and experiences) is sometimes confused with sympathy (care for someone else's suffering). Understanding someone is not the same as agreeing with them - an important distinction for getting past certain mental barriers that can inhibit understanding.
So it is very possible to have deep empathy (understanding) of someone we vehemently disagree with, while having no sympathy for their suffering or challenges. I know this is probably covering old ground for most of you, but I think it's important to be extra clear.
In security, threat modeling can sometimes be a dry exercise consisting of statements like "The threat actor may compromise user credentials, leading to unauthorized disclosure of sensitive information". This is certainly technically correct; however, it tends to make future readers' eyes glaze over, and it misses key human information that can add context and understanding - leading to better solutions.
When it comes to threat modeling, or even just thinking about problems from multiple angles, I frequently use empathy to put myself in another person's place and see how they might perceive the situation and act. I generally consider myself to be pretty good at empathizing, and use it regularly in both my personal and professional life.
It isn't hard to imagine being a threat actor and everything that entails. The motives are generally understandable and easy to empathize with. There are a few archetypes I usually think about:
- Government actors - seeking military or economic advantages for their own nation and people (often labeled as APTs).
- Financially motivated actors - theft of personal information for identity theft, valuable IP, crypto-mining, or direct monetary theft (cyber crime).
- Ideological - causing financial or reputational harm to political, ideological, or economic rivals in an attempt to prop up their own belief system.
- Classical Hackers - someone who is interested in how things work and wonders if they can make a particular hack work, similar to how the MySpace Samy worm came about. I haven't seen many attacks in this class in recent years, possibly due to the rise of bug bounties and other proper outlets for that kind of energy.
There is one type of person, though, that I struggle to understand: the person who causes harm only for the sake of causing harm (also known as a griefer). It is a phenomenon I have noticed more in recent years, and one I have sought to better understand and identify with.
I was reading an article recently about the video game Journey (a social game that depends on collaboration between strangers online), and all the steps its developers had to take to prevent this kind of behavior. In the game, it manifested as players pushing others off of cliffs, or leading a partner to a place where they became stuck, unable to continue.
The article hypothesizes that some players gain more joy from watching another person display frustration than from helping, which seems to match other descriptions of why griefing occurs.
I once read a comment from a griefer saying that this behavior was funny when inflicted on others, but almost suicidal when it happened to them. Most discussion of this behavior revolves around online gaming, where it doesn't impact people's livelihoods or carry other significant real-world consequences, though it may destroy digital art or work of personal significance to players.
In the end, it appears that there is a class of people who really do derive significant enjoyment from the suffering of others. I may not have much luck empathizing with them, but knowing their reasoning is enough to predict what they might do: they are likely to do things that cause pain to owners and admins - for instance, denial of service (to cause problems for admins), defacing a site, or taking over social accounts.
When turning all of this back into a threat model, we can clarify not only the threat, but also the likely follow-up actions and the reasoning attackers would use. With this, we can craft a story that non-security team members are more likely to buy into.
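As a minimal sketch of this idea, a threat-model entry can carry the human context (actor archetype, motivation, likely follow-up actions) alongside the technical threat, and render itself as a narrative for non-security readers. The structure and names below are hypothetical illustrations, not any established framework:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatScenario:
    """A threat-model entry that keeps the human context attached."""
    actor: str                  # archetype, e.g. "nation-state", "criminal", "griefer"
    motivation: str             # why this actor would bother
    technique: str              # what they would actually do
    follow_ups: list = field(default_factory=list)  # likely next actions

    def as_story(self) -> str:
        """Render the scenario as a sentence a non-security reader can buy into."""
        then = "; then ".join(self.follow_ups) or "no further action"
        return (f"A {self.actor} actor, motivated by {self.motivation}, "
                f"{self.technique}; then {then}.")

scenario = ThreatScenario(
    actor="griefer",
    motivation="enjoyment of others' frustration",
    technique="launches a denial-of-service attack against the admin console",
    follow_ups=["defaces the public site", "takes over social accounts"],
)
print(scenario.as_story())
```

The point of `as_story` is exactly the translation this post argues for: the same facts a dry "threat actor compromises X" statement would carry, but framed as a narrative with a motive attached.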
After all, it is much more interesting to think about a foreign agent compromising an embedded device as part of a multi-stage intelligence operation, or a criminal organization siphoning money, than it is to describe a "threat actor compromising user credentials".
Management still tends to see these scenarios as far-fetched (which is why news examples are so helpful), but at least they tend to understand the ideas and importance of security when presented as a story instead of a series of dry statements.
So if you haven't tried using empathy in your security work, give it a try and see if it helps your thought process and perhaps also your internal sales process.