Cognitive Bias Can Hamper Security Decisions

A new report sheds light on how human cognitive biases affect cybersecurity decisions and business outcomes.

It’s a scenario commonly seen in today’s businesses: executives read headlines about a major breach by foreign adversaries out to pilfer customers’ Social Security numbers and passwords. They worry about the same happening to them and strategize accordingly – but reading further, they learn the breach hit a company in a different industry, of a different size, and targeted different data.

That incident, irrelevant to their business, has distracted leaders from the threats that actually matter to them.

It’s an example of availability bias, one of many cognitive biases influencing how security and business teams make choices that keep an organization running. Availability bias describes how the frequency with which people receive information affects decisions. As nation-state attacks make more headlines, they become a greater priority among people who read about them.

“At the organizational level, we have major decision-makers deciding how much to spend on cybersecurity solutions,” explains Dr. Margaret Cunningham, principal research scientist at Forcepoint and author of “Thinking about Thinking: Exploring Bias in Cybersecurity with Insights from Cognitive Science.” These execs may be aware of more frequent threats like phishing, but the real problem is they’re interpreting risk based on what’s available in today’s news cycle.

Understanding these biases can shed light on how employees make decisions, Cunningham continues. An academic who describes herself as “obsessed with the type of mistakes people make,” she noticed that human error was a common topic in cybersecurity. What human error actually is, however, and the many kinds of mistakes people make, were not.

“There’s no way we can be robotic about human perception,” she says. “As security specialists, we need to mitigate these risks by understanding them better.” Bias, or the tendency for people to favor a person, group, or decision over another, is swayed by past and present experiences. It’s common in situations where the correct choice isn’t always clear, she explains.

Cybersecurity: An Emotional Roller Coaster

Cognitive bias isn’t specific to cybersecurity; it’s universal. Many factors that influence human behavior fly under our radar, especially in stressful situations that security pros often face.

“Bias is fluid,” says Cunningham. “It’s shaped by past experiences, how tired we are, how emotional we are, and honestly, being in tech is highly emotional.” She points to the “warlike” language often used in cybersecurity: attack, breach, firewall. Security practitioners are especially prone to bias because they work in a field that’s highly emotional and often abstract.

Six types of bias can skew security strategies. Availability bias is one, but teams should also be aware of aggregate bias, in which people infer something about an individual based on data that describes broader populations. For example, Cunningham reports, older people are often considered riskier users because of a perceived lack of familiarity with new technologies.

Confirmation bias occurs when someone seeks to confirm their beliefs by searching exclusively for information that supports their hunch while excluding opposing data. This is especially common in security, she says, and typical among analysts who enter an investigation digging for an answer they already want. “[This] creates avenues for ignoring the whole picture because we’re looking for what we know,” says Cunningham, instead of considering other possibilities.

Another is anchoring bias, which occurs when a person locks onto a specific feature, or set of features, of data early in the decision-making process. If a value is introduced early in a sales pitch, for example, all the numbers that follow will be in the same ballpark. This plays into cybersecurity when, for example, an analyst clings to a certain value early in an investigation and fails to move away from it – even when the solution to the problem requires them to do so.

The framing effect, in which the way choices are worded shapes the decision, is often used to manipulate those who buy security tools. For example, a vendor may say “one in five companies never got their data back after a ransomware attack,” placing the focus on the one organization that lost data instead of the four that didn’t. This strategy can push security admins to buy pricey tools to address low-probability risks.

Finally, there’s the fundamental attribution error: the tendency to view people’s mistakes as part of their identities rather than as the product of contextual or environmental factors. It’s seen throughout security, where analysts or developers blame users for creating risk on the assumption that they’re less capable. There’s also a self-serving bias at play, as users often blame IT in return.

Breaking Biases

Security pros cannot “cure” biases, says Cunningham, just as they can’t cure people making mistakes.

“What we can do is become better acquainted with the types of decisions, or decision points, that are frequently and predictably impacted by bias,” she explains. If someone is aware of the potential for bias in a situation, they can use that awareness to avoid it and make different, more informed choices as a result. After all, she points out, attackers understand how to manipulate human emotion, and they’re using that knowledge to their advantage.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial …
