5 common decision-making biases in cybersecurity

Biases in decision-making can contribute to adverse cybersecurity outcomes. Find out why being empathetic and giving others the benefit of the doubt are key when addressing these biases.

We humans seem to make choices based, to a significant extent, on biases we’ve accumulated over the years. Those biases are not helpful, particularly when we’re making decisions about cybersecurity.

“By improving our understanding of biases, it becomes easier to identify and mitigate the impact of flawed reasoning and decision-making conventions,” writes Margaret Cunningham, PhD, principal research scientist, in her Forcepoint report Thinking About Thinking: Exploring Bias in Cybersecurity with Insights from Cognitive Science.

“Our efforts to build harmony between the best characteristics of humans and the best characteristics of technology to tackle cybersecurity challenges depend on understanding and overcoming bias,” says Cunningham.

The psychology behind bias

Put simply, bias is discrimination: The tendency to favor one group, person, or thing over another. “For better or worse, bias is an inescapable feature of the human experience,” explains Cunningham. “We are shaped by a combination of our environment, our genetics, and our cognitive ability to process and make sense of our world. This means that our decisions, behaviors, and experiences are influenced by the experiences of the past and the present.”

Think about thinking

Sometimes we are aware of our biases; other times we are not, and that blind spot lulls us into believing we have everything under control. “This is because biases dwell beneath the surface of our awareness as automatic thought processes,” contends Cunningham. “Building awareness of cognitive biases can help us move beyond biased decision-making, and more importantly, help us avoid designing systems that perpetuate our own biases in technology.”

Cunningham continues, “To achieve this type of awareness, we have to challenge ourselves to think about thinking. Thinking about, and understanding, how we think and reason is especially beneficial when we identify situations where bias is likely to have a significant negative impact on our choices or behaviors.”

Cunningham compiled a list of known decision-making biases meaningful to cybersecurity professionals, along with information on how each bias functions and how it can impact an organization’s cybersecurity. Here are a few examples.

Aggregate bias: This happens when we infer something about an individual using data that describes the broader population with which the individual is associated. Cunningham suggests, “This results in bias because information used to understand groups of people cannot be assumed to be accurate at the individual level, as individuals often have many other variables that impact their behavior.”

Addressing aggregate bias is paramount during security investigations and when developing data-security platforms that account for human error and other risk factors. Cunningham suggests that behavioral analytics allowing for self-to-self, self-to-peer, and self-to-global comparisons can provide context for understanding individual behaviors.
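To make those three comparisons concrete, here is a minimal sketch in Python. It is not Forcepoint’s implementation; the metric (daily file downloads), the baselines, and the function names are all illustrative assumptions:

```python
from statistics import mean, stdev

def z_score(value: float, baseline: list[float]) -> float:
    # Standard score of `value` against a baseline of past observations.
    sd = stdev(baseline)
    return 0.0 if sd == 0 else (value - mean(baseline)) / sd

def contextualize(today: float, own_history: list[float],
                  peer_history: list[float], global_history: list[float]) -> dict:
    # Score one user's activity against three baselines so an unusual
    # number is judged in context, not against the population alone.
    return {
        "self_to_self": z_score(today, own_history),       # vs. the user's own past
        "self_to_peer": z_score(today, peer_history),      # vs. their team or role
        "self_to_global": z_score(today, global_history),  # vs. the whole organization
    }

# 40 downloads looks extreme against the global baseline, but it is routine
# for this user and their peers -- flagging on the global score alone would
# be aggregate bias in action.
print(contextualize(40,
                    own_history=[35, 42, 38, 41],
                    peer_history=[30, 45, 38, 50],
                    global_history=[2, 3, 1, 4, 2, 3]))
```

The point of the three scores is the contrast: the same raw number can be an outlier globally yet perfectly normal for the individual, which is exactly the context Cunningham argues investigators need.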

Availability bias: The more frequently individuals encounter specific types of information, the easier it is for their minds to access it, which in turn impacts how they perceive similar events. Countering this bias, according to Cunningham, requires working with existing technology and data to get an accurate representation of current threats. Cunningham writes, “We do this so that our decisions are not made based on news cycles that potentially inflate or misrepresent the probability of certain types of threats. This means that coping with availability bias requires both humans and technology. Humans are required to create an organizational culture and communication strategy that values the expertise of security personnel, and technology can help provide more accurate probabilities of various types of threats.”
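As one illustration of letting an organization’s own data, rather than headlines, set the probabilities, the short sketch below computes empirical base rates from a hypothetical incident log. The incident categories and counts are invented for the example:

```python
from collections import Counter

# Hypothetical incident log; in practice this would be pulled from a SIEM
# or ticketing system rather than hard-coded.
incidents = [
    "phishing", "phishing", "misdelivered_email", "phishing",
    "lost_device", "phishing", "misdelivered_email", "ransomware",
]

counts = Counter(incidents)
total = sum(counts.values())

# Empirical base rates are a check on availability bias: ransomware may
# dominate the news cycle while the organization's own data shows that
# mundane incident types are far more probable.
for threat, n in counts.most_common():
    print(f"{threat}: {n / total:.0%}")
```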

Confirmation bias: The tendency to confirm existing beliefs by searching for and building information around an argument while excluding opposing viewpoints. “Confirmation bias not only affects our reasoning strategies, but it also impacts our memory of information,” adds Cunningham. “People tend to focus on and remember information that confirms or aligns with their beliefs while discounting or forgetting information that opposes their viewpoint.”

Overcoming confirmation bias requires accepting different points of view, fostering an attitude of “let’s hear what everyone thinks,” and thinking outside the box.

The framing effect: Another factor that impacts how people make choices is how those choices are worded. Saying something is a sure bet, whether it is or not, will more often than not result in its being perceived as the best option. “Framing effects are somewhat fragile, and their impact depends on the one-sided nature of the phrasing of a question,” writes Cunningham. “When a person knows that they need to pay attention to how questions are phrased, they can overcome their initial knee-jerk reactions when making a choice.”

Fundamental attribution error: This bias is the tendency to see another person’s failures or mistakes as part of their identity rather than attributing the failure or mistake to contextual or environmental influences. A common example, according to Cunningham, would be the bi-directional finger-pointing between IT personnel and end users.

As for the remedy, Cunningham suggests personal insight and empathy, adding that, difficult as it is, we need to recognize when we are blaming a person rather than the factors that shaped that person’s behavior. “It is also difficult to acknowledge when we are responsible, due to our shortcomings, for adverse events or outcomes,” writes Cunningham. “What we can do is practice empathy and build our capacity for giving others the benefit of the doubt. For supervisors and leaders, acknowledging imperfections/failures can help create a more resilient and dynamic culture. For the people designing complex software architectures, consider that your perspective is highly security focused—while your users’ motivations may not be—and that their failures are not because they are stupid, but because they’re human.”

Final thoughts

Personally, I have found that answers falling in the middle, in that gray area, end up being the best solutions. As to how that is accomplished, Cunningham offers the following advice: “Human weaknesses and cognitive shortcuts that result in bias require us to foster a sense of intrinsic motivation to address bias while requiring us to turn towards one another and towards technology to minimize the impact of predictable biases in the cybersecurity community.”
