The trick to blocking cognitive biases from clouding IT security decisions

In his book "You Are Now Less Dumb" and his blog You Are Not So Smart, David McRaney documents a number of cognitive biases that shape our daily lives. These are innate thought processes that lead us to ignore objective evidence and misunderstand the world around us. Ultimately, they leave us with a happier (if slightly fictitious) view of the world.

In IT security we experience similar biases and often ignore hard evidence that contradicts our worldview. This might make us more trusting of technology and, therefore, happier using it, but we are taking bigger personal risks than we realise.

Common cognitive biases include the sunk cost fallacy, confirmation bias, the post hoc fallacy, the common belief fallacy, and enclothed cognition. This article focuses on how the latter two biases manifest in our technological lives.

SEE: Security and Privacy: New Challenges (ZDNet/TechRepublic special feature)

Common belief fallacy

The principle in psychology: If many people believe something, you are likely to think they are correct.

The principle in security: The more people use a technology, the more likely you are to believe it is secure.

The reality: The number of users of a technology is an unreliable indicator of its security.

We often base our personal risk decisions on the risk decisions of those around us. If everyone else uses a service, how could it be so bad? Likewise, if particular settings were bad for us, why would they be the default settings?

Security decisions are complicated, and your neighbour may know less than you do. Their decision to use an online service was probably influenced by someone else they know. In 2008, researchers estimated that it would take the average person about a month every year just to read the privacy policies of every website they used, and that estimate excludes end user license agreements and the privacy policies of mobile apps, which the research did not cover. So it is clear that you, your friends, and your family may not be making fully informed decisions about security.
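
To see how an estimate like that can arise, here is a back-of-the-envelope sketch. The figures are illustrative assumptions, not the 2008 study's actual inputs:

    # Rough estimate of annual privacy-policy reading time.
    # All figures below are illustrative assumptions, not the study's data.

    sites_visited_per_year = 1500      # assumed number of distinct sites
    words_per_policy = 2500            # assumed average policy length
    reading_speed_wpm = 250            # assumed reading speed (words/minute)

    minutes_per_policy = words_per_policy / reading_speed_wpm
    hours_per_year = sites_visited_per_year * minutes_per_policy / 60

    print(f"~{hours_per_year:.0f} hours/year")   # ~250 hours, i.e. weeks of working time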

The default settings in an application are set mainly for the benefit of the software's manufacturer. You can be confident that if a feature is complicated, generates support calls, or would spoil a user's first impression, it will be disabled by default. Likewise, if opting the user into some feature or service by default maximises revenue for the manufacturer, that is usually how it will be set. This is not to imply an evil conspiracy; there are simply trade-offs between security and usability, and between privacy and revenue. The defaults usually do not favour security and privacy.
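
To make the trade-off concrete, here is a minimal sketch of auditing shipped defaults against what a privacy-conscious user would actually want. The setting names and the baseline are hypothetical, not drawn from any particular product:

    # Compare an application's shipped defaults against a privacy-friendly baseline.
    # Both dictionaries are hypothetical examples, not a real product's settings.

    shipped_defaults = {
        "usage_telemetry": True,     # benefits the vendor's analytics
        "personalised_ads": True,    # benefits the vendor's revenue
        "two_factor_auth": False,    # off because it adds friction at signup
        "auto_update": True,         # on because it cuts support calls
    }

    privacy_baseline = {
        "usage_telemetry": False,
        "personalised_ads": False,
        "two_factor_auth": True,
        "auto_update": True,
    }

    for setting, wanted in privacy_baseline.items():
        actual = shipped_defaults.get(setting)
        if actual != wanted:
            print(f"Review '{setting}': shipped as {actual}, baseline suggests {wanted}")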

Enclothed cognition

The principle in psychology: We think that we make objective decisions, disregarding superficial indicators like clothing.

The principle in security: We think we are evaluating security risks and are not persuaded by superficial indicators like fonts, icons, and brands.

The reality: The "virtual clothing" on our software influences our decisions far more than we realise.

Enclothed cognition is a term coined by Hajo Adam and Adam D. Galinsky in a paper published in the Journal of Experimental Social Psychology. The idea is that when you dress differently, you think differently. In security, the "clothing" is the software's visual trappings: typical users consider a computer or mobile device more secure if they see plenty of security-related icons on the desktop, regardless of whether those programs are up to date, redundant, or even worthwhile in the first place.

Arguably, most users can't possibly verify the security claims of most apps. For instance, one developer sold a paid Android "antivirus" application that did nothing except tell users they were safe. Another popular "secure vault" app on Android and iOS used the weakest possible protection short of not protecting the data at all.
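
The reported weakness in apps of that kind was trivially reversible obfuscation dressed up as encryption. The sketch below is a generic illustration of that class of mistake (a fixed single-byte XOR), not the actual code of any named app:

    # Illustration only: obfuscation that looks like encryption but is not.
    # A fixed, hard-coded key can be extracted from the app and undone instantly.

    HARD_CODED_KEY = 0x5A  # hypothetical constant baked into the app binary

    def weak_protect(data: bytes) -> bytes:
        return bytes(b ^ HARD_CODED_KEY for b in data)

    secret = b"my diary entry"
    stored = weak_protect(secret)           # looks scrambled on disk...
    print(weak_protect(stored) == secret)   # ...but the same call reverses it: True

    # Real protection would use a vetted cipher (for example AES-GCM) with a key
    # derived from the user's passphrase, not a constant anyone can extract.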

We urge users to treat security indicators like padlock icons and authenticity seals as proxies for security evaluations. It is an efficient optimisation, and it works a lot of the time; if we did thorough due diligence for every decision, we would never get anything done.
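
The padlock is a good example of how narrow such a proxy is: it attests that the connection is encrypted and the certificate chain checks out, and nothing more. A short sketch using Python's standard library (against a placeholder hostname) of what the padlock actually verifies:

    # What the browser padlock actually verifies: a TLS handshake and a valid
    # certificate chain. It says nothing about how the site handles your data.
    import socket
    import ssl

    hostname = "example.com"  # placeholder host
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("TLS version:", tls.version())
            print("Issued to:  ", dict(x[0] for x in cert["subject"]))
            print("Issued by:  ", dict(x[0] for x in cert["issuer"]))

Nothing in that output says whether the site stores your password safely or resells your data; that is the part the padlock cannot speak to.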

Outwitting ourselves

It is important, however, to recognise when we are making a genuinely significant decision and to avoid giving too much weight to appearances. The trick to correcting these kinds of behaviours in information security is to adopt the same sort of practice that scientists use: externalising decision making.

To make educated, unbiased decisions, scientists follow a method that obliges them, among other things, to gather data and to evaluate that data objectively and repeatably. Doctors, for example, conclude that a treatment is worthwhile based on evidence of its benefits weighed against the risk of possible harm.

If we want to make intelligent security decisions, we need to slow down and externalise the risk discussion. It may seem tedious to write out the benefits and risks of a new technology and discuss them explicitly, but it is a very effective way to prevent cognitive biases from dominating your decision making.
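
Externalising the discussion can be as simple as forcing the trade-offs into a written structure you can revisit and argue over. A minimal sketch follows; the fields, the example entries, and the 1-to-5 scoring are assumptions for illustration, not a formal methodology:

    # A bare-bones written risk record: putting benefits and risks into an
    # explicit structure makes them harder for a bias to quietly discount.
    from dataclasses import dataclass, field

    @dataclass
    class RiskEntry:
        description: str
        likelihood: int   # 1 (rare) to 5 (almost certain), assumed scale
        impact: int       # 1 (minor) to 5 (severe), assumed scale

        @property
        def score(self) -> int:
            return self.likelihood * self.impact

    @dataclass
    class Decision:
        technology: str
        benefits: list[str] = field(default_factory=list)
        risks: list[RiskEntry] = field(default_factory=list)

    decision = Decision(
        technology="New cloud file-sharing service",   # hypothetical example
        benefits=["Easier collaboration", "Off-site backups"],
        risks=[
            RiskEntry("Data exposed through misconfigured sharing", 3, 4),
            RiskEntry("Vendor lock-in", 4, 2),
        ],
    )

    for risk in sorted(decision.risks, key=lambda r: r.score, reverse=True):
        print(f"{risk.score:>2}  {risk.description}")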
