Lean Risk Management for Nonprofits 06 – We Need Risk Management Because of the Way We Think

 

We tend to think that humans objectively accept information about their surroundings and circumstances, process that data accurately, make careful predictions about the likely effects of various actions, determine the best course of action, and follow that course. This flawless computational model, however, is inaccurate.

Human brains are complex systems with predictable shortcomings. By reflecting on human fallibility in individuals and groups, we gain insight into why risk management can help organizations make better decisions under conditions of persistent uncertainty. Thus, in this post, I will provide a brief survey of some of the predictable fallibilities of human brain function that impact risk awareness and decision making.

 

An Overview of Informative Sources On Human Cognition

The fields of cognitive science and neuroscience are burgeoning. No single resource will provide authoritative guidance, and any particular popular treatment of the issues will contain simplifications and over-generalizations. The works cited parenthetically throughout this post, however, are excellent introductions to the observations that follow.

 

Human Brain Function Impacts Decision-Making

1. We Create Our Own Realities

To begin with, the “real world” we interact with is not nearly as indisputably “real” as we think. We tend to believe that the world is “out there” and is perceived identically by every rational human being. That is simply not true.

Perception is not passive. Although we like to believe that we receive information passively, like a recording device, the reality is much more complex. Preexisting mindsets and mental models color what data we select and how we interpret it. These mindsets allow us to rapidly process information, but they also “color and control our perception to the extent that an experienced specialist may be among the last to see what is really happening when events take a new and unexpected turn. When faced with a major paradigm shift, [those] who know the most about a subject have the most to unlearn.” (Heuer 1999.) As a result, two staff members in a nonprofit may “record” and “interpret” the same incident in completely different ways.

Perceptual biases impact cognition. We tend to give vivid, concrete, and personal information greater priority than ambiguous and generalized data. Anecdotes are often dramatically more powerful than statistics. (Heuer 1999.) We store and recall personal stories and details about individuals, while discounting background information about large groups. We tend to overvalue novel events. (Johnson 2004.) Thus, different staff members may recall items that strike them as “unusual” while discounting elements of a situation that seem within the norm.

We exhibit “naive realism.” To help us make quick decisions, we have evolved to rely on past patterns of behavior and past perception. Although evolutionarily helpful, this categorization leads us to exaggerate differences among categories and to project old information onto new circumstances. (Tavris & Aronson 2007.) Furthermore, we tend to believe that our own perceptions are

“accurate, realistic, and unbiased. Social psychologist Lee Ross calls this phenomenon ‘naive realism,’ the inescapable conviction that we perceive objects and events clearly, ‘as they really are.’ We assume that other reasonable people see things the same way we do. If they disagree with us, they obviously aren’t seeing clearly.”

(Tavris & Aronson 2007.) As a result, each staff member prejudges, and each is inclined to believe that his or her interpretation is right and others’ are wrong.

 

2. The “We” That Creates Our Reality Is Largely Hidden from Us

Emotions profoundly color our perceptions and decisions. We also tend to believe that, as “rational” creatures, we are controlled by logic and reason and can set aside our passions. Contemporary research suggests that this is a badly flawed model. To the contrary, emotions experienced while perceiving an event profoundly color perception. Furthermore, research suggests that we routinely underestimate the impact of such emotions. In Predictably Irrational, researcher Dan Ariely recounts a series of fascinating and disquieting studies involving college students’ behavior under the influence of arousal. His conclusions were provocative: under the influence of powerful emotions, we act differently than we would in an unaroused state. Furthermore, we cannot accurately predict how our aroused selves will act:

“[E]very one of us . . . underpredicts the effect of passion on our behavior. . . . Even the most brilliant and rational person, in the heat of passion, seems to be absolutely and completely divorced from the person he thought he was. Moreover, it is not just that people make wrong predictions about themselves—their predictions are wrong by a large margin.”

(Ariely 2010.)

A host of other subconscious factors in addition to emotion also influence our “conscious” behavior. In the colorful language of Jonathan Haidt, reason may be the rider, but he rides an elephant of unconscious intuition. The rider, moreover, is the evolutionary latecomer, having evolved to serve the elephant:

“[The rider/elephant cognitive collaboration is] a dignified partnership, more like a lawyer serving a client than a slave serving a master. Good lawyers do what they can to help their clients, but they sometimes refuse to go along with requests. . . . The elephant is far more powerful than the rider, but it is not an absolute dictator.”

(Haidt 2011.)

We see causes everywhere. In an attempt to make sense of our environment, we develop causal hypotheses rapidly. We prefer not to see accidents, but instead impose explanations on them, discarding data that is inconsistent with those cause-and-effect explanations. (Heuer 1999.) As a result, we suffer from hindsight bias, imposing causal connections on events after the fact. It is hard for us to shake our conclusions. Once we’ve decided what to believe, it is challenging to force ourselves to rethink circumstances:

“We are especially affected when we are forced to make a spur-of-the-moment judgment, in which case we tend to perform a kind of self-anchoring. Revising an intuitive, impulsive judgment will never be sufficient to undo the original judgment completely. Consciously or unconsciously, we always remain anchored to our original opinion, and we correct that view only starting from that same opinion.”

(Piattelli-Palmarini 1994.) “Impressions tend to persist even after the evidence that created those impressions has been fully discredited.” (Heuer 1999.)

Together, these factors mean that each nonprofit staff member will likely believe that he is more rational than he is — and will predictably underestimate the extent to which emotion and other subconscious factors color his perceptions. He will prejudge, make unwarranted causal inferences, and jump to conclusions that are hard to dislodge.

 

3. Our Memory and Imagination Are Imperfect

Our memories are malleable. Although we commonly think of our memories as similar to computer hard drives, that metaphor is also flawed. In our brains, we change data as it is stored, we change data as it is retrieved, and data changes within our minds over time as our memories degrade and additional data comes in. (Heuer 1999; Wright 1994.) As one result, our memories tend to serve as vehicles of self-justification: “memory smoothes out the wrinkles of dissonance by enabling the confirmation bias to hum along, selectively causing us to forget discrepant, disconfirming information about beliefs we hold dear.” (Tavris & Aronson 2007.)

When predicting the future, we rely too much on the present and past. Psychologist Daniel Gilbert expressed this vividly:

“If the past is a wall with some holes, the future is a hole with no walls. Memory uses the filling-in trick, but imagination is the filling-in trick, and if the present lightly colors our remembered pasts, it thoroughly infuses our imagined futures. More simply said, most of us have a tough time imagining a tomorrow that is terribly different from today, and we find it particularly difficult to imagine that we will ever think, want, or feel differently than we do now.”

(Gilbert 2006.)

These factors mean that no team member in a nonprofit has perfect memory. We have neither totally reliable data nor totally reliable predictive powers. Individually, we are each subject to error.

 

4. We Want to Be Honest, But Often Act Dishonestly

Organizations tend to premise their structures on the bedrock principle that most people are honest. That premise is shaky. Ariely and others have performed substantial research into the extent to which honesty guides human behavior. Their conclusion is a mixed bag.

“We care about honesty and we want to be honest. The problem is that our internal honesty monitor is active only when we contemplate big transgressions . . . . For the little transgressions, like taking a single pen or two pens, we don’t even consider how these actions would reflect on our honesty and so our superego stays asleep.” (Ariely 2010.)

Ariely generalizes as follows:

“In a nutshell, the central thesis is that our behavior is driven by two opposing motivations. On one hand, we want to view ourselves as honest, honorable people. We want to be able to look at ourselves in the mirror and feel good about ourselves (psychologists call this ego motivation). On the other hand, we want to benefit from cheating and get as much money as possible (this is the standard financial motivation). Clearly these two motivations are in conflict. How can we secure the benefits of cheating and at the same time still view ourselves as honest, wonderful people? This is where our amazing cognitive flexibility comes into play. Thanks to this human skill, as long as we cheat by only a little bit, we can benefit from cheating and still view ourselves as marvelous human beings. This balancing act is the process of rationalization, and it is the basis of what we’ll call the ‘fudge factor theory.’”

(Ariely 2013.)

Thus, organizations may err in presuming that all members of their team will act consistently with notions of “honesty” at all times. This doesn’t mean humans are bad, and it doesn’t mean that people cannot be trusted. It means, however, that in order to comprehensively address threats and opportunities, an organization must account for the often sizable gap between human aspirations and human actions.

 

5. We Self-Justify and Don’t Like to Admit Mistakes

We are inherently self-justifying. Humans have a protective mechanism that helps them maintain their self-concept:

“If [the] new information is consonant with our beliefs, we think it is well founded and useful: ‘Just what I always said!’ But if the new information is dissonant, then we consider it biased or foolish: ‘What a dumb argument!’ So powerful is the need for consonance that when people are forced to look at disconfirming evidence, they will find a way to criticize, distort, or dismiss it so that they can maintain or even strengthen their existing belief.”

(Tavris & Aronson 2007.) Once we make a decision between two possibilities, we will reduce any cognitive dissonance about that choice by discounting prior arguments and information supporting the losing choice. (Festinger 1957.)

We don’t like to admit mistakes. Our culture celebrates success and denigrates failure. Furthermore, we equate “mistakes” with “failure”:

“Most Americans know they are supposed to say ‘we learn from our mistakes,’ but deep down, they don’t believe it for a minute. They think that mistakes mean you are stupid. Combined with the culture’s famous amnesia for anything that happened more than a month ago, this attitude means that people treat mistakes like hot potatoes, eager to get rid of them as fast as possible, even if they have to toss them in someone else’s lap.”

(Tavris & Aronson 2007.)

Our ignorance leads to overconfidence. An emerging body of evidence suggests that human beings are truly unaware of their own failings. One ruefully humorous example of this is the so-called “Dunning-Kruger” effect, a cognitive bias in which relatively unskilled people overestimate their competence precisely in the areas where they are weakest.

Researcher David Dunning, who pioneered investigation of this effect, notes that this bias underlies those late-night Jimmy Kimmel and Jimmy Fallon videos in which the proverbial “man on the street” holds forth at length, and with great confidence, on subjects about which he knows nothing. Dunning explains that his research shows that

“in many areas of life, incompetent people do not recognize — scratch that, cannot recognize — just how incompetent they are, a phenomenon that has come to be known as the Dunning-Kruger effect. Logic itself almost demands this lack of self-insight: For poor performers to recognize their ineptitude would require them to possess the very expertise they lack.”

(Dunning 2014.) Ironically, this unawareness breeds overconfidence rather than hesitation: “What’s curious is that, in many cases, incompetence does not leave people disoriented, perplexed, or cautious. Instead, the incompetent are often blessed with an inappropriate confidence, buoyed by something that feels to them like knowledge.”

These factors make accountability challenging in any workplace. Human beings are programmed to justify their own behavior, avoid acknowledgment of their own mistakes, and often assert more expertise than they can muster.

 

6. We Are Particularly Bad at Evaluating Risk

We use imprecise words when thinking about and discussing risk. We use words about uncertainty (possible, probable, certain, unlikely, could, may) that are inherently ambiguous and subject to misinterpretation. (Heuer 1999.) We think everyone knows what “probably” means. Nobody really does.

We tend to recall extremes, rather than everyday experiences. “A heuristic that appears to influence our recall of facts” relating to risk “is one that Daniel Kahneman named the peak end rule: We tend to remember extremes in our experience and not the mundane.” (Hubbard 2009.) This is no doubt related to our inclination to remember the vivid and forget the unexciting and ambiguous. As a result, when making predictions using our background knowledge, we draw from an inherently unreliable sample.

We ignore “base rate” data. Because we favor vivid, extreme, concrete, and recent information, we may discount statistical information about the background likelihood (called the “base rate”) of any given event. A scenario that has greater detail is likely to be deemed more plausible or probable than a scenario with less detail. We also tend to judge probability based upon whether we can imagine the circumstances readily and whether we recall other similar events. (Heuer 1999.)
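To see how much the base rate matters, consider a purely hypothetical illustration (the figures below are invented for the example and do not come from the sources cited here): suppose 1 in 1,000 expense reports contains a serious error, and a review process flags 95 percent of flawed reports while also mistakenly flagging 5 percent of clean ones. Bayes’ rule gives the chance that a flagged report is actually flawed:

\[
P(\text{flawed} \mid \text{flagged}) = \frac{0.95 \times 0.001}{0.95 \times 0.001 + 0.05 \times 0.999} \approx 0.019
\]

Fewer than 2 percent of flagged reports are actually flawed. The vivid “flag” dominates our judgment, while the unglamorous 1-in-1,000 base rate quietly does most of the work.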

We suffer from “anchoring.” When discussing probabilities, our brains tend to “anchor” on initial figures provided, regardless of whether the figures have any relation to the probability of an event at all, in order to create some arbitrary coherence in our minds. (Ariely 2010.)

We are confused by extremes. Our brains are confused by the difference between “no risk” and “extremely small risk.” In fact,

“The truth is that we are impressed by nothing less than big differences in probability, and only when those occur at either pole (near certainty and near-certainly not). We all understand perfectly well the difference between a 3 percent probability that we will suffer a financial disaster and the certainty (e.g., thanks to an insurance policy) that we will not suffer such a disaster at all. We understand far less well the difference between this ‘certainly not’ and a risk represented by a probability of 1 in 10,000. The fact is that many of us want to have (such as when we face doctors) insurance that guarantees no risk at all. If there is risk, but an extremely limited risk, such as 1 in 10,000, we are worried by the fact that there is a risk of some sort, as though between a risk of 1 in 10,000 and a risk of 1 in 100, there were almost no difference at all.”

(Piattelli-Palmarini 1994.)
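Putting the quoted figures side by side makes the point concrete: a 1-in-100 risk is a hundred times greater than a 1-in-10,000 risk, yet our intuition tends to lump both together as simply “some risk,” while treating the insured “certainly not” as categorically different:

\[
\frac{1/100}{1/10{,}000} = 100
\]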

We misunderstand chance. Douglas Hubbard observes:

“If you flip a coin six times, which result is more likely (H = heads, T = Tails): HHHTTT or HTHTTH? Actually, they are equally likely. But researchers found that most people assume that since the first series looks ‘less random’ than the second it must be less likely. Kahneman and Tversky cite this as an example of what they call a representativeness bias. We appear to judge odds based on what we assume to be representative scenarios.”

(Hubbard 2009.)
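The arithmetic behind Hubbard’s example is easy to spell out: each flip is independent and each outcome has probability one half, so any specific six-flip sequence, however orderly or “random” it looks, has exactly the same probability:

\[
P(\text{HHHTTT}) = P(\text{HTHTTH}) = \left(\tfrac{1}{2}\right)^{6} = \tfrac{1}{64} \approx 1.6\%
\]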

We are overconfident about our predictions. When untrained subjects “say they are 90 percent confident in an answer they gave, the average of the studies show that they are closer to having a 66 percent chance of being right. When they say they are 95 percent sure, they have a 70 percent chance of being right.” (Hubbard 2009.) “It is therefore most important to be wary of our overconfidence, for this over-confidence is at its greatest in our own area of expertise—in short, just where it can do the most damage.” (Piattelli-Palmarini 1994.)
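To make those calibration figures concrete: if a well-calibrated manager made ten forecasts at “90 percent confident,” about nine should come true; at the roughly 66 percent hit rate the studies report, only six or seven do:

\[
10 \times 0.90 = 9 \ \text{expected correct} \qquad \text{versus} \qquad 10 \times 0.66 \approx 6.6 \ \text{observed}
\]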

Fear affects perception of risk. As David Ropeik and George Gray note,

“Ultimately we react to risk with more emotion than reason. We take the information about a risk, combine it with the general information we have about the world, and then filter those facts through the psychological prism of risk perception. What often results are judgments about risk far more informed by fear than by facts.”

(Ropeik & Gray 2002.) As a result, “What feels safe might actually be dangerous: driving instead of flying, antibiotics against anthrax instead of flu shots, arming yourself against a phantom risk.” (Id.)

Accordingly, we can reasonably assume that any discussion of risk is fraught with peril. Despite carrying around in our heads roughly three pounds of the best organic computational processor we have ever identified, humans will make errors, particularly when it comes to assessing and addressing risks.

 

Human Cognitive Challenges Cause Numerous Risks and Problems

These cognitive errors and predictable irrationalities cause a host of problems in the workplace.

Wasted effort. Organizations waste an enormous amount of effort. After decades of consulting with some of the most successful companies of the 20th century, management guru Peter Drucker concluded that no more than 10 to 15 percent of the efforts in even the best organization produce 80 to 90 percent of the results; the other 85 to 90 percent of those efforts, no matter how efficiently performed, produce nothing but costs. (Drucker 1977.) That suggests that at least 85 percent of an organization’s efforts are either unproductive or, worse, actually harmful to the organization. Recent research stemming from the “lean management” revolution in manufacturing and service operations confirms Drucker’s anecdotal finding: consultants routinely find that non-value-added activities constitute up to 95 percent of activity in office and service settings. (Venegas 2010.)

Disputes. Human errors and omissions cause disputes. Most litigation involves smaller organizations. Small business owners and nonprofits are increasingly in the crosshairs, and business disputes cost them tens of billions of dollars each year. These lawsuits hurt reputation and creditworthiness, increase the cost of goods and services, and reduce employment. And that is true even when an organization is not liable; if it is, it can face hundreds of thousands of dollars in damages and penalties.

Resource misallocation. Inability or unwillingness to address risks, both positive and negative, can lead to mismatches between people and positions. An organization cannot tell whether the right person is in the right seat when it is not sure what the threats and opportunities are. If you don’t know what you need, it’s only happenstance if you get what you need.

Reduced employee engagement. Inability to address risks undermines employees’ perceptions of an organization’s ability to learn from its mistakes. They don’t see opportunities for advancement. They disengage.

Reputational harm. Investor Warren Buffett famously said that it takes 20 years to earn a reputation and five minutes to lose it. Building a reputation is incremental, but loss of reputation may be instantaneous.

In short, organizations should not ask, “Do we need risk management?” We answer that question by looking in the mirror. To quote Walt Kelly, we have met the enemy, and he is us. The relevant question is, “How do we build an effective risk management process that accounts for predictable human failings?”
