5 keys to an evaluative mind

What a Chinese legend teaches us about making better modern-day decisions
An old legend tells the story of a wise Chinese farmer. He was considered an odd sort of fellow because of the way he had of looking at things. One time, one of his horses ran away. His friends rallied around to console him. "That's too bad," they said. "We are sorry for you." "How do you know it's bad?" the old man asked them. A few days later, the horse returned and brought two wild horses with him. This brought the neighbors on the run. "Good, good!" they said, rejoicing with him. "How do you know it is good?" the old man asked them. They could not answer that one either. So they said no more. In training the two wild horses, the man's son was thrown and his leg was broken. Again the neighbors came to commiserate with him. "Oh, that's too bad," they said. "We sympathize with you and your misfortune." "How do you know that this is bad?" the old man said again. The very next day, a warlord and his army came through the land conscripting all able-bodied young men to fight for them. But not the son of the old man. He was not able-bodied. Alan Watts, the late British-American philosopher and popularizer of Eastern philosophy for Westerners, was fond of telling the story of the farmer, and its moral: there is neither good nor bad, but thinking makes it so.

The human mind is quick to make judgments such as "good idea" or "bad idea"; it did, after all, help decide whether the rustling sound in the grasses was a lion on the savanna (run away: good idea) or simply the wind (run away: bad idea). Taking time to stop and think might mean we end up as lunch. Modern living requires hundreds, if not thousands, of decisions each day. Much of the time, decisions are required without all the information at hand. Or maybe all the information isn't knowable, "because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know," quipped Donald Rumsfeld on the lack of evidence linking the Iraqi government with weapons of mass destruction, while serving as US Secretary of Defense in the George W. Bush administration.

So how, then, do we evaluate and make decisions without all the information? Heuristics: the trial-and-error learning processes often running in your subconscious. Heuristic processes of evaluation are being popularized as a route to better decisions across fields as diverse as business and management, sports and coaching, and the military and education. But heuristics are nothing new: species up and down the animal kingdom use them to learn from past decisions and apply that knowledge to future ones. Most animals don't stop and think about the rustling sound in the grass, and most animals are poor decision-makers. Bringing heuristics into your conscious thought, and applying a process to an evaluative mindset, can lead to better decision-making.

1. Explore first.

Recall that the mind works in two distinct modes: the exploratory mode and the evaluative mode; studies in neuroscience often refer to these as the diffuse mode and the focused mode of thinking. Because the mind defaults to what it has seen before (heuristics, again) rather than working harder for something new, it is primed to switch quickly from an idea-generating state to an idea-evaluating state. Resist the urge to jump to conclusions and evaluate the outcome. Like the Chinese farmer, gather as much information as possible. Separating exploratory from evaluative thinking will lead to more, and therefore better, ideas to evaluate.

2. Outcome does not equal decision.

The best decision-makers apply a process to their decisions, especially with unknown or unknowable information. Poker is the quintessential example of a game where players make rapid, high-stakes decisions, and uncertainty lurks in every hand. "A key distinction between chess and poker is that chess has a right answer, no chance, no randomness; it is all objective, with no subjectivity," says Annie Duke in her recent book Thinking in Bets. "Poker is the opposite, where you have to consider what the other player is thinking, doing, thinking about what you're thinking."

Duke goes on to write that "processes are often opaque, we cannot see inside them or know all of the details, so we use outcome as a heuristic, as a shortcut, as a proxy for process." Poker players call this 'resulting'. "The outcome can skew your perception of whether or not the decisions, and the processes, were good ones." Duke, a World Series of Poker bracelet winner, the winner of the 2004 Tournament of Champions, and the only woman to win the NBC National Heads-Up Poker Championship, describes this as a fatal decision-making mistake.

In truth, the success or failure of an outcome is unrelated to the process used to make the decision. In poker, and in life, luck plays a strong hand. It's lucky to draw a pair of Aces in poker. It's lucky when your runaway horse returns to the farm with two others, but unlucky when you break your leg. "Luck is a way we can explain short-term outcomes," says Duke. "Human nature takes credit for the good stuff, and discounts bad outcomes to luck, a type of self-serving bias to preserve long-term identity." But herein lies the short-circuit to our learning, our heuristics. Again, Duke says, "if there was an element of luck, and it was truly lucky, then there is nothing to learn. But if we are just using luck as a way to explain away our decisions, or to take a shortcut, then there's no point in discussing luck, because it was just lucky."

Instead, Duke applies a process to her decision-making, independent of the outcome. In Thinking in Bets, she tells of countless poker hands where she applied her process to evaluate her play, and lost; conversely, she tells of hands where she made a poor decision and won. As she became a better player, she learned that the outcome of a decision is unrelated to the process used to make it, especially in situations of uncertainty and chance.

3. Get brutal feedback.

We are our own worst critics. We don't spend enough time exploring (see point 1); we quickly take credit for 'good' outcomes and dismiss 'bad' outcomes as poor luck (see point 2). And when we finally point our skepticism inward, we are prisoners of our own prototypes, evaluating everything through the lens of prior events. When evaluating an idea or course of action, the more time and experience you have invested in it, the more entrenched you typically become. You need help.

"Getting feedback is a key element of original thinking," says Adam Grant, an organizational psychologist at the Wharton School and New York Times best-selling author of Originals. "Fellow creators are the best judges; colleagues have no risk aversion and, importantly, have nothing invested," says Grant.

Feedback must be brutally honest. Early in Annie Duke’s poker career, she got tough feedback. Duke recounts a time she was ‘resulting’ and bemoaning the bad luck she had on a particular hand. Her friend and eventual mentor said, “I don’t want to hear it. I’m not trying to hurt your feelings. I don’t have anything to say to you if you had no control over the outcome.” But in this instance, she had control over the outcome but wasn’t owning her mistake. Duke eventually formed a group of fellow players with a ‘truth-seeking charter’ to:

  1. Focus on accuracy over confirmation: reward truth-seeking;
  2. Hold one another accountable for behavior: no resulting;
  3. Stay open to a diversity of ideas: no good ideas or bad ideas.

While brutal feedback might be uncomfortable, “in the long run, the more objective person will win over the more biased person”, says Duke.

4. Think in probabilities.

“60% of the time, it works every time,” the character Brian Fantana famously said in the comedy movie Anchorman. While the math of this statement might be questionable, Fantana is thinking in probabilities. Probabilistic thinking is the attempt to estimate the likelihood of any outcome of a scenario using tools of statistics and math [1]. In a world of uncertainties, applying probabilistic thinking can lead to a better evaluative process and better decisions.

"The key is to accept the idea that life is uncertain (not black and white), and to think in probabilities," says Duke. "The less we know about a topic, or the more luck is involved, then the more uncertain we are." Duke suggests expressing uncertainty when discussing facts (dinosaurs were herd animals), predictions (is there life on other planets?), and future decisions (I think the company will be better off if I fire the president). By taking an objective stance and thinking in probabilities, Duke says, it is easier to downgrade your percent confidence than to swing from right to wrong, certain to uncertain, good to bad. Further, thinking in probabilities means you are less likely to succumb to motivated reasoning, a type of logic that looks for evidence to support a preformed notion. [Ed. Note: I'm currently 80% sure this is a good idea, but 20% sure I can pull this off.]

How does this work in practice? Duke tells of how military institutions use probabilistic thinking in 'scenario planning.' In this type of planning, all of the possible outcomes of a decision are considered (see point 1, again). Then, a scenario-planning decision tree lays out all possible outcomes and assigns a probability to each. Considering even the most unlikely outcomes allows the decision-makers to cover the whole space of possibilities, avoid looking only under the lamp post, and weigh the likelihood of each outcome. The scenario-planning exercise yields the expected probability of each outcome (by multiplying the probabilities of dependent scenarios) and a logic of evaluation for a decision.
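To make the arithmetic concrete, here is a minimal sketch of a scenario-planning tree in Python. The decisions, branches, and probabilities are all hypothetical; the only mechanism taken from the text is that a leaf scenario's probability is the product of the probabilities along its path.

```python
def leaf_probabilities(tree, path_prob=1.0, prefix=()):
    """Walk a nested dict of {step: (probability, subtree-or-None)} and
    return each complete scenario with its combined probability."""
    results = {}
    for step, (prob, subtree) in tree.items():
        scenario = prefix + (step,)
        combined = path_prob * prob
        if subtree:
            # Dependent scenarios: multiply probabilities down the branch.
            results.update(leaf_probabilities(subtree, combined, scenario))
        else:
            results[" -> ".join(scenario)] = combined
    return results

# Hypothetical decision: launch a product now vs. wait a quarter.
tree = {
    "launch now": (0.5, {
        "competitor reacts": (0.6, None),
        "no reaction":       (0.4, None),
    }),
    "wait a quarter": (0.5, {
        "market grows":   (0.7, None),
        "market shrinks": (0.3, None),
    }),
}

for scenario, p in sorted(leaf_probabilities(tree).items()):
    print(f"{p:.2f}  {scenario}")
```

A useful sanity check on such a tree is that the leaf probabilities sum to 1; if they don't, some outcome has been left out of the space.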

Human intuition about the statistical likelihood of a possible outcome is severely flawed. Studies in behavioral economics, notably by Amos Tversky and Daniel Kahneman, winner of the Nobel Prize in Economics in 2002 [2], showed that bias hijacks rationality and logic. One type of bias that intuition relies on heavily is the 'availability heuristic': a person judges the probability of an outcome based solely on the experiences that come most readily to mind. This is the 'availability' part. Intuition leads to miscalculations of true probabilities because human judgment is heavily distorted by memorable events. We cannot rely on intuition alone; we need statistics.

Evaluating often means that we cannot know the outcome of a decision in advance. "But it's about acknowledging that we're making a prediction about the future," says Duke. "It's OK to guess." Avoiding extremes, adopting probabilistic thinking, and using the language of uncertainty are steps away from bias and toward a better evaluative mind.

5. Learn from the process.

"Learning occurs when feedback is tied closely in time to decisions and actions," writes every Psychology 101 book ever. Although this is a heavily Pavlovian or Skinnerian view of the world, it is generally accepted that in order to learn you need consequences. Put a rat in a maze with cheese at the end; it then figures out how to run the maze faster. Learning. This loop of decisions, actions, outcomes, and rewards or punishments is how learning occurs.

Establishing a process of evaluation and decision-making that includes a step to evaluate your evaluating (very meta) means you can learn from your process and become better at it. Evaluating after a decision is often linked to resulting, because by then you know the outcome. Some of the best decision-makers evaluate the decision-making process before making a decision. How? By bringing your future self into the decision. Duke says that "future us can influence current us, based on past us decisions." Duke gives an example of a process she uses called 10-10-10. She asks herself: "What are the consequences of the current decision in 10 minutes? 10 months? 10 years?" This question cues her to recruit her future self into the current decision. For example, by moving regret in front of the decision, she will make a decision that avoids future regret. The key here is to think long-term. Learning from your past self's decisions means that you can anticipate what your future self would say about the current decision at hand.

One shortcut to do this is to make a Ulysses pact: a deal a past version of you makes with a current version of you to prevent a future version of you from making a poor decision. In Homer's Odyssey, the hero Ulysses and his crew must sail past the Sirens, beautiful women whose mesmerizing songs so captivate any men who hear them that they steer their ships straight into the island's rocks and drown. Ulysses devises a plan to hear the Sirens' song: he instructs his crew to tie him to the mast of his ship and to release him only once they've sailed past the island. He plugs his crew's ears with wax, allowing them to pass safely, and Ulysses becomes the only man to live to tell of the beautiful Sirens' song.

A Ulysses pact is essentially a personal rule you establish with yourself to avoid future regret. Ulysses made certain that no matter what he said to his crew in the moment, they would not set him free or let him take control of the ship. He wanted to avoid the certain future regret of that decision. Personal rules can be an easy heuristic for better decisions. For example: I don't schedule meetings before X time; I eat Y vegetables at every meal; I will save $Z each month. Rules don't remove evaluation from the decision-making process; instead, they recruit your future self into the current decision. If you want to protect more of your time, eat healthier, or plan for retirement, these contracts or rules are one way to make better current-day decisions for whatever you have deemed important in the future.

But you cannot have a rule for every decision you make, for every situation. It's more practical to establish a process to learn from decisions. One prime example of learning from decision-making is the observe–orient–decide–act (OODA) loop, developed by military strategist and United States Air Force pilot John Boyd. The OODA loop grew out of Boyd's practical concept for outmaneuvering enemy MiG-15 fighter jets in air-to-air combat in the Korean War. The four steps of the OODA loop are:

  1. Observe: collect information
  2. Orient: analyze and synthesize data to form a current mental perspective; this is a defining element of this model before deciding and acting
  3. Decide: determine the best action based on that perspective
  4. Act: do it

The most important step is after 4, which is to loop back to 1. It is called the OODA Loop for good reason. In fact, Boyd’s OODA Loop includes coming back to step 1 at any point in the process. According to Boyd, the key to victory [in dogfights] is to be able to create situations where you can make appropriate decisions more quickly than your opponent. While this construct was originally a theory for achieving success in air-to-air combat, it has since been applied to other fields, ranging from law and medicine to politics and sports, namely because it applies the structure of rational thinking in confusing or chaotic situations to make better decisions on how to act. The learning process of an evaluative mind happens before, during, and after the decision (and action), and is constantly updating.
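The four steps and the loop back to step 1 can be sketched as a simple control loop. This is a toy illustration, not Boyd's formulation: the "environment" here is an invented number-steering example, and all names are my own.

```python
def ooda_loop(observe, orient, decide, act, iterations=5):
    """Run observe -> orient -> decide -> act repeatedly, feeding each
    action's result back into the next observation (the 'loop' in OODA)."""
    state = None
    for _ in range(iterations):
        data = observe(state)        # 1. Observe: collect information
        perspective = orient(data)   # 2. Orient: form a current mental model
        action = decide(perspective) # 3. Decide: pick an action from the model
        state = act(action)          # 4. Act, then loop back to step 1
    return state

# Hypothetical environment: steer a reading toward a target value of 10.
target = 10.0
position = [0.0]  # mutable so act() can update the world

def observe(_):
    return position[0]

def orient(reading):
    return target - reading  # the gap between where we are and the target

def decide(gap):
    return gap / 2           # a cautious rule: close half the gap per cycle

def act(step):
    position[0] += step
    return position[0]

final = ooda_loop(observe, orient, decide, act, iterations=5)
print(final)  # converges toward 10.0, closing half the remaining gap each loop
```

The point of the sketch is the feedback edge: each pass re-observes the world the previous action changed, which is why looping back to step 1 matters more than any single step.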

The human mind is notoriously adept at misremembering what you thought at the time. Hindsight bias, as it is called, describes how we fail to accurately remember something in the past and apply a false narrative to it. While the human mind is capable of several impressive feats, mentally transporting ourselves back to the time and place to recall what we thought about a decision or an event is not one of them. Writing down your processes, such as in a ‘decision journal’, can overcome this cognitive shortcoming. Recording the process of evaluation and eventual decision allows us to time travel at some future date back to the decision. Did you explore all possibilities? How brutal was the feedback? What did you think the outcome would be? Were your probabilities accurate? With this record, you close the learning loop and get feedback on your evaluative process and decision. [Ed. Note: I’m currently at 30% that I will adopt this practice.]
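A decision journal can be as simple as a structured record. Here is a minimal sketch of one possible entry format in Python; the fields follow the questions above, but the names and structure are my own invention, not a prescribed template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionEntry:
    decision: str
    options_explored: list          # did you explore all possibilities?
    feedback_received: str          # how brutal was the feedback?
    predicted_outcome: str          # what did you think would happen?
    confidence: float               # 0.0-1.0: thinking in probabilities
    logged_on: date = field(default_factory=date.today)
    actual_outcome: str = ""        # filled in later to close the learning loop

# Hypothetical entry, recorded before the decision is made.
entry = DecisionEntry(
    decision="Accept the job offer",
    options_explored=["accept", "decline", "negotiate salary"],
    feedback_received="Mentor thinks I'm undervaluing remote work",
    predicted_outcome="More growth, longer hours",
    confidence=0.7,
)

# Months later: record what actually happened and compare to the prediction.
entry.actual_outcome = "Growth as expected; hours were fine"
```

Because the prediction and confidence were written down before the outcome was known, the later comparison is immune to hindsight bias: you are grading the process, not a reconstructed memory of it.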

An evaluative process is essentially how you make a decision among choices in the setting of unknowns, and it has two distinct steps: having a process for making the decision before you make it, and updating that process by re-evaluating the decision after you have made it. Very Donald Rumsfeld-ian.

Established processes are key to developing an evaluative mind and more robust decision-making. Expressions of certainty are a sign we are heading toward a poorly calibrated decision, such as the farmer's neighbors saying "Oh, that's too bad." Bad is a conclusion, not a rationale, and a particularly useless one, because it is too certain. Concluding something is bad also goes against the language of probability. Like the Chinese farmer knew, the only certainty is that uncertainty lies ahead, and the outcome is neither good nor bad, but thinking makes it so.

Further Reading

1. Mental Models on Probabilistic Thinking at Farnam Street

2. Daniel Kahneman had a long collaboration with Amos Tversky, and wrote that he believes that Tversky would have shared the Prize with Kahneman had Tversky not died prematurely in 1996 from metastatic melanoma.