When Sunk Costs Make Us Happier
Sunk costs are non-recoverable investments of time or money. Since sunk costs have already been spent (i.e., we can't re-spend the $100 that we already spent on non-refundable soccer tickets), it's irrational to include them in our decision making. The fact that we spent $100 on soccer tickets should not influence whether we subsequently attend the game; we spent the $100 regardless of our future choice. However, we frequently make decisions based on sunk costs; we decide to attend the cold and rainy soccer game even though we'd prefer to watch it from the comfort of our home, simply because we feel attached to the $100 we already spent. This error is called the sunk cost fallacy.
While sunk costs often lead to bad decisions (e.g., over-investing in a dead-end project, a failing relationship, or a boring book), we can harness this bias for good. A few obvious examples come to mind:
Gym: Even though the gym membership is a sunk cost, some people visit the gym more frequently in order to lower the effective unit cost. While using the gym more often “because I paid for it” is irrational, this sunk cost-based thinking likely leads to a positive outcome.
Attire: After investing in an expensive article of clothing, such as a tuxedo, a tailored suit, or a designer purse, we often look for occasions to make the most of our purchase. Perhaps we attend more parties, meet new people, feel more confident, or receive more compliments as a result. There's little harm in falling prey to the sunk cost fallacy in this way.
Vacation Properties: Once we spend money on a timeshare or a vacation home, we may feel a need to vacation more in an attempt to make the most of our investment. Given that most vacation days go unused, the sunk cost fallacy might nudge us in a normatively positive direction.
Our biases around sunk costs suggest a general strategy: make investments in domains that steer us toward behaviors we normatively prefer. By tapping into our deep tendency to feel drawn to sunk costs, we can potentially harness our biases to help achieve our goals.
In general, we try to make decisions that we think will maximize our expected utility. Typically, we conceptualize expected utility as a property resulting from the product we buy - i.e., we buy chocolate cake because it tastes good.
So why do people buy booger-flavored jelly beans?
To answer this question, we have to revisit how we think about utility. In addition to the utility of a product itself, we might also derive utility from the process of buying it. In other words, we buy booger-flavored jelly beans not because they taste good, but because they are fun to buy.
In fact, there's a class of products that offer little utility in themselves, but from which we derive some utility through the process of buying. It's also likely that the fleeting utility we experience in the process of buying these products leads us to mispredict the utility we'll subsequently experience when we consume or use them. I'd bet that after a few bites of booger-flavored jelly beans, the rest will sit in the pantry for months before they're finally discarded.
The gap between these two forms of utility means that at least some of the products we enjoy buying the most are also the ones we’re most likely to regret. Buyer beware!
All choices involve tradeoffs. Nowhere is this more evident than in the realm of medical decision making, where we sometimes face tradeoffs that require us to value a specific health outcome or capability.
For example, you might decide whether to start a course of treatment for low back pain that has some probability of alleviating your symptoms, but will be accompanied by potential side effects, such as weight gain or nausea. These are difficult tradeoffs, especially since we often have little prior experience to aid us in weighing each factor. While we often know our favorite foods - i.e., we know the relative utility of different options for situations we encounter frequently - we have trouble assigning utilities and making decisions about our desired health outcomes when we've never had to make such tradeoffs in the past.
Economists often assess individual utilities through a method called the standard gamble, which derives from expected utility theory. In a standard gamble, we present an individual with a simplified choice between (1) a known health state, such as the continuation of low back pain, and (2) a treatment option that carries some probability of a better outcome and some probability of a worse outcome. For simplicity, we often frame the treatment option as a probability of a permanent cure vs. a complementary probability of death.
Here's an example from a recent experiment I ran: "Imagine that today you have been diagnosed with a disease that will lead to total blindness in both eyes with 100% certainty. There is a drug that you can take that will prevent your blindness with 100% certainty. However, the drug comes with a risk of immediate death. You can decide whether to take the drug. If you don't take it, you will definitely lose your vision in the next month. If you take it, you will not become blind, but there is some risk that you will immediately die. What is the largest risk of death that you would accept in order to take the drug?"
The logic is simple: at your point of indifference, the largest risk of death you'd accept equals the utility you place on vision. If you'd accept at most a 20% risk, then on a scale where death is 0 and full health is 1, life with blindness carries a utility of 0.8, and the remaining 0.2 is the utility of vision itself.
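To make the arithmetic concrete, here's a minimal Python sketch of that calculation. The function name and the 20% figure are illustrative, not taken from a real elicitation:

```python
# Standard gamble: infer the utility of a health state from the largest
# risk of death a respondent will accept to avoid it. Scale: death = 0,
# full health (intact vision) = 1. The 20% figure below is illustrative.

def health_state_utility(max_acceptable_death_risk: float) -> float:
    """At the point of indifference, the utility of the certain state
    equals the expected utility of the gamble:
        U(state) = p * U(death) + (1 - p) * U(full health) = 1 - p
    where p is the largest acceptable risk of death."""
    p = max_acceptable_death_risk
    if not 0.0 <= p <= 1.0:
        raise ValueError("risk must be a probability between 0 and 1")
    return 1.0 - p

# A respondent who accepts at most a 20% risk of death values life with
# blindness at 0.8, so vision itself carries the 0.2 they would stake.
u_blindness = health_state_utility(0.20)
print(u_blindness)                      # 0.8
print(round(1.0 - u_blindness, 2))      # 0.2, the utility of vision
```

Note that the whole method rests on the indifference point: any risk below it means the respondent takes the drug, any risk above it means they refuse.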
The standard gamble is a useful method to determine the utility of various health states and can aid individuals in making difficult decisions. Like all methods, the standard gamble suffers from several pitfalls, which we'll explore in the future.
As I wrote in an earlier post, there are multiple forms of rationality. When psychologists find deviations from rationality, they usually refer to violations of rational choice axioms - i.e., procedural rationality. There are multiple ways to think about procedural rationality, but the most common axioms are derived from utility theory. In the literature, these axioms are expressed in first-order logic, which can be difficult for the casual observer to unpack. To make things simpler, I've explained the five most important axioms using visuals.
Weak stochastic transitivity
Weak stochastic transitivity demands that preferences be ordered and internally consistent. For example, if you prefer bananas over apples, and apples over oranges, then you must prefer bananas over oranges under the same conditions. Violating weak stochastic transitivity - e.g., choosing the orange instead of the banana - is irrational because you should always pick the option that maximizes utility. (The curved greater than symbol is best read as "is preferred to" in the diagram below.)
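For readers who prefer code to diagrams, here's a small Python sketch of the weak stochastic transitivity check. The pairwise choice probabilities are made up for illustration:

```python
# Weak stochastic transitivity over pairwise choice probabilities.
# P[(x, y)] is the probability of choosing x over y; values illustrative.

def weakly_transitive(P, a, b, c) -> bool:
    """If a beats b and b beats c at least half the time, then a must
    beat c at least half the time."""
    if P[(a, b)] >= 0.5 and P[(b, c)] >= 0.5:
        return P[(a, c)] >= 0.5
    return True  # premise not met, so the axiom imposes no constraint

P = {
    ("banana", "apple"): 0.7,
    ("apple", "orange"): 0.6,
    ("banana", "orange"): 0.8,
}
print(weakly_transitive(P, "banana", "apple", "orange"))  # True

# Choosing the orange over the banana most of the time is a violation.
P[("banana", "orange")] = 0.4
print(weakly_transitive(P, "banana", "apple", "orange"))  # False
```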
Strong stochastic transitivity
Strong stochastic transitivity requires consistency in the strength of the preference ordering. For example, if I prefer bananas over apples, and apples over oranges, then the strength of my preference for bananas over oranges (the relationship implied by transitivity) should be equal to or greater than the strength of either of the other two preferences - bananas over apples, and apples over oranges. (In the diagram below, 5 represents the strength of preference.)
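The same check in Python, with the conclusion strengthened from "at least half the time" to "at least as strongly as either premise." Again, the probabilities are made up for illustration:

```python
# Strong stochastic transitivity: if a is chosen over b and b over c at
# least half the time, then a must be chosen over c at least as strongly
# as in either of those two comparisons. Probabilities are illustrative.

def strongly_transitive(P, a, b, c) -> bool:
    if P[(a, b)] >= 0.5 and P[(b, c)] >= 0.5:
        return P[(a, c)] >= max(P[(a, b)], P[(b, c)])
    return True  # premise not met, so the axiom imposes no constraint

P = {
    ("banana", "apple"): 0.7,
    ("apple", "orange"): 0.6,
    ("banana", "orange"): 0.8,  # >= max(0.7, 0.6), so the axiom holds
}
print(strongly_transitive(P, "banana", "apple", "orange"))  # True

# A weakly-but-not-strongly transitive pattern: banana still beats
# orange (0.65 > 0.5), but less strongly than banana beats apple.
P[("banana", "orange")] = 0.65
print(strongly_transitive(P, "banana", "apple", "orange"))  # False
```

The second example is exactly the gap between the two axioms: it passes the weak version but fails the strong one.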
Independence of irrelevant alternatives
The independence of irrelevant alternatives axiom holds that a preference ordering must not change when the choice set is enlarged. For example, imagine that you are given the choice between a banana and an apple. Assume you prefer the banana. Now, imagine that I add an extra option, an orange. Regardless of how you feel about oranges, you can't suddenly prefer apples over bananas. Adding an orange to the original banana-and-apple choice set is irrelevant to your preference between bananas and apples. Whether you prefer apples or bananas is strictly a comparison between those two options, so your preference between them must be independent of other options, such as the orange.
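Here's a minimal Python sketch of that consistency check, representing preferences as ranked lists (the fruits and orderings are, of course, illustrative):

```python
# Independence of irrelevant alternatives: enlarging the choice set must
# not flip the relative order of two existing options. Rankings run from
# most to least preferred; the fruits and orderings are illustrative.

def prefers(ranking, x, y) -> bool:
    """True if x appears before (is preferred to) y in the ranking."""
    return ranking.index(x) < ranking.index(y)

small_set = ["banana", "apple"]
large_set = ["banana", "orange", "apple"]  # orange added to the menu

# IIA holds iff the banana-vs-apple comparison is unchanged.
consistent = prefers(small_set, "banana", "apple") == prefers(large_set, "banana", "apple")
print(consistent)  # True: adding the orange didn't flip the order
```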
Regularity
Regularity, sometimes referred to as property α, is a stronger form of the independence of irrelevant alternatives. It's best to think about regularity at the market level. Regularity holds that the absolute preference for an option cannot increase when options are added to the choice set. For example, imagine a market composed only of bananas and apples, and assume that 60% of sales are for bananas and 40% are for apples. Now imagine we add oranges to this market. Regularity holds that the absolute number of bananas sold should not increase beyond its level in the pre-orange market. With the addition of oranges, the absolute sales of apples and bananas should hold steady or decline, but never increase.
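A quick Python sketch of the regularity check, expressed in absolute units sold; the sales figures are made up:

```python
# Regularity: adding an option cannot increase the absolute number of
# units any existing option sells. All counts are illustrative.

def satisfies_regularity(before: dict, after: dict) -> bool:
    """Every option present before the expansion must sell no more
    (in absolute units) after the new option is introduced."""
    return all(after.get(opt, 0) <= n for opt, n in before.items())

before = {"banana": 60, "apple": 40}            # 100 total units
after = {"banana": 55, "apple": 30, "orange": 15}
print(satisfies_regularity(before, after))      # True: both declined

violating = {"banana": 65, "apple": 20, "orange": 15}
print(satisfies_regularity(before, violating))  # False: bananas rose
```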
The constant-ratio rule
The constant-ratio rule is a weaker form of the independence of irrelevant alternatives. It demands that the ratio of preferences between choices (i.e., relative preferences) hold constant as the choice set expands. For example, in a market with bananas and apples where bananas are preferred to apples at a 1.5x ratio, the addition of oranges should not change that relative preference ratio.
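And a sketch of the constant-ratio check in Python, this time over market shares rather than absolute units; the shares are made up:

```python
# Constant-ratio rule: the ratio of choice shares between two existing
# options must stay the same when a new option is added. Shares illustrative.

def ratio(shares: dict, x: str, y: str) -> float:
    return shares[x] / shares[y]

before = {"banana": 0.60, "apple": 0.40}                 # banana:apple = 1.5
after = {"banana": 0.45, "apple": 0.30, "orange": 0.25}  # banana:apple = 1.5

# The orange draws share from both fruits proportionally, so the
# banana/apple ratio is preserved and the rule is satisfied.
print(abs(ratio(before, "banana", "apple") - ratio(after, "banana", "apple")) < 1e-9)  # True
```

This also clarifies the ordering of the three variants: regularity implies IIA, which in turn is implied by the constant-ratio rule holding exactly.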
Defining rationality is challenging, but Nobel Prize-winning economist Herbert Simon provided us with three ways to think about this important topic.
Substantive rationality concerns the degree to which a decision maximizes utility - or, from an evolutionary biology perspective, the degree to which it maximizes reproductive fitness. Evaluating substantive rationality requires a comparison between the content of a decision and the desired outcome. When economists speak of rationality, they’re usually referring to substantive rationality.
While substantive rationality is simple in concept, it’s nearly impossible to measure in practice. For example, we can’t quite answer the question that substantive rationality demands: “Given all the choices you could have made, was eating a banana the best one?” Still, substantive rationality is the logical theoretical yardstick for evaluating decisions.
In contrast, procedural rationality concerns the degree to which a decision is the outcome of a reasonable computational process. Procedural rationality requires consistency and coherence of preferences and choices - a topic I'll write more about later. For example, a choice that is procedurally rational should abide by transitivity: if I prefer bananas to apples, and apples to oranges, then I should prefer bananas to oranges.
Economists generally ignore procedural rationality and assume that we'll abide by procedural axioms, such as transitivity. However, decades of research tells us that we break the fundamental rules that underlie procedural rationality: we resort to heuristics, fall prey to biases, and exhibit inconsistent preferences. As a result, psychologists are acutely interested in procedural rationality as a benchmark against which we can measure our systematic decision biases.
It's important to realize that our inability to abide by procedural rationality isn't the only roadblock to substantive rationality. Even if we could achieve procedural rationality, the option space is often so vast that we cannot make an optimal decision - i.e., it is computationally intractable. For example, there are about 10^120 possible chess games, a number vastly larger than the number of atoms in the observable universe (roughly 10^80). With this understanding, we arrive at Simon's concept of bounded rationality, which is defined as maximizing within the constraints of limited knowledge, limited time, and limited cognitive processing power.
While no specific criteria exist to determine whether a decision is boundedly rational, it’s a helpful concept that frees us from the unattainable and unhelpful standards derived from neoclassical economics. Most importantly, it reminds us that when we discover departures from procedural rationality, we shouldn’t sound the alarm bells and decry the foibles of the mind. Sometimes we simply reach the limits of the universe.