By Sharmen Mir, Harvard College Class of 2013

We believe that we are in control: of our lives, of what we do, and of how we think and feel. We imagine that we must, of course, know ourselves quite well. However, researchers are only just beginning to understand the nebulous workings of the human mind, and it turns out that we do not know ourselves as well as we think we do. Economics Nobel laureate and psychologist Daniel Kahneman remarks, “Whether you state them or not, you often have answers to questions that you do not completely understand, relying on evidence that you can neither explain nor defend.” In his bestseller Thinking, Fast and Slow, Kahneman, in his fluid anecdotal style, elegantly weaves together the myriad research on thinking and decision-making, exposing many surprising ways in which we make our choices.

Kahneman probes the two different systems of thought that dominate our decisions and actions: (1) a fast-thinking, impulsive system, commonly known as intuition, which Kahneman refers to as System 1, and (2) a slower, more controlled and deliberate process of thinking, commonly known as reasoning, which Kahneman refers to as System 2. Kahneman, however, draws particular attention to the surprising influence of our fast thinking. For example, we are not always aware of the errors that we make in reasoning. These errors, called cognitive biases, affect the judgments we make in political elections, disputes, media coverage, investments, our daily interactions with other people, and even how we carry out research. It is System 1 that is the nucleus of Kahneman’s work and of many of the things that we do.

We see the bearing of our fast, impulsive mode of thinking most clearly with optical illusions. With the classic Müller-Lyer illusion, for example, Kahneman points out that even when we carefully reason out that the two lines are equal in length, we still cannot help but see what our instant System 1 shows us. Our System 1 can also misleadingly guide us to easy answers to difficult questions, instead of to more rational ones. In applying heuristics, or rules of thumb, we often approach a difficult question by substituting an easier one. For example, “Should I invest in company X?” might easily become “Do I like company X?” We tend to rely on our intuitions when deciding where to invest; we have the impulse to make the decision by examining our feelings and sentiments, rather than by consulting objective information about the company.

In one of the opening passages of the book, Kahneman makes the curious remark that we are natural grammarians but not natural statisticians, even the more statistically trained of us. Take, for example, Kahneman and Tversky’s fictive Tom W. Imagine that Tom W. is a student at our own college. Since there are many more economics concentrators than physics concentrators at our college, Tom W. is more likely to be one of the many students in economics. The actual base rate probability that Tom W. is concentrating in economics rather than physics is high. However, suppose you are given the following description of Tom W., and are told that the description may or may not be true:

“Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to feel little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.”

(Kahneman and Tversky, 1973)

What might we now imagine? Certainly no longer the gregarious economics concentrator we had first pictured! After reading this description, we might think that Tom W. is more likely to be concentrating in physics than in economics, even though the actual chance of his being an economics concentrator is higher. Notice that we instantly narrow our perception of Tom according to what we see, in spite of the more reliable base rates and the stated unreliability of the description.
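To see how the base rate ought to enter such a judgment, here is a minimal sketch of the Bayesian bookkeeping in Python. The enrollment figures and the degree to which the description favors a physics student are made up purely for illustration; they are not Kahneman and Tversky’s numbers.

```python
# Hypothetical base rates and a hypothetical guess at how much more
# "Tom W.-like" physics concentrators are than economics concentrators.
base_rate_econ = 0.20      # assume 20% of students concentrate in economics
base_rate_physics = 0.02   # assume 2% concentrate in physics

p_description_given_physics = 0.40  # assumed fit of the description to physics
p_description_given_econ = 0.10     # assumed fit of the description to economics

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = base_rate_physics / base_rate_econ                             # 0.1
likelihood_ratio = p_description_given_physics / p_description_given_econ   # 4.0
posterior_odds = prior_odds * likelihood_ratio                              # 0.4

print(f"Posterior odds (physics : economics) = {posterior_odds:.2f}")
# Even a description that fits physics four times better cannot overturn a
# sufficiently lopsided base rate: economics remains the better bet.
```

Under these made-up numbers, the base rate still dominates; it is exactly this term that our System 1 quietly drops.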

A description of “Linda” serves as another excellent case in point. The Linda case was first presented in a study by Tversky and Kahneman (1983). In the study, Linda was described as follows:

“Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.”

(Tversky and Kahneman, 1983)

When participants in the study were asked whether it is more probable that Linda is a bank teller, or more probable that Linda is a bank teller who is active in the feminist movement, the vast majority of them (85%) picked the second option. In actuality, however, the probability of Linda being both a bank teller and a feminist is lower than the probability of Linda being a bank teller in general, for the same reason that the probability of a coin landing heads twice in a row is lower than the probability of its landing heads at least once in two tosses. In judging the occupation of this imaginary individual, we tend to ignore the actual probability of an outcome in favor of a stereotypical description, regardless of the validity or reliability of the information. Because of the conjunction fallacy, the so-described active and outspoken Linda, illogically enough, will always seem more likely to be a feminist bank teller than a bank teller. Our impulsive System 1 instantly makes a coherent story out of just what we see.
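The conjunction rule itself takes only a few lines to check. The probabilities assigned to Linda below are invented for illustration; only the inequality matters.

```python
# Illustrative, made-up probabilities for the Linda problem.
p_bank_teller = 0.05                     # P(Linda is a bank teller)
p_feminist_given_teller = 0.30           # P(feminist | bank teller), assumed

# A conjunction can never be more probable than either of its parts.
p_teller_and_feminist = p_bank_teller * p_feminist_given_teller   # 0.015
assert p_teller_and_feminist <= p_bank_teller

# The coin analogy: "two heads" is a special case of "at least one head".
p_two_heads = 0.5 * 0.5                  # 0.25
p_at_least_one_head = 1 - 0.5 * 0.5      # 0.75
assert p_two_heads <= p_at_least_one_head
```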

Because our occasionally more scrutinizing System 2 is prone to lazy passivity, often “what you see is all there is,” or, by Kahneman’s acronym, WYSIATI. We easily ignore the information that we do not have.

Disturbingly, WYSIATI may be all we want there to be. Under what Kahneman calls the “law of small numbers,” for example, researchers can mistakenly assume that the distribution of properties they observe in a small sample is representative of the distribution in the larger population from which the sample was drawn. This is especially tempting when the data from the small sample fit into a coherent story that supports the hypothesis.

The reason small samples are often not representative of larger phenomena is that sampling error exists. For example, if a bag contains 10 red marbles and 10 blue marbles, the real chance of picking a red marble is 50%. However, if you pick only 2 marbles out of the bag, there is roughly a one-in-four chance (about 24%) that both marbles will be red. In other words, it is quite likely that by looking at a small sample of marbles, you will end up incorrectly assuming that the marbles in the bag are mostly, or even all, red. To minimize sampling error, therefore, larger samples must be drawn, or data from many different small samples must be pooled and analyzed with appropriate statistics.
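A quick simulation makes the marble arithmetic concrete. The sketch below repeatedly draws two marbles without replacement from the 10-red, 10-blue bag and counts how often both come up red; the trial count is arbitrary.

```python
import random

def draw_two_red(trials: int = 100_000) -> float:
    """Estimate the chance that 2 marbles drawn without replacement
    from a bag of 10 red and 10 blue marbles are both red."""
    bag = ["red"] * 10 + ["blue"] * 10
    both_red = 0
    for _ in range(trials):
        sample = random.sample(bag, 2)     # draw 2 without replacement
        if sample == ["red", "red"]:
            both_red += 1
    return both_red / trials

# Exact value: (10/20) * (9/19) ≈ 0.237, i.e. roughly one chance in four.
print(f"Simulated: {draw_two_red():.3f}  Exact: {10/20 * 9/19:.3f}")
```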

The reach of our System 1 is particularly alarming in other ways. For example, in one experiment, posters depicting a pair of enlarged eyes above a donation box measurably altered how much people donated, compared with when the posters were absent. Another example of what happens when we unconsciously rely on our intuitions is the anchoring effect, which occurs when people are influenced by numbers to which they just happen to have been exposed. One study showed that judges unknowingly tend to mete out prison sentences close to the numbers they had randomly rolled on dice shortly beforehand.

Also, as a result of what is called the halo effect, we tend to extend our impressions of people’s appearances to their character, even though how a person looks is not always related to their behavioral and moral qualities. We anticipate, for example, that a speaker’s talk will be better simply because he or she appears more confident.

We also have an optimism bias: we tend to believe that bad things that happen to others are less likely to happen to us. The optimism bias can lead to the planning fallacy, in which we overestimate the benefits of projects or inadequately weigh their costs, and ultimately spend staggering amounts of time and money on projects that turn out to be fruitless. Kahneman describes his own dogged pursuit of a curriculum project years ago, a textbook for which the base rate of success in comparable cases was abysmal and whose costs ultimately far exceeded its gains.

Such observations are certainly disconcerting. It may be even harder to accept the troubling reality of what Kahneman dubs “an industry built on chance.” Kahneman describes the often purely statistical nature of the successes of various businesses, and accordingly criticizes the rewarding of illusory skill in certain financial decisions.

This is perhaps Kahneman’s most controversial point, and readers might be inclined to take it with a grain of salt. After all, one might argue that, through extensive study of markets and businesses, expert eyes working in System 2 mode should be able to recognize and take into account the statistical forces that overwhelm human effort in the rise and fall of companies. Regardless of where one lands in that debate, the end lesson is that we tend to undervalue or altogether neglect the role of chance in matters over which we imagine we have more control.

In fact, we do not naturally think statistically, nor are we swayed by statistics even when they are explicitly given to us. For example, in a 1970s experiment by Richard Nisbett and Eugene Borgida, participants were told the “base rate” from an earlier helping study: the vast majority of its subjects had not come to the aid of a stranger apparently suffering a seizure. Yet when these participants were shown clips of two of the subjects, both of whom came across as generally friendly, viewers automatically assumed, despite the low base rate, that the two had been among the helpful ones. In other judgments, such as of the gravity or frequency of an issue, we apply the availability heuristic. We do not consult actual base rates of earthquakes in dear old Cambridge, for example, but if such a rarity transpired in the recent past, we would tend to assume a greater risk of natural disaster on campus than statistical reality indicates. Similarly, our take on issues and their importance is molded by how frequently and seriously the media chooses to cover them, whether or not that coverage reflects statistical reality.

Of course, we should not forget the real value of our System 1. Thinking, Fast and Slow merely illustrates quirks of our behavior that may not be readily apparent. Certainly, our System 1 is nimble enough to meet our practical quotidian needs. It is skilled at tapping into emotions, finding quick answers to urgent questions, and handling the daily mental tasks that would be unreasonable for our slower System 2 to tackle. It is our System 1, oddly enough, that makes us prudently loss averse, that is, more averse to losses than we are drawn to comparable gains.

Loss aversion is a component of prospect theory, the work of Daniel Kahneman and the late Amos Tversky for which Kahneman was awarded the Nobel Memorial Prize in Economics in 2002. Prospect theory has greatly influenced our current understanding of behavioral economics, and it builds on two older ideas about modeling decision-making under uncertainty: weighted averages and Bernoulli’s utility theory.

To determine the value of a gamble by weighted averages, we consider, for each possible outcome, the probability of that outcome multiplied by its value; the value of the gamble is the sum of these products over all possible outcomes. So a gamble offering a 50% chance of gaining $20 and a 50% chance of losing $20 takes a value of 0, but would we really be indifferent between taking and refusing this gamble? Bernoulli’s utility theory considers the utility of wealth rather than the objective valuation of weighted averages. While his idea of using utility, or satisfaction, improved upon the application of weighted averages to real situations, it cannot explain why a person with a wealth of $110 who loses $10 might not be as happy as a person with a wealth of $90 who gains $10. According to Bernoulli, since their final states of wealth are the same, they should be equally happy. This is where prospect theory comes in, to account for the missing link between objective states of wealth and our feelings about them.
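Both of the older ideas can be written out in a few lines. The sketch below computes the weighted-average value of the $20 gamble and then applies a logarithmic utility of wealth, which was Bernoulli’s own proposal, to the two individuals described above; the dollar figures are simply the ones in the paragraph.

```python
import math

# Weighted average (expected value) of the gamble:
# a 50% chance of +$20 and a 50% chance of -$20.
expected_value = 0.5 * 20 + 0.5 * (-20)
print(f"Expected value of the gamble: ${expected_value:.2f}")   # $0.00

# Bernoulli-style utility of wealth (logarithmic).
def utility(wealth: float) -> float:
    return math.log(wealth)

# Person A starts at $110 and loses $10; Person B starts at $90 and gains $10.
final_a = 110 - 10
final_b = 90 + 10
print(utility(final_a) == utility(final_b))   # True: identical final wealth,
# so utility theory says they should be equally happy, yet the loser plainly
# feels worse. That gap is what prospect theory sets out to fill.
```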

While heuristics and biases already reveal that people are not so rational, prospect theory formally assails the economist’s particular assumption of the rational agent. Shifting the focus from absolute states of wealth to changes in wealth more accurately captures the decisions we make in real life. In our previous example, though the two individuals end with the same wealth, the $10 loss is still a loss and the $10 gain is still a gain. We dislike losses more than we like comparable gains, and when the alternative is a sure loss, we become risk seeking.

Prospect theory captures other realities of our choices that its predecessors miss. In the domain of gains, we become risk averse, avoiding gambles in favor of sure gains even when the gamble offers a higher expected return. In retrospect, the sharp intuitive appeal of evaluating changes in wealth rather than absolute states only accentuates how unrealistic the traditional economist’s taken-for-granted rational agent has long been!
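A rough sketch of the prospect-theory value function shows both features at once: losses loom larger than equivalent gains, and the curvature makes us risk averse over gains but risk seeking over losses. The exponent and loss-aversion coefficient below are the estimates commonly cited from Tversky and Kahneman’s later work, used here purely for illustration; the probability-weighting part of the theory is omitted for simplicity.

```python
# Prospect-theory value function over gains and losses (changes in wealth).
ALPHA = 0.88   # diminishing sensitivity to both gains and losses
LAMBDA = 2.25  # loss-aversion coefficient: losses weigh about 2.25x gains

def value(x: float) -> float:
    """Subjective value of a change of x dollars (a gain or loss, not total wealth)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# Loss aversion: a $100 loss hurts more than a $100 gain pleases.
print(value(100), value(-100))           # ~57.5 vs ~-129.4

# Risk attitudes: a sure $500 gain is valued above a 50/50 shot at $1,000...
print(value(500), 0.5 * value(1000))     # ~237 vs ~218  -> risk averse in gains
# ...but a sure $500 loss is valued below a 50/50 shot at losing $1,000.
print(value(-500), 0.5 * value(-1000))   # ~-534 vs ~-491 -> risk seeking in losses
```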

The quirks of the mind do not end here. Kahneman continues with a crown topic of interest: happiness. The findings here, too, are startling. For example, patients looking back on two colonoscopies of comparable peak pain will prefer the one that lasted longer, provided it ended more gently. This might at first seem masochistic, but patients are actually judging the experience by the peak-end rule: what we remember is driven by the worst moment and the final moments of pain, while the total duration is largely neglected. When the longer procedure ends less painfully than the shorter one, we remember it as the lesser ordeal and prefer it the next time around. Of course, in the moment, we would all prefer the shorter procedure. Ironically, our experiencing self and our remembering self do not share the same interests.
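A toy computation shows how the peak-end rule can favor the longer procedure. The per-minute pain ratings below are invented for illustration, and the “remembered” score is simply the average of the worst moment and the final moment, with duration ignored.

```python
# Hypothetical per-minute pain ratings (0-10) for two procedures.
short_procedure = [4, 7, 8]            # 3 minutes, ends at its most painful point
long_procedure = [4, 7, 8, 5, 3, 1]    # 6 minutes, tapers off gently at the end

def remembered_pain(profile: list[int]) -> float:
    """Peak-end rule: memory tracks the worst moment and the final moment,
    largely neglecting how long the experience lasted."""
    return (max(profile) + profile[-1]) / 2

print(remembered_pain(short_procedure))  # 8.0
print(remembered_pain(long_procedure))   # 4.5 -- remembered as less bad,
# even though the longer procedure contains strictly more total pain.
```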

Thinking, Fast and Slow deserves a careful read by the 21st-century audience, and certainly by psychology, economics, and statistics concentrators and thinkers alike. For the more eager mind, Kahneman encloses in his appendices his original scholarly papers co-written with Amos Tversky, which formally detail their work on heuristics, biases, and prospect theory. Altogether, Kahneman’s book is, at its finest, a wealth of powerful insights, and it will remain a luminous contribution to the layman’s understanding of thinking for years to come. In contextualizing academic research in applicable, practical terms, it offers everything for the critically liberal mind and at least something for the diehard skeptics.


Sources

Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011. Print.

Kahneman, Daniel and Amos Tversky (1973) “On the Psychology of Prediction,” Psychological Review, 80, 237-251.

Tversky, Amos and Daniel Kahneman (1983) “Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review, 90, 293-315.
