* * * * *
Modules
What they are. "A module is an information-processing mechanism that is specialized to perform some function." One can think of modules as "little programming subroutines, or maybe individual iPhone applications." The human mind consists of many different modules.
Why they're advantageous. Because "specialization yields efficiency." We see examples of this in engineering, computer science, economics, and biology. We see how specialization benefits many animals -- e.g., look at the many different types of finch beaks -- and how it benefits the human eye: "vision scientists, in no small part driven by the many discoveries over the last several decades, have looked for and found many mechanisms with exquisitely specialized and narrow functions."
How they're activated. "Different modules come online and go offline at different times, having more or less influence depending on the situation." For instance, if I encounter an attractive potential mate, my mate-acquisition module comes online, and I might find myself trying to impress this person. If I instead encounter a predator, my predator-evasion modules will be activated, and I'll find myself running.
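Since Kurzban himself compares modules to "little programming subroutines," the activation idea can be sketched in code. This is purely an illustrative analogy, not anything from the book: the function names, cue strings, and dispatch table below are all invented for illustration.

```python
# Illustrative sketch only: modules as specialized subroutines that
# "come online" depending on the situation. All names are invented.

def mate_acquisition(situation):
    # Specialized routine: activates around potential mates.
    return "try to impress"

def predator_evasion(situation):
    # Specialized routine: activates around predators.
    return "run away"

# Each module is paired with the situational cue that brings it online.
MODULES = {
    "attractive potential mate": mate_acquisition,
    "predator": predator_evasion,
}

def respond(situation):
    """Dispatch to whichever specialized module the situation activates."""
    module = MODULES.get(situation)
    return module(situation) if module else "no module activated"

print(respond("predator"))                   # run away
print(respond("attractive potential mate"))  # try to impress
```

The point of the analogy is that there is no central "self" function here, only a collection of narrow specialists, each triggered by its own cues.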
* * * * *
Evidence for Modules
Kurzban argues that the existence of modules best explains a range of otherwise puzzling phenomena. First, he points to Gazzaniga and LeDoux's studies on split-brain patients to show that it is possible for information to be gained by one part of the brain but not the other. Second, he points to Jonathan Haidt's moral dumbfounding experiments to show that we believe certain things are right/wrong without knowing why we have these beliefs; in other words, parts of our brain hold a moral belief while other parts do not. Third, Kurzban points to some other facts about humans that are best explained by positing the existence of modules.
Self-Deception. One's ability to form friendships and alliances no doubt played a large role "in reproductive success over human evolutionary history." To form such bonds, it was important to convince others that you had valuable qualities -- e.g., that you were a good hunter, good friend, good mate, etc. And to convince others that you have valuable qualities, it's helpful to believe that you possess those qualities yourself (even if you don't), because by believing that you possess those qualities, you're more likely to act as though you possess them, which in turn can persuade others that you do in fact possess them. And indeed, evidence shows that people tend to "(1) think they have more favorable traits than would be realistic, (2) think they have more control over what will occur than they do, and (3) are more optimistic about the future than facts justify."
Kurzban gives the example of a man named Fred who has terminal cancer. "When people ask [Fred] how he feels, he tells them that he is going to surprise the doctors and pull through, making a full recovery. In fact, he's so sure that he is going to be fine that he says he doesn't even need the painful treatments once a week." Fred says that he'll go ahead with the treatments just to appease his sister, a statement which seems to indicate that on a subconscious level he knows that he alone cannot conquer the disease.
So what's going on with Fred? Some argue that people in such situations often engage in self-deception in an attempt to "protect the self." Kurzban rejects this explanation because natural selection designed our modules "to bring about outcomes that contribute to reproductive success," not to make us happy.
Tons of research has been done on self-esteem, and there's no evidence that having high self-esteem is a major predictor of anything. Kurzban emphasizes that, from natural selection's standpoint, feeling good about oneself is a purely instrumental good. He adopts the view of researchers Mark Leary and Deborah Downs. He writes:
They developed what they call “sociometer theory.” They liken self-esteem to a measurement tool, like a fuel gauge. When your gas tank is empty, they reason, you don't want to solve that problem by taking your finger, sticking it in the gas gauge, and moving the meter from empty to full. Just manipulating the gauge wouldn't do much. Rather, you want to, you know, fill the tank. This will have the effect of moving the gauge because it measures how full the tank is.
Self-esteem, they argue, is like a gauge. It's measuring how well you're doing, socially. Do people like you? Do they value you? Are you included in different social groups? Are they ones you want to belong to? Do you have a lot of Facebook friends? Do they comment on your status message? Leary and Downs argue that self-esteem is a measurement tool that is keeping track of the state of your various interpersonal relationships. When you're not valued, the meter is low, and you feel bad. When you are valued, the meter is high, and you feel good.
On this view, the reason it looks like people are trying to raise their self-esteem is that they're really trying to do something else (having to do with the world outside rather than inside one's head) -- in particular, to become more valuable to others -- which, if successful, will have that effect.
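The causal direction in sociometer theory can be made concrete with a small sketch. This is my own illustration, not Leary and Downs's formulation: the class and method names are invented, but the design choice captures their claim that the gauge can only be read, not set.

```python
# Illustrative sketch only: self-esteem as a read-only gauge over one's
# social relationships (Leary and Downs's "sociometer"). Names invented.

class Sociometer:
    def __init__(self):
        self.valued_by = set()  # people who currently value you

    @property
    def self_esteem(self):
        # A read-only property: like a fuel gauge, it only *measures*
        # the underlying state and cannot be pushed to "full" directly.
        return len(self.valued_by)

    def become_more_valuable_to(self, person):
        # The only way to move the gauge: change the world outside
        # your head, i.e., actually become valued by someone.
        self.valued_by.add(person)

meter = Sociometer()
print(meter.self_esteem)  # 0 -- not valued, the meter reads low
meter.become_more_valuable_to("ally")
print(meter.self_esteem)  # 1 -- valued, the meter rises
```

Making `self_esteem` a property with no setter is the whole point: trying to assign to it raises an error, just as sticking a finger in the fuel gauge doesn't fill the tank.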
So Fred is not self-deluded in an attempt to feel better about himself. Rather, different modules believe different things for strategic reasons. His unconscious self-preservation module knows that his disease is serious and urges him to get treatment, but his public-relations module tells the lie about his cancer being transitory "to persuade others that he's still a good investment." It's unlikely that "people would have completely abandoned Fred if they thought his death was imminent," but "in a highly competitive world, people can be expected to spend their limited resources on people who will be around to give something back, one way or another; Fred's PR system is designed to make some marginal difference."
Self-Control. Kurzban argues that people don't have stable preferences. For instance, I can't say that I prefer coffee over wine, as I might prefer coffee in some contexts (e.g., in the morning) and wine in other contexts (e.g., in the evening). People's preferences also change depending on how those preferences are measured. When people in one study, for instance, were asked to choose between $6 and a fancy pen, one-third of participants chose the pen. When another group was asked to choose between $6, a fancy pen, and an inferior pen, nearly half of participants chose the fancy pen. "If context changes preferences," Kurzban writes, "and even the means of measuring itself changes preferences, then there seems to be no sense in which people 'really' have preferences, in much the same way that there frequently is no sense in which people 'really' have beliefs."
So why do our preferences change? Kurzban argues that some modules have high discount rates, meaning they're impatient and demand to be satisfied immediately; examples include modules concerned with eating and reproduction. Natural selection designed these modules to be impatient "because being impatient makes sense in a competitive world. Putting off eating means that I might not get the benefits of the calories in question if I die, if someone else gets them first, and so on." Once we've satisfied these impatient modules, our more patient modules have a chance of taking charge.
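The "discount rate" language comes from standard temporal-discounting models, so a small worked example may help. This is a minimal sketch using simple exponential discounting with numbers I've invented for illustration; Kurzban doesn't commit to this particular formula.

```python
# Illustrative sketch only: how a module's discount rate determines
# whether it prefers a small reward now or a larger reward later.
# The rates and reward values are invented for illustration.

def discounted_value(reward, delay, rate):
    """Present value of `reward` received after `delay` time steps,
    discounted exponentially at `rate` per step."""
    return reward / ((1 + rate) ** delay)

small_soon = (10, 0)   # 10 units of reward, available now
large_later = (20, 5)  # 20 units of reward, available after 5 steps

impatient_rate = 0.5   # a high-discount-rate module (e.g., hunger)
patient_rate = 0.05    # a low-discount-rate module (e.g., long-term planning)

for rate in (impatient_rate, patient_rate):
    now = discounted_value(*small_soon, rate)
    later = discounted_value(*large_later, rate)
    choice = "small-now" if now > later else "large-later"
    print(f"rate={rate}: now={now:.2f}, later={later:.2f} -> {choice}")
```

With the impatient rate, the delayed 20 units shrink to under 3 present-value units, so "eat now" wins; with the patient rate, the same 20 units are still worth about 15.7, so waiting wins. Same rewards, different modules, reversed preferences.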
What activates modules? First, context; e.g., I'm more likely to eat a brownie if one is sitting on my desk. Second, the state of the organism; if I'm low on calories, I'm more likely to eat the brownie.
Different modules are "sensitive to different context cues, and these influence which modules win and which modules lose in the conflicts that occur all the time." In the classic marshmallow test, for example, children held out longer when they hid the marshmallow and when they thought about its shape rather than its taste.
Morality. Kurzban points out that people often cannot state why they believe certain moral propositions -- e.g., see the work of Jonathan Haidt. The reason for this is simply that "different (nonconscious) modules are causing different moral judgments." This is also why we are morally inconsistent: "Because different parts of the mind, with different functions, are generating different moral judgments, there is nothing that keeps them mutually consistent."
* * * * *
Consciousness, Free Will, and Self. Benjamin Libet's subjects were hooked "up to EEG machines to measure certain kinds of brain activity and told to perform a simple movement -- a flick of the wrist -- at a moment of their choosing. Libet and his colleagues looked at the relationship between activity in the brain and the subjects’ report of their awareness of the decision to move the wrist." Shockingly, Libet found that "brain activity preceded subjects' reports of their wish to move their wrist." Similar studies have corroborated Libet's findings -- e.g., the article "Brain Scanners Can See Your Decisions Before You Make Them." Kurzban: "[T]alking about the 'self' is problematic. Which bits, which modules, get to be called 'me?' Why some but not others? Should we take the conscious ones to be special in some way? If so, why?"