Contributed by: filbert Friday, May 23 2014 @ 10:26 AM CST
People often hold extreme political attitudes about complex policies. We hypothesized that people typically know less about such policies than they think they do (the illusion of explanatory depth) and that polarized attitudes are enabled by simplistic causal models. Asking people to explain policies in detail both undermined the illusion of explanatory depth and led to attitudes that were more moderate (Experiments 1 and 2). Although these effects occurred when people were asked to generate a mechanistic explanation, they did not occur when people were instead asked to enumerate reasons for their policy preferences (Experiment 2). Finally, generating mechanistic explanations reduced donations to relevant political advocacy groups (Experiment 3). The evidence suggests that people’s mistaken sense that they understand the causal processes underlying policies contributes to political polarization.
Basically, what Fernbach and his group found is that if you ask people to explain their Step #2, all of a sudden their views on the issue–the thing for which Step #3 is the desired end-state–become a lot more moderate.
Of course, to many of us who get lumped into the “conservative” side of the political spectrum, this is not exactly an Earth-shaking revelation. To do what you want to do, you have to know what you’re doing, and know what the results of your actions will be.
But when dealing with human beings–changeable, emotional, mercurial human beings–is it really rational to think that you can nail down exactly what the results of your policies will be? Open any history book and read a few pages, and you will quickly conclude that the answer is no: you cannot accurately and completely predict the results of a particular action or policy on a group of people. Oh, to some extent you can certainly anticipate likely responses, but [i]likely[/i] does not mean [i]inevitable[/i]. Human beings are capable of almost infinite surprise. A population may be terrorized into compliance, right up to the point where terror becomes rage and the oppressed rise up to tear down their oppressors. Nobody has an equation to calculate the tipping point where terror turns to implacable resistance.
Nobel economics laureate Friedrich Hayek discussed this phenomenon in the context of economic planning in his landmark paper [i]The Use of Knowledge in Society[/i] [*2]; the issue has come to be known as "the knowledge problem." Who is in the best position to chart the course of any person's life: individuals, or "society" in the form of a government? Hayek's point is, essentially, that government cannot know Step #2 in sufficient detail to effectively solve any problem beyond perhaps the keeping of basic public order, national defense, and the construction and maintenance of high-capital-investment public infrastructure. Anything much beyond those Step #3 goals requires a depth of knowledge of Step #2 that is simply not available to a central planning organization.
The main (non-religious) thrust of "conservative" thought is really summed up by the maxim commonly attributed to the Hippocratic Oath: "First, do no harm." To abide by that noble thought, it is mandatory to understand–[b]really[/b] understand–Underpants Gnome Step #2. And that is where the problem resides with "liberal" (actually socialist) government-oriented plans and schemes.